In Lecture 15, guest lecturer Song Han discusses algorithms and specialized hardware that can be used to accelerate training and inference of deep learning workloads. We discuss pruning, weight sharing, quantization, and other techniques for accelerating inference, as well as parallelization, mixed precision, and other techniques for accelerating training. We discuss specialized hardware for deep learning such as GPUs, FPGAs, and ASICs, including the Tensor Cores in NVIDIA's latest Volta GPUs as well as Google's Tensor Processing Units (TPUs).

Keywords: Hardware, CPU, GPU, ASIC, FPGA, pruning, weight sharing, quantization, low-rank approximations, binary networks, ternary networks, Winograd transformations, EIE, data parallelism, model parallelism, mixed precision, FP16, FP32, model distillation, Dense-Sparse-Dense training, NVIDIA Volta, Tensor Core, Google TPU, Google Cloud TPU

Slides: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture15.pdf

--------------------------------------------------

Convolutional Neural Networks for Visual Recognition

Instructors:
Fei-Fei Li: http://vision.stanford.edu/feifeili/
Justin Johnson: http://cs.stanford.edu/people/jcjohns/
Serena Yeung: http://ai.stanford.edu/~syyeung/

Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization, and detection. Recent developments in neural network (aka "deep learning") approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This lecture collection is a deep dive into the details of deep learning architectures, with a focus on learning end-to-end models for these tasks, particularly image classification.
From this lecture collection, students will learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.
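As a rough illustration of two of the inference-acceleration techniques the lecture names, the sketch below shows magnitude-based weight pruning and uniform 8-bit quantization on a NumPy array. This is a minimal standalone example, not the lecture's actual implementation; the function names and thresholding details are my own assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Illustrative only: real pruning pipelines (as discussed in the
    lecture) typically prune iteratively and retrain between steps.
    """
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_uniform(weights, num_bits=8):
    """Uniform affine quantization to num_bits, then dequantize.

    Returns the reconstructed float weights so the rounding error
    introduced by quantization can be inspected directly.
    """
    qmax = 2 ** num_bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / qmax
    q = np.round((weights - w_min) / scale).astype(np.int32)
    return q * scale + w_min

w = np.array([[0.1, -0.5], [0.9, 0.05]])
print(magnitude_prune(w, 0.5))      # half the entries zeroed
print(quantize_uniform(w))          # close to w, within one quantization step
```

Pruning exploits the observation that many weights contribute little to the output, while quantization trades numeric precision for smaller memory footprint and cheaper arithmetic; the lecture's EIE hardware combines both.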
A fully automated toolchain from 360° camera to simulation for digital twin generation

A digital twin of the production process can be used, for example, for proactive planning, analysis of existing systems, or process-parallel monitoring. Many companies, especially small and medium-sized enterprises, use the technology incorrectly or not at all: from their point of view, generating a digital twin is cost-, time-, and resource-intensive. With our approach, these obstacles can be overcome quickly and easily: the production layout and production logic (e.g. machine types) are captured with a 360° camera. Identifying CAD models and transferring geometric and other object data (e.g. machine types) from a reference library significantly reduces the effort of recording the production environment. Once the machines are recognized, their properties are also known. Combined with the planned future production program, any desired simulation of the production can be created. Especially in the planning phase for investments, well-founded results based on a simulation model are indispensable for making target-oriented decisions today. In the presentation we will show the procedure from scanning and object recognition through to simulation, along with the challenges along the way.
Emerging Challenges in Deep Learning
Zachary Lipton (Carnegie Mellon University)
https://simons.berkeley.edu/talks/tba-79
Come check out my blog at http://hotnoob.com; just showing what I've been working on for the past couple of days.
Working example: http://www.youtube.com/watch?v=rF2G87b6qiQ
http://vpn.hotnoob.com