Cognitive Architectures & Cognitive Modelling
Panelists (from left to right): Helgi Helgason, Joscha Bach, Alessandro Oltramari, Peter Lane, Pei Wang
Winter Intelligence Oxford – AGI-12 (http://winterintelligence.org)

Continuing the mission of the first four AGI conferences, AGI-12@Oxford gathers an international group of leading academic and industry researchers involved in scientific and engineering work aimed directly toward the goal of artificial general intelligence. Appropriately for this Alan Turing centenary year, it is the first AGI conference to be held in the UK. The AGI conferences are the only major conference series devoted wholly and specifically to the creation of AI systems possessing general intelligence at the human level and ultimately beyond. By bringing active researchers in the field together to present results and discuss ideas, we accelerate our progress toward our common goal.

AGI-12@Oxford will feature contributed talks and posters, keynotes, and a Special Session on Neuroscience and AGI. It will be held immediately before the first conference on AGI Safety and Impacts, which is organized by Oxford's Future of Humanity Institute; AGI-12 registrants will receive free admission to that conference. Proceedings will be published as a book in Springer's Lecture Notes in AI series.

"Artificial General Intelligence"
The original goal of the AI field was the construction of "thinking machines", that is, computer systems with human-like general intelligence. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called "narrow AI".
Joscha Bach: Artificial Consciousness and the Nature of Reality | AI Podcast #101 with Lex Fridman
https://www.youtube.com/watch?v=P-2P3MSZrBM

Dr. Joscha Bach (MIT Media Lab and the Harvard Program for Evolutionary Dynamics) is an AI researcher who works and writes about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He is the founder of the MicroPsi project, in which virtual agents are constructed and used in a computer model to discover and describe the interactions of emotion, motivation, and cognition in situated agents (a toy sketch of this kind of agent appears after the links below). Bach's mission to build a model of the mind is foundational research toward the creation of Strong AI, i.e. cognition on a par with that of a human being. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Related links:
https://www.youtube.com/watch?v=da-9zPgxWBY
The Artificial Intelligence Channel: https://www.youtube.com/user/Maaaarth
Polyworld: Using Evolution to Design Artificial Intelligence: https://www.youtube.com/watch?v=_m97_kL4ox0
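To make the idea of a situated agent whose action selection couples motivation, emotion, and cognition slightly more concrete, here is a minimal illustrative sketch in Python. It is not MicroPsi code, and every name in it (Agent, urges, arousal, select_action) is a hypothetical assumption for illustration only.

# Toy sketch of a situated agent whose action selection is modulated by
# motivational urges and a simple emotional parameter.
# NOT MicroPsi code; all names here are hypothetical illustrations.
import random

class Agent:
    def __init__(self):
        # Motivational urges (e.g. energy, affiliation), each in [0, 1];
        # higher values mean the need is more pressing.
        self.urges = {"energy": 0.3, "affiliation": 0.7}
        # A crude "arousal" value standing in for emotional modulation.
        self.arousal = 0.5

    def most_urgent(self):
        # The need with the highest current urgency.
        return max(self.urges, key=self.urges.get)

    def select_action(self, actions):
        # Score each action by how much it is expected to satisfy the most
        # urgent need; higher arousal adds noise, making choices more impulsive.
        def score(action):
            expected_relief = action["satisfies"].get(self.most_urgent(), 0.0)
            noise = random.uniform(0.0, self.arousal)
            return expected_relief + noise
        return max(actions, key=score)

agent = Agent()
actions = [
    {"name": "eat",       "satisfies": {"energy": 0.8}},
    {"name": "socialize", "satisfies": {"affiliation": 0.9}},
]
print(agent.select_action(actions)["name"])

The point of the sketch is only the coupling: the same repertoire of actions yields different behavior as the urge and arousal values change, which is the kind of interaction such agent models are built to study.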
In Lecture 9 we discuss some common architectures for convolutional neural networks. We cover architectures that performed well in the ImageNet challenges, including AlexNet, VGGNet, GoogLeNet, and ResNet, as well as other interesting models (a minimal code sketch of the residual-block idea appears at the end of this section).

Keywords: AlexNet, VGGNet, GoogLeNet, ResNet, Network in Network, Wide ResNet, ResNeXt, Stochastic Depth, DenseNet, FractalNet, SqueezeNet
Slides: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture9.pdf

Convolutional Neural Networks for Visual Recognition
Instructors:
Fei-Fei Li: http://vision.stanford.edu/feifeili/
Justin Johnson: http://cs.stanford.edu/people/jcjohns/
Serena Yeung: http://ai.stanford.edu/~syyeung/

Computer vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization, and detection. Recent developments in neural network (aka "deep learning") approaches have greatly advanced the performance of state-of-the-art visual recognition systems. This lecture collection is a deep dive into the details of these deep learning architectures, with a focus on learning end-to-end models for such tasks, particularly image classification. From this lecture collection, students will learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.

Website: http://cs231n.stanford.edu/
For additional learning opportunities please visit: http://online.stanford.edu/
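As a small illustration of one idea covered in this lecture, ResNet's identity shortcut, here is a minimal sketch of a residual block in PyTorch. The module name, channel count, and layer sizes are assumptions chosen for illustration; this is not the lecture's reference code.

# Minimal sketch of a ResNet-style residual block (illustrative only;
# names and sizes are assumptions, not taken from the CS231n materials).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Two 3x3 convolutions that preserve spatial size and channel count.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        # F(x): the residual function computed by the stacked layers.
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Identity shortcut: the block outputs F(x) + x, which is the core
        # ResNet idea for easing optimization of very deep networks.
        return F.relu(out + x)

if __name__ == "__main__":
    block = ResidualBlock(64)
    x = torch.randn(1, 64, 32, 32)       # batch of 1, 64 channels, 32x32
    print(block(x).shape)                 # torch.Size([1, 64, 32, 32])

Because the shortcut requires the input and output shapes to match, this sketch keeps the channel count and spatial size fixed; real ResNet stages that change dimensions use a projection (e.g. a 1x1 convolution) on the shortcut instead.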