PyData LA 2018: I will present some ways in which tensor methods can be combined with deep learning, and demonstrate through Jupyter notebooks how easy it is to specify tensorized neural networks. — www.pydata.org

PyData is an educational program of NumFOCUS, a 501(c)3 non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R. PyData conferences aim to be accessible and community-driven, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases.
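The abstract itself contains no code, but as a rough illustration of what "tensorizing" a network can mean, here is a minimal PyTorch sketch of one common form of it: storing a layer's weight matrix in low-rank factored form. The class name `LowRankLinear` and the chosen rank are hypothetical choices for illustration only; libraries such as TensorLy provide much richer tensor decompositions than this sketch.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer whose weight is stored as a rank-r factorization W = U @ V.

    A simple instance of tensorizing a network: the full
    in_features x out_features weight matrix is never materialized,
    so parameters drop from in*out to r*(in + out).
    (Illustrative sketch, not the implementation shown in the talk.)
    """
    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.U = nn.Parameter(torch.randn(in_features, rank) * 0.02)
        self.V = nn.Parameter(torch.randn(rank, out_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, in) @ (in, r) @ (r, out): two small matmuls instead of one large one
        return (x @ self.U) @ self.V + self.bias

# Hypothetical usage: a 4096x4096 dense layer has ~16.8M parameters;
# a rank-64 factorization has ~0.5M.
layer = LowRankLinear(4096, 4096, rank=64)
y = layer(torch.randn(8, 4096))
print(y.shape)  # torch.Size([8, 4096])
```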
This virtual roundtable discussion, “The Path to More Flexible AI,” features a panel of MIT and IBM experts discussing some of the biggest obstacles the industry faces in developing artificial intelligence that can perform optimally in real-world situations, and how techniques like neurosymbolic AI will help with this effort. The event was moderated by IDC analyst David Schubmehl, and the panel speakers were Leslie Kaelbling (MIT), Josh Tenenbaum (MIT), and David Cox (MIT-IBM Watson AI Lab).
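For readers unfamiliar with the term, neurosymbolic AI pairs neural perception with explicit symbolic reasoning. The toy sketch below is my own illustration, not material from the panel: a hypothetical (untrained) classifier stands in for the neural component, while a hand-written arithmetic rule plays the symbolic role.

```python
import torch
import torch.nn as nn

# Neural component: maps a 28x28 image to a distribution over digits 0-9.
# Untrained placeholder here; in practice it would be trained end-to-end.
digit_net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10), nn.Softmax(dim=-1))

def symbolic_sum(img_a: torch.Tensor, img_b: torch.Tensor) -> int:
    """Symbolic component: applies the rule sum = digit(a) + digit(b).

    The rule is explicit logic, not learned; only the perception of
    each digit comes from the neural network.
    """
    a = int(digit_net(img_a).argmax(dim=-1))
    b = int(digit_net(img_b).argmax(dim=-1))
    return a + b  # exact arithmetic, guaranteed by the rule

imgs = torch.randn(2, 1, 28, 28)  # two random stand-in "images"
print(symbolic_sum(imgs[0:1], imgs[1:2]))
```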
Machine learning is moving toward an important advance: the development of flexible systems that can learn to perform multiple tasks and then use what they have learned to solve new problems on their own. While substantial progress has been made, significant improvements to hardware, software, and system design remain to be made before truly flexible systems are developed. Two important contributors to the field of artificial intelligence and machine learning – Jeff Dean, head of Google AI and co-founder of Google Brain, and Chris Ré, associate professor of computer science at Stanford – discussed the future of flexible machine learning at a recent session of the AI Salon, hosted by the Stanford AI Lab and the Stanford Institute for Human-Centered Artificial Intelligence. The hour-long discussion highlighted the following takeaways:

• The use of specialized processing chips has already contributed to advances in machine learning, but some of those devices are beginning to reach a performance plateau, said Ré. Improvements are still possible, but designing hardware tailored to artificial intelligence projects is difficult because the field is evolving so quickly. Designing learning models that can more efficiently utilize the computing systems they run on will solve at least some of the performance issues, the researchers said.

• With privacy a key requirement, Google has advanced an approach called “federated learning,” which enables mobile phones to do a better job of predicting the words users are typing without sending the data to the cloud, Dean said (a minimal sketch of the idea follows this list).
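The federated learning takeaway can be made concrete with a short sketch. The code below is a simplified version of the federated averaging idea under stated assumptions (a toy linear model and simulated clients); it is an illustration of the general approach, not Google's production system.

```python
import copy
import torch
import torch.nn as nn

def local_update(model: nn.Module, data, targets, lr: float = 0.1, steps: int = 5):
    """Train a copy of the global model on one client's private data.

    Only the resulting weights leave the device; the raw data never does.
    """
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(local(data), targets).backward()
        opt.step()
    return {k: v.detach().clone() for k, v in local.state_dict().items()}

def federated_average(model: nn.Module, client_states):
    """Server step: average the clients' weights into the global model."""
    avg = {k: torch.stack([s[k] for s in client_states]).mean(dim=0)
           for k in client_states[0]}
    model.load_state_dict(avg)

# Hypothetical round with three simulated clients holding private data.
global_model = nn.Linear(4, 1)
clients = [(torch.randn(16, 4), torch.randn(16, 1)) for _ in range(3)]
states = [local_update(global_model, x, y) for x, y in clients]
federated_average(global_model, states)
```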