Machine learning is moving toward an important advance: the development of flexible systems that can learn to perform multiple tasks and then use what they have learned to solve new problems on their own.
While substantial progress has been made, significant improvements in hardware, software, and system design are still needed before truly flexible systems become a reality.
Two leading figures in artificial intelligence and machine learning – Jeff Dean, head of Google AI and co-founder of Google Brain, and Chris Ré, associate professor of computer science at Stanford – discussed the future of flexible machine learning at a recent session of the AI Salon, hosted by the Stanford AI Lab and the Stanford Institute for Human-Centered Artificial Intelligence.
The hour-long discussion highlighted the following takeaways:
• The use of specialized processing chips has already driven advances in machine learning, but some of those devices are beginning to reach a performance plateau, said Ré. Improvements are still possible, but designing hardware tailored to artificial intelligence projects is difficult because the field is evolving so quickly. Designing learning models that make more efficient use of the computing systems they run on will solve at least some of the performance issues, the researchers said.
• With privacy a key requirement, Google has advanced an approach called “federated learning,” which enables mobile phones to better predict the words users are typing without sending that data to the cloud, Dean said. In the future, smartphones will likely contain specialized “accelerators” to take better advantage of machine learning, he added.
• While there has been progress in some areas of flexible machine learning, “we don’t have a huge number of strong results,” Dean said. Scientists have yet to make their models capable of handling large numbers of tasks.
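The federated learning idea Dean describes can be sketched in a few lines: each device trains on its own data and sends back only updated model weights, which a server averages. The sketch below is a toy illustration of the federated-averaging concept, not Google's actual system; the linear model, client data, and function names are all hypothetical.

```python
import numpy as np

# Toy sketch of federated averaging (illustrative only, not Google's
# production system). Each "client" (think: a phone) trains a simple
# linear model on its private data; only weights leave the device.

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on
    its own data. The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server sends global weights out, collects locally updated
    weights back, and averages them, weighted by dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Synthetic private datasets for three clients, all generated from the
# same underlying model y = 3*x0 - 2*x1.
true_w = np.array([3.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)

print(np.round(w, 2))  # converges toward [ 3. -2.]
```

The key property is that `federated_round` only ever sees weight vectors, never the clients' raw data, which is what makes the approach attractive for privacy-sensitive settings like keyboard prediction.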