Organizers: Jason (Jinquan) Dai
Location: Room 151 A-C & G
Time: 0900-1200 (Half Day, Morning)
Description: Recent breakthroughs in artificial intelligence applications have brought deep learning to the forefront of new generations of data analytics. In this tutorial, we will present the practice and design tradeoffs for building large-scale deep learning applications (such as computer vision and NLP) for production data and workflow on Big Data platforms. In particular, we will provide an overview of emerging deep learning frameworks for Big Data (e.g., BigDL, TensorFlow-on-Spark, Deep Learning Pipelines for Spark, etc.), present the underlying distributed systems and algorithms, and discuss innovative data analytics + AI application pipelines (with a focus on computer vision models and use cases) for Big Data platforms and workflows.
Schedule:
0900 Motivation
0910 Overview
0930 Analytics Zoo for Spark and BigDL
1000 Morning Break
1030 Distributed Training and Inference
1100 Advanced Applications
1130 Real-World Applications
1150 Q&A
AI and deep learning will be front and center at GTC 2017 across key industries like healthcare and financial services. Register now: http://nvda.ws/2nrJyK4
Rising AI startups will discuss key technologies and discoveries. Innovative researchers will speak to critical breakthroughs accelerated with GPU-based deep learning. The NVIDIA Deep Learning Institute will offer hands-on technical training on the latest open-source frameworks and GPU-accelerated deep learning platforms.
Human in the loop: Machine learning and AI for the people
Paco Nathan is a unicorn. It's a cliche, but it gets the point across for someone who is equally versed in discussing AI with White House officials and Microsoft product managers, working on big data pipelines, and organizing and taking part in conferences such as Strata in his role as Director of the Learning Group at O'Reilly Media.
Nathan has the mix of diverse background, hands-on involvement, and broad vision that enables him to engage in all of those, having been active in AI, Data Science and Software Engineering for decades. The trigger for our discussion was his Human in the Loop (HITL) framework for machine learning (ML), presented at Strata EU.
Human in the loop
HITL is a mix-and-match approach that may help make ML both more efficient and more approachable. Nathan calls HITL a design pattern, and it combines technical approaches as well as management aspects.
HITL combines two common ML variants, supervised and unsupervised learning. In supervised learning, curated (labeled) datasets are used by ML experts to train algorithms by adjusting parameters, in order to make accurate predictions for incoming data. In unsupervised learning, the idea is that running lots of data through an algorithm will reveal some sort of structure.
The less common ML variant that HITL builds on is called semi-supervised, and an important special case of that is known as "active learning." The idea is to take an ensemble of ML models, and let them "vote" on how to label each case of input data. When the models agree, their consensus gets used, typically as an automated approach.
When the models disagree or lack confidence, the decision is delegated to human experts, who handle the difficult edge cases. The choices made by the experts are fed back into the system to iterate on training the ML models.
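The voting-and-escalation step described above can be sketched in a few lines of Python. The ensemble below is a toy stand-in (three hand-written rules), not a trained model, and the function names are hypothetical:

```python
# Minimal sketch of the active-learning voting step: when the ensemble
# agrees, the consensus label is used automatically; otherwise the
# example is routed to a human expert.

def route_example(models, x, min_agreement=1.0):
    """Collect each model's vote for x. Return the consensus label if
    agreement is high enough, else flag the example for human review."""
    votes = [m(x) for m in models]
    top = max(set(votes), key=votes.count)
    agreement = votes.count(top) / len(votes)
    if agreement >= min_agreement:
        return {"label": top, "needs_human": False}
    return {"label": None, "needs_human": True, "votes": votes}

# Toy ensemble: three rules that label a number as "pos" or "neg".
ensemble = [lambda x: "pos" if x > 0 else "neg",
            lambda x: "pos" if x >= 0 else "neg",
            lambda x: "pos" if x > 1 else "neg"]

print(route_example(ensemble, 5))    # all agree -> automated label
print(route_example(ensemble, 0.5))  # disagreement -> human review
```

In a real system, the human's answers for the flagged cases would be added to the labeled training set for the next training iteration.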
Nathan says active learning works well when you have lots of inexpensive, unlabeled data -- an abundance of data, where the cost of labeling is itself a major expense. This is a very common scenario for most organizations outside of the Big Tech circle, which is what makes it interesting.
But technology alone is not enough. What could be a realistic way to bring ML, AI, and automation to mid-market businesses?
AI for the people
In Nathan's experience, most executives are struggling to grasp what the technology could do for them and to identify suitable use cases. Especially for mid-market businesses, AI may seem out of reach. But Nathan thinks they should start as soon as possible, and not look to outsource, for a number of reasons:
We are at a point where competition is heating up, and AI is key. Companies are happy to share code, but not data. The competition is going to be about data: who has the best data to use. If you're still struggling to move data from one silo to another, you're at least 2 or 3 years behind.
Better allocate resources now, because in 5 years there will already be the haves and have nots. The way most mid-market businesses get on board is by seeing, and sharing experiences with, early adopters in their industry. This gets them going, and they build confidence.
Getting your data management right is table stakes - you can't talk about AI without this. Some people think they can just leapfrog to AI. I don't think there will be a SaaS model for AI that does much beyond trivialize consumer use cases. "Alexa, book me a flight" is easy, but what about "Alexa, I want to learn about Kubernetes"? It will fall apart.
This on-demand webinar covers the various ways in which artificial intelligence (AI) and machine learning (ML) are coming to dominate the cyber security landscape.
This webinar provides you with an understanding of how the various types of machine learning techniques are being applied to cyber security and how those techniques are being tailored to solve particular problems in cyber security. It also covers why using multiple artificial intelligence or machine learning-based solutions enhances a defense-in-depth approach to security and how the fundamentals of cyber defense and offense are changing due to the greater adoption of these solutions.
Talk 1: Uber’s Big Data Platform: 100+ Petabytes with Minute Latency
This talk will reflect on the challenges faced with scaling Uber’s Big Data Platform to ingest, store, and serve 100+ PB of data with minute-level latency while efficiently utilizing our hardware. We will provide a behind-the-scenes look at the current data technology landscape at Uber, including various open-source technologies (e.g. Hadoop, Spark, Hive, Presto, Kafka, Avro) as well as open-sourced in-house-built solutions such as Hudi, Marmaray, etc. We'll dive into the technical aspects of how our ingestion platform was re-architected to bring in 10+ trillion events/day, with 100+ TB new data/day, at minute-level latency, how our storage platform was scaled to reliably store 100+ PB of data in the data lake, and how our processing platform was designed to efficiently serve millions of queries and jobs/day while processing 1+ PB per day. You’ll leave the talk with greater insight into how data truly powers each and every Uber experience and will be inspired to re-envision your own data platform to be more extensible and scalable.
Speaker: Reza Shiftehfar (Uber)
Reza Shiftehfar currently leads Uber’s Hadoop Platform team. His team helps build and grow Uber’s reliable and scalable Big Data platform that serves petabytes of data utilizing technologies such as Apache Hadoop, Apache Hive, Apache Kafka, Apache Spark, and Presto. Reza is one of the founding engineers of Uber’s data team and helped scale Uber's data platform from a few terabytes to over 100 petabytes while reducing data latency from 24+ hours to minutes. Reza holds a Ph.D. in Computer Science from the University of Illinois, Urbana-Champaign.
Talk 2: Michelangelo PyML - Uber’s Platform for Rapid Python ML Model Development
Uber aims to leverage machine learning (ML) in product development and the day-to-day management of our business. In pursuit of this goal, hundreds of data scientists, engineers, product managers, and researchers work on ML solutions across the company. This talk will cover a brief history of Uber's machine learning platform - Michelangelo. We will take a closer look into a model life-cycle of prototyping, validation, and productionization and the importance of frictionless experience at each stage of this process. And finally, we will focus on PyML - a new extension of Michelangelo that enables faster Python ML model development and seamless integration with Uber's production infrastructure.
Speaker: Stepan Bedratiuk (Uber)
Stepan Bedratiuk is a lead engineer on Michelangelo's PyML team. His work focuses on scaling model deployment pipelines and model-serving services. Prior to joining the ML platform team, Stepan worked on Uber's data platform team and helped to unify and scale the data access layer. Stepan holds a B.S. and an M.S. in Applied Mathematics from the Taras Shevchenko National University of Kyiv, Ukraine.
Presented at the Matroid Scaled Machine Learning Conference 2018
scaledml.org | #scaledmlconf
So we've talked a lot in this series about how computers fetch and display data, but how do they make decisions on this data? From spam filters and self-driving cars, to cutting edge medical diagnosis and real-time language translation, there has been an increasing need for our computers to learn from data and apply that knowledge to make predictions and decisions. This is the heart of machine learning which sits inside the more ambitious goal of artificial intelligence. We may be a long way from self-aware computers that think just like us, but with advancements in deep learning and artificial neural networks our computers are becoming more powerful than ever.
Produced in collaboration with PBS Digital Studios: http://youtube.com/pbsdigitalstudios
Want to know more about Carrie Anne?
The Latest from PBS Digital Studios: https://www.youtube.com/playlist?list=PL1mtdjDVOoOqJzeaJAV15Tq0tZ1vKj7ZV
Want to find Crash Course elsewhere on the internet?
Facebook - https://www.facebook.com/YouTubeCrash...
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids
Link to the talk: https://www.youtube.com/watch?v=Nj2YSLPn6OY&t=302s
Check out other RedTips: https://www.youtube.com/watch?v=dU3w66kpsJQ&list=PLJ_hR8ZDFCAYSGRQQN7-sZJZAscTpt1el
François Chollet is the creator of Keras, an open source deep learning library designed to enable fast, user-friendly experimentation with deep neural networks. It serves as an interface to several deep learning libraries, the most popular of which is TensorFlow, and it was integrated into TensorFlow's main codebase a while back. Aside from creating an exceptionally useful and popular library, François is also a world-class AI researcher and software engineer at Google, and is definitely an outspoken, if not controversial, personality in the AI world, especially in the realm of ideas around the future of artificial intelligence. This conversation is part of the Artificial Intelligence podcast.
Podcast website: https://lexfridman.com/ai
Full episodes playlist: http://bit.ly/2EcbaKf
Clips playlist: http://bit.ly/2JYkbfZ
François twitter: https://twitter.com/fchollet
François web: https://fchollet.com/
0:00 - Introduction
1:14 - Self-improving AGI
7:51 - What is intelligence?
15:23 - Science progress
26:57 - Fear of existential threats of AI
28:11 - Surprised by deep learning
30:38 - Keras and TensorFlow 2.0
42:28 - Software engineering on a large team
46:23 - Future of TensorFlow and Keras
47:53 - Current limits of deep learning
58:05 - Program synthesis
1:00:36 - Data and hand-crafting of architectures
1:08:37 - Concerns about short-term threats in AI
1:24:21 - Concerns about long-term existential threats from AI
1:29:11 - Feeling about creating AGI
1:33:49 - Does human-level intelligence need a body?
1:34:19 - Good test for intelligence
1:50:30 - AI winter
- Subscribe to this YouTube channel
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman
“Machine Learning: Living in the Age of AI” examines the extraordinary ways in which people are interacting with AI today. Hobbyists and teenagers are now developing tech powered by machine learning, and WIRED shows the impact of AI on schoolchildren, farmers, and senior citizens, while examining the implications of this rapidly accelerating technology. The film was directed by filmmaker Chris Cannucciari, produced by WIRED, and supported by McCann Worldgroup.
Still haven’t subscribed to WIRED on YouTube? ►► http://wrd.cm/15fP7B7
Also, check out the free WIRED channel on Roku, Apple TV, Amazon Fire TV, and Android TV. Here you can find your favorite WIRED shows and new episodes of our latest hit series Tradecraft.
WIRED is where tomorrow is realized. Through thought-provoking stories and videos, WIRED explores the future of business, innovation, and culture.
Machine Learning: Living in the Age of AI | A WIRED Film
Artificial intelligence is being used to do many things: diagnosing cancer, stopping the deforestation of endangered rainforests, helping farmers in India with crop insurance, finding you the Fyre Fest documentary on Netflix (or Hulu), and even helping you save money on your energy bill.
But how could something so helpful be racist?
Become an Inevitable/Human: https://inevitablehuman.com/
PyData London 2018
Machine learning and data science applications can be unintentionally biased if care is not taken to evaluate their effect on different sub-populations. However, by using a "fair" approach, machine decision making can potentially be less biased than human decision makers.
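As a hypothetical illustration of the evaluation step described above (not code from the talk), computing a metric separately for each sub-population is often the first diagnostic:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each sub-population, so that
    disparities between groups become visible instead of being
    averaged away in a single overall score."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Toy example: the model is 100% accurate on group "A"
# but only 50% accurate on group "B".
y_true = [1, 0, 1, 0]
y_pred = [1, 0, 1, 1]
groups = ["A", "A", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.5}
```

The same pattern applies to other metrics (false positive rate, calibration), which is where the various formal definitions of "fairness" diverge.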
PyData is an educational program of NumFOCUS, a 501(c)3 non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R.
PyData conferences aim to be accessible and community-driven, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases.
The Vienna Deep Learning Meetup and the Centre for Informatics and Society of TU Wien jointly organized an evening of discussion on the topic of Ethics and Bias in AI. As promising as machine learning techniques are in terms of their potential to do good, the technologies raise a number of ethical questions and are prone to biases that can subvert their well-intentioned goals.
Machine learning systems, from simple spam filtering or recommender systems to Deep Learning and AI, have already arrived at many different parts of society. Which web search results, job offers, product ads and social media posts we see online, even what we pay for food, mobility or insurance - all these decisions are already being made or supported by algorithms, many of which rely on statistical and machine learning methods. As they permeate society more and more, we also discover the real world impact of these systems due to inherent biases they carry. For instance, criminal risk scoring to determine bail for defendants in US district courts has been found to be biased against black people, and analysis of word embeddings has been shown to reaffirm gender stereotypes due to biased training data. While a general consensus seems to exist that such biases are almost inevitable, solutions range from embracing the bias as a factual representation of an unfair society to mathematical approaches trying to determine and combat bias in machine learning training data and the resulting algorithms.
Besides producing biased results, many machine learning methods and applications raise complex ethical questions. Should governments use such methods to determine the trustworthiness of their citizens? Should the use of systems known to have biases be tolerated to benefit some while disadvantaging others? Is it ethical to develop AI technologies that might soon replace many jobs currently performed by humans? And how do we keep AI and automation technologies from widening society's divides, such as the digital divide or income inequality?
This event provides a platform for multidisciplinary debate in the form of keynotes and a panel discussion with international experts from diverse fields:
- Prof. Moshe Vardi: "Deep Learning and the Crisis of Trust in Computing"
- Prof. Sarah Spiekermann-Hoff: “The Big Data Illusion and its Impact on Flourishing with General AI”
Panelists: Ethics and Bias in AI
- Prof. Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering, Rice University
- Prof. Peter Purgathofer, Centre for Informatics and Society / Institute for Visual Computing & Human-Centered Technology, TU Wien
- Prof. Sarah Spiekermann-Hoff, Institute for Management Information Systems, WU Vienna
- Prof. Mark Coeckelbergh, Professor of Philosophy of Media and Technology, Department of Philosophy, University of Vienna
- Dr. Christof Tschohl, Scientific Director at Research Institute AG & Co KG
Moderator: Markus Mooslechner, Terra Mater Factual Studios
The evening will be complemented by networking & discussions over snacks and drinks.
More details: http://www.aiethics.cisvienna.com
This presentation was recorded at GOTO Amsterdam 2017
David Stibbe - Consultant at Quintor
In this session we introduce the basics about Machine Learning, explain what it is and how it relates to terms like Big Data and Artificial Intelligence.
We’ll show the various machine learning platforms that are used today like Watson, Tensorflow and Deepmind, and illustrate this [...]
Download slides and read the full abstract here:
#DataScience #ML #DeepLearning
This talk gives an overview of the features and deep learning frameworks available on AMD platforms. The speaker also discusses performance parameters and the ease of use of AMD software.
Dr. Prakash Raghavendra
PMTS (Software), AMD India Pvt Ltd
Apache Spark is a powerful, scalable real-time data analytics engine that is fast becoming the de facto hub for data science and big data. However, in parallel, GPU clusters are fast becoming the default way to quickly develop and train deep learning models. As data science teams and data savvy companies mature, they will need to invest in both platforms if they intend to leverage both big data and artificial intelligence for competitive advantage.
This session will cover:
- How to leverage Spark and TensorFlow for hyperparameter tuning and for deploying trained models
- DeepLearning4J, CaffeOnSpark, IBM's SystemML and Intel's BigDL
- Sidecar GPU cluster architecture and Spark-GPU data reading patterns
- The pros, cons and performance characteristics of various approaches
You'll leave the session better informed about the available architectures for combining Spark with deep learning, with and without GPUs. You'll also learn about the pros and cons of deep learning software frameworks for various use cases, and discover a practical, applied methodology and technical examples for tackling big data deep learning.
Session hashtag: #SFds14
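As a rough sketch of the hyperparameter-tuning pattern the session covers (illustrative only: the `evaluate` function and grid below are stand-ins, and plain Python stands in for a Spark cluster):

```python
from itertools import product

def evaluate(params, data):
    """Stand-in for "train a model with these hyperparameters and
    return a validation score": a toy quadratic with its peak at
    lr=0.1, reg=0.01. A real job would train e.g. a TensorFlow
    model on the broadcast data here."""
    return -((params["lr"] - 0.1) ** 2 + (params["reg"] - 0.01) ** 2)

def grid_search(grid, data=None):
    """The driver-side pattern: expand the grid into parameter sets,
    map the evaluation over them (sc.parallelize(candidates).map(...)
    on a real Spark cluster; a plain list here), and keep the best."""
    candidates = [dict(zip(grid, vals)) for vals in product(*grid.values())]
    scored = [(evaluate(p, data), p) for p in candidates]
    return max(scored, key=lambda t: t[0])[1]

grid = {"lr": [0.01, 0.1, 1.0], "reg": [0.001, 0.01, 0.1]}
print(grid_search(grid))  # {'lr': 0.1, 'reg': 0.01}
```

The appeal of the Spark version is that each candidate's training run is independent, so the map step parallelizes with no coordination between workers.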
On Tuesday, September 25th, Jeff Dean, Head of Google AI and Google Brain, visited heidelberg.ai (http://heidelberg.ai) at the German Cancer Research Center in Heidelberg:
For the past seven years, the Google Brain team has conducted research on difficult problems in artificial intelligence, on building large-scale computer systems for machine learning research, and, in collaboration with many teams at Google, on applying our research and systems to many Google products. Our group has open-sourced the TensorFlow system, a widely popular system designed to easily express machine learning ideas, and to quickly train, evaluate and deploy machine learning systems. We have also collaborated closely with Google's platforms team to design and deploy new computational hardware called Tensor Processing Units, specialized for accelerating machine learning computations. In this talk, I'll highlight some of our research accomplishments, and will relate them to the National Academy of Engineering's Grand Engineering Challenges for the 21st Century, including the use of machine learning for healthcare, robotics, and engineering the tools of scientific discovery. I'll also cover how machine learning is transforming many aspects of our computing hardware and software systems.
This talk describes joint work with many people at Google.
Let’s separate the hype from reality and see what exactly machine learning (ML), deep learning (DL) and artificial intelligence (AI) algorithms can do right now in cybersecurity. We will look at how different tasks, such as prediction, classification, clustering and recommendation, map to attacker goals, such as captcha bypass and phishing, and to defender goals, such as anomaly detection and attack protection. As for the icing on the cake, we will cover the latest techniques for hacking security and non-security products that use ML, and why it's super hard to protect them against adversarial examples and other attacks.
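To make one of the defender tasks concrete, here is a minimal, illustrative anomaly-detection sketch (a robust median/MAD rule; real security products use far more sophisticated models):

```python
from statistics import median

def anomalies(values, threshold=3.5):
    """Flag points far from the median, scaled by the median absolute
    deviation (MAD) -- a robust variant of "more than k standard
    deviations out" that a single huge outlier cannot mask."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    return [v for v in values if abs(v - med) > threshold * mad]

# Daily request counts with one suspicious spike:
counts = [100, 98, 103, 101, 99, 102, 100, 1000]
print(anomalies(counts))  # [1000]
```

The median-based scale is used deliberately: with a plain mean and standard deviation, a large enough spike inflates the threshold and hides itself.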
Alexander is a co-founder of ERPScan, the president of EAS-SEC.org, an organization focused on enterprise application security, and a member of the Forbes Technology Council. He was recognized as R&D Professional of the Year in 2013. His expertise covers the security of enterprise business-critical software, including ERP and industry-specific solutions, and the adoption of machine learning and deep learning inventions for cybersecurity problems. He has presented his research at over 100 conferences, such as BlackHat, HITB, and RSA, held in more than 20 countries on all continents. He has held customized trainings for CISOs of Fortune 2000 companies.
Oded: “Decision-making has become an important part of most businesses over the last decade. More and more tools based on artificial intelligence and machine learning are being introduced to support these decisions.
The artificial intelligence and machine learning one-day program is designed for senior executives and corporate decision makers who have already invested, or are considering investing, in artificial intelligence and machine learning software.”
Fabrizio: “For the professional success of managers and the competitive advantage of companies, the interaction between human decision-making and machine learning is going to be crucial in the future, and this day will equip you with all the knowledge required to prosper and benefit.”
Oded: “The format of the AI and machine learning one-day program is a mix of lectures, introductions to theories, discussions of applications, and discussions of case studies. It will be delivered by experts from academia and by practitioners who come from an array of industries with extensive experience.
The benefit for managers of completing the artificial intelligence and machine learning program is a better understanding of artificial intelligence and machine learning analytical tools. These tools are designed to support decisions.”
Fabrizio: “This program is designed for both multinational global corporates and smaller fast-growing disruptors in multiple industries. As the cost of these technologies drops dramatically, it is now economically feasible for big and smaller companies alike to leverage them for their own benefit.”
Oded: “There is sometimes a gap between what managers think they can generate from these tools and what the tools actually generate.”
Fabrizio: “Senior executives who attend this program will be able to manage digital transformations much more effectively, because they will be able to tell the difference between merely collecting data, at a cost of a few hundred pounds a month, and managing large digital transformation projects at a few hundred thousand or even millions of pounds a month. This program will enable them to understand what to do, how, and when.”
In Lecture 14 we move from supervised learning to reinforcement learning (RL), in which an agent must learn to interact with an environment in order to maximize its reward. We formalize reinforcement learning using the language of Markov Decision Processes (MDPs), policies, value functions, and Q-Value functions. We discuss different algorithms for reinforcement learning including Q-Learning, policy gradients, and Actor-Critic. We show how deep reinforcement learning has been used to play Atari games and to achieve super-human Go performance in AlphaGo.
Keywords: Reinforcement learning, RL, Markov decision process, MDP, Q-Learning, policy gradients, REINFORCE, actor-critic, Atari games, AlphaGo
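The Q-Learning update covered in the lecture can be sketched in tabular form on a toy MDP (the corridor environment below is illustrative, not from the lecture):

```python
import random

def q_learning(n_states, n_actions, step, episodes=2000,
               alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-Learning: act epsilon-greedily, observe
    (next state, reward, done), and move Q(s, a) toward the
    bootstrapped target r + gamma * max_a' Q(s', a')."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:  # greedy, breaking ties at random
                best = max(Q[s])
                a = random.choice([i for i in range(n_actions) if Q[s][i] == best])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy MDP: a corridor of 4 states where action 1 moves right, action 0
# moves left, and reaching state 3 pays reward 1 and ends the episode.
def corridor(s, a):
    s2 = min(3, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

random.seed(0)  # reproducible demo
Q = q_learning(n_states=4, n_actions=2, step=corridor)
print([max(range(2), key=lambda a: Q[s][a]) for s in range(3)])
```

After training, the greedy policy moves right in every non-terminal state, which is optimal here. Policy gradients and actor-critic, also discussed in the lecture, replace the table with a parameterized policy updated by gradient ascent on expected reward.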
Convolutional Neural Networks for Visual Recognition
Fei-Fei Li: http://vision.stanford.edu/feifeili/
Justin Johnson: http://cs.stanford.edu/people/jcjohns/
Serena Yeung: http://ai.stanford.edu/~syyeung/
Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization and detection. Recent developments in neural network (aka “deep learning”) approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This lecture collection is a deep dive into details of the deep learning architectures with a focus on learning end-to-end models for these tasks, particularly image classification. From this lecture collection, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.
For additional learning opportunities please visit:
This video goes through an example of using TensorFlow for image recognition. Ubuntu is used via VirtualBox on a Windows machine.
Business people have to make many decisions. Slowly though, machine learning is getting better at making many of these decisions. Will there be a point when human decision making is not required? This is the topic I explore in this short video.
CEVA introduces a new DSP-based offering bringing deep learning and Artificial Intelligence (AI) capabilities to low-power embedded systems.
A comprehensive, scalable, integrated hardware and software silicon IP platform that is centered around a new imaging and vision DSP – the CEVA-XM6.
It allows developers to efficiently harness the power of neural networks and machine vision for smartphones, autonomous vehicles, surveillance, robots, drones and other camera-enabled smart devices.
For more information, visit http://www.ceva-dsp.com
or Email: email@example.com
Uber Engineering is committed to developing technologies that create seamless, impactful experiences for our customers. We are increasingly investing in Machine Learning to fulfill this vision. At Uber, our contribution to this space is Michelangelo, an internal ML-as-a-service platform that democratizes machine learning and makes scaling AI to meet the needs of the business as easy as requesting a ride.
In this talk, I’ll go over some of Uber’s early challenges in applying ML at scale, and the context in which Michelangelo was born. We’ll also talk about what the Michelangelo system looks like, and some important components that aim to lower the bar for applying ML at Uber.
Achal is a Sr. Software Engineer working on Michelangelo and Deep Learning infrastructure.
ML development brings many new complexities beyond the traditional software development lifecycle. Unlike in traditional software development, ML developers want to try multiple algorithms, tools, and parameters to get the best results, and they need to track this information to reproduce work. In addition, developers need to use many distinct systems to productionize models. To address these problems, many companies are building custom “ML platforms” that automate this lifecycle, but even these platforms are limited to a few supported algorithms and to each company’s internal infrastructure.
In this session, we introduce MLflow, a new open source project from Databricks that aims to design an open ML platform where organizations can use any ML library and development tool of their choice to reliably build and share ML applications. MLflow introduces simple abstractions to package reproducible projects, track results, and encapsulate models that can be used with many existing tools, accelerating the ML lifecycle for organizations of any size.
In this deep-dive session, through a complete ML model life-cycle example, you will walk away with:
MLflow concepts and abstractions for models, experiments, and projects
How to get started with MLflow
Understand aspects of MLflow APIs
Using tracking APIs during model training
Using MLflow UI to visually compare and contrast experimental runs with different tuning parameters and evaluate metrics
Package, save, and deploy an MLflow model
Serve it using MLflow REST API
What’s next and how to contribute
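The tracking idea at the core of this list can be illustrated with a stdlib-only sketch. Note this is not the MLflow API (in MLflow the equivalents are `mlflow.start_run`, `mlflow.log_param`, and `mlflow.log_metric`); it only shows the pattern of recording runs so they can be compared later:

```python
import json
import time
import uuid

class TrackingStore:
    """Illustration of experiment tracking: each run records its
    parameters and metrics so runs can later be compared and
    reproduced. (Not the real MLflow API.)"""
    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        run = {"id": uuid.uuid4().hex, "time": time.time(),
               "params": params, "metrics": metrics}
        self.runs.append(run)
        return run["id"]

    def best_run(self, metric):
        # Mirrors comparing runs in a tracking UI: rank by one metric.
        return max(self.runs, key=lambda r: r["metrics"][metric])

store = TrackingStore()
store.log_run({"lr": 0.1}, {"accuracy": 0.91})
store.log_run({"lr": 0.01}, {"accuracy": 0.95})
print(json.dumps(store.best_run("accuracy")["params"]))  # {"lr": 0.01}
```

MLflow adds to this pattern the pieces the session lists: packaged projects for reproducibility, a model format usable across serving tools, and a UI for comparing runs.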
While consumers experience the benefits of AI/ML from various sources in daily life, enterprises face challenges in applying similar AI/ML techniques to transform their businesses. In this session, we will share how Workday (an enterprise SaaS company for HCM and FIN) identified a specific business problem for ML to solve, collected enough data to prototype, and deployed the solution as part of the Workday application product, available to all Workday customers, in less than 18 months. We will also share lessons learned from the legal, privacy, and security aspects of the human-in-the-loop approach, a critical part of the enterprise ML product development journey.
Can an AI learn to play the perfect game of Snake?
Huge thanks to Brilliant.org for supporting this channel, check them out: https://www.brilliant.org/CodeBullet
Art created by @Dachi.art. https://www.instagram.com/dachi.art
Intelligent real-time applications are a game changer in any industry. This session explains how companies from different industries build intelligent real-time applications. The first part of this session explains how to build analytic models with R, Python or Scala leveraging open source machine learning / deep learning frameworks like TensorFlow, DeepLearning4J or H2O.ai. The second part discusses the deployment of these built analytic models to your own applications or microservices by leveraging the Apache Kafka cluster and Kafka’s Streams API instead of setting up a new, complex stream-processing cluster. The session focuses on live demos and teaches lessons learned for executing analytic models in a highly scalable, mission-critical and performant way.
Key takeaways for the audience:
- Insights are hidden in Historical Data on Big Data Platforms such as Hadoop
- Machine Learning and Deep Learning find these Insights by building Analytics Models
- Streaming Analytics uses these Models (without Redeveloping) to act in Real Time
- See different open source frameworks for Machine Learning and Stream Processing like TensorFlow, DeepLearning4J or H2O.ai
- Understand how to leverage Kafka Streams to use analytic models in your own streaming microservices
- Learn best practices for building and deploying analytic models in real time leveraging the open source Apache Kafka Streams platform
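The deployment pattern in the last two takeaways, embedding a trained model directly in the stream processor, can be sketched generically. Kafka Streams itself is a Java API, so this Python sketch is only illustrative, and the "trained model" is a stand-in:

```python
def make_scorer(model):
    """Wrap a pre-trained model as a per-record transformation: the
    role a Kafka Streams mapValues step plays when the model is
    embedded directly in the streaming microservice, with no remote
    call to a separate model-serving cluster."""
    def score(event):
        return {**event, "prediction": model(event["features"])}
    return score

# Stand-in "trained model": a fixed linear scorer.
weights = [0.4, 0.6]
model = lambda x: sum(w * xi for w, xi in zip(weights, x))

scorer = make_scorer(model)
stream = [{"id": 1, "features": [1.0, 2.0]},
          {"id": 2, "features": [0.0, 5.0]}]
for event in stream:  # in Kafka Streams: stream.mapValues(scorer)
    print(scorer(event)["prediction"])
```

Because the model is applied record-by-record with no shared state, scaling out is just a matter of running more instances of the same microservice on more partitions.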
You can find the Java code examples and analytic models for H2O and TensorFlow in my Github project: https://github.com/kaiwaehner/kafka-streams-machine-learning-examples
Confluent, founded by the creators of Apache Kafka®, enables organizations to harness business value of live data. The Confluent Platform manages the barrage of stream data and makes it available throughout an organization. It provides various industries, from retail, logistics and manufacturing, to financial services and online social networking, a scalable, unified, real-time data pipeline that enables applications ranging from large volume data integration to big data analysis with Hadoop to real-time stream processing. To learn more, please visit http://confluent.io
Mobile compute platforms provide an exciting vehicle for deploying new computer vision and deep learning applications. This webinar elaborates on real industry use cases where adopting optimized low-level primitives for ARM processors has enabled improved performance and optimal use of heterogeneous system resources.
Best Machine Learning book: https://amzn.to/2MilWH0 (Fundamentals Of Machine Learning for Predictive Data Analytics).
Machine Learning and Predictive Analytics. #MachineLearning
Big Data, Hadoop, and Federation is the fifth video in this machine learning course. It explains some of the platforms and technologies used in machine learning. Keep in mind that these are the foundations, so we go into the types of infrastructure rather than specific products or vendors. The topics covered (big data, Hadoop, and federation) are all terms that are very useful in predictive analytics.
This online course covers the stages of big data analytics using machine learning and predictive analytics. Big data and predictive analytics is one of the most popular applications of machine learning and is foundational to getting deeper insights from data. The course covers machine learning algorithms, supervised learning, data planning, data cleaning, data visualization, models, and more. This self-paced series is perfect if you are pursuing an online computer science degree, online data science degree, online artificial intelligence degree, or if you just want to get more machine learning experience. Enjoy! Check out the entire series here: https://www.youtube.com/playlist?list=PL_c9BZzLwBRIPaKlO5huuWQdcM3iYqF2w&playnext=1
Support me! http://www.patreon.com/calebcurry
Subscribe to my newsletter: http://bit.ly/JoinCCNewsletter
More content: http://CalebCurry.com
Jeffrey is the CTO and one of the founders of Stratified Medical. He is a serial technologist, start-up founder, fund-raiser, and deep R&D strategist in big data, natural language processing, state-of-the-art deep learning, and the deployment of AI platforms at internet scale for tier-1 Silicon Valley companies. He holds a doctorate in machine learning and computer vision and has a further 7 years of post-doctoral research experience in brain-inspired pattern recognition at Imperial College. He successfully spun a start-up out of Imperial, attracting multi-million VC investment and revenue from a big UK retailer within 10 months. He now works in big data and advanced machine learning to leverage the totality of human knowledge, teaching machines to understand and reason, with the goal of making a real difference in the world. He is the author of over 45 articles in scientific journals and conferences, and holds 3 granted patents in the US and EU and 4 pending patents.
AWS Public Sector Summit 2018 - Washington, D.C.
We have all seen the power of AI and ML used to transform industries of every kind. But what does this all mean for humans? Can advancements in AI boost human intelligence and make information more easily available? We believe so! We will hear from Cerego, creator of a personalized learning platform that helps millions of learners - in classrooms, at work, or even on the battlefield - improve their retention and understanding of any content they need, when they need it. They are leveraging services like Amazon Alexa to enable new voice-driven learning experiences that take the power of the cloud, AI, and now voice to the next level of boosting human performance. We will then explore an open-source QnABot (chatbot) solution powered by Amazon Lex and Amazon Alexa for Q&A, virtual tours, trivia quizzes, and more. The White House Historical Association (WHHA) will discuss and demo their work implementing a QnABot-powered virtual tour of the White House, told from the perspective of the US president's roles.
Speakers: John Calhoun, Joanna Capps, Whitney Hayne, Andrew Smith Lewis
Developing machine learning capabilities will require heavy investment and the cultivation of a generation of developers with a background in data science.
Machine learning and artificial intelligence were the stuff of science fiction when an intelligent computer turned on its creators in 2001: A Space Odyssey. Fifty years later, intelligent algorithms are beginning to reshape many facets of health care, education and commerce – and that process is just beginning, says Jia Li, the head of R&D at Google Cloud AI.
“But machine learning development is a very complex and resource-consuming process. It will require investment and expertise in every single step: Collect the data, design a model, tune model parameters, evaluate, deploy it, and finally update and iterate the entire process,” Li said during her presentation at this year’s Women in Data Science (WiDS) conference at Stanford University.
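The development loop Li describes, collecting data, designing a model, tuning its parameters, evaluating, and then deploying and iterating, can be illustrated with a deliberately tiny, hypothetical sketch: a one-parameter threshold "model" tuned by grid search over synthetic data, using only the Python standard library (none of this is Google's tooling, just the shape of the process):

```python
import random

random.seed(0)

# 1. Collect data: synthetic (x, label) pairs; the true rule is label = x > 3.
data = [(x, int(x > 3.0)) for x in (random.uniform(0, 6) for _ in range(200))]
train, test = data[:150], data[150:]

def evaluate(threshold, dataset):
    """4. Evaluate: fraction of examples the thresholded model classifies correctly."""
    return sum((x > threshold) == bool(y) for x, y in dataset) / len(dataset)

# 2./3. Design a model (a single threshold) and tune its one parameter
# by grid search on the training split.
best = max((t / 10 for t in range(0, 60)), key=lambda t: evaluate(t, train))

# 5. "Deploy": the tuned model becomes a callable ready to score new inputs;
# in practice this step (and re-iterating on new data) dominates the effort.
model = lambda x: int(x > best)
print(round(best, 1), round(evaluate(best, test), 2))
```

Each numbered comment maps to one stage of the process Li lists; real projects differ mainly in that every stage (especially data collection and labeling) is vastly more expensive than this toy suggests.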
AI, or artificial intelligence, has the potential to improve outcomes for patients and help clinicians make better decisions, she says. In a sense, AI can help medical teams connect the dots. AI could suggest guidance on everything from patient lifestyles to medications and provide automated monitoring and early assessment of critical conditions by noticing subtle signals that a human would not be able to detect.
Studies have shown that 10 percent of thoracic patient deaths are related to diagnostic errors, and 4 percent of the 400 million or so radiological interpretations conducted each year in the U.S. contain clinically significant errors. Machine learning could improve those outcomes, but developing and training the software is quite challenging, Li says.
Building the models needed to make the software accurate requires board-certified radiologists to label and classify the information in those X-rays, a costly and time-consuming process. Li says that she and other data scientists are working to develop models that are less labor intensive.
In education, artificial intelligence algorithms could help customize courses for individual students based on their past experience, strengths, weaknesses and personal preferences, Li says. AI could free up teachers to work with students by automating chores such as homework and exam assessment.
Although AI and machine learning are hot topics, there are only about one million developers who have a data science background, and far fewer with a background in deep learning, Li says. Google, she says, has a partial solution to the dearth of qualified AI developers: Cloud AutoML is a suite of products that enable developers to train high-quality machine learning and AI models even if they lack expertise in those areas.
Professor Stuart Russell (UC Berkeley) is a pioneer in artificial intelligence. We talked about the future developments of AI and their implications for our lives. Is the movie Terminator just science fiction? Not really: the technology is already here. Stuart Russell tells us more in this 10-minute interview.
Programme produced by tomg conseils (http://tomg-conseils.com/) for Regards Connectés.
With the support of Petit Web and Frenchweb.
All rights reserved, tomg conseils 2018
Don't forget to credit http://regards-connectes.fr/