THE FUTURE IS HERE

AlphaGo Zero/AlphaZero, Explainable AI, Real-World AI for Business – O'Reilly Strata Conf – San Jose

Aligned with Strata San Jose 2018!

Location:

San Jose Convention Center, LL21 B
150 W San Carlos St.
San Jose, CA 95113

Talk 0: Meetup Updates and Announcements (by Chris Fregly, Founder & Engineer @ PipelineAI)
* We hit 11,000 members!!!

Talk 1: Deploying Serverless TensorFlow AI Models and Functions on a Kubernetes Cluster using PipelineAI and OpenFaaS by Chris Fregly, Founder & Engineer @ PipelineAI

Abstract:

Through a series of live demos, Chris will create and deploy a model ensemble using the PipelineAI Platform with GPUs, TensorFlow, and Scikit-Learn.
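
As a rough illustration of the serverless pattern (not PipelineAI's actual tooling, whose commands the demos will cover live), an OpenFaaS python3-template handler serving a scikit-learn model might look like the sketch below; the model path and JSON contract are hypothetical:

    import json
    import joblib  # scikit-learn model persistence

    # Load the trained ensemble once at cold start, not per request.
    # (Path is hypothetical; adjust to wherever the function packages its model.)
    model = joblib.load("model/ensemble.joblib")

    def handle(req):
        """OpenFaaS entry point: JSON feature vectors in, predictions out."""
        features = json.loads(req)
        return json.dumps({"predictions": model.predict(features).tolist()})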

Talk 2: AlphaGo Zero/AlphaZero with TensorFlow, Probabilistic Methods, and Neural Network Techniques by Brett Koonce, CTO @ Quarkworks

Abstract: In this talk, I will discuss how Google's DeepMind built AlphaGo Zero/AlphaZero to master the game of Go by combining probabilistic methods (Monte Carlo tree search), neural network techniques, and TensorFlow. A minimal sketch of the tree-search selection rule follows the link below.

* https://deepmind.com/blog/alphago-zero-learning-scratch/
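
For a flavor of how the probabilistic and neural pieces combine, here is a minimal sketch of the PUCT selection rule at the heart of AlphaGo Zero's Monte Carlo tree search, where a network-supplied prior biases the search toward promising moves; the data shapes are illustrative, not DeepMind's code:

    import math

    def puct_select(children, c_puct=1.0):
        """Pick the child maximizing Q + U, where
        U = c_puct * prior * sqrt(parent visits) / (1 + child visits).
        Each child is a dict with keys "q", "prior", "visits" (illustrative shape)."""
        total_visits = sum(c["visits"] for c in children)
        def score(c):
            exploration = c_puct * c["prior"] * math.sqrt(total_visits) / (1 + c["visits"])
            return c["q"] + exploration
        return max(children, key=score)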

Talk 3: Human in the Loop: Bayesian Rules Enabling Explainable AI by Pramit Choudhary, Lead Data Scientist @ DataScience.com

Abstract:

The adoption of machine learning to solve real-world problems has increased exponentially, but users still struggle to extract the full potential of their predictive models. It is no longer sufficient to evaluate a model only by its error metrics on a validation set. Yet there is still a perceived dichotomy between explainability and performance when choosing an algorithm: linear models and simple decision trees are often preferred over more complex models such as ensembles or deep learning models for ease of interpretation, even though this often results in a loss of accuracy. Is it actually necessary to accept such a trade-off between model complexity and interpretability?

Pramit Choudhary explores the usefulness of a generative approach that applies Bayesian inference to produce human-interpretable decision sets in the form of “if…and…else” statements; a toy sketch of such a decision list appears below. These human-interpretable decision lists with high posterior probabilities may strike the right balance among model interpretability, performance, and computation. This is an extension of DataScience.com's ongoing effort to enable trust in predictive algorithms and to drive better collaboration and communication among peers. Pramit also outlines DataScience.com's open source model interpretation framework, Skater (https://github.com/datascienceinc/Skater), and explains how it helps practitioners better understand model behavior without compromising on the choice of algorithm.
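
To make the form concrete, here is a toy decision list of the kind the talk describes; the rules and posterior probabilities are invented for illustration and are not output from Skater:

    # A top-down rule list: the first matching rule fires; records matching
    # no rule fall through to the default. All numbers here are made up.
    RULES = [
        (lambda x: x["age"] < 25 and x["accidents"] >= 2, 0.85),  # P(high risk | rule 1)
        (lambda x: x["age"] >= 60,                        0.30),  # P(high risk | rule 2)
    ]
    DEFAULT_PROB = 0.12  # posterior for the catch-all "else"

    def predict_proba(record):
        for condition, prob in RULES:
            if condition(record):
                return prob
        return DEFAULT_PROB

    print(predict_proba({"age": 22, "accidents": 3}))  # -> 0.85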

Bio:

Pramit Choudhary is a lead data scientist at DataScience.com, where he focuses on optimizing and applying classical machine learning and Bayesian design strategy to solve real-world problems. Currently, he is leading initiatives to find better ways to explain a model's learned decision policies, reduce the friction in building effective models, and close the gap between a prototype and an operationalized model.

Talk 4: Real-World AI Apps For Business (by Andrew Waterman, Founder & Principal Consultant @ Waterway Data)

Bio:

Andrew Waterman is Founder and Principal Consultant at Waterway Data. Previously, he was a machine learning and data science consultant at Casetext and Shaklee Corporation. Andrew has a B.S. and an M.S. in Symbolic Systems, Network Theory, and Economics of Science from Stanford University. LinkedIn: https://www.linkedin.com/in/anwaterman/