How can the physical universe give rise to a mind? I suggest replacing this confusing question with another one: what kind of information-processing system is the mind, and how is the mind computed? As we will see, even our ideas of the physical universe turn out to be computational. Let us explore some fascinating scenery of the philosophy underlying Artificial Intelligence. (More)

Patreon for conversations on Theories of Everything, Consciousness, Free Will, and God. (More)

Jim Hendler gave a talk on the semantic web layer cake during the gala banquet of the International Semantic Web Conference, Oct 2009. (More)

A New Philosophy on Artificial Intelligence | Kristian Hammond | TEDxNorthwesternU (More)

Mar. 26 — Microsoft Postdoctoral Researcher Timnit Gebru discusses the effects of bias in artificial intelligence. She speaks with Emily Chang on “Bloomberg Technology.” (More)

Dan Fagella talks to Jason Barnard, and together they provide a 5-minute definition of Artificial Intelligence and Machine Learning.
If you want to understand, in simple terms, what all the fuss is about, watch this 5-minute explanation in the form of an intelligent and enjoyable conversation between Dan and Jason. (More)

What would you do if an army of drones opened fire on you? This terrifying simulation video is making the rounds on social media and prompting conversation on mass shootings, war, and how we can protect ourselves in situations of terror. (More)

French math superstar Cedric Villani delivers his report on AI to President Macron today. Here’s what he said about the problem of biases in AI algorithms. For more info please visit: (More)

Algorithms encode data, and that data can be affected by human bias. Industry luminaries explore what this means for artificial intelligence (AI) in the enterprise – and how we can work together to minimize bias and maximize accuracy. (More)

My biggest pet peeve in machine learning: loudmouths saying unsupervised learning is free of humans and human bias. You couldn’t be more wrong if you tried! (More)

Check out my collab with “Above the Noise” about Deepfakes:
Today, we’re going to talk about five common types of algorithmic bias we should pay attention to: data that reflects existing biases, unbalanced classes in training data, data that doesn’t capture the right value, data that is amplified by feedback loops, and malicious data. Now, bias itself isn’t necessarily a terrible thing; our brains often use it to take shortcuts by finding patterns. But bias can become a problem if we don’t acknowledge exceptions to patterns or if we allow it to discriminate. (More)

Fei-Fei Li in conversation with Yuval Harari moderated by Nicholas Thompson (More)

Human-centric Artificial Intelligence : 2nd French-German-Japanese Symposium (November 16-20, 2020) (More)

The Humane AI project will develop the scientific foundations and technological breakthroughs needed to shape the ongoing Artificial Intelligence revolution. The goal is to design and deploy AI systems that enhance human capabilities and empower both individuals and society as a whole to develop AI that extends rather than replaces human intelligence. (More)

🚀 Invest in Artificial Intelligence, and in the next 5 years you could get rich (More)

An iOS app that can detect human emotions, objects, and a lot more. Made using the Core ML image-detection API. (More)

This tutorial will help you understand deep learning architectures such as convolutional neural networks (CNNs), which have almost completely replaced other machine learning techniques for specific tasks such as image recognition using large training datasets. In this webinar, we will go over how CNNs, their training methods, and the underlying hardware have evolved since LeNet first appeared in the late 1990s. We will examine the challenges that came along and some key innovations that helped overcome them. We will also look at a guide on how to get started with CNNs, along with some common pitfalls and tips and tricks for training CNNs. Presented by the Advanced Technology Group (ATG) of the CTO Office at NetApp. The ATG group is responsible for investigating, through early product prototypes, technologies expected to become mainstream in 3+ years. (More)

🔥Intellipaat RPA course:
In this RPA tutorial for beginners video, you will learn what Robotic Process Automation (RPA) is and the various tools that can be used to implement RPA technology. RPA training is in high demand these days, so we have come up with this video where we will show you how to create RPA programs using the UiPath tool. This RPA UiPath tutorial video is your one-stop video for the basic RPA concepts required to get started with this technology.
#RPATutorialForBeginners #UiPathTutorials #RPATraining (More)

Robotic Process Automation – RPA Tutorial for Beginners on Blue Prism (More)

** RPA Training using UiPath – **
** RPA Training using Automation Anywhere – **
This Edureka tutorial video on Blue Prism vs UiPath vs Automation Anywhere will help you demystify the fundamental differences between each of these RPA tools. The following topics are used for comparison:
3:42 Offers Trial Version
4:48 Market Trend
5:18 Based Technologies
5:52 Architecture
6:47 Process Designer
7:30 Programming Skills
8:36 Accessibility
9:01 Re-usability
9:40 Recorders
10:20 Robots
11:16 Accuracy
12:16 Operational Scalability
12:43 Community & Support
13:06 Jobs Related To Tools
13:40 Certification (More)

The path to skill around the globe has been the same for thousands of years: train under an expert and take on small, easy tasks before progressing to riskier, harder ones. But right now, we’re handling AI in a way that blocks that path — and sacrificing learning in our quest for productivity, says organizational ethnographer Matt Beane. What can be done? Beane shares a vision that flips the current story into one of distributed, machine-enhanced mentorship that takes full advantage of AI’s amazing capabilities while enhancing our skills at the same time. (More)

These robots milk cows whenever the cows demand it.
Subscribe: (More)

Shanghai Triowin Intelligent Machinery Co., Ltd imports advanced technology from Italy and elsewhere in Europe and America to build its technical program for citrus processing lines. Customized designs are available based on each enterprise's investment and actual production situation, delivering a true turn-key project for the customer. (More)

Mahindra Bolero ZLX Voice Messaging System (More)

This video covers a Stanford CoreNLP example. (More)

Transfer Learning in Natural Language Processing (NLP): Open questions, current trends, limits, and future directions. Slides:
A walk through interesting papers and research directions in late 2019/early 2020 on:
– model size and computational efficiency,
– out-of-domain generalization and model evaluation,
– fine-tuning and sample efficiency,
– common sense and inductive biases.
by Thomas Wolf (Science lead at HuggingFace) (More)

Hi, everyone. You are very welcome to week two of our NLP course. This week is about very core NLP tasks. We are going to speak about language models first, and then about some models that work with sequences of words, for example, part-of-speech tagging or named-entity recognition. All those tasks are building blocks for NLP applications, and they're very, very useful.

So first things first. Let's start with language models. Imagine you see some beginning of a sentence, like "This is the". How would you continue it? Probably, as a human, you know that "This is the house" sounds nice, while "This is the did" does not. You have some intuition. So how do you know this? Well, you have read books. You have seen some texts. So that's obvious for you. Can we build similar intuition for computers? Well, we can try. We can try to estimate the probability of the next word, given the previous words. But to do this, first of all, we need some data.

So let us get some toy corpus. This is a nice toy corpus about the house that Jack built. And let us try to use it to estimate the probability of "house", given "This is the". There are four interesting fragments here, and only one of them is exactly what we need: "This is the house". It means that the probability will be 1 out of 4. By c here, I denote the count. So this is the count of "This is the house", or of any other piece of text. These pieces of text are n-grams: an n-gram is a sequence of n words. So we can speak about 4-grams here. We can also speak about unigrams, bigrams, trigrams, etc. And we can try to choose the best n, and we will speak about it later. But for now, what about bigrams? Can you imagine what happens for bigrams, for example, how to estimate the probability of "Jack", given "that"? Okay, so we can count all the different bigrams here, like "that Jack", "that lay", etc., and say that only four of them are "that Jack". It means that the probability should be 4 divided by 10. So what's next? We can count some probabilities.
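The counting the lecture walks through can be sketched in Python. This is a minimal illustration, not code from the course: the four-verse toy corpus and the helper names `ngram_counts` and `prob` are my own.

```python
from collections import Counter

# Toy corpus: four verses of "The House That Jack Built" (lowercased, no punctuation).
corpus = [
    "this is the house that jack built",
    "this is the malt that lay in the house that jack built",
    "this is the rat that ate the malt that lay in the house that jack built",
    "this is the cat that killed the rat that ate the malt that lay in the house that jack built",
]

def ngram_counts(sentences, n):
    """Count every n-gram (sequence of n consecutive words) in the corpus."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += 1
    return counts

def prob(word, context):
    """P(word | context) = c(context + word) / c(context).
    Recounts the corpus on every call -- fine for a toy example."""
    n = len(context)
    return ngram_counts(corpus, n + 1)[context + (word,)] / ngram_counts(corpus, n)[context]

print(prob("house", ("this", "is", "the")))  # 1/4 = 0.25
print(prob("jack", ("that",)))               # 4/10 = 0.4
```

This reproduces the two numbers from the lecture: "This is the" occurs 4 times but is followed by "house" only once, and "that" occurs 10 times but is followed by "Jack" only 4 times.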
We can estimate them from data. Well, why do we need this? How can we use this? Actually, we need this everywhere. To begin with, let's discuss this Smart Reply technology. This is a technology by Google. You can get some email, and it tries to suggest some automatic reply. For example, it can suggest that you should say thank you. How does this happen? Well, this is some text generation, right? This is some language model. And we will speak about this later, in much detail, during week four.

There are also some other applications, like machine translation or speech recognition. In all of these applications, you try to generate some text from some other data. It means that you want to evaluate probabilities of text, probabilities of long sequences. Like here, can we evaluate the probability of "This is the house", or the probability of a long, long sequence of 100 words? Well, it can be complicated, because maybe the whole sequence never occurs in the data. So we can count something, but we need somehow to deal with small pieces of this sequence, right?

So let's do some math to understand how to deal with small pieces of this sequence. Here, this is our sequence of k words, and we would like to estimate its probability. We can apply the chain rule, which means that we take the probability of the first word, and then condition the next word on this word, and so on. So that's already better. But what about this last term here? It's still kind of complicated, because the prefix, the condition there, is too long. Can we get rid of it? Yes, we can. The Markov assumption says you shouldn't care about all the history. You should just forget it. You should just take the last n terms and condition on them, or to be correct, the last n-1 terms. This is where we introduce an assumption, because not everything in the text is connected. And this is definitely very helpful for us, because now we have some chance to estimate these probabilities.
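In symbols, the chain rule and the Markov assumption described above read as follows (standard n-gram notation; the formulas are not taken from the slides themselves):

```latex
P(w_1, \dots, w_k) = \prod_{i=1}^{k} P(w_i \mid w_1, \dots, w_{i-1})
                \approx \prod_{i=1}^{k} P(w_i \mid w_{i-n+1}, \dots, w_{i-1})
```

The first line is exact; the second replaces the full history with only the last n-1 words.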
So here, what happens for n = 2, for the bigram model? You can recognize that we already know how to estimate all those small probabilities on the right-hand side, which means we can solve our task. For a toy corpus again, we can estimate the probabilities, and that's what we get. Is it clear for now? I hope it is. But I want you to think about whether everything is nice here. Are we done? (More)
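The bigram model just described can be sketched end to end. A minimal illustration with the same four-verse toy corpus; the `<s>`/`</s>` sentence-boundary markers are a standard convention I have added, not something from the transcript:

```python
from collections import Counter

# Four verses of the toy corpus, lowercased; boundary markers are added below.
corpus = [
    "this is the house that jack built",
    "this is the malt that lay in the house that jack built",
    "this is the rat that ate the malt that lay in the house that jack built",
    "this is the cat that killed the rat that ate the malt that lay in the house that jack built",
]
padded = [["<s>"] + s.split() + ["</s>"] for s in corpus]

bigram_counts = Counter()
context_counts = Counter()
for words in padded:
    context_counts.update(words[:-1])           # tokens that can start a bigram
    bigram_counts.update(zip(words, words[1:]))

def bigram_prob(prev, word):
    """P(word | prev) = c(prev word) / c(prev). No smoothing: an unseen
    context raises ZeroDivisionError, an unseen bigram gives probability 0."""
    return bigram_counts[(prev, word)] / context_counts[prev]

def sentence_prob(sentence):
    """Probability of a whole sentence as a product of bigram probabilities."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= bigram_prob(prev, word)
    return p

print(sentence_prob("this is the house that jack built"))  # ≈ 0.16 (= 4/10 × 4/10)
```

Every factor for this sentence is 1 except P(house | the) = 4/10 and P(jack | that) = 4/10. The lack of smoothing is exactly the "are we done?" problem the lecture hints at: any unseen bigram zeroes out the whole sentence.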

Check out the official A.I.: Artificial Intelligence (2001) trailer starring Haley Joel Osment! Let us know what you think in the comments below.
► Buy or Rent on FandangoNOW: (More)

The tech billionaire tweets about the famous cognitive scientist’s comprehension of artificial intelligence.
» Subscribe to CNBC: (More)

SASTRA Day 4, Session 01: ATAL AICTE FDP on AI, ML & DL, 2020-09-16 at 20:42 GMT-7
Workshop topics
– Introduction to Artificial Intelligence
– Introduction to Python
– Introduction to the Internet of Things (IoT)
– Problem Formulations & Representations
– Uninformed and Informed Search Algorithms
– Knowledge Representation and different types of Knowledge Representation
– Ontology Engineering
– Fuzzy and Temporal Logic Systems
– Natural Language Processing
– Machine Learning and Deep Learning
– Reinforcement Learning
– Application and current trends of AI
– Sample Problems
– Case Studies & hands-on Coding using Python for the above topics
Full playlist: (More)
