You probably have one electricity supplier for your house. But these days the average household could probably buy from several such companies; it just can’t easily access the marketplace of possible suppliers. Wouldn’t it be smarter if you had an AI in your house that could purchase energy from these producers, including those within the local grid, at the best prices and at the best time of day?
That’s what the Tibber startup does in Norway, and it has just raised a $12 million Series A round from an iconic Silicon Valley VC.
Hailing originally from Stockholm, Tibber lets customers lower their energy bills in exactly this manner: the user works through a simple app, while the purchasing of power is handled automatically by its bots. Tibber is always hunting for the lowest electricity prices, and it alerts customers so they can consume energy during the cheapest hours of the day.
The funding round was led by SF-based Founders Fund, known for its early investments in Spotify, Facebook, SpaceX, Palantir, Airbnb and Stripe. Tibber is only the third investment Founders Fund has ever made in Europe, which is quite something. The rest of the round came from existing investors, including Wellstreet, BKK, Petter Stordalen and RFF Vest.
Prior to this round the company had raised $3-4 million. It now plans to expand to Germany.
In a statement, Zack Hargreaves, principal at Founders Fund said: “The tools we currently use to manage our utilities are completely outdated. Tibber combines wholesale electricity prices with IoT integrations to save users an average of 20% on electricity bills. Consumers will see cost savings from simply downloading the app.”
Although Tibber only powers 40,000 homes right now, 25 percent of them are smart homes, whose customers can control their power usage through Tibber-connected devices such as electric-car chargers, connected thermostats and smart plugs.
Edgeir Aksnes, CEO and founder, says all their customer growth has come from word of mouth: “With this funding round complete, we are set to further expand in the Nordics, develop our product and launch Tibber in new markets in Europe.”
Tibber has a team of 21 people and currently operates in Sweden and Norway.
Last year, Tibber launched a smart charging feature for Tesla and other electric cars and hybrids. The company claims that its solution can cut 20 percent off the charging price compared to the rest of the market.
Can we build AI without losing control over it?
When the Industrial Revolution was booming in the largest empire on the face of the earth, people could for the first time cross the Atlantic in stable vessels and move resources from one end of a country to the other by rail. Ocean liners offered groundbreaking speeds, with luxury laid on for the wealthiest of English high society. That is how the evolution of technology made travel easier for people living on either side of the Atlantic.
It’s impressive to see how far we’ve come since then.
From using iron and wood to build palaces, cathedrals and even steamships, we now stand at the gates of a new technology. Artificial Intelligence (AI) and its subset, Machine Learning, started their journey a long time ago, and for decades they have been getting better at keeping up with the constant changes humanity throws at them.
Understanding AI in elementary terms can help people get past the fear induced by content packed with heavy mathematics or giant blocks of code. Most of the tutorials available fail to get the message across, because reading them is akin to drinking water from a fire hose: most of the information doesn’t go where it’s supposed to.
So, allow me a few minutes of your time to explain what AI is.
When a machine begins to mimic the behaviour of a human, that is Artificial Intelligence at work. Seeing a robot behind the wheel during an episode of road rage would certainly be alarming, but that isn’t something we have to worry about right now. AI is the ongoing development of machines to perform tasks that would otherwise require human intelligence: analyzing patterns and suggesting possible outcomes, visual perception, speech-to-text, translation.
Believe it or not, AI applications are more common in our day-to-day lives than most people realize. If you’ve ever unlocked your phone by lifting it to your face, asked OK Google about the temperature in Aspen, or even played around with Siri by asking it redundant questions, you have already used an AI application.
Many might ask: if AI applications are so widespread, why don’t our lives emulate the Jetsons?
Well, that was a cartoon to start with.
Modern Day AI in the Flesh
In 2019, AI applications are more subtle, but they are showing signs of evolving into something worth beholding soon. As more and more tech giants invest in Artificial Intelligence, inventions are arriving that promise to make daily activities a lot simpler.
Google’s self-driving cars, known as Waymo, are on the streets of Phoenix, Arizona as you read this. The era of driverless cars is rolling through many cities in the States where it is legal for self-driving vehicles to roam. A few tweaks are still not up to the mark, but these vehicles have repeatedly been judged very safe.
Smart homes, too, are now being equipped with applications that recognize and act on the audio commands given to them.
The 2002 Home Alone movie showed what a smart house can do. You might have noticed phrases like ‘door open’, ‘open sesame’ or ‘maximum speed, sesame’ used by Kevin, the butler, the maids and even the burglars. With AI trends like voice search and speech-to-text, smart houses may become more common throughout the developed world.
Machine Learning: A Definition
Machine learning, a subset of AI, deals with familiarizing a machine with scenarios so that it can predict their outcomes: by showing it the same event many times (sometimes with minor changes), the machine learns to grab the patterns common between them and make predictions about the future.
It might be easier to cite an example for a better understanding.
Picture a dog. Certain features tell us it is one: the wagging tail, the lopsided ears, the tongue hanging out, and so on. To teach a machine to recognize a dog, numerous images are fed into it, and it tries to pick out the features of a dog. After coming across many examples, it succeeds in recognizing the pattern.
Now, if that same machine comes across a cat, it will not be able to recognize it, because it never encountered an event where images of cats were processed; the cat looks entirely new to the machine. Similarly, if a machine is accustomed to playing and predicting the outcomes of baseball matches, suddenly handing it a hockey stick will leave it confounded. That is the key limitation of Machine Learning.
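The dog-versus-cat idea can be sketched in a few lines of Python. This is a toy illustration with made-up 0/1 feature vectors, not a real vision system: a "machine" that has only ever matched dog-like patterns has no basis for recognising anything else.

```python
# Each animal is described by three made-up 0/1 features:
# (tail_wagging, ears_floppy, tongue_out).
dog_examples = [(1, 1, 1), (1, 0, 1), (1, 1, 0)]

def looks_like_a_dog(animal, threshold=2):
    """Return True if the animal closely matches enough known dog patterns."""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    # Count training examples within distance 1 of this animal.
    matches = sum(1 for d in dog_examples if distance(animal, d) <= 1)
    return matches >= threshold

print(looks_like_a_dog((1, 1, 1)))  # a typical dog: True
print(looks_like_a_dog((0, 0, 0)))  # a cat-like pattern it never saw: False
```

The machine has no notion of "cat"; anything outside the patterns it was fed is simply unrecognized.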
To understand a neural network, take the human nervous system as an example: millions of neural connections help information reach the brain, which predicts and relays the outcomes.
If you were asked “Is it warm outside?”, you might step outside to find out. Your skin conveys the heat it feels, you see the yard filled with sunlight, and above are clear skies. Together these observations give you an idea of what is happening outside and make you aware of the temperature in your surroundings.
With examples like these, concepts like AI, Machine Learning and Neural Networks become quite easy to comprehend. That is not to say you can advance your career in these fields without some knowledge of coding and mathematics; those remain prerequisites for any serious course.
However, you do not have to begin your basic understanding of AI and its subsets with equations and problems that act as a hindrance. And people who are not yet familiar with the foundational terminology will not do well starting off with courses that fail to address it.
Since it is a growing career option, the field still has the potential to offer an excellent salary package. So, if you have the zeal to excel in a rapidly growing field with untapped potential, consider courses that explain the concepts clearly in easy language. I know how daunting it can be to search for a course that delivers deeper insight into AI and Machine Learning. Among many, one I found useful offers a degree certification at the end of all its sessions. Though it is not yet live and still in the funding phase, more than 250 backers have already backed the project. Given this, and everything else the E-Degree provides, it could be very useful for anyone fascinated by the concepts of AI & ML.
Since there is an ocean of such courses, it is best to look for one that provides a unique perspective on the field. Exceptional content is scarce. So, if you wish to pursue AI and Machine Learning, think twice before you choose.
Hope this has been helpful for you, readers. Till then, keep reading and stay inquisitive!
Understanding AI and Machine Learning the Easy Way was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.
Do machine learning models really act like black boxes? For a large majority of people, especially those who are not data scientists, the answer is “yes”.
However, this is not completely true. With proper structuring and critical thinking, one can explain the predictions or decisions made by a machine learning model. In this article, I share a hybrid framework built on machine learning explainability concepts that can be used to explain a so-called black-box model. The framework uses the interpretations derived from a trained model: rather than descriptive analysis of the raw data, the focus is on model behaviours and characteristics such as relative feature importances, partial dependencies, permutation importances and SHAP values.
Crowdfunding is the practice of funding a project or venture by raising monetary contributions from many people across the globe. A number of organisations, such as DonorsChoose.org, Patreon and Kickstarter, host crowdfunding projects on their platforms. Kickstarter alone has hosted more than 250,000 projects, with more than $4 billion raised collectively.
While crowdfunding is one of the most popular ways to raise funds, the reality is that not every project reaches its goal: on Kickstarter, only about 35% of all projects have been successfully funded. This raises an important question: which projects achieve their goal? In other words, can project owners know in advance which project characteristics increase the chances of success?
In many studies, researchers and analysts have used descriptive analysis on crowdfunding data to obtain insights about project success, while others have applied predictive modelling to estimate the probability of success. Both approaches have fundamental problems: descriptive analysis gives only surface-level insights, while in predictive analysis the models act as black boxes.
The essential business use-cases in the crowdfunding scenario can be considered from two different perspectives: the project owner’s and the hosting company’s.
a. From the project owner’s perspective, it is highly beneficial to be aware of the key characteristics that greatly influence a project’s success. For instance, it would be valuable to know pre-emptively about the following questions:
b. From the perspective of companies that host crowdfunding projects, such as DonorsChoose.org, Patreon and Kickstarter: they receive hundreds of thousands of project proposals every year, and a large amount of manual effort is required to screen each project before it is approved for the platform. This creates challenges around scalability, consistency of project vetting across volunteers, and identification of projects that require special assistance.
Because of these two perspectives, there is a need to dig deeper and find more intuitive insights into project success. With such insights, more people can get their projects funded more quickly and at less cost to the hosting companies, and the companies can optimize their processes and channel even more funding directly to projects.
Hypothesis generation is a powerful technique that helps an analyst structure an insightful, relevant solution to a business problem. It means building an intuitive picture of the problem before even looking at the available data. Whenever I start a new business problem, I try to make a comprehensive list of all the factors that could drive the final output: which features should affect my predictions, and which values of those features will give the best possible result. For crowdfunding, the question becomes: which features decide whether a project will be successful?
So, to generate hypotheses for this use-case, we will write down a list of factors (without even looking at the available data) that could plausibly be important for modelling project success.
This is, of course, an incomplete list of the factors we can think of at this stage that may influence project success. Now, using machine learning interpretability, we can try to understand not only which features are actually important but also which values those features should take.
In this dataset, a number of features describe the active stage of a project: the project launched on a particular date, and a partial amount has already been raised. Our problem statement is a little different: we want to focus on the stage at which a project has not launched yet, identify whether it will succeed, and find the most important features (and feature values) that influence this outcome. So we perform some pre-processing, which includes the following:
Feature Engineering (Driven from Hypothesis Generation)
For Category and Main Category, I used a LabelEncoder. Some may argue that label encoding is not a perfect choice here and that a one-hot encoder should be used instead; but in our use-case we are only trying to understand the effect of a column as a whole, so a label encoder suffices. We can then generate count/aggregation-based features for the main category and subcategory.
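As a rough sketch of this step, here is plain Python standing in for sklearn's LabelEncoder and a pandas groupby count; the rows and column names are illustrative, not the real Kickstarter schema:

```python
from collections import Counter

# Illustrative rows; the real dataset has many more columns.
projects = [
    {"main_category": "Music"},
    {"main_category": "Film & Video"},
    {"main_category": "Music"},
    {"main_category": "Games"},
    {"main_category": "Music"},
]

# Label encoding: map each category name to a stable integer code,
# the same idea as sklearn's LabelEncoder.
categories = sorted({p["main_category"] for p in projects})
label = {c: i for i, c in enumerate(categories)}

# Count-based feature: how many projects share the category
# (a proxy for category popularity).
category_count = Counter(p["main_category"] for p in projects)

for p in projects:
    p["main_category_code"] = label[p["main_category"]]
    p["main_category_count"] = category_count[p["main_category"]]

print(label)        # {'Film & Video': 0, 'Games': 1, 'Music': 2}
print(projects[0])  # Music row gets code 2 and count 3
```

The integer codes carry no ordinal meaning; they only let the tree model split on the column, which is why label encoding is acceptable here.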
Now, with all those features prepared, we are ready to train our model: a single random forest for this task. There are of course many other models available, such as LightGBM or XGBoost, but in this article I am focusing not on the evaluation metric but on the insights from predictive modelling. We now have a model that predicts the probability that a given project will be successful. In the next section, we will interpret the model and its predictions; in other words, we will try to prove or disprove our hypotheses.
In tree-based models such as random forests, a number of decision trees are trained. During the tree-building process, we can compute how much each feature decreases the weighted impurity (or increases the information gain) in a tree. In a random forest, the impurity decrease from each feature is averaged across the trees, and the features are ranked by this measure; this is called relative feature importance. The more an attribute is used to make key decisions within the decision trees, the higher its relative importance, and the more that feature matters for making accurate predictions.
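A minimal sketch of how such importances come out of a trained forest, using sklearn on synthetic data (the feature names and the rule generating the labels are invented for illustration, not taken from the Kickstarter dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)

# Synthetic stand-in for the project data: success depends almost entirely
# on "goal" (feature 0), while "launched_day" (feature 1) is pure noise.
n = 500
goal = rng.uniform(100, 100_000, n)
launched_day = rng.randint(0, 7, n).astype(float)
X = np.column_stack([goal, launched_day])
y = (goal < 20_000).astype(int)  # smaller goals tend to succeed

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Impurity-based (relative) importances, normalised to sum to 1.
importances = model.feature_importances_
print(dict(zip(["goal", "launched_day"], importances.round(3))))
```

Because the label is a function of the goal alone, nearly all of the impurity decrease is attributed to that feature.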
Most Important (Relative): Goal | NumChars | LaunchedWeek | DiffMeanCategoryGoal | Duration | SyllableCount | LaunchedMonth | NumWords | LaunchedDay | MeanCategoryGoal
Least Important (Relative): Music | Theater | Fashion | Comics | Games | Publishing | Technology | Film & Video | Food | Crafts | Design | Dance | Art | Photography | Journalism
By applying this approach, we obtained the factors that matter at a high level. But we still need to answer: what are the optimal values of these features? That will be addressed by the techniques in the next sections. Before moving on, I wanted to explore relative feature importance a little further from a graph-theory perspective.
In the last section, we identified, at a very high level, which features are relatively important to the model outcome. In this section, we go a little deeper and examine which features have the biggest impact on the model predictions in an absolute sense. One way to identify such behaviour is permutation importance.
The idea of permutation importance is very straightforward. After training a model, its outcomes are recorded; the most important features are then the ones whose random shuffling leads to the biggest drops in the model’s accuracy. Let’s look at the permutation importance of our model’s features.
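The shuffling idea can be illustrated by hand, without sklearn's permutation_importance helper. In this toy setup the "trained model" is a fixed rule that keys off feature 0 only, so shuffling feature 0 destroys accuracy while shuffling feature 1 changes nothing:

```python
import random

random.seed(0)

# Toy dataset: the label depends only on feature 0.
X = [[random.random(), random.random()] for _ in range(400)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def model_predict(row):
    # A "trained" model that (correctly) keys off feature 0.
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model_predict(r) == t for r, t in zip(rows, labels)) / len(labels)

def permutation_importance(feature_idx, n_repeats=5):
    """Average accuracy drop when one feature's column is shuffled."""
    base = accuracy(X, y)
    drops = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        random.shuffle(column)
        X_shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                      for row, v in zip(X, column)]
        drops.append(base - accuracy(X_shuffled, y))
    return sum(drops) / n_repeats

print(permutation_importance(0))  # large drop: feature 0 matters
print(permutation_importance(1))  # zero drop: feature 1 is noise
```

Because the drop is measured on actual predictions, this importance is "absolute" in the sense discussed above, rather than relative to the other features.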
With this method, we obtain the importance of a feature in an absolute rather than a relative sense. Assuming our feature space covers most of what drives the outcome, it is interesting to plot both the permutation and the relative feature importances together and make some key observations.
Using permutation importance, we can also evaluate which keywords make the biggest impact on the model’s predictions. Let’s train another model that also uses the keywords from the project name and observe their permutation importance.
From the first plot, we can observe that certain keywords, when used in the project name, are likely to increase a project’s probability of success, for example “project”, “film” and “community”. On the other hand, keywords like “game”, “love” and “fashion” are likely to garner less attention. This implies that crowdfunding projects related to games, or to entertainment themes such as love or fashion, may be less successful than those related to art, design and the like.
So far we have only discussed which features are most or least important within a pool of many; for example, we observed that project goal, project duration and the number of characters used are among the important features for project success. In this section, we will look at the specific values or ranges of those features that lead to success or failure. Specifically, we will observe how increasing or decreasing a feature’s value affects the model outcome. These effects can be obtained by plotting partial dependency plots for the different features.
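Under the hood, a partial dependency curve is just an average: force one feature to each grid value in every row, then average the model's predictions. A hand-rolled sketch with an invented toy model (its numbers are illustrative, chosen so predicted success plateaus after 10 words and favours small goals):

```python
# Invented toy model: predicted success rises with the number of words in
# the name (plateauing at 10) and is higher for small goal amounts.
def predict(num_words, goal):
    return min(num_words, 10) / 20 + (0.5 if goal < 10_000 else 0.1)

# A handful of (num_words, goal) rows standing in for the dataset.
dataset = [(3, 5_000), (8, 50_000), (12, 2_000), (5, 80_000)]

def partial_dependence(grid):
    """For each grid value, force num_words to that value in every row
    and average the model's predictions over the dataset."""
    curve = []
    for value in grid:
        preds = [predict(value, goal) for _, goal in dataset]
        curve.append(sum(preds) / len(preds))
    return curve

curve = partial_dependence([2, 6, 10, 14])
print(curve)  # rises with word count, then flattens after 10
```

Libraries such as sklearn's partial dependence tooling or pdpbox automate exactly this sweep-and-average over a trained model.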
Project Name — Features
We observe that projects with fewer words (<= 3) in the name show no improvement in predicted success. As the number of words in the project name increases, the predicted success also rises roughly linearly, and for projects with more than 10 words in the name the model saturates and shows similar predictions. Hence, the ideal word count is somewhere around 7–10.
Number of Characters
From the second plot, we observe that when the name has fewer than 20 characters in total, the predicted success falls below the baseline; increasing the number of characters in the name raises the prediction roughly linearly.
Let’s also plot the interaction between the number of words and characters used.
From the above plot, it can be observed that about 40–65 characters and 10–14 words are good targets for the project name.
Project Launched Day and Duration
For shorter project durations (less than 20 days), the chances that a project will be successful are higher. However, if the duration is stretched to, say, 60–90 days, the project is less likely to achieve its goal.
We understood from the permutation importance that the launch month has little impact, and the partial dependency plots confirm it. Still, I wanted to see whether there are specific months in which the chances of success are higher. It looks like the success rate is slightly higher towards the last quarter of the year (months 9–12) and slightly lower in quarter 3.
For the launch day, predicted success is lower when a project launches on Friday through Sunday than on Monday through Wednesday.
Project Main Category
From the feature definition, category count acts as a proxy for the popularity of a project category. For example, if a large number of projects are posted in the Travel category, its category_count will be higher, so it is a popular category on Kickstarter; if projects are rarely added to the Entertainment category, its category_count, and hence its popularity, will be lower. From the plot, we can observe that a project’s chances of success are higher if it belongs to a popular category. The same holds true for the main category.
How about specific categories?
By plotting the pdp_isolate graph we can also identify the effect of specific project categories.
From the partial dependency plot for project category, we observe that the predicted probability of success increases if the project belongs to the “Music”, “Comics”, “Theater” or “Dance” categories, and decreases if it belongs to “Crafts”, “Fashion” or “Film & Video”. The same insights are backed by the actual predictions plot.
5.4 Understanding the decisions made by the Model (using SHAP)
In this section, we make the final predictions from our model and interpret them. For this purpose, we use SHAP values, which are the averages of the marginal contributions of individual feature values across all possible coalitions. To understand this in layman’s terms, consider a random project from the dataset with the following features:
The trained model predicts that this project is likely to be successful, with a probability of 75%. But someone might ask: why does this project have a success probability of 75%, not 95%? To answer this question, we obtain the SHAP values for the prediction the model made for this project. SHAP values indicate how much the model’s output moves up or down from the average prediction across the entire dataset. For example:
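The coalition-averaging behind SHAP can be reproduced exactly for a tiny hand-written model. In practice one would use the shap library on the trained forest; here the model, the feature values and the baseline "average project" are all invented for illustration:

```python
from itertools import permutations

# Feature values for one hypothetical project, and an "average" baseline.
x = {"goal": 2_000, "duration": 30, "num_words": 8}
baseline = {"goal": 50_000, "duration": 60, "num_words": 5}

def model(features):
    """Tiny hand-written scoring model with one interaction term."""
    score = 0.4  # base success probability
    if features["goal"] < 10_000:
        score += 0.2
    if features["duration"] < 45:
        score += 0.1
    if features["num_words"] >= 7 and features["goal"] < 10_000:
        score += 0.1  # interaction: small goal AND descriptive name
    return score

def shapley_values(x, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over every ordering in which features are switched from baseline to x."""
    names = list(x)
    phi = {n: 0.0 for n in names}
    orderings = list(permutations(names))
    for order in orderings:
        current = dict(baseline)
        before = model(current)
        for name in order:
            current[name] = x[name]
            after = model(current)
            phi[name] += (after - before) / len(orderings)
            before = after
    return phi

phi = shapley_values(x, baseline)
print(phi)
# Efficiency property: contributions sum to prediction minus baseline.
print(sum(phi.values()), model(x) - model(baseline))
```

Note how the 0.1 interaction bonus gets split between "goal" and "num_words", while "duration" keeps its full 0.1 in every ordering; that averaging over orderings is precisely the "all possible coalitions" idea.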
Let’s see the model predictions on the entire dataset.
In a sample of around 6,500 crowdfunding projects, the model predicts that about 4,200 will fail and only about 2,300 will succeed. Now we want to understand what is driving the success and failure of these projects, so let’s plot the individual feature effects on some of these predictions to make sense of them.
For this particular project, the predicted value is increased to 0.58 from the base value of 0.4178. This implies that the presence of certain features, and their corresponding values, makes this project more likely to succeed: for instance, the duration is 44 days, the project name has 27 characters, and the difference between the goal amount and the category’s mean goal is about 50K. These features increase the probability.
For this particular project, apart from the number of characters and the duration, the goal amount of 2,000 also increases the probability, from the base value of 0.4178 to 0.72. Not many features decrease the probability significantly.
For this project, the probability does not increase much compared to other projects. In fact, it is decreased by the name being only 17 characters long, a high goal amount, and a duration of 29 days.
For this project, the duration of 25 days and a small goal of 1,500 significantly increase the project’s chances. However, the low word count (only 4 words) and the difference from the category’s mean goal decrease the probability almost equally.
For this project, the long duration and the presence of a particular category decrease the chances significantly; not many features or feature values manage to increase them.
We can now aggregate the SHAP values of every feature across every prediction made by the model, which shows each feature’s overall aggregated effect. Let’s plot the summary plot.
We can observe that SHAP values are highest for the same set of features that topped the relative and permutation importances. This confirms that the features which most influence project success are core project attributes such as duration and goal.
After applying these different techniques, we understand that certain factors increase or decrease the chances of a project successfully raising funds. From both the project owner’s and the company’s perspective, it is important to set optimal values for the project goal and duration: a very long duration or a very large goal amount may keep a project from fully succeeding. It is equally important to choose the right number of words and characters for the project name; a name with very few or very many words and characters may be less intuitive and less self-explanatory. Similarly, the project category plays a crucial role: some categories on the platform contain a very large number of projects, the so-called popular categories, and the chances may be higher for a project posted in a popular category than in a rare one.
In this article, I shared a general framework that I follow when solving any data-science or analytics problem; it can be applied to many other use-cases as well. The main focus was the different techniques of machine learning explainability: relative feature importance, permutation importance, partial dependencies and SHAP values. In my opinion, each has its pros and cons, and there are other alternatives as well (such as Skater and LIME).
Thanks for reading. The complete code is available at this link.
Machine Learning Explainability: complete walkthrough using a detailed use-case was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.
IBM took nearly a million photos from Flickr, used them to figure out how to train facial recognition programs, and shared them with outside researchers. But as NBC points out, the people photographed on Flickr didn’t consent to having their photos used to develop facial recognition systems — and might easily not have, considering those systems could eventually be used to surveil and recognize them.
While the photographers may have gotten permission to take pictures of these people, some told NBC that the people who were photographed didn’t know their images had been annotated with facial recognition notes and could be used to train algorithms.
“None of the people I photographed had any idea their images were being used in this way,” one photographer told NBC.
The photos weren’t originally compiled by IBM, by the way — they’re part of a larger collection of 99.2 million photos, known as the YFCC100M, which former Flickr owner Yahoo originally put together to conduct research. Each photographer originally shared their photos under a Creative Commons license, which is typically a signal that they can be freely used, with some limitations.
But the fact they could potentially be used to train facial recognition systems to profile by ethnicity, as one example, may not be a use that even Creative Commons’ most permissive licenses anticipated. It’s not entirely a theoretical example: IBM previously made a video analytics product that used body cameras to figure out peoples’ races. IBM denied that it would “participate in work involving racial profiling,” it tells The Verge.
It’s also worth noting that IBM’s original intentions may have been rooted in preventing AI from being biased against certain groups, though — when it announced the collection in January, the company explained that it needed such a large dataset to help train future AIs for “fairness” as well as accuracy.
Either way, it’s hard for the average person to check if their photos were included and request to have them removed, since IBM keeps the dataset private from anyone who’s not conducting academic or corporate research. NBC obtained the dataset from a different source and made a tool within its article for photographers to check if their Flickr usernames have been included in IBM’s collection. That doesn’t necessarily help the people who were photographed, though, if they decide they don't want to feed an AI.
IBM told The Verge in a statement, “We take the privacy of individuals very seriously and have taken great care to comply with privacy principles.” It noted that the dataset could only be accessed by verified researchers and only included images that were publicly available. It added, “Individuals can opt-out of this dataset.”
IBM is only one of several companies exploring the field of facial recognition and it’s not alone in using photos of regular people without expressly asking for their consent. Facebook, for instance, has photos of 800,000 faces open for other researchers to download.
Update March 12th 7:41PM ET: This article has been updated with a statement from IBM.
Artificial intelligence and machine learning have become essential if you are selling sales, customer service and marketing software, especially in large enterprises. The biggest vendors, from Adobe to Salesforce to Microsoft to Oracle, are jockeying for position to bring automation and intelligence to these areas.
Just today, Oracle announced several new AI features in its sales tools suite and Salesforce did the same in its customer service cloud. Both companies are building on artificial intelligence underpinnings that have been in place for several years.
All of these companies want to help their customers achieve their business goals by using increasing levels of automation and intelligence. Paul Greenberg, managing principal at The 56 Group, who has written multiple books about the CRM industry, including CRM at the Speed of Light, says that while AI has been around for many years, it’s just now reaching a level of maturity to be of value for more businesses.
“The investments in the constant improvement of AI by companies like Oracle, Microsoft and Salesforce are substantial enough to both indicate that AI has become part of what they have to offer — not an optional [feature] — and that the demand is high for AI from companies that are large and complex to help them deal with varying needs at scale, as well as smaller companies who are using it to solve customer service issues or minimize service query responses with chatbots,” Greenberg explained.
This would suggest that injecting intelligence in applications can help even the playing field for companies of all sizes, allowing the smaller ones to behave like they were much larger, and for the larger ones to do more than they could before, all thanks to AI.
The machine learning side of the equation allows these algorithms to see patterns that would be hard for humans to pick out of the mountains of data being generated by companies of all sizes today. In fact, Greenberg says that AI has improved enough in recent years that it has gone from predictive to prescriptive, meaning it can suggest the prospect to call that is most likely to result in a sale, or the best combination of offers to construct a successful marketing campaign.
Brent Leary, principal at CRM Insights, says that AI, especially when voice is involved, can make software tools easier to use and increase engagement. “If sales professionals are able to use natural language to interact with CRM, as opposed to typing and clicking, that’s a huge barrier to adoption that begins to crumble. And making it easier and more efficient to use these apps should mean more data enters the system, which results in quicker, more relevant AI-driven insights,” he said.
All of this shows that AI has become an essential part of these software tools, which is why all of the major players in this space have built AI into their platforms. In an interview last year at the Adobe Summit, Adobe CTO Abhay Parasnis had this to say about AI: “AI will be the single most transformational force in technology,” he told TechCrunch. He appears to be right. It has certainly been transformative in sales, customer service and marketing.
We present a benchmark for studying generalization in deep reinforcement learning (RL). Systematic empirical evaluation shows that vanilla deep RL algorithms generalize better than specialized deep RL algorithms designed specifically for generalization. In other words, simply training on varied environments is so far the most effective strategy for generalization. The code can be found at https://github.com/sunblaze-ucb/rl-generalization and the full paper is at https://arxiv.org/abs/1810.12282.
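The "train on varied environments" finding can be illustrated with a toy loop that randomizes an environment parameter each episode. The environment, parameter range, and update rule below are invented for illustration; the benchmark itself varies parameters of standard Gym tasks.

```python
import random

class ToyEnv:
    """A stand-in environment whose dynamics depend on a tunable parameter."""
    def __init__(self, gravity):
        self.gravity = gravity

def train_on_varied_envs(episodes=1000, seed=0):
    rng = random.Random(seed)
    policy_strength = 0.0
    for _ in range(episodes):
        # Randomize the environment parameter every episode.
        env = ToyEnv(gravity=rng.uniform(5.0, 15.0))
        # Nudge the policy toward whatever worked in this environment.
        policy_strength += 0.01 * (env.gravity - policy_strength)
    return policy_strength
```

A policy trained across the whole parameter range settles near the middle of that range instead of overfitting to any single setting, which is the intuition behind training on varied environments.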
Many tech companies are trying to build machines that detect people’s emotions, using techniques from artificial intelligence. Some companies claim to have succeeded already. Dr. Lisa Feldman Barrett evaluates these claims against the latest scientific evidence on emotion. What does it mean to “detect” emotion in a human face? How often do smiles express happiness and scowls express anger? And what are emotions, scientifically speaking?
Can artificial intelligence be emotionally intelligent? In Boston, researchers have programed BB-8, the little droid from “Star Wars: The Force Awakens,” to detect expressions and determine how people are feeling. And that technology is being adapted for marketing, video games, even therapy for children diagnosed with autism. The NewsHour’s April Brown reports.
AI is going to be huge for artists, and the latest demonstration comes from Nvidia, which has built prototype software that turns doodles into realistic landscapes.
Using a type of AI model known as a generative adversarial network (GAN), the software gives users what Nvidia is calling a “smart paint brush.” This means someone can make a very basic outline of a scene (drawing, say, a tree on a hill) before filling in their rough sketch with natural textures like grass, clouds, forests, or rocks.
The results are not quite photorealistic, but they’re impressive all the same.
This software isn’t groundbreaking exactly — researchers have shown off similar tools in the past, including one from Google that turns doodles into clipart — but it is the most polished demonstration of this concept we’ve seen to date. The software generates AI landscapes instantly, and it’s surprisingly intuitive. For example, when a user draws a tree and then a pool of water underneath it, the model adds the tree’s reflection to the pool.
Demos like this are very entertaining, but they don’t do a good job of highlighting the limitations of these systems. The underlying technology can’t just paint in any texture you can think of, and Nvidia has chosen to show off imagery it handles particularly well.
For example, generating fake grass and water is relatively easy for GANs because the visual patterns involved are unstructured. Generating pictures of buildings and furniture, by comparison, is much trickier, and the results are much less realistic. That’s because these objects have a logic and structure to them that humans are sensitive to. GANs can overcome this sort of challenge, as we’ve seen with AI-generated faces, but it takes a lot of extra effort.
Nvidia didn’t say if it has any plans to turn the software into an actual product, but it suggests that tools like this could help “everyone from architects and urban planners to landscape designers and game developers” in the future.
“It’s much easier to brainstorm designs with simple sketches, and this technology is able to convert sketches into highly realistic images,” said Nvidia’s Bryan Catanzaro in a blog post.
How do you teach a car to drive? For many self-driving car makers and artificial intelligence researchers, the answer starts with data and sharing.
Glassdoor’s report last year, 50 Best Jobs in America for 2018, named data scientist the best job in the US for the third year running.
The report took into consideration three key factors, namely job satisfaction rating, median annual base salary, and the number of job openings. Each of these three factors was given equal importance, and it was found that data science jobs excelled across all three.
Apart from a median base salary of $110,000, data science jobs were found to have a job satisfaction score of 4.4 and a job score of 4.8 (out of 5).
Similar findings were made public in a related report from CareerCast.com, where jobs in data science were shown to have one of the best growth rates in the industry over the next decade and to remain among the most difficult positions to fill.
Statistics from rjmetrics.com support these findings: over the past four years, only about 50% of the projected 19,500 data scientist positions were filled. All these statistics and predictions indicate how popular the job of a data scientist is and will become in the coming days.
Let’s take a deeper look into certain aspects to understand the driving factors behind this trend that makes data scientist the hottest job of the 21st century.
Research conducted by Business Insider some time ago predicted that by 2020, over 24 billion internet-connected devices will be installed globally. In other words, every person on this planet will have more than four devices to use. Together, these devices comprise the Internet of Things (IoT), and its presence is permanently changing our world.
IoT can be called the link between the digital world of data and physical world inhabited by humans. From your smartphones and smartwatches, to tablets, computers, smart TVs, and wearables — all come under the IoT.
What’s more, even your everyday appliances like lights, fans, smoke detectors, and thermostats have started boasting smart capabilities, which make them a part of the IoT. Even how you socialize, or get from one place to another (via the transportation system), is changing and will change further because of the IoT.
If you are wondering how the IoT is connected to data, here’s the answer: all these varied smart devices and appliances draw a large amount of data. A number of sources are used to collect this data, which can be categorized into two types: unstructured data and structured data, both of which come under the domain of big data.
Human input is more likely to contribute to unstructured data, which is the fastest growing type of big data.
This includes your social media posts, the emails you send and receive, the videos you stream or share, the customer reviews you post, and so on. Since unstructured data isn’t streamlined, it’s difficult to sort and manage with technology. On the other hand, structured data is collected by products, services, and electronic devices.
For example, your website traffic data, or the GPS coordinates collected by your smartphone, fall under this category. Since such data is organized, usually by categories, a computer or a program can read, sort, and organize it automatically.
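As a toy illustration of the difference, structured data with named fields can be sorted and aggregated by a program directly, while free text first needs structure extracted from it. The page names and traffic numbers below are invented:

```python
import csv
import io

# Structured data: rows with a schema can be processed automatically.
raw = """page,visits
/home,1200
/pricing,450
/blog,900
"""

rows = list(csv.DictReader(io.StringIO(raw)))
rows.sort(key=lambda r: int(r["visits"]), reverse=True)
print([r["page"] for r in rows])  # → ['/home', '/blog', '/pricing']

# Unstructured data (free text) has no such schema; a program must first
# extract structure (e.g. with NLP) before it can sort or aggregate it.
review = "Loved the product, but shipping was slow."
```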
A data scientist works with both structured and unstructured data, and sorts, organizes and analyses them to present them in easily understandable forms to the stakeholders. This in turn would help the stakeholders examine if their departmental, business and revenue goals are being met, and also help them take important business decisions.
In other words, a data scientist’s job isn’t just to process and analyze data.
Rather, he/she should be able to translate departmental or company goals into data-based deliverables like pattern detection analysis, prediction engines, optimization algorithms etc, which would offer the stakeholders useful insights and facilitate informed decision making.
Not having a data scientist on your team would mean that even if you sit on a pile of data, you won’t be able to leverage it for your benefit as you can’t get any meaningful insights or use them to predict trends (like the surge in interest in a particular item), which would have helped you to make timely business decisions.
In today’s competitive business landscape where data never stops flowing and the nature of challenges undergoes a continuous change, it’s the data scientists who can help decision makers make a transition from ad hoc analysis to enjoying an ongoing dialogue with data.
Now that you have an idea of the role of a data scientist, let’s see what makes it the hottest job of the 21st century.
Lack of qualified talent is one of the key reasons why data scientist jobs are in high demand.
Even for the positions that are vacant at present, employers are finding it difficult to fill them, as there aren’t enough skilled and qualified people around. The problem is that though companies need more data scientists, most candidates are not yet certified or are still studying for their degrees.
And this gap between demand and the availability of talent is set to worsen, since IBM has predicted that by 2020, demand for data scientists will surge by 28%.
This sets the perfect stage for aspirants seeking jobs as data scientists. Thanks to the huge number of vacancies in this field (which is set to increase further in the future), these aspirants can apply for and land such in-demand jobs a lot faster than their counterparts seeking other jobs.
As already mentioned before, the median salary of a data scientist in the US is close to $110,000. Elsewhere in the world too, the job pays extremely well.
According to the Burtch Works Study: Salaries of Data Scientists, the base salary of these professionals is up to 36% higher than that of their counterparts in other predictive-analytics roles.
As the demand for competent data scientists is set to grow significantly, the salary for the post is likely to become better.
Apart from the lure of a fat paycheck, the excitement of working with the latest technologies is also a big draw.
From Artificial Intelligence and Machine Learning (with promising future prospects) to R and Python (among the most popular languages in the field) and MongoDB (a leading database), a data scientist gets to work with a constantly evolving set of technologies.
This first-hand experience together with the future prospects of these popular technologies make the position of data scientists the most coveted one.
Once, data scientists were thought to be employable only in the IT and finance sectors, and only in large companies. But the scenario has changed. While the bigger names in the IT, finance and insurance sectors continue to hire these professionals, even medium and smaller companies are now hiring them, as they have realized the importance of data-driven decision making.
Though these smaller companies don’t have a data bandwidth as large as their bigger counterparts, they have started hiring qualified data scientists, who can help them get valuable insights from their metrics. With this, these smaller and medium companies can get a comparable “big data” advantage as the larger companies, which in turn would help them stay competitive.
And the good news is that it’s no longer just the IT, Professional Services, and Finance and Insurance that offer jobs to data scientists.
From companies in telecom, e-commerce, and BFSI (banking, financial services and insurance), to transportation and more, a lot of industries that generate or have access to a massive amount of data have woken up to the potential of leveraging such data to their business advantage.
And they are now hiring data scientists to process this huge amount of data to make the most of their business decision-making potential.
Be it proactive (where you anticipate what the problem could be and try to address it before it disrupts business operations) or preventive decision making, data science professionals can help. Even spotting trends to decide on the future course of business, or steering the business to an entirely new direction (in line with changing demands, preferences etc) becomes easy with the insights generated by data science.
Automating many small decisions is another key thing which can be done easily when the right data is collected and utilized.
For example, financial institutions using automated credit scoring systems to forecast their customers’ credit-worthiness not only free their employees from the task, but also achieve a higher degree of accuracy, while speeding up the process and lowering the risk of losses on loans granted to customers who weren’t credit-worthy.
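The credit-scoring idea can be sketched in a few lines. The features, weights, and approval cutoff below are entirely invented for illustration; a real lender would fit a statistical model to historical repayment data.

```python
# Toy automated credit-scoring sketch (all numbers are hypothetical).
def credit_score(income, debt, years_employed, missed_payments):
    score = 300
    score += min(income / 200, 300)         # income contributes up to 300 pts
    score -= min(debt / 1000, 150)          # outstanding debt subtracts pts
    score += min(years_employed * 20, 100)  # employment stability
    score -= missed_payments * 50           # repayment history
    return max(300, min(850, score))        # clamp to a familiar 300-850 range

def approve_loan(score, cutoff=650):
    # Automating the accept/reject decision frees staff from manual review.
    return score >= cutoff
```

For instance, `approve_loan(credit_score(80000, 10000, 5, 0))` approves a stable, low-debt applicant, while a low-income applicant with missed payments is rejected without any manual review.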
As the future scope of data science is extremely bright, it’s no wonder there’s almost a mad rush to get qualified as a data scientist and find a job in this highly lucrative domain.
The Hottest Job Of The 21st Century(Data Scientist!) was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.
As the biggest sales and marketing technology firms mature, they are all turning to AI and machine learning to advance the field. This morning it was Oracle’s turn, announcing several AI-fueled features for its suite of sales tools.
Rob Tarkoff, who had previous stints at EMC, Adobe and Lithium, and is now EVP of Oracle CX Cloud says that the company has found ways to increase efficiency in the sales and marketing process by using artificial intelligence to speed up previously manual workflows, while taking advantage of all the data that is part of modern sales and marketing.
For starters, the company wants to help managers and salespeople understand the market better to identify the best prospects in the pipeline. To that end, Oracle is announcing integration with DataFox, the company it purchased last fall. The acquisition gave Oracle the ability to integrate highly detailed company profiles into their Customer Experience Cloud, including information such as SEC filings, job postings, news stories and other data about the company.
“One of the things that DataFox helps you do better is machine learning-driven sales planning, so you can take sales and account data and optimize territory assignments,” he explained.
The company also announced an AI sales planning tool. Tarkoff says that Oracle created this tool in conjunction with its ERP team. The goal is to use machine learning to help finance make more accurate performance predictions based on internal data.
“It’s really a competitor to companies like Anaplan, where we are now in the business of helping sales leaders optimize planning and forecasting, using predictive models to identify better future trends,” Tarkoff said.
The final tool is really about increasing sales productivity by giving salespeople a virtual assistant. In this case, it’s a chatbot that can help handle tasks like scheduling meetings and offering task reminders to busy sales people, while allowing them to use their voices to enter information about calls and tasks. “We’ve invested a lot in chatbot technology, and a lot in algorithms to help our bots with specific dialogues that have sales- and marketing-industry specific schema and a lot of things that help optimize the automation in a rep’s experience working with sales planning tools,” Tarkoff said.
Brent Leary, principal at CRM Essentials, says that this kind of voice-driven assistant could make it easier to use CRM tools. “The Smarter Sales Assistant has the potential to not only improve the usability of the application, but by letting users interact with the system with their voice it should increase system usage,” he said.
All of these enhancements are designed to increase the level of automation and help sales teams run more efficiently, with the ultimate goal of using data to drive more sales and make better use of sales personnel. They are hardly alone in this goal, as competitors like Salesforce, Adobe and Microsoft are bringing a similar level of automation to their sales and marketing tools.
The sales forecasting tool and the sales assistant are generally available starting today. The DataFox integration will GA in June.
Say hello to Cozmo, the artificial intelligence Robot with emotions, a little guy with a mind of his own, with a one-of-a-kind personality that evolves the more you interact with him.
Director of the Centre for Quantum Photonics, University of Bristol
Starting Grant 2009 and Consolidator Grant 2014
AI, or Artificial Intelligence, is an emerging technology that has caught the attention of society. Tech leaders such as Elon Musk and Mark Zuckerberg have weighed in on the subject. While Musk has referred to AI as the biggest existential threat, Zuckerberg firmly believes in the benefits that AI will bring to humankind. In this article, we learn what AI is and consider whether we should fear it.
John McCarthy, the father of AI, explained that AI, or Artificial Intelligence, is the science of making intelligent machines, where intelligence means the ability to do tasks that humans can do.
There are three types of AI, classified by level of intelligence:
1. Narrow AI is AI built to accomplish a single task. It has specific knowledge or is good at a particular area, such as AlphaGo for playing Go, a system to classify spam emails, or a virtual assistant. Even a self-driving car is considered narrow AI (it consists of several narrow AIs working together). Narrow AI is what researchers have achieved so far.
2. General AI is more sophisticated than Narrow AI. It can learn by itself and solve problems as well as or better than a human. General AI is what society has been talking about and anticipating. However, we are still far from building a machine with this level of intelligence, because the human brain is very complicated and researchers still don’t fully understand how it works. It is therefore challenging to develop an AI that can interpret and connect knowledge from various areas in order to plan and make decisions.
3. Super AI, or Superintelligence, is an AI that is more intelligent than all geniuses in all domains of knowledge. It also has creativity, wisdom and social skills. Some researchers believe that we will achieve Super AI soon after we achieve General AI.
From these AI types, we can see that the definitions of General AI and Super AI are too general to measure whether a machine achieves that level of intelligence. One reason may be that it is still difficult to describe what human intelligence is. However, there is a test, called the Turing test, designed to determine whether a machine can think like a human.
The Turing Test was outlined by Alan Turing. So let’s get to know him a little bit. Alan Turing was an English mathematician and computer scientist, born in 1912. He is considered to be the father of computer science. He proposed a concept of a computing machine which is regarded as a model of a modern computer.
Alan Turing played a significant role during World War II, when he and his team at the Government Code and Cypher School built an electromechanical machine known as the Bombe to decode German Enigma ciphers. The success of the code-breaking work helped the Allies defeat the Nazis; it has been estimated that his work saved over 14 million lives. His work and life during World War II are the subject of The Imitation Game (2014).
After World War II, Turing worked on machine intelligence and proposed a method to measure the intelligence of a machine in terms of its ability to think like a human. He named this test “The Imitation Game,” which is now known as the Turing Test.
Turing described the test as a party game involving three players sitting in different rooms. Player A is a machine, Player B is a human, and Player C is a human judge. The judge talks to Player A and Player B via a chat program. A machine passes the test if more than 30% of judges believe that it is a human.
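The pass criterion described above is simple enough to state in code. This is only a sketch of the scoring rule, not of the conversation itself, and the judge verdicts are hypothetical:

```python
def passes_turing_test(judge_verdicts, threshold=0.30):
    # judge_verdicts: list of booleans, True = the judge believed
    # the machine was human. The machine passes if it fools more
    # than `threshold` of the judges.
    fooled = sum(judge_verdicts) / len(judge_verdicts)
    return fooled > threshold

# Eugene Goostman reportedly convinced 33% of judges:
print(passes_turing_test([True] * 33 + [False] * 67))  # → True
```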
Turing envisioned that a machine would pass this test by the year 2000. However, developing AI is more complicated than Turing thought it would be. The first machine claimed to have passed the test is a chatbot called Eugene Goostman, portrayed as a boy from Ukraine. Some researchers believed that Eugene’s characteristics, being young and a non-native English speaker, led the judges to forgive his grammatical mistakes and lack of knowledge in some topics.
Even though we are still far from General AI, many tech leaders have voiced concerns about potential AI threats. Elon Musk, Tesla and SpaceX founder, commented during an interview at MIT’s AeroAstro Centennial Symposium that AI could be the greatest existential threat to the human race.
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
Bill Gates also expressed his thoughts on the potential risk of AI in the future.
“First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern.”
Even the late Stephen Hawking, who depended on AI technology to communicate with the world, warned that “[Superintelligence] AI could spell the end of the human race.” He explained that AI can evolve exponentially while human evolution is much slower; eventually, AI will be smarter than humans and beyond control.
Nevertheless, let’s see what the actual experts in AI have to say about this topic. We do not doubt that Elon Musk and Bill Gates are intelligent, but on a specialized subject such as AI, we should also listen to what the AI experts have said. Yann LeCun, a director of Facebook AI Research, said:
“We would be baffled if we could build machines that would have the intelligence of a mouse in the near future, but we are far even from that.”
Andrew Ng, a professor at Stanford University and a leader in AI research, said something similar:
“I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars.”
Steve Wozniak, an Apple co-founder, used to warn that “AI could turn humans into their pets” but he is not scared of AI anymore. His reason is that we still don’t completely understand how the human brain works; therefore it is very challenging to build a machine that can think like us. As a result, we shouldn’t waste our time worrying that Super AI will take over the world.
From the opinions above, I think these tech leaders are not against AI, because AI has shown its potential in many applications that benefit society, for example AI robots in operating rooms, or AI systems that detect cancer cells in medical images. And because of the potential benefits, Musk co-founded OpenAI, a research company that develops ethical AI under strict regulations to avoid the potential dangers of AI being misused.
I think we are still very far from General AI, so we should not worry about AI taking over the world soon. Nevertheless, we should be aware of AI threats that can arise at some point in the future. All related organizations should start discussing and building a global framework to regulate AI development and usage, as well as creating actionable solutions, so we are ready when the time comes.
What is AI? Should we fear it? was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.
Last year, Google announced a new app to help the visually impaired named Lookout. The app uses AI to identify objects through your phone’s camera. It can also read text in signs and labels, scan barcodes, and identify currencies. This week, Google announced that Lookout will finally be available to download — though only for Pixel devices in the US.
Since announcing the app last year, Google says it’s been “testing and improving the quality” of its results. The company cautions that, as with all new technology, Lookout’s results will not always be “100 percent perfect,” but it’s soliciting feedback from early users.
To use Lookout, Google recommends that users wear their Pixel device on a lanyard around their neck or placed in the front pocket of a shirt or coat. That way, the phone’s camera gets an unobstructed view of the world and can identify objects and text “in situations where people might typically have to ask for help.”
It’s not clear when Lookout will be available on hardware other than Google’s own, but the company says it’s hoping to bring the app “to more devices, countries, and platforms soon.”
Notably, this isn’t the first time we’ve seen a big tech company apply AI to the task of helping the visually impaired. Microsoft launched an app with very similar functionality named Seeing AI in 2017. And this week the Redmond company announced an update for Seeing AI that lets users feel the shape of objects on their phone screens using haptic feedback.
Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum.
Automation Hero, formerly SalesHero, has secured $14.5 million in new funding led by Atomico, with participation by Baidu Ventures and Cherry Ventures. As part of the deal, Atomico principal Ben Blume will join the company’s board of directors.
The automation startup launched in 2017 as SalesHero, giving sales orgs a simple way to automate back-office processes like filing an expense report or updating the CRM. It does this through an AI assistant called Robin — “Batman and Robin, it worked with the superhero theme, and it’s gender neutral,” co-founder and CEO Stefan Groschupf explained — that can be configured to go through the regular workflow and take care of repetitive tasks.
“We brought computers into the workplace because we believed they could make us more productive,” said Groschupf. “But in many companies, people spend a lot of time entering data and doing painful manual processes to make these machines happy.”
The idea was to give salespeople more time to actually do their job, which is selling to clients. If all the administrative and repetitive “paperwork” is done by a computer, human employees can become more productive and efficient at skilled tasks.
By weaving together click robots, Automation Hero users can build out their own workflows through a no-code interface, tying together a wide variety of both structured and unstructured data sources. Those workflows are then presented in the inbox each morning by Robin, the AI assistant, and are executed as soon as the user gives the go-ahead.
After launch, the team realized that other types of organizations, beyond sales departments, were building out automations. Insurance firms, in particular, were using the software to automate some of the repetitive tasks involved with filing and assessing claims.
This led to today’s rebrand to Automation Hero.
Groschupf said that by automating the process of filling out a single closing form, it saved one insurance firm’s 430 sales reps 18.46 years per year.
Automation Hero has now raised a total of $19 million.
“We’re really excited with Atomico to bring on a great VC and good people,” said Groschupf. “I’ve raised capital before and I’ve worked with some of the more questionable VCs, as it turns out. We’re super-excited we’ve found an investor that really bakes important things, like a diversity policy and a family leave policy, right into the company’s investment agreement.”
Though he didn’t confirm, it’s likely that Groschupf is referring to KPCB, which has run into its fair share of controversy over the past few years and was an investor in Groschupf’s previous startup, Datameer.
In this tutorial, I am going to talk about Neural Style Transfer, a technique pioneered in 2015 that transfers the style of a painting onto an existing photograph using convolutional neural networks. The original paper was written by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.
The code used for this article can be forked from this repository.
In my honest opinion, this is one of the coolest machine learning applications, and it has percolated into mobile apps as well. One such example is Prisma, a mobile application that lets you apply the styles of paintings to your photographs in real time.
Looks cool and exciting right? Let’s dive in and implement our own style transfer algorithm.
Note: Before reading further, it is highly recommended to read the original paper once, or to read its sections in parallel with this article, to make sure you understand both.
The style transfer algorithm draws its root from the family of texture generation algorithms. The key idea is to adopt the style of one image while conserving the content of the other. This can be formulated as an optimisation problem. We can define a loss function around our objective and minimise the loss.
For those who love mathematics, this can be represented as:
loss = distance(style(reference_image)-style(generated_image)) + distance(content(original_image)-content(generated_image))
Let’s break it down bit by bit…
distance is a norm function such as the L2 norm
content is a function that takes an image and computes a representation of its content
style is a function that takes an image and computes a representation of its style
So, plugging all of these in, we can see that minimising the loss causes style(generated_image) to be close to style(reference_image), and similarly content(generated_image) to be close to content(original_image). That was exactly our objective, right?
Let us begin by getting a clear understanding of what we really mean by content and style.
Content is the higher-level macro structure of the image.
Style refers to the textures, colours and visual patterns in the image.
Assuming you understand how CNNs work: the initial layers identify local features of the image. The deeper we go into the network, the more higher-level content is captured, as opposed to just pixel values.
Each layer aims to learn a different aspect of the image content.
It is reasonable to assume that two images with similar content should have similar feature maps at each layer.
We will say x matches the content of a at layer l if their feature responses at layer l of the network are the same.
Deriving the style loss is a little tricky!
The feature responses of an image a at layer l encode the content; however, to determine style we are less interested in any individual feature of our image than in how they all relate to each other.
The style consists of the correlations between the different feature responses.
We will say x matches the style of a at layer l, if the correlations between their feature maps at layer l of the network are the same.
We will utilise something known as Gram Matrices for deriving the style representation.
We pick two of these feature columns (each a C-dimensional vector at a specific spatial position) and compute the outer product between them.
As a result, we get a CxC matrix that tells us which features in that feature map tend to activate together at those two spatial positions.
We repeat the same procedure for all pairs of feature vectors from all points in the HxW grid and average them out, throwing away all the spatial information that was in the feature volume.
Our objective here is to get only the content of the input image, without texture or style, which can be done by taking a CNN layer whose raw activations correspond only to the content of the image.
It is better to pick a higher layer, because in a CNN the first layers are quite similar to the original image.
However, as we move up to higher layers, we start to throw away much information about the raw pixel values and keep only semantic concepts.
Note that the size and complexity of local image structures from the input image increase along the hierarchy.
Heuristically, the higher layers learn more complex features than lower layers and produce a more detailed style representation.
If you have made it this far, congratulations! You have understood the core machinery of the style transfer algorithm.
Moving onto the most exciting part now: IMPLEMENTATION !
Style transfer can be implemented using any pre-trained CNN. This tutorial uses the VGG19 network, a simple variant of the VGG16 network with three more convolutional layers.
Kindly follow these installation notes to set up the environment. You will be up and running in no time!
We will begin by defining the paths to the style-reference image and the target image. Style transfer can be difficult to achieve if the images are of varying sizes, so we will resize them all to the same height.
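The tutorial's actual snippet is a Medium embed; as a stand-in, here is a minimal sketch of the resizing logic: given an original image size, compute an output size that preserves the aspect ratio at a fixed height (the 400-pixel height and the function name `output_size` are illustrative choices, not from the repository).

```python
def output_size(orig_height, orig_width, img_height=400):
    # Scale the width so the aspect ratio is preserved at the fixed height.
    img_width = int(orig_width * img_height / orig_height)
    return img_height, img_width

print(output_size(800, 600))  # an 800x600 image is resized to 400x300
```

The same target size is then applied to the style image, the content image and the generated image, so all three tensors line up inside the network.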
We will now set up the VGG19 network to receive a batch of three images as input: the style-reference image, the target image and a placeholder that will contain the generated image.
The style-reference and target images are static and are thus defined using K.constant.
The generated image, on the other hand, will change over time, so a placeholder is used to store it.
Content loss: the squared-error loss between the feature representation of the original image and the feature representation of the generated image.
Computing Gram matrices: we reshape the CxHxW tensor of features to C x (H*W), then multiply it by its own transpose.
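Since the original code embed is not reproduced here, the following is a hedged NumPy sketch of that squared-error (the tutorial itself uses Keras backend ops on layer activations; the math is identical):

```python
import numpy as np

def content_loss(base_features, generated_features):
    # Plain squared-error between the two feature representations.
    return np.sum(np.square(generated_features - base_features))
```

Identical feature maps give a loss of zero; the further the generated image's activations drift from the content image's, the larger the penalty.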
We could also use covariance matrices, but they are more expensive to compute.
Style loss: we minimise the mean-squared distance between the style representation (Gram matrix) of the style image and the style representation of the output image at one layer l.
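To make the reshape-and-multiply description concrete, here is a NumPy sketch of the Gram matrix and the per-layer style loss (a stand-in for the Medium embed; the normalisation constant follows the Gatys et al. formulation):

```python
import numpy as np

def gram_matrix(features):
    # features: (C, H, W). Flatten the spatial dims to get a C x (H*W)
    # matrix F, then F @ F.T gives the CxC channel correlations,
    # discarding all spatial layout.
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return F @ F.T

def style_loss(style_features, generated_features):
    S = gram_matrix(style_features)
    G = gram_matrix(generated_features)
    C, H, W = style_features.shape
    # Squared distance between the Gram matrices, normalised by
    # feature-map size as in the original paper.
    return np.sum(np.square(S - G)) / (4.0 * C**2 * (H * W)**2)
```

When the two images have the same channel correlations at that layer, the loss is zero regardless of where in the image the textures appear, which is exactly the spatial invariance we want from a style term.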
Total variation loss is a regularisation loss computed on the generated image. It avoids overly pixelated results and encourages spatial continuity in the generated image.
The constants a and b dictate how much preference we give to content matching versus style matching.
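As a stand-in for the embedded snippet, here is a NumPy sketch of the total variation term as it is commonly written in Keras style-transfer examples (the 1.25 exponent is the conventional choice there):

```python
import numpy as np

def total_variation_loss(x):
    # x: (H, W, C) generated image. Penalise squared differences between
    # each pixel and its neighbours below (a) and to the right (b),
    # encouraging spatially smooth output.
    a = np.square(x[:-1, :-1, :] - x[1:, :-1, :])
    b = np.square(x[:-1, :-1, :] - x[:-1, 1:, :])
    return np.sum(np.power(a + b, 1.25))
```

A perfectly uniform image has zero total variation; high-frequency pixel noise is penalised heavily, which is what keeps the optimiser from producing speckled artefacts.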
The loss we will be minimising is the weighted average of these three losses.
To compute the content loss, we use only one upper layer — block5_conv2 layer.
The style loss, on the other hand, uses a list of layers that reside at both the high and low levels of the network.
We will set up a class named Evaluator that computes the loss value and the gradients in one pass. It returns the loss value when called the first time and caches the gradients for the next call.
We will use SciPy’s L-BFGS algorithm to perform the optimization. It can only be applied to flat vectors, so we will flatten the image before passing it in.
Congratulations! You have implemented the style transfer algorithm successfully!
I hope you learned something from this article.
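The embedded code is not reproduced here, so the following is a minimal sketch of that caching pattern, framework-free: a single `loss_and_grads` callable (an assumed name for illustration) does the expensive joint computation once, and the two methods SciPy expects read from the cache.

```python
class Evaluator:
    # SciPy's L-BFGS wants separate callbacks for the loss and the
    # gradient. Computing them jointly once and caching the gradient
    # avoids running the expensive forward/backward pass twice.
    def __init__(self, loss_and_grads):
        self.loss_and_grads = loss_and_grads
        self._cached_grads = None

    def loss(self, x):
        value, self._cached_grads = self.loss_and_grads(x)
        return value

    def grads(self, x):
        grads, self._cached_grads = self._cached_grads, None
        return grads
```

For example, with a toy quadratic `lambda x: (float(np.sum(x**2)), 2 * x)`, calling `loss` then `grads` on the same point returns the value and the cached gradient without recomputation.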
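Here is a toy illustration of that flatten-then-optimise pattern (not the tutorial's embed): we "recover" a target array by minimising a squared error with `scipy.optimize.fmin_l_bfgs_b`, un-flattening inside the callback exactly as the style-transfer loop does with the image. The real tutorial minimises the combined content/style/variation loss instead.

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Stand-in for the generated image: a small 2-D array we want to recover.
target = np.arange(12, dtype=float).reshape(3, 4)

def loss_and_grad(flat_x):
    # L-BFGS only handles flat vectors, so reshape inside the callback.
    x = flat_x.reshape(target.shape)
    diff = x - target
    return np.sum(diff ** 2), (2.0 * diff).ravel()  # loss and flat gradient

best_flat, best_loss, _ = fmin_l_bfgs_b(loss_and_grad, np.zeros(target.size))
result = best_flat.reshape(target.shape)
```

In the full algorithm, `x0` is the (flattened) initial image and each iteration both evaluates the loss and nudges the pixels, so running a handful of L-BFGS iterations per saved frame gives the familiar progressive stylisation.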
Utilising CNNs to transform your model into a budding artist was originally published in Becoming Human: Artificial Intelligence Magazine on Medium.
Drones are useful in countless ways, but that usefulness is often limited by the time they can stay in the air. Shouldn’t drones be able to take a load off too? With these special claws attached, they can perch or hang with ease, conserving battery power and vastly extending their flight time.
The claws, created by a highly multinational team of researchers I’ll list at the end, are inspired by birds and bats. The team noted that many flying animals have specially adapted feet or claws suited to attaching the creature to its favored surface. Sometimes they sit, sometimes they hang, sometimes they just kind of lean on it and don’t have to flap as hard.
In all of these cases, some suitably shaped part of the animal’s foot interacts with a structure in the environment so that less lift needs to be generated or powered flight can be suspended entirely. Our goal is to use the same concept, commonly referred to as “perching,” for UAVs [unmanned aerial vehicles].
“Perching,” you say? Go on…
We designed a modularized and actuated landing gear framework for rotary-wing UAVs consisting of an actuated gripper module and a set of contact modules that are mounted on the gripper’s fingers.
This modularization substantially increased the range of possible structures that can be exploited for perching and resting as compared with avian-inspired grippers.
Instead of trying to build one complex mechanism, like a pair of articulating feet, the team gave the drones a set of specially shaped 3D-printed static modules and one big gripper.
The drone surveys its surroundings using lidar or some other depth-aware sensor. This lets it characterize surfaces nearby and match those to a library of examples that it knows it can rest on.
If the drone sees a pole it needs to rest on, it can grab it from above. If it’s a horizontal bar, it can grip it and hang below, flipping up again when necessary. If it’s a ledge, it can use a little cutout to steady itself against the corner, letting it shut off some or all of its motors. These modules can easily be swapped out or modified depending on the mission.
I have to say the whole thing actually seems to work remarkably well for a prototype. The hard part appears to be the recognition of useful surfaces and the precise positioning required to land on them properly. But it’s useful enough — in professional and military applications especially, one suspects — that it seems likely to be a common feature in a few years.
The paper describing this system was published in the journal Science Robotics. I don’t want to leave anyone out, so it’s by: Kaiyu Hang, Ximin Lyu, Haoran Song, Johannes A. Stork, Aaron M. Dollar, Danica Kragic and Fu Zhang, from Yale, the Hong Kong University of Science and Technology, the University of Hong Kong, and the KTH Royal Institute of Technology.
Charlie Munger, Berkshire Hathaway vice-chairman, shares his thoughts on the future of American Express, Costco and IBM working with artificial intelligence. And Bill Gates explains why it will be a huge help.
What is artificial intelligence (AI)? How do robots work? Will robots deliver an economic paradise, kill us all, or both?
After graduating from college with honors and a dual Bachelor’s degree in economics and psychology, Jacob was excited to join the corporate world. He had dreams of one day becoming the CMO of a large organization. At his first job out of college he was promised that he would work on amazing projects and travel the country meeting executives and entrepreneurs. Instead, he was stuck doing data entry, cold calling, and PowerPoint presentations. One day the CEO of the company asked Jacob to go buy him a cup of coffee; that was the last job he ever had. Since then he has been passionate about the future of work and designing great employee experiences.
Apple has announced the acquisition of AI company Laserlike to add to its growing roster of in-house talent.
Laserlike is known for its AI-powered app which makes it easier for users to follow news topics. Most notably, it was founded by former Google engineers.
Few people think of Apple as an AI leader. The firm’s closest rival, Google, has invested heavily in AI for many years (as much for its cloud and search businesses as mobile).
Apple is, therefore, coming from behind in terms of AI development. However, it is quickly catching up by hoovering up talented AI companies and their solutions. Let’s all remember, even Siri itself arrived via acquisition.
Here are some of the other AI companies Apple has acquired in the past few years:
Following the poaching of Google’s John Giannandrea, it’s been clear that Apple is doubling down on its AI efforts. Which makes sense; if you’re here on AI News, then you know how important it is for Apple to become a leader in the space.
On its website, Laserlike pledges to keep personalisation at its core post-Apple acquisition:
“This is one of the things we want to fix on the Internet. Laserlike’s core mission is to deliver high quality information and diverse perspectives on any topic from the entire web. We are passionate about helping people follow their interests and engage with new perspectives.”
Apple is holding an event on March 25th where it’s rumoured to be launching a news subscription service. It will be interesting to see if this acquisition plays a part.
The post Apple bolsters its AI talent with Laserlike acquisition appeared first on AI News.
How can a non-profit organization best use its available marketing budget to enhance its potential operations further? How can a business sort through customers’ purchasing data to develop a marketing plan to rise above the competition?
These questions become even more important when you consider the seemingly infinite amount of data that can be sorted, interpreted, and applied for a diverse range of purposes. For this reason, more and more people are turning to data science to make sense of it all.
A data scientist is a trained individual who can accumulate, organize, and analyze data, thus helping businesses from every walk of industry make informed decisions.
These highly trained professionals work extensively with massive amounts of structured and unstructured data to derive valuable insights that serve specific business goals and needs.
Essentially, data scientists wear multiple hats. They’re part computer scientist, part mathematician, part analyst, and part trend-spotter, and they need some critical non-technical skills as well. We’ll delve deeper into this later, but first let’s look at the common industries that benefit from data scientists.
Each industry comes with its own big data profile that can be analyzed by a data scientist. Here are some of the common industries that can leverage big data.
Apart from these, other notable industries that are on the constant lookout for data scientists include social networking, ecommerce, smart appliances, and utility providers, among others.
Though the key responsibilities of a data scientist depend on the project he or she is working on, all of them revolve around big data or complicated inputs, and all demand deep curiosity to be performed accurately. Let’s look at the common responsibilities of a data scientist, regardless of the nature and volume of the business.
Apart from these two, a data scientist has to stay continually updated on the relevant industry’s trends to provide useful recommendations to the business.
Value-based programs and strategic initiatives are two of the key areas such a professional focuses on. It’s important to understand that a data scientist’s role is collaborative: to solve complex business issues, he or she has to work closely with other teams such as the IT department, product managers, data engineers, and the data analytics team.
Now that you know why these professionals are in massive demand, it’s important to ask whether there are enough data scientists to meet it. The truth is, these professionals are in short supply, while the demand for them seems to be increasing by the day.
As businesses lean more and more toward machine learning and artificial intelligence, there’ll be more jobs than available experts to fill them, and perhaps this is the reason why data science has become one of the fastest growing tech employment fields today.
To begin with, you need a solid command of sophisticated visualization and adequate knowledge of the statistical techniques used to derive forward-looking insights. So, what are the critical skills and attributes of a data scientist? Let’s have a look.
Though there are different paths to becoming a data scientist, it is very difficult to break into the field without a bachelor’s degree. In addition, if your aim is an advanced leadership position, a doctorate or a master’s degree is your best bet.
Some schools offer data science degrees that can equip you with the skills necessary to process and analyze complex sets of massive data. Most of these programs combine analytical and creative elements with technical coursework in analysis techniques, statistics, computing, and more.
Some of the common degree fields that can help you become a data scientist include statistics, computer science, mathematics, and economics.
Before you delve deeper into your endeavor of becoming a data scientist, it’s important to understand that you’ll have to work in different settings, with different teams and in collaboration. The actual work environment can vary largely based on the organization and the nature of business you’ll work for.
There are lots of satisfying aspects to becoming a data scientist: a unique yet challenging career, the option to work for a diverse range of companies, engagement with interesting and unique subjects that offer a wide perspective, and work with the latest technologies, among others. On the flip side, there are some clear drawbacks too.
For instance, the technologies you’ll be using will evolve constantly, which means you may encounter an extreme variety of software and systems that you’ll have to keep learning.
However, as data science is required by almost every organization and business across the globe, and all of them increasingly rely on data to develop strategies, the need for data scientists will only grow, with demand increasing steadily in the near future.
How to Become a Data Scientist? was originally published in Becoming Human: Artificial Intelligence Magazine on Medium.