Google’s Doodles are often elaborate creations, but the upcoming Doodle to celebrate the birthday of composer Johann Sebastian Bach is positively baroque.

With the help of artificial intelligence, the interactive Doodle allows users to generate harmonies for any melody they input in the style of the famous 18th century composer. Google used machine learning to analyze the harmonies of more than 300 Bach compositions, replicating the patterns it found to fit the user’s suggested melody.

You can input short single-line melodies that are just two bars long and change the key of the music and its tempo. You can also download the resulting composition as a MIDI file or share it with friends. The Doodle also includes some hidden surprises. Click the mini amplifier to the right of the keyboard to upgrade the instruments to ‘80s synths.

(Image: Google. Bach and friends in their ‘80s regalia.)

The Doodle is a neat demonstration of both the possibilities and limitations of AI to generate music. As Anna Huang, a resident AI researcher with Google’s Magenta project who created the Doodle, explains to The Verge, the underlying AI model was trained on Bach’s chorale harmonizations, which are harmonizations of existing hymns.

This is particularly compliant data for AI to learn from, says Huang. “The Bach compositions in this dataset are highly structured, and the style is very concise, yet with rich harmonies, allowing machine learning models to learn more with less data.” It also helps that Bach is a composer of Baroque music: a highly formalized genre with consistent rules.

Huang, who studied music composition as an undergraduate and graduate student, says she’s always looking for new ways to compose. AI gives her a tool that can fill in the missing parts of a piece, giving her new material to sculpt. “As a result, you can try out ideas more quickly, and see if you encounter something that sparks,” she says.

She also notes that as with other musical AI projects, this technology is far from a perfect composer. One thing machine learning generators struggle with, for example, is creating long-term structure and coherence. “What is harder to replicate is Bach’s balance in simplicity and expressiveness and the longer arcs in his music,” says Huang.

The Bach-inspired Google Doodle will go live 12AM ET on Thursday, March 21st, and it will be available for 48 hours across 77 markets. You can read more about the technology behind it here.

Update March 20th 3:00PM ET: Updated with additional comment from Magenta AI resident Anna Huang.


Demis drew on his eclectic experiences as an AI researcher, neuroscientist and videogames designer to discuss what is happening at the cutting edge of AI research, including the recent historic AlphaGo match, and its future potential impact on fields such as science and healthcare, and how developing AI may help us better understand the human mind. (More)

Garry Kasparov and DeepMind’s CEO Demis Hassabis discuss Garry’s new book “Deep Thinking”, his match with Deep Blue and his thoughts on the future of AI in the world of chess. (More)

Demis Hassabis, world-renowned British neuroscientist, artificial intelligence (AI) researcher and the co-founder and CEO of DeepMind, explores the groundbreaking research driving the application of AI to scientific discovery. (More)

"Quantum Computing as a Service" - Matt Johnson & Randall Correll of QCWare (More)

Read "Quantum Engineering: A New Frontier" from "ENGenious" Issue No.14. http://eas.caltech.edu/engenious/14/quantum_engineering (More)

(12 Sep 2017) LEADIN
Audi is showing its futuristic autonomous, electric concept car at the Frankfurt Motor Show.
The Audi Aicon - the name is a play on AI for artificial intelligence and Icon - does not have a steering wheel or pedals and has a range of 700 kilometres according to the company.
STORYLINE
German car manufacturer Audi is showing off its latest concept of what a future autonomous, electric car will look like at the IAA motor show in Frankfurt.
The four-door Audi Aicon certainly looks like something from a science fiction movie.
The car has no steering wheel and no pedals. Instead, it has two chairs that can swivel around and a full entertainment system.
According to Audi, the Aicon can drive 700 to 800 kilometers per charge.
Rupert Stadler, Audi Chairman, says it shows what a car that has the highest level of autonomous driving capabilities, called level five, can look like.
Lou Ann Hammond, CEO of Driving the Nation automobile news website says Audi is at the very forefront of autonomous driving worldwide.
As the cars start driving themselves, the interior design will become more important, she says.
"Because what you are going to be doing is watching TV, doing video conferencing, listening to music," she says.
The IAA motor show in Frankfurt opens to the media today, Tuesday, and runs through September 24. (More)

The Future of the Audi Car | Human & Artificial Intelligence. Audi demonstrates artificial intelligence with automated driving, built on Audi AI technology and NVIDIA. At CES 2017, Audi and Nvidia announced that they would make AI-powered cars by 2020. (More)

European governments have been bringing the hammer down on tech in recent months, slapping record fines and stiff regulations on the largest imports out of Silicon Valley. Despite pleas from the world’s leading companies and Europe’s eroding trust in government, European citizens’ staunch support for regulation of new technologies points to an operating environment that is only getting tougher.

According to a roughly 25-page report recently published by a research arm out of Spain’s IE University, European citizens remain skeptical of tech disruption and want regulators to keep its operators on a tight leash, even at a cost to the economy.

The survey was led by the IE’s Center for the Governance of Change — an IE-hosted research institution focused on studying “the political, economic, and societal implications of the current technological revolution and advances solutions to overcome its unwanted effects.” The “European Tech Insights 2019” report surveyed roughly 2,600 adults from various demographics across seven countries (France, Germany, Ireland, Italy, Spain, The Netherlands, and the UK) to gauge ground-level opinions on ongoing tech disruption and how government should deal with it.

The report does its fair share of fear-mongering and some of its major conclusions come across as a bit more “clickbaity” than insightful. However, the survey’s more nuanced data and line of questioning around specific forms of regulation offer detailed insight into how the regulatory backdrop and operating environment for European tech may ultimately evolve.

Distractions

An institute established by Stanford University to address concerns that AI may not represent the whole of humanity is lacking in diversity.

The goal of the Institute for Human-Centered Artificial Intelligence is admirable, but the fact it consists primarily of white males brings into doubt its ability to ensure adequate representation.

Cybersecurity expert Chad Loder noticed that not a single member of Stanford’s new AI faculty was black. Tech site Gizmodo reached out to Stanford and the university quickly added Juliana Bidadanure, an assistant professor of philosophy.

Part of the institute’s problem could be the very thing it’s attempting to address – that, while improving, there’s still a lack of diversity in STEM-based careers. With revolutionary technologies such as AI, parts of society are in danger of being left behind.

The institute has backing from some big-hitters. People like Bill Gates and Gavin Newsom have pledged their support, endorsing the view that “creators and designers of AI must be broadly representative of humanity.”

Fighting Algorithmic Bias

Stanford isn’t the only institution fighting the good fight against bias in algorithms.

Earlier this week, AI News reported on the UK government’s launch of an investigation to determine the levels of bias in algorithms that could affect people’s lives.

Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous potential – such as policing, recruitment, and financial services – but would have a serious negative impact on lives if not implemented correctly.

Meanwhile, activists like Joy Buolamwini from the Algorithmic Justice League are doing their part to raise awareness of the dangers which bias in AI poses.

In a speech earlier this year, Buolamwini analysed current popular facial recognition algorithms and found serious disparities in accuracy – particularly when recognising black females.

Just imagine surveillance being used with these algorithms. Lighter skinned males would be recognised in most cases, but darker skinned females would be mistakenly stopped more often. We’re in serious danger of automating profiling.

Some efforts are being made to create AIs which detect unintentional bias in other algorithms – but it’s early days for such developments, and they will also need diverse creators.

However it’s tackled, algorithmic bias needs to be eliminated before AI is adopted in areas of society where it would have a negative impact on individuals.



In this episode of the Making Sense podcast, Sam Harris speaks with Nick Bostrom about the problem of existential risk. They discuss public goods, moral illusions, the asymmetry between happiness and suffering, utilitarianism, “the vulnerable world hypothesis,” the history of nuclear deterrence, the possible need for “turnkey totalitarianism,” whether we’re living in a computer simulation, the Doomsday Argument, the implications of extraterrestrial life, and other topics. (More)

Google is honouring the 334th birthday of famous German composer Johann Sebastian Bach with an AI-powered ‘doodle’ that mimics his musical style.

Users can input their own melody and the AI will create a harmony in the Baroque style of Bach.

“Bach was a humble man who attributed his success to divine inspiration and a strict work ethic,” wrote Google in a post. “He lived to see only a handful of his works published, but more than 1,000 that survived in manuscript form are now published and performed all over the world.”

Aside from being a fun way of passing time, the doodle also intends to educate users on some basic fundamentals about how machine learning works.

Google’s model for its first AI-powered doodle was trained on 330 of Bach’s compositions. It was developed by Anna Huang from Google Magenta, in partnership with the Google PAIR (People + AI Research) team which provided TensorFlow expertise to allow the experience to run in just a browser.

Huang built Coconet, the model which powers this AI doodle that can harmonise melodies or compose them from scratch.

In a technical post explaining how Coconet works, the Magenta team wrote:

“Coconet is trained to restore Bach’s music from fragments: we take a piece from Bach, randomly erase some notes, and ask the model to guess the missing notes from context.

The result is a versatile model of counterpoint that accepts arbitrarily incomplete scores as input and works out complete scores.

This setup covers a wide range of musical tasks, such as harmonizing melodies, creating smooth transitions, rewriting and elaborating existing music, and composing from scratch.”
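To make the training idea concrete, here is a rough sketch in Python of the fill-in setup the Magenta team describes (my own illustration with made-up shapes, not Magenta’s code): take a multi-voice pianoroll, randomly erase some notes, and ask a model to predict what was erased.

import numpy as np

# Hypothetical shapes: 4 chorale voices, 32 time steps, 46 possible pitches.
VOICES, STEPS, PITCHES = 4, 32, 46

def make_training_example(pianoroll, rng, erase_prob=0.5):
    # pianoroll: (VOICES, STEPS) array of pitch indices from a Bach chorale.
    # Returns the masked input plus the mask telling the model what to fill in.
    mask = rng.random((VOICES, STEPS)) < erase_prob    # True where notes are erased
    masked = np.where(mask, -1, pianoroll)             # -1 marks an unknown note
    return masked, mask

rng = np.random.default_rng(0)
chorale = rng.integers(0, PITCHES, size=(VOICES, STEPS))   # stand-in for real data
masked_input, mask = make_training_example(chorale, rng)
# Training then minimises cross-entropy between the model's predictions at the
# masked positions and Bach's original notes at those same positions.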

The doodle is available on Google’s homepage from Bach’s birthday (March 21st) through to the 22nd.

Creating AIs is difficult, though arguably easier than putting 334 candles on a birthday cake to honour the man himself. Well (classically-)played, Google.





Guiding our fingers while typing, enabling us to nimbly strike a matchstick, and inserting a key in a keyhole all rely on our sense of touch. It has been shown that the sense of touch is very important for dexterous manipulation in humans. Similarly, for many robotic manipulation tasks, vision alone may not be sufficient: often, it may be difficult to resolve subtle details such as the exact position of an edge, shear forces or surface textures at points of contact, and robotic arms and fingers can block the line of sight between a camera and its quarry. Augmenting robots with this crucial sense, however, remains a challenging task.

Our goal is to provide a framework for learning how to perform tactile servoing, which means precisely relocating an object based on tactile information. To provide our robot with tactile feedback, we utilize a custom-built tactile sensor, based on similar principles as the GelSight sensor developed at MIT. The sensor is composed of a deformable, elastomer-based gel, backlit by three colored LEDs, and provides high-resolution RGB images of contact at the gel surface. Compared to other sensors, this tactile sensor naturally provides geometric information in the form of rich visual information from which attributes such as force can be inferred. Previous work using similar sensors has leveraged this kind of tactile sensor on tasks such as learning how to grasp, improving success rates when grasping a variety of objects.
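As a rough illustration of how geometric information can be read out of such images, the sketch below (my own, not the authors’ pipeline) compares a frame against a no-contact reference frame and locates the deformed region; a tactile-servoing controller could then drive that contact location toward a target position on the sensor.

import numpy as np

def contact_mask(reference, frame, threshold=25):
    # reference, frame: HxWx3 uint8 images from a GelSight-style sensor (assumed)
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16)).sum(axis=-1)
    return diff > threshold              # True where the gel is being deformed

def contact_centroid(mask):
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                      # no contact detected
    return float(xs.mean()), float(ys.mean())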

According to Google's director of engineering and world leading futurologist Ray Kurzweil - by 2029 computers and robots will be much smarter than us. (More)

The UK government is launching an investigation to determine the levels of bias in algorithms that could affect people’s lives.

A browse through our ‘ethics’ category here on AI News will highlight the serious problem of bias in today’s algorithms. With AIs being increasingly used for decision-making, parts of society could be left behind.

Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous potential – such as policing, recruitment, and financial services – but would have a serious negative impact on lives if not implemented correctly.

Digital Secretary Jeremy Wright said:

“Technology is a force for good which has improved people’s lives but we must make sure it is developed in a safe and secure way.

Our Centre for Data Ethics and Innovation has been set up to help us achieve this aim and keep Britain at the forefront of technological development.

I’m pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services. I look forward to seeing the Centre’s recommendations to Government on any action we need to take to help make sure we maximise the benefits of these powerful technologies for society.”

Durham Police are currently using AI in a tool it calls the ‘Harm Assessment Risk Tool’ (HART). As you might guess, the AI assesses whether an individual is likely to cause further harm. The tool helps with decisions on whether an individual is eligible for deferred prosecution.

If an algorithm is more or less effective for individuals with certain characteristics than for others, serious problems would arise.

Roger Taylor, Chair of the CDEI, is expected to say during a Downing Street event:

“The Centre is focused on addressing the greatest challenges and opportunities posed by data driven technology. These are complex issues and we will need to take advantage of the expertise that exists across the UK and beyond. If we get this right, the UK can be the global leader in responsible innovation.

We want to work with organisations so they can maximise the benefits of data driven technology and use it to ensure the decisions they make are fair. As a first step we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people’s lives.

I am delighted that the Centre is today publishing its strategy setting out our priorities.”

In a 2010 study, researchers at NIST and the University of Texas at Dallas found (PDF) that algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.
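A disparity like the one described in that study is straightforward to measure once predictions are labelled by demographic group. A minimal sketch, using made-up data purely for illustration:

import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    # Recognition accuracy computed separately for each demographic group.
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {str(g): float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Toy example; an auditor would plug in real labels and model outputs here.
print(accuracy_by_group([1, 1, 0, 1, 0, 1],
                        [1, 1, 0, 0, 1, 1],
                        ["a", "a", "a", "b", "b", "b"]))
# A large gap between groups (here 1.0 vs roughly 0.33) signals bias.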

Similar worrying discrepancies were highlighted by Algorithmic Justice League founder Joy Buolamwini during a presentation at the World Economic Forum back in January. For her research, she analysed popular facial recognition algorithms.

These issues with bias in algorithms need to be addressed now before they are used for critical decision-making. The public is currently unconvinced AI will benefit humanity, and AI companies themselves are bracing for ‘reputational harm’ along the way.

Interim reports from the CDEI will be released in the summer with final reports set to be published early next year.



Facebook has given another update on measures it took and what more it’s doing in the wake of the livestreamed video of a gun massacre by a far right terrorist who killed 50 people in two mosques in Christchurch, New Zealand.

Earlier this week the company said the video of the slayings had been viewed less than 200 times during the livestream broadcast itself, and about 4,000 times before it was removed from Facebook, with the stream not reported to Facebook until 12 minutes after it had ended.

According to the company, none of the users who watched the killings unfold on its platform in real time reported the stream.

It also previously said it removed 1.5 million versions of the video from its site in the first 24 hours after the livestream, with 1.2M of those caught at the point of upload — meaning it failed to stop 300,000 uploads at that point. Though as we pointed out in our earlier report those stats are cherrypicked — and only represent the videos Facebook identified. We found other versions of the video still circulating on its platform 12 hours later.

In the wake of the livestreamed terror attack, Facebook has continued to face calls from world leaders to do more to make sure such content cannot be distributed by its platform.

The prime minister of New Zealand, Jacinda Ardern told media yesterday that the video “should not be distributed, available, able to be viewed”, dubbing it: “Horrendous.”

She confirmed Facebook had been in contact with her government but emphasized that in her view the company has not done enough.

She also later told the New Zealand parliament: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”

We asked Facebook for a response to Ardern’s call for online content platforms to accept publisher-level responsibility for the content they distribute. Its spokesman avoided the question — pointing instead to its latest piece of crisis PR which it titles: “A Further Update on New Zealand Terrorist Attack”.

Here it writes that “people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack”, saying it therefore “wanted to provide additional information from our review into how our products were used and how we can improve going forward”, before going on to reiterate many of the details it has previously put out.

Including that the massacre video was quickly shared to the 8chan message board by a user posting a link to a copy of the video on a file-sharing site. This was prior to Facebook itself being alerted to the video being broadcast on its platform.

It goes on to imply 8chan was a hub for broader sharing of the video — claiming that: “Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.”

So it’s clearly trying to make sure it’s not singled out by political leaders seeking policy responses to the challenge posed by online hate and terrorist content.

Among the further details it chooses to dwell on in the update is how the AIs it uses to aid the human content review process of flagged Facebook Live streams are in fact tuned to “detect and prioritize videos that are likely to contain suicidal or harmful acts”, with the AI pushing such videos to the top of human moderators’ content heaps, above all the other material they also need to look at.

Clearly “harmful acts” were involved in the New Zealand terrorist attack. Yet Facebook’s AI was unable to detect a massacre unfolding in real time. A mass killing involving an automatic weapon slipped right under the robot’s radar.

Facebook explains this by saying it’s because it does not have the training data to create an algorithm that understands it’s looking at mass murder unfolding in real time.

It also implies the task of training an AI to catch such a horrific scenario is exacerbated by the proliferation of videos of first person shooter videogames on online content platforms.

It writes: “[T]his particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”

The videogame element is a chilling detail to consider.

It suggests that a harmful real-life act that mimics a violent video game might just blend into the background, as far as AI moderation systems are concerned; invisible in a sea of innocuous, virtually violent content churned out by gamers. (Which in turn makes you wonder whether the Internet-steeped killer in Christchurch knew — or suspected — that filming the attack from a videogame-esque first person shooter perspective might offer a workaround to dupe Facebook’s imperfect AI watchdogs.)

Facebook’s post is doubly emphatic that AI is “not perfect” and is “never going to be perfect”.

“People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us,” it writes, reiterating yet again that it has ~30,000 people working in “safety and security”, about half of whom are doing the sweating hideous toil of content review.

This is, as we’ve said many times before, a fantastically tiny number of human moderators given the vast scale of content continually uploaded to Facebook’s 2.2BN+ user platform.

Moderating Facebook remains a hopeless task because so few humans are doing it.

Moreover AI can’t really help. (Later in the blog post Facebook also writes vaguely that there are “millions” of livestreams broadcast on its platform every day, saying that’s why adding a short broadcast delay — such as TV stations do — wouldn’t at all help catch inappropriate real-time content.)

At the same time Facebook’s update makes it clear how much its ‘safety and security’ systems rely on unpaid humans too: Aka Facebook users taking the time and mind to report harmful content.

Some might say that’s an excellent argument for a social media tax.

The fact Facebook did not get a single report of the Christchurch massacre livestream while the terrorist attack unfolded meant the content was not prioritized for “accelerated review” by its systems, which it explains prioritize reports attached to videos that are still being streamed — because “if there is real-world harm we have a better chance to alert first responders and try to get help on the ground”.

Though it also says it expanded its acceleration logic last year to “also cover videos that were very recently live, in the past few hours”.

But again it did so with a focus on suicide prevention — meaning the Christchurch video would only have been flagged for acceleration review in the hours after the stream ended if it had been reported as suicide content.

So the ‘problem’ is that Facebook’s systems don’t prioritize mass murder.

“In [the first] report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures,” it writes, adding it’s “learning from this” and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

No shit.

Facebook also discusses its failure to stop versions of the massacre video from resurfacing on its platform, having been — as it tells it — “so effective” at preventing the spread of propaganda from terrorist organizations like ISIS with the use of image and video matching tech.

It claims  its tech was outfoxed in this case by “bad actors” creating many different edited versions of the video to try to thwart filters, as well as by the various ways “a broader set of people distributed the video and unintentionally made it harder to match copies”.

So, essentially, the ‘virality’ of the awful event created too many versions of the video for Facebook’s matching tech to cope.

“Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats,” it writes, in what reads like another attempt to spread blame for the amplification role that its 2.2BN+ user platform plays.

In all Facebook says it found and blocked more than 800 visually-distinct variants of the video that were circulating on its platform.

It reveals it resorted to using audio matching technology to try to detect videos that had been visually altered but had the same soundtrack. And again claims it’s trying to learn and come up with better techniques for blocking content that’s being re-shared widely by individuals as well as being rebroadcast by mainstream media. So any kind of major news event, basically.
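Facebook has not published the internals of these systems, but the general family of techniques it is describing, perceptual hashes of frames (and, analogously, audio fingerprints) compared by Hamming distance, can be sketched in a few lines. The difference hash below is an illustration only, not Facebook’s method; it also shows why re-cut or re-recorded copies evade matching, since the hash changes when the pixels do.

import numpy as np

def dhash(gray_frame, size=8):
    # Perceptual 'difference hash' of a grayscale frame (2-D numpy array).
    h, w = gray_frame.shape
    ys = np.arange(size) * h // size                 # crude downsampling grid
    xs = np.arange(size + 1) * w // (size + 1)
    small = gray_frame[np.ix_(ys, xs)].astype(np.int32)
    return (small[:, 1:] > small[:, :-1]).flatten()  # 64 bits per frame

def hamming(a, b):
    return int(np.count_nonzero(a != b))   # small distance -> likely the same frame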

In a section on next steps, Facebook says improving its matching technology to prevent the spread of inappropriate viral videos is its priority.

But audio matching clearly won’t help if malicious re-sharers re-edit the visuals and switch the soundtrack as well in future.

It also concedes it needs to be able to react faster “to this kind of content on a live streamed video” — though it has no firm fixes to offer there either, saying only that it will explore “whether and how AI can be used for these cases, and how to get to user reports faster”.

Another priority it claims among its “next steps” is fighting “hate speech of all kinds on our platform”, saying this includes more than 200 white supremacist organizations globally “whose content we are removing through proactive detection technology”.

It’s glossing over plenty of criticism on that front too though — including research that suggests banned far right hate preachers are easily able to evade detection on its platform. Plus its own foot-dragging on shutting down far right extremists. (Facebook only finally banned one infamous UK far right activist last month, for example.)

In its last PR sop, Facebook says it’s committed to expanding its industry collaboration to tackle hate speech via the Global Internet Forum to Counter Terrorism (GIFCT), which formed in 2017 as platforms were being squeezed by politicians to scrub ISIS content — in a collective attempt to stave off tighter regulation.

“We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis,” Facebook writes now, offering more vague experiments as politicians call for content responsibility.

Throughout 2018, we've brought you the world's leading thinkers on artificial intelligence.
Now we're calling on you to pose your questions to our panel of experts, to find out what challenges and opportunities you think AI will present us with in the next decade. Will AI affect our jobs? What risks might AI pose to society? Can we train AIs to make moral and ethical decisions? (More)

We're beginning to see more and more jobs being performed by machines, even creative tasks like writing music or painting can now be carried out by a computer. (More)

Have you ever wondered what AI is, why it is so difficult to grasp, and how one could define AI? Then you should watch this 5 minutes video. I hope you will find it useful. (More)

Time for a summary of this week in AI. (More)

Today at Nvidia GTC 2019, the company unveiled a stunning image creator. Using generative adversarial networks, users of the software are able, with just a few clicks, to sketch images that are nearly photorealistic. The software will instantly turn a couple of lines into a gorgeous mountaintop sunset. This is MS Paint for the AI age.

Called GauGAN, the software is just a demonstration of what’s possible with Nvidia’s neural network platforms. It’s designed to compose an image the way a human would paint one, with the goal of taking a sketch and turning it into a photorealistic image in seconds. In an early demo, it seems to work as advertised.

GauGAN has three tools: a paint bucket, pen and pencil. At the bottom of the screen is a series of objects. Select the cloud object and draw a line with the pencil, and the software will produce a wisp of photorealistic clouds. But these are not image stamps. GauGAN produces results unique to the input. Draw a circle and fill it with the paint bucket and the software will make puffy summer clouds.

Users can use the input tools to draw the shape of a tree and it will produce a tree. Draw a straight line and it will produce a bare trunk. Draw a bulb at the top and the software will fill it in with leaves producing a full tree.

GauGAN is also multimodal. If two users create the same sketch with the same settings, random numbers built into the project ensure that the software creates different results.
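Nvidia has not released the code behind the demo, but the conditioning interface the article describes can be sketched: the user’s sketch becomes a one-hot label map, and a random latent vector is what makes the output multimodal. Everything below, including the class count, is a hypothetical illustration.

import numpy as np

NUM_CLASSES = 32   # hypothetical number of paintable objects (cloud, tree, water, ...)

def generator_input(label_map, z_dim=64, rng=None):
    # label_map: HxW array of integer object ids drawn by the user.
    rng = np.random.default_rng() if rng is None else rng
    h, w = label_map.shape
    one_hot = np.zeros((NUM_CLASSES, h, w), dtype=np.float32)
    one_hot[label_map, np.arange(h)[:, None], np.arange(w)[None, :]] = 1.0
    z = rng.standard_normal(z_dim).astype(np.float32)   # new z, new rendering
    return one_hot, z

A GAN generator trained on photos would consume the one-hot map plus z and output an RGB image; two users drawing the same sketch get different z values and therefore different results.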

In order to produce real-time results, GauGAN has to run on a Tensor computing platform. Nvidia demonstrated this software on an RTX Titan GPU platform, which allowed it to produce results in real time. The operator of the demo was able to draw a line and the software instantly produced results. However, Bryan Catanzaro, VP of Applied Deep Learning Research, stated that with some modifications, GauGAN can run on nearly any platform, including CPUs, though the results might take a few seconds to display.

In the demo, the boundaries between objects are not perfect, and the team behind the project says this will improve. There is a slight line where two objects touch. Nvidia calls the results photorealistic, but under scrutiny they don’t quite stand up. Neural networks still struggle with the gap between the objects they were trained on and what they are asked to produce. This project hopes to decrease that gap.

Nvidia turned to 1 million images on Flickr to train the neural network. Most came from Flickr’s Creative Commons, and Catanzaro said the company only uses images with permission. The company says this program can synthesize hundreds of thousands of objects and their relation to other objects in the real world. In GauGAN, change the season and the leaves will disappear from the branches. Or if there’s a pond in front of a tree, the tree will be reflected in the water.

Nvidia will release the white paper today. Catanzaro noted that it was previously accepted to CVPR 2019.

Catanzaro hopes this software will be available on Nvidia’s new AI Playground, but says there is a bit of work the company needs to do in order to make that happen. He sees tools like this being used in video games to create more immersive environments, but notes Nvidia does not directly build software to do so.

It’s easy to bemoan the ease with which this software could be used to produce inauthentic images for nefarious purposes. And Catanzaro agrees this is an important topic, noting that it’s bigger than one project and company. “We care about this a lot because we want to make the world a better place,” he said, adding that this is a trust issue instead of a technology issue and that we, as a society, must deal with it as such.

Even in this limited demo, it’s clear that software built around these abilities would appeal to everyone from a video game designer to architects to casual gamers. The company does not have any plans to release it commercially, but could soon release a public trial to let anyone use the software.

On the heels of Hyundai becoming the latest investor in Ola, today another key deal was revealed that underscores Hyundai’s ambitions in next-generation automotive services. Yandex, the Russian search giant that has been working on self-driving car technology, has inked a partnership with Hyundai to develop software and hardware for autonomous car systems.

While companies like Google, Apple and Baidu have been working on different aspects of connected cars with automotive companies — covering both infotainment integrations as well as some starts in self-driving technology — this is Yandex’s first partnership with a carmaker, and specifically, an OEM.

Yandex said its memorandum of understanding covers working with Hyundai Mobis, the car giant’s OEM parts and service division, where the plan is “to create a self-driving platform that can be used by any car manufacturer or taxi fleet” that will cover both a prototype as well as parts for other car-makers. Mobis supplies Hyundai as well as its partly-owned Kia and fully-owned Genesis subsidiaries, along with other automakers, so those are likely the first vehicles that will see the fruits of this deal.

“This is our first partnership, and a clear validation of the intensive development of our self-driving platform. We have already performed thousands of rides in our autonomous taxi service fulfilled without a driver in the driver’s seat,” Dmitry Polishchuk, who heads up Yandex’s self-driving car efforts, said to TechCrunch in an email. “We are excited to combine the experience of Hyundai Mobis in the automotive industry with Yandex’s technological achievements. This should help us to accelerate the pace of self-driving tech development.” In terms of future partnerships, Yandex notes that the agreement is “not exclusive, and we are open to work with other partners.”

The financial terms of the deal are not being disclosed, a Yandex spokesperson told TechCrunch. To give some context, Hyundai Motors is the third-largest automotive company in the world, and it describes Mobis as the sixth-largest OEM. In addition to the $300 million stake it announced earlier today in India’s ride-sharing upstart Ola, it’s forged financial and strategic partnerships with a string of other companies building technology for autonomous systems, including WayRay, SoundHound, and Aurora.

Yandex, meanwhile, has been working on self-driving car tech since 2017, equipping Toyota models for a series of pilots in closed-campus environments in Russia, Tel Aviv and most recently Las Vegas, Nevada (during the CES show, where cars-as-the-latest-hardware has become a dominant theme). Yandex said that its pilots so far have been so-called “robotaxi” efforts: that is, there are safety engineers sitting in the driver’s passenger seat, but the cars have been operating autonomously otherwise.

Yandex — similar to Baidu in China and Google, well, globally — initially made its name in search but has diversified into a variety of areas over the years, tapping R&D in machine learning and other technologies to move into maps and ride-sharing services, among other related areas.

Yandex.Taxi is now active in 15 countries — Russia, Armenia, Belarus, Georgia, Kazakhstan, Israel, the Ivory Coast, Kyrgyzstan, Latvia, Lithuania, Moldova, Serbia, Uzbekistan, Finland, and Estonia — and that service is one obvious application for this partnership. Similar to Uber (which handed off some operations to Yandex in 2017), Yandex is looking at self-driving technology — which is part of the bigger Yandex.Taxi operation — as one way of expanding its fleet in the years to come.

While Hyundai, like other automakers, has been chipping away at self-driving through multiple partnerships with third parties, this deal breaks new ground for Yandex. Some have pigeonholed the company as “Russia’s Google”, and it has for years been looking for ways to expand its profile and reach into more countries outside its home market. Self-driving cars are a ripe opportunity for Yandex, since the field is proving to be a very complex area that will likely involve a number of players collaborating (automakers, AI specialists, mapping companies, component manufacturers, nano-energy experts, network operators and more) in the bigger effort to reach level-5 fully autonomous systems.

“Our self-driving technologies are unique and have already proven their scalability. Yandex’s self-driving cars have been successfully driving on the streets of Moscow, Tel Aviv and Las Vegas, which means that the fleet can be expanded to drive anywhere,” said Arkady Volozh, CEO of Yandex. “It took us just two years to go from the first basic tests to a full-fledged public robotaxi service. Now, thanks to our agreement with Hyundai Mobis, we will be able to move even faster.”

Sidenote: I asked Yandex why Hyundai hasn’t included a quote or issued its release, and I was told that Hyundai’s own statements will be coming after the official signing ceremony, which is happening later.


Artificial Intelligence (AI) and machine learning are increasingly being used across healthcare. From diagnostics to targeted treatments, there is emerging evidence of clinical benefit. However, challenges remain, not least the lack of robust governance, regulations, and standards, to ensure applications are safe, effective, and quality assured.

The recent publication by the BSI and the Association for the Advancement of Medical Instrumentation (AAMI) of The emergence of artificial intelligence and machine learning algorithms in healthcare: Recommendations to support governance and regulation marks significant progress on this front.

The report was commissioned by the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) and includes input from the BSI and AAMI, the US Food and Drug Administration (FDA), and other stakeholders.


Why it matters

AI describes a set of advanced technologies that enable machines to carry out highly complex tasks effectively — tasks that require the equivalent of or more than the intelligence of a person performing the task.

As I’ve noted in a previous article on the regulation of medical devices, there are risks associated with the use of AI within the health context. These include:

  • After system development, will the system continue to learn and refine its internal model? How do we regulate medical devices that ‘learn’?;
  • To what extent is human decision making involved? Does the system make suggestions that we can disagree with, or does the system make decisions on its own?

And at the heart of this is the concern that AI and machine learning can and does get it wrong. Outside the context of health, there are some well known examples of this — Tay, Microsoft’s Twitter bot, that went from being friendly to racist and sexist in less than 24 hours, and the case last year of a woman killed by an experimental Uber self-driving car in the US.

Why AI is different

There is a strong case for the introduction of new standards, regulations, and governance frameworks, for AI in health.

First, AI technologies introduce a level of autonomy. In this there are particular challenges in areas where AI solutions potentially provide unsupervised patient care (p.5), for example with monitoring and adjustments of medications for people with long term health conditions.

Second, outputs can change over time in response to new data as is the case with ‘adaptive’ algorithms. This means there is a real need for effective supervision of continuous learning systems. At the heart of this is the question: how do we regulate devices that learn?

The UK’s National Institute for Health and Care Excellence (NICE) recently published its Evidence Standards Framework for Digital Health Technologies. These standards differentiate between AI using fixed algorithms, i.e. where outputs do not automatically change over time, and AI using adaptive algorithms, i.e. where algorithms automatically and continually update over time, meaning that outputs will also change.

And the distinction between ‘fixed’ and ‘adaptive’ algorithms is an important one. While the NICE Evidence Standards may be the most appropriate to use in the case of fixed algorithms, for adaptive algorithms, they make clear that separate standards will need to apply.
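In code, the distinction is roughly between a model that is trained once and then frozen, and one that keeps calling an update step as new data arrives. A minimal sketch with scikit-learn and toy data, purely to illustrate the two regimes rather than anything from the NICE framework:

import numpy as np
from sklearn.linear_model import SGDClassifier

X_train = np.random.rand(100, 5)           # toy feature matrix
y_train = np.random.randint(0, 2, 100)     # toy binary labels

# 'Fixed' algorithm: trained once, then frozen; outputs do not change after deployment.
fixed_model = SGDClassifier().fit(X_train, y_train)

# 'Adaptive' algorithm: keeps updating as new cases arrive, so its output
# for the same input can drift over time and needs ongoing supervision.
adaptive_model = SGDClassifier()
adaptive_model.partial_fit(X_train, y_train, classes=np.array([0, 1]))
for _ in range(10):                        # stand-in for a stream of new cases
    X_new = np.random.rand(20, 5)
    y_new = np.random.randint(0, 2, 20)
    adaptive_model.partial_fit(X_new, y_new)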

Important here will be how the principles outlined in the UK government’s Code of conduct for data-driven health and care technology move from principles to real-world standards, regulations, and governance. Showing ‘what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision’ will be the key here (Principle 7). I suspect that, going forward, there will be requirements to perform regular audits of the metrics and impacts of algorithms while they are in use in given use cases. This may also become the case with ‘fixed’ algorithms if there is any change of context.*

And the third point concerns explainability and understanding of how outputs and decisions have been reached. This is significant. A real challenge with algorithms is that it can be difficult or impossible to understand the underlying logic of outputs. While under GDPR there are restrictions on the use of automated decision making with regards to individuals and profiling, the scope of this is yet to be tested [see Rights related to automatic decision making including profiling].

This point on explainability and understanding is important to both ensure systems are safe and effective, and to ensure public and professional confidence and trust.

The recommendations

The report includes a number of recommendations. These include:

  1. Create an international task force to provide oversight for AI in healthcare.
  2. Undertake mapping to review the current standards landscape and identify opportunities.
  3. Develop a proposal for a terminology and categorization standard for AI in healthcare.
  4. Develop a proposal for guidance to cover validation processes.
  5. Create a communications and engagement plan.

All these recommendations make sense. I particularly welcome the comms and engagement plan as one of the key areas of work. This is likely to include a wide range of stakeholders: patients and the public; health and care professionals; policy makers; data scientists and so on. These ongoing conversations will be essential for ensuring confidence and trust in AI systems.

Next Steps

The AI policy and regulatory environment in health is fast moving and complex. Over the next few months, BSI and AAMI intend to publish draft plans for comment on how they intend to implement these recommendations. This is something I very much look forward to reading.

*With thanks to Dr Allison Gardner for clarifying this for me.

Keyah Consulting helps clients navigate complex and fast-moving policy environments. I provide policy analysis and strategy for innovative public sector and commercial clients.



Unboxing and installation of Cambridge Consultants' NVIDIA DGX-1 Deep Learning Supercomputer (More)

On April 2, 2019 at the Galvanize Campus in San Francisco, California, Data.World will host an Afternoon of Data to raise questions and brainstorm insights surrounding the very prominent concept of data literacy in today’s society.

With the event two weeks out, its speakers, all prominent figures in the data space, weighed in on some widespread issues that surface as data literacy grows across industries.

What are the best hacks for streamlining datasets?

The term “dirty data” may sound catchy, but disorganized datasets can cause a lot of problems within a company. We asked our speakers to weigh in on how we can consistently streamline datasets for efficient use.


Ben Jones, Founder and CEO of Data Literacy, believes that being overly obsessed with datasets’ levels of perfection can lead to operational inefficiency. Fortunately, there are ways to minimize disorganization while also knowing where to stop. “What companies can’t afford to do is stop using data until it’s 100% clean, organized, and in one place,” said Jones. “I think it helps to start by taking stock of the existing data landscape and identifying a roadmap for building a better one. The order of the steps on said roadmap depends on three factors: cost and effort to get it done, business value resulting from the change, and any technical dependencies.”

Pallav Agrawal, Director of Data Science at Levi Strauss & Co., also views data governance as a constant weighing of priorities. “A good place to start is by speaking with the hands-on employees who are using data assets to learn how critical each asset is for them to perform their daily functions, and then tally the results to obtain a priority ranking of all assets,” he said.

Finally, Lisa Green, Executive Director of Solve For Good, left us with her best set of steps to make sure that streamlining happens in line with company goals:

  1. Define what data assets you have.
  2. Survey data practitioners and data consumers in all departments on how they use the data.
  3. Design a test catalog with a small subset of your data that represents all the types of data you have in an accurate proportionality.
  4. Evaluate and iterate on the test catalog.
  5. Share the latest version of the test catalog with various stakeholders within the company to get buy-in or feedback on the need for further iteration.
  6. Scale up the final version of the test catalog to include all data assets.

How Can Companies Procure The Right Data People?

Data strategies are only as strong as the people that apply them. And the unique combination of math skills, programming knowledge and business acumen required can be tough to find in potential job applicants. “Where many hiring managers fall down is in assessing candidates’ business skills and whether they can use them to tie all three skill sets together,” explained Green. Her solution involves crafting a hands-on interview process: “Choose an interview method that evaluates how well candidates understand a specific business problem faced by your company, how they would translate it to a data problem, and how they would communicate the results of their data solution to their non-technical colleagues within the company.”

Jones believes that an employee’s success has less to do with skills and more to do with culture fit. “Often hiring managers make the mistake of focusing narrowly on the knowledge or skills of a candidate. While it’s necessary to define these traits, it isn’t sufficient,” he said. “It’s also important to define the attitudes and behaviors that the person needs to have in order to thrive in the current environment and the one they’re hoping to build.”

Agrawal suggested that social media can be a powerful tool in cultivating a competitive talent pool. “Look through at least a few dozen LinkedIn profiles of data people and determine what types of projects and experience in a person’s profile excite you, and highlight it,” he advised. “Once you have a significant number of highlighted fields, find common patterns and use those along with the simple English descriptions in your job requirements.”

Noren sees the optimal career trajectories for budding data scientists as changing over time, with a background in engineering no longer being the ideal route into data science. “Meaningful change towards becoming a data-driven organization has to come from key leadership,” she said. “Avoid hiring expensive data scientists, machine learning engineers, and data infrastructure engineers if they won’t be working in a leadership structure that truly understands their value and how to leverage their assets.”

How can those currently outside the data science industry get the experience they need to be marketable?

The expectations surrounding data delivery formed so quickly that companies have no choice but to try to satisfy demand despite a shortage of existing data scientists. A viable solution would be to fashion data scientists out of existing employees through teaching, as data is a language that more and more people need to speak to function in the modern workforce.

Fortunately, Noren believes that there are many ways to go about acquiring data knowledge. “For young data scientists, attending hackathons and competing in Kaggle competitions as a member of a cross-functional team is one way to go. More companies are offering internships. And for the incredibly self-motivated, preparing a project on a question relevant to the field one intends to enter and then using data science to address it and data visualization to present it would also be enough to get attention from some employers,” she said.

Agrawal offered up a dynamic strategy for breaking into the data space while still advancing one’s current career: “If an individual wants to be a data scientist, but is having difficulty finding a data science job due to lack of experience, then they should consider becoming part of a team that solves problems through the use of data such as product manager, data engineer, project manager, DevOps engineer or data analyst,” he suggested. “As they work with data scientists, they will learn how to think like one, while working in their personal time to build a portfolio of projects that demonstrate data-driven insight generation and problem-solving skills.”

Jones’ closing thought on acquiring data experience expanded beyond the scope of formal career paths. “There are so many great ways to build skills and gain experience in data these days. I highly recommend joining active data communities on Twitter, Slack, LinkedIn and other places,” he said. “There are always interesting projects and challenges going on, like the Makeover Monday project that asks participants to remake a chart each week, or Viz for Social Good that matches data workers with non-profits that are in need of talent. I think it’s very narrow-minded to think we can only build or apply our data skills in the context of our career. Why stop there?”

Feeling inspired by all things data?

The discussion continues with these featured speakers and more at Afternoon of Data in San Francisco on April 2nd.

Get Tickets Here



Dr. Demis Hassabis is the Co-Founder and CEO of DeepMind, the world’s leading General Artificial Intelligence (AI) company, which was acquired by Google in 2014 in their largest ever European acquisition. Demis will draw on his eclectic experiences as an AI researcher, neuroscientist and video games designer to discuss what is happening at the cutting edge of AI research, including the recent historic AlphaGo match, and its future potential impact on fields such as science and healthcare, and how developing AI may help us better understand the human mind. (More)

In recent years, advances in AI have produced algorithms for everything from image recognition to instantaneous translation. But when it comes to applying these advances in the real world, we’re only just getting started. A new product from Nvidia announced today at GTC — a $99 AI computer called the Jetson Nano — should help speed that process.

The Nano is the latest in Nvidia’s line of Jetson embedded computing boards, used to provide the brains for robots and other AI-powered devices. Plug one of these into your latest creation, and it’ll be able to handle tasks like object recognition and autonomous navigation without relying on cloud processing power.

This sort of setup is known as edge computing, and because it means that the data being processed from cameras and microphones never leaves the device, the end result is usually hardware that is faster, more reliable, and more secure. So everybody wins. Past Jetson boards have been used to power a range of devices, from shelf-scanning robots made for Lowe’s to Skydio’s autonomous drones. But the Nano is aiming even smaller.

The Jetson Nano devkit being attached to Nvidia’s new open-source robotics kit, named JetBot.

Nvidia is launching a developer kit of the Nano targeting “embedded designers, researchers, and DIY makers” for $99, and production-ready modules for commercial companies for $129 (with a minimum buy of 1,000 modules).

The company also unveiled a fun DIY project for any advanced makers: an open-source $250 autonomous robotics kit named JetBot (above). It includes a Jetson Nano along with a robot chassis, battery pack, and motors, allowing users to build their own self-driving robot.

With the $99 devkit you get 472 gigaflops of computing powered by a quad-core ARM A57 processor, 128-core Nvidia Maxwell GPU, and 4GB of LPDDR4 RAM. The Nano also supports a range of popular AI frameworks, including TensorFlow, PyTorch, Caffe, Keras, and MXNet, so most algorithms will be pretty much plug-and-play. And there’s the usual brace of ports and interfaces, including USB-A and B, gigabit Ethernet, and support for microSD storage.
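None of the following is Nvidia-specific, but a quick sanity check of that plug-and-play claim on any CUDA-capable board looks roughly like this with PyTorch, one of the listed frameworks (a sketch, assuming PyTorch and torchvision are installed):

import torch
import torchvision

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.resnet18(pretrained=True).to(device).eval()

dummy = torch.randn(1, 3, 224, 224, device=device)   # stand-in for a camera frame
with torch.no_grad():
    scores = model(dummy)
print(scores.argmax(dim=1))   # index of the predicted ImageNet class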

Nvidia says it hopes the Nano’s price point should open up AI hardware development to new users. “We expect a lot of the maker community that wants to get into AI, but has been unable to in the past, the Jetson Nano will allow them to do that,” Nvidia’s VP and GM of autonomous machines, Deepu Talla, told reporters at a briefing.

It’s certainly true that the Nano is competitively priced, though it’s not unique in that. Intel, for example, sells its Neural Compute Stick for $79, while Google recently unveiled two similar devices under its Coral brand: a $150 devkit and $75 USB accelerator. But if this shows anything, it’s that Nvidia is entering a fertile market. Let’s see what AI-powered creations start to grow.

Demis Hassabis is the founder and CEO of DeepMind, a neuroscience-inspired AI company, bought by Google in Jan 2014 in their largest European acquisition to date. He leads projects including the development of AlphaGo, the first program to ever beat a professional player at the game of Go. (More)

Learn more about D-Wave: http://geni.us/9B99IE (More)

In the last article, I described the Neural Network and gave you a practical approach to training your own Neural Network using a framework (Keras). Today’s article will be short, as I will not be diving into the maths behind neural networks but will instead show how we create our own Neural Network from scratch.

We will be using the MNIST dataset. We will use Keras only to import the dataset; everything else will be written using numpy.
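For reference, loading and reshaping the data the way the network below expects (flattened 784-pixel inputs, one-hot 10-class targets) can be done as follows; treat this as a sketch of the setup rather than the author’s exact code.

import numpy as np
from keras.datasets import mnist
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype(np.float32) / 255.0   # flatten 28x28 images
x_test = x_test.reshape(-1, 784).astype(np.float32) / 255.0
y_train = to_categorical(y_train, 10)   # one-hot targets for the 10 digit classes
y_test = to_categorical(y_test, 10)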

The backpropagation

The toughest part you might face in the whole code is understanding how backpropagation works and the logic behind it.

Let me explain something that is very simple and might be very easy to understand. Let’s say that you want to minimize some variable ‘y’ with respect to a variable ‘x’. So what do we do?

Yes, you got it right: we differentiate and apply the condition dy/dx = 0.


Now, this is similar to what happens in backpropagation too. After the feed-forward pass we have a loss function which needs to be minimized with respect to the weight vectors or matrices of each layer. So basically what we have to do is find dc/dw(n), ..., dc/dw(1), multiply each by the learning rate, and finally subtract it from the corresponding ‘w’ after each epoch.

If this is so easy, why not first try it on your own for a single layer and then finally look at my code?

We will be covering three layers of Neural Network and will be constructing it from scratch.

The FeedForward :

As I explained earlier in my post on Neural Networks, we have a linear function whose output is given non-linearity with the help of an activation function such as ReLU, Sigmoid, Softmax, tanh, and many more.

Our feedforward equation is given by:

y = wx + b, where y is the output, w are the weights, and b is the bias (whose values we neglect for now).

So if we have a three-layer neural network we have:

# Making of the feed-forward function
import numpy as np

def sig(s):
    return 1 / (1 + np.exp(-1 * s))          # sigmoid activation

def sig_der(s):
    return s * (1 - s)                       # derivative, given s is already a sigmoid output

class NN:
    def __init__(self, x, y):
        self.x = x                           # input images, shape (samples, 784)
        self.y = y                           # one-hot labels, shape (samples, 10)
        self.n = 64                          # no. of neurons in the middle layers
        self.input_dim = 784
        self.out_dim = 10

        self.w1 = np.random.randn(784, self.n)
        self.w2 = np.random.randn(self.n, self.n)
        self.w3 = np.random.randn(self.n, 10)

    def feedforward(self):
        self.z1 = np.dot(self.x, self.w1)
        self.a1 = sig(self.z1)
        self.z2 = np.dot(self.a1, self.w2)
        self.a2 = sig(self.z2)
        self.z3 = np.dot(self.a2, self.w3)
        self.a3 = sig(self.z3)               # network output, shape (samples, 10)

Till now we have built our normal feedforward network, which requires minimal thinking. Now, let’s start off with the hard part: THE BACKPROPAGATION.

Code the Hard BACKPROP

Basically, what the neural network does is first pass a random set of values through the layers, predict a value, compare it with the actual image, and get the error. Now the task is to minimize this error, and the way we do it is with the basic chain rule of derivatives.

dc/dw3 = dc/da3 * da3/dz3 * dz3/dw3

As we are doing a classification problem, we will be using cross-entropy for this.

def cross_entropy(real, pred):
    return (pred - real) / real.shape[0]    # divide by the number of samples

so dc/da3 * da3/dz3 = a3 - y
and dz3/dw3 = a2

dc/dw2 = dc/da3 * da3/dz3 * dz3/da2 * da2/dz2 * dz2/dw2
(The equation above simply follows from the chain rule.)
dc/dw1 = dc/da3 * da3/dz3 * dz3/da2 * da2/dz2 * dz2/da1 * da1/dz1 * dz1/dw1

The only thing you have to keep track of is the matrix sizes; if those are handled carefully, your output will be perfect.

I will not be writing the code for backpropagation here, but I have provided enough information for you to write it yourself. Write it, and you can confirm it against my repository:

dubesar/Mnist-Character-Prediction-Tenserflow-Deep-neural-Network
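If you would like to check your attempt, here is one possible backward pass that follows the chain-rule factors above. It is my sketch written against the NN class defined earlier, not the code from the repository; calling feedforward() and then backprop() in a loop over epochs trains the network.

    def backprop(self, lr=0.5):
        m = self.x.shape[0]                           # number of training samples
        dz3 = (self.a3 - self.y) / m                  # dc/da3 * da3/dz3
        dw3 = np.dot(self.a2.T, dz3)                  # dz3/dw3 = a2
        dz2 = np.dot(dz3, self.w3.T) * sig_der(self.a2)
        dw2 = np.dot(self.a1.T, dz2)
        dz1 = np.dot(dz2, self.w2.T) * sig_der(self.a1)
        dw1 = np.dot(self.x.T, dz1)
        self.w3 -= lr * dw3                           # gradient-descent updates
        self.w2 -= lr * dw2
        self.w1 -= lr * dw1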

And finally, if you have more interest in neural networks, you can try out a similar problem on the Dogs vs Cats dataset and see the accuracy you get. In the next article, I will be starting off with CNNs (Convolutional Neural Networks).

We will also write Convolutional Neural Networks from scratch as well as with Keras.

Follow my articles from https://medium.com/@dubeysarvesh5525

