Artificial intelligence and future mobility are central topics at the 2018 South by Southwest (SXSW) festival in Austin. Mercedes-Benz brought the Concept EQ, the electric SUV of the future that will soon go into series production. The car demonstrates what future mobility might look like. #switchtoEQ (More)

Superintelligence, as defined by Oxford philosopher Nick Bostrom, refers to an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. Will our computers be super-intelligent? Will they understand the soft nuances of human interaction? In this session, Bostrom will awaken us to the limitations of A.I., and whether they can be overcome. (More)

New computer applications are changing the ways in which we live and work, from travel to manufacturing. VOA’s Mike O’Sullivan spoke with developers at a . (More)

Information and registration: http://www.usievents.com
Read the report: http://bit.ly/2tfUyyV (More)

In computer science, we usually use humans as the role model for developing machine learning algorithms and concepts. Interestingly, our efforts to develop machine learning have also led to better self-understanding.

Illustration by Dustin Yellin

But what if we use machine learning concepts to explain some of our social behaviors?

In life, you have a limited number of observations. The observations could be scientific, social or of any other kind. For the sake of this article, let's focus on social observations.

You see that someone is a hard-working person. Someone else commits a crime. Those are a few examples of social observations. Your social observations are the training data for your social model. Your future observations are your test data. What about validation data? As humans, we apparently don't have access to this type of data, so we essentially use our test data as validation data too.
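To make the analogy concrete, here is a minimal sketch of how these three datasets are usually carved out in code. It uses scikit-learn purely as an illustration; the dataset and split ratios are arbitrary.

#A minimal sketch of train / validation / test splits (illustrative only; ratios are arbitrary)
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
X, y = load_iris(return_X_y=True)
#Hold out 20% as the final test set ("future observations")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
#Carve a validation set out of the remaining data (the set humans rarely get to keep aside)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)
print(len(X_train), len(X_val), len(X_test))  #roughly a 60/20/20 split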

Your experience in life, or simply your age, could be interpreted as your training epochs. As time goes by, you collect data and your brain starts fitting a model to those observations.

Your initializer function is probably your family, friends, and environment. When you were born, your genes were your only initializer. But as soon as you are born, your environment, family, and friends start shaping your mind and beliefs.

Your education, knowledge, and judgment are your optimizer. If you know more about the social sciences, you can find better models. If you are kind and passionate about people, you try to fit a better model. If you have a bad temper, you probably fit a crueler model to your social observations.
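To connect these terms back to actual machine-learning code, here is a minimal, purely illustrative sketch (written with Keras, on random toy data) of where the initializer, the optimizer, and the training epochs show up:

#Illustrative Keras sketch: initializer, optimizer, and epochs (random toy data)
import numpy as np
from tensorflow import keras
X = np.random.rand(100, 4)        #100 "observations", each with 4 features
y = np.random.randint(0, 2, 100)  #binary "social judgements"
model = keras.Sequential([
    #kernel_initializer sets the starting weights -- the "family and environment"
    keras.layers.Dense(8, activation='relu', kernel_initializer='glorot_uniform', input_shape=(4,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
#The optimizer decides how the model gets updated -- the "education and judgment"
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
#Epochs are repeated passes over the observations -- "experience, or simply age"
model.fit(X, y, epochs=10, verbose=0)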

As time goes by, you observe society and people. You start fitting simple models to your observations. If you have limited social interactions, your models remain simple, since they explain your limited observations well enough. At this stage, you form some stereotypes in your mind that might explain some behaviors well enough. If you become obsessed with your new social model and only look for more data to confirm it, you can always find such data. Simply put, your brain starts ignoring observations that are not aligned with your initial social model. If you stay in the same social environment for a long time, your training and test data essentially come from the same dataset, and your model becomes more of a local model than a global one. People who travel and go outside of their society of origin usually find more contradictory observations (new test data from a new dataset) and start to develop better global models.

In other words, staying in the same social environment tends to make your model overfit. In the absence of different test data, your model can become so overfitted that it cannot be updated by new test data or even by good optimizers.
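A rough sketch of this effect in code: a model fit only on observations from a narrow "environment" scores very well on test data from that same environment, but much worse on data from a wider world. The data and the decision rule below are toy examples.

#Toy sketch of a "local model": train on a narrow slice, then evaluate on a broader one
import numpy as np
from sklearn.tree import DecisionTreeClassifier
rng = np.random.default_rng(0)
def label(points):
    #An arbitrary ground-truth rule standing in for how the world actually works
    return (points[:, 0] + points[:, 1] > 1.0).astype(int)
X_narrow = rng.uniform(0, 0.6, size=(200, 2))  #observations from a narrow environment
X_wide = rng.uniform(0, 2.0, size=(200, 2))    #observations from the wider world
clf = DecisionTreeClassifier().fit(X_narrow, label(X_narrow))
print('accuracy in the familiar narrow environment:', clf.score(X_narrow, label(X_narrow)))
print('accuracy in the wider world:', clf.score(X_wide, label(X_wide)))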

Here, I have tried to explain our social models simply, using some machine learning concepts. The best way to avoid developing overfitted social models in our minds is to interact with social environments outside of our comfort zone.

AI, a Model for Self Understanding was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

“Someone on TV has only to say ‘Alexa’ and she lights up. She’s always ready for action, the perfect woman, never says ‘Not tonight, dear’” — Sybil Sage, as quoted in the NY Times article ‘Alexa, Where Have You Been All My Life?’.

Machine learning has changed many aspects of the modern world we live in, and for the better. Self-driving cars, intelligent virtual assistants on smartphones, the recommendation systems used by companies like Amazon and Netflix, cybersecurity automation, and social media news feeds are big examples of how far technology has come.

What is Machine Learning?

Machine learning is a data analytics technique that teaches computers to do what comes naturally to humans and animals: learn from experience. Machine learning algorithms use computational methods to “learn” information directly from data without relying on a predetermined equation as a model. The algorithms adaptively improve their performance as the number of samples available for learning increases.
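That last point is easy to see empirically. Here is a quick, illustrative sketch that trains the same model on progressively larger samples of scikit-learn's digits dataset (the same dataset used in the code later in this post) and prints the test accuracy:

#Sketch: accuracy generally improves as the number of training samples grows
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
for n in (50, 200, 800, len(X_train)):
    clf = RandomForestClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    print(n, 'training samples -> test accuracy', round(clf.score(X_test, y_test), 2))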

There are many reasons why machine learning matters, along with applications in key industries:

In the financial services industry, Machine Learning helps to track customer happiness. By analysing user activity, smart machines can spot a potential account closure before it occurs. They can also track spending patterns and customer behavior to offer tailored financial advice.

If you have an Apple Watch, you know this device is getting better every year. Apple Watches don't have keyboards; they use machine learning for handwriting recognition, and the model needs to learn how to recognize the letters a user might draw.

One of my favorite uses of machine learning is online recommendation systems, which allow retailers to offer you personalized recommendations based on your previous activity.

Below, I'm using Python's machine learning library, scikit-learn, to recognize handwritten digits. The result is pretty amazing!

#Importing standard scientific Python libraries
import matplotlib.pyplot as plt
#Using a simple dataset of 8×8 gray-level images of handwritten digits
from sklearn.datasets import load_digits
#Loading the dataset provided by scikit-learn
digits = load_digits()
#Analyzing a sample image, in this case the image at index 8
import pylab as pl
pl.gray()
pl.matshow(digits.images[8])
pl.show()
#Analyzing image pixels. Each element represents a pixel of the 8×8 grayscale image; values in this dataset range from 0 to 16
digits.images[8]
#Visualizing the first 15 images
images_and_labels = list(zip(digits.images, digits.target))
plt.figure(figsize=(5, 5))
for index, (image, label) in enumerate(images_and_labels[:15]):
    plt.subplot(3, 5, index + 1)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    plt.title('%i' % label)
plt.show()

import random
from sklearn import ensemble
#Defining variables
n_samples = len(digits.images)
x = digits.images.reshape((n_samples, -1))
y = digits.target
#Creating random indices for a 20/80 sample/validation split (integer division // keeps the count an integer)
sample_index = random.sample(range(len(x)), len(x) // 5)  #20-80
valid_index = [i for i in range(len(x)) if i not in sample_index]
#Sample and validation images
sample_images = [x[i] for i in sample_index]
valid_images = [x[i] for i in valid_index]
#Sample and validation targets
sample_target = [y[i] for i in sample_index]
valid_target = [y[i] for i in valid_index]
#Using the Random Forest Classifier
classifier = ensemble.RandomForestClassifier()
#Fitting the model with the sample data
classifier.fit(sample_images, sample_target)
#Attempting to predict the validation data
score = classifier.score(valid_images, valid_target)
print('Random Forest Classifier:\n')
print('Score\t' + str(score))
#Inspecting one validation image and the model's prediction for it
i = 961
pl.gray()
pl.matshow(digits.images[i])
pl.show()
#Make sure to add an extra set of square brackets so the input stays two-dimensional
print(classifier.predict(x[[i]]))

We can see that the machine predicted that the image was a 3. Just imagining the applications of handwriting recognition technology is mind-blowing!
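A quick way to go one step further and see where the model still struggles is a confusion matrix over the validation set. This short sketch continues from the variables defined in the script above:

#Continuing from the script above: a confusion matrix over the validation set
#Rows are the true digits, columns are the predicted digits
from sklearn.metrics import confusion_matrix
predictions = classifier.predict(valid_images)
print(confusion_matrix(valid_target, predictions))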

You can find me on GitHub: viritaromero

Machine Learning, Alexa and handwriting recognition was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Stuart Russell is a professor of computer science at UC Berkeley and a co-author of the book that introduced me and millions of other people to AI, called Artificial Intelligence: A Modern Approach. This conversation is part of the Artificial Intelligence podcast and the MIT course 6.S099: Artificial General Intelligence. The conversation and lectures are free and open to everyone. Audio podcast version is available on https://lexfridman.com/ai/ (More)

What constitutes active citizenship? The easiest way to answer this is by asking why all democracies have a minimum voting age. This is because children are deemed to be neither mature, independent, nor wise enough to make political decisions — at least that is how the thinking goes.

In the same vein, active citizenship relies on citizens being not only politically mature and independent-minded but also capable of forming and making their own judgements. Unfortunately, the presence of technology weakens all three of these attributes of active citizenship.

An example of how technology achieves this is the way it exposes people to constant public scrutiny on social media. This, in turn, encourages self-censorship, which discourages political development. If we take Twitter, many people are afraid to speak their minds for fear of facing a backlash from other users, exposure to data collection or potential employer scrutiny. How many times have we seen a single stupid thing said years ago come back to haunt someone later?

As a result, keeping quiet and never saying anything controversial is the safer option, along with mimicking the acceptable public responses on any given issue. Essentially, this stops you from putting yourself in a position where your opinion will be questioned or challenged, which in turn inhibits your ability to learn and develop your political thinking.

Furthermore, the development of increasingly sophisticated data collection methods and processing algorithms for big data is leading to an increased level of citizen manipulation. This is achieved through the introduction of personalised advert delivery systems, which can target an individual's specific interests and even moods. Examples of this could be someone tweeting after a bad encounter with a foreigner and then being targeted by an anti-immigration advert from a nativist politician, or someone tweeting about environmental concerns and then being shown a targeted advert from Greenpeace.

In the near future, we are also likely to see a more existential threat to active citizenship through artificial intelligence. As AI becomes more powerful, it will be able to make decisions that are increasingly wiser, shrewder and ultimately better than ours. This will most likely mean that we increasingly doubt our ability to form opinions and make our own decisions, and defer to AI to make them for us.

There are already examples of this with the creation of apps like 'iSideWith', which suggests whom you should vote for based on your preferences. Many British citizens used this app in the last few elections, and by doing so effectively outsourced their judgement to an algorithm.

In both politics and life in general, humans have the natural tendency to congregate into groups of like-minded individuals. What turns a group into a political gathering is a shared sense of both struggle and grievance.

There have always been such gatherings historically, but technology significantly facilitates their creation. By making it much easier for individuals to find and create associations with each other, the internet facilitates the clustering of small groups with specific grievances, fragmenting the population more and more. As a result, no matter your background or particular grievance, you are likely to encounter like-minded individuals online.

Not only acting as a facilitator, technology then reinforces these groups by encouraging members to consume a diet of information that fans the flames of their shared sense of struggle and grievance. Due to the sheer amount of content available online, it is easy for people to find like-minded sources of information that support their views and fuel their sense of oppression.

Furthermore, algorithmic curation of information then amplifies people's gravitation to like-minded sources. YouTube provides a myriad of options when compared to the mass television of yesteryear. Once you start opening and viewing videos, YouTube's algorithms begin analysing your preferences, predicting what you are most likely to view next, and offering suggestions that both reinforce and reflect those preferences.

In turn, people become increasingly agitated and entrenched in their views and beliefs, which makes it increasingly difficult for them to communicate and cooperate with those who hold different opinions, resulting in political deadlock. Worse, as the division between groups deepens and people feel increasingly under attack from other groups, they come to view those groups as 'the enemy', and a leader is sought who can not only protect them but also help fight their foes.

From 1992 to 2014, the number of Americans with very negative views of supporters of the opposing political party more than doubled. Then in 2016, many Donald Trump supporters flocked to him because they saw him as a leader who would save them from their perceived enemy, be it the liberals, the Mexicans, the Muslims or the mainstream media.

Imagine the scenario in which an evil genius gains mind control over all the citizens of a particular democracy. Election day arrives and surprise surprise, the evil genius wins by a landslide!

Would this election be deemed free and fair? Most certainly not, and the reasoning is simple. To participate in a free and fair election, voters must be able to make up their own minds without any undue influence. However, technology is, unfortunately, making this increasingly difficult. Whilst mind control might still be the stuff of science fiction (for now!), by leveraging big data political parties are gaining an unprecedented ability to influence voters' decision-making processes.

Being able to collect and analyse large sets of data, ranging from people's shopping preferences to web browsing histories and voting records, allows political parties to gain an increasingly perceptive understanding of their potential voters. It also allows them to target and communicate with sympathetic voters more and more precisely.

Referring to 2016 again, whilst teaming up with the Trump campaign, the political consulting firm Cambridge Analytica determined that there was a correlation between ownership of US-made automobiles and the likelihood of being a potential Trump voter. This meant that if someone had recently purchased a Chrysler but had not voted in a number of years, the campaign could identify them as a 'promising target'.

Cambridge Analytica aided the campaign by identifying 13.5 million persuadable voters across 16 battleground states, resulting in a roadmap of where to hold rallies, where to knock on doors, and where to advertise on television. Given how decisive the voters in these states proved to be, Cambridge Analytica played an instrumental role in the election of Donald Trump.

Whilst this sets a worrisome precedent, the influence of big data will only continue to grow, with each political party leveraging it in order to keep up with rival parties. Meanwhile, the consulting firms used by parties will be able to collect data from a variety of new sources, e.g. network-integrated fridges that allow them to monitor your eating habits. As big data transitions to 'huge data', the sort of analysis used by Cambridge Analytica in the Trump campaign may come to be viewed as very basic.

What is the first word that you think of when you hear the term 'democracy'? For many, it would be 'freedom', especially since individual liberty is a vital component of democracy.

That being said, an equally vital component is the opposite of that freedom — state coercion. To enforce laws that express the will of the people, the government must have a system of coercion in place to facilitate things such as the payment of taxes; and to justify and organise such a system, the government needs to control information such as taxation records.

However, with the increased prominence of crypto-anarchy, the government's ability to control information is under threat, and so is its authority to coerce its citizens. Crypto-anarchy seeks to undermine the authority of the government through encryption, which allows individuals to communicate, store and retrieve information beyond the reach of the government.

A notable example of this is Bitcoin, an encrypted digital currency (more commonly known as cryptocurrency) which facilitates secure, quasi-anonymous transactions without a central government controlling the currency’s value or supply. Such a currency poses a threat to governments because it challenges their abilities to exercise state monopolies on money, monitor transactions and therefore collect taxes.

Bitcoin is just one illustration of blockchain technology; further applications are already proliferating and will continue to grow, resistant to government surveillance and interference.

The culmination of this is that government laws, and by extension the government itself, become increasingly toothless as malefactors are able to break the rules with impunity.

Technology has brought us undeniable benefits but also presents challenges to democracy. These challenges stem from tendencies of technologically driven social change that are unravelling before our eyes, tendencies that are eroding the essential pillars of democracy. If left unaddressed, these pillars may eventually crumble, leaving a totalitarian or dystopian state in the rubble. It is important that governments stay up to date with innovation so that they can withstand the impending changes.

The impact of technology on democracy was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

ATIN #4 (More)

This is just the beginning of the end. (More)

AI needs thousands of pictures in order to correctly tell a dog from a cat, whereas human babies and toddlers only need to see each animal once to know the difference. But AI won't be that way forever, says AI expert and author Max Tegmark, because it hasn't yet learned how to self-replicate its own intelligence. However, once AI learns how to master AGI—or Artificial General Intelligence—it will be able to upgrade itself, thereby being able to blow right past us. (More)

AI beats us in complex games, can translate between languages, and even diagnose skin cancer. Yet, it is still very far from understanding humour. Why? Let Eric Steinberger, a Machine Learning researcher tell you. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx (More)

Devin Coldewey talks to Andrew Ng about what makes an AI-first company, his vision for an AI-powered society, and the transformation of education. (More)

NDTV speaks to Andrew Ng, the celebrated artificial intelligence expert and the co-founder of popular educational platform Coursera. He talks about the increasing importance of AI and the inevitable role it will play in the near future. (More)

Dr. Andrew Ng, Founder + CEO, deeplearning.ai;
Co-founder, Coursera; Adjunct Professor, Stanford (More)

R.G. Lewis with the Comment of the year on this video… (More)

Robot Who Once Said It Would ‘Destroy Humans’ Becomes First Robot Citizen In World. (2017-2018) (More)

On Oct. 9, 2018, John Lennox addressed the critical questions surrounding artificial intelligence and how the future of artificial intelligence bears on a Christian vision of reality. This event was hosted at the Zacharias Institute in Alpharetta, GA, and is part of a new series called #TrendingQuestions. For more information about upcoming installments in this series visit: https://rzim.org/trending-questions/ (More)

Check out the Most AMAZING Examples Of Artificial Intelligence (AI)! From deep learning sophisticated robots to machine learning computers, this top 10 list of incredible technology will amaze you! (More)

A robotics researcher afraid of robots, Peter Haas, invites us into his world of understanding where the threats of robots and artificial intelligence lie. Before we get to Sci-Fi robot death machines, there's something right in front of us we need to confront – ourselves. Peter is the Associate Director of the Brown University Humanity Centered Robotics Initiative. He was the Co-Founder and COO of XactSense, a UAV manufacturer working on LIDAR mapping and autonomous navigation. Prior to XactSense, Peter founded AIDG – a small hardware enterprise accelerator in emerging markets. Peter received both TED and Echoing Green fellowships. He has been a speaker at TED Global, The World Bank, Harvard University and other venues. He holds a Philosophy B.A. from Yale. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx (More)

An independent report commissioned by the UK government to examine how competition policy needs to adapt itself for the digital age has concluded that tech giants don't face adequate competition and the law needs updating to address what it dubs the "novel" challenges of 'winner takes all' platforms.

The panel also recommends more policy interventions to actively support startups, including a code of conduct for “the most significant digital platforms”; and measures to foster data portability, open standards and interoperability to help generate competitive momentum for rival innovations.

UK chancellor Philip Hammond announced the competition market review last summer, saying the government was committed to asking “the big questions about how we ensure these new digital markets work for everyone”.

The culmination of the review — a 150-page report, published today, entitled Unlocking digital competition — is the work of the government’s digital competition expert panel which is chaired by former U.S. president Barack Obama’s chief economic advisor, professor Jason Furman.

“The digital sector has created substantial benefits but these have come at the cost of increasing dominance of a few companies which is limiting competition and consumer choice and innovation. Some say this is inevitable or even desirable. I think the UK can do better,” Furman said today in a statement.

In the report the panel writes that it believes competition policy should be “given the tools to tackle new challenges, not radically shifted away from its established basis”.

“In particular, policy should remain based on careful weighing of economic evidence and models,” they suggest, arguing also that “consumer welfare” remains the “appropriate perspective to motivate competition policy” — and rejecting the idea that a completely new approach is needed.

But, crucially, their view of consumer welfare is a broad church, not a narrow price trench — with the report asserting that a consumer welfare basis to competition law is able to also take account of other things, including (but also not limited to) “choice, quality and innovation”. 

Furman said the panel, which was established in September 2018, has outlined “a balanced proposal to give people more control over their data, give small businesses more of a chance to enter and thrive, and create more predictability for the large digital companies”.

“These recommendations will deliver an economic boost driven by UK tech start-ups and innovation that will give consumers greater choice and protection,” he argues.

Commenting on the report’s publication, Hammond said: “Competition is fundamental to ensuring the market works in the interest of consumers, but we know some tech giants are still accumulating too much power, preventing smaller businesses from entering the market,” adding that: “The work of Jason Furman and the expert panel is invaluable in ensuring we’re at the forefront of delivering a competitive digital marketplace.”

The chancellor said that the government will “carefully examine” the proposals and respond later this year — with a plan for implementing changes he said are necessary “to ensure our digital markets are competitive and consumers get the level of choice they deserve”.

Pro-startup regulation required

The panel rejects the view — most loudly propounded by tech giants and their lobbying vehicles — that competition is thriving online, ergo no competition policy changes are needed.

It also rejects the argument that digital platforms are “natural monopolies” and competition is impossible — dismissing the idea of imposing utility-like regulation, such as in the energy sector.

Instead, the panel writes that it sees “greater competition among digital platforms as not only necessary but also possible — provided the right policies are in place”. The biggest “missing set of policies” are ones that would “actively help foster competition”, it argues in the report’s introduction.

“Instead of just relying on traditional competition tools, the UK should take a forward-looking approach that creates and enforces a clear set of rules to limit anti-competitive actions by the most significant digital platforms while also reducing structural barriers that currently hinder effective competition,” the panel goes on to say, calling for new rules to tackle ‘winner take all’ tech platforms that are based on “generally agreed principles and developed into more specific codes of conduct with the participation of a wide range of stakeholders”. 

Coupled with active policy efforts to support startups and scale-ups — by making it easier for consumers to move their data across digital services; pushing for systems to be built around open standards; and for data held by tech giants to be made available for competitors — the suggested reforms would support a system that’s “more flexible, predictable and timely” than the current regime, they assert.

Among the panel’s specific recommendations are a call to set up a new competition unit with expertise in technology, economics and behavioural science, plus the legal powers to back it up.

The panel envisages this unit focusing on giving users more control over their data — to foster platform switching — as well as developing a code of competitive conduct that would apply to the largest platforms. “This would be applied only to particularly powerful companies, those deemed to have ‘strategic market status’, in order to avoid creating new burdens or barriers for smaller firms,” they write.

Another recommendation is to beef up regulators’ existing powers for tackling illegal anti-competitive practices — to make it quicker and simpler to prosecute breaches, with the report highlighting bullying tactics by market leaders as a current problem.

“There is nothing inherently wrong about being a large company or a monopoly and, in fact, in many cases this may reflect efficiencies and benefits for consumers or businesses. But dominant companies have a particular responsibility not to abuse their position by unfairly protecting, extending or exploiting it,” they write. “Existing antitrust enforcement, however, can often be slow, cumbersome, and unpredictable. This can be especially problematic in the fast-moving digital sector.

“That is why we are recommending changes that would enable more use of interim measures to prevent damage to competition while a case is ongoing, and adjusting appeal standards to balance protecting parties’ interests with the need for the competition authority to have usable tools and an appropriate margin of judgement. The goal is to place less reliance on large fines and drawn-out procedures, instead enabling faster action that more directly targets and remedies the problematic behavior.”

The expert panel also says changes to merger rules are required to enable the UK’s Competition and Markets Authority (CMA) to intervene to stop digital mergers that are likely to damage future competition, innovation and consumer choice — saying current decisions are too focused on short-term impacts.

“Over the last 10 years the 5 largest firms have made over 400 acquisitions globally. None has been blocked and very few have had conditions attached to approval, in the UK or elsewhere, or even been scrutinised by competition authorities,” they note.

More priority should be given to reviewing the potential implications of digital mergers, in their view.

Decisions on whether to approve mergers, by the CMA and other authorities, have often focused on short-term impacts. In dynamic digital markets, long-run effects are key to whether a merger will harm competition and consumers. Could the company that is being bought grow into a competitor to the platform? Is the source of its value an innovation that, under alternative ownership, could make the market less concentrated? Is it being bought for access to consumer data that will make the platform harder to challenge? In principle, all of these questions can inform merger decisions within the current, mainstream framework for competition, centred on consumer welfare. There is no need to shift away from this, or implement a blanket presumption against digital mergers, many of which may benefit consumers. Instead, these issues need to be considered more consistently and effectively in practice.

In part the CMA can achieve this through giving a higher priority to merger decisions in digital markets. These cases can be complex, but they affect markets that are critically important to consumers, providing services that shape the digital economy.

In another recommendation, which targets the Google-Facebook adtech duopoly, the report also calls for the CMA to launch a formal market study into the digital advertising market — which it notes suffers from a lack of transparency.

The panel also notes similar concerns raised by other recent reviews.

Digital advertising is increasingly driven by the use of consumers’ personal data for targeting. This in turn drives the competitive advantage for platforms able to learn more about more users’ identity, location and preferences. The market operates through a complex chain of advertising technology layers, where subsidiaries of the major platforms compete on opaque terms with third party businesses. This report joins the Cairncross Review and Digital, Culture, Media and Sport Committee in calling for the CMA to use its investigatory capabilities and powers to examine whether actors in these markets are operating appropriately to deliver effective competition and consumer benefit.

The report also calls for new powers to force the largest tech companies to open up to smaller firms by providing access to key data sets, albeit without infringing on individual privacy — citing Open Banking as a “notable” data mobility model that’s up and running.

“Open Banking provides an instructive example of how policy intervention can overcome technical and co-ordination challenges and misaligned incentives by creating an adequately funded body with the teeth to drive development and implementation by the nine largest financial institutions,” it suggests.

The panel urges the UK to engage internationally on the issue of digital regulation, writing that: “Many countries are considering policy changes in this area. The United Kingdom has the opportunity to lead by example, by helping to stimulate a global discussion that is based on the shared premise that competition is beneficial, competition is possible, but that we need to update our policies to protect and expand this competition for the sake of consumers and vibrant, dynamic economies.”

And in just one current example of the considerable chatter now going on around tech + competition, a House of Lords committee this week also recommended public interest tests for proposed tech mergers, and suggested an overarching digital regulator is needed to help plug legislative gaps and work through regulatory overlap.

Discussing the pros and cons of concentration in digital markets, the expert competition panel notes the efficiency and convenience that this dynamic can offer consumers and businesses, as well as potential gains via product innovation.

However the panel also points to what it says can be “substantial downsides” from digital market concentration, including erosion of consumer privacy; barriers to entry and scale for startups; and blocks to wider innovation, which it asserts can “outweigh any static benefits” — writing:

It can raise effective prices for consumers, reduce choice, or impact quality. Even when consumers do not have to pay anything for the service, it might have been that with more competition consumers would have given up less in terms of privacy or might even have been paid for their data. It can be harder for new companies to enter or scale up. Most concerning, it could impede innovation as larger companies have less to fear from new entrants and new entrants have a harder time bringing their products to market — creating a trade-off where the potential dynamic costs of concentration outweigh any static benefits.

The panel takes a clear view that “competition for the market cannot be counted on, by itself, to solve the problems associated with market tipping and ‘winner-takes-most’” — arguing that past regulatory interventions have helped shift market conditions, i.e. by facilitating the technology changes that created new markets and companies which led to dominant tech giants of old being unseated.

So, in other words, the panel believes government action can unlock market disruption — hence the report’s title — and that it’s too simplistic a narrative to claim technological change alone will reset markets.

For example, IBM’s dominance of hardware in the 1960s and early 1970s was rendered less important by the emergence of the PC and software. Microsoft’s dominance of operating systems and browsers gave way to a shift to the internet and an expansion of choice. But these changes were facilitated, in part, by government policy — in particular antitrust cases against these companies, without which the changes may never have happened.

The panel also argues there’s an acceleration of market dominance in the modern digital economy that makes it even more necessary for governments to respond, writing that “network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web”.

They also point to the risk of AI and machine learning technology leading to further market concentration, warning that “the companies most able to take advantage of [the next technological revolution] may well be the existing large companies because of the importance of data for the successful use of these tools”.

And while they suggest AI startups might offer a route to a competitive reset, via a substantial technology shift, there’s still currently no relief to be had from entrepreneurial efforts because of “the degree that entrants are acquired by the largest companies – with little or no scrutiny”.

Discussing other difficulties related to regulating big tech, the panel warns of the risk of regulators being "captured by the companies they are regulating", as well as pointing out that they are generally at a disadvantage versus the high-tech innovators they are seeking to rule.

In a concluding chapter considering the possible impacts of their policy recommendations, the panel argues that successful execution of their approach could help foster startup innovation across a range of sectors and services.

“Across digital markets, implementing the recommendations will enable more new companies to turn innovative ideas into great new services and profitable businesses,” they suggest. “Some will continue to be acquired by large platforms, where that is the best route to bring new technology to a large group of users. Others will grow and operate alongside the large platforms. Digital services will be more diverse, more dynamic, with more specialisation and choice available for consumers wanting it. This could drive a flourishing of investment in these UK businesses.”

Citing some “potential examples” of services that could evolve in this more supportively competitive environment they suggest social content aggregators might arise that “bring together the best material from people’s friends across different platforms and sites”; “privacy services could give consumers a single simple place to manage the information they share across different platforms”; and also envisage independent ad tech businesses and changed market dynamics that can “rebalance the share of advertising revenue back towards publishers”.

The main envisaged benefits for consumers boil down to greater service and feature choice; enhanced privacy and transparency; and genuine control over the services they use and how they want to use them.

While for startups and scale-ups the panel sees open standards and access to data — and indeed effective enforcement, by the new digital markets unit — creating “a wide range of opportunities to develop and serve new markets adjacent to or interconnected with existing digital platforms”.

The combined impact should be to strengthen and deepen the competitive digital ecosystem, they believe.

Another envisaged benefit for startups is “trust in the framework and recognition that promising, innovative digital businesses will be protected from foreclosure or exclusion” — which they argue “should catalyse investment in UK digital businesses, driving the sector’s growth”.

“The changes to competition law… mean that where a business can grow into a successful competitor, that route to further growth is protected and companies will not in the future see being subsumed into a dominant platform as the only realistic business model,” they add.

In the coming weeks, AWS is launching new G4 instances with support for Nvidia’s T4 Tensor Core GPUs, the company today announced at Nvidia’s GTC conference. The T4, which is based on Nvidia’s Turing architecture, was specifically optimized for running AI models. The T4 will be supported by the EC2 compute service and the Amazon Elastic Container Service for Kubernetes.

“NVIDIA and AWS have worked together for a long time to help customers run compute-intensive AI workloads in the cloud and create incredible new AI solutions,” said Matt Garman, vice president of Compute Services at AWS, in today’s announcement. “With our new T4-based G4 instances, we’re making it even easier and more cost-effective for customers to accelerate their machine learning inference and graphics-intensive applications.”

The T4 is also the first GPU on AWS that supports Nvidia’s raytracing technology. That’s not what Nvidia is focusing on with this announcement, but creative pros can use these GPUs to take the company’s real-time raytracing technology for a spin.

For the most part, though, it seems like Nvidia and AWS expect that developers will use the T4 to put AI models into production. It's worth noting that the T4 hasn't been optimized for training these models, but it can obviously be used for that as well. Indeed, with the new Cuda-X AI libraries (also announced today), Nvidia now offers an end-to-end platform for developers who want to use its GPUs for deep learning, machine learning and data analytics.
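For developers who want to try the new instances once they are available, launching one should look like any other EC2 request. Here is a rough boto3 sketch; the AMI ID and the instance type name are placeholders rather than details taken from the announcement, so check the EC2 console for the actual G4 type names and a suitable Deep Learning AMI in your region.

#Rough sketch: launching a GPU instance with boto3 (AMI ID and instance type are placeholders)
import boto3
ec2 = boto3.client('ec2', region_name='us-east-1')
response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  #placeholder AMI ID
    InstanceType='g4dn.xlarge',       #placeholder G4 instance type name
    MinCount=1,
    MaxCount=1,
)
print(response['Instances'][0]['InstanceId'])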

It's worth noting that Google launched T4 support a few months ago; on Google's cloud, these GPUs are currently in beta.

In this episode Sam Harris speaks with computer scientist Stuart Russell about the challenge of building artificial intelligence that is compatible with human well-being. (More)

In this episode Sam Harris speaks with Eliezer Yudkowsky about the nature of intelligence, different types of AI, the “alignment problem,” IS vs OUGHT, the possibility that future AI might deceive us, the AI arms race, conscious AI, coordination problems, and other topics. (More)

Use my Tile referral link & check out this app that helps you find your keys! http://ssqt.co/meeAf5P (More)

This clip is taken from the Joe Rogan Experience podcast #804 with Sam Harris (https://youtu.be/RJ5_hAEsLkU), also available for download via iTunes & Stitcher (http://bit.ly/1XvSzR3). (More)

Were the Eagles the George Orwell of music? Did they somehow 'take a trip' into our present age where we are (virtual) 'prisoners of our own device', courtesy of the tech giants of Silicon Valley, California?

Yes, there are multiple interpretations of the legendary song ‘Hotel California’ and the Silicon Valley angle may not have been on the mind of the artists. But, the nature of art is such that no artist can ever fully comprehend all aspects of his own creation. It is observers who interpret art from diverse perspectives, making the act of observation itself an art in its own right.

The following perspective gently crept up on me while I was thinking about our age of digital dystopia, while in the background the Eagles sang about their strange experiences at Hotel California. Somewhere, from the depths of my mind, an unexpected question surfaced, seeking light in the digital age. Why was the title of the song 'Hotel California' and not 'Hotel Chicago' or 'Hotel Florida'?

Another thought followed suit. How come lyrics like 'you can check out anytime you like, but you can never leave' and 'we are programmed to receive' take on a whole new meaning when examined in the context of present-day California, home to Silicon Valley?

One after another, like divers surfacing for air, random phrases from the song started surfacing with a whole new meaning.

The song starts with the words 'On a dark desert highway'.

Today we live in a Dark Age of Instagrammesque narcissism, a desert-like digital wasteland of big and largely useless data on the high-speed information superhighway.

Warm smell of colitas, rising up through the air
Up ahead in the distance, I saw a shimmering light
My head grew heavy and my sight grew dim
I had to stop for the night.

There she stood in the doorway;
I heard the mission bell
And I was thinking to myself
‘This could be heaven or this could be Hell’

Then she lit up a candle and she showed me the way
There were voices down the corridor,
I thought I heard them say

Welcome to the Hotel California

The warm smell of colitas (marijuana flower buds) transports them to the doorway of the digital age that 'could be heaven or hell'. Led by a lady holding a candle, they move down the broadband corridors, where invisible (possibly automated) voices welcome them to Hotel (Silicon Valley) California, the nerve centre of the Age of Information Technology.

Such a lovely place (such a lovely place)
Such a lovely face.

This is disturbingly indicative of photoshopped selfies taken in airbrushed, picture-perfect places where every frame is filtered with X-Pro II, Lo-Fi and their cousins. The word 'face' inevitably brings up images of Facebook, the social media forum that has definitively changed the way the world works.

Plenty of room at the Hotel California
Any time of year (any time of year) you can find it here

There is always plenty of room, or unlimited server space, any time of the year to host your multiple digital avatars, courtesy of the tech giants at Hotel California.

Her mind is Tiffany-twisted

This imagery refers to a girl who represents a superficial generation whose mind is warped to the point of perversion by everything glittery, just like the jewellery at the super-expensive Tiffany's store. It is interesting that one of the main standouts of Tiffany is its blue boxes. Popular social media platforms like Twitter, Facebook and LinkedIn are all headquartered in Silicon Valley and have blue as their primary colour.

She got the Mercedes bends

She has the Mercedes bends (not Benz). 'The bends' is another name for decompression sickness; you get the bends, for example, if you 'surface' too fast while scuba diving. 'Mercedes bends' is an interesting play on words, suggesting a flashy, superficial life lived only at the surface, just like our shallow digital personas.

She got a lot of pretty, pretty boys, that she calls friends

This again refers to an entire generation obsessed with looking good by hanging out with stylish people they barely know, whom they call friends. It also reminds us of our hundreds of social media 'friends', whom we don't really know.

How they dance in the courtyard, sweet summer sweat

There is something borderline disturbing about this line, reminiscent of how we display our private lives, opinions and emotions in the open courtyard of social media, for good and for bad, while the world watches, comments and judges, and of how this has become our new normal.

Some dance to remember, some dance to forget

Some of our social media engagements are to reconnect with past friends we remember, or to create memories that we can later recall digitally as our biological memory fades. At other times we just want to dissolve bad memories by living through larger-than-life digital avatars that barely resemble our real lives.

So I called up the Captain,
‘Please bring me my wine’

From superficiality and narcissism, there is a sudden shift to a need for sophistication, indicated by asking the captain for wine. Wine suggests finesse, and captains at formal restaurants ensure quality and refinement.

He said, ‘we haven’t had that spirit here since nineteen sixty-nine’

The words 'spirit' and 'nineteen sixty nine' lend validation to this digital-age interpretation of Hotel California. Interestingly, 1969 was the year when the internet actually started, in the 'Spirit of Community'.

The Advanced Research Projects Agency (ARPA), under the U.S. Department of Defense, hired Dr. J.C.R. Licklider in 1962 in response to the Soviet Union's launch of a satellite carrying a dog. Dr. Licklider's intentions were as follows:

His idea for the project was the “spirit of community” and he was interested in “having computers help people communicate with other people” (Licklider, Licklider, and Robert Taylor) as opposed to using the computer to communicate for us.

The first message was sent over the ARPANET in 1969 from computer science Professor Leonard Kleinrock's laboratory at the University of California, Los Angeles, to the second network node at the Stanford Research Institute, thus kickstarting the internet era as we know it today.

The lyrics 'we haven't had that spirit here since nineteen sixty nine' seem to suggest that the founding spirit of the internet, i.e. helping people connect with other people using computers, has transformed into a narcissistic and dystopian beast that threatens the fabric of social interaction.

They livin’ it up at the Hotel California
What a nice surprise (what a nice surprise), bring your alibis

At Hotel California, everyone, irrespective of their imperfect lives in the real world, makes sure their digital avatars are living it up in surprising style on the silicon playground.

An alibi refers to a situation in which one is physically in one place but builds up an illusion of being in another. That is what we do through our social media avatars: irrespective of our physical locations, our online activities are basically happening on Silicon Valley-controlled servers. This also draws attention to the problem of cybercrime and the dark web, which transcend geographical locations.

Mirrors on the ceiling,

There is something voyeuristic in this statement. It indicates the perverted pleasure of watching oneself (in selfie mode), as well as surveillance and the violation of privacy by tech giants: all ailments that plague residents of our digital age.

The pink champagne on ice

Pink champagne is also called rosé. This indicates the rose-tinted world that we live in digitally, such a contrast to our ice-like, colourless real lives of increasing isolation, loss of privacy and the crumbling of the world as we know it.

Interestingly, a description of Pink Champagne reveals very interesting parallels to modern day human behavior.

According to Sam Heitner, director of the Office of Champagne USA, 'it's fashion, it's hip, it's a way to make a big statement that you know what you are doing, to have a different look at a party, because it's about the color of what's in the glass'.

Laura Maniec, director of wine & spirits for B.R.Guest Restaurant Group in New York, finds that the striking color has a contagious effect. If you are pouring it by the glass, she says, people see it, and all of a sudden everybody will start ordering it. (Source: Forbes)

Both quotes indicate how people on social media are drawn to the saturation of a substance rather than the substance itself.

And she said, ‘we are all just prisoners here, of our own device’

This conjures up images of a picture-perfect generation who are mental prisoners of the internet-enabled devices they simply cannot disconnect from. They have traded their privacy to the tech overlords of Hotel California in exchange for living digitally augmented super-lives online.

And in the master’s chambers,
They gathered for the feast

Now that they had the whole world in their digital net, the tech overlords gathered at their headquarters in Silicon Valley to count their billions of dollars.

They stab it with their steely knives,
But they just can’t kill the beast

But they realize that their own creation has turned into a beast that refuses to obey its master and can no longer be 'killed' by human means. This is indicative of the increasing power of artificial intelligence and of predictions that it could become an existential threat to humans and would not respond to a kill switch.

Last thing I remember, I was
Running for the door
I had to find the passage back to the place I was before

Realising that Hotel California was not only sleazy and superficial, but that its real manager was a beast that controlled even its creators, they started running back to the gateway to their old and happier lives.

‘Relax’ said the night man,
‘We are programmed to receive.
You can check out any time you like,
But you can never leave!’

At the gateway, they meet the night man, who cryptically says 'we are programmed to receive'. This indicates the information we receive through our devices from a tightly knit corporate matrix that hopes to control our thinking. He tells them that they can physically check out of Hotel California anytime they like, but their digital avatars can never leave. They will always remain guests on the spacious servers of Hotel California.

Is Hotel California about Silicon Valley? was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Earlier this year, CyPhy Works, the drone startup founded by Helen Greiner, announced a major change. The company was rebooting and renaming itself Aria Insights, a move that arrived with a newfound AI- and data-driven focus. Now, just over two months later, the company is no more.

Reports that Aria had shuttered began surfacing earlier this week. Moments ago, the company confirmed the move in a tersely worded statement offered to TechCrunch:

Aria Insights has ceased operations effective March 21, 2019.

That’s the sum total of the insight provided by Lance VandenBrook, the former CyPhy CEO who resumed that role as the company transitioned back in January. The move appears to be an abrupt one, with little to no information offered to external parties. It brings to mind last year’s sudden closure of Rethink Robotics, another company launched by a former iRobot co-founder.

Full disclosure: We announced last month that the company’s CTO would be appearing onstage at our Robotics event next month. That, like everything else apart from Aria’s drones, appears to be up in the air at the moment.

More information as we get it.

Artificial Intelligence and the Future According to Demis Hassabis Founder & CEO of DeepMind (More)

Dr Demis Hassabis, Co-founder and CEO of DeepMind speaking at CSAR – Apr 6, 2017 (More)

Google’s Doodles are often elaborate creations, but the upcoming Doodle to celebrate the birthday of composer Johann Sebastian Bach is positively baroque.

With the help of artificial intelligence, the interactive Doodle allows users to generate harmonies for any melody they input in the style of the famous 18th century composer. Google used machine learning to analyze the harmonies of more than 300 Bach compositions, replicating the patterns it found to fit the user’s suggested melody.
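Google's model is far more sophisticated than anything shown here, but the general idea of learning harmonic patterns from example pieces and reapplying them to a new melody can be sketched with a toy frequency model. The "training" pairs below are made up purely for illustration.

#Toy sketch of pattern-based harmonization -- not Google's actual model
#Learn which chord most often accompanies each melody note, then reuse those counts
from collections import Counter, defaultdict
training_pairs = [
    ('C', 'C major'), ('E', 'C major'), ('G', 'C major'),
    ('D', 'G major'), ('B', 'G major'), ('G', 'G major'),
    ('F', 'F major'), ('A', 'F major'), ('C', 'F major'),
]
counts = defaultdict(Counter)
for note, chord in training_pairs:
    counts[note][chord] += 1
def harmonize(melody):
    #Pick the most frequently seen chord for each melody note
    return [counts[note].most_common(1)[0][0] for note in melody if note in counts]
print(harmonize(['C', 'E', 'G', 'D']))  #e.g. ['C major', 'C major', 'C major', 'G major']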

You can input short single-line melodies that are just two bars long and change the key of the music and its tempo. You can also download the resulting composition as a MIDI file or share it with friends. The Doodle also includes some hidden surprises. Click the mini amplifier to the right of the keyboard to upgrade the instruments to ‘80s synths.

Image: Google. Bach and friends in their '80s regalia.

The Doodle is a neat demonstration of both the possibilities and limitations of AI to generate music. As Anna Huang, a resident AI researcher with Google’s Magenta project who created the Doodle, explains to The Verge, the underlying AI model was trained on Bach’s chorale harmonizations, which are harmonizations of existing hymns.

This is particularly compliant data for AI to learn from, says Huang. “The Bach compositions in this dataset are highly structured, and the style is very concise, yet with rich harmonies, allowing machine learning models to learn more with less data.” It also helps that Bach is a composer of Baroque music: a highly formalized genre with consistent rules.

Huang, who studied music composition as an undergraduate and graduate student, says she's always looking for new ways to compose. AI gives her a tool that can fill in the missing parts of a piece, giving her new material to sculpt. "As a result, you can try out ideas more quickly, and see if you encounter something that sparks," she says.

She also notes that as with other musical AI projects, this technology is far from a perfect composer. One thing machine learning generators struggle with, for example, is creating long-term structure and coherence. “What is harder to replicate is Bach’s balance in simplicity and expressiveness and the longer arcs in his music,” says Huang.

The Bach-inspired Google Doodle will go live 12AM ET on Thursday, March 21st, and it will be available for 48 hours across 77 markets. You can read more about the technology behind it here.

Update March 20th 3:00PM ET: Updated with additional comment from Magenta AI resident Anna Huang.
