THE FUTURE IS HERE

Is Artificial Intelligence Compromising Our Privacy?

The field of Artificial Intelligence (AI) has existed for over 60 years, but recent advancements have led to an explosion of speculation about the technology. Proponents say AI will make our lives more efficient, less tedious, and more automated, while opponents warn that the robo-apocalypse is nigh.

The truth is probably somewhere in between these two extremes. But what often gets left out of these conversations is how AI will almost certainly change privacy as we know it. To work well, AI needs as much information as it can get, whether it’s tackling a complex problem like employee retention or something simpler like deciding which YouTube videos to recommend.

What many people don’t realize is that, in many ways, the future of AI is already here, and our privacy is already suffering. So what are the privacy tradeoffs with AI, is there any way to opt out, and why does YouTube keep recommending Wham!’s “Last Christmas” to me no matter what I watch? Let’s investigate.

The future of AI is here

In our heated debates about AI, we forget an important truth: AI already powers many services that we use every day. Voice assistants like Alexa and Siri; streaming services like Netflix, YouTube, and Spotify; and our Facebook, Instagram, and Twitter newsfeeds all use AI to predict what content and media we might enjoy based on factors like previous activity, demographics, and even the private messages we send on these services (though Facebook says it doesn’t use private messages or activity outside the app to target users).
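
To see why these systems are so hungry for data, here’s a minimal sketch in Python of the kind of scoring a recommendation engine performs. Everything in it is invented for illustration; the signals and weights bear no relation to any real platform’s ranking system.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topic: str

# Signals mined from a user's history: topics watched, and how often.
watch_history = {"pop-music": 12, "sitcoms": 7, "cooking": 1}

def score(video: Video, history: dict) -> float:
    """Score a candidate by how much of the user's past viewing shares its topic."""
    total = sum(history.values())
    return history.get(video.topic, 0) / total if total else 0.0

candidates = [
    Video("Last Christmas", "pop-music"),
    Video("Rhoda S1E1", "sitcoms"),
    Video("Knife Skills 101", "cooking"),
]

# The engine simply surfaces whatever the history says you already like --
# which is why richer data makes the predictions sharper.
for v in sorted(candidates, key=lambda v: score(v, watch_history), reverse=True):
    print(f"{score(v, watch_history):.2f}  {v.title}")
```

The more signals a service can collect about you, the better this kind of scoring works, which is exactly the incentive that drives the data collection discussed below.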

Recommendation engines are one thing, but when our data becomes available to advertisers, or when social media firms monitor our activity on other apps to target ads or suggest content, many users feel a line has been crossed. The Cambridge Analytica scandal (more on that below) isn’t an AI story per se, but the data at its center was gathered to teach Facebook’s algorithms how best to reach us, and it shows how easily that collection can backfire when stringent restrictions on how the data can be used aren’t in place.

The privacy tradeoff with AI

It’s convenient to get content tailored specifically to your interests, and for companies using AI to get insights about everything from employee satisfaction to customer preferences, it’s a core part of a business model that ostensibly prioritizes the user, whether that user is an employee, a client, or a potential customer.

The rise of AI has made data central to any viable business model, and when anything becomes commoditized, malicious actors and unscrupulous companies are incentivized to cross ethical lines to gain access to it. The Cambridge Analytica scandal is a prime example: Facebook users unwittingly gave a third-party application access to the treasure trove of data that Facebook had collected about them. The protocols that should have prevented this type of data misuse were not in place, and that’s one glaring way collecting data for AI purposes can backfire.

But there are other, less flagrant, privacy concerns. For instance, your activity on a social network can help it predict your political affiliation, emotional state, and sexual orientation, even if you’ve never explicitly stated any of them, and you may have no idea how those inferences are being used. Proponents of AI maintain that because user data is anonymized, there is no privacy concern. But as one expert points out, “if you take enough big data and combine it with other bits of big data, you can re-identify almost everybody.”
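
That re-identification point is easy to demonstrate. Below is a toy linkage attack in Python with entirely invented data: two datasets that each look harmless on their own give up a name the moment they’re joined on shared quasi-identifiers (here, the classic trio of postal code, birth year, and gender).

```python
import pandas as pd

# Dataset A: "anonymized" activity data released without names.
viewing = pd.DataFrame({
    "postal_code": ["M5V", "M5V", "K1A"],
    "birth_year": [1985, 1990, 1985],
    "gender": ["F", "F", "M"],
    "watched": ["Last Christmas", "Cooking 101", "Local news"],
})

# Dataset B: a separate public record (say, a voter roll) that has names.
public = pd.DataFrame({
    "name": ["A. Smith"],
    "postal_code": ["M5V"],
    "birth_year": [1985],
    "gender": ["F"],
})

# Joining on the shared quasi-identifiers re-attaches a name to the
# "anonymous" viewing record -- no hacking required, just a merge.
reidentified = viewing.merge(public, on=["postal_code", "birth_year", "gender"])
print(reidentified[["name", "watched"]])
```

With enough overlapping attributes, combinations that seem generic turn out to be unique to one person, which is exactly what the expert quoted above is warning about.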

One particularly worrying use of AI is being implemented in China. The finance wing of Chinese commerce giant Alibaba developed a social credit score based on purchase history and financial data collected from sources like taxi companies. The types of products people buy affect the score: diapers are favored over frivolous purchases like video games.
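
To make the mechanics concrete, here’s a minimal sketch of what category-weighted scoring looks like. The categories and weights are invented for illustration and are in no way the actual model.

```python
# Invented weights: some purchase categories raise the score,
# others lower it. Not Alibaba's real scheme.
CATEGORY_WEIGHTS = {
    "diapers": 5,       # read as a signal of "responsibility"
    "groceries": 2,
    "video_games": -3,  # read as a "frivolous" purchase
}

def score_delta(purchases: list) -> int:
    """Sum the (invented) weights for a list of purchase categories."""
    return sum(CATEGORY_WEIGHTS.get(item, 0) for item in purchases)

print(score_delta(["diapers", "groceries"]))        # 7
print(score_delta(["video_games", "video_games"]))  # -6
```

The unsettling part isn’t the arithmetic, which is trivial, but who chooses the weights and what the resulting number is used to decide.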

The Chinese government is watching this project closely, and plans to implement an even more intrusive system to assign citizens a social credit score, bringing together information from traffic infraction databases, police records, tax payments, academic institutions, and even women’s birth control use.

Collecting every piece of available information on citizens makes sense in a way: it gives a comprehensive picture of each person and could be used to recommend social assistance programs or to recognize crime patterns to improve the criminal justice system. But social credit scores will inevitably be used to rank citizens and could cost people in some professions their jobs.

How can we ensure our privacy is protected?

In our digitally connected world, it’s almost impossible not to expose yourself to some form of data mining. If you use social media, own a smartphone, connect to the internet, shop online, or even have a bank account, your data is being stored and analyzed by AI.

If you’re concerned about the way your data is being used, in some cases you can choose to opt out. Facebook allows you to opt out of targeted ads, or you could remove your data from social media and stop using it entirely. You can also disable third-party cookies, which makes it harder (though not impossible) for companies like Facebook and Google to track your activity across the web. If you’re particularly concerned, you can use a privacy-focused browser like Tor, which routes your traffic through multiple encrypted relays so that websites can’t easily identify you.

In general, just make sure you read the fine print when signing up for new services, and check the settings on the services you already have to ensure you have at least some degree of control over how your data is used. And, while it’s tempting to hand over all your Facebook data to a third-party app in exchange for learning how old you really look, don’t give sketchy sites access to your entire photo library. It’s not worth it, and I can assure you that you look 29.

And why does YouTube keep recommending “Last Christmas” to me?!

The last question some of you may have pertains to my own YouTube recommendations. Why does YouTube recommend the Wham! hit “Last Christmas” to me every single time I use the service?

In this case, YouTube has mined my viewing history and determined that I use the service exclusively to watch “Last Christmas” and episodes of the television classic Rhoda. The results are in: AI has determined that I am lame, once again proving what a powerful technology it truly is.

Written by: Kristen Pyszczyk
