
Artificial intelligence (AI) is improving in astounding ways. Systems that use it can often diagnose some illnesses more accurately than physicians with decades of experience, or determine the content of images and automatically tag them for easier retrieval later.

However, AI is not free from flaws. The people who work with it know that AI has blind spots, and so-called adversarial examples show what can happen when those blind spots are exploited.

They are inputs that make an AI misbehave or overlook information it should easily recognize. People familiar with AI and machine learning know that an adversarial example can arise from something as small as a few careful tweaks to an image.
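To make that concrete, here is a minimal sketch of the fast gradient sign method, one common way of generating adversarial examples. It assumes a PyTorch image classifier; the `model`, `image`, and `true_label` names are placeholders rather than anything from the systems discussed in this article.

```python
# Minimal sketch of the fast gradient sign method (FGSM), assuming a
# PyTorch classifier. All names here are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` nudged just enough to mislead `model`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The tweak is bounded by `epsilon`, so the altered image usually looks identical to a person while pushing the classifier toward a wrong answer.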

They can cause severe problems, but some researchers believe they could use these AI blind spots to protect users’ privacy.

Manipulating AI Data to Cause Mistakes

Neil Gong recently joined the Duke University faculty to investigate the effectiveness of inserting false information into a person’s profile to protect their privacy. Part of his work centers on figuring out which information works best for that, and how much is needed to keep the data safe from prying eyes.

Along with another Duke researcher named Jinyuan Jia, Gong relied on a data set similar to the one involved in the Cambridge Analytica scandal, which exposed Facebook profile information to a third party without consent.

The Duke team used information compiled from user ratings in the Google Play Store. The key was to work with users who had also revealed their locations while submitting their opinions about apps.

The researchers trained a machine learning algorithm on those users' data and found that it could predict a person's city from their Google Play likes alone with 44% accuracy on the first try.

However, the scientists threw off the algorithm with just a few minor adjustments. For example, if they removed some app ratings or tweaked only three ratings to point to the wrong city, the algorithm's accuracy suddenly dropped to roughly that of random guessing.
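The toy example below is not the Duke team's code; it is a small, self-contained illustration of the underlying idea using synthetic data and scikit-learn: a classifier learns a city-correlated rating pattern, and perturbing just a handful of one user's ratings can shift its prediction.

```python
# Toy illustration (not the Duke study's actual method or data): a classifier
# guesses a "city" from app ratings, then a few tweaked ratings throw it off.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_apps = 500, 50

# Synthetic data: pretend each city's users rate the first few apps differently.
cities = rng.integers(0, 4, size=n_users)
ratings = rng.integers(1, 6, size=(n_users, n_apps)).astype(float)
ratings[:, :10] += cities[:, None]  # city-correlated rating signal

clf = LogisticRegression(max_iter=1000).fit(ratings, cities)
print("Accuracy on training users:", clf.score(ratings, cities))

# Perturb one user's profile: change just three ratings away from their city's pattern.
victim = ratings[0].copy()
print("Prediction before tweaks:", clf.predict([victim])[0])
victim[:3] -= 3
print("Prediction after tweaks: ", clf.predict([victim])[0])
```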

Using Adversarial Examples to Stop Other Privacy Leaks

Researchers at the Rochester Institute of Technology and the University of Texas at Arlington came up with another privacy-protecting method involving adversarial examples. Hackers use a technique called web fingerprinting to identify which websites people visit. The team discovered that adding "noise" with adversarial examples brought that technique's accuracy down from 95% to between 29% and 57%.

They mixed the adversarial changes into decoy web traffic using a randomized method that would reportedly be difficult for hackers to notice. That's crucial, since cybercriminals can carry out adversarial training to fool algorithms they believe are in place to protect privacy.
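The researchers' actual defense computes adversarial perturbations against a fingerprinting classifier; the sketch below only shows the simpler decoy-traffic half of the idea, with a made-up trace format, so the function and its parameters should be read as assumptions for illustration.

```python
# Rough sketch of padding a traffic trace with randomly timed decoy packets
# so its "shape" is harder to fingerprint. Not the RIT/UT Arlington code.
import random

def add_decoy_packets(trace, decoy_ratio=0.3, max_size=1500):
    """trace: list of (timestamp, packet_size) tuples; returns a noisier copy."""
    noisy = list(trace)
    start, end = trace[0][0], trace[-1][0]
    for _ in range(int(len(trace) * decoy_ratio)):
        # Insert a dummy packet at a random time with a random plausible size.
        noisy.append((random.uniform(start, end), random.randint(60, max_size)))
    return sorted(noisy)

example_trace = [(0.00, 120), (0.05, 1500), (0.09, 80), (0.20, 1500)]
print(add_decoy_packets(example_trace))
```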

Looking at AI Flaws Differently

People are increasingly interested in learning about AI and how it may shape the future. That fascination opens opportunities for people like Tim Hwang to weigh in on what to expect. Hwang, who spent time at Google and MIT and took part in a $26 million AI initiative, now spends some of his time as a guest speaker educating people on machine learning and related topics.

As the public becomes more familiar with AI and how it works, they may gradually realize that, in cases like those described above, imperfect AI may not always be a bad thing. Outside of using AI mistakes to boost privacy, though, flaws associated with AI could remind developers to slow down and remember that some kinds of AI progress may carry unintended consequences.

For example, researchers know that AI algorithms can have an unintended bias. When that happens, responsible developers shut those projects down and go back to the drawing board with them.

A common line of thought is that AI itself is not dangerous, but the biases humans bring into it are. In some cases, then, the instances that cause AI to make mistakes could remind people not to build AI tools haphazardly or put too much blind faith in them.

Enhancing Privacy Through Another AI-Based Method

Applying adversarial examples to machine learning is undoubtedly an interesting way to beef up privacy, but it's not the only way to enlist AI in addressing privacy concerns. One option is a program called DeepPrivacy, which uses generative adversarial networks (GANs) to swap someone's face with bits of features drawn from a database of 1.47 million faces.

The result is a mask-like rendition of constantly shifting face parts that appears in place of someone's actual face, making it almost impossible to identify the person by their facial features.
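Below is a toy, runnable sketch of the face-swapping pipeline that approach implies. DeepPrivacy itself relies on a trained conditional GAN and its own face detector; the `DummyDetector` and `DummyGenerator` here are invented stand-ins, not its real components or API.

```python
# Toy sketch of GAN-style face anonymization: detect each face region and
# overwrite it with a generated one. The detector and generator are dummies.
import numpy as np

class DummyDetector:
    def detect(self, frame):
        # Pretend the middle of the frame is a face: (x, y, width, height).
        h, w = frame.shape[:2]
        return [(w // 4, h // 4, w // 2, h // 2)]

class DummyGenerator:
    def sample(self, region):
        # Stand-in for a GAN: return random pixels the same shape as the face.
        return np.random.randint(0, 256, size=region.shape, dtype=np.uint8)

def anonymize_frame(frame, detector, generator):
    """Replace each detected face region with a generated synthetic one."""
    for x, y, w, h in detector.detect(frame):
        frame[y:y + h, x:x + w] = generator.sample(frame[y:y + h, x:x + w])
    return frame

frame = np.zeros((128, 128, 3), dtype=np.uint8)
anonymize_frame(frame, DummyDetector(), DummyGenerator())
```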

DeepPrivacy is still a work in progress, though, and it doesn't anonymize every part of a person's face; the ears, for example, are left untouched. Even so, the research may lead to better ways of obscuring someone's features when they speak on camera about sensitive information.

Reshaping People’s Views of Imperfect AI

Even though AI has come a tremendously long way, it’s not perfect — and that’s okay. The examples given here should encourage people to broaden their perceptions of what they deem flawed AI. Even when AI doesn’t perform precisely as people intend in every case, it can still be valuable.
