AI

THE stunning successes of artificial intelligence would not have happened without the availability of massive amounts of data, whether it's smart speakers in the home or personalised book recommendations. And the spread of AI into new areas of the economy, such as AI-driven marketing and self-driving vehicles, has been driving the collection of ever more data. These large databases are amassing a wide variety of information, some of it sensitive and personally identifiable. All that data in one place makes such databases tempting targets, ratcheting up the risk of privacy breaches.

The general public is largely wary of AI’s data-hungry ways. According to a survey by Brookings, 49 per cent of people think AI will reduce privacy. Only 12 per cent think it will have no effect, and a mere five per cent think it will improve privacy.

As cybersecurity and privacy researchers, we believe that the relationship between AI and data privacy is more nuanced. The spread of AI raises a number of privacy concerns, most of which people may not even be aware of. But in a twist, AI can also help mitigate many of these privacy problems.

Revealing models

Privacy risks from AI stem not just from the mass collection of personal data, but from the deep neural network models that power most of today’s artificial intelligence. Data isn’t vulnerable just from database breaches, but from “leaks” in the models that reveal the data on which they were trained.

Deep neural networks—which are a collection of algorithms designed to spot patterns in data—consist of many layers. In those layers are a large number of nodes called neurons, and neurons from adjacent layers are interconnected. Each node, as well as the links between nodes, encodes certain bits of information. These bits of information are created when a special process scans large amounts of data to train the model.
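As a rough illustration (ours, not from the original research), the sketch below shows that layered structure in Python. The numbers stored in the weight matrices connecting the layers are the "bits of information" that training leaves behind; here they are random for brevity, whereas training would set them from data.

```python
# A minimal sketch of a two-layer feedforward network, for illustration only.
# The weights on the connections between layers are where the "bits of
# information" extracted from the training data end up.
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes are arbitrary: 4 input features, 8 hidden neurons, 2 outputs.
# Random weights stand in for what training would normally compute.
W1 = rng.normal(size=(4, 8))   # connections: input layer -> hidden layer
W2 = rng.normal(size=(8, 2))   # connections: hidden layer -> output layer

def forward(x):
    """Pass an input through the layers of interconnected neurons."""
    hidden = np.maximum(0, x @ W1)   # ReLU activation in the hidden layer
    return hidden @ W2               # raw scores from the output layer

print(forward(np.ones(4)))
```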

For example, a facial recognition algorithm may be trained on a series of selfies so it can more accurately predict a person’s gender. Such models are very accurate, but they may also store too much information—actually remembering certain faces from the training data. In fact, that’s exactly what researchers at Cornell University discovered. Attackers could identify people in training data by probing the deep neural networks that classified the gender of facial images.

They also found that even if the original neural network model is not available to attackers, attackers may still be able to tell whether a person is in the training data. They do this by using a set of models that are trained on data similar, but not identical, to the training data. So if a man with a beard was present in the original training data, then a model trained on photos of different bearded men may be able to reveal his identity.
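The real attack is more elaborate, but a toy sketch conveys the shadow-model idea. Everything below is our own assumption for illustration (synthetic data, scikit-learn models): train stand-in "shadow" models on similar data, record how confident each one is on records it did and did not see, and train an attack model on that signal.

```python
# Toy sketch of a shadow-model membership inference attack.
# All data here is synthetic; a real attack targets a deployed model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

attack_X, attack_y = [], []
for seed in range(5):                         # several "shadow" models
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    train_idx, out_idx = idx[:500], idx[500:1000]
    shadow = RandomForestClassifier(random_state=seed).fit(X[train_idx], y[train_idx])
    # The attacker knows which records each shadow model saw, so the
    # model's confidence on "in" vs "out" records becomes labelled data.
    for rows, member in ((train_idx, 1), (out_idx, 0)):
        conf = shadow.predict_proba(X[rows]).max(axis=1)
        attack_X.extend(conf.reshape(-1, 1))
        attack_y.extend([member] * len(rows))

# The attack model learns to tell members from non-members by confidence alone.
attack_model = LogisticRegression().fit(np.array(attack_X), attack_y)
```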

AI to the rescue?

On the other hand, AI can be used to mitigate many privacy problems. According to Verizon’s 2019 Data Breach Investigations Report, about 52 per cent of data breaches involve hacking. Most existing techniques to detect cyberattacks rely on patterns. By studying previous attacks and identifying how the attacker’s behaviour deviates from the norm, these techniques can flag suspicious activity. It’s the sort of thing at which AI excels: studying existing information to recognise similar patterns in new data.
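As a toy illustration of that pattern-based approach (our own example, not from the Verizon report, with hypothetical features), a detector can be fit on records of normal activity and asked to flag behaviour that deviates from it:

```python
# Toy sketch: flag logins that deviate from the learned pattern of normal ones.
# The features (hour of day, megabytes transferred) are made up for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Normal logins cluster around office hours and modest transfer volumes.
normal = np.column_stack([rng.normal(13, 2, 500), rng.normal(20, 5, 500)])

detector = IsolationForest(random_state=1).fit(normal)

# A 3 a.m. login moving 500 MB deviates from the norm and is flagged (-1),
# while an ordinary afternoon login is not (1).
print(detector.predict([[3.0, 500.0], [14.0, 22.0]]))  # e.g. [-1  1]
```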

Still, AI is no panacea. Attackers can often modify their behaviour to evade detection. Take the following two examples. For one, suppose anti-malware software uses AI techniques to detect a malicious programme by scanning for a particular sequence of software code. In that case, an attacker can simply shuffle the order of the code. In another example, the anti-malware software might first run the suspicious programme in a safe environment, called a sandbox, where it can look for any malicious behaviour. Here, an attacker can instruct the malware to detect whether it’s being run in a sandbox. If it is, it can behave normally until it’s released from the sandbox—like a possum playing dead until the threat has passed.
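A toy sketch of the first evasion (ours, with a made-up byte pattern rather than real malware) shows why scanning for one exact sequence is brittle:

```python
# Toy sketch: a scanner that looks for one exact sequence of code is
# defeated when the attacker reorders the same instructions.
SIGNATURE = b"\x10\x20\x30\x40"        # made-up byte pattern of "known malware"

def naive_scan(program: bytes) -> bool:
    """Flag the program if it contains the known sequence."""
    return SIGNATURE in program

original = b"\x00" + b"\x10\x20\x30\x40" + b"\xff"
shuffled = b"\x00" + b"\x30\x40\x10\x20" + b"\xff"   # same bytes, new order

print(naive_scan(original))   # True  -> detected
print(naive_scan(shuffled))   # False -> evades detection
```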

Making AI more privacy friendly

A recent branch of AI research called adversarial learning seeks to improve AI technologies so they’re less susceptible to such evasion attacks. For example, we have done some initial research on how to make it harder for malware, which could be used to violate a person’s privacy, to evade detection. One method we came up with was to add uncertainty to the AI models so the attackers cannot accurately predict what the model will do. Will it scan for a certain data sequence? Or will it run the sandbox? Ideally, a malicious piece of software won’t know and will unwittingly expose its motives.
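We can only gesture at that idea here, but a sketch of a randomised defence (with hypothetical placeholder checks, not our actual system) looks something like this:

```python
# Sketch of a randomised defence: the analysis applied to a suspicious
# program is drawn at random, so an attacker cannot tune their evasion
# to a single, predictable check. Both checks are hypothetical stubs.
import random

def scan_for_signature(program: bytes) -> bool:
    return b"\x10\x20\x30\x40" in program            # placeholder static check

def run_in_sandbox(program: bytes) -> bool:
    return len(program) > 0 and program[-1] == 0xFF  # placeholder dynamic check

def analyse(program: bytes) -> bool:
    """Pick a check at random, leaving the malware unsure what it faces."""
    check = random.choice([scan_for_signature, run_in_sandbox])
    return check(program)
```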

Another way we can use AI to improve privacy is by probing the vulnerabilities of deep neural networks. No algorithm is perfect, and these models are vulnerable because they are often very sensitive to small changes in their input data. For example, researchers have shown that a Post-it note added to a stop sign can trick an AI model into thinking it is seeing a speed limit sign instead. Subtle alterations like that take advantage of the way models are trained to reduce error. Those error-reduction techniques open a vulnerability that allows attackers to find the smallest changes that will fool the model.
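The textbook version of this attack follows the gradient of the model's error to find the small input change that most increases it, often called the fast gradient sign method. A minimal sketch on a toy linear classifier with made-up numbers, rather than a deep network:

```python
# Minimal sketch of a gradient-based adversarial perturbation on a
# linear classifier (the "fast gradient sign" idea), for illustration.
import numpy as np

w, b = np.array([1.5, -2.0, 0.5]), 0.1    # a toy "trained" model
x = np.array([0.3, -0.4, 0.8])            # an input it classifies correctly

def predict(v):
    return 1 / (1 + np.exp(-(w @ v + b)))  # probability of class 1

# The gradient of the loss for true label y=1 with respect to the input
# is (p - 1) * w; stepping along its sign nudges the model toward error.
eps = 0.5
grad = (predict(x) - 1.0) * w
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # confidence collapses after a small change
```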

These vulnerabilities can be used to improve privacy by adding noise to personal data. For example, researchers from the Max Planck Institute for Informatics in Germany have designed clever ways to alter Flickr images to foil facial recognition software. The alterations are incredibly subtle, so much so that they’re undetectable by the human eye.

The third way that AI can help mitigate privacy issues is by preserving data privacy when the models are being built. One promising development is called federated learning, which Google uses in its Gboard smart keyboard to predict which word to type next. Federated learning builds a final deep neural network from data stored on many different devices, such as cellphones, rather than one central data repository. The key benefit of federated learning is that the original data never leaves the local devices. Thus privacy is protected to some degree. It’s not a perfect solution, though, because while the local devices complete some of the computations, they do not finish them. The intermediate results could reveal some data about the device and its user.
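A bare-bones sketch of the averaging step at the heart of federated learning (our simplification; production systems such as Gboard's add secure aggregation and much more): each device takes a training step on its own data, and only the resulting model updates, never the data itself, travel to the server.

```python
# Bare-bones sketch of federated averaging: each device improves the
# shared model on its own local data, and only model updates leave
# the device. Real deployments add encryption and secure aggregation.
import numpy as np

global_weights = np.zeros(3)                      # the shared model
device_data = [                                   # (features, target) per device
    (np.array([1.0, 0.0, 2.0]), 1.0),
    (np.array([0.5, 1.0, 0.0]), 0.0),
]

def local_update(weights, x, y, lr=0.1):
    """One gradient step on the device's own data (never uploaded)."""
    error = weights @ x - y
    return weights - lr * error * x

# Each round: devices train locally, the server averages their weights.
for _ in range(10):
    local_models = [local_update(global_weights, x, y) for x, y in device_data]
    global_weights = np.mean(local_models, axis=0)

print(global_weights)
```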

Federated learning offers a glimpse of a future where AI is more respectful of privacy. We are hopeful that continued research into AI will find more ways it can be part of the solution rather than a source of problems.

—AP

This article is republished from The Conversation under a Creative Commons licence.
