Privacy and cybersecurity are converging. “It’s not just a coincidence that privacy issues dominated 2018,” writes Andrew Burt (chief privacy officer and legal engineer at Immuta) in his Harvard Business Review article “Privacy and Cybersecurity Are Converging. Here’s Why That Matters for People and for Companies.” “These events are symptoms of larger, profound shifts in the world of data privacy and security that have major implications for how organizations think about and manage both.”

SEE: A winning strategy for cybersecurity (ZDNet special report) | Download the free PDF version (TechRepublic)

Not a new concern

Burt’s concern is not new. Examples started appearing in 2009, when Carnegie Mellon researchers Alessandro Acquisti and Ralph Gross warned:

“Information about an individual’s place and date of birth can be exploited to predict his or her Social Security number (SSN). Using only publicly available information, we observed a correlation between individuals’ SSNs and their birth data and found that for younger cohorts the correlation allows statistical inference of private SSNs.”
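The first step of that inference is easy to illustrate: before the Social Security Administration randomized SSN assignment in 2011, the first three digits (the “area number”) were allocated by the state where the number was issued, so knowing someone’s birth state alone shrinks the search space dramatically. The sketch below uses a small, illustrative subset of the public allocation table; the ranges shown are assumptions for demonstration, not an authoritative dataset, and the real attack combines this with date-of-birth correlations.

```python
# Toy illustration of the first inference step in Acquisti and Gross's work:
# pre-2011, the SSN area number (first three digits) was tied to the issuing
# state. The ranges below are a small illustrative subset of the public
# allocation table, included here as assumptions for demonstration only.
AREA_NUMBER_RANGES = {
    "New Hampshire": (1, 3),
    "Vermont": (8, 9),
    "New York": (50, 134),
    "California": (545, 573),
}

def candidate_area_numbers(birth_state):
    """Return plausible first-three-digit values for an SSN issued in
    the given state (pre-2011 allocation scheme)."""
    lo, hi = AREA_NUMBER_RANGES[birth_state]
    return [f"{n:03d}" for n in range(lo, hi + 1)]

# Publicly knowable birth data collapses 1,000 possible prefixes to a handful.
print(candidate_area_numbers("Vermont"))        # ['008', '009']
print(len(candidate_area_numbers("New York")))  # 85
```

Even this crude lookup turns an apparently private nine-digit number into something partly derivable from public records, which is the essence of the researchers’ warning.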

Online anonymity is also threatened by the power of AI and machine learning. In the research paper On the Feasibility of Internet-Scale Author Identification, Arvind Narayanan et al. demonstrate how the author of an anonymous document can be identified using machine-learning techniques that associate language patterns in sample texts (unknown author) with language patterns (known author) in a compiled database.
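The core idea behind such stylometric matching can be sketched with simple word-frequency profiles and cosine similarity. Narayanan et al.’s actual system uses far richer linguistic features and classifiers at internet scale, so the following is only an illustrative toy, with a hypothetical two-author corpus:

```python
import math
from collections import Counter

def profile(text):
    """Build a normalized word-frequency vector for a text sample."""
    words = text.lower().split()
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(p[w] * q.get(w, 0.0) for w in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def attribute(anonymous_text, known_samples):
    """Return the known author whose sample is stylistically closest
    to the anonymous text."""
    anon = profile(anonymous_text)
    return max(known_samples,
               key=lambda a: cosine(anon, profile(known_samples[a])))

# Hypothetical writing samples from known authors.
known = {
    "alice": "the quick brown fox jumps over the lazy dog the fox",
    "bob":   "security and privacy converge as inference threatens anonymity",
}
print(attribute("a brown fox jumps over a lazy dog", known))  # alice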

Ten years ago, the ability to compile and make sense of disparate databases was limited. “And it was a world in which privacy and security were largely separate functions, where privacy took a backseat to the more tangible concerns over security,” explains Burt. “Today, however, the biggest risk to our privacy and our security has become the threat of unintended inferences, due to the power of increasingly widespread machine-learning techniques.”

What is unintended inference?

In the research paper A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, co-authors Sandra Wachter and Brent Mittelstadt of the Oxford Internet Institute at University of Oxford describe how the concept of unintended inference applies in the digital world. The researchers write that artificial intelligence (AI) and big data analytics are able to draw non-intuitive and unverifiable predictions (inferences) about behaviors and preferences:

“These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Concerns about algorithmic accountability are often actually concerns about the way in which these technologies draw privacy invasive and non-verifiable inferences about us that we cannot predict, understand, or refute.”
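One way to see how a sensitive attribute that was never collected directly can still be inferred is with a tiny, Laplace-smoothed naive-Bayes-style score over innocuous features. Everything below, the feature names, the labels, and the training records, is hypothetical and included only to make the mechanism concrete:

```python
import math
from collections import defaultdict

# Hypothetical training data: (innocuous observed features, sensitive label).
# The point is that the label is never asked for; it leaks through correlation.
records = [
    ({"fitness_app", "meal_planner"}, "expecting"),
    ({"fitness_app", "crib_reviews"}, "expecting"),
    ({"news_app", "stock_tracker"}, "not"),
    ({"news_app", "meal_planner"}, "not"),
]

def infer(features, records):
    """Score each label by Laplace-smoothed feature likelihoods and
    return the most probable sensitive label for the given features."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for feats, label in records:
        totals[label] += 1
        for f in feats:
            counts[label][f] += 1
    best, best_score = None, float("-inf")
    for label, total in totals.items():
        score = sum(math.log((counts[label][f] + 1) / (total + 2))
                    for f in features)
        if score > best_score:
            best, best_score = label, score
    return best

# Two innocuous data points are enough to imply a sensitive attribute.
print(infer({"crib_reviews", "meal_planner"}, records))  # expecting
```

This is the unintended-inference problem in miniature: no single feature is sensitive, and the subject never consented to the conclusion, yet the combination is statistically revealing.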

What does this mean to businesses?

There are plenty of examples where the lack of online privacy cost the targeted business a great deal of money; Facebook is a prominent example. From a July 2018 article in The Guardian by Rupert Neate: “More than $119bn (£90.8bn) has been wiped off Facebook’s market value, which includes a $17bn hit to the fortune of its founder, Mark Zuckerberg, after the company told investors that user growth had slowed in the wake of the Cambridge Analytica scandal.”

SEE: Facebook data privacy scandal: A cheat sheet (TechRepublic)

Granted, the Facebook example is an extreme one, but it does not take much effort to imagine situations that could affect even the smallest of businesses. For example, a competitor could assemble a rival proprietary application by piecing together data a company outsourced to various third-party vendors.

No simple solution

Burt points out a rather chilling consequence of unintended inferences: “Because the threat of unintended inferences reduces our ability to understand the value of our data, our expectations about our privacy–and therefore what we can meaningfully consent to–are becoming less consequential. Being surprised at the nature of the violation, in short, will become an inherent feature of future privacy and security harms.”

To further his point, Burt refers to all the people affected by the Marriott breach and the Yahoo breach, explaining that, “The problem isn’t simply that unauthorized intruders accessed these records at a single point in time; the problem is all the unforeseen uses and all the intimate inferences that this volume of data can generate going forward.”

SEE: Privacy policy (Tech Pro Research)

Responsibility for cybersecurity and privacy blurs

Considering cybersecurity and privacy two sides of the same coin is a good thing, according to Burt; it’s a trend he feels businesses, in general, should embrace. “From a practical perspective, this means legal and privacy personnel will become more technical, and technical personnel will become more familiar with legal and compliance mandates,” suggests Burt. “The idea of two distinct teams, operating independent of each other, will become a relic of the past.”

Sandra Wachter agrees with Burt, writing in the Oxford paper that legal constraints are needed around the ability to perform this type of pattern recognition.

SEE: Hiring kit: GDPR data protection compliance officer (Tech Pro Research)

Redefine privacy

Back in 1928, Supreme Court Justice Louis Brandeis defined privacy as “the right to be let alone.” Burt concludes his commentary by suggesting that, “Privacy is now best described as the ability to control data we cannot stop generating, giving rise to inferences we can’t predict.”