
SCITECH SPECIAL

Big technology’s data ethics

Scitech Report

8 April, 2019 12:00 AM


You may not have noticed it, but there’s a feeding frenzy under way in the tech world. Traditionally, such frenzies are driven by greed.

This one, interestingly, is driven by fear, though you’d never guess that from its cover story, which is that it’s all about “ethics”, specifically the ethics of using (and, more commonly, abusing) personal data. Suddenly, wherever you look, data ethics has become the obsession du jour of governments, tech companies and regulators.

Everyone and his dog is now publishing data-ethics guides, codes and pious exhortations. The Department for Digital, Culture, Media and Sport, for example, is setting up a Centre for Data Ethics and Innovation.

Consortiums of tech companies have set up initiatives such as the Partnership on AI (motto: “The best way to ensure a good future for AI is to invent it together”). Google has published a set of “AI principles” and the other day followed up with an external advisory council “to help advance the responsible development of AI”. And so on.

I’ve been tracking this obsession for a while, tagging every instance of it that I found on the web with the software I use for keeping track of memes. At first, I thought that the accumulating stack of references was just a reflection of journalistic scepticism and my suspicious temperament.

But it turns out that I was not alone in noticing this trend. No less a source than Gartner, the technology analysis company, has also sussed it, logging “data ethics” as one of its top 10 strategic trends for 2019.

Given that the tech giants, which have been ethics-free zones from their foundations, owe their spectacular growth partly to the fact that they have, to date, been entirely untroubled either by legal regulation or scruples about exploiting taxation loopholes, this Damascene conversion is surely something to be welcomed, is it not? Ethics, after all, is concerned with the moral principles that affect how individuals make decisions and how they lead their lives.

That charitable thought is unlikely to survive even a cursory inspection of what is actually going on here.

In an admirable dissection of the fourth of Google’s “principles” (“Be accountable to people”), for example, Prof David Watts reveals that, like almost all of these principles, it has the epistemological status of pocket lint or those exhortations to be kind to others one finds on evangelical websites.

Does it mean accountable to “people” in general? Or just to Google’s people? Or to someone else’s people (like an independent regulator)? Answer comes there none from the code.

Warming to his task, Prof Watts continues: “If Google’s AI algorithms mistakenly conclude I am a terrorist and then pass this information on to national security agencies who use the information to arrest me, hold me incommunicado and interrogate me, will Google be accountable for its negligence or for contributing to my false imprisonment? How will it be accountable? If I am unhappy with Google’s version of accountability, to whom do I appeal for justice?”

Events, as it happens, have already overtaken Google’s initiative. The independent group set up to oversee the company’s artificial intelligence efforts has been shut down less than a fortnight after it was launched.

The Advanced Technology External Advisory Council (ATEAC) was due to look at the ethics around AI, machine learning and facial recognition.

One member resigned and there were calls for another to be removed.

The debacle raises questions about whether firms should set up such bodies.

Google told the BBC: "It's become clear that in the current environment, ATEAC can't function as we wanted.

"So we're ending the council and going back to the drawing board. We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."

There had been an outcry over the appointment of Kay Coles James, who is president of conservative thinktank The Heritage Foundation. Thousands of Google employees signed a petition calling for her removal, over what they described as "anti-trans, anti-LGBTQ and anti-immigrant" comments.

At the weekend, board member Prof Alessandro Acquisti resigned, tweeting: "While I'm devoted to research grappling with key ethical issues of fairness, rights and inclusion in AI, I don't believe this is the right forum for me to engage in this important work."

The panel had been announced at a conference at the Massachusetts Institute of Technology, and had planned to meet four times in 2019.

One of the eight members, Joanna Bryson, a professor at the University of Bath, expressed anger at Google’s decision to pull the plug.

Kay Coles James, one of the luminaries chosen for the council, is president of the Heritage Foundation, an influential rightwing thinktank that played a significant role in helping Trump identify suitable candidates for his White House staff.

James, for her part, has fought against equal-rights laws for gay and transgender people, a fact that prompted an open letter objecting to her membership of the council. “In selecting James,” the authors write, “Google is making clear that its version of ‘ethics’ values proximity to power over the wellbeing of trans people, other LGBTQ people and immigrants.”

Google’s half-baked “ethical” initiative is par for the tech course at the moment. Which is only to be expected, given that it’s not really about morality at all. What’s going on here is ethics theatre modelled on airport-security theatre – ie security measures that make people feel more secure without doing anything to actually improve their security.

The tech companies see their newfound piety about ethics as a way of persuading governments that they don’t really need the legal regulation that is coming their way. Nice try, boys (and they’re still mostly boys), but it won’t wash.

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council, as reported above – the likely explanation is that the body collapsed under the weight of its own manifest absurdity.

