Last week’s call by Apple boss Tim Cook for GDPR-style regulation – and his outspoken criticism of what he called the “data-industrial complex” – has been welcomed by many, though others questioned the ethics of Apple’s own business models.
“What kind of world do we want to live in?” That was the fundamental question put by Apple chief executive Tim Cook last week, as he took aim at business practices that undermine consumer privacy. “If we get this wrong, the dangers are profound,” he said.
In an impassioned speech to privacy regulators in Brussels, Cook said modern technology’s application has created a “data-industrial complex”, where our personal information is “weaponised against us with military efficiency”.
“Platforms and algorithms that promised to improve our lives can actually magnify our worst human tendencies,” he warned. “Rogue actors and even governments have taken advantage of user trust to deepen divisions, incite violence, and even undermine our shared sense of what is true and what is false. This crisis is real. It is not imagined, or exaggerated, or crazy.”
The ferocity of the attack by Cook on big tech’s data-grabbing business models will have taken some by surprise – but it was refreshing to hear the head of the world’s most valuable technology company articulate the anxiety that many of us have about the way big tech exploits our data.
This year’s Cambridge Analytica/Facebook data privacy scandal shook the tech industry, and other security breaches have raised anxiety further. These episodes have led to regulators on both sides of the Atlantic putting an unprecedented focus on data privacy and calling big tech firms like Facebook to account.
Quite how that will translate into regulatory action is unclear, but the introduction this year of Europe’s GDPR privacy law has been widely welcomed, and many would like to see similar legislation in the US, the world’s biggest consumer marketplace.
Cook welcomed GDPR’s “successful implementation” in Europe, and said it was time for the rest of the world to follow its lead. “We at Apple are in full support of a comprehensive federal privacy law in the United States,” he said.
In his speech, Cook pointed to four areas he said needed regulation: the right to have personal data minimised, the right for users to know what data is collected about them, the right to access that data, and the right for that data to be kept secure.
Cook also took aim at the ways artificial intelligence is being deployed. “At its core this [AI] technology promises to learn from people individually to benefit us all. But advancing AI by collecting huge personal profiles is laziness, not efficiency.”
“For artificial intelligence to be truly smart it must respect human values – including privacy. If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility – it is a responsibility.”
Welcoming the call, Babylon health founder Ali Parsa said: “I couldn’t agree more with Tim Cook on this. Those of us involved in the creation of artificial intelligence must take the responsibility of respecting human values as the basis of everything we shape extremely seriously. No ifs, no buts.”
Critics of Cook pointed to Apple’s own business models – with Alex Stamos, Facebook’s former chief security officer, attacking the company’s banning of customers in China from installing a VPN to get around censorship there.
“I agree with almost everything Tim Cook said in his privacy speech today, which is why it is so sad to see the media credulously covering his statements without the context of Apple’s actions in China,” Stamos tweeted.
At DigitalAgenda’s Power & Responsibility Summit this month, writer Andrew Keen said big tech’s data-driven business models had created a “surveillance economy”.