A new independent institute is being established to study the ethics of data, artificial intelligence and algorithms, the UK foundation behind the plan confirmed last week. Given recent events, the news could not be more timely. Julian Blake reports.
The new £5m Ada Lovelace Institute is being created by the Nuffield Foundation “to examine profound ethical and social issues arising from the use of data, algorithms, and artificial intelligence, and to ensure they are harnessed for social wellbeing”.
The foundation says the new institute will:
* convene diverse voices to build a shared understanding of the ethical questions raised by the application of data, algorithms, and artificial intelligence
* initiate research and build the evidence base on how these technologies affect society as a whole, and different groups within it
* promote and support ethical practices that are deserving of public trust.
The institute – named after the 19th-century mathematician and computing pioneer – “will act as an independent voice, speaking on behalf of the public interest and society, informing thinking of governments, industry, public bodies and civil society organisations, in the UK and internationally”.
The foundation has spent the past six months convening a partnership of leading organisations to “address the need for agreed ethical frameworks and codes of practice for the use of new technologies, which have developed rapidly over recent years”.
Momentum has been building around the ethics of tech, as government, industry and academic bodies alike express ever-more-vocal concerns, and public trust continues to be tested by major data protection stories.
In its November budget, the government confirmed a new £9m Centre for Data Ethics and Innovation (CDEI) to “set standards for the use and ethics of AI and data”.
As the government began its search for a leader of what will be an interim body in January, culture secretary Matt Hancock said the new centre would “make sure we have a governance regime which fully supports both ethical and innovative uses of these technologies. It will deliver its work through extensive engagement with industry, regulators, civil society and the public.”
The Ada Lovelace Institute is expected to complement the work of regulators and the new CDEI.
Nuffield Foundation trustee Dame Colette Bowe said: “This month we have seen the first pedestrian fatality in a self-driving car crash, leading to calls for testing programmes on public roads to be suspended. And revelations about Cambridge Analytica’s alleged use of Facebook data have heightened public concern about how data is used, with serious implications for trust in digital technologies and industry.
“These examples show that in many cases, public scrutiny of the use of data and automated technologies only occurs when something ‘goes wrong’. Valid questions are being asked about data rights, as well as about consent, public interest and what constitutes an ethical approach. The Ada Lovelace Institute will work with its partners to ensure we have these conversations before a critical incident, with the aim of developing codes of behaviour for the application of innovations of data and AI that are deserving of public trust.”
Government director general for digital Matthew Gould last week welcomed the Nuffield Foundation move. “With this new Ada Lovelace Institute, and the Govt’s new Centre for Data Ethics & Innovation, the UK is establishing itself as a leader in #AIethics,” he tweeted.
Other recent work on tech ethics has included the Royal Statistical Society’s Data Manifesto, the Science and Technology Committee Report on the Big Data Dilemma and last year’s Royal Society and British Academy Data management and use: Governance in the 21st Century report.
Contributing partners to the new body include The Alan Turing Institute, the Royal Statistical Society, the Nuffield Council on Bioethics, the Wellcome Trust, the Royal Society, the British Academy, techUK and Omidyar Network’s Governance & Citizen Engagement Initiative.
In a further development, MPs Darren Jones (Lab) and Lee Rowley (Con) have bridged the party political divide to establish a new parliamentary commission on technology ethics. Its first public meeting is on 17 April.
The work of the commission is to be directed by BCS, the Chartered Institute for IT – which itself is making the ethics of AI a priority under its new chair Chris Rees. “IT has to be ethical if it is to be good for society,” Rees said last month.