A week after US politicians engaged with Mark Zuckerberg, UK counterparts in both houses of parliament confirmed their determination to keep the heat in the debate about tech ethics on this side of the pond. But is new legislation any closer?
After US politicians had their turn last week, the UK parliament this week showed its determination to keep the debate on tech ethics alive. Aside from the ongoing select committee inquiry into fake news – which has generated some spectacular recent headlines on Cambridge Analytica – there was more activity this week from both MPs and peers.
On Monday, the House of Lords select committee on artificial intelligence published a report, AI in the UK: Ready, Willing and Able?, which concluded that the UK is in a strong position to be a world leader in AI development – but said the best way to achieve this is to “put ethics at the centre of AI’s development and use”.
The Lords said that a new industry-led AI council could deliver more transparency in artificial intelligence and help ensure that the prejudices of the past are not built into automated systems. But they stopped short of calling for AI-specific laws or a state-backed AI regulator.
The Lords report was published just ahead of a scoping event for a new parliamentary commission on technology ethics, led by MPs Darren Jones and Lee Rowley. That brought together big tech firms like Microsoft and Google with experts from the Oxford Internet Institute and professional bodies like BCS and techUK to assess the most pressing issues in tech ethics today.
Jones set out five possible themes for debate – trust, bias and diversity, public understanding, data rights and the boundaries of use around machine learning – and said a commission would be able to influence the role of the new centre for data ethics and innovation, unveiled in the government’s industrial strategy in November.
That centre was on the agenda for the Lords AI report too. High on the list of the committee’s recommendations was work for a new AI council to increase transparency, with a voluntary mechanism to inform consumers when AI is being used to make “significant or sensitive decisions”. The industrial strategy also unveiled the AI council idea, alongside a new government office for AI, to offer leadership for the sector.
None of these bodies would have regulatory power as currently envisaged. Though the Lords committee urged clarity on the roles and remits of the new institutions, it held back from calling for a full-scale AI regulator. “Blanket AI-specific regulation, at this stage, would be inappropriate,” said the committee.
Instead, it called for a new “cross-sector AI code”, guided by five overriding ethical principles, to be applied nationally and internationally:
- Artificial intelligence should be developed for the common good and benefit of humanity
- Artificial intelligence should operate on principles of intelligibility and fairness
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
This new voluntary approach could be used alongside existing legislation, including the new data protection bill, GDPR and the power of the information commissioner.
In a wide-ranging 180-page report, the Lords committee warned that “the prejudices of the past must not be unwittingly built into automated systems”. To counter this, it said the government should incentivise the development of new approaches to the auditing of datasets used in AI, and encourage greater diversity in the training and recruitment of AI specialists.
The Lords said children should be adequately prepared for working with and using AI – urging that the ethical design and use of AI become an integral part of the curriculum.
The committee also questioned the ability of current legislation to cope when machines go wrong. “It is not clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users,” said the committee.
On data privacy, “individuals need to be able to have greater personal control over their data,” said the committee, urging that “the monopolisation of data by big technology companies must be avoided”. It said the government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
On jobs, the committee said many jobs will be enhanced by AI, though many will disappear. “Many new, as yet unknown jobs will be created,” said the report, with significant government investment in skills and training needed to mitigate AI’s negative effects.
Committee chair Lord Clement-Jones said: “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.
“The UK contains leading AI companies, a dynamic academic research culture, and a vigorous startup ecosystem, as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.”
Clement-Jones added: “AI is not without its risks, and the adoption of the principles proposed by the committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.
“We want to make sure that this country remains a cutting-edge place to research and develop this exciting technology. However, startups can struggle to scale up on their own. Our recommendations for a growth fund for SMEs and changes to the immigration system will help to do this.”