AI, algorithmic and automation trust and transparency
#3 / June 15, 2021
Eric Schmidt is wrong: AI transparency is feasible, and essential
Former Google CEO Eric Schmidt grabbed the headlines at Politico’s recent AI Summit by lambasting the EU’s ‘regulation-first’ approach to artificial intelligence and the mandatory transparency requirements in the European Commission’s draft regulation.
Transparency, he argues, would be ‘very harmful to Europe’ because even the designers and engineers who build AI systems don’t understand how they work. Schmidt is correct that explainability is technically tough. This is particularly true of neural networks, which are notoriously tricky to explain.
But neural networks, whilst increasingly popular and powerful, are only one of many types of algorithm, and many of the others can be explained relatively simply. Decent progress is also now being made by Google, IBM and others to make algorithms technically explainable.
Furthermore, transparency takes many forms beyond communicating how an AI model works. There is the formal disclosure to official intermediaries, regulators and auditors of how data is sourced and used, and of known risks.
There’s the need for clear, accurate, honest communication with users – an unfortunate rarity in the world of AI and algorithms. The need for simple, accessible, and responsive feedback and complaint systems.
There’s also the potential for involving users and other stakeholders in the design and deployment of AI systems. The potential for AI and algorithmic responsibility and trustworthiness ‘nutrition labels’. And the growing importance of ESG reporting.
Schmidt may be primarily concerned with geopolitics and protecting the interests of US business. Fair enough. But he should appreciate that accountability and transparency are mutually reinforcing imperatives in today’s intrinsically connected and volatile environment.
Trust and transparency reads, listens and watches
A new report from the Australian Human Rights Commission recommends that citizens be told when a decision about them has been made by an algorithm, and that they have the right to appeal it. Exactly | link
Security, privacy, ethics, robustness, reliability, and resiliency constitute the core pillars of the trustworthiness of digital identity systems, argues the Alan Turing Institute. Though transparency is barely mentioned | link (pdf)
Soft law is likely to be the main form of AI governance over the next few years. Of the 634 programmes across the world studied by academics and researchers at Arizona State University, general mentions of AI transparency, bias/discrimination and literacy dominate. But to what extent do they translate into real-world actions? | link
Accountability requires two forms of transparency: ‘control’ transparency and ‘entitlement’ transparency, according to AlgorithmWatch in its recent paper Towards accountability in the use of Artificial Intelligence for Public Administrations. Hmmm ... can communication really be ‘controlled’? | link (pdf)
The Markup has published a useful quiz on so-called dark patterns, the shady and widespread practice of cajoling users into making unexpected and unwanted choices online. Test and educate yourself, your colleagues and friends | link
Whether through ignorance or malice, policymakers’ sweeping platitudes about regulating artificial intelligence may end up harming citizens more than protecting them, argues Justin Sherman in WIRED. Regrettably, political misunderstanding and hype are almost as prolific as their marketing and PR cousins | link
Latest AI incidents and controversies
The UK’s NHS quietly launched a programme to collect the medical histories of 55 million British citizens for largely unspecified uses by unspecified third parties. It has since delayed the date by which patients may opt out.
Google and HCA Healthcare struck a deal to develop healthcare algorithms using patient data, sparking privacy concerns. Echoing Google and Ascension’s 2019 ‘Project Nightingale’, the WSJ exposed the deal, prompting the two companies to confirm its existence.
US healthcare data company Epic’s Deterioration Index, widely used across the US healthcare system, is called out for its opaque, proprietary nature and inadequate peer review of data and analysis.
US Customs and Border Protection’s new CBP One mobile app for asylum seekers, which uses facial recognition, geolocation and other technologies, raises privacy and surveillance concerns.
Image and video search results for ‘Tank Man’ on Microsoft’s Bing search engine mysteriously vanished on the anniversary of the Tiananmen Square massacre. Microsoft put it down to ‘human error’; much of the media called it censorship.
The Security Industry Association expelled Chinese technology company Dahua Technology for ethics violations. Dahua has been found to have helped Beijing surveil Uyghurs and to have rigged fever cameras, among other things.
A UN report’s mention of a Turkish drone attack in Libya prompted reports of the world’s first known fully autonomous lethal attack. Others questioned the degree of autonomy involved.
Apple and its contractor SIS are being taken to court by a student who was misidentified using facial recognition, leading to his wrongful arrest.
The RCMP is found to have broken Canadian privacy law by using Clearview AI facial recognition software. The Mounties had originally denied using the system.
Visit the AIAAIC Repository for details of these and other AI, algorithmic and automation-driven incidents and controversies
AIAAIC news and views
Attention AIAAIC Repository users: we are running a survey to find out how to make the repository better meet your requirements and expectations. It should take no more than 5 minutes. Thank you. | link
Details of how the AIAAIC Repository is run are now publicly available on the AIAAIC’s website, including how incidents and controversies are identified, assessed and approved. Let us know if anything is unclear or can be strengthened. | link
Incidents and controversies can be submitted to the AIAAIC Repository here; those being assessed are now listed on the AIAAIC repository ‘Pending’ sheet/tab. | link
AIAAIC Alert is written by Charlie Pownall. Feedback and questions to info@aiaaic.org