AI, algorithmic and automation trust and transparency
#1 / June 1, 2021
Most companies make little or no effort to make AI systems transparent
In a new study, FICO and Corinium find that nearly 70% of the 100+ USD 100m+ revenue companies surveyed on how they are operationalising AI are unable to explain how their AI models work. More concerning, 65% say they make little or no effort to make their systems transparent and accountable.
Furthermore, 78% said they were “poorly equipped to ensure the ethical implications of using new AI systems” and “have problems getting executive support for prioritizing AI ethics and responsible AI practices.”
The reluctance to communicate transparently and openly with external audiences stems from a variety of concerns – some legitimate, others little more than convenient pretexts.
The most common concerns involve the loss of intellectual property and potential or actual competitive advantage; greater vulnerability to cyberattacks and to gaming by users, trolls and activists; and the protection of user privacy.
There are also concerns that providing public information about how their systems work and setting out their limitations and risks exposes companies more to operational, legal and reputational risks.
This information may include the sources and use of data, the real purpose of their technologies and their primary and secondary intended impacts (such as productivity efficiencies and job losses), how bias and other risks have been mitigated, the scope for dual or misuse, and the degree of human oversight.
With bias difficult, if not impossible, to eliminate, misinformation, harassment and other dual uses rampant, and the secondary impacts of RPA and other robotic programmes frequently glossed over or hidden, it is perhaps hardly surprising that most companies are reluctant to manage ethical risks or say much about their systems.
In staying silent, however, companies risk appearing unconcerned about the impact of their activities and more preoccupied with the risks to themselves than with those to the users or targets of their products and services.
Transparency laggards exist in every sphere and organisations developing and deploying AI are little different.
But with users able to complain publicly and switch services easily, and with mandatory AI transparency legislation being proposed in the US Congress and the EU, organisations are likely to have to manage and disclose AI risks and communicate a good deal more openly and genuinely.
Download The State of Responsible AI report
Trust and transparency reads, listens and watches
Trust ‘will become even more important the less we know about our technology’ argues the US National Institute of Standards and Technology in its Trust and Artificial Intelligence report, intended to kick off a public discussion about trust in AI
Big tech must explain to users how they use algorithms and what information they use to run them, according to Senators Ed Markey and Doris Matsui’s proposed Algorithmic Justice and Online Platform Transparency Act of 2021
In the wake of several high-profile controversies, the UK government has published its Ethics, Transparency and Accountability Framework for Automated Decision-Making to improve AI literacy across the public sector
The EU’s risk and trust-based approach to AI regulation is contrasted with lighter touch approaches by Travers Smith lawyers in Regulating AI - Which approach will prevail?
Carnegie Mellon researchers conclude tech companies have a moral duty to explain how their algorithms make decisions
Reform’s Sebastian Rees argues transparency is essential as we enter the era of ‘government by algorithm’
Twitter analyses, fixes and discloses image cropping algorithm bias
Latest incidents and controversies
The BookCorpus dataset is found to infringe copyright and contain multiple biases
Insurance company Lemonade is forced to deny it uses facial or emotion recognition to assess and deny insurance coverage or claims
A Georgetown CSET study concludes OpenAI’s GPT-3 is easily used for short-form mis- and disinformation
A whistleblower reveals to BBC Panorama that Beijing is testing an emotion detection system on Uyghurs
Google’s Derm Assist dermatology app to help users identify skin conditions raises bias and privacy concerns
An Irish photographer manipulated images of Cambodian torture victims, prompting widespread outrage
Activist group GLAAD calls out Facebook, Twitter, YouTube, Instagram and TikTok for discrimination against LGBTQ people
Citizen app’s CEO personally ordered a manhunt for an individual wrongfully accused of starting a wildfire; the company has also been found to be testing a private security force
Visit the AIAAIC Repository for details of these and other AI, algorithmic and automation-driven incidents, issues and controversies
AIAAIC news and views
AIAAIC is drafting its manifesto for AI and algorithmic transparency. It will feed into AIAAIC’s strategy, plan and ongoing projects, and will help make the case for AI transparency and openness to government and business across the world. We would love to hear your comments | link
AIAAIC founder Charlie Pownall sets out the mandatory transparency requirements of the EU’s draft AI regulation in INFLUENCE magazine | link
AIAAIC Alert is written by Charlie Pownall. Feedback and questions to info@aiaaic.org