AI, algorithmic, and automation trust and transparency
#7 | May 5, 2023
Prising open the Generative AI black box
Generative AI is heralded both as a massive step forward for humanity and as the end of society as we know it. Rarely has a technology been met with such equal measures of acclaim and concern. Certainly, the benefits are manifold and hardly need to be repeated here.
On the other hand, ChatGPT, Bard et al have catapulted the risks and harms of large language models (LLMs), generative AI, and of AI as a whole, into the public domain and have persuaded the US, EU, China and other governments to swing into action.
The transparency of generative AI systems is cited as a fundamental obligation by most governments. A read-out of yesterday's White House meeting with OpenAI, Anthropic, Google and Microsoft singled out:
'the need for companies to be more transparent with policymakers, the public, and others about their AI systems; the importance of being able to evaluate, verify, and validate the safety, security, and efficacy of AI systems; and the need to ensure AI systems are secure from malicious actors and attacks.'
A black box in the true sense of the term, ChatGPT is most prominently in the regulatory cross-hairs, and it is encouraging that OpenAI, along with the developers of several other prominent LLMs, has bowed to US government arm-twisting and agreed to have their systems independently assessed at this year's DEF CON hacking conference in Las Vegas in August.
Red teaming involving thousands of hackers, rather than the 40 used by OpenAI to assess ChatGPT, is an intriguing model, and one to watch carefully. It may also indicate that the evolving AI audit industry has a way to go before it gains the trust of the White House.
AI and algorithmic incidents and controversies
A handful of notable recent additions to the AIAAIC Repository:
German magazine Die Aktuelle claimed an exclusive interview with former F1 driver Michael Schumacher. It didn't go down well | link
Levi Strauss was called out for claiming that it would use AI-generated models to supplement and diversify its human models. Cue diversity-washing backlash | link
An AI writing detection tool developed by US-based Turnitin was hauled over the coals for claiming its system was 89 percent accurate. A Washington Post investigation found it is inaccurate over half the time | link
ChatGPT was used to generate a fake announcement that the Hangzhou government in China was to lift its number-plate driving restrictions, causing mass confusion and a police investigation | link
A voice simulator launched by London-based ElevenLabs quickly ran into trouble for the ease with which it could generate racist, transphobic, homophobic, anti-semitic, violent, and offensive audio content in other people's names | link
Visit the AIAAIC Repository for details of these and other AI, algorithmic and automation-driven incidents and controversies
Report AI, algorithmic, and automation incidents
AIAAIC news and views
AIAAIC is steadily expanding and consolidating the spreadsheet and web versions of the AIAAIC Repository, which now counts over 1,000 entries.
Recent projects include incorporating - with kind permission - GW Law's AI Litigation Database and Exposing.AI's investigation into facial recognition datasets, and developing a list of fatalities connected with Tesla Autopilot (accessible to Premium Members as separate hidden sheets).
Usage of the AIAAIC website has grown rapidly and now averages approximately 90,000 unique users a month, driven largely by incidents involving deepfakes and other forms of mis- and disinformation, generative AI, and gaming.
A number of volunteers have signed up to expand the AIAAIC Repository's coverage in the so-called 'global south', amongst other things. We look forward to introducing them shortly.
AIAAIC in the news
Stanford University's 2023 AI Index Report incorporates headline AIAAIC Repository data to show a massive increase in AI-driven incidents | link
The US National Institute of Standards and Technology (NIST) references the AIAAIC Repository as a safety, validity & reliability resource in its AI Risk Management Framework Playbook | link (pdf)
The UK's Ada Lovelace Institute named the AIAAIC Repository a 'best practice' open database for AI ethical review research | link
AIAAIC founder Charlie Pownall was quoted by Kyodo News on the merits and likelihood of supranational regulation of AI, marking Japan's G7 digital ministers' meeting | link
AIAAIC Alert is written by Charlie Pownall. Feedback and questions to info@aiaaic.org