AIAAIC Alert #12
The ChatGPT effect, Netherlands algorithmic risk report, melting eggs, human rights and ethics research mentions
AI, algorithmic, and automation trust and transparency
#12 | October 2, 2023
The ChatGPT effect
To the dismay of techno-solutionists and optimists, a number of recent research studies suggest general public concerns about AI are increasing and trust, such as it is or was, is on the wane.
Assuming these studies are accurate, what’s driving this dent in confidence?
The rapid acceleration in the development of massive, powerful new systems with apparently scant regard for their actual or potential negative impacts on privacy, equality, democracy, jobs, and the environment is surely one factor.
The gap between the rhetoric and reality about safety and ethics almost certainly doesn’t help, and may well contribute to an even larger gap between awareness and understanding of the opportunities/benefits and limitations/risks.
There’s the fear of catastrophic risks such as robot warfare and human extinction. Primarily aimed at politicians but in recent months seemingly bandied about like confetti to anyone willing and able to listen, these risks are likely to have seeped deeper into public consciousness.
And then there’s the ChatGPT effect. As recently noted by the Wall Street Journal, OpenAI’s consumer-friendly chatbot is seen - rightly or wrongly - to have almost single-handedly exposed many of the day-to-day risks and harms associated with AI to the general public.
(AIAAIC has documented dozens of incidents and controversies driven by and associated with ChatGPT - far more than for Google Bard, Midjourney, or any other foundation model or generative AI system).
Like it or not, ChatGPT has become the de facto bellwether for the AI industry and for big tech. How it works and is seen to work, and how OpenAI and its leadership behave and are seen to behave, will frame views about the rest of the industry for the foreseeable future.
We will be watching with interest, and recommend you do too.
Recommended reads, listens, and watches
The ISO and IEC publicly shared their 22989 foundation standard. The standard defines 100+ AI concepts, and provides guidance on NLP, computer vision, and other technologies. Useful, though a fairly narrow set of terms.
In Compute and AI, AI Now Institute’s Jai Vipra and Sarah Myers West set out the environmental, competition, and geo-political dangers of the compute arms race, and recommend a series of policy interventions. Further evidence that the AI business model as a whole is increasingly under the microscope.
The UK’s Competition & Markets Authority published a punchy initial report on the impact of AI Foundation Models on competition and consumer protection. Notable risks include price collusion, fraud, and disinformation, the CMA reckons.
Also in the UK, the House of Commons Science, Innovation and Technology Committee took a swipe at the UK government’s pro-innovation approach to AI in a detailed report on AI governance. The government has two months to respond. Neat timing ahead of the UK’s AI Safety Summit in early November; will the UK’s pro-innovation stance withstand the heat?
The Netherlands’ DPA highlighted the risks of chatbots and other emerging technologies in its inaugural Algorithm Risk Report Netherlands, and called on the Dutch government to place more focus on controlling high-risk AI systems. A sensible, clear, and timely report.
Meta’s OpenLoop ‘regulatory innovation’ programme published (pdf) results of research on EU AI Act Article 52(a) AI system transparency obligations, and set out preliminary compliance insights.
The Ada Lovelace Institute published DNA.I, a report exploring the use of AI in genomics. The report suggests areas for immediate exploration as ‘the issues posed by the…technologies become harder to predict, more complex and more numerous.’ AIAAIC currently excludes genomics from its coverage, but an area to watch.
AI and algorithmic incidents and controversies
September 2023 additions to the AIAAIC Repository. Entries are categorised as Incidents and occurred in 2023 unless otherwise stated. Click here for classifications and definitions.
Rikers Island prisoner risk classification system increases violence 50% (2015)
State Farm fraud detection system discriminates against Black homeowners (2022)
'Racist' Uber Eats facial ID check gets Pa Edrissa Manjang fired
Z-Library shadow library (data)
Walgreens fails to gain customer facial recognition consent (2020)
Instagram offers Xanax, ecstasy, opioid pipeline to kids (2021)
Allocation algorithm wrongly places thousands of Italian teachers (2016)
AI unmasks anonymous chess players (issue, 2022)
MTV Lebanon uses deepfakes to commemorate bomb victims (2021)
Deepfake video alleges France opposes Mali military junta (2021)
Systems highlighted on the AIAAIC website during September 2023.
Visit the AIAAIC Repository for details of these and 1,100+ other AI, algorithmic and automation-driven incidents and controversies
Report an incident or controversy.
AIAAIC news
AIAAIC is pleased to welcome Alex Read as a volunteer. Alex has been working in international development with parliaments across the world for 15 years, including as Senior Technical Advisor in Myanmar for the UNDP and the Inter-Parliamentary Union. He is currently researching the impact of emerging technologies and how democratic institutions can respond.
In addition to how we performed during September, the results of AIAAIC’s 2023 user survey are now available on our new transparency web page. The skinny: you value our independence and objectivity, and would like more focus on education, policing, justice, security, generative AI, emotion recognition, and bias.
AIAAIC in the news
Information Week. Weighing the AI Threat By Incident Reports
IAPP. AI incident response plans: Not just for security anymore
Forbes Mexico. Artificial intelligence is revolutionizing business; here’s how to take advantage of it
AIOT Brasil. Study identifies more than a thousand ethical problems with AI
Segye. Incidents and controversies surge 26-fold in nine years… ‘fate’ hinges on securing ‘FATE’ [In-depth: Democracy facing AI]
More news and coverage mentioning AIAAIC
AIAAIC research citations, mentions
Ramírez Sánchez A.M., Bhatia I., Firmino Pinto S. Navigating Artificial Intelligence from a Human Rights Lens (pdf)
Burema D., Debowski-Weimann N., von Janowski A., Grabowski J., Maftei M., Jacobs M., van der Smagt P., Benbouzid D. A sector-based approach to AI ethics: Understanding ethical issues of AI-related incidents within their sectoral context
Stinckwitch S. On the Unsustainability of ChatGPT: Impact of Large Language Models on the Sustainable Development Goals
Explore research reports citing or mentioning AIAAIC.
AIAAIC Alert is written by Charlie Pownall. Feedback and questions to info@aiaaic.org