AIAAIC Alert #47
Character AI nasties; Amazon Alexa false facts; Whisper AI medical hallucinations; AI scams; Generative AI fair use; Toronto police algorithmic black boxes; Call for high-level experts
Keep up with AIAAIC via our Website | X/Twitter | LinkedIn
In the crosshairs
AI and algorithmic incidents hitting the headlines
Image: Russell family
Molly Russell, Brianna Ghey chatbots discovered on Character AI
All sorts of nasties are coming to light just as Character AI’s founders Noam Shazeer and Daniel De Freitas skip back to Google. It seems safe to assume they are being hired for their programming rather than their policy enforcement chops.
Synthesia accused of violating AI actors’ integrity, trust
Thanks to vague terms of service and feeble enforcement of the AI video company’s acceptable use policies.
Amazon Alexa attributes false facts to UK fact checking organisation
Alexa seems to have been trained on the Donald Trump/Boris Johnson playbook.
Michael Parkinson AI podcast sparks ethics controversy
About authenticity and legacy, but also about jobs - a topic the creators neatly sidestepped in their “100%” ethics defence.
Kiwi pensioner loses NZD 224,000 to deepfake scam
Prime Ministers are often used as bait for these kinds of scams but they are just as often widely distrusted. Has anyone conducted research into which kinds of celebrities work best/worst for deepfake scams?
Study: Whisper AI transcription invents medical treatments
Nobody’s sure why OpenAI’s voice recognition and transcription tool has a hallucination problem, but it’s clear it shouldn’t be used for sensitive and high-risk purposes, including healthcare.
Study: AI search engines promote white supremacism
More AI slop for the interwebs, this time with a potent edge - courtesy of Google, Microsoft and Perplexity.
Study: OpenAI voice agents can automate phone scams
Raising serious questions about the safety of OpenAI’s Realtime voice API.
UK man jailed for 18 years for creating AI child abuse images
In this case the perpetrator drummed up 3D images using free online software. It seems likely there’s going to be a lot more of this, sadly.
Report an incident or issue.
AIAAIC news
Call for participation. AIAAIC is looking for high-level neuroscientists, psychologists, economists, social scientists, lawyers, human rights/civil liberties advocates, educators, business people and others to provide feedback on our AI and algorithmic harms taxonomy. | Project info, nominate an expert
Citations and references
AI impact on global capital markets. The International Monetary Fund lays out the risks and potential benefits of AI for capital markets, notably greater efficiency on the one hand and greater volatility on the other. Good to see the IMF using AIAAIC data and noting our definition of an ‘incident’ (p 99). | Read
Generative AI and CSAM. A new UNICRI/Bracket Foundation report shows how generative AI is supercharging the sexual exploitation and abuse of children (using AIAAIC and OECD data) and recommends what should be done about it. | Read
AI incident impacts on US banks. The impact of AI incidents on five US banks and financial services firms includes higher bankruptcy risk and lower operational cash flows, according to US-based researchers. | Read
“Algorithmovigilance”. A group of French researchers make the case for the assessment, monitoring and understanding of AI healthcare systems in order to prevent adverse effects and improve responses when they happen. (Otherwise known as risk management.) | Read
Explore more research citing or mentioning AIAAIC.
New and noteworthy
Generative AI fair use. Former OpenAI researcher Suchir Balaji opened up to the New York Times on how OpenAI broke US copyright law when training ChatGPT and its other language services. He also wrote a blog post comparing legal fair use factors with AI model training practices. | NYT article, Balaji blog
Data thievery. Over 32,000 people - including AIAAIC founder and author Charlie Pownall - have signed an open letter calling on AI companies and others to stop damaging the livelihoods of copyright owners by training their systems on content that rightfully belongs to them. | Open letter
Preparing for AI agents. Timely CSET report on how developers and others can prepare for autonomous AI agents. AIAAIC will be closely tracking incidents associated with AI agents of various kinds. | Read
AI Risk Atlas. A free new IBM resource that sets out some of the risks of generative AI, language models and machine learning, differentiating between new risks and amplified existing ones. | Visit
From our network
What AIAAIC advisors, contributors and others are up to
Law enforcement and algorithmic registers. AIAAIC contributor Ushnish Sengupta published a book chapter discussing the lack of transparency of the Toronto Police Services’ law enforcement algorithms to the citizens of Toronto. One solution: publish dedicated algorithm registers that connect product and system names to incident report databases such as the AIAAIC, enabling the mitigation of known harms from the same product or service in other jurisdictions. | Read paper