AIAAIC Alert #22
The weekly update on incidents and issues driven by AI, algorithms and automation.
Keep up with AIAAIC via our website | X/Twitter | LinkedIn
In the crosshairs
Image: World Health Organisation
WHO chatbot provides inaccurate health information
Trained on GPT-3.5, the bot fails to provide up-to-date information on US-based medical advisories and news events. The WHO should surely know better.
Tesla driver allegedly using Autopilot runs over, kills motorcyclist
According to the driver of the car, Autopilot was engaged at the time of the crash. Authorities say they have not yet independently verified whether this was the case.
'Intrusive' AI speed cameras criticised by UK motorists
21 percent of Brits are said to be unhappy about the potential privacy implications of an AI speed camera system being tested by several police forces.
Netflix documentary uses AI to manipulate true crime story
Resulting in accusations of reality distortion and historical revisionism, and disregard for human jobs.
Michel Janse deepfake used for advert without consent
To sell erectile dysfunction pills, of all things.
OpenAI's GPT store faces copyright complaints
GPTs pass the IP abuse parcel. Well, we never.
Film studio use of AI to promote Civil War backfires
Sparking concerns about AI’s role in entertainment, notably its impact on jobs, and the need for transparency in the use of the technology.
Maori woman misidentified by Foodstuffs facial recognition system
Labelled by the system as a trespassed 'thief', the woman was accused of being a shoplifter and thrown out of the retail outlet, despite carrying three forms of ID proving who she was. Automation bias writ large.
Support our work collecting and examining incidents and issues driven by AI, algorithms and automation, and making the case for technology transparency and openness.
System spotlight - Snapchat My AI chatbot
Image: Snap Inc
The yang: Snapchat’s My AI chatbot is billed as an always-available 'virtual friend' willing to chat, provide information, and support its (primarily teenage) users.
The yin: My AI is seen to have inadequate safety safeguards and a slapdash, opaque approach to privacy; to provide inadequate and unclear guidance to users about what it should and should not be used for; to 'hallucinate' facts and generate misinformation and disinformation; and to reinforce ageist and sexist bias and cultural stereotypes.
Incidents involving My AI recorded in the AIAAIC Repository:
Snapchat algorithm recommends teen connects with sex offenders
UK privacy watchdog accuses Snapchat of failing to assess My AI privacy risks
Visit the AIAAIC Repository for details of these and 1,450+ other AI, algorithmic and automation-driven incidents and controversies
Report an incident or controversy.
Get in touch
AIAAIC Alert is written by Charlie Pownall and other AIAAIC team members.
We’d love to hear from you if you have questions, comments or suggestions about this newsletter, or about AIAAIC more generally. Email us at: info@aiaaic.org