AIAAIC Alert #41
Trump's AI pets; AI song streaming scam; New Delhi's covert AI influence campaign; Steak 'n Shake facial recognition woes; Amazon Alexa yins and yangs; AI Snake Oil; Korean deepfake porn harms
Keep up with AIAAIC via our website | X/Twitter | LinkedIn
In the crosshairs
New AIAAIC Repository entries
Republicans support Trump using AI-generated kitten and duck images
Should we laugh, squirm or cry?
Top AI models spout misleading US election information 30 percent of the time
Another study shows ChatGPT, Gemini, Claude et al are busily generating inaccurate and false US election responses. Mistral, which is open source and hence replicable, has the dubious honour of being the worst offender.
Researchers uncover covert AI-powered pro-India influence network
Not only China and Russia are trying to get away with using AI to pursue their national and geo-political agendas.
Stalker doxxes and harasses woman using AI chatbot
James Florence Jr.’s apparently highly unpleasant antics aside, CrushonAI’s crush on AI seems to have gone to its head. Sometimes a few guardrails might be appropriate, folks.
Music producer accused of using AI songs to scam streaming platforms
Anyone know the name of the undisclosed AI music company?
Steak 'n Shake sued for alleged facial biometric violations
Another outfit gets BIPA-ed for questionable use of facial recognition.
Report: Hidden text able to manipulate ChatGPT
NYT journalist Kevin Roose gets into the “knowledge engine optimisation” game.
Report an incident or controversy.
Support our work collecting and examining incidents and issues driven by AI, algorithms and automation, and making the case for technology transparency, openness and accountability.
Amazon Alexa
Featured system/dataset
Image: Amazon Inc
Alexa is a 'virtual assistant' developed by Amazon that uses natural language processing and speech recognition technologies to enable users to ask questions and receive answers, get traffic alerts, set alarms, make lists, play music, and more.
Launched in 2014 and available in multiple countries and languages, Alexa was first used in Amazon's Echo and Echo Dot smart speakers, and later in its Echo Studio and other products.
Amazon recently told Reuters that the next version of Alexa will be primarily powered by Anthropic's Claude large language model. The company has also said the next version of the system will not be free.
Amazon’s devices business lost over USD 25 billion from 2017 to 2021, according to The Wall Street Journal.
The yang: Amazon Alexa has been associated with many benefits, including its convenience, hands-free control and ability to customise.
It is seen to enable elderly and disabled people to live more independently by controlling smart home devices, setting reminders for medication, appointments, and alarms, and can provide remote assistance, fall detection, urgent response and caregiver connectivity.
The yin: Alexa is seen to pose a wide range of potential risks and to have caused many actual harms to its users and those nearby, from the unreliability and inappropriate behaviour of its product and inadequate safety and security, to the abuse of privacy, gender and racial stereotyping and bias.
The system and its developer also stand accused of opaque governance.
Incidents and issues associated with Amazon Alexa:
August 2024. Amazon Alexa favours Kamala Harris
October 2023. Amazon Alexa says 2020 US election was rigged
July 2023. Amazon wrongly disables Echo account after hearing racial slur
May 2023. Amazon uses Alexa child data to tune voice algorithm
April 2022. Amazon Echo voice data used to target ads
December 2021. Amazon Alexa recommends girl touches electric plug
July 2019. Amazon Alexa retains recordings, transcripts indefinitely
April 2019. Amazon employees listen to Alexa recordings
February 2018. TV advert makes Amazon Alexa order cat food
November 2017. Amazon Alexa holds 2am party when owner is out
January 2017. Amazon Alexa mistakenly orders USD 160 dollhouse
December 2016. Amazon Alexa plays child pornography
New and noteworthy
Autonomous vehicle maker Waymo says its self-driving cars are safer than human drivers on the same roads. Sounds good, but its analysis is based on methods and benchmarks introduced in two non-peer-reviewed research papers | H/t, read
Hugging Face’s Sasha Luccioni sets out some of the more damaging environmental impacts of AI | Read
Trump falsely says that photos of him with writer E. Jean Carroll, whom he was found liable for sexually abusing and defaming, “could’ve” been generated using AI. A new legal and/or quasi-legal defence? | Read
Self-proclaimed entrepreneur and king of open source models Matt Schumer gets firmly hoisted by his own petard. Clue: “Claude, built by Anthropic”. | Read, h/t
The New York Times dives into how Telegram became a playground for criminals, extremists and terrorists. Many of whom are apparently hastily legging it to Signal. | Read
Argentina’s government is creating a new agency to predict crimes - while slashing other federal agencies. | Read
Princeton’s Arvind Narayanan and Sayash Kapoor have published AI Snake Oil, a guide on “what you need to know about AI and how to defend yourself against bogus AI claims and products.” Sensibly, the book draws on the AIAAIC Repository. | Read
From our network
AFP talks to teenage activist Bang Seo-yoon and other South Koreans dealing with the fall-out of Pavel Durov’s refusal - seemingly in the name of libertarianism, $$ and 💪 - to get a grip on Telegram’s massive AI porn problem.