AIAAIC Alert #61
Ross Intelligence, Cohere copyright abuse; AI summary bot failures; Deepfake university applicants; Allstate muddled AI messaging; External AI research rights; AI reputational incident response
In the crosshairs
Latest AI and algorithmic meltdowns and misuses
Image: Ross Intelligence
Ross Intelligence ruled to have violated Thomson Reuters copyright when training AI model
A US judge granted partial summary judgment to Thomson Reuters, finding that Ross Intelligence directly infringed 2,243 Westlaw headnotes when creating its competing AI legal research platform. The decision is seen as having far-reaching implications for the AI industry and content creators, and may set a precedent for other pending copyright cases.
Cohere accused of violating publishers' copyright, trademarks to train AI
Condé Nast, The Atlantic, Forbes and other publishers sued AI company Cohere for allegedly using “at least” 4,000 copyrighted works without permission to train its AI models.
Study: AI chatbots fail to summarise news accurately
ChatGPT, Copilot, Gemini and Perplexity AI regularly produce inaccurate and misleading summaries of news articles, according to a BBC study.
AI-generated video condemning Kanye West anti-semitism backfires
A deepfake video created by a pair of Israeli digital marketers shows Scarlett Johansson, Jerry Seinfeld, Adam Sandler and other Jewish celebrities appearing to wear t-shirts sporting an anti-Kanye West message. Johansson condemned the video's creation, stating that the misuse of AI poses a greater threat than individual hate speech.
Deepfake applicants discovered duping UK university interviews
Applicants have been using AI-generated images or audio to replace faces and voices during remote interviews, according to education software platform Enroly.
Tesla Cybertruck using FSD crashes into pole
A Tesla Cybertruck with FSD activated failed to merge out of an ending lane in Nevada, USA, hitting a curb and crashing into a pole.
Tesla Cybertruck attempts to turn into oncoming SUV
A Cybertruck suddenly attempted to turn left into what appeared to be a driveway, directly into the path of an approaching vehicle, forcing the Tesla driver to intervene.
Investigation finds Match Group dating app systems fail to detect rapists
Match Group’s systems fail to ban users accused of sexual assault, despite the company knowing about them, and the company has failed to notify authorities, according to an investigation by The Markup.
DeepSeek accused of denying claims of Uyghur genocide
When asked about Uyghur genocide, DeepSeek asserted that the claim was a "severe slander of China's domestic affairs" and "completely unfounded".
Report an incident or controversy.
New and noteworthy
Safeguarding third-party AI research. A Stanford HAI evaluation of seven major AI companies’ policies, access provisions, and related enforcement processes, conducted by a multi-disciplinary machine learning, legal and policy team, found that none offers comprehensive protections for third-party AI research, and that many firms’ terms of service legally prohibit third-party safety and trustworthiness research, “in effect threatening anyone who conducts such research with bans from their platforms or even legal action.” | Policy brief, paper
ChatGPT model rules. OpenAI updated its 187-page rulebook to make ChatGPT engage on more controversial topics. The new approach emphasises neutrality and multiple perspectives. | Blog, model spec, source code
Anthropic Economic Index. Drawing on millions of anonymised conversations with its Claude chatbot, data published by the generative AI company indicates AI is being widely used to augment specific tasks - especially in software development, technical writing and business analysis. | Research
Allstate’s mixed messages. A spat between US insurer Allstate, the WSJ and Futurism reveals the dangers of telling the business/financial media one thing - that many of the emails it sends to customers are now generated by AI - and telling an edgy, skeptical trade publication another. | WSJ, Futurism 1, Futurism 2
Vox populi
Europeans support AI protections at work. Most Europeans support clear rules for the use of digital technologies at work, including protecting workers’ privacy (82 percent) and involving workers and their representatives in the design and adoption of new technologies (77 percent), according to Eurobarometer. | Survey
AIAAIC news
New AIAAIC paper on the human-centredness of taxonomies. By examining a number of high-profile AI and algorithmic harm and risk taxonomies, and associated research studies and databases, from a human/user perspective, a new AIAAIC working paper finds that many taxonomies muddle risks and harms, potentially resulting in frustrated users and tangled policymaking. | Working paper
AIAAIC citations, references & mentions
AI bias and fairness. King’s College London researchers review definitions of fairness, techniques and metrics for measuring bias, and methods that can be applied to address any bias found in a medical imaging context. | Paper
AI risk regulation. US legal academics make the case for a flexible “leash” approach, as opposed to a prescriptive “guardrail” approach, to regulating AI. | Paper
Responding to AI reputational incidents. Italian researchers draw extensively on the AIAAIC Repository to examine emerging types of reputational issues and the corresponding crisis communications responses of companies engaging with AI. | Paper ($)
From our network
What AIAAIC advisors and contributors are up to
Global AI governance. AIAAIC contributor Alex Read and the UNDP’s Sarah Lister make the case for national parliaments to act as the central bridge between public interests and the recommendations and agreements flowing from global reports such as the UN High-level Advisory Body on Artificial Intelligence’s Governing AI for Humanity. | Article

