AIAAIC Alert #39
Legal opacity; Pixel 9 Reimagine; Viggle AI training; Microsoft AI micro-management; Outabox facial biometrics data breach; ICRC killer robot report; D&I incident analysis; Personhood credentials
#39 | August 30, 2024
Keep up with AIAAIC via our Website | X/Twitter | LinkedIn
Weapons of algorithmic legal opacity
Author: Charlie Pownall, Founder and Managing editor, AIAAIC
New research by MIT cognitive scientists concludes that the crafty folk who write laws and pen legal documents are casting "magic spells" to confound and dominate us. Deliberately convoluted legalese, they argue, is used to signal the “authoritative nature” of the law at the cost of clarity and understanding.
This complexity extends beyond word choice. Legal documents are often excessively long, further discouraging thorough reading. For example, website terms typically span many pages of dense text. The EU’s General Data Protection Regulation (GDPR) comes in at 88 pages, excluding accompanying directives.
In stark contrast, the Gettysburg Address - that masterpiece of concision, clarity and influence - runs to roughly 270 words (depending on which version).
The use of verbose and jargon-filled language in legal documents is a long-standing practice that maintains the legal profession's exclusivity, and keeps legal drafters, solicitors and the inns of court in business just fine.
However, this opacity extends beyond traditional legal writing to a panoply of legal and quasi-legal tools and techniques, and is on full display when it comes to AI and algorithmic systems.
While some legal protections are necessary (e.g., safeguarding legitimate trade secrets), there is growing evidence that disproportionate and questionable legal and quasi-legal tactics are being used to limit AI transparency and accountability.
These tactics include:
Denial of responsibility - example
Disproportionate or covert policy updates - example, example
Misapplication of tort law - example
Non-disclosure agreements - example
Content redaction and removal - example
Client-attorney and other privilege claims - example
Partial or non-existent freedom of information request responses - example
Mandatory arbitration - example
These strategies not only hinder accountability in specific AI-related incidents, but also restrict public access to potentially important information.
Going forward, AIAAIC will continue to document instances of legal opacity.
Please get in touch if you’d like to help us hold not just the algorithmic systems themselves to account but their “legal” protectors as well.
Support our work collecting and examining incidents and issues driven by AI, algorithms and automation, and making the case for technology transparency and openness.
In the crosshairs
AI and algorithmic incidents hitting the headlines
Image: The Verge
Pixel 9 Reimagine blasted for lack of safeguards
Photo editing has been going on a long, long time. But this makes it possible to generate very high-quality fake images on any topic at the touch of a button. Google surely anticipated the risk but went ahead anyway.
Viggle admits to training AI models on YouTube data without consent
This is what happens when a tech bro media novice comes up against a journalist who knows what he’s doing. Still, Google will likely turn a blind eye as it is probably up to much the same.
Microsoft app accused of enabling employee mobile surveillance
Just about every aspect of an employee’s existence can now be digitally watched and analysed, even out in the field, and Microsoft’s making the lives of their watchers even easier. On the plus side, automated micro-management might just free up senior leaders to think beyond the next quarter. Or is that wishful thinking?
Outabox data breach exposes 1m biometric records
Alienate your staffers and this is what can easily happen. And a neat case study in how not to handle an employee data leak incident to boot.
Donald Trump uses AI to fake Taylor Swift endorsement
Trump admitted sharing the fake images but denied that his team created them. Does it matter either way?
And, last but not least:
YouTube crime page discovered to be entirely AI-generated
Yep, everything - images, videos, scripts and narration. And despite removal requests over many months - and the potential brand, reputational and financial damage to advertisers - Google still hasn’t taken it down.
Visit the AIAAIC Repository for details of these and 1,700+ other AI, algorithmic and automation-driven incidents and controversies.
Report an incident or controversy.
Research/advocacy citations, mentions
The Geneva Academy and International Committee of the Red Cross cited AIAAIC in a report detailing the implications of using AI for military purposes. A timely report given the use - by both sides - of fully automated kamikaze drones in the Ukraine-Russia war. | Read paper
A study exploring AI ethics through incident repositories concludes that none provide nuanced answers. Perhaps, but perhaps they were not set up with this focus in mind. As an aside, more entries from the AIAAIC Repository were deemed relevant than from any other database, and the researchers did not make their own lives any easier by apparently not having access to AIAAIC harms data. | Read paper
Three CSIRO researchers analyse diversity and inclusion (D&I)-related AI incidents, including from AIAAIC, with the aim of understanding the extent to which D&I issues exist within AI systems and developing more inclusive, unbiased, and trustworthy systems. | Read paper
A bunch of MIT and other researchers cite an AI voice scam detailed by AIAAIC to help make the case for so-called ‘personhood credentials’ that allow people to interact online anonymously without their activities being tracked, thereby reducing the likelihood of fraud. | Read paper, WAPO article $
Explore more research reports citing or mentioning AIAAIC.
From our network
What advisors, contributors and others in AIAAIC’s network are up to
Sciences Po Paris lecturer and AIAAIC contributor Pierre Noro scans the challenges of AI, including the difficulties of identifying and assessing harms (a topic tackled in AIAAIC’s recent harms taxonomy paper) at the University of Strasbourg.
Watch Pierre’s presentation (in French)