AI, algorithmic, and automation trust and transparency
#10 | August 1, 2023
Transparency without openness is like McCartney without Lennon
Meta's release of its open-source Llama 2 large language model has been interpreted as an opportunity for Zuckerberg et al to catch up with OpenAI, Anthropic, Google, and Microsoft. As an open model, it was also viewed as a chance for Meta to rebuild trust in the wake of its many privacy and safety travails.
Meta certainly primed the openness pump. Nick Clegg (disclosure: a former colleague of this author at the European Commission) took to the airwaves and opinion columns to trumpet transparency as 'the best way to combat fears' about emerging technologies.
And the company walked the talk by publishing the model, model weights, and evaluation code, together with a responsible use guide for developers and a detailed white paper setting out the safety measures it took to protect users from the hallucinations and other harms associated with ChatGPT et al.
That said, Meta failed to release information about Llama 2's training data set, presumably to protect it from copyright and privacy complaints. Nor has it shared the code used to train the model, omissions captured in a detailed analysis of 21 open-source models by Dutch researchers.
The researchers assessed each model's availability, documentation, and methods of access - relatively standard measures of AI disclosure - and concluded that Llama 2 is actually one of the most closed large language models.
Noticeably absent is any sense of the end user perspective or operational openness. Since Llama 2 is a business-to-business proposition, Meta can hardly be expected to explain its inner workings, risks, and impacts to end users.
But neither do those operating chatbots directly for consumers. Google labels Bard an 'Experiment' and has it tell users 'I have limitations and won't always get it right, but your feedback will help me to improve.' ChatGPT warns it 'may produce inaccurate information about people, places, or facts', and OpenAI provides basic release notes - but very little else.
Furthermore, LLM developers appear highly reluctant to discuss their creations in the wild. A quick glance at Google's Play store reveals no attempt by any of the major players to respond to user comments, complaints, or suggestions.
Hardly surprising, then, that lawyers, teachers, and others new to the AI-driven chatbot phenomenon are easily confused about these systems' capabilities and tendencies.
It is surely high time that meaningful openness was put at the centre of AI operators' transparency efforts.
Reads, listens, and watches
The Biden administration says it secured 'voluntary commitments' from OpenAI, Google, and other big AI companies. How seriously these commitments are treated by Sam Altman and co will be interesting to watch
The Vatican teamed with the Markkula Center for Applied Ethics at Santa Clara University to publish an operational ethics roadmap (pdf) for disruptive technologies
US-based Integrity Institute published a detailed deck of social media platform transparency best practices. Full of useful observations and tips, yet no word on operational openness (see above)
OpenAI's GPT-3 large language model produces easy-to-understand information, as well as more compelling disinformation than humans, according to University of Zurich researchers
The Ada Lovelace Institute published a paper spelling out the different kinds of AI supply chains and offering a framework for policy-makers and regulators considering supply chain management and liability
A new Capgemini report finds consumers love and trust generative AI, but warns this trust may be misplaced, exposing users to security, privacy, and misinformation threats.
AIAAIC research citations and mentions
McKinsey cited AIAAIC data on AI incident trends in its voluminous, 82-page Technology Trends Outlook 2023
Georgetown University's Center for Security and Emerging Technologies (CSET) drew on the current AIAAIC Repository taxonomy to develop its new AI Harm Framework
Laura Lucaj, Patrick van der Smagt, and Djalel Benbouzid at VW Group's Machine Learning Research Lab make the case for a comprehensive lifecycle audit of AI systems in a new paper.
Check out a selection of notable recent AI attitudinal/perception research studies
Explore research reports citing or mentioning AIAAIC.
AI and algorithmic incidents and controversies
New additions to the AIAAIC Repository during July 2023. All entries are Incidents unless otherwise stated. Click here for classifications and definitions.
Deepfake 'Pan Africanists' support Burkina Faso military junta
AI text detector language bias (issue)
Microsoft How Old do I Look app (system)
Google Smart Reply (system)
Google Flu Trends (system)
Boston Street Bump pothole reporting (system)
Fake security analyst peddles Hunter Biden intelligence report
Visit the AIAAIC Repository for details of these and 1,050+ other AI, algorithmic, and automation-driven incidents and controversies
Report an incident or controversy.
AIAAIC news and views
AIAAIC is delighted to welcome a new group of volunteers. Nathalie Samaha, Josette Abi-Tamer, Amari Cowan, Mithila Harish, and J-P Akinyemi are helping clean and manage data, develop our new taxonomy, and strengthen editorial coverage, amongst other things.
AIAAIC would (still) love to hear your thoughts on how we're doing and where we're going. The survey will only take a few minutes, and your input will feed into our strategy and planning.
After several instances of AIAAIC data misuse, the terms governing the use of AIAAIC data have been tightened: the data may now only be used in ways that are substantially in the public interest.
On a related note, readers of this newsletter are reminded that applicants for Premium Membership are required to demonstrate their understanding of, and adherence to, AIAAIC's terms and the AIAAIC Repository governance.
AIAAIC in the news
Hindustan Times. 10 key takeaways on AI from not quite being an eco killer to dirty tricks | link
Nieuwe Revu. Grote zorgen over kunstmatige intelligentie: alles nep op het web ('Major concerns about artificial intelligence: everything on the web is fake') | link (in Dutch)
Virtualization Review. Scientists Seek Government Database to Track Harm from Rising 'AI Incidents' | link
AIAAIC Alert is written by Charlie Pownall. Feedback and questions to info@aiaaic.org