AI, algorithmic, and automation trust and transparency
#8 | June 2, 2023
Existential risk and smokescreen lobbying
Another day, another open letter about catastrophic/existential AI risk, this time from the California-based Center for AI Safety (CAIS) and signed by a few dozen high-profile scientists and researchers from OpenAI, DeepMind, Microsoft, and other AI industry players.
The statement, in its entirety, reads: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.'
Why the need for this foot shuffle so shortly after the six-month moratorium proposed by the Future of Life Institute, which was signed by over 30,000 people, several of whom have also signed the latest missive?
Several possible reasons spring to mind. That it demonstrates the AI community is conscious of the risks. That it wants to reduce those risks (using AI, needless to say). And that it sends a message to politicians and legislators to take the issue seriously.
Fair enough, you might say. But it doesn't take much to expose the ideological underbelly.
According to CAIS, 'many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.'
The giveaway lies in CAIS's focus on 'advocating for safety standards'. Standards, not regulation. And therein lies the rub: standards are voluntary, often patchily applied, and poorly enforced.
As such, the letter appears to be little more than a veiled attempt to say 'leave us alone, you can trust us to fix things', and a smokescreen to deflect attention away from the very real, immediate risks and harms of AI and related technologies.
Reads, listens, and watches
Anthropic's Constitution for its Claude large language model/chatbot sets out in detail how its system aligns with the UN Universal Declaration of Human Rights and other principles and rules, including spelling out the guardrails it uses to protect user safety. An ambitious, evolutionary approach to algorithmic transparency.
CrowdTangle co-founder Brandon Silverman takes a thorough look at TikTok's transparency activities, concluding that, given the platform's size, popularity, and the regulatory scrutiny it is under, 'their transparency efforts are even more disappointing relative to the responsibilities they have.'
The EU's new European Centre for Algorithmic Transparency (ECAT), which assesses and investigates the black-box systems of major social media platforms, search engines, and ecommerce sites, has officially opened. AIAAIC strongly endorses this initiative and will be tracking ECAT's work, noting its effectiveness, and adding its assessments and audits to relevant incidents and systems detailed on the AIAAIC Repository.
Several European cities, including Amsterdam, Barcelona, Brussels, Mannheim, Rotterdam, and Sofia, have been working together to develop an Algorithmic Transparency Standard to help cities provide clear information about the algorithmic tools they use and why they are using them. One to keep an eye on.
AI and algorithmic incidents and controversies
New entries added to the AIAAIC Repository during May 2023:
Visit the AIAAIC Repository for details of these and 1,000+ other AI, algorithmic and automation-driven incidents and controversies
Report AI, algorithmic, and automation incidents and controversies
AIAAIC news and views
AIAAIC is delighted to announce a new cohort of volunteers to help strengthen the AIAAIC Repository. Beki Ndlovu and Jose Meza will focus on extending the repository into Africa and South America respectively, while AJ Litchfield will home in on large language models and chatbots. We also have a burgeoning team of technology volunteers, who will be introduced shortly.
The governance of the AIAAIC Repository has been tightened and clarified, including the definitions and classifications used to identify, assess, publish, and edit new incident reports. Both the governance policy and the definitions are now available on the aiaaic.org website, with a more detailed working version available to AIAAIC volunteers.
A UK-based AI risk management consultancy was discovered to be flouting AIAAIC's terms by re-publishing AIAAIC data without complying with its CC BY-SA 4.0 license. The consultancy, which will remain nameless, has agreed to stop re-publishing AIAAIC data. Thank you to noted IP lawyer Christian Gordon-Pullar for his kind and sound advice in helping resolve this matter.
In the pipeline
With AIAAIC gearing up for the future, our 2023 user survey will be distributed in the next few days. We would greatly appreciate your participation.
AIAAIC in the news
The AIAAIC Repository was recommended as 'an important source of information' on the 'disfunctionality of AI' in testimony by the Center for War Studies, University of Southern Denmark, to the UK House of Lords AI in Weapons Systems Select Committee. | link (pdf)
China Daily mentioned the AIAAIC Repository in the context of a speech on responsible AI by Microsoft China's CTO | link
The Data is Plural newsletter listed the AIAAIC Repository as a recommended public dataset | link
EPIC Senior Counsel Ben Winters praised the AIAAIC Repository for 'making [a] substantial achievement in transparency' in an article for Data & Society | link
Italy's Istituto di Ricerche sulla Pubblica Amministrazione (IRPA) described the AIAAIC Repository as 'a comprehensive, detailed and highly up-to-date resource' | link
AIAAIC Alert is written by Charlie Pownall. Feedback and questions to info@aiaaic.org