AI, algorithmic, and automation trust and transparency
#9 | July 3, 2023
Peering behind AI hype
The term 'AI hype' has been hitting the headlines. Stick the phrase into Google and 3.7 million results are returned; Google News picks up some 37,800 articles.
Business English site Jargonism defines AI hype as 'the trend of unrealistic claims made about the capabilities of artificial intelligence (AI) to solve problems', and there has been no shortage of such claims over the years: from companies selling 'AI' that is nothing of the sort to governments touting somewhat unlikely AI aspirations. (Search 'marketing' in the AIAAIC Repository for numerous examples.)
Recently, the term has become associated with so-called AI existentialists (aka AI panic marketers) highlighting the risks of artificial general intelligence in order to sell products and carve out market share. At least that's what the anti-panic brigade would have us think.
Whatever the truth, it is a timely reminder that we may be close to, or at, peak AI hype. We might also usefully bear in mind that the more often a claim is repeated, however hyperbolic, the more ingrained it becomes.
All the more reason to question critically the many different, mostly undisclosed motives - from stock ramping and deflecting job losses to AI 'panic' lobbying - behind public statements, and to call them out when they are clearly exaggerated, deliberately misleading, or curiously opaque.
Reads, listens, and watches
Meta released 22 system cards setting out how its recommendation algorithms work, and published information about the signals it uses to determine what users see in their newsfeeds. A significant step towards greater transparency.
Britons are aware of and broadly support AI but are 'highly' concerned about advanced robotics such as driverless cars (72%) and autonomous weapons (71%) and are most worried about transparency, accountability, and loss of human judgement, according to a survey of 4,000 UK citizens by the Ada Lovelace Institute and Alan Turing Institute.
A BCG survey of 13,000 people in 18 countries discovered 'vast differences' in perceptions of AI between leaders and frontline employees, and that few frontline employees have received any kind of AI training or upskilling.
Stanford University researchers found that most major foundation/large language models do not comply with the proposed EU AI Act, and are particularly deficient when it comes to disclosing information on copyright, compute, and energy.
Princeton researchers Arvind Narayanan and Sayash Kapoor make a strong case for generative AI companies to publish comprehensive transparency reports. But can they be relied upon to tell the truth?
UC Berkeley cybersecurity researchers argue there is no excuse for generative AI companies to avoid communicating the limitations and risks of large language models - decades of literature about risk communication exists to draw on.
Stuart Russell and Andrew Critch released (pdf) research analysing the societal risks of AI, and proposed a taxonomy of these risks. We are glad to see this aligns closely with AIAAIC's approach.
Researchers at Heriot-Watt University set out the dangers of anthropomorphising generative AI systems, and propose (pdf) developers take real care with language when designing, developing, and releasing these systems.
National Research Council of Italy researcher Giampiero Lupo draws on the AIAAIC Repository to investigate industry sectors most affected by AI incidents, and the nature of the risks affecting them.
Check out a selection of notable recent AI attitudinal/perception research studies.
Explore research reports citing or mentioning AIAAIC.
AI and algorithmic incidents and controversies
New additions to the AIAAIC Repository during June 2023. All entries are Incidents unless otherwise stated. Click here for classifications and definitions.
Punjab 'Safe City' surveillance (system)
Problem gambler AI detection (issue)
Tesla Autopilot (system)
Google, Meta C4 dataset (data)
Visit the AIAAIC Repository for details of these and 1,050+ other AI, algorithmic, and automation-driven incidents and controversies.
Report an incident or controversy.
AIAAIC news
AIAAIC would love to hear your thoughts on how we're doing and where we're going. The survey will only take a few minutes, and your input will help us refine our strategy and planning. Take our 2023 user survey.
We are delighted to announce another tranche of volunteers supporting AIAAIC. Sammy Samkough, Paul Prae, David Prae, and Dylan Martin will build our new technology infrastructure, starting with a new knowledge sharing platform.
AIAAIC in the news
Data analysts at top Indonesian news publication Kompas used AIAAIC Repository data to explore AI risk trends | link (in English, $)
UNESCO cited AIAAIC data when launching its Policy Paper on AI Foundation Models | link
HubSpot drew on a number of incidents detailed in the AIAAIC Repository to explore the nature of bias in AI | link
Carnegie Mellon researchers cited AIAAIC in a new paper on the use of hazard analysis tools and processes for engineering AI systems | link
AIAAIC Alert is written by Charlie Pownall. Feedback and questions to info@aiaaic.org