AI, algorithmic and automation trust and transparency
#4 / June 29, 2021
Algorithmic transparency and simplicity
Having recommended mandatory public sector algorithmic transparency, the UK’s Centre for Data Ethics and Innovation (CDEI) has published what it would look like in practice.
With polling showing that only a minority of citizens appreciate that algorithmic systems use personal data, the CDEI worked with 36 UK citizens over a three-week period to discover the most effective ways to increase understanding about public sector use of algorithms.
Perhaps unsurprisingly, the research finds that transparency is expected of all public sector algorithmic systems. The real question is one of degree and visibility, which is where the findings get more interesting.
Participants expect all systems to disclose their role, purpose, how to get further information, privacy details, degree of human oversight, data sources, risks, and impacts. So not dissimilar to the EU’s draft AI regulation.
Where it differs is that people want basic information to be immediately and actively available at the point of interaction with the system. These details should be understandable and accessible to disabled and other users, especially for higher risk and impact uses.
Trust and transparency reads, listens and watches
New algorithms should be submitted to randomised controlled trials, according to Privacy is Power author Carissa Véliz. Suffice to say, enforced accountability of this ilk would make transparency less challenging and contentious | link
England’s exam grade algorithm was ‘never workable’ and primarily a ‘PR problem’, claims former Ofqual chairman Roger Taylor in an essay for the Centre for Progressive Policy. Taylor’s explanation that the model was withheld from expert statisticians to reduce ‘the risk of it leading to gaming’ is unconvincing | link (pdf)
Ethical principles focused primarily on the public good will not be employed in most AI systems by 2030, concludes a Pew Research/Elon University survey of AI industry insiders. Intriguingly, transparency barely rates a mention | link
Most companies approach ‘responsible AI’ through a risk management rather than ethical prism, concludes data labelling company Appen in its latest State of AI and Machine Learning Report. This chimes with Pew’s findings about window-dressing ethics, though the report suggests impending regulation may finally be focusing some minds | link
A University of Queensland/KPMG study of trust in AI in the USA, Canada, Germany, the UK and Australia finds Australians (72%) are most sceptical of AI, with most (62%) believing it will destroy more jobs than it creates. 6,000 or so is a fairly small sample size, but underlines fears of AI as a net job destroyer | link (pdf)
Public distrust in AI stems from the lack of comprehensive and visibly functioning regulatory systems, argue Lancaster University’s Bran Knowles and IBM’s John T. Richards. Adds to the chorus of opinion that meaningful AI and algorithmic transparency and accountability must be legally mandated | link
Latest AI incidents and controversies
Bloomberg reveals Amazon is firing its Flex delivery drivers by algorithm, sometimes unfairly, with almost no human intervention involved. Flex drivers have no access to how the algorithm works, and limited time to appeal | link
Epic Systems’ sepsis prediction algorithm has been found to be accurate 63% of the time, in contrast to its claimed 76% accuracy, leading to allegations of misleading marketing and poor transparency | link
A ProPublica/NYT investigation has discovered Beijing is running a fake video-based influence campaign that attempts to convince Han Chinese (presumably) that life is rosy in Xinjiang. Needless to say, the videos are not labelled as propaganda | link
Tesla has recalled 285,000 Model 3 and Model Y cars in China due to problems with its Autopilot cruise control activation. The car maker’s stock price promptly dropped 8% | link
People in 21 states across the US have been unable to access their unemployment benefits because of problems with identity verification service ID.me’s facial recognition system. The company’s contact and complaints services are singled out for particular criticism | link
A doctored video of a gruesome decapitation that went viral on TikTok raises questions about the ability of its systems to detect offensive content, and the opacity of its decision-making | link
McDonald's is being sued by a customer who objected to the company using his voice without consent to test its drive-through voice-recognition order-taking system | link
With its business model under pressure from regulators and retailers, US gig shopping company Instacart is actively exploring using robots to cut costs and replace many of its shoppers | link
AI-created virtual versions of Aespa, Eternity and other K-pop groups may be able to express themselves more freely and be less exposed to comment and criticism, but also raise ethical and copyright concerns | link
Visit the AIAAIC Repository for details of these and other AI, algorithmic and automation-driven incidents and controversies
Report AI and algorithmic incidents
AIAAIC news and views
The AIAAIC Repository has moved to a freemium model in which the full repository, including transparency and impact information and taxonomies, is now limited to AIAAIC members. Register to become a Premium member
As mentioned in AIAAIC Alert #3, AIAAIC Repository governance details have been published on the AIAAIC website, including how incidents and controversies are identified, assessed and approved. Let us know if anything is unclear or can be strengthened.
AIAAIC Alert is written by Charlie Pownall. Feedback and questions to info@aiaaic.org