AI, algorithmic, and automation trust and transparency
#11 | September 1, 2023
The distrust in AI conceit
UK citizens are reluctant to trust their beloved NHS with their data as they fail to understand how secure its technology systems are, and how responsibly they are governed, according to a recent survey by cloud computing company VMware.
Over half (56%) of UK citizens do not trust the NHS to use AI to analyze patient data due to security and privacy concerns, the survey found. And 25% of respondents said they are completely against the NHS using AI to process their patient data.
The answer, VMware says, is increased transparency and education. Which sounds sensible enough. Open up, explain hard, and get the message out, and Brits would be only too happy to let an AI get to work on their data. And, sure, plenty of studies indicate trust correlates with transparency and openness.
But it is no straight line. The NHS, for instance, may be regarded as reasonably transparent and may command a high level of trust, but if one part of the public health ecosystem is clouded by question marks over its performance (e.g. the UK government's handling of important aspects of the COVID-19 pandemic), confidence can be tricky to build or maintain across the system as a whole.
Trust also takes time to earn, yet is highly malleable and can collapse in an instant, as former UK health secretary Matt Hancock discovered to his cost. Understanding, then, can easily cut both ways.
VMware & co may do better to recall that concerns about AI far outweigh excitement about it in many parts of the world, and that meaningfully involving customers and others in the design, development, and management of AI systems is likely to have a greater impact than letting them have a peek under the bonnet and hoping they will trust what you have to say.
Benjamin Franklin (reputedly) recommended:
Tell Me and I Forget; Teach Me and I May Remember; Involve Me and I Learn.
Wise words.
Reads, listens, and watches
A 2+ hour interview with Yuval Noah Harari, full of compelling insights into the risks of AI, anthropomorphism, ethics, the need for regulation - and the power of meditation
As the harms start to crystallise … companies are increasingly fearful of a backlash to their AI activities, according to the Wall Street Journal
The UK government’s 2023 National Risk Register (pdf) singles out malicious drone attacks (p 87) as a distinct if low-level threat, while AI is mentioned as an ongoing 'chronic' risk that includes 'uncertainty about its transformative impact'
Norway’s Consumer Council Forbrukerrådet published a detailed examination of the risks and harms posed to consumers by generative AI. Well worth a read
TikTok wants creators to declare whether their content is AI generated, or risk having it removed. A voluntary programme limited to images of 'realistic scenes', and one that fails to address AI-generated audio and text …
Massachusetts regulators launched an investigation into the use of AI by the securities industry. ‘If deployed without the guardrails necessary to ensure proper disclosure and consideration of conflicts, I am concerned that this technology could result in harm to investors,’ said Massachusetts Secretary of State Bill Galvin. Non-technical governance is increasingly in regulatory cross-hairs
Do we know what we want from AI? George Washington University PhD candidate Elizabeth (Bit) Meehan explores the limitations of transparency, and argues it won’t be sufficient for meaningful accountability. She’s correct
The AI open source community is not as open as it might like people to believe, according to WIRED. Unsurprising, given its role in hoovering up and disseminating public data to all and sundry
Related, data scraping incidents that harvest personal information can constitute reportable data breaches in many jurisdictions, warn (pdf) 12 data protection regulators.
AI and algorithmic incidents and controversies
New additions to the AIAAIC Repository during August 2023. All entries are categorised as Incidents unless otherwise stated. Click here for classifications and definitions.
Google Autocomplete (system)
Books3 AI training dataset (data)
Brave AI user data sales (issue)
Prosecraft fiction analytics (data)
Visit the AIAAIC Repository for details of these and 1,100+ other AI, algorithmic and automation-driven incidents and controversies
Report an incident or controversy.
AIAAIC news
AIAAIC has launched a transparency dashboard providing monthly updates on how it is performing. It is basic for now; fuller information on AIAAIC governance, operations, and partnerships will be provided in due course.
We are delighted to welcome two new contributors. Ushnish Sengupta, an assistant professor at Algoma University in Canada, is supporting efforts to improve the quality of AIAAIC data. Ruby Thelot, founder of the creative research and design studio 13101401 inc. in New York and adjunct professor of Media Theory at NYU, is helping devise our transparency advocacy initiatives.
More news and coverage mentioning AIAAIC
AIAAIC policy and research citations and mentions
Australian Strategic Policy Institute. Is artificial intelligence about to be regulated?
Boyle K., Berridge S. The Routledge Companion to Gender, Media and Violence
Shoker S, et al. Confidence-Building Measures for Artificial Intelligence: Workshop Proceedings
Farlow H., Garratt M. et al. The race to robustness: Exploiting fragile models for urban camouflage and the imperative for machine learning security
Happe A., Cito J. Getting pwn'd by AI: Penetration Testing with Large Language Models
Attard-Frost B., Widder D.G. The Ethics of AI Value Chains: An Approach for Integrating and Expanding AI Ethics Research, Practice, and Governance
Cupertino R.T. Impactos da Inteligência Artificial na economia mundial (Impacts of Artificial Intelligence on the world economy)
Explore research reports citing or mentioning AIAAIC.
AIAAIC Alert is written by Charlie Pownall. Feedback and questions to info@aiaaic.org