AI, algorithmic and automation trust and transparency
#1 / May 17, 2021
Lessons in transparency from the UK’s exam grade meltdown
The UK government’s use of an algorithm to grade student exams sent students into the streets and generated swathes of negative media coverage. Many grades were seen as unfair, even arbitrary. Others argue the algorithm and the grades it produced were a reflection of a broken educational system.
The government would do well to understand the root causes of the problem and make substantive changes in order to stop it happening again. It also needs to regain the confidence and trust of students, parents, teachers, and the general public.
Whilst the government appears reluctant to tackle some of the deeper challenges facing education, it has wisely scrapped the use of algorithms for next year’s exams.
And now the UK’s Office for Statistics Regulation has issued its analysis of what went wrong, highlighting the need for government and public bodies to build public confidence when using statistical models.
Unsurprisingly, transparency and openness feature prominently in the OSR’s recommendations. Specifically, exam regulator Ofqual and the government are praised for regular, high quality communication with schools and parents but criticised for poor transparency on the model’s limitations, risks and appeal process.
Of course, Ofqual is no outlier. Much talked about as an ethical principle and prerogative, AI and algorithmic transparency remains elusive and, if research by Capgemini is accurate, has been getting worse.
The UK exam grade meltdown shows that good communication (aka openness) must go hand in hand with meaningful transparency if confidence and trust in algorithmic systems are to be attained. Each is redundant without the other, and the two must be consistent.
Watch/listen to the Ada Lovelace Institute’s webinar on the OSR review
Trust and transparency reads, listens and watches
We need to design distrust into AI systems to make them safer | link
The future of trust must be built on data transparency | link
Why transparency won’t save us | link
How platforms like Facebook can make algorithms more transparent | link
You and the algorithm: it takes two to tango | link
AI and algorithmic incidents and controversies
Sao Paulo Metro operator ViaQuatro convicted of illegal advertising facial biometrics
AI development company Appen assesses potential employees with skin colour tests
Study finds Airbnb Smart Pricing algorithm increases racial disparities ($)
Instagram, Twitter remove and block Palestinian posts
Driver abuses Tesla Autopilot by sitting in rear seat
Beijing discovered to be running extensive diplomatic fake influence campaign
Bytedance/TikTok alleged to be abusing and selling children’s personal data
Facebook approves teen alcohol, drug, gambling ads, raising questions about the accuracy and reliability of its ad review system
Dartmouth Medical School accuses students of cheating during remote exams, raising concerns about its Canvas system robustness and fairness ($)
Visit the AIAAIC Repository for details of these and other AI, algorithmic and automation-driven incidents and controversies
AIAAIC news and views
The IEEE Global AI/S Ethics Initiative has published a Q&A with AIAAIC founder Charlie Pownall on AI and algorithmic risks and the AIAAIC repository | link
AIAAIC’s new website is live. Basic for now, it will be expanded over the coming weeks and months. We’d love to hear your comments and suggestions | link
AIAAIC Alert is written by Charlie Pownall. Feedback and questions to info@aiaaic.org