AIAAIC Alert #30
A monthly round-up of goings-on connected with AI, algorithmic, and automation transparency, openness, and accountability
#30 | June 28, 2024
Keep up with AIAAIC via our Website | X/Twitter | LinkedIn
Is ‘shitposting’ the future of political deepfakery?
Author: Ed Barnetson - Master's student, Philosophy and Politics, University of Bristol
If recent attacks on UK politicians Keir Starmer and Sadiq Khan were anything to go by, the prospect of a UK general election swarming with political deepfakery and manipulation appeared highly likely. But these fears have largely not been realised, though a new playbook has emerged that may point the way to a more organised, diffuse, opaque and effective form of manipulation.
Last autumn negative audio deepfakes depicting Labour Party Leader Keir Starmer apparently swearing at an employee and London Mayor Sadiq Khan appearing to denigrate the UK's 2023 Remembrance commemorations went viral online, intensifying apprehensions about the threat of artificial intelligence in politics.
The two deepfakes followed a familiar playbook: use social media to release a single piece of synthetic content designed to tease and shock, and see what happens next. Whilst it's impossible to say for certain, the damage appeared limited despite massive media coverage, possibly because their credibility was quickly undermined by fact checkers and general public incredulity.
Others may have been more effective. In Slovakia, a deepfake audio recording appeared to depict opposition politician Michal Šimečka discussing with a journalist how to manipulate the country's general election. Whilst it was quickly outed as a hoax, its last-minute timing was designed to reach voters as they entered the voting booth, and it caused real controversy.
UK experts and politicians have since been braced for a blast of deepfakery during the five-week election period. Thankfully, their fears have been largely confounded. Instead, a new playbook has emerged - one which hugs the shadows and appears to be rather more organised.
One example: the BBC's Undercover Voter recently revealed a deepfake 'smear network' of people 'shitposting' large amounts of ironic, low-effort content to X/Twitter, which is then amplified by an 'army' of anonymous supporters coordinating their activities on a Discord server.
As the BBC notes, some of the outputs are clearly absurd and satirical, whilst others falsely portray candidates saying politically damaging things. But all are designed to derail productive discussion, distract people and provoke a reaction. And with multiple accounts posting a wide range of content, the network gives the appearance of many independent actors: if any single item or account is discredited, the rest of the network remains unaffected.
Dispersing content amongst many accounts and posts also helps the network avoid public condemnation: most individual clips attract too little attention to draw scrutiny, while the network as a whole reaches a massive audience.
A report by Fenimore Harper uncovered another organised misinformation campaign on Facebook, including over 100 deepfake videos of Rishi Sunak. Though most individual posts only received a couple of thousand ‘impressions’, collectively these malicious deepfakes reached up to 462,000 people.
Only time will tell if this model is making any difference to voter intention or behaviour. But it is a sign that political deepfakery is becoming more diffuse, opaque, and difficult to manage.
Support our work collecting and examining incidents and issues driven by AI, algorithms and automation, and making the case for technology transparency and openness.
In the crosshairs
Image: RIAA, Suno
Major music labels sue AI startups for copyright infringement
Damages of up to USD 150,000 per copyrighted work used without permission are being sought.
Perplexity AI ignores requests not to scrape websites
And it's not just Perplexity - OpenAI (gasp) and Anthropic are also reputedly at it.
Danish child protection algorithm accused of age discrimination
Echoes of the Gladsaxe model.
Chinese geo chatbot accused of censorship, bias
The underlying AI model was developed by Alibaba.
ChatGPT invents 'Holocaust by drowning'
Automated historical revisionism is becoming a thing.
Visit the AIAAIC Repository for details of these and 1,500+ other AI, algorithmic and automation-driven incidents and controversies
Report an incident or controversy.
Research & advocacy citations, mentions
Marchal M. et al. Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data
Ressel J. et al. Addressing the notion of trust around ChatGPT in the high-stakes use case of insurance
Waltersdorfer L. et al. AuditMAI: Towards An Infrastructure for Continuous AI Auditing
Bastidas J., Schooling V. Socio-Technical AI Design For Public Value
Moro Visconti R. Sustainable Artificial Intelligence Issues: From ESG Valuation to Ethical Concerns
Explore more research reports citing or mentioning AIAAIC.
AIAAIC news
Welcoming Megan and Meem to our ranks
A content marketer at Splunk, Megan Jooste has devised and implemented marketing and communications strategies for NASA, Google, the National Parks Foundation, the Rockefeller Foundation, and the Hewlett Foundation, amongst others, and edited a Pulitzer Prize-winning book. Megan will be helping to market AIAAIC. Meem Manab is a Dublin-based postgraduate student currently pursuing a master's degree in IT law, cybersecurity, and data protection. Meem’s research has been published in Nature Scientific Communications and IEEE and ACM conferences. He will join AIAAIC’s editorial team, with a focus on diffusion models.
Introducing AIAAIC values
AIAAIC has developed a set of values that represent our ethos and culture, and inform the behaviour expected of our people (including volunteers), partners and suppliers. We are now working on a Code of Conduct that will align with our values.