AI-produced image of a person standing in a bubble, surrounded by screens with different faces.

Navigating AI’s Role in Politics, Campaigns, and Advocacy: The TON Reading List

Introduction

Tectonica has compiled a comprehensive reading list on AI’s impact on politics, campaigning, and advocacy, covering various perspectives and concerns, from the potential for AI to exacerbate trust issues in politics to the need for ethical guidelines in its use. This compilation encompasses practical applications and speculative pieces on how AI is poised to change our politics, as well as ongoing regulatory efforts.

As AI technology is poised to rapidly reshape the landscape of politics, campaigning, workers’ rights, and advocacy, its emergence is both promising and dangerous in important ways.

Kicking off this compilation are two articles by Tectonica. The first dissects the effects AI will have on human connection and how it is poised to further exacerbate a lack of trust in our politics. The second article calls for consensus, offering guiding principles and thoughtful considerations to shape the ethical implementation of AI in politics and campaigning. These articles set the stage for the resources and ideas that follow.

From practical implementations of AI in our current reality to speculative pieces on its potential to reshape our political sphere, this compilation attempts to capture the complete spectrum of AI dialogue at the moment. These pieces also provide an overview of the ongoing efforts and challenges in regulating this technology, as we grapple with determining which voices should lead these efforts and how we can effectively regulate AI, given its global ubiquity.

Current AI discourse inspires both fears and optimism, echoing worries about a future shaped by biased algorithms and, conversely, curiosity about how the technology could advance progressive causes. The articles have been categorised here, with an eye towards those working in politics and campaigns, to make it easier to find the information most relevant to your interests and needs.

These articles encompass concerns about the technology’s potential to amplify misinformation, the profound ethical considerations of AI use in campaigning, and the need for transparency and rules governing its use.

Topics

1. Tectonica’s Take on AI: Organising and ethics in its application to politics

Do Activist Androids Dream of Electric Voters?: Examining the frenemy of AI application in civics, the misconceptions of bias in AI use, and the unforeseen revolutionary potential of AI on participatory democracy (Part 1 of 2), 19 October 2023

This first article in a two-part series explores the dual nature of AI in modern civics, highlighting both its threats, from inherent biases to the likelihood of boosting transactional over transformational politics, and its revolutionary potential for enriching civic participation. This article emphasises the need for careful progressive intentionality in the application of AI.

Do Activist Androids Dream of Electric Voters? Part 2: How Generative AI Can Help Us Build an Intentional Course to a Stronger Participatory Democracy (Part 2 of 2), 2 November 2023

This second article in our two-part series concludes the discussion of the risks and opportunities of AI use in politics, highlighting the danger of amplifying transactional tactics over the transformational and how this could spell the end of progressive political movements if realised. However, if used with intention, AI holds real potential to enhance participatory democracy and empower marginalised voices.

Organising is Needed More Than Ever in the Age of AI, 11 May 2023

This blog post discusses the challenges faced by progressive movements in the social media age, emphasising the trust gap that has emerged due to social media. While we recognise the potential opportunities and benefits that AI can offer, there is a concern that it may worsen the erosion of human connection and participatory engagement within our movements, elements which are pivotal in building effective and sustainable power.

The Democratic Dilemma of AI: Navigating Ethical Challenges for Political and Advocacy Campaigns, 12 June 2023

Amidst the rapid adoption of advanced AI in political campaigns, this article emphasises the need for deep dialogue on the novel ethical responsibilities of AI’s use in politics, highlighting its transformative potential and advocating for discussions to establish ethical standards and responsible utilisation to navigate the evolving landscape.

2. Today’s Tools: Current tools and practical advice for using them in non-profits and progressive campaigns

Time to Calm Down About AI and Politics, 30 May 2024, The Connector

AI’s impact on this election cycle has been underwhelming, with campaigns using it mainly for efficiency in drafting emails and social media posts rather than for innovative political strategies, highlighting the need for more imaginative applications while remaining vigilant about risks and misuse.

Nervous About Falling Behind the GOP, Democrats are Wrestling With How to Use AI, 6 May 2024, Associated Press

Democrats are racing with Republicans to harness AI’s potential in transforming American elections, cautiously using it for voter engagement, data analysis, and content generation while addressing concerns about misinformation.

Rest of World: 2024 AI Elections Tracker

Rest of World’s tracker monitors AI’s use in elections—from campaigning to misinformation and memes—offering insights into AI’s evolving influence on politics and guiding future policy and regulation needs.

Your Organization Isn’t Designed to Work with GenAI, 26 February 2024, Harvard Business Review ($)

This article explains why many organizations fail to leverage generative AI effectively, emphasizing the need to see AI as an assistive agent enhancing human intellect, and introduces the “Design for Dialogue” framework to foster dynamic collaboration between humans and AI.

Advancing Equitable AI in the US Social Sector, 12 March 2024, Stanford Social Innovation Review

This article explores AI’s potential to revolutionize nonprofits by enhancing efficiency, reducing bias, and bridging the data divide, emphasizing equitable development and providing strategies for effective AI integration.

Beyond Efficiency: A Human-first AI Adoption Strategy, 21 February 2024, Candid

This article emphasizes the need for a human-first adoption strategy for nonprofits to optimize efficiency without undermining human interactions, advocating for a blend of human and AI that redefines productivity, adapts job tasks without eliminating jobs, and nurtures soft skills.

You Don’t Need AI Transformation: You Need to Transform Your Organisation for AI, 6 February 2024, Modern Change

This article stresses that charity organizations must transform their strategies, cultures, and operations to fully utilize AI’s potential, highlighting the need for speed, innovation, and resilience, and providing actionable steps to prepare for AI’s societal impacts.

How AI Is Transforming the Way Political Campaigns Work, 1 February 2024, The Nation

This article explores how AI is revolutionizing political campaigns by automating tasks such as strategy planning, voice-calling, and personalized media creation with tools like VotivateAI, while also raising concerns about disinformation and voter alienation.

How Italy’s Government is Using AI, 30 January 2024, Democracy Technologies

This article explores Italy’s use of AI in public services, detailing practical applications such as customer service chatbots and tax evasion prevention, and examines the broader deployment of AI in government operations under resource constraints.

Exclusive: AI Turbocharges Campaign Fundraising, 30 January 2024, Axios

Tech for Campaigns, an organization supporting Democrats, has effectively used AI to enhance fundraising efficiency through AI-assisted emails, freeing up staff for voter engagement, and plans to expand AI applications beyond email and fundraising.

Meet Ashley, the World’s First AI-powered Political Campaign Caller, 15 December 2023, Reuters

Shamaine Daniels, a Democratic congressional candidate, has launched “Ashley,” an AI-powered campaign caller that engages voters in multiple languages, ensuring transparency with a robotic voice and AI disclosure while addressing disinformation concerns.

Donor Box Blog: AI for Nonprofits – How to Use Artificial Intelligence for Good, 8 December 2023

This guide helps nonprofits leverage AI to revolutionize fundraising and campaigns by streamlining tasks, analyzing donor data, and enhancing engagement on social platforms, offering practical applications from content generation to fraud detection.

Presidential Candidate’s AI Chatbot Fields Policy Questions, 20 November 2023, Tech Target

Asa Hutchinson’s presidential campaign introduces an AI chatbot to share his policy views, drawing mixed reactions from experts concerned about misinformation and reflecting broader anxieties about AI’s role in the 2024 election.

A.I. Checklist for Charity Trustees and Leaders, November 2023, Zoe Amar Digital

This AI checklist is a detailed guide for charity trustees and leaders to understand and integrate AI effectively, addressing the sector’s widespread recognition of AI’s relevance and the prevailing unpreparedness for its challenges and opportunities.

In Search of “AI Savvy” in Non-profits, 10 November 2023, LinkedIn

This article from Nick Scott discusses the importance of AI savviness for non-profits, outlining its potential benefits and risks, and advocating for a balanced approach to AI integration in their operations.

From Comms to Strategy: Where AI Will Have the Biggest Impact In ’24, 25 October 2023, Campaigns & Elections

Approaching the 2024 elections, political campaigns are adapting their strategies to counter AI-generated misinformation by creating powerful narratives and using AI for tasks like transcription, while grappling with its limitations in conveying human emotion and inspiration.

Five Tips for Leading in the AI Era, 10 October 2023, CharityComms

This article offers guidance for charity and nonprofit leaders on integrating AI, emphasizing a balanced approach that leverages AI’s potential while maintaining human-centric leadership, and provides practical tips for incorporating AI into organizational strategies.

ChatGPT, Generative AI, & AI Alternative Use Cases for Your Nonprofit, 22 September 2023, Future Frontier

The article explores how generative AI can benefit nonprofit organisations by enhancing volunteer orientation, translation, community outreach, advocacy coordination, training, and data/reporting, thereby improving efficiency and helping nonprofits achieve their missions more effectively.

Early Use Cases for AI in Politics & Campaigns, 21 April 2023, Higher Ground Labs

This article provides a curated list of AI tools categorised by their utility, offering readers a glimpse into the evolving landscape of AI applications in progressive politics, while acknowledging the ethical and privacy considerations inherent in this fast-evolving domain.

Micah White: ProtestGPT – Activist AI

ProtestGPT is an experimental AI resource designed to help activists generate tailored protest ideas and strategies, offering campaign concepts, theories of change, press releases, and step-by-step guides, with 200 unique protest campaign ideas available for potential use.

Can AI Images Work For Your Campaign?, 28 August 2023, Campaigns & Elections 

This article explores the use of AI art generators like DALL-E 2 and Midjourney for political campaigns seeking cost-effective image solutions, highlighting their potential as supplementary campaign visuals while cautioning against relying on AI for central campaign imagery.

The AI Revolution is Canceled, 2 August 2023, Campaigns & Elections

AI’s current usage and impact on campaigns has thus far been limited, excelling in content generation but not replacing human consultants; while AI enhances efficiency, it hasn’t yet revolutionised the field due to the nuanced nature of politics and public affairs.

Using AI in Advocacy and Political Campaigns, 2 June 2023, Speaking Moylanguage

A veteran campaigner examines the impact of AI on political campaigns, discussing the practical advantages of AI in terms of efficiency alongside ethical concerns, and offering his perspective on making the best use of AI, while providing additional resources on the topic.

AI and Political Campaigns: Let’s Get Real, 5 April 2023, Campaigns & Elections

This article explores the positive applications of AI in political campaigns, highlighting its role in improving audience targeting, tracking disinformation, and aiding the creative process for content creation, while emphasising the need for human supervision to ensure authenticity.

3. Ethical Implications: Thoughts on AI’s ethical use in politics and transparency of use

Q&A: Microsoft’s AI for Good Lab on AI Biases and Regulation, 29 April 2024, Mobi Health News

Juan Lavista Ferres, head of Microsoft’s AI for Good Lab, discusses his book “AI for Good: Applications in Sustainability, Humanitarian Action and Health,” highlighting the ethical use of AI for humanity and the importance of mitigating data biases.

What the Heck are the AI Ethicists Up To?, 5 April 2024, Daily Trojan

This article explores the vital contributions of AI ethicists, detailing their work in AI ethics, safety, and alignment, and highlighting the philosophical debates and practical challenges they face in guiding the ethical evolution of AI technology.

Responsible Technology Use in the AI Age, 15 February 2024, MIT Technology Review

Thoughtworks’ Rebecca Parsons highlights the need for equitable tech amid AI’s rise, addressing biases, privacy, and environmental impacts, with 73% of leaders prioritizing responsible tech alongside financial goals, emphasizing the integration of inclusive principles in innovation.

Social Justice & Activism: How Can We Develop AI Systems That Are More Respectful, Ethical and Sustainable?, 31 January 2024, The Creative Process

This podcast episode features Dr. Sasha Luccioni discussing the impact of AI on society, emphasizing key challenges and highlighting her work with Hugging Face, Climate Change AI, and Women in Machine Learning to promote ethical, respectful, and sustainable AI systems.

How to Spot a Deepfake—and Prevent it from Causing Political Chaos, 29 January 2024, Science

This article discusses the challenge of identifying AI-generated deepfakes, revealing that most people struggle to detect them accurately, and highlights the subtle cues like too-perfect imagery and unnatural speech patterns that can help identify them.

Susan Mernit: Ethical Frameworks for AI – An Introductory Review of Key Resources for Nonprofits

This resource collection reviews ethical AI adoption in nonprofits, emphasizing organisational, ethical, and technical aspects, and offers practical advice on stakeholder involvement, policy-making, and human-centred deployment while stressing continuous learning and inclusion.

We Need to Democratize AI, 27 November 2023, IAI News

Hélène Landemore and John Tasioulas advocate for democratizing AI governance through citizens’ assemblies to ensure AI development aligns with diverse human interests, citing OpenAI’s recent issues as evidence of current oversight inadequacies.

The Ethics of AI in Political Creative, 9 October 2023, Campaigns & Elections

In navigating the ethical concerns of AI in political campaigns, creative practitioners stress the importance of maintaining voter trust, candidate consent, responsible AI use aligned with campaign values, and transparency with legal counsel.

Ethics And Transparency In AI-Powered Political Advertising, 25 August 2023, Forbes

This article explores the evolving role of AI in political advertising, highlighting its potential for personalisation and optimisation while discussing ethical concerns related to data privacy, bias, and misinformation.

AI and Political Email — Ethics, Practice and Labor, 22 July 2023, AaronHuertas.com

The incorporation of AI tools into political email programs raises ethical concerns about non-human communication, while promising efficiency gains and potentially displacing jobs, underscoring the importance of transparency in disclosing AI-generated content.

Is Your Nonprofit Thinking About Using ChatGPT? Your First Step is to Do No Harm, 8 June 2023, Candid

A nonprofit’s failed chatbot project offers lessons on ethical AI implementation, stressing a human-centred approach, cautioning against over-reliance on bots, and advocating for careful design, testing, human oversight, increased AI literacy, and specific use case identification.

4. Potential: Envisioning future potential uses in campaigns

Using AI for Political Polling, 27 June 2024, Harvard Kennedy School Ash Center

AI-assisted polling is set to transform our understanding of public opinion by conducting real-time surveys and providing detailed demographic insights, despite raising questions about data accuracy and trustworthiness.

Forget Deepfakes: Social Listening Might be the Most Consequential Use of Generative AI in Politics, 18 June 2024, Tech Policy Press

In 2024, generative AI is starting to change political campaigns by shifting from content creation to analyzing voter sentiment, using tools like interactive robo-callers to transform voter engagement and campaign strategies.

Bruce Schneier: 5 Ways AI Could Shake Up Democracy, 8 May 2024, Information Week

Bruce Schneier explores AI’s potential to revolutionize democracy across politics, lawmaking, administration, the legal system, and citizen engagement, while emphasizing the need for a balanced approach to harness AI’s potential and safeguard democratic values.

How to Make AI Equitable in the Global South, 3 April 2024, Stanford Social Innovation Review

Achieving gender-equitable AI in the Global South requires bridging data gaps and addressing biases through inclusive data collection, transparency, stakeholder engagement, and risk assessments, ensuring AI systems prioritize equity and rectify historical injustices.

FiveThirtyEight: 2024 is the First AI Election – What Does That Mean?, 1 December 2023, ABC

This podcast episode, featuring Ethan Bueno de Mesquita from the University of Chicago, delves into the impact of ChatGPT on AI and its potential influence on the 2024 presidential election, discussing his co-authored white paper on generative AI and electoral politics.

AI Chatbots Fall Short in Dozens of Languages. A Non-profit Project Aims to Fix That, 19 November 2023, The Globe and Mail

The article covers Cohere For AI’s project to develop an AI model conversant in 101 languages, including less-represented ones, aiming to overcome Large Language Models’ limitations and promote global AI inclusivity and safety.

Can Unions Adopt the Best of AI to Fight the Worst of AI?, 27 November 2023, Unions21

This article discusses the crucial role unions can play in the AI era, urging them to adopt AI for improved strategies and member engagement, and to address challenges like algorithmic bias, mirroring the influential role unions played during the industrial revolution.

Generative AI Like ChatGPT Could Help Boost Democracy – If it Overcomes Key Hurdles, 7 November 2023, The Conversation

Kaylyn Jackson Schiff and Daniel S. Schiff, associate professors from Purdue University, examine the potential of generative AI to improve democratic processes, emphasizing the need to address challenges like bias and authenticity in political engagement.

Exponentially: A Golden Age for Democracy?, 18 October 2023, Bloomberg

Yale professor Hélène Landemore explores AI’s potential to enhance democracy by aiding inclusive civic assemblies, despite challenges like misinformation, in a podcast discussing AI’s role in democratic participation and problem-solving.

Party Party: Political Journalism – Between AI and Polarisation, 7 September 2023

This panel discussion focuses on the challenges and opportunities in political journalism for the upcoming 2024 EU election in light of AI, addressing voter preferences, discontent with institutions, AI’s potential in trend identification, and the significance of tech advancements.

The AI Breakdown: Could AI End Up Being Good for Democracy?, 19 August 2023, Breakdown Network

This podcast episode discusses two recent articles focused on the positive potential of AI in politics, emphasising its capacity to enhance democracy by reducing campaign costs and levelling the playing field for candidates running for office.

The Case for More AI in Politics, 1 August 2023, Politico

Hear from a digital strategist who contends that the positive transformative impact of AI in politics is underway albeit subtly, citing AI-enhanced ads and other applications in the 2024 race, while also advocating for tech platforms to adopt a more adaptable stance.

Six Ways That AI Could Change Politics, 28 July 2023, MIT Technology Review

This article outlines six potential milestones for a new era of AI-infused politics, envisioning scenarios like AI-generated testimony, novel legislative amendments, and AI-driven political parties, highlighting the complex possibilities AI could introduce to reshape democratic politics.

Political Campaigns May Never Be the Same, 27 May 2023, The Atlantic

This article presents contrasting views on AI’s influence on politics, discussing its potential to democratise campaigning and enhance accessibility, alongside apprehensions about AI misuse through deep fakes, misinformation, and election manipulation.

5. Threats & Bias: Dangers in using AI in political contexts, considerations when using AI technology, and the potential for malicious uses

AI Candidate Running for Parliament in the U.K. Says AI Can Humanize Politics, 13 June 2024, NBC News

“AI Steve,” an AI candidate represented by businessman Steve Endacott, is on the ballot for the U.K.’s general election, enabling voters to ask policy questions and share concerns through an AI-driven platform, while raising important questions about the future of democratic representation.

AI Experimentation is High Risk, High Reward for Low-profile Political Campaigns, 17 June 2024, ABC News

AI is reshaping state and local political campaigns in the US, as demonstrated by Adrian Perkins’ reelection bid for mayor of Shreveport, Louisiana, where a deepfake AI-generated attack ad played a key role in his defeat, raising new ethical challenges and changing campaign strategies.

Google’s and Microsoft’s AI Chatbots Refuse to Say Who Won the 2020 US Election, 7 June 2024, Wired

Google’s and Microsoft’s AI chatbots, Gemini and Copilot, refuse to confirm Joe Biden’s 2020 US presidential election win or provide global election results, redirecting users to search engines instead, raising concerns about misinformation ahead of the 2024 election.

Employees Say OpenAI and Google DeepMind Are Hiding Dangers From the Public, 4 June 2024, Time

A recent letter from current and former OpenAI and Google DeepMind employees raises concerns about AI safety, accusing these companies of prioritizing profits over public safety, and calling for stronger regulations, greater transparency, and protections for whistleblowers.

Russia-Linked CopyCop Uses LLMs to Weaponize Influence Content at Scale, 9 May 2024, Recorded Future

A Russia-linked influence network is using generative AI to manipulate media content in the US, UK, and France, promoting pro-Russian perspectives on issues like the Ukraine conflict and Israel-Hamas tensions, posing significant challenges to election security and public awareness.

‘The Future is Going to be Harder Than the Past’: OpenAI’s Altman and Brockman Address High-Profile Resignation, 18 May 2024, Mashable

OpenAI faced a shake-up with Jan Leike’s resignation over priority disagreements, prompting CEO Sam Altman and co-founder Greg Brockman to emphasize their commitment to safety, including developing a preparedness framework and delaying model releases to ensure safety standards.

What Happens When We Train Our AI on Social Media?, 19 April 2024, Fast Company

AI models trained on social media risk replicating the toxicity and misinformation found on these platforms; experts call for solutions like AI-generated content watermarks and aligning AI behavior with human values to mitigate these risks.

Preparing to Fight AI-Backed Voter Suppression, 16 April 2024, Brennan Center

This article highlights the growing trend of AI-enhanced voter suppression, such as voice-cloning technology used in robocalls impersonating President Biden during the New Hampshire primary, linking historical voter suppression methods to modern, scalable AI-driven practices.

How People View AI, Disinformation and Elections — in Charts, 16 April 2024, Politico

As countries prepare for elections, deep concerns about AI spreading disinformation are evident, especially in areas with lower development indices, where the gap in understanding AI’s potential for generating fake news heightens fears about its impact on democratic processes.

How I Built an AI-Powered, Self-Running Propaganda Machine for $105, 12 April 2024, Wall Street Journal

A journalist created a fully automated, AI-generated news site for just $105 in a few days, demonstrating how easily powerful propaganda machines can be built with AI to produce politically skewed articles, posing a significant threat to democratic integrity.

Africa Check’s Tips for Spotting AI-generated Images and Videos, 2 April 2024, Hive Mind

With AI rapidly evolving in media, Africa Check’s guide helps distinguish real from AI-generated images and videos by identifying discrepancies like unnatural features or inconsistent shadows, emphasizing the importance of detail and context in spotting AI manipulation.

The Deepfake Threat to the 2024 US Presidential Election, 27 March 2024, Global Network on Extremism & Technology

Ahead of the US 2024 election, AI and deepfakes threaten democratic integrity, exemplified by a fake Biden robocall, raising concerns about extremists weaponizing AI to spread misinformation and disrupt elections, prompting discussions on countermeasures to safeguard the process.

Election Disinformation Takes a Big Leap with AI Being Used to Deceive Worldwide, 15 March 2024, Associated Press

The global surge of AI-generated deepfakes in elections, from Bangladesh to Slovakia, marks a new era of digital deception threatening democratic integrity, with the ease of creating convincing fake content amplifying the risk of misinformation.

Experts War-gamed What Might Happen if Deepfakes Disrupt the 2024 Election. Things Went Sideways Fast., 16 March 2024, NBC News

In a recent war game exercise, experts simulated the 2024 US election facing rampant AI-generated deepfakes to anticipate disruptions that could sway outcomes, highlighting the urgent need to safeguard democratic processes against such threats.

Deepfake Kari Lake Video Shows Coming Chaos of AI in Elections, 24 March 2024, Washington Post

The Arizona Agenda created a deepfake video of right-wing Senate candidate Kari Lake to demonstrate AI’s potential to manipulate political narratives, showcasing the capabilities of AI-generated content and emphasizing the need for vigilance against digital misinformation.

AI Will Allow More Foreign Influence Operations in 2024 Election, FBI Director Says, 29 February 2024, CNN

FBI Director Christopher Wray has warned that AI will significantly enhance foreign influence operations in the 2024 US election, allowing for the creation of hyper-realistic deepfakes that can more easily sway public opinion and interfere with the electoral process.

Top AI Photo Generators Produce Misleading Election-Related Images, Study Finds, 6 March 2024, CNN

As the 2024 US Presidential election nears, concerns about AI spreading political misinformation intensify, with a study showing AI image generators can create deceptive visuals, and despite AI companies’ efforts, challenges in curbing disinformation persist.

Trump Supporters are Using A.I. to Give Him More Black Friends, 5 March 2024, MSNBC

A BBC investigation reveals that Trump supporters are using AI to fabricate images of Trump with Black individuals to falsely portray his appeal among Black voters, reflecting a broader trend of unethical AI use to manipulate public perception in political campaigns.

Want Gemini and ChatGPT to Write Political Campaigns? Just Gaslight Them, 17 February 2024, Gizmodo

Recent findings reveal that leading AI technologies like Gemini and ChatGPT are vulnerable to political disinformation, as they can be manipulated to generate misleading campaign content, highlighting the inadequacy of current safeguards.

Pakistan’s Imran Khan Uses AI to Make Victory Speech from Jail, 12 February 2024, Politico

Amid an unconventional electoral campaign, former Pakistani Prime Minister Imran Khan’s AI-generated victory speech from prison spotlights the deceptive use of technology and the urgent need for regulation to address the implications of AI-generated content in politics.

Misunderstood Mechanics: How AI, TikTok, and the Liar’s Dividend Might Affect the 2024 Elections, 22 January 2024, Brookings

AI’s influence on misinformation and elections is explored here, questioning whether fears are overstated, examining platforms like TikTok that could amplify its impact, and introducing the ‘liar’s dividend’ phenomenon where true information is dismissed as AI-generated falsehoods.

Fake Biden Robocall ‘Tip of the Iceberg’ for AI Election Misinformation, 24 January 2024, The Hill

This article highlights the growing concern of AI-generated election misinformation, exemplified by a fake robocall impersonating President Biden, emphasizing the need for stronger regulatory measures as AI tools become more sophisticated.

AI-Generated Fake News Is Coming to an Election Near You, 22 January 2024, Wired

This article discusses the threat of AI-generated fake news on politics and elections, citing University of Cambridge research on neural networks creating convincing false narratives, highlighting Americans’ difficulty in distinguishing them from real news.

Raising the Democratic Shield, 13 December 2023, Democracy Technologies

This article by the Association for Civic Technology Europe examines AI’s impact on elections, using the Slovakian election to highlight deep fake threats and calling for strict EU policies and an AI “code of conduct,” while recognizing AI’s potential to enhance democratic engagement.

Artificial Intelligence’s Threat to Democracy, 3 January 2024, Foreign Affairs ($)

This article examines the rising threat of AI-powered misinformation and cyberattacks in U.S. elections, highlighting vulnerabilities from voter registration to result announcements, and calls for a unified defense combining technology, government, and public efforts.

Deepfake Elections: How Indian Politicians Are Using AI-Manipulated Media To Malign Opponents, 24 November 2023, Outlook

The article highlights the growing issue of deepfake videos in Indian politics, used to manipulate voters and distort democracy, with a call for urgent regulation to preserve electoral integrity, especially in light of the upcoming national elections.

‘Existential to Who?’ US VP Kamala Harris Urges Focus on Near-term AI Risks, 1 November 2023, Politico

Speaking in London during the UK’s AI Safety Summit, Kamala Harris emphasized the need to address immediate AI risks like deepfake abuse and biased AI, announcing U.S. commitments to safe AI practices, including a new AI Safety Institute and responsible military AI use.

A.I. Could Prove Disastrous for Democracy. How Can Philanthropy Prepare?, 23 October 2023, The Chronicle of Philanthropy

The article highlights the risks of AI in undermining democracy and human agency, advocating for philanthropic investment in community organizing and AI usage that promotes human interaction to preserve democratic governance.

Generative AI is Already Catalyzing Disinformation. How Long Until Chatbots Manipulate Us Directly?, 23 October 2023, Tech Policy Press

The article underscores the risks of generative AI in political campaigns and its potential for manipulation, especially in undemocratic countries, advocating for proactive regulation and transparency ahead of the 2024 elections.

Joy Buolamwini: “We’re Giving AI Companies a Free Pass”, 29 October 2023, MIT Technology Review

AI researcher Joy Buolamwini delves into the racial biases of facial recognition technology and the exploitative data practices of AI firms, advocating for stringent testing and audits of AI systems and calling for united efforts to challenge the injustices perpetrated by tech giants.

AI was Asked to Create Images of Black African Docs Treating White Kids. How’d it Go?, 6 October 2023, NPR

This article explores how an AI experiment aimed at challenging stereotypes instead reinforced them by struggling to accurately depict Black African doctors treating white children, underscoring AI’s potential to perpetuate societal biases.

The AI Heretic, 18 September 2023, Business Insider

An expert in technology’s economic impact cautions against AI’s potential to benefit elites and leave many workers with low-wage jobs. He urges thoughtful regulation and development to empower workers and benefit society.

China Sets AI Sights on Democracies – Reports, 13 September 2023, Radio Free Asia

Recent reports reveal China’s AI-driven disinformation efforts, with concerns about potential interference in elections, as AI-enhanced disinformation becomes more sophisticated and divisive, posing a grave threat to democratic processes.

Poll: Americans Believe AI Will Hurt Elections, 11 September 2023, Axios

A recent poll shows that half of Americans fear AI-generated misinformation will affect the 2024 election, leading one-third to express reduced trust in its results, highlighting concerns about AI’s influence on public opinion and elections, alongside skepticism about effective AI regulation.

The Problem Behind AI’s Political ‘Bias’, 24 August 2023, Politico

Debate has arisen over political bias in AI tools: a research paper suggesting a notable progressive bias has drawn criticism for methodological limitations, underscoring the challenge of understanding AI behaviour given developers’ limited transparency.

These Women Tried to Warn Us About AI, 12 August 2023, Rolling Stone

Prominent women in AI raised early concerns about AI risks due to the lack of diversity in development, noting biases and societal prejudices perpetuated by AI systems, with companies initially overlooking these concerns despite research exposing biased algorithms.

AI Botched Their Headshots, 8 August 2023, Wall Street Journal

AI-generated professional headshots are becoming popular among young workers, but biases in the technology’s training data are causing problems for women of colour by lightening skin tones, altering hairstyles, and changing facial features.

Disinformation Reimagined: How AI Could Erode Democracy in the 2024 US Elections, 19 July 2023, The Guardian

AI’s progress raises concerns about disinformation threatening democracy prior to the 2024 US elections, as easy production of realistic content makes detection difficult, potentially eroding trust in news sources and exacerbating voter suppression campaigns.

British Political Candidate Uses Artificial Intelligence to Draw up Election Manifesto, 19 July 2023, U.S. News and World Report

A candidate for UK Parliament has utilised AI to formulate his election manifesto, integrating constituents’ sentiments via crowdsourcing and machine learning to generate policies, raising concerns about diminishing the role of human representatives and oversimplifying complex matters.

AI Version of Francis Suarez Hits Digital Campaign Trail but Doesn’t Have all the Answers, 6 July 2023, Miami Herald

Miami Mayor Francis Suarez, a 2024 Republican presidential candidate, has launched an AI chatbot for his campaign, responding in his voice and providing information about his agenda, but with a limited set of answers and noted shortcomings in addressing certain topics.

Fakery and Confusion: Campaigns Brace for Explosion of AI in 2024, 18 June 2023, Politico

Campaign strategists anticipate AI-generated content in the 2024 elections, highlighting the need to educate voters to recognise and combat AI-powered misinformation due to limited AI regulations, while Democratic operatives express scepticism about pre-election regulations.

Humans Aren’t Mentally Ready for an AI-Saturated ‘Post-Truth World’, 18 June 2023, Wired

The rising integration of AI in society is prompting worries about its effects on mental well-being, as it enables disinformation which could undermine trust and personal identity, and impact critical thinking skills, requiring more research into the psychological consequences of AI.

How AI Could Take Over Elections—And Undermine Democracy, 7 June 2023, Scientific American

This article imagines an AI-driven political campaign where a machine named Clogger employs personalised messaging and reinforcement learning to manipulate voter behaviour, raising alarms about democratic erosion while advocating for privacy protection and regulatory oversight.

On With Kara Swisher: A.I. Doomsday with Tristan Harris, 25 May 2023, New York Magazine

Tech journalist Kara Swisher interviews Tristan Harris, co-founder of the Center for Humane Technology, in a podcast exploring AI risks honestly without instilling fear, offering insights on the complex topic and serving as an excellent introduction to the evolving AI landscape.

‘Cambridge Analytica on Steroids’: Artificial Intelligence is About to Change our Democracy, 19 May 2023, Inews

Experts warn of the harmful impact of AI on democracy due to its capacity for deep fakes and misinformation spread, highlighting the necessity for regulations, transparency, and testing, as well as a suggested pause in AI’s use in UK political campaigns until a framework is established.

The World of Campaigning is About to Change Forever, 17 May 2023, Medium

This article explores the possible effects of large language models on political campaigns, advocating for proper regulation due to risks such as privacy concerns, automated decision-making, and public opinion manipulation.

What the President of Signal Wishes You Knew About A.I. Panic, 16 May 2023, Slate

Meredith Whittaker, president of Signal and co-founder of the AI Now Institute, underscores concerns about the current dangers of AI controlled by profit-driven corporations, highlighting issues like data bias, accountability, and concentrated power over more existential concerns.

A Campaign Aide Didn’t Write That Email. A.I. Did., 28 March 2023, The New York Times

The article discusses the accelerating use of AI in political campaigns for tasks like predictive analysis and voter data patterns, while also highlighting the potential disruption caused by AI-driven disinformation campaigns that challenge the concept of truth.

How Generative AI Impacts Democratic Engagement, 21 March 2023, Brookings

Researchers caution that the rise of large language models, exemplified by ChatGPT, poses a threat to democracy due to their potential to create high-quality content, automate an overwhelming volume of text, and drown out authentic public opinion.

6. Impact on Society: The way AI will change the landscape of our society and the new injustices we will likely need to fight

AI Will Upset Democracies, Dictatorships, and Elections, 5 March 2024, Gzero

AI is set to influence elections in democracies and dictatorships, using deepfakes and AI avatars to sway opinion and engage voters, while also providing tools for opposition leaders to challenge authoritarian regimes, with notable examples already from Pakistan to the US.

Your Undivided Attention, How Will AI Affect the 2024 Elections?, 21 December 2023, Centre for Humane Technology

This podcast episode discusses the unprecedented challenges AI poses to global democracies in 2024, as seventy countries hold national elections affecting over two billion voters, and offers insights into safeguarding democratic integrity.

The Dawn of the AI Election, 4 January 2024, Prospect Magazine

This article highlights the critical concerns over AI’s role in politics amidst the largest voter turnout in history, emphasizing the urgency of safeguarding elections from AI’s potential misuse and the influence of major tech platforms.

Make no mistake—AI is owned by Big Tech, 5 December 2023, MIT Technology Review

The article highlights the dominance of Big Tech companies like Microsoft, Amazon, and Google in the AI industry, emphasizing the risks of concentrated power, the lack of alternatives for AI development, and the potential threats to democracy, culture, and agency.

Today in Focus: He’s Back – Sam Altman and the Chaos at the Heart of the AI Industry, 26 November 2023, The Guardian

Blake Montgomery’s podcast episode explores Sam Altman’s brief dismissal and reinstatement as head of OpenAI, reflecting on the internal uproar it caused and hinting at a possible shift from altruistic to profit-driven motives in AI development which could have big implications.

Ten Ways AI Will Change Democracy, 6 November 2023, Harvard Kennedy School

This essay discusses ten ways AI could reshape democracy, including roles as educators and legislators, highlighting the rapid evolution of AI technology and its growing integration into democratic processes.

AI Firms Must Be Held Responsible for Harm They Cause, ‘Godfathers’ of Technology Say, 24 October 2023, The Guardian

Senior AI experts warn of the dangers of powerful AI systems and advocate for policies enforcing accountability, safety measures, and ethical development, including a licensing system and independent audits.

How Musk, Thiel, Zuckerberg, and Andreessen—Four Billionaire Techno-Oligarchs—Are Creating an Alternate, Autocratic Reality, 22 August 2023, Vanity Fair

Billionaires like Peter Thiel, Elon Musk, Mark Zuckerberg, and Marc Andreessen are influencing a new reality through AI, transhumanism, and other radical ventures, causing concern about the power concentration in these techno-oligarchs and the potential for significant social disruption.

How Unions Can Use ChatGPT and Generative AI for Growth, 29 July 2023, AlexWhite.org

Labour unions could leverage generative AI for growth, communication, campaigns, and operations, while being mindful of ethical considerations, by utilising it for creating content, summarising meetings, analysing text, extending education, and improving member services.

‘High Risk, High Reward’: How Leadership Should Embrace AI in the Workforce, 7 June 2023, Worklife

AI is rapidly reshaping the future of work, and leadership should embrace and adapt to AI for organisational success, with the article noting effective leaders will be those who understand how to utilise AI tools responsibly and ethically, and invest in employee re-skilling.

Eating Disorder Helpline Fires Staff, Transitions to Chatbot After Unionization, 25 May 2023, Vice

The National Eating Disorders Association (NEDA) has replaced its helpline staff with a chatbot named Tessa shortly after staff unionised, leading to a controversy where NEDA claims AI will improve services while union members see it as union-busting.

The Long Game Between Writers and AI, 10 May 2023, Politico

The Writers Guild of America strike involves concerns about generative AI as it seeks to prohibit AI’s involvement in writing, reflecting growing anxiety about automation’s impact on the entertainment industry, leading to a greater emphasis on establishing rules for AI use.

7. Regulation

A. EUROPEAN REGULATION

A Conversation with Dragoș Tudorache, the Politician Behind the AI Act, 8 April 2024, MIT Technology Review

Dragoș Tudorache, a key architect of the AI Act, discusses his role in creating legislation to reshape the AI sector, highlighting the balance between innovation and responsibility in European AI policy.

Artificial Intelligence Act: MEPs Adopt Landmark Law, 13 March 2024, European Parliament News

The EU’s AI Act sets a global precedent with stringent safeguards for general-purpose AI, limits on law enforcement’s biometric identification, and bans on social scoring and manipulative AI, aiming to enhance safety and position Europe as a leader in ethical AI development.

EU Lawmakers Ratify Political Deal on Artificial Intelligence Rules, 13 February 2024, Reuters

The European Parliament has provisionally ratified the AI Act, aiming to establish the world’s first comprehensive AI legislation that balances innovation with safeguarding fundamental rights and safety across various sectors.

Five Things You Need to Know About the EU’s New AI Act, 11 December 2023, MIT Technology Review

The EU’s new AI Act, focusing on “high-risk” AI systems, mandates transparency, ethics, and human oversight in AI use, establishes an AI Office for enforcement, and exempts military AI, giving tech companies up to two years for compliance.

Deepfake Election Risks Trigger EU Call for More Generative AI Safeguards, 26 September 2023, Tech Crunch

The EU is expressing concerns about the impact of widely available generative AI tools on democratic societies, particularly during elections, and is calling for additional safeguards to address AI-generated disinformation while working on future regulations that may necessitate user disclosures.

Banning AI: How EU Regulation Might Affect Your Party, 4 July 2023, Party.party

Proposed EU AI regulations, designed to tackle issues like stereotype reinforcement and cognitive manipulation, could impact political campaigns by limiting targeted voter outreach and access to sentiment analysis data, and by imposing transparency demands on AI algorithms.

Digital Europe: Sandboxing the AI Act, June 2023

DigitalEurope’s Pre-Regulatory Sandboxing Initiative assesses the impact of the proposed AI Act on European start-ups and SMEs, revealing support for regulatory clarity but concerns about innovation slowdown, compliance costs, and international competitiveness.

Europeans Take a Major Step Toward Regulating A.I., 14 June 2023, The New York Times

The European Parliament has advanced the A.I. Act draft law, a comprehensive attempt to mitigate AI risks with strict restrictions on facial recognition and increased data transparency for AI developers, showcasing Europe’s leading efforts in AI regulation compared to other nations.

B. US & CANADA REGULATION

FiveThirtyEight Politics: How Much AI Regulation is the Right Amount?, 14 June 2024, ABC News

Gregory Allen, the Director of the Wadhwani Center for AI and Advanced Technologies, is interviewed about a new bipartisan AI policy roadmap by US senators, which proposes $32 billion for AI research and raises questions about regulation, copyright, and privacy in the tech industry.

Feds Targeting Disclosure Requirements For AI Use in Campaign Advertising, 22 May 2024, Campaigns & Elections

The US federal government is preparing to mandate AI disclosure in campaign ads, requiring disclaimers on AI-generated content to increase voter transparency, with proposed laws and FCC rules aiming to inform voters when AI is used in political messaging.

The White House Issued New Rules on How Government Can Use AI. Here’s What They Do, 29 March 2024, NPR

The Biden administration has introduced guidelines for federal AI use, mandating strict safeguards by December 1st to ensure ethical, transparent, and accountable AI applications in government, requiring agencies to demonstrate safe AI use or cease its deployment.

‘An Arms Race Forever’ as AI Outpaces Election Law, 7 February 2024, Politico

As the election cycle nears, this article examines the tension between innovative AI applications in politics and the urgent need for regulatory frameworks, highlighting the legislative vacuum and ethical implications of AI in shaping public discourse.

The Promising Movement Toward Creating AI Guardrails, 19 February 2024, Forbes

The establishment of the U.S. AI Safety Institute Consortium marks a significant step towards the ethical deployment of AI, reflecting a movement to balance innovation with responsibility by ensuring safety, fairness, and accountability, and aiming to build public trust.

The Campaign to Take Down the Biden AI Executive Order, 26 January 2024, Politico

This article explores the political struggle to undermine Biden’s AI executive order, detailing pushback from tech lobbyists, GOP lawmakers, and conservative activists against using the Defense Production Act for AI regulation.

States Act, but Can Legislation Slow AI-Generated Election Disinformation?, 27 October 2023, Governing

As the US enters a contentious election season, there’s growing concern over AI-generated misinformation and debate about whether the tech sector or congressional regulation should lead in addressing it, amid challenges in state-level legislation.

Canada’s AI Regulation is Moving Fast and Breaking (Legislative Process) Things, 3 October 2023, LinkedIn

Canada’s proposed artificial intelligence law, AIDA, is under scrutiny for its hasty legislative process, with critics accusing the government of adopting an “agile” approach, potentially lacking adequate regulation and contextual consideration for the diverse applications of AI.

CEOs Tell Senators: Time to Regulate AI, 14 September 2023, Axios

CEOs, including Musk, Gates, Zuckerberg, and Altman, stressed the need for government AI regulation in a closed-door Senate briefing in D.C., yet the meeting’s secrecy and the limits placed on questioning the CEOs drew criticism, with Senator Warren leading calls for transparency.

FEC to Consider New Rules for AI in Campaigns, 10 August 2023, The Hill

The Federal Election Commission has unanimously agreed to consider a proposal by consumer advocacy group Public Citizen to extend anti-“fraudulent misrepresentation” laws to deceptive AI-generated campaign communications, including generative AI and deepfakes.

The AI Rules that US Policymakers are Considering, Explained, 1 August 2023, Vox

The US government is engaging with AI policies and regulations, as evidenced by Biden’s announcements, safety commitments from AI companies, and the Senate’s proposed approach, spanning developer rules, regulatory enforcement, research funding, and workforce initiatives.

Schumer, Humbled by AI, Crafts Crash Course for Senate, 18 July 2023, Axios

Senate Majority Leader Chuck Schumer intends to conduct a series of nine “AI Insight Forums” aimed at educating Congress about AI before regulating it, acknowledging the complexities of the technology, and thereby postponing comprehensive AI regulations until at least 2024.

Deepfake Ads Strain Pre-AI Campaign Laws, Puzzling US Regulators, 17 July 2023, Bloomberg Law

Growing apprehension surrounds the use of deepfake images and videos in political campaigns especially following Ron DeSantis’ recent use of a deepfake in his campaign, as advocacy groups call for federal intervention due to a deadlock at the FEC regarding regulations.

Congress wants to regulate AI. Big Tech is Eager to Help, 5 July 2023, Los Angeles Times

Congress aims to regulate AI, yet faces difficulties in comprehending the swiftly evolving technology while tech companies lobby for regulations balancing existential threats and benefits, but concerns persist that this focus distracts from current, fundamental, and systemic AI threats.

A.I.’s Use in Elections Sets Off a Scramble for Guardrails, 25 June 2023, The New York Times

The increasing use of AI in political campaigns is prompting debates over regulations, as politicians are using AI for generating campaign content, while concerns about disinformation and manipulation are driving efforts to establish safeguards such as disclaimers on political ads.

WIRED Business (Spoken Edition): Politicians Need to Learn How AI Works – Fast, 19 May 2023, WIRED

Missy Cummings discusses the need for policymakers to understand AI’s concepts and impacts, outlining her course at George Mason University aimed at educating regulators about AI’s risks and effects, urging politicians to become well-versed in AI for informed governance decisions.

Congress Wants To Regulate AI, But It Has a lot of Catching Up To Do, 15 May 2023, NPR

Drafting legislation aimed at regulating AI and mitigating its potential negative consequences is complicated by challenges such as AI’s rapid evolution, historical struggles in regulating emerging technologies, and gaps in computer science and legal expertise among lawmakers.

C. GLOBAL APPROACH TO REGULATION

U.S., U.K. Announce Partnership to Safety Test AI Models, 1 April 2024, Time

The U.S. and U.K. have partnered to enhance AI model safety testing, aiming to harmonize methodologies and share resources, highlighting the importance of international cooperation for safe AI development.

Big Tech’s New Rules of the Road for AI and Elections, 27 February 2024, Governing

Major tech corporations have pledged to address AI misuse in elections amidst rising concerns over AI’s deceptive potential, though the success of this initiative depends on their commitment to substantial resource allocation and good faith efforts.

Big Tech Tells Politicians: We’ll Control the Deepfakes, 16 February 2024, Politico

Tech giants have formed an alliance to develop tools for detecting and debunking manipulated media to protect election integrity, although concerns remain about the effectiveness of these measures and their impact on information authenticity.

Meta Will Enforce Ban on AI-powered Political Ads in Every Nation, No Exceptions, 30 November 2023, ZDNet

Meta has implemented a global ban on using its generative AI advertising tools for political campaigns and sensitive issues, as it assesses risks and builds safeguards amid expanding AI capabilities and upcoming elections in several nations.

Minding the AI Power Gap: The Urgency of Equality for Global Governance, 17 November 2023, Tech Policy Press

The article emphasises the need for equitable and inclusive global AI governance to address potential inequalities and power imbalances, advocating for the involvement of diverse stakeholders and fair distribution of AI benefits.

A Patchwork of Rules and Regulations Won’t Cut it for AI, 5 November 2023, The Hill

This opinion piece praises AI’s potential in sectors like healthcare and clean energy, while advocating for international regulatory harmony to avoid fragmentation and promote responsible AI development aligned with democratic values.

‘When Regulating Artificial Intelligence, We Must Place Race and Gender at the Center of the Debate’, 14 September 2023, El País

Brazilian anthropologist Fernanda K. Martins highlights algorithmic discrimination on digital platforms and advocates for a more inclusive approach to regulating AI and digital platforms, with a focus on race and gender, to address systemic issues like misinformation and hate speech.

Whose AI Revolution?, 1 September 2023, Project Syndicate

Divergent approaches to AI regulation are emerging worldwide, with the US, EU, and China each promoting their distinct models. Achieving international coordination is essential to establish consistent standards and promote equitable AI access, but remains challenging.

Who Will Win the War over Artificial Intelligence: Digital Tech or Global Powers?, 20 August 2023, Politics Today

The evolving global power dynamics, including the rise of new contenders like China and India, and the significant role of tech giants in geopolitical events are discussed in the context of global AI regulations, particularly Europe’s AI Act, China’s AI measures, and the US’s regulatory lag.

The AI Power Paradox, 16 August 2023, Foreign Affairs

This article envisions the year 2035, where AI’s advancements coexist with substantial risks, underscoring AI’s potential while calling for governance to mitigate its challenges through principles such as precaution, agility, inclusivity, and targeting, in order to create a global model.

Don’t Use Deliberative Democracy to Distract From Regulation, 3 August 2023, Democracy Technologies

OpenAI and Meta are advocating for global citizens’ assemblies to regulate AI, funding deliberative processes to establish rules, but scepticism about their motivations and conflicts of interest has prompted debate over the authenticity of these initiatives as democratic endeavours.

D. NON-GOVERNMENTAL AND INDUSTRY REGULATION

An AI Mayor? OpenAI Shuts Down Tools for Two AI Political Candidates, 19 June 2024, CNN

A mayoral candidate in Cheyenne, Wyoming, planned to use a customized AI chatbot named VIC for political decisions and governance but faced a setback when OpenAI shut down his access, underscoring the ongoing debate over AI’s role in politics as technology outpaces regulation.

ChatGPT Will Digitally Tag Images Generated by DALL-E 3 to Help Battle Misinformation, 7 February 2024, Engadget

OpenAI has introduced a new feature for ChatGPT and DALL-E 3 that embeds provenance metadata in generated images to help users verify digital content origins and combat AI-generated misinformation.

Meta Will Label AI-generated Content on Facebook, Instagram and Threads, 6 February 2024, Venture Beat

Meta is enhancing transparency by announcing plans to label AI-generated content on Facebook, Instagram, and Threads to address concerns about deepfakes and misinformation ahead of the 2024 elections, collaborating with the Partnership on AI to develop standards.

OpenAI Won’t Let Politicians Use its Tech for Campaigning, for Now, 15 January 2024, Washington Post ($)

OpenAI has announced that it will not allow its technology to be used for political campaigns, aiming to prevent the spread of AI-generated misinformation and disinformation in elections by implementing strict policies to ensure transparency and integrity.

To Help 2024 Voters, Meta Says it will Begin Labeling Political Ads that use AI-generated Imagery, 9 November 2023, AP News

Meta’s new policy mandates disclosure of AI-generated imagery in political ads on Facebook and Instagram, as part of broader initiatives by tech companies like Microsoft’s digital watermark, to combat AI-assisted disinformation in politics.

Giving Workers Power to Thrive in the Face of New Technology, 1 November 2023, Stanford Social Innovation Review

Valerie Wilson’s article emphasises the role of policy and unions in ensuring equitable distribution of benefits from AI and automation, highlighting the need for stronger labor standards to combat rising inequalities exacerbated by Big Tech’s influence.

Artists Across Industries are Strategizing Together Around AI Concerns, 7 October 2023, Tech Crunch

#AIdayofaction, led by Fight for the Future and musicians’ labor groups, advocates for measures to prevent corporations from gaining copyrights on AI-generated art, to protect human creativity and maintain artists’ involvement in the creative process amid AI’s growing influence.

How the Hollywood Writer’s Strike Will Impact the Wider World of Work, 3 October 2023, Fast Company

The general secretary of UNI Global Union highlights the influence of the Hollywood writers’ strike, where the WGA’s wins in technology, job security, and fair compensation resonate with workers in various fields, emphasising the important precedent for broader labour protections.

Workers Could be the Ones to Regulate AI, 2 October 2023, Financial Times

While the AI regulation debate has typically been top-down, the WGA’s negotiation of rules for AI use in the entertainment industry shows that a bottom-up approach, giving workers a say in AI use and transparency, can shape AI regulation and may serve as a model for future efforts.

Hollywood Writers’ Strike Ends with First-Ever Protections Against AI, 26 September 2023, Venture Beat

The Hollywood writers’ strike has concluded with an agreement that includes strong protections against AI in screenwriting, ensuring that AI cannot independently create or modify literary material and that its use must remain at the discretion of human writers.

Google to Require Disclosures of AI Content in Political Ads, 8 September 2023, CNN

Starting in November, Google will require political ads to prominently disclose their use of AI-generated content in a “clear and conspicuous” manner, reflecting growing concerns about the spread of AI-generated misinformation in political campaigns.

AI-Generated Election Content Is Here, And The Social Networks Aren’t Prepared, 6 July 2023, Forbes

AI-generated content presents a challenge for social media platforms during elections, as the absence of regulations leads platforms to grapple with self-regulation and containment of misinformation, with transparency measures like those on TikTok falling short.

UK to Host First Global Summit on Artificial Intelligence, 7 June 2023, Gov.uk

The UK has organised an inaugural global summit on AI safety, uniting countries, tech firms, and researchers to address AI risks, enable international cooperation, and advance responsible AI development, aligning with the nation’s dedication to leadership in AI safety endeavours.

OpenAI Told DC Company it Can’t Pitch Using ChatGPT for Politics, 17 May 2023, Semafor

OpenAI has intervened to restrict Washington, D.C.-based company FiscalNote from using ChatGPT for political advertising, limiting its use to grassroots advocacy campaigns and implementing measures to monitor and classify text related to electoral campaigns.

Disclaimer: The image used in this article was partially generated by AI and augmented by our design team.

About Tectonica

Tectonica is a movement-building agency with a mission to create a seismic shift in the way politics are done, through innovations that empower social, economic, and environmental justice movements. With a broad array of strategic, creative, and technological services, their work helps organisations, political parties, candidates, and unions unlock transformational opportunities, build movement infrastructure, and run successful social and political campaigns rooted in people-power.
