AI presents new ethical challenges for political and advocacy campaigns. Here is an excerpt from an article by Ned Howey from Tectonica, with further reading from the Tectonica Organising Network TON newsletter. The TON newsletter curates content on digital organising and strategies for progressive change-makers. Sign up to the TON Newsletter here.
The rapid mainstream adoption of the new generation of AI calls for reflection on the ethical responsibilities for its use in politics and campaigns. We can use this moment to establish ethical standards for the responsible use of AI in our politics.
The rapid adoption of the newest generation of AI is poised to have a profound impact on political campaigns and civic engagement, both through broader changes in society and our specific use of these tools in campaigns. Amidst its rapid development and ever-emerging applications, it is crucial to dive deep and engage in a dialogue about the ethical responsibilities of its use in politics – including entirely new ethical considerations which did not previously exist for earlier technologies. This transformative technology holds the potential to reshape campaign operations, information dissemination, and resource allocation on a significant scale. Now is the moment for meaningful discussions and to strive towards a consensus regarding the ethics surrounding its use in political campaigns. Those of us conducting political work simply cannot wait for regulation to decide what practices should be considered ethical or not.
In this blog, we dive into fundamental principles and key ethical considerations we have identified (so far) as essential for this discussion. Our aim is to share the breadth of concerns, and promote dialogue and consensus-building for the establishment of ethical standards. By facilitating these discussions and promoting consensus, we aim to be better prepared to effectively navigate the intricacies surrounding AI, and lay the groundwork for responsible and ethical utilisation of these powerful tools.
Core principles that we believe must be recognised as underlying the logic of this discussion:
- Principle 1: We Need Rules and Norms Specific to Political Use.
- Principle 2: Existential Threats Might Be Real But Might Also Be a Dangerous Distraction.
- Principle 3: New Generation AI Cannot Safely Determine Our Ethics.
- Principle 4: Disinformation is a Symptom, Not the Root Problem Itself.
- Principle 5: AI Ethical Obligations Go Beyond That of Current Technologies (Internet, Data and Social Media).
- Principle 6: Human Authenticity and the Role of AI.
Key considerations that I hope will foster discussion and encourage consensus on norms around the use of these tools in politics include:
- Consideration A: Data and Privacy.
- Consideration B: Voter Suppression and Discouragement of Civic Participation.
- Consideration C: Disinformation, Fake News, and Deep Fakes (On a Scale Never Before Seen).
- Consideration D: Inaccuracy – Systemic Guidance is a Greater Threat than Examples of Full-Blown Misinformation.
- Consideration E: AI in Decision-making.
- Consideration F: Disclosure and Transparency of AI Use.
- Consideration G: Limits on Which New Generation AI Techs Should Not be Used for Politics.
- Consideration H: Limits on How We Should Use AI Techs for Politics (Application to Certain Activities).
- Consideration I: Our Responsibility to Systemic Externalities and Impact.
The full article provides more detail on each principle and consideration. Read The Democratic Dilemma of AI: Navigating Ethical Challenges for Political and Advocacy Campaigns by Ned Howey on the Tectonica site.
We simply cannot wait in silence until ‘everything is figured out’ before committing to ethical agreements about the use of new generation AI in political and advocacy campaigns.
A selection of articles and podcasts gathered in the June and July 2023 editions of the TON (Tectonica Organising Network) Newsletter.
The use of deepfake images and video in political campaigns has sparked increasing concern. While nine states have implemented regulations, federal legislation remains uncertain. Recently the FEC deadlocked when considering regulations, prompting advocacy groups to call on Congress to intervene. Ron DeSantis’ use of a deepfake image showing Trump embracing Dr. Fauci has drawn attention to the issue, as it is the first clear use of a deepfake by a major presidential candidate.
Senate Majority Leader Chuck Schumer plans to organise a series of nine “AI Insight Forums” in an effort to educate Congress about AI before regulating the technology. Recognising the need for humility in the face of AI’s rise, Schumer aims to provide his colleagues with a crash course on the complexities of AI through these forums, covering topics such as copyright, national security, privacy, and elections. By taking this approach, Schumer acknowledges the lack of bipartisan consensus for AI legislation. The forums are expected to delay comprehensive AI legislation until at least 2024.
Disinformation Reimagined: How AI Could Erode Democracy in the 2024 US Elections, 19 July 2023, The Guardian
Experts warn that advances in generative AI could lead to a future where disinformation becomes ubiquitous, posing a significant threat to democracy ahead of the 2024 US elections. AI-generated content, including photorealistic images, voice audio, and human-like text, can now be created easily and at scale, making it increasingly difficult to detect and debunk false information. This acceleration of propaganda is likely to erode trust in news sources and intensify voter suppression campaigns. Furthermore, the lack of effective regulation poses a significant challenge in countering this threat.
The European Parliament has taken a significant step toward regulating AI by passing a draft law known as the AI Act. This law, considered the world’s most far-reaching attempt to address the potential risks of AI, introduces new restrictions on the technology’s riskiest uses. It includes severe limitations on facial recognition software and requires developers of AI systems, like the creators of ChatGPT, to provide more information about the data used in their programs. While the law awaits final approval, it highlights how far ahead Europe is in regulating AI compared to other Western governments.
The UK will host the first major global summit on AI safety, bringing together key countries, leading tech companies, and researchers to agree on safety measures and evaluate the most significant risks from AI. The summit aims to promote internationally coordinated action to mitigate these risks and develop a shared approach to ensure safe and responsible development and use of AI. The summit aligns with the UK’s commitment to an open and democratic international system and its aim to lead the way in AI safety efforts.
Is Your Nonprofit Thinking About Using ChatGPT? Your First Step is to Do No Harm, 8 June 2023, Candid
In this article, we hear about the lessons learned from a nonprofit’s unsuccessful chatbot project, along with recommendations for nonprofits looking to adopt ChatGPT and other AI technologies. Stressing the importance of ethical AI adoption, the article emphasises the need for a human-centred approach that avoids harm, and highlights examples of chatbots misbehaving, cautioning against substituting bots for human counsellors. The author advocates careful design, testing, and human oversight, and suggests steps such as increasing AI literacy and identifying specific use cases for AI.
The use of AI in political campaigns is raising concerns and prompting calls for regulations. A growing number of politicians are utilising AI technology to generate campaign materials, including images and messaging, which can be spread rapidly to a wide audience. While proponents argue AI is a cost-effective tool to engage voters, there are valid concerns about its potential for spreading disinformation and manipulating public opinion. Efforts are underway to establish new guardrails, including legislation requiring disclaimers on political ads that use artificially generated content.
The increasing presence of AI in our lives is raising concerns about its impact on our mental well-being and our concept of truth. AI has made it easier to produce disinformation, including fake images, deepfakes, and fake news, eroding people’s trust in what they see and hear. Deepfakes pose a threat to personal identity, as individuals may be portrayed doing things they never did, and reliance on AI may diminish our capacity for critical thinking and learning. Psychologists are only beginning to explore the implications of living in an AI-saturated world, and more research is needed into the psychological effects of AI engagement.
This article explores the role of AI in the future of work, emphasising the need for leadership to embrace and adapt to AI rather than fear it. It highlights that AI is here to stay and will be critical for organisational success, with AI-informed leadership possibly setting effective organisations apart. Senior leaders must understand how to ask the right questions to make sound judgments in an AI environment. This article encourages leaders to experiment with AI, develop a deeper understanding of AI tools, and pay attention to the impact of AI on their workforce, while being sure to use AI responsibly, consider ethical implications, and invest in re-skilling employees.
Campaign strategists are grappling with the anticipated surge of AI-generated content during the 2024 elections. Progressive group Arena hosted a meeting discussing the potential for generative AI to produce misinformation on an unprecedented scale. While recognising the need to train campaign staff to use AI, the focus was on educating voters to identify and combat AI-powered misinformation. Concerns were raised about the lack of federal regulations around AI use. Democratic operatives expressed scepticism that industry-wide regulations would arrive before the 2024 elections, and emphasised the importance of trusted messengers and human involvement in campaigns to navigate the volatile landscape.
In this podcast interview, tech journalist Kara Swisher sits down with Tristan Harris, co-founder of the Center for Humane Technology and a key voice among the calls for slowing down the AI arms race, to dive into the complex topic of AI risk. They explore the potential dangers of artificial intelligence, with the intention of assessing the risks and potential mitigations honestly, rather than instilling fear or doom of far-off hypothetical scenarios. This is an excellent introduction to the AI discussion and captures some key insights in a rapidly evolving landscape.
In this podcast episode, Missy Cummings emphasises the urgency for politicians and policymakers to grasp the concepts and implications of AI. To address this, she’s developing a dedicated course at George Mason University for policymakers and regulators, focusing on AI’s effects and potential risks. Cummings draws parallels between AI in autonomous driving and large language models like ChatGPT, highlighting the need to prevent regulatory capture. She urges politicians to familiarise themselves with AI to make informed decisions about its governance.
This article on the potential impact of AI on politics offers both optimistic and pessimistic perspectives. Some believe that AI could revolutionise democracy by dramatically reducing campaign costs, levelling the playing field for small campaigns, and making politics more accessible. However, there are concerns about the misuse of AI, including voice impersonation, deep-fake videos and use in misinformation campaigns that could sway elections. While AI’s use in increasing political participation is debatable, its consequences and ethical implications remain a huge area of concern.
This article discusses the potential impact of AI on elections and democracy, exploring the concept of an AI-driven political campaign by envisioning a realistic scenario where a machine called Clogger, developed by political technologists, uses AI to maximise the chances of its candidate winning an election. Clogger would generate personalised messages tailored to individual voters, employing reinforcement learning to refine its strategies and manipulate behaviour. The article highlights concerns about such a tool’s lack of regard for truth and the potential erosion of democratic processes if such machines determine election outcomes. It ends with a call for privacy protection measures and regulatory scrutiny as potential safeguards against the negative impacts of AI-driven campaigns.
Sebastián Rodríguez, a veteran campaigner, explores the impact of AI on advocacy and political campaigns. While AI offers benefits such as saving time and resources, there are clear concerns about its ethical use. Rodríguez distinguishes between “General AI,” which aims to outperform humans across cognitive tasks but remains a work in progress, and the AI already prevalent in applications such as facial recognition, self-driving cars, and generative tools like ChatGPT. Rodríguez shares his personal approach to AI and provides further resources on the topic (including from our own Ned Howey!).
In this interview, Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at NYU, raises concerns about the dangers of AI and the risks associated with profit-driven corporations controlling these systems. Downplaying hypothetical future scenarios of superintelligent AI, she argues we should be more concerned with the immediate issues at hand such as data bias, lack of accountability, and concentrated power, as well as the harm faced by marginalised communities due to AI deployment. She notes we must enact proactive measures to address these already present risks.
Artificial Intelligence Warning Over Human Extinction Labelled ‘Publicity Stunt’, 1 June 2023, Independent
In a recent letter, the Centre for AI Safety raised concerns about the potential risk of AI leading to human extinction, calling for it to be treated with the same urgency as pandemics or nuclear war. However, Professor Sandra Wachter from the University of Oxford (we believe rightly) dismisses these claims as “science fiction fantasy” and labels the letter a “publicity stunt.” Instead, she argues, the focus should be on immediate issues such as bias, discrimination, and environmental impact, rather than speculative scenarios of the distant future.
The National Eating Disorders Association (NEDA) has recently fired its helpline staff and replaced them with a chatbot named Tessa, shortly after the workers unionised. NEDA claims the move to AI was planned and will better serve individuals with eating disorders, but union members argue that it is a union-busting tactic. The chatbot aims to address body image issues using predetermined responses, but advocates note that the personal touch and lived experience of human helpline staff cannot be replaced by a chatbot, especially in the sensitive area of mental health support.
- Organizing is Needed More Than Ever in the Age of AI
- Artificial Intelligence and Social Justice – this 2019 article outlines some of the issues related to earlier generations of AI and the implications for campaigners.