Listen to two Commons Conversations Podcasts about AI technologies, campaigning and social change.
Listen to Podcasts
Emerging AI Technologies and Campaigning with Hannah O’Rourke
Aoife Carli-Hannan, engagement officer with the Commons Library, chats with Hannah O’Rourke, co-founder of the UK-based Campaign Lab and a trainer with Social Movement Technologies. They discuss the role AI technologies can play in supporting campaigners with messaging, outreach and other opportunities. They also canvass issues regarding ethics and regulation, and ways in which activists can shape the use of technology as well as overcome its barriers and limitations.
Aoife Carli Hannan
Before we begin the interview, I would just like to acknowledge that I’m personally tuning in from the unceded land of the Wurundjeri Woi-Wurrung peoples of the Kulin nation in Naarm, which is also known as Melbourne, and pay my respects to Elders past, present and emerging. So, we have Hannah here today from the UK. Firstly, I would like to know a little bit about you, Hannah, and a little bit about Social Movement Technologies.
Sure. So, Social Movement Technologies is an organisation, an NGO, that provides training, advice and strategy for people who are building movements and trying to build power – particularly on how they can use technology in a way that helps them achieve their campaign objectives. They run lots of different trainings with people and groups all over the world, both big and small, on things like how you organise decentrally and how to respond to cybersecurity threats, and they have a strand looking at how you can use AI tools in campaigning. So, I’m a trainer with them – I help train in AI and campaigning. Then in the UK, which is where I’m based, I run an organisation called Campaign Lab, which looks at campaign innovation and how people are innovating in terms of campaigns and responding to new technology.
Aoife Carli Hannan
Awesome. So, in terms of new AI technologies, such as ChatGPT, how would you say that they can help us in crafting effective campaign messaging?
So, this is a really interesting one. The biggest thing about things like ChatGPT is that they’re incredibly good at language, because they’re large language models. They’ve taken in millions of data points and lots of text from all over the internet, and then what they’re doing is sort of predicting the next word. People sometimes think, oh, this technology is thinking – it’s not actually thinking, it’s just predicting the next most likely word. This is incredibly helpful when you’re trying to come up with messaging or think about how to construct sentences. So, I think the most useful thing that things like ChatGPT can do for us is help us avoid that sort of blank page when you’re just starting with something – they’re very good at giving you a starting point. I definitely would always suggest that people don’t just run with the first thing that comes out of these models. A lot of the art is in successively prompting them and being like, oh, could you make that a little bit shorter? Oh, I’d like the tone to be a bit more informal, or can you make this sound like this person? Those can be really useful prompts in terms of getting more interesting things out of the model. But I certainly would advise using it very much in partnership with human creativity, because that’s still a really important part of what we do.
Aoife Carli Hannan
Yeah, awesome. So, it’s kind of more like a dialogue with ChatGPT as you’re creating. So, how can AI play an effective role in reaching specific audience groups during campaigns?
So, one thing that I’ve found it can be quite useful for is if you’re trying to reach, say, a group who are quite different to you, or you’re trying to, I guess, bridge-build between two very different groups when you’re part of one of them. One thing that can be quite helpful is that you can sometimes ask ChatGPT to take on a persona, and you can have a kind of discussion with that persona. Obviously, in some ways, this is quite shallow – it’s just manufacturing a persona. But it’s also very useful if you’re a very small campaign group and, say, you can’t afford to pay for polling or focus groups. This is another way to explore or think about that.
There’s been some really interesting research – there’s a good Medium article about it – where they sort of simulated some focus groups around the junior doctors strikes. A data scientist, I think, basically created various personas based on the British public, asked ChatGPT to take on these personas in kind of focus groups, and had them discuss their political attitudes towards the junior doctors strike. They then checked whether that correlated with things like polling or focus groups that had been run previously, and I think they found quite a lot of crossover. So, that kind of thing can be quite useful. I would say there are always limits to it, in the sense that it’s no substitute for actually talking to real people about how they think and feel. I think you also have to be aware that ChatGPT is obviously trained on data from our society, and our society contains all sorts of inherent, systemic biases. So, any model that you’re interacting with will have those baked in, and you’ve always got to be aware of that. In the sort of training we run, I always go to the image generation software and say, “type in a picture of an Irish person”, and I often get someone who is very short and all dressed in green. Being Irish myself, I’m like, well, that’s not necessarily an Irish person, that’s a very stereotypical Irish person. So, you will get stereotypes and things too. I think particularly if you’re trying to do work that overcomes that, you just need to be really mindful when you’re engaging with the models. That’s not to say you can’t overcome it in some way, but just be aware that there are all kinds of limitations.
Aoife Carli Hannan
Yeah, awesome. And what would you say are some of the other limitations of ChatGPT and other such technologies?
I guess the biggest thing is that it’s not always correct, and it does get things wrong. So, you can’t completely trust all the information that’s coming out of it, and you never should.
We always talk about this idea of keeping a human in the loop. You shouldn’t really be issuing content that is wholly generated by ChatGPT; it should always be part of a broader workflow in which a human is involved in checking it.
So, I would always caution against just taking the first thing and not checking it, because it can get things wrong. It’s very limited by its training data. Obviously, there are some versions of it that can browse the web, which means it can be more up to date, but the basic version of ChatGPT is limited up to a certain point in time. Therefore, if you’re relying on it for current events, it’s not super reliable unless you’re using the version that has web browsing enabled. I think there are also some ethical questions we need to think about more deeply – whether we should be using these tools at all, or the deeper impacts around copyright and things like that. There’s a really interesting example where lots of artists, who are really upset with image generation models, have started organising in a really interesting way. When their artwork is online, they’ve created a kind of digital poison, which means that if it’s sucked up by a model, the model doesn’t read the art correctly, and that then poisons the training data set. Yeah, it’s a very interesting way of fighting back. So, there are a lot of deeper ethical questions about jobs, copyright, and AI safety – that’s another big one.
So, I think those are some other, deeper ethical limitations. I mean, from my point of view, where I’ve got to on that is: number one, I’m very aware that governments need to take some precautions in trying to legislate around safety, particularly with these models, and obviously we need to have a really robust response in terms of unions and job security and things like that. But I also do believe that people, particularly campaigners who might be struggling with very small resources and small teams, should not let the opportunity to do more using this technology pass them by.
I also very much believe that when a new technology enters the world, people who are trying to do good in the world have a duty to use that and shape how it’s being used.
If we don’t engage with it, we’re just surrendering that whole territory to people who might use it for much worse reasons. I very much believe that with technology, it’s important for people with missions, people who want to do good, to shape how it’s used – and that’s happened throughout all kinds of history. I think, particularly with newer phases of technology, we have often been quite slow to build things and make things – to use new technology to make the kind of world we want to see. I think that’s a really important thing to do. So, that’s where I rest on that, and that’s how I square that circle. But that’s not to say there aren’t significant ethical questions, which people really need to be working on and thinking about, and I do know a lot of campaigns and communities organising around those questions, which I’m very glad exist.
Aoife Carli Hannan
Yeah, absolutely. I had another interview with someone called Ned Howey from Tectonica, and we had a similar kind of conversation around those particular ethical challenges. But one of the things that he talked about a lot – that I’ve also noticed – is that there’s a bit of a hesitancy, I think, in a lot of people working in campaigns, because we know there are ethical challenges, so we’re not sure about using AI. But in doing so, we’re leaving it up to our opponents, so to speak, to use AI in ways that are not so constructive – and he talked about how important it is for us to take these tools on constructively. So, on that note, do you have some examples of how AI has been used successfully to improve messaging in electoral campaigns or other types of campaigns?
So, we’re still quite early on, in the sense that I think people are still finding their feet with how to use this well. It has definitely been used in different electoral campaigns around the world with varying degrees of success. I think one campaign used it very much to create lots of funny memes of their opponent, and that was quite interesting, because it’s very obviously fake generated content, but it was, you know, funny. In the UK, we’ve had some AI deepfakes that have been released on Twitter, which has obviously been quite concerning and a bit of a problem. Then it was very interesting in the States – I think Ron DeSantis’s campaign started using ChatGPT hooked up to WhatsApp. So, his campaign would WhatsApp people, and then they’d be having a back and forth. Then I think one activist noticed that the responses were quite stock, so they asked it a maths question: “can you tell me what two plus two is?” And dutifully the model goes, two plus two is four. The next question was, “are you an AI?” And it said, “yes, I was built by OpenAI”. Obviously, that’s really wrong, because at no point was anybody informed that they were talking to an AI. So that was kind of problematic. You’ve also seen in the States various political adverts being made with AI.
I think you’re starting to get more models trained on, say, bespoke political data, which will then craft messaging for political campaigns a bit better. So, we’re seeing the beginnings of it, but I’ve not seen it being used massively yet – you’re seeing the beginnings of people experimenting and trying to use this kind of technology, both for good and for ill. I think there are quite a lot of opportunities. But again, there was an interesting study done, I think by Oxford University, where they tested general AI-generated messaging against AI-generated messaging targeted at specific audiences. They found that the general AI-generated messaging was effective, whilst the targeted messaging wasn’t any more effective than the general one. So, I think there’s still mixed success in terms of how people are using it to target. I mean, the one success that I’ve found when I’ve been messing about with it is that if you ask it to take on the tone of voice of a person, it’s quite good at coming up with really interesting things that you might not have thought of before.
So, you know, in the UK we have low traffic neighbourhoods, which have become a very contentious issue in places like London, and I said, “can you write a defence of low traffic neighbourhoods” – which obviously mean neighbourhoods with less traffic – “in the style of Donald Trump?” And actually, what it came up with was really, really interesting, and I was like, oh, I hadn’t thought about this issue in that way. It was very much emphasising community and keeping communities safe, that kind of thing, whereas normally our instinct would be to go towards the environmental concern or orientation – a very different way to how I would have thought about it. So I think, yeah, it can be very useful for that, but I’ve yet to see it being used on a larger scale.
Aoife Carli Hannan
Touching on the limitations again – obviously, some of those are beyond campaigners themselves, and will require their own campaigns around regulation and that kind of thing. But for campaigners in general using these new AI technologies, how would you suggest they overcome some of the limitations or barriers that you’ve talked about?
So, I think the biggest way to overcome these limitations is just to check the content that’s coming out of the models – and again, that second rule about having a human in the loop is really, really important. One of the big problems as well, and I guess this is a more general problem, is that we don’t actually know how to limit these models. There are certain ways you can create limits in terms of what kind of thing it will talk about. So, if you’re, say, building a chatbot to talk about your campaign issues, you can limit it a bit. But there will always be ways for people to get it to talk about something else, and you can’t fully restrict it, because while we know at the top level that it’s predicting the next word, it’s not like we can really understand those individual predictions – we’re way beyond being able to understand that. So, I would say it’s quite hard to limit what it focuses on, and I would definitely be aware of that if I was building things on top of it. I think also, fundamentally, the technology is still quite unstable. So, I’d be wary of building huge things that depend on it, or making it part of a much bigger automated process. I don’t think we’re quite there yet – we might be in six months, but currently that’s still pushing against the limits. I think the biggest way that campaigners can use it productively is as something they are in dialogue with – to see it as one tool in the suite of different tools you use when you’re creating something in your job.
So, I would say the main way to overcome the limits of this technology is by treating it like a tool that you use, being very much in dialogue with it, and not letting anything out that hasn’t been checked by you.
I think there are also deeper ethical questions for campaigners, particularly around things like AI-generated images. Recently, I think, Amnesty came under fire because they used an AI-generated image to publicise a fundraiser for people who had been victims of the police at a protest – they basically used AI-generated images of the protest. What was really interesting about that was that they were really transparent about the fact that they were using AI, and they said the reason was that they wanted to protect the identity of the protesters – they wanted to still convey the image of the protest, but protect people’s identities – which I personally thought was actually a fair ethical line to draw. But lots of other people were like, no, you’re muddying the waters. Or, you know, you shouldn’t be using AI images at all, or what about the photographers who might have been taking pictures of the protest – would you not use one of their photos? So, it sparked quite a big debate. I think with images, you’ve actually got to think about why you’re using the image. Organisations will have to have these kinds of deep ethical debates internally and think about, okay, when do we use an image?
I mean, one of the ways I think the images can be really useful is where you’re trying to illustrate something conceptually which you might not have been able to show before. For example, with a campaign around transparency in Parliament, it was really cool to be able to make an image of Parliament in a glass box. That’s conceptually quite an interesting thing to convey. So, I think it can actually increase the sophistication of the kind of images we use in campaigning. But you wouldn’t want it to be constantly creating images of people – that feels like it’s on the edge of something that might be wrong. One example where it might be right is if you’re working with vulnerable groups who don’t necessarily want to be photographed but still want their stories to be told. So, you know, there are pros and cons. You have to weigh it up and understand, okay, why am I doing this? Is what I’m doing ethically helping my cause? Is it harming anybody? Is it actually helping people to tell their stories, or is it not? There are fuzzy lines, which I think every organisation has to work out for themselves.
Aoife Carli Hannan
Absolutely, and like you’re saying with Amnesty – because it’s all so new, there’s going to be a lot of back and forth and figuring out, amongst the broader community as well as within organisations, which things sit well and which things don’t. I was just remembering that I recently saw that, I think it was in South Korea, the President had created an AI avatar to appeal to young people, and it was really quite successful. I thought that was interesting in terms of targeting, especially around language – with the avatar they were able to use language younger people might use amongst themselves, whereas his usual way of speaking, more traditional politician speak, appealed to an older demographic. Do you have any thoughts on how this applies to electoral campaigns, where we’re looking at leaders who are trying to appeal to so many different demographics, but whose style potentially wouldn’t reach them all?
Yeah, you know, my background is in electoral political campaigning, and I think things are about to get weird. I think we’re gonna see more stuff like people doing experiments and things like that. I do think, deeper down, as we get into this idea of AI avatars, or deepfakes, or fake generated content, people are increasingly going to really struggle to trust what they see online. They already struggle to trust what they see online, and this is why we’ve got the growth of fake news and things like that.
But I think fundamentally, what will become more important for politicians is building trust with people and already most people don’t trust political systems.
I think it’s very important that anybody working in politics is thinking about that core problem of trust, because it’s going to become harder and harder to build. Actually, the way you build trust in political campaigning isn’t by having one big show – a thousand small acts build trust, not one big act. So, it’s about relationships. It’s about relational campaigning, it’s about knocking on people’s doors and talking about the party and what the party will do for them. It’s about being true to your word: you say, as a local politician, that you’re going to do something, then you go and do it, you demonstrate that, and then you build trust. I think in some ways AI is very exciting, and there are loads of cool things that can be done. But too often people are looking for a shortcut to building that trust, and actually building that trust is hard work. Being a good representative is being accountable, being transparent about what you’re going to do, and then doing it. I think too often our politicians don’t do those things, and I hope this will be a useful reset in terms of how they see their role in the political system, what they should be doing for people, and how they should be accountable to people. I think, on the one hand, it is very exciting, and you can do cool things. On the other hand, it’s back to basics: relationships, building trust. I hope it will mean that we demand more of our representatives as well.
Aoife Carli Hannan
Would you say that in terms of building trust and accountability, that transparency in the use of AI will be something that is really important in terms of using AI in campaigns and in general?
Yeah, I think we’re gonna have to see most campaigns come out with clear policies about how and when they’re using AI, and I think that transparency piece is quite important. But again, I think we come back to this idea of trust and messengers. We have fake news now – people make things up, people photoshop things. You already have that problem, and the problem, again, goes back to trust.
So, part of building that trust might be about being more transparent about your use of AI and when you use it and how.
So yeah, I mean, it’s an interesting question. But I also hope that you save some time when you’re campaigning if you’re using AI – you know, it gives you a good first draft, or it does that boring work email you needed to write, and it was easier for the AI to do it, so you got the AI to do it. That frees up more of our time as campaigners. The question is, how do we use that time? If we use it for building relationships, building coalitions, doing the tricky stuff that maybe is harder for AI to do, I think that will be a really good outcome for us. So yeah, thinking about time saved, and then how we spend that time, feels like an important question too.
Aoife Carli Hannan
So, how do you envision AI being used in the future? Maybe in a utopian or dystopian way, whichever you prefer?
So, I would hope that we end up in a situation where AI is used in partnership with humans, very much like what we do now with the idea of successive prompting and working on things together. I think we’re increasingly going to see levels of automation where the human is less involved. The question is, what are the limits of that, and how do you do it in a way that is fair? There are real problems with algorithmic biases and things like that, so there are questions about how this might change the nature of work and what we are then subjected to. Where you’re working under an algorithm – for example, platform workers, where an algorithm is deciding how their shifts might be allocated – it’s very important to have algorithmic accountability and transparency. So, I strongly believe unions should have a right to see the algorithms that are setting work for people, and that’s going to be an increasingly important question.
Yeah, I mean, it’s hard to know what the future is gonna look like. One of the things I’m very involved in in the UK is something called the UK Civic AI Observatory, which was set up by the London College of Political Technology over here. That is basically an attempt to be an observatory – to see what’s coming and see how things are changing, because ultimately things are changing very quickly, and we have no idea how this will play out or what we’ll end up with. So, it’s quite hard to predict too far into the future.
I do also think we have to be quite serious about safety concerns. People far cleverer than me have written extensively about this, but I think it’s worth paying heed not just to safety in the sense of robots taking over the world and killing us all, but also to how much we understand about how these things operate before they’re running key functions. Then there’s another question, which is the economic transition that might happen due to AI technology – how it will affect jobs and economies – and making sure that people, and our societies, are equipped to deal with that.
Then there’s this third bucket, which is how people socially start engaging with AI. How does it change how we relate to it? Do people become addicted to this technology? There’s a whole load of social effects that are possibly quite chaotic as a result of this, and we have to have a sense of how we’re going to manage that as a society. So, those are the main thoughts I have. I guess I don’t have a sense of exactly how we’ll work with it – I would hope in partnership, in the kind of way we’ve been doing thus far, and that we have that accountability, and that we ask these deeper questions about what kind of stuff is coming out of the AI, in a way that’s transparent, clear, understandable and interpretable. But yeah, these are quite big questions, and I think it’s also okay not to know the answer right now.
Aoife Carli Hannan
Absolutely. And I think the other thing as well – it’s awesome to have people talking about the policies and frameworks now, because the reality is, we’re still really catching up with a lot of the technologies we already have, such as social media, which went poorly regulated for so long. Now we’ve got a whole new world introduced to us with these new AI technologies, so we’re kind of playing catch-up. We’re trying to get ahead and be prepared for what’s going to come, but it’s so unclear, and there are so many potential directions as well.
Yeah, I think our leaders and our political systems have definitely failed us quite a lot in terms of technology. You think about, as you say, social media – we’re only just catching up to the broader effects of that. Again, I think a lot of this comes down to the sort of people that are in politics and whether they have the skills or knowledge to do this, or whether they’re simply not prepared. I also think, at the moment, you have tech companies that are able to pay for huge lobbying operations, whereas anybody else, who might have different, more nuanced views, isn’t as well equipped to reach politicians. So, I think there’s a real problem with that.
As we go forward, we’re gonna have to develop better systems of governance to be able to deal with these kinds of things, because I think we are in for quite a changeable century, and things are gonna keep changing.
So actually, we need to be prepared to deal with change, and we need people to feel secure in dealing with change. That involves investing in public services and investing in education – making people feel confident about changes to their lives and their jobs, because they have the security of knowing that we’ve got good public services there for them when they need them. Those kinds of things really are important. Investing in social infrastructure, making sure people aren’t lonely so they don’t get addicted to technology because there’s a rich community life instead – all these things need to be built, because we need to provide security and resilience in an age of what will be quite a lot of change. And again, I’m not sure we’re really thinking about that, or really prepared for that, which, you know, goes in the box of other things to consider slash worry about.
Aoife Carli Hannan
The ever-growing box.
AI, Technology and Social Transformation with Ned Howey
Aoife Carli-Hannan, engagement officer with the Commons Library, chats with Ned Howey, the co-founder of Tectonica, a movement building agency whose stated mission is “to create a seismic shift in the way politics are done, through innovations that empower social, economic, and environmental justice movements.” Alongside directly helping organizations, political parties, and unions with an array of strategic, creative and technological services the agency also created and facilitates the Tectonica Organising Network (TON). In this conversation Aoife and Ned focus on the democratic dilemmas, opportunities and challenges associated with new generative AI technologies. They discuss the potential of these tools to deepen existing biases and power imbalances but also the possibilities they create for fostering practices that can connect people to social change movements and transform our societies for the better.
Aoife Carli Hannan
Before we get stuck into the questions and the interview itself, I would like to acknowledge that I’m personally tuning in from Naarm, which is Melbourne in so-called Australia, and this is the unceded land of the Wurundjeri Woi-Wurrung peoples of the Kulin Nation. So, I would like to pay my respects to Elders past and present and acknowledge their continuing connection to the land and the waterways that I’m lucky to call home. So Ned, you work for Tectonica – do you want to tell us a little bit about Tectonica, what you do, and what Tectonica’s position or place is within social movements?
Tectonica is a movement building agency, which sounds like a bit of a strange thing, because we don’t really have an official industry category for movement building agencies. It uses tactics and techniques from a lot of different disciplines, both in its tactical approaches and in the overall work. We don’t have something for movement building that’s as formalised as the agencies out there that do marketing to support businesses, but we support progressive organisations and campaigns in the work that they do, and we do this in a very specific way.
Our mission is basically to create a seismic shift in the way that politics are done. So, it’s not just about providing some kind of service for the organisations that we work with, but specifically about helping them innovate in the way that they’re doing the work. We work with social, economic, and environmental justice movements, and we believe that if we do the work in the right way, we’re also going to be winning. We often say that we could win every single campaign we work with, and our mission still wouldn’t be done. We need to change the way that we’re doing the work, because fundamentally, that’s how the right people, the right causes, are going to be winning. If we innovate the way that we are doing the work, the whole arena of politics should change.
So, how do we do this? A lot of people get confused by this too. We have a broad array of strategic, creative and technological services for the political parties, candidates, organisations and unions that we work with. We do provide tactical services, but that’s always framed in a bigger picture of unlocking transformational opportunities, not just transactional ones. Really, this is focused on rooting the work fundamentally in transforming the organisations, so that they can derive their power from the people, from the constituents and from the communities that they’re responsible to and representing. We do this through opportunities discovery, which we explore with our organisations to help them uncover what they’re not seeing.
We believe that communities are the solution.
So, we help them uncover those solutions within those communities. We help them see the bigger picture and have a process to outline what they might be missing, but ultimately the solutions come from them. We also help build movement infrastructure: we help people get set up on CRMs, which help them build and track contacts and relationships, or websites, or full tech stacks and all kinds of other technological things that help power those movements. We also support them on people-based campaigns. So, this is not just communication campaigns; we do some branding and comms and things like that, but always with a focus on people-based power as the thing that will actually make this possible. The last thing we have is Community Capacitation. We have a network, the Tectonica Organizing Network (TAUN), which is basically a community of several thousand organisations across the globe that really have this transformational mindset in place, working together to change the way we do politics so that it is more transformational and more focused on people power. As we’re learning things with our clients and the research we’re doing, at some point we said, well, we’re not going to be able to work with every organisation in the world, and we have a social impact mission, so let’s make sure to share what we’re doing. Out of that, we started to create a community where people support each other; we provide resources for folks, we provide original thought leadership, and we conduct our own research. So, lots of different things, basically, to achieve this end goal of trying to make our politics more people-based, because we fundamentally believe that’s the way we’re going to achieve these justice ends.
Aoife Carli Hannan
Amazing, that sounds really awesome. So many different things that you are doing – sounds like you’ve got your finger in a lot of pies. We definitely need to talk to you at some point about a CRM.
We love CRM work. CRM is exciting because there’s this whole range of things you can do. Some people just think of it in very simple terms: unidirectional emails and keeping people’s data. But CRMs are about managing contacts and growing contacts, which has huge possibility, because movements are built on relationships, and relationships can be scaled. That’s why we call them Constituency Relationship Management Systems: they help scale those things.
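Ned’s framing of a CRM as relationship tracking rather than a mailing list can be sketched in a few lines. This is a toy illustration with made-up names, not any real CRM product: it stores contacts, the mutual relationships between them, and logged interactions, and surfaces the most-connected contact as a potential volunteer leader.

```python
from collections import defaultdict

class ConstituentCRM:
    """Toy constituency relationship manager: contacts plus the
    relationships and interactions that connect them."""

    def __init__(self):
        self.contacts = {}                      # name -> profile dict
        self.relationships = defaultdict(set)   # name -> connected names
        self.interactions = defaultdict(list)   # name -> (kind, note) pairs

    def add_contact(self, name, **profile):
        self.contacts[name] = profile

    def link(self, a, b):
        # Relationships are mutual: store the edge in both directions.
        self.relationships[a].add(b)
        self.relationships[b].add(a)

    def log_interaction(self, name, kind, note=""):
        self.interactions[name].append((kind, note))

    def most_connected(self):
        # Potential volunteer leaders: contacts with the most relationships.
        return max(self.contacts, key=lambda n: len(self.relationships[n]))

crm = ConstituentCRM()
crm.add_contact("Sam", suburb="Naarm")
crm.add_contact("Lee", suburb="Naarm")
crm.add_contact("Ana", suburb="Sydney")
crm.link("Sam", "Lee")
crm.link("Sam", "Ana")
crm.log_interaction("Sam", "doorknock", "keen to host a house meeting")
print(crm.most_connected())  # Sam has the most relationships
```

The point of the design is the `relationships` map: a mailing-list tool only has `contacts`, whereas treating the edges between people as first-class data is what lets organisers find and scale relationships.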
Aoife Carli Hannan
Yeah, I really like that framing of looking at them from a relationships perspective, between people, rather than the usual framing and the business mindset of people equals profit. So, in your article in the Commons Library, The Democratic Dilemma of AI, you talk about a range of different ethical dilemmas posed by the integration of AI in political and advocacy campaigns. Can you explain a little bit about them?
I should give a disclaimer here, which is that I’m not anti-AI. I’m actually super excited and hopeful about AI. We cannot ignore the reality that we’re already in the AI age, especially with this new generation of AI that is very powerful and more universal: AGI is really powerful and LLMs are really powerful. The advancements of the last four years especially have catapulted us forward into a new technological age.
This technological age is as powerful, I think, as the Industrial Revolution – if not more powerful, honestly.
It’s not just that the things we have now are really exciting. The speed at which these are developing right now, and their fundamental possibility and potential for change, is astronomical. Once you start to look into it, things that we couldn’t do a year ago are possible now. With what we’ll be able to do in a year or two, a lot of the current standards by which we think the world works are just going to disappear and be replaced by things that are far more powerful, far better. And this is going to improve our lives. So, I want to say, I’m really excited. The Industrial Revolution did some amazing things. It also did some horrible things, especially where we weren’t thinking through or weren’t prepared for how our world was changing. Because this is powerful, we need to be intentional and thoughtful about what’s there.
So, I just want to start by saying: don’t fear AI, but be intentional and cautious about AI, because it is powerful. We want to mitigate some of those potential harms as much as we can, and also power it towards the right things. The Industrial Revolution is really where a lot of our organising practice came out of, and that was because of the systemic harms it was causing that we weren’t seeing at the time. We had women in textile factories that were burning down without any fire regulation, and organising that happened in response; we had a differentiation in wealth that hadn’t existed previously; all these kinds of things meant we had to set ourselves up to fight the changing effects and systemic harms coming out of the Industrial Revolution, despite all of the good things that helped us, a longer lifespan and better goods and all this stuff. We have to think of AI like this and prepare ourselves for it. It doesn’t mean denying the reality of AI, or saying let’s just not use it because it could be dangerous. It means let’s look at the full, giant potential impact of AI and be super intentional as we proceed with it.
There are a number of different ways that things could really have an impact, especially around politics, civics, campaigns, advocacy and so on. First of these is just maintaining a lot of the standard tech considerations that we had in the past. Quite frankly, all the concerns with data and privacy and all these things that we’re already dealing with, but haven’t gotten a handle on in the social media age and with a lot of the newer tech, are going to be accelerated. So, we need to be even more cautious of the things that we haven’t solved in the past, that we haven’t properly regulated in a lot of cases, that we haven’t properly dealt with. We’re already dealing with that stuff, and now, as we have this accelerating factor of AI, we need to be even more considerate. Then there are malicious activities, of course. A lot of the stuff that was already going on, disinformation and malicious practices, is going to accelerate even more, exponentially.
These malicious activities are going to be way easier, especially as we have the ability to do it easier, more convincingly and at greater scale.
So, we need to be super cautious of the impact that’s going to come from these malicious activities, and be prepared for that entering our political scenes. But there’s one other thing, besides preparing to fight in a world that’s changing and probably going to create more inequality, besides the current tech considerations, and besides these malicious activities of deepfakes and increased disinformation, that I think we have to be super cautious of, and again the Industrial Revolution serves as a really good example: the systemic harm that happens as we integrate these tools. Our civics change as we integrate new technologies; every time, it changes the way that we conduct politics. This is not about an individual use, but an awareness of the system that’s changing as we integrate them. As we adopt them and get excited by a lot of the opportunities that are there, we sometimes ignore the systemic harms.
What I predict will be most impactful, and to me most concerning, about AI are the systemic harms that are going to happen.
Starting with bias. We’re bringing in information that’s already skewed, and bias is a complicated thing, because in reality there’s no such thing as unbiased AI. We can’t expect it to be unbiased: it’s pulling from the internet, and even the language that we use, the language itself, has bias in it. In the past, we had humans who were representing that language, questioning that language and editing it for their own communities. This is a universal AI, so it’s coming in with a lot of those biases, specifically with a lot of discriminating factors that mirror all of the slanted privilege in our world. So, we’ve got a global North slant, we’ve got a very white slant, a very Western slant, a very male slant; all of these elements of bias are baked in, and even with the conscious efforts of some of these tech companies, they’re really baked in. Then we have this other element, which is the trust gap. Disinformation is already a problem, and that’s going to accelerate.
But disinformation is not the disease, disinformation is the symptom of the disease.
People have been making up things about their opposition, or making up fake things to put out there to manipulate civics, basically since we’ve had democracy; that’s not a new thing. What is new is that we exist in a world that’s changed because of the social media age, where disinformation goes out quicker and trusted sources are no longer trusted. There’s a gap now between the civics that is above us and the civics that people used to see in their everyday lives. I call that the trust gap. That’s where we’re getting populist right-wing people in power, and where people are going out, for example, to vote for Donald Trump, because he’s, quote, “not a politician”. They’re not excited by the campaign or the policies. They’re excited by voting against an establishment that they don’t trust.
This gap is really damaging our democracies and AI is going to accelerate that – it’s a systemic harm, it’s going to likely accelerate that trust gap even more.
And specifically, we’re talking about having a place where information is not sourced from humans; it’s sourced from the general voice of machine algorithms. So, of course, people are going to distrust more, and we’re going to accelerate and stimulate, in a bad way, that trust gap. The last of these systemic harms that I think is really important and that we should be considering is the fact that we’ll be increasing our transactional practices with AI if we’re not intentional about it.
Again, there are transformational opportunities with AI, but if we are just doing more of what we’re doing more efficiently or, you know, only accelerating the tactics that we currently have, our politics is going to become increasingly more transactional.
This is the biggest concern I have, honestly, because we’re already living in a world where the direction of our politics is becoming extremely transactional and moving away from the transformational. We surveyed thousands of organisations in a qualitative and quantitative study in Europe in 2020, and we found that the more personalised and the more decentralised an online intervention was, the less likely it was to feature in people’s campaigns, despite the fact that people generally agreed these things need to feature across the board in those campaigns. So, we already have a huge problem with our practices being very unidirectional and very transactional. We’re really losing the transformational political practices we used to have, and this is going to get even worse systemically with AI, for a number of reasons. It is the area where I’m most concerned. A lot of our political practices today have participation as kind of a happy accident; people are basically involving people only because they’re short on resources. And we see these horrible practices sometimes where you actually have an opportunity to fundamentally engage people and bring them to the table in meaningful participation, and instead they’re treating people as if they were machines already. A lot of the field practices out there have this opportunity to foster real discussions, to capacitate leadership, to capacitate people to communicate strategically in how they’re talking to people who think differently. But really, they just want to do it at bigger reach and bigger scale, and treat it like a giant walking megaphone, because they’re short on resources and they think this is the best way. Sadly, there’s this lost value of transformational practices in our current politics.
But that’s my biggest concern for the systemic harm that we’re going to cause if we’re not intentional about the way that we play. Because actually, there are ways that AI could increase the transformational; we’re just not doing it yet.
Aoife Carli Hannan
Would you say a large part of this is because we’re still playing catch-up at the moment? Like, we have this new thing introduced, and so it of course is embedded with the issues that already exist in our society. So, we’re kind of trying to play catch-up rather than setting it up to integrate better into our movements.
Yeah, I think there are a couple of things here. And yes, we’re trying to play catch-up, because we really have damaged our civics in the social media age and haven’t figured out how to address that yet. And now there’s already a new technological age; we’re introducing something new that’s going to accelerate the problems that we have and that we haven’t solved yet. We haven’t.
And fundamentally, what I think is interesting is that when we look at the short term, these tactics look like they’re working. But as we step back and look at the bigger picture, at the impact on civics, the impact upon progressive movements, the impact upon democracy, we’re losing the game. We’re losing.
It’s just a series of these kinds of Pyrrhic victories: hey, we raised a lot of money, and ooh, we grew our list really quickly, and oh look, we finally figured out how to get more people to be members. Even those practices are getting worse and worse, because we’re so focused on reach and scale and short-term outcomes that the entire ground beneath us is shifting in the wrong direction. As we think we’re making these little wins, we’re losing the practices that we had and the bigger value that we had. And that will probably get worse unless we’re intentional. Again, there are great opportunities in this new technological age; we just need to be intentional and value the bigger picture over the short-term returns. I’ll be quite frank: the short-term returns in a lot of cases come from the fact that a lot of the tech that we’re using for politics, the playing spaces that we have in politics, and the industry that we’ve built around politics are guided by market forces, whether you’re a capitalist or not. Civics has different drivers than market-force drivers, and the entry of these market-force drivers through these technologies really favours short-term outcomes. Social media is a good example and should be a cautionary tale: these platforms favour all the things we don’t want in a good civics, simplicity, hate-based messaging, fear, looseness with the facts, disengagement. That’s because they’re built to sell us shoes; they’re not built for civics, and they’re certainly not built for transformational civics.
Aoife Carli Hannan
Yeah, I was wondering if you had any specific examples around AI in particular. Has it already been used in any campaigns that you know of? I know you had a recent election in Spain, for example; was there any use of AI in those campaigns that you know of?
People are really just starting to use AI, so there are only small examples. And we have different things: we have people who are using AI tools that are already out there. I should distinguish: AI has actually existed for a while. When we’re talking about AI right now, we’re mostly talking about this new generation of generative AI tools that are very different, LLMs and AGI, things that didn’t exist a few years ago. But it’s important to note that AI has already been playing a role, obviously, in social media algorithms and in a lot of the tools that we already use day-to-day. AI has existed for a while; it’s this new generation of AI that’s very specific.
With this new generation of AI, I think there are lots of use cases as people apply these tools in their day-to-day work, and the results of that are really new. A lot of the civic techs, too, are just being developed right now; almost any civic tech that’s AI-first is at best in beta, and most of them are still in development. So, I see experiments happening out there right now, little cases where people are using them in tactics, but I don’t really have great examples of specific campaigns putting AI technology at the forefront, because it’s so new. I am excited by a few of the techs that are out there, and the way people are utilising them in some of the civic techs that we have. Open Fields, for example, I know is developing a field CRM, a really cool one with a very good mindset about good political practices. They’re developing right now an AI component that’s going to help field organisers process information in a more efficient way, bring that information back and accumulate it so that it impacts the campaign. That’s a great connection directly to community: taking information that’s going to then impact the campaign and, hopefully, better represent accountability to its constituents. I’d say most people are using these tools to speed up their transactional practices, whether that’s fundraising or targeting and digital ads; those are the more prominent experiments happening right now. I wouldn’t say those are bad practices across the board, but they’re ones that I’m less excited about, because they do have this possibility to further increase the transactional while decaying the transformational potential.
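The feedback loop Ned describes, field organisers’ conversations being processed and rolled back up so they can impact the campaign, can be sketched very simply. This is a hypothetical illustration, not the Open Fields product: the issue keywords stand in for what a real system might do with an AI classifier on free-text doorstep notes.

```python
from collections import Counter

# Hypothetical issue taxonomy; a real system might use an LLM to
# classify free-text notes instead of this keyword stand-in.
ISSUE_KEYWORDS = {
    "housing": ["rent", "landlord", "housing", "eviction"],
    "transport": ["bus", "train", "transport", "roads"],
    "cost of living": ["prices", "groceries", "bills", "wages"],
}

def tag_note(note):
    """Return the set of issues a doorstep note mentions."""
    text = note.lower()
    return {issue for issue, words in ISSUE_KEYWORDS.items()
            if any(w in text for w in words)}

def aggregate(notes):
    """Roll individual conversations up into campaign-level counts,
    so what organisers hear at the door feeds back into strategy."""
    counts = Counter()
    for note in notes:
        counts.update(tag_note(note))
    return counts

notes = [
    "Worried about rent going up again, landlord won't fix anything",
    "Buses are unreliable, can't get to work",
    "Groceries and bills are crushing us, rent too",
]
print(aggregate(notes).most_common())
```

The design point is the direction of flow: information travels from constituents up to the campaign, rather than the campaign broadcasting down, which is the accountability Ned is excited about.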
I’m not against transactional practices and tactics, we need those. We need those constantly in our campaigns and our mobilisation. But we’re really heavy on those compared to the transformational politics that really are key to us moving progressive causes, not just winning campaigns, but moving the societal consensus through movement building.
Aoife Carli Hannan
Yeah, you’ve already talked a bit about the most prominent ethical considerations. But you also talked, in that article you wrote, about the increasing ability of AI to mimic, or attempt to mimic, human interactions. How do you believe this raises ethical considerations in the campaigning space?
To start with, it’s amazing to see how little people have thought about this. I’m saying this as an open and proud nerd: this is the central topic of half of the sci-fi genre. The fact that AI can mimic human interactions like this is something I feel like I’ve been studying since high school, with every episode of Star Trek and every Philip K Dick novel, and I don’t know how people are not talking about this and missing this; I certainly have my mind on it. It goes back to our first thinking about machine intelligence, from Alan Turing: the Turing Test literally is, can a machine replicate the experience of being another human? This is central to this technology, to how it’s changing and what it is presenting, and I think it has to be very much thought about.
This is the real differentiation between this and any technology that humans have had before: it can replicate the human experience. It can replicate very well, and increasingly well, a relationship and the feeling of that relationship between humans.
We are built, first of all, as a social species; that is literally where we came from, it is baked into our core as a species. So, we have a very large tendency to project human connections onto even inanimate things, almost as though we’re chatting with another person. And those things are becoming more real. That’s in some respects exciting, but in others it should be very concerning, and it opens up a whole Pandora’s box of other concerns, specifically in civics. Why in civics? Because civics is how societies make decisions. Civics is how we as individuals and communities make decisions together for better long-term outcomes, things like saving our species from the destruction that we’re doing, or from the systemic harms of power and power differentials. This is the core of our democracies; our democracy is built on how people relate to each other.
Fundamentally, with transformational practices, with the practices of progressive movements, it’s through relationships and movements that we build power.
The foundational part of movements is the connections that we have. The difference between “I’m not going to pay my rent” and “hey, my whole building’s not going to pay rent, we’re going on a rent strike” is the relationships that we have with each other. So, when we’re introducing a factor that can mimic human relationships, it’s going to completely upend the game in terms of transformational participatory democracy, and it’s going to upend the game in terms of the trust that we have in each other. If campaign messages are coming from a machine, not a candidate, and we’re feeling fooled and betrayed by that, if we have a relationship where we are really tricking people and we’re not authentic with people, that authenticity, that trust, is going to decay.
And fundamentally too, if we’re just replacing the places where we have real humans authentically right now with machines, we’re going to be losing the practice of doing democracy, and democracy is something we do, not something we have. We’re going to be decaying all the potential that we have for the fundamentals of movements.
I think that this age, if we’re not intentional, could be the death of movements, I fundamentally believe it.
And if it’s the death of movements, then we’re stuck with the world that we have: a world that, through the systems we have, is naturally getting more unequal, more unjust, more unfair. We need civics; we need the practices that we have for justice in our world. So this potential to replicate the human experience is particularly concerning to me, if we’re not intentional about it. And I want to be really clear here that we need awareness about the potential of relationships and the role they play in politics, because it can go the other way if we put our minds to it. If we’re not just applying these tools as the quickest, the fastest, the cheapest; if we’re not just letting market forces enter in and saying, well, this particular fundraising vendor or this particular service or whatever can go faster and cheaper and at greater scale. If we are intentional about it, it could actually go the exact opposite way, because these tools are powerful and do open up real possibility. So, I am not a robot. You’re not a robot. And our constituents are not robots either. They’re not just subjects of our organisations or campaigns; they are our organisations and campaigns, if we’re doing civics right. But that means we have to make sure to be accountable to them, to bring them to the table, to give them a place and make it about participation.
Aoife Carli Hannan
Yeah, that’s really interesting, what you were saying at the beginning about being a nerd and looking at AI for a long time through sci-fi, and that kind of prediction of what the future will look like. I grew up in the MSN age, so we had what was sort of an early iteration of ChatGPT, I guess, in SmarterChild and those kinds of things.
How addictive will it be once it really does feel like there’s a relationship? It’s gonna get way wilder. ChatGPT, I already use it, and I love it. I find myself at times getting frustrated with it, getting excited with it, having an emotional attachment, a relationship, with ChatGPT. Imagine these companies start entering into commerce, marketing, products, influencing the way we think about these things, or reinforcing the thought that the system is okay, that the power differentials in the world are okay. There are a lot of problems that could come up with these technologies if we’re not very cautious and very intentional about them.
Aoife Carli Hannan
Well, we’ve talked a bit about the kind of impact it will have at the larger, macro level, but in terms of smaller organisations and advocacy organisations, what would you say some of the main considerations or ethical dilemmas are? Is there a difference, I guess, between the two?
So, one thing to consider here, when I’m talking about systemic harms, is that a lot of small organisations, and I saw this in the past with Facebook, were afraid to go on the platform because they had heard terrible things about it, and they ended up ceding that ground. You’re not actually helping the systemic impact of this by ceding the ground to negative players. So, I think we have to look at these bigger-picture systemic harms and take them on, and be conscious about them and about the way we’re using these tools.
There’s also a really big potential threat, especially for the smaller campaigns: if they individually just abandon these tools out of concern, that’s going to harm our progressive causes and our progressive movements too, particularly because the more powerful players are going to adopt them really quickly. They have the resources and the space to learn and adopt really quickly, whereas our smaller organisations and campaigns often don’t; they’re already pressed for resources, already short on time. I’m not saying that small organisations shouldn’t dive in and experiment with AI. Quite the contrary, I’m saying we should. I’ve heard from people who didn’t even try: “oh, I’m not going to try it, because there are concerns”, and “it’s not gonna replace me”. No, it’s not going to replace you; you’re going to be replaced by those who know how to use AI.
This is a tool that we need to be using, and we need to be using it with intentionality, of course.
It’s about looking at ways that we, as a movement, a broader progressive movement, as people who believe in the transformational practices of movements, can actually use these tools to get in front of the issues that we’re going to come up against in a changing world: a more unequal world, new problems that we haven’t even thought of yet. So, small campaigns should be leaning in on AI, particularly the many small campaigns that represent communities that are already underrepresented. These AIs are already showing a huge gap in uptake and adoption between communities that are more privileged and those that are not. We’re already on the back foot, so we need to be adopting these tools. Of course with intentionality, of course with awareness of the new inequalities coming out of these, and of course with awareness of all the ways these can be used for bad. We need to make sure that we’re not replacing our constituents’ voices, and that we’re not alienating our constituents, being inauthentic or losing their trust. But can we use these tools to be better at what we’re doing? We must.
Aoife Carli Hannan
I think that’s definitely really important. Something I’ve noticed a lot with people in the advocacy space is, I guess, a conservatism when it comes to new technologies and a fear of them, because of the potential harm, and also sometimes a hesitancy towards new things in an ironic way: new phones or new social media, TikTok or whatever the latest thing is. It’ll be “oh, yeah, but you know, there’s potential issues” or something like that, but it’s definitely unhelpful, and you then get left behind in that process. And I think, especially like you’re saying, AI is such a massive shift. We are, by definition, going to be in asymmetrical power struggles. Those of us fighting for environmental, economic and social justice have less resources and less power. So, we have to.
It doesn’t mean we can’t win, because we can build power. We can do that through collective action and coordinated action, to have a greater power output.
And by making power visible within our communities. What we need to know going in is that we’re going to be less resourced, so we need to take advantage of every tool that we can. But we need to do so intentionally, so that it doesn’t cause greater harm to our communities, or untether us from our communities and detach us from their voices. We need to be using it to bring people together, to build more power amongst those communities and centre it in those communities. If we’re not cautious about how we use it, we will be doing the contrary. So, it’s not really a question of if we use it; it’s really a question of how we use it.
Aoife Carli Hannan
What strategies would you suggest for imposing limitations on the utilisation of AI in the context of political campaigns?
So first, it’s about what we’re trying to achieve with it. If it’s just turning up the volume on transactional practices, we have to be very cautious, especially in places like the US right now, where everyone is burned out on being bombarded with spam and there’s no good regulation around using people’s data consensually. I don’t think just turning up the volume is going to help us engage more people in civic politics, which is really what we need to be doing. So, the intention being participatory, the intention being transformational, that’s really key; it’s very much about what purpose we’re putting these tools towards. Secondly, more tactically, when we’re applying them we need to be very aware of the different ways in which we can use them. Are we using it to proofread something we wrote, or are we asking it to generate something original? Those are two very different things. I’m not saying one is good or bad, but we need to be intentional about which it is when we’re using them in different ways, and be conscious of the impact of that.
So, one of the things that I’m very big on, and I think is very key, is transparency when these are being used. You’ll notice at the bottom of my blog articles I will say exactly what generative AI was used for, whether it was for the image or the text. We need to be transparent with our audience, who are ultimately our constituents, that we are using these in a certain way. Was it just used to proofread? Or was the content sourced from it? Those are very different things. We need to maintain trust, we need to be transparent about that, and be aware of the authenticity of what we produce. Secondly, when we’re using these tools, let’s say to write a fundraising email, it’s going to bring in all the language and bias it was trained on. I was doing some experiments, and we talked about this in one of my blog posts, around talking about abortion and reproductive rights. I very consciously didn’t use the word “woman” to see what it would default to, and of course it used “woman”, because that’s coming from the general consensus; it did not use “pregnant people”. And so it was immediately, in its language, assuming the exclusion of non-binary people and trans women.
So, we need to be super cautious that if we’re going to use AI as the source, it’s coming from a general consensus source, which is the thing that, as progressives, we usually need to be challenging. So we need to really double-check and ask: is this really speaking our language as it develops language? Or is it speaking the language of power, of the global north, of the white, male, cis, hetero, you know, all the privileges, basically. We need to be super conscious of that in the way that we use that information and draw from there.
Aoife Carli Hannan
That had me thinking, to be honest, because, I mean, I’ve definitely considered that in terms of when you ask questions, and I’ve seen it answer questions in a very blatantly biased way, but less so in consideration of the specific use of language and messaging. I mean, I guess when I do use ChatGPT, I use it in a very kind of conversational sense to sort of generate messaging, rather than getting everything from that source.
And I would add to that: while being very useful and quicker, it can also limit our political imagination and ration our sources of images or words, and those carry bias with them, because they’re representative of how we understand our world. It’s going to be much easier, when it’s coming from an AI-generating machine, to basically miss those places where we should be questioning. Progressive work, as we know, is really hard in organisations; everyone’s very opinionated, we know this, but that’s why they’re there. There’s a lot of conflict within the movements and within the organisations, and a lot of times, yes, that’s frustrating, and sometimes it feels slow to move things, but that conflict is actually part of the work that we’re doing. Somebody raises their hand when we’re doing work around reproductive rights and says, no, we shouldn’t use this language, because it’s excluding people. And then we have a discussion around it. I’m not here to say which language we use, because I’m not the person most impacted by that issue.
But the conflict that we have around that at the table, is actually a key part of us moving transformationally from an unjust world that we live in to a more just world that we want to build.
And if we remove that, through just accepting the bias of AI, because the work gets quick, and we look at, hey, let’s win the short-term gain in the specific campaign, then we really are limiting ourselves and the potential that we have as progressive movements.
Aoife Carli Hannan
And I guess in that as well, because AI is built on the existing world that we live in and existing information that is present and pulling from that – then how can it be transformational? How can we envision something through using that same language? And that same framework, I guess, as well?
That is exactly it. Yes, we’re going to freeze our politics in the unjust consensus understanding of the moment, rather than continue to challenge its underlying parts. It’s funny, because people talk about movements and use the word in very different ways. But “movement” means something: the thing that we’re moving is society’s consensus understanding. It is transformational, and that doesn’t just happen at the societal level; it happens in our campaigns at the individual level and at the community level, through organisations and through the work that communities are doing. But fundamentally, democracy is not just taking a giant survey of how everyone feels about something.
Democracy is a practice that we conduct. And specifically, democracy is a practice that we take part in, and as we take part in it, we move that ground to a better more just place than it currently is.
For me, it’s not just about a constant battle of having the best conversation of ideas, not just a marketplace of ideas. It’s about constantly unlayering and bringing forth the visibility of communities that we’re not even seeing in our very unjust world.
Aoife Carli Hannan
Yeah, so I guess like you’re saying AI is obviously not going away anytime soon and you’ve talked a lot about how I guess we can use it in a transformational way. What is your vision in an ideal setting of how we could use AI and how it can really better our movements?
I am so excited about AI. I think we’ve really just started to scratch the surface. To share a personal story: I never planned on owning a business. I’m 14 years in and I’m still like, what happened? I never imagined myself as a business owner; I never thought of myself like that. I always was an activist, and since I came out at the age of 14 I was very, very radicalised. Seeing the injustice of the world, I gained a consciousness, through that experience, of the privileges and injustices of power that other communities faced, not just the ones that I am part of, and I was always very active. My first career, for almost a decade, was in homeless services in San Francisco, seeing the impact of how badly our civics and our societies fail certain people. After quite a time working in that, and seeing things that were just unimaginable, I really wanted to go work on the systemics, the underlying problems that were creating this situation, so I went off to study a Master’s in Public Policy at UCLA. In my first week of classes, my husband died, 31 years old, completely unexpectedly. I couldn’t imagine something like this happening. My life just exploded. On a one-way ticket, I moved to Argentina, and I had to figure out what the heck I was going to do with my life.
A year later, we started Tectonica, my business partner Mariana and I. She’s from a small town in Entre Ríos, which is a province in Argentina, and she never imagined opening a business either. We both had activist backgrounds; we both studied literature, not even politics. We never imagined opening up a business; it was beyond us. We had no connections, no capital, not really the right experience that we should have had in this world. We’ve built that through our clients, from listening and learning from them over the last decade, but it’s been really challenging. And I think it’s kind of a small miracle of fate and real drive. A lot of that drive came out of my own experience, having lost my partner and really wanting to create a space and feeling the need for more justice, that we were able to build this thing through perseverance. It’s kind of a miracle that we survived the years that we did. We had some hard years. I knew nothing about business. But I also think that my voice is important in the work that we do. Coming from my experience of 10 years in homeless services, I think it’s better to have people building businesses who have seen the injustices firsthand. I look at our politics and I see so many people, who need to be the most important people, who don’t have access to being represented or to being part of those politics, whether it’s working in political businesses, working in organisations, or being represented in democratic, representational politics. I see people shut out of these things because they don’t have access to the information. They’re not the elite, they didn’t study the right thing, whatever. AI has a huge potential to help people from those backgrounds get more represented and be more present in the spaces where we most need that representation.
Even in my work now, I’m like, God, I wish 14 years ago when we started Tectonica I could have looked up how to structure a business, or the things that I’ve learned over the years reading hundreds of books, literally just desperately trying to figure out how to do this thing. But ChatGPT and other AI sources make this information so much more accessible, make the work around it so much more accessible, in a way that I really hope means other people who have an idea, who are starting things or want to build things or want to enter politics or want to run an outsider campaign, now have access in a way that they didn’t. That’s fundamentally one of the things AI has huge potential to actually do: give people access to information and to tools that they haven’t had previously. It skills people up in areas where they shouldn’t have to study four years of business, or four years of politics, or whatever things they might not have the privilege to go and do like others who are currently represented. But beyond that, in terms of the text that we create and the way that we actually use it, AI is amazing because it can take unstructured data and form it into structured data. It has the ability to be very conversational; it has all these abilities that we could be applying for actually building stronger connections, for identifying people’s motivators, for the things that fundamentally do build relationships with people. I look at the practices that are good practices, about going deeper and engaging people more, and I’m like, wow, there are so many opportunities for AI right there.
Right now, we’re not doing a lot of that, because we don’t have the resources and we don’t have a good way to do it efficiently. A lot of people are aiming more towards transactional practices, just signing a petition or asking for money or doing a big social media campaign, instead of, hey, let’s jump on the phone and connect people, which is what movements are actually about: connecting people. Those things, whether they’re events, or supporting small groups to develop community strategies, or actually connecting that first person who signed that petition to a real person who has a shared passion about that thing. We can find AI tools, build AI tools and apply AI to actually help us do those things better. Maybe we can connect people who have shared motivators; maybe we can even do simple things we’re not doing right now, like connecting their schedules so they can have a call. AI has all this potential to assess the information about why they signed that petition, what their true motivators are, and how they want to participate, in ways that we’re not currently asking people about in order to build deeper participation.
There’s so much fundamental potential for these tools that we’re really just starting to explore, if we’re dedicated to making it about more participation, more representation of voices, people at the table being more involved, engaged and having a bigger role in our civics and our democracies.
Aoife Carli Hannan
Going on from that as well, do you think, talking about the kind of climate that we’re in at the moment and the thing around relationships, and especially around our connection to people being kind of fraught, I guess, post, not post-COVID, but post that lockdown period of COVID, where we already had so much disconnection, and then we had AI coming in around that time. Do you think our relationship with new AI technology has been influenced by that as well?
You know, I think that we don’t even see what’s coming. That’s kind of why I’m trying to use the information from looking historically, looking at the problems that we have with politics, which I think most people working in politics are not even seeing or concerned with, and looking at the very specific, unique factors of AI and what they bring, to kind of predict what we should be looking at as we come into this politics.
We got sidelined by social media. Remember the whole 2008 to 2014 period? We were super excited by social media. We were like, wow, Obama’s segmenting emails, this looks like a very new, exciting thing. Wow, look at Occupy Wall Street; hey, look at the Arab Spring. And we didn’t see coming what was coming with social media, and the fact that after Obama we got Trump. And yes, we got the Arab Spring in Egypt, but the people who had been building on-the-ground community infrastructure were the Muslim Brotherhood. Because yes, we can tip a thing with social media and this new internet age, but we couldn’t sustain relationships in a way that served the people who were really the ones demanding change in democracy. Yes, we could have Occupy Wall Street, but have we really solved the problem of wealth inequality and the unfair situations? No, we haven’t solved that at all. We were very short-sighted and got very excited by the potential, as we should have, with social media, without seeing the potential systemic damage as it was integrated.
We’re just beginning to see, just beginning to imagine, what this AI age is going to look like and how it’s going to impact our civics and our politics. We should be excited. We should be leaning in and going forward. But we should be doing that with long-term foresight of the current context that we sit in: an unjust world, and a politics that is deeply damaged and going against democracy and against our progressive causes. We are losing worldwide right now. So yes, we should get excited. But we should get excited about making sure that it’s going in the right direction, and about how we are accelerating things. We don’t even know a lot of stuff. I’ve been thinking it through a lot, but there are things that I’m sure I’m totally missing. There are things that are going to happen in our world that we’re just not seeing. Will we have deepfakes? Yeah, we’ll have deepfakes, especially in the next year. 2024 has more elections around the world than any year until 2048. So, we’re going to have a lot of cases where we’re seeing the impacts of AI immediately. We haven’t figured out how to regulate it, so it’s really out there.
We’re counting on people to be ethical, which is totally wrong, because some people are not ethical; they don’t wish to be ethical.
There are people who are, quite frankly, just trying to make a buck; there are market forces at play, and in some cases there’s a race going on. We’re going to excuse a lot of stuff just by saying we have to win. And yes, we have to win. But we have to look at the big picture of winning, not just the short-term wins. A lot of people are going to justify a lot of bad stuff by saying we have to win, and that’s going to create longer-term systemic harm that outweighs the short-term, immediate thing that we’re facing. If we damage our democracies as we’re gobbling up the short-term wins and quick efficiencies, we’re really going to create a bigger problem than the problem that we’re solving. So yeah, I’m excited about the potential. And also, let’s do it intentionally; let’s not head in the wrong direction, let’s head in the right direction for what we’re trying to solve for and the world that we’re trying to create. This is fundamentally, I just want to say this, what civics is about for me. Right now, we’re basing a lot of our politics on immediacy. Yes, we can get immediacy out of our politics, especially with the social media playing field that we have, with fear and with hate and with the worst of our human emotions. But, as I mentioned, we are a social species by nature, and actually the best of our civics, the best of people coming together to make decisions, better long-term decisions, which we know we can do, comes from the fact that we have these distinct human emotions, this social capacity, that can have us build better long-term solutions, if we build our politics on that: on our desire to belong as a group, on our caring for one another, on our worry for future generations, and on our love. It’s not that those are less powerful than the fear and the hate; they’re less immediate, but they’re stronger emotions.
We need to lean in on those in the core of the politics that we build, and how we apply our new techs and how we value our politics. It’s the only way we’re going to save our species.
It’s like we have these capacities as humans to have better outcomes as societies, if we’re intentional, if we come together to make better decisions than just the individual immediacy that we’re faced with. So I have hope, a huge amount of hope. But I also think that we need to have discussion and intentionality, and be conscious of that potential of civics. Good civics is built on our potential for responsibility, community and shared caring for each other, and fundamentally, without sounding too much like a tree hugger, that is what drives us to make better decisions, whether that’s collective action, like a strike, or saving our species from the destruction that we’re doing.
Our politics can be based on love, love for our fellow humans and love for our community and love for each other.
And, you know, I know it sounds super idealistic when we’re faced with hate, do we have to simplify that Facebook message, so it gets more reach? Versus, can we ask somebody to come to the table to say, how are you going to stand together in power? I think that’s the choice that we’re facing right now. And with AI, that’s going to be the choice that we’re going to make. Are we going to go one way? Or are we going to go the other, as we’re faced with these new tools as they impact our civics?
Aoife Carli Hannan
Absolutely, and I think on that note as well, we should be idealists on the left. I think we’re often sceptics, but if we are going to be motivated, and maintain the motivation and the drive to believe that we can create the world that we want, that we need, that, like you’re saying, our species actually needs in order to sustain itself, to exist, we have to be idealistic, and also, you know, understand that it’s not just going to be all okay, but we need to envision that world and keep it in our minds. Thank you so much for your time. That was really awesome. And I know I have a million more questions I could ask now, and I’d love to talk more about them.
Let’s see how these AI experiments go and then either commiserate or celebrate. Exactly. Yeah. I would say, too, that we have to have these discussions, which is why I’m so appreciative of you and the invitation to have this interview, and to share some of these things, because I am really concerned. The only way we’re going to solve this is through dialogue, discussion, intentionality, and by holding ourselves to the direction that we want to go. That’s how we establish a direction, and then we need to do it.