[Header image: photo collage of dried, cracked earth and wind farms, with an icon of two people and speech bubbles.]

Beyond Headcounts: Evaluating Climate Conversations for Real Impact

Introduction

As climate conversation programs expand across Australia, one question keeps emerging: how do we know if they’re actually working?

Most organisations default to simple headcounts and post-event surveys. Experienced activists, however, have discovered that meaningful evaluation can take a fundamentally different approach: one that captures relationships, builds volunteer confidence, and recognises that changing hearts and minds is a slow, relational process.

This article draws on findings from a 2025 research project conducted by the Advocacy Research Network, in partnership with the Climate Justice Coalition. The project brought together insights from a comprehensive literature review and interviews with ten experienced climate organisers across Australia.

The Evaluation Challenge: Why Traditional Metrics Fall Short

The instinct to measure conversations like any other campaign activity, by counting attendees, tracking sign-ups, or measuring immediate behaviour change, quickly runs into problems.

Where do you find the time and resources to plan, let alone implement, any evaluation activities? And even if you do, how can you work out if conversations really have any effect?

Many organisations find that conversation programs simply don’t fit neatly into traditional evaluation frameworks. Instead, activists interviewed as part of this research project revealed a number of bespoke approaches. For example, some activists found that an emergent, active learning approach was what worked for them.

As one organiser noted, “we didn’t have time for [formal evaluation]… The evaluation process had emergent qualities as we engaged with each other and shared our perceptions… came up with tactics and new things to do… you amplify the stuff that works, and you shut down the stuff that’s not working.”

However, other groups noted how evaluation was often simply too challenging or resource-intensive to implement in practice. As highlighted by one organiser, “evaluation cost a fortune actually. So we never used it”.

Even pre-planned evaluation tasks can be difficult to put into practice. A 350.org organiser noted the difficulties in obtaining data during conversations themselves, recalling that “we initially wanted to use the [doorknocking] survey as a scaffolding… But then we found that it was quite tedious to follow question 1 to question 10. So eventually we just use certain parts of the survey to generate conversations.”

This reflects a broader challenge highlighted by academic research. While studies consistently show that engagement programs improve knowledge, shift attitudes, and increase self-reported behaviour change, they struggle to demonstrate long-term or objective behavioural outcomes. Most studies report process indicators like attendance and workshops run, with very limited use of objective behavioural or population-level data.

The problem becomes even more acute with decentralised, volunteer-led programs. How do you evaluate impact when conversations are happening in hundreds of different locations, led by volunteers with varying experience levels, using flexible scripts adapted to local contexts? This has led experienced practitioners to rethink evaluation and look for different metrics and approaches. 

Different Metrics and Approaches

Layered Monitoring: Combining Numbers with Stories

The most effective programs have moved toward what researchers call “layered monitoring”: combining simple quantitative tracking with rich qualitative insights. This isn’t about abandoning data collection, but about collecting the right kind of data.

As a Climate for Change organiser noted: “do not fall into the trap of showing metrics, because metrics can feel small. But if you tell stories they can be really powerful.”

This insight, that storytelling can be more powerful than raw metrics, points toward the alternative approaches that many organisations are now exploring.

The Cairns and Far North Environment Centre (CAFNEC) provides an excellent example of this approach in practice. As an organiser explained: “After every single doorknock, we have a debrief with our volunteers… at the end of 8 weeks of door knocking, we also had a debrief with our door knocking leaders. Each suburb had a different kind of iteration of the conversation guide based on those learnings.”

This systematic debriefing creates what researchers describe as a feedback loop where evaluation becomes part of program improvement, not just external reporting. The key is making evaluation feel natural and useful to volunteers, rather than an additional burden.

Practical tools for layered monitoring include:

  • Simple conversation tracking spreadsheets with qualitative comment fields
  • Typeform surveys with auto-posting to Slack channels for real-time feedback
  • Post-event group reflection sessions with standardised questions for easy replication
  • Activity logs (tracking conversations) combined with structured debrief sessions
  • Slack bots that auto-post conversation counts and celebrate milestones (a minimal webhook sketch follows this list)
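
To make the auto-posting idea concrete, here is a minimal sketch in Python using Slack’s incoming webhooks. The webhook URL, group name, and count are placeholder assumptions; any tally source (a spreadsheet export, a Typeform response count) could feed it.

```python
# Minimal sketch: post a conversation count to a Slack channel via an
# incoming webhook. The URL and the numbers are placeholders; plug in
# whatever tally your group actually keeps.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # hypothetical

def post_count(group: str, count: int) -> None:
    """Send a short celebratory message with the day's conversation count."""
    message = {"text": f":speech_balloon: {group} logged {count} climate conversations today!"}
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

post_count("Cairns volunteers", 14)
```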

Relational Over Behavioural Outcomes: What Really Matters

One of the most important shifts in thinking about conversation evaluation is moving away from expecting immediate behaviour change. Research shows that conversations reliably increase climate knowledge, perceived efficacy (people’s confidence in their ability to do something), and intent to act – but often do not prompt short-term collective action such as signing petitions or coming along to a group meeting.

For example, some studies have shown that online training might increase volunteer confidence to speak about climate but not significantly raise activism rates one month later. Other studies have shown an increase in knowledge and intentions to take action by people who have had conversations, but little change in their actual behaviours within the timeframe studied.

This doesn’t mean conversations aren’t working. Instead, it means we might be measuring the wrong things.

As the research recommends, evaluations could instead be structured around volunteer confidence, volunteers’ perceptions of their ability to influence people, and relational outcomes. These outcomes can be tracked over time, and are far more feasible to measure than immediate changes in people’s behaviour as a result of a single, often short, conversation.

Ways to measure change in people’s confidence after conversations:

  • Simple 1-5 scale ratings on “How confident do you feel talking about climate change?” can be asked in a conversation script as well as before and after conversation sessions with volunteers.
  • Pre- and post-training check-ins can measure volunteers’ knowledge and comfort levels when having conversations; quick and easy measures such as choosing an emoji can overcome survey fatigue (a minimal pre/post scoring sketch follows this list).
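
As an illustration of how little tooling this needs, the sketch below summarises pre/post 1-5 confidence ratings using only the Python standard library. The file name and column names are assumptions; adapt them to whatever your tracking sheet exports.

```python
# Minimal sketch: summarise pre/post 1-5 confidence ratings exported as
# a CSV with hypothetical "pre" and "post" columns, one row per volunteer.
import csv
from statistics import mean

with open("confidence_ratings.csv", newline="") as f:
    rows = list(csv.DictReader(f))

pre = [int(r["pre"]) for r in rows]
post = [int(r["post"]) for r in rows]

print(f"Responses: {len(rows)}")
print(f"Mean confidence before: {mean(pre):.1f} / 5")
print(f"Mean confidence after:  {mean(post):.1f} / 5")
print(f"Volunteers who gained confidence: {sum(b > a for a, b in zip(pre, post))}")
```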

Ways to measure relational outcomes in volunteers and conversations:

  • Relationship mapping exercises and network mapping tools can track who volunteers spoke to and the new connections they made (a simple network-mapping sketch follows this list).
  • Volunteers can be asked reflection questions like “What did you learn from this conversation?” These can be asked at the end of conversations, or in follow-up gatherings and events for those who have had conversations.
  • Conversation diaries can be recorded using smartphone apps or simple notebooks.
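
One lightweight way to do network mapping is to treat each logged conversation as a link between two people. The sketch below uses the networkx library with invented names, purely for illustration of the idea.

```python
# Minimal network-mapping sketch: each logged conversation becomes an
# edge between a volunteer and the person they spoke with.
import networkx as nx

conversations = [
    ("Aisha", "neighbour_1"),
    ("Aisha", "neighbour_2"),
    ("Ben", "workmate_1"),
    ("Ben", "Aisha"),  # volunteer-to-volunteer conversations count too
]

G = nx.Graph()
G.add_edges_from(conversations)

# Simple relational indicators: overall reach and most-connected people
print(f"People in the network: {G.number_of_nodes()}")
print(f"Connections logged: {G.number_of_edges()}")
for person, degree in sorted(G.degree, key=lambda d: -d[1])[:3]:
    print(f"{person}: {degree} connections")
```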

Decentralised Evaluation Tools: Making It Work for Volunteers

When conversations are happening across dozens of communities with hundreds of volunteers, evaluation systems need to be embedded into volunteer practice, not imposed from above. The academic research examined as part of this project showed that successful evaluation builds in data-gathering practices that work in grassroots settings, where volunteers may have limited time and resources.

Successful programs have found that peer-logged systems and gamified approaches can enhance engagement while capturing valuable data. The key is making data collection feel social and rewarding rather than bureaucratic.

Peer-logged and gamified tools include:

  • Regional WhatsApp/Signal groups with daily conversation count sharing
  • Shared Google Sheets with easy access where local groups log their activities
  • Instagram or Facebook groups for photo sharing from conversation events
  • Simple mobile apps where volunteers can quickly log conversations
  • Regional competition dashboards showing group totals across areas (a minimal leaderboard sketch follows this list)
  • Peer recognition systems where volunteers nominate others for good conversations
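
A competition dashboard can start as little more than a tally of a shared activity log. The sketch below assumes a hypothetical CSV export (for example, from a shared Google Sheet) with a "group" column and one row per logged conversation.

```python
# Minimal leaderboard sketch: tally conversations per local group from
# a shared activity log. File name and "group" column are assumptions.
import csv
from collections import Counter

totals = Counter()
with open("conversation_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["group"]] += 1

print("Regional conversation leaderboard")
for rank, (group, count) in enumerate(totals.most_common(), start=1):
    print(f"{rank}. {group}: {count} conversations")
```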

Social elements can make evaluation tasks more enjoyable. Building systems where volunteers want to share their experiences and learn from each other makes evaluation feel collaborative rather than extractive. And that means volunteers are more likely to do it.

Process-Oriented Indicators: Building for the Long Term

Rather than setting strict, ambitious quantitative targets, both the academic research and experienced activists recommend focusing on relational indicators, volunteer growth, and story-based learning.

This reflects the reality that decentralised conversation programs aim to build movement capacity and community relationships, not just deliver specific policy outcomes. For example, the Australian Youth Climate Coalition (AYCC) was able to track this in their decentralised network by embedding evaluation into their volunteer leadership journey. They created systems where reflection and assessment became part of skill development rather than external accountability.

Practical Implementation: What Works on the Ground

Several organisations have developed practical approaches that balance useful data collection with volunteer capacity:

  • Buddy Systems
    Pairs of volunteers interview each other after events, creating peer support while capturing insights.
  • Teaching Back Sessions
    Volunteers train others and reflect on what they learned, building capacity while documenting lessons.
  • Story Collection Processes
    Volunteers document and share conversation highlights, creating evaluation content that’s also useful for training and funding.
  • Simple Debrief Templates
    Conversation captains use structured formats with their teams, ensuring consistent reflection without requiring formal training.

Just as it is important to offer multiple ways for volunteers to get involved in conversation programs, the same applies to evaluation: different volunteers will engage with different types of reflection and feedback processes.

Long-Term Evaluation and Volunteer Follow-Up

Effective conversation program evaluation requires thinking beyond individual events to track volunteer development and community relationship building over time. Research shows that change rarely happens in a single conversation, with several organisers noting that multiple touchpoints are often required for people to shift their thinking.

One organiser who helped run a nation-wide conversations program emphasised the longer term view of program impact: “we often talked about 3 points of contact being required in communities for people to start to shift their thinking. And often the 1st conversation was that 1st point of contact.”

This suggests evaluation systems need to capture cumulative impact across multiple interactions and time periods, not just immediate outcomes from single conversations. While it is difficult to find data that shows whether this impact is really occurring, there are some options. For example, public opinion polling data may be available for different communities at different timepoints, and can be used to investigate change in attitudes. Proxy measures, such as the number of articles about climate change in local newspapers or the frequency of climate-related events, can also be used for long-term evaluation.
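As a toy illustration of tracking a proxy measure over time, the sketch below compares quarterly counts of climate-related local newspaper articles against a pre-program baseline. The figures are invented placeholders, not findings from the research.

```python
# Toy sketch of a proxy-measure trend: climate-related local newspaper
# articles per quarter, before and after a conversations program.
# All figures are invented placeholders.
article_counts = {
    "2024-Q3": 4,   # baseline, before the program
    "2024-Q4": 5,   # program begins
    "2025-Q1": 9,
    "2025-Q2": 11,
}

baseline = article_counts["2024-Q3"]
for quarter, count in article_counts.items():
    change = (count - baseline) / baseline * 100
    print(f"{quarter}: {count} articles ({change:+.0f}% vs baseline)")
```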

Other community impact evaluation tools include:

  • Community pulse surveys: “Are people in your area talking more about climate?”
  • Local group health indicators measuring meeting attendance, volunteer satisfaction, and group cohesion
  • “Most Significant Change” technique where volunteers identify their most meaningful conversation
  • Community mapping exercises showing spread of climate conversations in local networks
  • Email, pledge, or petition sign-ups in communities where a conversations program has been held, or other measures of engagement such as attendance at local events.

Making Evaluation Work for Everyone

The most successful evaluation approaches serve multiple purposes:

  • help volunteers improve their skills,
  • provide useful data for program improvement,
  • create content for training and funding, and
  • build community among conversation volunteers.

The key insight from experienced activists is that evaluation shouldn’t feel like surveillance or busywork. When done well, it becomes a tool for volunteer development, program improvement, and community building, while also generating the evidence needed to demonstrate impact and secure ongoing support.

Good evaluation systems make this organic process more systematic and shareable, without losing the flexibility and responsiveness that make conversation programs effective in the first place.

Moving Forward: Boosting Our Evaluation Culture and Practices

Climate conversation programs are still experimenting with evaluation approaches that match their relational, decentralised nature. The most promising developments combine simple, accessible tools with rich qualitative insights, embedded into volunteer practice rather than imposed from outside.

The goal isn’t perfect measurement; that’s impossible. Instead, it’s about getting useful feedback that helps volunteers grow, programs improve, and communities strengthen. When evaluation serves these purposes, it becomes a tool for movement building rather than just organisational accountability.

For organisations starting conversation programs, the research suggests beginning with simple systems focused on volunteer confidence and relationship building, then gradually adding layers of insight as capacity and experience grow. The most important element isn’t the specific tools used, but creating a culture where reflection, learning, and adaptation are valued and supported.

This approach to evaluation recognises that changing hearts and minds on climate is fundamentally about building relationships and community capacity. And these outcomes can’t be measured in a single one-off survey. Instead, they are outcomes that require patient, nuanced, and multi-faceted measurement over time.

Explore more resources related to this article and the 2025 research project conducted by the Advocacy Research Network, in partnership with the Climate Justice Coalition.
