by Felix Gumbert

Workshop: Bridging the Gap in Political Communication Online through Technology?

Multidisciplinary Perspectives on Political Communication and Technologically Supported Interventions in Online Spaces

11th and 12th of June, 2025 at ZiF, Bielefeld, Germany

Registration is possible until May 29.

The Bots Building Bridges project is organizing a workshop to take stock of ongoing interdisciplinary collaborations and of technological and methodological advances, and to look towards the future, identifying new questions and building new connections.

Technology has tended to widen the gap between opposing sides in many contentious debates, but, used well, it can also help to bridge these gaps. This matters because constructive discourse in social media is consistently complicated by incivility and disinformation, fueled in part by automated systems that exploit the social and technical mechanisms of these platforms. Recent advances in large language models (LLMs) have enabled the deployment of sophisticated chatbots, which are already being used to toxify online communication through incivility and disinformation campaigns. Identifying and counteracting them, especially manually, has proven difficult. Yet the same technology also offers opportunities for benevolent applications, not only to identify and counteract these effects, but to proactively support a constructive communication culture. Unfortunately, our understanding of communication and interaction among people in social media lags behind technological change and the proliferation of new platforms with their own dynamics. These challenges require collaboration across the social and technical sciences.

Location

The workshop will take place at the ZiF (Zentrum für interdisziplinäre Forschung), Methoden 1, 33615 Bielefeld. The Center for Interdisciplinary Research is Bielefeld University’s Institute for Advanced Study; it offers time, space, and funding for interdisciplinary research. Located next to the Teutoburger Wald, it provides an environment removed from everyday academic duties. Directions.

Program

You can download the program as a PDF here.

Description of Panels and Talks

Panel 1: Information Environment Improvement Strategies: Connecting Community Curation and Individual Consumption with Platform Regulation

Panel Description

Hate speech and misinformation are persistent threats to the health of online communities and social media. They have the potential to cause psychological stress (individual harm), the destruction of communication cultures (community harm), and increasing polarisation and incitement to hateful actions (societal harm). Communities can respond by employing managers tasked with dealing with inappropriate comments. To foster constructive communication cultures, managers remove inappropriate content, warn and punish perpetrators, and – crucially – engage directly with community members through counterspeech, supported by automated tools. States can respond with legislation that regulates platforms and with educational interventions that seek to increase users’ competency in consuming information. Many current educational interventions, however, are ill suited to an ecosystem where information is over-abundant. Will curriculum shifts be required to enable students to effectively triage the quality of information in online spaces? Finally, what should platforms do? The aim of this session is to bring 3B-team data and analyses of the effectiveness of information curation and consumption into dialogue with analyses of platform-based regulation and moderation. To what extent do, or should, counterspeech and information literacy articulate with platform-based mechanisms such as content management or censorship; account and access management; (counter-)narratives; creating friction; inoculating and debunking; and long-term regulatory measures such as community guidelines?

  • Regaining initiative – approaches to secure pluralism (Lars Rinsdorf, TH Köln)

Abstract

The talk deals with options for enhancing the resilience of democratic societies against the threat of disinformation. It discusses the results of a two-part study of experts in this field from science and political think tanks as well as from journalism, activism, and law enforcement. Building on core findings of previous research, we conducted in-depth interviews and a focus group with these experts to identify patterns of successful strategies against disinformation. It is argued that merely flagging disinformation or rolling out prebunking measures at scale would be shortsighted. In the long run, democratic institutions should instead establish constructive narratives of how to tackle societal problems.

  • Mainstream Partisanship vs. Fringe Outlets: Pathways to Misinformation Across Democracies (Ana Sofia Cardenal, Universitat Oberta de Catalunya)

Abstract

Misinformation is a pressing societal concern, evident in contexts such as the COVID-19 pandemic, the 2020 U.S. elections, debates over climate change, and immigration. The contemporary information environment has amplified the spread of misinformation, yet much of the academic and public discourse has centered on the role of fake news and untrustworthy fringe sources. However, such sources are rarely visited by most users, while misinformed beliefs remain widespread.
In this presentation, we argue that partisan media—particularly widely visited partisan outlets—may play a more significant role in shaping false beliefs. These effects can be both direct and indirect, for instance, by linking audiences to fringe sources. We explore this argument using a combination of web-tracking and survey data from five advanced democracies (France, Germany, Spain, the United States, and the United Kingdom).
Our findings indicate that highly skewed partisan media diets—particularly those leaning to the right—are strongly associated with false beliefs about the most politically polarized issues, such as climate change and immigration. In contrast, exposure to untrustworthy fringe sources is more directly linked to false beliefs about less politicized topics, including COVID-19 and the war in Ukraine. The results underscore the need for misinformation interventions that are sensitive to the varying levels of political polarization across issues.

  • Civic Information Literacy and Counterspeech Automation: Strengthening Digital Dialogue through Critical Skills and Technically Assisted Community Management (Holger Heppner, HSBI & Mathieu O’Neil, University of Canberra)

Abstract

In an increasingly polarized and information-saturated digital landscape, fostering constructive and democratic discourse requires both critical competencies and innovative technological interventions. Civic information literacy refers to the ability to navigate digital information environments critically, efficiently, and responsibly. We present a set of fast, transparent, and non-partisan tools designed to help individuals distinguish credible from manipulative content. Core techniques include lateral reading, reflective engagement with emotional cues, and informed use of collaborative platforms like Wikipedia. At the same time, digital environments increasingly rely on algorithmic systems to moderate discourse. It is well established that computers are perceived as social actors, a tendency that becomes even more pronounced with large language model-driven virtual agents. These agents are entrusted with increasingly complex social tasks that demand emotional sensitivity and moral reasoning. One such task is counterspeech in response to hate speech and incivility on social media. We present findings on various counterspeech techniques, assess their effectiveness, and explore their potential for automation through bots. The findings suggest how LLM-based bots could play a supportive role in promoting healthier and more respectful online discourse.

Panel 2: Automated Communication and Content Amplification in Political Contexts

Panel Description

Communication processes in social media are often influenced by artificial actors: algorithms structure the selection and communication of content, and bots distribute automatically generated or manually created content. In political communication in particular, bots are often used strategically to spread one’s own political opinions, defame opponents, and manipulate trends. Such strategies of automation and amplification have the potential to influence public opinion formation, and therefore also to affect political debates and decisions – including electoral decisions. This poses new challenges not only for actors in the public sphere, but also for academia, which faces questions such as: i) How can such strategies of amplification and automation be identified? ii) How can their consequences be assessed and analysed? iii) And how do established theories of the public sphere need to be adapted in light of these phenomena? Questions such as these will be at the centre of the contributions to the session ‘Automated Communication and Content Amplification’.

  • Reducing Partisan Animosity Using AI (Ozgur Can Seckin & Bao Tran Truong, Indiana University)

Abstract

Over recent decades, Americans have increasingly clustered into like-minded partisan communities and grown more hostile toward the opposing party, resulting in unprecedented highs in affective polarization (Brown et al., 2025; Boxell et al., 2024). Prior research shows that same-party conversations can intensify polarization (Strandberg et al., 2017), while cross-party exchanges can mitigate it (Levendusky et al., 2021), yet the effect of agreement and disagreement within these dialogues remains unclear. Drawing on social-balance theory (Newcomb, 1953) and cognitive-dissonance theory (Festinger, 1957), we argue that ideological dissonance – disagreeing with an in-group member or agreeing with an out-group member – triggers discomfort that makes people rethink group boundaries. This process might push participants to reevaluate the diversity within their own party or reduce perceptions of out-group extremism, thereby lowering affective polarization. We test this claim in a 2 × 2 online experiment. Participants engage in brief, structured chats with a large language model that either represents their political in-group or out-group and either agrees or disagrees with them, yielding four conditions: (1) In-group Agreement, (2) In-group Disagreement, (3) Out-group Agreement, and (4) Out-group Disagreement. For example, a self-identified Democrat in the Out-group Agreement condition converses with a Republican-aligned chatbot that agrees with the participant’s policy view. A follow-up survey four weeks later will measure the persistence of any attitude change. By demonstrating how short AI-mediated dialogues can recalibrate partisan perceptions, this study will provide both theoretical insight and a scalable intervention for fostering more civil political discourse.
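
As a concrete illustration of the assignment logic in such a 2 × 2 design, a minimal sketch follows; the condition labels and function names are hypothetical, not the study’s actual materials:

    # Hypothetical sketch of random assignment in the 2 x 2 design
    # (bot identity x bot stance); not the authors' actual code.
    import itertools
    import random

    CONDITIONS = list(itertools.product(["in-group", "out-group"],
                                        ["agrees", "disagrees"]))

    def assign_condition(participant_id):
        """Randomly assign a participant to one of the four conditions."""
        identity, stance = random.choice(CONDITIONS)
        return {"participant": participant_id,
                "bot_identity": identity,  # matches or opposes the participant's party
                "bot_stance": stance}      # agrees or disagrees with their policy view

    print(assign_condition("P001"))
    # e.g. {'participant': 'P001', 'bot_identity': 'out-group', 'bot_stance': 'agrees'}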

  • New Automation – AI in Creating and Fighting Artificial Communication (Christian Grimme, University of Münster)
  • Artificial Amplification in the Hybrid Media System (Florian Muhle & Indra Bock, Zeppelin University, Elena Esposito, Bielefeld University)

Panel 3: Deliberation, Echo Chambers and Polarization: The Interplay between Theoretical Concepts and Methodological Approaches

Panel Description

Social media platforms stand accused of promoting ideological and affective polarisation by fostering toxic interactions and the formation of echo chambers, undermining the potential for informed opinion and consensus formation. However, the extent of these phenomena and their negative effects are disputed. This debate is fuelled by the diversity of theoretical concepts and methods of measurement. Social media allows for large-scale analysis of digital trace data, and network analysis of such data is used to detect the formation of echo chambers or polarization; however, there is a disconnect between structural analysis at the macro level and interaction analysis at the micro level. Studies that use interaction data to assess the factors contributing to (or inhibiting) political deliberation tend to analyze either pairwise or aggregated interactions, and thus do not make use of sequences of reciprocated interactions within a group of people, that is, conversation data. This session focuses on the following questions: i) How can macro- and micro-level analysis be fruitfully combined? ii) How can empirical findings contribute to the advancement of theoretical concepts? iii) How can we – theoretically, methodologically, and empirically – assess whether social media promotes deliberative discussions or is rather detrimental to them?

  • Detecting the Symptoms of Destructive Polarisation: The Practice Mapping Approach (Axel Bruns, Queensland University of Technology)

Abstract

In the absence of hard empirical evidence for the existence of genuine echo chambers, our focus must necessarily shift to polarisation as a major driver of societal divisions: the core problem in the contemporary landscape of public debate is not that people no longer encounter counter-attitudinal information and perspectives, but that perceived or actual polarisation between political and social groups undermines meaningful debate and consensus-building between opposing groups. Even the apparently straightforward concept of polarisation must be further developed, however: clear distinctions between competing issue and ideology positions can be productive if they enable citizens and decision-makers to choose their preferred course of action, but such agonistic competition can turn into antagonistic division once one or more sides of a debate abandon their commitment to engaging with their opponents, and to working towards consensus or at least compromise. Our work has identified several symptoms of this shift towards destructive polarisation: (a) breakdown of communication; (b) discrediting and dismissing of information; (c) erasure of complexities; (d) exacerbated attention to and space for extreme voices; and (e) exclusion through emotions. Drawing on social media data, this contribution uses the novel methodological approach of practice mapping to identify the discursive groups and their positions in a given public debate, and to assess the extent to which their activities exhibit the symptoms of destructive polarisation.

  • AI and Deliberation: Normative Ideals in the Light of Current AI Research – A Review (Dennis Friess, University of Düsseldorf)

Abstract

The presentation reviews current research on artificial intelligence (AI) in the context of online discussions, using deliberative norms as a theoretical framework. Focusing on the deliberative dimensions of civility, equality, rationality, and reciprocity, the study examines and categorizes 171 articles accordingly. It finds that most AI research to date emphasizes enhancing rationality and civility in online discourse. Techniques such as argument mining and hate speech detection are commonly employed to improve the quality of discussions, fostering more structured and respectful communication. However, efforts to address equality and reciprocity—the foundational principles of democratic discourse—remain limited. This imbalance highlights critical gaps in the application of AI for fostering equitable and inclusive dialogue.

  • Deliberation, Echo chambers, and Polarisation: From individual contributions to interactions between users (Ole Pütz, Felix Gumbert, Bielefeld University & Rob Ackland, Australian National University)

Abstract

Despite interaction being a key concept in most social theories, empirical research associated with the terms deliberation, echo chambers, and polarisation operates with a limited notion of interaction. Typically, the focus is on the production and diffusion of information or on interaction as a one-time perception and response. However, social media platforms also enable users to engage in continued reciprocal interaction or conversations of variable length. This shift in perspective requires means of data collection and analysis that maintain the structure of conversations in the form of reply chains that are part of larger reply trees. The consideration of online conversations allows us to ask new questions or rephrase existing ones, such as whether users actually engage in discussions online and what role partisanship and incivility play in them.
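
To make the conversational data structure concrete, the following minimal Python sketch reconstructs reply chains and reply trees from a flat list of posts; the field names are hypothetical (real platform data uses identifiers such as in_reply_to_status_id):

    # Minimal sketch: rebuild reply chains and reply trees from a flat
    # list of posts. Field names (id, parent_id) are illustrative only.
    from collections import defaultdict

    posts = [
        {"id": 1, "parent_id": None},   # original post (root of a reply tree)
        {"id": 2, "parent_id": 1},      # first reply
        {"id": 3, "parent_id": 2},      # reply to the reply
        {"id": 4, "parent_id": 1},      # second reply to the original
    ]

    children = defaultdict(list)        # parent id -> replies
    for post in posts:
        children[post["parent_id"]].append(post)

    def reply_chains(post, chain=()):
        """Yield every root-to-leaf reply chain below a post."""
        chain = chain + (post["id"],)
        if not children[post["id"]]:    # leaf: the chain is complete
            yield chain
        for child in children[post["id"]]:
            yield from reply_chains(child, chain)

    for root in children[None]:         # roots have no parent
        for chain in reply_chains(root):
            print(chain)                # (1, 2, 3) and (1, 4)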

Panel 4: Automated Interventions into Social Media Discourses

Panel Description

Automated interventions into ongoing social media discourses operate along two axes: behaviour and content. Behaviour concerns the ways in which automated systems insert themselves into a discourse, e.g. by sending direct messages (DMs), responding to human posts, following human users, mentioning human user handles, or quoting and retweeting human posts. Content concerns the textual and/or visual substance of the intervention, which can introduce new topics and arguments or support, elaborate, contradict, or rebut human-posted content. This content can be pre-crafted and carefully curated by human experts, and deployed to counter some narratives and/or reinforce others. Alternatively, it can be generated automatically via generative AI technology. The content can represent one perspective on the topic or attempt to provide a balanced view by including different perspectives and taking a neutral stance. Open questions we would like to discuss in this panel include: i) Is it ethical to intervene automatically in a discourse with the goal of influencing opinion formation, even for the purpose of contributing to depolarization? ii) Which intervention strategies are most effective: those taking a specific stance or perspective, those offering a balanced view, or those providing a humorous framing? iii) How can generative AI methods be used to generate such balanced contributions? iv) What are the expected outcomes of such interventions and how can we evaluate them?

  • The evaluation bottleneck in NLP-supported deliberation (Gabriella Lapesa & Julia Romberg, GESIS – Leibniz Institute for the Social Sciences)

Abstract

In our talk, we will address a fundamental issue in NLP-supported deliberation: evaluation. Practical and ethical constraints often prevent direct testing in deliberative campaigns, which is why annotation carried out beforehand by domain experts (i.e., trained moderators) or recruited participants (who could, for example, take the perspective of the discussion participants) is a practical solution to the evaluation bottleneck. Recent developments in annotation for NLP, however, highlight a crucial property that such annotations should have: going beyond the notion of a single gold truth and instead capturing as many perspectives as possible. This variety challenges current evaluation setups, as we will detail, and we will propose possible solutions.
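
As a minimal illustration of evaluating beyond a single gold label, one might score items against the full distribution of annotator judgments rather than a majority label; the sketch below uses made-up data and labels, not GESIS tooling:

    # Sketch: evaluate against the distribution of annotator judgments
    # rather than a single gold label. Data and label names are invented.
    from collections import Counter

    annotations = {
        "comment_1": ["civil", "civil", "uncivil"],
        "comment_2": ["uncivil", "uncivil", "uncivil"],
    }

    def soft_label(labels):
        """Relative frequency of each label across annotators."""
        counts = Counter(labels)
        return {label: n / len(labels) for label, n in counts.items()}

    for item, labels in annotations.items():
        print(item, soft_label(labels))
    # comment_1 -> civil ~0.67, uncivil ~0.33: annotator disagreement
    # is preserved instead of being collapsed into one "gold" answer.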

  • DeLab: Explainable Automatic Moderation for Online Deliberation (Zlata Kikteva & Artur Romazanov, University of Passau)

Abstract

Given the prevalence of unhealthy, unconstructive, and oftentimes simply disrespectful behaviour in the online sphere, there continues to be a substantial amount of research on interventions into social media conversations. In our talk, we present a bot for online moderation on social media, built upon principles of soft moderation, which aims at promoting constructive dialogue rather than removing offensive material. We will discuss the components of the bot, including data collection, feature extraction, the intervention decision model, and the generation of moderating content. We also provide a human-centered perspective on content moderation, based on the results of an ethnographic study conducted in the Netherlands.
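
As a rough illustration of how such a pipeline fits together, the sketch below mirrors the four components named above; all function names and the trivial heuristics are hypothetical, not DeLab’s actual implementation:

    # Hypothetical sketch of a soft-moderation bot pipeline; all names
    # and heuristics are illustrative, not the DeLab system itself.

    def collect_posts(stream):
        """Data collection: keep posts that belong to a tracked conversation."""
        return [p for p in stream if p.get("conversation_id") is not None]

    def extract_features(post):
        """Feature extraction: toy cues standing in for real classifiers."""
        text = post["text"].lower()
        return {"length": len(text), "has_insult": "idiot" in text}

    def should_intervene(features):
        """Intervention decision: trigger soft moderation on unconstructive cues."""
        return features["has_insult"]

    def generate_moderation(post):
        """Content generation: a de-escalating reply rather than removal."""
        return f"@{post['author']} Let's keep this discussion respectful."

    stream = [{"conversation_id": 1, "author": "user1", "text": "You are an idiot."}]
    for post in collect_posts(stream):
        if should_intervene(extract_features(post)):
            print(generate_moderation(post))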

  • Bots Building Bridges: Automated Interventions Encourage Discussions Across Viewpoints (Philipp Cimiano & Matthias Orlikowski, Bielefeld University; Tony Veale, University College Dublin)

Abstract

Bots on social media have a bad reputation. We usually associate bots with threats to political discourse, such as misinformation. But why should bots not be used to do good? Specifically, could bots help to build bridges by increasing the quality of deliberation in a polarized debate? We investigate this question with bot interventions that try to engage users with content that challenges their viewpoints. In the context of the German “Tempolimit” debate on the adoption of a general speed limit, we conduct a field study with a bot that replies to posts which oppose or support the general speed limit. Compared to generic bot replies, preliminary results show that a bot using authoritative content to challenge users’ viewpoints can encourage substantial replies and longer chains of replies, a prerequisite for deliberation. We close with an outlook on variants of our bot that use humor rather than a logic of debate: in prior work, we developed bots that generate comic strips as an engaging and unthreatening medium to highlight different sides of an argument.

Contact and Registration

Registration is possible until May 29.

For any additional questions, please get in touch with one of the members of the CITEC team.