by Felix Gumbert

Workshop: Bridging the Gap in Political Communication Online through Technology?

Multidisciplinary Perspectives on Political Communication and Technologically Supported Interventions in Online Spaces

11 and 12 June 2025 at ZiF, Bielefeld, Germany

The Bots Building Bridges project is organizing a workshop to take stock of ongoing interdisciplinary collaborations and of technological and methodological advances, and to look towards the future: to identify new questions and build new connections.

Technology has tended to widen the gap between opposing sides in many contentious debates, but, used properly, it can also help to bridge these gaps. This matters because constructive discourse on social media is consistently complicated by incivility and disinformation, fueled in part by automated systems that exploit social and technical mechanisms of social media platforms. Recent advances in large language models (LLMs) have enabled and facilitated the deployment of sophisticated chatbots, which are already being used to toxify online communication through incivility and disinformation campaigns. Identifying and counteracting such bots, especially manually, has proven difficult. However, the same technology also offers opportunities for benevolent applications, not only to identify and counteract these effects but also to proactively support a constructive communication culture. Unfortunately, our understanding of communication and interaction among people on social media lags behind technological change and the proliferation of new platforms, each with its own dynamics. These challenges require collaboration across the social and technical sciences.

Panels

  1. Automated Communication and Content Amplification in Political Contexts

Communication processes in social media are often influenced by artificial actors: algorithms structure the selection and presentation of content, and bots distribute automatically generated or manually created content. In political communication in particular, bots are often used strategically to spread one’s own political opinions, defame opponents, and manipulate trends. Such strategies of automation and amplification have the potential to influence public opinion formation, and therefore also to affect political debates and decisions, including electoral decisions. This poses new challenges not only for actors in the public sphere, but also for academia, which faces questions such as: i) How can such strategies of amplification and automation be identified? ii) How can their consequences be assessed and analysed? iii) How do established theories of the public sphere need to be adapted in light of these phenomena? Questions such as these will be at the centre of the contributions to this session.
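As a toy illustration of question i): one common family of heuristics treats clusters of accounts posting (near-)identical text within a short time window as a signal of coordinated amplification. The sketch below is a minimal, hypothetical version of such a heuristic; the input format and both thresholds are illustrative assumptions, not the project’s method or data.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical input: (account_id, timestamp, text) triples from a crawl.
posts = [
    ("acct_1", datetime(2025, 6, 1, 12, 0, 5), "Vote for X, the only honest candidate!"),
    ("acct_2", datetime(2025, 6, 1, 12, 0, 9), "Vote for X, the only honest candidate!"),
    ("acct_3", datetime(2025, 6, 1, 12, 0, 12), "Vote for X, the only honest candidate!"),
    ("acct_4", datetime(2025, 6, 1, 18, 30, 0), "Interesting debate tonight."),
]

WINDOW = timedelta(minutes=5)   # illustrative threshold
MIN_ACCOUNTS = 3                # illustrative threshold

def coordinated_clusters(posts):
    """Group posts by normalised text; flag texts posted by many
    distinct accounts within a short time window."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((ts, account))
    clusters = []
    for text, items in by_text.items():
        items.sort()
        accounts = {a for _, a in items}
        span = items[-1][0] - items[0][0]
        if len(accounts) >= MIN_ACCOUNTS and span <= WINDOW:
            clusters.append((text, sorted(accounts), span))
    return clusters

for text, accounts, span in coordinated_clusters(posts):
    print(f"{len(accounts)} accounts posted {text!r} within {span}")
```

Real detection pipelines combine many such weak signals (timing, content similarity, account metadata, network position), since no single heuristic is reliable on its own.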

  2. Deliberation, Echo Chambers and Polarization: The Interplay between Theoretical Concepts and Methodological Approaches

Social media platforms stand accused of promoting ideological and affective polarisation by fostering toxic interactions and the formation of echo chambers, thereby undermining the potential for informed opinion and consensus formation. However, the extent of these phenomena and their negative effects are disputed, and this debate is fuelled by the diversity of theoretical concepts and methods of measurement. Social media allows for large-scale collection of digital trace data, which is commonly analysed with network-analytic methods to detect the formation of echo chambers or polarisation; there is, however, a disconnect between structural analysis at the macro level and interaction analysis at the micro level. Studies that use interaction data to assess the factors contributing to (or inhibiting) political deliberation tend to analyse either pairwise or aggregated interactions, and thus do not make use of sequences of reciprocated interactions among a group of people, that is, conversation data. This session focuses on the following questions: i) How can macro- and micro-level analyses be fruitfully combined? ii) How can empirical findings contribute to the advancement of theoretical concepts? iii) How can we – theoretically, methodologically, and empirically – assess whether social media promotes deliberative discussions or is rather detrimental to them?
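To make the macro/micro distinction concrete, here is a minimal sketch of the structural, macro-level approach: it builds a reply network from hypothetical interaction data, detects communities as a crude echo-chamber proxy, and counts how many individual replies cross community lines. The edge list and the choice of networkx’s greedy modularity algorithm are illustrative assumptions.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical reply edges (who replied to whom); in practice these
# would come from platform APIs or data donations.
replies = [
    ("ann", "ben"), ("ben", "ann"), ("ann", "cem"), ("cem", "ben"),
    ("dana", "eli"), ("eli", "dana"), ("eli", "fay"), ("fay", "dana"),
    ("cem", "dana"),  # one cross-group reply
]

G = nx.Graph()
G.add_edges_from(replies)

# Macro level: community structure as a (crude) echo-chamber proxy.
communities = list(greedy_modularity_communities(G))
membership = {u: i for i, c in enumerate(communities) for u in c}

# Micro level: how many individual replies cross community boundaries?
cross = sum(1 for u, v in replies if membership[u] != membership[v])
print(f"{len(communities)} communities; "
      f"{cross}/{len(replies)} replies cross community lines")
```

What such structural summaries cannot capture are the conversational dynamics within each reply sequence, which is precisely the micro-level conversation data this session asks to bring into the analysis.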

  3. Information Environment Improvement Strategies: Connecting Community Curation and Individual Consumption with Platform Regulation

Hate speech and misinformation are persistent threats to the health of online communities and social media. They have the potential to cause psychological stress (individual harm), the destruction of communication cultures (community harm), and increasing polarisation and incitement to hateful actions (societal harm). Communities can respond by employing community managers tasked with dealing with inappropriate comments. To foster constructive communication cultures, these managers remove inappropriate content, warn and punish perpetrators, and – crucially – engage directly with community members through counterspeech, supported by automated tools. States can respond with legislation that regulates platforms and with educational interventions that seek to increase users’ competency in consuming information. Many current educational interventions, however, are ill-suited to an ecosystem in which information is over-abundant. Will curriculum shifts be required to enable students to effectively triage the quality of information in online spaces? Finally, what should platforms do? The aim of this session is to bring 3B team data and analyses on the effectiveness of information curation and consumption into dialogue with analyses of platform-based regulation and moderation. To what extent do, or should, counterspeech and information literacy interact with platform-based mechanisms such as content management or censorship; account and access management; (counter-)narratives; creating friction; inoculating and debunking; and long-term regulatory measures such as community guidelines?

  4. Automated Interventions into Social Media Discourses

Automated interventions into ongoing social media discourses operate along two axes: behaviour and content. Behaviour concerns the ways in which automated systems insert themselves into a discourse, e.g. by sending direct messages (DMs), responding to human posts, following human users, mentioning human user-handles, or quoting and retweeting human posts. Content concerns the textual and/or visual substance of the intervention, which can introduce new topics and arguments or support, elaborate, contradict, or rebut human-posted content. This content can be pre-crafted and carefully curated by human experts and deployed to counter some narratives and/or reinforce others; alternatively, it can be generated automatically with generative AI. The content can represent a single perspective on the topic, or aim for balance by including different perspectives and taking a neutral stance. Open questions that we would like to discuss in this panel include: i) Is it ethical to intervene automatically in a discourse with the goal of influencing opinion formation, even for the purpose of contributing to depolarisation? ii) Which intervention strategies are most effective: those taking a specific stance or perspective, those offering a balanced view, or those providing a humorous framing? iii) How can generative AI methods be used to generate such balanced contributions? iv) What are the expected outcomes of such interventions, and how can we evaluate them?
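The two axes suggest a simple typology of intervention designs. The sketch below is a hypothetical data model for discussing such designs, not the project’s implementation; every name in it is an illustrative assumption.

```python
from dataclasses import dataclass
from enum import Enum

class Behaviour(Enum):
    """How the system inserts itself into the discourse."""
    DIRECT_MESSAGE = "dm"
    REPLY = "reply"
    QUOTE = "quote"
    MENTION = "mention"

class ContentSource(Enum):
    """Where the intervention text comes from."""
    CURATED = "expert-curated"    # pre-crafted by human experts
    GENERATED = "llm-generated"   # produced by a generative model

@dataclass
class Intervention:
    behaviour: Behaviour
    source: ContentSource
    stance: str  # e.g. "single-perspective", "balanced", "humorous"
    text: str

# Example configuration: a balanced, LLM-generated reply.
plan = Intervention(
    behaviour=Behaviour.REPLY,
    source=ContentSource.GENERATED,
    stance="balanced",
    text="Several studies point in different directions here; "
         "both sides raise points worth weighing.",
)
print(plan)
```

Separating behaviour from content makes it easier to pose the ethical and effectiveness questions above per configuration, e.g. balanced, LLM-generated replies versus curated counter-narrative quotes.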

Location

The Center for Interdisciplinary Research (ZiF: Zentrum für interdisziplinäre Forschung) is Bielefeld University’s Institute for Advanced Study. It offers time, space, and funding for interdisciplinary research. Located next to the Teutoburger Wald, it provides an environment that is removed from everyday academic duties.

Contact

If you are interested in participating, please get in touch with one of the members of the CITEC team.