Social media users in search of ‘facts’

Vera Zvereva
https://orcid.org/0000-0001-8071-6380

Department of Language and Communication Studies, University of Jyväskylä

ABSTRACT

What factors influence users to believe the stories they find in social media, and what role do emotions play for users in concluding that a particular fact is ‘true’? This article examines one aspect of emotionalized communication in social networks in an information war context, namely, how social network users make decisions about the reliability of the information they receive. We employ a qualitative study of a single case – a discussion among Russian-speaking Livejournal.com and Facebook.com users of a tragic incident in Ukraine – the deadly fire that took place in the Odessa Trade Union House on 2 May, 2014. The relevance of this case lies in how, for all its uniqueness as a tragic event, the communications by users in its immediate aftermath typify important features of social media discussions of ‘shocking events’. This article considers a general model of behavior of users who must try to comprehend the tragic news and are caught in a state of uncertainty amid acute confrontation between actors in an information conflict.

ARTICLE HISTORY
Received 10 September 2020
Accepted 31 January 2021

KEYWORDS
Digital communication, emotionality in social media, verification of facts, visual evidence, information war


CONTACT Vera Zvereva [email protected] Department of Language and Communication Studies, University of Jyväskylä

© 2021 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group

This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.

Introduction

Scholars studying social media communication are increasingly inclined to view social media as ‘affective’ and ‘emotional’, emphasizing the affectivity generated and transmitted through social networks (Papacharissi, 2015; Sampson et al., 2018; Slaby & Röttger-Rössler, 2018; Wahl-Jorgensen, 2019), their tendency to encourage the expression of emotions, as well as their ability to amplify users’ emotional states, and manage the collective experience of news and events (Hyvärinen & Beck, 2018). Social media are studied as hyper-connective and immediate, focused on sharing sentiments and producing affects (Coleman, 2018) which have direct implications for public life and political culture (Wahl-Jorgensen, 2019). Researchers highlight the centrality of social media’s ‘emotional architecture’ in shaping the emotions of the public and enhancing their exploitation for commercial purposes (Wahl-Jorgensen, 2018). Through the lens of ‘affect’ and ‘emotion’, scholars have considered how various spheres of communication mediated by social media have been transformed – from journalism and ‘personalized and emotionally driven’ news media (Beckett & Deuze, 2016, p. 6), through crisis communication (Hyvärinen & Beck, 2019), mediatized conflicts and information wars to the sphere of leisure and consumption (Karatzogianni & Kuntsman, 2012).

In particular, researchers of political communication and mediatized conflicts emphasize the fact that the affordances of social media have contributed to the rise of political populism (Ekström et al., 2018; Engesser et al., 2017; Moffitt, 2016) and the formation of an ‘affective public’ mobilized, united and disconnected through expression of their sentiments (Papacharissi, 2015). They note how its increasingly emotionalised space makes digital media attractive for political manipulation in election campaigns, political mobilizations and propaganda campaigns (Bakir & McStay, 2018). Social media push users further towards selectively exposed information and into echo chambers within which shared collective feelings are generated and similar emotions expressed by participants united around similar political views and values (Knobloch-Westerwick et al., 2020; Quattrociocchi et al., 2016; Workman, 2018). The propensity of users to justify their stance in argument, as well as social media’s potential to produce emotional contagion (Guillory et al., 2011), have become tools of information warfare, i.e. of conflict situations in which political confrontation is expressed through information and communication technologies (Pomerantsev & Lucas, 2016).

In this study, we examine one aspect of emotionalized communication in social networks in an information war context, namely, how social network users make decisions about the reliability of the information they receive. We employ a qualitative study of a single case – a discussion among Russian-speaking Livejournal.com and Facebook.com users of a tragic incident in Ukraine – the deadly fire that took place in the Odessa Trade Union House on 2 May, 2014.

The relevancy of this case, several years after the event, lies in how, for all its uniqueness as a tragic event, the communications concerning the Trade Union House fire by these users are typical of how ‘shocking events’ in general are discussed in social media. It allows us to study a more general model of the behavior of users who, on the one hand, have to confront shocking news and try to comprehend it, and on the other hand, have to deal with a state of uncertainty amid acute confrontation between actors in an information conflict. This case reveals what kind of challenges are faced by users in similar situations, and how they progress from knowing nothing to defining their own position on ‘what really happened’.

The research goals of this study are, first, to examine the factors in the conditions of an information war that influence social network users, upon learning about a tragic event, to find particular stories that they encounter in social media credible; and, second, to enquire what role emotions play for users in concluding that a particular fact is ‘true’.

Our hypothesis is that social media users in an information warfare context tend to assess the reliability of new information based on their existing political views, demonstrating confirmation bias. Their choice is at the same time influenced by the emotions shared with them in social media by like-minded people; and the emotional authenticity of the experience of the event reinforces their conviction of the truth of their interpretation of reality.

Materials and methods

Context

In 2014, during the Ukrainian crisis, Russia’s annexation of Crimea, and the subsequent unrest in Ukraine, the information war conducted between Russia and Ukraine involved the official media, various participants in information campaigns on the Internet, and ordinary users of social media (Gaufman, 2015, 2017; Pomerantsev & Lucas, 2016). On social media, users on both sides of the conflict generated and responded to misinformation and hate speech using similar language strategies. The ongoing confrontation in May 2014 between social media users with respectively pro-Russian and pro-Ukrainian positions provides the context for the case discussed in this article.

The chronology of events was reconstructed by the 2 May public investigation group (Gruppa 2 Maja, 2015). In Odessa, a city with a large Russian-speaking population, the confrontation between supporters of the Euromaidan (pro-Ukrainian) and ‘anti-Maidan’ (pro-Russian) positions had been going on for several months. This confrontation was significantly intensified by the hybrid information war on the part of Russia and the annexation of Crimea in February–March 2014. On 2 May, events in the city included two episodes: clashes between anti-Maidan supporters and pro-Ukrainian activists near Grecheskaya Square, and clashes on Kulikovo Pole (Kulikovo field) with a fire in the nearby House of Trade Unions.

The confrontation began when, in the city center, a procession of football fans and supporters of Euromaidan, who had arrived in Odessa the same day declaring a ‘Ukrainian unity march’, was attacked by a group of people who took a pro-Russian stance and were trying to disrupt the march and – according to some of the anti-Maidan supporters – to prevent the football fans from attacking the camp of ‘anti-Maidanists’ on Kulikovo Pole. During the street clash, stones and Molotov cocktails were hurled and rubber bullets fired by both sides. One Euromaidan activist was shot dead, triggering an escalation of the conflict on Grecheskaya Square, where five more people were shot, with fire coming from both sides.

The Euromaidan activists put the attackers to flight. The conflict continued in Kulikovo Pole, where about 300 supporters of the pro-Russian position – including women and elderly people who had been defending the camp – barricaded themselves in the House of Trade Unions. A crowd gathered around the building and tried to storm it. ‘Ultras’ began to throw Molotov cocktails, while from the House of Trade Unions, Molotov cocktails were also thrown into the crowd. Near the House, tents and a barricade, blocking the entrance to the building, caught fire. Within a few minutes, the whole building was on fire. People inside the House of Trade Unions, trying to escape, started jumping out of windows. Some supporters of Euromaidan began to help people leave the building; others started beating up those who escaped the fire. The police did not intervene, and firefighters responding to the first calls took 30 minutes to arrive. In all, 303 people were evacuated from the House of Trade Unions. On this single day, 48 people died in Odessa: 34 were pro-Russian activists who died from fire and smoke inside the House of Trade Unions; 8 died falling from the windows; 6 were shot in the morning in street clashes. All the dead were citizens of Ukraine (International Advisory Panel, 2015).

People were struck by the scale of the tragedy. Multiple eyewitness accounts appeared immediately in social media, as well as rumours and subsequently refuted false information. Participating in the discussions were information warriors on both sides, pro-Russian and pro-Ukrainian, using various kinds of provocation. Among users, the tragedy in Odessa elicited various reactions from horror, anger, and grief, to gloating and triumph. Many expected the tragic events would be promptly investigated and the perpetrators punished. However, despite the efforts of the 2 May public investigation group, Ukrainian and international organizations, the precise details of the events surrounding the fire remained unclear and ‘those responsible for the killings and violent deaths of 48 people have yet to be brought to justice’ (Office of the United Nations, 2019, p. 18).

According to an investigation by the International Advisory Panel, there was a conspiracy between the police and ‘anti-Maidan’ activists, who attacked the procession in the morning; but the situation got out of control. The report also states that the fire began in five places at once: the largest fire broke out in the lobby, apparently caused by Molotov cocktails thrown from the street. The four other outbreaks were caused by the actions of those who were in the building. The experts found no signs of deliberate arson. People died or suffered ‘from burns to the body or burns of the respiratory tract, carbon monoxide poisoning and poisoning by the products of combustion, or from falling from a height (…) inflicted injuries which could indicate that the victims were beaten or tortured were not detected’ (International Advisory Panel, 2015, p. 31).

Data

The data for this article was collected in the fall of 2015. Given the necessity of limiting the large amount of information, and our insufficient knowledge of Ukrainian social media, it was decided to study only the posts and comments written in Russian by ‘pro-Russian’ users of Livejournal.com and Facebook.com. (Livejournal is one of the oldest social networking services, available to Russian-speaking users since 1999; it is a network of blogs and diaries.)

We planned to collect the data for this study with the help of ‘Yandex-search’, a search tool for Russian-language blogs. Its value for a qualitative researcher was that it allowed a representative sample of records found by keywords to be drawn for specific social networks over a given time interval. However, in the fall of 2015, Russia adopted a law on the ‘right to be forgotten’ on the Internet, part of the wider reduction of Internet freedoms on the Runet (Lonkila et al., 2020). Immediately afterwards, Yandex disabled this tool’s retrospective search for records by publication date.

Before the Yandex search tool was disabled, we collected a sample of the most popular entries for May 2–6, 2014 in Livejournal. Afterwards, however, the only tool at our disposal was Google search, which does not return systematic results, so that a researcher can only guess how representative a search is. We therefore used the search results as a rough model, but had to abandon the prospect of using more accurate quantitative indicators of the scale of users’ discussions on Livejournal and of establishing clearer connections between materials.

Thus, for a qualitative study, we selected records from publicly available sources on the Internet. These are (1) 20 popular posts written by users of Livejournal.com and Facebook.com between May 2 and May 6, 2014. These are texts in Russian, written from a pro-Russian position, discussing the fire in Odessa. (2) Discussions and user comments on these posts (the most popular post on Livejournal attracted 859 comments; the most popular on Facebook attracted 482 comments and 1,500 reposts). By users, we mean everyone whose records are published online: that is, people expressing their point of view, activists, propagandists, and trolls alike. This broad understanding is, in our opinion, important, since on social media the discourse of an event is made up of the whole body of records, regardless of who their authors are.

The study of discussions about events such as the Trade Union House fire involves sensitive issues; therefore, in this study steps were taken to ensure the anonymity of the users whose opinions were studied. All texts were divided into two groups. The first consists of the posts of famous bloggers, media activists and propagandists. We consider these texts as publications whose authors seek to openly disseminate their views. The names of such authors have not been anonymized; at the same time, this study does not provide links to their specific posts, since the ensuing discussions are joined by private individuals participating in them as users of social networks. The second group of texts consists of the posts and comments of ordinary users or trolls pretending to be such. The personal information of such users was not collected, and all quoted materials were anonymized. The quotations in this text do not enable identification of any user. In addition to the anonymization of the data, no links are provided to the discussions on Facebook and Livejournal, in order to further protect the privacy of users.

Methods

The phenomena discussed in this study – the expression of emotions in social media, the use of emotions in the context of information warfare, confirmation bias and selective reference to facts in echo chambers – are studied primarily using quantitative methods, since they concern the mass behavior of users of social networks (see, for example, the detailed review of the literature on emotions in Hyvärinen & Beck, 2018). This article, however, presents a qualitative study of a single case. Qualitative research allows us to supplement what we know on the basis of quantitative research, since it demonstrates how exactly a given large-scale process unfolds at the micro-level. In our case, we use media frame analysis (D’Angelo & Kuypers, 2010) and discourse analysis of media text (Fairclough, 1995; Van Dijk, 2008).

The methodology of discourse analysis allows us to examine the production of meanings in digital communication. The basic action that a social media user undertakes in an information war, in the battle to construct their version of reality, is the establishment of ‘fact’ through language. As a discursive phenomenon, a ‘fact’ presupposes a person’s statement about an event that has occurred. The naming of an event, its inclusion in a certain narrative, and the definition of its boundaries, duration and importance are all elements that construct a fact. Therefore, on social media, battles unfold around the naming of the event and the explanations provided for it. In speaking about the emotions of social media users, we also mean the possibility of studying them using discourse analysis: mediatized emotions are discursive, expressed through linguistic means (Wahl-Jorgensen, 2019); they are socially and culturally coded and serve to build the narratives and emotional communities of users (Slaby & Röttger-Rössler, 2018).

At the same time, as Fairclough shows, the discursive dimension of media text is directly related to discursive and social practices; and we believe that studying the co-participation of social network users in the collective shaping of the discourse about the fire in Odessa, as well as about other preceding and subsequent events, allows us to better understand the degree of polarization and intransigence of pro-Russian and pro-Ukrainian users.

Frame analysis, in turn, is an important part of media research; and in relation to the study of the Ukrainian crisis and subsequent events, frame analysis has been applied to social media (see, for example, the analysis of visual frames in Makhortykh & Sydorova, 2017). A frame is understood as a way of organizing social experience and providing it with meaning; framing is the selection and highlighting of particular aspects of reality in order to promote a particular interpretation and understanding of a problem, its assessment and the subsequent attitude towards it (D’Angelo & Kuypers, 2010; Entman, 1993). Media frames can be defined as a way of presenting reality by highlighting themes and stories, and by emphasizing certain ideas through which events are given meaning. The question of the compatibility of discourse and frame analysis is discussed by Lindekilde (2014): the emphasis on intentionality and strategic rationality in the framing process distinguishes frame analysis from discourse analysis. At the same time, what Kuypers calls a ‘rhetorical version of framing analysis’ (Kuypers, 2010, p. 298) brings the two approaches closer together. In particular, a ‘rhetorical perspective’ on frame analysis can be applied to the study of news media in order to explore bias in news coverage and the central organizing ideas in a narrative which constitute a frame. In this case a researcher focuses on the language choices which give salience to a certain issue or event, analysing the keywords, metaphors and rhetorical devices which render an event non-neutral (Kuypers, 2010).

This article examines social media in conditions of information war, when the media come to be seen by parties to the conflict as both battlefield and weapon, and social media are used to harm the enemy by manipulating information (Pomerantsev & Lucas, 2016). Users’ anxiety, fear, horror, anger and readiness for mobilization are exploited by combatants on the media front, whether acting openly (such as journalists, propagandists, and media activists) or as hidden ‘soldiers’ (such as trolls, bots, and users knowingly or unknowingly spreading false information) (Pomerantsev & Lucas, 2016; Soldatov & Borogan, 2015).

Social media discussions of an unfolding extreme event tend to spread incomplete information, contradictory evidence and rumours, which gives rise to a state of uncertainty. The soldiers of the information war use this uncertainty as a resource for influencing the public’s feelings and perceptions. Their task is either to promote their own version of reality in order to compromise the enemy’s version, or to convince a sufficient proportion of the public that no one can understand what has happened, and thus to normalise the idea that the truth is impossible to discover. Ordinary users are drawn into such confrontations and, often without realizing it, themselves begin to act as participants in the information war (Zvereva, 2020).

Users of social networks often have to decide for themselves what they are willing to consider true and what false. This choice can be difficult, as users are under intense pressure. The official media, as participants in the information war, can turn out to be a source of disinformation. Users also face pressure from their own friends in their social network echo chambers (Quattrociocchi et al., 2016).

At the same time, a ‘fact’ itself may turn out to be undesirable and inconvenient. The emotional truth that users experience does not necessarily lead them to the truth of facts. Fake news stories, which may be inserted into reportage, are constructed to have an emotional impact on readers, and therefore even those who are inclined to suspect manipulation and find fakes in the news cannot always distinguish between true and false reports (Bakir & McStay, 2018). At the audience’s disposal is an excess of conflicting evidence, all of which is designed to look authentic; therefore, establishing what actually happened becomes a difficult task for the media user.

Results

Stages of coping with crisis

According to the social stage model of coping (Pennebaker & Harber, 1993), three stages can be distinguished in how people cope with a crisis event: an emergency phase, in which people discuss the event openly or ruminate and experience anxiety and depression; an inhibition phase, when discussions calm down and people still feel the need to reflect on the event, but become tired of the emotional load; and the final phase, adaptation, when thoughts and discussions about an event become less frequent and people gradually return to normal life.

Our case involves the first, emergency phase, as experienced in social media. In examining how social media users reacted to information about the fire in Odessa, we can find parallels with how social media communication develops in the event of terrorist attacks and other non-natural disasters. Researchers of crisis communication in social networks note that there the cycle of reactions typical of experiencing a tragedy is noticeably accelerated (Eismann et al., 2016).

In their analysis of social media users’ reactions to the Boston terrorist attack, Hyvärinen and Beck (2019) described three universal phases of post-attack conversation – shock, making sense, and the aftermath – plus one optional phase (a subsequent event and its closure). They show that how online conversations develop after a terrorist attack, and how users go through these stages, depends on users’ geographical proximity to the event, with negative emotions noticeably increasing with distance. Our observations concerned users, most of whom reacted to the event from elsewhere than Odessa; and their reactions largely coincided with those described by Hyvärinen and Beck.

The first stage, shock, is close to what we observed. When the tragic event occurred, eyewitnesses, participants, and victims made the first postings of texts and images. Such messages were fraught with emotion; most often they resulted in waves of reposts from other users with expressions of compassion and horror.

The array of texts by users who shared this information with their own comments grew quickly. It included the first accounts by journalists, reposts, speculations and interpretations, and rumours. Comparing this with the observations of Hyvärinen and Beck, we can relate it to the making sense stage. Users’ feeling of horror was either transmuted into anger (as a way to resolve their uncertainty and cope with helplessness), or it was replaced by attempts to explain what had happened. Users provided multiple versions of their answers, from simple explanations to conspiracy theories.

Soon after the tragedy, a significant volume of evidence had already accumulated on the Internet. This is when users quarrelled with each other: a new wave of emotions surged up – against those who were considered ‘enemies’, collaborators, ideological opponents, as well as ‘disseminators of rumours’, ‘alarmists’, ‘trolls’, etc. At this point, those exploiting the situation for their own purposes became active: fraudsters, pranksters, and political trolls. Real information circulated along with untrustworthy or intentionally falsified material.

Finally, at the aftermath stage, the event is given an official assessment by politicians and journalists. According to Hyvärinen and Beck, clear explanations of what caused the tragedy and the apprehension of the perpetrators give people a sense of closure, while the lack of such answers prolongs the sense of anxiety and uncertainty. In our case the lack of clarity and the deliberate falsifications in the media gave rise to anger. Users trying to refute rumours or learn ‘the whole truth’ initiated amateur investigations. People in social media quarrelled anew over the official statements. Although discussions in social media would continue, sometimes for a long time, they were now cooling down.

So, how do users define what a ‘fact’ is, and how do they make their choices about what to believe?

Framing

A person faced with completely new information tries to comprehend it by naming it. Such a name already categorizes the event and explains it (Hall, 1997). In social networks, when users discuss ‘facts’, they are sharing information and emotions simultaneously, so that one can hardly separate them. Users’ accounts of what has happened come loaded with their fear, anger, compassion, and convictions.

In the case of the Trade Union House fire, users of social networks of all political opinions were well acquainted with the context and perceived the news from standpoints that they had already adopted. From the very first publications made on May 2, the new information was included in pre-existing narratives and placed in ready-formed frames. The frame is constructed from elements well-known to the audience: these are keywords, attitudes, roles, actions and plots, all equipped with ready-made interpretations.

Certain thesauri are contained within the frame from the outset: the choice of an explanatory structure determines the choice of vocabulary with which a fact is constructed. Users, whether pro-Russian or pro-Ukrainian, were faced with an excess of hate speech already inscribed in similar frames during the information war between the two states: for example, from the pro-Russian side: fascists, ‘ukry’, ‘maydauny’; and from the pro-Ukrainian side: ‘colorado beetles’, ‘quilted jackets’, ‘Russian terrorists’ (Fialkova & Yelenevskaya, 2015; Radchenko & Arkhipova, 2018). The emotions to be shared are also inscribed in the frame: disgust, horror, anger, triumph all forming parts of different explanatory structures that start working as soon as the event is described.

Among the pro-Russian users’ posts and comments, the frames which could be identified were ‘the civil war in Ukraine’, ‘the Odessan Khatyn’, and ‘the tragic accident’. The frame ‘the civil war in Ukraine’ in relation to the Trade Union House fire was supported by one of the leading pro-Kremlin propagandist bloggers, Boris Rozhin, a user with the nickname colonelcassad. The first news from Odessa was presented as follows:

colonelcassad, 2.05.2014: War in Ukraine. Evening – Online!

We continue to broadcast the development of the civil war in Ukraine online – the battle for Slavyansk, the riots in Odessa and the situation in the South-East.

This framing did not receive much support in users’ discussions of the Odessa fire. Perhaps its scope was too broad; this, together with its inconsistency with the events in Odessa and the fact that it did not provide an opportunity to adequately express the shock, pain, frustration and horror of the event itself, may explain why users tended not to employ ‘civil war’ as an explanatory construction.

Instead, an explanatory construction was taken from the context of another war. The dominant frame was ‘the Odessan Khatyn’. It echoes a tragic episode from WWII: on 24 March 1943, in the Belarusian village of Khatyn, 149 people were shot or burnt alive by soldiers of the Nazi Schutzmannschaft Battalion 118 which included Ukrainian nationalists. In Soviet culture Khatyn became a symbol of the crimes committed by the Nazis and their collaborators. For example:

User 1. The Bestial Crime of the Right Sector. The Odessa Khatyn – organized by Neofascists and Banderites with the connivance of the Odessa police.

Following the advent of the Ukrainian crisis, official Russian media and the Russian authorities’ unofficial supporters on the Internet extensively used strongly pejorative vocabulary equating Ukrainian politicians, supporters of the revolution and soldiers with ‘fascists’ and ‘Nazis’. Thus, the very language of politics was enlisted as a tool of warfare (Gaufman, 2015, 2017; Lyebyedyev & Makhortykh, 2018; Makhortykh & Sydorova, 2017; Ryazanova-Clarke, 2015). Russian speakers know the Khatyn story from their childhood. It presents an opposition between the worst human evil (‘fascists’) and the innocent people burnt alive. All participants in the Odessa fire, and the relations between them, were thus fitted into this frame. This frame is indeed fraught with emotions, but they have already been worked out and ritualised. ‘Khatyn’ must inevitably awaken the wrath of the wronged people, on whose side is absolute moral rightness, and whose enemies – ‘the fascists’ – represent the same absolute evil.

The narrative on the basis of this frame was supported by the state-owned Russian newspapers, websites and TV channels. Horrific fake details were generously added. For example, a publicist, Egor Kholmogorov, wrote the following in one of the most popular Russian newspapers, Komsomolskaya Pravda, and this quotation was reposted by social media users:

The tragedy of Khatyn was repeated in Odessa city centre. ‘The Right Sector’ and ultra-right nationalists burnt people alive, asphyxiated them with smoke, and forced bloodstained victims who had jumped out of windows to crawl along, then beat them to death with clubs. One of the activists had his leg cut off with a spade, live on video. More than forty people were martyred by the fanatics of ‘United Ukraine’ on Russian bones and Russian deaths (Kholmogorov, 2014).

Researchers note that ‘old’ media (such as newspapers), when reporting an event, sometimes adjust to the style of social media, adopting a more emotional style and rhetoric in response to audience demand for more emotional news (Wahl-Jorgensen, 2019). From this point of view, Kholmogorov’s propagandist statement may be said to mimic the popular style of personalized news from readers’ social networks. This frame was widely sustained by pro-Russian social media users, who employed its language when talking about the tragedy which shocked them. In this frame, all types of actions and relationships between the actors become part of the ready-prepared story. ‘Enemies’ are denied all humanity, and their role allows them to commit any atrocities.

It is also worth noting that texts written either by the ‘ultras’ of the pro-Ukrainian party or by provocateurs and trolls were also circulating on Livejournal and Facebook. In these texts the dominant frame was likewise loaded with hate. It said: ‘Russian terrorists burnt their own people for the sake of provocation’. In many pro-Russian posts these kinds of texts were quoted to prove the inhumanity of those supporters of Ukraine’s new authorities who could celebrate the horrible deaths of the fire’s victims.

A third frame, employed relatively rarely among pro-Russian users, was that of ‘the tragic accident’, according to which there had been no deliberate act of arson at the Trade Union House. Users who did employ this frame wrote that, while both sides shared responsibility for causing the fire, neither side had intended it to result in deaths; for this they were criticised by other users for their indifference, or accused of bias. Notably, their impartial position, which eschews the violent expression of emotions and avoids accusing either of the parties, was read by some users as evidencing a lack of sympathy, that is, as flawed or even amoral. As one person responded to a post about Odessa whose author had tried to be objective: ‘What kind of a soulless robot has written this??’

Accounts of eyewitnesses and gossip

Users often turn to social media for news when traditional media lack the time to cover an event immediately (Hyvärinen & Beck, 2019). At the same time, users clearly express a preference for emotional news, and for specific types of emotion depending on the event, in accordance with their political views. Those who look to the Internet for information have to build their version of the event from fragments. Among the various materials, the testimonies of witnesses are considered very important; they are also actively falsified. After the fire, many emotional eyewitness accounts appeared on the web, and it was hard to tell the genuine stories, written by survivors in a state of shock, from the false ones. It can also be assumed that there is a demand among some social media users for ‘affective witnessing’, that is, for accounts of an event in which it is the affect itself that is witnessed (Richardson & Schankweiler, 2019); the presence of such components in a text enhances its ambivalent appeal and the sense of its authenticity.

For example, the following text, purporting to be by ‘one of the participants’ and published on the same day as the fire, was reposted by different users 87 times. It presented a description of what happened on behalf of a woman who had been inside the House of Trade Unions:

User 2: One of the participants writes: ‘I was at Kulikovo Pole when the ultras and hundreds of ‘Right Sector’ people came to our camp! I personally went to all channels on social networks, called friends! I said, come to defend your city! Nobody! I repeat, nobody responded! Ours were only 200–250 people, of which about 100 women, 50 old people (…) We saw an angry crowd (…) They began to burn us alive! Those who were in panic began to jump out of the windows and, when they were falling to the ground, RS and ultras ran up to them and finished them off. The police did nothing, they just watched how people were burnt alive. (…) It was very scary. I will never forget the faces of the victims’.

Combatants in the media war intercept eyewitness storytelling. From the discursive point of view, they often present a story within a story: framing allows a broad narrative to be built up and supported by micro-stories. In the stories mediated by soldiers of the information war, the sensationally terrible is brought to the fore, following the model of scandal journalism. Readers can easily become addicted to the terrible, demanding that each new story contain details at least as monstrous as the last, and dismissing as unreliable any testimony that lacks them.

The position of ‘terrible truth told by an eyewitness’ offers an opportunity to change the narrative of the event completely. It is worth paying attention to the syntax in the quote below: capital letters without punctuation marks visually signal the speaker’s frenzied state. The horror thus expressed is intended to lend credibility to what is an obviously falsified eyewitness testimony:

User 3.: Another eyewitness report: ‘Death squads (…) trapped about 300 people in the building and threw Molotov cocktails at them which contained a new, poisonous concoction. (…) Those who managed to get out through the windows, and there were pregnant women among them, were kicked and beaten to death using chains and clubs. (…) I HAVE ACCURATE INFORMATION FROM A MEDICAL WORKER IN THE CITY THAT 189 PEOPLE WERE BURNT (…) THE BODIES ARE BEING HIDDEN AND THE BANDERITES ARE TAKING THE BODIES OUT OF THE CITY MORGUES ON THE SLY TO BURY THEM IN PITS’.

Some users did express outrage at entries they read as fake news and tried to refute such stories. Despite being evidently falsified, however, such ‘eyewitness stories’ provoke a strong response in users, who begin to doubt whether they know the whole truth.

When checking information, it is often impossible to trace the source of a statement. Information circulates on the model of a spreading rumour: users copy it and publish links to other accounts. This model of dissemination is indeed highly suitable for anyone trying to launch a big lie in the form of a hidden truth. However, users may also disseminate false information from sincere motives: they truly believe it and seek to ‘open the eyes of the blind’. For example:

User 4. Just now on the TV channel they said that they had found another 160 bodies in the basement of the Trade Unions Building. (…) Killed by firearms and hacked to death with axes!! Among them were children who had run in with their mothers when they were passing!!!

PS. About 2000 bastards have surrounded the building and they are going to bring out the bodies in pieces in plastic bags (…) they want to hide this horror!

WAKE UP!!!! SHAAAREE!!!!

This text had 864 reposts. In the comments, some users accused the author of the post of lying; others were thankful for the truth. The effect of big lies on many users is to persuade them that facts and figures of this scale cannot be false. Rumours of this kind open the way to conspiracy theories. They nourish the feelings produced by the descriptions of monstrosities and the frankness of photographs of the bodies of the deceased.

Visual evidence

For many social media users drawn into a mediatized conflict, an important way to find out the truth about an event is to find photos and videos from the scene and draw conclusions from studying them (Makhortykh & Sydorova, 2017). However, visual sources on their own say little without verbal interpretation, and such interpretations can completely change the meaning of a piece of visual evidence. Usually, captions under photos and comments under videos posted online explain how these materials are meant to be interpreted, and quite often users accept whatever the interpreter says they see in the photo or video. Thus visual evidence, when provided with ‘unobtrusive’ verbal interpretations, can be turned into a source of ‘truth’.

In social networks, numerous reports appeared of the death of a pregnant woman in the Trade Union House. These accounts were illustrated with the shocking photo of a dead woman in a sweater and a skirt; she was lying on her back, on an office desk, with her feet touching the floor and her large naked belly positioned in the center of the frame.

User 5. A pregnant woman. Dead. She had nothing to do with the fighters of the Odessa squad or to their ‘opponents’. She simply worked in the building. She was probably about to go off on maternity leave. But she didn’t … She just came to work on that day. And never went home again. And her child will never be born.

Stories of this kind, accompanying the photograph, quickly spread through the networks. A video filmed by someone from the crowd near the Trade Union House appeared online. On this video a woman’s cries can be heard, seemingly from one of the building’s windows. The cries subside. Next, a flag of Ukraine appears from one window.

Some users decided that this video explained the photo. The way in which this interpretation was made shows how vulnerable an attempt to explain an unclear situation can be: media users ‘drowning’ in uncertainty had just three pieces of evidence, namely a picture of a dead woman, a recorded cry, and a flag hanging out of the building. Operating with available but incomplete data results in users establishing illusory, non-existent correlations between the fragments of information (Stroessner & Plaks, 2001). According to a popular explanation, the pregnant woman who screamed was strangled by one of the Ukrainian nationalists, who hung the flag as a sign of his triumph. It is this visual pair, the photo and the video, that acquired the status of proven fact in the social networks.

The logic by which this story unfolded presupposed an escalation of the terrible: since the photo showed a woman, she must have been subjected to sexual violence before her death. Users who needed to strengthen the ideological message wove anti-Western attitudes into the story:

User 6. BESTIAL RAPE AND MURDER OF PREGNANT WOMAN IN TRADE UNION HOUSE

User 7. The death squads of Obama and Merkel’s Junta have killed and raped a pregnant employee of the Trade Union House in Odessa.

Journalists investigating this story tried to find the deceased pregnant woman in Odessa. From their investigation it became obvious that the narrative had been based on deliberately falsified evidence (Levit, 2014). According to the chief medical examiner who conducted the examination of the body of that woman:

This is a (…) vile fake. The dead woman is Irina, 54 years old. We think that after the woman died in one of the rooms of the Trade Union House, somebody specially bent her body back in such a way that her protruding belly – an age-related phenomenon – was visible (…) As for the cause of Irina’s death, it was carbon-monoxide poisoning (Levit, 2014).

However, due to the multiplicity of blogs and webpages, their networked rather than hierarchical structure, and the absence of a ‘responsible’ resource, the exposure of a report as fake does not lead to its disappearance. The fake and its refutation seldom attract the attention of users at the same time, and the two can follow very different trajectories around the network. The successful fake thus continues to live a life of its own.

Generation of affect

Users’ quests for facts often exhibit various shared states of anxiety, frustration and anger. As remarked above, users sometimes disapprove of ‘emotionless’ texts. Statements about ‘unthinkable’ atrocities generate a collective affect which users try to express by writing about their own rage and hatred. In the example below it is impossible to determine whether ordinary users are talking only to each other or whether trolls are involved in the conversation; a characteristic feature, nevertheless, is how ideological assessment is added to the unspeakable:

User 8: There are 126 bodies in the basement of the Trade Union House. They were killed there, they didn’t burn [in a fire] (…) I don’t know, I DON’T KNOW, what to do in this situation

User 9: All normal people feel exactly like this. (…) And the West supports it!

User 10: The West and the fuckers in Kiev (…) let these beasts burn in hell (…) I’m just shaking with rage at the Banderites

This shared hatred is exploited by trolls. For example, User 11, whose profile and online activity indicate that this user is very likely a troll, posted several repetitive comments with exclamations, appealing to divine wrath, and not forgetting to place the proper ideological accents:

User 11: Lord, save us and help us! Protect us from the fascists! Lord, punish the monsters of the European Union and the USA. Let all those from the Kiev government be damned. People of the world! Arise! Tomorrow anybody who speaks out against the USA will be destroyed!! Russia, why do you remain silent?

As a result, what increases in social networks is not so much knowledge about the tragic event as the amount of hatred, and this hatred becomes the message that is broadcast with most success.

Building arguments

The more the informational chaos and uncertainty grow, the more users seek clarity in constructing their explanations of what (they think) happened. For this, users search and select materials, interpret them, and construct elaborate chains of seemingly logical proof. They dwell on the most horrible visual materials (such as close-up photographs of charred corpses) and try to make sense of them. Despite resorting to the same procedures on the same limited set of data, users tend to draw different conclusions that support their respective initial positions.

User 12: I think it was both sides that set the building on fire (…) I do not see a single fact indicating that anyone ‘purposefully set fire to the building’.

User 13: if we assume that there were (…) Kremlin special forces, with their red ribbons, then everything miraculously turns into a beautiful and LOGICAL picture. Riots in Odessa are the last thing Kiev needs, but Moscow … 

User 14: it can be confidently concluded that the punitive operation in Odessa was planned by the Kiev Junta under the leadership of the CIA (…) The purpose of the planned provocation is to force Russia to send troops, in order to accuse [Russia] of inciting war.

These chains of argument, though employed by individual users to make sense of the events they are investigating, are also immediately captured by trolls and propagandists. The following example illustrates the use of ‘assumptions’ in a text written by a ‘media combatant’ (italics mine):

User 15: Apparently, the version that says those who died in Odessa were not ‘asphyxiated’ and were not ‘burnt up in a fire’ is now being confirmed as true. In actual fact, at the Trade Union House dozens of people (they say over a hundred) were slaughtered by the Right Sector activists, who then poured home-made napalm from their Molotov cocktails on the dead bodies and set them alight (…) now a post has appeared with a great many photos. a) many people shot (with a bullet to the head) b) a woman who has been strangled with an electrical (telephone?) wire, c) a couple whose necks are possibly broken, d) a huge, huge number of bodies completely blackened and charred with untouched parquet floors around them. (…). This is the most important and convincing evidence.

All the users know about the information war, and willingly talk about vigilance. Participants in online discussions suspect each other of provocation and accuse each other of falsifying the facts. We are faced with an amplification in social media of confirmation bias, the tendency to prioritize information and evidence that confirms one’s values and beliefs (Knobloch-Westerwick et al., 2020; Workman, 2018). Social media users often choose to fall back on their pre-existing beliefs (e.g. about the rightness of their party and its cause, or on stereotypes of ‘good’ or ‘bad’ social and ethnic groups involved in the incident). The following example shows how new information that does not fit the user’s chosen explanatory model causes confusion and is ultimately rejected. If official investigation data do not confirm their beliefs, users claim that the investigators are biased.

User 16: So it turns out that everybody in the Trade Union House was burnt in the fire. But what about the pregnant woman? She was killed.

User 17: Now it seems they are saying she didn’t exist, and I don’t know who to believe.

User 18: All I can advise is whom you SHOULDN’T believe: the OFFICIAL conclusions about death and the information on the number of dead. (…) The truth will come out a lot later … 

What is left to many users is the memory of their own traumatic experience of perusing apparent evidence of the event, and of the genuine emotions they experienced in the process. These experiences were in many ways conditioned by how the users themselves framed the tragic event and wrote about it. It often looks as if the truth of users’ emotional experiences influences which explanations of the incident they believe.

Discussion

User discussions about the Trade Union House fire in Odessa show a recurrent pattern of behaviour in a situation where users struggle to comprehend a tragic event in the context of an ongoing information war. We suppose this pattern may extend to other similar situations.

In discussing tragic events, social media users were combating uncertainty. They needed clarity, but professional journalism did not help; on the contrary, users faced a situation in which the state media were participants in the ongoing information war, with some officially acclaimed media channels supporting the spread of false information. The status of ‘verified fact’ was important for users trying to make sense of this chaos of information, but facts could not be checked with the means at social media users’ disposal. Hence a battle unfolded for the articulation of the ‘true fact’ and its presentation to the audience.

At the first reports of a tragedy, users experience strong emotions. When they begin to express these emotions verbally in their texts, they look for a suitable language, and that language is already packed into ready-made frames in other people’s texts swirling around them. These frames contain not only words for expressing emotions and naming events, but also explanations, stories, and interpretations. In our case, the frame most often drawn upon by pro-Russian oriented users turned out to be ‘Khatyn in Odessa’, in which the collective memory of a tragic atrocity of WWII was revived to explain completely different contemporary events. Users’ emotions were directed into the pre-etched channels of collective horror, compassion, powerlessness turned into rage against the perpetrators, and demands for retribution. According to the logic of this frame, the fire in the Trade Union House was described as a ‘crime’ (‘brutal’, ‘monstrous’, ‘unheard of’), which in turn had instigators and perpetrators (‘fascists’, ‘inhumans’, ‘Nazis’: the words of the frame with which users labelled people holding pro-Ukrainian positions), as well as intention and purposefulness. The expectation that ‘fascists’ are capable of the most terrible ‘atrocities’ allowed some users to believe rumours about a brutal massacre of victims in the House of Trade Unions. Other users detected in these horrific stories a lie aimed at inciting even greater enmity between ordinary users of social networks, divided according to their sympathies with the Russian or Ukrainian side.

The positions adopted by users are often based on beliefs they held before. New information is processed in such a way that it can be incorporated into the old stories a user has previously chosen and approved. Contradictory data is either rejected or interpreted so as to confirm the stories’ validity. Users trust their own experiences and the authenticity of the emotions they have felt, and this authenticity reinforces the credibility of the originally selected frame and its narrative.

The more chaos and uncertainty there is, the harder users strive to build a clear and consistent explanation of what happened. One of the most striking features of user discussions immediately after the tragedy was that many came to specific conclusions very quickly and began to write as if they already understood everything. We noticed in our materials two important ways in which users construct their arguments. One is to search for and select materials that confirm their expectations, thereby building up a substantial body of evidence. The other is to search for photos and videos from the scene. Users are accustomed to the deep penetration of digital media into everyday life, and to the expectation that everything is always documented and posted online. In order to reach a conclusion, users study visual evidence of the most harrowing kind in detail, getting used to it and trying to include it in their story, trusting their own perception and their ability to discern the truth. However, the difficulty of working with visual materials, whose interpretation depends on context and verbal commentary, is often not taken into account: users believe that they ‘see’ in photographs and videos what the interpreter reports, as if it had been their own experience.

Users of social networks are vulnerable: they often cannot easily extract from the available frames the words that would accurately convey their thoughts and emotions while remaining free of explanatory constructions that use hate speech or lend themselves to manipulation by participants in an information war. In this situation, the soldiers of media battles have special opportunities not merely to disseminate information altered by their own interpretations, but to discursively create an alternative ‘truthful’ event that replaces the one that really happened.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes on contributor

Vera Zvereva is senior lecturer in Russian language and culture at the Department of Language and Communication Studies, University of Jyväskylä, Finland. She is a member of the editorial board of the journal Digital Icons: Studies in Russian, Eurasian and Central European New Media. Her research interests include digital Russian studies, communication on social media, digital memory studies, and political language, populism and propaganda on the Internet. https://www.jyu.fi/hytk/fi/laitokset/kivi/henkilosto/henkilosto/zvereva-vera

References

Bakir, V., & McStay, A. (2018). Fake news and the economy of emotions. Digital Journalism, 6(2), 154–175.

Beckett, C., & Deuze, M. (2016). On the role of emotion in the future of journalism. Social Media + Society, 2(3), 1–6. https://doi.org/10.1177/2056305116662395

Coleman, R. (2018). Social media and the materialisation of the Affective present. In T. D. Sampson, S. Maddison, & D. Ellis (Eds.), Affect and Social Media: emotion, mediation, anxiety and contagion (pp. 67–75). Rowman & Littlefield International.

D’Angelo, P., & Kuypers, J. A. (eds.). (2010). Doing News Framing Analysis. Empirical and Theoretical perspectives. Routledge.

Eismann, K., Posegga, O., & Fischbach, K. (2016, June 12–15). Collective behaviour, social media and disasters: A systematic literature review. Proceedings of the 24th European Conference on Information Systems. https://aisel.aisnet.org/ecis2016_rp/104

Ekström, M., Patrona, M., & Thornborrow, J. (2018). Right-wing populism and the dynamics of style: A discourse-analytic perspective on mediated political performances. Palgrave Communications, 4(1), 83.

Engesser, S., Ernst, N., Esser, F., & Büchel, F. (2017). Populism and social media: How politicians spread a fragmented ideology. Information, Communication & Society, 20(8), 1109–1126.

Entman, R. M. (1993). Framing: Toward clarification of a fractured paradigm. Journal of Communication, 43(4), 51–58.

Faircough, N. (1995). Media Discourse. Hodder Arnold Publication.

Fialkova, L., & Yelenevskaya, M. (2015). The crisis in Ukraine and the split of identity in the Russian-speaking world. Folklorica, 19, 101–131.

Gaufman, E. (2015). World War II 2.0: Digital memory of fascism in Russia in the aftermath of Euromaidan in Ukraine. Journal of Regional Security, 10(1), 17–36.

Gaufman, E. (2017). Security threats and public perception. Digital Russia and the Ukraine crisis. Palgrave Macmillan.

Gruppa 2 Maja. (2015, 15 May). Hronologija sobytij v Odesse 2 Maja 2014 goda. http://2mayodessa.org/hronologiya-sobytij-v-odesse-2-maya-2014-goda-2/

Guillory, J., Spiegel, J., Drislane, M., Weiss, B., Donner, W., & Hancock, J. (2011). Upset now? Emotion contagion in distributed groups. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI May 2011), 745–748. https://doi.org/10.1145/1978942.1979049

Hall, S. (ed.). (1997). Representation: Cultural representations and signifying practices. Sage.

Hyvärinen, H., & Beck, R. (2018). Emotions trump facts: The role of emotions in social media: A literature review. Proceedings of the 51st Annual Hawaii International Conference on System Sciences, 1797–1806.

Hyvärinen, H., & Beck, R. (2019, December 15–18). How emotions unfold in online discussions after a terror attack. Proceedings of the 40th International Conference on Information Systems, ICIS 2019, Munich, Germany. https://aisel.aisnet.org/icis2019/crowds_social/crowds_social/14/

International Advisory Panel. (2015, 4 November). Report of the International Advisory Panel on its Review of the Investigations into the Events in Odesa on 2 May 2014. https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=090000168048851b

Karatzogianni, A., & Kuntsman, A. (eds.). (2012). Digital cultures and the Politics of emotion. Feelings, affect and technological change. Palgrave Macmillan.

Kholmogorov, E. (2014, 3 May). Odesskaja Khatyn’. Komsomol’skaya Pravda. https://www.kp.ru/daily/26226/3109739/

Knobloch-Westerwick, S., Mothes, C., & Polavin, N. (2020). Confirmation bias, ingroup bias, and negativity bias in selective exposure to political information. Communication Research, 47(1), 104–124.

Kuypers, J. A. (2010). Framing Analysis from a rhetorical perspective. In P. D’Angelo, & J. A. Kuypers (Eds.), Doing News Framing Analysis. Empirical and Theoretical perspectives (pp. 286–311). Routledge.

Levit, A. (2014, 26 June). V odesskom Dome profsojuzov ne bylo nikakoj ubitoj beremennoj zhenshhiny — ekspert. Fakty. https://ukraine.fakty.ua/183835-v-odesskom-dome-profsoyuzov-ne-bylo-nikakoj-ubitoj-beremennoj-zhencshiny

Lindekilde, L. (2014). Discourse and frame analysis: In-depth analysis of qualitative data in social movement research. In D. della Porta (Ed.), Methodological Practices in social movement research (pp. 195–227). Oxford University Press.

Lonkila, M., Shpakovskaya, L., & Torchinsky, P. (2020). The occupation of runet? The tightening state regulation of the Russian-language section of the internet. In M. Wijermars, & K. M. Lehtisaari (Eds.), Freedom of Expression in Russia’s New mediasphere (pp. 17–38). Routledge.

Lyebyedyev, Y., & Makhortykh, M. (2018). #Euromaidan: Quantitative analysis of multilingual framing 2013–2014 Ukrainian protests on Twitter. IEEE Second International Conference on Data Stream Mining & Processing (DSMP), 276–280.

Makhortykh, M., & Sydorova, M. (2017). Social media and visual framing of the conflict in eastern Ukraine. Media, War & Conflict, 10(3), 359–381.

Moffitt, B. (2016). The global rise of populism. Performances, political style and representation. Stanford University Press.

Office of the United Nations High Commissioner for Human Rights. (2019). Report on the human rights situation in Ukraine 16 February to 15 May 2019. https://www.ohchr.org/Documents/Countries/UA/ReportUkraine16Feb-15May2019_EN.pdf

Papacharissi, Z. (2015). Affective publics: Sentiment, technology, and politics. Oxford University Press.

Pennebaker, J. W., & Harber, K. D. (1993). A social stage model of collective coping: The Loma Prieta earthquake and the Persian Gulf War. Journal of Social Issues, 49(4), 125–145.

Pomerantsev, P., & Lucas, E. (2016). Winning the Information War: Techniques and Counter-Strategies in Russian Propaganda. Center for European Policy Analysis. https://li.com/wp-content/uploads/2016/08/winning-the-information-war-full-report-pdf.pdf

Quattrociocchi, W., Scala, A., & Sunstein, C. R. (2016). Echo chambers on Facebook. SSRN.

Radchenko, D., & Arkhipova, A. (2018). Ukrop i vatnik: Jazyk vrazhdy rossijsko-ukrainskogo konflikta kak napadenie i zashhita. Ab Imperio, 1(1), 191–219.

Richardson, M., & Schankweiler, K. (2019). Affective witnessing. In J. Slaby, & C. von Scheve (Eds.), Affective societies Key concepts (pp. 166–177). Routledge.

Ryazanova-Clarke, L. (2015). From commodification to weaponization: The Russian language as ‘pride’ and ‘profit’ in Russia’s transnational discourses. International Journal of Bilingual Education and Bilingualism20(4), 443–456.

Sampson, T. D., Maddison, S., & Ellis, D. (eds.). (2018). Affect and Social Media: emotion, mediation, anxiety and contagion. Rowman & Littlefield International.

Slaby, J., & Röttger-Rössler, B. (eds.). (2018). Affect in relation. Families, places, technologies. Routledge.

Soldatov, A., & Borogan, I. (2015). The Red Web: The struggle between Russia’s Digital dictators and the New online revolutionaries. PublicAffairs.

Stroessner, S. J., & Plaks, J. E. (2001). Illusory correlation and stereotype formation: Tracing the Arc of research over a quarter century. In G. B. Moskowitz (Ed.), Cognitive social psychology: The Princeton symposium on the legacy and future of social cognition (pp. 247–259). Lawrence Erlbaum Associates.

Van Dijk, T. (2008). Discourse and power. Contributions to critical discourse studies. Palgrave MacMillan.

Wahl-Jorgensen, K. (2018). The emotional architecture of social media. In Z. Papacharissi (Ed.), A networked self and platforms, stories, connections (pp. 77–93). Routledge.

Wahl-Jorgensen, K. (2019). Emotions, media and politics. Polity Press.

Workman, M. (2018). An empirical study of social media exchanges about a controversial topic: Confirmation bias and participant characteristics. The Journal of Social Media in Society, 7(1), 381–400. https://thejsms.org/index.php/TSMRI/article/view/355

Zvereva, V. (2020). State Propaganda and popular Culture in the Russian-speaking internet. In M. Wijermars, & K. M. Lehtisaari (Eds.), Freedom of Expression in Russia’s New mediasphere (pp. 225–247). Routledge.