Using InfraNodus, you can analyze the comments to YouTube videos and reveal not only the main topics inside, but also the social network of commentators in order to see which clusters of users are the most engaged.
This can be useful both for studying your own YouTube channel's audience and for revealing disinformation campaigns on social media. You can compare the discourse across different channels or videos to uncover irregularities that may point to external manipulation by propaganda bots or corporate and government actors.
Below we describe how it works using the example of a recent interview between a journalist from the French broadcaster BFM and the vice president of the Russian parliament (Duma), Pyotr Tolstoy, a well-known Putin supporter and propagandist. The interview got about 1 million views in its first 5 days online and is a great example of how a propaganda machine can make us believe that public opinion is more radical than it actually is (thereby causing further radicalization, polarization, or disillusionment).
We find this video interesting because Tolstoy says deliberately provocative things about France: threatening it with nuclear war, promising to kill "everyone" and every French soldier who ever enters Ukraine, bashing France's openly gay prime minister as a "pervert" (Tolstoy's own words from the interview), and promoting a one-sided narrative about the war in Ukraine. Yet most of the French-language comments on this video support Tolstoy while also complaining about the BFM journalists and the French president, Macron. We were curious to understand whether the comments were manipulated by bots or whether they were representative of the real BFM audience and, to some extent, of the general sentiment in France. Are those comments made by bots, or by disillusioned citizens so discontented with their life and their country that they're ready to cheer an intolerant and aggressive foreign government official?
1: Extract YouTube Comments and the Social Network Within
The first step is to extract the comments from a YouTube video along with the channel IDs of the commentators. InfraNodus has a YouTube comments import app that enables you to extract not only the top-level comments but also the replies, along with the commentators' channel IDs:
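If you want to see what this step does under the hood, here is a rough stdlib-only sketch using the official YouTube Data API v3 (`commentThreads.list`). The API key and video ID are placeholders, and the flattened field names (`channel`, `author`, `parent`, etc.) are our own naming, not InfraNodus's; the InfraNodus importer handles all of this for you.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def parse_page(page):
    """Flatten one commentThreads API page into simple comment dicts."""
    rows = []
    for thread in page.get("items", []):
        s = thread["snippet"]["topLevelComment"]["snippet"]
        rows.append({"channel": s.get("authorChannelId", {}).get("value"),
                     "author": s["authorDisplayName"],
                     "text": s["textDisplay"],
                     "published": s["publishedAt"],
                     "parent": None})
        # Replies are nested under the thread, with a parentId linking
        # them back to the top-level comment.
        for reply in thread.get("replies", {}).get("comments", []):
            r = reply["snippet"]
            rows.append({"channel": r.get("authorChannelId", {}).get("value"),
                         "author": r["authorDisplayName"],
                         "text": r["textDisplay"],
                         "published": r["publishedAt"],
                         "parent": r["parentId"]})
    return rows

def fetch_comments(video_id, api_key):
    """Page through all top-level comments and replies for a video."""
    base = "https://www.googleapis.com/youtube/v3/commentThreads"
    params = {"part": "snippet,replies", "videoId": video_id,
              "maxResults": "100", "key": api_key}
    rows = []
    while True:
        with urlopen(base + "?" + urlencode(params)) as resp:
            page = json.load(resp)
        rows += parse_page(page)
        token = page.get("nextPageToken")
        if not token:
            return rows
        params["pageToken"] = token
```

Note that `commentThreads.list` only returns a subset of replies inline; for threads with many replies, a full export would also page through the separate `comments.list` endpoint.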
It will then build a network graph where there will be two types of nodes:
- the concepts and terms used by commentators (as standard nodes, which are connected based on a 4-word co-occurrence window)
- the unique usernames of the commentators (as @mention nodes, which connect to every word in the statement)
- if a commentator responds to another comment, their usernames will be linked (e.g. @channel1 @channel2)
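Conceptually, the construction of this two-layer graph can be sketched as follows. This is a simplified stand-in (naive tokenization, our own field names, plain adjacency sets instead of a weighted graph); the real InfraNodus pipeline additionally lemmatizes words and removes stopwords.

```python
import re
from collections import defaultdict

def build_graph(comments):
    """Build a two-layer adjacency map from comment dicts with
    'author', 'text', and optionally 'parent_author' (assumed names)."""
    adj = defaultdict(set)

    def link(a, b):
        if a != b:
            adj[a].add(b)
            adj[b].add(a)

    for c in comments:
        words = re.findall(r"[\w']+", c["text"].lower())
        user = "@" + c["author"]
        for i, w in enumerate(words):
            link(user, w)                  # @mention node connects to every word
            for w2 in words[i + 1:i + 4]:  # 4-word co-occurrence window
                link(w, w2)
        if c.get("parent_author"):         # a reply links the two usernames
            link(user, "@" + c["parent_author"])
    return adj
```

A weighted graph library (e.g. networkx) would be the natural next step, but the adjacency map is enough to show how the concept layer and the social layer share one structure.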
2: Visualize the Interactions in YouTube Comments as a Social Network Graph
This two-layer mixed network lets you see a social overlay (the connections between the @mentions) on top of the connections between the ideas mentioned.
Click @mentions only in the top menu (activate it via the "advanced mode" tab if it's not visible) to see a social network of commentators: the usernames that tend to respond to each other most often.
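In essence, the "@mentions only" view keeps just the reply links between usernames and ranks users by how many interactions they take part in. A minimal stand-in, assuming comment dicts with `author` and optional `parent_author` fields (our own naming):

```python
from collections import Counter

def social_layer(comments):
    """Return (reply_edges, users ranked by interaction count)."""
    edges = [("@" + c["author"], "@" + c["parent_author"])
             for c in comments if c.get("parent_author")]
    activity = Counter()
    for replier, parent in edges:
        activity[replier] += 1   # each reply counts for both sides
        activity[parent] += 1
    return edges, activity.most_common()
```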
The visualization shows that the most active commentators on this video are users with no preselected channel name (they use a generic `@user-xxxxx` ID and show up as @user in the graph).
Then there's an active user @florentboyer2736.
You can click on each user and filter the comments that they left so you can see the content they posted.
Click AI: Summarize Visible to understand what they're saying en masse:
As you can see, most of the comments that elicit further discussion and interaction come from the "nameless" users (with usernames like @user-xxxxx). The AI interprets them as follows:
"The text discusses controversies in an interview with Tolstoy, critiquing the French journalists for their negative treatment of him. Certain people support Russia against the Western world."
Therefore, we see that most of the negativity about the journalists and the support for the pro-Putin propagandist Tolstoy comes from anonymous or non-established user accounts.
You can then analyze some of these channels further (their age, whether they have playlists, etc.) to better understand whether they may be bots or single-purpose accounts created to spread propaganda.
3: Evolution of Public Discourse Over Time
It would be interesting to see how the comments evolved over time. This may help us discover irregularities in how the content is presented.
To do that, we go to the Graph's settings and choose Node Filter. Then we select the time range for the first 20% of the video's presence online (i.e. the first day):
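Outside of InfraNodus, the same time slicing can be sketched in a few lines, assuming each comment dict carries an ISO-8601 `published` timestamp (the format the YouTube Data API returns):

```python
from datetime import datetime

def first_fraction(comments, fraction=0.2):
    """Return the earliest `fraction` of comments by publication time."""
    def ts(c):
        # fromisoformat in older Pythons doesn't accept the trailing 'Z'
        return datetime.fromisoformat(c["published"].replace("Z", "+00:00"))
    ordered = sorted(comments, key=ts)
    cutoff = max(1, round(len(ordered) * fraction))
    return ordered[:cutoff]
```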
We can see that about 20% of the comments were posted between 22/03 20:03 and 23/03 06:13, a window of roughly 10 hours. When we analyze their content, we see two things:
- early comments are much less socially interconnected.
- their general sentiment splits roughly 50/50: some critique the interviewee, some support him (see the AI module text above, which summarizes the visible 19% of early comments as split between support and critique). This is in contrast to the overall discourse, which is much more pro-interviewee, meaning that at some point people from outside (non-BFM subscribers) came in and added their opinions on the topic.
This is also confirmed by a timelapse visualization of the comments' discourse evolution:
This may indicate that the first 20% of comments are much more representative of the general sentiment in France (or at least among BFM's audience): half are anti-Putin and do not support his political agenda against France, and half are disillusioned citizens who are very critical of their country and media, distrust public politics, and, ironically, support a foreign propagandist politician.
However, as time passed, multiple anonymous user accounts came in and encouraged a more positive stance towards the interviewee, making sure to engage with the previous commenters (probably to "like" and support their point of view), shifting the discourse towards a more anti-establishment narrative directed against the current French government. It is possible that the video got distributed on pro-Le Pen (the French right-wing party) channels as well as on pro-Putin networks, reinforcing the bias towards the pro-Putin propagandist Tolstoy.
Interestingly, we would not have gained this insight from just reading the comments, because YouTube tends to show either the most recent or the most popular ones, so the more balanced collective point of view that appeared at the beginning was obscured by this engineered narrative.
4: The Content of the Discourse: Timelapse
Let's see how the content of the discourse evolved over time. To do that, we import the comments again, but this time we use the entity detection algorithm in InfraNodus to extract named entities and key concepts and build a knowledge graph that shows the relations between the ideas present in the comments.
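InfraNodus's entity detection is built in, but as a very crude stand-in you could count frequent non-stopword tokens to get a first list of key concepts. The stopword list here is a tiny hand-picked assumption; real entity detection and lemmatization are far more involved.

```python
import re
from collections import Counter

# Hypothetical minimal stopword list, not the real InfraNodus one.
STOPWORDS = {"the", "a", "and", "of", "to", "in", "is", "it", "that"}

def key_concepts(texts, top_n=10):
    """Rank the most frequent content words across a list of comments."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[A-Za-z']+", text):
            low = token.lower()
            if low not in STOPWORDS and len(low) > 2:
                counts[low] += 1
    return [word for word, _ in counts.most_common(top_n)]
```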
We then go to the Settings > Timelapse analysis to see how the topics evolved over time. We will see that the actual content stayed more or less constant:
The commenters have been discussing the same topics: "France", "Russia", and "French". When we slice those off to look for the deeper topics, we see that at the next level they discuss "Russians" and "Macron", then "propaganda", "shit", and "bfm" (the channel that published the content), as well as Tolstoy, the interviewee.
This shows us that despite the influx of anonymous users into this video's social network of comments, the subject matter stayed more or less the same. It wasn't that highly ideological commenters came in and swayed the discourse; rather, there was already an anti-Macron, pro-Russians sentiment around this video (we say pro-Russians rather than pro-Russia because the comments focus on the relationship between the French and the Russians rather than on the Russian state), and the new commenters who arrived after the video was published simply helped push this already existing agenda in the comments section, swaying the overall sentiment in the anti-government, pro-Russians direction. This is confirmed by the earlier AI analysis comparing the first 20% of comments to the last 80%.
We can also verify this hypothesis by comparing the first batch of comments to the later ones: export the first 20% of comments and visualize them as one graph, visualize the last 80% in another, then launch the graph comparison feature, which shows what's present in the earlier batch but absent from the later one:
People started talking about Putin, comparing him to Hitler, but also critiquing France and Macron and the journalist of BFM.
But then, in the later comments, we see the following patterns that are not visible in the earlier ones:
The discourse picked up all the negative points ("BFM and its journalists are pushing propaganda"), increased the personal attacks on the journalist, and started referring to nuclear war and a possible escalation.
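The graph comparison step can be roughly approximated by finding terms prominent in the early batch of comments but absent from the later one. This naive frequency-based sketch is not InfraNodus's actual algorithm (which compares graph structure, not just word counts), but it illustrates the idea:

```python
import re
from collections import Counter

def distinctive_terms(early_texts, late_texts, top_n=10):
    """Terms that appear in the early batch but never in the late one."""
    def freq(texts):
        c = Counter()
        for t in texts:
            # keep words longer than 3 letters as a crude noise filter
            c.update(w for w in re.findall(r"[a-z']+", t.lower()) if len(w) > 3)
        return c
    early, late = freq(early_texts), freq(late_texts)
    only_early = {w: n for w, n in early.items() if w not in late}
    return [w for w, _ in Counter(only_early).most_common(top_n)]
```

Running the same function with the arguments swapped would surface the late-batch patterns (the nuclear-escalation talk) instead.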
Overall, the comments section of this video is a perfect example of how online manipulation of public opinion works and can serve as a how-to guide to propaganda across the political spectrum. The recipe is simple: do not fight public opinion; instead, find the already existing negative tendencies, give them "likes", amplify them, and add a big existential scare at the end.