Can a mathematical theory help stop the spread of fake news online?
Jan 25, 2024
If we can identify the crucial nodes that spread disinformation more accurately, we gain a weapon we can deploy effectively whenever necessary.
When discussing fake news, people often think first of the individuals and content involved, their motivations, and their dishonesty. One of the many problems with this approach is that these units (people or content) are practically inert. They can't cause any harm unless they fall into the right network. The behaviors and paths they follow are what make them harmful, or even lethal. Hence the idea: why don't we analyze this infrastructure? And a second idea: could percolation theory (a renowned mathematical framework used in physics, mathematics, and computer science) help us?
Let's backtrack a bit. Why should we explore mathematical theory and disinformation together? The rationale is that networks behave in consistently similar ways across fields, from computing to agriculture, and from architecture to social behavior. It's no coincidence that social networks have become a vital part of modern societies. Analyzing entities from a network perspective might seem strange to some, but examining issues from a network standpoint allows the introduction of diverse ideas. Often, these ideas are game-changing.
Percolation theory provides a rigorous method to study the behavior of connected clusters within a network. It describes precisely how clusters emerge and grow as connections between nodes are added or removed. By applying percolation theory to disinformation networks, researchers could understand their structure and resilience, which paves the way for effective containment and mitigation strategies.
The concept of the critical threshold in percolation theory is pivotal: it marks the tipping point where a network transitions from a fragmented state to a connected one. Recognizing the critical threshold in disinformation networks can illuminate the conditions under which false information spreads most effectively. By simulating disinformation propagation within a network and fine-tuning the strength of connections between nodes, we could identify the key nodes whose removal would bring the network down or drastically reduce its impact.
Let’s put this into an example. Imagine a set of computers that are not interconnected. Now imagine you start randomly connecting them. In the beginning, you have multiple clusters of two or three connected computers, each with a reach limited to just a few machines. The critical threshold is the moment when you connect one specific computer that allows several clusters to become interconnected, unlocking the networking power that was dormant.
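The computer example can be simulated in a few lines. The sketch below is a toy model, not a tested anti-disinformation tool: the node count, edge count, and random seed are arbitrary assumptions. It adds random connections between isolated nodes and tracks the size of the largest cluster; in random graphs of this kind, the giant cluster appears abruptly once the number of edges passes roughly half the number of nodes.

```python
import random

def giant_cluster_growth(n_nodes=1000, n_edges=1500, seed=42):
    """Wire up isolated nodes at random and record the largest cluster
    size after each new connection (union-find keeps this fast)."""
    random.seed(seed)
    parent = list(range(n_nodes))  # each node starts as its own cluster
    size = [1] * n_nodes           # cluster sizes, tracked at the roots

    def find(x):
        # Follow parent pointers to the cluster's root, compressing the path.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    giant, history = 1, []
    for _ in range(n_edges):
        a = find(random.randrange(n_nodes))
        b = find(random.randrange(n_nodes))
        if a != b:  # a new connection between two clusters merges them
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a
            size[a] += size[b]
            giant = max(giant, size[a])
        history.append(giant)
    return history

history = giant_cluster_growth()
# Below the threshold (~n_nodes / 2 random edges) the largest cluster
# stays small; just past it, one giant cluster spans most of the network.
print("after 400 edges:", history[399], "| after 1400 edges:", history[1399])
```

Running it shows the jump the example describes: for a long while the biggest cluster is a handful of machines, and then, past the threshold, a single connection merges clusters into a component covering most of the network.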
Moreover, percolation theory enables a thorough assessment of the robustness of disinformation networks against interventions. By targeting and disrupting key nodes or connections within the network, it's feasible to gauge the extent to which disinformation can be contained or isolated. This invaluable insight could guide the development of strategies to obstruct the flow of false information and minimize its impact on the population.
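To make the robustness point concrete, here is a hedged sketch: the node count, the preferential-attachment growth rule, and the 5% removal cutoff are illustrative assumptions, not measured properties of real disinformation networks. It grows a hub-heavy network, then compares the largest surviving cluster when we remove the best-connected nodes versus the same number of random nodes.

```python
import random

def build_hub_graph(n=500, m=2, seed=1):
    """Toy preferential-attachment network: each new node links to m
    existing nodes, favoring those that already have many connections."""
    random.seed(seed)
    adj = {i: set() for i in range(n)}
    adj[0].add(1); adj[1].add(0)
    endpoints = [0, 1]  # nodes appear here once per connection they hold
    for v in range(2, n):
        chosen = set()
        while len(chosen) < min(m, v):
            chosen.add(random.choice(endpoints))  # degree-biased choice
        for u in chosen:
            adj[v].add(u); adj[u].add(v)
            endpoints += [u, v]
    return adj

def largest_cluster(adj, removed):
    """Size of the biggest connected cluster after removing some nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, count = [start], 0
        seen.add(start)
        while stack:
            x = stack.pop()
            count += 1
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        best = max(best, count)
    return best

adj = build_hub_graph()
hubs = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:25]  # top 5%
rnd = random.sample(list(adj), 25)  # same number of random nodes

intact = largest_cluster(adj, set())
no_hubs = largest_cluster(adj, set(hubs))
no_rand = largest_cluster(adj, set(rnd))
print("intact:", intact, "| hubs removed:", no_hubs, "| random removed:", no_rand)
```

The contrast is the point: removing random nodes barely dents the giant cluster, while removing the same number of hubs fragments it far more. If disinformation networks are similarly hub-heavy, this is what makes targeting key nodes attractive.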
If my guess is right (yes, this is only a guess), percolation theory can also be instrumental in assessing the effectiveness of countermeasures against disinformation. By injecting accurate information or corrective narratives into the disinformation network, we could ascertain how these interventions impede the spread of false information and potentially fragment the network. This research serves as a foundation for the design of proactive interventions to counteract disinformation.
There is no proof, yet, that all the characteristics of percolation that hold in physics and maths would be valid for disinformation networks. However, the concepts within the theory suggest a new way to see the system, and if a solution to the issue emerges, it will not come from traditional AI or fact-checking alone. Disinformation is a complex, hybrid problem, and it demands a similar mindset. Defining standards to evaluate and categorize nodes according to their capabilities (say, how efficiently or how fast they spread disinformation) would let us see the network as a map, disable these key nodes objectively, and perhaps acquire a weapon that could dismantle the structure holding the pipes where the bad info flows.