
Illustration by Ibrahim Rayintakath

How to fight disinformation

It’s nothing new, says media historian Heidi Tworek, and we can learn from the past.

Following the Arab Spring uprisings of the early 2010s, there was a flood of optimism about social media’s potential to unleash democracy around the world. “If you want to liberate a society, just give them the Internet,” said activist and Google executive Wael Ghonim after Egypt’s Mubarak regime was toppled. Many believed that Facebook, Twitter, and other digital platforms would amplify marginalized voices and hold the powerful accountable.

At that time, historian Heidi Tworek was wrapping up her PhD at Harvard, looking at how Nazis used radio – a revolutionary technology at the time – to spread horrific propaganda around the world. “Everyone was talking about this utopian vision for how new technologies were going to spread democracy and uplift communities, and I was basically looking at the exact opposite thing – how a group of people used a new technology to really nefarious ends,” says Tworek. “I thought, OK, here’s a moment where a historian needs to intervene to say, ‘This is not new. Every media technology has been used for political purposes, but it’s a mistake to assume it can only be used for utopian ends.’”

While she was adapting her research into a book about Germany’s efforts to control global communications from 1900 to 1945, she witnessed history repeating itself in disturbing ways: “The far-right revived Nazi terminology using Lügenpresse (lying press) to decry the media. Marginalized groups were targeted online and blamed for societal ills. News was falsified for political and economic purposes,” she says. “And like radio in the first half of the 20th century, a technology designed for utopian aims became a tool for dictators and demagogues.”

Today, the narrative around social media and democracy has flipped into a dystopian vision. Social media is now blamed for fuelling disinformation, division, and violence around the world, algorithmically funnelling audiences into disparate partisan echo chambers, and eroding a shared sense of reality on everything from health to climate to politics. A 2024 World Economic Forum report named misinformation and disinformation the top global risks for the next two years, outranking war, climate disasters, and health epidemics. In early 2025, the head of Canada’s Foreign Interference Commission warned, “Information manipulation (whether foreign or not) poses the single biggest risk to our democracy. It is an existential threat.”

Sparked by this sustained and overwhelming global concern, Tworek – who joined UBC in 2015 and now directs its Centre for the Study of Democratic Institutions (CSDI) – has become a key voice in shaping policy responses. She has testified before governments around the world on platform governance, hate speech, and election integrity in the digital age. During the height of the pandemic, she advised officials on combatting health misinformation. More recently, she co-authored a report on protecting elections in the age of Generative AI, and served as an expert witness for Canada’s Foreign Interference Commission.

Here are some of her key strategies for tackling disinformation:

Understand historical patterns
As a historian and policy expert, Tworek has worked to shed light on what is unprecedented about today’s disinformation crisis, and what is simply a newer manifestation of longstanding problems. She notes that some aspects of disinformation in the digital age are truly unprecedented: the scale of the Internet’s reach, the granular level of surveillance, the micro-targeting, the global pre-eminence of US-based platforms. But disinformation itself is an age-old problem, and Tworek cautions against viewing new media technologies through a strictly utopian or dystopian lens. Fake news has existed since the advent of newspapers, and new technologies like the printing press, the radio, the Internet, or GenAI have all been used to amplify misinformation – at least in the short term.

Disinformation often has an economic incentive, but Tworek says it has long been a tool of geopolitics as well. “Countries historically turn to information warfare as a cheap form of interference when they feel geopolitically weak,” she explains. “That was as true for Germany in the past as it is for Russia today.”

The more important questions for Tworek are: What structural conditions enable disinformation? Why does it spread more at certain times than others? How do entire information ecosystems, not just individual pieces of disinformation, shape democracy?

Focus on business models, not content
Rather than trying to crack down on problematic content – which many governments have tried and largely failed to do – Tworek argues that we need to investigate the economic structures that enable disinformation to thrive. “It’s tempting to focus on examples of individual content that are particularly harmful,” she says. “But the reason those pieces of content go viral is because of the few companies that control the bottleneck of information.”

In the early 20th century, British and French news agencies dominated global news distribution – which is why Germany worked so hard to develop its own strategies for influencing global communications, as Tworek detailed in her book. “Today you still have a small number of platforms who frame and shape how we communicate with each other,” says Tworek. And those platforms profit from engagement, regardless of whether content is true or false.

Tworek says the potential for manipulation has increased since Elon Musk took over Twitter (now X), and Meta abandoned its fact-checking program. “We’ve seen how political influence can shape these platforms,” she says. “We need only think of Trump’s inauguration, where the CEOs of major tech companies were seated in front of his own cabinet.” She calls the concentration of economic and political power in today’s tech world “historic and alarming.”

At a minimum, Tworek says we need far more transparency about how algorithms work. She points to the European Union’s Digital Services Act – which requires tech companies to provide more transparency and imposes stricter rules on targeted ads – as a model that Canada should consider. She has also considered whether Canada might join Europe as a negotiating partner in its efforts to safeguard democracy from digital manipulation. “Canada is a small market that cannot hope to sway big tech companies on its own. Working with Europe is one way for Canada to create change.”

Strengthen the information ecosystem
Tworek has also urged policy-makers to move beyond just policing bad content and pay more attention to making trustworthy information more accessible. “If we just focus on disinformation, we miss the bigger problem: the health of the entire information ecosystem.” Investing in public and independent journalism, supporting digital literacy programs, and improving government communications can help. She points to a study from the early days of COVID that analyzed government health websites around the world and found most were written at a university reading level. “There is no country on earth where 100 per cent of people have a university education. And thus that is an exclusionary way of providing this information.” Governments need to think about the basics: “Are you translating it into all the right languages? Is it easily accessible? Are you putting information on all the social media channels that reach Canadians, including ones who may be searching for information in Mandarin or Cantonese or Hindi?”

A generation ago, government officials could get away with simply holding a press conference, confident their message would reach the public through mainstream media. Today, governments need to work a lot harder to get accurate and accessible information in front of their citizens, she says. If they don’t, bad content will bubble up to fill the void – whether it’s misinformation shared by genuinely confused people, or disinformation paid for by problematic actors.

Democracy-proof solutions
Although the state has an important role to play in fostering a high-quality information landscape and protecting citizens from disinformation, Tworek cautions against overreaching regulatory measures. As her research on Weimar Germany shows, well-intentioned efforts to curb disinformation can ironically enable problematic control of content by less democratic governments in the future. The Weimar government’s attempts to protect democracy by increasing state supervision over content ultimately laid the groundwork for the Nazi propaganda machine. Tworek recommends applying a two-part test to any new policy. First, how could an authoritarian regime misuse this policy for censorship? Second, how would tech companies evade or manipulate it?

Tworek notes that the role of disinformation isn’t always to push a specific agenda; sometimes, it simply aims to create confusion or erode trust. This is one reason why she and her colleagues warn against sensationalizing the problem, even in the face of insidious AI tactics such as deepfakes. As her colleague at CSDI, Chris Tenove, puts it: “If people believe there is widespread disinformation or use of deepfakes, they are more likely to believe the entire information system, including news media, is untrustworthy – or maybe even that democratic institutions and election outcomes themselves are untrustworthy. It’s good for people to be critical and skeptical, but not to feel disempowered or cynical. People can and do find accurate information. Elections can be conducted with integrity. We need to be vigilant in addressing some of the risks that generative AI poses, but not fatalistic.”

Finally, Tworek argues that we need to understand misinformation and disinformation more as symptoms than root causes. Focusing solely on information manipulation distracts from the real-world issues that leave citizens feeling alienated, powerless, and distrustful of their institutions. “Social media may amplify anger, but that anger also stems from real-world experiences of current conditions,” Tworek writes. “If we do not address pressing issues like growing inequality and climate change, improved social media communication will not stem discontent.”