How might we build trust in an untrusting world?

Considering the challenge of polarization in digital societies

Richard Gingras
10 min read · Oct 22, 2023

We live in a divided world. My country is fractured. Your country is fractured. Everywhere I’ve been of late, from Brazil to Italy, from the Philippines to Nigeria, I see struggles to address fractured societies. Deep fissures of ethnic, religious, economic conflict.

The underlying issues are longstanding. They are compounded by changes in the global body politic. Open markets and borderless capitalism drive fear of lost jobs. Growth in immigration is perceived to disrupt culture, to disrupt religion. And yes, these challenges have been exacerbated by the frictionless means of expression enabled by the Internet.

We see frustration with the perceived value of democracy. We see less willingness to engage in constructive dialog. When the motivation to achieve compromise or consensus is lost, democracies break down.

We live in a highly polarized world. History warns us. It tells us that polarization stretched to the breaking point does not end well. You can go back centuries on that one. The only thing that has changed over time is that communications technology makes it all happen faster.

As technologies of media progressed, from the printing press to radio to television, it became easier for people to consume more and more information. However, the ability to speak to the people, to influence them at scale — good, bad, or indifferent — was limited to a privileged few. Participation was not diverse, and minority voices were not fairly represented.

The Internet changed that. It put a printing press in everyone’s hands. Everyone had the opportunity to share his or her voice in the public square. Millions did. It enabled diverse voices to express themselves to every person in the world, or more precisely, to anyone willing to listen.

In a world of unfettered free expression, the nature of both public discourse and political engagement changes. Yes, the Internet can elevate noble speech — that which appeals to our better angels and allows us to find consensus. But it also enables heinous speech, where anger, outrage and self-righteousness can fuel a hatred of others.

We, our species, are more easily stimulated by emotional expression than by reasoned, complex analysis. We prefer that our biases be confirmed. Affirmation is more satisfying than information. It always was. It always will be.

The politics of fear has forever been a powerful tool. Fear motivates action. Fear shifts and hardens our perceptions of reality. Fear silences dissent. Fear has driven countries away from democratic principles toward authoritarian regimes. It was one of the great Greeks who said our open societies, our democracies, will be destroyed by the freedoms we enable. Wise words. Terrifying words.

I’ve had many intellectual wake-up calls over the last decade. After the US election in 2016 I spoke of the need to “bridge the gaps in our society by appealing to our innate sense of reasoning.” It sounded right at the time.

I have learned we have no innate sense of reasoning. We are first and foremost tribal beings. We think first through the filter of what our friends, our tribes, expect us to believe. We think first, as Daniel Goleman has made clear, through a social construct. If the head of the tribe says the moon is green, one would be inclined to agree, lest one be denied a leg of the roasting calf. This is not a new trend. It is not representative of any particular ideology. It is who we are.

With the Internet, the mathematics of the media space, the information space, changed. As a society’s access to and participation in media became more open, the information space became intrinsically more diverse, and mathematically more divisive. The Internet broke the information space into a million shards, from 500 channels to more than a billion websites. We can choose, and do choose, the voices that reflect our view of our world, the voices that reflect and confirm our biases — good, bad, and indifferent.

This challenges our core understanding of free expression, of the recognition that supporting free expression means accepting the existence of expression we find uncomfortable, indeed heinous.

How do we address these challenges to our societies?

In response to disinformation and misinformation, there is demand for regulation, a demand for governments to create mechanisms to protect our societies by filtering out or reducing the amplification of what might be deemed harmful speech.

Such regulatory solutions are tricky. What is unacceptable expression? What is the truth? How is it determined in areas where there are many perspectives and few singular fact-based truths? Who decides? Who decides who decides? How do such legal mechanisms work in open societies where free expression is prized? How do we address such thorny questions when societies are fighting over what books are allowed in our libraries, or how history is taught in our schools?

Can press freedom be maintained as legal mechanisms are crafted to address misinformation? Where does one draw the line between awful and lawful in a divisive political world teeming with extreme parody and threatening outrage? Can regulation of knowingly false information be effective if exemptions are granted to politicians? Does what we see in the United States, with politicians attacking disinformation researchers, give further indication of this deep and paradoxical challenge? Might such mechanisms be used against the press by less well-intentioned leaders? Will blocking, banning, or de-amplifying certain acts of expression address our differences or harden us in our silos of segregated belief?

Us versus Them. We versus They. Does one ever win an argument by putting one’s hand in front of someone else’s mouth?

How might we address our confrontational, suspicious, untrusting world? How do we build motivations to care about the greater common good?

Looking back at my own fifty years in news, media, technology, and policy, I wonder: what can journalism, or technology, or any other institution do to rebuild a sense of relevance, of value, of trust in fact-based knowledge? How might each of us assess our own efforts to address the loss of understanding, the loss of trust, the loss of a collective sense of the common good?

Our world is steeped in suspicion. Institutions are distrusted. Media is distrusted. Journalism is distrusted. Science is distrusted. Technology is distrusted.

Nine years ago Sally Lehrman founded the Trust Project. Lehrman was determined to understand the problem and guide news organizations in building trust. A noble ongoing effort. I assisted in the project’s initiation. I believed in it then, and I still do. There are many insights and innovations that point to paths forward. But I also think I misunderstood the challenge of trust, or at least that word. We don’t have a deficit of trust. Everyone trusts someone. Everyone trusts some sources of information — whether we think the sources they trust are worthy or not.

The world of journalism has never been solely about fact-based coverage. There have always been varying degrees of partisanship, perspective, opinion. The left-leaning news brand is despised by the right. The right-leaning news brand is despised by the left. Both lean toward affirmation rather than information. Sadly, fact-based coverage is drowned in a flood of opinion, often skewed through the lens of perspective. What is the answer to that?

Before the Internet, the amount of opinion content in a traditional newspaper in the United States was small. It was typically limited to the editorial page, which presented the opinions of the newspaper itself, and later extended to the OpEd page (literally, opposite the editorial page) for curated opinion from third parties. Today, with unlimited space, news sites offer far more opinion than in the past. We have more partisan news sources than in the past. We assume readers understand the difference between fact-based coverage and partisan opinion. They don’t.

The prevalence of opinion on a news site creates doubt about the fact-based coverage that sits by its side. If readers agree with the opinions, they are inclined to trust the fact-based coverage. If they don’t, they won’t. Do readers’ efforts to understand how to think get overwhelmed by the crowd of voices telling them what to think?

How might we avoid unduly amplifying societal fears? In the United States, one is 35 times more likely to die of cancer or heart disease than from violent crime. Yet our societies perceive those risks in reverse: our fear of violent crime vastly exceeds our fear of dying in our cars or from serious illness.

We live in a landscape of distorted risk.

Every day we read about shootings, kidnappings, gang warfare. All the horrific but anomalous events that journalism needs to cover. How does our society learn of these things without molding perceptions of reality that conflict with actual reality? Is there an increase in violent crime in my community, or is it a rare occurrence? If I enter a polling booth with a distorted sense of societal risk, how does that not affect how I consider issues or candidates?

Might we provide the context to close the gap between irrational and rational fear? Several years ago at Google we began a project we thought might help address this. We built a massive data commons coalescing statistical data from thousands of authoritative sources. Might such resources make it easier for journalists to offer appropriate context? As newsrooms develop new tools leveraging Generative AI, might those tools assist the reporter in surfacing relevant data to provide further context?
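To make that idea concrete, here is a minimal sketch of how a newsroom tool might pull statistical context programmatically. It assumes the public Data Commons Python client (the `datacommons` package) and its `get_stat_value` call; the place ID and statistical-variable names below are illustrative assumptions rather than verified entries in the Data Commons schema, and a real tool would need to map them carefully to the story at hand.

```python
# A hedged sketch, not a production implementation.
# Assumes: `pip install datacommons`; variable and place identifiers are placeholders.
import datacommons as dc

PLACE = "geoId/06075"  # hypothetical example: a county-level Data Commons place ID

# Hypothetical statistical variables a reporter might want side by side.
STAT_VARS = {
    "Deaths from circulatory disease": "Count_Death_DiseasesOfTheCirculatorySystem",
    "Violent crimes reported": "Count_CriminalActivities_ViolentCrime",
}

def fetch_context(place: str) -> dict:
    """Fetch the latest available observation for each variable to give a story some scale."""
    context = {}
    for label, stat_var in STAT_VARS.items():
        try:
            context[label] = dc.get_stat_value(place, stat_var)
        except Exception:
            # The variable may not exist for this place, or the service may be unavailable.
            context[label] = None
    return context

if __name__ == "__main__":
    for label, value in fetch_context(PLACE).items():
        print(f"{label}: {value}")
```

Even a trivial query like this surfaces the real design question: the hard part is not fetching a number but deciding which comparisons give readers an honest sense of scale.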

Might we rethink the models and formats used in journalistic work? The Constructive Journalism Institute explores a different vein of opportunity — presenting news coverage through a constructive lens. The word constructive is key. It’s not news that makes you feel good; constructive journalism goes beyond the typical coverage model. It seeks to provide clear signals and clear intent in displaying the necessary context, the hows and whys of a calamitous event, and, importantly, to report objectively on the range of thinking on how the event could have been prevented.

Can journalism convey the principles of unbiased fact-based journalism through the structure of its work? Can such models guide critical thinking and support the reader’s own evaluation and ultimate judgment?

Might we avoid terms and labels that emphasize divisiveness and instead angle toward others that promote constructive dialog? If we think it’s important to seek common ground, then maybe the political talk show shouldn’t be called Crossfire. Ulrik Haagerup, in his work at Danish television, showed how constructive discussion frameworks can succeed.

I have spent time with Janet Coats, a linguistics researcher and managing director of the Consortium on Trust in Media and Technology at the University of Florida. She shared the work she’s doing analyzing coverage of the racial justice protests that followed the murder of George Floyd in 2020.

Coats points out, “The words quite literally scorched off the page. The verbs used to describe protest actions repeatedly drew comparisons to fire or destruction, such as spark, fuel, erupt, trigger, ignite.”

Coats poses the question: is the recurrent use of this fiery language a deliberate choice, or is it a subconscious pattern when covering such stories? What impact does that have on the perception of political demonstrations and of the people participating in them? How might that fuel partisan divide?

Language matters. Linguistics matters. Politicians know this. They spend lavishly on research and message testing to understand precisely which words and phrases will stimulate the desired response, be it hope or fear.

Might the world of journalism also study linguistics? Might it assess the impact of the terms and phrases that are used? Might it consider the impact of amplifying the false memes and spin propagated by the politicians it covers?

How do we address the challenge of “the other” without being perceived as someone else’s “other”? I read a thoughtful book by Mónica Guzmán, a Mexican-American journalist, who looked at it through the lens of her own sharply divided family. It’s called “I Never Thought of It That Way: How to Have Fearlessly Curious Conversations in Dangerously Divided Times.” We can’t find common ground without learning how to listen to each other.

Fear of the other. This is at the core of our crisis of divisiveness. Can we take care not to demonize those we disagree with? All of us, in our own way, might address this. Can we avoid reducing the other to simplistic memes? Can we avoid reducing the other to a demon? Demonization confirms the bias. It doesn’t bridge the divide. It deepens it.

Stanford University coordinated a recent mega-study on what types of interventions would decrease political polarization. Two approaches seemed to work best. One is to leverage empathy. The other is to leverage perceived similarity. Both are highly relevant to the information ecosystem.

The value of empathy can be leveraged by highlighting relatable, sympathetic exemplars of different political beliefs, rather than the “high conflict personalities” who are typically more visible in politics and media. Easier said than done, but worth considering.

The value of perceived similarity can be leveraged by highlighting common cross-partisan interests. Does the news publication feature content about non-controversial topics of common interest? This has long proven to work well in local news, where the value of service journalism — on local sports, on community events, on the progression of life from birth to obituary — can drive engagement, can unify a community, and, research shows, can build trust in the coverage of more controversial topics.

These questions are not only for the media and journalism communities. How might other institutions do their part? How does Google do its part? How can algorithms and machine learning reflect a society’s ideals and principles — whether in surfacing relevant and authoritative search results, or in developing applications of AI that can help address our societal challenges and reduce the risk of harm?

What are the underlying principles being pursued? How do we drive the greater value of the public good? How do we create a path toward agreement on what is the common good? How do we address the key question, paradox that it is: how to manage free expression in our modern digital age?

It is up to us, and our societies, to find the answers — whether in our laws, in our principles, or in our own thoughtful behavior.

Richard Gingras is global vice president of news at Google. In that role Gingras focuses on how Google surfaces news on Google’s consumer services and on Google’s effort to enable a healthy, open ecosystem for quality journalism.

Gingras serves on the boards of the Center for News, Technology, and Innovation, the International Consortium of Investigative Journalists, the International Center for Journalists, the First Amendment Coalition, the James W. Foley Legacy Foundation, the UC Berkeley School of Journalism, and PRX, the Public Radio Exchange.

Gingras has walked the bleeding edge from satellite networks to news products to search engines, from PBS to Apple to Excite to Salon to Google. He knows innovation is hard. He concedes he’s made more mistakes than you.
