A Media Ecosystem for an Age of Fracture

by Shreeharsh Kelkar, University of California at Berkeley

As the dust settles after Donald Trump’s shocking upset of Hillary Clinton in the US Presidential Election, and we await the long-term fallout, I am left wondering what “objectivity” in media institutions will—and should—look like in this age of political polarization. How might we—social scientists interested in understanding how knowledge is made credible—use our insights to contribute to this debate about the future of media institutions in the United States?

The United States today is heavily polarized along political lines. As political scientists have shown time and time again, the polity of the United States experienced a realignment after the Civil Rights movement. The two major parties no longer have substantial overlap, if any; they cater to entirely different constituencies. Republicans are the party of managers, evangelicals, and the white working class; Democrats of organized labor, affluent professionals and minorities. The Democrats have a wider tent and therefore more disagreements, but both parties and their constituencies have sorted themselves into competing and opposing positions on a number of vital issues: the size and role of the welfare state, the scope and status of abortion, and the parameters of religion in public life.

This fracture was bound to have some effects on the media ecosystem. As Paul Starr has shown, American media institutions have constructed themselves as independent organizations, free of political bias—partly because their distribution was heavily subsidized by the federal government and partly because they came to rely on advertising as their main source of revenue. This is true of many of the largest-circulation media institutions: the New York Times, the Washington Post, Time Magazine, Newsweek, the Atlantic Monthly, CBS, NBC, CNN, and others. These institutions produce this objectivity through a variety of practices: separating their business and editorial divisions, keeping fact and opinion separate, and scrupulously reporting (political) conflicts by giving equal coverage to every side. This arrangement worked very well for a pre-polarization age in which both political parties presided over coalitions that spanned the ideological spectrum; indeed, it was constitutive of that age.

Polarization caused the first crack in this media ecosystem as it led to the creation of new media institutions at the margins (as well as a public that consumed them avidly) that did not adhere to (and indeed, flouted) this form of objectivity, conservative talk radio being a prime example. Internet-enabled business models multiplied and scrambled this effect. The early Internet bloggers, attuned to the age of polarization, despised the he-said-she-said model of media objectivity, which they sometimes characterized as “false equivalence.” Their blog posts combined an obsession with facts with an emphasis on a personal, partisan voice. Blogging has long since become institutionalized, with many early bloggers now writing for more established media publications. What has stayed constant is that their model of writing (embodied in outlets like Vox and Talking Points Memo) is attuned to a different model of objectivity: while obsessed with facts, figures, and truth, it proudly spurns the bipartisan style of, say, the NYT or CNN. Facts and opinions are mixed, and every side does not get equal (or similar) coverage.

But perhaps nothing has been more consequential for media institutions than the rise of curatorial “platforms”: the Facebooks and the Googles that remain many Americans’ gateway to the news they consume. Curiously, Facebook and Google have adopted the same stance of media objectivity even as their role in guiding public discourse has increased: they stress that they are non-political arbiters of political discourse. They have invested their organizational identity in becoming “platforms”: as platforms, they have no politics and are simply channeling the voice of their users. As Tarleton Gillespie has argued, the word “platform” sometimes refers to “technical platforms, sometimes as platforms from which to speak, sometimes as platforms of opportunity. Whatever tensions exist in serving all of these constituencies are carefully elided.” Unlike media institutions, which maintained their objectivity through specific organizational practices like separating fact from opinion, the platforms’ main justification is that they use algorithms. In their telling, the mechanical objectivity of algorithms—and the relative absence of human judgment in the process—is what makes their curatorial decisions objective.

Why have Facebook and Google studiously resisted being styled as media institutions and stuck to their guns in calling themselves mere platforms? A couple of reasons are clear: their reliance on advertising for revenue, and the fact that, in this age of polarization, being labeled “political” turns away people of the opposite political affiliation. But there is a third, perhaps more insidious, reason. Facebook and Google are committed to the principle of what has been called “context collapse”: Google wants to index all the information in the world and Facebook wants to make all activities “social” all the way down. These aspirations are sweeping in that they seek to dismantle the boundaries between activities such that everything is mediated through these platforms; the distinctions between the social and the commercial, the personal and the public, and the political and the non-political are blurred. To slightly twist a phrase coined by Helen Nissenbaum, platforms seek to blur, even exterminate, the “contextual integrity” of particular activities so that they all start to be mediated by the technological machinery that these platforms specialize in. Expanding their activities to cover more and more domains is the key to Facebook and Google’s Silicon Valley-inspired business models. Explicitly regulating content or regulating content producers is not conducive to the growth of these platforms.

To understand this, consider the dispute over Facebook Trends and the current debate about “fake news.” Both raise the question: in this age of polarization, how do we restore a meaningful public sphere of reasonable political debate?

Facebook Trends, Fake News and Objectivity

After Trump’s victory, an increasing amount of liberal dismay focused on the rise of “fake news”: transparently false and outrageous claims about Hillary Clinton, manufactured sometimes in Macedonia, at other times in California (and, in a demonstration that commerce sometimes knows no politics, by gung-ho entrepreneurial Democrats!), that nonetheless managed to circulate widely among conservative audiences, primarily through sharing on Facebook. Critics alleged that these fake stories circulated more widely than real stories did, that this played some role in the election results, and that Facebook should do something about it.

But before fake news, there was Facebook Trends. A few months ago, the website Gizmodo unveiled what seemed to many a scandalous revelation. Facebook’s Trending Topics were not really algorithmically generated; rather, they were curated by a group of people (to be sure, a very small group, neither trained in the practices of journalism nor paid very well) who worked with what Facebook’s massive programs discovered. Even more damning, this group was making choices that systematically deleted conservative news headlines while privileging more liberal ones. Or so alleged the main source for the article: a conservative Trends curator.
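
To see what such a hybrid arrangement looks like in the abstract, consider the following minimal sketch. It is not Facebook’s system; the names, thresholds, and structure are invented purely to illustrate where human judgment enters a nominally “algorithmic” feature: software surfaces candidate topics, and a person decides what gets published and how it is worded.

```python
# A hypothetical "algorithm proposes, human disposes" trending pipeline.
# Nothing here comes from Facebook; the names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class CandidateTopic:
    name: str
    mentions: int        # how often the topic appears in recent posts
    velocity: float      # growth in mentions over the last hour

def algorithmic_candidates(topics, min_mentions=10_000, min_velocity=2.0):
    """Step 1: software surfaces topics that are spiking in volume."""
    return [t for t in topics
            if t.mentions >= min_mentions and t.velocity >= min_velocity]

def curate(candidates, approve, write_headline):
    """Step 2: a human curator decides what is published and how it is worded."""
    return [(t.name, write_headline(t)) for t in candidates if approve(t)]

# Example: the algorithm surfaces two topics, but only one survives curation.
topics = [CandidateTopic("Topic A", 50_000, 3.1),
          CandidateTopic("Topic B", 12_000, 2.5)]
published = curate(algorithmic_candidates(topics),
                   approve=lambda t: t.name != "Topic B",           # the judgment call
                   write_headline=lambda t: f"{t.name} is trending")
```

Even in this toy version, the published list depends entirely on who writes the approve step, which is precisely what the Gizmodo story made visible.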

Scholars have expressed dismay about what the Trending fracas revealed: first, that the general public sometimes seems unaware that there are people behind the algorithms of Facebook, and second, that Facebook’s Trends curators were abysmally paid employees with little or no formal training in any journalistic enterprise, an indication that Facebook treated a fundamental job as contingent work. Conservatives expressed a different kind of dismay: that Facebook had a liberal bias just like the New York Times and was systematically discriminating against them.

Facebook’s reaction to the controversy was instructive. Conservative luminaries were invited to Facebook’s headquarters to have their concerns registered. And then Facebook fired its Trending staff and left the algorithms essentially unsupervised. The result has not been especially good, as the Washington Post has shown: the output of the Trends feature has steadily grown more incoherent.

Facebook could certainly have dealt with the Trends controversy in a different way. They could have strengthened the job description of the Trends curators and hired journalists rather than contingent workers. They might have made the process through which a Trend was captured and published transparent, at a level of abstraction similar to the one they use for their much-vaunted EdgeRank algorithm, so that no proprietary information was revealed. They could have hired more conservative curators and explicitly claimed that their curators were as representative as possible of the general US population. Instead, they opted for an argument that fell back on the mechanical objectivity of algorithms. This allowed Facebook to remain a neutral, non-political platform, and it seemed to assuage affronted conservatives.

Facebook’s approach to dealing with fake news has been notably different, perhaps because influential Facebook employees felt some unease at their own complicity in the election of Donald Trump. After some initial denials by Mark Zuckerberg, Facebook recently announced that independent fact-checkers would get privileged access to Facebook and would be able to flag egregious articles. Shared Facebook items would thus carry some sort of mark verifying their authenticity. Facebook is also clearly at work on more computational mechanisms to identify false news items. Time will tell whether these methods provoke an outcry from conservatives who argue that the fact-checkers are not fair and harbor some sort of liberal bias.

Rather than doubling down once again on the objectivity of algorithms, Facebook created what is essentially a political solution to the fake news problem: it outsourced the verification of shared links to third parties who have established practices of adjudicating truth claims made by media institutions. Certain issues remain unresolved: what happens when particular parties dispute the judgment of the fact-checkers? How is the judgment of the fact-checkers incorporated into the EdgeRank algorithm, given the brute fact that items at the top of the Facebook newsfeed tend to be read the most? Time will tell.
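
The second question can be made concrete with another minimal sketch. This is not EdgeRank; Facebook has not disclosed how (or whether) fact-check flags enter its actual ranking. The fields, weights, and penalty factor below are assumptions, chosen only to show what is at stake when a fact-checker’s flag is folded into a feed score.

```python
# Hypothetical illustration only: a toy newsfeed score that down-weights
# items disputed by third-party fact-checkers. The fields, weights, and
# penalty factor are invented, not Facebook's.

def feed_score(item, disputed_penalty=0.2):
    engagement = item["likes"] + 2 * item["comments"] + 3 * item["shares"]
    affinity = item["affinity"]          # how close the reader is to the poster
    recency = item["recency_weight"]     # decays as the post ages
    score = engagement * affinity * recency
    if item.get("disputed_by_fact_checkers"):
        score *= disputed_penalty        # flagged items sink but are not removed
    return score

posts = [
    # a viral but disputed story
    {"likes": 900, "comments": 40, "shares": 300, "affinity": 0.8,
     "recency_weight": 1.0, "disputed_by_fact_checkers": True},
    # a less viral, undisputed story that now outranks it
    {"likes": 400, "comments": 60, "shares": 120, "affinity": 0.9,
     "recency_weight": 0.7, "disputed_by_fact_checkers": False},
]
ranked = sorted(posts, key=feed_score, reverse=True)
```

Even in this toy version, the unresolved questions are political rather than technical: who sets the penalty, who decides when the flag applies, and what recourse a publisher has when it disputes the fact-checkers’ judgment.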

We—and Facebook—might conceivably think of other models of objectivity in our efforts to create trustworthy media institutions (i.e., trusted by both liberals and conservatives) in an age of polarization and platforms. In her comparison of the American, British, and German regulation of biotechnology, Sheila Jasanoff characterizes each national regulatory style as “contentious,” “communitarian,” and “consensus-seeking” respectively. In the German polity, for example, she finds an emphasis on the representativeness of knowledge-making bodies, while the United States hopes to resolve regulatory issues by having the different parties face off in an adversarial manner through disputes between experts. That Facebook chose to tackle its fake news problem (inasmuch as this can be “tackled”) not by emphasizing the mechanical objectivity of its algorithms, but rather by drawing on the prestige of fact-checking institutions, is a hopeful first step. Platforms are new; there is a great deal of interpretive flexibility in their underlying technical and institutional infrastructure. The future of media objectivity in an age of polarization, Trump, and social media platforms might lie in drawing on alternative civic epistemologies. That is an area where we social scientists can make a useful contribution.

Contact Dr. Kelkar at: skelkar[at]berkeley.edu
