
Algorithms are everywhere


Like a lot of Netflix subscribers, I find that my personal feed tends to be hit or miss. Usually more miss. The movies and shows the algorithms recommend often seem less predicated on my viewing history and ratings, and more geared toward promoting whatever’s newly available. Still, when a superhero movie starring one of the world’s most famous actresses appeared in my “Top Picks” list, I dutifully did what 78 million other households did and clicked.

As I watched the movie, something dawned on me: recommendation algorithms like the ones Netflix pioneered weren’t just serving me what they thought I’d like—they were also shaping what gets made. And not in a good way. 

Cover of Filterworld: How Algorithms Flattened Culture, by Kyle Chayka (Doubleday)

The movie in question wasn’t bad, necessarily. The acting was serviceable, and it had high production values and a discernible plot (at least for a superhero movie). What struck me, though, was a vague sense of déjà vu—as if I’d watched this movie before, even though I hadn’t. When it ended, I promptly forgot all about it. 

That is, until I started reading Kyle Chayka’s recent book, Filterworld: How Algorithms Flattened Culture. A staff writer for the New Yorker, Chayka is an astute observer of the ways the internet and social media affect culture. “Filterworld” is his coinage for “the vast, interlocking … network of algorithms” that influence both our daily lives and the “way culture is distributed and consumed.” 

Music, film, the visual arts, literature, fashion, journalism, food—Chayka argues that algorithmic recommendations have fundamentally altered all these cultural products, not just influencing what gets seen or ignored but creating a kind of self-reinforcing blandness we are all contending with now.

That superhero movie I watched is a prime example. Despite my general ambivalence toward the genre, Netflix’s algorithm placed the film at the very top of my feed, where I was far more likely to click on it. And click I did. That “choice” was then recorded by the algorithms, which probably surmised that I liked the movie and then recommended it to even more viewers. Watch, wince, repeat.  

“Filterworld culture is ultimately homogenous,” writes Chayka, “marked by a pervasive sense of sameness even when its artifacts aren’t literally the same.” We may all see different things in our feeds, he says, but they are increasingly the same kind of different. Through these milquetoast feedback loops, what’s popular becomes more popular, what’s obscure quickly disappears, and the lowest-common-denominator forms of entertainment inevitably rise to the top again and again. 

This is actually the opposite of the personalization Netflix promises, Chayka notes. Algorithmic recommendations reduce taste—traditionally, a nuanced and evolving opinion we form about aesthetic and artistic matters—into a few easily quantifiable data points. That oversimplification subsequently forces the creators of movies, books, and music to adapt to the logic and pressures of the algorithmic system. Go viral or die. Engage. Appeal to as many people as possible. Be popular.  

A joke posted on X by a Google engineer sums up the problem: “A machine learning algorithm walks into a bar. The bartender asks, ‘What’ll you have?’ The algorithm says, ‘What’s everyone else having?’” “In algorithmic culture, the right choice is always what the majority of other people have already chosen,” writes Chayka. 

One challenge for someone writing a book like Filterworld—or really any book dealing with matters of cultural import—is the danger of (intentionally or not) coming across as a would-be arbiter of taste or, worse, an outright snob. One might ask: what’s wrong with a little mindless entertainment? (Many asked just that in response to Martin Scorsese’s controversial Harper’s essay in 2021, which decried Marvel movies and the current state of cinema.) 

Chayka addresses these questions head-on. He argues that we’ve really only traded one set of gatekeepers (magazine editors, radio DJs, museum curators) for another (Google, Facebook, TikTok, Spotify). Created and controlled by a handful of unfathomably rich and powerful companies (usually led by rich and powerful white men), today’s algorithms don’t even attempt to reward or amplify quality, which of course is subjective and hard to quantify. Instead, they focus on the one metric that has come to dominate all things on the internet: engagement.

There may be nothing inherently wrong (or new) about paint-by-numbers entertainment designed for mass appeal. But what algorithmic recommendations do is supercharge the incentives for creating only that kind of content, to the point that we risk not being exposed to anything else.

“Culture isn’t a toaster that you can rate out of five stars,” writes Chayka, “though the website Goodreads, now owned by Amazon, tries to apply those ratings to books. There are plenty of experiences I like—a plotless novel like Rachel Cusk’s Outline, for example—that others would doubtless give a bad grade. But those are the rules that Filterworld now enforces for everything.”

Chayka argues that cultivating our own personal taste is important, not because one form of culture is demonstrably better than another, but because that slow and deliberate process is part of how we develop our own identity and sense of self. Take that away, and you really do become the person the algorithm thinks you are. 

Algorithmic omnipresence

As Chayka points out in Filterworld, algorithms “can feel like a force that only began to exist … in the era of social networks” when in fact they have “a history and legacy that has slowly formed over centuries, long before the Internet existed.” So how exactly did we arrive at this moment of algorithmic omnipresence? How did these recommendation machines come to dominate and shape nearly every aspect of our online and (increasingly) our offline lives? Even more important, how did we ourselves become the data that fuels them?

Cover of How Data Happened, by Chris Wiggins and Matthew L. Jones (W.W. Norton)

These are some of the questions Chris Wiggins and Matthew L. Jones set out to answer in How Data Happened: A History from the Age of Reason to the Age of Algorithms. Wiggins is a professor of applied mathematics and systems biology at Columbia University. He’s also the New York Times’ chief data scientist. Jones is now a professor of history at Princeton. Until recently, they both taught an undergrad course at Columbia, which served as the basis for the book.

They begin their historical investigation at a moment they argue is crucial to understanding our current predicament: the birth of statistics in the late 18th and early 19th centuries. It was a period of conflict and political upheaval in Europe. It was also a time when nations were beginning to acquire both the means and the motivation to track and measure their populations at an unprecedented scale.

“War required money; money required taxes; taxes required growing bureaucracies; and these bureaucracies needed data,” they write. “Statistics” may have originally described “knowledge of the state and its resources, without any particularly quantitative bent or aspirations at insights,” but that quickly began to change as new mathematical tools for examining and manipulating data emerged.

One of the people wielding these tools was the 19th-century Belgian astronomer Adolphe Quetelet. Famous for, among other things, developing the highly problematic body mass index (BMI), Quetelet had the audacious idea of taking the statistical techniques his fellow astronomers had developed to study the position of stars and using them to better understand society and its people. This new “social physics,” based on data about phenomena like crime and human physical characteristics, could in turn reveal hidden truths about humanity, he argued.

“Quetelet’s flash of genius—whatever its lack of rigor—was to treat averages about human beings as if they were real quantities out there that we were discovering,” write Wiggins and Jones. “He acted as if the average height of a population was a real thing, just like the position of a star.” 

From Quetelet and his “average man” to Francis Galton’s eugenics to Karl Pearson and Charles Spearman’s “general intelligence,” Wiggins and Jones chart a depressing progression of attempts—many of them successful—to use data as a scientific basis for racial and social hierarchies. Data added “a scientific veneer to the creation of an entire apparatus of discrimination and disenfranchisement,” they write. It’s a legacy we’re still contending with today. 

Another misconception that persists? The notion that data about people are somehow objective measures of truth. “Raw data is an oxymoron,” observed the media historian Lisa Gitelman a number of years ago. Indeed, all data collection is the result of human choice, from what to collect to how to classify it to who’s included and excluded. 

Whether it’s poverty, prosperity, intelligence, or creditworthiness, these aren’t real things that can be measured directly, note Wiggins and Jones. To quantify them, you need to choose an easily measured proxy. This “reification” (“literally, making a thing out of an abstraction about real things”) may be necessary in many cases, but such choices are never neutral or unproblematic. “Data is made, not found,” they write, “whether in 1600 or 1780 or 2022.”

“We don’t need to build systems that learn the stratifications of the past and present and reinforce them in the future.”

Perhaps the most impressive feat Wiggins and Jones pull off in the book, as they chart data’s evolution through the 20th century and into the present day, is dismantling the idea that there is something inevitable about the way technology progresses. 

For Quetelet and his ilk, turning to numbers to better understand humans and society was not an obvious choice. Indeed, from the beginning, everyone from artists to anthropologists understood the inherent limitations of data and quantification, making some of the same critiques of statisticians that Chayka makes of today’s algorithmic systems (“Such statisticians ‘see quality not at all, but only quantity’”).

Whether they’re talking about the machine-learning techniques that underpin today’s AI efforts or an internet built to harvest our personal data and sell us stuff, Wiggins and Jones recount many moments in history when things could just as easily have gone a different way.

“The present is not a prison sentence, but merely our current snapshot,” they write. “We don’t have to use unethical or opaque algorithmic decision systems, even in contexts where their use may be technically feasible. Ads based on mass surveillance are not necessary elements of our society. We don’t need to build systems that learn the stratifications of the past and present and reinforce them in the future. Privacy is not dead because of technology; it’s not true that the only way to support journalism or book writing or any craft that matters to you is spying on you to service ads. There are alternatives.” 

A pressing need for regulation

If Wiggins and Jones’s goal was to reveal the intellectual tradition that underlies today’s algorithmic systems, including “the persistent role of data in rearranging power,” Josh Simons is more interested in how algorithmic power is exercised in a democracy and, more specifically, how we might go about regulating the corporations and institutions that wield it.

Cover of Algorithms for the People, by Josh Simons (Princeton University Press)

Currently a research fellow in political theory at Harvard, Simons has a unique background. Not only did he work for four years at Facebook, where he was a founding member of what became the Responsible AI team, but he previously served as a policy advisor for the Labour Party in the UK Parliament. 

In Algorithms for the People: Democracy in the Age of AI, Simons builds on the seminal work of authors like Cathy O’Neil, Safiya Noble, and Shoshana Zuboff to argue that algorithmic prediction is inherently political. “My aim is to explore how to make democracy work in the coming age of machine learning,” he writes. “Our future will be determined not by the nature of machine learning itself—machine learning models simply do what we tell them to do—but by our commitment to regulation that ensures that machine learning strengthens the foundations of democracy.”

Much of the first half of the book is dedicated to revealing all the ways we continue to misunderstand the nature of machine learning, and how its use can profoundly undermine democracy. But what if a “thriving democracy”—a term Simons uses throughout the book but never defines—isn’t always compatible with algorithmic governance? It’s a question he never really addresses. 

Whether these are blind spots or Simons simply believes that algorithmic prediction is, and will remain, an inevitable part of our lives, the lack of clarity doesn’t do the book any favors. While he’s on much firmer ground when explaining how machine learning works and deconstructing the systems behind Google’s PageRank and Facebook’s Feed, there remain omissions that don’t inspire confidence. For instance, it takes an uncomfortably long time for Simons to even acknowledge one of the key motivations behind the design of the PageRank and Feed algorithms: profit. Not something to overlook if you want to develop an effective regulatory framework. 

“The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.”

Much of what’s discussed in the latter half of the book will be familiar to anyone following the news around platform and internet regulation (hint: we should be treating providers more like public utilities). And while Simons has some creative and intelligent ideas, I suspect even the most ardent policy wonks will come away feeling a bit demoralized, given the current state of politics in the United States. 

In the end, the most hopeful message these books offer is embedded in the nature of algorithms themselves. In Filterworld, Chayka includes a quote from the late, great anthropologist David Graeber: “The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.” It’s a sentiment echoed in all three books—maybe minus the “easily” bit. 

Algorithms may entrench our biases, homogenize and flatten culture, and exploit and suppress the vulnerable and marginalized. But these aren’t completely inscrutable systems or inevitable outcomes. They can do the opposite, too. Look closely at any machine-learning algorithm and you’ll inevitably find people—people making choices about which data to gather and how to weigh it, choices about design and target variables. And, yes, even choices about whether to use them at all. As long as algorithms are something humans make, we can also choose to make them differently. 

Bryan Gardiner is a writer based in Oakland, California.
