Episode 300 – Algorithmic Warfare with Andy Lee Roth

Andy Lee Roth of Project Censored tells us how big tech controls what we see and hear. Who needs a state when corporations take the reins?

**Milestone 300! We dedicate this, the 300th weekly episode, to our loyal listeners, and we wish to recognize the valiant work of our underpaid podcast crew – correction: our unpaid podcast crew – who have put in thousands of hours editing audio, correcting transcripts, writing show notes, creating artwork, and posting promos on social media. To have the next 300 episodes delivered to your inbox as soon as they’re released, subscribe at realprogressives.substack.com**

Project Censored has been a valuable resource for Macro N Cheese. This week, sociologist Andy Lee Roth talks with Steve about information gatekeeping by big tech through their use of AI algorithms to stifle diverse voices.  The discussion highlights historical and current instances of media censorship and looks at the monopolization of news distribution by corporate giants like Google, Facebook, and Twitter.

In an economic system that is fully privatized, trustworthy journalism is another casualty. News, which should be treated as a public good, is anything but.

Andy Lee Roth is associate director of Project Censored, a nonprofit that promotes independent journalism and critical media literacy education. He is the coauthor of The Media and Me (2022), the Project’s guide to critical media literacy for young people, and “Beyond Fact-Checking” (2024), a teaching guide about news frames and their power to shape our understanding of the world. Roth holds a PhD in sociology from the University of California, Los Angeles, and a BA in sociology and anthropology from Haverford College. His research and writing have been published in a variety of outlets, including Index on Censorship, In These Times, YES! Magazine, The Progressive, Truthout, Media Culture & Society, and the International Journal of Press/Politics. During 2024-2025 his current work on Algorithmic Literacy for Journalists is supported by a fellowship from the Reynolds Journalism Institute.

projectcensored.org

@ProjectCensored on Twitter

[00:00:00] Steve Grumbine: All right, folks, this is Steve with Real Progressives and Macro N Cheese. Folks, I have a guest today who is part of the Project Censored team. We have talked to a lot of the Project Censored folks, largely because they have a lot of really important information that directly ties to things that we feel are very important.

As we’re trying to educate folks about modern monetary theory [MMT], we’re trying to educate folks about class and society and, you know, quite frankly, we have really been focused heavily on the lack of meaningful information about Gaza. And the poor, poor understanding that rank and file Democrats and Republicans have about Gaza.

I mean, you see so much warped thinking. So much limited understanding of history. And, quite frankly, a lot of censorship. So, Project Censored’s been our friend. They have really, really helped us understand situations that have, you know, been baffling. Or maybe people are looking for more empirical evidence that the things we’re saying are true.

Today is no different than that. We’re going to talk about algorithms and artificial intelligence. This is an extremely important topic to me personally, but I think also to the larger Real Progressives nonprofit organization and the people that work with us and follow us. So with that, let me introduce my guest.

My guest is Andy Lee Roth. And Andy Lee Roth is the associate director of Project Censored and is interested in the power of news to shape our understanding of the world. And he’s got some really important stuff going on that we’re going to talk about today. In particular, about algorithms and AI [artificial intelligence]. But without further ado, let me bring on my guest, Andy Lee Roth. Welcome to the show, sir.

[00:01:46] Andy Lee Roth: Hi, Steve. It’s a pleasure to join you on Macro N Cheese.

[00:01:50] Steve Grumbine: Absolutely, it’s funny when Mischa [Geracoulis] was telling us, Hey, you got to talk to him. You know, it was like, I thought I did. And I realized that I had spoken with Mickey Huff and that, at that point in time, there was a possibility of you both being on, but you had a conflict. So we went ahead and went with Mickey Huff that time.

And here we are. And all this time, I’ve been thinking, in my head, I have interviewed this man. But I have not interviewed this man. And I’m very happy to interview you today. So thank you so much for joining me.

[00:02:20] Andy Lee Roth: You bet, Steve. I have a colleague who jokes that Project Censored, which is a critical media literacy education organization that champions independent journalism – my friend and colleague jokes that Project Censored is a little bit like the Grateful Dead – that we’re kind of always on tour, but with a rotating cast of musicians, like, on the stage.

So, yeah, this is the version you get today.

[00:02:43] Steve Grumbine: Well, I am absolutely a “Deadhead,” too. So that works for me, man. All right. So let’s talk about this real quick on a more sober element here. And that is, I know we had a little talk offline. I’ve been very concerned that the media, itself, has been skewing, not just the media, but the algorithms and social media lifting up voices that are really not telling the truth or the full story. They’re always slanted in favor of, what I consider to be, a genocidal apartheid regime.

And Israel is always slanted with the focus of, Oh, but do you condemn Hamas? And they don’t understand the entirety. Going back to the founding of Israel. And it was an AstroTurf country, really. And you go on further back, I mean, my goodness. The Palestinians – there are manhole covers that show – before they gave it over to Israel – that it said Palestinian whatever.

I mean, the fact is that we are not being fed information in a proper fashion. And the kind of information that we need to hear – it’s not making it to the people. It’s being stifled [and a] bill is being put on top of it. And that really troubles me. It troubles most of the people that follow us.

And the other factor that, I think, is interesting is that when we go and ask something like ChatGPT, Tell us about what’s going on in Israel and Gaza – it gives us what appears to be a very neutral response. But it’s not neutral at all. It’s, quite frankly, very slanted and very biased and very intentional in elevating the Israeli occupation of Palestine. So help me understand. It’s a big topic, but this is what you’re doing work on right now, is it not?

[00:04:31] Andy Lee Roth: Yeah, and thank you for the opportunity to talk about this. I should preface my answer to your fantastic question by saying that, yeah, the work I’m doing on Algorithmic Literacy for Journalists is a project sponsored by the Reynolds Journalism Institute at the University of Missouri. And I’ve been fortunate, and the project’s been fortunate, to have RJI, the Reynolds Journalism Institute, select us from among some 200 projects that they reviewed last year for this year’s fellowships.

And so, I’m grateful for that support to make this work possible. So to your question about what kind of information are we getting? Are we getting sufficient information? Are we getting quality information? Are we getting disinformation or misinformation about Palestine? Let me break that question down into two parts and first talk about the establishment corporate media in the United States. And then talk about what’s happening in terms of social media and other algorithmically-driven platforms.

So, in terms of the US corporate media, it’s simply a fact of the matter – and Mickey Huff, the project’s director, and I have written about this for Truthout – it’s a matter of fact and record that the US corporate media have, for decades, treated Gaza’s inhabitants as non-persons and daily life in Gaza as non-news. And this is an example of something that we see as a wider pattern, which is how news media omissions often function as tacit permission for abuses of power.

So the corporate media, the US corporate media, didn’t create the violent, inhumane conditions in Gaza, now expanding out to the West Bank and to Lebanon and beyond. But the corporate media, their shameful legacy of narrow pro-Israel coverage, indirectly laid the groundwork for the atrocious human suffering taking place now. And so the corporate media’s extended erasure of Gaza and its inhabitants is, I think, certainly rooted in what is tacit, and sometimes overt, racism that distorts so much of what we have in terms of news coverage of the Middle East, in general, and Palestine, in particular.

But that misleading coverage is not just something like institutional racism. It’s also how corporate news outlets define what counts as news and who counts as newsworthy. And the corporate media has a kind of myopic focus on dramatic events rather than long-term systemic issues.

So, the media critics, Bob Hackett and Richard Gruneau, noted in their book from the year 2000, The Missing News, that, for corporate media, the news is about what went wrong today and not what goes wrong every day, right? And that kind of encapsulates a whole problem with a lot of the news and information we receive here in the United States through these kind of corporate legacy channels of traditional news, right?

Second part – If we shift and talk about what’s happening in terms of social media where people increasingly are turning for their news, right? If I go into a classroom and tell my students that I still read a paper newspaper at breakfast every morning, there’s nothing I could tell them about myself that would make me seem more foreign and alien to them.

So if we turn to social media and say, well, that’s now the way that most people are getting their news about the world – whether it’s Israel and Palestine, the current election, you name it – there we can start talking about algorithmic content filtering. And perhaps even censorship isn’t too strong a word.

In 2020, some Project Censored colleagues of ours, Emil Marmo and Lee Major, did a survey of the de-platforming and de-ranking of all kinds of alternative media voices on social media online. So this is any kind of voices that, by their account, were dissident, counter hegemonic, alternative, and they looked from the libertarian right to the anarchist left. And they found dozens and dozens of cases where alternative independent news outlets that were covering these kind of institutional issues, right?

Racism. Environmental degradation. Class struggle. Illegal government surveillance. Violence by state actors, whether domestically or abroad. Corporate malfeasance – all the kinds of independent outlets doing that kind of reporting and using social media to reach their audiences were all subject to a variety of forms of online censorship.

With the advent of the web and social media – on one hand, we’re faced experientially with this kind of onslaught of information and what seems like an incredibly diverse range of perspectives. But it’s incorrect to assume that, for instance, like our social media feeds or our search engines are in any ways, at all, neutral conduits of information and perspective. We’re talking here about big tech platforms that are increasingly using algorithms and other forms of AI systems to, basically, filter what kind of news circulates widely.

Now, the first kind of obvious thing to say is, Oh, well, not everything is throttled, Andy, come on, right? You know, we have a First Amendment. And there are two kind of, I think, quick important answers to put on the table there. First, blockades of information and perspective don’t have to be complete to be effective. The fact that, if I go and look for alternative perspectives online that I can find some, doesn’t mean that this kind of algorithmic throttling isn’t taking place. And notice, like, if I’m just cruising around, not actively looking, I may never find those sources.

So that’s one kind of problem. The other is that, I think, if we’re talking about censorship in the United States broadly, and we’re talking about, you know, most Americans think of the First Amendment and Congress shall make no law, right? But the First Amendment is about what governments can do and can’t do in terms of freedom of information and freedom of the press and freedom of expression. The First Amendment doesn’t impose any restrictions on corporate control of those fundamental freedoms.

And that’s a crucial issue that is part and parcel, I think, of the kind of coverage we’re seeing and what we’re not seeing in terms of things like Israel/Palestine.

[00:11:06] Steve Grumbine: So let me ask you a question. When I think of the First Amendment, and you stated it perfectly, obviously. But when I think about that, I’m thinking about it as: the government cannot do these things. And this is the key thing about really understanding the role of capital, the role of our founding institutions in government with private property, and the role that private property has in determining where laws begin and end.

And the fact that these platforms represent private property, private ownership, not government. It’s kind of like a freemium service. And if you play on their field, you play by their rules. So where does the First Amendment come in when it comes to understanding these sorts of relationships with algorithms and AI?

I mean, people feel like, Hey, my freedom of speech is being stifled. But it’s being stifled through a private platform. And, I want to make this clear, I’m not for private platforms. I’m very much for a public commons. But with that in mind – you know, complaining that private entities are somehow or another squashing them – is this of their own volition, or is this with government coercion, or is it a combination? How does that play out?

[00:12:20] Andy Lee Roth: Yeah, that’s it. Thank you again. Another great question. I’m going to preface my answer by saying, my training and background is as a sociologist. And this sociological concept from over a hundred years ago is actually quite useful.

In 1922, a sociologist named William Ogburn coined the term “cultural lag.” And what Ogburn meant by cultural lag is the gap in time between the development of new technology – he used the more fancy social science word, material culture – but basically, the gap between the development of new technology and then the cultural development of norms and values and laws that will guide the proper acceptable use of that technology. And where there’s cultural lag, the idea is the tech develops before the norms and the values and the laws can catch up.

And I think with AI right now, part of the big debate about AI manifests, I think, the ripples, the consequences, the turmoil that develop as a result of cultural lag, right?

AI tech is developing at such a fast pace now that there’s no way that the laws and regulations, much less these larger cultural values and norms, have had a chance to adapt. And the result is in that lag, the people who control these tools, their power is enhanced, right?

If we’re talking about, kind of, a shift to this new era of AI and the kind of algorithmic gatekeeping that my work is presently focused on, what I think the most fundamental question to ask is how this new technology is shifting who holds and wields power. And we know, historically, that control over infrastructure confers power on those who control that infrastructure.

So going back then to the First Amendment, Mickey Huff and Avram Anderson, who I should give a shout out to – Avram Anderson is an expert on information science, who’s on the faculty and works in the libraries at Cal State University, Northridge, one of my best colleagues – a little over a year ago, Mickey, Avram, and I wrote an article, there’s a version of it on the Project Censored website, about what we call censorship by proxy. And it goes straight to the question that you’re asking about the First Amendment and government and corporations.

And the idea of censorship by proxy is there are three defining elements of it. It’s censorship that’s undertaken by a non-governmental entity; it goes beyond the kind of censorship that a government entity could undertake on its own; but – the third element – it serves governmental interests, nonetheless, right?

And so, I think that this is a 21st century – well, this is in some ways not a new form of censorship. There’s long been collusion, going all the way back to the Spanish American war, for instance. Collusion between the press and the government to promote imperialist interests, say. But the 21st century version of censorship by proxy that Mickey, Avram, and I wrote about is all about how these big tech companies are playing, in effect, a gatekeeping role that in many cases serves government interests. And one of the examples we described, which I’ll just gloss briefly now – and if you want to go into more detail, we can – is the closure a few years ago of RT America, which was the Washington, DC based station of the RT (Russia Today) network.

A number of fantastic, prize-winning American reporters reported for RT America. Chris Hedges had a show on RT America. Abby Martin had a show on RT America. RT America had been, basically, harassed by the government for some time. US-based journalists who are US citizens were, because they worked for RT America, first forced to register with the government as foreign agents. In the end, important press freedom organizations called this out and said this is a form of indirect harassment that makes the journalists’ jobs harder to do.

So there’s a long track record. And this all comes out of the furor, around the 2016 election and after, about Russian interference in US elections – that’s the wedge here. So around that time there’s clear evidence in security reports, and so forth, that the government would like to shut down RT America. They’d like to shut down RT, but they can’t.

Then Russia invades Ukraine and there is a moral panic about Russian influence that is widespread in the US. And there are many factors behind that that I won’t try to explicate now. But one of the upshots is that there’s what appears to be a kind of public campaign pressuring the platforms that carry RT America to deplatform RT America. And that public campaign, driven by that moral panic about Russia, is successful. RT America is driven off the air.

And that’s an example of what we mean by censorship by proxy. The government didn’t have to shut down RT America. But it very much suited foreign policy interests in the government, and people who are concerned about, kind of, quote, “disinformation in the United States,” to have RT America go off air and not be a presence in the United States any longer. So, censorship by proxy.

[00:18:02] Steve Grumbine: I see guys like Ken Klippenstein, here, recently get suspended from X just by simply reporting out on the news of the day, if you will. And, yes, from what I understand, he aired some information that was already publicly available, but he aired it and Musk deplatformed him.

YouTube has been deplatforming alt media like, insanely, lately. And then, lo and behold, most recently, I think it was two days ago, [a] gentleman from the Grayzone was kidnapped, captured, beaten, and jailed in Israel for reporting out on the genocide that is being committed over there right now. Currently.

I mean, there is definitely some form of collusion going on. The idea that police agencies can buy information from social media platforms. They can’t directly monitor it. But if it’s given, because it’s for sale publicly anyway, because these groups are selling the information, the authorities are getting it through that backdoor route, as well.

Is that part of what you’re talking about?

[00:19:10] Andy Lee Roth: Yeah, I mean, that’s certainly an important part of the project, or important part of the big picture. Not so much a part of my project, but, we definitely have colleagues at the Electronic Frontier Foundation [EFF] who have tracked that, sort of, use of tech in law enforcement for a long time.

And you know, one of the things I learned from folks at EFF is about the idea of, quote, “parallel investigations,” where the information that law enforcement acts on is not necessarily the sort that would be allowed in court, because it’s oftentimes been acquired through inappropriate means. But once the information is held – law enforcement now knows that they’ve got this – they can use parallel investigations.

They can figure out another way to legitimately get the data. And then it’s useful in court for a prosecution of someone who might otherwise not have, you know, not have been prosecuted. And, I guess, there’s grounds for debate, perhaps, about like, well, don’t we want law enforcement to be able to go after crime when it occurs? But then that, of course, raises basic questions about who’s defining what counts as criminality.

Those are tricky areas where the tech, again, like the basic thing is like control over tech, control over infrastructure confers power, right? Thinking about this, like I mentioned, my background [is] in sociology, and maybe I can dork out with you for a moment here.

[00:20:38] Steve Grumbine: Let’s do it.

[00:20:40] Andy Lee Roth: I’m using this term “algorithmic gatekeeping” and that term gatekeeping has a legacy in, kind of, the sociology of journalism and critical communication studies of political discourse.

The original ideas around gatekeeping go back to studies that were done in the 1950s. There were two studies in particular. One by a guy named David Manning White and another by a researcher named Walter Gieber. And they developed this concept, and for a while it was the dominant paradigm, in thinking about how news and information flows through mass media, right? A term we don’t talk about a lot anymore, but, at the time, that would have been appropriate.

They both did studies of local newspaper editors who were getting stories coming over the wire service and making determinations about what stories to run and what stories not to run. And the kind of famous upshot of the first of these studies, the David Manning White study, was that, newspaper editors exerted personal bias in how they chose those stories. And that personal bias was, specifically, as White’s study was presented and subsequently remembered, as political bias. Personal political bias.

The reality is that in White’s study, 18 of the 423 decisions that he examined involved decisions like, oh, this is pure propaganda; or that story is too red. But the report that White published about his findings talked about how highly subjective, how based on the gatekeeper’s own set of experiences, attitudes, and expectations, the communication of news was.

A few years later, and I promise this will come back around to algorithms in a moment, there’s an interesting, like, return, a boomerang kind of quality to this. A few years later, that study was duplicated, looking at multiple wire editors rather than a single editor, by a researcher named Walter Gieber. And his conclusions basically refuted White’s conclusion that gatekeeping was subjective and personal.

Gieber found that, basically, decisions about what stories to run were a matter of, like, daily production and what he called, quote, “bureaucratic routine.” In other words, no political agenda to speak of. And lots of subsequent studies reinforced and refined Gieber’s conclusion that professional assessments of newsworthiness are based on professional values more than political partisanship, and so forth, and so on.

That gatekeeping model was like the dominant paradigm for understanding news for decades. And it was finally displaced by others in, kind of, the, maybe, the 80s or 90s. And at the time, actually, a very prominent sociologist of the history of news journalism in the United States, Michael Schudson, probably put the final nail in the gatekeeping model in a 1989 article when he said, well, the problem with that model is it leaves information sociologically untouched.

It’s a pristine model. The idea that news comes in already formed and an editor decides what to promote and what to leave on the floor – it’s too simplistic for how we know the news production process actually works.

So Schudson making that call – basically, maybe it gives him too much credit to say he put that final nail in the coffin – but he certainly gave voice to what was the sense among a lot of people studying these things at the time – that that model was kind of primitive and no longer relevant.

The argument I’ve made more recently is that with the advent of the internet and these big platforms like Meta and Microsoft and others, that the old gatekeeping model is actually completely relevant once again. Because the entities that are now controlling the flow and distribution of news – Google, Facebook, Twitter/X, etc. – these corporations don’t practice journalism themselves, and all they’re really doing is facilitating the passing on of the content.

So the Schudson critique – that news is, sort of, untouched except for how it’s distributed in the old gatekeeping model – actually accurately describes what we have in a kind of algorithmic gatekeeping model of news now. And I think that’s a really important thing. It’s one way of saying like, you know, we might wish we could go back to an era where editors who are themselves news professionals make the judgments rather than tech companies.

Or, to get more pointed, rather than the algorithms and other AI systems of those tech companies. And there’s one more point on this. You couldn’t go and do today the kind of studies that White or Gieber did in the fifties, right? Because all the algorithms are treated by the companies that control them as proprietary information.

There have been lawsuits to try to get, say, the YouTube algorithm cracked open. Not for the public to see how it works, but for an independent third party to examine it for whether systemic biases are baked into the algorithm or not. Or whether the algorithm is just being gamed by people.

And those lawsuits have been class action lawsuits. And the courts have, consistently, decided in favor of the big corporations and their proprietary control over the algorithms. So part of what we’re up against, again, is a form of corporate power. There’s more to say about that, but maybe I should halt there for a moment.

[00:26:16] Steve Grumbine: That is very important to me because what I’m hearing – and this plays a part in much of what we do here – this is directly, once again, placing the most important factor on the ownership of the algorithm. It’s saying, this is mine. My private thing. I own this. This is my intellectual property. And it’s not for you to know what we do or how we do it. You just need to know that the news and the stuff that you get is a pure and pristine . . . Blah blah blah.

But in reality, right there, in and of itself, we know fundamentally that this is a long-standing, private property versus public interest, kind of story. Once again, you can almost go down the line and see that every one of these issues is an issue of private ownership versus the public commons and the public interest.

Andy Lee Roth: Yeah, we at Project Censored have made strong arguments. And allies of ours, like Victor Pickard at the Annenberg School at the University of Pennsylvania, have made strong arguments that we need to start thinking of journalism as a public good, right? And that just like clean air, and clean water, and other things that are fundamental to life, if we’re going to have a democratic society, we have to have a kind of a sense of news as a commons, right?

Trustworthy news as a commons. So a major blockade to that right now is when we have companies like Twitter and Facebook and their parents – if we’re talking about like Instagram and others, right – and the parent companies, Meta, Google . . . Zuckerberg won’t even acknowledge that Facebook is a communication platform, much less, you know, anything like, an outlet that provides journalism.

Intermission 

And so there’s a huge disconnect. Just a yawning gap between journalism as a profession, as a discipline which is guided by a code of ethics, right, that says good journalism –

I’m now quoting from the Society of Professional Journalists Code of Ethics, which is a brilliant document, and anyone who cares about news should have a look at it – basically, in a nutshell, good journalism is independent. It’s accountable and transparent. And it seeks to report truth and minimize harm.

And I could go into more detail on each one of those individual points. But the key point I want to make now is that the big tech platforms we’re talking about are committed to none of those principles. They aren’t committed to being accountable. They aren’t committed to being transparent.

They may talk about providing opportunities for people to seek the truth and pay lip service to the idea of minimizing harm. But on each and every one of those points, if we did a mock debate or a, sort of, imaginary courtroom session like you do in high school, I think it wouldn’t be a big challenge to come up with a conviction of the major corporate tech companies. Many of which are US-based, but they have a global reach in terms of those standards, right?

And so the idea that journalism is increasingly at the mercy of these big tech platforms – because the big tech platforms have, basically, swooped up most of the advertising revenue that historically has made journalism a viable commercial enterprise in the US – the fact that they’ve not only come in and swooped up the advertising revenue, but they’re also now gatekeeping content – you know, any kind of content that doesn’t fit with sort of a status quo understanding of the US, domestically or in the world – is a serious, serious problem.

And it’s another way, I think, for people who follow the work of [Edward] Herman and [Noam] Chomsky and their propaganda model, as they outlined in Manufacturing Consent, this is a new wrinkle on those filters. But it conforms in almost all the fundamental ways to the propaganda model that Herman and Chomsky outlined in 1988.

[00:28:09] Steve Grumbine: I’d like to touch back on that momentarily. But before I do, I want to just say this. It seems to me like the concept of a dialectical perspective is completely lost. Like when you hear the news about the economy and how the economy is going, it’s almost always told from the perspective of the investor.

It’s almost always told from the position of the wealthy. It’s almost never told from the perspective of the working class. And when it is, it might be some subheading or some substatement deep in the article. But there just doesn’t appear to be any balance in terms of whose interests are being advanced.

Cause it’s impossible to have an impartial news story. It always depends on from what vantage point you’re talking. You can look at the French Revolution. And if you were just looking at it from Napoleon’s perspective, I mean you’d hear a whole different story than you would if you were talking to a peasant, or someone who was a bread maker, or maybe somebody who was part of a guild. It’s just a completely different perspective.

Before we go into those things that I want you to dive deeper into, I want to take a moment and just ask you, do you see a place for dialectical perspectives? Because I don’t see any representation of that in any of the outlets.

[00:32:13] Andy Lee Roth: Yeah, actually, a lot of my work, going back to the dissertation I did at UCLA in sociology on broadcast news interviews, was about, basically, in blunt terms, the narrow range of people who are treated as newsworthy by the corporate press. And there’s a longstanding tradition of this kind of research showing that the strongest bias in the establishment press is for official sources.

So sources who represent some agency, whether it’s a government agency or a corporate agency. And if you aren’t affiliated in one of those ways, you’re much less likely, there are much more restricted conditions under which you might be treated as a newsworthy source of information or perspective.

And you’re quite right. One of the most obvious examples of that is how, in conversations about the economy, working people are almost always, if not invisible, more talked about than heard from. This goes to a concept that sociologists who study news use, developed by William Gamson, called media standing.

And Gamson’s idea of media standing is not just who gets talked about in the news, but who gets to speak. Borrowing the term standing from legal settings – who is authorized to speak as a source. And again, there’s just a massive and growing body of literature that shows – and this is one of the reasons Project Censored champions independent journalism – that there’s a wider range of people treated as newsworthy.

There’s a more inclusive definition of newsworthiness in terms of who might be a useful source on a given story. You can see this happening – to tie this back to algorithms. And here I want to put in a plug for a fantastic book that’s just out in the last week or so.

The book is called AI Snake Oil, and it’s by a pair of information science tech researchers from Princeton, Sayash Kapoor and Arvind Narayanan. In that book, they talk about some of the pitfalls of journalism’s coverage of AI. And one major area they talk about is uncritically platforming those with self-interest.

So treating company spokespeople and researchers as neutral parties. Right? Repeating or reusing PR terms and statements and not having discussions of potential limitations. Or that when you do talk about limitations, the people who are raising those concerns are treated as quote “skeptics” or quote “Luddites.”

And all these are characteristics of, you know, a lot of the news reporting we’re getting, at least from mainstream kind of news . . . so-called mainstream establishment news outlets. And part of what motivates the Algorithmic Literacy for Journalists project that I’m working on is to reiterate these points.

Like – when you talk to a company spokesperson, think about how they may be overly optimistic about the potential benefits of the tool that they’re developing, right? When you use terms from a PR statement, it’s important to consider whether those terms might be misleading or overselling, and so forth, and so on.

And, also, the idea that there are other kinds of sources who could well be newsworthy. So people who focus, for instance, on the ethics of tech development; activists who understand how algorithms – how the use of algorithms, say, by local law enforcement – are affecting minority members of the community.

All these kinds of things are part of that issue that you raise, which is one of the things I learned in grad school: the idea that sources make the news. Journalists’ understanding of the world is heavily shaped by the sources who they have regular contact with and who they turn to for information and perspective.

And so the selection of sources is kind of a fundamental bedrock area where we can see whether news is inclusive and diverse or whether it’s exclusive and reinforces status quo arrangements – even when those status quo arrangements are rife with systemic inequalities and injustices.

[00:36:32] Steve Grumbine: Absolutely. That was well stated. So I want to jump to your current work. The work that you’re in the middle of right now. Can you tell us about your project and give us a background in that?

[00:36:44] Andy Lee Roth: Yeah, it’s kind of a year-long project that, if all goes well, will be complete in February or March of next year. In the meantime, the Reynolds Journalism Institute, which, as I mentioned, is based at the University of Missouri, publishes each month’s updates from myself and my fellow fellows in this year’s 2024-2025 RJI class. So you can go to the RJI website, which is RJIonline.org, and follow some links, and you’ll get to the fellows in general, and my reports, in particular.

So you can find the first, kind of, progress report that I published with RJI, called Big Tech Algorithms: The New Gatekeepers. And that covers many of the topics we’ve just been talking about now. But also, more recently, I just published, with Avram Anderson, my colleague who I mentioned earlier, an article on recognizing and responding to shadow bans.

So this is a social media phenomenon where content isn’t taken down. It’s not censored. It’s just not made visible to anyone other than the person who posted it themselves. And lots of news organizations are using social media to promote their stories, especially news organizations that report on things like systemic racism, police violence, the situation in Palestine, and beyond.

Many news organizations, when they try to promote their reporting through social media platforms like Instagram, are subject to shadow banning. And I know that Mint Press News, which is based in the Twin Cities Minneapolis area, has been subject to shadow banning. For the article on shadow banning, Avram and I talked to Ryan Sorrell, who’s the founder and publisher of the Kansas City Defender. And he told us amazing stories about the restrictions that have been placed on their social media content and some of the strategies that they’re using to get around them.

So the point of the project is not to say, Oh my gosh, the sky is falling. We’re all doomed. Journalism’s screwed. The point of the Algorithmic Literacy for Journalists Project is to give people – journalists and newsrooms – the tools they need to push back against this kind of algorithmic gatekeeping. So the article about responding to shadow bans has five specific recommendations for how to recognize and respond to shadow bans if you’re a reporter or a news outlet whose content is being restricted that way.
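[To make "recognizing" a shadow ban concrete, here is a minimal Python sketch of one quantitative check a newsroom might run on its own exported post metrics – comparing average reach before and after a suspected restriction date. The dates, numbers, and the check itself are hypothetical illustrations, not drawn from the article's five recommendations.]

```python
# A minimal sketch of one way a newsroom might check its own exported post
# metrics for a suspected shadow ban: compare average reach before and after
# the suspected date. All values here are hypothetical placeholders.
from datetime import date
from statistics import mean

# (publication date, impressions) from a hypothetical analytics export
posts = [
    (date(2024, 9, 1), 12000), (date(2024, 9, 4), 11500),
    (date(2024, 9, 8), 12800), (date(2024, 9, 15), 2100),
    (date(2024, 9, 18), 1900), (date(2024, 9, 22), 2300),
]

suspected_start = date(2024, 9, 12)
before = [views for posted, views in posts if posted < suspected_start]
after = [views for posted, views in posts if posted >= suspected_start]

drop = 1 - mean(after) / mean(before)
print(f"Average reach fell {drop:.0%} after {suspected_start}.")
# A sudden, sustained drop with no change in posting behavior is a signal
# worth investigating further, not proof of a shadow ban by itself.
```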

I’m working now with another Project Censored colleague of mine, Shealeigh Voitl, who’s our digital and print editor. Shealeigh and I are working on the next article, which will come out in a few weeks. Which is about how . . . We know a lot about what’s wrong with horse race coverage of elections. How horse race election coverage actually makes people cynical about politicians and politics. How it actually demobilizes people from voting.

Um, and what Shealeigh and I are doing is looking at horse race coverage of elections. Critiques of horse race election coverage. And seeing if there are lessons from that coverage for how journalists cover tech developments, AI tech developments. And, especially, they don’t get called horse races, they get called arms races when it’s two tech companies battling to see who can be the first to come out with some new AI tool. Especially when those AI tools are for public consumption.

So we’re trying to look at, and this will be the next step in this project – We’re looking at these lessons from flawed horse race coverage of elections to generate lessons for how journalists can better cover battles between either competing tech companies or, international competition, to develop new AI tech.

And, not to give this story away, but, basically, the idea is like – when you focus on who’s ahead and by how much, or are they losing or gaining ground, you’re missing, as a journalist, valuable opportunities to teach people about well, what can this AI tech really do? And what guardrails do we need to make sure it’s used the way it’s intended to be used? That it’s not being tested on populations. That it’s not being developed in ways that will reproduce pre-existing inequalities, and so forth, and so on.

So, yeah, the project in a nutshell: if anyone checking out our conversation here today is a journalist and you’d like to help in the development of these tools, I am actively enlisting journalists and newsrooms to help vet the toolkit once it’s ready. And that’ll happen later this year. So we’re kind of putting out pieces of it as it’s in development. But also seeking feedback – especially from journalists and other news professionals – before the final product launches in what will be February or March of next year.

[00:42:07] Steve Grumbine: That’s really fantastic. It’s good to have a feedback loop and not be locked in your own little, you know, reinforcing.

[00:42:15] Andy Lee Roth: Yeah, there’s no point. One of the challenges . . . I’m used to developing media literacy materials for classroom use, and I’ve taught sociology courses of my own for years. But preparing something that will work in a classroom is really different than preparing something that will be useful to journalists.

Of course, teachers and students all have time pressure on them, but the time pressures that journalists work under are extraordinary. And the financial pressures on journalists are just as astonishing. And so to create tools that journalists might actually take the time to look at and consider using requires a special sort of directness. That’s part of what’s interesting and challenging for me about this project – that transition. Making things just very straightforward and direct.

So I mentioned, partly the motivation is I think we can have better coverage of AI if journalists have more algorithmic literacy themselves. Of course, the ultimate goal of that is to have a better informed public. So that we can be resistant to hyperbole when we hear boasts about AI – what one of the researchers whose work I’m looking at calls hyperbole. Whether that hyperbole is doomsday scenarios, or whether that hyperbole is positive in the sense of AI is going to save us. Whether it’s the doomers or the boomers, just like any news, we need a kind of a media-literate public that knows how to parse claims for whether they’re trustworthy. Whether they’re supported by evidence or not.

So hopefully, if the project is successful, we might bump the needle a little bit in terms of how journalists do their work. Which then, in turn, helps the public be more informed. Better informed. More engaged around these issues.

[00:44:06] Steve Grumbine: You know, I look at AI and I want to ask you a couple of questions about this before we get off of here. This is really important to us. You know, I, frequently, will put a question into ChatGPT just to see what the response will be. And almost invariably, if I don’t put qualifiers in my question, for example, please provide me with a class-based understanding of the United States Constitution.

If I just say, please summarize the United States Constitution, I am going to get a glorification of this document. It’s going to literally salivate all over itself. And I read it and I go, wow, this is the greatest thing ever. My goodness, what a wonderful thing. But if I ask it to give it from the perspective of an African American, it might say some things differently. It might give me a perspective. And it’s always about whose class interests are being represented by the initial answer. The unfiltered initial answer. I mean, it’s filtered. That’s how it comes up with its answer. At some level, the first output isn’t to say, Well, you know, Karl Marx once said that it was important for . . . whatever, you know? It always comes at it from a very, very matter-of-fact, so vanilla, you wouldn’t think to question it.

[00:45:33] Andy Lee Roth: Mm hmm.

Well, there’s a double-barreled quality there, I think, right? One of the barrels is obvious. One of the barrels is less obvious, right? One is, you know enough, right? Your use is fairly sophisticated in that you understand: if I don’t specify these parameters, I get kind of a generic status quo response.

So that’s one level. And that’s a kind of algorithmic literacy that if we’re going to use ChatGPT and other kind of generative AI programs the way the creators of those tools want us to use them, we have to know that. We have to know to make those distinctions and to be alert for if you don’t, that’s going to shape the kind of feedback you get.

But the second, less obvious barrel is, even if you specify the parameters, the generative AI program is only going to give you what it can produce on the basis of the data it’s been trained with. And if prejudice or inequality of one form or another is baked into that data from the start, then the program will reproduce that inequality. That injustice. Right?

So, this goes to another of the kind of, like, pitfalls in how we think about AI, right? The idea of attributing agency to AI. Or comparing AI to human intelligence. If you and I have an argument about what the Founding Fathers intended, each of us can, you know . . . that argument, in a positive sense of the term argument, right – we would make propositions and support them with evidence. And if I said, well, why do you think that, you could explain to me why you think that. AI can’t do that, right?

You can ask why is that so, and AI will churn through its calculations and come up with something, but it’s not a thinking, intelligent entity, right? Not in the sense that we tend to think of those terms, or tend to use those terms. And I think that’s really important. So the idea that – it’s the old thing that my high school chemistry teacher used to say – garbage in, garbage out, right? If the AI has trained on materials that reflect existing biases, then the AI is going to give us biases.
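[Roth's "garbage in, garbage out" point can be shown in a few lines of code. The following Python sketch is a toy simulation invented for illustration – not anything from the episode or from a real system – in which a naive model trained on biased historical decisions reproduces that bias for new, equally qualified applicants.]

```python
# Toy sketch of "garbage in, garbage out": if the historical decisions a model
# is trained on encode a bias, the model hands the bias back as a "prediction."
import random

random.seed(0)

def historical_decisions(n, group, approval_rate):
    """Simulate past decisions for one group of equally qualified applicants."""
    return [{"group": group, "approved": random.random() < approval_rate}
            for _ in range(n)]

# Two groups with identical underlying merit, but biased past decisions:
# group A was approved 70% of the time, group B only 40% of the time.
history = (historical_decisions(5000, "A", 0.70)
           + historical_decisions(5000, "B", 0.40))

def train(records):
    """A naive 'model' that just learns the historical approval rate per group."""
    rates = {}
    for group in ("A", "B"):
        subset = [r for r in records if r["group"] == group]
        rates[group] = sum(r["approved"] for r in subset) / len(subset)
    return rates

model = train(history)

# New, equally qualified applicants are scored with the learned rates,
# so the old disparity comes back out dressed up as a neutral prediction.
for group in ("A", "B"):
    print(f"group {group}: predicted approval score = {model[group]:.2f}")
```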

The other component of the kind of – if I say it’s double-barreled and the second of the barrels is less obvious – is, you know, ChatGPT has hoovered up lots of attention in terms of public discourse and pundit commentary, and so forth, and so on.

And I think it’s important to be concerned about it. It’s profound in its potential consequences for us. But I’m more concerned about all the AI systems that are operating that we aren’t consciously aware of.

[00:48:29] Steve Grumbine: Um, law enforcement.

[00:48:31] Andy Lee Roth: Right. So, you know, we all . . . Maybe a bunch of people enjoy it when Netflix tells you what movies it thinks you might like. And for a long time when Netflix did that, they would say we think there’s a 94 percent chance that you’ll enjoy this recommendation based on what you’ve watched. But when we use a search engine, a similar kind of algorithmic filtering is going on, but Google doesn’t tell you there’s a 67 percent chance that you’re going to enjoy the returns on your search that we’re giving you, right?

[00:49:04] Steve Grumbine: Uh huh.

[00:49:04] Andy Lee Roth: The process is the same, but it’s not flagged as being something that has been driven by an algorithmic assessment of your online behavior. And so it’s less visible and, therefore, we’re less conscious of it. And, therefore, without a kind of algorithmic literacy, we’re more vulnerable to manipulation of that sort.

So I think it’s right to focus – it’s good to focus – on ChatGPT and others of these kind of high profile generative AI programs. Um, I think we also need to be conscious of all the other ways that AI is shaping the information we receive about the world. And, therefore, how we understand the world. And that’s again, kind of an underlying motive for the work that I’m doing now.

[00:49:53] Steve Grumbine: Not to give away the farm here, and give away your work here before it’s time, but could you, maybe, give us some other ways that AI is used outside of, you know, algorithms for social media or news, but also outside of ChatGPT? Obviously, the military uses it. I mean, we heard AI is being used to target the people in Gaza, for example, with drones and the like. Help me understand other ways that we should be aware of.

[00:50:21] Andy Lee Roth: I think one of the most important ones, and this is actually a series of reports. The series is called Machine Bias. It was produced by ProPublica. And this was, I think the . . . 2017 [2016] was the date. ProPublica was looking at the use of algorithmically-driven software that would predict the likelihood that someone, a specific person, might commit a crime again in the future.

And what ProPublica exposed was how courts across the country were using these tools to influence judges’ sentencing of people convicted of crimes. And because the data driving the AI, in this case, was biased against Blacks, the results that ProPublica exposed – this was a series led by an independent investigative reporter named Julia Angwin, whose work is really impressive – showed how, based on these algorithmic injustices, Blacks convicted of crimes had received longer sentences.

And this was a direct result of the use of this AI technology. That study, I think, is important in its own right. It’s also what kind of launched this idea of algorithmic accountability reporting. Taking the traditional watchdog role of journalists, which is a complicated concept. Whether journalism in the United States, historically, has functioned or lived up to that watchdog ideal is a topic for another conversation.

But the idea of algorithmic accountability reporting is that the traditional watchdog role of journalism gets focused on the development of these new AI technologies. And Nicholas Diakopoulos is one of the pioneers of that work. And he often invokes in his own work the ProPublica Machine Bias series as an example where – you think about crime and courts and the legal system, and you don’t necessarily immediately think AI. And yet, right, what ProPublica showed was that already historical injustices, in terms of racism within the criminal justice system, were being amplified by the use of AI tech in the sentencing process.
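[For a concrete sense of what "algorithmic accountability reporting" involves, here is a minimal Python sketch of one kind of disparity check it can rest on – comparing false positive rates across groups. The records below are invented placeholders, not ProPublica's data; their actual data and methodology are published alongside the Machine Bias series linked in the show notes.]

```python
# Minimal sketch of a group-level error-rate audit: among people who did NOT
# reoffend, how often was each group nonetheless labeled "high risk"?
from collections import defaultdict

records = [
    # (group, labeled_high_risk, reoffended) -- invented for illustration
    ("black", True,  False), ("black", True,  True),  ("black", False, False),
    ("black", True,  False), ("black", False, True),  ("white", False, False),
    ("white", False, True),  ("white", True,  True),  ("white", False, False),
    ("white", False, False),
]

stats = defaultdict(lambda: {"false_positives": 0, "did_not_reoffend": 0})
for group, high_risk, reoffended in records:
    if not reoffended:                      # people who did NOT reoffend...
        stats[group]["did_not_reoffend"] += 1
        if high_risk:                       # ...but were labeled high risk anyway
            stats[group]["false_positives"] += 1

for group, s in stats.items():
    rate = s["false_positives"] / s["did_not_reoffend"]
    print(f"{group}: false positive rate = {rate:.0%} "
          f"({s['false_positives']}/{s['did_not_reoffend']})")
```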

[00:52:53] Steve Grumbine: So let me ask you, Andy, one last question. If you had one opportunity to say everything that we missed in this call, and, obviously, we probably missed a lot because we had an hour here. But what would be the one thing you’d want people to know that maybe we didn’t cover? Or that you think is important about this conversation?

[00:53:10] Andy Lee Roth: Thanks so much for this conversation, first.

I’m going to toot Project Censored’s horn here. I mentioned earlier, right, a major aim of the project is to help the public, and especially students, become more media literate, and especially more critically media literate. Thinking about these issues of power that we’ve been talking about throughout the conversation today is a key component of media literacy.

And the thing I would say is media literacy is . . . is itself a kind of umbrella term. We’re talking about multiple literacies, and one of them is algorithmic literacy. So, being aware when you use a search engine that it’s not a neutral conduit of information. We don’t know exactly how the Google search engine works – we can’t pry open the, so-called, black box.

But if people want to check this out for themselves, we can compare different search engines and see what kind of results they produce. And you’ll see then that search isn’t neutral, right? And that’s one example of a host of things where, in our daily lives, we’re depending on algorithms that we aren’t necessarily conscious of. And so becoming more conscious of them and calling out problems when we encounter them is, I think, one of the ways that we’re going to push back.
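[The search-engine comparison Roth suggests can be made a little more systematic. Here is a minimal Python sketch – with placeholder result lists, since collecting real results from each engine is left to the reader – that quantifies how much two engines' top results for the same query overlap.]

```python
# Small sketch of the comparison Roth describes: run the same query on two
# search engines, then measure how much their top results overlap.
def overlap(results_a, results_b):
    """Jaccard similarity of two sets of result URLs (1.0 means identical sets)."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Placeholder URLs; gather real ones by hand or through whatever access
# each engine provides.
engine_one = ["https://example.com/story1", "https://example.org/analysis",
              "https://example.net/report", "https://example.com/explainer"]
engine_two = ["https://example.org/analysis", "https://example.info/oped",
              "https://example.net/report", "https://example.biz/press-release"]

print(f"Top-result overlap: {overlap(engine_one, engine_two):.0%}")
# A low overlap for the same query is a concrete, checkable sign that results
# are being filtered and ranked differently rather than delivered neutrally.
```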

It’s also really important – and this is something journalists have not done a good job of – when there is policy or regulation that might affect how these technologies are developed and how they’re implemented, we should care about that even when it seems like dry, difficult stuff. I’m probably overstepping the boundaries of the last question here, but last November, a bunch of journalist organizations got together and issued the Paris Charter on AI and Journalism.

You can find that online easily if you look for it. The Paris Charter on AI and Journalism. This got no news coverage in the United States – except for, kind of, news reports by a couple of the organizations that were signed on as members of the coalition that proposed this. But one of the basic proposals is that journalists and media outlets and journalism support groups should be invited to the table to talk about the governance of AI, right?

That journalists remain at the forefront of the field and have a say in how it develops. And, probably, we could expand that point to say ordinary people, too.

[00:55:43] Steve Grumbine: Hmm.

[00:55:44] Andy Lee Roth: So, that’s a bigger issue in terms of policy, and that’s probably an uphill, into-the-wind battle. But I think people should be aware of those issues. And there are, definitely, resources out there, mostly produced by activists, for how to push back if you’re a member of a community that is suffering from algorithmic bias. Often these are targeted at law enforcement use of AI – so, facial recognition technologies and the like.

And there is stuff out there. It just isn’t widely covered by the establishment press. Perhaps for reasons that are obvious. So yeah, I think my key point, my key takeaway, would be that algorithmic literacy is part of media literacy. People, organizations like Project Censored and the Reynolds Journalism Institute, are making resources available to journalists and to the general public.

But, you know, it’s on all of us to be informed. And, on that note . . . Thank you for having me on the show today, Steve, and for featuring these issues on Macro N Cheese. It’s a valuable platform for getting the word out, for sure.

[00:56:53] Steve Grumbine: I really, really appreciate your time. Folks my name is Steve Grumbine, and I am the host of Macro N Cheese; also the founder and CEO of Real Progressives, a nonprofit organization. We survive on your donations. So, please, if you would like to make a donation, they are tax deductible; we are a 501(c)3.

Andy, thank you again. I really appreciate it. We will make sure, when we publish this, that there are complete show notes, extras, links, you name it. And there’s going to be an edited transcript for you all who want to read along, or maybe want to read.

Don’t forget – every Tuesday night at 8 PM we have something called Macro N Chill – which is a redo of the interviews that we do. And they’re done in video format and they’re broken down in about 15-minute segments. And we listen and we talk about them. We build community. We share knowledge. We ask questions. We agree. We disagree. Please. To make it successful, we need you.

So come on down and check us out. Uh, you’ll usually see an update on our Substack. Check it out: realprogressives.substack.com. With that, Andy, I really appreciate you joining me again today, sir.

And, for the team over here, we want to thank you and Project Censored for being great guests. And, with that folks, on behalf of my guests and myself, Macro N Cheese . . . We. Are. Out of here.

Books:

Chomsky, Noam and Herman, Edward, Manufacturing Consent 

Hackett, Bob and Gruneau, Richard, The Missing News

Kapoor, Sayash and Narayanan, Arvind, AI Snake Oil

Useful Links:

https://rjionline.org/person/andy-lee-roth/

The New Gatekeepers: How proprietary algorithms increasingly determine the news we see (published by The Markaz Review) https://themarkaz.org/the-new-gatekeepers-andy-lee-roth/

Queer erasure: Internet browsing can be biased against LGBTQ people, new exclusive research shows (published by Index on Censorship, with avram anderson) https://journals.sagepub.com/doi/10.1177/0306422020917088

On the importance of critical media literacy, more generally…  Should This Article Be Trusted? (YES! Magazine) https://www.yesmagazine.org/opinion/2020/12/21/trust-media-literacy

Others’ work discussed in the episode:

ProPublica’s Machine Bias series https://www.propublica.org/series/machine-bias

AI Snake Oil, by Arvind Narayanan and Sayash Kapoor (Princeton University Press, brand new, just published Sept. 24, 2024)  https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil

Related Articles

They’re Worried About The Spread Of Information, Not Disinformation
