SRCCON:POWER Talks: Britt Paris on media manipulation and the audiovisual fake
Day & Time: Friday, Dec. 14, 2018, at 9:45am
KIM: Y’all are, like, into it. I’ve never had the room just go silent when you want to start. Good morning.
AUDIENCE: Good morning.
KIM: Good morning!
AUDIENCE: Good morning!
KIM: That’s so much better. Did everyone have a lovely dinner last night and/or drinks and/or resting or whatever you needed to do? By the way, we won trivia last night. So today’s a little like yesterday so we’re going to start out with Britt Paris this morning and we’re going to break out into sessions and then we’re going to come back to lunch with Rachel and Ben, and close things out with another round of sessions.
So our first talk is from Britt Paris. She’s a media studies scholar at the Data & Society Research Institute, and she’s going to explore how fakes are created, used, and interpreted, which is interesting because there’s so much going on around misinformation and combating it, but not necessarily, like, how it’s made and how we interpret it as journalists. So I’m excited to see this. Britt told me she is — she actually has an undergrad degree in journalism and used to work in newspapers. So I found her!
BRITT: Is it on? Okay. So thanks so much for inviting me here for my first SRCCON to talk about audiovisual manipulation. This is a work in progress and I’m currently sort of thinking through it, so I welcome questions and discussion on it. So I’m a researcher at the Data & Society Research Institute, and they do work on media manipulation, investigating how nefarious groups game technology to fuel their political agendas. And in addition to long-form research on this topic, we hope to provide tools and knowledge for journalists covering these topics.
So before entering academia and the research world, I was, as was mentioned before, trained as a journalist. I have a degree in what they called at the time “convergent journalism.” And I worked a few years in freelance reporting and photo and video editing, but before too long, I left because I got too mad.
[ Laughter ]
And I got too mad about things that I’ve been hearing a lot of you feeling mad about, so I feel like I’m sort of in the right place. So over the last few years, I’ve been working on longer-form academic research. I’ve collaboratively produced work on what gaps and limitations in data on police homicides show us about how data collection is, first and foremost, a tool that’s wielded by the powerful to maintain social inequalities. And this work has really emphasized the need for discussions around the notion of data justice, which thinks critically about the politics of evidence. So thinking about questions like: what counts as evidence, and in what settings? Who’s involved in data collection? What knowledge is valued and undervalued, as well as what is and is not considered data in these settings?
And I seek to develop counter-data practices that expose the dominant framing of data as neutral in the policy and technology realms. So it’s in this context that I’m talking about my reinvigorated interest in communication, and what I present here is primarily a work in progress that will, if everything goes as planned, be published in April 2019. So I just want to give you a little bit of an idea of where I’m going. After this introduction, I’m going to give you an overview of the current state of audiovisual manipulation, the problem areas, and some of the ideas motivating my research. And when I say audiovisual manipulation, I’m defining it as any video that’s manipulated to increase its novelty, and I’m going to talk a little bit more about that in a moment. I treat audiovisual manipulations not necessarily as evidence of events, but as evidence of production and interpretation. I look at these videos using qualitative methods, like looking at the chain of custody and the dissemination across networks, to understand the intents, the impacts, and the effects of these videos.
So first I’m looking at the controversy surrounding pornographic deepfakes, and I contrast that with the video of Jim Acosta’s interaction with the White House intern that was posted in conjunction with the announcement that they were revoking Acosta’s press pass. So for context, I’m going to be talking about pornography, but not in detail, and I’m going to be talking about police brutality. And then, finally, I look at the implications of these videos and their sort of latent social effects, and this will cover, primarily, the ways that bodies are recontextualized in these videos, second-order disinformation, and the speed and scale of the dissemination of these videos.
So when I’m talking about manipulated audiovisual data, I mean recorded video data that’s manipulated to increase its novelty, often, but not always, through arguments that are explicitly or implicitly political. That’s sort of what I’m interested in.
And so with what’s commonly referred to as a deepfake, the video is generated through artificial intelligence and often combined with other video data to create novelty. So deepfakes are just one small category of what I’m considering to be the broader field of audiovisual manipulation, and this broader category of audiovisual manipulation is more banal in a lot of ways, but I think it’s equally laden with risk. So all of the types of audiovisual manipulation that I discuss in the report, and that I discuss here today, pose serious questions about consent, culpability, and content moderation.
So also, my definition of audiovisual manipulation for this project is related to the notions of disinformation and misinformation that are up here.
So disinformation is information that is intentionally false and seeded into communication channels with the intent to deceive for political reasons. But misinformation is information that is incorrect but without the explicit intent to deceive. So I have a clip here of audiovisual manipulation that caused debate. I think we’ve all seen it, or are familiar with it. So I’m not going to play the video.
But it was produced by Jordan Peele with BuzzFeed this year, and it sort of highlights the fact that in recent months there have been headlines about how computer scientists at research universities have successfully developed neural networks and generative adversarial networks to turn audiovisual clips into realistic but completely faked lip-synced videos, commonly referred to as deepfakes. Consumer-grade versions of these deep learning technologies are available for wide use, and more and more of these deepfakes are making it onto the Internet today. There are still elements of the uncanny valley, but there are problems to come. But I want to highlight that the first instances of these AI-generated videos came out in 2017. The name “deepfakes” for these artificial-intelligence-generated videos didn’t come from the tech industry, or some tech-industry-driven buzzword, but rather came out of porn videos. So naming conventions, right? As people who tell stories, the way that we frame these stories is important. So interrogating the videos here is a step toward developing more nuance whenever we’re talking about these videos. So, not to pick on anyone here.
I hope no one in the audience is one of those people. But it’s okay if you are. So there’s no shortage of instructions on how to make these manipulations through machine learning technology, or of pieces that decry the dangers and threats this technology poses to democracy and elections, or that propose technological fixes as the only solution to stave off these threats. But this kind of fake audiovisual content has precedents, and both the AI-generated and the non-AI-generated varieties are culpable. Both of these types of videos, AI-generated and not AI-generated, are rooted in political and sociocultural factors that contribute to their production. So my upcoming report, which I’m sampling with this talk, really tries to understand the politics of evidence in ways that are similar to, but distinct from, other historical precedents of audiovisual manipulation.
So with this work, I suggest that looking at the intents and the impacts behind these videos helps us think about ways to distinguish among these fakes that are a bit more nuanced, and that understanding the implications of these videos, I think, helps us better assess the risks that they pose to society.
So looking at the intents, the impacts, and the implications of the cases that I’m looking at today gives us a hint, or, you know, a sort of glimpse into how we might understand the politics of evidence and how they’re evoked by these videos. And the big issues relating to the politics of evidence that I’m talking about today are that bodies are recontextualized, often stripped of their rights or mobilized to justify unjust actions; that the speed and the scale that these videos now achieve make content moderation more difficult; and, I put forth, that these types of fakes represent a type of second-order disinformation. And I call it second-order disinformation because it’s a step more complex than the issues thus far wrought by textual disinformation: it’s more difficult to fact-check images, for one, and it’s more difficult to fact-check moving images on top of that. So just to give you a little preview of the larger project here: I collected 50 examples of audiovisual manipulation over the last year or so.
And I iteratively interpreted these videos into this six-P typology that you see here. The examples in this typology were chosen because they are ideal cases in which to examine the sociopolitical aesthetics surrounding audiovisual manipulation. And I want to highlight that these are not all AI-generated deepfakes; rather, I looked to broaden the window of concern around audiovisual manipulation to include other types of fakes, to show how deepfakes, again, are one subset of audiovisual manipulation worthy of study. So, to some degree, audiovisual manipulation is necessary for storytelling. We capture the palpable action and not, sort of, the often extraneous context of the crowd. In fictional storytelling, the production of fantasy or spectacle often absolutely requires sophisticated image manipulation.
So through the twentieth century, audiovisual production and dissemination were largely the realm of professionals with growing cultural, economic, and political power, who made mediated arguments to justify the hegemonic ideals of western society: colonialism, misogyny, et cetera. However, in the early 2000s, the augmented web came along with hand-held devices that allowed audiovisual capture, and people were more poised to make this change — giving people the power to tell these stories. Internet utopian Henry Jenkins wrote about “Photoshop for democracy,” where he wrote that “participatory culture becomes participatory judgment,” pointing to images that are critical of political figures. So he cites this image (and this was the base image for it) with Republican faces, Bush, Powell, and Rumsfeld, grafted onto the Three Stooges, and another image from the time, 2004, with Dean, Kerry, and Edwards posing as the Three Stooges. And he argued that Photoshop could be a tool, a medium, to meaningfully express political ideas.
Now, in the time since Jenkins wrote “Photoshop for Democracy” (and I’m not trying to pick on him; there were a lot of net utopians writing in this vein at the time), this net-utopian worldview has been thwarted and thwarted again. For example, today we see that white supremacists and misogynist trolls have mobilized the spirit of play that was once the realm of creative group participation. And you see this image of the Three Stooges with the faces of Democrats and Republicans grafted onto them. So, moving on to the cases…
So the first instances of what is now colloquially referred to as deepfakes appeared in November 2017, as I mentioned earlier, on a subreddit devoted to machine learning face-replacement algorithms. And it started with famous people like Gal Gadot having their faces grafted onto porn videos. These videos are not going to fool anyone who looks closely, but that’s not necessarily the point. The point is to play with machine learning, to get better at using these tools. Detractors say that it’s really engagement with sexual fantasy, and objectification, and even revenge. As the community has grown from the subreddit, which now bans pornography, to porn sites like Pornhub, the fakes are getting better. As the technology gets better, the fakes are improving. In an interview with Motherboard, the user with the handle “deepfakes,” from which the fakes gained their name, said that “every technology can be used for bad motivations, and it can’t be stopped,” and that he didn’t think it was bad for the average person to engage in machine learning research. But I don’t know that that’s really all that’s happening here. While it may not all be inherently bad to build communities around machine learning practice, that this instance was the first that surfaced says, I think, a lot.
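[For a technical audience, a bit of context on the mechanism discussed here: early face-swap tools of this kind are commonly described as training a single shared encoder with one decoder per identity, then “swapping” by decoding one person’s encoded face with the other person’s decoder. The sketch below is a minimal illustration of that idea in PyTorch; the layer sizes, class names, and training loop are assumptions for illustration, not the code used by that community.]

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns pose/expression structure common to both faces."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512),  # assumes 64x64 face crops
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: learns to render one specific person's likeness."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        z = self.fc(z).view(-1, 256, 8, 8)
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's face crops
decoder_b = Decoder()  # trained only on person B's face crops

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=5e-5,
)
loss_fn = nn.L1Loss()

def train_step(batch_a, batch_b):
    # Each person is reconstructed through their own decoder during training.
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(batch_a)), batch_a) + \
           loss_fn(decoder_b(encoder(batch_b)), batch_b)
    loss.backward()
    opt.step()
    return loss.item()

# The "swap": encode a frame of person A, decode with person B's decoder,
# producing B's likeness with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(torch.rand(1, 3, 64, 64)))  # placeholder crop
```

[The swap at the end is the whole trick: because the encoder is shared, it captures pose and expression, and the “wrong” decoder renders them with the other person’s face.]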
That this instance uses celebrity faces is one thing but there have been other photo and video fakes in revenge porn cases with non-public figures.
So the fact that these porn enthusiasts were the first to break this deepfake phenomenon supports the idea that this new technology is firmly situated in an environment in which its practitioners see mobilizing the identities of women, largely against them and without their consent, as a sort of unproblematic creative practice.
And this highlights three big problems. First, there’s the clear problem of harm, right? Who’s responsible for what happens, immediately and in the long term, to these women, both famous and not, when someone makes a video to ruin their day or, alternatively, to fetishize them as a sexual object? Second, this represents a deepening entrenchment, I think, of online misogyny that percolates strongly on platforms like Reddit, and Discord, and YouTube, and secondarily on sites like Pornhub. And third, the fact that these videos involve two people, the body actor of the porn video and the face actor, makes litigation very difficult. And while there are theoretically many paths for someone to stop pornographic videos of themselves from being disseminated online, especially if they’re not a public figure, there’s not a single law or suite of laws that can undo the multiple types of damage that these deepfakes pose.
So moving on to my second example. We’re probably all pretty familiar with this one; it’s pretty recent. This is a recent example of an outright fake from the most powerful office in the world: Sarah Huckabee Sanders posted a video of Jim Acosta that was used to justify their decision to remove his press credentials, and the intention to discredit and keep a member of the press out of the White House is clearly a political issue.
So this video was shared by Sanders ten hours or so after the press conference. It was edited and posted by Paul Joseph Watson of Infowars. I’m sure you’re all familiar with that site. So this short timeline here shows the chain of custody of the video: a segment of the C-SPAN video was posted as a gif to a conservative site, For America, and then the original For America gif was edited by Watson back into the C-SPAN stream, and Watson even showed his editor’s panel to excuse himself from culpability. But that didn’t work. You probably can’t see it from there, but there are two circled areas where you can tell very clearly where things were spliced in. So this isn’t a deepfake, but it is an example of audiovisual manipulation: it was edited and mobilized to be misleading. It’s possible that this video was intentionally chosen because the video in the tweet drops frames as an artifact of transcoding from the C-SPAN video to the gif, and as a result, the actions appear choppier and more forceful. And the fact that the livestream’s audio gives no hint at all of any sort of struggle, and that the manipulated video incorporates no sound, also suggests an intent to deceive.

So the Sanders tweet featuring the Acosta video was seen by hundreds of thousands of people that day. It was accessed primarily by mobile users in the U.S., and I show this to show that people were accessing this tweet very quickly, on the run. And there was a study out of MIT by a scientist named Soroush Vosoughi: he and his team looked at various examples of disinformation and reputable information, and found that the speed with which disinformation reaches scale, compared to reputable information, is staggering. And this case can be seen as a perfect example.
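[On the transcoding point above: converting a video segment to a gif typically means discarding frames to hit a lower frame rate, which is exactly what makes motion look choppier and more abrupt. The sketch below is one possible illustration of that effect using OpenCV; the filenames and the keep-every-Nth-frame approach are assumptions for illustration, not the tool or settings used on the Acosta clip.]

```python
import cv2

def decimate(src_path, dst_path, keep_every_nth=3):
    """Re-encode a video, keeping only every Nth frame.

    Audio is dropped entirely, much as it was in the gif version of the
    C-SPAN footage described above.
    """
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Lower the playback frame rate proportionally so the clip spans the same
    # wall-clock time; each surviving frame is held longer, so motion jumps
    # between poses instead of flowing smoothly.
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps / keep_every_nth, (width, height))

    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % keep_every_nth == 0:
            out.write(frame)
        index += 1

    cap.release()
    out.release()

# Hypothetical filenames, for illustration only.
# decimate("cspan_clip.mp4", "choppy_clip.mp4", keep_every_nth=3)
```

[Played side by side, the original and the decimated clip can make the same physical motion read quite differently, which is the crux of the dispute over that video.]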
So these two tweets show the general flavor of comments in the debate around the video. And it’s the argument that we hear everywhere today: it’s either that the liberal media are liars or that, you know, the fascist White House is nefariously trying to hamstring the press for the state. This is a clear example of what policy experts Danielle Citron and Bobby Chesney describe: people using fakes to support their false claims and putting the onus on the victims of those claims to prove that there was no wrongdoing. So the fact that the administration used a manipulated video along with the announcement of the revocation of Acosta’s press pass evoked ire from civil liberties organizations as well as from many state and federal officials. CNN pursued a lawsuit, and it quickly became an issue of press freedom in the Trump era, but after two weeks, the White House backed down, realizing that it had no grounds, and restored Acosta’s press badge. So the Acosta video juncture shows us very clearly that we as a culture often forget the importance of preserving the integrity of the passage of time in video, and I think this is true especially because most of the video that we see online is edited through multiple means: through Snapchat filters, through gifs, through fast-forwarded tutorials… the list could go on and on.
But I think in certain arenas, context really matters. So the time, the duration, the continuity of the videos, they’re all important parts of the contextual frame that we use to interpret a video. And for a historical example of this, we can look to the 1991 Rodney King video. This video shows Rodney King being brutally beaten by members of the Los Angeles Police Department. The video itself was not manipulated. The officers were charged with excessive force, and the video was submitted in court as evidence. However, the defense slowed the video down so that King’s involuntary physical reactions while being beaten appear to be jerking, and, according to some witnesses, it looked as if he was trying to get up. So the defense slowed this video as they asked their clients, the police officers: was he complying here with your order to stay down? Was he complying with your order here, to stay down? And a white juror who was interviewed afterward, who had decided the police officers were not guilty of excessive force on King, said the slowed video in the courtroom, quote, “made all the difference.”
So the slowed video captured the events that did happen on March 3rd, 1991, but the video decontextualized those events. The slowing of the video suggested that it could be interpreted in a different way. And I say this all to drive home the fact that interpretation is shaped and guided not only by aesthetics, but also by the pre-existing predispositions of whoever’s looking at the video, just like, you know, we saw with the comments around the Acosta video sorting into two camps based on people’s pre-existing political ideologies.
So to zoom back out into the larger research and the two case studies that I’ve talked about here: this is a sort of rubric laying out the intents, the impacts, and the implications of the videos in the six P’s in question. So this rubric provides a way to critically interrogate audiovisual manipulation, and possibly other types of disinformation, that really looks at how these videos are performances of production and performances of interpretation.
And I think it helps us think more clearly about how to interpret these videos in ways that communicate implications and risks without falling prey to recycling overblown hype.
So moving on to what we learned about audiovisual manipulation from these videos. The videos in question here indicate how bodies are implicated in these audiovisual fakes. Despite the fact that video technology has long been touted as neutral, as if it offers an accurate mode of capturing events or human experience, these videos show that this is not, and has never been, true. There are many modes of manipulation that alter the record, right? And these videos really make bodies legible. In the porn case, the bodies are recontextualized and further stripped of already-precarious rights to identity, presenting a clear injustice.
In the Acosta example, the interaction of the bodies is manipulated to justify, you know, unjust actions: removing a member of the press, the fourth estate, which is responsible for holding a government in power accountable. So recontextualizing these bodies does not remove harm from the equation; it only recontextualizes harm in ways that affect how we’re poised to make an impact, how we’re able to build the world as time progresses. So fake audiovisual content, whether it’s generated through neural-network technology or, as in the case of the Acosta video, through existing technology, extends what we’ve been witnessing in the public discourse around “fake news” and related campaigns. It represents second-order disinformation that requires new content moderation solutions even as we’re still grappling with textual disinformation. And exploring fakes as second-order disinformation also lets us explore what the technology forces us to grapple with as we look at it. It permits us to think about how the data underlying the social media interactions around these videos is mobilized to interpret humans in ways that are not only fallible but also harmful.
So technology interprets human life as humans interact online, and you can look to the machine learning literature that Silicon Valley permits to be public to see that this is probably the case.
And if technology interprets human life as humans interact online, then social media’s massive reach and aesthetics of urgency speed up the process by which that technological interpretation of human life proceeds.
So social media platforms have expressed difficulty at moderating text-based content at the scale of their user bases to quell the tide of disinformation. But I want to highlight that with speed, you have scale here. And this interaction increases the sheer volume of data created by each individual user, which is then leveraged by each of these platforms, which are hawking data to other, often unknown, powerful entities. Disinformation, then, and the spread of it, is not necessarily solely the fault of the users. This is an example of the platform really working according to its intended technological affordances.
And I think this is because social interaction is really a buzzword that these platforms use to scale up rather than a question that they grappled with meaningfully in the construction of their platforms.
But I also want to highlight that social media’s massive reach was never inevitable, nor is it impossible to scale back or hold accountable. The impressive scale of social media is used to stand in for this supposed fact that it is, and always has been, too big to fail. But we know that’s not true. The scale of social media platforms has been achieved through strategic decisions at each platform. You know, Sara talked about this yesterday morning, and we can also think about the examples that we’ve seen with Facebook’s cascading PR debacles over the last year to see exactly how these strategic decisions have been made at every step.
So issues of bodily contestation, second-order disinformation, and the speed of infrastructures are all what I would call data justice issues, which cause us to ask how data, from the social media data generated through the sharing of these videos to the data of the videos themselves, is currently collected and can be collected and mobilized against us.
And here it’s clear that the technical solutions that are posed to, you know — posed as solutions to deepfakes, or posed to sort of verify everything — I think this push to verify everything is suspect, because when a system has absolute omniscience through verification, or through other modes of security, one participates by paying with their privacy, absolutely. And while some see technical interventions as the only solution to the spread of disinformation, these technical interventions, especially the ones that are focused on verification, only create more holistic data models for the powerful and perpetuate a lot of the problems that already exist in society. We can see examples in surveillance, and in the artificial intelligence and machine learning tools used in social-benefit systems that affect the most vulnerable people.
So solutions, I think, and interventions really need to look outward to social structures, to practices, to motivations, and cultural context if these issues surrounding audiovisual manipulation and the host of problems related to technology are to be meaningfully addressed.
And I want to highlight that human content moderators did a lot of the work of bringing social context to bear on the development of technology and the Internet, from the very beginnings of the Internet in the late 1960s into the early 2000s. These were communities of people interested in weighing the social costs and benefits of all kinds of social information posted online. But in the time since, tech companies have gotten rid of a lot of these human content moderators, and Dr. Sarah Roberts’s work shows that the tech industry extends terrible working conditions to the few workers they keep on; she also shows that human content moderators are often paid little by tech companies and valued even less.
So I think a partial answer to audiovisual disinformation could be convincing the tech industry to place a higher value on human content moderation and the humans who do it, and also pushing for more modes of tech-industry accountability overall.
So I want to leave you now with a fake deepfake from the 1980s. This is Max Headroom. He was presented as the first computer-generated television host. His existence says a lot. It could be an entirely different talk. But this is a guy done up with practical effects. And with that, I’d like to say thank you, and I’d welcome your questions and comments!
[ Applause ]
KIM: Oh, man. Okay.
AUDIENCE: I’m just curious because you touched a bit on moderating explicit content and things like that, so I’m wondering if you’re planning to update this at all, and touch on the recent changes with Tumblr and their reaction to that — with moderation.
BRITT: I mean, to be completely honest, could you fill me in with what’s going on with it?
AUDIENCE: I forget the details of it, but Tumblr basically has begun removing, and will completely ban, specific content, more specifically pornography and things like that, and people have already started leaving the site, partly due to other changes and leadership changes and things like that. But it was kind of a big deal for people who felt like it was punitive for folks who looked to Tumblr as a way to explore their sexuality and things like that. Just that blanket removal of content seemed like too strong of a step to make up for poor moderation on the part of their site.
BRITT: Right, and I think this is an example of a platform trying to, I don’t know how to say this, bolster its bottom line. So it’s saying, you know, we can’t possibly talk to experts or community members who this would affect, so we’re just gonna sort of blanket cut it. And, you know, this will be cheaper for us overall but, you know, in the end, who knows. But yeah, this is a good example. Yeah.
AUDIENCE: Your work — this is great, by the way. And your work has a lot of implications for how we consider and use audiovisual evidence. Particularly in journalism. I mean, in society, but I’m thinking about it from the journalism point of view. But I wanted to give you a hypothetical and ask, what questions would you be asking in this scenario, given the research that you’re doing? A blurry, choppy video that seems to show a prominent political figure using, let’s say, a racial slur appears and starts circulating on the platforms. What types of questions would you be asking in the wake of that event as it spreads through social platforms that might inform the types of questions that journalists might be asking?
BRITT: I mean, following it — this is sort of “the” hypothetical. This is one of the big hypotheticals that we’re talking about when we’re talking about deepfakes, when audiovisual manipulation is evoked. And I think in that scenario, journalists are sort of looking at these things as they unravel, you know, as these events happen. And I think tracing — as a lot of you already do in journalism, as we saw in the Acosta example, there were a lot of people following that manipulation through networks, and seeing, you know, where the source components of the video came from, and how exactly it was edited, and pushing, you know, Paul Joseph Watson of Infowars to answer for himself. I think that’s sort of a good model. I would like to commend the press on that day for really pointing to this issue and pointing to the larger implications and not, sort of, overblowing the implications of this video, but actually, you know, putting their heads down and following the data through the systems and following the video’s dissemination through networks.
So that’s what I would, I guess, at this point, sort of, point to as a possible practice. But you know, I don’t know. It’s a hypothetical. So I’m not exactly sure. Every situation’s different but thanks for that question.
AUDIENCE: Thank you so much. This is so fascinating. It has so much language that’s so useful for us to have conversations. But you know, we talk a lot about what tech companies should do, or would do, or what they should be required to do, and when I think about news organizations that have a Facebook page, or use Reddit, or use these companies… what do you think news organizations should be doing in their own content moderation or their own sort of — I mean, a lot of times, disinformation spreads through the comments on a news organization’s own page and those organizations do nothing.
BRITT: Right. Yeah. Yeah, that is a great question! You know, I don’t know. I think… in some senses, I want to say, like, my gut reaction is, “News organizations using these platforms is not the problem; it’s these platforms who, you know, push news organizations by sort of appealing to scale — oh, we have to get on Facebook! Everybody’s on it! — sort of using social media in all of these ways that, you know, are absolutely essential for your survival, whether it’s as a person or as a news organization.”
And so, you know, I don’t know if the individual use of these platforms is necessarily something that I would have any recommendations about, because, you know, it is useful to use these platforms to do research and to engage with your communities. Instead, I think I would point to holding the tech industry accountable for making it such — if that makes sense. It may not be. Actually, I’m not answering the question quite fully. Let me think about it and I’ll get back to you.
AUDIENCE: All right. So news organizations seem to have a reactive approach to a lot of this, and your description of what a newsroom should do, your responses to this, are very, very valuable, but I’m wondering how we can be more proactive about this, specifically in understanding that a lot of the companies that are profiting from the explosion of AI, and also these platforms that are not doing moderation correctly, are actually often owned by the same investors. So how can we leverage that connection to play a role in keeping that stuff from happening, as opposed to waiting and reacting to it?
BRITT: Yeah, I mean, again, a great question. I’m not 100% sure how that would work. Though I will say this: at Data & Society, we’ve produced a lot of research pointing to tech industry culpability in a number of specific, concrete issues, where, say, for example, white supremacists are leveraging these fringe social media platforms and their communities there to come onto YouTube, sort of allowing or promoting the proliferation of white supremacy on YouTube itself. So we’ve had some success with producing that report and engaging with journalists to really bring this to public attention.
So I think — and then, you know, YouTube will come back to us and be like, “What should we do?” Now, whether or not this is a PR move that they’re doing, or whether or not they’re actually interested in changing, is another question. But I think we should, as much as we can, push the public toward tech industry accountability for very clear and palpable reasons, so that whenever their kids are in the backseat of the car watching YouTube, they’re not getting white-supremacist subliminal messages. Companies have a vested interest in holding these platforms accountable, and these platforms’ bottom lines should be regulated in some way.
I don’t really have much hope that this will ever happen through the appropriate legislative channels, especially not in the upcoming few years.
But, you know, maybe a long-term goal might be regulation or, you know, sort of informal regulation through improved modes of accountability. But other than that, you know, I don’t know. In terms of what journalists, or what news organizations, can do to hold these tech industry, you know, behemoths accountable, I guess: as much as you can, leverage what power you have within your organizations to bring these conversations to these people, and show them how it will hurt their bottom line. Unfortunately, that’s one of the only tools that we have, sort of getting at them through neoliberal capitalism, but I don’t know… use what you got, I guess.
KIM: Thank you so much. I’m trying to keep us on time, y’all! But Britt is going to be around. You can talk to her at lunch! Okay. Thank you!
[ Applause ]
KIM: All right. You have a 15-minute rehydration, refooding, restroom break. And then if you are looking for bootstrap bias, we’re here. Yeah? And if you are looking for running for the board, which we should all run for the board, that’s in Franklin 1. And history and resistance in journalism is in Franklin 2 upstairs, and social justice and pre-trial risk assessment is in Haas upstairs. We’ll see you back here around lunch-ish!