
SRCCON:POWER Talks: Sara Wachter-Boettcher on tech, bias, and responsibility

Day & Time: Thursday, Dec. 13, 2018, at 9:30am

KIM: Hello? Hello? Hello! Now everyone says hello. You know, new people? New friends? Hi, Andre! Hello! I’m Kim, and I’ll be MCing. I’ve had a lot of coffee and a lot of matcha, so I’m very excited to be here, and also very excited to explain a little bit about the talks, why we’re doing them, and their purpose here. We’ve invited and reached out to some really smart, engaging people, inside, adjacent to, and outside journalism, to sort of set the scene and set our thoughts for the day. So hopefully they’re thought-provoking, they’re bringing new things to light, and also just affirming things that you may have been thinking about. They range in topics. We’re going to have two this morning, and then we’re all going to break out into sessions, and then Mr. Hernandez over there is going to close us out wonderfully and eloquently! After each talk, we have varying times for Q&A.

So for the benefit of the transcript, when we get to Q&A, I will hustle around with the mic, and I’ll tell you when we have time for questions and how much. Our first speaker is Sara Wachter-Boettcher. She is a product and strategy consultant and the author of Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. By the way, she has a podcast. It’s being renamed from “No, You Go” to “Strong Feelings,” and I feel like that says everything about the podcast: it’s about feminism, friendship, and work between friends. So I’m excited to listen to that. The next season launches January 10th, I believe. She’s going to talk to us about bias in tech products and the unintended consequences of ignoring those biases for so long. She’s also going to talk about our responsibility to critique how tech is made and how we talk about tech. So come on up, Sara.

[ Applause ]

SARA: Hello! Good morning. All right. Very excited to be here to talk with all of you. I work, obviously, with a lot of tech products, with a lot of tech companies, and I don’t get to talk to folks who work in journalism nearly as often as I’d like to. So that’s pretty exciting for me. And I’m hoping that I can pose a lot of questions today that will keep coming back around as we go through the different sessions over the next couple of days. I do want to mention that I’m going to talk about a lot of tough subjects. I’m going to talk about sexism and racism, and I’m going to talk about things like body image, violence, abuse, and fascism. All of it. There’s a lot happening, and I’m going to touch on a lot of those things. So if anybody is uncomfortable or sensitive to those, I wanted to warn you about that. I think it’s really important stuff, and I think you guys are here for these kinds of topics, so I’m excited about that. To start us out, I wanted to talk about something that may seem, on its face, cute and fun, but is a very good example of the kinds of things that I like to talk about: how Google Maps got updated last year. Google Maps released a new feature and pushed it out to a whole bunch of iOS users. In addition to telling you how to get from point A to point B, they decided to add this new information: how many calories you would burn if you walked. They also decided that what you needed to know was how many mini cupcakes that was. What they mean by mini cupcake, I don’t know. Whose definition of a mini cupcake? I don’t know. I’ve talked about this in countries other than America, and they have very different ideas of what “mini” means for sweets than Americans do. Nobody knows, but this is what Google Maps decided to do, right? So Google Maps pushed this out, and pretty quickly, people had feedback. One journalist, Taylor Lorenz, had some pretty good feedback, so I wanted to go through her mental process. Around 8:00 p.m. she gets this update, she sees this happening, and her first reaction is, oh, my God! What is this, right? Pretty quickly she starts breaking down some potential issues. One of the first things she notes is that she didn’t opt into this feature, and there was no way to opt out. She couldn’t be like, no, no cupcakes, thank you. The other thing was that this could be really dangerous for people with eating disorders, or for people who just feel like this is shamey anyway. You know, you’re going to shame me for not walking? Are you going to do that to me, Maps app? She talks about how calorie counting is not an agreed-upon metric that’s going to work for everybody. She talks about how not all calories are created equal. She talks about… why cupcakes? Cupcakes are not a neutral food. It’s not like cupcakes are the standard food we use to measure everything. I don’t know, live your life, live your truth, but a cupcake is probably not your standard metric. And it goes on and on. Over the course of an hour, she goes through all the ways that this product might fail. Just to recap, these are all the things that she covers: you can’t turn it off. It’s dangerous for people with eating disorders. It’s generally shamey. Average calorie counts might be inaccurate. Not all calories are equal. Pink cupcakes are not a useful metric. And pink cupcakes are encoded with a lot of cultural meaning: it’s white, it’s middle class. It’s not a neutral food, really. And it perpetuates diet culture.
Now, you can look at any one of these critiques and say, this is not a big deal. You could look at one of them and say, I want calorie information, and I think the cupcake is kind of cute. But there’s a whole host of people for whom that might not be true. And it took her one hour to compile that list. One hour on Twitter, probably doing something else at the same time. One hour. So what happened next? Well, within about three hours, they actually shut the feature off. I have worked on a lot of projects and a lot of products, and I know how long it takes people to get things done, and all I know is that it takes more than three hours to build something like this. I bet they spent more than three hours deciding what the cupcake illustration would look like. Could we get two different illustrators to do two different styles? You know, sprinkles or no sprinkles? But nowhere along the way did anybody have a conversation asking, who might this not work for? How could this go wrong?

[ Applause ]

Don’t answer that. How could this go wrong? Or, is this even useful? Is this a good idea? Or maybe someone did bring it up. Maybe someone brought that up, and they weren’t listened to. I like to start with this example because I think it’s a perfect encapsulation of the mundane decisions that end up in products, and how we can end up with stuff that hurts us without anybody realizing it until it’s too late. The other reason I bring it up is because it’s a literal manifestation of delight: a cupcake, right? We focus so heavily on this one thing that is positive, that everybody’s excited about. It’s going to be great and people are going to engage with it. And the reality is, we don’t see all the ways that it can go wrong. We can see this in a ton of small choices that are being made over and over and over again in the tech industry. Small choices like this one. This is an email my friend Dan Hon got; he has a smart scale. But you may notice some things. It’s not addressed to Dan, it’s addressed to Calvin. “Don’t be discouraged by last week’s results. We believe in you! Let’s set a weight goal to help inspire you to shed those extra pounds.” That’s because Calvin is a toddler. Calvin does not have a weight goal. Calvin does not need a weight goal. But, weird, his weight goes up every week! This product was designed under the assumption that everybody who’s using it is trying to have their weight either stay flat or go down, and that if your weight is going up, that is a problem. Even though he hasn’t set a weight goal. He hasn’t said that he wants to lose weight, because he’s, like, two.
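As a rough, hypothetical sketch (made-up names and numbers, not the scale vendor’s actual code), the assumption she’s describing boils down to logic like this: any weigh-in that’s higher than the last one is treated as a setback, no matter who the user is or whether they ever set a goal.

```python
# Hypothetical sketch of a weigh-in notification with a baked-in assumption:
# weight going up is always a setback to be consoled about.
def weigh_in_message(name: str, current_kg: float, previous_kg: float) -> str:
    """Build a notification after a weigh-in."""
    if current_kg <= previous_kg:
        return f"Nice work, {name}! You're down from last week."
    # Default assumption: the user wants to lose weight, goal or no goal.
    return (
        f"Don't be discouraged by last week's results, {name}. "
        "We believe in you! Let's set a weight goal to help inspire you."
    )

# A growing toddler trips the "setback" branch every single week.
print(weigh_in_message("Calvin", current_kg=12.4, previous_kg=12.1))
```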

They’re assuming that he must want to set a weight goal, that he must be trying to lose weight. This is funny when the user is your toddler, who literally doesn’t have a weight goal and can’t read. It gets less funny for lots of other people, and in lots of other circumstances. Here’s another push notification, one that Dan’s wife received: congratulations, you hit a new low weight. She also did not set a weight goal. She had a baby, and this was the first time she weighed herself after that. Congratulations. And you have to think of all the people for whom that is not a congratulatory moment. I know people with chronic illnesses for whom losing weight is the number one sign that they’re sick. You can see these kinds of small choices everywhere. This is one of about, like, 7,000 screenshots people have sent me of notifications they got from Facebook. This one is an example of the notifications you get when your most popular content from the past is surfaced back to you. It’s a picture of a woman’s mother’s grave. It was the most loved photo that she posted in August 2016, and it was resurfaced to her in August 2018 with Facebook’s heart illustrations on top of it. This is something that Facebook has been doing for years. They know that this is a problem. They kind of pretend that they’re going to do something about it, and then they make a feature that’s basically exactly the same. Over and over again, we leave people out. We exclude people. Sometimes in little ways. Sometimes in ways that can be funny. Here’s an example from Tumblr. I want to walk through this one because there’s a lot to love here. So I follow this person, Sally Rooney; she’s an Irish novelist, and she’s great. I got in touch with her after this went viral on Twitter because I wanted to understand what exactly happened. She said that she was just hanging out one day when she got a push notification on her phone that said, beep beep, neo-Nazis are here. And she’s like, oh, shit. Did I accidentally follow neo-Nazi tags? Did I favorite something? What am I doing here? Why am I getting this on my phone? She dug around and dug around and got in touch with people at Tumblr, and what they told her was that at first, they couldn’t figure out why she was getting that. Finally, they worked out that it was because she had been reading posts about the rise of neofascism in the United States, and so Tumblr decided to notify her when there was fresh neo-Nazi content. Of course, nobody sat down and wrote, “Beep beep, neo-Nazis are here.” Somebody, at some point, wrote a text string: beep beep, (topic) is here. And that text string can be used in lots of different circumstances. There are lots of examples of this, actually. Someone reported recently getting “Beep beep, mental illness is here!” And he was like, well, it’s not wrong… but it’s not necessarily what I wanted as a push notification.
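As a rough sketch of how a string like that ends up on someone’s phone, here is a hypothetical version, not Tumblr’s actual code, of a one-size-fits-all notification template that gets filled with whatever topic is trending for a given user, with no check on whether the playful framing fits the topic.

```python
# Hypothetical sketch: one template reused for every trending topic.
TRENDING_TEMPLATE = "Beep beep! {topic} is here!"

def trending_notification(topic: str) -> str:
    """Fill the generic template with the user's trending topic,
    with no review of how the framing reads for sensitive topics."""
    return TRENDING_TEMPLATE.format(topic=topic)

for topic in ["Cats", "Neo-Nazis", "Mental illness"]:
    print(trending_notification(topic))
```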

I bring up this example because it’s a funny example of a jarring circumstance, but also for another reason. After she tweeted about this notification, Tumblr stepped in and said, so sorry, so sorry, right? There was a whole Twitter conversation about what had happened. And on Twitter, publicly, the lead writer at Tumblr at the time, I think he’s still there, Tag Savage, made a comment that sticks with me, and that we’re going to keep coming back to in this talk: “We talked about getting rid of it, but it performs kind of great.” They talked about getting rid of it, but it performs kind of great. And we can see that exact same kind of decision-making, that exact same model, playing out in some really big places, with massive consequences, where it gets decidedly less funny. I don’t know if you saw the story that came out about YouTube Kids. James Bridle took a deep look at how YouTube Kids was promoting extremely creepy, violent content targeted at children. This is a screenshot from a knock-off Peppa Pig cartoon. In the knock-off cartoon, Peppa goes to the dentist, and it turns into this violent torture scene. And it wasn’t just one Peppa cartoon; it was thousands and thousands of videos. They were keyword salad, meaning they would be tagged up with terminology related to matching and counting, themes that kids tend to be into, and, of course, with all these character names and brand names. So if you have a child who’s been handed an iPad in the back of a car, and they’re watching a standard Peppa Pig cartoon, guess what happens: they’re automatically sent to related content. And that content sends them into this rapidly creepier and creepier, more and more upsetting world. When James Bridle wrote about this, YouTube finally said, we’re going to shut this down. But they only cracked down after an outcry, because these videos, videos where kids’ characters were being tied up, doing weirdly sexual and weirdly violent things, make a lot of money. It’s very profitable, because it’s popular with kids. The kids keep watching. It works. Google made a lot of money off of it. But at what cost? Or you can look at a story that broke this summer, based on some new research that came out. In that research, there’s an incident I want to talk about in a place called Altena, Germany, a town that had taken in a whole lot of refugees in 2014 and 2015.

And at one point, Dirk Denkhaus, a firefighter trainee, broke into a house where refugees were living and set it on fire. Dirk had no history of being particularly political at all, much less violent. Everyone was shocked that this would be something he would do. And after looking, there was only one thing that set him apart: he had been using Facebook… a lot. His story is part of research by Karsten Müller and Carlo Schwarz, who started looking at the variables that might predict these attacks. Income levels and education levels. How right- or left-leaning that part of the country is. Is it an urban area or a rural area? How many newspapers are sold? Is there a long history of hate crimes in this community, independent of the refugees coming in? What levels of diversity does the community have already? Any number of variables, right? And over and over again, they did not find anything, except for one thing: whenever a town had higher-than-average use of Facebook, it had more attacks on refugees. Over and over again. Reliably, right? Big city, small town, it didn’t matter. Rich or poor, it didn’t matter. Right wing or left wing, it didn’t matter. Wherever per-person Facebook use was one standard deviation above the national average, attacks on refugees went up by about 50%. Essentially what was happening was that people were going on Facebook, and what they were seeing there was not necessarily what was happening at large. Because of the way Facebook’s algorithm works, things that play on primal emotions like fear and anger perform great. So even in a community like Altena, which had actually been pro-refugee, which had drives to donate goods to refugees and community programs to partner with refugees, even in a community like that, the story being told on Facebook was different. And people like Dirk Denkhaus thought that story was the true one, and thought that, in fact, violence was the answer. And if you look at a company like Facebook or YouTube, what you see over and over again is an organization built around a culture of prizing one thing and one thing only: that hockey-stick growth curve that shoots up and up, because that means more and more revenue. In 2017, Facebook made roughly $40 billion on its ads. And even though this year there have been all these negative stories about Facebook, they’ve made more than that this year; they surpassed that number in October. It’s incredibly profitable to allow hate to proliferate on Facebook. It’s incredibly profitable to have kids watching those videos on YouTube. And, in fact, it goes right to the core of how these corporations operate. A memo leaked this year from Andrew Bosworth, one of the early, early core leaders at Facebook. He called it the ugly truth. This is what he told the staff: the ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. That’s why all the work we do in growth is justified. All the questionable contact-importing practices. All the subtle language that helps people stay searchable. Anything that allows us to connect more people more often is de facto good. Everything that we do is justified.
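To make the “it performs kind of great” dynamic concrete, here is a cartoon sketch of an engagement-ranked feed. This is a deliberately simplistic stand-in, not Facebook’s actual ranking system: if predicted engagement is the only signal, whatever provokes the strongest reaction rises to the top.

```python
# Cartoon sketch (not any real platform's system): rank a feed purely by
# predicted engagement and nothing else, so inflammatory content wins.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # clicks/comments/shares the model expects

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by predicted engagement alone, with no other signal."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Local library extends weekend hours", 0.8),
    Post("Community drive collects winter coats for refugees", 1.1),
    Post("THEY are coming for YOUR town. Share before it's deleted!", 9.4),
])
for post in feed:
    print(post.predicted_engagement, post.text)
```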

And so what we have is technology that is shaping the world, and it is not just vulnerable to abuse, which is often what we talk about. We say, oh, these platforms are so vulnerable to abuse. They’re not vulnerable to it. They’re optimized for it. Because the exact same thing that makes them money is what makes them so dangerous for others. So we’ve got that problem, and we’ve coupled it with another one: the biases embedded in these tech products and others. I’m going to talk about just a couple of them, and I’m going to focus on natural language processing, sort of how machines understand text, right? Take, for example, a technology called word2vec. It was put together by Google, and what they did was take a corpus of about 3 million words and phrases from Google News articles and train this tool to understand language by looking at them. What they wanted were word vectors; word2vec is short for “word to vector.” What they’re really looking for is relationships between words. So rather than just understanding definitions, they’re trying to get at higher-level natural language processing by looking at how words sit spatially relative to each other. And because it was a more advanced tool, it could do more advanced things, like analogies: Paris is to France as Tokyo is to Japan. Man is to computer programmer as woman is to homemaker. Possibly less cool. And the reason it did that is the way it looked at the data: to the algorithm, man is to computer programmer as woman is to homemaker is the exact same kind of relationship as Paris is to France as Tokyo is to Japan. It’s not true for us, but it was true based on the data, meaning it was based on the people who were chosen to report those articles and the topics those articles were about. All of those things reflect certain worldviews and certain people. They also reflect a historical perspective, right? So, sure, are there more women being interviewed in articles as homemakers than as computer programmers? Yeah, that’s probably true, historically. But you don’t necessarily want the machine to think that is true factually, right?
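Those analogies come from simple arithmetic on the learned word vectors. Here is a minimal sketch of that behavior using the gensim library and Google’s pretrained Google News vectors; the file name below is the commonly distributed one, but treat the path, and the exact words that come back, as assumptions rather than guarantees.

```python
# Minimal sketch: analogy queries against pretrained word2vec vectors.
# Assumes the GoogleNews-vectors-negative300.bin file has been downloaded.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "Paris is to France as Tokyo is to ?"  (France - Paris + Tokyo)
print(vectors.most_similar(positive=["France", "Tokyo"], negative=["Paris"], topn=1))

# The same arithmetic surfaces the biased analogy from the talk:
# computer_programmer - man + woman tends to land on "homemaker"
# among the top results (the example popularized by Bolukbasi et al., 2016).
print(vectors.most_similar(
    positive=["computer_programmer", "woman"], negative=["man"], topn=3
))
```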

And you can see these kinds of biases in so many different places. This is a story that came out in October of this year: it was revealed that Amazon had spent years working on a tool that would use natural language processing to look at résumés and figure out who would be a good person to hire. They were building this tool because Amazon was going through a rapid hiring spree, right? They wanted to expand their staff very quickly, and they couldn’t find enough people to fill those jobs. So they said, okay, we’re going to use AI to go out and find those people for us, as a way to automate recruiting. I’ll just walk through what happened. They said, we want to find the ideal candidates, right? Sure, don’t we all? And what we’re going to do is take ten years of résumés that we’ve collected here at Amazon, along with the outcomes, who was hired and who wasn’t, and use that to train the AI to find the people who have a good chance of being successful fits, the right kind of people we would hire here. And what happened? Immediately, it would downgrade any résumé that mentioned the word “women’s.” So maybe you went to a women’s college, or you were part of a women’s organization, or played on a women’s team: you immediately lost points. Those are the obvious ones; there were less obvious ones too, right? Whether or not a candidate knew the coding languages required for the job did not affect the algorithm at all. It didn’t give you more points if you knew the technology used on the job. However, it did favor candidates who used words like “captured” and “executed,” which showed up mostly on men’s résumés. What they found was that this technology was so biased, so problematic, that it couldn’t be fixed. They worked on it for years, and they ended up scrapping the whole thing, because they couldn’t be sure they weren’t running a biased hiring process.
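To see why training on past outcomes reproduces past bias, here is a purely illustrative toy sketch, with made-up résumé snippets and made-up labels, not Amazon’s data or system: a simple bag-of-words model trained on a biased hiring history learns to penalize the token “women” even though it says nothing about a candidate’s skills.

```python
# Toy illustration only: invented resume fragments and invented labels.
# The point is the mechanism, not the specific model or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captained chess club executed migration project",
    "women's soccer team captain built data pipeline",
    "executed product launch captured new markets",
    "women's college graduate built data pipeline",
    "built data pipeline led migration project",
    "women's coding society organizer led migration project",
]
hired = [1, 0, 1, 0, 1, 0]  # labels that mirror a biased hiring history

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the terms with the most negative weights
# are the ones the model has learned to penalize, "women" among them.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```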

Some of you might have seen a technology called Predictim. The idea is that Predictim is going to help you find the perfect babysitter by taking a look at their social media profiles, right? They say, we want to find you childcare providers you can really trust, and they used mommy bloggers as their barometer for what people are looking for in a babysitter. Even anonymous accounts: even if you really tried to anonymize your account, they’ll find every sneaky backdoor, look at all of your posts, and decide whether you’re risky or not. Those posts get categorized. Do you have abusive posts? Are you polite? Are you positive? Negative? They apply categorizations to everything you’ve done, and that helps determine your risk level. Pretty soon, a writer at Gizmodo, Bryan Menegus, said, I’m going to let this loose on people, including an actual babysitter, and see what it does. He ran her through the system, and she was determined low risk, but with something very important attached: a moderate risk for disrespectful attitude. Okay…

AUDIENCE: Whoo!

SARA: So he’s like, okay… maybe she’s considered low risk overall, but in the coverage of Predictim, parents talked about how if they saw anything other than the lowest possible risk rating, if they saw any flags at all, they would strongly consider not hiring that person. So we’re talking about real potential consequences for people’s employment, right? Let’s take a look at some of the posts that got flagged. These were from Twitter; the account was anonymous, but they found it anyway. Let’s look at these posts: “Didn’t put any makeup on, but I got that post-poop glow.” “Our legal system is fucking crazy.” “Haven’t decided if I’m an indigo child or a narcissist.” “2018 is the year I stop talking shit.” These are the ones the system found disrespectful. Compare that to what he got for somebody else in his life. This guy, his name is Nick. He’s a standup comedian, so you might guess his Twitter is a little dicey. And he comes out as low, low, low risk. It doesn’t flag any of his posts. So the writer goes in, looks at Nick’s posts, and pulls out all of these different ones. This is the kind of stuff Nick is posting, right? He’s talking about Tom Brady, and some questionable scenarios involving the Vice President. A lot of name-calling, a little bit different from what you saw on the screen earlier. He’s calling people variations on F-words; I’m going to keep it cleaner for the transcript. This is, again, what’s on his social media. Nothing flattering. So the writer goes to Predictim and asks, what’s going on here? And he gets an immediate answer: I can guarantee 100% there was no bias. 100% no bias, because, you know, we don’t look at skin color, we don’t look at ethnicity, those aren’t even algorithmic inputs. We can’t possibly have bias because we don’t even put those in the algorithm! Here’s the thing, though. We don’t know whether those results are coming back because of some ingrained bias in the algorithm. We don’t know at all, because guess what, they won’t reveal where they got the data they trained that algorithm on. And we do know that there are historical stereotypes about Black women and disrespectfulness, historical perceptions of Black women’s speech that make it read as disrespectful. Could that be influencing this algorithm? We don’t know if this is happenstance. We don’t know if this is a systemic problem. We don’t know where the algorithm is learning what, because they won’t reveal it. And this could have a huge impact on whether somebody gets a job or not. I bring up this example because I think it’s a particularly interesting piece of technology, but also because there are thousands of systems being designed that are making decisions with huge impacts on people’s lives, with no ability whatsoever to see what’s happening inside them. So I’ve covered a lot of stuff, and this is the point where I kind of think, “What even is happening?” So I’m going to sum up for a second here. First of all, we’ve got an extremely powerful industry that has had very little oversight for a long time, and that has a history of recklessly shipping products without care for consequences. Now we have an industry that is also starting to decide how machines are going to learn.
What they see, what they don’t see, what they prioritize. And they’re embedding that machine learning into almost everything you can imagine, including a lot of the tools that most of us are relying on now. The inner workings of those systems are opaque at best and proprietary secrets at worst, and the people who make them often can’t even explain them. They can’t even tell you how the algorithm ended up with the results that it did. And here we are, all of us, increasingly relying on these systems to do our jobs, and to share our work with and inform the world. When I talk to tech companies, I try to talk to them about what they should be doing and thinking about, and what they should start changing. But I want to mention that I’m 100% not a journalist, so I don’t know exactly how journalism fixes this, per se. I do want to talk a little bit about how we might think about it, though, and how we might bring it back to our workplaces next week, or, speaking for myself, whenever we get back to our workplaces sometime in January. So: how might we question our own defaults? I think there are going to be some talks about this over the next couple of days. Some of the defaults I’ve seen coming out of the media: lots of default ideas about who “average Americans” are. Probably not people with backgrounds like many of us have. That New York is “here” and the West is somewhere out there, which became very clear in a lot of the coverage of the fires in California.

That Puerto Rican lives are somehow not the same as American lives. And that our English, or should I say my English, is standard English, and other Englishes aren’t. These are a small smattering of the false norms I’ve heard really, really commonly in the media, right? We’re responsible for these false norms, because what the media does has always had an impact on the world. But now we have this additional responsibility creeping in, where media is playing an enhanced role in what machines understand, right? You cannot be held responsible for somebody building a shitty algorithm that looks at Google News articles and ends up with biased results. However, you have to think about the fact that that’s what these systems are often learning from. They’re learning from the words that we put out.

[ Applause ]

Thanks for that. So over and over again, we have to think about how the power that we have to communicate with the public adds to our responsibility once things like machine learning get involved. I also want us to consider: how might we interrogate the tools that we’re using? It’s really easy to take a tool at face value when it seems useful in the moment. I do it all the time, I’m sure. So what kinds of questions are we asking of our tools? Are we trusting them when we shouldn’t be? And this could be anything: when we’re gathering data about our audience, trying to understand how they think and who they are; when we’re trying to target new audiences, new people; when we’re sifting through big datasets. That last one is probably something many of you are working on or adjacent to. Journalism is constantly looking at lots and lots of data, trying to find the story in it. Are we using tools that have bias? Have we even thought about it? All along the way, we have to think about the tools we’re using, whether those tools have their own problems, and what we do about that. Do we use them or not, and if we do use them knowing that, how do we mitigate those harms? And how do we think about our newsrooms? I know a ton of you have ways you want newsrooms to change, or you probably wouldn’t have come to a conference like this in the first place. Who needs to be in the newsroom right now, or out there reporting, and why aren’t they there today? I know some folks have pain around that. What questions do editors need to be asking before something is published? Who should be interviewed? What kinds of sources are we not using today, and why are we not using them? And how might we be tougher on the tech industry in general?

For a very long time, the tech industry kind of got a free pass from the tech press. I don’t know how many of you consider yourselves part of the tech press. But time after time, we saw glowing profiles, how big is the IPO, or what is the shiny new technology? And very, very seldom were we hearing questions like: okay, did you actually talk to people who might be impacted by this product, and were they involved in the process? Did you do anything to find out whether this could hurt someone? How do you know that your data is, quote-unquote, “good,” and what does “good” even mean? I’ve also seen image recognition packages go out saying, we have a 99% success rate. Now, image recognition has a lot of other problems, like surveillance, that I’m not even going to try to get into at this moment. But what you find out is that they have a 99% success rate on the people they tested it on, and the people they tested it on are 99% white. As soon as you test it on anybody else, it’s very, very bad. You can debate whether you even want image recognition to be good. But you can also ask: if we turn that on as surveillance, and the machine learning is constantly misidentifying people of color, what problems does that create? The problem is, these questions are not being asked nearly enough. So as you go through the next days of talks, I hope you keep thinking about all the ways that we might be questioning power in every aspect: our own power, the power of the people we cover, and the power of the people we work with. So thank you so much for listening this morning!
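One concrete way to interrogate a “99% success rate” claim is to ask for accuracy broken out by group rather than a single headline number. Here is a minimal sketch of that disaggregated check, with entirely made-up numbers, not any real system’s results.

```python
# Toy illustration: overall accuracy can look great when the test set is
# overwhelmingly one group, so also report accuracy per group.
from collections import defaultdict

# (group, was the prediction correct?) for an imaginary face recognition
# test set: 990 white faces, 10 faces of people of color.
results = [("white", True)] * 985 + [("white", False)] * 5 \
        + [("poc", True)] * 5 + [("poc", False)] * 5

overall = sum(ok for _, ok in results) / len(results)
print(f"overall accuracy: {overall:.1%}")  # about 99%, the headline number

per_group = defaultdict(list)
for group, ok in results:
    per_group[group].append(ok)
for group, oks in per_group.items():
    print(f"{group}: {sum(oks) / len(oks):.1%}")  # white ~99.5%, poc 50.0%
```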

[ Applause ]

KIM: Well, that was a good note to start on. We have time, I’m sorry, for one question. So you better make it good! Oh, I don’t mean to scare you!

SARA: We have time for one question. It’s fine if your question isn’t the most amazing question you’ve thought of in your life. Set that bar lower for you.

KIM: I’ll take a decent question.

AUDIENCE: The bar is low enough. I just wanted to ask about something: you talked about “it performs kind of great” for these algorithms. In what sense does that also apply to what we see in journalism?

SARA: Speaking as a non-journalist, I think it’s a good question. I’m a little reluctant to, like, shame people’s choices, necessarily, and on an individual level, there are so many reasons why that’s where people have ended up. But, I mean, yeah. Click-baity, crappy journalism has proliferated partially because of the way it performs. It hits people’s basic emotions and weakest moments. And it’s the same dynamic: if you have something that’s really inflammatory and it gets onto Facebook, it performs kind of great. I think breaking out of that is a really big and tough question. I think that’s part of the reason a lot of you are here: we are at some weird and bad moment in terms of what people think about the media, how people use the media, where people are getting information, and how information is shared. What are we going to do to get ourselves out of that hole, and who needs to change outside of us in order for that to happen? I really do think that, for this audience, it’s a big, big problem. But the big thing is that if we do not put real pressure on the tech industry that has enabled that worst behavior, and if we do not start making that a priority, I think it’s going to be very hard to solve any of it at, like, the journalism level. I don’t really think you can, because that’s where so much of the power has relocated. And so how do you do that? Great question. Changing how the tech industry operates, at this point, one of the only things that seems to work is lots and lots and lots of public outcry, which then puts pressure on at a governmental level. I don’t know if any of you watched the most recent government hearings; I did not watch the whole thing. It has to get past the place where the big question is, why are you so biased against conservatives, because that is probably not the right question at this moment in terms of what’s going on with tech. We’re going to have to get beyond that, and I think that is only going to happen with a constant amount of, like, deep, painful work. Sorry, y’all. But that’s why we’re here.

KIM: Awesome. Thank you so much, Sara. We have five minutes for you to go rehydrate, re-doughnut, maybe. I’m not sure. Re-whatever-you-need-to do, and then we’ll be back.