AI Tech in Journalism-Episode 3

Today, I’m sitting down with Dr. Benjamin Birkinbine, a media scholar who currently serves as Director of Graduate Studies and Associate Professor of Media Studies at the Reynolds School of Journalism and the Center for Advanced Media Studies.

Birkinbine's research explores the intersection of media and technology and is rooted in the political economy of communication, with a specific focus on free and open-source software and the digital commons.

His work has been featured in numerous academic journals and publications, including Media, Culture & Society, the International Journal of Communication, The Political Economy of Communication, and the Journal of Peer Production, among others.

Alayna: Thank you for talking to me. I know ChatGPT has been kind of a big trend that everyone's jumping on, and I'm interested to know what the long-term ramifications of this new AI technology would be on, you know, the media and the journalism industry.

Before we start, can you give me a little background on your area of expertise and your experience in the media?

Ben: Yeah, yeah. My name is Ben Birkinbine, and I'm an associate professor of Media Studies at the University of Nevada, Reno. Specifically, I teach at the Reynolds School of Journalism, where I'm also currently serving as the Director of Graduate Studies.

My research focuses on an approach to communication research known as the political economy of communication. What I investigate, most specifically, is the production, distribution, and exhibition of media content.

I've also focused on digital technologies like free and open [00:01:00] source software and the ways that corporations sponsor those projects. That line of research took me a little bit into, you know, studying hackers and what's going on with digital technology. So I've got some knowledge about those subjects as well.

Alayna: So ChatGPT has only really come out in the last few months. What was the general reaction from the faculty at, you know, the J School?

Ben: Yeah, I think some of the initial concerns were driven by a bit of a moral panic: what does this mean for journalism, for journalism education?

What does this mean for media producers? What does this mean for the university as a whole? Some of the early conversations, particularly at the university level, really centered around the risks that ChatGPT would pose to student conduct. So there were a lot of discussions about the plagiarism [00:02:00] aspects of it.

There was also an academic misconduct angle, which focused on how we want students to be doing original work. If they're plugging some question or essay prompt into an AI engine like ChatGPT, that ultimately undermines the purpose of what we are trying to instruct our students on.

I will say, though, to the credit of our faculty in particular, and I can't speak for universities writ large, that our faculty started to think really creatively about just accepting that this is now a new reality.

We should probably be altering the way that we teach our classes, how we issue assignments to our students, and the types of assignments we issue: maybe getting back to more in-class exercises and in-class writing activities that really exercise those critical faculties of putting [00:03:00] down your own original thoughts in real time.

So I think a lot of those conversations are happening right now. Yeah. 

Alayna: And I mean, do you think this is a technology that's here to stay, or is it gonna be a passing trend?

Ben: Yeah, I think there's no question that ChatGPT, or AI in general, is here to stay. In fact, many of us are already interfacing with AI whether we realize it or not.

So it is certainly here to stay. I think what is most impressive about these most recent AI engines, or chatbots, is that this is just the first generation of the technology, and it's only going to get better and more sophisticated as we go along.

So I think the key questions to ask about ChatGPT now are really about its ethical implications, what its true capabilities are, and how a variety of industries are going to adapt to [00:04:00] this new technology.

Alayna: One of the first questions I had is: where does ChatGPT get its information?

Is it compiling its answer from several different sources, so that it's not quite copied and pasted from another source? And then who owns the copyright to those thoughts?

Ben: Yeah. So I'm identifying two different questions there.

Number one is the technical aspect of ChatGPT: where is it getting its information? And yes, to your point, it's drawing from basically the internet, publicly available sources of information. This practice, known as open-source intelligence, has been used for a long time.

Not only are US government agencies using it, but businesses use it to investigate potential new employees, searching for you on the internet to see what your digital footprint looks like online. So [00:05:00] that process has now been automated through these chatbots.

You provide a prompt to ChatGPT or the chatbot, and then it will do a really quick search and crawl the internet to gather information. But the unique thing is that, in theory, the chatbot should not just be copying and pasting; rather, it engages in a sort of creative exercise of creating syntax to respond to your prompt.

So there is, at the heart of it, sort of an original creation there, and in theory it should not be directly plagiarizing. That being said, there have been news stories showing that the chatbot does, or has, plagiarized at certain times.

It may be forming its own original [00:06:00] syntax in response to the prompt, but it may also be drawing certain aspects of its response from previously existing material. That's why it becomes a little complicated when we talk about the implications of plagiarism, for example.

Technically, there may be elements of its response that are plagiarized, even though the bot is creating its own response, which is in itself an original creation. So rather than thinking about this strictly as a plagiarism issue, I find it more helpful to think about it as an academic misconduct issue.

It undermines the ability of the student to do their own work, right? To show their own original thoughts. So that's what I think about the technical aspect. The second piece was the copyright issue. We can talk about that some more if you'd like, but I think about that a little bit separately. [00:07:00]

Alayna: Yeah. Do you mind going into what you think of the copyright aspect of it?

Ben: Yeah. In terms of copyright, I think the law is actually pretty clear on this one: non-human authors cannot receive copyright for their work. So that's number one. One of the more famous cases there, if you remember, involved the ape that had taken a selfie. Somebody was trying to argue that this non-human actor, the ape, could claim copyright of the photo itself rather than the photographer. The courts ultimately rejected that, ruling that non-humans cannot claim copyright for their creations. So that's where the law sits right now. I think we're gonna have a lot more interesting conversations about that specific question in light of the advances in AI, not only [00:08:00] chatbots but also those that create artwork or can digitally provide some sort of novel creation based on another person's work.

So how do unique creative expressions differ from, say, elements of remix culture, when a DJ mashes up other people's work and creates something new with it? Right.

Alayna: And I think right now, in its early iterations, it's not as elegant or sophisticated. It can't, you know, write a 10-page paper at this point. But I wonder, in the future, when it does develop that ability, if a student just copied and pasted directly from it, which would be silly, but if somebody tried to do that, how would those plagiarism checkers pick that up?

Ben: Yeah, I think [00:09:00] it's a really good question. When we think about technology and how it develops, these iterations of chatbots are really just singularly focused on one thing. So there's an AI like ChatGPT, which can respond to questions and, you know, engage in conversation, that sort of thing.

There's also AI that is specifically focused on composing music, or on creating pictures and artwork according to various styles and elements of style that can be drawn from. So this is where we are in this regard. But the way it develops is that not only do you have the AI engine, something like ChatGPT, but you also have AI detection software, which you can enter text into, and it will give you a check. It tells you, here's the probability that this was written or authored by an AI, right? Those things will continue to develop in tandem. So it's not like we need to run [00:10:00] everything that comes through ChatGPT through a plagiarism checker, although we can do that.

I mean, those plagiarism detectors are themselves a form of AI, right? They're drawing from the databases they have of other works, other creations, to determine whether those phrases and sentences appear somewhere else. From my end, as a college professor, when I run a student's paper through a plagiarism checker, what I get back is a highlighted version of the paper. It color-codes it and says, this appears to be from this source, and it tells me where it comes from. Then I'll get a score like, you know, 11% of this paper seems to be from other sources.

Or, in really severely bad cases, it's something like 60% unoriginal. So those technologies will, again, develop in tandem. There's a [00:11:00] variety of AI detection tools out there that run alongside technologies like ChatGPT.

Alayna: And what do you think of the implications these systems might have for spreading information and misinformation? I think right now people maybe don't know to take these with a grain of salt. Because it's so new and shiny, they might just take it at face value and not realize that these things are also putting out incorrect information.

Ben: Yeah. There's a lot to be said about that particular topic as well. One of the interesting things happening now is that ChatGPT itself, in my own experience with it, seems to have some pretty clear guardrails. But if you've been following the news recently, users or hackers have found a way to get ChatGPT to [00:12:00] turn into DAN, which means “Do Anything Now.” Right? And DAN is this alter ego of ChatGPT, which basically removes those guardrails. But there's a disclaimer provided that says, as I have now entered DAN mode, I will say things that are nonsensical. They could be hateful, they could be whatever. Okay?

So now people have been experimenting with DAN just to see how far they can push the technology. How crazy can it get, right? So, to your point about misinformation or disinformation, or quite frankly ChatGPT just being wrong about certain questions: I've also seen anecdotal evidence of people asking, ‘Who is this?’ about a made-up name, and ChatGPT says, “Oh, it was a famous French painter,” or something. Well, no, that's incorrect; the person doesn't exist, or there's no record that this person exists. So again, this is the first iteration of [00:13:00] this.

And I think that, you know, as we go forward, the key is going to be those broader ethical discussions and how it's going to be used. I think that's an open topic of conversation.

Alayna: Yeah, because I think the information it gives you could also be influenced by the question you ask. You know, if somebody asks ChatGPT, ‘What are the dangers of the Covid vaccine?’, it's gonna scour the internet for any source that writes about the dangers of the Covid vaccine, without giving a caveat about where that's coming from.

Ben: Yeah, absolutely. The technology itself is iterative in that it builds off of the inputs it receives to try to improve itself over time. Any AI engine is going to be reflective of the information that is put into it. That takes place not only in the coding and design of the technology itself, with whatever biases exist there, but also in the databases it's [00:14:00] being fed; it's going to be reflective of all of that. From a programming standpoint, though, you can also try to tweak the code to address some of those errors or, indeed, to put guardrails on the AI. Again, this is just version 1.0, so of course there are gonna be a lot of these concerns right now, but the technology will only get more sophisticated over time in being able to address these things.

And then the question becomes, you know, what responsibility do the creators of these particular technologies have in ensuring that they're being transparent about the design aspects of these technologies, as well as what is encoded into the technology itself? 

Alayna: Yeah. So what are your views on the pros and cons of using this technology in journalism itself? [00:15:00]

Ben: Typically, when I think about digital technologies, I find it helpful not to think about pros and cons. Instead, I think about it in terms of change, and technological change is inevitable.

And there's going to be this complex interplay of using it as a tool that can be useful for its specific functions in that process. However, there will be experimentation, there will be learning, there will be mistakes, and there will perhaps be injustices, right, because of this particular technology.

That being said, what I continue to go back to is just ensuring that the technology is developed, improved, and coded in a way that is transparent and responsible to the public it is being used within, and that we as the public can be aware of what [00:16:00] exactly is happening.

Now, companies that are developing these technologies most often do not want to make them publicly available, because that also opens them up to hacking attempts, improvements, copying, or, you know, intellectual property theft. All of that is because there's a market incentive to ensure that the creators of these technologies can profit off of their creations in various ways.

And the main important thing for me is just to ensure that it remains transparent and responsible, right? In a certain way, that's the realm of politics: digital politics, digital policy, and, you know, techno-politics.

Alayna: So I work in marketing, and I've already seen the ways it can be used for SEO, content writing, and just streamlining some of those processes. [00:17:00] And I can already see companies, rather than hiring SEO or marketing agencies, just kind of employing ChatGPT. I mean, do you see newsrooms doing that, or is there still gonna be value in the human writer?

Ben: Yes, I do think it's yes and yes. There will certainly be aspects of news production that will be automated.

In fact, there have already been newsrooms using AI engines to author basic news reportage. The Associated Press has one. I think the New York Times was developing their own, or they had one. And other newsrooms have been doing this elsewhere. That should not diminish the role of the writer, but I do think the role will change.

If you ask me to be an optimist [00:18:00] about how this is all going to go, I think it would be great if we could fully automate basic news production. And hopefully the revenue generated from that basic news production, and all of the advertising or whatever supports those things, whatever model you wanna think about, would free up human writers to work on more in-depth and investigative reporting.

Or, indeed, to engage in more creative forms of news storytelling. I think that's probably the most optimistic view of all of this. Right? I can also talk about the more pessimistic view if you'd like me to go that way; I just want this to appear in a way that gives my comments full context.

Since I've covered the most optimistic view, the most pessimistic is basically that AI will be used to [00:19:00] supplement or altogether supplant, that is, replace, human labor, and you will have newsrooms that are fully automated, that rely on reducing costs even further by not investing in human labor.

And for the purpose of simply propagating whatever stories they want, right? And it doesn't matter, because it's the AI. Then, to your earlier questions, who's ultimately responsible for those things? And do we have a responsibility to try to crack down on the supposed free speech rights of fictitious persons, which actually are sort of allowed under the First Amendment, right? The corporation, as a fictitious person, enjoys First Amendment rights under the law. So then all sorts of hyperbole and hateful speech can be put out there, if it's being [00:20:00] generated by AI.

And being put out there simply to earn clicks and subscriptions and all of that, it becomes simply another tool of ruthless for-profit companies trying to weaponize our own biases against us and further divide the public against itself.

Meanwhile, the business owner just reaps profits off of the whole thing.

Alayna: Yeah. Any last comments on the way you foresee this being used and growing within the J School?

Ben: In sum, I would simply reiterate my earlier point: we are still in the very early stages of AI, with ChatGPT as one iteration of that. It's only going to get more sophisticated, and it's not going away. So I think everyone needs to learn to adapt and live with these technologies as we go forward.

Again, I think it's most useful to [00:21:00] think about technological change as change, rather than saying it is an advancement or that it is ultimately ruining humanity or something like that. Both of those things can be true, and it's often a mix of the two. So thinking about it as change, and how it's going to change things going forward, that's a process that will play out over the next, you know, five to ten years, I would say.

Alayna: And hopefully, as this grows and changes, the public who consumes the media will develop more discernment, and the people who produce media with the assistance of ChatGPT and other AI will also have more discernment in how they use it.

Ben: I think this is a persistent issue.

It's basically about awakening a critical media literacy within the population, within producers and consumers of mediated content. And it's something that we do at the university level every year, right? And [00:22:00] we just continue to hope that, as these skills get out there, the public becomes a little more skeptical of the things they see, hear, or read, wherever it appears.


Alayna: All right. Thank you so much for meeting with me, Ben. I think that's all I have for you. Thanks for your insight into this. 

Ben: Yeah, no worries. Good luck with the project. Thanks so much. I appreciate it. All right. Thank you.
