Exploration #135
AI Is Not the Answer

Image Generated with DALL-E
Welcome to the Public Media Innovators newsletter. Feel like AI is stalking your job? We've got you covered this week with a column and a piece on prosocial AI. We've also got a new generative video tool from Adobe, the first legal opinion on AI and copyright, a school shooting turned into a powerful video game, and finally, actor Hank Azaria puts AI to the test of recreating many of his most beloved characters from The Simpsons.
But First…
There’s still time to register for PMI’s next webinar, Innovate with Current: The Secret History of Public Media, on February 20 at 2pET. Times are tough, I know, but public media has been through tough times before (and I’m not talking about ‘95). In a nutshell, this session looks at how public media innovates to rise to historical challenges, be they technological challenges, challenges in how to better represent America, or both!
Heading to South by Southwest?
It’s that time of year when we put out the call. If you are going to South by Southwest next month, please let me know. Amber, David and I will be there from PMI, and I believe Mikey Centrella from PBS’s innovation team will be there as well. If you’re going, let’s hang out and compare inspirations over tacos!
AI is Not the Answer to Every Problem
There’s a lot of attention being paid lately to a very antisocial approach to AI deployment: the development and adoption of tools intended to replace human workers.
Obviously, this trend isn’t a new one. We’ve long talked about robots taking over factories (and Amazon shipping centers), self-driving vehicles taking over trucking, and (I don’t know about you but) I certainly pause at the grocery store sometimes as I weigh self-checkout vs. supporting the human cashier line. As the exponential curve of automation extends, corporate titans are finally aiming it at a much wider swath of the service economy. We’ve been hearing that these innovations will impact “white collar” jobs, and that is starting to happen at scale.
Not everyone is rushing to jam AI into every corner of their operation. The CEO of Airbnb is taking a go-slow approach to integrating AI. That’s a very specific product and industry, but it shows that you don’t have to bolt AI onto your brand to recognize benefits and improve the customer experience. You just need to track the tech trends and make informed decisions based on experimentation and prototyping.
In part, some of these changes will come in the form of agentic AI, a heralded step toward artificial general intelligence (AGI) in which AI performs tasks made up of compound steps. We’re going to be talking more about various forms of agentic AI in the coming weeks here. There is a lot of buzz coalescing around that being the dominant AI breakthrough of 2025 (generative video is so passé now, apparently), and this is the type of evolution in AI tools that, if not handled properly, could ruin careers (both of staff ill-prepared to embrace adoption and of decision makers deploying untested tools as solutions to ill-defined problems).
Salesforce CEO Marc Benioff recently opined that today’s CEOs are the last to have led all-human teams, and that the era of AI workers is upon us. So did OpenAI CEO Sam Altman, in a recently published minifesto. Two years ago, it was cute to talk about Claude and ChatGPT as your virtual intern. And it was an apt metaphor, given the state of the technological art at the time. Today, the idea of an AI worker feels much darker. In the zero-sum zeitgeist of early 2025, that metaphor now implies that AI is coming for your paycheck.
The truth is, AI is not the solution to every problem. But there are prosocial approaches to deploying AI that can help guide us. These are values-based structures that strive to integrate human intelligence with artificial intelligence. Some of this should already be captured in an organizational AI policy grounded in your public media values. But it should also be represented in how you encourage those at your station to engage with AI tools.
In Nebraska, we recognized in 2023 that we needed to put people first. Our station policy explicitly states that. But as agentic AI starts to mature, we need to take a more proactive role in ensuring that people embrace these new tools in the most constructive ways possible. And I’ll have more on that in a few weeks.
Okay, on to the links.
Learn…
Innovate with Current: The Secret History of Public Media - Thursday, February 20, 2pET/11aPT - We all know the origin story of public media—the 1960s, Lyndon Johnson, the Public Broadcasting Act, the premiere of Sesame Street, and NPR’s Watergate broadcasts. But public media’s spirit of innovation, mission-driven storytelling, and fearless experimentation started long before that—over 100 years ago.
In this webinar, Current’s Mike Janssen leads a thought-provoking conversation with our panel about the long and necessary history of innovation within public media. Together, they’ll uncover how past innovators in public media navigated emerging technologies, shifting audience behaviors, and a constantly evolving understanding of what it means to create media in the public interest—struggles that feel familiar today.
Long before PBS and NPR, pioneers like us rose to the occasion. Join historians Josh Shepperd and Allison Perlman, sociologist Laura Garbes, and Black Public Media’s Executive Director Leslie Fields-Cruz in exploring stories of innovation that offer inspiration and practical lessons for shaping the next era of public media. You can register for this webinar here.
Think…
Why ‘prosocial AI’ must be the framework for designing, deploying and governing AI (Cornelia C. Walther - VentureBeat)
Key Line: “The popular imagination often pits machines against humans in a zero-sum contest. Prosocial AI challenges this dichotomy. [...] By integrating the precision of AI with the nuanced judgment of human experts, we might transition from hierarchical command-and-control models to collaborative intelligence ecosystems. Here, machines handle complexity at scale and humans provide the moral vision and cultural fluency necessary to ensure that these systems serve authentic public interests.”
Why It Matters: Walther provides an intriguing way to think about the integration of AI with non-profit values and structures. Her ideas shouldn't be foreign, but if you've been struggling to bridge the gap between the tech and our mission, this piece will help.
The Vatican Has Some Thoughts on AI (Evan Armstrong - Every)
Key Line: “AI is scary because it forces us to re-examine the big questions: What is intelligence, what is consciousness, or even, what does it mean to be human? It is thus unsurprising that the pope, something of a specialist on such existential questions, is weighing in. The Vatican released a document last week called "ANTIQUA ET NOVA: Note on the Relationship Between Artificial Intelligence and Human Intelligence", in which they do their best to help resolve some of this dread.”
Why It Matters: Diversity of thought is never bad, and a lot of the thoughts coming out about the future of AI are coming from people with a platform-amplified voice and vested stake in the outcome (see Sam Altman's piece a few stories down). The Vatican is no less vested, but for entirely different reasons (point of order: I'm at best agnostic). If you are Catholic and have not seen this, I'd suggest it's required reading. And if you are not Catholic but want a more humanist perspective, then this one is also probably worth your time.
But You Don’t Have to Take My Word For It: Read Antiqua Et Nova for yourself.
The Conversation is trying to make its academia-fueled model work for local news (Joshua Benton - Nieman Lab)
Key Line: "That’s the idea behind The Conversation Local, an initiative that celebrated its first birthday on January 1. In four markets across the U.S., a small team has been connecting experts at local universities to local issues and distributing their work for free to dozens of local news outlets — most in those markets, but sometimes beyond."
Why It Matters: I first discovered The Conversation when I started working in earnest on this newsletter, and as a result you've been exposed to some interesting takes on issues that impact public media. The idea that this could be an avenue for local journalism is an interesting one...especially since a number of us are licensed to universities.
Creativity Will Never Be Efficient—And That’s A Good Thing (Benjamin Wolff - Forbes)
Key Line(s): "That Hemingway refined his writing style by looking at a Cézanne painting was anything but typical. It would have been more efficient for him to find an example in another writer. But Hemingway was doing something new, not looking backwards, and not optimizing his creativity. [...] Throughout history, the most original thinkers and creators have often found inspiration outside their primary disciplines. [...] Hemingway was curious—and patient. He understood that new ideas emerge from unexpected places and take time to come together. He accepted that human thinkers get distracted, make wrong turns and are troubled by doubt. He worked hard at his craft but welcomed the daydream and a wander in the gardens."
Why It Matters: We talk about using AI to make us more efficient, but it can also have a role in augmenting creativity if you use it the right way. Feed it a piece of work or an idea for a story and ask it to challenge your assumptions, confront you with your biases or illuminate your blind spots.
Hollywood’s AI Blind Spot: The Fatal Mistake That Will Kill the Industry (Shelly Palmer)
Key Line: "Generative AI is not just another production efficiency hack. It represents an evolutionary leap in narrative experience. AI can dynamically generate stories with scripted characters but unscripted outcomes. It can create content that adapts in real-time to individual viewer preferences, making hyper-personalized storytelling a reality. This is a fundamental shift in how stories are told, consumed, and experienced."
Why It Matters: I would argue that gAI is many things at once. Generative AI is a potentially potent productivity hack. But Palmer's assertions in this piece are also correct: at some point we will go from prompting AI to experiences where AI is prompting us. That said, sports and concerts show us that the shared experience in real time still carries value. So, I don't see those going away. The real question becomes where and when the economics make that type of experience profitable. And it behooves public media to figure out how generative storytelling works in a nonfiction environment.
Know…
PBS poll finds broad Trump voter support amid GOP defunding push (Sara Fischer - Axios)
Key Line(s): “The new internal poll, conducted in conjunction with YouGov, shows 65% of Trump voters think the public broadcaster is either underfunded or adequately funded, according to a copy of the poll obtained by Axios. […] 82% of voters, including 72% of Trump voters, said they valued PBS for its children's programming and educational tools. [...] The poll included over 2,000 respondents, 792 of which said they voted for Trump."
Why It Matters: Because what we do matters.
BBC World Service to cut 130 roles to save £6m in the next year (Charlotte Tobitt - PressGazette)
Key Line: 'Foreign Secretary David Lammy announced an extra £32.6m for the BBC World Service for 2025/26 in November. But the BBC said that despite this “welcome uplift”, previous licence-fee freezes, global inflation “and the need for ongoing digital and technological upkeep have meant savings are necessary”.'
Why It Matters: It's useful to occasionally check in and see how public media is doing around the globe. This headline caught my eye. "Things are tough all over," as my mom likes to say.
Adobe’s Sora-rivaling AI video generator is now available for everyone (Jess Weatherbed - The Verge)
Key Line: "And because Firefly is trained on public domain and licensed content, it’s safe for commercial use. Adobe even describes its Generate Video tool as “production-ready” to entice users who want to use AI-generated videos in films without the risk of violating copyright protections."
Why It Matters: The generative video space is heating up, though not quite with the sense of excitement that I anticipated. I’ll have more stories on that next week. But, for now, remember that just because something is copyright-clear doesn't mean there aren't still ethical considerations to using generative video in our world. If this doesn't seem familiar to you, stop what you are doing and watch our webinar from December with PBS Standards' Talia Rosen (here are the slides).
But You Don’t Have to Take My Word For It: Read Adobe's announcement for yourself.
OpenAI Roadmap Update for GPT-4.5 and GPT-5 (Sam Altman - OpenAI via X)
Key Line: “We will next ship GPT-4.5, the model we called Orion internally, as our last non-chain-of-thought model. After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.”
Why It Matters: A lot of folks still cleave to OpenAI's products, as they were first-to-market (I know I do...though I'm rooting for Adobe as well). OpenAI's product offerings have been a hot mess, so I'd be fine if their product portfolio felt a little more intentional going forward.
Extra Credit: Altman recently penned a personal blog post speculating on the future of AI development: Three Observations. I find his observation about virtual coworkers more restrained than others (see below), and his observation about AI being like transistors feels eminently plausible. Still, there is a techno-utopianism below that surface that gives me pause.
Marc Benioff says that from now on CEOs will no longer lead all-human workforces—enter the new era of AI coworkers (Emma Burleigh - Fortune)
Key Line: "The idea is that these “digital workers” would handle complex, time-consuming jobs to free up time for human employees. But simultaneously, professionals see the potential for an AI takeover; these agents can compile and analyze data quickly, provide customer care, and help streamline team operations. If an algorithm can do the work, what does that mean for humans?"
Why It Matters: As I mentioned in my column above, the semantics of "digital workers" feels tone deaf right now. But if you are a GM, this is something to which you are going to need to attend. Amidst all the other things pulling at your attention, you need to task some of your team to begin working out not just a policy on AI, but an onboarding plan to ensure your workforce stays at the cutting edge by creating an AI toolbox.
Thomson Reuters Wins First Major AI Copyright Case in the US (Kate Knibbs - Wired)
Key Line: 'Notably, Judge Bibas ruled in Thomson Reuters’ favor on the question of fair use. The fair use doctrine is a key component of how AI companies are seeking to defend themselves against claims that they used copyrighted materials illegally. The idea underpinning fair use is that sometimes it’s legally permissible to use copyrighted works without permission—for example, to create parody works, or in noncommercial research or news production. When determining whether fair use applies, courts use a four-factor test, looking at the reason behind the work, the nature of the work (whether it’s poetry, nonfiction, private letters, et cetera), the amount of copyrighted work used, and how the use impacts the market value of the original. Thomson Reuters prevailed on two of the four factors, but Bibas described the fourth as the most important, and ruled that Ross “meant to compete with Westlaw by developing a market substitute.”'
Why It Matters: We've been talking about the legal vagaries of gAI tools for close to two years. We still have a ways to go before getting a truly clear picture, but thanks to opinions like this and clarity from the U.S. Copyright Office, answers to the question of legal use in content are beginning to come into focus.
But You Don’t Have to Take My Word For It: Read Judge Bibas’ opinion and the Copyright Office’s recent report on Copyright and Artificial Intelligence for yourself.
AI chatbots unable to accurately summarise news, BBC finds (Imran Rahman-Jones - BBC)
Key Line: “In the study, the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer. It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants. It found 51% of all AI answers to questions about the news were judged to have significant issues of some form. Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.”
Why It Matters: I'm in no way trying to make excuses for the hallucinations (AI is not the answer to every problem), but studies like this one from December take time and the frontier models have already evolved. I'm still not at a point where I think you should be using AI for research without fact checking, but I also haven't had time to do extensive tests with the newest research tools on the market. And I am finding I rely more and more on Google's AI answers to my questions.
But You Don’t Have to Take My Word For It: Read the BBC report and the reaction from the CEO of BBC News and Current Affairs for yourself.
AI might already be warping our brains, leaving our judgment and critical thinking ‘atrophied and unprepared,’ warns new study (Chloe Berger - Fortune)
Key Line: '“Across all of our research, there is a common thread: AI works best as a thought partner, complementing the work people do. When AI challenges us, it doesn’t just boost productivity; it drives better decisions and stronger outcomes,” one of the authors, Lev Tankelevitch, Sr. Researcher, Microsoft Research, said in an emailed statement to Fortune. Noting that there’s been some evidence where AI enhances critical thinking when it was led by humans and guided by educators, Tankelevitch adds that “on the flip slide, our survey-based study suggests that when people view a task as low-stakes, they may not review outputs as critically.”'
Why It Matters: How you use gAI tools matters, and this shouldn't be shocking. If you use a car to get around, vs. a bike or walking, you have to find other ways to work out those muscles if you want to stay fit. There's no reason to think this should be different for think-work either, so simply pay attention to how you are using these tools and pay attention to the quality of the work you are generating. Does it remain up to your standards? What could you do to keep it at that level?
But You Don’t Have to Take My Word For It: Read the report for yourself.
Lawyers Caught Citing AI-Hallucinated Cases Call It a 'Cautionary Tale' (Samantha Cole - 404 Media)
Key Line: ‘“Our internal artificial intelligence platform ‘hallucinated’ the cases in question while assisting our attorney in drafting the motion in limine,” the law firm said in a filed response. “This matter comes with great embarrassment and has prompted discussion and action regarding the training, implementation, and future use of artificial intelligence within our firm. This serves as a cautionary tale for our firm and all firms, as we enter this new age of artificial intelligence.”’
Why It Matters: When I first saw this story, I thought it was actually from 2022. But, alas, no. In this instance, the firm in question had an "internal artificial intelligence platform." More of us will be developing or acquiring our own in-house platforms in the coming years, and maybe those will minimize their hallucination rates. But "people first, people last" is still a good foundation for your organization's AI usage policy.
A ‘True Crime’ Documentary Series Has Millions of Views. The Murders Are All AI-Generated (Henry Larson - 404 Media)
Key Line: "I was curious about how his whole operation worked. Paul is not the first person to lie on the internet, but it felt like he was lying in a brand-new way. Paul had found his own niche within the AI-generated slop ecosystem that 404 Media has reported on for the last few months. He believed people wouldn’t want to watch his videos if they knew they were fake, and that he wasn’t any worse than the competition. “True crime, it’s entertainment masquerading as news […] that's all there is to it,” he said."
Why It Matters: This won't be the last time that fake documentaries (AI slop, as they say) water down the ecosystem. This will be something that local public media companies will definitely need to contend with as we get into the 2030s.
Airbnb CEO says it’s still too early for AI trip planning (Sarah Perez - TechCrunch)
Key Line: '“Here’s what I think about AI. I think it’s still really early,” Chesky said. “It’s probably similar to… the mid-to-late ’90s for the internet.” He noted that other companies were working on integrations around trip planning, but that he thinks it’s too soon for AI trip planning. “I don’t think it’s quite [a] bit ready for prime time,” the CEO added.'
Why It Matters: Going slow is a legit strategy for AI-product integration, if you are doing it with intention, not procrastination. And there are lots of ways you can integrate AI that don't show up directly to your main consumers. In our world, that translates to the fact that our content doesn't necessarily have to be AI-infused for us to still recognize the benefits of AI adoption.
How a School Shooting Became a Video Game (Simon Parkin - The New Yorker)
Key Line(s): “A school shooting might be considered a tasteless subject for a video game, if not an entirely taboo one, had The Final Exam not been designed in collaboration with Manuel and Patricia Oliver. Their son, Joaquin, died on Valentine’s Day, 2018, in a mass shooting at Marjory Stoneman Douglas High, in Parkland, Florida. […] Manuel told me that Joaquin loved video games; he would spend hours building a character in FIFA who looked just like him, and would plead to stop at the video-game store on the way home from school. A game seemed like an apt way to honor his memory. Moreover, in video games, one plays an active role in the drama; the Olivers thought that a game could, in turn, be a more effective educational tool than a piece of passive media.”
Why It Matters: This artistic statement, in the form of a video game, is the type of content public media could be making in service of its communities. Games are as diverse as humanity’s reasons for playing them. Some are cozy, some protest injustice. We need to reframe public media to include games for general audiences.
And finally…
🎥"There's A Lot More to Voicing a Character Than Just Opening Your Mouth" (Hank Azaria - NY Times via LinkedIn)
And finally, the NYT's experiments with Azaria are not dissimilar to the ones we've done with ElevenLabs software at Nebraska Public Media. In certain instances, you can effectively mimic a human. But "characters" are an entirely different matter, as this video shows.
Have a creative, productive week!

Image Generated with Adobe Firefly 3