Exploration #134

Taking a Deep(Seek) Breath

Image Created with Adobe Firefly 3.0

Welcome to Public Media Innovators. In this exploration we burst the hype around DeepSeek, consider Ethan Mollick’s latest gAI recommendations, explore strategies to help your organization avoid zombie gAI policies, and finally, check out an SNL takedown of AI.

But First…

Thanks to all of you who tuned in for PMI’s latest webinar, Innovate with Current: Visions for the Future of Public Media. We had a great turnout (our third best, I think), and many more have said they intend to watch later or share it with others. If you missed it, or want to share it, you’ll find it here.

You can also now register for the prequel webinar, Innovate with Current: The Secret History of Public Media. It’s Thursday, February 20 at 2pET/11aPT. You can read more about it in the Learn section below.

Taking a Deep(Seek) Breath

By now you’ve probably heard about DeepSeek (and if not, Charlie Guo’s FAQ post in Artificial Ignorance has you covered). On Monday, January 27, DeepSeek was blamed for sucker-punching the semiconductor sector of the stock market (as well as AI-adjacent industries like crypto and uranium mining) with the release of a chatbot and, a day later, an image generator that were supposedly (but not really) created for a fraction of the dollars invested in ChatGPT. Even more potentially disruptive, they offered the model for free.

Cheaper processes mean fewer resources needed to make profits, and the market had been riding a wave essentially built on selling wheelbarrows, shovels, and pickaxes to those arriving for the gold rush. A market that was looking for an excuse to take some profits off the table got a late Christmas present in the shape of a cute whale icon.

As interested as I am in DeepSeek, I did not sign up. As I posted on LinkedIn, their privacy policy gives me pause. It states they automatically collect "keystroke patterns or rhythms". Specifically, "We collect certain device and network connection information when you access the Service. This information includes your device model, operating system, keystroke patterns or rhythms, IP address, and system language." It doesn't explicitly limit this to its site either. (FWIW, OpenAI and Anthropic do not state they collect keystroke patterns or rhythms in their policies.)

Now pair that with the fact that the user data is all stored in China. TikTok went dark for a while recently because US officials are concerned about the influence of the Chinese government on the platform and on its access to our data. This site openly says it’s taking “certain device and network connection information” and storing it in Beijing. If you’re one of the stations that’s been forbidden from having a TikTok channel, this ain’t gonna fly either.

And it didn’t. The US Navy banned it at a rate of knots. So did the US Congress, NASA, and the governments of Texas and Taiwan. On top of that, security researchers found DeepSeek had “left one of its critical databases exposed on the internet.” And in terms of delivering quality information, DeepSeek had an 83% fail rate on NewsGuard’s Red Team audit. So I’m sure others will be following suit.

Though it’s usually as much subtext as text, AI nationalism is already well-established policy across many regions of the globe. But just as interesting is the ongoing tension between “open” and “closed” systems. Today this is playing out between open-source models like DeepSeek’s and Meta’s Llama (which not everyone agrees is open), and proprietary models like those from OpenAI and Google. Open v. closed is the dramatic tension in tech that has defined our lives for the last 30 years, with companies like Google and Meta changing their strategies based on the theater of battle (only Apple has been consistently “closed” in its strategy). The past few weeks, it’s been DeepSeek driving that narrative, and now even OpenAI is thinking about embracing the other 2/3 of its name.

The pace of AI-derived change has primed us for an audible gasp every few weeks. So when the next DeepSeek hits, breathe your way through the underdog and ‘Innovator’s Dilemma’ hype. This isn’t like the day ChatGPT was released. And that day won’t be replicated until an artificial general intelligence announces itself independent of its creators.

Learn…

Innovate with Current: The Secret History of Public Media - Thursday, February 20, 2pET/11aPT - We all know the origin story of public media—the 1960s, Lyndon Johnson, the Public Broadcasting Act, the premiere of Sesame Street, and NPR’s Watergate broadcasts. But public media’s spirit of innovation, mission-driven storytelling, and fearless experimentation started long before that—over 100 years ago.

In this webinar, Current’s Mike Janssen leads a thought-provoking conversation with our panel about the long and necessary history of innovation within public media. Together, they’ll uncover how past innovators in public media navigated emerging technologies, shifting audience behaviors, and a constantly evolving understanding of what it means to create media in the public interest—struggles that feel familiar today.

Long before PBS and NPR, pioneers like us rose to the occasion. Join historians Josh Shepperd and Allison Perlman, sociologist Laura Garbes, and Black Public Media’s Executive Director Leslie Fields-Cruz in exploring stories of innovation that offer inspiration and practical lessons for shaping the next era of public media. You can register for this webinar here.

Earn…

Google.org Accelerator: Generative AI Open Call (Google) - The deadline for this one is Tuesday, Feb 11, 2025, at 1:00 AM (I'm presuming Pacific Time). H/t to Deb Sanchez for making me aware of this one. The following is from their application page: "We are particularly interested in proposals from all over the world leveraging gen AI technology to solve problems in impactful ways across Google.org’s focus areas: Knowledge, Skills, & Learning...Scientific Advancement...Resilient Communities.” Reading through the webpage about the Accelerator, I think many of us could contribute in the first and third categories. You can learn more here, and apply here.

Think…

Three Strategies for Responsible AI Practitioners to Avoid Zombie Policies (Abhishek Gupta - Tech Policy Press) 
Key Line: "Show me the incentives, and I’ll show you the results, or at least something to that effect, is one way we can think about where and when we can expect zombie policies to rear their ugly head. We must begin by fostering a culture that values flexibility and responsiveness, encouraging teams to adapt quickly to new information and changes in the AI landscape. In particular, a culture that encourages staff to change their minds frequently, especially in light of new information, is a valuable trait that must be embedded deep into the organizational culture."
Why It Matters: Hopefully by now your organization has a v1 policy on the appropriate use of generative AI (please, I beg you, email if you need help getting v1 off the ground). But one version is not enough. The three solutions to zombie policies covered in this piece are attainable by all public media entities.

Why There’s No ‘Right’ Way to Use AI (Rhea Purohit - Every) 
Key Line: "There’s no clear way to know if I’m using AI the “right” way. The beauty of LLMs is also their curse—there is no one, true way to get the most out of the technology. Add to that the possibility that the answers are objectively wrong because the models are prone to hallucinations. That nagging feeling I had about not being “good” at AI was about understanding the shades of gray it exists in."
Why It Matters: Purohit isn't a techie. Her perspective is that of an AI skeptic working hard to test her own skepticism. So, her point of view feels familiar, especially if you keep hearing about AI but 'just don't get it.'

A New Book of the Startup Bible (Evan Armstrong - Every) 
Key Line: "A founder’s initial job is not to even have an idea. It is to discover inflection points.... Inflection theory explains why the lean startup methodology in isolation fails. It shows that a startup is movement-dependent. It does not matter how well you follow some process. If there isn’t a shift in the way the world functions, there is nothing meaningful enough for a founder to take advantage of."
Why It Matters: "Lean" startup methodology has been all the rage for well over a decade now (if you went through QCatalyst's well-meaning Digital Culture Accelerator program, you were exposed to it). But ample data now shows that "Lean" doesn't yield the paint-by-numbers success it promises on the label (as those of you who went through DCA may be able to attest). This piece, a book review, introduces a new way to look at startups. I'm adding it to my reading list. You may want to as well.

Post-Inauguration Day Developments (Ernesto Aguilar - OIGO) 
Key Line: "What makes this moment different? In 2025, we are seeing a confluence of political will and structural vulnerability. With leaders opposed to perceived views being among the louder voices and unanimity of opinion seemingly a central value, traditional defensive strategies – such [as] mobilizing support from rural districts – may prove less effective than in previous decades."
Why It Matters: Ernesto is always a strong voice for proactive leadership that embraces change that leads to better communities. If you haven't done so already, his analysis of this post-inauguration moment is a good place to start thinking about fleshing out those "Plan B's" your organization has been quietly discussing since early November.

🎧Democrats Are Losing the War for Attention. Badly. (Ezra Klein + Chris Hayes - The Ezra Klein Show) 
Key Line [from Chris Hayes]: "Steve Jobs had this saying: It’s not the customer’s job to know what they want. And I do think there’s a little bit of Democratic obsession with numbers and market research: What are the numbers saying? And part of this is just innovation and improvisation and trying new stuff that hasn’t been tried before, as opposed to backing up what you think the expectation is. And that’s really true, I think, with attention entrepreneurship. It’s not just: What does best in the algorithm? And not just: Look at the data. But to try new things."
Why It Matters: Ostensibly this is a post-election postmortem conversation, but really, it's much more than that. Regardless of your politics, there’s a reflection here that helps set parameters around the 'laws of physics' that seem to govern the media landscape these days. As you listen to it, read it, or process the text through gAI, think about the question: how does public media compete in the attention economy without sacrificing our values?
Extra Credit: The sympathetic press push also saw Chris Hayes showing up in The Atlantic, The New York Times, and Semafor.

Know…

Which AI to Use Now: An Updated Opinionated Guide (Ethan Mollick - One Useful Thing) 
Key Line: "As you can see, there are lots of features to pick from, and, on top of that, there is the issue of “vibes” - each model has its own personality and way of working, almost like a person. If you happen to like the personality of a particular AI, you may be willing to put up with fewer features or less capabilities. You can try out the free versions of multiple AIs to get a sense for that. That said, for most people, you probably want to pick among the paid versions of ChatGPT, Claude or Gemini."
Why It Matters: A year ago, Ernesto Aguilar, Mikey Centrella and I wrote a user's guide to chatbots for Current. It was out of date within a month. So, I appreciate that Mollick has made this a semi-annual ritual. There's no one outside of public media whose analysis on AI works as well for public media as Mollick's does.

Inside a network of AI-generated newsletters targeting “small town America” (Andrew Deck - NiemanLab) 
Key Line: “Good Daily currently produces no original reporting, but Henderson does not rule out that possibility and considers his use of automation a model for the future of rural news. ‘If we can solve the hardest challenges — technology, growth, monetization — small teams (even one-person teams) could run profitable local news operations in every town across the country,’ he said. For the moment, the 350-plus local news teams are still operated by the same person. Most readers are still in the dark about who that person is.”
Why It Matters: It's unfortunate that sometimes innovation can feel shady AF. You may be tempted to feel outrage at what this guy is doing and how he is doing it, and I wouldn’t disagree with you. But I would encourage you to set that aside and marvel at the fact that he could do this. Now imagine what an ethical, automated (or even semi-automated) product like this could accomplish in the hands of public media.

Influencer collaborations: Lessons from four months of local news experiments (Kevin Loker and Samantha Ragland - American Press Institute) 
Key Line: "It can be easy to play a quick round of pin the tail on the influencer based on follower count or post likes and comments. The higher the number, the better to work with, right? Not exactly. Take the time to evaluate your influencer landscape, to follow them and engage with their content, to listen for their values — especially those that align or mirror your news values — or even to invite them into your newsroom."
Why It Matters: The 'pivot to influencers' is the natural overreaction to the 2024 election news cycle. Adapting to this change in the media landscape should be a part of our strategies for 2025, but to succeed it needs to be done in a way that doesn't compromise our values. Fortunately, there are people exploring how this can be done.

Introducing ChatGPT Gov (OpenAI) 
Key Line: "Agencies can deploy ChatGPT Gov in their own Microsoft Azure commercial cloud or Azure Government cloud on top of Microsoft’s Azure OpenAI Service. Self-hosting ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance requirements, such as stringent cybersecurity frameworks (IL5, CJIS, ITAR, FedRAMP High)."
Why It Matters: For those of you in IT, this may be the balance between gAI and cyber security that you've been seeking.

US sues TikTok alleging it knowingly profits from child abuse (Ricky Sutton - Future Media) 
Key Line: "Utah Attorney General Sean Reyes claims its live streaming feature, TikTok LIVE, is a wild west where predators roam freely and children are dangerously exposed. His lawsuit describes TikTok Live as a 'virtual strip club' where teen victims 'spread their legs and flash body parts for virtual gifts'. Utah state judge Coral Sanchez overruled TikTok’s attempt to keep the allegations secret just days ago."
Why It Matters: We've talked about the dangers of Roblox here in the past, so I'm posting this in the same vein. Subscribers who are parents may have already encountered this tidbit, but if not it's good to be aware.
Extra Credit: Ernesto has started a new newsletter on LinkedIn, AI and Public Media Futures. I’m sure I’ll be featuring posts here, but I’m also sure it’s worth a subscription if you follow newsletters on LinkedIn.

Deepfake videos are getting shockingly good (Kyle Wiggers - TechCrunch) 
Key Line: "Deepfaking AI is a commodity. There’s no shortage of apps that can insert someone into a photo, or make a person appear to say something they didn’t actually say. But most deepfakes — and video deepfakes in particular — fail to clear the uncanny valley. There’s usually some tell or obvious sign that AI was involved somewhere. Not so with OmniHuman-1 — at least from the cherry-picked samples the ByteDance team released."
Why It Matters: With no major elections on tap for 2025, the buzz about deepfakes has faded into the background. But this tech keeps improving and we need to stay aware of the state of the art.

‘Millennial Careers At Risk Due To AI,’ 38% Say In New Survey (Bryan Robinson - Forbes) 
Key Line: “Danny Veiga, founder and chief AI strategist at Chadix, believes these findings are a wake-up call for all professions and generations. And he cautions that, while the risks are real, so are the opportunities for those who embrace strategic upskilling and adaptability. ‘These findings shouldn't be interpreted as a death knell for Millennial careers,’ Veiga emphasizes. ‘Rather, they highlight the urgent need for targeted upskilling and strategic career pivoting within this demographic....The key is to embrace what makes us uniquely human—creativity, adaptability and leadership—and use AI as a tool to amplify those strengths.’”
Why It Matters: While Millennials are the focus of the headline, most generations (save Boomers) have at least a 1/5 chance of facing disruption, according to this survey. The best thing you can do as an employee is invest in yourself and experiment with these tools. And the best thing you can do, if you are leadership at your organization, is to support your teams in experimenting and upskilling.

Meta says this is the make or break year for the metaverse (Wes Davis - The Verge) 
Key Line: "We [Meta's Reality Labs] have the best portfolio of products we’ve ever had in market and are pushing our advantage by launching half a dozen more AI powered wearables. We need to drive sales, retention, and engagement across the board but especially in MR. And Horizon Worlds on mobile absolutely has to break out for our long term plans to have a chance."
Why It Matters: I read this as a rhetorical burning of the boats, trying to inspire the Reality Labs team with fear. There are two issues here. First, the “metaverse” isn't Horizon Worlds; though HW may ultimately be a part of the metaverse, the metaverse will never be one platform. Second, as such, the metaverse requires standards and exchange between a wide variety of platforms. Until you can fully play in Fortnite and then switch over to Horizon Worlds with all the same assets and parity of identity, the metaverse won't have arrived. And there's no way that happens in 2025.

And finally…

Everything wrong with the AI landscape in 2025, hilariously captured in this ‘SNL’ sketch (Joe Berkowitz - Fast Company) 
—And finally, Timothée Chalamet and Bowen Yang execute a cute takedown of AI on SNL.
—Or, just jump straight to the video.

Have a creative, productive week!

Image created with Dall-E 3
