Exploration #138

Move Over Data, Make Way for Vibes

Image Generated with ChatGPT 4o

Hi all. This week we’re catching up on a whole mess of AI news, featuring OpenAI (new flagship model and image library), Claude (research and voice), and Adobe (generative extend for video), we’ve got Poynter’s survey of news consumers on AI and a great piece on what to do when your CEO requires AI be used in your organization.

But First…

Well, it’s been a minute. Great to see some of you at PMVG TechConnect, just prior to NAB. TechConnect was my first presentation of the year, which is always a great time to reflect. The audience felt like they knew they needed to make a note of AI’s latest updates, but I got a real sense that the “wow” factor of exploration has faded from the topic of AI.

Maybe I’m projecting a little. Going into the presentation, talking about AI felt a bit like “playing the hits.” But the reality is that we’ve all adapted to the notion that these models can synthesize media. What’s next is much less sexy. It’s process refinement, workflow improvements and infrastructure. But it’s also where AI truly begins to take root and effect culture change.

So, if you’re feeling like AI has gone from opportunity to a grind, you aren’t alone.

Move Over Data, Make Way for Vibes

If I could have boiled down the zeitgeist from South by Southwest into a word cloud, the largest, bolded word in the center would have been “vibes.” It was so prevalent in the ambience that those of us there started making a running joke of it. By day three, anyone who said “vibes” triggered an immediate echo from our little public media cohort. “Vibes,” we’d respond quietly in unison while exchanging a knowing look and a wry chuckle.

It isn’t a new term, but the world of AI has adopted vibes to define the imprecise alternative to precision- or efficiency-oriented creation. The most common usage I’ve seen is “vibe coding,” a term coined only this past February to describe a process in which natural language prompts are used to get a large language model (LLM) to generate functional software code. Tell the LLM what you want to do, give it your vibe, and it will then generate a more precise abstraction of that vibe.
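In code terms, the vibe-coding loop is simple: wrap a loose, plain-language description in a prompt and hand it to a model, which returns runnable code. Here’s a minimal sketch with the LLM call stubbed out; the function names are illustrative, not from any real tool, and in practice `generate` would call an actual model API.

```python
# A minimal sketch of the "vibe coding" loop: natural language in, code out.
# The model call is stubbed; a real version would hit an LLM API.

def build_vibe_prompt(vibe: str) -> str:
    """Wrap a loose, natural-language 'vibe' in enough context
    for a code-generating model to act on it."""
    return (
        "You are a coding assistant. Write a small, working program "
        f"that matches this description: {vibe}\n"
        "Return only the code."
    )

def vibe_code(vibe: str, generate) -> str:
    """One round of vibe coding: describe intent, get code back.
    `generate` is any callable mapping a prompt string to model output."""
    return generate(build_vibe_prompt(vibe))

# Stubbed model, for illustration only.
fake_llm = lambda prompt: "print('hello from the vibe')"
print(vibe_code("a script that greets the user warmly", fake_llm))
```

The point of the abstraction: the human supplies intent, the model supplies precision, and iteration happens by refining the vibe rather than the code.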

The construct is quickly spreading to other types of creation, such as “vibe directing” or “vibe filming.”

But it was Zoe Scaman’s piece “Primal Intelligence” that drew the connection to larger, macro-cultural trends for me. She says, “After drowning in data and still making spectacularly bad collective decisions, we've rediscovered that intuition deserves more than tokenistic respect…What we're witnessing is intuition reframed as evolutionary, experiential intelligence. A rejection of excessive quantification. A cultural hunger for mystery, ritual, embodiment, and meaning. Our distinctly human magic emerging as our competitive edge. Am I suggesting we throw rationality out the window? Hell no. The sweet spot has always been at the intersection of data and intuition, of knowing and feeling. But the scales have been tipped too far in one direction for too long, and the correction was inevitable.”

Her piece goes on to discuss how brands are embracing ritual and fable over rationale and formula, and it is especially worth a read if you are a marketer. Her larger theme resonated with me because, as a recovering television programmer, I can tell you that the art of TV programming is the combination of data with a gut feeling about what your audience will and won’t ‘vibe’ with.

But there’s a dark side here as well. Vibes correlate to a larger trend in society, a rejection of empiricism. And this gives me pause. Facts and data matter and they deserve to remain part of our strategic toolkit, just as AI increasingly becomes part of that toolkit. Scaman is on to something in stating that balance is the key. We’ve all seen consultants try to lead PBS and NPR stations down the path of data-fetishism. Most times it seems like it only partly takes hold, because creatives will always default to intuition in the process of creation.

A good team has both data-centric voices and people who go with their gut. As we move through the latter half of this decade, we all need to make a little more room for vibes as we rise to the challenge of serving our communities.

Okay, on to this week’s links.

Learn…

AI for Journalists and Content Creators (Thursday, April 17, 2025, 1:00pET) 
There’s still time to join KQED’s Ethan Toven-Lindsey and Ernesto Aguilar for a session focusing on how journalists and other public media content creators can strategically embrace AI. They’ll share KQED’s approach to AI experimentation, along with real-world tools and workflows that are already making an impact.

In this session, you’ll learn:

  • How KQED developed an AI policy guideline specific to public media's mission in their community.

  • How KQED’s ‘AI working group’ operates to support ethical and effective AI adoption.

  • Practical AI tools in action, including a Slack bot for journalists to brainstorm headlines, a transcription tool that helps identify highlights from KQED’s public affairs talk show, and a metadata and headline generator for newscasts.

  • Actionable strategies for fostering ways to experiment and think about AI for the long-term.

Whether you’re just beginning to explore AI or looking for practical newsroom applications, this session will provide actionable examples and implementation strategies that can help your station experiment—no matter your size or resources. Even if you can’t make the live webinar, you can get the video after the fact if you register here.

Applied AI for Public Media: Marketing, Social, and Digital Strategy 
Tuesday, May 13, 2025 12 noon ET
Looking to make your workflows faster, your content sharper and your social media posts more impactful? This hands-on session is designed for public media professionals working in marketing, communications, and digital strategy — from social media managers and digital content curators to marketing leads on any-sized team.

Join the Public Media Innovators and MarCom PLCs as we showcase real-world examples of how our peers are using AI today, including how to:

  • Generate SEO-friendly descriptions for web and video platforms

  • Draft social media posts that fit your tone and goals

  • Create alt text for images to support accessibility

You’ll walk away with downloadable prompts, links to custom GPTs, and plenty of inspiration to take back to your teams. This is a practical session built for doers—bring your curiosity, your questions, and maybe even that caption you’ve been staring at too long. Register here.

Think…

Your CEO Just Said ‘Use AI or Else.’ Here’s What to Do Next. (Alex Duffy - Every) 
Key Line: "This five-step guide is for anyone trying to understand what Shopify’s new AI expectations mean in practice. What does “reflexive AI usage” look like? How do you go from feeling behind to feeling fluent? Most importantly, how do you make AI work for you—without it feeling like more work?"
Why It Matters: While you aren't likely to run into a GM in public media who is as brass tacks as Shopify CEO Tobi Lütke, this guide also provides a path for self-starters who want to jump into AI use ahead of any corporate pressure.
Extra Credit: Read Lütke's internal memo: Reflexive AI usage is now a baseline expectation at Shopify 

The origins of Patch’s big AI newsletter experiment (Andrew Deck - NiemanLab) 
Key Line: "Patch’s newsletters are finally everywhere, or close to it, but not every former employee I spoke to saw this AI strategy as furthering the network’s local news mission. “Patch everywhere sounds great on the surface, but the pushback is it’s kind of like opening 1,000 stores. You can have 1,000 storefronts, but without the products inside it, it’s just a storefront,” said Feser, the former Patch project manager. Without a real person at the center with real ties to the community, what is Patch actually offering in 30,000 towns?"
Why It Matters: The idea of a local newsletter that has last-touch editing by a human but which is largely aggregated by AI in its assembly is an idea that's been bouncing around my brain for a few months. So, it's interesting to see this experiment playing out. There's an opportunity for local public media here, I think.

Reimagining leadership in the AI era (Ernesto Aguilar - AI and Public Media Futures via LinkedIn) 
Key Line: "...constant change demands adaptive thinking and lifelong learning; that championing ethical AI leadership can drive innovation and fair outcomes; and how balancing tech solutions with human-centered empathy may empower teams to innovate and adapt."
Why It Matters: Ernesto, who is speaking at our webinar this coming Thursday, is part of an AI in Journalism leadership cohort and he’s documenting that journey on LinkedIn. So, if you haven't started following his posts there, you should. I'll link to a few more below. Click over and show him some social media love.

Know…

What news audiences can teach journalists about artificial intelligence (T.J. Thomson, Ryan J. Thomas, Michelle Riedlinger and Phoebe Matich - Poynter) 
Key Line: "...our research shows that when drafting or updating a news outlet’s generative AI policies, newsroom managers and other leaders should consider questions around temporality, perceptions of trust and authenticity, fidelity, and audiences’ awareness of and familiarity with various AI tools and processes."
Why It Matters: There are lots of good tips in this piece, but the bottom line here is that you need to have a policy and you need to communicate that policy.

Generative Extend arrives in Premiere Pro 25.2 (David Winter - RedShark News)
Key Line: "The tool works for all sorts of scenarios. From wide shots to handheld camera moves, it even picks up the motion of jib shots and focus pulls without missing a beat. In one example, a closing shot of riders moving into the sunset was extended by two seconds. You’d never have known it wasn’t all captured in-camera."
Why It Matters: Generative video may not generate whole films at the drop of a prompt, but it's good enough to patch timelines. Your station's AI policy should provide guardrails on when this use is ethical.

YouTube rolls out a free AI music-making tool for creators (Sarah Perez - TechCrunch) 
Key Line: “After the tracks are generated, creators can download them and add them to their videos. YouTube notes that the music is free to use, so creators will not have to worry about copyright claims.”
Why It Matters: Much as Adobe did with copyright-free images in 2023, YouTube may be able to elbow its way into the generative music space. Of course, they don't say whether the music is copyright-free outside of the platform. But here's hoping.
Related: Amanda Silberling reports for TechCrunch that YouTube Shorts adds Veo 2 so creators can make GenAI videos 

OpenAI debuts its GPT-4.1 flagship AI model (Jess Weatherbed & Emma Roth - The Verge) 
Key Line: 'The launch comes as OpenAI plans to phase out its two-year-old GPT-4 model from ChatGPT on April 30th, announcing in a changelog that recent upgrades to GPT‑4o make it a “natural successor” to replace it. OpenAI also plans to deprecate the GPT-4.5 preview in the API on July 14th, as “GPT‑4.1 offers improved or similar performance on many key capabilities at much lower cost and latency.”'
Why It Matters: OpenAI's product portfolio is so confusing, it's hard to know the right tool for the job. But essentially, for multimodal model tasks, lean into 4.1 (not 4.5 which apparently isn't as advanced). And for tasks requiring a reasoning model, lean into o3.
Related: In other OpenAI news, Maxwell Zeff at TechCrunch reports: OpenAI updates ChatGPT to reference your past chats 
Also related: ChatGPT Image Library 

Claude takes research to new places (Anthropic) 
Key Line: "Today, we’re introducing two new capabilities that make Claude a more informed and capable collaborator — Research and a Google Workspace integration that connects your email, calendar, and documents to Claude. With Research, Claude can search across both your internal work context and the web to help you make decisions and take action faster than before."
Why It Matters: While I wish Anthropic would stay more current with the features of the frontier models, that's just not their style; they do always get there in the end.
Related: This reporting from Emma Roth at The Verge: Anthropic is reportedly launching a voice AI you can speak to 
Also related: This reporting from Kyle Wiggers at TechCrunch: Google to embrace Anthropic’s standard for connecting AI models to data 

Vertex AI is now the only platform with generative media models across video, image, speech, and music (Warren Barkley - Google Cloud) 
Key Line: "Today, we’re continuing to invest in generative media by adding Lyria, Google’s text-to-music model, to Vertex AI in preview with allowlist. With the addition of music, Vertex AI is now the only platform with generative media models across all modalities – video, image, speech, and music. This means you can build a complete, production ready asset starting from a text prompt, to an image, to a complete video asset with music and speech."
Why It Matters: While you have to have access to Google's Cloud services to get Vertex, Google is at the top of the heap when it comes to synthetic media right now. I'm not sure this collection is better than assembling your own best-in-class toolbox, but it's certainly more convenient.

And finally…

Fact Check: Yes, Education Secretary Linda McMahon said 'A1' in response about AI (Laerke Christensen - Yahoo!News) And finally, look for an executive order mandating serif fonts any day now.

Have a creative, productive week!

