Exploration #104

Will A.I. Be Media's New 'Vast Wasteland'?

Image Created with DALL-E 3

Hate Longscrolling? Jump Around!

Hi all. This week’s menagerie includes pieces on Claude outperforming ChatGPT, OpenAI’s new Voice Engine, Adobe’s Firefly updates, the return of immersive streaming, and the impacts of AI on both marketing and journalism. But first…

April’s ‘3rd Thursday’ Webinar 

I’ve got a few plugs to start things off this week. First, we are a couple of weeks out from our next 3rd Thursday Webinar, PBS KIDS: Innovating Accessibility in Children's Media. And you can register for the webinar here.

Vegas, Baby. 

Second, PBS has released their list of breakout sessions for the PBS Annual Meeting, May 13-15 at The Cosmopolitan in Vegas. I’m happy to announce that I’ll be joining PBS’s Mikey Centrella, PBS Wisconsin’s Amber Samdahl, and PBS NC’s David Huppert for a fun workshop where we demonstrate prompt engineering live and without a safety net. If you are planning to be at the Annual Meeting, bring your laptop or mobile device to “Whose Prompt Is It Anyway?” on Tuesday, May 14 at 1:15p.

And as they say in the world of AI training, “come for the prompts, stay for the ethics!” Friend-of-the-newsletter, Talia Rosen, is bringing us “Why Should They Trust Us? An Interactive Forum on Media Ethics.” It’ll feature NPM’s Chief Content Officer, Nancy Finken, and I have it on pretty good authority that AI will come up at some point.

Is A.I. Media's New 'Vast Wasteland'? 

In doing my scanning and reading for the week, I found a piece from Brookings with the rather provocative title, “Can journalism survive AI?” Ultimately, I made that the “One” selection below. But along the way I also ran across a different op-ed from Nathan Sanders, Bruce Schneier & Norman Eisen: How public AI can strengthen democracy.

Here’s a key quote: “To benefit society as a whole we also need strong public AI as a counterbalance to corporate AI, as well as stronger democratic institutions to govern all of AI. One model for doing this is an AI Public Option [link by the authors], meaning AI systems such as foundational large-language models designed to further the public interest. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.”

Certain phrases stood out to me there. For example, “designed to further the public interest,” “universal access,” and the idea of setting “an implicit standard that private services must surpass to compete.” They sound familiar.

The authors go on to suggest: “A variation of such an AI Public Option, administered by a transparent and accountable public agency, would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.”

Again, this feels familiar. We are living in a time when corporate interests have created a new media ecosystem dominated by three corporate titans (the authors specifically call out Microsoft, Google, and Amazon). That ecosystem is primarily set up to consolidate pricing power around subscription or ad-based business models, for a product that was literally designed from the model up to reflect the traditional cultural majority in Western society. No one is saying “vast wasteland” explicitly, but don’t we sometimes think it?

While the authors aren’t asking this question, I will: Should public AI join public TV and public radio as part of our service to our communities?

Okay, on to the links…

If You Click Only One…

Can journalism survive AI? (Courtney C. Radsch - Brookings) - My own answer to Radsch’s question is optimistic. If journalism dies, I don't think it'll be AI that kills it. To the contrary, I think quality journalism and AI can create a virtuous feedback loop. This sentiment in the piece resonated with me: "Journalism can also be an important source of data for improving the quality of foundation models, which suffer from bias, misinformation, and spam that make access to diverse sources of quality, factual information, especially in low-resourced digital languages, even more valuable. Furthermore, as the quality of data becomes as important as the quantity of data, journalism provides a constant source of new, timely, human-generated data." Read this and ask yourself, “What’s my answer to her question?”

Things to Think About…

Here's How Generative AI Depicts Queer People (Reece Rogers - Wired) - We've talked a lot here (and will continue to do so) about how bias surfaces in the output of generative AI tools. The investigation covered in this article is another important element of that larger issue.

BBC Abandons 'Doctor Who' AI Promo Plans After Viewer Complaints (Stephen Graves - Decrypt) - Here's the experiment the BBC conducted: "The experiment involved creating human-written marketing copy for a 'Doctor Who' push notification, email subject line and the BBC Search page; generative AI was then used to 'suggest copy variations' which were reviewed by the BBC's marketing team before being rolled out." This is interesting on a couple of levels. First, I'd argue (and have) that marketing copy is not editorial copy, per se, and is fair game for AI assistance. Second, the BBC was transparent about its intentions ahead of time, and a human had the final say on the copy. So there is no 'gotcha' here, but public opinion is fickle like that. As you are thinking about your own policies for the use of gAI tools, this is a good point of reflection; for the curious, a rough sketch of that kind of human-in-the-loop workflow follows below.
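For readers who like to see the mechanics, here is a minimal sketch of the workflow the BBC describes: a human writes the copy, a model suggests variations, and nothing ships until a person signs off. The BBC hasn't published its implementation, so everything below (the model choice, the prompt, the terminal-based review step) is my own assumption, written against the OpenAI Python client.

```python
# Hypothetical sketch of a human-in-the-loop copy-variation workflow.
# Assumptions: the OpenAI Python SDK (v1+), the gpt-3.5-turbo model, and a
# terminal prompt standing in for a real marketing approval process.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_variations(human_copy: str, n: int = 3) -> list[str]:
    """Ask the model for short variations on copy a human already wrote."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You suggest alternate phrasings of marketing copy. Keep each under 100 characters."},
            {"role": "user", "content": f"Suggest {n} variations of: {human_copy}"},
        ],
    )
    text = response.choices[0].message.content or ""
    return [line.strip("-• ").strip() for line in text.splitlines() if line.strip()]

def human_review(options: list[str]) -> str | None:
    """Nothing goes out without a person choosing (or rejecting) the copy."""
    for i, option in enumerate(options, start=1):
        print(f"{i}. {option}")
    choice = input("Pick a number to approve, or press Enter to reject all: ").strip()
    return options[int(choice) - 1] if choice.isdigit() and 0 < int(choice) <= len(options) else None

if __name__ == "__main__":
    original = "The Doctor returns this Saturday. Don't miss it."
    approved = human_review([original] + suggest_variations(original))
    print("Approved copy:", approved or "nothing approved; back to the humans")
```

The point isn't the code; it's the shape of it: the human-written original stays in the candidate pool, and a person makes the final call before anything rolls out.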

Subscription video's new frontier: immersive entertainment (Janko Roettgers - Lowpass) - From 2016 to 2020, Nebraska Public Media made quite a few 360° videos. The productions stand the test of time, I think (you can still see our 360° series Watershed on PBS.org), and they have collected a few views on YouTube over the years. They were also great training for an organization eager to think outside ‘the frame,' and allowed us to eventually level up to VR and game development. But YouTube (and Facebook) have really been the only ways for audiences to see that content. We’ll see if this catches on, but if nothing else it’s another sign of the immersive hype cycle beginning anew.

Why does AI have to be nice? Researchers propose ‘Antagonistic AI’ (Taryn Plumb - VentureBeat) - Anyone who has used a chatbot in the last year knows that any of them can be a bit of a kiss-ass (especially Claude). These researchers propose that a more effective human-bot relationship might involve a degree of antagonism. I’m open to the argument, especially since they are quick to point out that antagonistic doesn't mean unethical or irresponsible.

Design Against AI: 2024 Design in Tech Report (John Maeda via YouTube) - I attended Maeda’s talk at SXSW 2024 (his 10th annual version of the talk) and found it thought-provoking. But his summary is a better distillation. Designers will like this because it's very much a design-will-save-us approach. That doesn't mean it's wrong, but it does have a particular point of view.
—And here's a video of the SXSW 2024 presentation.

Things to Know About…

States are racing ahead of Congress to regulate deepfakes (Charlie Guo & Timothy B. Lee - Understanding AI) - Where does your state stand on this issue?
—Meanwhile, across the pond, Nadeem Badshah reports Nearly 4,000 celebrities found to be victims of deepfake pornography

Start using ChatGPT instantly (OpenAI Blog) - For those who have held off trying ChatGPT because creating an account was more information than you wanted these companies to have, OpenAI has dropped the sign-in requirement for the free version of ChatGPT. This will be GPT-3.5 only, not any of the GPT-4 models that come with ChatGPT Plus (so no free DALL-E access), which makes me wonder if the release of GPT-5 is closer than we think.

OpenAI built a voice cloning tool, but you can’t use it… yet (Kyle Wiggers - TechCrunch) - In other OpenAI news, ChatGPT’s parent company has been absent from the generative voice space, but it would be a mistake to think that they'd cede that ground to ElevenLabs without a fight. Think of this in the larger context of DALL-E (generative imagery), Sora (generative video), and GPT-5 (the next model rumored to underpin ChatGPT), as you think about OpenAI's stated desire to bring about Artificial General Intelligence. How do you make AGI seem less alien and threatening? Make it sound, look and move in ways that feel deeply familiar.
—Read the OpenAI blog post announcing Voice Engine here: Navigating the Challenges and Opportunities of Synthetic Voices

“The king is dead”—Claude 3 surpasses GPT-4 on Chatbot Arena for the first time (Benj Edwards - ArsTechnica) - This quote sums up the significance nicely: "For the first time, the best available models—Opus for advanced tasks, Haiku for cost and efficiency—are from a vendor that isn't OpenAI," independent AI researcher Simon Willison told Ars Technica. "That's reassuring—we all benefit from a diversity of top vendors in this space. But GPT-4 is over a year old at this point, and it took that year for anyone else to catch up." But again, I wonder, is OpenAI’s GPT-5 coming sooner than expected?

Congress bans staff use of Microsoft's AI Copilot (Andrew Solender & Ina Fried - Axios) - The concern here is the leakage of sensitive data on cloud servers, and it's worth noting that Microsoft saw this one coming. It has been working on tools that "meet federal government security and compliance requirements."

Every US federal agency must hire a chief AI officer (Emilia David - The Verge) - In other government-meets-AI news, one wonders if state governments will soon follow suit.

Microsoft Teams is getting smarter Copilot AI features (Tom Warren - The Verge) - Last month, I mentioned being pleasantly surprised by the Otter.ai software solution. There are non-Otter options as well, of course. Teams has been trying to become the video meeting platform of choice for years. Can Copilot help it (finally) deliver?
—Not a Teams person? Here's a useful tutorial (from an educator, no less) on how to utilize Zoom's AI companion: Integrating Zoom AI Companion and Google Workspace

Adobe Firefly introduces Structure Reference for greater text-to-image control (Andy Stout - RedShark News) - Being able to work with consistent imagery (instead of the generated image being a wildcard every time) makes generative art tools way more viable for a lot of creators. This is reminiscent of Midjourney's consistent characters, and I imagine you'll see this feature coming to most generative art tools soon.
—Meanwhile, Carl Franzen at VentureBeat reports that Adobe is also rolling out GenStudio (first announced in September) with the promise of a safe space in which brands can use generative AI tools: Adobe introduces structure reference for Firefly AI and GenStudio for brands
—And in other Firefly news: Adobe’s Firefly Services makes over 20 new generative and creative APIs available to developers (Frederic Lardinois - TechCrunch)

Don’t like your DALL-E images? OpenAI now lets you edit them. (Cecily Mauran - Mashable) - I noticed this today as I was creating the art for this exploration. It’s not Structure Reference (see above), but it definitely makes DALL-E a more usable component of ChatGPT Plus!

Gaming for Everyone Product Inclusion Framework released by Xbox (Marijn - Can I Play That?) - I am very happy to see this framework, which has been evolving for the last five years. If you read through it, you'll see a lot of language that is sympathetic to how we in public media approach more linear storytelling. In general, Microsoft has been a leader on issues of inclusion, so this one is worth a bookmark. You can see Microsoft's announcement of it here: Xbox Releases Gaming for Everyone Product Inclusion Framework for Game Developers

And finally…

The brain-computer interface race is on, with AI speeding up developments (Reed Albergotti - Semafor) - And finally, the future is now. Although it’s early days for this tech, take a moment and imagine how this is going to impact the world.

Have a creative, productive week!

