Apple podcast transcription feature took six years from first launch


The Apple podcast transcription feature was officially launched in iOS 17.4, but the company says that it actually introduced a very limited version way back in 2018.

Apple says that it took so long to launch as a fully-fledged accessibility feature because the company wanted to make it universal, and to incorporate learnings from Apple Music lyrics …

Apple’s global head of podcasts, Ben Cave, told The Guardian that the first implementation dates back to 2018.

Apple’s journey to podcast transcripts started with the expansion of a different feature: indexing. It’s a common origin story at a number of tech companies like Amazon and Yahoo – what begins as a search tool evolves into a full transcription initiative. Apple first deployed software that could identify specific words in a podcast back in 2018.

“What we did then is we offered a single line of the transcript to give users context on a result when they’re searching for something in particular,” Cave recalls.
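As a rough illustration of that early feature, here is a minimal sketch of how surfacing a single transcript line as search context might work. The type and function names below are hypothetical, not Apple's actual implementation; the sketch simply returns the first transcript line containing the query, along with its timestamp.

```swift
import Foundation

// Hypothetical sketch: given a transcript broken into timed lines, return the
// single line containing a search term, to show as context next to a search
// result. Names and structure are illustrative, not Apple's implementation.
struct TranscriptLine {
    let startTime: TimeInterval   // when this line is spoken in the episode
    let text: String
}

func contextLine(for query: String, in transcript: [TranscriptLine]) -> TranscriptLine? {
    let needle = query.lowercased()
    return transcript.first { $0.text.lowercased().contains(needle) }
}

// Example: surfacing the line that mentions the searched word.
let transcript = [
    TranscriptLine(startTime: 12.0, text: "Welcome back to the show."),
    TranscriptLine(startTime: 47.5, text: "Today we talk about accessibility in podcasting."),
]
if let hit = contextLine(for: "accessibility", in: transcript) {
    print("Found at \(hit.startTime)s: \(hit.text)")
}
```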

Why did it take so long to roll out full transcripts? Apple says there are two reasons – the first of which was delivering the best possible user experience.

[Apple sought] a high standard of performance, display, and accuracy. A number of big leaps forward came from accessibility innovation in other departments within Apple.

“In this case, we took the best of what we learned from reading in Apple Books and lyrics in Apple Music,” says accessibility policy lead Sarah Herrlinger. Apple Podcasts transcripts borrow time-synced word-by-word highlighting from Apple Music and use Apple Books’ font and contrasting color scheme for visually impaired readers.
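To make that borrowed idea concrete, here is a hedged sketch of time-synced word highlighting: each word carries the time range in which it is spoken, and the word whose range contains the current playback position is the one to highlight. The types and names are illustrative assumptions, not Apple's code.

```swift
import Foundation

// Hypothetical sketch of time-synced word-by-word highlighting. Each word in
// the transcript knows when it is spoken; the renderer highlights whichever
// word covers the current playback time.
struct TimedWord {
    let text: String
    let range: ClosedRange<TimeInterval>   // start...end of when the word is spoken
}

func highlightedIndex(at playbackTime: TimeInterval, in words: [TimedWord]) -> Int? {
    words.firstIndex { $0.range.contains(playbackTime) }
}

// Example: at 1.3 seconds into playback, the second word would be highlighted.
let words = [
    TimedWord(text: "Hello",   range: 0.0...0.9),
    TimedWord(text: "and",     range: 1.0...1.4),
    TimedWord(text: "welcome", range: 1.5...2.2),
]
if let i = highlightedIndex(at: 1.3, in: words) {
    print("Highlight: \(words[i].text)")
}
```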

Being willing to wait until they can get it right is, of course, a very Apple thing to do. YouTube launched automated closed captions for videos back in 2009, but they are often laughably bad, with deaf users dubbing them “craptions.”

The second was that the company wanted it to be available for every episode of every show.

“We wanted to do it for all the shows, so it’s not just for like a narrow slice of the catalogue,” says Cave. “It wouldn’t be appropriate for us to put an arbitrary limit on the amount of shows who get it … We think that’s important from an accessibility standpoint because we want to give people the expectation that transcripts are available for everything that they want to interact with.”

That’s already happening with current episodes. Apple also plans to tackle the back catalog, though this is a lower priority and no target date has been announced.

Larry Goldberg, who created the first closed captioning system for movie theaters, says that captions are now the norm for video content, and he wants other companies to follow Apple’s lead so that podcast transcriptions become the norm too.

9to5Mac’s Take

As with many accessibility features, what is developed for one slice of the population often turns out to be useful to a much wider audience.

That was demonstrated with video captions, with many people without hearing issues choosing to use them to ensure they don’t miss any dialog, as well as for watching videos silently.

I personally find podcast transcriptions a great time-saver when I want to hear what someone has to say, but prefer the speed of reading to the much more time-consuming process of listening. I, too, hope others follow Apple’s example here. (And please, YouTube, go poach some of the Apple engineers behind this work …)
