In this presentation I use the term subtitles for the written words, even though they would be called captions in some contexts. I also use the term captioner, even though (speech-to-text) reporter might be more appropriate in some cases.
Adding subtitles to live video content for the web and various RTC solutions might seem an easy task, but this form of accessibility poses several challenges. Questions that need answers include: Does the video service (e.g. YouTube, Facebook, or Zoom) support live subtitles in all cases? Which is better, subtitles created by humans or by machines? Can the captioners work from a remote location? If so, how will they hear the audio? What are the options for getting the subtitles from the captioner into the video? Can the live subtitles be efficiently edited and reused afterwards?
In my presentation I share some answers to the above questions, describe important challenges, point at useful solutions, and outline a few ideas drawn from my decade-long experience as a producer of accessible live web TV.
I will also argue that the two biggest challenges ahead stem from: A) a lack of standards for live subtitle distribution and encoding, and B) virtually no built-in support for live subtitling in the tools used for live video production (i.e. vision/audio mixers). Here, my friends, lies a sweet catch-22.