the people behind ffmpeg need a medal.
Ok, but how to use it?
following
FFmpeg is one of the most underrated pieces of software on Linux. It’s hugely important for the ecosystem and very widely used. And you most likely barely notice it.
Just like curl.
You mean underrated?
Haha yes. My bad.
This is superb news. Now, we need audio translation and we can watch any videos in any language in the whole world.
Currently, doing that is a bit more complicated than it needs to be.
After that - dubbing in realtime.
Call it dubblefish
So proud of the FFmpeg and VLC projects; I want news like this every day! Scavenging the good stuff from AI and putting it to good use, we need more of this! Next up, automatic translation integration from AI models, and we won’t need to worry about the drudgery of syncing subtitles with videos: a one-button process and woosh! -> in fact “on the fly” means no separate processing step at all, it is even better than I thought 🤩
I use mpv, and it’s just a GUI frontend to ffmpeg. I have some layering issues with VLC on Linux…
I need something like this but for image viewers and for manga/manhwa. It always amazes me how there are options for this on mobile, even on iOS, but desktop Linux doesn’t have anything comparable with integrated text recognition and a translation overlay.
Can you share those options? I’m curious.
The changelog lists 30 significant changes, of which the top new feature is integrating Whisper. This means whisper.cpp, which is Georgi Gerganov’s entirely local and offline version of OpenAI’s Whisper automatic speech recognition model. The bottom line is that FFmpeg can now automatically subtitle videos for you.
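For the curious, here’s a minimal sketch of what invoking the new filter could look like. This assumes the feature is exposed as an audio filter named `whisper` with `model`, `destination`, and `format` options, as described in the FFmpeg 8.0 documentation; the model filename is hypothetical, and you’d first need to download a ggml-format model from the whisper.cpp project.

```shell
# Hedged sketch, assuming a `whisper` audio filter with these option names.
# ggml-base.en.bin is a placeholder path to a whisper.cpp model file.
ffmpeg -i input.mp4 -vn \
  -af "whisper=model=ggml-base.en.bin:language=en:destination=output.srt:format=srt" \
  -f null -
```

Here `-vn` skips the video stream and `-f null -` discards the (empty) output, since the point is only the side effect of writing `output.srt`.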
Yeah hey, can anyone chime in if this is at all based off LLMs? Because my problems with the incorrect plagiarism machine don’t end just because it’s now the offline incorrect plagiarism machine. Making OpenAI’s garbage hockey open source doesn’t make it okay. Or should I just start calling this shit FOSSwashing?
I dug around for a bit and couldn’t find much of anything, but judging by a look at the GitHub pages for both versions of Whisper, it looks very closely related. If that’s the case, fuck right off. I don’t want AI in FFmpeg, either.
It’s not AI, it’s neural network models, in the same way voice recognition on devices has worked for over a decade. Even Dragon has been using language model vectors for a very long time, just requiring voice training instead of using a premade research or open-source data set.
I hate generative AI and its slop too, but getting angry about neural network models in general is not only absurd, it plays exactly into what corporations want: conflation of the underlying basic technology with the capitalistic vampirism of art.
EDIT: to add, “research” here can be closed source. The voice models used with these tend to be internally sourced for the most part, at least the earlier ones were.
It’s not AI, it’s neural network models
These used to be called AI before people decided that only LLMs and Diffusion models were AI. Both of which are types of neural networks.