FFmpeg at Meta: Media Processing at Scale

2026-03-09 5:37 · engineering.fb.com

FFmpeg is truly a multi-tool for media processing. As an industry-standard tool it supports a wide variety of audio and video codecs and container formats. It can also orchestrate complex chains of filters for media editing and manipulation. For the people who use our apps, FFmpeg plays an important role in enabling new video experiences and improving the reliability of existing ones.

Meta executes ffmpeg (the main CLI application) and ffprobe (a utility for obtaining media file properties) binaries tens of billions of times a day, which introduces unique challenges when dealing with media files. FFmpeg can easily perform transcoding and editing on individual files, but our workflows impose additional requirements. For many years we had to rely on our own internally developed fork of FFmpeg to provide features that have only recently been added to FFmpeg, such as threaded multi-lane encoding and real-time quality metric computation.
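To illustrate the ffprobe side of this, a minimal, self-contained sketch is below. It generates a synthetic clip with FFmpeg’s testsrc2 source as a stand-in for a real upload (the file name probe_in.mp4 is our own placeholder, not anything from Meta’s pipeline) and dumps its stream properties as JSON:

```shell
# Create a 1-second synthetic clip standing in for an uploaded media file
ffmpeg -y -v error -f lavfi -i testsrc2=duration=1:size=320x240:rate=30 probe_in.mp4

# Report the file's container and stream properties as JSON
ffprobe -v error -print_format json -show_format -show_streams probe_in.mp4
```

The JSON output includes per-stream fields such as codec_name, width, height, and duration, which is the kind of metadata a pipeline would inspect before deciding how to transcode a file.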

Over time, our internal fork came to diverge significantly from the upstream version of FFmpeg. At the same time, new versions of FFmpeg brought support for new codecs and file formats, and reliability improvements, all of which allowed us to ingest more diverse video content from users without disruptions. This necessitated that we support both recent open-source versions of FFmpeg alongside our internal fork. Not only did this create a gradually divergent feature set, it also created challenges around safely rebasing our internal changes to avoid regressions.

As our internal fork became increasingly outdated, we collaborated with FFmpeg developers, FFlabs, and VideoLAN to develop features in FFmpeg that allowed us to fully deprecate our internal fork and rely exclusively on the upstream version for our use cases. Using upstreamed patches and refactorings, we’ve been able to close two important gaps that we had previously relied on our internal fork to fill: threaded, multi-lane transcoding and real-time quality metrics.

Building More Efficient Multi-Lane Transcoding for VOD and Livestreaming

A video transcoding pipeline producing multiple outputs at different resolutions.

When a user uploads a video through one of our apps, we generate a set of encodings to support Dynamic Adaptive Streaming over HTTP (DASH) playback. DASH playback allows the app’s video player to dynamically choose an encoding based on signals such as network conditions. These encodings can differ in resolution, codec, framerate, and visual quality level but they are created from the same source encoding, and the player can seamlessly switch between them in real time.

In a very simple system, separate FFmpeg command lines can generate the encodings for each lane one by one, in serial. Running those commands in parallel would reduce latency, but it remains inefficient because each process duplicates the same work.

To work around this, multiple outputs can be generated within a single FFmpeg command line, decoding the frames of a video once and sending them to each output’s encoder instance. This deduplicates the video decoding and avoids the process-startup cost incurred by each separate command line. Given that we process over 1 billion video uploads daily, each requiring multiple FFmpeg executions, reductions in per-process compute usage yield significant efficiency gains.
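As a rough sketch of this single-process, multi-lane pattern (lane resolutions, bitrates, and file names here are illustrative, not Meta’s actual encoding ladder, and a build with libx264 is assumed), the source is decoded once and fanned out to two encoder instances:

```shell
# Synthetic 720p clip standing in for a user upload
ffmpeg -y -v error -f lavfi -i testsrc2=duration=2:size=1280x720:rate=30 src.mp4

# One ffmpeg process: decode once, encode two DASH-style lanes
ffmpeg -y -v error -i src.mp4 \
  -map 0:v -c:v libx264 -s 1280x720 -b:v 2500k lane_720p.mp4 \
  -map 0:v -c:v libx264 -s 640x360  -b:v 800k  lane_360p.mp4
```

Each -map 0:v/-c:v pair defines an output lane; FFmpeg inserts a scaler per output, while the decode of the source happens only once.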

Our internal FFmpeg fork provided an additional optimization to this: parallelized video encoding. While individual video encoders are often internally multi-threaded, previous FFmpeg versions executed each encoder in serial for a given frame when multiple encoders were in use. By running all encoder instances in parallel, better parallelism can be obtained overall.

Thanks to contributions from FFmpeg developers, including those at FFlabs and VideoLAN, more efficient threading was implemented starting with FFmpeg 6.0, with the finishing touches landing in 8.0. This was directly influenced by the design of our internal fork and was one of the main features we had relied on it to provide. This development led to the most complex refactoring of FFmpeg in decades and has enabled more efficient encodings for all FFmpeg users.

To fully migrate off of our internal fork we needed one more feature implemented upstream: real-time quality metrics.

Enabling Real-Time Quality Metrics While Transcoding for Livestreams

Visual quality metrics, which give a numeric representation of the perceived visual quality of media, can be used to quantify the quality loss incurred by compression. These metrics are categorized as reference (full-reference) or no-reference metrics, where the former compares a distorted encoding against the reference it was derived from.

FFmpeg can compute various visual quality metrics, such as PSNR, SSIM, and VMAF, from two existing encodings in a separate command line after encoding has finished. This is acceptable for offline or VOD use cases, but not for livestreaming, where we want to compute quality metrics in real time.
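The offline pattern looks roughly like the following sketch (file names and the CRF value are our own placeholders): after both encodings exist, a separate command line feeds the distorted encoding and the reference to the psnr filter.

```shell
# Reference clip, plus a deliberately lossy encode of it
ffmpeg -y -v error -f lavfi -i testsrc2=duration=2:size=640x360:rate=30 ref.mp4
ffmpeg -y -v error -i ref.mp4 -c:v libx264 -crf 40 dist.mp4

# Separate pass after encoding: distorted input first, reference second
ffmpeg -v error -i dist.mp4 -i ref.mp4 \
  -lavfi "[0:v][1:v]psnr=stats_file=psnr.log" -f null -
```

The same shape works for the ssim filter, and for VMAF via the libvmaf filter when FFmpeg is built with it.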

To do this, we insert a video decoder after the video encoder in each output lane. These decoders reproduce each frame as it appears after compression, so it can be compared against the corresponding frame before compression. In the end, we can produce a quality metric for each encoded lane in real time using a single FFmpeg command line.

Thanks to “in-loop” decoding, implemented by FFmpeg developers (including those from FFlabs and VideoLAN) beginning with FFmpeg 7.0, we no longer have to rely on our internal FFmpeg fork for this capability.

We Upstream When It Will Have the Most Community Impact

Things like real-time quality metrics while transcoding and more efficient threading can bring efficiency gains to a variety of FFmpeg-based pipelines both in and outside of Meta, and we strive to enable these developments upstream to benefit the FFmpeg community and wider industry. However, there are some patches we’ve developed internally that don’t make sense to contribute upstream. These are highly specific to our infrastructure and don’t generalize well.

FFmpeg supports hardware-accelerated decoding, encoding, and filtering with devices such as NVIDIA’s NVDEC and NVENC, AMD’s Unified Video Decoder (UVD), and Intel’s Quick Sync Video (QSV). Each device is supported through an implementation of standard APIs in FFmpeg, allowing for easier integration and minimizing the need for device-specific command line flags. We’ve added support for the Meta Scalable Video Processor (MSVP), our custom ASIC for video transcoding, through these same APIs, enabling the use of common tooling across different hardware platforms with minimal platform-specific quirks.
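Because each device sits behind the same standard APIs, switching between software and hardware pipelines is mostly a matter of swapping the hwaccel and codec names on the command line. Below is a hedged sketch using NVIDIA’s stack as the example; the accelerated command is shown commented out since it only runs on a machine with a suitable GPU and driver, and file names are placeholders:

```shell
# List the hardware acceleration methods this ffmpeg build supports
ffmpeg -hide_banner -hwaccels

# Software transcode of a synthetic clip...
ffmpeg -y -v error -f lavfi -i testsrc2=duration=1:size=640x360:rate=30 \
  -c:v libx264 sw.mp4

# ...and the NVDEC/NVENC equivalent, differing only in hwaccel and encoder
# selection (requires an NVIDIA GPU and driver):
# ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i sw.mp4 -c:v h264_nvenc hw.mp4
```

The appeal of the shared API surface is exactly this: the rest of the command line, and the tooling built around it, stays the same across hardware platforms.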

As MSVP is only used within Meta’s own infrastructure, it would create a challenge for FFmpeg developers to support it without access to the hardware for testing and validation. In this case, it makes sense to keep patches like this internal since they wouldn’t provide benefit externally. We’ve taken on the responsibility of rebasing our internal patches onto more recent FFmpeg versions over time, utilizing extensive validation to ensure robustness and correctness during upgrades.

Our Continued Commitment to FFmpeg

With more efficient multi-lane encoding and real-time quality metrics, we were able to fully deprecate our internal FFmpeg fork for all VOD and livestreaming pipelines. And thanks to standardized hardware APIs in FFmpeg, we’ve been able to support our MSVP ASIC alongside software-based pipelines with minimal friction.

FFmpeg has withstood the test of time with over 25 years of active development. Developments that improve resource utilization, add support for new codecs and features, and increase reliability enable robust support for a wider range of media. For people on our platforms, this means enabling new experiences and improving the reliability of existing ones. We plan to continue investing in FFmpeg in partnership with open source developers, bringing benefits to Meta, the wider industry, and people who use our products.

Acknowledgments

We would like to acknowledge contributions from the open source community, our partners in FFlabs and VideoLAN, and many Meta engineers, including Max Bykov, Jordi Cenzano Ferret, Tim Harris, Colleen Henry, Mark Shwartzman, Haixia Shi, Cosmin Stejerean, Hassene Tmar, and Victor Loh.


Comments

  • By dewey 2026-03-09 13:56 · 5 replies

    > As our internal fork became increasingly outdated, we collaborated with FFmpeg developers, FFlabs, and VideoLAN to develop features in FFmpeg that allowed us to fully deprecate our internal fork and rely exclusively on the upstream version for our use cases.

    Some comments seem to gloss over the fact that they did give back and they are not the only ones benefitting from this. Could they give more? Sure, but this is exactly one of the benefits of open source, where everyone benefits from changes that were upstreamed or financially supported by an entity instead of re-implementing it internally.

    • By sergiotapia 2026-03-09 16:23 · 3 replies

      One thing people can't fault Meta for is that they contribute back to the community at large.

      We're using React Native, hello!?

      We're using React!

      Tons of projects, we should be very grateful they give so much tbh.

      • By kindkang2024 2026-03-09 17:39 · 1 reply

        Let alone PyTorch, which greatly boosted the entire LLM wave. Thanks, Meta.

        Those who benefit others deserve to be benefited in return — and if we could, we should help make them more fit.

        • By arjvik 2026-03-09 20:05

          hey hey hey the world would be a better place if we all used JAX instead :)

      • By jcul 2026-03-09 18:38

        zstd and the Folly C++ library are two that come to mind.

      • By popalchemist 2026-03-09 18:21 · 1 reply

        Yes, they do that, but it's not out of altruism. Gratitude may be the wrong word when Meta and Zuck have actively worked to erode people's trust in society and reality, while actualizing a technofeudalist vision of serfdom; literally a 21st century scheme for world domination and subjugation of the poors.

    • By lofaszvanitt 2026-03-10 19:00 · 1 reply

      Yeah, but who needs the features of a corp who has to process zillions of videos? Very very very few. So your argument about FOSS is deeply flawed and only serves those in power.

      • By sincerely 2026-03-11 15:27 · 1 reply

        Well the alternative is them just never upstreaming and maintaining their own fork, which seems worse for everybody

        • By lofaszvanitt 2026-03-12 12:20

          You missed the point. Who even needs their features? If you are as big as Meta you have the money and means to do whatever you want.

    • By j45 2026-03-09 17:59

      It's a positive development, but we can't minimize or ignore the conditions that precipitated it: giving back cost less than hanging onto the changes for private benefit.

      Still, Meta has also put a lot out there in open source, from a differentiation perspective it doesn't seem to go unnoticed.

    • By vmaurin 2026-03-09 14:59 · 4 replies

      A gentle reminder that all the big tech companies would not exist without open source projects

      • By kccqzy 2026-03-09 15:17 · 3 replies

        Would Microsoft not exist without open source project? Microsoft is that company founded in 1975, but the GPL license only appeared in 1989, and BSD licenses appearing at roughly the same time just because of the Unix Wars.

        Big tech companies can easily hire manpower to make proprietary versions of software, or just pay licensing fees for other proprietary software. They don’t rely on open source. Microsoft bought 86-DOS to produce MS-DOS; Microsoft paid the Unix license to produce Xenix; and when Microsoft hired former DEC people to make NT, it later paid DEC.

        Instead, modern startups wouldn’t exist without open source.

        • By golfer 2026-03-09 16:42 · 1 reply

          Indeed, open source exists despite Microsoft trying its hardest to kill it. Microsoft was (and still is) a ruthless, savage competitor. Their image has softened as of late but I'll never forget the BS they did under Bill Gates and Steve Ballmer.

        • By ok123456 2026-03-09 17:11

          Microsoft wouldn't exist without the theft of CPU time on time-shared computers.

      • By cedws 2026-03-09 15:19

        I think they would due to massive financial incentive. On the other hand, a lot more developers might actually be getting compensated for their work, instead of putting their code on the internet for free and then complaining on social media that they feel exploited.

      • By dirasieb 2026-03-09 16:39 · 2 replies

        it's the exact opposite but alright, take a look at who's behind funding and sending code to the linux kernel if you want an example

        • By rcxdude 2026-03-10 8:44

          I would more say it's both: this is kind of the idea of open source, at least for the large-scale projects that are permissively licensed.

        • By OKRainbowKid 2026-03-09 22:51

          The exact opposite? Can you elaborate?

      • By izacus 2026-03-09 17:14

        And a gentle reminder that most of open source you use was developed and is maintained by tech companies.

        Take a glance at the contributor lists for your projects sometime.

    • By EdNutting 2026-03-09 15:06 · 8 replies

      Yes, they contributed to open source - this is a good thing.

      But personally, I took issue with the tone of the blog post, characterised by this opening framing:

      >For many years we had to rely on our own internally developed fork of FFmpeg to provide features that have only recently been added to FFmpeg

      Could they not have upstreamed those features in the first place? They didn't integrate with upstream and now they're trying to spin this whole thing as a positive? It doesn't seem to acknowledge that they could've done better (e.g. the mantra of 'upstream early; upstream often').

      The attempt to spin it ("bringing benefits to Meta, the wider industry, and people who use our products") just felt tone-deaf. The people reading this post are engineers - I don't like it when marketing fluff gets shoe-horned into a technical blog post, especially when it's trying to put lipstick on a story that is a mix of good and not so good things.

      So yeah, you're right, they've contributed to OSS, which is good. But the communication of that contribution could have been different.

      • By pdpi 2026-03-09 15:25

        > e.g. the mantra of 'upstream early; upstream often'

        This is the gold standard, sure. In practice, you end up maintaining a branch simply because upstream isn't merging your changes on your timescale, or because you don't quite match their design — this is completely reasonable on both sides, because they have different priorities.

      • By dewey 2026-03-09 15:11 · 1 reply

        > Could they not have upstreamed those features in the first place?

        Hard to say without being there, but in my experience it's very easy to end up in "we'll just patch this thing quickly for this use case" to applying a bunch of hacks in various places and then ending up with an out of sync fork. As a developer I've been there many times.

        It's a big step to go from patching one specific company internal use case to contributing a feature that works for every user of ffmpeg and will be accepted upstream.

        • By EdNutting 2026-03-09 15:14

          I've also had that experience of patching an OSS project internally, with the best intention of upstreaming externally-useful improvements in the future (when allowed).

          However, my interpretation of the article was that they did a lot more than just patching pieces. They, perhaps, could have taken a much earlier opportunity to work with the core maintainers of ffmpeg to help define its direction and integrate improvements, rather than having to assist a significant overhaul now (years later).

      • By Aurornis 2026-03-09 16:01

        Getting something accepted upstream is orders of magnitude harder than patching it internally.

        The typical situation is that you need to write a proof of concept internally and get it deployed fast. Then you can iterate on it and improve it through real world use. Once it matures you can start working on aligning with upstream, which may take a lot of effort if upstream has different ideas about how it should be designed.

        I’ve also had cases where upstream decided that the feature was good but they didn’t want it. If it doesn’t overlap with what the maintainers want for the project then you can’t force them to take it.

        Upstreaming is a good goal to aim toward but it can’t be a default assumption.

      • By xienze 2026-03-09 16:22

        > Could they not have upstreamed those features in the first place?

        This can be harder than you think. Some time ago I worked a $BIGCORP and internally we used an open source library with some modifications to allow it to fit better into our architecture. In order to get things upstreamed we had to become official contributors AND lobby to get everyone involved to see the usefulness of what we were trying to do. This took a lot of back-and-forth and rethinking the design to make it less specific to OUR needs and more generally applicable to everyone. It's a process. I'm not surprised that Facebook's initial approach would be an internal fork instead of trying to play the political games necessary to get everything upstreamed right off the bat. That's exactly the situation we were in, so I get it.

      • By summerlight 2026-03-09 16:25

        I guess it is much more common to maintain internal patches than to do all the work of merging into upstream, especially when the feature is non-trivial. Merging upstream consumes more time externally and internally, and many developers are working with an aggressive timeline. I don't think it is fair to criticize them because they didn't do the ideal thing from the beginning.

      • By zer0zzz 2026-03-09 16:25

        > Could they not have upstreamed those features in the first place?

        Often when you are working on a downstream code base either you are inheriting the laziness of non-upstreaming of others or you are dealing with an upstream code base that’s really opinionated and doesn’t want many of your teams patches. It can vary, and I definitely empathize.

      • By kevincox 2026-03-09 15:10 · 1 reply

        I find it hard to be too upset, better late than never. Would it have been better to upstream shortly after they wrote the code? Yes. Would it have been better if they also made a sizable contribution to ffmpeg? Yes. But at the end of the day they did contribute back valuable code and that is worth celebrating even if it was done purely because of the benefit to them. Let's hope that this is a small step and they do even more in the future.

        • By EdNutting 2026-03-09 15:17 · 2 replies

          As I said, the contribution is good, it's the communication via this blog post that I don't entirely like. It could have been different. It could have acknowledged better ways of engaging with ffmpeg (that would've benefitted both Meta and ffmpeg/the community, not _just_ ffmpeg).

          But corporate blog posts often go this way. I'm not mad at them or anything. Just a mild dislike ;)

          • By kevincox 2026-03-09 15:20 · 2 replies

            Yeah, I see what you mean. It basically shows that they contributed to ffmpeg purely because it helped them, but then they wrote this post to get good will for that contribution.

            • By EdNutting 2026-03-09 15:23

              :thumbs-up:

            • By arcfour 2026-03-09 15:55

              I'm glad to know that outcomes are affected by having pure intentions. /s

          • By pyrolistical 2026-03-09 19:30

            I’ll take it. Meta's purpose isn't to help the community, it's to make money. Sucks to hear that out loud, but that is how capitalism works.

            But you can use that to steer Meta. Explain how doing x (which also helps the community) makes them more money.

      • By p-o 2026-03-09 16:43

        >For many years we had to rely on our own internally developed fork of FFmpeg to provide features that have only recently been added to FFmpeg

        I really wonder if they couldn't have run the fork as an open source project. They present their options as binary when in fact they had many different options from the get-go. They could have run the fork in an open-source fashion so FFmpeg developers could see their work and understand what features they were working on.

        Keeping everything close source and then contributing back X amount of years later feels a little bit disingenuous.

  • By HumblyTossed 2026-03-09 18:34

    I hope when Fabrice Bellard retires, he's able to do so quite comfortably. So much money has been made on the back of his software creations.

  • By neutrinobro 2026-03-09 14:15 · 1 reply

    > At the same time, new versions of FFmpeg brought support for new codecs and file formats, and reliability improvements, all of which allowed us to ingest more diverse video content from users without disruptions.

    While it is good they worked to get their internal improvements into upstream, and this is certainly better behavior than some other unmentioned tech giants, it makes one wonder (since they are presumably running it tens of billions of times per day) whether they were involved in supporting these improvements all along. If not, why not?

HackerNews