Display Next Hackfest 2025
A few weeks ago, a bunch of display driver and compositor developers met once again for the third iteration of the Display Next Hackfest. The tradition was started by Red Hat, followed by Igalia (thanks Melissa), and now AMD (thanks Harry). We met in the AMD offices in Markham, Ontario, Canada, and online, to discuss issues, present things we worked on, and figure out future steps on a bunch of topics related to displays, GPUs, and compositors.
It was really nice meeting everyone again, and also seeing some new faces! Notably, Charles Poynton who “decided that HD should have 1080 image rows, and square pixels”, and Keith Lee who works for AMD and designed their color pipeline, joined us this year. This turned out to be invaluable. It was also great to see AMD not only organizing the event, but also showing genuine interest and support for what we are trying to achieve.
This year’s edition is likely going to be the last dedicated Display Next Hackfest, but we’re already plotting to fuse it with XDC next year in some way.
If you’re looking for a more detailed technical rundown of what we were doing there, you can read Xaver’s or Louis’ blog posts, or our notes.
With all that being said, here is an incomplete list of things I found exciting:
- The biggest update to the Atomic KMS API (used to control displays) is about to get merged. The Color Pipeline API is something I came up with three years ago, and thanks to the tireless efforts of AMD, Intel, Igalia, and others, it is about to become reality. Read Melissa’s blog post for more details.
- As part of the work enabling Wayland compositors to display HDR content, we’ve been unhappy with the current HDR modes in displays, as they are essentially designed for video playback and have lots of unpredictable behavior. To address this, Xaver and I have been lobbying since last year for displays to allow the use of Source Based Tone Mapping (SBTM), and this year it seems that what we asked for has made it to the right people. Let’s see!
- In a similar vein, on mobile devices we want to dynamically increase or decrease the HDR headroom, depending on what content applications want to show. This requires backlight changes to be somewhat atomic and to have a mapping to luminance. The planned KMS backlight API will allow us to expose this, if the platform supports it. I worked a lot on backlight support in mutter this year, so we can start using this immediately when it becomes available.
- Charles, Christopher, and I had a discussion about compositing HDR and SDR content, and specifically about how to adjust content that was mastered for a dark viewing environment that is being shown in a bright viewing environment, so that the perception is maintained. I believe that we now have a complete picture of how compositing should work, and I’m working on documenting this in the color-and-hdr repo.
- For Variable Refresh Rates (VRR) we want a new KMS API to set the minimum and maximum refresh cycle, where setting min=max gives us a fixed refresh rate without a mode set. To make use of VRR in more than the single-fullscreen-window case, we also agreed that a Wayland protocol letting clients communicate their preferred refresh rate would be a good idea.
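To make the headroom idea from the backlight point above a bit more concrete, here is a tiny illustrative sketch. None of this is a real API: the function name and the 203 nit SDR reference white are my own choices for the example. The point is just that once the backlight maps to luminance, a compositor can reason about how much headroom is left for HDR highlights:

```python
def hdr_headroom(peak_nits: float, sdr_white_nits: float) -> float:
    """HDR headroom: ratio of the display's current peak luminance
    to the luminance at which SDR reference white is rendered."""
    return peak_nits / sdr_white_nits

# A panel driven at full backlight reaching 1000 nits, with SDR white
# rendered at 203 nits, leaves roughly 4.9x headroom for HDR highlights.
print(hdr_headroom(1000, 203))

# Dimming the backlight so the panel peaks at 406 nits, while keeping
# SDR white at 203 nits, cuts the headroom to exactly 2x.
print(hdr_headroom(406, 203))
```

This is why the backlight change needs to be atomic with the rest of the frame: the compositor tone-maps content for a specific headroom, and the two have to change together.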
As always, there is lots of work ahead of us, but it’s great to actually see the progress this year, with the entire ecosystem having HDR support now.
See you all at XDC this year (or at least the one next year)!