AgX and HDR in Resolve #3
This is a very deep discussion, and one that I’ve spent quite a large number of less than healthy hours outlining and patterning a protocol around. One that I probably should Wiki here. My likely unpopular opinion is “HDR is a load of hogwash” [1], which follows with a thought that conventional thinking around HDR is doubly bullshit.

I approach the subject of HDR with “What is the point?” There is one hidden and pernicious idea that a picture in the idealized state is a reproduction / simulacrum of reality. This is unequivocally false, as quite a few pieces of research have already hinted, going back to at least 1951 via MacAdam’s “Quality of Color Reproduction” [2]. We can trace the false “ideal picture is idealized reproduction of stimulus” construct even further if we chase L. A. Jones’ work discussing the photographic reproduction of tone in black and white photography [3]. This all goes to say that a picture is not merely a simulacrum / reproduction of the stimulus in front of the camera.

So what does this have to do with HDR? It means that we need to frame what we seek to achieve with HDR backwards, from the picture goal. To make this clearer, we can use the paradigm of a chemical creative film print. For example, when considering Star Wars: A New Hope and an HDR target, should we:

1. Return to the camera negative colourimetry and form an entirely new picture for the HDR medium, or
2. Take the approved creative print as the picture, and carry that formed picture across to the HDR medium?
This might seem like an obvious question, but it is one that isn’t typically discussed. To this end, while I believe that we could consider either 1. or 2. above as legitimate creative choices, the vast majority of authorship would consider 2. the logical default. That is, the picture does not exist in the camera / render quantal catches / colourimetry, but rather is uniquely formed by way of the picture recipe algorithm.

If you wish to create HDR pictures from the generic processing, I’d avoid the idea of 1000 nits or 100 nits, in much the same way that a creative chemical film print does not specifically have a “nit” range beyond the implicit assumption of a theatrical presentation illumination level in a dark context. That is, the picture formation is in no way a “compression”. The easiest way that Just Works is to route the picture you approve through an HLG encoding, as is, with the proper adjustment of primaries to BT.2020. That is, simply treat the SDR approved picture as HDR by way of the HLG encoding schema for the transfer characteristic adjustment. Note that you will need to adjust the “middle grey” point down slightly for the HDR pass. But in all cases I have tested, this will yield an HDR picture that maintains the creative integrity of the original generic “SDR” picture, with the added cognitively disruptive nonsense of HDR. In short: form and approve the picture as usual, then re-encode it via HLG with BT.2020 primaries.
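For anyone who wants the route above in concrete terms, here is a minimal sketch in Python/NumPy, assuming a display-linear BT.709 picture normalized to [0, 1]. The matrix and OETF constants are from ITU-R BT.2087 and BT.2100; the `grey_scale` parameter is a hypothetical stand-in for the “middle grey down slightly” adjustment, not a calibrated value.

```python
import numpy as np

# BT.709 -> BT.2020 primary conversion (linear light), per ITU-R BT.2087.
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def hlg_oetf(e):
    """Scene/display-linear [0, 1] -> HLG signal, per ITU-R BT.2100."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    e = np.asarray(e, dtype=float)
    # Clamp the log argument so the unused branch of np.where stays finite.
    return np.where(e <= 1.0 / 12.0,
                    np.sqrt(np.maximum(3.0 * e, 0.0)),
                    a * np.log(np.maximum(12.0 * e - b, 1e-12)) + c)

def sdr_picture_to_hlg(rgb709, grey_scale=0.75):
    """Re-encode an approved display-linear BT.709 picture as HLG.

    grey_scale is a hypothetical downward middle-grey nudge;
    tune it by eye on an HDR monitor.
    """
    rgb2020 = rgb709 @ M_709_TO_2020.T
    return hlg_oetf(np.clip(rgb2020 * grey_scale, 0.0, 1.0))
```

In Resolve the same result falls out of retagging the timeline, but the sketch makes explicit that nothing beyond a primary rotation and a transfer-characteristic swap is involved.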
If one wishes to formulate a new picture from the colourimetry for HDR, that too is viable, but it requires adjusting the parameterization to form a totally new picture. Both paths are creatively viable, but I strongly lean toward the HLG approach [4] as an anecdotally acceptable and agreeable result. Let me know if you require aid.

[2] MacAdam, D. L. “Quality of Color Reproduction.” Proceedings of the IRE 39, no. 5 (May 1951): 468–85. https://doi.org/10.1109/JRPROC.1951.232825.
[3] Jones, Loyd A. “Photographic Reproduction of Tone.” Journal of the Optical Society of America 5, no. 3 (May 1, 1921): 232. https://doi.org/10.1364/JOSA.5.000232.
[4] I have had several little birds with a vast amount of experience suggest that a large number of popular “HDR” titles have used this exact approach.
Could you please still consider implementing the PQ and HLG options that work natively and in the correct way with the sliders/contrast?
There’s only density. And density is not “tone”, given that “tone” is a cognitive computation. The demonstration I linked is an example of this. All that a “measurement” can give us is an idea of the particular density, and its impact on the stimuli. There’s no “tone” there, as “tone” is a cognitive computation and exists solely in meatspace. So while I’m open to the idea, the outline you’ve presented is problematic as best I can tell.
My personal vantage, as plausibly indicated by the Diffuse White Creep as folks experiment with HDR, is that all of HDR is a complete corporate-driven grift in the vein of the 3D upsell, round two. Visual cognition does not work the way a fistful of electronics engineers seemed to think it does. Take a look at the Diffuse White Creep, where the specification started with 100 nits, then went to 203, and now some companies, including one that may or may not have a fruit in the title, are authoring ~350. The point is: the house of cards of nonsense is revealing itself as nonsense.

All of that as an aside, what folks learn as they fart with “HDR” is that we still need to form pictures. So for an “HDR first” approach it’s no different to SDR picture formation. Remember that our visual cognition does not have an HDR mode like our displays do. We only compute and create information from the energy fields and the gradients. From that, we form inferences and further information. All of that is to say that forming the picture remains the same.

I would like to think that this is easily demonstrated. If you want to test this claim and have an HDR display handy, create the generic AgX output to your liking, then change the timeline colourspace to HLG. Presto. A reasonable enough picture, because again, I stress, the discussion around pictorial depiction hinges on the cognitive fission aspect of the fields. Therefore we don’t have some “crazy new amazing buy buy buy” medium, but rather the exact same mechanism that has been with us since the advent of painting.
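For concreteness, the Diffuse White Creep can be put in PQ signal terms, since SMPTE ST 2084 encodes absolute luminance, so each proposed diffuse-white level lands at a fixed code value. A small sketch, with constants straight from the ST 2084 / BT.2100 definition and the nit levels being the ones mentioned above:

```python
import numpy as np

# SMPTE ST 2084 (PQ) inverse EOTF constants.
M1 = 2610.0 / 16384.0
M2 = 2523.0 / 4096.0 * 128.0
C1 = 3424.0 / 4096.0
C2 = 2413.0 / 4096.0 * 32.0
C3 = 2392.0 / 4096.0 * 32.0

def pq_encode(nits):
    """Absolute luminance in cd/m^2 -> PQ signal in [0, 1] (10000 nits = 1.0)."""
    y = np.asarray(nits, dtype=float) / 10000.0
    y_m1 = np.power(y, M1)
    return np.power((C1 + C2 * y_m1) / (1.0 + C3 * y_m1), M2)

# The three diffuse-white conventions, as full-range 10-bit code values.
for nits in (100.0, 203.0, 350.0):
    print(f"{nits:>5} nits -> PQ {float(pq_encode(nits)):.4f} "
          f"(10-bit ~{round(float(pq_encode(nits)) * 1023)})")
```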
This cannot work. Our visual cognition takes into account the entire visual field. There’s not only a spatial anchoring, but also a “slow” temporal anchoring. E.g.: if we are in a completely dark room and “look” at a BT.709 set of swatches on an 85% or so BT.2020 display, the BT.709 set will look perfectly fine. If we then change to “more pure” versions of the BT.709 set, at the maximal “purity” of the BT.2020 display, those too will look perfectly fine. However, if we now subsequently look at the original BT.709 set? It will seem “washed out”. And this temporal effect will last somewhere around ten minutes.

As such, when we are “comparing” we are engaging in a pretty complex neurophysiologically based analysis, and that analysis cannot work in an A to B comparison. Visual cognition is not fixed, but fluid. And this is not even beginning to suggest that we need to numerically define “contrast”, “lightness”, “midtone”, “shadow”, and “brightness”. As someone who reads an unhealthy amount of research papers, I can say those terms are not defined anywhere, and there’s no consensus on how we calculate them neurophysiologically, let alone numerically in stimuli.
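The swatch thought experiment can be put in numbers. Using the standard ITU-R BT.2087 matrix, a maximally “pure” BT.709 primary expressed in BT.2020 RGB is a mixture of channels rather than a pure channel, which is exactly the setup for the “washed out” comparison described above. A sketch:

```python
import numpy as np

# BT.709 -> BT.2020 conversion matrix (linear light), per ITU-R BT.2087.
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

bt709_swatches = np.eye(3)            # pure BT.709 red, green, blue
in_2020 = bt709_swatches @ M_709_TO_2020.T

# BT.709 red becomes roughly [0.63, 0.07, 0.02] in BT.2020 terms:
# well inside the wider gamut, i.e. not "maximally pure" any more.
print(np.round(in_2020, 4))
```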
What’s a “highlight”? If you want us to engineer such a transform, we’ll need a demarcation point of these “highlights” and somehow “handle” them. Creative chemical film does not have “highlights”. Pictures do not have “highlights”. This is conventional wisdom orthodoxy, and the “wisdom” suggested by such is glaringly false. I’m all for including “HDR” mumbo jumbo, but the first bridge to cross is to appreciate that it’s not even a thing, but a deeper discussion.
You seem to look at it from a philosophical/theoretical point of view, whereas I seem to view it as a real-usage issue
You can still use 100 nit diffuse white. I do, and ARRI does too. I don't know how actually improving picture quality and immersion is a marketing stunt, especially if your viewers like it, be it 3D or HDR.
Our eyes are always in HDR mode though, with much higher absolute brightness levels than even PQ formally encodes. Look at any real scene and ask yourself whether you have ever seen such dynamic range and colors on an SDR screen. HDR is much closer, even at "only" 1000 nits
From a linear curve? I would have to manually match it, and what settings should I even use if everything is targeted at SDR currently? Just tag a 709 output as HLG? That won't be great; tonemapping to HLG directly is not the same as tonemapping to 709 and tagging that as HLG.
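The mismatch being raised here is easy to quantify: an SDR-style encode and the HLG OETF place the same linear value at different signal levels, so a 709-formed output retagged as HLG does not sit where a native HLG encode would. A rough sketch, using a pure 1/2.4 gamma as a simplified stand-in for the SDR curve (an assumption; BT.1886 is not exactly a pure power function):

```python
import math

def sdr_encode(e, gamma=2.4):
    """Simplified SDR encode: pure inverse-gamma power function."""
    return e ** (1.0 / gamma)

def hlg_oetf(e):
    """HLG OETF per ITU-R BT.2100, scalar input in [0, 1]."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    return math.sqrt(3.0 * e) if e <= 1.0 / 12.0 else a * math.log(12.0 * e - b) + c

# The two curves diverge across the whole range, mid grey included.
for e in (0.02, 0.18, 0.5, 1.0):
    print(f"linear {e:>4}: SDR {sdr_encode(e):.3f} vs HLG {hlg_oetf(e):.3f}")
```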
But now we have such a medium that is superior to paper, and it's already integrated into many products which are already bought, why should we ignore it?
I'm not sure what the problem here is. Sure, the eyes always adapt, but an attempt to automatically match the grades is better than nothing
Here I use “highlight” as a general, colloquially known term, as in “AgX compresses highlights”. It's true that pictures did not have true highlights because the brightness of their medium, printed, projected or displayed, was 100 nits at most. But now they do, and we could utilize that. There's no need to strictly define the brightness of what a highlight is, as we need to look at the image in general
So I'm not sure what the actual problem is. HDR looks better and is already used in many devices. I don't see any disadvantages in implementing it, and you could always switch back to Rec. 709 if you like
Rest assured I’m being the most “pragmatic” of minds when it comes to implementing these sorts of things. Folks who suggest “real-usage” really are lost with respect to how visual cognition works, let alone pictorial depictions. We can’t design something without understanding how the mechanics work, and therefore when someone utters the term “highlight” we need to dissect what that means. What folks will realize very quickly is that they are speaking in circles and nonsense. This is not adequate for designing algorithmic solutions.
I’ve looked at thousands of selects of pictures in HDR. There’s no such thing as “HDR”. There really isn’t. For someone who wants to make this claim, they are going to need to show how the visual cognition system works at a level that is better than some hand waving electronic engineering mumbo jumbo. I am reasonably confident at this point that I can showcase a research paper that can counter any such claim. The suggestion of “improvement” is one that I reject outright. Big box shops have spinning pictures in their stores, and every one is a wretched mess.
This is sadly nonsense.
You’ve tried it, then? You are confusing where the pictorial depiction exists as a stimulus encoding, I suspect?
It’s not superior. Case in point, imagine all of those impoverished people who could look at pictorial depictions in newsprint and somehow manage to cognize what the pictorial depiction was of? Pictorial depiction is where the key is at, and folks simply don’t understand how it works. If one sits in a top of the line light steering projection system and watches pictures, I absolutely promise they’ll have no clue that they are watching an “HDR” picture with BT.2020 primaries after about five minutes. I’ve experienced this. Many others have. Try it. The human visual cognition system broadly “normalizes” in a fluid manner.
And this simply cannot be done when comparing in an A to B situation, as the visual cognition is normalizing to the totality of the field, including some “slow” temporal normalization. Heck, we can’t even “match” two simple stimuli presentations without considering the field. For example, it is generally agreed that a satisfactory match of the following is impossible [5]. We can get a sense as to why this is by way of a simple demonstration showcasing how the spatiotemporal articulation fields force different cognitive computations. Any “modification” of the pictorial depiction must make an effort to keep the field relationships, in terms of differentials, intact. This is tricky because there is currently no underlying neurophysiological model that can be evaluated against.
The vernacular doesn’t help us arrive at engineered solutions. Doubly so given that AgX does not do this.
I don’t believe that this is “the reason” at all. Pictorial depiction has little to do with “brightness”, as hopefully the demonstrations have shown. Pictorial depictions are crucially dependent on the differential gradients and relational fields. Again, the retinal assemblies cannot supply scalar values to the visual cortex / LGN, so any such suggestion in relation to absolute scalar values is outright false.
“Brightness” is ill defined. It’s a meatspace computation, not something that is present as a measurable stimulus. Cognitive fission / scission is the important activity here, not the stimuli.
Over the years, I’ve built up a degree of trust with respect to the concepts and experiments. If I am going to try and make something work, it is imperative that it’s not a sales pitch anchored in nonsense. Whether folks want to see the nuances or not, the idea of forming a different picture is something that needs consideration as to how to do so. I reject HDR and all of its incarnations outright, and there’s little evidence for the claim “looks better”. Quite the contrary, in fact. But most folks are not evaluating the pictorial qualia in isolation, and as such, are seduced by the sirens of a non-comparison.

In summary, I may add the HLG approach to the repository. But currently, given how trivial it is for someone to use the DCTL and create their own as per the above description, it’s not at the highest priority given some of the other work I’m doing.
Ok, thanks for your answer, and sorry for taking so long to reply. I hope you do continue the work even if it's only SDR, though.
Hi! I have not been able to configure the AgX DCTL to work correctly with HDR. When I configure everything according to the instructions, the DCTL compresses everything to 100 nits rather than 1000 nits.