Everything posted by troy s

  1. Not sure exactly, but if you can decompose your buffers, you can manually perform the proper alpha over math. Peel out the associated alpha channel, and multiply its complement (one minus alpha) against your under plate to scale the RGB emissions down according to geometric occlusion. Then add the over plate's RGB to the result. It's a bit clunky as a two-step process, but the result should theoretically be identical to the proper alpha over calculation, assuming Adobe isn't botching the RGB with a predivide or something equally janky in their software stack. The issue of open domain EXRs in Photoshop would be another nightmare stack, as you are then having to form the picture. You'd be at the whim of whatever bad idea Adobe has, such as ACES or some other nonsense. I believe there's an EXR plugin and an OpenColorIO plugin that might allow you to negotiate the nightmare fuel of the Adobe designs.
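That two-step peel can be sketched in a few lines of illustrative Python (plain per-pixel tuples; the function name and shapes are my own, not from any particular package):

```python
def manual_over(fg_rgb, fg_alpha, bg_rgb):
    """Alpha over, performed as the manual two-step described above.

    Assumes associated (premultiplied) alpha throughout.
    """
    # Step 1: scale the under plate down by the over plate's occlusion.
    scaled_bg = tuple(c * (1.0 - fg_alpha) for c in bg_rgb)
    # Step 2: add the over plate's RGB emission to the scaled result.
    return tuple(f + b for f, b in zip(fg_rgb, scaled_bg))

# A half-occluding grey sample over a white under plate:
# each channel works out to 0.4 + (1 - 0.5) * 1.0 = 0.9
print(manual_over((0.4, 0.4, 0.4), 0.5, (1.0, 1.0, 1.0)))
```

Done as two explicit passes, this is exactly the alpha over result, so it can serve as a sanity check against whatever a given application produces.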
  2. That is why it can be useful for the pedagogical side to provide citations, and demonstrations that align with meaningful ways to think about the problem. The latter seems to be the largest stumbling block, due to preconceived ways of thinking about the problem. You can say that again! This is the preconception problem. Someone poses a baseline model, we orient to it, and then our thinking goes out the window and we are left with the fragments of a broken understanding. A good example is "white" and "black". Ask a hundred experienced authors what "white" and "black" are and you'll hear someone say "Code value zero" or "One hundred percent code value". This is almost too obvious to think about. Yet when we are told these things, and other ridiculous ideas such as "highlights" or "super whites" or such nonsense, we turn our brains off and instead get stuck in the confused orthodox soup. Have we actually tried to think about what "black" or "white" is? Yet another one of those "So simple it is silly" questions. But then we look around us... Wait a minute... how can "black" and "white" be at what appears to be every single code value??? And those... those are still "highlights", aren't they? How can those be at every code value too? How can "black" and "white" reside at every single luminance value of a pictorial depiction? And how are we even able to sample a region on the sphere-like depictions that corresponds loosely to what we think we are seeing? The moment we dedicate any actual thought to some of these taken-for-granted subjects, the perplexing nature of their mechanisms becomes far more apparent. No harm in downloading the free-as-beer version of Resolve, which includes Fusion, and testing the mechanisms to make sure one isn't losing their marbles. Resolve's Fusion page should composite and display the EXR in the thread according to what has been discussed, plus or minus the massively significant picture formation algorithm.
I hate to agree with you, but this is sadly where we currently find ourselves. Trust is a tricky thing to build, and if you read that horrible Adobe thread, you can get a sense of the reality-defying pushback. Also note the date... and the software may still not work properly. The best inoculation is an intuitive understanding that one can hopefully, and cautiously, pass along to those who may not quite see the complexity. Find names and minds you have skeptically come to rely on, and chase their threads. Oh... and test against other software!
  3. I was absolutely perplexed by something that seemed so simple at first. Later on, I acquired some mental trauma from tracking down a particularly nasty bug around alpha and PNGs. You'd probably not be surprised that, once one becomes more or less familiar with the nuances around alpha channel handling, nuanced bugs can crop up in even the most robust software. So... yes. 🤣 This is where I encourage folks to gain enough confidence in their own, hopefully well researched, understanding. Eventually, it enables folks to identify where the specific problem is emerging. The skull demonstration has zero alpha regions within the "flame" portions. Following the One True Alpha formula, the operation should be adding the emission component to the unoccluded plate code values. There is no "scaling by proportion of occlusion" to the plate that is "under", as indicated by the alpha being zero. The author has some expanded commentary over on the issue tracker. The following diagram loosely shows how the incoming energy, the sum of the green and pink arrows, yields a direct "additive" component in the green arrow, and the remaining energy indicated by the pink arrow, scaled by whatever is "removed" in that additive component, is then passed down the stack. If this seems peculiar, the next time you are looking out of a window, look at the reflections "on" the window. They are not occluding in any way! Similar things occur with many, many other phenomena of course. For example, in the case of burning material from a candle, the particulate occlusion is so close to zero as to effectively be zero. Not quite identical to the reflection or gloss examples, but suitable enough for a reasonable demonstration. Sadly, Adobe has been a complete and utter failure on the subject of alpha for many, many years. If you crawl over the OpenImageIO mailing list and repository, you will find all sorts of references as to how Adobe is mishandling alpha.
Adobe's alpha handling is likely a byproduct of tech debt at this point, culminating with a who's who of image folks over in the infamous Adobe Thread. Zap makes a reference to the debacle in yet-another-thread-about-Adobe-and-Alpha here. Gritz, in the prior post, makes reference to this problem: You can probably read between the lines of Alan's video at this point. So this is two problems, one of which I've been specifically chasing for three decades, and one that relates to alpha. As for the alpha problem, I cannot speak directly to AfterEffects as I am unfamiliar with it, but the folks I have spoken with said that the last time they attempted it, Adobe still did not properly handle alpha. I'd suggest testing it in some other compositing software just to verify the veracity of the claims folks like myself make, such as Nuke non-commercial, Fusion, or even Blender. All three of those should work as expected. My trust that Adobe will do anything "correct" at this point is close to zero. As for "colour management"... that is another rabbit hole well worth investigating, although it's probably easier to find a leprechaun than to pin down what "colour management" in relation to picture authorship means to some people or organizations. Keeping a well researched and reasoned skepticism in mind in all of these pursuits is key. 🤣
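To make the zero-occlusion "flame" case concrete, here is a small illustrative Python sketch (the names are mine, not from the skull demonstration itself). With alpha at zero, the under plate passes through completely unscaled and the emission is purely additive:

```python
def one_true_alpha_over(a_rgb, a_alpha, b_rgb):
    """A over B for associated-alpha samples:
    result = A.RGB + (1 - A.alpha) * B.RGB
    """
    return tuple(a + (1.0 - a_alpha) * b for a, b in zip(a_rgb, b_rgb))

# A flame-like sample: strong emission, zero geometric occlusion.
flame_rgb, flame_alpha = (2.0, 0.8, 0.1), 0.0
plate_rgb = (0.25, 0.25, 0.25)

# (1 - 0) leaves the plate untouched; the flame simply adds on top.
print(one_true_alpha_over(flame_rgb, flame_alpha, plate_rgb))
```

Software that darkens the plate under those flame regions is scaling by an occlusion that the alpha says is not there, which is precisely the class of bug being described.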
  4. Apologies... I can only post one post per day. Probably better that way... 🤣 The issue, as the linked video in the previous post gets into, is sadly nuanced. The correct math within the tristimulus system of RGB emissions is remarkably simple; it is just that software and formats are less than optimal. The dependencies of the "simple compositing operation" are:
     1. Software employed. Many pieces of software are hugely problematic. See the infamous Adobe thread as a good example.
     2. File formats employed. Some file encodings cannot support the proper One True Alpha, to borrow Mr. Gritz's turn of phrase, such as PNG.
     3. Data state within software. Even if we are applying the proper mathematical operation to the data, if the data state is incorrect, incorrect results will emerge.
     The short answer is that if we have a generic EXR, the RGB emission samples are likely normatively encoded as linear with respect to normalized wattages, and encoded as gained with respect to geometric occlusion. That is, in most cases, the EXR data state is ready for compositing. If your example had a reflection off of glass, a satin or glossy effect, a flare or glare or glow, a volumetric air material or emissive gas, etc., you'd be ready to simply composite using the One True Alpha formula, for the resultant RGB emissions only^1:
     A.RGB_Emission + ((100% - A.Alpha_Occlusion) * B.RGB_Emission)
     Your cube glow is no different to any of the other conventional energy transport phenomena outlined above, so it would "Just Work". If however, the software or the encoding is broken in some way, then all bets are off. That's where the video mentions that the only way to work through these problems is by way of understanding. Remember that the geometry is implicitly defined in the sample. In terms of a "plate", the plate that the A is being composited "over" simply lists the RGB emission, which may be code value zero.
As such, according to the above formula, your red cube RGB emission sample of gloss or glow or volumetric would simply be added to the "under" plate. The key takeaway is that all RGB emissions always carry an implicit spatial and geometric set of assumptions. A composite that darkens or otherwise mangles a zero-occlusion emission should never happen in well behaved encodings and software; if it does, there's an error in the chain! JKierbel created a nice little test EXR to see if your software is behaving poorly. Hope this helps to try and clear up a bit of the problem surface. See you in another 24 hours if required... 🤣
--
1. The example comes with a caveat that the "geometry" of the sample is "uncorrelated". For "correlated" geometry, like a puzzle piece perfectly aligned with another puzzle piece, as with a holdout matte, the formula shifts slightly. The formula employed is a variation of the generic "probability" formula, as Jeremy Selan explains in this linked post. If we expand the formula, we end up with the exact alpha over additive formula above. It should be noted that the multiplicative component is actually a scaling of the stimuli, based on energy per unit area. A more "full fledged" version of the energy transport math was offered up by Yule and (often misspelled) Nielsen, which accounts for the nature of the energy transport in relation to the multiplicative mechanisms of absorption attenuation, as well as the more generic additive component of the energy.
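The One True Alpha formula, together with the probability-style union from the footnote, can be sketched as illustrative Python (my own naming; the uncorrelated-geometry assumption applies):

```python
def over(a_rgb, a_alpha, b_rgb, b_alpha):
    """Uncorrelated alpha over, carrying the alpha channel along.

    RGB:   A.RGB + (1 - A.alpha) * B.RGB
    Alpha: the probabilistic union 1 - (1 - aA) * (1 - aB),
           which expands to aA + (1 - aA) * aB, the same additive form.
    """
    out_rgb = tuple(a + (1.0 - a_alpha) * b for a, b in zip(a_rgb, b_rgb))
    out_alpha = a_alpha + (1.0 - a_alpha) * b_alpha
    return out_rgb, out_alpha

# Half-occluding grey over a fully opaque white plate:
rgb, alpha = over((0.4, 0.4, 0.4), 0.5, (1.0, 1.0, 1.0), 1.0)
```

Note the structural point: the very same "add, then scale the remainder" shape governs both the emission channels and the occlusion channel.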
  5. Hello all. Flattered to be mentioned here. I just wanted to point out that the statement is not quite correct; the result will indeed include an emission component in the RGB after the composite. With associated alpha (aka “premultiplied”), the emission is directly added atop the occluded result. This happens to be the only way alpha is generated in PBR-like rendering systems, and is more or less the closest normative case of a reasonable model of transparency and emission. It also happens to be the sole way to encode any additive component such as a gloss, a flare, a glare, fires or emissive gases, glows, etc. I’ve spent quite a few years now trying to accumulate quotations on the subject from luminaries in the field. Hope this helps.
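One way to see why associated alpha is the sole way to encode an additive component: converting to unassociated ("straight") alpha requires dividing by alpha, which is undefined exactly where pure emission lives. A small illustrative sketch (hypothetical names of my own):

```python
def to_unassociated(rgb, alpha):
    """Convert an associated-alpha sample to unassociated ("straight") alpha.

    A pure emission sample (alpha == 0, RGB > 0) has no unassociated
    form: the divide is undefined, so encodings and pipelines built on
    unassociated alpha simply cannot represent the additive component.
    """
    if alpha == 0.0:
        raise ValueError("pure emission has no unassociated representation")
    return tuple(c / alpha for c in rgb)

# A glowing, non-occluding sample survives only in associated form:
try:
    to_unassociated((2.0, 0.8, 0.1), 0.0)
except ValueError as exc:
    print(exc)
```

Any format or application that silently unassociates on load will mangle glows, flames, and reflections for exactly this reason.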