[Gamut mapping app] Scale LH improvements #438
Conversation
@LeaVerou This is primarily a demonstration, so I'm fine with it being merged as is, with incorporating it into the existing method, or with letting it hang out on a branch for demonstration purposes. Feel free to touch base with me on Discord if that's easiest.
I've updated this to apply the two changes directly to the LH method, and for the sake of dedicated PRs, I have split the delta display question out into its own PR.
apps/gamut-mapping/methods.js
@@ -23,6 +23,16 @@ const methods = {
      label: "Scale LH",
      description: "Runs Scale, sets L, H to those of the original color, then runs Scale again.",
      compute: (color) => {
+       if (color.inGamut("p3", { epsilon: 0 })) {
+         return color.to("p3");
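For readers outside the colorjs codebase, the early return above boils down to a channel-range test. A minimal sketch in plain JavaScript, using a bare coordinate array in place of a colorjs Color object:

```javascript
// Sketch of the epsilon-0 in-gamut test being added above: a color is
// inside the P3 gamut when every one of its P3 coordinates lies within
// [0, 1]. Plain arrays stand in for colorjs Color objects here.
function isInGamut(p3Coords, epsilon = 0) {
  return p3Coords.every((c) => c >= 0 - epsilon && c <= 1 + epsilon);
}
```

With epsilon 0 the test is exact: a coordinate that overshoots 1 by any amount counts as out of gamut.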
Why not simply return `color`? There is no reason to convert it to anything, and any conversion is a potentially lossy operation.
I did this to match what was happening in the `scale` method, which returns the color converted to P3. This way, the method returns in a consistent space regardless of whether it starts in gamut or not. Otherwise, the serialized display would switch back and forth between p3 and oklch depending on whether the color was in gamut.
(Perhaps the results from all methods should be serialized in a single format: currently some are oklch and some are p3, making them harder to compare. But that's somewhat orthogonal to the question of whether the Scale LH method should return in a consistent space.)
I would prefer this to be a separate method (named something more descriptive than Scale LH 2).
- The separate conversion hurts performance, and perf is a core advantage of Scale LH.
- The discontinuity at 0 and 1 could affect quality.
Also, as I said in my comment, there is no reason to convert to P3 at any point. The objective is to be within P3 gamut, not to be in the P3 space.
I think the reason this change was made may be a misunderstanding of what Scale LH is trying to accomplish. I was also guilty of not quite understanding Scale LH, and maybe I still am, so please correct me if I'm wrong. But I think Scale LH is aimed at being a sane replacement for clip rather than a better overall gamut mapping approach. If a browser didn't want to opt into a more advanced gamut mapping method due to speed concerns, this would be way better. Am I right in that understanding? It tries to retain as much hue as possible and a reasonable amount of lightness (something clip doesn't really do at all).
My thinking in applying this is that it is part of the CSS Color 4 spec for (ok)lab and (ok)lch.
Because we want to evaluate the method for gamut mapping to the display, I wanted to see what happens when it is applied. It appears it isn't specified that this would also apply when the lightness of a P3 color is 100% when converted to oklch, so perhaps I'm mistaken on that. I'm fine with creating a separate method, but my intention is to apply my understanding of the spec to Scale LH, and if that understanding is incorrect, then my alternate method wouldn't be needed anyway.
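As I read it, the spec behavior being discussed amounts to a pre-check on Oklch lightness before any chroma reduction runs. A hedged sketch of that reading (plain `[L, C, H]` arrays stand in for color objects; the exact wording should be confirmed against CSS Color 4):

```javascript
// Sketch of the white/black pre-check under discussion: Oklch lightness
// at or above 1 maps straight to white, at or below 0 to black, before
// the rest of the gamut-mapping method runs. This is one reading of the
// spec, not the library's implementation. Arrays are [L, C, H].
function lightnessPrecheck([L, C, H]) {
  if (L >= 1) return [1, 0, H]; // white
  if (L <= 0) return [0, 0, H]; // black
  return null; // no shortcut; run the full gamut-mapping method
}
```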
I don't have time to check the code right now, but if
We should not be comparing GMAs by looking at coordinates (or by eyeballing, which seems to be the primary method right now!). There are established metrics for evaluating color difference. Humans are notoriously bad at comparing sets of multiple variables (which is why decision-making frameworks involve scorecards, etc.).
Yes and no. That was the original motivation behind Scale LH, but when we saw the results in practice, I wondered if we could have our cake and eat it too.
I agree the coordinates are less meaningful. Near the end of my journey finalizing the ray trace GMA concept, ∆L and ∆h specifically were key to homing in on what was absolutely necessary. Dialing those in, in turn, gave visually good results; visually good results did not always yield low ∆L and ∆h values. Trying to tune things specifically to visuals is far more difficult. I don't find ∆E as meaningful except for spotting a huge deviation between algorithms. I think it is good to keep, just not necessarily THE metric to judge an algorithm on. At the distances we're sometimes working with when comparing GMAs, if the different GMAs are roughly in the same ballpark, it's a good sign you're close to where you need to be.
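For concreteness, the two deltas I leaned on can be computed as below. This is a minimal sketch: `[L, C, H]` arrays stand in for Oklch colors, and hue is assumed to be in degrees.

```javascript
// Absolute lightness difference and shortest-arc hue difference between
// two Oklch colors, the two numbers discussed above. Hue wraps at 360,
// so the 350° -> 10° distance is 20°, not 340°.
function deltaLH(a, b) {
  const dL = Math.abs(a[0] - b[0]);
  let dH = Math.abs(a[2] - b[2]) % 360;
  if (dH > 180) dH = 360 - dH;
  return { dL, dH };
}
```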
I’m not arguing that ΔE is the metric at all. I’m only arguing that we need a metric that is a single number. Perhaps that single number could be a weighted sum of ΔL and ΔH (and even ΔC with a much-reduced weight). Maybe you have a sense of what that could look like now that you have all this empirical data?
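A single-number score of the kind I mean could look like the sketch below. The default weights are purely illustrative placeholders, not values proposed in this thread or derived from any data:

```javascript
// Hypothetical weighted-sum proximity metric: collapses the per-channel
// deltas into one number so gamut-mapping methods can be ranked directly.
// The default weights (a, b, c) are arbitrary stand-ins.
function weightedDelta({ dL, dC, dH }, a = 1, b = 0.25, c = 1) {
  return a * Math.abs(dL) + b * Math.abs(dC) + c * Math.abs(dH);
}
```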
I don't know if I have a minimum range yet. I do think certain lightness ranges can tolerate more hue shift: really dark colors can tolerate more, and maybe the really, really light ones too. If anyone is interested, here is a comparison of approaches, with maximum deviations broken into roughly 25 lightness increments. I went ahead and threw out white and black. No one is beating the LUT-based one 🙃.
Separately, let me know if the white/black conversions are outside the scope of what these gamut mapping algorithms are intended to do, and I can remove that part as well.
Very interesting. I hadn't yet evaluated the data to this extent, so it is interesting to see some of my thoughts more strongly confirmed. The good news is that faster, more accurate approaches than the current CSS recommendation/example are possible. I also think specifying some minimum metric that should be met is probably not a bad idea, as there will be strong opinions about the best way to approach it: complexity vs. speed vs. accuracy. Someone is always going to come along and say, "I can do it better!" I am interested to see whether other novel approaches surface as well. I've found this area to be interesting 🙂.
@jamesnw Wow, these are fascinating! What is the vertical axis?

@facelessuser While the ranges are useful, I was asking if you had a sense of what kind of weighted sum of deltas might give us a good single-dimension measure of proximity. I.e. a * ΔL + b * ΔC + c * ΔH: what should a, b, c be?

I'm also wondering if I should reinstate Scale LH+L, which has an extra step that sets lightness to that of the original color and scales again, perhaps conditionally if the shift is above a certain level. It had worse ΔEs, so I rejected it early, but it might not have this issue?
@LeaVerou Yeah, I realized what you asked; I just don't have an answer yet. I probably didn't make that clear. I had not yet considered deriving such a metric, though I admit it would be useful. I would need more time to think about the question. I had not been trying to see how much slack in the system I could get away with, but rather how to get as close as possible, as quickly as possible, in the least complex way. So I shared some interesting musings instead :).
@LeaVerou Your intuitions were right: the extra step of correcting only L increases ∆h more significantly in the upper lightness range. Results
I think the real issue is the approach; it was never going to net you much lower than what you got. I did try. The way Scale LH scales the colors just can't correct as well, but the idea of what it is doing is what sparked something in my brain. Make no mistake, the fancy "raytrace" name I gave my method (and even the new function) is no different from what I started out with. Functionally it is the same, so much so that you can replace the current function with the inverse interpolation approach and get the same results. And the whole idea was inspired by Scale LH: force max saturation of the color in RGB, but correct it in OkLCh.

The original idea was to use inverse interpolation to figure out the value needed to force the channel with the greatest out-of-gamut magnitude to either 0 or 1 (whichever is closest), using interpolation instead of scaling to the midpoint. This is basically finding the intersection of the RGB cube surface with the line through the achromatic color and the color of interest. We were just calculating the interpolation value needed to get us there, and then calculating the color at that point. The new method does the same thing, just in a different way; I'm not even sure which approach is actually faster. Using this approach while keeping the same flow as Scale LH immediately produced better deltas. Scaling the saturation is what made all the difference. Results
But I thought we could do better, so I thought: let's correct LH twice and scale one more time. It did even better, but the approach gives max saturation when decreasing chroma, so often the final scale did nothing. It doesn't increase chroma when we've overcorrected, which happens when we are on the RGB cube surface and correcting LH puts us below the surface. This is where I got stuck for a bit. I thought, let's back off and maybe approach "softer", but you'd end up in relatively the same spot: close, but not quite right. Maybe yellows in some regions were a little more orange than you'd like.

Then it occurred to me that I was close to the surface, but the chroma line back to the original color was no longer of use to me, because colors don't change the same way in RGB as they do in OkLCh. Still, I was close to the surface, much closer to the real color I was after. That's when I realized that if I just extended the line from the achromatic point through the new point, back outside the cube, ignoring the original color point, I could find the intersection one more time and be closer to the actual color I wanted. You can actually stop there; I think you get a max ∆h of ~8 across all lightness at that point. Maybe that's good enough, and you'd be faster, but the perfectionist in me said nah: correct LH one more time and then clip, and now we have ∆h ~4.
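The geometry described above can be sketched as follows. This is my reading of the idea, not the actual implementation: find the interpolation factor that lands the most out-of-range RGB channel exactly on the unit-cube surface, and reuse the same routine to re-project a point that drifted inside the cube by first pushing it far back out along the ray from the achromatic anchor. The anchor is assumed to lie inside the cube, and the factor 100 is an arbitrary "well outside the cube" multiplier.

```javascript
// Intersect the line from an achromatic anchor through `color` with the
// RGB unit-cube surface: find the smallest t at which every channel is
// back inside [0, 1], i.e. the most out-of-range channel lands exactly
// on 0 or 1. If the color is already inside, it is returned unchanged.
function intersectCube(anchor, color) {
  let t = 1;
  for (let i = 0; i < 3; i++) {
    const a = anchor[i], c = color[i];
    if (c > 1) t = Math.min(t, (1 - a) / (c - a)); // hits the upper face
    else if (c < 0) t = Math.min(t, -a / (c - a)); // hits the lower face
  }
  return color.map((c, i) => anchor[i] + t * (c - anchor[i]));
}

// Re-projection step: push a point that ended up just inside the cube
// far back outside along the achromatic ray, then intersect again,
// ignoring the original color point entirely.
function reproject(anchor, point, factor = 100) {
  const outside = point.map((p, i) => anchor[i] + factor * (p - anchor[i]));
  return intersectCube(anchor, outside);
}
```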
I take it back: Scale LH does improve considerably, but you must correct both L and H in both iterations. It still saturates high lightness more than other approaches, but it does get much better at preserving hue. I may have stepped away from Scale LH when I saw larger improvements from scaling towards achromatic while doing the same thing; I also found the high saturation at high lightness difficult to work with when generating tones via interpolation.
It's the delta between
Yes, absolutely; it is the "please at least do something less broken" method (and is surprisingly good, for that). |
I agree, it does a great job as a clip replacement. |
I realize I took this on a bit of a sidetrack here, but returning to the proposed changes in the PR: what changes should be made to Scale LH?
In my opinion:
OK, I have implemented it to return in the input color space, with no white/black handling.
This updates the Scale LH method, applying a few initial checks: