Hi!
I just want to ask something about the sRGB standard. I couldn't find a real answer via Google, so maybe someone here can explain it further.
In the real world, light intensities are linear, of course.
Our perception is non-linear: we can distinguish smaller changes in the darks than in the brights. So our eyes effectively apply a compressive "curve" to the light they receive (see the little sanity check below).
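Just to check that claim with numbers, here is a small Python sketch using the CIE 1976 L* lightness formula (my own addition for illustration, not something from the sRGB spec): the same linear step of 0.05 is a much bigger perceptual step near black than near white.

```python
def cie_lightness(Y):
    """CIE 1976 L*: perceptual lightness (0-100) for relative luminance Y (0-1)."""
    if Y <= 216 / 24389:
        return Y * 24389 / 27
    return 116 * Y ** (1 / 3) - 16

# The same linear step of 0.05 is perceptually much bigger in the darks:
print(cie_lightness(0.10) - cie_lightness(0.05))  # ~11.1 L* steps
print(cie_lightness(0.95) - cie_lightness(0.90))  # ~2.0 L* steps
```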
Our monitor, with a gamma of 2.2 (or 2.5, to be precise), darkens the image we are looking at: a mid-grey of 0.5 comes out as 0.5^2.2 ≈ 0.22. So a linear image would look too dark on a monitor.
To compensate for this, a gamma correction of 2.2 (or, to be precise, an inverse gamma of 1/2.2 ≈ 0.4545) has to be applied to all pictures (digicam, video…). Now the "image itself" looks too bright.
But put together, the two gamma functions (nearly) cancel each other out, and we get a perfectly linear image displayed on our monitor, right?
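Here is how I picture that round trip, as a minimal Python sketch (assuming the monitor is a pure 2.2 power law and ignoring the piecewise sRGB curve for now):

```python
MONITOR_GAMMA = 2.2

def encode(linear):
    """Gamma correction applied to the picture: brightens it."""
    return linear ** (1.0 / MONITOR_GAMMA)

def display(encoded):
    """What the monitor does: darkens the signal it receives."""
    return encoded ** MONITOR_GAMMA

linear_grey = 0.5
encoded = encode(linear_grey)   # ~0.7297 -> the stored image looks too bright
displayed = display(encoded)    # ~0.5    -> the two curves cancel out

print(f"linear {linear_grey} -> encoded {encoded:.4f} -> displayed {displayed:.4f}")
```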
And with our eyes looking at this "linear image", it's as if we were looking out of a window: our perception applies its own curve to it, just like it does in the real world.
Okay, so far so good. But why do so many articles about sRGB say that it closely matches the monitor gamma of 2.2? Do they mean the inverse? Because an image with the sRGB standard embedded would then look way too dark: it would already have a darkening gamma of 2.2 applied, and the monitor gamma of 2.2 would then darken it again. Or am I missing something?
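For reference, this is the sRGB encoding curve as I found it in IEC 61966-2-1, sketched in Python next to a pure 1/2.2 power curve. The numbers come out very close, which I guess is what the articles mean by "closely matches gamma 2.2", but both curves clearly brighten, so I don't see where a darkening 2.2 would come from:

```python
def srgb_encode(linear):
    """sRGB encoding (OETF) per IEC 61966-2-1: linear light -> encoded value."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

def srgb_decode(encoded):
    """Inverse: encoded value -> linear light (what an sRGB display reproduces)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# Compare the spec curve against a naive 1/2.2 power curve:
for linear in (0.05, 0.18, 0.50):
    print(f"linear {linear:.2f} -> sRGB {srgb_encode(linear):.4f}"
          f" vs pure 1/2.2 {linear ** (1 / 2.2):.4f}")
```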
Maybe someone can explain this a little further.
Kind regards,
Dez