I hope this will come to market commercially soon …
It’s already there, I didn’t know …
With the disadvantage, of course, of losing light sensitivity, since the light has to pass through three filter films instead of one …
Not really, because the light always has to pass through three color filters anyway … all sensors by themselves only capture light as … light and dark.
They say the Sigma SD9 and SD10 use it. But Sigma stuff is rubbish, isn’t it? I bet it costs eleven hundred billion dollars.
Sigma lenses are quite good, but these are the first digital cameras they’ve made, so they still have to prove what they’re worth.
That’s not really correct. In a normal image sensor the color filters are arranged side by side. Each pixel is made up of three subpixels - red, green, blue (RGB). The light has to pass either the red filter, the blue filter or the green filter, but not all three of them.
Right, you’re right … I mixed up some stuff there. But anyway, the point stands: the stacked design gives you more sites capturing each color, since normally only about every third pixel captures a given color, with green being the most common.
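To illustrate the side-by-side layout being discussed, here is a rough Python sketch. It assumes the common RGGB Bayer-style arrangement (one filter per photosite, twice as many green sites as red or blue); the function names are my own, just for illustration:

```python
def mosaic_filter(x, y):
    """Which single color filter covers photosite (x, y) in an RGGB mosaic."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def stacked_filters(x, y):
    """A stacked ("direct") sensor captures all three colors at every site."""
    return ("R", "G", "B")

# In each 2x2 block of the mosaic: one red, two green, one blue --
# which is why green is "the most", as said above.
block = [mosaic_filter(x, y) for y in (0, 1) for x in (0, 1)]
print(block)  # ['R', 'G', 'G', 'B']
```

So in the mosaic case only a quarter of the sites see red, a quarter see blue, and half see green; the stacked case sees all three everywhere.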
For a given silicon area, yes. However, at a given resolution it needs much less sensor area to catch the light. And this is the big advantage of this technology: it can reduce the sensor area at the same resolution, that is, it makes the sensors cheaper.
So I guess we will see this sensor mainly in cheap consumer cameras, mobile phones, etc.
Or they need to increase the size to full frame or more and use it in professional cams …
Yes, but in professional cameras the sensor area is a given and cannot be increased without changing the optical system. So the amount of light on the sensor is the same no matter what kind of sensor it is.
While the mentioned direct technology would then indeed increase the resolution (which I believe can be driven to a satisfactory level by conventional technology as well), I doubt that the color filter material mix can be tuned well enough for professional use. The RGB materials have different sensitivities at different wavelengths of light; that’s why professionals change the white balance or use RAW data to adjust the color temperature later. I can’t see how this can be done with a combined sensor.
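For what it’s worth, the white balance correction mentioned above is, at its simplest, just a per-channel gain applied to the RGB values. A minimal sketch (the gain values are hypothetical, picked to suggest a warm tungsten-ish light source):

```python
def white_balance(rgb, gains):
    """Scale each channel by its gain and clip to the 8-bit range."""
    return tuple(min(255, round(c * g)) for c, g in zip(rgb, gains))

# Hypothetical gains: boost blue, tame red, keep green as the reference.
gains = (0.8, 1.0, 1.6)

print(white_balance((200, 150, 100), gains))  # (160, 150, 160)
```

Whether the gains come from a camera preset or from RAW processing afterwards, the math is the same; the open question in the post above is whether the stacked filter materials separate the channels cleanly enough for such per-channel correction to work well.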
I mean like Mamiya RB 67 and the like …
They managed it with traditional film and improved it over the years … so why not with sensors?
The sensor has been out for many years. I remember hearing about it maybe 3 or 4 years ago.
You can find plenty of sample pictures around.