I think this is different: binning is the clustering of more than one pixel to act as one. It is used to reduce read noise, but at the expense of sensor resolution.
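To make the distinction concrete, here is a minimal sketch (illustrative only, not code from this project) of 2x2 binning on a tiny monochrome frame: each output pixel is the sum of a 2x2 block of input photosites, so the read-noise contribution per output value shrinks relative to signal while resolution halves in each dimension.

```python
def bin2x2(frame):
    """Sum each 2x2 block of a 2D list of pixel values (2x2 binning)."""
    h, w = len(frame), len(frame[0])
    return [
        [frame[y][x] + frame[y][x + 1] + frame[y + 1][x] + frame[y + 1][x + 1]
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

frame = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]
print(bin2x2(frame))  # [[14, 22], [46, 54]] -- half the resolution, combined charge
```

The alternating-exposure mode discussed below does nothing like this: every photosite keeps its own value, only the exposure time varies per row.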
So in the end you get a picture with light and dark pixel stripes? That's a really strange mode, maybe for some technical applications.

Correct, and it has nothing to do with binning: the sensor does not combine or average any photosites, it is simply an image whose rows have different exposure times. Also note that this mode behaves differently on monochrome and Bayer-pattern sensors.
Monochrome: exposure alternates between even and odd rows.
Bayer pattern: one 2x2 GRBG photosite block shares the same exposure time, so the exposure actually alternates every 2 lines.
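The row-to-exposure assignment can be sketched like this (a hypothetical helper, not from the camera firmware): a monochrome sensor alternates every row, while a Bayer sensor keeps each 2x2 GRBG block on one exposure, so it alternates every 2 rows.

```python
def row_exposure(row, bayer=False):
    """Return 'long' or 'short' for a given sensor row index."""
    period = 2 if bayer else 1  # Bayer: a 2x2 block spans 2 rows, one exposure
    return "long" if (row // period) % 2 == 0 else "short"

mono = [row_exposure(r) for r in range(6)]
bayer = [row_exposure(r, bayer=True) for r in range(6)]
print(mono)   # ['long', 'short', 'long', 'short', 'long', 'short']
print(bayer)  # ['long', 'long', 'short', 'short', 'long', 'long']
```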
So they use some kind of log sensor curve for sensor operation, a non-linear response mode?

I got an update from CMOSIS yesterday, good news:
90 dB = 15 f-stops is achieved in a piecewise-linear response mode. I don't see any problem with color reconstruction; you just need to be aware of the changed response curve. Since we define the curve ourselves very precisely, we can store it with the footage, unapply the curve in post, and transform the pixels back into linear color space at a higher bit depth if we want to. It is similar to the log modes in other cinema cameras.
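Unapplying such a curve in post can be sketched as follows. The kneepoint values below are made up for illustration (not the actual CMOSIS parameters): the response is stored as monotonic (linear input, coded output) pairs, so inverting it is simple linear interpolation from the 12-bit coded signal back to linear light at a higher bit depth.

```python
from bisect import bisect_right

# Hypothetical 12-bit piecewise-linear response: three slopes with knees.
LINEAR = [0, 1024, 8192, 65535]   # linear scene values (16-bit range)
CODED  = [0, 1024, 2816, 4095]    # 12-bit sensor output at those values

def unapply_curve(code):
    """Map a 12-bit coded value back to linear light (16-bit range)."""
    i = min(bisect_right(CODED, code), len(CODED) - 1)
    x0, x1 = CODED[i - 1], CODED[i]
    y0, y1 = LINEAR[i - 1], LINEAR[i]
    return y0 + (code - x0) * (y1 - y0) / (x1 - x0)

print(unapply_curve(1024))  # 1024.0 (first segment is identity)
print(unapply_curve(4095))  # 65535.0 (full scale recovers 16-bit linear)
```

The key point is that the curve is known exactly, so the inversion is lossless up to quantization, unlike guessing a log curve after the fact.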
Sebastian wrote:The idea for the new sensor front-end is to place an additional FPGA on the PCB that downscales the 12-megapixel images to something reasonable (like FullHD) and converts the data into a format that the current 353 camera can work with. The new sensor front-end would then send image data the same way as the current sensor front-end on all 353s does. That way it is fully compatible with the 353 and will also be fully compatible with the 373.
When the performance of the image processing pipeline increases with new Elphel hardware (like the 373), we can simply increase the resolution/FPS of the data that our FPGA on the sensor front-end generates with a software update.
So the new sensor front-end would not allow us to capture 12 megapixels at 300 FPS, sorry.
I agree you cannot "use" it directly without post-processing. It would require either a debayering algorithm that is aware of the 2 different strips and can gather samples from the correct exposure, or a tool that splits the image into 2 separate full-exposure images, reconstructing the missing strips from the other exposure image. Remember we have twice the number of rows needed for FullHD/2K, so if the target is a downscaled 2K image this could work very well.
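The splitting approach can be sketched like this (a toy monochrome example, not a real debayer tool): separate the rows by exposure into two half-height images. Because the sensor has twice the rows needed for a 2K target, each half-height image already has enough lines for the downscaled output; a real tool would additionally interpolate the missing rows or merge both exposures into an HDR frame.

```python
def split_exposures(frame, rows_per_group=1):
    """Separate rows into (long, short) images; use rows_per_group=2 for Bayer."""
    long_img, short_img = [], []
    for r, row in enumerate(frame):
        if (r // rows_per_group) % 2 == 0:
            long_img.append(row)
        else:
            short_img.append(row)
    return long_img, short_img

# Rows alternate long/short exposure in this toy monochrome frame.
frame = [[10, 10], [1, 1], [20, 20], [2, 2]]
long_img, short_img = split_exposures(frame)
print(long_img)   # [[10, 10], [20, 20]] -- half height, long exposure only
print(short_img)  # [[1, 1], [2, 2]]     -- half height, short exposure only
```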
Sebastian wrote:The quoted text is by now outdated. A recent announcement changed that plan a bit (in line with your concerns): http://apertus.org/en/apertus-camera-dp-team-up