Hello, can someone help me with this unanswered question?
Why does shooting 8-bit 4:2:0 in camera and exporting to 8-bit 4:2:0 H.264 look worse than shooting 12-bit 4:4:4 in camera and exporting to the same 8-bit 4:2:0 H.264? Logically they should look the same, since either way we ultimately end up with an 8-bit 4:2:0 signal. What am I getting wrong here? Thank you for your answers.
Shooting at a higher bit depth almost always gives a better end image. An in-camera encoder has to compress every frame in real time on limited hardware, so it makes crude decisions: you lose color resolution and pick up quantization noise right at the source, and that damage is baked into everything downstream. When you work from a 12-bit 4:4:4 recording, all your grading and scaling happens at high precision, and only the very last step rounds down to 8-bit 4:2:0. Compressing in camera throws away sensor data before you ever see it; compressing at the end of the pipeline only discards detail the delivery format couldn't show anyway.
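A rough sketch of why the capture bit depth still matters even though both files end up 8-bit. This is a toy model, not any camera's actual pipeline: it quantizes a dark gradient at 8 or 12 bits, applies a simple 2x exposure push in "post," and counts how many distinct 8-bit output levels survive.

```python
# Toy model (an assumption for illustration, not a real camera pipeline):
# capture a smooth dark gradient at a given bit depth, brighten it in post,
# then deliver at 8 bits. Fewer surviving levels = more visible banding.

def quantize(value, bits):
    """Round a 0.0-1.0 value to the nearest code at the given bit depth."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

def deliver_8bit(scene_value, capture_bits):
    """Capture at capture_bits, apply a 2x gain grade, deliver as 8-bit."""
    captured = quantize(scene_value, capture_bits)  # in-camera quantization
    graded = min(captured * 2.0, 1.0)               # exposure push in post
    return round(graded * 255)                      # final 8-bit delivery

# A smooth, dark gradient: the worst case for banding.
scene = [i / 4095 * 0.25 for i in range(4096)]

levels_8 = len({deliver_8bit(v, 8) for v in scene})
levels_12 = len({deliver_8bit(v, 12) for v in scene})

# The 12-bit capture yields roughly twice as many distinct output levels
# after the grade, even though both deliveries are 8-bit.
print(levels_8, levels_12)
```

The 8-bit capture rounds the shadows to a handful of codes before the grade, so doubling the gain just spreads those few codes apart (visible banding); the 12-bit capture still has intermediate codes to land on.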
Adding to what jack says: each sensor readout has to be turned into a frame in 1/30th of a second or less. To hit that deadline, an in-camera encoder often reduces bit depth and chroma with a crude, generic pixel-combination algorithm that simply dumps half the color data down the toilet. Subsampling to 4:2:0 halves the chroma resolution in each direction: you may have 1920 pixels across, but only 960 chroma samples, and a cheap real-time encoder just copies one pixel's chroma to its neighbors instead of combining them properly.
In post, instead, the software can average neighboring samples and preferentially discard data where pixels already agree, which conserves image quality. Same output format, two totally different results.
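The copy-vs-average point above can be sketched in a few lines. This is a hypothetical illustration (real encoders use better filters than a plain box average): it subsamples a chroma plane to 4:2:0 two ways and measures the error after upsampling back.

```python
# Hypothetical comparison of two ways to drop chroma to 4:2:0 (one sample per
# 2x2 block): averaging the block (what a careful encoder approximates) vs.
# copying the top-left pixel (a crude real-time shortcut).

def subsample_420(chroma, average=True):
    """Reduce a 2D chroma plane to one sample per 2x2 block."""
    h, w = len(chroma), len(chroma[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = [chroma[y][x], chroma[y][x + 1],
                     chroma[y + 1][x], chroma[y + 1][x + 1]]
            row.append(sum(block) / 4 if average else block[0])
        out.append(row)
    return out

def upsample_mse(chroma, sub):
    """Mean squared error after nearest-neighbour upsampling to full size."""
    h, w = len(chroma), len(chroma[0])
    total = sum((chroma[y][x] - sub[y // 2][x // 2]) ** 2
                for y in range(h) for x in range(w))
    return total / (h * w)

# A smooth horizontal chroma gradient, 8x8 pixels.
plane = [[x * 4.0 for x in range(8)] for _ in range(8)]

err_avg = upsample_mse(plane, subsample_420(plane, average=True))
err_copy = upsample_mse(plane, subsample_420(plane, average=False))

# Averaging reconstructs the gradient with lower error than pixel copying.
print(err_avg, err_copy)
```

On this gradient the copy shortcut doubles the reconstruction error relative to averaging, which is the "two totally different results" the reply describes.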