Media server pro guide 3

Have you considered becoming a professional media server operator or user? Unsure of what technical skills you need? This is the third post in our series on media server pros, and this time we’re digging further into the details of video content (yes, video) for playback in media servers, based on recommendations from the fine people of the Media Server Professionals (Facebook) group.

Let’s pick up from where we finished in the last post!

    • Video bit depth or color depth
    • Understanding chroma subsampling
    • Video space and data rate calculations
    • Calculate chroma subsampling impact on file size
    • Drop frame vs non drop frame
    • Frame accuracy and keyframes
    • Conclusion

Video bit depth or color depth

In addition to frame rates and bitrate, a further parameter for video quality is bit depth, also known as color depth. The bit depth of a video file or image is determined by the number of bits used to define each individual pixel. With higher bit depths, a greater number of colours or grayscale levels can be shown.

A black and white image is represented by 1 bit (which has 2 values: 1 for black and 0 for white – or vice versa). Traditionally, most images and videos have been represented with 8 to 24 bits, but higher bit depths are possible, as you will see later.

Before we go further, let me explain bits. A bit is a binary on-off switch. Each additional bit doubles the information:

1-bit = 2 values (0 and 1)

2-bit = 4 values

3-bit = 8 values

4-bit = 16 values

5-bit = 32 values

6-bit = 64 values

7-bit = 128 values

8-bit = 256 values

10-bit = 1,024 values

12-bit = 4,096 values

14-bit = 16,384 values

16-bit = 65,536 values
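The doubling above can be reproduced directly, since an n-bit sample can hold 2^n distinct values. A minimal Python sketch:

```python
# Number of distinct values an n-bit sample can hold: 2 ** n.
# Each extra bit doubles the count.
for bits in [1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16]:
    print(f"{bits}-bit = {2 ** bits:,} values")
```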

When we refer to an 8-bit image, in most cases we are really talking about 3 times 8 bits: 8 bits per color channel, across the three channels red, green and blue (RGB). This allows each channel to be represented by 256 individual levels. It might not sound too impressive, but if we combine them… 256 * 256 * 256 equals 16,777,216 total colors.

Let us do the same math for 10, 12, 14 and 16-bit images:

8 bits per colour channel: 16,777,216 total colours

10 bits per colour channel: 1,073,741,824 total colours

12 bits per colour channel: 68,719,476,736 total colours

14 bits per colour channel: 4,398,046,511,104 total colours

16 bits per colour channel: 281,474,976,710,656 total colours
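The totals above follow from raising the per-channel count to the power of three, one factor per RGB channel. A small Python sketch (the function name is my own):

```python
# Total colours for an image with `channels` colour channels at a given
# bit depth per channel: (2 ** bits) ** channels.
def total_colours(bits_per_channel: int, channels: int = 3) -> int:
    return (2 ** bits_per_channel) ** channels

for bits in (8, 10, 12, 14, 16):
    print(f"{bits} bits per colour channel: {total_colours(bits):,} total colours")
```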

If your display (projector, screen, LED wall) is only capable of displaying 8-bit, or if any component in the entire value chain is 8-bit, there is no reason to deliver content at higher than 8-bit (24 bits total, that is, remember?).

But if your display is capable of 10 or 12-bit then it could make sense for the media server to support and output 10 or 12-bit content.

But as with high frame rates, higher bit depths add to the load of content production and storage, as well as retrieval for playback, with higher read-speed requirements.

Understanding chroma subsampling

Sometimes you will see numbers such as 4:4:4, 4:2:2 or 4:2:0. This is called chroma subsampling, and it refers to how much color information is stored per pixel. Depending on the content of your video or image, you may be well served with less than full color resolution, because color is sampled from nearby pixels when the image is reconstructed.

  • 4:4:4 is the highest quality level. Here there is no subsampling, and each pixel has its own color information.
  • 4:2:2 is half the horizontal resolution while keeping the full vertical resolution.
  • 4:2:0 is half of both vertical and horizontal resolution.             

The numbers should be interpreted as follows: the first number is the width of the sample block in pixels (almost always 4). The second number is how many pixels in the top row of the block have color information (chroma samples), and the third is how many pixels in the bottom row have chroma samples.
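One way to see what this means in practice is to work out the chroma plane dimensions for each scheme. A minimal sketch, assuming the common schemes listed above (the function name is my own): horizontal chroma resolution scales by the second number over the first, and vertical resolution is halved when the third number is 0.

```python
# Chroma plane dimensions for common J:a:b subsampling schemes,
# given the luma (full) resolution of the frame.
def chroma_plane(width: int, height: int, j: int, a: int, b: int):
    cw = width * a // j              # horizontal chroma resolution
    ch = height if b else height // 2  # b == 0 halves the vertical resolution
    return cw, ch

print(chroma_plane(1920, 1080, 4, 4, 4))  # (1920, 1080) full colour resolution
print(chroma_plane(1920, 1080, 4, 2, 2))  # (960, 1080)  half horizontal
print(chroma_plane(1920, 1080, 4, 2, 0))  # (960, 540)   half in both directions
```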

[Illustration: chroma sample patterns]

Video space and data rate calculations

Let’s take a look at some examples of file sizes based on different types of files and formats. To calculate this I used the Video Space Calculator from DigitalRebellion.

1080p resolution (1920x1080) and 10-minute videos, 8-bit:

H.264 @ 30 frames per second = 7.23 GB data

H.264 @ 50 frames per second = 12.06 GB data

H.264 @ 60 frames per second = 14.47 GB data

 

ProRes 422 LT @ 30 frames per second = 7.48 GB data

ProRes 422 LT @ 50 frames per second = 12.46 GB data

ProRes 422 LT @ 60 frames per second = 14.96 GB data

 

ProRes 4444 @ 30 frames per second = 24.19 GB data

ProRes 4444 @ 50 frames per second = 40.32 GB data

ProRes 4444 @ 60 frames per second = 48.39 GB data
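The figures above come from the calculator, but you can sanity-check any of them yourself, since file size is roughly bitrate times duration. A minimal Python sketch; the 100 Mbps figure is an illustrative assumption for 1080p ProRes 422 LT at 30 fps, not an official codec spec:

```python
# Rough file-size estimate: bytes = bitrate (bits/s) / 8 * duration (s).
def size_gb(bitrate_mbps: float, seconds: float) -> float:
    return bitrate_mbps * 1e6 / 8 * seconds / 1e9

# A 10-minute 1080p clip at an assumed ~100 Mbps:
print(f"{size_gb(100, 600):.2f} GB")  # 7.50 GB, in the same range as the table above
```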

Calculate chroma subsampling impact on file size

A good way to calculate the impact of chroma subsampling is to treat a full-color 4:4:4 image as 100%. Add the figures: 4 + 4 + 4 = 12, and 12 equals 100%. With that as a starting point, you can derive the rest:

4:2:2 = 4+2+2 = 8 – or 66.7% of 4:4:4 (12)

4:2:0 = 4+2+0 = 6 – or 50% of 4:4:4 (12)

How do you apply this to a file? Here is the formula, which takes color channels, bit depth, chroma subsampling and resolution into account:

Color channels * Bit depth * Chroma subsampling fraction * Resolution (in pixels) = Number of bits

To convert bits to Kilobytes, divide the number of bits by (8 * 1024). To convert to Megabytes, divide the number of bits by (8 * 1024 * 1024). Wikipedia can teach you more about bits and bytes, but one byte is represented by 8 bits.

Here goes:
4:4:4 calculations

8-bit    = (3 * 8 * 1 * 1920 * 1080) / (8 * 1024) = 6075 Kilobytes = 5.9 Megabytes

12-bit  = (3 * 12 * 1 * 1920 * 1080) / (8 * 1024) = 9112 Kilobytes = 8.89 Megabytes

 

4:2:2 calculations (this is 66.7% of 12, remember)

8-bit    = (3 * 8 * 0.667 * 1920 * 1080) / (8 * 1024) = 4052 Kilobytes = 3.95 Megabytes

12-bit  = (3 * 12 * 0.667 * 1920 * 1080) / (8 * 1024) = 6078 Kilobytes = 5.93 Megabytes
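The formula can be wrapped in a few lines of Python (the function name is my own). Note that the sketch uses the exact fraction 2/3 for 4:2:2 rather than the rounded 0.667, so its 4:2:2 results come out slightly lower than the figures above:

```python
# Uncompressed frame size per the formula in the text:
# channels * bit depth * chroma fraction * pixel count -> bits.
def frame_kilobytes(bit_depth, width, height, j=4, a=4, b=4, channels=3):
    chroma_fraction = (j + a + b) / (3 * j)  # 4:4:4 -> 1, 4:2:2 -> 2/3, 4:2:0 -> 0.5
    bits = channels * bit_depth * chroma_fraction * width * height
    return bits / (8 * 1024)

print(frame_kilobytes(8, 1920, 1080))           # 6075.0 KB for 8-bit 4:4:4
print(frame_kilobytes(8, 1920, 1080, 4, 2, 2))  # 4050.0 KB with the exact 2/3 fraction
```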

Drop frame vs non drop frame

Did you lose something? As you saw earlier, the NTSC frame rate is 29.97 fps, not 30. This means that every second you fall 0.03 frames short. Timecode (which will be covered later) only counts whole frames, and after an hour there is a visible discrepancy. Here is the math:

1 hour of video at 30 fps is 108,000 frames (30 * 60 * 60). But at 29.97 fps it is 107,892 (29.97 * 60 * 60). You have lost 108 frames, so the timecode on the recording would lag behind the clock by 108 frames, or 3.6 seconds (108 / 30 = 3.6).

To compensate for this, you have the option of using drop frame timecode, which works by dropping two frame numbers from each minute, except every tenth minute. You drop the count (frame numbers) but not content, so the content itself is not changed.
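The arithmetic above can be checked in a few lines. Notice that the frame numbers dropped per hour exactly match the hourly discrepancy, which is why the scheme works:

```python
# One hour counted at a nominal 30 fps vs the actual NTSC rate of 30000/1001 fps.
nominal = 30 * 60 * 60                   # 108,000 frame numbers counted
actual = round(30000 / 1001 * 3600)      # 107,892 frames actually recorded
print(nominal - actual)                  # 108 frames (~3.6 seconds at 30 fps)

# Drop frame timecode skips 2 frame numbers per minute,
# except on minutes 0, 10, 20, 30, 40 and 50.
dropped_per_hour = 2 * (60 - 6)
print(dropped_per_hour)                  # 108 -- exactly the discrepancy
```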

Timecode as well as other synchronization methods will be covered later, but Frame.io Insider has an excellent article on timecode, framerates, etc.

Frame accuracy and keyframes

When compressing video files to certain formats, such as H.264, MPEG-2 and MPEG-4, you can set the number of keyframes (I-frames) versus non-keyframes. IBM has published a great article about video frames in general and writes this about keyframes:

“The keyframe (i-frame) is the full frame of the image in a video. Subsequent frames, the delta frames, only contain the information that has changed. Keyframes will appear multiple times within a stream, depending on how it was created or how it’s being streamed.”

[Illustration: keyframes and delta frames]

Being able to find and display each video frame correctly, regardless of whether it is a keyframe or not, is called frame accuracy. Without frame accuracy, you cannot start playback at the exact frame you want.
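To illustrate why keyframes matter for frame accuracy, here is a toy model (my own sketch, not any real decoder's API), assuming a fixed keyframe interval: landing exactly on an arbitrary frame means decoding forward from the nearest preceding keyframe.

```python
# Toy model: with a keyframe every `gop` frames, frame N can only be
# reconstructed by decoding from the last keyframe at or before N.
def last_keyframe(frame: int, gop: int) -> int:
    return (frame // gop) * gop

def decode_cost(frame: int, gop: int) -> int:
    # Frames that must be decoded to display `frame` accurately.
    return frame - last_keyframe(frame, gop) + 1

print(last_keyframe(250, 60))  # 240: nearest keyframe before frame 250
print(decode_cost(250, 60))    # 11 frames decoded to land exactly on frame 250
```

The longer the gap between keyframes, the cheaper the file is to store but the more work a frame-accurate seek requires.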

Conclusion

Understanding video and video formats is of course one of the most important aspects of running a media server. That is one of the reasons for digging into it at this level. And still, I could easily have gone even deeper. This article series should be seen as a guide on WHERE to START, rather than the journey’s end…  Next time, we will move on to images and still formats, digging into the realms of uncompressed playback!

As always, a big shout-out to the great people who commented on my post – making it very worthwhile to write these articles. Thank you very much!

Patrick Campbell, Ian McClain, Ola Fredenlund, Matt Ardine, Marek Papke, Eric Gazzillo, Axel Sundbotten, Joe Bleasdale, Parker Langvardt, Alex Mysterio Mueller, Christopher John Bolton, Andy Bates, David Gillett, Charlie Cooper, Tom Bass, Fred Lang, Nhoj Yelnif, Hugh Davies-Webb, Marcus Bayer, Arran Vj-Air, Manny Conde , Joel Adria, Alex Oliszewski, Ruben Laine, Jan Huewel, Majid Younis, Ernst Ziller, Marco Pastuovic, Geoffrey Platt, Ted Pallas, Dale Rehbein, Michael Kohler, Joe Dunkley, John Bulver, Jack Banks, Stuart McGowan, Todd Neville Scrutchfield