Here's an overview of what I will cover in this instalment of the guide for budding media server professionals: cabled capture, network-based capture, latency, EDID, and synchronization.
There are many scenarios where you want to bring in content from external sources: a live video feed from a camera, a PowerPoint or Keynote presentation, an online video stream, and so on. Luckily, it's not rocket science to capture data from external sources into the media server.
Basically, you have two alternatives: cabled capture and networked (IP-based) capture.
A capture card is a physical device that you connect to (or that is built into) your media server for capturing video signals from external devices. Capture cards support a wide range of display standards, such as HDMI, DVI, 3G-SDI and DisplayPort. The card can be designed to be built into the media server (an internal capture card) or connected through USB (an external capture card). Internal cards are typically PCI Express cards, such as the Blackmagic Design Intensity Pro 4K. An example of a USB capture card is the Epiphan AV.io 4K, an HDMI to USB capture card.
With any of these solutions, you need a cable running from the source (where your video originates) to the capture card.
If the distance from the source to the media server is very long, the alternative is to use network-based capture.
In previous blog posts, I have written about both NewTek's NDI® (Network Device Interface) and other standards. To facilitate NDI capture, the source either has to support NDI directly or you need to convert the output from the source to NDI. This conversion is easily managed through devices such as BirdDog's Mini, where the HDMI output from the source is converted to NDI and distributed on the network.
The media server needs to support NDI, of course, to be able to capture the NDI stream directly in the media server.
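To give you an idea of what this looks like on the software side, here's a minimal sketch of receiving an NDI stream using the NDI SDK's C API, as I understand it. It finds the first source on the network and grabs a single video frame; the timeouts are arbitrary and error handling is trimmed for brevity, so treat it as a sketch rather than production code.

```c
/* Minimal NDI capture sketch using NewTek's NDI SDK (C API).
   Assumes the SDK headers and library are installed. */
#include <stdio.h>
#include <Processing.NDI.Lib.h>

int main(void)
{
    if (!NDIlib_initialize())
        return 1; /* library missing or CPU not supported */

    /* Discover NDI sources on the network (wait up to 5 seconds). */
    NDIlib_find_instance_t finder = NDIlib_find_create_v2(NULL);
    NDIlib_find_wait_for_sources(finder, 5000);

    uint32_t num_sources = 0;
    const NDIlib_source_t *sources =
        NDIlib_find_get_current_sources(finder, &num_sources);
    if (num_sources == 0)
        return 1;
    printf("Connecting to %s\n", sources[0].p_ndi_name);

    /* Create a receiver and connect it to the first source found. */
    NDIlib_recv_instance_t receiver = NDIlib_recv_create_v3(NULL);
    NDIlib_recv_connect(receiver, &sources[0]);

    /* Capture one video frame (wait up to 1000 ms). */
    NDIlib_video_frame_v2_t frame;
    if (NDIlib_recv_capture_v2(receiver, &frame, NULL, NULL, 1000)
            == NDIlib_frame_type_video) {
        printf("Got a %dx%d frame\n", frame.xres, frame.yres);
        NDIlib_recv_free_video_v2(receiver, &frame);
    }

    NDIlib_recv_destroy(receiver);
    NDIlib_find_destroy(finder);
    NDIlib_destroy();
    return 0;
}
```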
Beware! Many simultaneous video streams will have a severe impact on the network, and if you run on a 1Gb network you might experience issues with just a handful of streams. Calculate at least 150 Mbps per stream to secure some overhead.
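A quick back-of-the-envelope check makes the point. The 150 Mbps per-stream figure is the rule of thumb above; the 75% usable-capacity factor is my own conservative assumption, to leave headroom for other traffic:

```c
/* Back-of-the-envelope NDI bandwidth check. */
#include <stdio.h>

int main(void)
{
    const double link_mbps       = 1000.0; /* 1Gb network */
    const double per_stream_mbps = 150.0;  /* rule of thumb incl. overhead */
    const double usable_fraction = 0.75;   /* assumption: leave headroom */

    int max_streams = (int)((link_mbps * usable_fraction) / per_stream_mbps);
    printf("A %.0f Mbps link supports roughly %d simultaneous streams\n",
           link_mbps, max_streams);
    return 0;
}
```

That works out to roughly five streams on a 1Gb link, which is exactly the "handful" I warned about.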
Generally speaking, latency is the time delay between cause and effect – in this case, from the moment the source outputs a video stream to the moment the media server manages to output the captured stream.
Latency is added at several points: by the network, by the capture card (converting the physical, cabled signal into data the media server's software can use), and by the media server's own capability to handle captured sources.
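Here's a toy latency budget to illustrate the principle. The individual numbers are made up for the example, but the arithmetic is the point: the delays add up, and it's useful to express the total in frames at your output rate.

```c
/* Illustrative latency budget; all figures are hypothetical. */
#include <stdio.h>

int main(void)
{
    const double network_ms = 33.0; /* e.g. network transport delay */
    const double capture_ms = 16.0; /* capture/conversion stage */
    const double server_ms  = 33.0; /* media server processing and output */
    const double fps        = 60.0;

    double total_ms = network_ms + capture_ms + server_ms;
    printf("Total latency: %.0f ms = %.1f frames at %.0f fps\n",
           total_ms, total_ms / (1000.0 / fps), fps);
    return 0;
}
```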
In many cases, latency is a deal-breaker. Imagine yourself at a concert with your favorite artist. It's a large venue and there are numerous screens showing video from the stage. What if the video is out of sync with the audio by a few seconds? For some applications, even a few milliseconds of latency can be a challenge. Imagine sitting in a Formula 1 simulator where it takes half a second between turning the wheel and the car's reaction…
In general, built-in capture cards will have the lowest latency, followed by USB capture and then network capture. I say "in general" because the quality of network video has improved tremendously, and there are big differences between products. Even more generally speaking – you get what you pay for.
When you have a physically connected source, you need to make sure that the media server can both detect and set up the proper resolution of the source. If your source outputs 1920 x 1080, it would be great if the media server read the signal as 1920 x 1080, too.
Enter EDID (Extended Display Identification Data). Mr & Mrs Trouble combined.
EDID and EDID management are synonymous with high blood pressure and sleepless nights for those who have been around for a while. A media server system I ran in one of my jobs constantly lost the resolution of a source (which never changed), and the only solution was to reboot the media server. In the middle of a live event, the media server could suddenly rescale the image because it believed the resolution had changed. Not very smooth. Not very professional. The solution was an external EDID device that removed the problem. More on EDID in this blog: I got 99 problems but EDID ain't one.
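To demystify EDID a little: it's just a small binary block (128 bytes in the base version) that the display, or an EDID management device, presents to the source. Here's a sketch of pulling the preferred resolution out of a raw EDID block, based on the standard layout where the first detailed timing descriptor (bytes 54-71) normally describes the preferred mode. The buffer here is a placeholder; in a real tool you'd read it from the device.

```c
/* Sketch: read the preferred resolution from a raw 128-byte EDID block. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Placeholder buffer; a real tool would read this from the
       display or capture device. */
    uint8_t edid[128] = {0};

    /* Every EDID base block starts with this fixed 8-byte header. */
    const uint8_t header[8] = {0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00};
    for (int i = 0; i < 8; i++)
        if (edid[i] != header[i]) {
            fprintf(stderr, "Not a valid EDID block\n");
            return 1;
        }

    /* First detailed timing descriptor: active pixels are split
       between low bytes and the upper nibbles of bytes 4 and 7. */
    const uint8_t *dtd = &edid[54];
    int h_active = dtd[2] | ((dtd[4] & 0xF0) << 4);
    int v_active = dtd[5] | ((dtd[7] & 0xF0) << 4);
    printf("Preferred mode: %d x %d\n", h_active, v_active);
    return 0;
}
```

When the media server and the source disagree about this little block, you get exactly the mid-show rescaling misery I described above, which is why dedicated EDID devices earn their keep.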
In some cases, you need to make sure that the source and/or the output are synchronized. And this is when you need to know these three terms and how they can help you succeed.
There's a lengthy piece on the topic already in this blog, so if you need to learn about this – here goes: Genlock, framelock, timecode sync – when do I need them?
This is the 10th instalment in this guide and we're nearing the end. There are more articles in the pipeline, but I would really appreciate your feedback on the content so far. Most of all, I'd like to know: are there any topics you would like me to cover? If so, drop me a line in the comments field below!
Check out the other blogs in this series: