Synchronizing multiple cameras: the problem and a solution
According to feedback from our customers, multi-camera broadcasts are rapidly gaining popularity. Where one or at most two cameras used to be enough, we now see many requests for three or more. This is especially important for sports streams, where multi-angle shooting lets you show the audience the most interesting moments. This applies not only to major sporting events, but also to local competitions, for example at colleges and universities.
It is worth noting that cameras that transmit their signal over an SDI cable can be hard to place where they are needed for a good angle. Therefore, wireless cameras, or cameras paired with wireless encoders, are increasingly used for multi-camera shooting. This is very convenient: the operator can change position without being tethered to an SDI cable. In this setup, signals from different parts of the field are transmitted over Wi-Fi to the central computer of the event director.
But there is a problem: due to the nature of Wi-Fi, signals from different cameras can arrive with different delays. This means, for example, that on one camera the director sees the ball approaching the goal, while on another the ball is already in the net. This complicates matching the pictures and quickly switching angles. Synchronizing video streams from several wireless sources is therefore highly relevant for modern online broadcasts, and for what is now called remote video production.
Wi-Fi signals arrive with different delays
Synchronization features are present in high-end hardware decoders. We, in turn, have added this functionality to our software decoder, SRTMiniServer. At the same time, the number of channels our product can synchronize is significantly larger than in comparable hardware solutions.
Let us briefly describe the main methods of video stream synchronization.
Preliminary stage
Before broadcasting, all cameras must be synchronized against a common reference clock. The NTP protocol is used for this. After this stage, you can be sure that the clocks on all devices show the same time to the nearest millisecond. The process must be repeated periodically, because the internal clocks of the devices sooner or later begin to drift apart.
Checking the clock against an NTP server
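To illustrate, here is a minimal sketch of such a clock check in Python, using the third-party ntplib package. The server address and the tolerance threshold are assumptions for the example, not values our products require:

```python
# Minimal sketch: measure the local clock offset against an NTP server.
# Requires the third-party package: pip install ntplib
import ntplib

MAX_OFFSET_S = 0.001  # assumed tolerance for this example: 1 ms

def clock_offset(server: str = "pool.ntp.org") -> float:
    """Return the local clock offset in seconds relative to the server."""
    client = ntplib.NTPClient()
    response = client.request(server, version=3)
    return response.offset  # positive: local clock is behind the server

if __name__ == "__main__":
    offset = clock_offset()
    print(f"Clock offset: {offset * 1000:.2f} ms")
    if abs(offset) > MAX_OFFSET_S:
        print("Clock has drifted; re-synchronization is needed")
```

A device would run such a check periodically, since even freshly synchronized clocks drift apart over time.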
After “aligning” the clocks, the cameras can embed timing information (timecode) into the transmitted signal. Let us consider the two main ways this can be done.
Stitching into the stream
As you know, H.264 is currently the most common video codec. It allows meta-information about each frame to be carried inside the stream itself, in a special section called SEI (Supplemental Enhancement Information). In particular, there is a standard SEI message for transmitting timecode, including the current time and frame number.
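To make this concrete, here is a simplified sketch of scanning an H.264 Annex B byte stream for SEI NAL units and reading each message's payload type. It is an illustration only: among other things, it skips emulation-prevention-byte removal, which a real parser must handle, and the file name is hypothetical.

```python
# Simplified sketch: locate SEI NAL units (type 6) in an H.264 Annex B
# stream and read the payload type of the first SEI message in each.
# Note: a real parser must also strip emulation-prevention bytes.

def iter_nal_units(data: bytes):
    """Yield NAL unit payloads split on Annex B start codes (00 00 01)."""
    starts, i = [], 0
    while (i := data.find(b"\x00\x00\x01", i)) >= 0:
        starts.append(i + 3)
        i += 3
    for s, e in zip(starts, starts[1:] + [len(data)]):
        yield data[s:e]

def first_sei_payload_type(nal: bytes):
    """Return the first SEI message's payload type, or None for non-SEI."""
    if not nal or (nal[0] & 0x1F) != 6:  # nal_unit_type 6 == SEI in H.264
        return None
    payload_type, pos = 0, 1
    while pos < len(nal) and nal[pos] == 0xFF:  # 0xFF bytes extend the type
        payload_type += 255
        pos += 1
    return payload_type + nal[pos] if pos < len(nal) else None

with open("stream.h264", "rb") as f:  # hypothetical raw capture
    for nal in iter_nal_units(f.read()):
        ptype = first_sei_payload_type(nal)
        if ptype is not None:
            print("SEI message, payload type:", ptype)  # 1 == pic_timing
```

In H.264, timing information travels in the pic_timing SEI message (payload type 1); HEVC additionally defines a dedicated time_code SEI message.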
This method of transmitting timecode has its advantages and disadvantages.
Advantages:
it is a standard (written into the H.264/HEVC specifications), so in theory any manufacturer can implement it
Disadvantages:
the method only applies to the H.264/HEVC codec family
not all manufacturers of streaming cameras and encoders support it
there is no way to generate it programmatically on iOS and Android
the timecode is lost after the next encode-decode cycle
In practice, very few manufacturers implement this method of timecode transmission. JVC and Magewell are the ones we know of. Whether cameras and encoders from other manufacturers transmit timecode inside SEI is questionable.
Together with our colleagues, we tested our SRTMiniServer with the following JVC cameras and Magewell encoders:
JVC GY-HC900
JVC GY-HC550
JVC GY-HC500
Magewell Ultra Encode
Stitching into the frame
This method overlays the timecode directly onto the frame as a black-and-white stripe (a code sketch of such an overlay follows the list below). It looks like this:
Advantages of this method:
it is an old technique (it dates back to analog video timecodes such as VITC)
it is easy to implement programmatically, in particular on iOS and Android
it does not depend on the video codec
it survives multiple encode-decode cycles
Disadvantages:
the black-and-white stripe takes up a small amount of useful space in the frame, usually about 10 px at the top of the frame, as shown in the image above.
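Here is a minimal sketch of how such a stripe could be painted onto and read back from a frame. The bit layout, block width, and 48-bit millisecond timestamp are our assumptions for illustration, not the exact format used by any particular camera or by our products:

```python
# Minimal sketch: encode a millisecond timestamp as black/white blocks
# in a 10 px stripe at the top of the frame, and read it back.
import numpy as np  # frame is an H x W x 3 uint8 array

STRIPE_H = 10   # stripe height in pixels, as in the image above
BITS = 48       # enough bits for a Unix timestamp in milliseconds
BLOCK_W = 16    # assumed width of one bit block, in pixels

def overlay_timecode(frame: np.ndarray, timestamp_ms: int) -> np.ndarray:
    """Draw timestamp_ms, most significant bit first, across the top."""
    out = frame.copy()
    for bit in range(BITS):
        value = 255 if (timestamp_ms >> (BITS - 1 - bit)) & 1 else 0
        x0 = bit * BLOCK_W
        out[:STRIPE_H, x0:x0 + BLOCK_W] = value  # solid white or black
    return out

def read_timecode(frame: np.ndarray) -> int:
    """Recover the timestamp by sampling the center of each block."""
    ts = 0
    for bit in range(BITS):
        x = bit * BLOCK_W + BLOCK_W // 2
        sample = int(frame[STRIPE_H // 2, x].mean())  # average over RGB
        ts = (ts << 1) | (1 if sample > 127 else 0)
    return ts
```

Because each bit is a solid block rather than a single pixel, the stripe tolerates lossy compression, which is why the timecode survives repeated encode-decode cycles.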
About synchronization accuracy
According to the test results sent by our colleagues from JVC and other companies, the synchronization function in our software product performs at the level of high-end hardware solutions. In particular, it provides the same synchronization accuracy: 1-3 frames (at 30 fps, roughly 33-100 ms). At the same time, unlike hardware solutions, where the number of channels is rigidly fixed, our SRTMiniServer has no such limitation.
Pseudo-synchronization
It is also worth mentioning pseudo-synchronization. This method works with both the SRT and RTMP protocols and is used when the encoder does not support timecode transmission, for example, when you use multiple GoPro cameras or the LarixBroadcaster streaming app.
Here the principle of “keep the stream as close to real time as possible” is applied: frames that arrive too late are dropped, so all streams stay “close” to each other.
On a stable local network, the discrepancy between streams can stay within 200-300 ms over several hours, and the number of dropped frames is not critical.
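As a rough sketch of this principle (the class and names below are ours for illustration and do not reflect SRTMiniServer internals): peg the stream's first frame to the wall clock, then drop any frame whose scheduled presentation time has already passed by more than the maximum buffer.

```python
# Rough sketch of "keep the stream close to real time by dropping
# late frames". pts is the frame's presentation time in seconds,
# relative to the start of the stream.
import time

class LiveEdgeBuffer:
    def __init__(self, max_buffer_s: float = 0.300):  # e.g. 300 ms
        self.max_buffer_s = max_buffer_s
        self.epoch = None      # wall-clock time mapped to pts == 0
        self.dropped = 0

    def accept(self, pts: float) -> bool:
        """Return True to present the frame, False to drop it."""
        now = time.monotonic()
        if self.epoch is None:
            self.epoch = now - pts          # peg stream time to wall clock
        lateness = (now - self.epoch) - pts
        if lateness > self.max_buffer_s:    # frame missed its time slot
            self.dropped += 1
            return False
        return True
```

Applying the same policy to every incoming stream keeps them all near the live edge, and therefore near each other, without any shared timecode.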
Advantages:
works with any SRT or RTMP encoder
Disadvantages:
uncontrolled inaccuracy: it might be 100 ms or it might be 400 ms, and an accuracy of 2-3 frames is practically unattainable
the method works well only on a local network
the method involves dropping “late” frames
In our SRTMiniServer (and RTMPMiniServer), this method is enabled by setting the Max buffer parameter. We recommend a value of 300 ms for this method.
Options for pseudo-synchronization
Conclusion
Ever since our days of working with the RTMP protocol, we have received requests from users to synchronize several video sources. With the advent of the SRT protocol, we solved this problem and implemented the methods described above in our SRTMiniServer. We look forward to your feedback on using this feature in your broadcasts.