There’s now a simple solution for overcoming the incompatibilities between the USB3 I/O channels commonly used in today’s PCs and the digital media interfaces, such as the High-Definition Multimedia Interface (HDMI) and the Serial Digital Interface (SDI), favoured by audio/video equipment.
Until now, differences in their frame formats and signaling mechanisms made it difficult for HD video cameras, image sensors and other AV sources to exchange their uncompressed HD audio/video content across a USB3 connection.
Finding a simpler way to move HD video across a USB connection has become increasingly important as PCs become an integral part of a growing number of industrial, commercial and consumer applications, including machine vision systems (USB3 video and vision cameras), video production products (video converters, video capture devices), digital signage, surveillance, and even mobile consumer gadgets.

This article introduces an encapsulation technique which provides a transparent pipeline for nearly any multimedia stream within a USB channel, and shows how it can be implemented using a new breed of small, inexpensive, low-power FPGAs.
We’ll be referencing the USB3 video bridge reference design as a practical example which addresses the needs of many PC-based applications in consumer, video broadcast, machine vision, and surveillance systems.
Although USB3 technology finally gives PCs the I/O bandwidth they need to support uncompressed HD video/audio streams, it has had difficulty realizing its full potential in media processing applications because USB connections aren’t available on most commonly-used video and multimedia equipment. For example, HDMI interfaces are the de facto standard for consumer video products, most professional video equipment relies on SDI ports, and the image sensors used in industrial and scientific applications frequently use either sub-LVDS or MIPI CSI-2 interfaces. While it’s relatively easy to convert the different electrical signals used by the PHY layer of each interface into data streams which can be processed by standard digital logic, their frame structures and signaling protocols are radically different.
Traditionally, the best solution to this “digital disconnect” between the PC’s USB ports and AV equipment’s SDI/HDMI connections was to work around it.
This usually involved a frame grabber card which could capture the video data, encapsulate it within PCI Express data packets and move it directly onto the host system’s data bus. Although it works well, a frame grabber also adds enough cost and complexity to a design that it’s unsuitable for many applications.
The digital disconnect can also be spanned with a packet bridge which performs real-time format conversions between an AV device’s output and the host system’s USB3 interface.
These bridges provide a simpler, more flexible, cost-effective alternative to frame grabbers by employing one or more of the bridging mechanisms defined in the USB standard for encapsulating multimedia streams within USB packets and reassembling outgoing streams back into their native format.
Webcams and other USB media devices already have these encapsulation mechanisms built into their electronics, but there are few, if any, off-the-shelf ICs capable of bridging HD video at USB3 speed.
The USB standard already includes well-defined bridging protocols for HDMI, SDI, and almost every other commonly-used digital media interface. The most commonly-used of these is the USB Video Class (UVC) which defines the characteristics of USB-connected video devices like webcams, digital camcorders, transcoders and analog video converters.
UVC supports most common frame- and stream-based video formats, including frame-based standards such as MJPEG and DV used by most of the applications described in this article.
The applications described in this article typically transmit their frames as uncompressed YUV (YUY2, NV12) data with a separate out-of-band channel for the frame/sample boundary information. While not directly applicable here, it should be noted that UVC also supports compressed stream-based video formats, including MPEG-2 TS, MPEG-2 PS and MPEG-1, as well as temporally encoded video formats like H.264 and VP8.
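For scale, the arithmetic behind these uncompressed formats can be sketched as follows (the 1080p60 resolution and frame rate here are illustrative figures, not taken from the article):

```python
# Back-of-the-envelope bandwidth check for uncompressed YUY2 video,
# which carries 2 bytes per pixel (4:2:2 chroma subsampling).
# 1080p60 is an assumed example format.
WIDTH, HEIGHT, FPS = 1920, 1080, 60
BYTES_PER_PIXEL = 2  # YUY2: one Y sample plus an alternating U/V sample

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
stream_gbps = frame_bytes * FPS * 8 / 1e9

print(f"{frame_bytes} bytes/frame, {stream_gbps:.2f} Gbit/s")
```

At roughly 2 Gbit/s sustained, a stream like this is far beyond USB 2.0’s 480 Mbit/s signaling rate but fits comfortably within a USB 3.0 link, which is the point the article is making.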
In PC-based applications, UVC provides a method for encapsulating video data and control signals from a camera or other source within standard USB packets for transport over the USB3 link to the host PC.
The host PC then replays the received video data using a standard Microsoft UVC driver and a compatible application. Stand-alone audio streams (not associated with a video stream) are accommodated in a similar way using the USB Audio Class (UAC) format.
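The encapsulation step can be sketched in a few lines. The bit assignments in `bmHeaderInfo` below follow the UVC payload header defined in the USB Video Class specification (FID, EOF, PTS, EOH); the framing logic around them is a simplified illustration, not the reference design’s actual implementation:

```python
import struct

# UVC payload header flag bits (bmHeaderInfo, per the UVC class spec):
FID = 0x01  # frame identifier, toggles every frame
EOF = 0x02  # set on the last payload of a frame
PTS = 0x04  # a 32-bit presentation timestamp follows
EOH = 0x80  # end-of-header marker

def uvc_payload(data, frame_id, end_of_frame, pts=None):
    """Prepend a UVC payload header to one chunk of video data."""
    info = EOH | (FID if frame_id & 1 else 0) | (EOF if end_of_frame else 0)
    extra = b""
    if pts is not None:
        info |= PTS
        extra = struct.pack("<I", pts)   # little-endian 32-bit timestamp
    length = 2 + len(extra)              # bHeaderLength counts itself
    return struct.pack("<BB", length, info) + extra + data

# A small hypothetical chunk of YUY2 data, flagged as ending frame 1:
pkt = uvc_payload(b"\x10\x80" * 4, frame_id=1, end_of_frame=True, pts=0x1234)
```

On the host side, the standard UVC driver walks these headers to reassemble payloads back into complete frames, using the FID toggle and EOF bit as the frame boundaries.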
Ideally, a USB video bridge should be able to convert streams from several video interfaces (HDMI, tri-rate SDI, CMOS sensor serial, MIPI CSI-2, etc.) and sensor outputs into standard UVC packets and transfer the data packets over the USB3 link to the host PC. In addition to supporting video streams, the bridge should also act as an interworking conduit for multi-channel I2S UAC-compliant audio content.
To accomplish these tasks, the bridge must perform two important functions:
1) Electrical Conversion – The electrical signals from the incoming media source must be recovered, synchronized and converted to a serial bit stream which can be processed and encapsulated within USB packets. The bridge must be able to accept inputs from HDMI, MIPI, and SDI A/V interfaces and support a USB3 MAC/PHY host system connection.
Some FPGAs include the analog circuitry required to support the PHY-layer transmit/receive functions of one or two commonly-used interfaces; the remainder can be implemented with the addition of readily-available commercial devices. The next section will provide details on how this is done.
2) Format conversion – The data and control streams associated with the HDMI, MIPI, and SDI interfaces must be encapsulated in the UVC format and transmitted to the host system via a USB3 bus transceiver. In addition, the bridge must provide buffering between its video input sources, which arrive in high-speed bursts, and the relatively steady transfer rate of the USB3 interface.
To conserve bandwidth, the video’s blanking interval is dropped before conversion so only active video data are transmitted in UVC packets. Audio data not associated with a video stream is transferred via USB’s isochronous mode.
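The saving from dropping the blanking interval is easy to quantify. The sketch below uses the 1080p raster timing from SMPTE ST 274 (2200 x 1125 total samples versus 1920 x 1080 active) as an illustrative case; the slicing function is a simplification of what the bridge does in hardware:

```python
# Illustration of discarding the blanking interval: only the active
# region of each active line is forwarded toward the UVC packetizer.
# Raster figures are for 1080p per SMPTE ST 274 (assumed example).
TOTAL_W, TOTAL_H = 2200, 1125    # active + horizontal/vertical blanking
ACTIVE_W, ACTIVE_H = 1920, 1080  # active picture area

def strip_blanking(raster_lines):
    """Keep only the active pixels of the active lines."""
    return [line[:ACTIVE_W] for line in raster_lines[:ACTIVE_H]]

savings = 1 - (ACTIVE_W * ACTIVE_H) / (TOTAL_W * TOTAL_H)
print(f"blanking discarded: {savings:.0%} of raw raster bandwidth")
```

For this raster, roughly 16% of the raw pixel-clock bandwidth is blanking, which the bridge never has to push across the USB3 link.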
The bridge must also be able to process the outputs of commonly-used image sensors into a format which can be handled by the USB interface. This includes serial outputs such as the MIPI CSI-2 interface used in most mobile applications and the sub-LVDS format used by Panasonic and Sony image sensors.
It also needs to accept the parallel outputs of digital signal processors (DSPs) and other devices used to control and process the outputs of the advanced image sensors used in scientific and industrial applications.
In order to deliver the necessary performance, configurability, solution cost and time-to-market, this multi-interface bridge uses an FPGA to implement as much of its functionality as possible, with only a handful of external components to support the MAC/PHY portions of the interfaces.
A member of Lattice’s ECP3 FPGA family (LFE3-17EA) was selected for this design because it provides the necessary resources, such as high-performance I/O, speed and logic capacity, and contains several hardware cores which implement key functions used in this application. These pre-integrated building blocks, such as multi-lane SERDES transceivers and an SDI PHY, simplify the design and further reduce its total solution cost.
The resulting solution supports high speed reception and packing of video and audio data into USB 3.0 UVC and UAC data frames without the use of external memory buffers.
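A rough sizing argument suggests why internal block RAM can suffice: the elastic buffering between the bursty pixel-clock input and the steady USB3 drain only needs to span a few video lines, not whole frames. The figures below (YUY2 line width, a four-line FIFO) are assumptions for illustration, not values from the reference design:

```python
# Rough sizing of the internal elastic buffering (assumed figures):
# input pixels arrive in line bursts at the source's pixel clock while
# the USB3 side drains at a steady rate, so a few lines of FIFO absorb
# the rate difference without external memory.
LINE_BYTES = 1920 * 2   # one 1080p line in YUY2 (2 bytes/pixel)
FIFO_LINES = 4          # assumed elasticity, in video lines

fifo_bits = LINE_BYTES * FIFO_LINES * 8
print(f"{fifo_bits // 1024} Kbit of block RAM")
```

At around 120 Kbit for this assumed depth, the buffer fits comfortably within the embedded memory of a small FPGA, consistent with the article’s claim that no external memory buffers are needed.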