In recent years, USB3 Vision has become an industry standard thanks to its easy connectivity and wide bandwidth, reaching effective speeds of up to 350 MB/s. It is already spoken of as the successor to FireWire and USB 2.0.
But why has it been accepted and deployed as one of the reference protocols in machine vision?
A little history, and some best practices for using it…
USB 2.0 vs USB 3.0
USB 3.0 was released in November 2008, about eight years after USB 2.0. That is a long time to incorporate improvements, and the result did not disappoint:
- Transfer rate: USB 2.0 offers 480 Mb/s, while USB 3.0 reaches 5 Gb/s, roughly ten times faster data handling.
- Another physical bus: the number of wires increases from 4 in USB 2.0 to 9 in USB 3.0. The additional pins require more space in cables and connectors, which is why new connector types appeared.
- More power: USB 2.0 delivers up to 500 mA, while USB 3.0 provides up to 900 mA. USB 3.0 ports deliver more power, and do so more efficiently.
- Lower CPU load: USB 3.0 supports Direct Memory Access (DMA), which minimizes CPU load — a critical point in applications that require wide bandwidth.
- More bandwidth: USB 2.0 is half-duplex, while USB 3.0 uses two unidirectional data paths, one for transmitting and one for receiving.
- True plug-and-play connectivity: a USB 3.0 device signals immediately that it is ready for use.
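To put the raw-speed difference in practical terms, here is a rough sketch estimating the frame rate each bus could sustain. The camera resolution and the effective (post-overhead) throughput figures are illustrative assumptions, not values from any specific device:

```python
# Rough estimate of the frame rate each bus can sustain.
# Camera parameters below are hypothetical, for illustration only.
FRAME_BYTES = 1920 * 1200 * 1  # 8-bit monochrome image

# Usable payload bandwidth sits well below the raw signalling rate;
# these effective figures are commonly cited approximations.
USB2_EFFECTIVE_MBPS = 40    # ~40 MB/s usable out of 480 Mb/s raw
USB3_EFFECTIVE_MBPS = 350   # ~350 MB/s usable out of 5 Gb/s raw

def max_fps(effective_mb_per_s: float) -> float:
    """Frames per second the bus can move, ignoring bursty protocol overhead."""
    return effective_mb_per_s * 1_000_000 / FRAME_BYTES

print(f"USB 2.0: ~{max_fps(USB2_EFFECTIVE_MBPS):.0f} fps")   # ~17 fps
print(f"USB 3.0: ~{max_fps(USB3_EFFECTIVE_MBPS):.0f} fps")   # ~152 fps
```

Under these assumptions the jump from roughly 17 fps to roughly 150 fps at full resolution is what makes USB 3.0 viable for demanding inspection tasks.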
This interface coexists with other standards such as GigE Vision or CameraLink, and the choice between them depends on the requirements of the application. USB3 offers immediate plug-and-play connectivity and ample bandwidth, while GigE allows versatile configurations in multi-camera systems.
USB 3.0 or USB3 Vision?
Are we talking about the same thing? No, we are not.
USB3 Vision corresponds to the standard for machine vision. It is based on the USB 3.0 interface but subject to standardization to “eliminate any problems” in the interconnection of cameras and controllers.
As with the GigE interface, defined as part of the GigE Vision standard, USB3 Vision relies on the most common programming interface for industrial cameras today: GenICam.
USB3 Vision and GenICam provide the stability and minimal latency required during image transfer and camera control.
The standard was officially published in 2013 by the AIA (Automated Imaging Association) and provides the official standard for the USB 3.0 interface in machine vision.
Prior to the advent of the USB3 Vision standard, there was no standard for USB in the machine vision industry. Until then, several camera manufacturers launched their own developments and solutions based on USB 2.0. Normally this was not sufficient to ensure the stability of the complete solution at the levels required by an industrial application.
The USB3 Vision standard defines its own transport layers, adapted to the needs of machine vision: a Control layer, an Event layer for transferring events asynchronously, and a Data layer that ensures image data is transported quickly and reliably.
Connecting USB3 cameras to a PC or controller
Although USB 3.0 ports are backward compatible with USB 2.0 at the connector level, a USB 3.0 port is designed to provide 4.5 W (900 mA at 5 V), while most USB 2.0 ports deliver 2.5 W (500 mA at 5 V).
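A quick sanity check of the power budget, using the per-port figures above. The camera's power draw here is a hypothetical value; the real number comes from the camera's datasheet:

```python
# Power available per port (from the USB figures cited above).
USB2_PORT_W = 5.0 * 0.5   # 500 mA at 5 V = 2.5 W
USB3_PORT_W = 5.0 * 0.9   # 900 mA at 5 V = 4.5 W

# Hypothetical camera power draw in watts (check the datasheet).
camera_draw_w = 3.6

def port_can_power(port_watts: float, draw_watts: float,
                   margin: float = 0.9) -> bool:
    """True if the port covers the draw with a 10% safety margin."""
    return draw_watts <= port_watts * margin

print("USB 2.0 port sufficient:", port_can_power(USB2_PORT_W, camera_draw_w))
print("USB 3.0 port sufficient:", port_can_power(USB3_PORT_W, camera_draw_w))
```

For this example camera, a USB 2.0 port falls short while a USB 3.0 port powers it comfortably, which is one reason bus-powered industrial cameras only became practical with USB 3.0.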
There are variations according to the different motherboards used, the distribution of the components on the board or the chip used. These are points to take into account depending on the bandwidth requirements or the power consumed by our camera(s) for example.
Motherboards that offer multiple USB 3.0 ports often route them through internal hubs, saving manufacturing and component costs. For example, it is not uncommon to find boards with 8 USB 3.0 connectors but only one or two host controller chips.
This means that several connections share a single physical channel, with the bandwidth divided among the connected cameras.
While hubs (on-board or external) are a very economical solution, ideally the bandwidth of each camera should be managed individually to avoid frame loss or bus overload. A camera capable of delivering 350 MB/s is of little use if we do not ensure that the PC or controller can manage that data rate efficiently.
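The shared-controller problem comes down to simple arithmetic. A sketch, assuming a hypothetical setup of three cameras behind one hub and a single host controller with an effective budget of about 350 MB/s:

```python
# Per-camera streaming rates in MB/s (hypothetical configuration).
camera_rates_mb_s = [120, 120, 160]

# Approximate effective budget of one USB 3.0 host controller.
CONTROLLER_BUDGET_MB_S = 350

total = sum(camera_rates_mb_s)
print(f"Aggregate demand: {total} MB/s")

if total > CONTROLLER_BUDGET_MB_S:
    # Over budget: frames will be dropped unless rates are reduced,
    # e.g. via each camera's bandwidth/throughput limit feature.
    scale = CONTROLLER_BUDGET_MB_S / total
    throttled = [round(r * scale) for r in camera_rates_mb_s]
    print("Over budget; throttle to:", throttled, "MB/s per camera")
else:
    print("Within budget")
```

Here 400 MB/s of demand exceeds the 350 MB/s budget, so the cameras must be throttled — or, as recommended below, each camera gets its own dedicated controller.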
Therefore, the optimal solution is to use acquisition cards (frame grabbers) with a dedicated chip for each port: true USB 3.0 connectivity per port.
Among PCIe cards there are simple x1 designs, but these do not support the full bandwidth of USB 3.0.
Use cards with x4 lanes or more: a x4 slot provides about 1 GB/s in PCIe 1.0 (250 MB/s per lane), 500 MB/s per lane in PCIe 2.0, and 985 MB/s per lane in PCIe 3.0.
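The per-lane figures above can be turned into a quick slot-sizing check. A minimal sketch using those same approximate rates:

```python
# Approximate usable bandwidth per PCIe lane, in MB/s, by generation
# (figures as cited in the text above).
PCIE_LANE_MB_S = {"1.0": 250, "2.0": 500, "3.0": 985}

def slot_bandwidth(gen: str, lanes: int) -> int:
    """Total approximate usable bandwidth of a PCIe slot in MB/s."""
    return PCIE_LANE_MB_S[gen] * lanes

print("PCIe 1.0 x4:", slot_bandwidth("1.0", 4), "MB/s")  # ~1 GB/s
print("PCIe 2.0 x1:", slot_bandwidth("2.0", 1), "MB/s")
print("PCIe 3.0 x4:", slot_bandwidth("3.0", 4), "MB/s")
```

A PCIe 2.0 x1 link sits right at the raw USB 3.0 rate, leaving no headroom for protocol overhead — which is why x4 cards are the safe choice for multi-camera systems.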
USB 3.0 cabling
For all of the above reasons, special cabling is sometimes advisable and, in many other cases, a prerequisite for reliable operation.
The cameras and the controller cannot always be close together, and USB 3.0 limits passive cable runs to about 5 m. Beyond that length, extending the run with passive cables causes signal loss that significantly affects the operation of the system.
Many applications require longer cable lengths for device interconnection. There are different solutions for this, such as passive cables, active cables or fibre extenders.
As distances increase, manufacturers integrate silicon chips in the cable to maintain signal quality; these are known as active cables.
At 20 m or more, the reliable solution is no longer copper cabling but fibre-optic devices. A fibre extender consists of two transceivers that convert the electrical signal to an optical signal and back.
Another option to consider from the outset, if the application requires long distances between cameras and controller, is to use GigE Vision instead of USB3 Vision.
Author: Jaume Fontanella