Cameras are one of the most vital sensors in advanced driver assistance system (ADAS) solutions. They gather extensive information on the vehicle’s surroundings: context, players, obstacles and infrastructure can all be accurately perceived thanks to them.
At OTIV, we’re using cameras as part of our sensor suites to ensure that rail vehicles can achieve their full potential. With cameras, current rail vehicles can be upgraded to become smarter and safer. Cameras enable faster-than-human detection of obstacles and dangerous situations without ever being distracted. They also enhance visibility by covering blind spots, and perceive traffic lights and other road signs, a must in complex urban environments.
Cameras: a well-known ally
The world is no stranger to using cameras in assistance technology. They are widespread in the automotive industry and a key element in increasing road user safety, being used to monitor blind spots, assist with parking and prevent collisions. For example, with rear and other blind spot cameras, the driver’s awareness of the vehicle’s surroundings has been greatly enhanced.
However, and curiously enough, this technology that has become commonplace in the automotive industry has yet to be applied to rail vehicles. As a consequence, cars have become smarter, while rail vehicles have been left behind.
At OTIV, we strive to change this.
Zoom in: the right camera and its settings
Our ADAS solution, OTIV.TWO, uses five cameras, along with high-resolution LiDAR and GNSS sensors, to revolutionize the rail industry: with them, we can perceive all road signs, users and obstacles, as well as determine the precise localization of our vehicle. When fused with LiDAR data, cameras are used to estimate the future ego path — or rail trajectory — of the vehicle: the output is fed as redundant information to our localization module, thus increasing its accuracy.
In order to do all this, we want to extract as much information as possible from these sensors. We integrate the best camera models on the market, and customize their settings to achieve high performance in our safety systems. Many features are considered when choosing and configuring a camera: let’s present a few of them.
Since the ego vehicle, and all other players on the road, are in motion, anticipation is key: high-resolution images are important for computer vision (CV) algorithms to detect small/distant objects and road signs. Also, we want our safety solutions to be aware of the vehicle’s surroundings at all times. Thus, cameras must generate outputs at elevated frame rates and feed a continuous stream of data to our perception stack.
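To make the link between frame rate and anticipation concrete, here is a minimal sketch of how far a vehicle travels between two consecutive frames. The speeds and frame rates used are illustrative examples, not OTIV specifications:

```python
# Sketch: distance travelled between consecutive camera frames.
# Speed and frame-rate values below are illustrative assumptions.

def interframe_distance_m(speed_kmh: float, fps: float) -> float:
    """Metres the vehicle travels between two consecutive frames."""
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return speed_ms / fps

# A tram at 50 km/h observed at 10 fps vs. 30 fps:
print(round(interframe_distance_m(50, 10), 2))  # 1.39 m between frames
print(round(interframe_distance_m(50, 30), 2))  # 0.46 m between frames
```

At a low frame rate, more than a metre can elapse between observations, which illustrates why a fast, continuous stream of frames matters for timely obstacle detection.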
Another fundamental element to master is the exposure, which defines how the camera adapts to the environmental brightness. Its importance lies in the fact that a bad configuration, like underexposure or overexposure, can induce a loss of detail in the resulting images, thus negatively impacting the performance of CV modules.
Depending on the application, exposure can be fixed, or automatically adjusted in real time. The former setting allows the use of a predefined configuration, ideal for one specific function, e.g. guaranteeing that the colors of traffic lights are consistent and true to life in all situations, regardless of other sources of luminosity. The latter allows the camera to find the configuration offering the best overall balance for each situation, useful in environments where brightness levels often vary, e.g. for vehicles regularly driving both on open roads and through tunnels.
Exposure consists of three separate settings, all influencing the ability to adapt to brightness levels in a different way:
- The aperture, also referred to as the f-stop number, is part of the camera lens specifications. It defines how much light passes through the lens onto the camera sensor. Since these sensors can be affected differently by the aperture, camera manufacturers often provide advice for the ideal lens pairing.
- The ISO indicates the sensor’s level of sensitivity to light. Increasing it improves visibility in low-light situations, e.g. at night, but also induces noise in the images: the correct balance must be reached.
- The shutter speed, or exposure time, defines how long the shutter remains open for each frame capture. Expressed in milliseconds, the greater its value, the longer the shutter remains open and the more light reaches the sensor. A long exposure time improves visibility in low-light situations, but also increases the sensitivity to motion, resulting in a phenomenon called “motion blur” on the images, which is to be avoided. Yet, an excessively short exposure time is no better: since traffic lights emit light pulses at their own frequency, such a configuration can result in capturing frames while the road sign is not transmitting any light, making it appear “off” in the image. This — briefly explained — LED flickering issue is a challenge well known in the machine vision community, and another reason why properly configuring the camera shutter speed is essential for an ADAS application.
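The two shutter-speed trade-offs above can be sketched numerically. In this hedged example, the focal length, speeds and LED flicker frequency are illustrative assumptions, not figures from OTIV’s cameras; the LED bound uses the common rule of thumb of exposing for at least one full flicker period:

```python
# Sketch of the two shutter-speed trade-offs: motion blur vs. LED flicker.
# All numeric values (focal length, speeds, flicker frequency) are
# illustrative assumptions.

def motion_blur_px(rel_speed_ms: float, exposure_s: float,
                   focal_px: float, distance_m: float) -> float:
    """Blur streak length, in pixels, of an object moving laterally
    (simple pinhole projection: image motion = world motion * f / d)."""
    return rel_speed_ms * exposure_s * focal_px / distance_m

def min_exposure_for_led_s(flicker_hz: float) -> float:
    """Rule of thumb: expose at least one full flicker period so a pulsed
    LED (e.g. a traffic light) is never captured entirely 'off'."""
    return 1.0 / flicker_hz

# A car crossing at 10 m/s, 20 m away, 1000 px focal length, 5 ms exposure:
print(round(motion_blur_px(10, 0.005, 1000, 20), 1))  # 2.5 px of blur
# An LED traffic light pulsing at 90 Hz:
print(round(min_exposure_for_led_s(90) * 1000, 1))    # 11.1 ms minimum
```

The exposure time must therefore sit in a window: long enough to bridge the LED’s off-phases, yet short enough to keep motion blur within what the CV algorithms can tolerate.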
Finally, maybe the most essential characteristic of the cameras embedded in our solutions is their robustness: staying functional in all conditions. Vibrations, temperature changes, shocks, moisture and dust — none of these should stop the cameras from doing their job. They must be reliable 24/7 to truly increase the safety and efficiency of operations.
For OTIV.TWO to be the best-in-class railway ADAS solution, we considered all the elements above — and more — when integrating the cameras in our sensor suite. Typically, most configurations are taken care of by the camera’s image signal processor (ISP). This component’s main role is to transform the raw data coming from the sensor into an image whose pixel format is understandable by the CV algorithms. The ISP is also responsible for configuring parts of the sensor’s behavior. It was fundamental for us to keep the ability to customize these settings. Tailored camera configurations allow us to achieve the best perception results, and ensure the success of our system.
Our ADAS solution uses different types of sensors to make rail vehicles smarter, meaning the cameras must smoothly interact with all other hardware. This is where the frame grabber comes in: this embedded FPGA (Field-Programmable Gate Array) serves as an interface between OTIV’s computing unit and the five cameras.
The frame grabber fulfills several functions. First, we use it to configure our cameras: this includes some of the settings mentioned before, but also the implementation of triggers and the synchronization of the cameras, to name a few. Then, it is used during runtime to provide power to the cameras and to deserialize — or decode — all received GMSL2 data messages, making the images available for our perception stack. These bidirectional data exchanges are made possible by the FAKRA coaxial cables linking the cameras to the frame grabber. Such a setup gives us a seamless data stream, from the cameras to the OTIV CV algorithms.
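A quick back-of-the-envelope check illustrates why the serializer link matters in such a setup: the camera’s raw output must fit within the link’s capacity. All figures below (resolution, bit depth, frame rate, and the nominal ~6 Gbit/s GMSL2 forward rate) are illustrative assumptions, and the calculation ignores protocol overhead:

```python
# Sketch: does a camera's uncompressed output fit on the serializer link?
# Resolution, bit depth, frame rate and link rate are assumed values.

def raw_bitrate_gbps(width: int, height: int, bits_per_px: int,
                     fps: float) -> float:
    """Uncompressed sensor output in Gbit/s (payload only, no overhead)."""
    return width * height * bits_per_px * fps / 1e9

LINK_GBPS = 6.0  # nominal GMSL2 forward-channel rate (assumption)

rate = raw_bitrate_gbps(1920, 1080, 12, 30)  # 1080p, 12-bit raw, 30 fps
print(round(rate, 2), rate < LINK_GBPS)  # 0.75 True: fits comfortably
```

The same arithmetic scales to higher resolutions and frame rates, which is where such a link budget starts to constrain the choice of sensor settings.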
At OTIV, much progress has been made since work first started on railway assistance systems. In parallel to the state-of-the-art software we develop for our ADAS solution, we make sure that the hardware integrated in our sensor suite enables us to achieve industry-leading performance, and cameras are an essential component of that. In the past, our OTIV.TWO prototype used three cameras, covering the front 180° field of view of the tram. Now, our five cameras and our LiDAR sensors offer a complete view of the surroundings of the vehicle, including blind spots usually invisible to the driver, making this sensor suite a game-changer in the industry.
In conclusion, to guarantee high quality results, we put extensive work and testing into the selection, configuration and integration of the sensors, and we are excited to build autonomous technology alongside leading hardware manufacturers and partners.
If you’re interested in finding out more about our latest developments, be sure to follow us on Medium and LinkedIn, or take a look at our website. And if you’re equally passionate about our goal of bringing autonomous technology to rail, get in touch via firstname.lastname@example.org!