On Track Towards Autonomy: Calibration

OTIV
5 min read · Apr 26, 2022


Calibration is an optimization problem that must be solved in order to ensure the safety of autonomous technology. At OTIV, we tackled this challenge head-on by developing our own algorithm in-house. Today, we’ll share part of our ground-breaking journey with you.

PERCEPTION AND SENSOR FUSION

When it comes to autonomous technology, it’s essential to understand the surroundings of our vehicle. What is happening around us? Is there a car that hasn’t seen the tram and is making a turn onto the tracks? A first step in this perception is working with cameras. Our camera sensors provide a detailed view of the world as a collection of pixels. And thanks to our machine learning models, we know whether those pixels belong to a car, a person, a truck, or something else. But cameras alone are not enough. After all, they give us no detailed information about how far an object is from our vehicle.

That’s where lidar sensors come in.

With its laser technology, a lidar is able to output a real-time 3D image of the vehicle’s surroundings, in the form of point clouds. A point cloud is a collection of points specified by their real-world XYZ coordinates. We use these point clouds to assign depth information to objects we detect in the camera images. In other words, thanks to the camera we know, for example, that there’s a person in front of the vehicle, and thanks to the lidar we know that the person is three meters away. This is a crucial step in obtaining accurate and robust perception: sensor fusion, or bringing together information from different sensor sources into one real-time model of the environment.
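To make this concrete, here is a minimal sketch (with made-up values, not our production code) of what a point cloud looks like in memory and how a distance falls straight out of it, using NumPy:

```python
import numpy as np

# A point cloud is simply an N x 3 array of XYZ coordinates (in meters),
# expressed in the lidar's own coordinate frame.
point_cloud = np.array([
    [3.0,  0.1, -0.2],   # points on the person in front of the vehicle
    [3.1, -0.1,  0.4],
    [12.5, 4.0,  0.0],   # a point on a building further away
])

# The Euclidean norm of each point is its distance from the sensor.
distances = np.linalg.norm(point_cloud, axis=1)
print(distances.round(1))  # -> [ 3.   3.1 13.1]
```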

For this sensor fusion to be accurate, we need to map each lidar point precisely onto the right camera pixel. Put differently, we need to map the lidar space onto the camera space. Calibration is an essential process in our sensor fusion pipeline: without it, our autonomous technology wouldn’t succeed. But how is OTIV tackling this calibration problem?
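Under the standard pinhole camera model, that mapping first moves a lidar point into the camera frame with a rotation R and translation t (exactly the quantities calibration must provide) and then projects it through the camera’s intrinsic matrix K. A minimal sketch, with illustrative example values:

```python
import numpy as np

def lidar_point_to_pixel(p_lidar, R, t, K):
    """Project a single lidar point into the camera image.

    R (3x3) and t (3,) are the lidar-to-camera extrinsics that
    calibration estimates; K (3x3) holds the camera intrinsics."""
    p_cam = R @ p_lidar + t      # lidar frame -> camera frame
    if p_cam[2] <= 0:
        return None              # point lies behind the camera
    u, v, w = K @ p_cam          # perspective projection
    return u / w, v / w          # pixel coordinates

# Illustrative values only: identity rotation, a small mounting offset,
# and generic intrinsics for a 1280 x 720 image.
R = np.eye(3)
t = np.array([0.0, -0.1, 0.05])
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
print(lidar_point_to_pixel(np.array([0.5, 0.0, 3.0]), R, t, K))
```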

SOLVING THE CALIBRATION PROBLEM

Calibration allows us to know precisely how the lidar is positioned with respect to the camera. The required outcome? A transformation matrix that links the camera coordinate system and the lidar coordinate system, for every camera-lidar combination.

To compute this transformation matrix, we need a number of inputs. Our transformation has six degrees of freedom (6DOF): three for translation (distances) and three for orientation (angles). Although it is possible to determine these parameters manually, that approach lacks precision and yields only a rough estimate of the transformation matrix. Consequently, at OTIV, we developed an in-house algorithm that automatically calculates the six parameters for each camera-lidar pair.
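To illustrate what those six parameters represent, the sketch below assembles them into a single 4x4 rigid-body transform. The Z-Y-X Euler convention here is an assumption made for illustration; conventions vary between pipelines:

```python
import numpy as np

def transform_from_6dof(tx, ty, tz, roll, pitch, yaw):
    """Build a 4x4 homogeneous transform from the six calibration
    parameters: three translations (meters) and three angles (radians).
    Assumes a Z-Y-X Euler convention; the actual convention is a
    design choice of the calibration pipeline."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx       # combined orientation
    T[:3, 3] = [tx, ty, tz]        # translation
    return T

# Example: lidar mounted 10 cm to the side, tilted 0.01 rad in pitch.
T = transform_from_6dof(0.1, 0.0, 0.0, 0.0, 0.01, 0.0)
```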

After extensive research on the subject, it quickly became clear that there are no ready-to-use solutions for bringing calibration to rail. Building on scientific papers and research, we stepped up to the challenge of developing our very own algorithm in-house.

The first step towards developing the calibration algorithm is data collection. An ideal environment for this is an open space where the sensor suite (consisting of camera and lidar) can be positioned. The seemingly simple, yet crucial element here? A big checkerboard. The board acts as a reference for the algorithm, since we’re able to detect its position and orientation in both the camera and the lidar space. After repeating this detection across multiple frames, with the board in different positions, we can compute a precise estimate of the desired transformation matrix.

A CLOSER LOOK

Once the data is collected, the algorithm has to automatically detect the four corners of the checkerboard in both the camera images and the lidar point clouds. To process the camera images, we use an open API for checkerboard detection. For point clouds, the story is different: no open-source solution is available. So we built a custom checkerboard detection algorithm to guarantee that our calibration results are spot-on.
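OpenCV, for example, offers such an open checkerboard-detection API; the camera-side step could look like the sketch below (the board dimensions and file name are placeholders):

```python
import cv2

img = cv2.imread("calibration_frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Number of inner corners of the board, e.g. a 7 x 5 grid (board-specific).
pattern_size = (7, 5)
found, corners = cv2.findChessboardCorners(gray, pattern_size)

if found:
    # Refine to sub-pixel accuracy; this matters for calibration quality.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```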

Our method combines cutting-edge unsupervised machine learning techniques with advanced linear algebra to give accurate and robust results. As a result, we end up with the 3D coordinates of the checkerboard corners in both the camera and lidar coordinate systems for each frame.
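To give a flavor of how unsupervised learning and linear algebra can combine in such a pipeline: a clustering step can isolate the board in the point cloud, and an SVD-based plane fit can then recover its orientation. The sketch below is a simplified, hypothetical illustration of that combination, not the exact in-house method:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_board_cluster(cloud):
    """Unsupervised step (illustrative): isolate the board as the largest
    dense cluster in an N x 3 cloud. eps/min_samples are sensor-dependent
    and assume the board is actually visible in the frame."""
    labels = DBSCAN(eps=0.1, min_samples=10).fit_predict(cloud)
    clusters = set(labels) - {-1}            # -1 marks noise points
    best = max(clusters, key=lambda l: np.sum(labels == l))
    return cloud[labels == best]

def fit_board_plane(points):
    """Linear-algebra step: the plane normal is the singular vector of
    the centered points with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]                  # point on plane, unit normal
```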

The final step consists of calculating the optimal transformation matrix given a set of corresponding 3D points in two coordinate systems. For this we integrated the Kabsch algorithm, which is based on the singular value decomposition.
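The Kabsch algorithm has a compact closed-form solution; a minimal implementation under the usual conventions looks like this:

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rigid transform (R, t) minimizing ||R @ p_i + t - q_i||
    over corresponding 3D point sets P and Q of shape (N, 3)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

Applied to the matched checkerboard corners collected across all frames, this returns the rotation and translation that make up the desired transformation matrix.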

ALWAYS STRIVING FOR THE BEST

Testing, validating, and piloting our assistance and autonomous solutions requires rigorous processes so that tests happen efficiently and safely. That is why at OTIV we also automate internal processes, and calibration is a great example of this.

Expanding our sensor suite with additional cameras and lidars, and optimizing the positioning, created the need for an automatic calibration tool. After all, every iteration or alteration made to a sensor set-up has a direct impact on calibration, and therefore on the success of the sensor fusion.

It’s clear that the gains we have made in speed and precision thanks to our custom calibration algorithm are contributing to our mission to revolutionize the rail industry. And we’re just getting started!

If you’re interested in finding out more about our latest developments, be sure to follow us on Medium and LinkedIn, or take a look at our website. And if you’re equally passionate about our goal of bringing autonomous technology to rail, get in touch via apply@otiv.ai!
