
Introduction to LiDAR and Point Clouds

In sensor fusion, combining lidar's high-resolution imaging with radar's ability to measure the velocity of objects gives us a better understanding of the surrounding environment than either sensor could provide alone.

Velodyne lidar sensors; from left to right: HDL-64, HDL-32, VLP-16. The larger the sensor, the higher the resolution.

Here are the specs for an HDL-64 lidar. The lidar has 64 layers, where each layer is emitted at a different angle from the z-axis, i.e. at a different incline. Each layer covers a 360-degree view with an angular resolution of 0.08 degrees. On average, the lidar scans ten times per second. It can pick out cars and foliage up to 120 m away, and can sense pavement up to 50 m away.

Types of lidars:
1. Lidars that use micro-mirrors or larger mirrors to scan the laser beam across the FOV.
2. Solid-state lidars that use the phased-array principle, where phase differences are used to steer the beam.
3. Lidars that use a dispersion relationship and prisms.

The power that a lidar uses depends on a few things. More interesting is the output power of the laser, because that depends on the wavelength, and wavelength is correlated with eye safety. A laser in the 905 nm range uses about 2 milliwatts. A laser in the 1500 nm range can use up to 10 times more power, so it can actually reach further than the shorter wavelength while being just as eye safe, but it requires more expensive components.

Eye safe means that you can look into the laser beam without it hurting your eyes. The class of lasers used in automotive lidars is Class 1.

Point Cloud

A point cloud is the set of all lidar reflections that are measured. Each point corresponds to a laser beam that traveled to an object and was reflected back from it.

The amount of data that the lidar generates depends on the design of the lidar (number of layers, etc.), but it is roughly 100 MB/s.

Point Cloud Data (PCD) file

PCD of a city block with parked cars and a passing van. Intensity values are shown as different colors. The large black spot is where the car carrying the lidar sensor is located.
PCD Coordinate System

First the distance of the ray is calculated. It takes 66.7 ns for the round trip, so the beam takes half that time to reach the object. The distance of the ray is then 299792458 m/s × (66.7/2) × 10⁻⁹ s ≈ 10 meters. The ray is traveling along the X-axis direction (in the X–Z plane), so the Y component is 0. The X and Z components can be calculated with some trigonometry: X component = 10 m × sin(90° − 24.8°) ≈ 9.08 m, Z component = −10 m × cos(90° − 24.8°) ≈ −4.19 m.

Point Cloud Library (PCL)

PCL is an open-source C++ library for working with point clouds. It is used to visualize data and render shapes, and it provides many other helpful built-in processing functions. PCL is widely used in the robotics community for working with point cloud data, and there are many tutorials available online. Many of its built-in functions can help detect obstacles; the ones used later in this module are Segmentation, Extraction, and Clustering.

Lidars are mounted on the roof to maximize the FOV. However, a roof-mounted lidar cannot see anything happening close to the vehicle near the ground. To cover the full FOV, multiple lidars are mounted in different places, wherever there are gaps in the overall sensing coverage.

Vertical field of view: most lidars have roughly the same vertical field of view, approximately 30 degrees, independent of the number of layers. With 30 degrees and 16 layers, there is about a 2-degree spacing between adjacent layers. At about 60 meters, a pedestrian can fit between two layers, so this spacing limits your resolution and, in a sense, how far you can see: if no scan layer lands on an object, that object is invisible to the lidar. The more layers in the vertical field of view, the finer the granularity, the more objects you can see, and the further you can see.


PCL Viewer

It handles all the graphics for us.

The viewer is usually passed in by reference. That way the process is more streamlined, because nothing needs to be returned from the rendering functions.
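A minimal sketch of that pattern with PCL: the viewer is created once and handed by reference to helper functions, which draw into it and return nothing. The `renderCloud` helper here is modeled on the course code, but its exact signature is an assumption:

```cpp
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>

// Passing the viewer by reference lets helpers add to the scene
// without returning anything.
void renderCloud(pcl::visualization::PCLVisualizer::Ptr& viewer,
                 pcl::PointCloud<pcl::PointXYZ>::Ptr cloud,
                 const std::string& name)
{
    viewer->addPointCloud<pcl::PointXYZ>(cloud, name);
}

int main()
{
    pcl::visualization::PCLVisualizer::Ptr viewer(
        new pcl::visualization::PCLVisualizer("3D Viewer"));

    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    cloud->push_back(pcl::PointXYZ(1.0f, 0.0f, 0.0f));

    renderCloud(viewer, cloud, "myCloud");  // no return value needed

    while (!viewer->wasStopped())
        viewer->spinOnce();
}
```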

How to represent a LiDAR in a simulator?

Modeling lidar in a simulator is useful because you don't have to make many assumptions about the environment. The lidar is represented by multiple beams, and ray tracing can be done in real time using GPUs. Keep in mind that the environment in a simulator is only an approximation of the real world in terms of reflectivity and material properties.

Note: The lidar arguments are necessary for modeling ray collisions. The Lidar object will hold point cloud data, which can be very large. By instantiating it on the heap, we have more memory to work with than the 2 MB on the stack. However, it takes longer to look up objects on the heap, while stack lookup is very fast.

Templates

LiDAR Parameters

Visualize PCD Data

We use renderCloud instead of renderRays. By default, the point cloud is rendered in white. renderCloud also enables you to give a name to the point cloud; this way you can have multiple point clouds showing up in the viewer, and each one can be identified.
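In PCL itself, that name is the id string passed to addPointCloud; distinct ids let several clouds coexist in one viewer and be restyled individually. A sketch of the underlying calls (the course's renderCloud wraps something like this; cloud contents here are placeholders):

```cpp
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>

int main()
{
    pcl::visualization::PCLVisualizer viewer("Clouds");

    pcl::PointCloud<pcl::PointXYZ>::Ptr a(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr b(new pcl::PointCloud<pcl::PointXYZ>);
    a->push_back(pcl::PointXYZ(0.0f, 0.0f, 0.0f));
    b->push_back(pcl::PointXYZ(1.0f, 0.0f, 0.0f));

    // Each cloud gets its own id, so both can live in one viewer.
    viewer.addPointCloud<pcl::PointXYZ>(a, "sceneCloud");  // white by default
    viewer.addPointCloud<pcl::PointXYZ>(b, "obstCloud");

    // Restyle a cloud later by referring to its id.
    viewer.setPointCloudRenderingProperties(
        pcl::visualization::PCL_VISUALIZER_COLOR, 1.0, 0.0, 0.0, "obstCloud");

    viewer.spin();
}
```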

There is a flag in renderScene in environment.cpp if you want to render the point cloud by itself, without any of the cars in the street.