Introduction
Overview
Over the next few weeks, you will build knowledge and skills in the areas of camera technology, computer vision, and sensor fusion between camera and Lidar. In the final project, you will develop an algorithm to estimate the time-to-collision (TTC) between the ego vehicle and an obstacle in front of it. To make this project as realistic as possible, you will use data from a vehicle equipped with a forward-facing camera and a high-resolution Lidar sensor.
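As a preview of the final project, here is a minimal sketch of the simplest TTC model: assume constant relative velocity and use two successive Lidar range measurements. The function and variable names are illustrative, not taken from the course code.

```python
def ttc_from_lidar(d0, d1, dt):
    """Time-to-collision under a constant-velocity model.

    d0: distance to the preceding vehicle at the earlier frame [m]
    d1: distance at the current frame [m]
    dt: time between the two frames [s]
    """
    closing_speed = (d0 - d1) / dt  # relative approach speed [m/s]
    if closing_speed <= 0:
        return float("inf")  # gap is not shrinking: no collision predicted
    return d1 / closing_speed

# Example: 0.1 s between frames, gap shrinks from 25.0 m to 24.8 m
print(ttc_from_lidar(25.0, 24.8, 0.1))  # ~12.4 s
```

The constant-velocity assumption is a simplification; the course later discusses why a constant-acceleration model can be more realistic in braking scenarios.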
Cameras are one of the main sensors in AVs, and they are the only sensor able to interpret 2D information such as road signs or lane markings. The traffic environment is optimized for human visual perception, and the camera benefits from this.
There are many situations in traffic where a camera alone will fail, such as darkness, bad weather, or direct sunlight. These make computer vision genuinely challenging, and in those cases a second sensor is needed to handle the situation.
Relying on a single sensor in an AV is far too risky. That is why both camera and Lidar are now being used by companies all over the world to build AVs.
Cameras are a very mature technology now, but their main problem is that they suffer from the same drawbacks as the human eye, such as reduced performance in darkness and in adverse weather conditions like snow and heavy rain. In the 1990s, automotive companies started working on the first ADAS.
There are two broad types of computer vision:
1. Classic computer vision: The science is well understood, and the engineering art of knowing which method to use for which problem, and how to choose the right parameters, is well established. Typical tasks include feature tracking, segmentation, 3D reconstruction, and object recognition.
2. Deep learning: The art in DL is selecting the right network for your problem and training it properly with the right data. The downside is that it is hard to understand how a trained network works internally, which makes it really hard to pinpoint certain problems and solve them.
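To make the classic side concrete, here is a self-contained NumPy sketch of one such well-understood technique, the Harris corner response (covered later in the course). This is a bare-bones illustration, not the OpenCV implementation; the window size and `k` value are common textbook defaults.

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    Iy, Ix = np.gradient(img.astype(float))      # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy    # gradient products

    def box_sum(a):  # sum over a (2*win+1) x (2*win+1) neighbourhood
        p = np.pad(a, win)
        out = np.zeros_like(a)
        for dy in range(2 * win + 1):
            for dx in range(2 * win + 1):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic image: bright square on a dark background
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)  # strongest corner location
```

Flat regions yield R near zero, edges yield negative R, and only true corners (gradients in both directions) yield a strongly positive response, which is why the maximum lands on one of the square's corners.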
A good engineer knows both worlds and thus has a large toolbox to choose from, depending on the problem at hand.
Course Outline
- Camera Technology and Optics
- Image Processing and Computer Vision
- Sensor Fusion with Lidar
- Project: Collision Avoidance for Vehicles
The course is divided into the following sections:
- Lesson 2: Autonomous Vehicles and Computer Vision
- Levels of autonomous driving
- Autonomous vehicle sensor sets
- Camera technology
- Computer vision and the OpenCV library
- Lesson 3: Engineering a Collision Detection System
- Collision detection basics
- Estimating time-to-collision with Lidar
- Estimating time-to-collision with a camera
- Lesson 4: Tracking Image Features
- Intensity gradient and filtering
- Harris corner detection
- Overview of popular keypoint detectors
- Gradient-based vs. binary descriptors
- Descriptor matching
- Tracking an object across images
- Project: Camera-Based 2D Feature Tracking
- Lesson 6: Combining Camera and Lidar
- Lidar-to-camera point projection
- Object detection with YOLO
- Standard computer vision vs deep learning
- Creating 3D-objects
- Project: Track an Object in 3D Space
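The camera-based TTC estimation listed under Lesson 3 can be sketched as well: under a pinhole model, distance to the preceding vehicle is inversely proportional to its projected size, so TTC follows from the scale change between frames alone, with no absolute distance needed. This assumes constant relative velocity; the names are illustrative.

```python
def ttc_from_camera(h0, h1, dt):
    """TTC from the apparent-size change of the preceding vehicle.

    h0, h1: projected vehicle height in pixels (previous / current frame)
    dt:     time between the two frames [s]
    Pinhole model: distance is proportional to 1/h, so TTC = dt / (h1/h0 - 1).
    """
    ratio = h1 / h0
    if ratio <= 1.0:
        return float("inf")  # vehicle not growing in the image: not closing in
    return dt / (ratio - 1.0)

# Example: the bounding box grows from 100 px to 101 px in 0.1 s
print(ttc_from_camera(100.0, 101.0, 0.1))  # ~10 s
```

In practice, the course estimates the scale change from ratios of distances between matched keypoints rather than raw bounding-box heights, which is more robust to detection jitter.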
Course GitHub Repo
Notes:
According to CB Insights, there are 40+ corporations currently working on autonomous vehicles (as of March 2020). This includes automakers such as Audi, Tesla, BMW, Volvo, and GM, but also new additions to the automotive space such as Alphabet/Waymo, Uber, and Baidu - who hail from entirely different industries.
The autonomous vehicle market is expected to grow from $54 billion in 2019 to $556 billion in 2026, according to Allied Market Research estimates. Such growth rates, combined with the technological breakthroughs of recent years, explain why things are moving so fast and why everyone is trying to get a share of this emerging market.
In addition to autonomous vehicles, there are systems which assist the driver in various driving tasks, such as changing lanes, remembering speed signs, or braking in time when the preceding vehicle suddenly decelerates. Such systems are referred to as Advanced Driver Assistance Systems (ADAS), and they are the predecessors of fully autonomous vehicles. Some systems on the market, however, imply full autonomy (such as Tesla's Autopilot) but are in fact only ADAS.
Before we analyze a selection of autonomous vehicle prototypes and their respective sensors, let us look at a way to define autonomy - because not all autonomous cars and driver assistance systems are created equal. The following graphic shows the "levels of autonomous driving", which the Society of Automotive Engineers (SAE) has defined.