
THESIS TASKS
Student name: Dao Duy Phuong          Student ID: 15151199
Student name: Phan Vo Thanh Lam       Student ID: 15151173
Major: Automation and Control Engineering Technology
Program: Full-time program
School year: 2015 – 2019
Class: 151511C

I. THESIS NAME:
VISION-BASED NAVIGATION OF AUTONOMOUS CAR USING
CONVOLUTIONAL NEURAL NETWORK
II. TASKS:


1. INITIAL FIGURES AND DOCUMENTS:
2. CONTENT OF THE THESIS:
- In this thesis we research and build a small-scale car based on the theory of
autonomous vehicles, and provide a general survey of the field, including
datasets and computer-vision methods for autonomous vehicles.
- A Convolutional Neural Network maps raw input images directly to a predicted
steering angle and detects traffic signs (Left, Right, Stop) and the “Car”
object as output.
III. RECEIVED DATE:
IV. THESIS COMPLETED DATE:
V. ADVISOR: My-Ha Le, PhD.
Ho Chi Minh City, July 02, 2019

Advisor                              Head of Department
(signature)                          (signature)



SCHEDULE
Student name: Dao Duy Phuong          Student ID: 15151199
Student name: Phan Vo Thanh Lam       Student ID: 15151173
Major: Automation and Control Engineering Technology
Program: Full-time program
School year: 2015 – 2019
Class: 151511C

THESIS NAME:
VISION-BASED NAVIGATION OF AUTONOMOUS CAR USING
CONVOLUTIONAL NEURAL NETWORK
- 01/02/2019 – 07/02/2019 (Dao Duy Phuong, Phan Vo Thanh Lam): Register the
topic with the advisor.
- 08/03/2019 – 15/03/2019 (Dao Duy Phuong, Phan Vo Thanh Lam): List the
components to buy (RC car platform, driver, camera, battery, servo, Raspberry
Pi, …).
- 16/03/2019 – 01/04/2019 (Dao Duy Phuong, Phan Vo Thanh Lam): Research the
theory of Convolutional Neural Networks (CNN) and Deep Learning (DL).
- 02/04/2019 – 20/04/2019 (Dao Duy Phuong, Phan Vo Thanh Lam): Design the PCB,
connect all components on the RC car platform, and collect training data.
- 21/04/2019 – 02/05/2019 (Dao Duy Phuong, Phan Vo Thanh Lam): Study how to
control and use the embedded computers (Raspberry Pi and Jetson Nano); study
how to train the classification and object detection models.
- 03/05/2019 – 20/05/2019 (Dao Duy Phuong, Phan Vo Thanh Lam): Program the
servo and motor control, set up communication between the Raspberry Pi, Jetson
Nano Kit and Arduino; train on the collected data and export the model.
- 21/05/2019 – 01/06/2019 (Dao Duy Phuong, Phan Vo Thanh Lam): Combine all
components (hardware and software).
- 02/06/2019 – 20/06/2019 (Dao Duy Phuong, Phan Vo Thanh Lam): Operate in the
outdoor environment and adjust for the best operation.
- 21/06/2019 – 01/07/2019 (Dao Duy Phuong, Phan Vo Thanh Lam): Write the
technical report.

ADVISOR
(signature)



ASSURANCE STATEMENT
We hereby certify that this topic was implemented by ourselves, drawing on
previously published documents. We have not reused or copied from any
document without citing it.

Authors



ACKNOWLEDGEMENT
First and foremost, we would like to thank our thesis advisor, Dr. Le My Ha,
for his help and advice throughout the process of implementing this topic.
His guidance was an important factor in the success of this project.
Second, we would like to thank the Faculty of Electrical and Electronics
Engineering and the teachers who have taught and mentored us throughout our
school years. During our thesis, we received a great deal of help, suggestions
and enthusiastic advice from our teachers.
Third, we would like to thank the Intelligent Systems Laboratory (IS Lab) for
providing facilities and enabling us to carry out the thesis.
Finally, we would like to thank our families, friends and classmates, who
always stood by and supported us while we carried out this topic.
We have tried very hard to complete this topic; even so, given our limited
knowledge, research scope and time, it will surely still have shortcomings.
We sincerely thank you.

Ho Chi Minh City, July 02, 2019
Dao Duy Phuong

Phan Vo Thanh Lam




ADVISOR’S COMMENT SHEET
Student name: Dao Duy Phuong          Student ID: 15151199
Student name: Phan Vo Thanh Lam       Student ID: 15151173
Major: Automation and Control Engineering Technology
Program: Full-time program
School year: 2015 – 2019
Class: 151511C


1. About the thesis contents:
The students have implemented a final project that satisfies the requirements
of the undergraduate program.
2. Advantages:
- The system operates stably in outdoor environments.
- Road signs are recognized with high accuracy.
3. Disadvantages:
The system has not been tested in different environments.
4. Proposed for thesis defense?
The students are permitted to present the thesis.
5. Rating:
Excellent
6. Mark: 9.8 (In writing: nine point eight)
Ho Chi Minh City, July 02, 2019
Advisor



REVIEWER’S COMMENT SHEET
Student name: Dao Duy Phuong          Student ID: 15151199
Student name: Phan Vo Thanh Lam       Student ID: 15151173
Major: Automation and Control Engineering Technology
Program: Full-time program
School year: 2015 – 2019
Class: 151511C

1. About the thesis contents:
The project meets the requirements of a graduation thesis in terms of
structure, presentation and content. Its main content is generating control
signals for a self-driving car from camera images in real time. The authors
implemented two main models, classification and object detection, using a
Raspberry Pi for the classification model and a Jetson Nano for the object
detection model. The algorithms presented in the thesis include an algorithm
for building a dataset in real time and a training algorithm for a CNN-type
neural network. The conclusions, evaluations and remarks the authors draw from
the research are not truly convincing, because there are no comparisons with
other studies and no quantitative evaluation; this is the main weakness of the
thesis.
2. Opinions – Conclusions
- Rewrite the overview, paying particular attention to recent research.
- To obtain comparative figures that highlight the strengths and weaknesses of
the proposed technique, the authors need to analyze the results they obtained
and carry out experiments with measurements on the system. This is the basis
for rewriting the conclusion.
- The citations are not appropriate and need to be adjusted. The references do
not follow the required format and need to be corrected.
3. Rating:
.......................................................................................................................................
4. Mark: 7.1 (In writing: seven point one)
Ho Chi Minh City, July 10, 2019
Reviewer

Quách Thanh Hải



CONTENTS
THESIS TASKS
SCHEDULE
ASSURANCE STATEMENT
ACKNOWLEDGEMENT
ADVISOR’S COMMENT SHEET
REVIEWER’S COMMENT SHEET
CONTENTS
ABBREVIATIONS AND ACRONYMS
LIST OF FIGURES
LIST OF TABLES
ABSTRACT
CHAPTER 1: OVERVIEW
1.1. INTRODUCTION
1.2. BACKGROUND AND RELATED WORK
1.2.1. Overview of Autonomous Cars
1.2.2. Literature Review and Related Studies
1.3. OBJECTIVES OF THE THESIS
1.4. RESEARCH OBJECT AND SCOPE
1.5. RESEARCH METHOD
1.6. THE CONTENT OF THE THESIS
CHAPTER 2: THE PRINCIPLE OF SELF-DRIVING CARS
2.1. INTRODUCTION OF SELF-DRIVING CARS
2.2. DIFFERENT TECHNOLOGIES USED IN SELF-DRIVING CARS
2.2.1. Laser
2.2.2. Lidar
2.2.3. Radar
2.2.4. GPS
2.2.5. Camera
2.2.6. Ultrasonic Sensors
2.3. OVERVIEW ABOUT ARTIFICIAL INTELLIGENCE
2.3.1. Artificial Intelligence
2.3.2. Machine Learning
2.3.3. Deep Learning
CHAPTER 3: CONVOLUTIONAL NEURAL NETWORK
3.1. INTRODUCTION
3.2. STRUCTURE OF CONVOLUTIONAL NEURAL NETWORKS
3.2.1. Convolution Layer
3.2.2. Activation function
3.2.3. Stride and Padding
3.2.4. Pooling Layer
3.2.5. Fully-Connected layer
3.3. NETWORK ARCHITECTURE AND PARAMETER OPTIMIZATION
3.4. OBJECT DETECTION
3.4.1. Single Shot Detection framework
3.4.2. MobileNet Architecture
3.4.3. Non-Maximum Suppression
3.5. OPTIMIZE NEURAL NETWORKS
3.5.1. Types of Gradient Descent
3.5.2. Types of Optimizer
CHAPTER 4: HARDWARE DESIGN OF SELF-DRIVING CAR PROTOTYPE
4.1. HARDWARE COMPONENTS
4.1.1. 1/10 Scale 4WD Off Road Remote Control Car Buggy Desert
4.1.2. Brushed Motor RC-540PH
4.1.3. Motor control module BTS7960
4.1.4. RC Servo MG996
4.1.5. Raspberry Pi 3 Model B+
4.1.6. NVIDIA Jetson Nano Developer Kit
4.1.7. Camera Logitech C270
4.1.8. Arduino Nano
4.1.9. Lipo Battery 2S-30C 2200mAh
4.1.10. Voltage reduction module
4.1.11. USB UART PL2303
4.2. HARDWARE WIRING DIAGRAM
4.2.1. Construct The Hardware Platform
4.2.2. PCB Of Hardware
CHAPTER 5: CONTROL ALGORITHMS OF SELF-DRIVING CAR PROTOTYPE
5.1. CONTROL THEORY
5.1.1. Servo Control Theory
5.1.2. UART Communication Theory
5.2. FLOWCHART OF COLLECTING TRAINING DATA
5.3. FLOWCHART OF NAVIGATING THE CAR USING TRAINED MODEL
CHAPTER 6: EXPERIMENTS
6.1. EXPERIMENTAL ENVIRONMENTS
6.2. COLLECT DATA
6.3. DATA AUGMENTATIONS
6.4. TRAINING PROCESS
6.5. OUTDOOR EXPERIMENTS RESULTS
CHAPTER 7: CONCLUSION AND FUTURE WORK
REFERENCES



ABBREVIATIONS AND ACRONYMS
ADAS : Advanced Driving Assistance System
CCD : Charge Coupled Device
CMOS : Complementary Metal-Oxide Semiconductor
CNN : Convolutional Neural Network
DSP : Digital Signal Processing
FOV : Field of View
FPGA : Field-Programmable Gate Array
GPS : Global Positioning System
GPIO : General Purpose Input-Output
GPU : Graphics Processing Unit
IMU : Inertial Measurement Unit
LIDAR : Light Detection And Ranging
PAS : Parking Assistance System
PCB : Printed Circuit Board
PWM : Pulse Width Modulation
RADAR : Radio Detection And Ranging
RC : Radio Controlled
RNN : Recurrent Neural Network
SOCs : Systems-On-a-Chip
UV : UltraViolet
4WD : 4 Wheel Drive
WP : Water Proof
YAG : Yttrium Aluminum Garnet
SSD : Single Shot Detection
YOLO : You Only Look Once
NMS : Non-Maximum Suppression
IOU : Intersection Over Union
AI : Artificial Intelligence
ML : Machine Learning
DL : Deep Learning
SGD : Stochastic Gradient Descent
RMSProp : Root Mean Square Propagation
NAG : Nesterov Accelerated Gradient
Adagrad : Adaptive Gradient Algorithm
Adam : Adaptive Moment Estimation


LIST OF FIGURES
Figure 1.1 Google’s fully Self-Driving Car design introduced in May 2014
Figure 1.2 Features of Google Self-Driving Car
Figure 1.3 Uber Self-driving Car
Figure 1.4 Self-driving Car of HCMUTE
Figure 2.1 How Cars are getting smarter
Figure 2.2 Important components of a self-driving car
Figure 2.3 A laser sensor on the roof constantly scans the surroundings
Figure 2.4 The map drawn by LIDAR
Figure 2.5 Structure and functionality of LIDAR
Figure 2.6 Comparison between LIDAR and RADAR
Figure 2.7 Camera on Autonomous Car
Figure 2.8 Placement of ultrasonic sensors for PAS
Figure 2.9 Ultrasonic Sensor on Autonomous Car
Figure 2.10 Relation between AI, Machine Learning, and Deep Learning
Figure 2.11 Neural networks, organized in layers consisting of sets of
interconnected nodes; networks can have tens or hundreds of hidden layers
Figure 2.12 A simple Neural Network or a Perceptron
Figure 2.13 Performance comparison between DL and other learning algorithms
Figure 3.1 CNN architecture
Figure 3.2 The input data, filter and result of a convolution layer
Figure 3.3 The convolution operation of a CNN: I is an input array, K is a
kernel, and I*K is the output of the convolution operation
Figure 3.4 The result of a convolution operation; (a) is the input image
Figure 3.5 Performing multiple convolutions on an input
Figure 3.6 The convolution operation for each filter
Figure 3.7 Applying zero padding to the input matrix
Figure 3.8 The max pooling operation
Figure 3.9 A deep Neural Network classifying multiple classes
Figure 3.10 Network architecture
Figure 3.11 Classifying multiple classes in an image: (a) image with generated
boxes, (b) default boxes on an 8x8 feature map, (c) default boxes on a 4x4
feature map
Figure 3.12 The standard convolutional filters in (a) are replaced by two
layers, depthwise convolution in (b) and pointwise convolution in (c), to
build a depthwise separable filter
Figure 3.13 Left: a convolutional layer with batchnorm and ReLU. Right: a
depthwise separable convolution with depthwise and pointwise layers followed
by batchnorm and ReLU
Figure 3.14 Detected images before applying non-max suppression: (a) Left
traffic sign, (b) Car object
Figure 3.15 Detected images after applying non-max suppression: (a) Left
traffic sign, (b) Car object
Figure 4.1 Block diagram of RC self-driving car platform
Figure 4.2 1/10 Scale 4WD Off Road Remote Control Car Desert Buggy
Figure 4.3 Brushed Motor RC-540PH
Figure 4.4 Outline drawing of Brushed motor RC-540
Figure 4.5 Motor control module BTS7960
Figure 4.6 Digital RC Servo FR-1501MG
Figure 4.7 Raspberry Pi 3 Model B+
Figure 4.8 Raspberry Pi block function
Figure 4.9 Raspberry Pi 3 B+ Pinout
Figure 4.10 Location of connectors and main ICs on Raspberry Pi 3
Figure 4.11 NVIDIA Jetson Nano Developer Kit
Figure 4.12 Jetson Nano compute module with 260-pin edge connector
Figure 4.13 NVIDIA Jetson Nano Pinout
Figure 4.14 Performance of various deep learning inference networks with
Jetson Nano and TensorRT, using FP16 precision and batch size 1
Figure 4.15 Camera Logitech C270
Figure 4.16 Arduino Nano
Figure 4.17 Arduino Nano Pinout
Figure 4.18 Lipo Battery 2S-30C 2200mAh
Figure 4.19 LM2596S Dual USB Type
Figure 4.20 DC XH-M404 XL4016E1 8A
Figure 4.21 USB UART PL2303
Figure 4.22 Hardware Wiring Diagram
Figure 4.23 Hardware Platform: (a) Front, (b) Side
Figure 4.24 Circuit diagram
Figure 4.25 Layout of the PCB
Figure 5.1 Inside of RC Servo
Figure 5.2 Variable pulse width controls servo position
Figure 5.3 UART Wiring
Figure 5.4 UART receives data in parallel from the data bus
Figure 5.5 Data Frame of Transmitting UART
Figure 5.6 Transmitting and Receiving UART
Figure 5.7 Data Frame of Receiving UART
Figure 5.8 Converting the serial data back into parallel
Figure 5.9 Flowchart of collecting training images
Figure 5.10 Flowchart of Navigating the Car using Trained Model
Figure 6.1 The overall oval-shaped lined track
Figure 6.2 Lined track and traffic sign recognition
Figure 6.3 Traffic signs: (a) Left, (b) Right, (c) Stop
Figure 6.4 Some typical images of the Dataset
Figure 6.5 Horizontal Flip: (a) Original image, (b) Horizontally flipped image
Figure 6.6 Brightness Augmentation: (a) Original image, (b) Brighter image,
(c) Darker image
Figure 6.7 GUI of Training App
Figure 6.8 Model under training
Figure 6.9 The visualization output of convolutional layers: (a) an originally
selected frame; (b), (c), (d), (e) and (f) are the feature maps at the first
five convolutional layers
Figure 6.10 Change in loss value throughout training
Figure 6.11 Change in accuracy value throughout training
Figure 6.12 Experimental results: the top row shows input images and the
bottom row the outputs of the model’s Softmax function. (a) Steer 100,
(b) Steer 110, (c) Steer 120, (d) Steer 130, (e) Steer 140, (f) Steer 150,
(g) Steer 160
Figure 6.13 The actual and predicted steering wheel angles of the models
Figure 6.14 The outputs of the object detection model



LIST OF TABLES
Table 3.1 MobileNet body architecture
Table 4.1 Specifications of the brushed motor RC-540
Table 4.2 Specifications of the Raspberry Pi 3 Model B+
Table 4.3 Specifications of the NVIDIA Jetson Nano
Table 4.4 Technical specifications of the Arduino Nano



ABSTRACT
Recent years have witnessed amazing progress in AI-related fields such as
computer vision, machine learning and autonomous vehicles.
In this thesis, we present a deep neural network (DNN) based autonomous car
platform. It is a small-scale replication of a real self-driving car that
drives on the road using a convolutional neural network (CNN) model, which
takes images from a front-facing camera as input and produces car steering
angles as output. The experimental results demonstrate the effectiveness and
robustness of the autopilot model in the lane-keeping and traffic-sign
detection tasks. The top speed is about 5-6 km/h in a wide variety of driving
conditions, regardless of whether lane markings are present or not.
Keywords: Computer Vision, Real-time navigation, Self-driving car, Object
Detection Model, Classification Model.



CHAPTER 1: OVERVIEW
1.1. INTRODUCTION
In recent years, technology companies have been discussing autonomous cars and
trucks. Promises of life-changing safety and ease have been hung on these
vehicles. Now some of these promises are beginning to come to fruition, as
cars with more and more autonomous features hit the market each year.
Although fully autonomous cars are most likely still years away from being
available to consumers, they are closer than many people think. Current
estimates predict that by 2025 the world will see over 600,000 self-driving
cars on the road, and by 2035 that number will jump to almost 21 million.
Trials of self-driving car services have actually begun in some cities in the
United States. And even though fully self-driving cars are not on the market
yet, current technology allows vehicles to be more autonomous than ever
before, using intricate systems of cameras, lasers, radar, GPS, and
interconnected communication between vehicles.
Since their introduction in the early 1990s, Convolutional Neural Networks
(CNNs) [1] have been the most popular deep learning architecture, owing to the
effectiveness of conv-nets on image-related problems such as handwriting
recognition, facial recognition and cancer cell classification. The
breakthrough of CNNs is that features are learned automatically from training
examples. Although their primary disadvantage is the requirement for very
large amounts of training data, recent studies have shown that excellent
performance can be achieved with networks trained on “generic” data. For the
last few years, CNNs have achieved state-of-the-art performance in most
important classification tasks [2], object detection tasks [3] and Generative
Adversarial Networks [4].
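To make this concrete, the following minimal sketch (in Python with tf.keras)
shows the kind of network this idea leads to for our task: a small CNN that
maps a raw camera frame directly to one of several discrete steering classes,
as in the classification model of Chapter 6. The input size, layer widths and
the seven steering classes (angles 100 to 160, following Figure 6.12) are
illustrative assumptions, not the thesis's exact architecture.

# A minimal sketch, not the thesis's exact architecture: a small CNN that
# maps a raw RGB camera frame to one of 7 discrete steering classes.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # assumed: steering angles 100, 110, ..., 160

def build_steering_cnn(input_shape=(64, 64, 3)):  # input size assumed
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),  # convolution + ReLU
        layers.MaxPooling2D(2),                   # max pooling
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),     # fully-connected layer
        layers.Dense(NUM_CLASSES, activation="softmax"),  # class scores
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example usage with collected road images labeled 0..6:
# model = build_steering_cnn()
# model.fit(train_images, train_labels, epochs=10)

Treating steering as classification over a few discrete angles, rather than
regression, matches the Softmax outputs reported in the experiments of
Chapter 6.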
Besides, with the increase in computational capacity, we are presently able to
train complex neural networks to understand a vehicle’s environment and decide
its behavior. For example, the Tesla Model S was known to use a specialized
chip (Mobileye EyeQ), which runs a deep neural network for vision-based
real-time obstacle detection and avoidance. More recently, researchers have
been investigating DNN-based end-to-end control of cars and other robots [5].
Executing a CNN on an embedded computing platform poses several challenges.
First, despite the heavy computation involved, strict real-time requirements
must be met. For instance, latency in a vision-based object detection task may
be directly linked to the safety of the vehicle. On the other hand, the
computing hardware platform must also satisfy cost, size, weight, and power
constraints. These two conflicting requirements complicate the platform
selection process. There are already several relatively low-cost RC-car based
prototypes, such as MIT’s RaceCar [6] and UPenn’s F1/10 racecar.
Encouraged by these positive results, in this thesis we develop a real-time
end-to-end deep learning based RC-car platform. In terms of hardware, it
includes a quad-core Raspberry Pi 3 Model B+ computer, a Jetson Nano Developer
Kit, two Logitech cameras, an Arduino Nano and a 1/10 scale RC car. The
research target is to train two models, a vision-oriented classification model
and an object detection model, that autonomously navigate the car in real time
in outdoor environments with a wide variety of driving conditions; a minimal
sketch of the resulting on-board loop is given below.
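The end-to-end pipeline on the car can be summarized as a simple loop: grab a
frame from the front-facing camera, run the trained model, and send the
predicted steering angle to the Arduino over a UART serial link (Chapter 5).
The sketch below is illustrative only; the serial port name, baud rate, frame
size and angle encoding are placeholder assumptions, not the thesis code.

# Illustrative on-board control loop (port, baud rate and message format
# are placeholder assumptions): camera frame -> CNN -> angle -> Arduino.
import cv2
import numpy as np
import serial
import tensorflow as tf

STEER_ANGLES = [100, 110, 120, 130, 140, 150, 160]  # assumed classes

model = tf.keras.models.load_model("steering_cnn.h5")  # hypothetical file
arduino = serial.Serial("/dev/ttyUSB0", 115200)        # UART to Arduino
cap = cv2.VideoCapture(0)                              # front-facing camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the same way as during training (size/scaling assumed).
    img = cv2.resize(frame, (64, 64)).astype(np.float32) / 255.0
    probs = model.predict(img[None, ...], verbose=0)[0]  # softmax output
    angle = STEER_ANGLES[int(np.argmax(probs))]          # predicted angle
    arduino.write(f"{angle}\n".encode())                 # send over UART

cap.release()
arduino.close()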
1.2. BACKGROUND AND RELATED WORK
1.2.1. Overview of Autonomous Cars
An autonomous car is a car that can drive itself; it combines sensors and
software to control, navigate, and drive the vehicle. Currently, there are no
legally operating, fully autonomous vehicles in the United States. However,
there are partially autonomous vehicles, cars and trucks with varying amounts
of self-automation, ranging from conventional cars with brake and lane
assistance to highly independent self-driving prototypes. Though still in its
infancy, self-driving technology is becoming increasingly common and could
radically transform our transportation system (and, by extension, our economy
and society). Based on automaker and technology company estimates, level 4
self-driving cars could be for sale in the next several years (see the callout
box for details on autonomy levels).
Various self-driving technologies have been developed by Google, Uber, Tesla,
Nissan, and other major automakers, researchers, and technology companies.
While design details vary, most self-driving systems create and maintain an
internal map of their surroundings, based on a wide array of sensors, like
radar. Uber’s self-driving prototypes use sixty-four laser beams, along with
other sensors, to construct their internal map; Google’s prototypes have, at
various stages, used lasers, radar, high-powered cameras, and sonar. Software
then processes those inputs, plots a path, and sends instructions to the
vehicle’s “actuators,” which control acceleration, braking, and steering.
Hard-coded rules, obstacle avoidance algorithms, predictive modeling, and
“smart” object discrimination (e.g., knowing the difference between a bicycle
and a motorcycle) help the software follow traffic rules and navigate
obstacles. Partially autonomous vehicles may require a human driver to
intervene if the system encounters uncertainty; fully autonomous vehicles may
not even offer a steering wheel. Self-driving cars can be further
distinguished as being “connected” or not, indicating whether they can
communicate with other vehicles and/or infrastructure, such as next-generation
traffic lights. Most prototypes do not currently have this capability.
1.2.2. Literature Review and Related Studies
1.2.2.1. Google Self-driving Car Project
Although many companies are racing to be the ones to bring a fully autonomous,
commercially viable vehicle to the market, including Lyft, Ford, Uber, Honda,
Toyota, Tesla and many others, it’s Waymo, the autonomous vehicle division of
Alphabet, Google’s parent company, that has been the first to reach many
milestones along the journey.
On November 7, 2017, the Waymo team announced: “Starting now, Waymo's fully
self-driving vehicles—our safest, most advanced vehicles on the road today—are
test-driving on public roads, without anyone in the driver's seat.”
These vehicles have been on the roads of Chandler, AZ, a suburb of Phoenix,
since mid-October without a safety driver behind the wheel, although until
further notice there is a Waymo employee in the back seat. Waymo vehicles are
equipped with powerful sensors that give them 360-degree views of the world,
something a human behind the wheel never gets. There are short-range lasers
and others that can see up to 300 meters away.
These vehicles don’t have free rein to drive wherever they want quite yet;
they are “geofenced” within a 100-square-mile area. As the cars collect more
data and acquire more driving experience, that area will expand. Waymo has an
Early Rider program that lets people apply who are interested in using the
autonomous vehicles to get around town.

Figure 1.1 Google’s fully Self-Driving Car design introduced in May 2014




Figure 1.2 Features of Google Self-Driving Car
Technology: Google’s robotic cars carry about $150,000 in equipment, as shown
in Figure 1.2, including a LIDAR system that itself costs $70,000. The
Velodyne 64-beam laser range finder mounted on top allows the vehicle to
generate a detailed 3D map of its environment. The car takes these generated
maps and combines them with high-resolution maps of the world, producing
different types of data models that are then used for driving itself. Some of
these computations are performed on remote computer farms, in addition to
on-board systems.
Limitations: As of August 28, 2014, the latest prototype had not been tested
in heavy rain or snow due to safety concerns. The cars still rely primarily on
pre-programmed route data; they do not obey temporary traffic signals, and in
some situations revert to a slower “extra cautious” mode at complex unmapped
intersections. The lidar technology cannot spot some potholes or discern when
humans, such as a police officer, are signaling the car to stop. However,
Google plans to have these issues fixed by 2020.
1.2.2.2. Uber Self-driving Car Project
Uber thought it would have 75,000 autonomous vehicles on the roads this year
and be operating driverless taxi services in 13 cities by 2022, according to court
documents unsealed last week. To reach those ambitious goals, the ridesharing
company, which hopes to go public later this year, was spending $20 million a
month on developing self-driving technologies.
The figures, dating back to 2016, paint a picture of a company desperate to meet
over-ambitious autonomy targets and one that is willing to spend freely, even
recklessly, to get there. As Uber prepares for its IPO later this year, the new details
could prove an embarrassing reminder that the company is still trailing in its
efforts to develop technology that founder Travis Kalanick called
“existential” to Uber’s future.
The report was written for Uber as part of last year’s patent and trade secret
theft lawsuit with rival Waymo, which accused engineer Anthony Levandowski of
taking technical secrets with him when he left Google to found self-driving truck
startup Otto. Uber acquired Otto in 2016. Uber hired Walter Bratic, the author of the
report, as an expert witness to question Waymo’s valuation of the economic
damages it had suffered — a whopping $1.85 billion. Bratic’s report capped at
$605,000 the cost to independently develop Waymo’s purported trade secrets.
Waymo eventually settled for 0.34 percent of Uber’s equity, which could be
worth around $300 million after an IPO if a recent $90 billion valuation of the
company is accurate. Bratic’s report provides details of internal analyses and reports
codenamed Project Rubicon that Uber carried out during 2016. A presentation in
January that year projected that driverless cars could become profitable for Uber in
2018, while a May report said Uber might have 13,000 self-driving taxis by 2019.
Just four months later, that estimate had jumped to 75,000 vehicles.
The current head of Uber’s self-driving technologies, Eric Meyhofer, testified
that Uber’s original estimates of having tens of thousands of AVs in a dozen cities
by 2022 were “highly speculative” “assumptions and estimates.” Although
Meyhofer declined to provide any other numbers, he did say, “They probably ran a
lot of scenarios beyond 13 cities. Maybe they assumed two in another scenario, or
one, or three hundred. It’s a set of knobs you turn to try to understand parameters
that you need to try to meet.”

Figure 1.3 Uber Self-driving Car


One specific goal, set by John Bares, the engineer then in charge of Uber’s
autonomous vehicles, was for Uber to be able to forgo human safety drivers by
2020. The company’s engineers seemed certain that acquiring Otto and
Levandowski would supercharge its progress.

1.2.2.3. Embedded Computing Platforms for Real-Time Inferencing
Real-time embedded systems, such as autonomous vehicles, present unique
challenges for deep learning, as the computing platforms of such systems must
satisfy two often conflicting goals [7]: the platform must provide enough
computing capacity for real-time processing of computationally expensive AI
workloads (deep neural networks), while also satisfying various constraints
such as cost, size, weight, and power consumption limits.
Accelerating AI workloads, especially inferencing operations, has received a lot
of attention from academia and industry in recent years as applications of deep
learning are broadening to areas of real-time embedded systems such as
autonomous vehicles. These efforts include the development of various
heterogeneous architecture-based systems-on-a-chip (SOCs) that may include
multiple cores, GPUs, DSPs, FPGAs, and neural-network-optimized ASIC hardware.
Consolidating multiple tasks on SOCs with many shared hardware resources while
guaranteeing real-time performance is also an active research area, which is
orthogonal to improving raw performance. Consolidation is necessary for
efficiency, but unmanaged interference can nullify the benefits of
consolidation.
For these reasons, finding a good computing platform is a non-trivial task, one that
requires a deep understanding of the workloads and the hardware platform being
utilized.
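A practical first check of whether a candidate platform meets such real-time
constraints is simply to measure per-frame inference latency on the target
device. The sketch below (again an illustrative assumption, reusing the
hypothetical model file from the earlier sketches) times the prediction step
and reports the average latency and effective frame rate.

# Illustrative per-frame latency check on the target device
# (model file name and input size are assumptions from the earlier sketch).
import time
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("steering_cnn.h5")
frame = np.random.rand(1, 64, 64, 3).astype(np.float32)  # dummy frame

model.predict(frame, verbose=0)  # warm-up run

N = 100
start = time.perf_counter()
for _ in range(N):
    model.predict(frame, verbose=0)
elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / N:.1f} ms "
      f"({N / elapsed:.1f} FPS)")

If the measured frame rate falls below the camera's capture rate, the platform
or the model has to be revisited, which is exactly the trade-off described
above.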
1.2.2.4. Real-Time Self-Driving Car Navigation Using Deep Neural
Network (Paper)
This paper was published at the 4th International Conference on Green
Technology and Sustainable Development (GTSD 2018).
The paper presented a monocular vision-based self-driving car prototype using
a deep neural network. First, the CNN model parameters were trained using data
collected from a vehicle platform built with a 1/10 scale RC car, a Raspberry
Pi 3 Model B computer and a front-facing camera. The training data were road
images paired with time-synchronized steering angles generated by manual
driving. Second, the trained model was road-tested on the Raspberry Pi,
driving the car in an outdoor environment around oval-shaped and 8-shaped
lined tracks with traffic signs. The experimental results demonstrate the
effectiveness and robustness of the autopilot model in the lane-keeping task.
The vehicle's top speed is about 5-6 km/h in a wide variety of driving
conditions, regardless of whether lane markings are present or not [8]. In
this paper, a classification model was used and the training accuracy was
92.38%.

Figure 1.4 Self-driving Car of HCMUTE
1.3. OBJECTIVES OF THE THESIS
The thesis concentrated on several key goals:
- Research the theory of autonomous vehicles and the various considerations
that may be necessary during the construction of a self-driving car prototype.
- Research and select suitable, compatible components such as sensors,
microcontrollers, motors, drivers, power supplies and communication modules.
In detail, this thesis mainly uses a Raspberry Pi 3 Model B+, an NVIDIA Jetson
Nano Kit, a camera module, an Arduino Nano and several drivers.
- Use a Convolutional Neural Network that directly maps raw input images to a
predicted steering angle and detects traffic signs (Left, Right, Stop) and the
“Car” object as output.
- The target is a car that can drive itself in real time in the outdoor
environment on the track and obey the traffic signs.
1.4. RESEARCH OBJECT AND SCOPE
Within the scope of this thesis, a real-time end-to-end deep learning RC-car
platform was developed using a 1/10 scale RC car chassis, a Raspberry Pi 3
Model B+, an NVIDIA Jetson Nano Kit, a Logitech camera and an Arduino Nano. A
Convolutional Neural Network (CNN) directly maps raw input images to a
predicted steering angle as output. The trained models were road-tested on the
Raspberry Pi and Jetson Nano, driving the car in real time in an outdoor
environment around a lined track with traffic signs. The effectiveness and
robustness of the autopilot model in the lane-keeping task were then evaluated
based on the experimental results.
1.5. RESEARCH METHOD
The research project was divided into stages, each a sequential step in
developing and building the self-driving car prototype. Each stage is defined
so that it builds on the previous one, evolving the robot within the goals and
requirements set. This ultimately led to completing a car that met the
objectives within the available timeframe.
The first step is to define the key points and objectives, in order to deeply
understand what a real-time end-to-end deep learning self-driving car and a
convolutional neural network actually are. Beyond that, it is critical to plan
the research and carry out suitable design and programming.
The second step is to review previous projects. This establishes the
foundation for informed decisions based on earlier experience, avoiding past
mistakes and obsolete designs.
The third step, the theoretical method, is to apply the studied knowledge to
the control system. This provides valuable data on the likely stability of the
control system by tuning suitable parameters over many days of testing.
The next step is to design and manufacture the PCB, chassis, drive shaft,
etc., and assemble all hardware components. In parallel, the performance of
the system is analyzed, which provides the chance to calibrate and make
additional changes toward the final system.
The practical method is to directly design the mechanism and PCB. This allows
testing on the real system and a final assessment based on the response of the
self-driving car prototype.
Finally, the trained model is road-tested on the Raspberry Pi, driving the car
in real time in an outdoor environment around oval-shaped and 8-shaped lined
tracks with traffic signs, and the effectiveness and robustness of the
autopilot model in the lane-keeping task are evaluated based on the
experimental results.



1.6. THE CONTENT OF THE THESIS
The thesis “Research, Design and Construct Real-Time Self-Driving Car using
Deep Neural Network” includes the following chapters:
Chapter I: Overview: This chapter provides a brief overview of the report,
including the introduction, goals, scope and content of the thesis.
Chapter II: The Principle of Self-Driving Cars: This chapter provides the
basic background for this thesis, such as the principles of self-driving cars,
artificial intelligence, machine learning, deep learning and convolutional
neural networks.
Chapter III: Convolutional Neural Network: This chapter presents Convolutional
Neural Networks and the structure of a CNN.
Chapter IV: Hardware Design of the Self-Driving Car Prototype: This chapter
presents the car’s design and hardware selection, and the construction of the
car platform based on that design.
Chapter V: Control Algorithms of the Self-Driving Car Prototype: This chapter
presents the control algorithms and software flowcharts.
Chapter VI: Experiments: This chapter shows the experimental results of this
thesis.
Chapter VII: Conclusion and Future Work: This chapter provides conclusions in
terms of the advantages and limitations of this thesis, summarizes the
contributions, and proposes ideas and orientations for future work.
