Key Ingredients of Self-Driving Cars

Abstract — Over the past decade, many research articles have been published in the area of autonomous driving. However, most of them focus only on a specific technological area, such as visual environment perception, vehicle control, etc. Furthermore, due to fast advances in self-driving car technology, such articles become obsolete very quickly. In this paper, we give a brief but comprehensive overview of the key ingredients of autonomous cars (ACs), including driving automation levels, AC sensors, AC software, open source datasets, industry leaders, AC applications and existing challenges.

Authors: Rui Fan, Jianhao Jiao, Haoyang Ye, Yang Yu, Ioannis Pitas, Ming Liu

I. INTRODUCTION

Over the past decade, with a number of autonomous system technology breakthroughs being witnessed in the world, the race to commercialize Autonomous Cars (ACs) has become fiercer than ever [1]. For example, in 2016, Waymo unveiled its autonomous taxi service in Arizona, which attracted large publicity [2]. Waymo has spent around nine years developing and improving its Automated Driving Systems (ADSs) using various advanced engineering technologies, e.g., machine learning and computer vision [2]. These cutting-edge technologies have greatly assisted its driverless vehicles in understanding the world, making the right decisions, and taking the right actions at the right time [2].

Owing to the development of autonomous driving, many scientific articles have been published over the past decade, and their citations¹ are increasing exponentially, as shown in Fig. 1. The numbers of both publications and citations per year have grown steadily since 2010 and rose to a new height in the last year. However, most autonomous driving overview articles focus only on a specific technological area, such as Advanced Driver Assistance Systems (ADAS) [3], vehicle control [4], visual environment perception [5], etc. There is therefore a strong motivation to provide readers with a comprehensive literature review on autonomous driving, covering systems and algorithms, open source datasets, industry leaders, autonomous car applications and existing challenges.

¹ https://www.webofknowledge.com

[Fig. 1. Numbers of publications and citations in autonomous driving research over the past decade.]

II. AC SYSTEMS

ADSs enable ACs to operate in a real-world environment without intervention by Human Drivers (HDs). Each ADS consists of two main components: hardware (car sensors and hardware controllers, i.e., throttle, brake, wheel, etc.) and software (functional groups).

The software has been modeled in several different architectures, such as Stanley (Grand Challenge) [6], Junior (Urban Challenge) [7], Boss (Urban Challenge) [8] and the Tongji AC [9]. The Stanley software architecture [6] comprises four modules: sensor interface, perception, planning and control, and user interface. The Junior software architecture [7] has five parts: sensor interface, perception, navigation (planning and control), drive-by-wire interface (user interface and vehicle interface) and global services. Boss [8] uses a three-layer architecture: mission, behavioral and motion planning. Tongji's ADS [9] partitions the software architecture into perception, decision and planning, control and chassis.

In this paper, we divide the software architecture into five modules: perception, localization and mapping, prediction, planning and control, as shown in Fig. 2; this is very similar to the Tongji ADS software architecture [9].

[Fig. 2. Software architecture of our proposed ADS.]
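To make the module boundaries concrete, below is a minimal sketch of how such a five-module pipeline could be wired together. The class and method names are our own illustrative assumptions, not code from [9] or from any production ADS.

```python
# Minimal sketch of the five-module ADS software architecture described above.
# All names here are hypothetical; each stage is stubbed only to show the
# data flow between modules.

class ADSPipeline:
    """Runs one perception-to-control cycle per batch of sensor data."""

    def run_cycle(self, sensor_data):
        world = self.perceive(sensor_data)                       # perception
        pose, world_map = self.localize(sensor_data, world)      # localization and mapping
        agent_futures = self.predict(world, world_map)           # prediction
        trajectory = self.plan(pose, world_map, agent_futures)   # planning
        return self.control(pose, trajectory)                    # control commands

    # Each stage below is a substantial subsystem in practice.
    def perceive(self, sensor_data): ...
    def localize(self, sensor_data, world): ...
    def predict(self, world, world_map): ...
    def plan(self, pose, world_map, agent_futures): ...
    def control(self, pose, trajectory): ...
```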
The remainder of this section introduces the driving automation levels and presents the AC sensors and hardware controllers.

A. Driving Automation Levels

According to the Society of Automotive Engineers (SAE International), driving automation can be categorized into six levels, as shown in Table I [10]. The HD is responsible for Driving Environment Monitoring (DEM) in level 0-2 ADSs, while this responsibility shifts to the system in level 3-5 ADSs. From level 4 upwards, the HD is no longer responsible for the Dynamic Driving Task Fallback (DDTF), and at level 5 the ADS never needs to ask the HD to intervene. The state-of-the-art ADSs are mainly at levels 2 and 3, and a long time may still be needed to achieve higher automation levels [11].

TABLE I
SAE LEVELS OF DRIVING AUTOMATION

Level  Name                     Driver        DEM     DDTF
0      No automation            HD            HD      HD
1      Driver assistance        HD & system   HD      HD
2      Partial automation       System        HD      HD
3      Conditional automation   System        System  HD
4      High automation          System        System  System
5      Full automation          System        System  System

B. AC Sensors

The sensors mounted on ACs are generally used to perceive the environment. Each sensor is chosen as a trade-off between sampling rate, Field of View (FoV), accuracy, range, cost and overall system complexity [12]. The most commonly used AC sensors are passive ones (e.g., cameras), active ones (e.g., Lidar, Radar and ultrasonic transceivers) and other sensor types, e.g., the Global Positioning System (GPS), Inertial Measurement Units (IMUs) and wheel encoders [12].

Cameras capture 2D images by collecting the light reflected from 3D environment objects. Image quality is usually subject to environmental conditions, e.g., weather and illumination. Computer vision and machine learning algorithms are generally used to extract useful information from the captured images/videos [13]. For example, images captured from different viewpoints, i.e., using a single movable camera or multiple synchronized cameras, can be used to acquire 3D world geometry information [14].

Lidar illuminates a target with pulsed laser light and measures the distance from the source to the target by analyzing the reflected pulses [15]. Due to its high 3D geometry accuracy, Lidar is generally used to create high-definition world maps [15]. Lidars are usually mounted on different parts of the AC, e.g., the roof, sides and front, for different purposes [16], [17].

Radars measure the range and radial velocity of an object accurately by transmitting an electromagnetic wave and analyzing the reflected one [18]. They are particularly good at detecting metallic objects, but can also detect non-metallic objects, such as pedestrians and trees, at short distances [12]. Radars have been established in the automotive industry for many years to enable ADAS features, such as autonomous emergency braking and adaptive cruise control [12].

In a similar way to Radar, ultrasonic transducers calculate the distance to an object by measuring the time between transmitting an ultrasonic signal and receiving its echo [19]. Ultrasonic transducers are commonly utilized for AC localization and navigation [20].
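As a concrete illustration of this time-of-flight principle, the sketch below converts an echo round-trip time into a range estimate. The speed-of-sound constant and the function name are our own assumptions for illustration, not taken from [19] or [20].

```python
# Time-of-flight ranging as used by ultrasonic transducers (illustrative sketch).
# The object distance is half the round-trip distance travelled by the pulse:
#     d = c * t / 2
# where c is the speed of sound in air and t is the echo round-trip time.

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in dry air at 20 °C

def echo_to_range(round_trip_time_s: float) -> float:
    """Convert an ultrasonic echo round-trip time (seconds) to a range (meters)."""
    if round_trip_time_s < 0:
        raise ValueError("round-trip time must be non-negative")
    return SPEED_OF_SOUND_M_S * round_trip_time_s / 2.0

# Example: an echo received 5.8 ms after transmission corresponds to an
# obstacle roughly 1 m away.
print(f"{echo_to_range(5.8e-3):.2f} m")  # -> 0.99 m
```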
GPS, a satellite-based radio-navigation system owned by the US government, provides time and geolocation information for ACs. However, GPS signals are very weak and can easily be blocked by obstacles, such as buildings and mountains, resulting in GPS-denied regions, e.g., the so-called urban canyons [21]. Therefore, IMUs are commonly integrated into GPS devices to ensure AC localization in such places [22]. Wheel encoders are also prevalently utilized to determine the AC position, speed and direction by measuring electrical signals related to wheel motion [23].

C. Hardware Controllers

The AC hardware controllers are the torque steering motor, electronic brake booster, electronic throttle, gear shifter and parking brake. Vehicle states, such as wheel speed and steering angle, are sensed automatically and sent to the computer system via a Controller Area Network (CAN) bus. This enables either the HD or the ADS to control the throttle, brake and steering wheel [24].

III. AC SOFTWARE

A. Perception

The perception module analyzes the raw sensor data and outputs an understanding of the environment to be used by the AC [25]. This process is similar to human visual cognition. The perception module typically includes object (free space, lane, vehicle, pedestrian, road damage, etc.) detection and tracking, and 3D world reconstruction (using structure from motion, stereo vision, etc.), among other tasks [26], [27].

The state-of-the-art perception technologies fall into two categories: computer vision-based and machine learning-based [28]. The former generally addresses visual perception problems by formulating them with explicit projective geometry models and finding the best solution using optimization approaches. For example, in [29], the horizontal and vertical coordinates of multiple vanishing points were modeled using a parabola and a quartic polynomial, respectively; the lanes were then detected using these two polynomial functions. Machine learning-based technologies learn the best solution to a given perception problem by employing data-driven classification and/or regression models, such as Convolutional Neural Networks (CNNs) [30]. For instance, some deep CNNs, e.g., SegNet [31] and U-Net [32], have achieved impressive performance in semantic image segmentation and object classification. Such CNNs can also easily be adapted to other similar perception tasks via transfer learning (TL) [25]. Visual world perception can be complemented by other sensors, e.g., Lidars or Radars, for obstacle detection/localization and for 3D world modeling; fusing multi-sensor information can produce superior world understanding.

B. Localization and Mapping

Using the sensor data and perception output, the localization and mapping module can not only estimate the AC location, but also build and update a 3D world map [25]. This topic has become very popular since the concept of Simultaneous Localization and Mapping (SLAM) was introduced in 1986 [33]. State-of-the-art SLAM systems are generally classified as filter-based [34] or optimization-based [35]. Filter-based SLAM systems are derived from Bayesian filtering [35]; they iteratively estimate the AC pose and update the 3D environmental map by incrementally integrating the sensor data. The most commonly used filters are the Extended Kalman Filter (EKF) [36], the Unscented Kalman Filter (UKF) [37], the Information Filter (IF) [38] and the Particle Filter (PF) [39].
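To make the Bayesian filtering recursion concrete, the sketch below implements the standard EKF predict/update cycle [36]. The motion and measurement models, the noise matrices and all names are simplified assumptions for illustration, not a complete SLAM system.

```python
import numpy as np

# Illustrative EKF predict/update cycle for state estimation, as used by
# filter-based localization/SLAM systems. f and h are the (possibly
# nonlinear) motion and measurement models; F and H are their Jacobians.

def ekf_predict(x, P, f, F, Q):
    """Propagate the state mean x and covariance P through the motion model."""
    x_pred = f(x)              # nonlinear motion model
    P_pred = F @ P @ F.T + Q   # linearized covariance propagation
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, h, H, R):
    """Correct the prediction with measurement z via the measurement model."""
    y = z - h(x_pred)                     # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P

# Example: a 1D position with an identity motion model and a direct
# position measurement (all values are made up for the demo).
x, P = np.array([0.0]), np.eye(1)
F = H = np.eye(1)
Q = R = np.eye(1) * 0.01
x, P = ekf_predict(x, P, lambda s: s, F, Q)
x, P = ekf_update(x, P, np.array([1.0]), lambda s: s, H, R)
```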
Optimization-based SLAM approaches, on the other hand, first identify the problem constraints by finding correspondences between new observations and the map. They then compute and refine the previous AC poses and update the 3D environmental map. Optimization-based SLAM approaches can be divided into two main branches: Bundle Adjustment (BA) and graph SLAM [35]. The former jointly optimizes the 3D world map and the camera poses by minimizing an error function with optimization techniques such as the Gauss-Newton method and gradient descent [40]. The latter models the localization problem as a graph and solves it by minimizing an error function defined over the vehicle poses [41].

C. Prediction

The prediction module analyzes the motion patterns of the other traffic agents and predicts the future AC trajectories [42], which enables the AC to make appropriate navigation decisions. Current prediction approaches can be grouped into two main categories: model-based and data-driven [43]. The former computes the future AC motion by propagating its kinematic state (position, speed and acceleration) over time, based on the underlying physical system kinematics and dynamics [43]. For example, the Mercedes-Benz motion prediction component employs map information as a constraint to compute the next AC position [44]. A Kalman filter [45] works well for short-term predictions, but its performance degrades over long horizons, as it ignores the surrounding context, e.g., roads and traffic rules [46]. Furthermore, a pedestrian motion prediction model can be formed based on attractive and repulsive forces [47]. With recent advances in Artificial Intelligence (AI) and High-Performance Computing (HPC), many data-driven techniques, e.g., Hidden Markov Models (HMMs) [48], Bayesian Networks (BNs) [49] and Gaussian Process (GP) regression, have been utilized to predict AC states. In recent years, researchers have also modeled the environmental context using Inverse Reinforcement Learning (IRL) [50]. For example, an inverse optimal control method was employed in [51] to predict pedestrian paths.

D. Planning

The planning module determines possible safe AC navigation routes based on the perception, localization and mapping, and prediction information [52]. Planning tasks can mainly be classified into path, maneuver and trajectory planning [53]. A path is a list of geometric waypoints that the AC should follow in order to reach its destination without colliding with obstacles [54]. The most commonly used path planning techniques include Dijkstra's algorithm [55], dynamic programming [56], A* [57] and state lattices [58]. Maneuver planning is a higher-level AC motion characterization process, as it also takes traffic rules and other AC states into consideration [54]. A trajectory is generally represented by a sequence of AC states. After the best path and maneuver have been found, a trajectory satisfying the motion model and state constraints must be generated, as this ensures traffic safety and comfort.
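As an illustration of graph-search-based path planning, the sketch below runs A* [57] on a small occupancy grid. The 4-connected grid, the Manhattan heuristic and the function name are our own simplifying assumptions; real AC planners typically search state lattices or continuous spaces.

```python
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None.
    grid is a 2D list where 0 marks a free cell and 1 an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    open_set = [(h(start), 0, start)]   # heap entries are (f, g, cell)
    parent, g_best, closed = {start: None}, {start: 0}, set()
    while open_set:
        _, g, cell = heapq.heappop(open_set)
        if cell in closed:
            continue                    # already expanded with a lower cost
        closed.add(cell)
        if cell == goal:                # reconstruct the path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1              # unit cost per grid move
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    parent[nxt] = cell
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None                         # goal unreachable

# Example: plan around a wall of obstacles on a 3x3 grid.
print(astar([[0, 1, 0], [0, 1, 0], [0, 0, 0]], (0, 0), (0, 2)))
```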
E. Control

The control module sends appropriate commands to the throttle, brakes and steering, based on the predicted trajectory and the estimated vehicle states [59]. It enables the AC to follow the planned trajectory as closely as possible. The controller parameters can be estimated by minimizing an error function (deviation) between the ideal and observed states. The most prevalently used approaches for minimizing such an error function are Proportional-Integral-Derivative (PID) control [60], Linear-Quadratic Regulator (LQR) control [61] and Model Predictive Control (MPC) [62]. A PID controller is a control-loop feedback mechanism that employs proportional, integral and derivative terms to minimize the error function [60]. An LQR controller minimizes the error function when the system dynamics are represented by a set of linear differential equations and the cost is described by a quadratic function [61]. MPC is an advanced process control technique that relies on a dynamic process model [62]. These three controllers have their own benefits and drawbacks, and AC control modules generally employ a mixture of them. For example, the Junior AC [63] employs MPC and PID for low-level feedback control tasks, e.g., applying torque to achieve a desired wheel angle. Baidu Apollo employs a mixture of all three: PID is used for feed-forward control, LQR for wheel angle control, and MPC to optimize the PID and LQR controller parameters [64].
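Since PID is the simplest of the three, a minimal discrete-time sketch is given below. The gains and the cross-track-error example are assumptions for illustration, not values from any production controller.

```python
# Illustrative discrete-time PID controller [60]: the command is a weighted
# sum of the current error, its running integral and its rate of change.

class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error: float, dt: float) -> float:
        """Return the control command for the current tracking error."""
        self.integral += error * dt     # integral term removes steady-state bias
        derivative = ((error - self.prev_error) / dt
                      if self.prev_error is not None else 0.0)  # damps overshoot
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: correct a 0.5 m cross-track error at a 100 Hz control rate
# (gains are made up for the demo).
controller = PID(kp=1.0, ki=0.05, kd=0.2)
steering_cmd = controller.step(error=0.5, dt=0.01)
```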
IV. OPEN SOURCE DATASETS

Over the past decade, many open source datasets have been published to support autonomous driving research; here we enumerate only the most cited ones. Cityscapes [65] is a large-scale dataset that can be used for both pixel-level and instance-level semantic image segmentation. ApolloScape [64] can be used for various AC perception tasks, such as scene parsing, car instance understanding, lane segmentation, self-localization, trajectory estimation, as well as object detection and tracking. Furthermore, KITTI [66] offers visual datasets for stereo and optical flow estimation, object detection and tracking, road segmentation, odometry estimation and semantic image segmentation. 6D-vision [67] uses a stereo camera to perceive the 3D environment, offering datasets for stereo, optical flow and semantic image segmentation.

V. INDUSTRY LEADERS

Recently, investors have started to throw their money at the possible winners of the race to commercialize ACs [68]. Tesla's valuation has been soaring since 2016, leading underwriters to speculate that the company will spawn a self-driving fleet within a couple of years [68]. In addition, GM's shares have risen by 20 percent since its plan to build driverless vehicles was reported in 2017 [68]. Waymo had tested its self-driving cars over a distance of eight million miles in the US by July 2018 [69], and its Chrysler Pacifica minivans can now navigate highways in San Francisco at full speed [68]. GM and Waymo had the fewest accidents in the last year: GM had 22 collisions over 212 kilometers, while Waymo had only three collisions over more than 563 kilometers [69]. In addition to the industry giants, world-class universities have also accelerated the development of autonomous driving; by combining education with production and scientific research, they contribute more effectively to enterprises, the economy and society.

VI. AC APPLICATIONS

Autonomous driving technology can be implemented in any type of vehicle, such as taxis, coaches, tour buses, delivery vans, etc. These vehicles can not only relieve humans from labor-intensive and tedious work, but also ensure their safety. For example, road quality assessment vehicles equipped with autonomous driving technology can repair detected road damage while navigating across the city [13], [70]–[72]. Furthermore, public transport will become more efficient and secure, as coaches and taxis will be able to communicate with each other intelligently.

VII. EXISTING CHALLENGES

Although autonomous driving technology has developed rapidly over the past decade, many challenges remain. For example, perception modules cannot perform well in poor weather and/or illumination conditions, or in complex urban environments [73]. In addition, most perception methods are computationally intensive and cannot run in real time on embedded, resource-limited hardware. Furthermore, the use of current SLAM approaches in large-scale experiments remains limited, due to their long-term instability [35]. Another important issue is how to fuse AC sensor data to create an accurate semantic 3D world model in a fast and cheap way. Moreover, "when can people truly accept autonomous driving and autonomous cars?" is still a good topic for discussion and poses serious ethical issues.

VIII. CONCLUSION

This paper presented a brief but comprehensive overview of the key ingredients of autonomous cars. We introduced the six driving automation levels, and then detailed the sensors equipped on autonomous cars for data collection, as well as the hardware controllers. Furthermore, we briefly discussed each software module of the ADS, i.e., perception, localization and mapping, prediction, planning and control. The open source datasets, such as KITTI, ApolloScape and 6D-vision, were then introduced. Finally, we discussed the current autonomous driving industry leaders, the possible applications of autonomous driving and the existing challenges in this research area.

REFERENCES

[1] J. A. Brink, R. L. Arenson, T. M. Grist, J. S. Lewin, and D. Enzmann, "Bits and bytes: the future of radiology lies in informatics and information technology," European Radiology, vol. 27, no. 9, pp. 3647–3651, 2017.
[2] "Waymo safety report: On the road to fully self-driving," https://storage.googleapis.com/sdc-prod/v1/safety-report/SafetyReport2018.pdf, accessed: 2019-06-07.
[3] R. Okuda, Y. Kajiwara, and K. Terashima, "A survey of technical trend of ADAS and autonomous driving," in Proc. 2014 International Symposium on VLSI Technology, Systems and Application (VLSI-TSA), Apr. 2014, pp. 1–4.
[4] D. González, J. Pérez, V. Milanés, and F. Nashashibi, "A review of motion planning techniques for automated vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 1135–1145, Apr. 2016.
[5] A. Mukhtar, L. Xia, and T. B. Tang, "Vehicle detection techniques for collision avoidance systems: A review," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 5, pp. 2318–2338, Oct. 2015.
[6] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann et al., "Stanley: The robot that won the DARPA Grand Challenge," Journal of Field Robotics, vol. 23, no. 9, pp. 661–692, 2006.
[7] M. Montemerlo, J. Becker, S. Bhat, H. Dahlkamp, D. Dolgov, S. Ettinger, D. Haehnel, T. Hilden, G. Hoffmann, B. Huhnke et al., "Junior: The Stanford entry in the Urban Challenge," Journal of Field Robotics, vol. 25, no. 9, pp. 569–597, 2008.
[8] C. Urmson, J. Anhalt, D. Bagnell, C. Baker, R. Bittner, M. Clark, J. Dolan, D. Duggins, T. Galatali, C. Geyer et al., "Autonomous driving in urban environments: Boss and the Urban Challenge," Journal of Field Robotics, vol. 25, no. 8, pp. 425–466, 2008.
[9] W. Zong, C. Zhang, Z. Wang, J. Zhu, and Q. Chen, "Architecture design and implementation of an autonomous vehicle," IEEE Access, vol. 6, pp. 21956–21970, 2018.
[10] SAE International, "Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles," SAE International, (J3016), 2016.
[11] J. Hecht, "Lidar for self-driving cars," Optics and Photonics News, vol. 29, no. 1, pp. 26–33, Jan. 2018.
[12] Felix, "Sensor set design patterns for autonomous vehicles." [Online]. Available: https://autonomous-driving.org/2019/01/25/positioning-sensors-for-autonomous-vehicles/
[13] R. Fan, "Real-time computer stereo vision for automotive applications," Ph.D. dissertation, University of Bristol, 2018.
[14] R. Fan, X. Ai, and N. Dahnoun, "Road surface 3D reconstruction based on dense subpixel disparity map estimation," IEEE Transactions on Image Processing, vol. 27, no. 6, pp. 3025–3035, 2018.
[15] "Lidar–light detection and ranging–is a remote sensing method used to examine the surface of the earth," NOAA, 2013.
[16] J. Jiao, Q. Liao, Y. Zhu, T. Liu, Y. Yu, R. Fan, L. Wang, and M. Liu, "A novel dual-lidar calibration algorithm using planar surfaces," arXiv preprint arXiv:1904.12116, 2019.
[17] J. Jiao, Y. Yu, Q. Liao, H. Ye, and M. Liu, "Automatic calibration of multiple 3D lidars in urban environments," arXiv preprint arXiv:1905.04912, 2019.
[18] T. Bureau, "Radar definition," Public Works and Government Services Canada, 2013.
[19] W. J. Westerveld, "Silicon photonic micro-ring resonators to sense strain and ultrasound," 2014.
[20] Y. Liu, R. Fan, B. Yu, M. J. Bocus, M. Liu, H. Ni, J. Fan, and S. Mao, "Mobile robot localisation and navigation using LEGO NXT and ultrasonic sensor," in Proc. IEEE Int. Conf. Robotics and Biomimetics (ROBIO), Dec. 2018, pp. 1088–1093.
[21] N. Samama, Global Positioning: Technologies and Performance. John Wiley & Sons, 2008, vol. 7.
[22] D. B. Cox Jr, "Integration of GPS with inertial navigation systems," Navigation, vol. 25, no. 2, pp. 236–245, 1978.
[23] S. Trahey, "Choosing a code wheel: A detailed look at how encoders work," Small Form Factors, 2008.
[24] V. Bhandari, Design of Machine Elements. Tata McGraw-Hill Education, 2010.
[25] M. Maurer, J. C. Gerdes, B. Lenz, H. Winner et al., Autonomous Driving. Berlin, Germany: Springer Berlin Heidelberg, 2016.
[26] R. Fan, V. Prokhorov, and N. Dahnoun, "Faster-than-real-time linear lane detection implementation using SoC DSP TMS320C6678," in 2016 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2016, pp. 306–311.
[27] R. Fan and N. Dahnoun, "Real-time implementation of stereo vision based on optimised normalised cross-correlation and propagated search range on a GPU," in 2017 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2017, pp. 1–6.
[28] B. Apolloni, A. Ghosh, F. Alpaslan, and S. Patnaik, Machine Learning and Robot Perception. Springer Science & Business Media, 2005, vol. 7.
[29] U. Ozgunalp, R. Fan, X. Ai, and N. Dahnoun, "Multiple lane detection algorithm based on novel dense vanishing point estimation," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 3, pp. 621–632, 2017.
[30] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, "DeepDriving: Learning affordance for direct perception in autonomous driving," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2722–2730.
[31] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A deep convolutional encoder-decoder architecture for image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
[32] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
[33] R. C. Smith and P. Cheeseman, "On the representation and estimation of spatial uncertainty," The International Journal of Robotics Research, vol. 5, no. 4, pp. 56–68, 1986.
[34] H. Ye, Y. Chen, and M. Liu, "Tightly coupled 3D lidar inertial odometry and mapping," in 2019 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2019.
[35] G. Bresson, Z. Alsayed, L. Yu, and S. Glaser, "Simultaneous localization and mapping: A survey of current trends in autonomous driving," IEEE Transactions on Intelligent Vehicles, vol. 2, no. 3, pp. 194–220, 2017.
[36] R. E. Kalman, "A new approach to linear filtering and prediction problems," J. Basic Eng., vol. 82, p. 35, 1960.
[37] S. J. Julier and J. K. Uhlmann, "New extension of the Kalman filter to nonlinear systems," 1997.
[38] P. S. Maybeck and G. M. Siouris, "Stochastic models, estimation, and control, volume I," vol. 10, pp. 282–282, 1980.
[39] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, "Monte Carlo localization for mobile robots," in Proc. IEEE Int. Conf. Robotics and Automation, vol. 2, May 1999, pp. 1322–1328.
[40] S. S. Rao, Engineering Optimization: Theory and Practice. John Wiley & Sons, 2009.
[41] S. Thrun and J. J. Leonard, "Simultaneous localization and mapping," Springer Handbook of Robotics, pp. 871–889, 2008.
[42] Y. Ma, X. Zhu, S. Zhang, R. Yang, W. Wang, and D. Manocha, "TrafficPredict: Trajectory prediction for heterogeneous traffic-agents," arXiv preprint arXiv:1811.02146, 2018.
[43] S. Lefèvre, D. Vasquez, and C. Laugier, "A survey on motion prediction and risk assessment for intelligent vehicles," ROBOMECH Journal, vol. 1, no. 1, p. 1, 2014.
[44] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang et al., "End to end learning for self-driving cars," arXiv preprint, 2016.
[45] R. E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
[46] N. Djuric, V. Radosavljevic, H. Cui, T. Nguyen, F.-C. Chou, T.-H. Lin, and J. Schneider, "Motion prediction of traffic actors for autonomous driving using deep convolutional networks," arXiv preprint arXiv:1808.05819, 2018.
[47] D. Helbing and P. Molnar, "Social force model for pedestrian dynamics," Physical Review E, vol. 51, no. 5, p. 4282, 1995.
[48] T. Streubel and K. H. Hoffmann, "Prediction of driver intended path at intersections," in 2014 IEEE Intelligent Vehicles Symposium Proceedings. IEEE, 2014, pp. 134–139.
[49] M. Schreier, V. Willert, and J. Adamy, "An integrated approach to maneuver-based trajectory prediction and criticality assessment in arbitrary road environments," IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 10, pp. 2751–2766, 2016.
[50] A. Y. Ng and S. J. Russell, "Algorithms for inverse reinforcement learning," in ICML, vol. 1, 2000, p. 2.
[51] K. M. Kitani, B. D. Ziebart, J. A. Bagnell, and M. Hebert, "Activity forecasting," in European Conference on Computer Vision. Springer, 2012, pp. 201–214.
[52] C. Katrakazas, M. Quddus, W.-H. Chen, and L. Deka, "Real-time motion planning methods for autonomous on-road driving: State-of-the-art and future research directions," Transportation Research Part C: Emerging Technologies, vol. 60, pp. 416–442, 2015.
[53] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli, "A survey of motion planning and control techniques for self-driving urban vehicles," IEEE Transactions on Intelligent Vehicles, vol. 1, no. 1, pp. 33–55, 2016.
[54] D. González, J. Pérez, V. Milanés, and F. Nashashibi, "A review of motion planning techniques for automated vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 1135–1145, 2016.
[55] T. H. Cormen, "Section 24.3: Dijkstra's algorithm," Introduction to Algorithms, pp. 595–601, 2001.
[56] J. Jiao, R. Fan, H. Ma, and M. Liu, "Using DP towards a shortest path problem-related application," arXiv preprint arXiv:1903.02765, 2019.
[57] D. Delling, P. Sanders, D. Schultes, and D. Wagner, "Engineering route planning algorithms," pp. 117–139, 2009.
[58] A. González-Sieira, M. Mucientes, and A. Bugarín, "A state lattice approach for motion planning under control and sensor uncertainty," in ROBOT2013: First Iberian Robotics Conference. Springer, Jan. 2014. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-03653-3_19
[59] D. Gruyer, S. Demmel, V. Magnier, and R. Belaroussi, "Multi-hypotheses tracking using the Dempster–Shafer theory, application to ambiguous road context," Information Fusion, vol. 29, pp. 40–56, 2016.
[60] M. Araki, "PID control," Control Systems, Robotics and Automation: System Analysis and Control: Classical Approaches II, pp. 58–79, 2009.
[61] G. C. Goodwin, S. F. Graebe, M. E. Salgado et al., Control System Design. Upper Saddle River, NJ: Prentice Hall, 2001.
[62] C. E. Garcia, D. M. Prett, and M. Morari, "Model predictive control: Theory and practice–a survey," Automatica, vol. 25, no. 3, pp. 335–348, 1989.
[63] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt et al., "Towards fully autonomous driving: Systems and algorithms," in 2011 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2011, pp. 163–168.
[64] X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, and R. Yang, "The ApolloScape dataset for autonomous driving," in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), Jun. 2018, pp. 1067–10676.
[65] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The Cityscapes dataset for semantic urban scene understanding," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3213–3223.
[66] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
[67] H. Badino and T. Kanade, "A head-wearable short-baseline stereo system for the simultaneous estimation of structure and motion," 2011.
[68] D. Welch and E. Behrmann, "Who's winning the self-driving car race?" https://www.bloomberg.com/news/features/2018-05-07/who-s-winning-the-self-driving-car-race, accessed: 2019-04-21.
[69] C. Coberly, "Waymo's self-driving car fleet has racked up 8 million miles in total driving distance on public roads," https://www.techspot.com/news/75608-waymo-self-driving-car-fleet-racks-up-8.html, accessed: 2019-04-21.
[70] R. Fan, Y. Liu, X. Yang, M. J. Bocus, N. Dahnoun, and S. Tancock, "Real-time stereo vision for road surface 3-D reconstruction," in 2018 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2018, pp. 1–6.
[71] R. Fan, M. J. Bocus, and N. Dahnoun, "A novel disparity transformation algorithm for road segmentation," Information Processing Letters, vol. 140, pp. 18–24, 2018.
[72] R. Fan, M. J. Bocus, Y. Zhu, J. Jiao, L. Wang, F. Ma, S. Cheng, and M. Liu, "Road crack detection using deep convolutional neural network and adaptive thresholding," arXiv preprint, 2019.
[73] J. Van Brummelen, M. O'Brien, D. Gruyer, and H. Najjaran, "Autonomous vehicle perception: The technology of today and tomorrow," Transportation Research Part C: Emerging Technologies, vol. 89, pp. 384–406, 2018.
