Publication:
Deep reinforcement learning for autonomous driving

dc.contributor.author: Osman, Hassan Abdalla Abdelkarim
dc.date.accessioned: 2024-10-08T03:17:51Z
dc.date.available: 2024-10-08T03:17:51Z
dc.date.issued: 2019
dc.description.abstract: Recently, both the automotive industry and research communities have directed attention towards Autonomous Driving (AD) to tackle issues such as traffic congestion and road accidents. End-to-end driving, in which sensory inputs are mapped directly to control outputs, has gained interest. Machine learning approaches, particularly Deep Learning (DL), have been used for end-to-end driving; however, DL requires expensive labelling. Another approach is Deep Reinforcement Learning (DRL), but work using DRL predominantly learns policies from a single input sensor modality, such as the image pixels of the state. The state-of-the-art DRL algorithm is Proximal Policy Optimization (PPO). One shortcoming of using PPO for autonomous driving with inputs from multiple sensors is poor robustness to sensor defectiveness or sensor failure, owing to naïve sensor fusion. This thesis investigates a stochastic regularization technique named Sensor Dropout (SD) in an attempt to address this shortcoming. Training and evaluation are carried out on a car racing simulator called TORCS. The inputs to the agent are captured from different sensing modalities, such as range-finders, proprioceptive sensors and a front-facing RGB camera, and are used to control the car's steering, acceleration and brakes. Sensor defectiveness is simulated by adding Gaussian noise to sensor readings, and sensor failure by blocking sensor inputs. Results show that using regularization requires longer training time and yields lower training performance. However, in settings where sensor readings are noisy, the PPO-SD agent displayed better driving behaviour, while the PPO agent suffered approximately a 59% drop in performance, in terms of rewards, compared to the PPO-SD agent. The same was observed in settings where sensor inputs are blocked.
dc.description.degreelevel: Master
dc.description.identifier: Thesis : Deep reinforcement learning for autonomous driving / by Hassan Abdalla Abdelkarim Osman
dc.description.identity: t11100409665HassanAbdallaAbdelKarimOsman
dc.description.kulliyah: Kulliyyah of Engineering
dc.description.notes: Thesis (MSMCT)--International Islamic University Malaysia, 2019.
dc.description.physicaldescription: xiv, 70 leaves : illustrations ; 30 cm.
dc.description.programme: Department of Mechatronics Engineering
dc.identifier.uri: https://studentrepo.iium.edu.my/handle/123456789/7096
dc.identifier.url: https://lib.iium.edu.my/mom/services/mom/document/getFile/mzMFl5D1fuz3MX1XUtY46LCEv1rNA0A020200709110726439
dc.language.iso: en
dc.publisher: Kuala Lumpur : Kulliyah of Engineering, International Islamic University Malaysia, 2019
dc.rights: Copyright International Islamic University Malaysia
dc.title: Deep reinforcement learning for autonomous driving
dc.type: Master Thesis
dspace.entity.type: Publication
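The Sensor Dropout regularization summarized in the abstract (randomly zeroing whole sensor modalities during training so the policy cannot over-rely on any single sensor) can be sketched roughly as below. This is a minimal illustrative sketch, not the thesis's implementation; the function name, modality layout, and drop probability are assumptions.

```python
import numpy as np

def sensor_dropout(modalities, drop_prob=0.5, rng=None):
    """Randomly zero out whole sensor modalities during training.

    modalities: list of 1-D numpy arrays, one per sensing modality
    (e.g. range-finder readings, proprioceptive values, camera features).
    At least one modality is always kept so the agent never receives
    an all-zero observation.
    """
    rng = rng or np.random.default_rng()
    keep = rng.random(len(modalities)) >= drop_prob
    if not keep.any():  # guarantee at least one surviving modality
        keep[rng.integers(len(modalities))] = True
    return [m if k else np.zeros_like(m) for m, k in zip(modalities, keep)]
```

Dropping modalities wholesale (rather than individual features) is what forces the policy to learn redundant representations across sensors, which is the property evaluated under noisy and blocked sensor readings.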

Files

Original bundle (showing 2 of 2)

- t11100409665HassanAbdallaAbdelKarimOsman_SEC_24.pdf (904.52 KB, Adobe Portable Document Format): 24-page file
- t11100409665HassanAbdallaAbdelKarimOsman_SEC.pdf (1.86 MB, Adobe Portable Document Format): full-text secured file
