
Browsing by Author "Osman, Hassan Abdalla Abdelkarim"

    Publication
    Deep reinforcement learning for autonomous driving
(Kuala Lumpur: Kulliyah of Engineering, International Islamic University Malaysia, 2019)
    Osman, Hassan Abdalla Abdelkarim
Recently, both the automotive industry and research communities have directed attention towards Autonomous Driving (AD) to tackle issues such as traffic congestion and road accidents. End-to-end driving has gained interest because sensory inputs are mapped directly to controls. Machine learning approaches, particularly Deep Learning (DL), have been used for end-to-end driving. However, DL requires expensive labelling. Another approach is Deep Reinforcement Learning (DRL). However, works that use DRL predominantly learn policies from a single input sensor modality, such as the image pixels of the state. The state-of-the-art DRL algorithm is Proximal Policy Optimization (PPO). One shortcoming of using PPO for autonomous driving with inputs from multiple sensors is a lack of robustness to sensor defectiveness or sensor failure, owing to naïve sensor fusion. This thesis investigates the use of a stochastic regularization technique named Sensor Dropout (SD) to address this shortcoming. Training and evaluation are carried out on the car racing simulator TORCS. Inputs to the agent were captured from different sensing modalities, such as range-finders, proprioceptive sensors and a front-facing RGB camera, and used to control the car’s steering, acceleration and brakes. To simulate sensor defectiveness and sensor failure, Gaussian noise is added to sensor readings and inputs from sensors are blocked, respectively. Results show that using regularization requires a longer training time and yields lower training performance. However, in settings where sensor readings are noisy, the PPO-SD agent displayed better driving behaviour: the PPO agent suffered a performance drop, in terms of rewards, of approximately 59% relative to the PPO-SD agent. The same held in settings where sensor readings are blocked.
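
To illustrate the Sensor Dropout idea described in the abstract (this is a minimal sketch of the general technique, not the author's implementation; the function name, modality sizes and drop probability are hypothetical), the following Python snippet randomly zeroes out whole sensor modalities during training and rescales the surviving ones, analogously to standard dropout:

    import numpy as np

    def sensor_dropout(modalities, drop_prob=0.2, rng=None):
        """Randomly zero out whole sensor modalities during training.

        modalities: list of 1-D numpy arrays, one per sensing modality
        (e.g. range-finder readings, proprioceptive features, camera
        features). At least one modality is always kept so the policy
        never receives an all-zero observation.
        """
        rng = rng or np.random.default_rng()
        keep = rng.random(len(modalities)) >= drop_prob
        if not keep.any():                       # guarantee one survivor
            keep[rng.integers(len(modalities))] = True
        # Rescale kept modalities so the expected input magnitude
        # is unchanged, mirroring standard dropout.
        scale = len(modalities) / keep.sum()
        return [m * scale if k else np.zeros_like(m)
                for m, k in zip(modalities, keep)]

    # Example: three toy modalities fused into one observation vector
    obs = sensor_dropout([np.ones(19), np.ones(4), np.ones(64)])
    state = np.concatenate(obs)   # fed to the PPO policy network

Because the policy is trained on observations with randomly missing modalities, it cannot rely on any single sensor, which is the property the thesis evaluates under noisy and blocked sensor readings.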

This site contains copyrighted unpublished research owned by International Islamic University Malaysia (IIUM) and/or the owner of the research. No part of any material contained in or derived from any unpublished research may be used without the written permission of the copyright holders or due acknowledgement.

Contact:
  • Dar al-Hikmah Library
    International Islamic University Malaysia (IIUM)
P.O. Box 10, 50728
    Kuala Lumpur
  • +603-64214829/4813
  • studentrepo@iium.edu.my
Copyright © 2024: Dar al-Hikmah Library, IIUM
by CDSOL