KICT - Masters Theses
Permanent URI for this collection: https://studentrepo.iium.edu.my/handle/123456789/9204
Browse
Browsing KICT - Masters Theses by Department
Now showing 1 - 8 of 8
Publication: A comparative study of software quality using the hybrid agile software development lifecycle and the plan-driven development model (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2025); Norzariyah Yahya. A software development methodology is a series of processes necessary for producing good-quality software. Software development methodologies fall into three established approaches: the plan-driven development model, agile methodologies, and the hybrid agile model. Agile methodologies are difficult to adopt for businesses whose clients impose strict timelines; thus companies, especially software development houses, prefer the plan-driven development model or the hybrid agile model as the reference model for a software project. However, which of the two models widely used by software development houses, the plan-driven development model or the hybrid agile model, results in higher-quality software? To answer this question, this research compares the plan-driven development model and the hybrid agile model in terms of the software quality produced by software engineers. It is an empirical study adopting an experimental design to compare the internal quality achieved under each model: a group of software engineers was divided into two teams, one using the hybrid agile model WaterScrumFall and the other using a plan-driven development model, the Waterfall model. Quality was measured through the complexity of the code, captured by Average Cyclomatic Complexity (ACC); the lines of code, indicating the size and scale of the codebase; and the dependencies between objects, addressing structural quality through Coupling Between Objects (CBO) and Lack of Cohesion in Methods (LCOM). A less complex project with low coupling between objects and high cohesion in methods contributes to high-quality source code, easier project management and maintenance, and lower cost. The team implementing the hybrid agile model produced lower ACC and CBO, but higher LCOM, than the team applying the plan-driven methodology. This research concludes that the hybrid agile model produces better software quality than the plan-driven methodology. It contributes the benefit of knowing which model produces better internal software quality and gives developers better insight into which software development methodology may lead to software that is less complex, with fewer lines of code, lower coupling between objects, and a higher lack of cohesion in methods.
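To make the abstract's internal-quality metrics concrete, the sketch below (not from the thesis) estimates cyclomatic complexity for Python source by counting decision points; ACC, as used above, is this per-function value averaged over a codebase. The particular set of decision nodes counted here is an illustrative assumption.

```python
# Minimal sketch (not from the thesis) of estimating cyclomatic
# complexity: start at 1 and add one for each decision point
# (if/elif, loops, exception handlers, boolean ops, ternaries).
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Count decision points in a piece of source code, plus one."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

snippet = """
def grade(score):
    if score >= 80:
        return "A"
    elif score >= 60:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(snippet))  # 3: one path plus two branches
```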
Publication: A machine learning based approach for quantifying muscle spasticity level in neurological disorder patients (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2025); Asmarani Ahmad Puzi; Shahrul Na’im Sidek; Ahmad Anwar Zainuddin; Salmah Anim Abu Hassan. Muscle spasticity is a condition that occurs in patients with neurological disorders when their muscles are stiff, tight, and resistant to stretching. The current assessment method, which relies on the subjective judgment of therapists using the Modified Ashworth Scale (MAS), introduces variability that may affect the rehabilitation process. In addition, many existing computational models are not aligned with clinical standards such as MAS, limiting their practical adoption in clinical settings. They also often overlook direction-specific movement phases and fail to distinguish muscle responses across different axes and muscle groups. To address these limitations, this research developed a quantitative assessment method for muscle spasticity characteristics based on mechanomyography (MMG) signals and MAS levels, utilising machine learning techniques. A Quantitative Spasticity Assessment Technology (QSAT) platform was developed, consisting of two sensors: a tri-axial accelerometer mechanomyography (ACC-MMG) sensor to measure the acceleration of biceps and triceps muscle contraction, and a potentiometer to measure the angular position of the forearm during flexion and extension movements. A comprehensive investigation was conducted to assess muscle spasticity level by recording ACC-MMG signals from patients' forearm musculature during flexion and extension movements using the QSAT platform. A total of 30 patients with neurological disorders were classified into five MAS levels (0, 1, 1+, 2, and 3), along with 10 healthy subjects serving as a baseline group. The pre-processed data comprised 48 features extracted from the ACC-MMG signals along the x, y, and z axes for both flexion and extension movements of the biceps and triceps; these features corresponded to the longitudinal, lateral, and transverse muscle orientations. For both movements, machine learning models were trained on a selected subset of 25 significant features and on the full set of 48 features, with performance compared to identify the more effective approach. Several machine learning algorithms, including Linear Discriminant Analysis (LDA), Decision Tree (DT), Support Vector Machine (SVM), and K-Nearest Neighbour (KNN), were tested. The KNN-based classifier performed best using a 90% training and 10% testing data split, surpassing the other classifiers. Specifically, at k = 15 with Euclidean distance, KNN achieved an accuracy of 91.29% for flexion using the significant features, with precision, recall, and F1-score of 91.64%, 91.25%, and 91.47%, respectively. For extension, the same configuration achieved an accuracy of 96.30% using the full feature set, with precision, recall, and F1-score of 96.53%, 96.30%, and 96.33%, respectively. These results indicate high classification performance, with minimal false positives or false negatives, particularly in distinguishing between different MAS levels. This research suggests that the muscle characteristic model embedded in the QSAT can serve as a standardised and objective assessment tool for measuring the spasticity level of the affected limb using a computational method, supporting clinical evaluations and enabling more effective rehabilitation strategies.
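As a rough illustration of the reported classifier configuration (KNN, k = 15, Euclidean distance, 90/10 split), here is a minimal scikit-learn sketch; the random feature matrix and labels are stand-ins for the thesis's 48 ACC-MMG features and six subject groups, not its data.

```python
# Minimal sketch of the reported KNN configuration (k = 15, Euclidean
# distance, 90/10 train/test split). The random data below stands in
# for the thesis's 48 ACC-MMG features and six class labels (five MAS
# levels plus a healthy baseline); it is not the study's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 48))          # 400 samples x 48 features
y = rng.integers(0, 6, size=400)        # 6 classes: MAS 0,1,1+,2,3 + healthy

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, random_state=0)

knn = KNeighborsClassifier(n_neighbors=15, metric="euclidean")
knn.fit(X_train, y_train)
print(classification_report(y_test, knn.predict(X_test)))
```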
Publication: An information dissemination model for scholars on cryptocurrencies (Kuala Lumpur : International Islamic University Malaysia, 2019); Ainin Soffia Husain. This dissertation reports on an information dissemination model for scholars on cryptocurrencies, a disruptive technology that spreads like a virus going viral. The objectives were to review existing information dissemination models, identify known factors influencing an information dissemination process, and propose an information dissemination model for scholars on cryptocurrencies. The research questions asked why epidemic models can be used for understanding cryptocurrencies, what the known factors influencing the relevant information dissemination model are, and how an information spreading model on cryptocurrencies could work for scholars. The methods used were a survey questionnaire, interviews, content analysis, and triangulation. The survey questionnaire was conducted at two cryptocurrency companies, content analysis covered ten cryptocurrency companies, and five scholars were interviewed. Triangulation was then carried out to produce the upgraded information dissemination model. A pilot study was run for the content analysis and interviews. The findings included variables (security system, user interaction, known identities, legal activities and user support) and themes (epidemic form, information dissemination model, virus/viral, spreading the information, proposed model of information dissemination, scholars playing a big role, information dissemination process, approval for information from scholars, and spreading to others) for disseminating information following the SIER epidemic model. An information dissemination model was proposed, and expert opinions were gathered.
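The abstract names a SIER epidemic model but gives no equations; as an assumption for illustration, the sketch below uses the standard SEIR-family formulation with the compartments reinterpreted for information spread (susceptible, exposed to the information, actively spreading, no longer spreading). All parameter values are invented.

```python
# Illustrative sketch only: the abstract names a "SIER" epidemic model
# but gives no equations, so this uses the standard SEIR-family form,
# reinterpreted for information spread. Parameter values are invented.
def simulate(beta=0.4, sigma=0.2, gamma=0.1, days=120, dt=1.0):
    S, E, I, R = 0.99, 0.01, 0.0, 0.0   # fractions of the population
    history = []
    for _ in range(int(days / dt)):
        new_exposed   = beta * S * I * dt    # contact with spreaders
        new_spreaders = sigma * E * dt       # exposed start spreading
        new_stopped   = gamma * I * dt       # spreaders lose interest
        S -= new_exposed
        E += new_exposed - new_spreaders
        I += new_spreaders - new_stopped
        R += new_stopped
        history.append((S, E, I, R))
    return history

peak_spreading = max(state[2] for state in simulate())
print(f"Peak fraction actively spreading: {peak_spreading:.3f}")
```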
Publication: Internet and its implications for business in Yemen (Kuala Lumpur : Kulliyyah of Information Communication Technology, International Islamic University Malaysia, 2003); Farooq Ahmad. This study examines the use of the Internet for business purposes in Yemen. The main sectors observed are banking and private trade organizations. It was assumed that security is a main concern in e-commerce. A survey of the banking industry and private business organizations in Yemen was conducted. Through the survey and interviews, a thorough study was performed of: the Internet facilities available in Yemen (the infrastructure, the service providers), the literacy and use of ICT in the above two sectors, the level of e-commerce adopted, the main hurdles in the adoption of e-commerce, and the measures required to increase the adoption of e-commerce. The study concluded that, in general, the two sectors realize the importance of e-commerce for their business and are willing to proceed further with it. The main causes of the delay in e-commerce adoption are discrepancies in the infrastructure (technology, service providers and human resources), the high cost of Internet facilities, bureaucratic hurdles in obtaining the facilities, and the non-availability of a secure environment. Despite their high concern about Internet security, their awareness of security hazards and protection measures is at a minimum. It is also observed that public awareness of ICT in general is very low. In light of the data collected, the study offers recommendations for the authorities interested in improving e-commerce in Yemen.
Publication: Secure digital forensics framework for vehicle maintenance service records (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2024); Hafizah Mansor; Normaziah Abdul Aziz. Automotive technology is soaring and has reached an advanced phase. Despite their benefits, these advancements may expose vehicles to additional threats, particularly regarding security and data management. Currently, handling maintenance service records is a manual process, which may lead to inaccuracy, unavailability, and limited consumer access. These concerns are compounded by the lack of trustworthy platforms or legitimate sources for retrieving the history of vehicle maintenance service records. The objectives of this research are to identify a list of stakeholders and define their roles and responsibilities, to identify the security requirements that need to be implemented, and to establish a secure framework and communication protocol to address these concerns. Stakeholders were identified through a snowball literature review method, and their roles were assessed to ensure comprehensive coverage of the vehicle maintenance ecosystem. The security requirements were derived using Threat and Vulnerability Risk Assessment (TVRA) and Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege (STRIDE) threat modelling methodologies, and were then used in designing the secure frameworks and communication protocol for vehicle maintenance service records. The proposed frameworks, designed for scheduled and repair maintenance services, use consortium blockchain technology, as it provides decentralised yet controlled access to ensure that only authorised stakeholders can contribute and validate data. This approach helps protect vehicle maintenance service records from unauthorised access and data manipulation. A secure communication protocol, focused mainly on the grant-access process for new vehicle owners, was developed and then formally analysed using the Scyther tool, chosen for its ability to rigorously verify the security of protocols. As a result, this research lists eight stakeholders involved in vehicle maintenance service records. The identified security requirements were implemented in designing the secure frameworks and communication protocol to maintain the confidentiality, integrity and availability of vehicle maintenance service records. The integration of these solutions not only safeguards maintenance records but also supports digital forensics, providing a robust foundation for advancing the management and security of vehicle maintenance data.
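To make the tamper-evidence property concrete, here is a minimal sketch (not the thesis's framework) of hash-chained maintenance records: each entry commits to the previous entry's hash, so altering any past record invalidates verification of everything after it, which is the core integrity guarantee a blockchain ledger extends across multiple validators.

```python
# Minimal sketch (not the thesis's design) of hash-chaining maintenance
# records: each entry commits to the previous entry's hash, so editing
# any past record breaks verification of everything after it.
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

chain: list = []
append(chain, {"vin": "WXY123", "service": "oil change", "km": 30500})
append(chain, {"vin": "WXY123", "service": "brake pads", "km": 41200})
print(verify(chain))                # True
chain[0]["record"]["km"] = 10_000   # tamper with history
print(verify(chain))                # False
```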
Publication: Skyline query processing for large-scale and incomplete graphs using graph convolutional network (GCN) (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2025); Raini Hassan; Dini Oktarini Dwi Handayani. Skyline query processing is essential in multi-criteria decision-making, as it retrieves optimal results without requiring user-defined weights. Traditional skyline methods, however, face significant challenges when applied to large-scale and incomplete datasets. This study proposes a hybrid approach that integrates the ISkyline dominance graph technique with Graph Neural Networks (GNNs), specifically a Graph Convolutional Network (GCN), to improve skyline query performance under such conditions. The GCN component is used to predict skyline tuples in the presence of missing or incomplete data. The ISkyline algorithm serves as the foundation for identifying initial dominance relationships and labelling skyline points, enabling the GCN to learn Pareto-optimal patterns from partially incomplete data. Evaluation on both synthetic and real-world datasets demonstrates improved accuracy and efficiency compared to established methods such as ISkyline, SIDS, and OIS. On the CoIL 2000 dataset, the proposed GNN + ISkyline framework improved classification accuracy by 72%, F1-score by 71%, and AUC-ROC by 49% over the standalone ISkyline algorithm. This work demonstrates the potential for more efficient skyline query processing, supporting applications in e-commerce, finance, and smart data systems, while aligning with the 9th Sustainable Development Goal on industry, innovation, and infrastructure.
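A skyline query returns the tuples not dominated by any other tuple, where tuple a dominates tuple b if a is at least as good in every dimension and strictly better in at least one. The brute-force sketch below illustrates this definition only; it is not the ISkyline algorithm and does not handle incomplete data.

```python
# Minimal brute-force skyline sketch (illustrative; not the ISkyline
# algorithm). Lower values are better in every dimension here, e.g.
# (price, distance_to_beach) for hotels.
def dominates(a, b):
    """a dominates b: no worse anywhere, strictly better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

hotels = [(120, 3.0), (90, 5.0), (90, 2.0), (200, 1.0), (150, 2.5)]
print(skyline(hotels))  # [(90, 2.0), (200, 1.0)]
```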
Publication: Utilizing blockchain-secured deep packet inspection for trusted differentiated services (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2025); Adamu Abubakar Ibrahim; Akram M Z M Khedher; Andi Fitriah Abdul Kadir. Recent developments in networking have generated massive volumes of network traffic, creating many obstacles for network traffic management. Traditional deep packet inspection (DPI) is used to inspect, identify, classify and manage Internet Protocol (IP) network packets within the massive volume of traffic flowing through a connected network. However, integrating DPI at various inspection points at the network edges creates certain challenges for the connected networks. Conventional deep packet inspection also poses security challenges related to identifying tampered, manipulated and exploited IP packets within transmissions classified as critical data with higher-priority access via the differentiated services code point (DSCP). The purpose of this research is to design and develop a blockchain-secured deep packet inspection system that secures DSCP tagging, and to examine the impact of blockchain on the DPI system and on real-time operations. The study explores deep learning models for their ability to learn hierarchical feature representations through a series of layers and, with memory, to extract features from time-series data, as well as the use of blockchain in the DPI pipeline to enhance network performance and security. Four deep learning classifiers were used: Autoencoders, Convolutional Neural Network, Long Short-Term Memory and Recurrent Neural Network, with accuracies of 92.03%, 96.88%, 99.64% and 99.72% respectively; the RNN, at 99.72%, outperformed the other adopted classifiers. An ensemble based on a soft-voting mechanism, combining the deployed models into a single model, achieved an accuracy of 99.28%. Additionally, an Ethereum-based blockchain was developed and deployed within the DPI ecosystem to store critical IP packet information passed forward from the classification phase of the neural network. The IP packet data was also stored in NoSQL database systems for future analysis of different use cases. This study contributes to the analysis of the security issues related to DSCP that DPI faces with the growing volume of network data.
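The soft-voting ensemble mentioned above averages the class-probability outputs of the individual classifiers and picks the class with the highest mean probability. A minimal sketch with invented probabilities (the real inputs would come from the four trained classifiers):

```python
# Minimal sketch of soft voting (illustrative; not the thesis's models):
# average each classifier's predicted class probabilities, then pick
# the class with the highest mean probability.
import numpy as np

def soft_vote(prob_matrices):
    """prob_matrices: list of (n_samples, n_classes) probability arrays,
    one per classifier. Returns the predicted class index per sample."""
    mean_probs = np.mean(prob_matrices, axis=0)
    return mean_probs.argmax(axis=1)

# Hypothetical per-packet class probabilities from three classifiers
# over two traffic classes (e.g. "critical" vs "best-effort").
cnn  = np.array([[0.9, 0.1], [0.4, 0.6]])
lstm = np.array([[0.8, 0.2], [0.3, 0.7]])
rnn  = np.array([[0.7, 0.3], [0.6, 0.4]])
print(soft_vote([cnn, lstm, rnn]))  # [0 1]
```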
Publication: Vaccine hesitancy detection using BERT for multiple social media platforms (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2023); Suriani Sulaiman; Norlia Md Yusof. Vaccination has been proven to be an effective measure for preventing the spread of harmful diseases. Despite its efficacy, vaccine hesitancy has been receiving global attention. Vaccine hesitancy issues are openly discussed across major social media platforms including Facebook, Reddit, Twitter, Instagram and YouTube. The spread of vaccine hesitancy-related posts propagates substantially, posing greater threats to public health. Consequently, various state-of-the-art machine learning techniques have been proposed to analyse vaccine-hesitant posts on social media. One of the most recent approaches is transfer learning using a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model. Although vaccine hesitancy is a prevalent issue across multiple social media platforms, only a few studies have used data from multiple platforms to detect it. To address this research gap, BERT, one of the newer language representation models, is trained on a collection of vaccine hesitancy-related data from multiple social media platforms. This study also employs Support Vector Machine (SVM) and Logistic Regression (LR) models and compares their performance against the BERT method. The objectives of this research are threefold: to establish a consolidated dataset from multiple social media sources for use in vaccine hesitancy detection, to evaluate the effect of mono-platform versus multi-platform vaccine hesitancy data on the performance of different machine learning models, and to apply a transfer learning method using BERT to vaccine hesitancy detection. A collection of 193,023 labelled vaccine-hesitant posts was aggregated from three social media platforms: Facebook, Reddit, and Twitter. The results demonstrate that the BERT model performs best, achieving an F1-score of 0.93, while both SVM and LR achieved F1-scores of 0.90 when detecting vaccine hesitancy from multiple social media platforms. The research also revealed that models trained with multi-platform data perform at least 15% better than models trained with mono-platform data when tested on multi-platform data.
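For readers unfamiliar with the BERT approach, here is a minimal inference sketch using the Hugging Face transformers library; the checkpoint name and the two-label head are illustrative assumptions, and the thesis's actual model would have been fine-tuned on the labelled posts first.

```python
# Minimal sketch of BERT-based sequence classification with the Hugging
# Face transformers library (illustrative; not the thesis's fine-tuned
# model). "bert-base-uncased" with an untrained 2-class head is used
# here; in practice the head would be fine-tuned on the labelled posts.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = not hesitant, 1 = hesitant

posts = ["Vaccines saved millions of lives.",
         "I don't trust what's in these shots."]
batch = tokenizer(posts, padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**batch).logits
print(logits.softmax(dim=-1))  # per-post class probabilities
```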