KICT - Masters Theses
Permanent URI for this collection: https://studentrepo.iium.edu.my/handle/123456789/9204
Browsing KICT - Masters Theses by Department "#PLACEHOLDER_PARENT_METADATA_VALUE#"
Now showing 1 - 8 of 8
Publication A comparative study of software quality using the hybrid agile software development lifecycle and the plan-driven development model (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2025); Norzariyah Yahya
A software development methodology is a series of processes necessary for producing good-quality software. Such methodologies fall into three established approaches: the plan-driven development model, agile methodologies, and the hybrid agile model. Agile methodologies are difficult to adopt for businesses facing strict client timelines; thus companies, especially software development houses, prefer the plan-driven development model or the hybrid agile model as the reference model for a software project. However, which of these two widely used models results in higher-quality software? To answer this question, this research compares the internal quality of the software produced under each model. It is an empirical study adopting an experimental design: a group of software engineers was divided into two teams, one using the hybrid agile model WaterScrumFall and the other using a plan-driven development model, the Waterfall model. Software quality was assessed through the complexity of the code, measured by Average Cyclomatic Complexity (ACC); the lines of code, indicating the size and scale of the codebase; and the dependencies between objects, addressing structural quality through metrics such as Coupling Between Objects (CBO) and Lack of Cohesion in Methods (LCOM). A less complex project with low coupling between objects and good cohesion in methods contributes to high-quality source code, easier project management and maintenance, and lower cost. The team implementing the hybrid agile model produced lower ACC and CBO but higher LCOM than the team applying the plan-driven methodology. This research concludes that the hybrid agile model produces better software quality than the plan-driven methodology. It contributes the benefit of knowing which model produces better internal software quality and gives developers insight into which software development methodology may lead to software that is less complex, with fewer lines of code and lower coupling between objects, albeit with a higher lack of cohesion in methods.
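The structural metrics named in this abstract are mechanical to compute. As a minimal sketch, the snippet below implements LCOM1, one classical formulation of Lack of Cohesion in Methods; the toy class layout is invented for illustration and is not taken from the thesis:

```python
from itertools import combinations

def lcom1(method_attrs):
    """LCOM1: count method pairs sharing no attributes (P) minus
    pairs sharing at least one (Q); floor the result at zero."""
    p = q = 0
    for a, b in combinations(method_attrs.values(), 2):
        if set(a) & set(b):
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical class: two methods touch `balance`, one touches only `log`.
methods = {
    "deposit":  ["balance"],
    "withdraw": ["balance"],
    "audit":    ["log"],
}
print(lcom1(methods))  # → 1 (P = 2 disjoint pairs, Q = 1 sharing pair)
```

A cohesive class, where every pair of methods shares some attribute, scores 0; a higher value signals a greater lack of cohesion, which is the direction of the LCOM finding reported above.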
Publication An information dissemination model for scholars on cryptocurrencies (Kuala Lumpur : International Islamic University Malaysia, 2019); Ainin Soffia Husain
This dissertation reports on an information dissemination model for scholars on cryptocurrencies. This disruptive technology spreads like a virus going viral. The objectives were to review existing information dissemination models, identify known factors influencing an information dissemination process, and propose an information dissemination model for scholars on cryptocurrencies. The research questions asked why epidemic models could be used for understanding cryptocurrencies, what the known factors influencing the relevant information dissemination model were, and how an information-spreading model on cryptocurrencies could work for scholars. The methods used were a survey questionnaire, interviews, content analysis, and triangulation. The survey questionnaire was conducted at two cryptocurrency companies, content analysis covered ten cryptocurrency companies, and five scholars were interviewed. Triangulation was then performed to produce the refined information dissemination model. A pilot study was run for the content analysis and interviews. The findings included variables (security system, user interaction, known identities, legal activities, and user support) and themes (epidemic form, information dissemination model, virus/viral, spreading the information, proposed model of information dissemination, scholars playing a big role, information dissemination process, approval of information by scholars, and spreading to others) for disseminating information following the SEIR epidemic model. An information dissemination model was proposed, and expert opinions were gathered.
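The epidemic analogy underlying this abstract can be made concrete in a few lines. The sketch below runs a discrete-time SEIR-style (Susceptible, Exposed, Infectious, Recovered) simulation of information spreading through a population; all rate parameters are invented for illustration and are not drawn from the dissertation:

```python
def seir_step(s, e, i, r, beta, sigma, gamma):
    """One discrete step of a SEIR-style spread model over normalized
    compartments: beta = contact rate, sigma = 1/incubation period,
    gamma = rate at which spreaders stop sharing the information."""
    n = s + e + i + r
    newly_exposed    = beta * s * i / n   # susceptibles hearing the news
    newly_infectious = sigma * e          # exposed who start spreading
    newly_recovered  = gamma * i          # spreaders who lose interest
    return (s - newly_exposed,
            e + newly_exposed - newly_infectious,
            i + newly_infectious - newly_recovered,
            r + newly_recovered)

# Start with 1% of the population actively spreading.
s, e, i, r = 0.99, 0.0, 0.01, 0.0
for _ in range(100):
    s, e, i, r = seir_step(s, e, i, r, beta=0.5, sigma=0.2, gamma=0.1)
print(f"susceptible={s:.2f} recovered={r:.2f}")
```

The compartments always sum to the original population, and the susceptible share only falls, mirroring how the dissertation treats information uptake as an epidemic process.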
Publication Internet and its implications for business in Yemen (Kuala Lumpur : Kulliyyah of Information Communication Technology, International Islamic University Malaysia, 2003); Farooq Ahmad
This study examines the use of the Internet for business purposes in Yemen. The main sectors observed are banking and private trade organizations. It was assumed that security is a main concern in e-commerce. A survey of the banking industry and private business organizations in Yemen was conducted. Through the survey and interviews, a thorough study was performed of: the Internet facilities available in Yemen (the infrastructure, the service providers); the literacy and use of ICT in the above two sectors; the level of e-commerce adopted; the main hurdles in the adoption of e-commerce; and the measures required to increase the adoption of e-commerce. The study concluded that, in general, the two sectors realize the importance of e-commerce for their business and are willing to proceed further with it. The main causes of the delay in e-commerce adoption are discrepancies in the infrastructure (technology, service providers, and human resources), the high cost of Internet facilities, bureaucratic hurdles in obtaining the facilities, and the non-availability of a secure environment. Despite their high concern about Internet security, their awareness of security hazards and protection measures is at a minimum. It was also observed that public awareness of ICT in general is very low. In light of the data collected, the study puts forward recommendations for authorities interested in improving e-commerce in Yemen.
Publication Optimizing skyline queries in large-scale uncertain graph using graph neural networks and reinforcement learning (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2025); Raini Hassan; Dini Oktarina Dwi Handayani
Skyline queries play a critical role in multi-criteria decision-making by identifying data points that are not dominated by others, thus offering optimal choices to users. These queries are particularly valuable in transportation management, logistics, route optimization, and decision support systems. However, existing skyline query processing algorithms exhibit limited effectiveness when applied to large-scale and uncertain graph datasets, due to their reliance on exhaustive dominance comparisons and their sensitivity to uncertainty, which together make them ill-suited to real-world graph environments. To address these challenges, this research proposes a novel skyline query processing framework that integrates Graph Neural Networks (GNNs) with Reinforcement Learning (RL), enabling effective representation learning over uncertain graph structures and adaptive skyline selection, a solution that ensures compatibility and also tests the potential scalability of the framework. As part of the contribution, large-scale uncertain graph datasets are systematically constructed with controlled size, density, and uncertainty levels to enable rigorous evaluation and scalability analysis. The proposed method is evaluated using 10-fold cross-validation, and performance is measured using accuracy, precision, recall, F1-score, and ROC-AUC. Experimental results demonstrate that while baseline skyline algorithms achieve acceptable accuracy, they suffer from significantly lower precision and recall, leading to suboptimal identification of skyline points. In contrast, the proposed GNN-RL framework achieves an accuracy of 98.97% alongside recall and F1-score above 98%, demonstrating strong robustness in uncertain graph settings. Furthermore, scalability experiments across varying dataset sizes confirm the suitability of the proposed approach for large-scale skyline query processing. This research contributes both theoretically and practically to intelligent data analytics and supports United Nations Sustainable Development Goal (SDG) 9, which promotes resilient infrastructure, sustainable industrialization, and innovation through the development of scalable and intelligent data-driven technologies.
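The dominance relation at the heart of skyline queries, and the exhaustive pairwise comparison the proposed framework is designed to avoid, can be sketched briefly. Both criteria are minimised here, and the route data are made up for illustration:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every dimension and strictly
    better in at least one (smaller values are better here)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def skyline(points):
    """Brute-force skyline: keep every point dominated by no other.
    This O(n^2) scan is the exhaustive comparison that becomes
    intractable on large uncertain graphs."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

routes = [(3, 10), (5, 4), (2, 12), (4, 4), (6, 3)]  # (travel time, cost)
print(skyline(routes))  # → [(3, 10), (2, 12), (4, 4), (6, 3)]
```

Only (5, 4) drops out, because (4, 4) matches its cost with a strictly shorter travel time; every remaining route is a defensible trade-off between the two criteria.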
Publication Secure digital forensics framework for vehicle maintenance service records (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2024); Hafizah Mansor; Normaziah Abdul Aziz
Automotive technology is soaring and has reached an advanced phase. Despite their benefits, these advancements may expose vehicles to additional threats, particularly regarding security and data management. Currently, handling maintenance service records is a manual process, which may lead to inaccuracy, unavailability, and limited consumer access. These concerns are compounded by the lack of trustworthy platforms or legitimate sources for retrieving the history of vehicle maintenance service records. The objectives of this research are to identify a list of stakeholders and define their roles and responsibilities, to identify the security requirements that need to be implemented, and to establish a secure framework and communication protocol to address these concerns. Stakeholders were identified through a snowball literature review method, and their roles were assessed to ensure comprehensive coverage of the vehicle maintenance ecosystem. The security requirements were derived using the Threat and Vulnerability Risk Assessment (TVRA) and Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege (STRIDE) threat modelling methodologies, and were then used in designing the secure frameworks and communication protocol for vehicle maintenance service records. The proposed frameworks, designed for scheduled and repair maintenance services, employ consortium blockchain technology, which provides decentralised yet controlled access so that only authorised stakeholders can contribute and validate data. This approach helps to protect vehicle maintenance service records from unauthorised access and data manipulation. A secure communication protocol, focused mainly on the access-granting process for new vehicle owners, was developed and then formally analysed using the Scyther tool, chosen for its ability to rigorously verify the security of protocols. As a result, this research lists eight stakeholders involved in vehicle maintenance service records. The security requirements identified were then implemented in designing the secure frameworks and secure communication protocol to maintain the confidentiality, integrity, and availability of vehicle maintenance service records. The integration of these solutions not only safeguards maintenance records but also supports digital forensics, providing a robust foundation for advancing the management and security of vehicle maintenance data.
Publication Skyline query processing for large-scale and incomplete graphs using graph convolutional network (GCN) (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2025); Raini Hassan; Dini Oktarini Dwi Handayani
Skyline query processing is essential in multi-criteria decision-making, as it retrieves optimal results without requiring user-defined weights. Traditional skyline methods, however, face significant challenges when applied to large-scale and incomplete datasets. This study proposes a hybrid approach that integrates the ISkyline dominance graph technique with Graph Neural Networks (GNNs), specifically a Graph Convolutional Network (GCN), to improve skyline query performance under such conditions. The GCN component predicts skyline tuples in the presence of missing or incomplete data. The ISkyline algorithm serves as the foundation for identifying initial dominance relationships and labelling skyline points, enabling the GCN to learn Pareto-optimal patterns from partially incomplete data. Evaluation on both synthetic and real-world datasets demonstrates enhanced accuracy and efficiency compared to established methods such as ISkyline, SIDS, and OIS. The proposed GNN + ISkyline framework improved classification accuracy by 72%, the F1-score by 71%, and the AUC-ROC by 49% compared to the standalone ISkyline algorithm when evaluated on the CoIL 2000 dataset. This work demonstrates the potential for more efficient skyline query processing, supporting applications in e-commerce, finance, and smart data systems, while aligning with the 9th Sustainable Development Goal on industry, innovation, and infrastructure.
Publication Syntactic ambiguity detection framework for software requirements specification (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2025); Azlin Nordin; Norsaremah Salleh
In software development, clear and precise requirement specifications are essential for project success. However, ambiguities in software requirements often cause misunderstandings that lead to costly errors and delays. Among the various types of ambiguity, syntactic ambiguity, which arises from sentence structure, poses a significant challenge in requirement specifications. This research introduces the Syntactic Ambiguity Detection Framework (SADF), specifically designed to identify and resolve syntactic ambiguities in software requirement documents. The framework adopts a multi-layered approach that combines linguistic analysis with practical heuristics, developed through a comprehensive review of existing ambiguity detection techniques and related ambiguity knowledge. This review examined the aspects of syntactic ambiguity explored in prior studies, highlighting both the strengths and limitations of current methods; these insights guided the design and development of SADF. To evaluate the framework's effectiveness, a questionnaire survey was conducted involving experienced requirements engineering experts. Their feedback contributed to refining the framework's components and confirmed its practical utility in guiding engineers to detect syntactic ambiguities more effectively in requirement documents. This collaborative validation process ensured that SADF not only performs theoretically but also meets the practical needs of professionals in the field. The main objective of this study is therefore to provide requirements engineers with a systematic and user-friendly guideline that facilitates early detection and resolution of syntactic ambiguities, thereby improving the quality of the Software Requirements Specification (SRS). Implementing SADF is expected to reduce misunderstandings, minimize costly rework, and streamline the software development life cycle, ultimately contributing to more successful project outcomes. In conclusion, SADF addresses a critical issue in requirements engineering by emphasizing syntactic clarity. The framework offers structured guidelines and rules for identifying syntactic ambiguities in SRS documents, and its adoption may enhance the quality of requirements specifications and support the overall success and efficiency of software projects.
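A flavour of heuristic-based syntactic ambiguity detection can be conveyed in a short sketch. The patterns below are invented stand-ins for illustration only, not SADF's actual rules:

```python
import re

# Illustrative heuristics: each pattern names one ambiguity type a
# requirements reviewer might flag for a closer look.
HEURISTICS = {
    "coordination ambiguity": re.compile(r"\b\w+ and \w+ or \w+\b", re.I),
    "vague quantifier":       re.compile(r"\b(some|several|many|few)\b", re.I),
    "ambiguous pronoun":      re.compile(r"\b(it|they|this)\b", re.I),
}

def flag_requirement(sentence):
    """Return the names of the heuristics a requirement sentence triggers."""
    return [name for name, pat in HEURISTICS.items() if pat.search(sentence)]

req = "The system shall log errors and warnings or alerts."
print(flag_requirement(req))  # → ['coordination ambiguity']
```

The flagged sentence can be read as "(errors and warnings) or alerts" or as "errors and (warnings or alerts)", which is exactly the structural ambiguity a framework like SADF guides engineers to resolve.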
Publication Vaccine hesitancy detection using BERT for multiple social media platforms (Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2023); Suriani Sulaiman; Norlia Md Yusof
Vaccination has been proven to be an effective measure to prevent the spread of harmful diseases. Despite its efficacy, the move towards vaccine hesitancy has been receiving global attention. Vaccine hesitancy issues are openly discussed across major social media platforms, including Facebook, Reddit, Twitter, Instagram, and YouTube. Vaccine hesitancy-related posts propagate substantially, posing greater threats to public health. Consequently, various state-of-the-art machine learning techniques have been proposed to analyse vaccine-hesitant posts on social media. One of the most recent approaches is the transfer learning method using a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model. Although vaccine hesitancy is a prevalent issue across multiple social media platforms, only a few studies have utilised data from multiple platforms to detect it. To address this research gap, BERT, one of the new language representation models, is adopted and trained on a collection of vaccine hesitancy-related data from multiple social media platforms. Moreover, this study employs Support Vector Machine (SVM) and Logistic Regression (LR) models and compares their performance against the BERT method. The objectives of this research are threefold: to establish a consolidated dataset from multiple social media sources for use in vaccine hesitancy detection, to evaluate the effectiveness of mono-platform versus multi-platform vaccine hesitancy data on the performance of different machine learning models, and to apply a transfer learning method using BERT to vaccine hesitancy detection. A collection of 193,023 labelled vaccine-hesitant posts was aggregated from three social media platforms: Facebook, Reddit, and Twitter. The results demonstrate that the BERT model performs best, achieving an F1-score of 0.93, while both SVM and LR achieved F1-scores of 0.90 when detecting vaccine hesitancy from multiple social media platforms. The research also revealed that models trained with multi-platform data perform at least 15% better than models trained with mono-platform data when tested on multi-platform data.
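The F1-score on which these models are compared is the harmonic mean of precision and recall. As a minimal sketch, the confusion counts below are hypothetical, chosen only to reproduce an F1 of 0.93, and are not the study's actual results:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, computed from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for a hesitancy classifier:
print(round(f1_score(tp=930, fp=80, fn=60), 2))  # → 0.93
```

Because it penalises an imbalance between precision and recall, F1 is a stricter summary than plain accuracy for detection tasks like this one, where the hesitant class is the one that matters.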
