Abstracts of Accepted Papers

 

Monday, December 15, 2025

 

Plenary Session 1: Finance, Business and AI Developments

(Monday, December 15, 2025, 10h30 – 11h45)

Paper #: PL1-KSp1

Dual-Layer Crisis Prediction Model (DLCPM): An Integrated Early-Warning and Precise-Timing Framework

Mahmood M. Dagher1,* and Abbas K. Saddam2

1Head of Integrate Globals, Baghdad, Iraq

2Head of Edumonde Research, Dubai, UAE

*Contact: dr_m_daghir@yahoo.com

 

Abstract:

Financial crises often unfold through a sequence of mounting imbalances, followed by abrupt market breakdowns. While traditional early-warning systems (EWS) can flag the likelihood of crises months in advance, they usually fail to provide precise timing for the actual onset of turmoil. This research introduces the Dual-Layer Crisis Prediction Model (DLCPM), a novel framework that integrates an Early-Warning Index (EWI) based on leading macro-financial indicators with a Timing-Trigger Index (TTI) built from high-frequency market stress measures. The EWI captures structural vulnerabilities using variables such as yield curve inversion, monetary tightening, PMI downturns, debt dynamics, and reserve adequacy. Once EWI reaches critical thresholds, the TTI becomes activated, monitoring fast-moving indicators such as VIX, funding spreads, CDS, and shipping indices to identify the precise crisis timing. A CUSUM change-point algorithm is then applied to TTI to pinpoint the crisis onset date (T₀). Backtesting on the Global Financial Crisis (2008), the oil price collapse (2014), and the COVID-19 crisis (2020) demonstrates that DLCPM not only provides earlier warnings than conventional single-indicator models but also accurately identifies the timing of crisis eruptions. The findings suggest that DLCPM can serve as a valuable policy tool for central banks, regulators, and institutional investors seeking both anticipatory signals and actionable timing of crises.
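The CUSUM step described above can be sketched as follows. This is a minimal illustration, not the authors' calibration: the calm-period mean, slack, and threshold values, and the synthetic stress series, are all assumptions.

```python
import numpy as np

def cusum_onset(tti, calm_mean, slack=1.0, threshold=8.0):
    """One-sided CUSUM: accumulate positive deviations of the stress index
    from its calm-period mean and flag the first breach of the threshold.
    Returns None if no change-point is detected."""
    s = 0.0
    for t, x in enumerate(tti):
        s = max(0.0, s + (x - calm_mean - slack))
        if s > threshold:
            return t  # candidate crisis onset T0
    return None

# Synthetic Timing-Trigger Index: calm regime, abrupt stress regime from t = 60.
rng = np.random.default_rng(0)
tti = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(4.0, 1.0, 40)])
t0 = cusum_onset(tti, calm_mean=0.0)
print(t0)
```

The slack term suppresses false alarms during the calm regime, so the detector fires only a few observations after the simulated break.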

Keywords: Yield curve, Financial crises, VIX, Monetary tightening, Early-Warning Systems (EWS), Timing-Trigger Index (TTI), Macro-financial indicators, Oil price collapse

 

Paper #: PL1-KSp2

AI Policies and National Strategy of AI in Iraq

 

Dhiya Al-Jumeily

Liverpool John Moores University, eSystems Engineering Society, UK

Contact: d.aljumeily@ieee.org

 

Abstract:

Artificial intelligence (AI) has escalated in popularity over the last few years, and its applications have become widespread across many disciplines, including major ones such as healthcare and the environment. Other Industry 4.0 technologies, such as automation, sensing, and the IoT, have helped boost AI adoption. In this regard, AI and machine learning (ML) algorithms have focused on speeding up data processing and prediction, which is crucial in many applications: manufacturing, driving cars, delivering goods, dispensing medicines and food, performing surgeries, and more. AI and ML are related but distinct terms. AI refers to machines that can reason like humans and perform human-like activities, while ML trains machines to classify or predict from data without requiring general intelligence. Thus, ML is a subset of AI, but not vice versa.

AI/ML applications have gained popularity over the last few years, especially since the Covid-19 pandemic, which urged the need for automation and robotics, particularly for vulnerable individuals. When the pandemic first emerged, deep learning and deep transfer learning algorithms became especially popular. More recently, generative AI has surged, and its use has surpassed that of earlier algorithms. It is worth noting that algorithms did not begin post-pandemic: the first algorithms emerged in the 8th and 9th centuries, during the Islamic Golden Age, at the House of Wisdom in Baghdad.

The House of Wisdom in Baghdad hosted many scholars from around the world across numerous disciplines. One of these scholars, Mohammad bin Musa Al-Khawarizmi, created the algorithms (Al-Khawarizmiyat) that were later named after him. Algorithms reveal patterns in data and make predictions based on them, which has kept the focus on algorithms over the years. Hence, the accuracy and precision of algorithms have always been a key question in any evaluation.

This marginalised the role of datasets, which did not receive much attention until the last five years, when big data became significant. The shift was facilitated by high-performance computing, cloud computing, blockchain, and IoT technologies, which enabled not only the collection of big data but also its analysis in a short time. However, the challenges related to datasets were largely overlooked until the Covid-19 pandemic, which shifted the focus of research (particularly healthcare research) to Covid-19 prediction and treatment. In Covid-19 prediction especially, the strong emphasis on predictive accuracy triggered a re-examination of datasets: models trained on poor datasets gave poor predictions, and vice versa. Yet even now there is little guidance for datasets comparable to that for algorithms, apart from a few regulations such as the European Union's legislation on data protection and big-data ethics.

Considering the highlights above, this presentation will discuss the regulatory and ethical challenges of handling and analysing big data using AI. It will highlight the emergence of AI, present some of its successful achievements in numerous disciplines, and discuss the challenges related to applying AI algorithms and to the choice and use of datasets, before concluding with a case study of big data in healthcare decision-making.

Session 1: Education and Signal Processing

(Monday, December 15, 2025, 12h00-13h30)

Paper #: S1-KSp3

Artificial Intelligence in Education – Transforming Teaching, Learning, and Administration

Ahmad Jammal

Arab International Foundation for Education Development, Lebanon

Email: a-jammal@ieee.org

Abstract:

Artificial Intelligence (AI) is often compared to the industrial or digital revolutions, but its scope is far greater. While earlier technologies amplified human physical and computational abilities, AI amplifies something deeper: our capacity to think, decide, and create. In doing so, AI is not just changing how we work or communicate; it is redefining what it means to be human in an age of intelligent machines. AI is not merely a tool; it is a cultural force. It challenges traditional paradigms of identity, creativity, and morality while creating opportunities for cultural renewal and intercultural dialogue. The societies that embrace AI as both a technological and cultural phenomenon will lead the next wave of human-centered innovation. AI transforms education into a smart, adaptive, and inclusive ecosystem, enhancing teaching, empowering learners, and optimizing institutional performance, while emphasizing the continued importance of human values and ethics.

Artificial Intelligence transforms education by enhancing teaching effectiveness, personalizing learning experiences, and optimizing institutional administration. In teaching, AI supports personalized and data-driven pedagogy, intelligent tutoring systems, and automation of grading and curriculum design. These innovations enable educators to focus on higher-level mentoring, creativity, and student engagement rather than routine tasks. For learners, AI fosters adaptive, self-paced, and inclusive education. Intelligent platforms provide real-time feedback, track progress, and accommodate diverse learning styles and abilities. Through accessibility tools like speech recognition and translation, AI promotes equity and supports students with disabilities or language barriers. Moreover, by engaging with AI technologies, students develop essential 21st-century skills: digital literacy, creativity, and ethical reasoning. At the administrative level, AI strengthens institutional efficiency through predictive analytics, smart admissions, and data-driven decision-making. It enhances quality assurance by mapping learning outcomes, identifying performance trends, and streamlining accreditation processes. Despite its promise, AI raises ethical concerns related to privacy, bias, and human oversight. Its successful integration requires robust governance, equitable access, and capacity-building for educators and students. Policymakers should develop national AI strategies for education, invest in infrastructure, embed AI ethics in curricula, and ensure inclusive implementation. Ultimately, AI should augment – not replace – human intelligence. When integrated responsibly, it can create a more inclusive, efficient, and innovative education ecosystem that aligns with global goals for quality and lifelong learning.

Keywords: Artificial Intelligence, Education, Data-driven Pedagogy, Policymakers, Ethical issues.

Paper #: S1-1

Leveraging AI for Optimising Student Workload and Enhancing Well-being in Higher Education

 

Waleed Al-Nuaimy

School of Engineering, University of Liverpool, UK

Contact: wax@liverpool.ac.uk, phone +44 151 794 4512

 

Abstract:

The escalating demands of higher education are increasingly linked to student stress and inconsistent academic performance. Traditional workload estimations, based on static credit hours, fail to capture the dynamic and individualised nature of student effort. This disparity often results in cyclical workloads with intense peaks around assessment deadlines, negatively impacting student well-being and learning outcomes. To address this, we present an intelligent, data-driven framework designed to model, predict, and optimise student academic workload in real-time.

Our system moves beyond conventional metrics by integrating comprehensive course data such as timetables and assessment schedules, with personalised student study profiles. Using an algorithmic approach, the framework simulates a student’s entire academic schedule to analyse workload intensity and variability. A key innovation is the calculation of a “pain score,” a metric that quantifies the congestion of a student’s workload, allowing for the proactive identification of high-stress periods. Based on this analysis, the system generates dynamic and personalised study plans that distribute effort more evenly, mitigating deadline-related anxiety.
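The abstract does not define the "pain score" formula, so the sketch below uses a hypothetical quadratic congestion penalty to illustrate the idea that clustered deadlines hurt more than the same total effort spread evenly.

```python
import numpy as np

def pain_score(daily_hours):
    """Quadratic congestion penalty (hypothetical formulation): squaring daily
    load makes piled-up effort cost more than evenly distributed effort."""
    return float(np.sum(np.asarray(daily_hours, dtype=float) ** 2))

def spread_evenly(total_hours, n_days):
    """Naive optimiser: redistribute the same workload uniformly."""
    return np.full(n_days, total_hours / n_days)

# 30-day window: three assessments due in the same week vs. an even study plan.
clustered = np.zeros(30)
clustered[20:23] = [8.0, 10.0, 12.0]       # deadline pile-up
even = spread_evenly(clustered.sum(), 30)  # 1 hour per day

print(pain_score(clustered), pain_score(even))
```

Because the penalty is convex, the uniform plan minimises the score for a fixed total effort, which is exactly the redistribution behaviour the framework aims for.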

The framework provides students with clear visualisations to aid planning and offers educators valuable policy informatics to inform curriculum design and prevent systemic student overload. A continuous feedback loop, where students validate the accuracy of workload predictions, ensures the model refines itself over time. While currently data-driven, the system is architected for deep AI integration to further automate profile detection, enhance predictive accuracy, and optimise scheduling. This work presents a proactive model for workload management, demonstrating a clear pathway to leveraging technology to create a more supportive and sustainable learning environment, ultimately enhancing both student well-being and academic success.

Keywords: AI in Education, Student Workload, Student Well-being, Personalized Learning, Learning Analytics, Educational Technology, Curriculum Design.

Paper #: S1-2

Entangled Intelligence: Exploring Quantum Neural Networks for Next-Gen AI

Ezideen A Hasso

Tishk International University, Erbil, Iraq

Contact: ezideen.hasso@tiu.edu.iq

Abstract:

Quantum Neural Networks (QNNs) represent a promising and innovative convergence of quantum computing and machine learning, aiming to address complex, high-dimensional, and non-linear problems that are often intractable for purely classical systems. Unlike traditional neural networks, QNNs leverage the principles of superposition and entanglement, enabling them to potentially process information in fundamentally new ways and offer advantages in terms of representational power and computational efficiency. In this study, we present the design and implementation of a two-qubit QNN developed through MATLAB’s Quantum Support Package and integrated with IBM’s Qiskit Runtime. The framework demonstrates a hybrid quantum–classical workflow tailored to binary classification tasks. Specifically, a parameterized quantum circuit (PQC) is employed to model the XOR problem, which is a canonical benchmark for evaluating the ability of machine learning models to solve non-linearly separable functions. Classical data inputs are embedded into quantum states, and the trainable quantum gates within the PQC are optimized iteratively using classical backpropagation methods, thereby uniting the strengths of both computational paradigms. The QNN was initially tested through local MATLAB-based simulation to ensure correctness and performance stability. Subsequently, the model was executed on actual IBM quantum hardware via the Qiskit Runtime, thereby assessing its robustness under realistic noisy intermediate-scale quantum (NISQ) conditions. Results confirm that the two-qubit QNN successfully learns the structure of the XOR dataset, providing accurate classifications and demonstrating that even low-qubit quantum systems can perform meaningful machine learning tasks. This work not only validates the practical viability of hybrid QNN approaches but also establishes a foundation for future research on extending such models to larger, more complex datasets and applications. 
It highlights how accessible quantum resources, when combined with classical optimization, can open pathways toward scalable quantum-enhanced machine learning.
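A minimal NumPy statevector sketch of the hybrid loop described above: a two-qubit parameterized circuit trained on XOR. The circuit layout, finite-difference gradient (standing in for the MATLAB/Qiskit workflow), and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

I2 = np.eye(2)

def ry(t):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],   # control = qubit 0, target = qubit 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def circuit(x, w):
    """Angle-encode two bits, apply a trainable layer, return <Z> on qubit 1."""
    psi = np.zeros(4); psi[0] = 1.0                          # |00>
    psi = np.kron(ry(np.pi * x[0]), ry(np.pi * x[1])) @ psi  # data encoding
    psi = CNOT @ np.kron(ry(w[0]), ry(w[1])) @ psi           # trainable block
    z1 = np.kron(I2, np.diag([1.0, -1.0]))                   # Z on qubit 1
    return float(psi @ z1 @ psi)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = np.array([1.0, -1.0, -1.0, 1.0])   # XOR labels encoded as <Z> = +1 / -1

def loss(w):
    return np.mean([(circuit(x, w) - y) ** 2 for x, y in zip(X, Y)])

w = np.array([0.3, -0.2])              # small initial weights
for _ in range(200):                    # finite-difference gradient descent
    g = np.array([(loss(w + h) - loss(w - h)) / 2e-3
                  for h in 1e-3 * np.eye(2)])
    w -= 0.5 * g

preds = [np.sign(circuit(x, w)) for x in X]
print(preds, loss(w))
```

After the CNOT, qubit 1 carries the parity of the two encoded bits, so the trainable rotations converge toward the identity; on hardware the numeric gradient would typically be replaced by the parameter-shift rule.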

Keywords: Quantum Neural Networks (QNN), Quantum AI, Qubits, Quantum machine learning, Quantum gates, and Hybrid quantum-classical systems.

References:

[1] M. A. Hafeez, A. Munir, and H. Ullah, "H-QNN: A Hybrid Quantum–Classical Neural Network for Improved Binary Image Classification," AI, vol. 5, no. 3, pp. 1462-1481, 2024, doi: 10.3390/ai5030070.

[2] S. Markidis, "Programming Quantum Neural Networks on NISQ Systems: An Overview of Technologies and Methodologies," Entropy, vol. 25, no. 4, p. 694, 2023, doi: 10.3390/e25040694.

[3] Y. Chen and A. Khaliq, "Quantum Recurrent Neural Networks: Predicting the Dynamics of Oscillatory and Chaotic Systems," Algorithms, vol. 17, no. 4, p. 163, 2024, doi: 10.3390/a17040163.

[4] U. Singh, A. Z. Goldberg, and K. Heshami, "Coherent feed-forward quantum neural network," Quantum Machine Intelligence, vol. 6, art. no. 89, 2024, doi: 10.1007/s42484-024-00222-8.

[5] S. Sahil, D. Nandhini, S. S. Dutta, et al., "Hybrid Quantum Neural Networks: Harnessing Dressed Quantum Circuits for Enhanced Tsunami Prediction via Earthquake Data Fusion," EPJ Quantum Technology, vol. 12, art. no. 4, 2025.

[6] Y. Zhang and H. Lu, "Reliability Research on Quantum Neural Networks," Electronics, vol. 13, no. 8, p. 1514, 2024, doi: 10.3390/electronics13081514.

Paper #: S1-3

Classifying ECG Signals Using Wavelet Transforms and Convolutional Neural Networks

Munaf Salim Najim AL-Din* and Zamzam Salim Saif Alhuseini

Department of Electrical and Computer Engineering,

College of Engineering and Architecture, University of Nizwa, Nizwa, Oman

*Contact: munaf@unizwa.edu.om

 

Abstract:

Cardiovascular diseases remain a leading cause of mortality worldwide, necessitating efficient and accurate diagnostic methods to support early detection and timely intervention. This project presents a robust deep learning framework for classifying electrocardiogram (ECG) signals by integrating time-frequency analysis using Continuous Wavelet Transform (CWT) with Convolutional Neural Networks (CNNs). Raw ECG signals obtained from the MIT-BIH Arrhythmia Database, MIT-BIH Normal Sinus Rhythm Database, and BIDMC Congestive Heart Failure Database were preprocessed to remove baseline wander and high-frequency noise, then standardized to a uniform sampling rate of 128 Hz to ensure consistency across datasets. Each signal segment was transformed into a scalogram using CWT, capturing its time-frequency characteristics and enabling more effective and discriminative feature representation compared to traditional time-domain or frequency-domain approaches. These scalogram images were classified using multiple pre-trained CNN architectures, including AlexNet, GoogleNet, ResNet50, InceptionV3, EfficientNet-B0, VGG19, and SqueezeNet, through transfer learning implemented in MATLAB. Training was performed with optimized hyperparameters, data augmentation techniques, and five-fold cross-validation to improve model generalization and reduce overfitting. Experimental results demonstrated that EfficientNet-B0 and ResNet50 achieved superior accuracy, sensitivity, and specificity in detecting normal and abnormal heart rhythms, including arrhythmias and congestive heart failure, outperforming conventional ECG classification methods. Confusion matrices and receiver operating characteristic (ROC) curves further confirmed the robustness and reliability of the proposed models. 
The proposed system showcases the potential of combining wavelet-based feature extraction with advanced deep CNN models for automated, high-accuracy ECG signal classification, offering a scalable and interpretable solution for real-world applications. Its integration into clinical workflows could facilitate early diagnosis, reduce clinician workload, and improve patient outcomes. This approach represents a significant step toward intelligent cardiovascular monitoring and decision support systems, making it a valuable tool for modern healthcare environments focused on precision medicine and proactive disease management.
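The scalogram step can be illustrated with a small NumPy sketch. Everything here is an illustrative stand-in: a real Morlet-like wavelet instead of MATLAB's CWT, and a synthetic 128 Hz spike train instead of the clinical databases; in the actual pipeline the resulting map would be rendered as an image and fed to a pretrained CNN.

```python
import numpy as np

def scalogram(sig, scales, w0=5.0):
    """Continuous wavelet transform with a real Morlet-like wavelet:
    one row of |coefficients| per scale (a time-frequency energy map)."""
    out = np.empty((len(scales), len(sig)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)                   # wavelet support
        psi = np.exp(-0.5 * (t / s) ** 2) * np.cos(w0 * t / s)
        psi /= np.sqrt(s)                                  # energy normalisation
        out[i] = np.abs(np.convolve(sig, psi, mode="same"))
    return out

# Synthetic "ECG-like" trace at 128 Hz: slow baseline plus sharp periodic spikes.
fs = 128
t = np.arange(0, 4, 1 / fs)
sig = 0.1 * np.sin(2 * np.pi * 1.2 * t)
sig[::fs] += 1.0                                           # one "R peak" per second

S = scalogram(sig, scales=np.arange(1, 33))
print(S.shape)   # (32, 512): ready to be resized into a CNN input image
```

Small scales (top rows) respond to the sharp QRS-like spikes while large scales capture the slow baseline, which is why scalograms separate rhythm classes better than raw time-domain traces.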

Keywords: Electrocardiogram (ECG), Wavelet Transforms, Convolutional Neural Networks.

References:

[1] World Heart Federation, World Heart Report 2023, World Heart Federation, 2023. [Online]. Available: https://world-heart-federation.org/wp-content/uploads/World-Heart-Report-2023.pdf

[2] Jahangir, M. N. Islam, M. S. Islam, and M. M. Islam, "ECG-based heart arrhythmia classification using feature engineering and a hybrid stacked machine learning," BMC Cardiovascular Disorders, vol. 25, art. no. 260, 2025.

[3] N. Musa, A. Y. Gital, N. Aljojo, et al., "A systematic review and meta-data analysis on the applications of deep learning in electrocardiogram," Journal of Ambient Intelligence and Humanized Computing, vol. 14, pp. 9677-9750, 2022.

[4] T. Wang, C. Lu, Y. Sun, M. Yang, C. Liu, and C. Ou, "Automatic ECG classification using continuous wavelet transform and convolutional neural network," Entropy, vol. 23, no. 1, p. 119, Jan. 2021.

[5] S. C. Mohonta, M. A. Motin, and D. Kumar, "Electrocardiogram Based Arrhythmia Classification Using Wavelet Transform with Deep Learning Model," Sensing and Bio-Sensing Research, vol. 37, 100502, 2022.

[6] C. Lakshminarayan and T. Basil, "Feature Extraction and Automated Classification of Heartbeats by Machine Learning," arXiv preprint arXiv:1607.03822, 2016.

[7] S. Chatterjee, R. S. Thakur, R. N. Yadav, L. Gupta, and D. K. Raghuvanshi, "Review of noise removal techniques in ECG signals," IET Signal Processing, vol. 14, no. 9, pp. 569-590, 2020.

Paper #: S1-4

Artificial Hand Force Estimation System Based on Electromyography Signals with Metaheuristic Optimization Algorithm

Sami Abduljabbar Rashid1,*, Yousif Al Mashhadany1, Mohanad A. Al-askari1, Sameer Algburi2

1Biomedical Engineering Research Center, University of Anbar, Anbar, Iraq

2College of Engineering, Al-Kitab University, Kirkuk, Iraq

*Contact: sami.abduljabbar.rashid@uoanbar.edu.iq, Phone: +964-7815626361

 

Abstract:

Electromyography (EMG) signals are widely applied in clinical diagnostics to identify neuromuscular disorders. However, their non-stationary nature and susceptibility to noise introduce challenges in achieving reliable classification and accurate feature extraction. To address these limitations, this study proposes an Artificial Hand Force Estimation System based on EMG signals integrated with a metaheuristic optimization approach, termed AHFEMO. The model leverages Discrete Wavelet Transform (DWT) for multi-resolution feature extraction, Shapley Additive Explanations (SHAP) for interpretability, Support Vector Machine (SVM) for classification, and a Genetic Algorithm (GA) for hyperparameter tuning and feature optimization. ANOVA testing is further employed to ensure statistical significance in selected features. The overall workflow of the proposed model is presented in Fig. 1. Experimental analysis was performed using EMG signals collected from control, myopathic, and neuropathic subjects. The proposed AHFEMO framework achieved superior results in terms of Mean Squared Error (MSE), Root Mean Squared Error (RMSE), accuracy, precision, recall, and F1-score compared with baseline models such as AIEEMG, ISEEAI, and DISEAI. Specifically, AHFEMO reduced the MSE for healthy participants from 1.0 to 0.91 and improved classification accuracy to 0.72, with recall and F1-score values of 0.88 and 0.95, respectively. Confusion matrix analysis confirmed that the proposed method minimized false positives and false negatives, achieving a balanced performance across different patient groups. The findings demonstrate that combining DWT with explainable machine learning and evolutionary optimization significantly enhances the reliability of EMG-based diagnostics. The hybrid approach not only improves accuracy but also reduces computational complexity by eliminating redundant features.
These outcomes highlight the potential of the AHFEMO model for advancing prosthetic control and medical diagnostic systems. Future work will focus on adapting the framework for real-time applications in wearable devices to support continuous neuromuscular monitoring.
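The DWT-plus-GA feature-selection idea can be sketched as follows. Everything here is illustrative: a one-level Haar DWT, synthetic two-class signals in place of clinical EMG, a nearest-centroid classifier standing in for the SVM, and a mutation-only GA (crossover omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(42)

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def features(x):
    """Wavelet-domain statistics plus deliberately uninformative noise features."""
    a, d = haar_dwt(x)
    core = [np.mean(np.abs(a)), np.std(a), np.mean(np.abs(d)), np.std(d)]
    return np.array(core + list(rng.normal(0, 0.3, 4)))

# Two synthetic classes: low- vs. high-frequency bursts as stand-ins for EMG groups.
t = np.arange(256)
X = np.array([features(np.sin(2 * np.pi * f * t / 256) + rng.normal(0, 0.05, 256))
              for f in [4] * 20 + [40] * 20])
y = np.array([0] * 20 + [1] * 20)

def fitness(mask):
    """Training accuracy of a nearest-centroid classifier (SVM stand-in)
    restricted to the features selected by the binary mask."""
    if mask.sum() == 0:
        return 0.0
    Z = X[:, mask.astype(bool)]
    c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    pred = np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)
    return float((pred.astype(int) == y).mean())

# Mutation-only GA over binary feature masks; elitism keeps the best 4 masks.
pop = rng.integers(0, 2, (12, 8))
pop[0] = 1                                    # seed with the all-features mask
for _ in range(15):
    fit = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(fit)[-4:]]
    kids = elite[rng.integers(0, 4, size=8)].copy()
    kids[rng.random(kids.shape) < 0.1] ^= 1   # bit-flip mutation
    pop = np.vstack([elite, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print(best, fitness(best))
```

Elitism makes the best fitness monotone across generations, so pruning the redundant noise features can only match or improve on the all-features baseline, mirroring the complexity reduction the abstract reports.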

Keywords: Electromyography (EMG), Discrete Wavelet Transform (DWT), Shapley Additive Explanations (SHAP), Support Vector Machine (SVM), Genetic Algorithm (GA), Biomedical Signal Processing.

References:

[1] N. Cárdenas-Bolaño, A. Polo, et al., "Implementation of an Intelligent EMG Signal Classifier Using Open-Source Hardware," Computers, vol. 12, p. 263, 2023, doi: 10.3390/computers12120263.

[2] Y. I. Al-Mashhadany, "Inverse Kinematics Problem (IKP) of 6-DOF Manipulator by Locally Recurrent Neural Networks (LRNNs)," in Proc. Int. Conf. on Management and Service Science (ICMSS), 2010, pp. 1-5, doi: 10.1109/ICMSS.2010.5577613.

[3] M. S. Ahmed, A. Ramizy, and Y. Al-Mashhadany, "Analysis of Real Measurement for EMG Signal Based on Surface Traditional Sensors," in Lecture Notes in Networks and Systems, vol. 1138, Cham: Springer, 2024, doi: 10.1007/978-3-031-70924-1_19.

[4] C. L. Kok, C. K. Ho, et al., "Machine Learning-Based Feature Extraction and Classification of EMG Signals for Intuitive Prosthetic Control," Applied Sciences, vol. 14, p. 5784, 2024, doi: 10.3390/app14135784.

[5] M. Z. Al-Faiz and Y. I. Al-Mashhadany, "Analytical Solution for Anthropomorphic Limbs Model (IK of Human Arm)," in Proc. IEEE Symp. on Industrial Electronics & Applications (ISIEA), Kuala Lumpur, Malaysia, 2009, pp. 684-689, doi: 10.1109/ISIEA.2009.5356374.

Paper #: S1-5

Deep Learning-Enabled Equalization for High-Mobility OTFS Wireless Systems

 

Sura H. Khadem1,*, Thamer M. Jamel2, and Hassan F. Khazaal3

1,2University of Technology, College of Communication Engineering, Baghdad, Iraq

3Department of Electrical Engineering, Wasit University

*Contact: coe.23.02@grad.uotechnology.edu.iq

 

Abstract:

Orthogonal Time Frequency Space (OTFS) modulation offers a powerful solution for high-mobility wireless communication by effectively describing channels in the delay-Doppler domain, where sparsity and gradual variation are prevalent. However, achieving effective equalization in OTFS for doubly selective fading channels, which exhibit both time and frequency selectivity, remains a significant challenge. Traditional equalization techniques, such as linear Minimum Mean Square Error (MMSE) and iterative message-passing methods, face limitations due to high computational demands and increased sensitivity to imperfections in channel state information (CSI). This paper introduces a novel and comprehensive deep learning (DL)-based equalization framework for OTFS, leveraging convolutional neural networks (CNNs) to address these critical issues. Our proposed CNN-based equalizer is designed to overcome the computational burden and sensitivity inherent in conventional approaches. Extensive simulations were conducted across a variety of practical high-mobility channel models, including Extended Vehicular A (EVA), Extended Typical Urban (ETU), Unmanned Aerial Vehicle (UAV), TDL-C, and other generic doubly selective channels. The results consistently demonstrate that the CNN-based equalizer achieves significant improvements in Bit Error Rate (BER) performance. Furthermore, it exhibits enhanced tolerance to channel uncertainties and considerably reduces computational complexity compared to traditional equalization methods. This advancement paves the way for more robust and efficient OTFS systems in dynamic wireless environments.
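The delay-Doppler representation at the heart of OTFS rests on the symplectic finite Fourier transform pair; one common convention can be sketched as follows (axis ordering and sign conventions vary across references, so treat this as an illustrative choice).

```python
import numpy as np

def isfft(X_dd):
    """ISFFT: delay-Doppler grid -> time-frequency grid.
    Unitary IDFT along the Doppler axis, unitary DFT along the delay axis."""
    return np.fft.fft(np.fft.ifft(X_dd, axis=0, norm="ortho"), axis=1, norm="ortho")

def sfft(X_tf):
    """SFFT: time-frequency grid -> delay-Doppler grid (exact inverse of isfft)."""
    return np.fft.ifft(np.fft.fft(X_tf, axis=0, norm="ortho"), axis=1, norm="ortho")

# Round trip on a random QPSK delay-Doppler frame (8 Doppler bins x 16 delay bins).
rng = np.random.default_rng(1)
qpsk = ((1 - 2 * rng.integers(0, 2, (8, 16)))
        + 1j * (1 - 2 * rng.integers(0, 2, (8, 16)))) / np.sqrt(2)
X_tf = isfft(qpsk)
print(np.allclose(sfft(X_tf), qpsk))  # True
```

Because both transforms are unitary, the mapping is lossless and energy-preserving; an equalizer (CNN-based or otherwise) operates on the sparse, slowly varying channel seen in the delay-Doppler domain.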

Keywords: Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), OTFS, Equalization, Bit Error Rate (BER), High-Mobility Channels.

References:

[1] B. Ai, Y. Lu, Y. Fang, D. Niyato, R. He, W. Chen, J. Zhang, G. Ma, Y. Niu, and Z. Zhong, "6G-Enabled Smart Railways," arXiv:2505.12946v1 [eess.SY], 19 May 2025.

[2] B. Sathish Kumar, K. R. Shankar Kumar, and R. Radhakrishnan, "An Efficient Inter-Carrier Interference Cancellation Scheme for OFDM Systems," International Journal of Computer Science and Information Security (IJCSIS), vol. 6, no. 3, pp. 141-148, Dec. 2009.

[3] T. M. Jamel, S. H. Khadum, and H. F. Khazaal, "Comparative Analysis of Orthogonal Time Frequency Space in Mobile Communications: A Review and Simulation Study," in IEEE 22nd International Multi-Conference on Systems, Signals & Devices (SSD), 2025.

[4] R. Hadani et al., "Orthogonal Time Frequency Space Modulation," in Proc. IEEE WCNC, 2017.

[5] S. Servadio, R. Zanetti, and R. Armellin, "Maximum a Posteriori Estimation of Hamiltonian Systems with High Order Series Expansions," The University of Texas at Austin, 2024.

[6] M. Kumar S., S. Sreekumar, and S. M. Sameer, "Joint CFO and Channel Estimation for OTFS-Based High Mobility Wireless Communication Systems," in 2024 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), 2024.

[7] J. Meng, Z. Wei, Y. Zhang, B. Li, and C. Zhao, "Machine learning based low-complexity channel state information estimation," EURASIP Journal on Advances in Signal Processing, art. no. 98, 2023, doi: 10.1186/s13634-023-00994-4.

Session 2: Artificial Intelligence and Medical Diagnosis

(Monday, December 15, 2025, 14h30 – 16h00)

Paper #: S2-KSp3A

Smart Humanoid Robots in Paediatric Diabetes Management

Majid Al-Taee1,*, Zahra J. Muhsin2, Ahmad Al-Taee3

1Consultant of Computing and Intelligent Systems Engineering, Liverpool, United Kingdom

2School of Computing, Arden University, Manchester, United Kingdom

3Loyola University Medical Center, Maywood, IL 60153, United States

*Contact: altaeema@gmail.com

 

Abstract:

Smart robotics is reshaping the landscape of diabetes care in children and adolescents by enhancing accuracy, personalization, and the ability to continuously monitor health even at a distance. Unlike traditional care models that often depend on periodic clinic visits, an IoT-enabled eHealth platform that integrates humanoid robots to support pediatric diabetes self-management enables real-time data collection and feedback, ensuring timely interventions. Social and virtual robotic assistants provide crucial support in managing daily routines, helping young patients and their families track diet, exercise, and medication adherence in engaging and interactive ways. This study proposes an eHealth platform that leverages IoT connectivity, robotic coaching, and artificial intelligence (AI) to enhance diabetes care beyond hospital settings into the home environment. The humanoid robot wirelessly collects biometric measurements from patient devices, including glucose monitors and activity trackers, while also capturing data on diet, insulin intake, and emotional well-being through conversational interactions. These data streams are transmitted to a centralized smart disease management hub, where AI algorithms analyze the information and generate tailored recommendations. Feedback is then relayed to patients through the robot, offering personalized guidance in a supportive and interactive manner. A pilot clinical study of the platform demonstrated strong promise: patient satisfaction exceeded 86%, and more than 90% of participants expressed a positive perception of the robot as an innovative medical device. These findings highlight the transformative potential of robotics in pediatric diabetes care, pointing to a future where technology not only empowers children and families to actively manage type 1 diabetes mellitus (T1DM) with confidence but also improves clinical outcomes.

Keywords: AI-enabled telecare, eHealth, human-robot interaction, Internet of things (IoT), Pediatric diabetes management, Remote patient monitoring.

References:

[1] M. Al-Taee, Z. J. Muhsin, W. Al-Nuaimy, and A. Al-Taee, "Smart Care: An IoT Platform with Humanoid Robots for Diabetes Management in Children," in IEEE 22nd Int. Multi-Conf. on Systems, Signals & Devices (SSD), Tunisia, 17-20 February 2025, pp. 494-499.

[2] M. Smudja, T. Milenković, I. Minaković, V. Zdravković, J. Javorac, and D. Milutinović, "Self-care activities in pediatric patients with type 1 diabetes mellitus," PLoS ONE, vol. 19(3): e0300055, 2024.

[3]  M. A. Al-Taee, S. N. Abood, W. Al-Nuaimy and A. M. Al-Taee, “Blood-glucose pattern mining algorithm for decision support in diabetes management,” in 2014 14th UK Workshop on Computational Intelligence (UKCI), Bradford, UK, 2014, pp. 1-7.

[4]    A. M. Al-Taee, M. A. Al-Taee, W. Al-Nuaimy, Z. J. Muhsin, and H. AlZu’bi, “Smart bolus estimation taking into account the amount of insulin on board,” in IEEE Conf. on Computer and Info Technology and Pervasive Intelligence and Computing, Liverpool, UK, 2015, pp. 1051-1056.

[5]    S. C. Mackenzie, C. A. Sainsbury, and D. J. Wake, “Diabetes and artificial intelligence beyond the closed loop: a review of the landscape, promise and challenges,” Diabetologia, vol. 67(2): 223-235, 2024.

[6]    M. A. Al-Taee and S. N. Abood, “Mobile acquisition and monitoring system for improved diabetes management using emergent wireless and web technologies,” Int. Journal of Information Technology and Web Engineering, vol. 7(1): 17-30, 2012.

[7]    T. Alhmiedat, L. A. AlBishi, F. Alnajjar, M. Alotaibi, A. M. Marei, and R. Shalayl, “Improving diabetes education and metabolic control in children using social robots: A randomized trial,” Technologies, vol. 12(11): 209, 2024.

[8]    A. M. Al-Taee, A. Al-Taee, Z. Muhsin, M. A. Al-Taee, and W. Al-Nuaimy, “Towards developing online compliance index for self-monitoring of blood glucose in diabetes management,” in 2016 Int. Conf. on Developments in eSystems Engineering (DeSE), Liverpool & Leeds, UK, 2016, pp. 45-50.

[9]    M. A. Al-Taee, W. Al-Nuaimy, Z. J. Muhsin, and A. Al-Ataby, “Robot assistant in management of diabetes in children based on the Internet of things,” IEEE Internet of Things Journal, vol. 4(2): 437-445, 2016.

Paper #: S2-KSp3B

AI-Driven Rapid-Acting Insulin Bolus Adjustment Using Insulin on Board for Type 1 Diabetes Management

Majid Al-Taee1,*, Zahra J. Muhsin2, Anas Al-Taee3

1Consultant of Computing and Intelligent Systems Engineering, Liverpool, United Kingdom

2School of Computing, Arden University, Manchester, United Kingdom

3Pines Health Services and Cary Medical Centre, Caribou, ME 04736, United States

*Contact: altaeema@gmail.com

 

Abstract:

Accurate calculation of rapid-acting insulin boluses is a cornerstone of effective blood glucose self-management for individuals with type 1 diabetes. Miscalculations can lead to hyperglycemia from under-dosing or hypoglycemia from over-dosing, both of which pose serious health risks. Thus, reliable tools that assist patients in precise dosing are essential. This study proposes a machine learning–based insulin bolus adjustment model, designed to underpin an AI-driven decision support system that provides real-time, individualized dosage recommendations. Unlike conventional bolus calculators that rely solely on carbohydrate intake and correction factors, the proposed system incorporates an additional dynamic variable to enhance accuracy and safety. A key contribution of this approach lies in its ability to estimate the residual insulin still active in the patient’s body, commonly referred to as “insulin on board” (IOB). This estimation is performed using an artificial neural network, which processes inputs such as the time elapsed since the last rapid-acting insulin injection and the pharmacological properties of the specific insulin formulation administered. By quantifying the IOB, the system adjusts the recommended dose to reduce the likelihood of insulin stacking and subsequent hypoglycemia. The final bolus recommendation integrates three main components: insulin required for carbohydrate coverage, a correction dose to address deviations from target glucose levels, and adjustments for contextual factors such as recent physical activity, stress, illness, alcohol consumption, or the macronutrient composition of the meal. The inclusion of IOB as an explicit adjustment further refines the accuracy of the dosage calculation. To implement this model, a fully functional application was developed and tested. 
Preliminary results indicate that the tool provides accurate, personalized insulin recommendations, supporting safer glycemic control and enhancing the quality of life for individuals with type 1 diabetes.
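The three-component dose calculation described above can be sketched in a few lines. This is a minimal illustration of the arithmetic only: all parameter names and values are hypothetical, and the ANN-derived insulin-on-board estimate is taken here as a given input rather than computed.

```python
def recommend_bolus(carbs_g, icr, bg, target_bg, isf, iob, context_adj=0.0):
    """Illustrative bolus recommendation combining the abstract's three
    components: carbohydrate coverage, a correction dose, and an
    insulin-on-board (IOB) adjustment. Parameter names are hypothetical."""
    carb_dose = carbs_g / icr            # units to cover carbohydrate intake
    correction = (bg - target_bg) / isf  # units to correct glucose deviation
    dose = carb_dose + correction + context_adj - iob
    return max(round(dose, 1), 0.0)      # never recommend a negative bolus

# 60 g carbs, ICR 10 g/U, glucose 180 vs. target 120 mg/dL, ISF 30, 1.5 U IOB
print(recommend_bolus(60, 10, 180, 120, 30, 1.5))  # → 6.5
```

In a real system the `iob` argument would come from the neural-network estimator, and `context_adj` would encode factors such as exercise or illness.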

Keywords: Artificial intelligence, Artificial neural networks, Rapid-acting insulin dosing, Decision support systems, Glycemic control, Type 1 diabetes.

References:

[1] Z. J. Muhsin, R. Qahwaji, M. AlShawabkeh, M. Al Bdour, S. AlRyalat and M. Al-Taee, “Two-stage ensemble learning framework for automated classification of keratoconus severity,” Computers in Biology and Medicine, vol. 195: 110568, 2025.

[2] Z. J. Muhsin, R. Qahwaji, M. AlShawabkeh, M. Al Bdour, S. AlRyalat and M. Al-Taee, “Highly efficient stacking ensemble learning model for automated keratoconus screening,” Eye and Vision, vol. 12(1): 1-20, 2025.

[3] Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. AlShawabkeh, M. Al Bdour, S. AlRyalat and M. Al-Taee, “Advances in machine learning for keratoconus diagnosis,” International Ophthalmology, vol. 45(128): 1-22, 2025.

[4] Z. J. Muhsin, R. Qahwaji, M. AlShawabkeh, S. A. AlRyalat, M. Al Bdour and M. Al-Taee, “Smart decision support system for keratoconus severity staging using corneal curvature and thinnest pachymetry indices,” Eye and Vision, vol. 11(1): 1-20, 2024.

[5] Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. Al Bdour, S. AlRyalat, M. AlShawabkeh and M. Al-Taee, “Clinician-assisted exploratory data analysis framework for early diagnosis of keratoconus,” in 2025 IEEE 22nd International Multi-Conference on Systems, Signals & Devices (SSD), Monastir, Tunisia, 17-20 February 2025, pp. 215-220.

[6] Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. Al Bdour, S. AlRyalat, M. AlShawabkeh and M. Al-Taee, “Keratoconus severity staging using random forest and gradient boosting ensemble techniques,” in 2025 IEEE 22nd International Multi-Conference on Systems, Signals & Devices (SSD), Monastir, Tunisia, 17-20 February 2025, pp. 593-598.

[7] Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. Al Bdour, S. AlRyalat, M. AlShawabkeh and M. Al-Taee, “Performance comparison of machine learning algorithms for keratoconus detection,” in IEEE 30th Int. Conf. on Telecoms (ICT’24), Amman, Jordan, 24-27 June 2024, pp. 1-6.

[8] B. Al-Bander, B. Williams, M. Al-Taee, W. Al-Nuaimy and Y. Zheng, “A novel choroid segmentation method for retinal diagnosis using deep learning,” in 2017 Int. Conf. on Developments in eSystems Engineering (DeSE’2017), Paris, France, 14-16 June 2017, pp. 182-187.

Paper #: S2-1

Molecular Detection of Mutations in BRCA1 gene (exon 11) in Iraqi Breast Cancer Patients and their Families

 

Shatha S. Jumaah1* and Khalid K. Tobal2

1Biology Dept., Education Faculty, University of Al-Hamdaniya, Iraq

2Oncology Dept., Guy’s & St. Thomas’s Hospital, UK

*Contact: drshathasaadi@uohamdaniya.edu.iq, Phone: +964 7731329529

 

Abstract:

Detecting gene mutations in cancer patients is important for identifying genes that can affect susceptibility and/or prognosis. BRCA1 and BRCA2 are considered the most common causes of hereditary breast cancer. The whole of exon 11 was covered for thirty-six blood samples of Iraqi breast cancer patients by dividing it into six amplicons. Blood samples were also obtained from one to two family members of each patient (first- and second-degree relatives), along with ten blood samples from apparently healthy controls. Conventional Sanger sequencing was used to detect BRCA1 mutations in their specific amplicons, with Mutation Surveyor software (release 3.24) used to screen for mutations. BRCA1 mutations were recorded in 30 cases, and all detected mutations were missense point mutations. The point mutations detected in the blood samples were (a.2128A>G, T710A), (c.2271C>T, P724L), (t.1949T>G, I650R), (t.2255T>C, L752S), (a.2938A>C, I980L) and (t.4032T>A, D1344E). None of the relatives included in the study showed similar mutations, except for (t.2255T>C, L752S), which appeared in a first-degree relative; this result was statistically significant (P<0.01). In conclusion, deleterious mutations can be passed through generations. The detected polymorphisms, confirmed by the sequencing results, are very important for the early detection of breast cancer.

Keywords: Breast Cancer, Mutation, High-Resolution Melt (HRM), BRCA1, Exon 11.

References:

[1] A. N. AL-Thaweni, W. H. Yousif, and S. S. Hassan, “Detection of BRCA1 and BRCA2 mutation for Breast Cancer in Sample of Iraqi Women above 40 Years,” Baghdad Science Journal, vol. 7, no. 1, pp. 394–400, 2010, doi: https://doi.org/10.21123/bsj.2010.7.1.394-400.

[2] N. A. S. Alwan, “Breast Cancer Among Iraqi Women: Preliminary Findings From a Regional Comparative Breast Cancer Research Project,” Journal of Global Oncology, vol. 2, no. 5, pp. 255–258, Oct. 2016, doi: https://doi.org/10.1200/jgo.2015.003087.

[3] B. M. Crossley et al., “Guidelines for Sanger sequencing and molecular assay monitoring,” Journal of Veterinary Diagnostic Investigation, vol. 32, no. 6, pp. 767–775, Feb. 2020, doi: https://doi.org/10.1177/1040638720905833.

[4] X. Fu, W. Tan, Q. Song, H. Pei, and J. Li, “BRCA1 and Breast Cancer: Molecular Mechanisms and Therapeutic Strategies,” Frontiers in Cell and Developmental Biology, vol. 10, Mar. 2022, doi: https://doi.org/10.3389/fcell.2022.813457.

[5] H. T. Hashim, M. A. Ramadhan, K. M. Theban, J. Bchara, A. El-Abed-El-Rassoul, and J. Shah, “Assessment of breast cancer risk among Iraqi women in 2019,” BMC Women’s Health, vol. 21, no. 1, Dec. 2021, doi: https://doi.org/10.1186/s12905-021-01557-1.

[6] E. M. Rosen, S. Fan, R. G. Pestell, and I. D. Goldberg, “BRCA1 gene in breast cancer,” Journal of Cellular Physiology, vol. 196, no. 1, pp. 19–41, May 2003, doi: https://doi.org/10.1002/jcp.10257.

[7] F. Sanger, S. Nicklen, and A. R. Coulson, “DNA sequencing with chain-terminating inhibitors,” Proceedings of the National Academy of Sciences, vol. 74, no. 12, pp. 5463–5467, Dec. 1977.

Paper #: S2-2

Highly Accurate Super Learner for Tomographic and Clinical Keratoconus Classification

Zahra J. Muhsin1,*, Rami Qahwaji1, Ibrahim Ghafir1, Saif AlRyalat2, Muawyah Al Bdour2, Mo’ath AlShawabkeh3, Majid Al-Taee4

1Faculty of Eng. and Digital Technologies, University of Bradford, UK

2School of Medicine, The University of Jordan, Amman, Jordan

3Faculty of Medicine, The Hashemite University, Zarqa, Jordan

4Consultant of Computing and Intelligent Systems Engineering, Liverpool, UK

*Contact: z.j.muhsin@bradford.ac.uk

 

Abstract:

Early diagnosis of keratoconus (KC) is difficult due to normal slit-lamp exams and minimal visual acuity impact. Although many machine learning models have been proposed, clinical adoption is limited by insufficient AI-clinical collaboration. This study proposes a super learner, jointly developed with clinicians, based on corneal tomography to more accurately differentiate Tomographic KC (TKC) from Clinical KC (CKC). The study utilized 79 Pentacam-derived features from 640 corneas (300 TKC, 340 CKC). To balance the dataset and avoid bias, an oversampling method was applied, ensuring 340 samples for each class. After pre-processing, the feature set was reduced by 26.58%. Statistical analysis based on relative importance scores and feature dependency, with input from clinicians, further reduced the features to 3.78% of the original set. The selected features focus on corneal curvature radii and thinnest pachymetry, excluding visual acuity tests. A super learner, combining the complementary strengths of the top-performing models, was then built by stacking Random Forest, Gradient Boosting, and Decision Tree models, with their predictions used as input to a Support Vector Machine meta-classifier for the final prediction. Performance of the developed model was evaluated both through standard ML metrics and through expert assessment by ophthalmologists. It achieved 99.26% accuracy, 99.27% precision, and 99.26% sensitivity; F1 and F2 scores were 99.42% and 99.41%, respectively. It correctly classified all CKC test cases and misclassified a single borderline TKC case out of 68. These results show robust, near-perfect performance in distinguishing TKC from CKC, demonstrating strong promise for aiding KC diagnosis with objective, consistent criteria and reduced human error. The approach simplifies the diagnostic process by relying solely on a few key Pentacam indices, and clinician involvement throughout development ensures clinical relevance, enhancing its potential for adoption in routine practice.
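The stacking architecture described above (three base learners feeding an SVM meta-classifier) can be sketched with scikit-learn, assumed available here. The data is synthetic, standing in for the Pentacam features, so the scores will not match the study's reported results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the tomographic feature set (real data not public).
X, y = make_classification(n_samples=680, n_features=3, n_informative=3,
                           n_redundant=0, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=42)

# Base learners stacked under an SVM meta-classifier, as in the abstract.
super_learner = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=42)),
                ("gb", GradientBoostingClassifier(random_state=42)),
                ("dt", DecisionTreeClassifier(random_state=42))],
    final_estimator=SVC())
super_learner.fit(X_tr, y_tr)
print(f"hold-out accuracy: {super_learner.score(X_te, y_te):.2f}")
```

`StackingClassifier` trains the base models with internal cross-validation and passes their out-of-fold predictions to the meta-classifier, which matches the two-level design the abstract outlines.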

Keywords: Corneal Tomography, Classification, Ensemble Learning, Keratoconus diagnosis, Ophthalmology.

References:

[1] Z. J. Muhsin, R. Qahwaji, M. AlShawabkeh, M. Al Bdour, S. AlRyalat and M. Al-Taee, “Two-stage ensemble learning framework for automated classification of keratoconus severity,” Computers in Biology and Medicine, vol. 195: 110568, 2025.

[2] Z. J. Muhsin, R. Qahwaji, M. AlShawabkeh, M. Al Bdour, S. AlRyalat and M. Al-Taee, “Highly efficient stacking ensemble learning model for automated keratoconus screening,” Eye and Vision, vol. 12(1): 1-20, 2025.

[3] Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. AlShawabkeh, M. Al Bdour, S. AlRyalat and M. Al-Taee, “Advances in machine learning for keratoconus diagnosis,” International Ophthalmology, vol. 45(128): 1-22, 2025.

[4] Z. J. Muhsin, R. Qahwaji, M. AlShawabkeh, S. A. AlRyalat, M. Al Bdour and M. Al-Taee, “Smart decision support system for keratoconus severity staging using corneal curvature and thinnest pachymetry indices,” Eye and Vision, vol. 11(1): 1-20, 2024.

[5] Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. Al Bdour, S. AlRyalat, M. AlShawabkeh and M. Al-Taee, “Clinician-assisted exploratory data analysis framework for early diagnosis of keratoconus,” in 2025 IEEE 22nd International Multi-Conference on Systems, Signals & Devices (SSD), Monastir, Tunisia, 17-20 February 2025, pp. 215-220.

[6] Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. Al Bdour, S. AlRyalat, M. AlShawabkeh and M. Al-Taee, “Keratoconus severity staging using random forest and gradient boosting ensemble techniques,” in 2025 IEEE 22nd International Multi-Conference on Systems, Signals & Devices (SSD), Monastir, Tunisia, 17-20 February 2025, pp. 593-598.

[7] Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. Al Bdour, S. AlRyalat, M. AlShawabkeh and M. Al-Taee, “Performance comparison of machine learning algorithms for keratoconus detection,” in IEEE 30th Int. Conf. on Telecoms (ICT’24), Amman, Jordan, 24-27 June 2024, pp. 1-6.

[8] B. Al-Bander, B. Williams, M. Al-Taee, W. Al-Nuaimy and Y. Zheng, “A novel choroid segmentation method for retinal diagnosis using deep learning,” in 2017 Int. Conf. on Developments in eSystems Engineering (DeSE’2017), Paris, France, 14-16 June 2017, pp. 182-187.

Paper #: S2-3

Bibliometric Analysis of the Correlation Between Deep Learning and Alzheimer’s Disease Detection

 

Raya Kh. Yashooa1*, Abd Al-Bar Al-Farha2, Wissam Albeer Nooh1, Shatha S. Jumaah1, Ali Q. Saeed3, Ahmad Samed Al-Adwan4, Ahmad Alshamayleh5, Taher M. Ghazal6,7

1Department of Biology, University of Al-Hamdaniya, Mosul, 41002, Iraq.

2Center of Technical Research, Northern Technical University, Mosul, 41002, Iraq.

3Scientific Affairs Division, Northern Technical University, Mosul, Ninevah, 41002, Iraq.

4Business Technology Department, Al-Ahliyya Amman University, Amman, Jordan

5Department of Data Science and AI, Al-Ahliyya Amman University, Amman, Jordan.

6Department of Networks and Cybersecurity, Al-Ahliyya Amman University, Amman, Jordan.

7Center for Cyber Security, Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor, Malaysia.

*Contact: Raya Kh. Yashooa, raya.yashooa@uohamdaniya.edu.iq, Phone: 009647711684077

 

Abstract:

This study presents a bibliometric examination of research on Alzheimer’s disease (AD) and deep learning. Data were obtained from the Web of Science Core Collection, Science Citation Index Expanded (SCI-E), using specific search phrases related to deep learning and Alzheimer’s disease. The collection comprises English-language papers in computer science published in recent years. Each publication was meticulously examined to exclude extraneous content, including books, retracted papers, and applications of artificial intelligence (AI) outside the study’s scope. Any discrepancies that arose during the selection process were resolved through group deliberation. The bibliometric analysis utilized three instruments: Bibliometric.com, VOSviewer, and Publish or Perish. Seventy-nine empirical research articles published across various nations from 2018 to 2024 were selected, and the quantity of pertinent articles has risen annually. China led in article production with n=27 (28.7%), followed closely by the USA with n=22 (23.4%) and the UK with n=11 (11.7%); together, these three countries accounted for 63.8% of the output. The leading journals identified were IEEE Access, which published five articles (17.2%), and IEEE Transactions on Medical Imaging, which published four articles (13.7%); both are based in the USA. Only the University of Pennsylvania, the University of Southern California, and the Chinese Academy of Sciences published more than one article in this sample. The primary areas of interest identified by keyword analysis include ‘detection,’ ‘diagnostic,’ ‘imaging,’ ‘Alzheimer’s,’ and ‘formula.’ The findings emphasize the transformation and advancement that deep learning has introduced to diagnosing and treating Alzheimer’s disease. Subsequent research should concentrate on equitable multinational collaborations that use AI to enhance physicians’ practices, improving the quality of care.

Keywords: Deep learning, Artificial intelligence, Alzheimer’s disease, Detection, Bibliometric analysis, VOSviewer.

References:

[1] R. A. Sperling et al., “Toward defining the preclinical stages of Alzheimer’s disease: Recommendations from the National Institute on Aging-Alzheimer’s Association workgroups,” Alzheimer’s & Dementia, vol. 7, no. 3, pp. 280-292, 2011.

[2] A. Atri, “The Alzheimer’s disease clinical spectrum: diagnosis and management,” Medical Clinics, vol. 103, no. 2, pp. 263-293, 2019.

[3] H. Matsuda, Y. Shigemoto, and N. Sato, “Neuroimaging of Alzheimer’s disease: focus on amyloid and tau PET,” Japanese Journal of Radiology, vol. 37, pp. 735-749, 2019.

[4] R. K. Yashooa and A. Q. Nabi, “The miR-146a-5p and miR-125b-5p levels as biomarkers for early prediction of Alzheimer’s disease,” Human Gene, vol. 34, p. 201129, 2022.

[5] R. K. Yashooa and A. Q. Nabi, “Telomerase Reverse Transcriptase and Telomerase RNA Component Gene Expression as Novel Biomarkers for Alzheimer’s disease,” Cellular and Molecular Biology, vol. 68, no. 9, pp. 14-20, 2022.

[6] L. E. Hebert, J. Weuve, P. A. Scherr, and D. A. Evans, “Alzheimer disease in the United States (2010–2050) estimated using the 2010 census,” Neurology, vol. 80, no. 19, pp. 1778-1783, 2013.

[7] Alzheimer’s Association, “Alzheimer’s disease facts and figures,” Alzheimer’s & Dementia, vol. 15, no. 3, pp. 321-387, 2019.

Paper #: S2-4

Deep Learning and Hybrid Techniques Evaluation for Enhancing Diagnostic Performance of Bladder Cancer

Safa Anmar Albarwary1,* and Berna Uzun2,3

1Mechatronics Engineering Department, Tishk International University, Erbil, Iraq

2Operation Research Center in Healthcare, Near East University, 99138 Nicosia (via Mersin 10, Turkey), Cyprus

3Department of Mathematics, Faculty of Letters and Sciences, Near East University, 99138 Nicosia (via Mersin 10, Turkey), Cyprus

*Contact: safa.anmar@tiu.edu.iq, phone: +9647502078406

 

Abstract:

Bladder cancer (Bla.C.) is among the most common urological malignancies; its initial examination and post-treatment monitoring require a precise and efficient diagnosis. Clinicians are therefore interested in new diagnostic tools to fill this gap. Recent research shows that artificial intelligence (AI) can improve bladder cancer diagnosis using deep learning (DL) or hybrid models (machine learning combined with deep learning). Data-driven methods can analyze complex medical imaging and clinical data, reduce diagnostic errors, and assist clinicians. To find the optimal diagnostic strategy, models must be compared across datasets using numerous evaluation criteria. This study uses the multi-criteria decision-making (MCDM) approach to analyze AI-based models for bladder cancer diagnosis. The fuzzy Preference Ranking Organization Method for Enrichment Evaluation (F-PROMETHEE) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) are used to rank the alternative deep learning models based on performance metrics and expert judgment. For this investigation, more than 15 of the most sophisticated deep learning and hybrid models reported between 2020 and 2025 were examined for accuracy, sensitivity, specificity, precision, and F1 score. The study provides a comprehensive comparative analysis of cutting-edge AI models for bladder cancer diagnosis, using extensive performance indicators and sophisticated decision-making methodologies. Transformer-based designs have emerged as the leading models, demonstrating superior accuracy, sensitivity, precision, and F1-score, highlighting their capacity to detect nuanced patterns in histopathology images and reduce diagnostic omissions. By utilizing TOPSIS and fuzzy PROMETHEE to integrate multi-criteria data and manage uncertainty in imputed values, we attained a sophisticated, clinically relevant ranking of models that can guide more assured, context-specific judgments on AI use in bladder cancer diagnosis.
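The TOPSIS step described above can be sketched as follows. The decision matrix, weights, and scores here are illustrative stand-ins, not the study's actual model evaluations.

```python
import math

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS sketch: rank alternatives (rows) on criteria (columns).
    benefit[j] is True when higher is better (e.g. accuracy, sensitivity)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.dist(row, ideal)   # distance to the ideal solution
        d_worst = math.dist(row, anti)   # distance to the anti-ideal solution
        scores.append(d_worst / (d_best + d_worst))  # closeness coefficient
    return scores

# Three hypothetical models scored on accuracy, sensitivity, specificity.
models = [[0.96, 0.95, 0.94], [0.92, 0.97, 0.90], [0.89, 0.88, 0.93]]
scores = topsis(models, weights=[0.4, 0.3, 0.3], benefit=[True, True, True])
best = max(range(len(scores)), key=scores.__getitem__)  # index of top model
```

The closeness coefficient lies in [0, 1]; the model nearest the ideal point and farthest from the anti-ideal point ranks first. The fuzzy PROMETHEE side of the study follows a different, outranking-based logic and is not sketched here.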

Keywords: Bladder Cancer, Deep Learning (DL), Vision Transformers (ViTs), MCDM, Fuzzy PROMETHEE II, TOPSIS.

References:

[1] L. Cai et al., “Deep learning on T2WI to predict the muscle-invasive bladder cancer: a multi-center clinical study,” Scientific Reports, vol. 15, no. 1, Mar. 2025, doi: 10.1038/s41598-024-82909-3.

[2] W. K. Hwang et al., “Artificial Intelligence-Based Classification and Segmentation of bladder cancer in cystoscope images,” Cancers, vol. 17, no. 1, p. 57, Dec. 2024, doi: 10.3390/cancers17010057.

[3] Y. Mehmood and U. I. Bajwa, “Brain tumor grade classification using the ConvNext architecture,” Digital Health, vol. 10, Jan. 2024, doi: 10.1177/20552076241284920.

[4] D. I. Emegano, M. T. Mustapha, I. Ozsahin, D. U. Ozsahin, and B. Uzun, “Advancing Prostate Cancer Diagnostics: A ConvNeXt Approach to Multi-Class Classification in Underrepresented Populations,” Bioengineering, vol. 12, no. 4, p. 369, Apr. 2025, doi: 10.3390/bioengineering12040369.

[5] B. Ozdemir and I. Pacal, “A robust deep learning framework for multiclass skin cancer classification,” Scientific Reports, vol. 15, no. 1, Feb. 2025, doi: 10.1038/s41598-025-89230-7.

[6] N. Drissi, H. El-Kassabi, and M. A. Serhani, “A multi-criteria decision analysis framework for evaluating deep learning models in healthcare research,” Decision Analytics Journal, vol. 13, p. 100523, Oct. 2024, doi: 10.1016/j.dajour.2024.100523.

[7] H. Erdagli, D. U. Ozsahin, and B. Uzun, “Evaluation of myocardial perfusion imaging techniques and artificial intelligence (AI) tools in coronary artery disease (CAD) diagnosis through multi-criteria decision-making method,” Cardiovascular Diagnosis and Therapy, vol. 14, no. 6, pp. 1134–1147, Dec. 2024, doi: 10.21037/cdt-24-237.

Session 3 [online]: Healthcare Applications and Networks

Monday, December 15, 2025, 16h00 – 17h15

Paper #: S3-1

Launching and Strategic Implementation of the First Digital Bank in Iraq

 

Nabeel Raheem Alebady

Chairman of the Founding Committee of the Iraqi Digital Bank, Iraq

Email: Nabeel.Alebady@DIB-iq.com

Abstract:

This presentation will provide an in-depth analysis of the Iraqi Digital Bank’s groundbreaking journey to redefine traditional banking. We will examine its core strategy of integrating Digital Technology and AI to create a seamless, secure, and customer-centric banking ecosystem. The discussion will center on how this transformation:

  • Maximizes Customer Perceived Value through instant, accessible, and personalized financial products.
  • Establishes a powerful Unique Selling Proposition (USP) via a fully digital, branchless model that prioritizes user experience and operational excellence.
  • Implements advanced Digital and Mobile Banking solutions, including biometric authentication (fingerprint, voice, facial recognition) and electronic Know-Your-Customer (eKYC) protocols, to ensure robust security and build trust.
  • Aims to fundamentally alter customer financial behavior, encouraging a shift from cash-based transactions to digital financial inclusion, thereby contributing to Iraq’s broader economic digital transformation.

The paper examines the strategic vision of a digitally oriented bank, highlighting customer experience and value creation. It addresses security, innovation, and the establishment of trust in digital finance, together with the impact on financial behavior and broader societal transformation.

Keywords: Digital bank, Mobile banking, Selling Proposition, Bank security, Societal transformation.

Paper #: S3-2

AI-Driven Glaucoma Detection and Progression Prediction Using Deep Learning and Ocular Imaging

Ali Al Ataby

Electrical and Electronics Engineering Department,

American University of Ras Al Khaimah, Ras Al Khaimah, UAE

*Contact: ali.ataby@aurak.ac.ae

 

Abstract:

Glaucoma is recognized as one of the leading causes of irreversible blindness worldwide, and its onset often passes undetected until the disease has progressed to a severe, vision-compromising state. As damage to the optic nerve is permanent, early detection and accurate prediction of glaucoma development are both necessary to preserve functional vision and to guide prompt clinical intervention. Traditional diagnostic methods, while useful in certain contexts, are limited in efficacy, accessibility, and sensitivity. In response to these limitations, this research presents a deep learning framework for computer-aided glaucoma diagnosis and risk estimation using fundus imaging and optical coherence tomography (OCT) data. The approach employs a convolutional neural network (CNN) trained on publicly available datasets to identify significant structural markers of glaucoma, including optic nerve head cupping and thinning of the retinal nerve fiber layer. By detecting subtle patterns that may be missed by conventional analysis, the model aims to achieve sensitivity and specificity superior to traditional diagnostic accuracy. Beyond detection, the platform will also enable real-time risk stratification, so that patients requiring closer monitoring or immediate therapy can be identified earlier. By incorporating artificial intelligence into ophthalmic diagnosis, the proposed research demonstrates the potential of intelligent systems to augment large-scale screening programs, especially in regions that continue to lack access to specialist health facilities. Lastly, the study highlights the pivotal role of AI-based tools in advancing preventive healthcare interventions and reducing the global burden of glaucoma-related blindness.

Keywords: Glaucoma, Deep learning, Fundus imaging, Optical coherence tomography, Convolutional neural networks.

References:

[1] S. Ajitha, J. D. Akkara, and M. V. Judy, “Identification of glaucoma from fundus images using deep learning techniques,” Indian Journal of Ophthalmology, vol. 69, no. 10, pp. 2702–2709, Oct. 2021, doi: 10.4103/ijo.IJO_92_21. PMID: 34571619; PMCID: PMC8597466.

[2] A. A. Soofi and F.-e-Amin, “Exploring deep learning techniques for glaucoma detection: A comprehensive review,” arXiv preprint arXiv:2311.01425, 2023.

[3] J. A. Kim, H. Yoon, D. Lee, et al., “Development of a deep learning system to detect glaucoma using macular vertical optical coherence tomography scans of myopic eyes,” Scientific Reports, vol. 13, no. 1, p. 8040, May 2023, doi: 10.1038/s41598-023-34794-5.

[4] M. Ashtari-Majlan, M. M. Dehshibi, and D. Masip, “Glaucoma diagnosis in the era of deep learning: A survey,” Expert Systems with Applications, vol. 256, p. 124888, Jan. 2024, doi: 10.1016/j.eswa.2024.124888.

[5] S. Hussain, J. Chua, D. Wong, et al., “Predicting glaucoma progression using deep learning framework guided by generative algorithm,” Scientific Reports, vol. 13, no. 1, p. 19960, Nov. 2023, doi: 10.1038/s41598-023-46253-2.

[6] J. N. K., M. H. Ali, S. Senthil, and M. B. Srinivas, “Early detection of glaucoma: feature visualization with a deep convolutional network,” Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol. 12, no. 1, pp. 1–9, 2024, doi: 10.1080/21681163.2024.2350508.

Paper #: S3-3

AI-Powered Gaze Tracking to Improve Prosthetic Arm Users Experience in Daily Life Tasks

 

Musa Al-Yaman

Mechatronics Engineering Department, University of Jordan, Amman, Jordan

Contact: m.alyaman@ju.edu.jo

 

Abstract:

Understanding how upper-limb prosthetic users adapt to and interact with their devices during everyday activities is crucial for improving prosthetic design, tailoring rehabilitation strategies, and supporting long-term user independence. A key factor in this process is gaze behaviour, the way individuals visually engage with objects while performing real-world tasks. This offers insights into motor planning, attention, and task execution. Wearable eye-tracking devices can capture this data in natural settings, but traditional analysis methods are often slow, labour-intensive, and prone to error, limiting their scalability and practical value in clinical and research environments. To address these challenges, this work introduces an AI-powered system that automates the high-precision analysis of gaze data collected via wearable eye-trackers during daily activities. Unlike live-tracking approaches, our method focuses on post-session video recordings, allowing researchers and clinicians to extract detailed patterns of visual attention, cognitive load, and task coordination over time. For instance, when a therapist reviews a patient pouring water or buttoning a shirt, the system can reveal exactly where the gaze lingers, how frequently it shifts, and where difficulties arise, capturing subtle behaviours often invisible to the naked eye. These insights can also guide prosthetic designers in refining control algorithms and interfaces, making devices more intuitive, responsive, and better aligned with natural human movement. An interactive demo has been developed to showcase the system’s capabilities, allowing participants to explore sample visualizations, discuss ethical considerations, and examine pathways for integration into clinical or research workflows. The proposed approach achieved 93.3% accuracy on medically validated datasets and processes data 97.2% faster than existing methods, reducing hours of manual review to minutes of actionable insight. 
We invite collaborations with clinicians, rehabilitation centres, and assistive technology developers to co-design future pilots and expand the system’s impact in real-world settings.
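The kind of post-session analysis described above — how long the gaze lingers on each object during a task — can be sketched as a dwell-time computation over recorded gaze samples. Everything here is illustrative: in the real system the object regions would be derived by computer vision, not supplied by hand.

```python
def dwell_times(samples, regions):
    """Toy post-session gaze analysis: total dwell time per labelled region.
    samples: (timestamp_s, x, y) gaze points from a recorded session.
    regions: name -> bounding box (x0, y0, x1, y1), assumed given here."""
    totals = {name: 0.0 for name in regions}
    # Credit each inter-sample interval to the region containing the sample.
    for (t0, x, y), (t1, _, _) in zip(samples, samples[1:]):
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += t1 - t0
                break
    return totals

# Hypothetical session: gaze rests on a cup, then shifts to a jug.
samples = [(0.0, 5, 5), (0.1, 5, 5), (0.2, 50, 50), (0.3, 50, 50)]
regions = {"cup": (0, 0, 10, 10), "jug": (40, 40, 60, 60)}
totals = dwell_times(samples, regions)
```

Gaze-shift frequency and fixation counts follow the same pattern: iterate over consecutive samples and aggregate per region or per transition.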

Keywords: Video analysis, Computer vision, Deep learning, Gaze tracking, Upper limb tasks.

References:

[1] M. Al-Yaman, D. Alswaiti, A. Alsharkawi, and M. Al-Taee, “A cost-effective modular laboratory solution for industrial automation and applied engineering education,” MethodsX, vol. 14, p. 103388, May 2025, doi: 10.1016/j.mex.2025.103388. PMID: 40529514; PMCID: PMC12171526.

[2] J. J. Brito, P. Toledo, and S. Alayon, “Virtual laboratory for automation combining Inventor 3D models and Simulink control models,” in 2018 IEEE Global Engineering Education Conference (EDUCON), Tenerife, 2018, pp. 555–562, doi: 10.1109/EDUCON.2018.8363279.

[3] I. Burhan, S. Talib, and A. A. Azman, “Design and fabrication of Programmable Logic Controller kit with multiple output module for teaching and learning purposes,” in 2012 IEEE 8th International Colloquium on Signal Processing and Its Applications, Malacca, Malaysia, 2012, pp. 14–18, doi: 10.1109/CSPA.2012.6194681.

[4] M. C. Carvalho, “Using low-cost equipment for laboratory automation,” in Practical Laboratory Automation, John Wiley & Sons, Ltd, 2016, pp. 71–78, doi: 10.1002/9783527801954.ch7.

[5] S. Çeven and A. Albayrak, “Design and implementation of modular test equipment for process measurements in mechatronics education,” Computer Applications in Engineering Education, vol. 28, pp. 324–337, 2020, doi: 10.1002/cae.22196.

Paper #: S3-4

A Wearable Real-Time Facial Recognition System for Assisting People with Visual Impairment

Thair A. Kadhim1,*, Sajad T. Abdalwahid2, Ahmed T. Abdalwahid3, Abdullah T. Abdalwahid4

1Babylon Directorate of Education, Ministry of Education, Babylon, Iraq

2Computer Engineering Department, Semnan University, Iran

3Photographic processing specialists, Babylon, Iraq

4Cybersecurity Researcher, Babylon, Iraq

*Contact: altaeeth@gmail.com

 

Abstract:

Visual impairment limits a person’s ability to recognize others and engage in social interactions. Although assistive technologies such as screen readers and navigation aids have improved accessibility, real-time face recognition for visually impaired users remains a challenge. Existing solutions often depend on cloud computing or external devices, leading to latency, reliance on internet connectivity, and higher cost. To overcome these issues, this study proposes a lightweight wearable system that delivers real-time facial recognition through smart glasses and a Raspberry Pi minicomputer.  The Raspberry Pi functions both as the computational unit and as a wireless access point, thus avoiding reliance on cloud services or intermediary devices. This architecture not only enhances processing speed and reliability but also reduces overall device weight and cost, making it more practical for daily use. In operation, human faces are captured in real-time video frames via the ESP32 camera and transmitted over Wi-Fi to the Raspberry Pi. Here, deep learning algorithms perform face detection and recognition. Once an individual is identified, their name is transmitted via Bluetooth to an earbud, providing the user with discreet auditory feedback. During system development, challenges such as optimizing recognition speed, managing power consumption, and ensuring hardware integration were successfully addressed. Experimental evaluation revealed outstanding performance: the system required approximately 0.821 seconds to deliver audible feedback after image capture. Face detection and recognition accuracies reached 99.33% and 99.31%, respectively, surpassing results reported in previous studies. These encouraging outcomes highlight the potential of the proposed system to substantially improve independence, confidence, and quality of life for visually impaired individuals, while also demonstrating promise for broader applications in wearable assistive technologies.
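The recognition step of the pipeline above — matching a detected face against enrolled identities before announcing a name — can be illustrated as a nearest-neighbour comparison of face embeddings. This is a generic sketch, not the system's actual implementation: the deep-learning stage that produces the embeddings is assumed, and the gallery, threshold, and names are hypothetical.

```python
import math

def best_match(probe, gallery, threshold=0.6):
    """Return the enrolled identity whose embedding is closest to the probe
    face embedding, or None if no identity is within the distance threshold
    (i.e., the face is unknown). All names and values are illustrative."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    name, d = min(((n, dist(probe, e)) for n, e in gallery.items()),
                  key=lambda t: t[1])
    return name if d <= threshold else None

# Hypothetical 3-dimensional embeddings (real ones are much longer).
gallery = {"Alice": [0.0, 0.0, 1.0], "Bob": [1.0, 0.0, 0.0]}
print(best_match([0.9, 0.1, 0.0], gallery))  # → Bob
```

In the described system, the returned name would then be sent over Bluetooth to the earbud for auditory feedback; an unknown face (a `None` result) could be announced as "unrecognized".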

Keywords: Wearable device; Face recognition; Real-time; Raspberry Pi; ESP32-CAM; Smart glasses.

References:

[1] S. N. Ilkal, S. S. Munshi, S. Katarki, N. Kotwal, M. Chikkond, and A. Makandar, “AI Based Facial Recognition Smart Glass for Visually Impaired Person,” Saudi J. Eng. Technol., vol. 10, no. 06, pp. 270–276, 2025, doi: 10.36348/sjet.2025.v10i06.003.

[2] T. A. Kadhim, W. Hariri, N. Smaoui Zghal, and D. Ben Aissa, “A face recognition application for Alzheimer’s patients using ESP32-CAM and Raspberry Pi,” J. Real-Time Image Process., vol. 20, no. 5, 2023, doi: 10.1007/s11554-023-01357-w.

[3] T. A. Kadhim, N. S. Zghal, W. Hariri, and D. Ben Aissa, “A Review of Alzheimer’s Disease and Emerging Patient Support Systems,” in 20th International Multi-Conference on Systems, Signals and Devices (IEEE SSD’23), 2023.

[4] B. C. Ubochi, A. E. Olawumi, A. O. Shuiabu, and K. F. Akingbade, “A Wearable Facial Recognition Device for the Visually Impaired,” J. Eng. Eng. Technol., vol. 18, no. 1, pp. 48–55, 2024. [Online]. Available: https://doi.org/10.51459/futajeet.

[5] A. A. A. Aboluhom and I. Kandilli, “Face recognition using deep learning on Raspberry Pi,” Comput. J., vol. 67, no. 10, pp. 3020–3030, 2024, doi: 10.1093/comjnl/bxae066.

[6] T. A. Kadhim, N. Smaoui Zghal, W. Hariri, and D. Ben Aissa, “Face Recognition in Multiple Variations Using Deep Learning and Convolutional Neural Networks,” in 9th Int. Conf. Sci. Electron. Technol. Inf. Telecommun. (SETIT’22), 2022.

 

Paper #: S3-5

6G-XR: Pioneering AI-Driven Extended Reality in Next-Generation 6G Networks

Mohammed Al-Rawi

Instituto de Telecomunicações, Aveiro, Portugal

Contact: al-rawi@ua.pt

Abstract:

The 6G eXperimental Research infrastructure (6G-XR; https://6g-xr.eu/) project, funded by Horizon Europe and the Smart Networks and Services Joint Undertaking (SNS JU), aims to strengthen European leadership in 6G technologies by developing a cutting-edge experimental infrastructure for next-generation extended reality (XR) services. The project seeks to enhance the performance and scalability of advanced 5G and 6G architectures, focusing on immersive applications such as holographic communications and digital twins. AI plays a pivotal role in 6G-XR by optimizing end-to-end service provisioning, edge computing, and cloud continuum integration. The project employs AI-driven algorithms to enhance network slicing, enabling dynamic resource allocation for demanding XR use cases. Machine learning models are utilized to predict and manage network traffic, ensuring ultra-low latency and high reliability for real-time holographic communications. Additionally, AI-powered analytics facilitate energy optimization, aligning with sustainability goals by minimizing power consumption across network and platform levels. Our experimental infrastructure, comprising testbeds such as OULU 5GTN, VTT 5GTN, i2CAT, and 5TONIC, supports AI-based validation of key performance indicators. Through open calls, 6G-XR fosters collaboration with third parties, enabling SMEs, academia, and industry to develop and test AI-enhanced XR applications. These efforts contribute to global 6G standardization and demonstrate the technological feasibility of societal objectives. 6G-XR is paving the way for a scalable, intelligent, and sustainable 6G ecosystem, revolutionizing immersive communication and collaboration. In this talk, I will provide an overview of the 6G-XR project, highlighting its implementation, the significance of 6G networks, the funding framework, the key results achieved, and future directions.

Keywords: Mobile Networks, Extended Reality, Virtual Reality, 6th Generation (6G) Networks

Paper #: S3-6

Comparative Analysis of Machine Learning Classifiers for Network Intrusion Detection

Khalid A. Kadhim

Babylon Electricity Distribution Company, Babylon, Iraq

Contact: khalidkadhim629@gmail.com

 

Abstract:

The rapid growth of network-based services has led to increased storage and exchange of personal and sensitive information. At the same time, intrusion techniques have become increasingly sophisticated, surpassing the capabilities of traditional security measures. To address this, anomaly detection methods are employed to identify malicious network traffic. Data mining and machine learning techniques, particularly classification, are widely used for this purpose. Classification enables systems to learn patterns from labelled datasets and predict whether new network packets are normal or malicious, supporting accurate and timely intrusion detection. This study evaluates the performance of three classifiers: Support Vector Machine (SVM), Random Forest (RF), and Feed-Forward Deep Neural Network (DNN), for use in intrusion detection systems. The UNSW-NB15 dataset is used for training and testing the proposed classifiers. To ensure efficient classification, simplified topologies are employed. Each classifier performs two tasks: (i) binary classification to distinguish normal from attack traffic and (ii) multi-class classification to identify specific attack types. Experimental results showed that the DNN, with three hidden layers of 32 neurons each, achieved the best trade-off between accuracy and speed. For binary classification, it reached 99.27% accuracy with an average prediction time of 0.7 µs per packet. The Random Forest model achieved slightly higher accuracy (99.6%) but required 8.54 µs per prediction, making it less efficient. The SVM model lagged behind with 98.7% accuracy and 218.3 µs per prediction. In multi-class classification, the DNN again outperformed the others with 90.82% accuracy and 0.89 µs per prediction. Random Forest achieved 87.92% accuracy with 17.28 µs per prediction, while SVM performed poorly with 70.43% accuracy and 709.65 µs per prediction.
Overall, the results demonstrated that the DNN offers the most effective and efficient solution for deployment in real-time intrusion detection systems.
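The evaluation protocol, training each classifier and then measuring both accuracy and per-packet prediction time, can be sketched as below. The UNSW-NB15 dataset is not bundled here, so a synthetic dataset stands in; the DNN topology mirrors the three 32-neuron hidden layers named in the abstract, while all other hyperparameters are illustrative defaults rather than the authors' settings.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for UNSW-NB15 (binary: normal vs. attack traffic).
X, y = make_classification(n_samples=4000, n_features=40, n_informative=20,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    # Three hidden layers of 32 neurons, mirroring the topology in the abstract.
    "DNN": MLPClassifier(hidden_layer_sizes=(32, 32, 32), max_iter=500,
                         random_state=0),
}
for name, model in models.items():
    model.fit(Xtr, ytr)
    t0 = time.perf_counter()
    acc = model.score(Xte, yte)
    per_packet = (time.perf_counter() - t0) / len(Xte)
    print(f"{name}: accuracy={acc:.3f}, {per_packet * 1e6:.1f} µs/packet")
```

The absolute timings will differ from the paper's figures (which depend on hardware and data), but the same qualitative trade-off between ensemble accuracy and neural-network inference speed can be observed.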

Keywords: Network security; Intrusion detection system; Machine learning; Classification.

References:

[1]  Y. Harbi, Z. Aliouat, A. Refoufi, and S. Harous, “Recent security trends in internet of things: A comprehensive survey”, IEEE Access, Vol. 9, pp. 113292–113314, 2021.

[2] A. H. Najim et al., “An IoT healthcare system with deep learning functionality for patient monitoring”, International Journal of Communication Systems, p. e6020, 2024.

[3] M. A. Al-Garadi, A. Mohamed, A. K. Al-Ali, X. Du, I. Ali, and M. Guizani, “A survey of machine and deep learning methods for internet of things (IoT) security”, IEEE Communications Surveys & Tutorials, Vol. 22, No. 3, pp. 1646–1685, 2020.

[4] M. N. Kadhim, A. H. Mutlag, and D. A. Hammood, “Vehicle detection and classification from images/videos using deep learning architectures: A survey”, In: AIP Conference Proceedings, AIP Publishing, 2024.

[5] G. Liu, H. Zhao, F. Fan, G. Liu, Q. Xu, and S. Nazir, “An enhanced intrusion detection model based on improved kNN in WSNs”, Sensors, Vol. 22, No. 4, p. 1407, 2022.

[6]  H. Hu, J. Liu, X. Zhang, and M. Fang, “An effective and adaptable K-means algorithm for big data cluster analysis”, Pattern Recognition, Vol. 139, p. 109404, 2023.

[7] A. H. Najim and S. Kurnaz, “Study of integration of wireless sensor network and Internet of Things (IoT)”, Wireless Personal Communications, pp. 1–14, 2023.

Tuesday, December 16, 2025

 

Plenary Session 2: AI in Medical Diagnosis and Education

Tuesday, December 16, 2025, 09h30 – 10h30

Paper #: PL2-1

AI Assisted Diagnostics

Rami Qahwaji

School of Computing and Engineering, University of Bradford, Bradford, UK

Contact: r.s.r.qahwaji@bradford.ac.uk

Abstract:

Artificial Intelligence (AI) and visual computing are transforming healthcare by enabling faster and more accurate disease diagnosis, as well as personalized treatment. This talk explores real-world applications of AI in medical fields such as ophthalmology, neurology, surgery, and primary care, where complex, high-dimensional data are used to enhance patient outcomes and improve efficiency. In this talk, we will present ongoing research in digital health and diagnostics, highlighting collaborations with healthcare providers and industry partners. Key topics include automated medical image analysis, predictive modelling, remote diagnostics, and wearable technologies. The talk will also address challenges in data processing/analytics and knowledge representation, along with the ethical and regulatory considerations of integrating AI into clinical practice.

Keywords: Digital Health; AI; Medical Imaging; Diagnostics; Ophthalmology; Neurology; Surgery; Regulatory Challenges; Healthcare Innovation

Paper #: PL2-2

We Shouldn’t Be Teachers – Educating in the Age of Intelligent Systems

 

Waleed Al-Nuaimy

School of Engineering, University of Liverpool, Liverpool L69 3GJ, UK

Contact: wax@liverpool.ac.uk

 

Abstract:

Artificial intelligence (AI) is transforming not only how students learn but also what it means to teach. As generative AI systems begin to perform tasks that were once considered uniquely human, such as writing, coding, and problem-solving, the traditional role of the educator as the primary source of knowledge is becoming increasingly outdated. If intelligent systems can already explain, assess, and adapt to learners’ needs, then perhaps we should stop thinking of ourselves simply as teachers. Our purpose must shift towards developing critical thinking, ethical judgement, creativity, and resilience: the qualities that make learning truly human.

This talk argues that the educator’s role is moving from transmission to transformation, from the “sage on the stage” to the designer of learning experiences. It explores how universities can reshape curricula, assessment, and feedback in a world where AI is both widespread and unavoidable. Drawing on examples from engineering education and institutional practice, the talk shows how scaffolding, authenticity, and mentoring can work alongside intelligent tools to prevent learned helplessness while supporting personalization and learner agency. The central question is not whether AI will replace educators, but whether educators can redefine their role quickly enough to remain essential. We must stop protecting the tasks that machines can do better and focus instead on designing learning that only humans can make meaningful.

Keywords: AI, Higher Education Futures, Curriculum Design, Assessment & Authenticity, Human–AI Collaboration, Learning Integrity, Educational Innovation, Digital Pedagogy.

Session 4: Financial Security and Smart Applications

Tuesday, December 16, 2025, 10h30 – 11h45

Paper #: S4-1

AI-Based Cybersecurity and Quantum Communications: Trends and Challenges in Securing Financial Transactions

Qaysar Salih Mahdi1,*, Sherwan Anwar Mustafa2

1School of Mechatronic Systems Engineering, Tishk International University, Iraq

2Iraqi Private Banks League, Iraq

*Contact: Qaysar.mahdy@tiu.edu.iq

Abstract:

The evolution of cyber threats against financial institutions demands robust, future-ready security architectures. This paper investigates the convergence of artificial intelligence (AI) and quantum communication as a next-generation solution to secure financial transactions. We review the current cybersecurity landscape, introduce key quantum communication concepts including quantum key distribution (QKD), and explore how AI can adaptively monitor and defend transaction environments. A hybrid model integrating AI-based anomaly detection and quantum-safe cryptography is simulated in MATLAB, showing superior resilience against both classical and quantum-enabled attacks. Results show the hybrid system producing a detection accuracy of ≈97.9%, a false-positive rate under 2.5%, and a simulated QKD key rate of ≈25 kbps over a 50 km fiber equivalent under assumed loss/noise parameters. Challenges such as implementation costs, hardware limitations, and policy gaps are discussed, with future research directions outlined to pave the way for practical deployment.
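For readers wanting a sense of where key-rate figures of this kind come from, the following is a textbook asymptotic BB84 estimate, not the authors' MATLAB model: the secret fraction is 1 − 2·h2(QBER), scaled by fiber transmittance, detector efficiency, and basis sifting. All parameter values below are illustrative assumptions, and the resulting rate depends strongly on them.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0 or p >= 1:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bb84_key_rate(distance_km, pulse_rate_hz=1e9, qber=0.02,
                  loss_db_per_km=0.2, detector_eff=0.2, sifting=0.5):
    """Asymptotic BB84 secret-key rate (bits/s), textbook formula:
    R = pulse_rate_hz * transmittance * detector_eff * sifting * (1 - 2*h2(QBER)).
    All defaults are illustrative, not the paper's simulation parameters."""
    transmittance = 10 ** (-loss_db_per_km * distance_km / 10)
    secret_fraction = max(0.0, 1 - 2 * h2(qber))
    return pulse_rate_hz * transmittance * detector_eff * sifting * secret_fraction

print(f"{bb84_key_rate(50) / 1e3:.1f} kbit/s at 50 km")
```

Note that the secret fraction vanishes once the QBER exceeds roughly 11%, which is why QKD links degrade abruptly rather than gracefully as channel noise grows.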

 

Keywords: Artificial Intelligence, Quantum Key Distribution, Cybersecurity, Financial transactions, 6G security.

References:

[1]  Bennett, C. H., & Brassard, G. (1984). Quantum cryptography: Public key distribution and coin tossing. Proceedings of IEEE International Conference on Computers, Systems and Signal Processing.

[2]  Lo, H.-K., Curty, M., & Tamaki, K. (2014). Secure quantum key distribution. Nature Photonics.

[3]  Scarani, V., et al. (2009). The security of practical quantum key distribution. Reviews of Modern Physics.

[4]  MathWorks. Sensor Fusion and Tracking Toolbox Documentation (2023).

[5]  MathWorks. Aerospace Toolbox & Blockset (2023).

[6]  Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. MIT Press.

[7]  Richards, M. A. (2014). Fundamentals of Radar Signal Processing. McGraw-Hill.

[8]  Skolnik, M. I. (2008). Radar Handbook. McGraw-Hill.

[9]  Collinson, R. P. G. (2011). Introduction to Avionics Systems. Springer.

[10] National Institute of Standards and Technology (NIST). Post-Quantum Cryptography Standardization Project. (Use latest NIST reports for PQC algorithms.)

[11] Grover, L. K. (1996). A fast quantum mechanical algorithm for database search. (contextual)

Paper #: S4-2

Speech Commands Recognition using Convolutional Neural Networks for Mobile Robot Control

Munaf Salim Najim AL-Din* and Said Ali Said Al-Balushi

Department of Electrical and Computer Engineering,

College of Engineering and Architecture, University of Nizwa, Nizwa, Oman

*Contact: munaf@unizwa.edu.om

 

Abstract:

Voice-controlled mobile robots represent a promising step toward intuitive human–robot interaction, offering hands-free operation suitable for assistive technologies, industrial automation, and smart environments. This study presents the design, implementation, and evaluation of a speech command recognition framework for real-time mobile robot control using convolutional neural networks (CNNs). The system integrates MATLAB-based deep learning with Arduino-driven hardware in a differential-drive robot platform. A curated subset of the Google Speech Commands Dataset was used, where selected utterances underwent a robust preprocessing pipeline including noise reduction and adaptive filtering, followed by Bark-frequency cepstral coefficient (BFCC) feature extraction to enhance resilience under diverse acoustic conditions. Five CNN architectures—AlexNet, GoogLeNet, SqueezeNet, ShuffleNet, and a custom lightweight CNN—were trained and validated on 5,400 preprocessed samples. Comparative evaluation demonstrated that ShuffleNet achieved the best balance between recognition accuracy (95.8%) and real-time latency (0.92 s), while the custom CNN provided the lowest inference time (0.75 s), making it highly suitable for embedded platforms. The recognized commands were wirelessly transmitted to the mobile robot via an RF communication link, where an Arduino Uno decoded the signals and executed the corresponding motor actions. System-level experiments, conducted under varied acoustic conditions and with previously unseen speakers, confirmed high execution success rates (up to 96.1%) and low-latency responses across all tested models. The integration of BFCC-based feature extraction and adaptive filtering ensured robustness against environmental noise, while the modular architecture enabled scalability and future enhancements. This research validates the feasibility of CNN-based speech recognition for reliable and responsive voice-driven robotic control.
The proposed system provides a practical foundation for assistive robotics, autonomous navigation, and smart environments.
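The Bark-frequency cepstral step can be sketched in a few lines. The paper's exact BFCC recipe (filter count, Bark mapping variant, liftering, and so on) is not given, so this fragment uses one common Hz-to-Bark formula and a standard power-spectrum → triangular-filterbank → log → DCT-II pipeline with illustrative sizes.

```python
import numpy as np

def bark(f_hz):
    """One common Hz-to-Bark mapping (an assumption; the paper's variant is unspecified)."""
    return 13.0 * np.arctan(0.00076 * f_hz) + 3.5 * np.arctan((f_hz / 7500.0) ** 2)

def bfcc(frame, fs=16000, n_filters=20, n_coeffs=13):
    """Bark-frequency cepstral coefficients for one windowed frame.
    Steps: power spectrum -> triangular Bark filterbank -> log -> DCT-II."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    # Triangular filters with centres equally spaced on the Bark axis.
    edges = np.linspace(bark(freqs[1]), bark(freqs[-1]), n_filters + 2)
    b = bark(freqs)
    fb_energies = np.empty(n_filters)
    for i in range(n_filters):
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        rising = np.clip((b - lo) / (mid - lo), 0, 1)
        falling = np.clip((hi - b) / (hi - mid), 0, 1)
        fb_energies[i] = np.sum(spectrum * np.minimum(rising, falling))
    log_e = np.log(fb_energies + 1e-10)
    # DCT-II of the log filterbank energies, computed explicitly.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return basis @ log_e

frame = np.sin(2 * np.pi * 440 * np.arange(400) / 16000)  # 25 ms of a 440 Hz tone
print(bfcc(frame).shape)  # → (13,)
```

In a full system, one such coefficient vector per frame would be stacked into a 2-D feature map and fed to the CNN classifier in place of a raw spectrogram.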

Keywords: Mobile Robot, Speech Control, Convolutional Neural Networks.

References:

[1]   A. J. M. Al-Ahmadi and A. S. Al-Saleh, “Steering a Robotic Wheelchair Based on Voice Recognition System Using Convolutional Neural Networks,” Journal of Intelligent & Robotic Systems, vol. 106, no. 2, pp. 1–14, 2022.

[2]  M. Umar, R. Shukla, and M. S. Khan, “Voice Controlled Intelligent Personal Assistant for Remote Controlled Mobile Robot,” in 2023 International Conference on Computer, Communication and Control Technology (I4CT), 2023, pp. 45–50.

[3]   S. Hasan, S. Joarder, A. Nayem, K. H. Hassan and S. A. Fattah, “Development of a Multilingual Voice-Controlled Smart Wheelchair with Advanced Features,” in 2025 International Conference on Electrical, Computer and Communication Engineering (ECCE), Chittagong, Bangladesh, 2025, pp. 1–6.

[4]   Abdulghani, M. M., Al-Aubidy, K. M., Ali, M. M., & Hamarsheh, Q., “Wheelchair Neuro Fuzzy Control and Tracking System Based on Voice Recognition,” Sensors, vol. 20, no. 10, 2872, 2020.

[5]   W. Ye, J. Gao, H. Chen and J. Guo, “A Voice Control Platform of Mobile Robot through ROS,” in 2019 Chinese Control Conference (CCC), Guangzhou, China, 2019, pp. 4338–4341.

[6]  Gupta, S., Mamodiya, U., & Al-Gburi, A. J. A., “Speech Recognition-Based Wireless Control System for Mobile Robotics: Design, Implementation, and Analysis,” Automation, vol. 6, no. 3, 25, 2025.

[7]  C. Yapicioğlu, Z. Dokur and T. Ölmez, “Voice Command Recognition for Drone Control by Deep Neural Networks on Embedded System,” in 2021 8th International Conference on Electrical and Electronics Engineering (ICEEE), Antalya, Turkey, 2021, pp. 65–72.

Paper #: S4-3

EEG-Based Classification of Human Limb Movements: A Comprehensive Performance Evaluation of KNN, LDA, Decision Tree, and Random Forest Models

 

Yousif Al Mashhadany1*, Ali Amer Ahmed Alrawi1, Abdullah Al-Ani3, Kasim M. Al-Aubidy4

1Biomedical Engineering Research Center, University of Anbar, Anbar, Iraq

3Department of Electrical Engineering, University of Anbar, Anbar, Iraq

4Mechatronics Department, Tishk International University, Erbil, Iraq

*Contact: yousif.mohammed@uoanbar.edu.iq

 

Abstract:

The domain of brain-computer interfaces (BCI) has garnered significant interest in modern scientific inquiry, emerging as a potent solution for aiding individuals with physical impairments. A notable obstacle within the realm of rehabilitation lies in the accurate assessment of electromyography (EMG) signals, which frequently encounter various constraints, including challenges in precise electrode positioning, the requirement for conductive gel, and the complexities associated with sustaining data acquisition over prolonged durations due to muscle fatigue. By contrast, electroencephalography (EEG) offers a more accessible and user-friendly technique for signal acquisition, facilitating extended data collection and enhancing the dependability of the resultant findings. This investigation aims to differentiate three specific hand movements through the analysis of EEG signals. Our approach encompasses an exhaustive preprocessing of the EEG data, succeeded by the implementation of three distinct window widths tailored for effective feature extraction. Subsequent to the feature extraction phase, a feature selection methodology was employed to refine the dataset. A variety of classification models were utilized to accurately categorize the hand movements, including k-nearest neighbors (KNN), linear discriminant analysis (LDA), decision tree, and random forest. KNN and LDA attained an impressive accuracy of 96.906%, while the Decision Tree model recorded an accuracy of 94.388%, and the Random Forest model achieved 93.957%. Furthermore, the analysis shows that for right-hand movements, KNN and LDA also reached an accuracy of 96.906%, with the Decision Tree yielding 94.388% and Random Forest documenting 93.957%. These outcomes underscore the proficiency of our methodology in reliably classifying hand movements, particularly accentuating the advancements realized in the analysis of data pertaining to right-hand movements.
The results of this thorough examination have been systematically articulated, highlighting the effectiveness of our techniques and the methodological rigor employed throughout the study.
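The windowing stage described above (fixed-width segments followed by feature extraction) can be illustrated with a minimal single-channel sketch. The 1-second window, 50% overlap, sampling rate, and the four time-domain features below are illustrative choices, not the configurations tuned in the study.

```python
import numpy as np

def window_features(signal, fs=128, win_s=1.0, overlap=0.5):
    """Segment a single-channel EEG trace into overlapping windows and
    compute simple time-domain features per window: mean, variance,
    mean absolute value, and zero-crossing count. Window width and
    feature set are illustrative, not the paper's tuned values."""
    width = int(fs * win_s)
    step = int(width * (1 - overlap))
    feats = []
    for start in range(0, len(signal) - width + 1, step):
        w = signal[start:start + width]
        zero_crossings = np.sum(np.diff(np.sign(w)) != 0)
        feats.append([w.mean(), w.var(), np.abs(w).mean(), zero_crossings])
    return np.array(feats)

rng = np.random.default_rng(1)
eeg = rng.normal(size=10 * 128)   # 10 s of synthetic 128 Hz "EEG"
X = window_features(eeg)          # one feature row per window
print(X.shape)  # → (19, 4)
```

Each row of the resulting matrix would then go through feature selection and into the KNN/LDA/decision-tree/random-forest classifiers compared in the abstract.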

Keywords: EEG Signal, Human Limb Movements, Human Blind, Brain-Computer Interface, Emotiv EEG Headset, Comprehensive Performance, Random Forest Model.

References:

[1]  A. N. Belkacem, N. Jamil, J. A. Palmer, and S. Ouhbi, “Brain computer interfaces for improving the quality of life of older adults and elderly patients,” Frontiers in, 2020.

[2]  A. K. Abduljabbar, Y. Al Mashhadany and S. Algburi, “Q-Learning for Path Planning in Complex Environments: A YOLO and Vision-Based Approach,” 2024 21st International Multi-Conference on Systems, Signals & Devices (SSD), Erbil, Iraq, 2024, pp. 626–630, doi: 10.1109/SSD61670.2024.10549642.

[3]  H. Zhang, L. Jiao, S. Yang, H. Li, and X. Jiang, “Brain–computer interfaces: the innovative key to unlocking neurological conditions,” Journal of Surgery, 2024.

[4]  M. M. Islam, A. Vashishat, and M. Kumar, “Advancements Beyond Limb Loss: Exploring the Intersection of AI and BCI in Prosthetic Evaluation,” Current Pharmaceutical, 2024.

[5]  S. Suryadevara, “Enhancing Brain-Computer Interface Applications through IoT Optimization,” Revista de Inteligencia Artificial en Medicina, 2022.

Paper #: S4-4

Preserving Energy Load Management Using Federated Learning and LoRa Mesh Networks

Hilal Al-Libawy1, *, Mohammed Al-khafajiy2

1Department of Electrical Engineering, University of Babylon, Babylon, Iraq

2School of Engineering & Physical Sciences, University of Lincoln, Lincoln, UK

*Contact: eng.hilal_al-libawy@uobabylon.edu.iq

Abstract:

The global integration of renewable energy and rising electricity demand underscore the critical need for intelligent grid management. This is particularly urgent in nations like Iraq, which suffers from chronic power shortages due to damaged infrastructure and fuel supply issues, leading to prolonged daily outages. Residential load shifting, which adjusts consumption patterns to balance supply and demand, is a promising strategy. However, its adoption faces significant hurdles related to data privacy, communication reliability, and the scalability of centralized systems. This research proposes a novel framework that integrates Federated Learning (FL) with a LoRa mesh network for privacy-preserving residential load shifting. FL is a decentralized machine learning paradigm that enables collaborative model training without sharing raw household energy data. Instead, only model updates are exchanged, thereby preserving user privacy. A reliable communication backbone is essential for FL, and a LoRa mesh network is ideally suited for this role. It provides robust, low-power, long-range connectivity, with its mesh topology enhancing reliability and coverage through multi-hop data relays, creating a resilient network in distributed residential environments. The methodology will involve selecting and adapting advanced FL algorithms like Federated Averaging (FedAvg) and FedProx to handle data heterogeneity. A key focus is developing a personalized FL approach using layer-wise personalization strategies to tailor global models to diverse household consumption profiles without compromising collaborative learning. A simulation environment will evaluate performance based on metrics like peak demand reduction, energy cost savings, and model accuracy. The system’s scalability will be tested by varying household numbers, and a formal privacy analysis will be conducted using frameworks like differential privacy to quantify guarantees. 
Expected outcomes include a validated FL architecture for load shifting and a robust FL-LoRa mesh communication framework. This work will provide a comprehensive analysis of performance trade-offs and establish a foundation for future decentralized, privacy-conscious energy management systems.
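The FedAvg aggregation at the heart of the proposed framework admits a very small sketch: the server averages client parameter updates weighted by local sample counts, so raw household consumption data never leaves the home. The two "households" and their one-layer models below are hypothetical stand-ins.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated Averaging (McMahan et al.): aggregate client model
    parameters weighted by each client's local sample count.
    Each element of client_weights is a list of numpy arrays (one per layer)."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[layer] for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Two hypothetical households with one-layer models and unequal data volumes.
w_a = [np.array([1.0, 1.0])]   # household A, 100 local samples
w_b = [np.array([3.0, 3.0])]   # household B, 300 local samples
global_model = fedavg([w_a, w_b], [100, 300])
print(global_model[0])  # → [2.5 2.5]
```

Over a LoRa mesh, only these small weight vectors (or deltas) would be relayed hop by hop to the aggregator, which is what makes the low-bandwidth link viable; FedProx changes the local training objective but leaves this server-side averaging step essentially unchanged.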

Keywords: Smart grids, Federated learning, Load shifting, LoRa mesh networks, Data privacy.

References:

[1]  S. A. N. Al-Azzawi, “Challenges of electricity sector in Iraq after 2003,” J. Eng. Appl. Sci., vol. 12, no. 4, pp. 720–725, 2017.

[2]  M. H. Cintuglu, O. A. Mohammed, K. Akkaya, and A. S. Uluagac, “A Survey on Smart Grid Cyber-Physical System Testbeds,” IEEE Commun. Surveys Tuts., vol. 19, no. 1, pp. 446–464, First quarter 2017.

[3] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Proc. 20th Int. Conf. Artif. Intell. Stat. (AISTATS), 2017, pp. 1273–1282.

[4]  J. de Carvalho Silva, J. J. P. C. Rodrigues, A. M. Alberti, P. Solic, and A. L. L. Aquino, “LoRaWAN—A low power WAN protocol for Internet of Things: A review and opportunities,” in Proc. 2nd Int. Multidisciplinary Conf. Comput. Energy Sci. (SpliTech), 2017, pp. 1–6.

[5] H. B. McMahan, E. Moore, D. Ramage, and S. Hampson, “Federated Learning of Deep Networks using Federated Averaging,” arXiv:1602.05629, 2016.

[6] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated Optimization in Heterogeneous Networks,” in Proc. Mach. Learn. Syst., vol. 2, 2020, pp. 429–450.

[7] Y. Jiang, J. Konečný, K. Rush, and S. Kannan, “Improving Federated Learning Personalization via Model Agnostic Meta Learning,” arXiv:1909.12488, 2019.

[8] C. Dwork, “Differential Privacy: A Survey of Results,” in Proc. 5th Int. Conf. Theory Appl. Models Comput., 2008, pp. 1–19.

Paper #: S4-5

A Review of the Design Development of Robotic Exoskeletons in Rehabilitation and Tele-Rehabilitation

Rand B. Mohammed* and Safa Anmar Albarwary

Mechatronics Engineering Department, Tishk International University, Erbil, Iraq

*Contact: rand.basil@tiu.edu.iq

Abstract:

Robotic exoskeletons have emerged rapidly over the past two decades as a promising treatment modality for a wide range of physical impairments. Their application has been particularly transformative in fields such as neurorehabilitation, physiotherapy, and mobility assistance, where these devices enable individuals with disabilities to perform activities that were once considered impossible. Such actions include walking, standing, grasping, and other essential motor functions that significantly enhance independence, mobility, and overall quality of life. Beyond simply restoring movement, robotic exoskeletons hold the potential to improve long-term motor recovery outcomes by facilitating repetitive, precise, and task-specific training tailored to the needs of each patient. This work seeks to analyze 100 peer-reviewed articles published between 2000 and 2025 to present a comprehensive overview of the historical evolution and current state of robotic exoskeleton technologies. The analysis will focus on their progression in rehabilitation and tele-rehabilitation contexts, highlighting advances in design, engineering, and integration with digital health platforms. Special emphasis will be placed on the fundamental technologies and control strategies that underpin contemporary exoskeleton systems for both upper and lower limb applications. Key aspects to be discussed include hardware architecture, actuation systems, sensor technologies, and software frameworks, as well as the complex interaction between human users and robotic devices. Furthermore, the study will examine the growing role of Artificial Intelligence (AI) in making these systems adaptive and personalized, thereby enhancing their effectiveness in clinical and home settings. Ultimately, this review aims to provide insights into the challenges and opportunities shaping the future of robotic exoskeletons in rehabilitation science.

Keywords: Rehabilitation Robots, Robotic Exoskeletons, Hardware, Software, AI, Upper limbs, Lower Limbs.

References:

[1]   G. Mashud, S. Hasan, and N. Alam, “Advances in control techniques for rehabilitation exoskeleton robots: A systematic review,” Machines, vol. 14, no. 3, p. 108, Mar. 2025.

[2]  D. Su, et al., “Review of adaptive control for stroke lower limb exoskeletons rehabilitation robot based on motion intention recognition,” Frontiers in Neurorobotics, vol. 17, 2023.

[3]  S. Wen, R. Huang, L. Liu, Y. Zheng, and H. Yu, “Robotic exoskeleton-assisted walking rehabilitation for stroke patients: A bibliometric and visual analysis,” Frontiers in Bioengineering and Biotechnology, vol. 12, 2024.

[4]  A. Olimb Hillkirk, et al., “Physiotherapists’ user acceptance of a lower limb robotic exoskeleton,” JMIR Rehabilitation and Assistive Technologies, vol. 12, no. 1, 2025.

[5]  K. Boardsworth, et al., “Upper limb robotic rehabilitation following stroke: a systematic review and meta-analysis investigating efficacy and the influence of device features and program parameters,” Journal of NeuroEngineering and Rehabilitation, vol. 22, no. 1, p. 62, 2025.

[6]  J. Yang et al., “Effect of robotic exoskeleton training on lower limb function,” Frontiers in Rehabilitation Medicine, vol. 3, 2024. [Online]. Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC11347425/

[7] B. Moriarty, et al., “The use of exoskeleton robotic training on lower extremity function after spinal cord injury,” Journal of Clinical Neuroscience, vol. 118, 2025. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0972978X24003647

[8] J. M. Cha, et al., “Wearable robots for rehabilitation and assistance of gait,” npj Flexible Electronics, vol. 9, no. 1, p. 24, 2025. [Online]. Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC12411867/

Session 5: AI and Healthcare Applications

Tuesday, December 16, 2025, 10h30 – 11h45

Paper #: S5-KSp7

High Performance of Artificial Limbs Based on Different Biosensor Signals

Yousif Al Mashhadany (SMIEEE)

Biomedical Engineering Research Center, University of Anbar, Iraq

Contact: yousif.mohammed@uoanbar.edu.iq

Abstract:

This presentation delves into advanced biosensing technologies that can improve the performance and adaptability of artificial limbs. The primary focus is on electromyographic (EMG) and electroencephalographic (EEG) signals, which are key input modalities for deciphering human intent in real time. EMG signals obtained from residual limb muscles allow for accurate control of prosthetic activities using pattern recognition algorithms, while EEG signals function as a non-invasive conduit for brain-machine interfacing, providing control based on brain activity. The combined use of these biosensors is found to allow for a hybrid control system, which promotes adaptability as well as intuitiveness in prosthesis operation. The presentation covers signal acquisition techniques, noise reduction tactics, feature extraction approaches, and artificial intelligence models that are critical for effective motion prediction. It also looks at how combining sensors, real-time processing, and individual calibration helps to achieve optimal performance by increasing efficiency, precision, and user comfort. Examples and experimental data show that integrating EMG and EEG is useful for complicated tasks including gripping, walking, and interacting with the environment. This study demonstrates the groundbreaking potential of biosensor-driven control mechanisms for restoring normal limb functionality and increasing amputees’ quality of life. The presentation finishes with developments to come in wearable biosensors, closed-loop feedback, and artificial prosthetics within the rapidly changing world of assistive technology.

Paper #: S5-1

Quantum-Enhanced Classification of Skull X-Ray Images for Head Trauma Detection

Hussein K. Ibrahim 1, *, Nizar Rokbani 2

1 National School of Electronics and Telecommunications of Sfax (ENET.COM), University of Sfax, Tunisia

2 Department of Biomedical Technology, College of Applied Medical Sciences in Al-Kharj, Prince Sattam bin Abdul-Aziz University, Al-Kharj, 11942, Saudi Arabia

*Contact: hkhudher@uowasit.edu.iq

 

Abstract:

This research presents the deep QIGA neural network, a deep neural network supervised by the quantum-inspired genetic algorithm (QIGA) meta-heuristic, with the QIGA ensuring the dynamic adaptivity of the deep neural network. This novel approach is applied to classify skull radiographs for head trauma detection, utilizing principles of quantum mechanics to enhance the performance of algorithms in pattern recognition and diagnostic accuracy. This is particularly relevant for the timely and effective treatment of head injuries. Deep learning is also used to automatically identify skull fractures and other cranial pathologies, accelerating the detection of critical conditions. Comparative studies demonstrate that the proposed deep QIGA neural network exhibits superior performance over traditional deep learning models like AlexNet in skull radiograph classification. Empirical results show that the heightened accuracy of this system underscores the significant potential of quantum algorithms to revolutionize head trauma diagnosis, leading to more accurate and reliable clinical assessments and improved patient outcomes.
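While the paper applies QIGA to supervise a deep network, the core quantum-inspired mechanics can be shown on a toy problem: genes are kept as measurement probabilities (a classical stand-in for qubit amplitudes), chromosomes are "collapsed" into candidate bit strings, and the probabilities are rotated toward the best solution found so far. Maximising the count of 1-bits replaces the network-fitness objective here, and all parameters are illustrative.

```python
import numpy as np

def qiga_onemax(n_bits=16, pop=8, generations=60, dtheta=0.05, seed=0):
    """Toy quantum-inspired GA: each gene stores a probability of measuring 1.
    Chromosomes are 'measured' into bit strings each generation, and the
    probabilities are nudged (rotated) toward the best string found so far.
    The fitness here is simply the number of 1s; the paper instead scores
    configurations of a deep neural network."""
    rng = np.random.default_rng(seed)
    prob = np.full((pop, n_bits), 0.5)      # maximally 'superposed' start
    best, best_fit = None, -1
    for _ in range(generations):
        sample = (rng.random((pop, n_bits)) < prob).astype(int)  # measurement
        fits = sample.sum(axis=1)
        if fits.max() > best_fit:
            best_fit, best = int(fits.max()), sample[fits.argmax()].copy()
        # Small rotation of each gene's probability toward the best bits.
        prob = np.clip(prob + dtheta * (best - sample), 0.01, 0.99)
    return best, best_fit

best, fit = qiga_onemax()
print(fit)
```

In the paper's setting, the bit string would instead encode network hyperparameters or weights, and `fits` would come from evaluating the corresponding deep network on the radiograph training set.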

Keywords: Quantum machine learning, Artificial Neural Network, GA, Coevolutionary Neural Networks, Medical Images.


Paper #: S5-2

Artificial Intelligence-Driven Diagnostic Tool for Holistic Keratoconus Assessment

 

Zahra J. Muhsin1,*, Rami Qahwaji1, Ibrahim Ghafir1, Muawyah Al Bdour2, Saif AlRyalat2, Mo’ath AlShawabkeh3, Majid Al-Taee4

1Faculty of Eng. and Digital Technologies, University of Bradford, UK

2School of Medicine, The University of Jordan, Amman, Jordan

3Faculty of Medicine, The Hashemite University, Zarqa, Jordan

4Consultant of Computing and Intelligent Systems Engineering, Liverpool, UK

*Contact: z.j.muhsin@bradford.ac.uk

 

Abstract:

Artificial intelligence (AI) has the potential to transform healthcare by streamlining clinical workflows, enhancing clinician productivity, expanding patient access, and improving diagnostic accuracy. In ophthalmology, AI systems now match or exceed experienced clinicians in detecting conditions such as diabetic retinopathy and keratoconus (KC). Yet real-world deployment remains limited, revealing a gap between research success and clinical adoption. This study presents a smart AI-based platform for comprehensive keratoconus assessment, combining early detection and severity staging in a single system. Using corneal tomography data from a widely used Pentacam device, the tool integrates seamlessly into existing workflows. Advanced preprocessing and feature-selection techniques identify the most clinically relevant variables. Three top-performing machine learning models (Random Forest, Decision Trees, and Gradient Boosting), stacked with a Support Vector Machine as a meta-classifier, were optimized and used to categorize eyes into non-keratoconus (NKC), early tomographic keratoconus (TKC), and clinical keratoconus (CKC) at the screening stage. Cases labeled TKC or CKC undergo further staging through ensemble models that assign one of five disease-severity levels, representing the complete progression of the disease. The screening model achieved cross-validation accuracy of 99.72%, and staging models reached perfect classification. External validation on 100 previously unseen clinical cases, reviewed by expert ophthalmologists, confirmed the model’s strong generalizability and clinical applicability. Developed in close collaboration with domain experts, this work provides a robust foundation for future clinical trials and commercial translation, supporting the broader adoption of AI-driven, personalized eye care solutions.
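The stacked-generalization idea above can be sketched in miniature: base learners produce level-0 predictions, and a meta-classifier makes the final call from those predictions. The threshold rules, feature names, and majority-vote meta-rule below are hypothetical stand-ins (the study uses trained Random Forest, Decision Tree, and Gradient Boosting models with an SVM meta-classifier):

```python
def stack_predict(x, base_models, meta_model):
    """Stacked generalization: feed base-model outputs into a meta-classifier."""
    meta_features = [m(x) for m in base_models]  # level-0 predictions
    return meta_model(meta_features)             # level-1 (meta) decision

# Toy stand-ins for three base learners, each scoring 'keratoconus-like' as 0/1
base_models = [
    lambda x: 1.0 if x["k_max"] > 47.0 else 0.0,     # curvature rule
    lambda x: 1.0 if x["pachy_min"] < 490 else 0.0,  # corneal-thickness rule
    lambda x: 1.0 if x["elevation"] > 12 else 0.0,   # elevation rule
]
# Toy meta-classifier: majority vote (a trained SVM would replace this)
meta_model = lambda scores: "KC" if sum(scores) >= 2 else "NKC"

label = stack_predict({"k_max": 49.2, "pachy_min": 475, "elevation": 8},
                      base_models, meta_model)
```

In practice the meta-classifier is trained on out-of-fold base-model predictions so it learns how much to trust each base learner, rather than voting.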

Keywords: Artificial Intelligence, Corneal Tomography, Ensemble Learning, Keratoconus diagnosis, Ophthalmology.

References:

[1]  Z. J. Muhsin, R. Qahwaji, M. AlShawabkeh, M. Al Bdour, S. AlRyalat and M. Al-Taee, “Two-stage ensemble learning framework for automated classification of keratoconus severity,” Computers in Biology and Medicine, vol. 195: 110568, 2025.

[2]  Z. J. Muhsin, R. Qahwaji, M. AlShawabkeh, M. Al Bdour, S. AlRyalat and M. Al-Taee, “Highly efficient stacking ensemble learning model for automated keratoconus screening,” Eye and Vision, vol. 12(1): 1-20, 2025.

[3]  Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. AlShawabkeh, M. Al Bdour, S. AlRyalat and M. Al-Taee, “Advances in machine learning for keratoconus diagnosis,” International Ophthalmology, vol. 45(128): 1-22, 2025.

[4]  Z. J. Muhsin, R. Qahwaji, M. AlShawabkeh, S. A. AlRyalat, M. Al Bdour and M. Al-Taee, “Smart decision support system for keratoconus severity staging using corneal curvature and thinnest pachymetry indices,” Eye and Vision, vol. 11(1): 1-20, 2024.

[5]  Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. Al Bdour, S. AlRyalat, M. AlShawabkeh and M. Al-Taee, “Clinician-assisted exploratory data analysis framework for early diagnosis of keratoconus,” in 2025 IEEE 22nd International Multi-Conference on Systems, Signals & Devices (SSD), Monastir, Tunisia, 17-20 February 2025, pp. 215-220.

[6]  Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. Al Bdour, S. AlRyalat, M. AlShawabkeh and M. Al-Taee, “Keratoconus severity staging using random forest and gradient boosting ensemble techniques,” in 2025 IEEE 22nd International Multi-Conference on Systems, Signals & Devices (SSD), Monastir, Tunisia, 17-20 February 2025, pp. 593-598.

[7]  Z. J. Muhsin, R. Qahwaji, I. Ghafir, M. Al Bdour, S. AlRyalat, M. AlShawabkeh and M. Al-Taee, “Performance comparison of machine learning algorithms for keratoconus detection,” in IEEE 30th Int. Conf. on Telecoms (ICT’24), Amman, Jordan, 24-27 June 2024, pp. 01-06.

[8]  B. Al-Bander, B. Williams, M. Al-Taee, W. Al-Nuaimy and Y. Zheng, “A novel choroid segmentation method for retinal diagnosis using deep learning,” in 2017 Int. Conf. on Developments in eSystems Engineering (DeSE’2017), Paris, France, 14-16 June 2017, pp. 182-187.

Paper #: S5-3

The Primacy of Outcomes and Trust: Perceived Well-Being and Cost Transparency as Key Drivers of Patient Satisfaction in Private Hospitals of Erbil, Iraq

Wasfi T. Kahwachi1,*, and Banaz Waleed Yaqoob Meran2

1Dentistry Dept., Faculty of Dentistry, Tishk International University, Erbil, Iraq

2Statistics & Informatics Dept. Salahaddin – Erbil University, Iraq

*Contact: wasfi.kahwachi@tiu.edu.iq

 

Abstract:

This study aims to identify the primary factors influencing patient satisfaction with healthcare services in private hospitals located in Erbil, Iraq. Data were collected from 250 patients across three major hospitals (Sardam Hospital, Zanko Hospital, and Par Private Hospital) using a structured questionnaire distributed between October 25, 2024, and December 15, 2024. The survey encompassed ten critical dimensions of care: quality of care, communication skills, waiting time, hospital environment, staff professionalism, efficiency of services, cost transparency, follow-up care, perceived outcomes, and modernization of technology. Factor Analysis and Regression Analysis were employed to explore relationships and determine key predictors of patient satisfaction. The findings revealed two dominant components affecting patient perceptions: Service Quality and Efficiency (encompassing factors like staff professionalism, communication, cost transparency, and efficiency) and Patient-Centered Outcomes and Modernization (highlighting the impact of technology and personal well-being on satisfaction levels). Regression analysis further identified Perceived Outcome and Personal Well-Being, Cost and Transparency of Billing, Staff Attitudes and Professionalism, Quality of Care, and Communication Skills as statistically significant predictors of patient satisfaction. Conversely, factors such as Waiting Time, Accessibility, Hospital Environment, Efficiency, and Follow-Up Care showed minimal influence on satisfaction levels. The model demonstrated high explanatory power, with an R² value of 0.990, indicating strong predictive capability. These findings suggest that enhancing communication, staff professionalism, and transparency, along with investing in technology and personalized care, can significantly improve patient satisfaction in private healthcare settings.
The results provide actionable insights for healthcare administrators to strategically improve service delivery and patient outcomes.
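The regression step described above amounts to fitting a multiple linear model of satisfaction on the dimension scores and reading off R². The sketch below fits such a model by ordinary least squares on synthetic data; the predictor names, coefficients, and sample values are invented for illustration and are not the study's estimates:

```python
import numpy as np

# Synthetic survey data: 250 respondents, three predictor scores on a 1-5 scale
# (e.g., perceived outcome, cost transparency, staff professionalism)
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(250, 3))
beta_true = np.array([0.5, 0.3, 0.2])              # illustrative effect sizes
y = X @ beta_true + rng.normal(0, 0.05, size=250)  # satisfaction score + noise

Xd = np.column_stack([np.ones(250), X])            # add an intercept column
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # ordinary least squares
resid = y - Xd @ beta_hat
r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
```

With low noise the fitted coefficients recover the true effects and R² approaches 1, mirroring how a high R² in the study reflects predictors that jointly explain most of the variance in satisfaction.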

Keywords: Patient Satisfaction, Healthcare Service Quality, Factor Analysis, Cost Transparency, Staff Professionalism.

References:

[1]  Almomani, I., Al-Quran, A., & Al-Hawary, S. (2020). The impact of service quality on patients’ satisfaction in public and private hospitals: A comparative study. Int. J. of Healthcare Management, 13 (sup1), 222-230.

[2]  Chen, J., Li, J., Li, Y., & Zhang, L. (2021). Inpatients’ satisfaction and its predictors in public hospitals in Shanghai, China. Journal of Patient Experience, 8, 1-9.

[3]  Hemadeh, R., Hammoud, R., & Kdouh, O. (2019). Patient satisfaction with primary healthcare services in Lebanon. International Journal of Health Planning and Management, 34 (1), e423-e435.

[4]  Javed, S. A., & Ilyas, F. (2021). Service quality and patient satisfaction in public and private hospitals of Pakistan. International Journal of Quality & Reliability Management, 38 (1), 64-85.

[5] Kamra, V., Singh, H., & Kumar, K. (2016). Factors affecting patient satisfaction: An exploratory study for healthcare services in India. International Journal of Pharmaceutical and Healthcare Marketing, 10 (1), 48-74.

[6]  Pai, Y. P., & Chary, S. T. (2011). Measuring patient-perceived hospital service quality: A conceptual framework. International Journal of Health Care Quality Assurance, 24 (7), 544-558.

[7]  Salaria, N., & Sharma, A. (2019). Service quality and patient satisfaction in healthcare sector: A study of selected hospitals in India. Journal of Health Management, 21 (2), 224-237.

[8]  Vieira, J., Neves, J., & Dias, Á. (2023). Patient satisfaction in healthcare: A comparative analysis using different statistical approaches. Journal of Health, Population and Nutrition, 42 (1), 1-15.

 

Paper #: S5-4

A Vision for an Ethical Federated Learning Network to Enable Universal Rare Disease Diagnosis

Ibrahim Yaseen*, Shatha Saadi
Department of Biology, University of Al-Hamdaniya, Ninawa, Iraq

*Contact: ibrahim.yaseen@uohamdaniya.edu.iq

 

Abstract:

Rare disease diagnosis represents a profound global health challenge, particularly in developing regions where expertise and data are siloed across under-resourced healthcare systems. Patients in these regions often face diagnostic odysseys lasting years, leading to poor outcomes. Centralised artificial intelligence (AI) solutions promise to assist, but they are hindered by critical barriers, including data privacy, regulatory heterogeneity, and the legitimate reluctance of institutions to share sensitive patient data. This vision paper proposes the establishment of a multinational, ethical Federated Learning (FL) network specifically designed to facilitate universal diagnosis of rare diseases. Federated Learning offers a paradigm-shifting solution: it enables the training of robust AI diagnostic models across multiple hospitals without any patient data ever leaving the originating institution. Instead of pooling data, the model is sent to each hospital’s secure server, trained locally on its private data, and only the model updates (weights) are shared and aggregated. Our proposed initiative outlines a three-pillar framework: (1) Technical Infrastructure: Deploying a secure, open-source FL platform capable of handling diverse data formats (e.g., medical imaging, genomic data) from partner hospitals. (2) Ethical Governance: Establishing a multi-stakeholder ethics board to oversee data anonymisation, model bias auditing, and equitable algorithm design, ensuring the network does not perpetuate existing health disparities. (3) International Collaboration: Creating a consortium of regional hospitals and international research centres to foster knowledge transfer and build local AI capacity. This vision addresses the conference’s themes by presenting a concrete, future-oriented AI initiative that transcends national borders. It demonstrates how collaborative AI can bridge the health equity gap, turning isolated, rare cases into a powerful, collective diagnostic tool. 
We will present our roadmap, discuss key implementation challenges, and issue a call to action for potential collaborators to join this critical effort.
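The aggregation step at the heart of the proposal — sharing only model updates, never patient data — is most simply realized by federated averaging, where each client's weights are combined in proportion to its local dataset size. A minimal sketch (the flat weight-vector representation is a simplification of real layered models):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: aggregate model weights without pooling raw data.
    Each client's contribution is weighted by its local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three hospitals train locally and share only their weight vectors;
# the third holds twice as much data, so its weights count double
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [10, 10, 20])
```

In a full round, the server would broadcast `global_w` back to every hospital for the next local training pass, and production deployments would add secure aggregation or differential privacy on top of this basic loop.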

Keywords: Federated learning, Rare diseases, Medical AI, Health equity, Data privacy, Ethical AI, Diagnostic odyssey.

References:

[1]  N. Rieke et al., “The future of digital health with federated learning,” NPJ Digit. Med., vol. 3, no. 1, p. 119, 2020, doi: 10.1038/s41746-020-00323-1.

[2]  M. J. Sheller, G. A. Reina, B. Edwards, J. Martin, and S. Bakas, “Multi-institutional deep learning modeling without sharing patient data: A feasibility study on brain tumor segmentation,” in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Cham: Springer, 2020, pp. 92–104.

[3]  G. A. Kaissis, M. R. Makowski, D. Rückert, and R. F. Braren, “Secure, privacy-preserving and federated machine learning in medical imaging,” Nat. Mach. Intell., vol. 2, no. 6, pp. 305–311, Jun. 2020, doi: 10.1038/s42256-020-0186-1.

[4]  E. Vayena, A. Blasimme, and I. G. Cohen, “Machine learning in medicine: Addressing ethical challenges,” PLOS Med., vol. 15, no. 11, p. e1002689, Nov. 2018, doi: 10.1371/journal.pmed.1002689.

[5]  International Rare Diseases Research Consortium (IRDiRC), “IRDiRC goals and vision for 2021-2025,” 2021. [Online]. Available: https://irdirc.org/wp-content/uploads/2021/02/IRDiRC_2021-2025_Goals_Vision_Document.pdf

[6]  T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” IEEE Signal Process. Mag., vol. 37, no. 3, pp. 50–60, May 2020, doi: 10.1109/MSP.2020.2975749.

Paper #: S5-5

Integrating AI and IoT for Real-time Monitoring and Control of Wearable Healthcare Devices

Kasim M. Al-Aubidy

Mechatronics Engineering Department, Tishk International University, Erbil, Iraq

Email: Qasim.obaidi@tiu.edu.iq

 

Abstract:

Accelerated advances in computing and wireless communication have led to intelligent systems that deeply influence daily life. Modern cloud computing, mobile technology, artificial intelligence (AI), and the Internet of Things (IoT) are transforming sectors such as healthcare, agriculture, and industry. Current healthcare research highlights the value of integrating these technologies to improve service delivery and enhance AI-driven consultations. Combining AI with IoT for real-time monitoring and control of wearable medical devices creates a powerful synergy that boosts healthcare outcomes. IoT sensors in wearables continuously capture data (e.g., heart rate, blood pressure, glucose levels), while AI algorithms analyze it in real time to predict emerging health issues and detect abnormalities linked to conditions such as stroke, heart failure, or diabetes. These technologies provide real-time monitoring for patients, healthcare providers, or medical professionals, allowing early intervention, decreasing hospital admissions, and enhancing long-term outcomes. This integration offers health care providers and medical professionals extensive and continuous patient data, facilitating remote monitoring, informed clinical decisions, precise diagnoses, and enhanced treatment planning. Several projects in this domain are analyzed, concentrating on the design principles of portable and wearable medical devices that use the IoT and integrate AI-driven control and monitoring algorithms. This paper will examine the development of intelligent algorithms using soft-computing tools, including fuzzy logic, neural networks, and genetic algorithms, which facilitate adaptive and efficient decision-making in real-time healthcare applications. 
The devices to be examined in detail include a smart wheelchair for individuals with disabilities, a compact and portable ventilator for emergency or home care, and an asthma detection and management system that tracks respiratory conditions and issues timely alerts. These efforts illustrate the potential of integrating IoT and AI technologies to provide creative solutions that improve patient care, accessibility, and medical responsiveness.
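The real-time abnormality detection described above can be reduced to its simplest form: compare each new sensor reading against a moving baseline and raise an alert on large deviations. This is only a minimal stand-in for the soft-computing models the paper surveys; the window size, threshold, and example heart-rate values are our illustrative choices:

```python
import statistics

def detect_anomalies(readings, window=5, k=3.0):
    """Flag readings that deviate from a moving baseline by more than
    k standard deviations -- a minimal stand-in for the AI layer that
    watches streaming vitals from IoT wearables."""
    alerts = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu = statistics.mean(base)
        sd = statistics.pstdev(base) or 1.0  # avoid div-by-zero on flat signals
        if abs(readings[i] - mu) > k * sd:
            alerts.append(i)
    return alerts

# Steady heart rate (bpm) with one abrupt spike at index 7
alerts = detect_anomalies([72, 71, 73, 72, 70, 72, 71, 140, 72, 71])
```

In a deployed system, a fuzzy or neural model would replace the fixed threshold so that alert sensitivity adapts to each patient's calibrated baseline.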

Keywords: Artificial intelligence, Internet of Things, Healthcare, Wearable devices, Real-time monitoring, Asthma detection and treatment.

References:

[1] M. J. Ghafoor, M. Mujeeb-U-Rahman, M. Jamal, and A. Ahmed, “Low-Cost and Portable Cardiopulmonary Resuscitation Machine”, 2019 International Conference on Engineering and Emerging Technologies (ICEET), February 2019, DOI: 10.1109/CEET1.2019.8711861.

[2] A. W. Al-Mutairi, K. Al-Aubidy, and F. N. Al-Halaiqa, “IoT-Based Real-Time Monitoring System for Epidemic Diseases Patients; Design and Evaluation”, International Journal of Online and Biomedical Engineering, Vol. 17, No. 1, January 2021, pp. 63-82, DOI: 10.3991/ijoe.v17i01.18849.

[3] K. Al-Aubidy and A. W. Al-Mutairi, “Portable Cardiopulmonary Resuscitation and Ventilator Device: Design & Implementation”, Chapter in Book entitled “Advanced Sensors and Systems for Biomedical Applications”, Springer, July 2021, DOI: 10.1007/978-3-030-71221-1_7, www.springerprofessional.de/portable-cardiopulmonary-resuscitation-and-ventilator-device-des/19370946.

[4] R. Alkhalil, S. Ammourah, and K. M. Al-Aubidy, “Real-Time Monitoring and Assessment of the Indoor Air Quality Hazard Index using a Deep Learning Approach”, i-manager’s Journal on IoT and Smart Automation (JIOT), Vol. 1, No. 1, June 2023.

[5] A. Al-Turk, K. M. Al-Aubidy and M. A. Al-Khawaldeh, “Prediction of Asthma Attacks Using ANFIS and Mobile Technologies”, 2024 17th International Conference on the Developments in eSystems Engineering (DeSE2024), 6-8 November 2024, Khorfakkan, United Arab Emirates, 2024, pp. 89-94, doi: 10.1109/DeSE63988.2024.10912035.

Session 6 [online]: Education, Healthcare and Smart Management

Tuesday, December 16, 2025, 14h45 – 15h45

Paper #: S6-1

Alzheimer’s Diagnosing and Reporting Based on Enhanced Vision Transformers and LLM

Sujan Aryal1,*, Osama Mahdi2, and Adel Al-Jumaily1,2,3,4,5,6,*

1Melbourne Institute of Technology, School of IT & Engineering, Sydney, Australia

2 Melbourne Institute of Technology, School of IT & Engineering, Melbourne, Australia

3 Dept. of Computer Science and Software Eng., University of Western Australia, Crawley, WA 6009, Australia

4School of Science, Edith Cowan University, Joondalup, WA 6027, Australia

5UNITAR International University, Petaling Jaya 47301, Malaysia

6Lab STIC, UMR 6285 CNRS, ENSTA – IP Paris, 29806 Cedex 09 Brest, France

*Contact: mit234740@stud.mit.edu.au, phone +61-449968069 or aal-jumaily@mit.edu.au

Abstract:

Alzheimer’s disease (AD) is a chronic neurodegenerative disease and the most common cause of dementia, affecting 55 million people worldwide. Despite the growing number of AD patients, diagnosis is still challenging, especially in the early stages. Conventional diagnostic methods rely on radiologists’ visual interpretation of structural MRI images and neuropsychological tests, which are experience-based and time-consuming. This has led to a huge demand for automatic and scalable diagnostic tools. Recent advances in deep learning and computer vision have shown great promise in medical imaging applications, including the detection and classification of neurological disorders. Our proposed research offers a new combined deep learning system. It uses Vision Transformers (ViT), Gradient-weighted Class Activation Mapping (Grad-CAM), and Large Language Models (LLM). The system delivers a precise Alzheimer’s diagnosis and creates clinical reports. The system has three parts. The first part is a Vision Transformer model that classifies Alzheimer’s stages from MRI images; Vision Transformers are chosen over standard Convolutional Neural Networks because they capture global image relationships and improve classification accuracy. The second part is Grad-CAM, which creates heatmaps; the maps point to the brain areas most important for the model’s predictions, allowing doctors to visualise the MRI regions that contributed to the diagnosis. In the third part, an LLM generates detailed, structured clinical reports by combining imaging insights with patient demographic and health-record data. The system is trained and validated on public datasets, including the OASIS MRI dataset and structured CSV files with clinical and demographic information. This multi-modal approach addresses gaps in current research: the limited use of ViTs in AD diagnosis, the lack of explainable AI tools, the absence of frameworks that combine imaging and text data, and the lack of automated, clinically relevant reporting systems.
By combining classification, interpretability and reporting, the proposed method will be a practical decision support tool for clinicians. The Grad-CAM visualisations increase trust in AI predictions, and the LLM reports are clear, evidence-based, and aligned with medical documentation standards. This can improve diagnostic accuracy, transparency and efficiency in Alzheimer’s care. Moreover, the approach can be applied to other neurological and medical imaging tasks, including dementia classification and reporting, making it a versatile solution for AI in healthcare.
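The Grad-CAM step described above has a compact mathematical core: weight each feature-map channel by the spatially averaged gradient of the class score, sum over channels, and apply ReLU. The sketch below shows that computation on toy arrays (a real system would pull `activations` and `gradients` from a hooked layer of the trained ViT or CNN):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap: weight each channel's activation map by the
    spatially averaged gradient of the class score, then apply ReLU."""
    weights = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1]
    return cam

# Toy example: 2 channels of 2x2 activation maps with constant gradients
acts = np.array([[[1.0, 0.0], [0.0, 0.0]],
                 [[0.0, 0.0], [0.0, 2.0]]])
grads = np.array([[[1.0, 1.0], [1.0, 1.0]],
                  [[0.5, 0.5], [0.5, 0.5]]])
heat = grad_cam(acts, grads)
```

The resulting map is upsampled to the MRI resolution and overlaid on the scan, which is what lets a clinician see which brain regions drove the prediction.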

Keywords: Machine Learning, Grad-CAM, Vision Transformers (ViT), Large Language Models (LLMs), Explainable AI (XAI)

Paper #: S6-2

Evaluation of Chemical Carcinogenic Compounds in Aqueous Solutions and Treatment with Biosorbent


Estabraq Ali Hameed
University of Hamdaniya, Ninawa, Iraq

Contact: estabraq-ali@uohamdaniya.edu.iq

 

Abstract:
The presence of chemical carcinogenic compounds in drinking water poses significant health risks, particularly in urban centers such as Baghdad. This study investigates the concentrations of bromates, nitrates, ammonium, and chloroform (as a representative of Trihalomethanes, THMs) in drinking water samples and evaluates an environmentally friendly treatment method using orange peel powder as a biosorbent. Ten water samples were collected over six months, with nitrate quantified via spectrophotometry at 220 and 275 nm, bromates through colorimetric and absorption methods, chloroform indirectly using organic reagents, and ammonium using the Nessler reagent at 420 nm. The biosorption experiment utilized 5 g of orange peel powder per 100 ml of contaminated water under controlled conditions (pH = 6, contact time = 150 min, stirring = 450 rpm). Results indicate that several samples exceeded permissible limits, presenting a potential health hazard. Orange peel powder demonstrated high removal efficiency for the targeted carcinogenic compounds. The biosorbent-based treatment is cost-effective, environmentally sustainable, technically feasible, and supports the recycling of agricultural waste. This work highlights a practical approach for mitigating chemical carcinogens in drinking water while promoting sustainability.
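The removal efficiency and equilibrium uptake reported in biosorption studies follow from two standard formulas: percent removal, (C₀ − Cₑ)/C₀ × 100, and adsorption capacity, qₑ = (C₀ − Cₑ)·V/m. A small sketch using the study's dose (5 g per 0.1 L) with illustrative concentrations of our own choosing:

```python
def removal_efficiency(c0, ce):
    """Percent of contaminant removed from solution (concentrations in mg/L)."""
    return (c0 - ce) / c0 * 100.0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    """Equilibrium uptake q_e in mg of contaminant per g of biosorbent."""
    return (c0 - ce) * volume_l / mass_g

# 5 g of orange peel powder in 0.1 L; nitrate falls from 60 to 9 mg/L
eff = removal_efficiency(60.0, 9.0)
qe = adsorption_capacity(60.0, 9.0, 0.1, 5.0)
```

These two quantities are what the controlled conditions (pH 6, 150 min contact, 450 rpm) are tuned to maximize for each target contaminant.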

Keywords: Biosorbent, Eco-friendly, Sustainability, Carcinogenic Compounds, Bromates, Trihalomethanes.

References:

[1]  H. Liang, W. Wang, H. Liu, et al., “Porous MgO-modified biochar adsorbents fabricated by the activation of Mg(NO₃)₂ for phosphate removal,” Chemosphere, vol. 324, p. 138320, 2023. DOI: 10.1016/j.chemosphere.2023.138320.

[2]  Z. Zhang, G. Huang, P. Zhang, et al., “Development of iron-based biochar for enhancing nitrate adsorption,” Science of The Total Environment, vol. 856, p. 159037, 2023. DOI: 10.1016/j.scitotenv.2022.159037.

[3]  J. Chang, P. Sivasubramanian, C. Dong, M. Kumar, “Study on adsorption of ammonium and nitrate in wastewater by modified biochar,” Bioresource Technology Reports, vol. 21, p. 101346, 2023. DOI: 10.1016/j.biteb.2023.101346.

[4]  Z. Z. Ismail, B. B. Hameed, “Recycling of raw corn cob residues for ammonium removal,” International Journal of Waste Management, vol. 13, pp. 217–230, 2014. DOI: 10.1504/IJEWM.2014.059936.

[5]  N. Liu, Z. T. Sun, Z. C. Wu, et al., “Adsorption characteristics of ammonium nitrogen by biochar,” Advanced Materials Research, vol. 664, pp. 305–312, 2013. DOI: 10.4028/www.scientific.net/AMR.664.305.

Paper #: S6-3

Smart Modular PLC-Based Learning Platform for Improving Industrial Automation Education

 

Musa Al-Yaman

Mechatronics Engineering Department, University of Jordan, Amman, Jordan

Contact: m.alyaman@ju.edu.jo

 

Abstract:

Bridging the gap between theoretical knowledge and practical application remains a central challenge in engineering education, particularly in industrial automation. Students frequently encounter difficulties transitioning from classroom concepts to hands-on PLC programming due to limited access to equipment and authentic industrial setups. To address this issue, we designed and developed a cost-effective, modular PLC-based learning platform that combines physical hardware, a smart Unity-powered simulation environment, and a structured training manual grounded in instructional best practices. The hardware component of the platform consists of ten modular lab stations, each designed for two trainees. These stations replicate real-world industrial processes such as traffic light control, elevator operation, and automated filling systems. Actual input/output devices including switches, sensors, motors, and indicator lights are utilized to enable real-time testing of the PLC ladder logic. Complementing the hardware, the software component offers a virtual PLC environment with interactive 3D models and a human-machine interface, allowing students to design, simulate, and debug ladder logic programs in real time. To evaluate the platform’s educational effectiveness, a post-implementation survey was conducted with 40 students who engaged with both the hardware kits and the simulation software. The survey, designed around key performance indicators (KPIs) such as learning outcomes, usability, engagement, and confidence, revealed a 31% average improvement in students’ practical performance and a notable increase in self-reported confidence when working with PLC systems. Additionally, the modular design and use of cost-effective components resulted in an estimated 27% reduction in setup and maintenance costs compared to traditional PLC training systems.
These findings confirm that the platform not only improves learning outcomes but also offers a more affordable and scalable solution for engineering education. Beyond overcoming equipment limitations, this integrated approach provides a structured, repeatable, and measurable learning experience that effectively bridges the gap between theoretical knowledge and real-world application in industrial automation.
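The traffic-light exercise mentioned above is, at its core, a timed cyclic state machine — the same logic students implement in ladder logic on the lab stations. A Python sketch of that behavior (state names and tick durations are illustrative; on the real platform this runs as ladder logic scanned by the PLC):

```python
def traffic_light_cycle(steps):
    """Cyclic state machine mirroring a PLC traffic-light exercise:
    each state holds its output for a fixed number of scan ticks."""
    sequence = [("green", 4), ("yellow", 1), ("red", 3)]
    states = []
    i, remaining = 0, sequence[0][1]
    for _ in range(steps):
        states.append(sequence[i][0])   # energize this state's output
        remaining -= 1
        if remaining == 0:              # timer done: advance to next state
            i = (i + 1) % len(sequence)
            remaining = sequence[i][1]
    return states

lights = traffic_light_cycle(9)
```

In ladder logic the same structure appears as three timer-driven rungs whose done-bits latch the next state, which is exactly the mapping the training manual walks students through.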

 

Keywords: Applied Engineering Education, Hands-on learning, Modular lab design, Programmable Logic Controller, Simulation kits.

References:

[1]  M. Al-Yaman, D. Alswaiti, A. Alsharkawi, M. Al-Taee, “A cost-effective modular laboratory solution for industrial automation and applied engineering education,” MethodsX, vol. 14, p. 103388, May 2025. DOI: 10.1016/j.mex.2025.103388. PMID: 40529514; PMCID: PMC12171526.

[2]  J. J. Brito, P. Toledo, S. Alayon, “Virtual laboratory for automation combining Inventor 3D models and Simulink control models,” in 2018 IEEE Global Engineering Education Conference (EDUCON), IEEE, Tenerife, 2018, pp. 555–562. https://doi.org/10.1109/EDUCON.2018.8363279.

[3]  I. Burhan, Sali al. Talib, A. A. Azman, “Design and fabrication of Programmable Logic Controller kit with multiple output module for teaching and learning purposes,” in 2012 IEEE 8th International Colloquium on Signal Processing and Its Applications, IEEE, Malacca, Malaysia, 2012, pp. 14–18. https://doi.org/10.1109/CSPA.2012.6194681.

[4]  M. C. Carvalho, “Using low-cost equipment for laboratory automation,” in Practical Laboratory Automation, John Wiley & Sons, Ltd, 2016, pp. 71–78. https://doi.org/10.1002/9783527801954.ch7.

[5]  S. Çeven, A. Albayrak, “Design and implementation of modular test equipment for process measurements in mechatronics education,” Computer Applications in Engineering Education, vol. 28 (2020), pp. 324–337. https://doi.org/10.1002/cae.22196.

 

Paper #: S6-3

Ventilator Monitoring Control Using Artificial Neural Network and Fuzzy Logic for Asthma Prediction

Aynour Al-Turk, Kasim M. Al-Aubidy, Mohammed Baniyounis, Mustafa Al-Khawaldeh

Mechatronics Engineering Faculty, Philadelphia University, Jarash, Jordan

Contact: trtnur@yahoo.com, Phone +962 796973490

Abstract:

Asthma is a chronic respiratory disease that causes airway obstruction, inflammation, and muscle tightening, often leading to severe complications and hospitalizations. Conventional treatment depends on hospital visits, physician expertise, and manual ventilator calibration, which delay timely intervention and increase healthcare burdens. The COVID-19 pandemic further highlighted the need for remote systems that provide prediction and treatment support. Advances in Artificial Intelligence (AI), machine learning, and the Internet of Things (IoT) enable real-time monitoring, accurate forecasting, and automated interventions. This study proposes an AI-based framework integrating asthma attack prediction with ventilator calibration. The prediction model employs an Adaptive Neuro-Fuzzy Inference System (ANFIS) using Air Quality Index (AQI) and Peak Expiratory Flow Rate (PEFR) as inputs. AQI values are derived from sensors measuring gases, particulate matter, humidity, and temperature, while PEFR provides patient-specific physiological data. Together, these parameters ensure accurate severity classification. For treatment, an Artificial Neural Network (ANN) automates ventilator calibration using patient attributes such as age, weight, height, gender, and gas concentrations, minimizing reliance on manual expertise [3]. Simulation results show high accuracy, with ANFIS yielding RMSE values of 0.05 and 0.0045, while ANN calibration achieved 0.11 for mechanical ventilators and 1.9×10⁻²² for CPAP ventilators. The integration of AI and IoT into a mobile application reduces hospital dependency, supports remote patients, and advances scalable asthma management solutions.
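The fuzzy-inference core of the prediction model described above can be sketched with plain fuzzy rules over the two stated inputs, AQI and PEFR. Note that ANFIS additionally *learns* the membership-function parameters from data; the triangular breakpoints, rule, and labels below are illustrative placeholders, not the paper's tuned values:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def asthma_severity(aqi, pefr_pct):
    """Mamdani-style evaluation of one rule on the abstract's two inputs:
    IF AQI is high AND PEFR (as % of personal best) is low THEN severe."""
    aqi_high = tri(aqi, 100, 200, 301)
    pefr_low = tri(pefr_pct, -1, 40, 80)
    severe = min(aqi_high, pefr_low)  # fuzzy AND via minimum
    return "severe" if severe > 0.5 else "mild"

label = asthma_severity(aqi=200, pefr_pct=40)
```

In the full ANFIS system, gradient-based training adjusts the membership shapes and rule consequents so the classification tracks the RMSE figures reported in the abstract.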

Keywords: Asthma, IoT, AI, Machine Learning, COVID-19.

References:

[1]   K. Al-Aubidy, M. Baniyounis, and A. Al-Turk, “AI in telemedicine: Integration of machine learning and IoT for chronic disease management,” International Journal of Advanced Healthcare Systems, vol. 12, no. 4, pp. 201–210, 2023.

[2]   A. Mustafa, A. Albtoosh, and K. Shaheen, “Adaptive neuro-fuzzy inference system for asthma prediction using AQIand PEFR,”IEEE Access, vol. 11, pp. 45012–45025,2023.

[3]   M. Baniyounis and K. Al-Aubidy, “Neural network-based ventilator control for asthma and respiratory disease patients,” Journal of Medical Systems, vol. 47, no. 2, pp. 33–42, 2023.

[4]   A. A. M. Al-Turk, “Ventilator monitoring control using ANN andfuzzy logic for asthma prediction,” Master’s Thesis, Philadelphia University, Jordan, 2025.

[5]   S. Gupta, R. Sharma, and A. Kumar, “IoT-enabled healthcare monitoring and prediction systems: A review,” IEEE Reviews in Biomedical Engineering, vol. 16, pp. 215–228, 2023.

[6]   W. Zhang, J. Li, and H. Chen, “Mobile health applications for respiratory disease monitoring: Trends and future directions,” IEEE Internet of Things Journal, vol. 10, no. 8, pp. 6501–6512, 2023.

[7]   World Health Organization (WHO), Global Asthma Report 2023, Geneva, Switzerland: WHO Press, 2023.

Paper #: S6-4

AI-based Fault Detection and Allocation in PV Systems

Izziyyah M. Alsudi1,* and Kasim M. Al-Aubidy2

1Al-Balqa Applied University, Amman, Jordan

2Mechatronics Engineering Department, Tishk International University, Erbil, Iraq

*Contact: Izziyyah.alsudi@gmail.com

 

Abstract:

In an effort to slow the rate of global warming and climate change, solar photovoltaic (PV) installations are growing rapidly. In a solar PV plant, clean energy is generated that is used at all scales, including utility and distributed generation systems. Large-scale PV plants are generally designed to operate as grid-tied systems due to their simplicity and ease of use. There will be an increase in the incidence of failures as PV plants grow and become more connected to the grid. With larger photovoltaic arrays, it becomes more difficult to locate and identify faults. Artificial intelligence is used in this paper to improve the efficiency and reliability of PV systems by detecting and classifying faults using deep learning. Two distinct methods were employed to address this problem. The first method considers four cases: one normal operating condition and three types of faults applied to a single panel—namely, a short-circuit fault, an open-circuit fault, and a shading fault. The results indicate that this method achieves 100% accuracy in detecting, classifying, and locating these faults. The second method expands the analysis to 12 cases, covering a broader range of fault scenarios. These include the normal operating condition, short-circuit faults affecting 1 to 5 panels, open-circuit faults affecting 1 to 5 panels, and shading faults affecting 1 to 5 panels. The results demonstrate that this method provides a validation accuracy of 98.65% in detecting, classifying, and pinpointing faulty panels within the string. By accurately detecting and categorizing faults, both methods contribute to faster and more efficient maintenance of photovoltaic systems. This capability not only reduces downtime but also enhances the overall reliability, stability, and long-term performance of the system.
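Each fault type in a PV string leaves a characteristic signature on the string's electrical operating point, which is what a classifier learns to separate. As a deliberately simple stand-in for the paper's deep-learning model, the sketch below classifies a (voltage, current) measurement by nearest centroid; the signature values are invented for illustration:

```python
def classify_fault(v, i, centroids):
    """Nearest-centroid fault classification on a (voltage, current)
    operating point -- a toy stand-in for a trained deep classifier."""
    def dist2(c):
        return (v - c[0]) ** 2 + (i - c[1]) ** 2
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Illustrative string-level signatures (volts, amps) per condition
centroids = {
    "normal":        (120.0, 8.0),
    "short_circuit": (90.0, 8.5),   # voltage collapses across shorted panel
    "open_circuit":  (120.0, 0.0),  # string current drops to ~0
    "shading":       (110.0, 5.0),  # partial current loss
}
fault = classify_fault(118.0, 0.3, centroids)
```

A deep network plays the same role but learns far richer decision boundaries from full I-V curves and time-series data, which is what allows the 12-case method to also count and locate the affected panels.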

Keywords: Deep Learning, Artificial Neural Network, Photovoltaic systems, Fault diagnosis.
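The second method's string-level classification can be illustrated with a minimal sketch of a feed-forward network mapping measurements to one of the 12 fault classes. The feature set (string voltage, current, irradiance, temperature), the layer sizes, and the random weights below are illustrative assumptions, not the trained model from the paper; a real deployment would train the weights on labelled fault data.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 12   # the abstract's 12-case study
NUM_FEATURES = 4   # assumed inputs: string voltage, current, irradiance, temperature

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class FaultClassifier:
    """Minimal two-layer ANN forward pass: measurements -> fault-class probabilities."""
    def __init__(self, n_in=NUM_FEATURES, n_hidden=16, n_out=NUM_CLASSES):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def predict_proba(self, x):
        h = np.tanh(x @ self.W1 + self.b1)       # hidden layer
        return softmax(h @ self.W2 + self.b2)    # probability per fault class

# One batch of normalised string measurements (illustrative values only).
x = np.array([[0.95, 0.90, 0.80, 0.50]])
probs = FaultClassifier().predict_proba(x)
predicted_class = int(probs.argmax())
```

With trained weights, `predicted_class` would index into the 12 labelled scenarios (normal operation plus the fault cases).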

References:

[1] N. Sapountzoglou and B. Raison, “A Grid Connected PV System Fault Diagnosis Method,” in IEEE International Conference on Industrial Technology (ICIT), Melbourne, VIC, Australia, 2019.

[2] R. K. Madeti, “A monitoring system for online fault detection in multiple photovoltaic arrays,” Renewable Energy Focus, vol. 41, pp. 160–178, 2022.

[3] Y.-Y. Hong and R. A. Pula, “Methods of photovoltaic fault detection and classification: A review,” Energy Reports, vol. 8, pp. 5898–5929, 2022.

[4] I. Hussain, I. U. Khalil, A. Islam, M. U. Ahsan, T. Iqbal, M. S. Chowdhury, K. Techato and N. Ullah, “Unified Fuzzy Logic Based Approach for Detection and Classification of PV Faults Using I-V Trend Line,” Energies, vol. 15, no. 14, 2022.

Paper #: S6-5

Attention-Driven Invariant Feature Learning for Human Identification in Complex Environments

Ghaith Hussein *, Waleed Al-Nuaimy, and Jeremy S. Smith

Department of Electrical Engineering and Electronics,

University of Liverpool, Brownlow Hill, Liverpool L69 3GJ, UK

*Contact: G.t.Hussein2@gmail.com, phone +447878358258

 

Abstract:

This paper presents the Enhanced Region-based Attention Network (ERAN), an artificial-intelligence framework for person re-identification (P-ReID) built on deep attention mechanisms. The main problem addressed is invariant feature extraction under diverse environmental conditions, such as lighting, pose, and visual appearance. ERAN integrates a Shape Attention Module (SAM) and an Appearance Attention Module (AAM) to focus selectively on human body shape and apparel-based cues while reducing the impact of transient factors such as pose, lighting, and occlusion. Building on the Residual Attention Network (RAN) [1], ERAN further integrates Cumulative Region Aggregation (CRA) for structured spatial segmentation and Contextual Feature Analysis (CFA) for inter-feature relationship modelling, supported by a bottom-up/top-down feedforward attention pipeline. Experimental evaluations on large-scale benchmark datasets, including LTCC [2], PRCC [3], CUHK03 [4], and DukeMTMC-ReID [5], demonstrate the robustness of ERAN in real-world scenarios such as crowded urban-centre or campus environments, long-term appearance changes, and urban surveillance scenes. ERAN achieves up to 93.0% Rank-1 and 85.7% mAP on DukeMTMC-ReID, and 88.5% Rank-1 on CUHK03, surpassing existing state-of-the-art methods. Beyond its technical innovation, ERAN highlights potential applications in healthcare for non-invasive patient monitoring, in education for secure smart attendance systems, and in finance for resilient identity verification under appearance variability.

Keywords: Person re-identification, Shape Attention Module, Appearance Attention Module, Contextual Feature Analysis, Cumulative Region Aggregation.
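The Rank-1 and mAP figures quoted above follow the standard re-ID retrieval protocol: rank the gallery by similarity to each query and score the resulting ranking. A minimal sketch of that evaluation is shown below; the features and identities are toy values, and the sketch omits the same-camera junk-image filtering applied on some benchmarks.

```python
import numpy as np

def rank1_and_map(query_feats, query_ids, gallery_feats, gallery_ids):
    """CMC Rank-1 and mean Average Precision for a re-ID retrieval run."""
    # Cosine similarity via L2-normalised feature vectors.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T
    rank1_hits, aps = [], []
    for i in range(len(query_ids)):
        order = np.argsort(-sim[i])                        # best match first
        matches = (gallery_ids[order] == query_ids[i]).astype(float)
        rank1_hits.append(matches[0])                      # top-1 correct?
        hits = np.cumsum(matches)
        precision = hits / (np.arange(len(matches)) + 1)   # precision@k
        aps.append((precision * matches).sum() / matches.sum())
    return float(np.mean(rank1_hits)), float(np.mean(aps))

# Toy run: two queries against a two-image gallery.
r1, mAP = rank1_and_map(
    np.array([[1.0, 0.0], [1.0, 0.1]]), np.array([0, 1]),
    np.array([[1.0, 0.0], [0.7, 0.7]]), np.array([0, 1]))
```

Here the first query retrieves its identity at rank 1 and the second at rank 2, giving Rank-1 = 0.5 and mAP = 0.75 on this toy gallery.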

References:

[1] L. Liang, J. Cao, X. Li, and J. You, “Improvement of Residual Attention Network for Image Classification,” Lect. Notes Comput. Sci., vol. 11935 LNCS, pp. 529–539, 2019, doi: 10.1007/978-3-030-36189-1_44.

[2] X. Qian et al., “Long-Term Cloth-Changing Person Re-Identification,” Lect. Notes Comput. Sci., vol. 12624 LNCS, pp. 71–88, 2021, doi: 10.1007/978-3-030-69535-4_5.

[3] Q. Yang, A. Wu, and W. S. Zheng, “Person Re-Identification by Contour Sketch under Moderate Clothing Change,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 6, pp. 2029–2046, 2021, doi: 10.1109/TPAMI.2019.2960509.

[4] W. Li, R. Zhao, T. Xiao, and X. Wang, “DeepReID: Deep Filter Pairing Neural Network for Person Re-Identification,” Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 152–159, 2014, doi: 10.1109/CVPR.2014.27.

[5] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi, “Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking,” Lect. Notes Comput. Sci., vol. 9914 LNCS, pp. 17–35, 2016, doi: 10.1007/978-3-319-48881-3_2.

Session 7: Entrepreneurial Case Studies and Real-World Experiences

Tuesday, December 16, 2025, 15h45 – 16h45   

Paper #: S7-CS1

A Case Study on the Strategic Implementation of the First Digital Bank in Iraq

 

Nabeel Raheem Alebady

Chairman of the Founding Committee of the Iraqi Digital Bank, Iraq

Email: Nabeel.Alebady@DIB-iq.com

Abstract:

This presentation will provide an in-depth analysis of the Iraqi Digital Bank’s groundbreaking journey to redefine traditional banking, showcasing the bank’s model, implementation challenges, and strategic outcomes. The key discussion points include:

  • Strategic Vision of a Digital-First Bank:
    • Analyzing the customer-centric vision behind establishing a fully digital bank tailored for individuals and SMEs in the Iraqi market.
    • The role of AI and unlimited digital applications in achieving operational efficiency and real-time transaction processing.
  • Enhancing Customer Experience and Value Creation:
    • Designing innovative products and services that align with specific customer segment needs, thereby maximizing perceived value.
    • How digital channels create a new level of banking convenience and well-being, accessible anytime, anywhere.
  • Security, Innovation, and Building Trust in Digital Finance:
    • Implementing multi-layered security frameworks, including multi-factor authentication and biometric integration.
    • The critical role of eKYC and mobile banking solutions in fraud prevention and creating a secure digital identity for customers.
  • Impact on Financial Behavior and Societal Transformation:
    • The bank’s role in fostering a new banking culture and promoting the principle of digital trust.
    • Contributing to financial literacy and digital inclusion in an emerging market, aligning with educational transformation principles by fostering new digital skills among the population.

Keywords: Digital bank, Mobile banking, Selling Proposition, Bank security, Societal transformation.

 

Paper #: S7-CS2

Biomedical Engineering Research Center at University of Anbar: Case Study

Yousif Al Mashhadany

Biomedical Engineering Research Center / University of Anbar

Contact: yousif.mohammed@uoanbar.edu.iq

Abstract:

The translation of innovative research outcomes into viable investment projects represents a critical bridge between academic discovery and real-world impact. This workshop, titled “Converting a Patent into an Investment Project in Biomedical Engineering: Case Study of the Biomedical Engineering Research Center at the University of Anbar”, aims to demonstrate a structured pathway for transforming patented biomedical ideas into sustainable institutional and commercial initiatives.

Over the past five years, five major biomedical engineering projects have been developed and published, each addressing significant challenges in healthcare technology, medical devices, and clinical applications. These projects encompass a wide range of innovations, including diagnostic systems, therapeutic devices, rehabilitation technologies, and smart biomedical sensors. The workshop will illustrate how these individual research outcomes were strategically integrated to establish the “Biomedical Engineering Research Center” at the University of Anbar—a hub designed to promote interdisciplinary collaboration, technology transfer, and commercialization.

The session will focus on three main aspects: (1) the process of converting scientific research into patentable technologies, (2) the evaluation of patents for investment readiness and market potential, and (3) the organizational and financial strategies required to institutionalize these innovations within a research center framework. Emphasis will be placed on practical steps, challenges, and lessons learned from the University of Anbar’s experience, offering a replicable model for other academic institutions seeking to align research with socioeconomic development.

By linking patent creation, project development, and investment planning, this workshop aims to inspire participants to view biomedical engineering not only as a field of scientific inquiry but also as a dynamic engine for innovation-driven entrepreneurship and national health advancement.

Paper #: S7-CS3

AI-Based Smart Traction Bed for Adaptive and Personalized Spine Treatment

 

Ezideen A Hasso1,*, Asma Ayaz2, Kasim M. Al-Aubidy1, and Ruaa E. Al-Khalidi3

1 Mechatronics Department, Tishk International University, Erbil, Iraq

2 Physiotherapy Department, Cihan University, Erbil, Iraq

3Pharmacology, Medical Physics, and Clinical Biochemistry, Hawler Medical University, Erbil, Iraq

*Contact: ezideen.hasso@tiu.edu.iq

 

 

Abstract:

Spinal disorders are among the leading causes of disability worldwide, frequently resulting in persistent pain, reduced mobility, and diminished quality of life. Traction therapy is a common physiotherapy intervention designed to relieve discomfort, decrease pressure on intervertebral discs, and restore spinal alignment. Despite its widespread use, traditional traction systems generally operate on fixed treatment protocols, which may not sufficiently account for the individual biomechanical and clinical variations of each patient. This limitation often reduces both the effectiveness and the overall safety of such treatments. The present research aims to develop an artificial intelligence–enabled smart traction bed capable of delivering adaptive and personalized spinal therapy. The system will integrate real-time feedback from pressure sensors, motion tracking devices, and load cells to continuously monitor spinal positioning, applied traction forces, and patient responses. A machine learning–driven control algorithm will be employed to dynamically adjust therapeutic parameters, thereby enhancing treatment outcomes while ensuring patient safety. The scope of the project includes the design, construction, and initial validation of a prototype AI-assisted traction bed. Its performance will be evaluated against conventional traction therapy in terms of therapeutic impact and patient experience. Expected benefits include greater pain reduction, faster functional recovery, higher levels of patient satisfaction, and improved safety through automated monitoring and adjustment. By embedding intelligent control mechanisms into physiotherapy equipment, this initiative represents an important step toward the modernization of rehabilitation technology. Furthermore, the approach provides a foundation for the development of a new generation of AI-based rehabilitation devices, potentially transforming future physiotherapy practices.

Keywords: AI-assisted traction therapy, Adaptive control in physiotherapy, AI in spine care, Load-adaptive therapeutic systems, Digital twin in rehabilitation.
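As a simple illustration of the closed-loop adjustment the abstract describes, the sketch below uses a plain proportional update with a hard safety clamp. The gain, force limits, and target are hypothetical values; the proposed system would instead adjust its parameters with a machine-learning controller fed by pressure, motion-tracking, and load-cell feedback.

```python
def adjust_traction(current_force, target_force, gain=0.2, max_force=300.0):
    """One proportional update step toward the target traction force (newtons).

    gain and max_force are hypothetical illustration values, not clinical
    parameters from the paper.
    """
    error = target_force - current_force          # load-cell reading vs. setpoint
    new_force = current_force + gain * error      # damped move toward the target
    return min(max(new_force, 0.0), max_force)    # hard safety clamp

# Simulated closed loop: the applied force converges toward a 180 N setpoint.
force = 100.0
for _ in range(30):
    force = adjust_traction(force, target_force=180.0)
```

The safety clamp is the key design point: whatever the learned controller requests, the applied force can never leave the permitted range, which mirrors the abstract's emphasis on automated monitoring for patient safety.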

Paper #: S7-CS4

Artificial Intelligence–Driven Auto-Contouring for Multimodal Imaging in Radiation Therapy Treatment Planning

 

Ruaa E. Al-Khalidi1,*, Ezideen A Hasso2, Kasim M. Al-Aubidy2, Ahmad Sami Kamal3

and Asma Ayaz4

1Pharmacology, Medical Physics, and Clinical Biochemistry, Hawler Medical University, Erbil, Iraq

2Mechatronics Department, Tishk International University, Erbil, Iraq

3College of Dentistry, Al-Kitab University, Kirkuk, Iraq

4Physiotherapy Department, Cihan University, Erbil, Iraq

*Contact: ruaa.hussien@hmu.edu.krd

Abstract:

Precise delineation of target volumes and organs at risk (OARs) is fundamental to radiation therapy planning but remains one of the most time-consuming and observer-dependent tasks in clinical workflow. This project aims to develop an artificial intelligence (AI)–based software tool capable of performing automated contouring directly on DICOM-RT datasets, independent of treatment delivery platform or vendor. The system will be trained and validated using publicly available, anonymized datasets that include computed tomography (CT) images as the primary modality, with the capacity to integrate magnetic resonance imaging (MRI) for enhanced soft tissue definition in specific cancer sites such as the brain, pelvis, and prostate. Deep learning algorithms will be employed to segment planning target volumes (PTVs) and OARs, with performance evaluated against expert delineations using standard quantitative metrics such as Dice similarity coefficient, Hausdorff distance, and volumetric overlap. By supporting DICOM-RT standards (RTSTRUCT, RTDOSE, RTPLAN), the proposed tool is compatible with multiple treatment planning systems and linear accelerator platforms (e.g., Varian, Elekta). The long-term objective is to integrate this vendor-neutral, multimodal AI solution into clinical radiotherapy workflows, thereby improving efficiency, reducing inter-observer variability, and enhancing the quality of cancer treatment planning.

Keywords: Radiation therapy, AI, auto-contouring, DICOM-RT, CT, MRI, multimodality, treatment planning.
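Of the quantitative metrics listed, the Dice similarity coefficient is the usual headline figure for auto-contouring quality. A minimal sketch of its computation on binary segmentation masks is shown below (toy masks, not clinical data; Hausdorff distance is omitted here).

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2-D "contours": an AI-predicted and an expert reference organ mask.
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1   # 16 voxels
ref  = np.zeros((8, 8), dtype=int); ref[3:7, 3:7] = 1    # 16 voxels, shifted
score = dice_coefficient(pred, ref)   # overlap is 3x3 = 9 -> 2*9/32 = 0.5625
```

In the proposed validation, `pred` would come from the deep-learning contour on a DICOM-RT dataset and `ref` from the expert delineation, computed per organ in 3-D.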