Prof. Nirvana Popescu: AI-Powered Clinical Intelligence: From Diagnosis and Predictive Models to Explainable Decisions
Short Bio of the lecturer:
Nirvana Popescu has been a full professor at the National University of Science and Technology POLITEHNICA Bucharest (Romania), Computer Science Department, since 2014. She received her Bachelor of Engineering degree in 1998 from the Computer Science Department of POLITEHNICA Bucharest, followed by an M.Sc. at the same department. In 2003 she received her PhD in Computer Science with a thesis entitled “Self-organizing intelligent fuzzy systems”; during her doctoral studies she was also a guest PhD student at the University of Bielefeld, Germany. Her main research interests are neural networks, intelligent systems, fuzzy logic and control, eHealth systems, cognitive and autonomous robots, and reconfigurable computers. At UPB, she leads the Laboratory for “Reconfigurable high-confidence medical devices” and has coordinated research projects in the fields of Medical Rehabilitation Robotics and Medical Wearable Sensors.
Abstract
The presentation explores the growing role of artificial intelligence (AI) in the analysis of medical data and its impact on disease diagnosis, prediction, and clinical decision-making. It focuses on the application of machine learning (ML), deep learning (DL), and hybrid intelligent systems for the detection and evaluation of diseases using medical imaging and patient data. Particular attention is given to Parkinson’s disease (PD), where traditional clinical evaluation methods, such as the Unified Parkinson’s Disease Rating Scale (UPDRS), remain subjective and dependent on limited observations of motor activity. To address these limitations, AI-based approaches using resampling techniques for imbalanced datasets, including over-sampling, under-sampling, and hybrid methods, are analyzed together with classifiers such as XGBoost, Decision Trees, and K-Nearest Neighbors. Experimental results demonstrate that over-sampling methods significantly improve classification performance, achieving up to 99% accuracy with XGBoost.
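The resampling-plus-classifier pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration, not the presenter's actual experimental setup: it uses synthetic data as a hypothetical stand-in for patient features, simple random over-sampling (dedicated libraries such as imbalanced-learn provide richer variants like SMOTE), and a Decision Tree in place of XGBoost.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples until all classes are equally sized."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    X_parts, y_parts = [], []
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(y == c)
        extra = rng.choice(idx, size=n_max - n, replace=True)
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep])
        y_parts.append(y[keep])
    return np.concatenate(X_parts), np.concatenate(y_parts)

# Synthetic imbalanced dataset (hypothetical stand-in for motor-activity features)
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, size=(900, 4)),   # majority: controls
               rng.normal(2.0, 1.0, size=(100, 4))])  # minority: PD cases
y = np.array([0] * 900 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = random_oversample(X_tr, y_tr)          # balance only the training set
clf = DecisionTreeClassifier(random_state=0).fit(X_bal, y_bal)
print(f"balanced training size: {len(y_bal)}")
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

Note that over-sampling is applied only to the training split; resampling before splitting would leak duplicated minority samples into the test set and inflate the reported accuracy.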
The talk also examines broader medical imaging applications involving chest X-rays, dermoscopy images for melanoma detection, and MRI brain scans. Various ML algorithms, including Support Vector Machines, Random Forests, Naïve Bayes, Logistic Regression, and Convolutional Neural Networks (CNNs), are compared in terms of diagnostic performance. Advanced hybrid models combining CNNs with ML classifiers, stacking architectures, and optimization techniques such as Genetic Algorithms and Whale Optimization Algorithms are presented as effective tools for improving classification accuracy, sensitivity, specificity, and AUC scores. Furthermore, recent DL-based Parkinson’s disease detection models using MRI data and architectures such as ResNet50 and DenseNet121 are discussed, with recall values reaching 99% for PD detection.
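The stacking architectures mentioned above combine several base classifiers whose predictions are fed to a meta-classifier. The sketch below shows the idea with scikit-learn's `StackingClassifier` on synthetic feature vectors (a hypothetical stand-in for image-derived features; in the hybrid models discussed in the talk, a CNN would typically supply those features):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for feature vectors extracted from medical images
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Level-0 learners; their cross-validated predictions train the level-1 meta-model
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=1)),
                ("rf", RandomForestClassifier(random_state=1)),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print(f"stacked accuracy: {stack.score(X_te, y_te):.2f}")
```

The meta-classifier learns how to weight each base model, which is why stacking can outperform any single constituent on heterogeneous medical data.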
An essential aspect highlighted throughout the presentation is explainability in AI systems. In medical environments, highly accurate predictions alone are insufficient unless clinicians can understand and trust the reasoning behind them. Explainable AI improves transparency, supports clinical validation, reduces bias, and facilitates safer integration of AI into healthcare practice, ultimately contributing to more reliable and ethical medical decision-making.
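One simple, model-agnostic way to make a classifier's reasoning inspectable, in the spirit of the explainability discussed above, is permutation importance: shuffle one feature at a time and measure how much held-out accuracy drops. This is only an illustrative sketch on synthetic data (the talk does not prescribe a specific method; richer tools such as SHAP or LIME serve the same goal):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical features
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
clf = RandomForestClassifier(random_state=2).fit(X_tr, y_tr)

# How much does randomly shuffling each feature hurt test accuracy?
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=2)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

A clinician can read the resulting ranking directly: features whose shuffling barely changes accuracy contribute little to the prediction, while a large drop flags the measurements the model actually relies on.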