Hybrid Event

Data Science
G. Golob, M. Rot, G. Kosec (Jožef Stefan Institute, Ljubljana, Slovenia) Issues with accurate boundary representation in oversampled meshless methods
A premier example of meshless methods is the Radial Basis Function-generated Finite Difference (RBF-FD) method, which approximates a linear operator based on values in discrete computational points. Compared to mesh-based methods, this approach offers several advantages for solving partial differential equations (PDEs), including a simpler spatial discretisation that eliminates the need for mesh generation. However, any constraints we impose hold only at the computational points, i.e., we solve the equation in a strong formulation. Sometimes it is beneficial to consider the solution in a more continuous sense. One way to achieve this is oversampling, i.e., evaluating the approximation at additional points. Unfortunately, this means that the system of equations becomes overdetermined and can no longer be solved exactly. Large errors can then arise because the parts of the system that are important for the solution, i.e., the relatively small number of nodes on the boundary, have almost no effect on the residual norm being minimised. One possible remedy is to increase the weighting of these important parts of the system. In this paper, we observe how changes in the boundary weights affect the accuracy of the considered PDE solutions.
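As a hedged illustration of the weighting idea described above (not the authors' implementation), the following sketch solves a synthetic overdetermined collocation system by least squares and shows how scaling the boundary rows changes the boundary residual; all sizes and matrices are made up.

```python
# Minimal sketch (not the paper's code): boundary weighting in an overdetermined
# collocation system A x ~= b solved by least squares.
import numpy as np

rng = np.random.default_rng(0)
n_interior, n_boundary, n_unknowns = 400, 20, 100   # hypothetical sizes

A_int = rng.normal(size=(n_interior, n_unknowns))   # rows from oversampled interior points
A_bnd = rng.normal(size=(n_boundary, n_unknowns))   # rows enforcing boundary conditions
b_int = rng.normal(size=n_interior)
b_bnd = rng.normal(size=n_boundary)

def solve_weighted(w_boundary):
    """Scale the boundary rows by w_boundary before the least-squares solve."""
    A = np.vstack([A_int, w_boundary * A_bnd])
    b = np.concatenate([b_int, w_boundary * b_bnd])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

for w in (1.0, 10.0, 100.0):
    x = solve_weighted(w)
    res_bnd = np.linalg.norm(A_bnd @ x - b_bnd)
    print(f"boundary weight {w:6.1f} -> boundary residual {res_bnd:.3e}")
```

Increasing the boundary weight makes the minimised norm pay more attention to the few boundary rows, which is the effect the paper studies.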
|
A. Rogan, A. Kolar-Požun, G. Kosec (Jožef Stefan Institute, Ljubljana, Slovenia) A numerical study of combining RBF interpolation and finite differences to approximate differential operators 
This paper focuses on RBF-based meshless methods for approximating differential operators, one of the most popular being RBF-FD. Recently, a hybrid approach was introduced that combines RBF interpolation and traditional finite difference stencils. We compare the accuracy of this method and RBF-FD on a two-dimensional Poisson problem for standard five-point and nine-point stencils and different parameters. We conclude by describing potential applicability in computational electromagnetics.
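For readers unfamiliar with the finite-difference side of the comparison, here is a minimal, self-contained example of the classical five-point stencil applied to a 2D Poisson problem with a manufactured solution; it is purely illustrative and is not the hybrid RBF scheme of the paper.

```python
# Classical five-point finite-difference stencil for -u_xx - u_yy = f on the unit
# square with zero Dirichlet boundary, solved with plain Jacobi iteration.
import numpy as np

n = 30                          # interior grid points per direction (assumed)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)   # manufactured right-hand side

u = np.zeros((n + 2, n + 2))    # solution grid including the zero Dirichlet boundary
for _ in range(4000):           # Jacobi iteration: simple, if slow, to keep the sketch short
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:] + h**2 * f)

exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
print("max error vs. exact solution:", np.abs(u[1:-1, 1:-1] - exact).max())
```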
|
N. Malivuk, A. Ilić, I. Putnik, L. Milić (University of Novi Sad, Faculty of Technical Sciences, Novi Sad, Serbia), B. Petrović (University of Novi Sad, Faculty of Medicine, Novi Sad, Serbia), S. Kojić (University of Novi Sad, Faculty of Technical Sciences, Novi Sad, Serbia) Fluid Velocity Optimization in a Computational Lab-on-Chip Model for Cellular Analysis 
The research domain of lab-on-chip devices has been at the forefront of biomedical applications. We present a study focused on fluid velocity optimization in a computational lab-on-chip model designed for cellular analysis, emphasizing the influence of key geometric parameters: porosity, channel width, and sieve angle. A parametric COMSOL simulation evaluates velocity profiles with and without porous media, identifying the parameters with the highest impact on fluid velocity and optimizing flow conditions to ensure an adequate response for downstream sensor performance. The lab-on-chip model is parametrized so that channel widths and sieve angles are systematically varied to achieve a target velocity in a channel of length 32.5 mm, with a height of 0.5 µm and a width of 10 mm. The porosity was varied between 10 µm and 50 µm, in steps of 10 µm. Geometric variations and porous media effects are analyzed to quantify their impact on flow dynamics. As prototyping of lab-on-chip devices can be a lengthy and expensive process, this simulation step is a much-needed precursor. The results highlight the geometric parameters most critical for achieving optimized fluid velocity, which is essential for precise analyte transport. Finally, this study contributes to the development of lab-on-chip devices with enhanced performance in biomedical applications.
|
A. Ilić, N. Malivuk, L. Milić, F. Mrkić (University of Novi Sad, Faculty of Technical Sciences, Novi Sad, Serbia), B. Petrović (University of Novi Sad, Faculty of Medicine, Novi Sad, Serbia), S. Kojić (University of Novi Sad, Faculty of Technical Sciences, Novi Sad, Serbia) In silico optimization of scaffolds for skeletal muscle tissue engineering
In silico optimization of scaffolds for skeletal muscle tissue engineering offers a precise and efficient approach to designing structures that promote cellular growth and tissue regeneration, as a precursor to fabrication. This study focuses on optimizing fluid flow through the porosity within three scaffold designs: orthogonally oriented cylindrical tube layers, cylindrical tube layers oriented at 0°, 60°, and 90°, and a honeycomb structure with inner connections. The study aims to match the fluid dynamics and porosity of these designs to those of a porous media reference model.
Using COMSOL Multiphysics simulations, the geometric parameters of each scaffold were varied systematically. For cylindrical scaffolds, the radius and spacing between cylinders were adjusted, while for the honeycomb design, the thickness of the lines connecting opposite vertices of the base element was optimized. All models occupied a space of 1 cm × 1 cm × 500 µm. The optimization process identified the dimension values that most closely match the desired flow and porosity characteristics of the reference porous media model. This approach provides a robust framework for designing and finding optimal parameters for scaffolds tailored to specific tissue engineering needs, specifically skeletal muscle regeneration.
|
D. Katović (University of Zagreb Faculty of Kinesiology, Zagreb, Croatia), T. Bronzin, D. Adamec, B. Prole, A. Stipić (CITUS, Zagreb, Croatia), M. Horvat (University of Zagreb Faculty of Electrical Engineering and Computing, Department of Applied Computing, Zagreb, Croatia) Structured overview of the similarity metrics commonly used in 3D Skeleton-based Re-Identification studies
Technological advances in multi-modal devices, such as Microsoft Azure Kinect, have significantly facilitated the development of methods for visualizing and analyzing segmentally structured 3D models of the human body and movement patterns. These innovations have directly contributed to the application of this technology in 3D skeleton-based re-identification (SRID) of persons of interest.
The re-identification of a person (re-ID) is a computer vision task that focuses on the recognition of complete or segmented templates and the identification of individuals in different scenes.
A very important aspect of SRID research is the analytical evaluation of identity recognition using similarity metrics. These metrics — such as Euclidean Distance, Cosine Similarity, Dynamic Time Warping, Hausdorff Distance, Procrustes Analysis, Mahalanobis Distance, Earth Mover's Distance and Jaccard Index — enable accurate, efficient and robust re-identification of individuals based on uniquely constructed identity profiles.
The aim of this article is to provide a structured overview of SRID research related to similarity metrics and their various applications.
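To make the listed metrics concrete, the sketch below computes Euclidean distance, cosine similarity and a plain dynamic-time-warping distance on made-up skeleton feature vectors; the actual identity profiles used in SRID studies are, of course, domain-specific.

```python
# Sketch of a few of the listed similarity metrics on hypothetical skeleton features.
import numpy as np
from scipy.spatial.distance import euclidean, cosine, cdist

a = np.random.default_rng(1).normal(size=64)   # identity profile of person A (assumed)
b = np.random.default_rng(2).normal(size=64)   # identity profile of person B (assumed)

print("Euclidean distance:", euclidean(a, b))
print("Cosine similarity :", 1.0 - cosine(a, b))     # scipy's `cosine` is a distance

def dtw(seq_a, seq_b):
    """Plain O(n*m) dynamic time warping between two sequences of feature vectors."""
    cost = cdist(seq_a, seq_b)                        # pairwise frame distances
    acc = np.full((len(seq_a) + 1, len(seq_b) + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, len(seq_a) + 1):
        for j in range(1, len(seq_b) + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[-1, -1]

gait_a = np.random.default_rng(3).normal(size=(30, 8))   # 30 frames of 8 joint angles (assumed)
gait_b = np.random.default_rng(4).normal(size=(40, 8))
print("DTW distance      :", dtw(gait_a, gait_b))
```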
|
I. Vasileska, P. Tomšič, L. Kos (Faculty of Mechanical Engineering, University of Ljubljana, Ljubljana, Slovenia) Accelerating Sheath Particle-in-Cell Simulations with StarPU
Particle-in-Cell (PIC) codes are widely used in plasma physics, but their computational requirements make efficient parallelisation a major challenge, especially on heterogeneous systems combining CPUs and GPUs. In this paper, we investigate how StarPU, a task-based runtime system, can be integrated into a one-dimensional sheath PIC code to overcome these computational challenges. The sheath simulation models fully kinetic electrons and ions, with particles interacting with boundary walls, leading to particle loss and sheath formation over time.
We decompose the PIC workflow into discrete tasks, including particle pushing, boundary treatment, and field updates, and implement them using StarPU's dynamic task scheduling. The study evaluates various StarPU scheduling strategies and optimisations to minimise communication overhead and ensure efficient resource utilisation. A novel particle flagging approach replaces traditional particle compaction and simplifies GPU execution while maintaining accuracy.
Performance benchmarks show significant speedups on heterogeneous systems, with strong and weak scaling analyses highlighting the framework's adaptability. Profiling results provide insight into the efficiency of task execution and scheduling and highlight the effectiveness of StarPU in managing complex dependencies. This work provides a robust, scalable framework for PIC codes and opens up possibilities for extending task-based parallelisation to multi-dimensional simulations and other scientific computing applications.
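A hedged sketch of the flagging idea in isolation (independent of StarPU and the paper's code): lost particles are marked inactive in a boolean array instead of being removed and the arrays compacted, so the memory layout never changes.

```python
# Particle flagging instead of compaction in a toy 1-D PIC-like push loop.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(0.0, 1.0, n)          # particle positions (assumed 1-D domain [0, 1])
v = rng.normal(0.0, 0.1, n)           # particle velocities
active = np.ones(n, dtype=bool)       # the flag array that replaces compaction

dt, x_min, x_max = 1e-2, 0.0, 1.0
for step in range(100):
    x[active] += v[active] * dt                       # push only active particles
    lost = active & ((x < x_min) | (x > x_max))       # particles absorbed by the walls
    active &= ~lost                                   # flag them, keep the array layout fixed

print("remaining particles:", active.sum())
```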
|
E. Ajdaraga Krluku, M. Gusev, G. Armenski, D. Mileski (Ss. Cyril and Methodius University in Skopje, Faculty of Computer Science and Engineering, Skopje, Macedonia) Cost-effectiveness Analysis of a Serverless Noise Detection Module
This paper compares the two leading serverless computing platforms, Amazon AWS and Google GCP, in detecting uninterpretable ECG signals. This research aims to determine the more accurate, faster, and cost-effective serverless solution and check the validity of the hypothesis that Amazon AWS is more cost-effective. We conducted experiments by uploading 1-hour and 24-hour ECG recordings to both serverless platforms and compared the processing tasks’ costs, speed, and cost-effectiveness. Our findings show that Amazon AWS is faster and more cost-effective overall than Google GCP, with an average processing time of 377 ms for 1-hour ECG recordings, compared to 690 ms for GCP.
|
B. Kolarek, V. Bojović, D. Davidović, Z. Šojat, K. Skala (Institut Ruđer Bošković, Zagreb, Croatia) Improving Transparency and Accessibility in the Academic Publication Process on EBSI Network 
Lack of transparency, centralisation, restricted access and excessive fees are some of the biggest challenges currently facing academic publishing. In this paper, we propose a new method for storing and tracking peer review process data based on the European Blockchain Services Infrastructure (EBSI) DLT solution. We use EBSI, a permissioned blockchain operated by European institutions, to anchor submission and review metadata, ensuring an immutable record of academic contributions in the peer-review process. Our approach integrates the blockchain component into a standard conference management system to evaluate feasibility, performance and user acceptance under real-world conditions.
|
N. Mlinarič Hribar, A. Rashkovska, G. Kosec (Jožef Stefan Institute, Ljubljana, Slovenia) Prediction of Wind Direction Variability in Dynamic Thermal Rating with Decision Trees 
Dynamic thermal rating (DTR) systems provide the means to actively monitor the capacity of the overhead transmission line and optimise its operation. They use physics models and weather measurements with a temporal resolution of a few minutes to calculate the thermal rating, i.e., the transfer capabilities of power lines. We have found in our research that this resolution is too coarse for wind speed and direction and that using higher-resolution wind data significantly alters the results of the DTR calculations.
The discrepancy is correlated with the variability in wind direction, which is generally not known. This paper attempts to predict wind direction variability employing machine learning, specifically decision trees. We use the available 5-minute weather measurements and features constructed from high-resolution 1-second wind measurements as input parameters. We investigate several scenarios with varying input parameters and show that wind speed is the most significant standard available measurement. Our best prediction has an RMSE of 19.2°, an MAE of 13.0° and a MAPE of 0.21. We also gain some insights into the problem that we hope to explore further in the future.
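A minimal sketch of the modelling setup, assuming invented feature names and fully synthetic data: a decision tree regressor predicting a wind-direction variability target and reporting RMSE and MAE.

```python
# Decision-tree regression sketch for a variability target; data and features are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.gamma(2.0, 2.0, n),        # 5-minute mean wind speed (m/s), assumed
    rng.uniform(0, 360, n),        # 5-minute mean wind direction (deg), assumed
    rng.uniform(-5, 30, n),        # air temperature (deg C), assumed
])
y = 5 + 8 * np.exp(-X[:, 0] / 3) + rng.normal(0, 3, n)   # synthetic variability target (deg)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("MAE :", mean_absolute_error(y_te, pred))
```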
|
F. Aleksić, M. Papak, S. Begušić (University of Zagreb Faculty of Electrical Engineering and Computing, Zagreb, Croatia) Predictive modeling of correlation and volatility in multivariate financial returns 
This paper focuses on risk prediction in multivariate financial return time series by decomposing the asset covariance into its correlation and volatility components. Using a machine learning approach, we develop predictive models which ensure positive definiteness of the resulting matrices. The proposed approach allows us to separately study the contributions of correlation and volatility prediction to overall risk forecasting performance. To evaluate the proposed approach, we compare multiple performance metrics and assess the economic significance of our findings through a portfolio optimization application. The results provide insights into the relative predictability of correlation and volatility and their roles in portfolio optimization and risk management.
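The decomposition underlying this separation can be written as Σ = D R D, with D the diagonal matrix of asset volatilities and R the correlation matrix; the short check below verifies it numerically on synthetic returns.

```python
# Covariance = diag(vol) @ corr @ diag(vol), checked on a synthetic return sample.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 4)) @ np.diag([0.01, 0.02, 0.015, 0.03])  # 4 assets, 250 days

cov = np.cov(returns, rowvar=False)
vol = np.sqrt(np.diag(cov))                 # volatility component
corr = cov / np.outer(vol, vol)             # correlation component
reconstructed = np.diag(vol) @ corr @ np.diag(vol)

print(np.allclose(reconstructed, cov))      # True: predicting volatility and correlation
                                            # separately is enough to rebuild the covariance
```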
|
Data Science
T. Curtis, M. Riedel, H. Neukirchen (University of Iceland, Reykjavik, Iceland), J. Busch, C. Montzka, M. Aach (Forschungszentrum Juelich, Juelich, Germany), R. Hassanian (University of Iceland, Reykjavik, Iceland), C. Barakat (Forschungszentrum Juelich, Juelich, Germany) Improving Surface Soil Moisture Estimation with Distributed Deep Learning and HPC 
Soil Moisture (SM) is crucial for land surface hydrology, impacting agriculture, ecology, wildlife, and public health. This paper explores the use of Machine Learning (ML) and Deep Learning (DL) models for SM data analysis and estimation, leveraging High-Performance Computing (HPC) for efficient model training and hyperparameter tuning. The research was performed with a commercial partner and SM experts. However, utilising HPC environments requires knowledge of many low-level HPC modules (e.g., various application libraries or vendor-specific drivers like Nvidia's CUDA, NCCL, etc.) and their specific versions needed for interoperability. Such challenges can be overcome using the Unique AI Framework (UAIF) developed in the European Center of Excellence in Exascale Computing "Research on AI- and Simulation-Based Engineering at Exascale" (CoE RAISE) project. The research in this paper contributed to the co-design of several UAIF components, using HPC to search for the optimal hyperparameter setup for each ML model. It compares Artificial Neural Networks (ANNs) and Recurrent Neural Networks (RNNs) to a baseline Random Forest (RF) model. Our results demonstrate a significant improvement in accuracy through distributed learning and systematic hyperparameter tuning. The findings suggest that HPC-driven DL can offer scalable, high-resolution SM predictions, while the UAIF enables application-domain experts (e.g., SM experts) to use HPC more easily.
|
J. Meena (IrisIdea TechSolutions, Bangalore, India) Integrating Deep Learning and Raman Spectroscopy for Early Detection of Plant Diseases 
Plant diseases are among the greatest threats to agricultural productivity, and their effective early detection is therefore essential. This work develops a custom convolutional neural network (CNN) for the classification of plant leaf diseases. Trained using data augmentation techniques on the Plant Village dataset, our custom CNN model obtained a notable accuracy of 99.17%, comparable to or exceeding commonly used models such as ResNet-50 (98.74%) and EfficientNet-B7 (99.46%). Results reported for models such as YOLOv3/DenseNet (95.57%), AlexNet (97.62%) and XDNet (98.82%) further demonstrate the growing potential of deep learning in disease detection. This study employs a combination of image quantification and Raman spectroscopy in order to analyse visual and molecular leaf tissue changes, the latter capturing the unique biochemical fingerprints of plant disease. This dual-layered approach enhances diagnostic accuracy and reliability, even for diseases that exhibit subtle symptoms. Our custom model architecture features convolutional layers for spatial feature extraction, batch normalization for stability, max-pooling layers for dimensionality reduction, and fully connected layers for classification. This research paves the way for advanced plant disease management, with potential applications extending to human healthcare diagnostics through collaborative efforts.
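As a generic illustration of the architecture pattern described (convolution, batch normalization, max-pooling, fully connected classification), here is a small Keras sketch; layer counts, sizes and the class count are assumptions, not the paper's configuration.

```python
# Generic CNN following the described pattern; hyperparameters are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 38      # Plant Village is commonly split into 38 leaf classes (assumption here)

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),   # spatial feature extraction
    layers.BatchNormalization(),                                # training stability
    layers.MaxPooling2D(),                                      # dimensionality reduction
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),                       # fully connected classifier
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```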
|
A. Robek (Jozef Stefan Institute, Ljubljana, Slovenia), J. Stergar, M. Modic, Č. Keber, T. Tomanič (Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia), I. Štajduhar (University of Rijeka - Faculty of Engineering, Rijeka, Croatia), M. Milanič (Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia) In Vitro Visualization of Bacterial Colonies by Optical Coherence Tomography 
Assessment of bacterial species in microbiology labs typically involves cultivating bacteria on solid media, where bacterial colonies form. Identification of strains depends on colony morphology, including surface color, size, edge shape, pattern, opacity, and shine. Yet, traditional methods cannot evaluate 3D morphology. Optical coherence tomography (OCT) acquires volumetric information by detecting backscattered light at different sample depths and enables quantitative determination of 3D morphology. In this study, colonies of five bacterial strains were imaged using an OCT system at 880 nm, with a lateral resolution of 2.2 µm, depth resolution of 1.4 µm, and an imaged area of 1.1 x 1.1 mm². Analysis of OCT images revealed differences in upper surface curvatures, lateral and axial dimensions, colony bottom shapes, and internal composition. OCT imaging demonstrates that bacterial strains exhibit distinct 3D morphology features, aiding differentiation. The technique also offers insights into internal colony structures, providing microbiologists with complementary data beyond traditional methods.
|
M. Burch, M. Barbosa, M. Schmid (FHGR, University of Applied Sciences, Chur, Switzerland) Currency Matrices for the Visual Analysis of Currency Correlations 
In this paper, we investigate the problem of creating an interactive visual overview of dynamic pairwise currency correlations. To reach this goal, we extract real-time data and compute pairwise correlations for user-defined time windows on which the correlation values are based. To structure the generated matrix of pairwise currency correlations, we further extend our approach by matrix reordering, with which we can visually identify groups and subgroups of currency pairs that exhibit similar correlation behavior. Moreover, we explore the dynamic data from the perspective of political, economic, or pandemic events such as COVID-19. We illustrate the usefulness of our technique by applying it to currency data and exploring several data dimensions in it. Based on our algorithmic and visual analysis, we find that COVID-19 as well as political and economic events have a visible impact on the currencies. Finally, we discuss the scalability and limitations of our interactive visual analytics tool, which comes in the form of an interactive dashboard.
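A compact sketch of the core computation on synthetic data: windowed pairwise correlations of currency returns, reordered with a hierarchical-clustering leaf order. The interactive dashboard and real-time feed of the paper are not shown; the window length and currency list are made up.

```python
# Windowed correlation matrix of currency returns, reordered so similar pairs are adjacent.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
currencies = ["EURUSD", "GBPUSD", "USDJPY", "USDCHF", "AUDUSD"]
rates = pd.DataFrame(100 + rng.normal(size=(500, 5)).cumsum(axis=0), columns=currencies)
returns = rates.pct_change().dropna()

window = returns.iloc[-90:]                           # user-defined time window (assumed 90 steps)
corr = window.corr()

# Reorder rows/columns with a hierarchical-clustering leaf order on 1 - correlation.
dist = squareform(1.0 - corr.values, checks=False)
order = leaves_list(linkage(dist, method="average"))
print(corr.iloc[order, order].round(2))
```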
|
M. Burch (FHGR, University of Applied Sciences, Chur, Switzerland), S. Dannehl (Eindhoven University of Technology, Eindhoven, Netherlands) Analyzing Player Types in Football 
We analyze football matches in combination with human expert ratings. To answer which player positions are more influential on the pitch, a Poisson regression is applied to determine a coefficient for each position’s impact on home and away goals. To determine which player combinations result in synergistic effects, players are clustered into distinguishable types on the basis of expert-rated attributes. The clusters are used as indicators of a player’s style on the pitch. Matches were taken from the top five European football leagues’ 2017/2018 and 2018/2019 seasons. From the analysis, individual effects for each position could be calculated. These indicate a highly complex interaction where away goals are comparatively more determined by the offenses of both teams, while home goals have higher contributions from the defenders of each team. Clusters of player types and well-performing team compositions could be identified.
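A minimal sketch of the Poisson-regression step on invented covariates (the paper's position-level covariates and ratings are not reproduced), using statsmodels.

```python
# Poisson regression of home goals on illustrative team-level covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_matches = 500
df = pd.DataFrame({
    "home_attack_rating":  rng.normal(70, 5, n_matches),   # invented expert ratings
    "away_defence_rating": rng.normal(70, 5, n_matches),
})
lam = np.exp(0.3 + 0.03 * (df["home_attack_rating"] - 70)
                 - 0.02 * (df["away_defence_rating"] - 70))
df["home_goals"] = rng.poisson(lam)                          # synthetic goal counts

X = sm.add_constant(df[["home_attack_rating", "away_defence_rating"]])
model = sm.GLM(df["home_goals"], X, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])   # one coefficient per covariate, as in the study
```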
|
Biomedical Engineering
M. Melinščak (Zagreb University of Applied Sciences, Zagreb, Croatia) Enhancing Interpretability in Retinal OCT Analysis Using Grad-CAM: A Study on the AROI Dataset 
The increasing use of deep learning in medical imaging, particularly optical coherence tomography (OCT), has transformed retinal disease diagnosis and segmentation. However, the limited interpretability of these models remains a significant barrier to clinical adoption. This study employs Gradient-weighted Class Activation Mapping (Grad-CAM) to enhance the interpretability of U-Net-based architectures for OCT image analysis. Using the Annotated Retinal OCT Image (AROI) dataset, we evaluate U-Net and attention-based U-Net architectures, comparing their performance in segmenting retinal layers and pathological fluids. Grad-CAM is utilized to generate visual explanations that highlight regions in OCT images influencing model predictions. Qualitative analysis reveals that heatmaps from the attention-based U-Net align more closely with clinically relevant features, especially in cases with severe pathological changes. Quantitative evaluation demonstrates improved segmentation performance, with Dice scores confirming the positive impact of attention mechanisms on diagnostically critical regions. By integrating interpretability into segmentation workflows, this study addresses the gap between AI models and their practical application in ophthalmology, fostering more transparent and trustworthy retinal disease diagnostics.
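For readers new to the technique, a minimal Grad-CAM sketch using forward and backward hooks is shown below; for brevity it uses a torchvision ResNet rather than the U-Net variants evaluated in the study, and the input is a random stand-in for an OCT B-scan.

```python
# Minimal Grad-CAM: gradient-weighted average of the last convolutional feature maps.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
target_layer = model.layer4            # last convolutional block

feats, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)                        # stand-in for an OCT B-scan
score = model(x)[0].max()                              # score of the predicted class
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)    # global-average-pool the gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))        # weighted sum of activation maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
print(cam.shape)                                        # (1, 1, 224, 224) heatmap
```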
|
A. Ribarić (University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia), T. Domazet-Lošo (Ruder Boskovic Institute, Zagreb, Croatia), M. Domazet-Lošo (University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia) Gene Age Prediction is Robust to False Homology 
Phylostratigraphy is a widely used computational approach for predicting gene age by analyzing protein sequence similarities across evolutionary lineages. In this study, we used the Homo sapiens proteome as a reference to analyze the effect of false homology on gene age prediction. We implemented a workflow using state-of-the-art algorithms for sequence similarity searches, enhanced by dynamic setting of an E-value threshold. This threshold was calculated for every protein individually using the results of similarity searches against a reversed database, which identified proteins prone to false homology. We demonstrated that dynamically adjusted E-values effectively reduced false positive rates compared to a fixed threshold. However, the results indicate that, while certain proteins are more prone to false homology, they constitute a small proportion of the dataset and, in general, current sequence search algorithms are robust against false homology and thus reliable for phylostratigraphy.
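A toy sketch of the reversed-database idea, under the assumption that the smallest E-value obtained against reversed sequences is used as that protein's acceptance threshold; the paper's exact rule may differ.

```python
# Toy per-protein E-value thresholding: reversed-database hits are false by construction,
# so the best (smallest) such E-value bounds what we accept from the real search.
def dynamic_thresholds(reversed_hits, fallback=1e-3):
    """reversed_hits: dict mapping protein id -> list of E-values from the reversed DB."""
    return {pid: min(evals) if evals else fallback for pid, evals in reversed_hits.items()}

def filter_hits(real_hits, thresholds):
    """Keep only real-database hits scoring better than the protein's own threshold."""
    return {pid: [e for e in evals if e < thresholds[pid]]
            for pid, evals in real_hits.items()}

reversed_hits = {"P1": [0.5, 2.0], "P2": []}                    # hypothetical search results
real_hits     = {"P1": [1e-30, 0.8], "P2": [1e-5, 5e-3]}
print(filter_hits(real_hits, dynamic_thresholds(reversed_hits)))
# {'P1': [1e-30], 'P2': [1e-05]}
```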
|
B. Borozan, L. Borozan (School of Applied Mathematics and Informatics, J. J. Strossmayer University of Osijek, Osijek, Croatia), T. Prusina (Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany), J. Maltar, D. Matijević (School of Applied Mathematics and Informatics, J. J. Strossmayer University of Osijek, Osijek, Croatia), S. Canzar (Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany) Accelerating Marker Gene Selection in scRNA-seq
Single-cell RNA sequencing (scRNA-seq) generates vast amounts of data, enabling gene expression profiling at the resolution of individual cells. Accurate classification of cells by their tissue of origin relies on selecting informative marker genes that distinguish cell types while minimizing redundancy. However, identifying an optimal subset of genes presents a challenging combinatorial optimization problem in scRNA-seq analysis. In this paper, we focus on enhancing scGeneFit, a state-of-the-art marker gene selection method. We introduce optimized solvers and nearest-neighbor search techniques to improve both accuracy and computational efficiency. Through systematic benchmarking on real-world datasets, we demonstrate that our modifications yield superior performance compared to the original implementation.
|
F. Fetaji, S. Gievska, S. Kalajdziski (Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje, Macedonia) Bridging Gaps in Ligand Binding Affinity Prediction: Empirical Machine Learning Analyses 
This study explores the gaps and unresolved questions regarding the use of machine learning to predict ligand binding affinity. Current computational methods are computationally intensive and not necessarily transferable across protein–ligand systems. In this review, we highlight how advanced models, such as deep learning and graph-based methods, overcome these weaknesses. We explore how spatial and sequence features are combined to capture the complexity of molecular interactions. The theoretical benefit lies in clarifying how ligand and protein structures jointly influence binding. The practical benefit is accelerated drug discovery, achieved through greater predictive accuracy at lower cost. The work concludes with insights and recommendations on improving the generalizability and performance of drug discovery tasks by utilizing hybrid frameworks that fuse traditional algorithms with neural network architectures.
|
F. Fetaji, S. Gievska, S. Kalajdziski (Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje, Macedonia) Graph and Convolutional Methods for Advancing Ligand Binding Affinity Modeling in Drug Research 
Drug research depends on ligand binding affinity modeling. Existing computational methods are critiqued for limited generalization, interpretability and training efficiency, and this work addresses those issues. It studies novel ways to combine the structure of graph neural networks with convolutional models. The work introduces a new data integration scheme that captures structural and sequence features to address diverse data patterns. The theoretical contribution strengthens insight into the interplay of topology and spatial representation in advanced machine learning. The benefits are improved predictive accuracy, reduced computational cost and broader applicability in drug development workflows. This study finds that unified graph and convolutional strategies close previously identified performance gaps, providing a pathway for next-stage research and applications in ligand binding affinity modeling.
|
Biomedical Engineering
O. Czimbalmos, G. Kőrösi, G. Kekesi, G. Horváth (University of Szeged, Szeged, Hungary) A Data Science Perspective for the Effective Selective Breeding Process to Produce Striking Behavioral Differences between Two Rat Substrains during Several Generations 
Development of reliable rodent models for neuropsychiatric diseases (e.g. depression, Alzheimer's disease, Parkinson's disease or schizophrenia) is very difficult due to the high complexity of the central nervous system. This study analyzed data from more than 1,100 male and female rats spanning 14 generations, originating from Long Evans rats, to develop a special substrain (Lisket) with a schizophrenia-like phenotype. Data analysis focused on the development of differences between the two groups in several behavioral measures. Key methodologies included data normalization, t-tests, Principal Component Analysis (PCA) and Agglomerative Clustering to uncover group-specific behavioral patterns. These analyses revealed that statistically significant changes (e.g. p < 0.001) developed gradually in the new substrain in both males and females. The findings highlight the utility of combining traditional behavioral measurement techniques with modern data science tools to explore the development of complex behavioral alterations over several generations in the newly developed substrain. The applied data analytic method might be a reliable technique for selecting animals with appropriate characteristics to produce new generations with features that better match the planned model. This research may contribute to the development of new models of different psychiatric conditions.
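A small sketch of the normalization → PCA → agglomerative clustering pipeline on synthetic behavioural scores; group sizes, measures and effect sizes are invented.

```python
# Normalize, reduce with PCA, cluster, then check how well clusters track the substrains.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, size=(60, 12))        # 12 behavioural measures (assumed)
lisket  = rng.normal(0.8, 1.0, size=(60, 12))        # shifted substrain, synthetic
X = np.vstack([control, lisket])

X_scaled = StandardScaler().fit_transform(X)          # data normalization
X_pca = PCA(n_components=2).fit_transform(X_scaled)   # dimensionality reduction
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X_pca)

true = np.array([0] * 60 + [1] * 60)
agreement = max((labels == true).mean(), (labels != true).mean())
print(f"cluster/substrain agreement: {agreement:.2f}")
```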
|
L. Pušlar (Jožef Stefan Institute, Ljubljana, Slovenia), U. Putar, J. Novak, G. Kalčíková (Faculty of Chemistry and Chemical Technology, University of Ljubljana, Ljubljana, Slovenia), F. Strniša, M. Depolli (Jožef Stefan Institute, Ljubljana, Slovenia) Non-Destructive Phenotyping of Duckweed using Segment Anything Model 
Plant phenotyping using image analysis is gaining attention due to its potential to enhance agricultural research and productivity. Among the various species, Lemna minor, a type of duckweed, shows promise for applications such as livestock feed, biofuel production, wastewater nutrient recovery, and ecotoxicity testing. Given these benefits, accessible and efficient phenotyping methods are essential for evaluating its health and growth.
This study presents a non-destructive, cost-effective, and automated phenotyping method for Lemna minor. Images captured using standard smartphone cameras are processed through an advanced analysis pipeline. A central component is the Segment Anything Model (SAM), a segmentation model trained on diverse datasets, making it adaptable to various plant species. By combining SAM with color thresholding, we segmented leaves from the background and extracted features like leaf count and RGB values. These features were analyzed for their correlation with chlorophyll content, a key indicator of plant health.
The results demonstrate strong correlations between RGB values and chlorophyll content and provide informative leaf count estimations. This approach is generalizable to other plant species and offers a scalable alternative to traditional destructive phenotyping methods, paving the way for broader applications in plant science.
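A hedged sketch of the classical half of the pipeline, colour thresholding plus connected-component leaf counting and mean RGB extraction, on a hypothetical image file; the SAM segmentation step and the study's actual thresholds are omitted.

```python
# Green-colour thresholding, leaf counting and mean RGB on a hypothetical smartphone image.
import cv2
import numpy as np

img = cv2.imread("duckweed.jpg")                      # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

green = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))   # assumed green range in HSV
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(green)
leaves = [i for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] > 50]  # drop specks

mask = np.isin(labels, leaves)
mean_bgr = img[mask].mean(axis=0)                     # per-channel means over leaf pixels
print("leaf count:", len(leaves))
print("mean RGB  :", mean_bgr[::-1])                  # BGR -> RGB
```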
|
M. Kapo, E. Buza, A. Akagic (University of Sarajevo, Faculty of Electrical Engineering, Sarajevo, Bosnia and Herzegovina) Super-resolution of DermaMNIST Images Using Deep Learning Models 
The use of artificial intelligence and deep neural networks is becoming increasingly important in the field of medical diagnostics. Medical images often suffer from low resolution and unclear details, which, if misinterpreted, can lead to errors in diagnosis. Super-resolution algorithms can assist not only in rescaling the images, but also in enhancing the quality of the image itself and the visibility of fine details, textures, and contours. In this paper, a comparison of three super-resolution models was made: Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR), Efficient Sub-Pixel CNN (ESPCN), and Super-Resolution Convolutional Neural Network (SRCNN). The comparison of these three models was made to improve the quality and scaling of dermatoscopic images. The DermaMNIST dataset was used and three scaling cases were considered: from 32×32 to 64×64, from 64×64 to 128×128, and from 32×32 to 128×128. The results confirm that all tested models achieve strong, state-of-the-art performance, with all of them exceeding a PSNR of 30 dB and an SSIM of 75%. It is particularly noteworthy that two models, in the case of 2× scaling, achieved PSNR values above 40 dB and SSIM above 95%, which further confirms their potential for application in medical image processing.
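For reference, the two reported metrics can be computed with scikit-image as in the sketch below; the images here are synthetic placeholders, not DermaMNIST samples.

```python
# PSNR and SSIM between a ground-truth image and a (here, slightly perturbed) reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128, 3)).astype(np.float32)
upscaled = np.clip(ground_truth + rng.normal(0, 0.01, ground_truth.shape), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(ground_truth, upscaled, data_range=1.0)
ssim = structural_similarity(ground_truth, upscaled, channel_axis=-1, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```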
|
J. Busch, C. Barakat (Forschungszentrum Juelich GmbH, Juelich, Germany), T. Pauli (Aachen, Germany), S. Fonck (Aachen, Germany), A. Stollenwerk (Chair of Embedded Software (Computer Science 11), Aachen, Germany), S. Fritsch, M. Riedel (Forschungszentrum Juelich GmbH, Juelich, Germany) Leveraging Vision Transformers with Hyperparameter Optimization for ARDS Classification
Acute respiratory distress syndrome (ARDS) is a serious lung condition associated with a high mortality rate. The classification of this condition poses a challenge in intensive care medicine and diagnostic imaging. Artificial intelligence methods, particularly deep learning, can assist the diagnostic process. Recent advances have demonstrated the potential of vision transformers to improve image analysis through their ability to extract relevant features and complex patterns in images. In this study, a vision transformer is implemented for classifying ARDS in chest X-rays using a two-step transfer learning approach. For this purpose, publicly available databases of X-rays are used, some of which have been annotated by a radiologist for the ARDS use case. Furthermore, in order to uncover the optimal combination of model parameters and streamline the training process, we implement a two-tier hyperparameter optimization using the Ray Tune framework on high-performance computing infrastructure. The retrained vision transformer was able to classify ARDS data with 95% accuracy, outperforming previous approaches employing residual networks. Our results highlight the improvement that can be achieved through a two-step transfer learning approach leveraging vision transformers and taking advantage of powerful supercomputing architecture. Ultimately, our work facilitates timely and accurate classification of ARDS, thereby enabling improved outcomes for patients receiving critical care.
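A hedged sketch of a fine-tuning step on an assumed setup (timm ViT backbone, PyTorch, invented hyperparameters and labels); it is not the paper's training pipeline or its hyperparameter search.

```python
# Transfer learning sketch: pretrained ViT, new 2-class head, train the head only.
import timm
import torch
from torch import nn

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)

for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
for p in model.head.parameters():     # train only the new classification head
    p.requires_grad = True

optimizer = torch.optim.AdamW(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)       # stand-in for a batch of chest X-rays
y = torch.tensor([0, 1, 0, 1])        # 0 = non-ARDS, 1 = ARDS (assumed labels)
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```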
|
A. Stanešić, L. Klaić, D. Cindrić, M. Cifrek (Faculty of Electrical Engineering and Computing, Zagreb, Croatia) Analog Front End for Capacitive Electrodes in Biomedical Applications 
Capacitive non-contact electrodes have emerged as a promising solution for biomedical applications, particularly in electrocardiogram (ECG) monitoring, due to their ability to measure bioelectrical signals without direct skin contact. Despite their advantages, weak signal levels and high susceptibility to noise remain significant challenges, limiting their practical adoption. This article presents a novel analog front end (AFE), carefully designed stage by stage, with a focus on achieving a high signal-to-noise ratio (SNR), common-mode rejection ratio (CMRR), and power supply rejection ratio (PSRR), while maintaining a compact and power-efficient design. Experimental results demonstrate a marked improvement in SNR compared to conventional AFE designs, validated through real-world scenarios and measurements on human subjects. The proposed AFE effectively mitigates environmental interference, particularly power-line noise, ensuring robust ECG signal detection. This work advances the development of high-performance capacitive sensing technologies, enabling more reliable and non-invasive biomedical devices for continuous health monitoring.
|
L. Modrić, M. Žagar (RIT Croatia, Zagreb, Croatia), D. Tolić (RIT Croatia, Dubrovnik, Croatia) Integrating Big Data Analytics and Wearable Technology: A Machine Learning Approach to Heart Disease Prediction 
Due to the real-time monitoring of various physiological and health parameters like heart rate, activity levels, and sleeping patterns, wearable devices such as smartwatches and fitness trackers have become popular among a wide population. Analysis of data acquired from such devices provides valuable insights into the user’s well-being. Additionally, the potential and usage of big data analytics in healthcare systems are extensive, and in-depth analysis of various types of healthcare-related data can be used to enhance healthcare services. This paper aims to explore combining big data analytics with wearable technology by evaluating three machine learning algorithms: Support Vector Machines, Logistic Regression, and Linear Discriminant Analysis. Through data analysis of physiological data from wearable devices and test data for heart disease prediction, the algorithms are compared based on their performance and accuracy in the potential detection of heart disease. Furthermore, this paper focuses on the predictors that can be obtained from wearable device data and aims to find the model with the best accuracy based on the available parameters. The results offer ideas for potential early heart disease detection and on-time interventions, while providing insights that could inspire individuals to use wearable devices and take preventive measures, resulting in a healthier lifestyle.
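A minimal sketch of the three-model comparison with scikit-learn on synthetic data; the wearable features, preprocessing and tuning of the actual study are not reproduced.

```python
# Compare SVM, Logistic Regression and LDA with 5-fold cross-validation on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, random_state=0)

models = {
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, clf in models.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    print(f"{name:20s} accuracy: {acc:.3f}")
```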
|
I. Tomasic (Malardalen University, Vasteras, Sweden), N. Tomasic (Karolinska University Hospital, Stockholm, Sweden) Optimized Synthesis of 12-Lead ECG from Bipolar Torso Leads
Synthesizing a 12-lead electrocardiogram (ECG) from non-standard lead systems is increasingly relevant due to the widespread use of wearable ECG devices. This study investigates the synthesis of a 12-lead ECG using three bipolar leads derived from a 35-electrode multi-lead ECG (MECG) system, with a focus on optimizing electrode placement and transformation accuracy.
A dataset comprising recordings from 20 healthy volunteers and 27 cardiac surgery patients was analyzed. The best three-lead combinations were identified using an exhaustive search approach, maximizing the minimum correlation coefficient between synthesized and target leads. Two synthesis approaches were evaluated: a universal approach, using a single transformation matrix for all subjects, and a combined approach, which incorporates personalized transformation matrices.
Results indicate that only a small subset of lead combinations yield high-quality ECG synthesis, but sufficient flexibility exists for practical applications. The combined approach significantly outperforms the universal method, achieving higher correlation coefficients, improved ST-segment accuracy, and better myocardial ischemia detection performance. These findings demonstrate the feasibility of optimizing lead selection for wearable ECG devices, enhancing diagnostic reliability while maintaining ergonomic and practical considerations.
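A sketch of the core least-squares step: fitting a transformation matrix from three source leads to twelve target leads on synthetic signals and reporting the minimum per-lead correlation, the criterion mentioned above. Shapes and data are placeholders.

```python
# Fit a 3 -> 12 lead transformation matrix by least squares and score per-lead correlations.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 5000
source = rng.normal(size=(n_samples, 3))                  # three bipolar torso leads
T_true = rng.normal(size=(3, 12))
target = source @ T_true + 0.01 * rng.normal(size=(n_samples, 12))   # measured 12-lead ECG

T, *_ = np.linalg.lstsq(source, target, rcond=None)       # (personalized) transformation matrix
synthesized = source @ T

corr = [np.corrcoef(synthesized[:, k], target[:, k])[0, 1] for k in range(12)]
print("minimum per-lead correlation:", min(corr))          # lead-selection criterion
```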
|
K. Ćaleta, A. Šarčević, J. Božek, M. Horvat (Faculty of Electrical Engineering and Computing, Zagreb, Croatia), I. Zmijanović (General Hospital Šibenik, Šibenik, Croatia) Preliminary Results on Automated Binary Classification of Congenital Heart Defects in Fetal Ultrasound Images Using Deep Learning Algorithm 
Congenital heart defects (CHDs) are the most prevalent congenital anomalies, affecting approximately 0.8% of live births and representing the leading cause of neonatal mortality. More than 300 children are born annually with CHDs in Croatia, most undiagnosed before birth. Early fetal detection of CHDs using ultrasound significantly improves survival rates and outcomes through timely medical intervention. In this study, we explore the potential of artificial intelligence to improve fetal diagnostics of CHDs. The goal of this research is to apply a deep learning-based method to classify fetal ultrasound images as healthy or indicative of CHDs. The dataset used in the experiment included N = 86 ultrasound images, comprising 57 images of healthy hearts (from 23 fetuses) and 29 images of CHDs (from 8 fetuses). These images were obtained from routine prenatal gynecological examinations conducted in a general hospital in Croatia between 18 and 22 weeks of pregnancy. Data normalization and augmentation were performed to standardize the dataset and enhance its diversity. The ResNet-18 classifier was trained and validated using five-fold cross-validation, ensuring that images from the same fetus were not included in both training and validation sets to prevent data leakage. Preliminary results indicate that the proposed deep learning model achieves an accuracy of 74% in distinguishing healthy fetal hearts from those with CHDs. Future work will focus on expanding the dataset, exploring alternative deep learning algorithms and techniques, and integrating post-hoc explainability methods to enhance the interpretability and clinical applicability of predictions for fetal CHDs.
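A sketch of the leakage-safe split described above, using scikit-learn's GroupKFold so that images from the same fetus never appear in both training and validation folds; the arrays and fetus assignments are placeholders.

```python
# Group-aware cross-validation: all images of one fetus stay in the same fold.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_images = 86
X = np.zeros((n_images, 224, 224))                        # placeholder ultrasound images
y = np.array([0] * 57 + [1] * 29)                         # 0 = healthy, 1 = CHD
fetus_id = np.concatenate([rng.integers(0, 23, 57),       # 23 healthy fetuses (assumed mapping)
                           23 + rng.integers(0, 8, 29)])  # 8 CHD fetuses

for fold, (train_idx, val_idx) in enumerate(GroupKFold(n_splits=5).split(X, y, groups=fetus_id)):
    shared = set(fetus_id[train_idx]) & set(fetus_id[val_idx])
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val images, shared fetuses: {shared}")
```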
|
S. Tudjarski, M. Gusev, A. Madevska Bogdanova (Ss. Cyril and Methodius University in Skopje, Faculty of Computer Science and Engineering, Skopje, Macedonia), A. Sankovski (Polytechnic University of Madrid, Madrid, Spain) Improving the AFIB ML Detector 
Cardiovascular disease is the leading cause of death, accounting for one-third of all cases; Atrial Fibrillation is a high-risk condition closely related to heart failure. Timely and accurate detection is of great value in preventing mortality risks, since almost 5% of the adult population is diagnosed with Atrial Fibrillation.
In this paper, we focus on feature engineering to detect Atrial Fibrillation by determining whether the heart rhythm has irregularities without an easily visible pattern. We experiment with a broad spectrum of features derived from the duration of heartbeat-to-heartbeat intervals in the electrocardiograms, including position-based features, fluctuation indices, standard deviations, mean values, Shannon entropy, and statistical measures, such as the length of compressed time series data.
The research question is to detect the most influential features that result in the best-performing model. The model development process uses standard benchmark annotated electrocardiogram datasets. Our approach is to evaluate the model on a completely different dataset from the one on which it was trained.
The study concluded that positional features increase the model performance with variable efficiency, depending on training and test data.
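A small sketch of a few of the listed interval features (mean, standard deviation, Shannon entropy, compressed length as a complexity proxy) on synthetic RR intervals; the full feature set and datasets of the study are not reproduced.

```python
# A few RR-interval features on synthetic data.
import zlib
import numpy as np

rng = np.random.default_rng(0)
rr = rng.normal(0.8, 0.15, 300)                      # RR intervals in seconds (synthetic)

def shannon_entropy(x, bins=16):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

features = {
    "mean": rr.mean(),
    "std": rr.std(),
    "shannon_entropy": shannon_entropy(rr),
    "compressed_length": len(zlib.compress((rr * 1000).astype(np.int16).tobytes())),
}
print(features)
```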
|
E. Dupljak, E. Domazet (International Balkan University, Skopje, Macedonia) Hybrid Deep Learning Architectures for Multi-Modal AI-Driven Breast Cancer Detection: A Comparative Analysis of Computational Efficiency and Diagnostic Accuracy 
Disease diagnosis is a complex challenge in medicine, with artificial intelligence (AI), machine learning, and deep learning offering innovative solutions to enhance evidence-based decision-making. This paper focuses on AI-driven diagnostic and treatment strategies for breast cancer, emphasizing the integration of multimodal medical data such as imaging, text, genetic data, and physiological signals. These methods improve diagnosis and clinical outcomes by utilizing public datasets, feature engineering, and advanced classification models.
Traditional breast cancer diagnostics often rely on unimodal approaches, but multimodal techniques now incorporate histopathology images, genomics, clinical notes, and patient history, significantly enhancing diagnostic accuracy. This review explores fusion methodologies (early, intermediate, late), advanced architectures like attention-based models and graph neural networks, and applications such as Visual Question Answering (VQA) and semantic segmentation. Explainable AI (XAI) tools, including Grad-CAM, SHAP, and LIME, improve interpretability, foster clinician trust, and encourage patient engagement.
Hybrid deep learning models combining mammography, ultrasound, and histopathology minimize false positives and boost detection accuracy. Feature fusion and optimized transfer learning make these systems viable in resource-constrained settings. While challenges like limited datasets and interpretability persist, hybrid deep learning systems promise significant advancements, offering computational efficiency and improved patient outcomes.
|
Basic information:
Chairs:
Karolj Skala (Croatia), Aleksandra Rashkovska Koceva (Slovenia), Davor Davidović (Croatia)
Registration / Fees:

Price in EUR (early bird: up to 23 May 2025; regular: from 24 May 2025)
- Members of MIPRO and IEEE: 270 (early bird) / 297 (regular)
- Students (undergraduate and graduate), primary and secondary school teachers: 150 (early bird) / 165 (regular)
- Others: 300 (early bird) / 330 (regular)
The student discount doesn't apply to PhD students.
NOTE FOR AUTHORS: To have your paper published, at least one registration fee must be paid for each paper. Authors of two or more papers are entitled to a 10% discount.
Contact:
Karolj Skala
Rudjer Boskovic Institute
Center for Informatics and Computing
Bijenicka 54
HR-10000 Zagreb, Croatia
E-mail: skala@irb.hr
Accepted papers will be published in the ISSN registered conference proceedings. Papers presented at the conference will be submitted for inclusion in the IEEE Xplore Digital Library.

Location:
Opatija is the leading seaside resort of the Eastern Adriatic and one of the most famous tourist destinations on the Mediterranean. With its aristocratic architecture and style, Opatija has been attracting artists, kings, politicians, scientists, sportsmen, as well as business people, bankers and managers for more than 180 years.
The tourist offer in Opatija includes a vast number of hotels, excellent restaurants, entertainment venues, art festivals, superb modern and classical music concerts, beaches and swimming pools – this city satisfies all wishes and demands.
Opatija, the Queen of the Adriatic, is also one of the most prominent congress cities in the Mediterranean, particularly important for its ICT conventions. One of these is MIPRO, which has been held in Opatija since 1979 and attracts more than a thousand participants from over forty countries. These conventions promote Opatija as one of the most desirable technological, business, educational and scientific centers in South-eastern Europe and the European Union in general.
For more details, please visit www.opatija.hr and visitopatija.com.