
Hybrid Event

Event program
Thursday, 5/26/2022 9:00 AM - 1:00 PM,
Camelia 2, Grand hotel Adriatic, Opatija
DS - Data Science 
09:00 AM-09:40 AM    Invited Lecture 

Veljko Milutinovic (Indiana University in Bloomington, United States)
Supercomputing on a Chip for Machine Learning

 
09:40 AM-10:00 AM    Break 
10:00 AM-11:20 AM    Papers 
1.A. Bavec, M. Depolli (Jožef Stefan Institute, Ljubljana, Slovenia)
Alpine Glacier Simulation with Linear Climate Models 
We present a method of modeling climate forcing that can be used as input in simulations of alpine glaciers. We use the Parallel Ice Sheet Model (PISM) framework to simulate the glaciers, which shapes the form in which we implement our method. For simulation, we consider two areas centered on the mountain plateaus of Snežnik and Trnovski gozd in southern and south-western Slovenia, both of which bear evidence of glaciation in the last ice age. Since the glacial area to be simulated is small and the local weather is shaped by the local mountains, the readily available climate models do not have the required resolution to accurately model glacier development. We propose models for temperature and precipitation rooted in local data, in the form of weather station measurements and local topography. We experimentally test the proposed method by using it in PISM simulations and observing how closely the resulting glacier extent matches the glacial extent estimated in the literature. We also compare the proposed method against other recently published methods used in similar cases. We find that our method greatly improves the simulation results on one of the study areas and could be adapted for use in other similar cases as well.
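As a rough illustration of the kind of linear forcing described (not the paper's fitted models), temperature can be extrapolated from a reference weather station with a constant lapse rate, and precipitation with a linear elevation gradient; the numeric rates below are generic placeholder values, not values from the study.

```python
def temperature_at(elev_m, station_temp_c, station_elev_m, lapse_c_per_m=0.0065):
    """Linear temperature model: cooler with elevation at a fixed lapse rate
    (~6.5 degrees C per km is a common default, used here as an assumption)."""
    return station_temp_c - lapse_c_per_m * (elev_m - station_elev_m)

def precipitation_at(elev_m, station_precip_mm, station_elev_m, grad_per_m=0.0005):
    """Linear precipitation model: wetter with elevation at a fixed relative gradient."""
    return station_precip_mm * (1.0 + grad_per_m * (elev_m - station_elev_m))

# Example: a station at 500 m reading 10 degrees C and 1200 mm/yr, queried at 1500 m
t = temperature_at(1500, 10.0, 500)      # 10 - 0.0065 * 1000 = 3.5 degrees C
p = precipitation_at(1500, 1200.0, 500)  # 1200 * 1.5 = 1800 mm/yr
```

In a PISM-style workflow such functions would be evaluated on the simulation grid to produce the climate forcing fields.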
2.B. Rojc, M. Depolli (Jožef Stefan Institute, Ljubljana, Slovenia)
Parallel Spatial Indexing for Domain Discretization 
To perform domain discretization efficiently, a data structure that allows for efficient point storage and spatial indexing is required. Such a structure must offer efficient dynamic element insertions and point neighbor lookups. In order to fully utilize the modern CPUs present in state-of-the-art computers, the structure should also allow for multiple threads of execution to simultaneously access and modify the structure in a safe and predictable manner. While many structures exist which allow for simultaneous lookups, most do not allow for simultaneous insertion. In this paper we present Polyp, an indexing data structure which allows for simultaneous multithreaded lookups and insertions of points of arbitrary dimensionality, implemented in C++. We present the approach to thread safety and correctness. We also compare the structure to an existing structure based on nanoflann.
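The abstract does not detail Polyp's internals, so the following is only a generic sketch of the problem it addresses: a uniform-grid point index whose insertions and radius lookups are serialized with a single coarse lock (a production structure like Polyp would use much finer-grained synchronization). All names here are hypothetical.

```python
import math
import threading
from itertools import product

class ConcurrentGridIndex:
    """Toy thread-safe spatial index: points hashed into uniform grid cells."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = {}
        self.lock = threading.Lock()  # coarse-grained; a real index would do better

    def _key(self, p):
        return tuple(int(math.floor(c / self.cell_size)) for c in p)

    def insert(self, p):
        with self.lock:
            self.cells.setdefault(self._key(p), []).append(p)

    def neighbours(self, p, radius):
        """All stored points within `radius` of p (checks overlapping cells)."""
        r_cells = int(math.ceil(radius / self.cell_size))
        key = self._key(p)
        found = []
        with self.lock:
            for offset in product(range(-r_cells, r_cells + 1), repeat=len(p)):
                cell = tuple(k + o for k, o in zip(key, offset))
                for q in self.cells.get(cell, []):
                    if math.dist(p, q) <= radius:
                        found.append(q)
        return found
```

Insertions and lookups from multiple threads are safe here only because every access takes the same lock; the interesting engineering in a structure like Polyp lies in avoiding exactly that serialization.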
3.F. Strniša (Jožef Stefan Institute, Ljubljana, Slovenia), M. Jančič (Jožef Stefan Institute, Jožef Stefan International Postgraduate School, Ljubljana, Slovenia), G. Kosec (Jožef Stefan Institute, Ljubljana, Slovenia)
A Meshless Solution of a Small-Strain Plasticity Problem 
When the deformations of a solid body are sufficiently large, parts of the body undergo a permanent deformation commonly referred to as plastic deformation. Several plasticity models describing this phenomenon have been proposed, e.g. von Mises, Tresca, etc. Traditionally, the finite element method (FEM) is the numerical tool of choice for engineers who are solving such problems. In this work, however, we present the implementation of the von Mises plasticity model with non-linear isotropic hardening in our in-house developed MEDUSA library, utilizing a variant of meshless methods, namely the radial basis function-generated finite differences (RBF-FD). We define a simple plane stress case, where a 2D block is fixed at one edge, and a tensile force, which causes the block to deform, is applied at the opposite edge. We show that the results are in good agreement with the numerical solution obtained by Abaqus FEA, a commercial FEM solver.
4.V. Cvrtila, M. Rot (Institut "Jožef Stefan", Ljubljana, Slovenia)
Reconstruction of Surfaces Given by Point Clouds 
One of the most general ways to represent a three-dimensional domain is with a dense point cloud that describes the boundary. This representation is convenient as it is both the output of a 3D scan and relatively simple to obtain from alternative surface description methods. However, a point cloud on its own is often insufficient for further calculations. We would like to use said point cloud to create a more convenient description of the true shape, which would allow us to use specialised discretization algorithms, and a way to determine its interior. In this paper we propose an algorithm that fits parametrized surfaces to discrete neighbourhoods that cover the point cloud and uses a partition of unity to ensure that the surfaces match along their edges. We then use this local parametrization to construct the characteristic function of the domain, i.e. a function which can determine if a given point is inside the domain or not.
11:20 AM-11:40 AM    Break 
11:40 AM-1:00 PM    Papers 
1.M. Rot, A. Rashkovska (Jožef Stefan Institute, Ljubljana, Slovenia)
Meshless Method Stencil Evaluation with Machine Learning 
Meshless methods are an active and modern branch of numerical analysis with many intriguing benefits. One of the main open research questions related to local meshless methods is how to select the best possible stencil - a collection of neighbouring nodes - on which to base the calculation. In this paper, we describe the procedure for generating a labelled stencil dataset and use a variation of PointNet - a deep learning network based on point clouds - to create a classifier for stencil quality. We exploit features of PointNet to implement a model that can classify differently sized stencils, and compare it against models dedicated to a single stencil size. The model is particularly good at detecting the best and the worst stencils, with a respectable area under the curve (AUC) metric of around 0.90. There is much potential for further improvement and direct application in the meshless domain.
2.M. Jančič, G. Kosec (Jožef Stefan Institute, Ljubljana, Slovenia)
Stability Analysis of RBF-FD and WLS Based Local Strong Form Meshless Methods on Scattered Nodes 
The popularity of local meshless methods in the field of numerical simulations has increased greatly in recent years. This is mainly due to the fact that they can operate on scattered nodes and that they allow direct control over the approximation order and basis functions. In this paper we analyse two popular variants of local strong form meshless methods, namely the radial basis function-generated finite differences (RBF-FD) using polyharmonic splines (PHS) augmented with monomials, and the weighted least squares (WLS) approach using only monomials - a method also known as diffuse approximation method. Our analysis focuses on the accuracy and stability of the numerical solution computed on scattered nodes in a one-, two- and three-dimensional domain. We show that the RBF-FD method exhibits a more stable behaviour compared to WLS, but at the cost of higher computational complexity, while the accuracy of both variants is of the same order of magnitude.
3.M. Riedel (University of Iceland, Reykjavik, Iceland), M. Book (Juelich Supercomputing Centre, Juelich, Germany), H. Neukirchen (University of Iceland, Reykjavik, Iceland), G. Cavallaro, A. Lintermann (Juelich Supercomputing Centre, Juelich, Germany)
Practice and Experience Using High Performance Computing and Quantum Computing to Speed-up Data Science Methods in Scientific Applications 
High-Performance Computing (HPC) can quickly process scientific data and perform complex calculations at extremely high speeds. The past decade showed a vast increase in HPC use across scientific communities, especially in using parallel data science methods to speed up scientific applications. Many of those applications leverage HPC to scale up machine learning and deep learning algorithms that inherently solve complex optimization problems (i.e., the learning process). More recently, the field of quantum machine learning evolved as another HPC-related computing approach to speed up data science methods. This paper addresses both of these approaches, traditional HPC and the new quantum machine learning; for the latter, we focus specifically on our experiences using the D-Wave Quantum Annealer system at the Juelich Supercomputing Centre (JSC). Quantum annealing is particularly effective for solving optimization problems like those that are inherent in machine learning methods (e.g., support vector machines). We complement and contrast those experiences with our lessons learned from using a wide range of parallel data science methods with a high number of Graphics Processing Units (GPUs) on the infrastructure of JSC. That includes modular supercomputers such as the HPC systems JURECA and JUWELS, the latter the fastest European supercomputer at the time of writing. Deep insights are described, including application experiences from the Helmholtz Association Artificial Intelligence (AI) initiative (i.e., Helmholtz AI) and significant application areas of JSC (e.g., remote sensing, bio-medical applications). Technical challenges and solutions are discussed, such as using interactive access via JupyterLab on typical batch-oriented HPC systems or enabling distributed training tools for deep learning (e.g., Horovod, DeepSpeed, PyTorch) on our HPC systems.
We complement the technical findings in the paper with selected details on the new European HPC Joint Undertaking (EuroHPC JU) strategic directions and a broader view of European data science applications that take advantage of HPC.
4.A. Naumoski, G. Mirceva, K. Mitreski (Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University in Skopje, Skopje, Macedonia)
Implication of Hamacher T-norm on Two Fuzzy-Rough Rule Induction Algorithms 
Rule induction algorithms yield models in If-Then form that are very easy for humans to interpret. To further improve this class of algorithms, in this paper we focus on the QuickRules and Vaguely Quantified Rough fuzzy-rough rule induction algorithms, introducing the Hamacher T-norm. T-norms, as well as fuzzy tolerance relationship metrics, implicators and vague quantifiers, play an important role in model accuracy because they are used to calculate the lower and upper approximations. For this purpose, in our models' evaluation, we use five fuzzy tolerance relationship metrics to evaluate the performance of the models obtained with the new Hamacher T-norm. The AUC ROC metric was used to evaluate the performance and, later, the statistical significance. The results revealed that the fuzzy tolerance relationship metrics have greater influence on the models' performance than the k-parameter of the Hamacher T-norm, and this was also compared to the vaguely quantified algorithm that uses vague quantifiers. For future work, we plan to investigate further the influence of other T-norms and fuzzy tolerance relationship metrics on this type of algorithm.
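For reference, the Hamacher T-norm family has a standard closed form; a minimal sketch follows (the parameter is written gamma here, corresponding to the k-parameter mentioned in the abstract):

```python
def hamacher_tnorm(a, b, gamma):
    """Hamacher T-norm: T(a, b) = ab / (gamma + (1 - gamma)(a + b - ab)).

    Defined for gamma >= 0 and a, b in [0, 1]; gamma = 1 recovers the
    ordinary product T-norm, gamma = 2 the Einstein product.
    """
    if a == 0.0 and b == 0.0:
        return 0.0  # avoid 0/0 in the gamma == 0 case
    return (a * b) / (gamma + (1.0 - gamma) * (a + b - a * b))
```

In a fuzzy-rough setting this T-norm is applied when combining membership degrees while computing the lower and upper approximations.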
Thursday, 5/26/2022 3:00 PM - 7:00 PM,
Camelia 2, Grand hotel Adriatic, Opatija
DS - Data Science 
3:00 PM-4:00 PM    Papers 
1.N. Mijić, D. Davidović (Ruđer Bošković Institute, Zagreb, Croatia)
Batched Matrix Operations on Distributed GPUs with Application in Theoretical Physics 
One of the most important and commonly used operations in many linear algebra functions is matrix-matrix multiplication (GEMM), which is also a key component in obtaining high performance of many scientific codes. It is a computationally intensive function requiring O(n³) operations, and its high computational intensity makes it well-suited to be significantly accelerated with GPUs. Today, many research problems require solving a very large number of relatively small GEMM operations that cannot utilise the entire GPU. To overcome this bottleneck, special functions have been developed that pack several GEMM operations into one and then compute them simultaneously on a GPU, which is called a batch operation. In this research work, we have proposed a different approach based on linking multiple GEMM operations to Message Passing Interface (MPI) processes and then binding multiple MPI processes to a single GPU. To increase GPU utilisation, more MPI processes (i.e. GEMM operations) are added. We implement and test this approach in the field of theoretical physics to compute entanglement properties through simulated annealing Monte Carlo simulation of quantum spin chains. For the specific use case, we were able to simulate a much larger spin system and achieve a speedup of up to 35× compared to the parallel CPU-only version.
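The batching idea itself is easy to illustrate in pure Python: many small GEMMs are grouped into a single "batched" call, which on a GPU becomes one kernel launch that keeps the device busy. This toy only shows the grouping; the paper's actual approach of binding multiple MPI processes to one GPU is not reproduced here.

```python
def gemm(A, B):
    """Naive O(n^3) matrix-matrix product for small dense matrices (lists of rows)."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def batched_gemm(As, Bs):
    """One 'batched' call standing in for many small GEMMs.

    On a GPU the whole batch would be dispatched together (e.g. a batched
    GEMM kernel), since each individual product is too small to fill the device.
    """
    return [gemm(A, B) for A, B in zip(As, Bs)]
```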
2.M. Turalija (Group for Applications and Services on Exascale Research Infrastructure, Department of Informatics, Rijeka, Croatia), M. Petrović (Laboratory for Semantic Technologies (SemTech), Department of Informatics, University of Rijeka, Rijeka, Croatia), B. Kovačić (Laboratory for Information Systems Development (InfoSys), Department of Informatics, University of Rijeka, Rijeka, Croatia)
Towards General-Purpose Long-Timescale Molecular Dynamics Simulation on Exascale Supercomputers with Data Processing Units 
Molecular dynamics (MD) simulation provides the atomic-level characterization of biomolecular systems and their transitions, such as conformational changes in proteins. The computational demands of such simulations and limits of parallelization techniques have prevented simulations of real-world systems from reaching the microsecond timescales, which are relevant for real-world applications. The notable exceptions are the supercomputers specifically designed for MD simulations. An example of such supercomputers is the Anton supercomputer, nowadays in its third iteration, which uses a substantial number of application-specific integrated circuits (ASICs) for MD simulation and is not generally available. Recent advances in algorithms, software, and hardware towards exascale supercomputing have made microsecond-timescale simulations of practically relevant biomolecular systems reachable within days. Data processing units (DPUs) are already being used in data centers for the in-flight processing of network packets (e.g. encryption, decryption, and intrusion detection) and are expected to be used in future exascale supercomputers in some form. The usage of DPUs in the supercomputers unlocks the potential to accelerate MD simulations that were previously available only in networking ASICs in supercomputers such as Anton. This paper proposes the usage of DPUs for MD simulation acceleration in an innovative way inspired by the Anton supercomputer.
3.M. Petrović, A. Hrelja, A. Meštrović (Faculty of Informatics and Digital Technologies, Rijeka, Croatia)
Prediction of COVID-19 Tweeting: Classification Based on Graph Neural Networks 
In this paper, the application of graph neural networks (GNNs) to the node classification task is presented. GNNs have proven successful in different classification tasks where data and the relationships between them are defined using graphs. The aim of this research is to develop a classifier that can identify two possible classes of Twitter nodes: COVID and non-COVID. COVID nodes refer to nodes (users) that frequently post tweets related to COVID-19, and we define that property according to the number of posted tweets. For that purpose, in the first step, we implement a pipeline that enables the automatic continuous collection of data from Twitter and network construction. In the second step, we train a GNN classifier and evaluate it in terms of precision, recall and F1 measure.
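The core mechanism behind such a GNN classifier is message passing over the graph; below is a minimal sketch of one GCN-style mean-aggregation step on scalar node features (an illustration of the idea only, not the authors' architecture):

```python
def mean_aggregate(adj, features):
    """One message-passing step: each node averages its own and its
    neighbours' feature vectors (the core idea behind GCN-style layers).

    adj: adjacency list, adj[i] = list of neighbour indices of node i
    features: list of per-node feature vectors
    """
    out = []
    for node, neigh in enumerate(adj):
        group = [features[node]] + [features[n] for n in neigh]
        out.append([sum(col) / len(group) for col in zip(*group)])
    return out
```

A real GNN stacks several such layers, each followed by a learned linear transform and nonlinearity, and ends with a per-node classification head.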
4:00 PM-4:10 PM    Break 
4:10 PM-5:10 PM    Papers 
1.T. Pavlov, G. Mirceva (Faculty of Computer Science and Engineering, Skopje, Macedonia)
COVID-19 Fake News Detection by Using BERT and RoBERTa Models 
We live in a world where COVID-19 news is an everyday occurrence with which we interact. We receive that information, consciously or unconsciously, without fact-checking it. In this regard, it has become an enormous challenge to keep only true COVID-19 news relevant. People are exposed to these stories on a daily basis, and not all of them are true and fact-checked reports on the COVID-19 pandemic, which was the primary motivation for our research. We accepted the challenge posed by the fact that fake news is extremely common and that some people take such news at face value. Knowing the true power of the most recent NLP achievements, in this research we focus on detecting fake news about COVID-19. Our approach uses pre-trained BERT and RoBERTa models, which we fine-tune on real and fake news about the COVID-19 pandemic. By applying pre-trained BERT and RoBERTa models to tweet data, we explore their capabilities and compare them to previous research on fine-tuned BERT models for this task, achieving better accuracy, recall and F1 score.
2.A. Trpin (Faculty of information studies in Novo mesto, Novo mesto, Slovenia), B. Boshkoska (Jožef Stefan Institute, Faculty of information studies in Novo mesto, Ljubljana, Novo mesto, Slovenia)
Face Recognition with a Hyperbolic Metric Classification Model 
Facial recognition systems are increasingly being used in smartphones as biometric security instead of passwords, or in airports as automated electronic passport control. They are also emerging in other technologies, for example in robotics. This creates large collections of photos that cannot be managed manually. Data mining tools and machine learning methods can be used to process these datasets and use them for prediction and classification. In such algorithms, choosing the most suitable distance metric to define similarities among data plays a crucial role. This paper investigates the use of the Poincaré metric, which comes from hyperbolic geometry, in the well-known k-Nearest Neighbours classification algorithm. We applied this method to a database of face images. Our results indicate that the Poincaré metric is helpful with Large Margin Nearest Neighbour (LMNN) learning tested on an image dataset. We found that for small values of k, up to five, the algorithm using the Poincaré metric with the Edge filter gave the best results.
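For reference, the Poincaré distance on the open unit ball, which replaces the Euclidean distance inside k-NN in this kind of approach, has a standard closed form:

```python
import math

def poincare_distance(u, v):
    """Distance in the Poincare ball model (points must have norm < 1):

        d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    """
    nu = sum(x * x for x in u)            # ||u||^2
    nv = sum(x * x for x in v)            # ||v||^2
    duv = sum((a - b) ** 2 for a, b in zip(u, v))  # ||u - v||^2
    return math.acosh(1.0 + 2.0 * duv / ((1.0 - nu) * (1.0 - nv)))
```

To use it with k-NN, feature vectors are first mapped (e.g. scaled) into the unit ball, and the classifier simply ranks training points by this distance instead of the Euclidean one.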
3.K. Ljubičić (Privredna banka Zagreb, Zagreb, Croatia), A. Merćep, Z. Kostanjčar (Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia)
Analysis of Complex Customer Networks: A Real-World Banking Example 
Complex customer networks are an important tool for better understanding customer behaviour and customers' mutual interdependence. Hence, they have proven insightful in numerous customer-centric domains such as recommender systems and customer relationship management. Although previous research in this field is predominantly associated with the telecommunication and e-commerce sectors, similar construction principles can be transferred to the financial sector, e.g. a customer network could be created based on transaction history or domain-specific relations such as loan debtor–guarantor or loan debtor–co-debtor. However, this research path is still underdeveloped. In this paper, we analyse real-world complex customer networks of a Croatian bank. Numerous graph metrics were calculated, and the obtained results show that these networks are non-random. We also show that these networks are scale-free and exhibit the small-world effect, based on graph properties such as the node degree distribution and the average shortest path.
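The two graph properties named above are straightforward to compute on an adjacency-list graph; a minimal sketch (assuming an undirected, connected toy network, nothing like the bank's scale):

```python
from collections import Counter, deque

def degree_distribution(adj):
    """Histogram of node degrees; a heavy-tailed histogram hints at a
    scale-free network."""
    return Counter(len(neigh) for neigh in adj.values())

def average_shortest_path(adj):
    """Mean BFS distance over all connected ordered pairs (small-world check)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # BFS from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs
```

On a small-world network the average shortest path grows only logarithmically with the number of nodes, which is what such a computation would reveal.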
5:10 PM-5:30 PM    Break 
BE - Biomedical Engineering 
5:30 PM-7:00 PM    Papers 
1.E. Carmegren, E. Åstrand, I. Tomasic (Malardalen University, Vasteras, Sweden)
Dependability Evaluation of an Online Pupillometry-based Feedback System for Optimized Training 
'Optimized learning' can be defined as the objective of maximally utilizing the learning outcomes when solving a given problem, e.g., an equation, or memorizing concepts when studying. Due to its complexity and abstract nature, one cannot completely exclude context-related behaviour, such as implicit impressions and state of mind. In other words, 'optimized learning' cannot rely solely on the practical methods or procedures used. Previous research has uncovered a correlation between cognitive load and pupil dilation, eliminating the need for excessive amounts of electroencephalogram (EEG) data. By tracking pupil dilation in real time, the ReCog system classifies the cognitive load via a hardware-programmed neural network (NN) and regulates the difficulty level of a game accordingly, thus keeping the participant in an 'optimal state of learning'. One of the primary objectives is to later reduce the rehabilitation time for patients who suffer from motor function deficiencies. At its current state the system is fully integrated, but possesses no fault-tolerant features to provide a long-term reliable service, an aspect that must be addressed to enable transferability to the medical domain. As a result, this paper proposes a fault-tolerant architecture for the ReCog system that is evaluated using state-of-the-art quantitative methods, in particular Continuous-Time Markov Chains (CTMCs). Additionally, the concept of using the L²(Ω) norm in an inner product space for the reliability function R(t) is introduced, to allow for architecture comparison. The results imply adequacy of the extended architecture, assuming slightly pessimistic failure rates. Further, these concepts could be of significant relevance for other similar medical devices within the field.
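As a minimal illustration of a CTMC reliability function and its L² norm (the paper's architecture models are far richer than this): for a single component failing at a constant rate λ, the two-state chain (up → failed) gives R(t) = exp(-λt), whose L² norm over [0, ∞) is analytically sqrt(1/(2λ)).

```python
import math

def reliability(lam, t):
    """Two-state CTMC (up -> failed at rate lam): R(t) = P(still up at t)."""
    return math.exp(-lam * t)

def l2_norm(lam, horizon=50.0, dt=1e-3):
    """Numerical L2 norm of R over [0, horizon]; for this R(t) the exact
    value over [0, infinity) is sqrt(1 / (2 * lam))."""
    steps = int(horizon / dt)
    integral = sum(reliability(lam, i * dt) ** 2 * dt for i in range(steps))
    return math.sqrt(integral)
```

Comparing such norms for competing architectures gives a single scalar summary of reliability over time, which is the kind of comparison the abstract alludes to.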
2.Z. Wang (University of Pannonia, Veszprem, Hungary), Z. Nagy (Semmelweis University, University of Pannonia, Budapest, Veszprem, Hungary), Z. Juhasz (University of Pannonia, Veszprem, Hungary)
On the Benefits of Empirical Mode Decomposition in Spatio-temporal EEG Analysis 
Empirical mode decomposition (EMD) is an effective tool for the analysis of non-linear and non-stationary signals, which has been widely used in various application fields for noise reduction, feature extraction and classification. Due to its adaptive and data-driven nature, it has been introduced to electroencephalography (EEG) analysis to extract more accurate information during time-frequency and phase analysis, multi-channel signal processing, and brain connectivity network construction. In our paper we review the development of EMD and its variants, illustrating their benefits in spatiotemporal EEG analysis, and introduce some practical applications of EMD in EEG analysis. Finally, we discuss future opportunities in EEG analysis with the EMD method, and outline parallelization strategies to speed up EMD processing.
3.L. Klaić, A. Stanešić, M. Cifrek (Faculty of Electrical Engineering and Computing, Zagreb, Croatia)
Numerical Modelling of Multi-layered Capacitive Electrodes for Biomedical Signals Measurement 
In this paper, a concentric cylinder model of the upper arm is used in order to perform a stationary and low-frequency analysis of the electrical field distribution on the capacitive surface electromyography (sEMG) electrode. The four-layered capacitive electrode is implemented, along with the preprocessing printed circuit. The goal of this research is to explore the quality of capacitive coupling between the electrode and skin covered with fabric, as well as the behavior of the implemented electronics, and to appraise the utility of simulation methods. For this purpose, the finite element method within the CST Studio Suite® software is used. The results have confirmed the purpose of implemented electronic components, as well as shown that the proximity of the electrode and thinner fabric layer with greater dielectric permittivity are expected to create stronger capacitive coupling.
4.S. Tudjarski, A. Stankovski, M. Gushev (Faculty of Computer Science and Engineering, Skopje, Macedonia)
Detecting Ventricular Beats with Machine Learning Models 
This paper aims at modeling a classifier of ventricular heartbeats by experimenting with the most advanced classic binary classifiers in different feature engineering scenarios. Methodology: The results were acquired by experimenting with the XGBoost and Random Forest algorithms, two of the most advanced classifiers not based on neural networks. Although the annotated ECG data sets contain records with several heartbeat classes, we focus on a model that distinguishes ventricular (V) heartbeats from all others (non-V heartbeats). Considering that we are dealing with a highly imbalanced data set, we applied the SMOTE algorithm for data enrichment to provide a better-balanced data set for training the model. To acquire better results, we added new calculated features, with and without feature selection. For feature selection, we used the Fisher Selector algorithm. Data: We used the MIT-BIH Arrhythmia benchmark database, with a train/test split according to the patient-oriented splitting approach that separates the original dataset into two subsets with approximately equal sizes and distributions of heartbeat types. Conclusion: The best results are achieved with the XGBoost algorithm on the original feature set: precision of 91.36%, recall of 88.31% and F1 score of 89.81%. The results showed that oversampling does not provide significantly better overall model performance. Still, we would recommend this approach since, in practice, when dealing with imbalanced data sets it leads to more robust models that perform better on data outside the training and test sets, such as when the model is used in production.
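The SMOTE enrichment step mentioned above boils down to interpolating between a minority-class sample and one of its minority-class neighbours; a minimal sketch of that core step (the neighbour search is omitted, and the function name is illustrative):

```python
import random

def smote_synthesize(x, neighbour, rng=None):
    """Core SMOTE step: a synthetic minority sample drawn uniformly on the
    segment from x to one of its minority-class nearest neighbours."""
    lam = (rng or random).random()  # interpolation factor in [0, 1)
    return [a + lam * (b - a) for a, b in zip(x, neighbour)]
```

Repeating this for many (sample, neighbour) pairs grows the minority class without duplicating points, which is what rebalances the training set.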
Friday, 5/27/2022 9:00 AM - 1:00 PM,
Camelia 2, Grand hotel Adriatic, Opatija
BE - Biomedical Engineering 
09:00 AM-10:20 AM    Papers 
1.M. Gusev (University Sts Cyril and Methodius, Skopje, Macedonia)
Detection of Premature Heartbeats 
Objectives: Premature heartbeats are those that appear earlier than the regular ones due to contractions not originating from the sinoatrial node, out of sequence with the normal heart rhythm. Although one might think detecting them is a trivial task, the distribution of premature heartbeats in the benchmark electrocardiograms shows this is not the case. Methodology: We specified several methods that calculate the relation of the premature heartbeat to the previous one, or to a set of several previous instances, and conducted experiments to determine which method delivers the best solution. The methods are based on calculating the optimal number of beats that precede the premature one. Then, we calculate the deviation ratio with respect to the average of these beats that affects the prematurity condition. We also examine the differences depending on whether the premature beat is atrial or ventricular. Data: The comprehensive MIT-BIH Arrhythmia electrocardiogram benchmark database is used in our evaluation. The analysis is conducted on the array of beat-to-beat intervals, and their types, for the heartbeats preceding the premature one. The number of analyzed heartbeats is 109494, out of which 3026 are atrial and 7236 are ventricular beats. Conclusion: The results show that the optimal number of preceding beats is 6 and the optimal deviation ratio is 16%. This approach achieves accuracy, sensitivity, and F1 score of over 98%. In particular, the arithmetic average does not give the best results; rather, other types of average are more appropriate.
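The decision rule reported above (compare each RR interval to the average of the 6 preceding intervals and flag a deviation ratio above 16%) can be sketched directly; the arithmetic mean is used here for simplicity, although the abstract notes other averages work better:

```python
def is_premature(rr_ms, i, window=6, threshold=0.16):
    """Flag beat i as premature when its RR interval is more than `threshold`
    shorter than the mean of up to `window` preceding intervals
    (6 beats and a 16% deviation ratio are the values found optimal above)."""
    prev = rr_ms[max(0, i - window):i]
    if not prev:
        return False  # nothing to compare against
    mean = sum(prev) / len(prev)
    return (mean - rr_ms[i]) / mean > threshold

# Example: steady ~800 ms rhythm with one early (premature) beat at the end
rr = [800, 810, 790, 805, 795, 800, 600]
flags = [is_premature(rr, i) for i in range(len(rr))]
```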
2.E. Merdjanovska, A. Rashkovska Koceva (Jožef Stefan Institute, Ljubljana, Slovenia)
Benchmarking Deep Learning Methods for Arrhythmia Detection 
Automatic arrhythmia detection methods are a very significant area of computational ECG analysis. This field has been researched for a long time; however, various challenges remain. Two of the main flaws in current ECG-based arrhythmia classification research are the limited variety of datasets used and the varying experimental setups, which make it difficult to directly compare different methods. Most often, a method is evaluated on a specific dataset and task (set of arrhythmia classes). By placing these methods under a unified evaluation setup, we can evaluate them on a wider range of datasets and tasks than they were originally proposed for. To address these challenges, in this paper we benchmark some of the most significant deep-learning-based methods for arrhythmia detection. These methods are compared on four datasets, considering the most significant state-of-the-art arrhythmia classification tasks. Included are the data from the CinC2017 and CPSC2018 challenges, as well as two recently published large-scale ECG arrhythmia datasets: PTB-XL and the Shaoxing Hospital Database. The analyses cover a wide range of both morphological and rhythmic arrhythmias, while focusing on methods suitable for single-lead analysis. In addition, the classification performance on 12-lead and single-lead data is compared and discussed.
3.I. Kuzmanov, A. Madevska Bogdanova (Ss. Cyril and Methodius University, Skopje, Macedonia), M. Kostoska, N. Ackovska (University "Ss. Cyril and Methodius", Skopje, Macedonia)
Fast Cuffless Blood Pressure Classification with ECG and PPG signals using CNN-LSTM Models in Emergency Medicine 
Cuffless blood pressure (BP) measurement is gaining a lot of attention as a promising new technology that can be embedded in a patch-like biosensor device. Electrocardiogram (ECG) and photoplethysmogram (PPG) waveforms are non-invasive by nature: they can be recorded without sending any electrical impulses to the human body. These signals capture different aspects of the cardiovascular system, so using both signals for blood pressure classification is a viable strategy. Quick estimation of blood pressure during the triage process, in cases of natural disasters with many injured subjects, is an essential measure for following the hemodynamic stability of the injured. The main goal of this study is to develop a two-class classification model (hypotension and non-hypotension) for fast prediction of the blood pressure category from ECG and PPG signals, in order to detect a sudden BP drop. The developed deep learning models are based on the LSTM architecture and its variant, CNN-LSTM. We also developed a three-class classification model. The models were trained and tested on data from the UCI Machine Learning Repository Cuff-Less Blood Pressure Estimation dataset with 12000 instances. The best result for the two-class model is AUROC = 0.74.
4.C. Barakat (Juelich Supercomputing Centre, Juelich, Germany), S. Fritsch (RWTH Aachen University Hospital, Aachen, Germany), M. Riedel (University of Iceland, Reykjavik, Iceland)
Lessons Learned on Using High-Performance Computing and Data Science Methods towards Understanding the Acute Respiratory Distress Syndrome (ARDS)  
Acute Respiratory Distress Syndrome (ARDS), also known as noncardiogenic pulmonary edema, is a severe condition that affects around one in ten thousand people every year with life-threatening consequences. Its pathophysiology is characterized by bronchoalveolar injury and alveolar collapse (i.e., atelectasis), and patient diagnosis is based on the so-called 'Berlin Definition'. One common practice in Intensive Care Units (ICUs) is to use lung recruitment manoeuvres (RMs) in ARDS to open up unstable, collapsed alveoli using a temporary increase in transpulmonary pressure. Many RMs have been proposed, but there is also confusion regarding the optimal way to achieve and maintain alveolar recruitment in ARDS. Therefore, the best way to prevent lung damage from ARDS is to identify its onset, which is still a matter of research. ARDS onset, progression, diagnosis, and treatment need algorithmic support, which raises the demand for cutting-edge computing power. This paper thus describes several data science approaches to better understand ARDS, such as time series analysis and image recognition with deep learning methods, and mechanistic modelling using a lung simulator. In addition, we outline how High-Performance Computing (HPC) helps in both cases. That also includes porting the mechanistic models from serial MATLAB approaches (e.g., the Nottingham Physiology Simulator and the Warwick Physiological Model) to the parallel infrastructures of the Juelich Supercomputing Centre (JSC) and its modular supercomputer designs. Finally, without losing sight of discussing the datasets, their features, and their relevance, we also include broader selected lessons learned in the context of ARDS from our Smart Medical Information Technology for Healthcare (SMITH) research project.
The SMITH consortium brings together technologists and medical doctors from nine hospitals, with the ARDS research performed by our Algorithmic Surveillance of ICU (ASIC) patients team. The paper thus also describes how essential it is that HPC experts team up with medical doctors, who usually lack technical and data science experience: a wealth of data exists, yet ARDS analysis is still progressing slowly. We complement the ARDS findings with selected insights from our COVID-19 research under the umbrella of the European Open Science Cloud (EOSC) fast-track grant, a very similar application field.
10:20 AM-10:40 AM    Break 
10:40 AM-11:40 AM    Papers 
1.R. Fonseca-Pinto, F. Ferreira, J. Alves (ciTechCare - Center for Innovative Care and Health Technology, Leiria, Portugal), F. Januário, A. Antunes (CHL - Centro Hospitalar de Leiria, Leiria, Portugal)
Impact of the COVID-19 Pandemic on Adherence to Exercise Prescription: The Case of Cardiac Rehabilitation Programs  
Cardiac Rehabilitation Programs (CRPs) are an important tool for secondary prevention, and their implementation within health services, despite an uneven geographical distribution, has been receiving attention from decision-makers in recent years. Adherence to CRPs is one of the great challenges faced by the multidisciplinary team, and there are several strategies to maintain adherence, particularly in CRP Phase III, which takes place outside the hospital environment. One such strategy is remote monitoring of performance and recording of possible alert symptoms. With the COVID-19 pandemic, these challenges have become even more evident, as Phase II programs were suspended, increasing the importance of home-based CRPs. In this work, we present the results of a study aiming to understand the impact of the pandemic on adherence to exercise prescription and on patients' perception of the effects of physical activity on their health. The results indicate that the pandemic did not have a major effect on adherence to home-based exercises, particularly among patients in programs using a telemonitoring system. Moreover, the perception of the importance of physical activity for health and well-being was reinforced in the context of the pandemic.
2.B. Thaman, T. Cao, N. Caporusso (Northern Kentucky University, Highland Heights, United States)
Face Mask Detection Using MediaPipe Facemesh 
Recently, face masks have received increasing attention due to the COVID-19 pandemic, as their correct use can reduce and prevent the spread of outbreaks. Thus, several research studies have focused on new strategies for identifying whether individuals are wearing a face mask before they are admitted into public spaces, buildings, and transportation systems. In this paper, we present an alternative face mask detection pipeline that automatically detects whether an individual is wearing a face mask. Our solution utilizes MediaPipe, a popular cross-platform machine learning framework for image segmentation and object detection, designed with specific regard to mobile devices. We present the architecture of our pipeline, detail its operation, and report the results of an evaluation study in which we analyzed the performance of our model in real-world scenarios.
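The paper's own pipeline is not reproduced here, but the general idea of a landmark-based mask check can be sketched. Assuming MediaPipe Face Mesh has already produced pixel coordinates for forehead and chin landmarks, one simple (hypothetical, not the authors') heuristic flags a mask when the lower-face color differs strongly from the uncovered forehead; all names and the threshold below are illustrative assumptions:

```python
import numpy as np

def region_mean_color(image, points, radius=2):
    """Average color over small patches around the given (row, col) points."""
    h, w, _ = image.shape
    samples = []
    for r, c in points:
        r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
        samples.append(image[r0:r1, c0:c1].reshape(-1, 3))
    return np.concatenate(samples).mean(axis=0)

def looks_masked(image, forehead_pts, chin_pts, threshold=60.0):
    """Flag a probable mask when the lower-face color deviates strongly
    from the forehead region (a crude illustrative heuristic)."""
    upper = region_mean_color(image, forehead_pts)
    lower = region_mean_color(image, chin_pts)
    return float(np.linalg.norm(upper - lower)) > threshold
```

A real detector would of course combine such cues with a trained classifier rather than a fixed color threshold.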
3.H. Zeb Khan, M. Munawar Iqbal (University of Engineering and Technology Taxila, Taxila, Pakistan)
Genomic Variant Analysis of COVID-19 Genomes by Variant Transforms 
The SARS-CoV-2 coronavirus behaves like a cuttlefish: it adapts to any environment, and the gloomy face of the pandemic keeps changing as new variants emerge, regenerating like a starfish, to the surprise of scientists and researchers. In this work we employed the Google Variant Transforms tool for processing COVID-19 VCF files, together with Google BigQuery for genomic variant analysis, on the Google Cloud Platform (GCP). We converted COVID-19 genomic sequences into VCF files using various bioinformatics tools. The Variant Transforms preprocessor algorithm checks the COVID-19 VCF files before processing and generates a report on three scrutiny criteria; the computation jobs for Variant Transforms and its preprocessor are managed by Google Dataflow with a job graph. We obtained a table of COVID-19 variant sites through BigQuery and displayed the results with Google Data Studio. Our main contribution is storing VCF files in BigQuery for the analysis of COVID-19 genomes, and we hope this research will be fruitful in combating COVID-19 variants.
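Variant Transforms loads VCF records into BigQuery tables. To make the data flow concrete, here is a minimal, illustrative parser for the fixed columns of the standard VCF layout (CHROM, POS, ID, REF, ALT, QUAL, FILTER, INFO); the function name is ours, not part of the tool:

```python
from typing import Dict, List

def parse_vcf_records(vcf_text: str) -> List[Dict[str, str]]:
    """Parse the eight fixed columns of VCF data lines, skipping '#' headers."""
    fields = ["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO"]
    records = []
    for line in vcf_text.splitlines():
        if not line or line.startswith("#"):
            continue  # meta-information and header lines start with '#'
        cols = line.split("\t")
        records.append(dict(zip(fields, cols)))
    return records
```

Once such records sit in a BigQuery table, tabulating variant sites reduces to a GROUP BY over POS, REF, and ALT.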
11:40 AM-11:50 AM    Break 
11:50 AM-1:00 PM    Papers 
1.A. Jovanović (University of Osijek, Department of mathematics, Osijek, Croatia), I. Alqassem (Ludwig Maximilian University of Munich, Gene Center Munich; NEC Laboratories Europe, Munich, Germany), N. Chappell (Mono Ltd., Osijek, Croatia), S. Canzar (Ludwig Maximilian University of Munich, Gene Center Munich, Munich, Germany), D. Matijević (University of Osijek, Department of mathematics, Osijek, Croatia)
Predicting RNA Splicing Branchpoints 
RNA splicing is a process in which introns are removed from pre-mRNA, resulting in mature mRNA. It requires three main signals: a donor splice site (5'ss), an acceptor splice site (3'ss), and a branchpoint (BP). Splice site prediction is a well-studied problem with several reliable prediction tools. Branchpoint prediction, however, is harder, mainly due to varying nucleotide motifs in the branchpoint area and the existence of multiple branchpoints in a single intron. An RNN-based approach called LaBranchoR was introduced as the state-of-the-art method for predicting a single BP for each 3'ss. In this work, we build on previous research reporting that 95% of introns have multiple BPs, with an estimated average of 5 to 6 BPs per intron. To that end, we extend the existing encoder of the LaBranchoR network with a Pointer Network decoder. We train our new encoder-decoder model, named RNA PtrNets, on 70-nucleotide-long annotated sequences taken from three publicly available datasets. We evaluate its accuracy and demonstrate how well the predictor generates multiple branchpoints on the given datasets.
2.S. Pereira, J. Verdugo, R. Fonseca-Pinto (ciTechCare - Center for Innovative Care and Health Technology, Polytechnic of Leiria, Leiria, Portugal)
Rapid Antimicrobial Susceptibility Testing Using Laser Speckle Technology 
Antimicrobial susceptibility testing (AST) is key to supporting clinical decisions in the treatment of bacterial infectious diseases. In particular, it is used to direct the most appropriate antimicrobial therapy, to assure its success and thus prevent infection-related health complications or even death. AST is also very important for preventing the emergence of bacterial antimicrobial resistance and the spread of multi-resistant bacteria. Current standard AST requires long periods of time to obtain results, which is an important limitation for the required targeted prescription of antimicrobials. Faced with the challenge of antimicrobial resistance, many proposals have been made to accelerate sample processing; however, the initial step of bacterial incubation prior to AST has not yet been circumvented and remains the major contributor to the long turnaround times of currently available AST technologies. This work presents a new methodology for AST in which the incubation time is reduced to a minimum and the process is based on an optical technology, laser speckle. Preliminary results of this new AST approach on clinical strains of Pseudomonas aeruginosa and Staphylococcus aureus, two of the most challenging pathogens worldwide, are very promising. This new technology may be the future solution to guide antimicrobial prescription, possibly delivering results in as little as 30 minutes.
3.M. Tislér, M. Kozlovszky (BioTech research Center, Obuda University, Budapest, Hungary)
Detection of Postural Abnormalities with IMU Based Sensor 
Posture has a big impact on a healthy lifestyle and is an essential part of our health. Unfortunately, many members of the population do not put enough emphasis on posture during sedentary activities. Careless posture can occur during both standing and sitting, and can lead to pain in the back and waist regions later in life. The aim of our project is to provide a solution for posture monitoring and correction. The developed special clothing is wearable and able to analyze the wearer's posture. Based on the results measured during posture monitoring, the system can support prevention and active posture correction. In this paper we describe the system design and show how the system performs in tests.
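The paper does not publish its detection algorithm, but IMU-based posture monitors typically start from the standard accelerometer tilt formula: with the sensor static, gravity dominates the measured acceleration and the forward lean angle follows from its components. A minimal sketch (the threshold and function names are our illustrative assumptions, not the authors' method):

```python
import math

def tilt_angle_deg(ax, ay, az):
    """Forward tilt (pitch) of the sensor relative to gravity, in degrees.
    Standard accelerometer tilt formula; assumes the sensor is static so
    the measured acceleration is dominated by gravity."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def is_slouching(ax, ay, az, threshold_deg=20.0):
    """Flag careless posture when the torso leans past a (hypothetical) threshold."""
    return abs(tilt_angle_deg(ax, ay, az)) > threshold_deg
```

A wearable system would additionally filter out motion artifacts and fuse gyroscope data before applying such a threshold.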

Basic information:
Chairs:

Karolj Skala (Croatia), Aleksandra Rashkovska Koceva (Slovenia), Davor Davidović (Croatia)

Steering Committee:

Marian Bubak (Poland), Jesús Carretero Pérez (Spain), Tiziana Ferrari (Netherlands), Dieter Kranzlmüller (Germany), Ludek Matyska (Czech Republic), Dana Petcu (Romania), Uroš Stanič (Slovenia), Matjaž Veselko (Slovenia), Yingwei Wang (Canada)

Program Committee:

Enis Afgan (Croatia), Viktor Avbelj (Slovenia), Davor Davidović (Croatia), Matjaž Depolli (Slovenia), Simeon Grazio (Croatia), Marjan Gusev (North Macedonia), Vojko Jazbinšek (Slovenia), Jurij Matija Kališnik (Germany), Zalika Klemenc-Ketiš (Slovenia), Dragi Kocev (Slovenia), Gregor Kosec (Slovenia), Miklos Kozlovszky (Hungary), Lene Krøl Andersen (Denmark), Tomislav Lipić (Croatia), Željka Mihajlović (Croatia), Panče Panov (Slovenia), Tonka Poplas Susič (Slovenia), Aleksandra Rashkovska Koceva (Slovenia), Karolj Skala (Croatia), Viktor Švigelj (Slovenia), Ivan Tomašić (Sweden), Roman Trobec (Slovenia), Roman Wyrzykowski (Poland)

Registration / Fees:

Price in EUR                                            EARLY BIRD          REGULAR
                                                        Up to 9 May 2022    From 10 May 2022
Members of MIPRO and IEEE                               230                 260
Students (undergraduate and graduate),
primary and secondary school teachers                   120                 140
Others                                                  250                 280

The discount doesn't apply to PhD students.

Contact:

Karolj Skala
Rudjer Boskovic Institute
Center for Informatics and Computing
Bijenicka 54
HR-10000 Zagreb, Croatia

E-mail: skala@irb.hr

 

SUBMISSION GUIDELINE:
All submitted papers will pass through a plagiarism check and a blind peer review process with at least two international reviewers.

Based on the reviewers' opinions and the votes of conference attendees, the best paper will be selected for a prize awarded as part of the final event of the DS-BE conference.

Accepted papers will be published in the ISSN registered conference proceedings. Presented papers will be submitted for inclusion in the IEEE Xplore Digital Library.
..........................................................
JOURNAL SPECIAL ISSUE
Authors of the best scientific papers will be invited to submit an extended version of their work to the Scalable Computing: Practice and Experience (ISSN 1895-1767) Journal.
..........................................................
There is a possibility that selected scientific papers, with some further modification and refinement, will be published in the following journals: Journal of Computing and Information Technology (CIT), MDPI Applied Sciences, MDPI Information, Frontiers, and EAI Endorsed Transactions on Scalable Information Systems.



Location:

Opatija is the leading seaside resort of the Eastern Adriatic and one of the most famous tourist destinations on the Mediterranean. With its aristocratic architecture and style, Opatija has been attracting artists, kings, politicians, scientists, sportsmen, as well as business people, bankers and managers for more than 170 years.

The tourist offer in Opatija includes a vast number of hotels, excellent restaurants, entertainment venues, art festivals, superb modern and classical music concerts, beaches and swimming pools – this city satisfies all wishes and demands.

Opatija, the Queen of the Adriatic, is also one of the most prominent congress cities in the Mediterranean, particularly important for its ICT conventions, one of which is MIPRO, which has been held in Opatija since 1979, and has attracted more than a thousand participants from over forty countries. These conventions promote Opatija as one of the most desirable technological, business, educational and scientific centers in South-eastern Europe and the European Union in general.


For more details, please visit www.opatija.hr and visitopatija.com.
