Chairs: Karolj Skala and Enis Afgan
|M. Riedel, M. Memon, A. Memon (Juelich Supercomputing Centre, Juelich, Germany), G. Fiameni, C. Cacciari (CINECA, Bologna, Italy), T. Lippert (Juelich Supercomputing Centre, Juelich, Germany)
High Productivity Processing - Engaging in Big Data around Distributed Computing
The steadily increasing amounts of scientific data and the analysis of 'big data' are a fundamental characteristic in the context of computational simulations that are based on numerical methods or known physical laws. This represents both an opportunity and a challenge, on different levels, for traditional distributed computing approaches, architectures, and infrastructures. On the lowest level, data-intensive computing is a challenge since CPU speed has surpassed the I/O capabilities of HPC resources; on the higher levels, complex cross-disciplinary data sharing is envisioned via data infrastructures in order to address the fragmented answers to societal challenges. This paper highlights how these levels share the demand for 'high productivity processing' of 'big data', including the sharing and analysis of large-scale science data-sets. The paper describes approaches such as the high-level European data infrastructure EUDAT as well as low-level requirements arising from HPC simulations used in distributed computing. The paper aims to address the fact that big data analysis methods such as computational steering and visualization, map-reduce, R, and others are available, but much research and evaluation remains to be done before they yield scientific insights in the context of traditional distributed computing environments.
|M. Simjanoska, S. Ristov, G. Velkoski, M. Gusev (Ss. Cyril and Methodius University, Faculty of Information Sciences and Computer Engineering, Skopje, Macedonia)
Scaling the Performance and Cost While Scaling the Load and Resources in the Cloud
Cloud computing is a paradigm that offers on-demand scalable resources with the "pay-per-usage" model. Cloud service providers' prices rise linearly as the resources scale. However, the main question for cloud customers is: "Does the performance also scale with the price of the resources?" In this paper we analyze the performance and the cost of two web services, one that utilizes memory (Concat) and one that utilizes both memory and CPU (Sort), varying the server load with different message sizes and numbers of concurrent messages in order to determine the real cost of rented CPU resources. The results show that the Concat web service provides the lowest cost when hosted on two CPUs, while the Sort web service's cost rises linearly with the resources, i.e. its lowest cost is obtained when hosted on one CPU.
|D. Tomić (Hewlett-Packard, Zagreb, Croatia), L. Gjenero, E. Imamagić (University of Zagreb University Computing Centre, Zagreb, Croatia)
Semidefinite optimization of High Performance Linpack on Heterogeneous Cluster
High Performance Linpack (HPL) is an industry-standard benchmark used to measure the computational power of high-performance clusters. Due to its high degree of parallelism, it can scale up linearly over hundreds of thousands of computing nodes, with an efficiency that often exceeds 80%; efficiency is expressed as the ratio of measured double-precision floating point operations per second to the highest number of double-precision floating point operations per second that can theoretically be achieved. However, running HPL on heterogeneous HPC clusters, built of computing nodes with different computational power, shows poor efficiency in most cases. In such clusters, the efficiency of HPL decreases further if the speed of the interconnect links between computing nodes differs. In order to improve HPL efficiency on such clusters, one needs to optimally balance the HPL workload over the computing nodes according to their computational power, and at the same time take into consideration the speed of the communication links between them. Our thesis is that the problem of efficiently running HPL on a heterogeneous HPC cluster is solvable, and that one can formulate it as a semidefinite optimization of the second largest eigenvalue in magnitude (SLEM) of the matrix describing the data flow of HPL in a cluster. In order to test the validity of this approach, we ran a series of HPL benchmarks on the Isabella HPC cluster, both SLEM-optimized and non-optimized. By comparing the results obtained with SLEM optimization of HPL against non-optimized HPL, we identified a dramatic improvement in HPL efficiency when using SLEM. Moreover, by taking into consideration the memory sizes of the computational nodes, we were able to improve the SLEM optimization of HPL further.
|G. Radchenko, E. Hudyakova (South Ural State University, Chelyabinsk, Russian Federation)
Distributed Virtual Test Bed: an Approach to Integration of CAE Systems in the UNICORE Grid Environment
Computer-Aided Engineering (CAE) systems demand a vast amount of computing resources to simulate modern high-tech products. In this paper we consider a problem-oriented approach to accessing remote distributed supercomputer resources using the concept of a distributed virtual test bed (DiVTB). A DiVTB provides a problem-oriented user interface to distributed computing resources within the grid, online launch of CAE simulations, and automated search, monitoring and allocation of computing resources for carrying out virtual experiments. To support the DiVTB concept, the CAEBeans technology was developed. CAEBeans provides a solution for the development and deployment of DiVTBs, the integration of the most common CAE systems into the distributed computing environment as grid services (based on the UNICORE grid middleware), and web access to the CAE simulation process.
|E. Atanassov, T. Gurov, A. Karaivanova (IICT-BAS, Sofia, Bulgaria)
Message Oriented Framework with Low Overhead for Efficient High-Performance Monte Carlo Simulations
In recent years Bulgaria has acquired a substantial amount of HPC resources of various types. The biggest procurement has been the BlueGene/P supercomputer at SAITC with 8192 CPU cores, while the Bulgarian Academy of Sciences now has two HPC clusters with Intel CPUs and Infiniband interconnection, which total more than 1000 logical cores. In addition, some servers equipped with powerful GPUs are available for applications that can take advantage of them. The coordinated use of such resources by one application faces significant challenges due to the heterogeneity of the resources and the networking and security constraints.
In order to facilitate the coordinated use of all these resources where each resource is used for the parts of the application where it is most efficient, we have developed a framework that allows the researcher to interconnect resources of the above types with minimal overhead. In this paper we describe the architecture of the system and demonstrate its effectiveness for a semiconductor modeling application, showing numerical and timing results.
|C. Blanco, A. Cofiño, V. Fernández-Quiruelas (University of Cantabria, Santander, Spain)
WRF4SG: A Scientific Gateway for the Weather Research and Forecasting Model
Numerical climate models such as the Weather Research and Forecasting (WRF) model are nowadays among the most computationally demanding applications. Climate and weather research experiments performed with models such as WRF are complex tasks due to their large storage, CPU and memory requirements. To manage this, we are developing a scientific gateway called WRF4SG (WRF for Scientific Gateway), based on the WS-PGRADE/gUSE and WRF4G frameworks, to cover WRF users' needs when performing climate and weather research experiments. The main objective of this work is to establish how WRF4SG will assist WRF users and to outline its development.
|E. Atanassov, M. Durchova (Institute of Information and Communication Technologies, Sofia, Bulgaria)
Generation of the Scrambled Halton Sequence Using Accelerators
The Halton sequence is one of the most popular low-discrepancy sequences. In order to satisfy some practical requirements, the original sequence is usually modified in some way. The scrambling algorithm proposed by Owen has several theoretical advantages, but on the other hand is difficult to implement in practice due to the trade-off between high memory and high computational requirements. In our work we concentrate on the case when the number of coordinates is relatively high. The use of computational accelerators, and especially GPUs, is increasingly relevant for such practical applications, since more and more of the resources available through Grid and Cloud infrastructures provide access to such accelerators, provided that the software can make use of them. In this paper we discuss our algorithm for generation of the Halton sequence with Owen-type scrambling, implemented using NVIDIA CUDA. We also show numerical results achieved on our GPU-enabled nodes, which are equipped with NVIDIA M2090 cards.
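For readers unfamiliar with the construction, the unscrambled Halton sequence is built from radical inverses in coprime bases. The sketch below is a minimal pure-Python CPU illustration of that baseline; it does not reproduce the authors' Owen-type scrambling or their CUDA implementation.

```python
def radical_inverse(n, base):
    """Mirror the base-`base` digits of n around the radix point,
    e.g. 6 = 110 in base 2 -> 0.011 in base 2 = 0.375."""
    inv = 0.0
    f = 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)
        inv += digit * f
        f /= base
    return inv

def halton(count, primes=(2, 3, 5)):
    """First `count` points of the (unscrambled) Halton sequence,
    one coprime base per coordinate."""
    return [[radical_inverse(i, b) for b in primes] for i in range(1, count + 1)]
```

Owen scrambling permutes the digits before mirroring, which is where the memory/computation trade-off the abstract mentions arises.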
|N. Anchev, M. Gusev, S. Ristov, B. Atanasovski (Ss. Cyril and Methodius University, Faculty of Information Sciences and Computer Engineering, Skopje, Macedonia)
Intel vs AMD: Matrix Multiplication Performance
Matrix-matrix multiplication (MMM) is a widely used algorithm in today's computation and research, and many techniques exist to speed up its execution. In this paper, we analyze the performance of MMM while varying the matrix size in order to determine the region where it provides the best performance. We also determine the best speedup of the parallel implementation for different CPU architectures, since cache organization is very important for MMM performance. Superlinear speedup (speedup greater than the number of threads used) is achieved for parallel implementations.
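The cache sensitivity discussed in the abstract is commonly tackled with loop tiling (blocking), which keeps sub-blocks of the operands resident in cache. The following pure-Python sketch illustrates the idea only; it is not the authors' benchmark code, and the `block` parameter is a hypothetical tuning knob that a real implementation would match to the cache size.

```python
def matmul_blocked(A, B, block=32):
    """Dense matrix multiply C = A @ B with loop tiling, so that each
    (block x block) tile of A, B and C is reused while it is hot in cache."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, m, block):
            for jj in range(0, p, block):
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, m)):
                        a = A[i][k]               # scalar reused across the j loop
                        row_c, row_b = C[i], B[k]
                        for j in range(jj, min(jj + block, p)):
                            row_c[j] += a * row_b[j]
    return C
```

In compiled code the same transformation reduces cache misses dramatically; in pure Python it only demonstrates the loop structure.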
|A. Martinović, D. Arbula, Ž. Jeričević (Tehnički fakultet, Rijeka, Croatia)
Web Service for Separating Components in the Exponential Decay Process
Exponential decay describes a multitude of physical, chemical and biological processes. It is very often the case that a summary signal is measured and the individual components have to be extracted by computation. Due to its universality, the problem has attracted a lot of interest over the last fifty years from researchers working in fields ranging from nuclear physics to biology. One of the first developments was in chemistry, for the analysis of chemical kinetics data.
Although the separation of exponentials looks deceptively simple, it is actually a difficult problem because of the non-orthogonality of exponentials. Attempts to separate exponentials with close half-times by nonlinear least squares usually lead to a huge number of iterations with no convergence.
Our server uses a least squares method with a linearization step based on numerical integration. Numerical integration has smoothing properties for this type of signal because positive and negative errors cancel each other; it can be seen from the Fourier transform of the numerical integration operator that high frequencies are suppressed in the integration result. After the integration, the solution of the multi-exponential problem is obtained by solving an over-determined system of linear equations, followed by finding the roots of a polynomial. The number of exponentials in the signal dictates the degree of the polynomial, the rank of the linear system and the multiplicity of the numerical integration.
The advantage of accurate linearization with noise cancellation properties is that the separation of exponentials becomes a one-step procedure, and the condition number of the linear system can be used to control the quality of the solution. The procedure can be completely generalized, and a priori assumptions about the solution are not necessary unless the user wants to use them as constraints.
The server operates in two modes, one for registered and another for non-registered users. Registered users can save the results, including graphic output, in a PDF file and ask the server to email the results to them. Unregistered users can do all calculations and view the results through a web browser.
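The linearization pipeline described above (integration, an over-determined linear system, then polynomial roots) belongs to the Prony family of methods. As an illustration of that family only, the sketch below applies the classical linear-prediction variant of Prony's method to noise-free, uniformly sampled data with exactly two components; the authors' integration-based server handles the general, noisy case.

```python
import math

def prony_two_exponentials(y, dt):
    """Recover the two decay rates from samples
    y[k] = c1*exp(-r1*k*dt) + c2*exp(-r2*k*dt), k = 0, 1, 2, 3.
    Step 1: fit the linear recurrence y[k+2] = a1*y[k+1] + a0*y[k].
    Step 2: the decay factors z = exp(-r*dt) are the roots of z^2 - a1*z - a0."""
    # Solve the 2x2 system from the first four samples (exact for noise-free data).
    d = y[1] * y[1] - y[0] * y[2]
    a1 = (y[1] * y[2] - y[0] * y[3]) / d
    a0 = (y[1] * y[3] - y[2] * y[2]) / d
    # Roots of the characteristic polynomial z^2 - a1*z - a0 = 0.
    disc = math.sqrt(a1 * a1 + 4.0 * a0)
    z1, z2 = (a1 + disc) / 2.0, (a1 - disc) / 2.0
    return sorted(-math.log(z) / dt for z in (z1, z2))
```

With noise, the 2x2 solve would be replaced by an over-determined least squares fit over many samples, which is where the condition-number quality control mentioned above comes in.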
|L. Djinevski, S. Arsenovski (Fon University, Skopje, Macedonia), S. Ristov, M. Gusev (Ss. Cyril and Methodius University - Skopje / Faculty of Computer Science and Engineering, Skopje, Macedonia)
Performance Drawbacks for Matrix Multiplication Using Set Associative Cache in GPU Devices
This paper explains the performance drawbacks of the dense matrix multiplication algorithm on multiprocessor shared-memory devices with n-way set associative cache memory. A description of the GPU memory hierarchy and the cache memory organization is presented. We give a theoretical analysis of why the performance drawback appears in the dense matrix multiplication algorithm. The results obtained with our micro-benchmark reveal the level of cache associativity. We validate our micro-benchmark by evaluating the cache memory associativity with another known micro-benchmark.
|M. Trkman, S. Vrhovec, D. Vavpotič, M. Krisper (Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia)
Defending the Need for a New Global Software Approach: a Literature Review
The paper focuses on the needs of a specific type of software development project: one with a traditional customer who wants a specific product delivered within a certain time frame. The challenge is that the software development company does not have enough in-house developers with the necessary technical skills, which happens easily since the specific technical knowledge of software engineers becomes outdated relatively quickly. Moreover, the company may not want to hire additional employees for various reasons, such as lack of time for training or a limited budget. One option is to outsource the coding, with a preference for highly replaceable online developers, while retaining time and content control over the private software development project. The paper discusses and compares the available global software approaches.
|M. Depolli, G. Kosec, J. Ugovšek (Institut Jožef Stefan, Ljubljana, Slovenia), V. Malačič (Marine Biology Station, Piran, Slovenia)
Parallelization of NAPOM Implementation
In this paper, the code for the North Atlantic Princeton Ocean Model (NAPOM) used by the Marine Biology Station (MBS) is parallelized and optimized. The FORTRAN source code and the hardware architecture of the MBS cluster are examined and analyzed to determine the behavior of the NAPOM execution, with bottlenecks identified on both ends. Based on the analysis, the most effective optimization and parallelization actions are planned. The most time-consuming modules of the NAPOM package are optimized to achieve maximal performance on the hardware architecture. The pre-processing modules are distributed over multiple computational nodes, while all independent complex operations are parallelized using shared-memory principles. The resulting parallelized implementation of the NAPOM package executes nearly four times faster than the original one, with only a minimal additional load on the MBS cluster.
|Ž. Jeričević (Tehnicki fakultet, Rijeka, Croatia), I. Kožar (Gradevinski fakultet, Rijeka, Croatia)
Faster Solution of Large, Over-Determined, Dense Linear Systems
The solution of a linear least squares problem requires the solution of an over-determined system of equations, which for large dense systems requires a prohibitive number of operations. We developed a novel numerical approach for finding an approximate solution of this problem when the system matrix is dense. The method is based on the Fourier or Hartley transform, although any unitary, orthogonal transform which concentrates power in a small number of coefficients can be used. This strategy is borrowed from digital signal processing, where pruning redundant information from spectra or filtering selected information in the frequency domain is common practice. For the least squares problem, the procedure is to transform the linear system along the columns to the frequency domain, generating a transformed system. The least significant portions of the transformed system are deleted as whole rows, yielding a smaller, pruned system. The pruned system is solved in the transform domain, yielding the approximate solution. The quality of the approximate solution is compared against the full system solution, and the differences are found to be at the level of numerical noise. Numerical experiments illustrating the feasibility of the method and the quality of the approximation, together with operation counts, are presented.
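A toy version of the described pipeline can be sketched as follows, assuming a naive O(m^2) DFT and complex normal equations for the pruned system; all function and parameter names here are illustrative, not the authors' code, and a practical implementation would use an FFT and a stabler solver.

```python
import cmath

def dft(v):
    """Naive O(m^2) discrete Fourier transform of a real vector (illustration only)."""
    m = len(v)
    return [sum(v[t] * cmath.exp(-2j * cmath.pi * f * t / m) for t in range(m))
            for f in range(m)]

def solve(M, rhs):
    """Gaussian elimination with partial pivoting on a small complex system."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0j] * n
    for c in reversed(range(n)):
        x[c] = (A[c][n] - sum(A[c][k] * x[k] for k in range(c + 1, n))) / A[c][c]
    return x

def pruned_lsq(A, b, keep):
    """Approximate least squares for an over-determined A x ~ b:
    DFT each column of A and the vector b, keep only the `keep` rows where
    |DFT(b)| is largest, and solve the small pruned system via normal equations."""
    m, n = len(A), len(A[0])
    cols = [dft([A[r][c] for r in range(m)]) for c in range(n)]
    bh = dft(b)
    rows = sorted(range(m), key=lambda r: -abs(bh[r]))[:keep]
    Ah = [[cols[c][r] for c in range(n)] for r in rows]
    bs = [bh[r] for r in rows]
    # Normal equations: (Ah^H Ah) x = Ah^H bs
    M = [[sum(Ah[r][i].conjugate() * Ah[r][j] for r in range(len(rows)))
          for j in range(n)] for i in range(n)]
    rhs = [sum(Ah[r][i].conjugate() * bs[r] for r in range(len(rows))) for i in range(n)]
    return [x.real for x in solve(M, rhs)]
```

Because the transform is applied to both sides, any consistent right-hand side still yields the exact solution from the pruned rows; the approximation error appears only for noisy or inconsistent systems.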
|A. Šimec, O. Staničić (Tehničko veleučilište u Zagrebu, Zagreb, Croatia)
Virtual Computers and Virtual Data Storage
Virtual data storage represents a new business model which includes various concepts such as virtualization, design of distributed applications, and controls that enable flexible data access. These methods use networks of remote servers instead of local servers and personal computers for storing, controlling and editing data. The locations of the servers which execute applications and store data are not strictly defined, hence the terms "virtual data storage" or "storing data in the cloud". As the need for storing data has increased, controlling that data has become harder as well. Backing up data in large organizations is an inconvenient task, and in spite of the increase in the power and storage capacity of computers, the price of storing and maintaining data remains high. Various technologies and solutions have been developed over time to overcome this problem, and in the end they evolved into virtualization of the data storage system.
|D. Davidović, T. Lipić, K. Skala (INSTITUT "RUĐER BOŠKOVIĆ", ZAGREB, Croatia)
AdriaScience Gateway: Application Specific Gateway for Advanced Meteorological Predictions on Croatian Distributed Computing Infrastructures
Marine traffic and rapidly growing tourism in the Adriatic region are the main reasons for investigating different weather phenomena and developing prediction models that depend on distributed computing infrastructures. Weather phenomena like storms and waterspouts are common in the Mediterranean Basin and thus in the Adriatic Sea region. Their occurrence has become easier to follow in the past few years thanks to modern communication and computing technologies, and can be predicted by several models such as WRF-ARW, ALADIN and ROMS. This paper presents the AdriaScience gateway, an application-specific gateway for meteorological applications based on WS-PGRADE/gUSE, developed within the SCI-BUS project. The main goal of the application-specific gateway is to facilitate universal access to the different Croatian distributed computing infrastructures for the targeted Croatian meteorology community (the VOAM virtual organization).
|Lunch break
Chairs: Karolj Skala and Enis Afgan
|J. Matkovic (JP Elektroprivreda HZ-HB d.d., Mostar, Bosnia and Herzegovina), K. Fertalj (Fakultet elektrotehnike i računarstva, Sveučilište u Zagrebu, Zagreb, Croatia)
Handling Web Service Interfaces
When developing systems based on web service compositions, it is important to define a methodology for handling web service interfaces. The process of handling web service interfaces comprises publishing, storing and retrieving interfaces. UDDI is a well-known product that offers the aforementioned capabilities. This paper defines a methodology with the necessary steps involved in the process of handling web service interfaces and describes how UDDI deals with it. Although registries in general do not implement managing different versions of one interface, this process must be considered together with the aforementioned processes offered by registries. Version management and its relation to registries are also considered in the paper.
The second part of the paper introduces models of search engines that are able to search over registries and interface versions based on given criteria, and considers how those search engines can be used in conjunction with web service orchestrations.
|G. Velkoski, S. Ristov, M. Gusev (Ss. Cyril and Methodius University, Faculty of Information Sciences and Computer Engineering, Skopje, Macedonia)
Loosely or Tightly Coupled Affinity for Matrix - Vector Multiplication
Introducing multi-level cache memory reduces the gap between the CPU and main memory and speeds up program execution. Modern multiprocessors can scale the speedup up to linear speedup according to Gustafson's law. Each CPU core usually possesses private L1 and L2 caches and shares the L3 cache with other cores. Using private or shared cache can have a significant impact on the performance of some algorithms in parallel implementations: using private caches increases the overall cache size used during the execution, while, on the other hand, a shared cache reduces cache misses if all CPU cores use the same data. In this paper we analyze the performance of the dense matrix-vector multiplication algorithm in sequential and parallel implementations on a multi-chip multi-core multiprocessor in order to determine the CPU affinity that provides the best performance. We also perform a theoretical analysis to determine the problem-size regions where one CPU affinity is better than another.
|E. Atanassov, S. Ivanovska (Institute of Information and Communication Technologies - Bulgarian Academy of Sciences, Sofia, Bulgaria)
Computation and Analysis of Sobol Coefficients for Air Pollution Concentrations over the Territory of Bulgaria
One of the main tools for modeling air pollution over the territory of Bulgaria is the US EPA Models-3 system. The main components of the system are MM5/WRF (the meteorological preprocessor), SMOKE (the emission preprocessor) and CMAQ (the chemical transport model). The TNO emission inventory is used as emission input. The Models-3 "Integrated Process Rate Analysis" option is applied to discriminate the role of different dynamic and chemical processes in the pollution for all SNAP categories. In this work we evaluate the influence of the different input parameters, such as the concentrations in the different SNAP categories under constant meteorological conditions, on the output concentrations over Bulgaria, following the methodology of Sobol and Saltelli. In order to obtain reliable estimates of the Sobol coefficients we perform a large number of MPI jobs using the clusters in the South-Eastern region. Using these coefficients we assess the relative importance of the various input parameters and their interactions.
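As background, first-order Sobol indices S_i = V_i/V are typically estimated with a Saltelli-type sampling scheme: two independent sample matrices A and B, plus hybrid matrices AB_i in which column i of A is swapped for column i of B. The sketch below is an illustrative serial Monte Carlo estimator (Jansen's formula) on a toy model, not the authors' MPI-based computation; the seed and sample count are arbitrary choices.

```python
import random

def sobol_first_order(model, dim, n, rng=random.Random(42)):
    """Estimate first-order Sobol indices S_i = V_i / V with the
    Saltelli sampling scheme and Jansen's estimator for V_i."""
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n    # total variance V
    S = []
    for i in range(dim):
        # AB_i: rows of A with coordinate i taken from B
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Jansen: V_i = V - (1/2N) * sum (f(B) - f(AB_i))^2
        Vi = var - sum((yb - yab) ** 2 for yb, yab in zip(yB, yABi)) / (2 * n)
        S.append(Vi / var)
    return S
```

For an additive model the indices sum to one; interactions show up as a shortfall between the sum of the S_i and one.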
|H. Nenov, B. Dimitrov, A. Marinov (Technical university - Varna , Varna, Bulgaria)
Algorithms for Computational Procedure Acceleration for Systems of Differential Equations in MATLAB
Most engineers use computational environments from the MATLAB family as a favorite tool for solving problems. The combination of being relatively easy to use and its computational power makes it so attractive. A significant problem in solving more complex tasks is the time needed to obtain results. Different approaches are used in order to reduce it to an acceptable range of values: distributed systems, multiprocessor systems and others. In this paper we discuss other useful approaches to solving the above-mentioned problem. Optimization of the MATLAB code in terms of effective memory allocation, vectorization, parallelism and using a graphics processing unit for calculations improves the solving process and dramatically decreases processing time. Thus, the need for powerful computing infrastructures such as grid systems or superarchitectures is reduced.
|Y. Kowsar (University of Melbourne, Melbourne, Australia), E. Afgan (Ruđer Bošković Institute (RBI), Zagreb, Croatia)
Support for Data-intensive Computing with CloudMan
The Infrastructure-as-a-Service compute infrastructure model has showcased its ability to transform how access to compute resources is realized; it delivered on the notion of Infrastructure-as-Code and enabled a new wave of compute adaptability. However, many workloads still execute only in a more structured and traditional cluster computing environment, where jobs are handed off to a job manager and possibly executed in parallel. We have been developing CloudMan (usecloudman.org) as a versatile solution for enabling and managing compute clusters in cloud environments via a simple web interface or an API. Recently, CloudMan has been extended to support three modes of job execution: Sun Grid Engine (SGE), Hadoop, and Condor.
|M. Kozlovszky, M. Törőcsik, T. Schubert, V. Póserné (Obuda University, Budapest, Hungary)
IaaS Type Cloud Infrastructure Assessment and Monitoring
IaaS-type clouds are multilayered complex systems. From the end-user perspective, their service quality and reliability depend heavily on the underlying hardware/software infrastructure and the technologies and resources used (e.g. human resources). To successfully evaluate an IaaS cloud ecosystem, we need to measure it in parallel from many perspectives (e.g. security, QoS, performance, reliability). Our aim is to build up an exhaustive parameter tree which can describe a generic IaaS cloud system with quantitative and qualitative parameter values. The measured values of these pre-defined parameters are provided by our software measurement framework, which is capable of automatically evaluating the targeted IaaS cloud system. The measured values of the parameter tree enable both us and the end-users to compare and evaluate the different IaaS-type cloud systems available on the market.
|R. Trobec, M. Depolli (Jožef Stefan Institute/Department of Communication Systems, Ljubljana, Slovenia), K. Skala, T. Lipic (Ruđer Bošković Institute/Centre for Informatics and Computing, Zagreb, Croatia)
Energy Efficiency in Large-Scale Distributed Computing Systems
This paper reviews the literature on techniques and mechanisms for enabling energy-efficient large-scale distributed systems, considering both computing and networking resources as well as application characteristics. Special focus is given to two initiatives: energy-efficient high performance computing on heterogeneous architectures and providing energy-efficient cloud computing.
Chairs: Karolj Skala and Enis Afgan
|D. Gorgan (Technical University of Cluj-Napoca, Cluj-Napoca, Romania), O. Capatana (Technical University of Cluj-Napoca, Cluj Napoca, Romania)
Remote Visualization of 3D Graphical Models
The complexity of graphical applications has increased considerably in the past years, making them impossible to execute on computers with limited resources. Such application domains include Earth science, scientific simulation and visualization, medicine, physics, and virtual reality. Remote visualization, which relies on rendering graphical applications on remote systems with specialized resources and fast access to huge data models, could be a solution to this problem. This paper presents a solution which combines several components that communicate with each other to achieve an efficient and reliable remote visualization system. The design and the chosen tools allow the system to deal with constraints such as network bandwidth, network latency, and service-oriented functionality. The paper explores and analyzes the main issues related to the involvement of GPU graphics clusters and Grid and Cloud infrastructures.
|M. Mržek, J. Blažič (Institute Jožef Stefan, Ljubljana, Slovenia)
Fast Network Communities Visualization on Massively Parallel GPU Architecture
Modelling phenomena with networks has wide application in many disciplines including biology, economics, sociology, and computer science. In network analysis, modularity is an important measure for automatically extracting communities of closely connected nodes. Another important aspect of network analysis is network visualization. Different techniques for network layout generation exist, and the force-driven layout is one of the most popular. However, generating force-driven layouts of large networks is time consuming and can produce a layout where distinct communities of nodes are not separated but rather remain entangled. Such layouts are harder for an end-user to inspect visually.
In this paper, we propose a GPU-based implementation of a force-driven algorithm for layout generation. By exploiting the massively parallel architecture of modern GPUs, we reduce the computational time by orders of magnitude compared with a CPU-based implementation. Secondly, we implement a multi-layer force-driven method for network layout generation in which communities are less entangled. Again, by exploiting the GPU we obtain a significant speed-up of the computation over the CPU implementations. Our results imply that GPUs can significantly speed up the computations in network analysis, so that larger networks can be analysed in real time.
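For reference, one iteration of a classic force-driven (Fruchterman-Reingold-style) layout pairs an O(n^2) all-pairs repulsion with attraction along edges; that all-pairs loop is exactly the part a GPU implementation parallelizes. Below is a minimal sequential sketch with illustrative constants; it is not the paper's algorithm or code.

```python
import math, random

def force_layout(nodes, edges, iters=200, k=1.0, seed=1):
    """Sequential force-driven layout: all node pairs repel (~k^2/d),
    connected nodes attract (~d^2/k), with a linear cooling schedule."""
    rng = random.Random(seed)
    pos = {v: [rng.random(), rng.random()] for v in nodes}
    for step in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for i, u in enumerate(nodes):            # O(n^2) repulsion (GPU-friendly)
            for v in nodes[i + 1:]:
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
                disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
        for u, v in edges:                        # attraction along edges
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
            disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
        t = 0.1 * (1.0 - step / iters)            # cooling: cap displacement
        for v in nodes:
            d = math.hypot(*disp[v]) or 1e-9
            pos[v][0] += disp[v][0] / d * min(d, t)
            pos[v][1] += disp[v][1] / d * min(d, t)
    return pos
```

On a GPU, each thread would accumulate the repulsive forces for one node, turning the quadratic pair loop into a data-parallel kernel.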
|P. Mrazović, M. Pilipović, M. Volarević, Ž. Mihajlović (Fakultet elektrotehnike i računarstva, Zagreb, Croatia)
Realization of Natural Interaction over the Micro Soot Device Model
The paper explores methods of achieving natural interaction between the virtual and the real world, along with their application in industry. Significant emphasis is given to the Microsoft Kinect device, whose functionality was introduced in the field of the powertrain systems industry. Collaboration with the Austrian company AVL, which is engaged in the development, testing and simulation of powertrain systems, contributed a specific problem that can be solved in the augmented reality domain. In order to facilitate the maintenance of a special device for continuous measurement of the lowest soot concentrations in the diluted exhaust of internal combustion engines, an interactive virtual service manual has been developed. The application uses interaction with the Microsoft Kinect device, which enables users to control and interact with the computer world through a natural user interface using gestures and spoken commands. The device finds its greatest application in interactive entertainment, but this paper presents and exploits its great potential in industrial environments.
|J. Sirotković (Siemens d.d., Split, Croatia), H. Dujmić, V. Papić (FESB, Split, Croatia)
Accelerating Mean Shift Image Segmentation with IFGT on Massively Parallel GPU
The mean shift algorithm is a popular technique in many machine vision applications, including image segmentation. The main drawback of the original algorithm is its quadratic computational complexity, which has been addressed by multiple acceleration approaches proposed so far. One of the most effective is the use of the Improved Fast Gauss Transform (IFGT) to accelerate the Gaussian summations of mean shift, resulting in linear computational complexity. Despite such advances, mean shift segmentation of larger images can still be too time consuming for time-critical applications. However, the recent rapid increase in the performance of general purpose graphics processing unit (GPGPU) hardware has opened an opportunity for significant acceleration of the algorithm by parallel execution. This paper introduces the first parallel implementation of an IFGT-MS segmentor based on a many-core GPGPU platform. The emphasis is placed on adapting the core algorithm to efficiently exploit the benefits of the underlying GPU hardware architecture. Numerical experiments have demonstrated considerably faster segmentation execution compared with alternative CPU- and GPU-based mean shift variants.
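As background on the underlying procedure, mean shift repeatedly moves each point to the (kernel-weighted) mean of its neighbourhood until the points settle on density modes. The sketch below is a minimal one-dimensional flat-kernel version for illustration only; it exhibits the quadratic complexity the paper sets out to accelerate and involves no IFGT or GPU code.

```python
def mean_shift_1d(points, bandwidth, iters=50, tol=1e-6):
    """Plain O(n^2) mean shift with a flat kernel: each point is repeatedly
    moved to the mean of all original points within `bandwidth` of it,
    so the points converge onto the modes of the underlying density."""
    modes = list(points)
    for _ in range(iters):
        moved, new = 0.0, []
        for x in modes:
            neigh = [p for p in points if abs(p - x) <= bandwidth]
            m = sum(neigh) / len(neigh)            # mean shift step
            moved = max(moved, abs(m - x))
            new.append(m)
        modes = new
        if moved < tol:                            # all points have converged
            break
    # Merge converged modes that landed closer than half a bandwidth apart.
    clusters = []
    for m in sorted(modes):
        if not clusters or m - clusters[-1] > bandwidth / 2:
            clusters.append(m)
    return clusters
```

In image segmentation the same iteration runs over joint spatial-range feature vectors, and the Gaussian sums inside it are what IFGT approximates in linear time.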
|K. Hajdarevic, A. Civa (Faculty of Electrical Engineering, University of Sarajevo, Sarajevo, Bosnia and Herzegovina)
ARIS MashZone KPI Visualisation of Simulated Routing Protocol Operations for Smart Dust Environments
Recent history has shown that the world needs more mobility, smaller mobile devices, and, where possible, mobile devices with multiple sensors for different purposes. The wireless ad-hoc sensor network technology introduced in 1997 by K. Pister, called the Smart Dust network, might be a solution for future wireless ad-hoc sensor applications. Smart Dust is a network which contains tiny sensor nodes called motes, which are able to use sensors to collect data, communicate with each other, and transfer the collected data. While Smart Dust components are commercially available, although at a high price (a WSN classroom kit starts at $6000), they are still not widely used as educational assets at universities because of their price and rapid technology changes. This paper presents the operational performance of different ad-hoc routing protocols that can be used in Smart Dust applications.
|A. Sabou, D. Gorgan (Technical University of Cluj-Napoca, Cluj-Napoca, Romania)
Physical Simulation of 3D Dynamical Surfaces on Graphic Clusters
Physics-based simulation represents a research field of great interest in computer graphics. Mass-spring systems provide a simple yet powerful solution for simulating soft deformable bodies such as cloth, but have a very high computational cost, especially with high-resolution models. Even with highly parallel architectures such as graphic clusters providing the computational power required for these simulations, efficient parallel techniques are needed to obtain the best real-time performance. In this paper we evaluate a graphic cluster based solution for physical cloth modelling, highlighting issues that arise with model distribution, communication among nodes and distributed rendering, as well as parallel techniques that can be integrated to obtain an efficient simulation method capable of handling complex physical models.
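The mass-spring dynamics underlying such cloth simulators can be sketched in a few lines (a minimal hypothetical example with explicit Euler integration and ad-hoc damping, not the authors' distributed implementation):

```python
import numpy as np

def step(pos, vel, springs, rest, k=50.0, mass=0.1, g=9.81, dt=0.005, damp=0.99):
    """One explicit-Euler step of a mass-spring system: Hooke springs plus
    gravity, with simple velocity damping for numerical stability."""
    force = np.zeros_like(pos)
    force[:, 1] -= mass * g                      # gravity on every mass point
    for (i, j), length0 in zip(springs, rest):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - length0) * d / length  # Hooke's law along the spring
        force[i] += f
        force[j] -= f
    vel = (vel + dt * force / mass) * damp
    vel[0] = 0.0                                 # pin the top point in place
    return pos + dt * vel, vel

# a 3-point chain hanging under gravity, pinned at the top
pos = np.array([[0.0, 0.0], [0.0, -1.0], [0.0, -2.0]])
vel = np.zeros_like(pos)
springs, rest = [(0, 1), (1, 2)], [1.0, 1.0]
for _ in range(2000):
    pos, vel = step(pos, vel, springs, rest)
```

A cloth model is the same system on a 2D grid of springs; the parallelization question the paper studies is how to partition those spring loops across cluster nodes while keeping boundary-point communication cheap.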
|T. Harasthy, J. Turán, Ľ. Ovseník (Technical University, Košice, Slovakia)
Road Line Detection based on Optical Correlator
Due to numerous terrorist attacks, video surveillance systems are nowadays a very important and interesting issue. Tracking of objects in video sequences, or in video streams, is a necessary part of video surveillance systems. Tracking and detection of objects in video sequences is also very important in the automobile industry, specifically in video-based driver assistance systems. Processing video in real time places high demands on classic methods of video stream processing. This is a major problem in many tracking systems, not just driver assistance systems, so solving it is very topical. This paper presents road line detection using an optical processor. Video is captured with a digital camera, and the preprocessed video sequence is analyzed by an optical correlator. Using the optical correlator, we can track a vehicle in real time by capturing road lines. The system presented here helps keep the car between the road lines and prevents accidents caused by driver inattention. Its biggest disadvantage is that it does not work if the road lines are of inadequate quality or are covered by snow or dirt.
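The operation an optical correlator performs has a direct digital analogue: cross-correlating the scene with a line template and locating the correlation peak. A simplified numerical sketch (not the optical system described in the paper):

```python
import numpy as np

def correlate_2d(scene, template):
    """Digital analogue of an optical correlator: circular cross-correlation
    via the FFT, returning the location of the correlation peak."""
    padded = np.zeros_like(scene, dtype=float)
    padded[:template.shape[0], :template.shape[1]] = template
    corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(padded))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

# a bright diagonal "road line" segment placed at row 10, column 20
scene = np.zeros((64, 64))
line = np.eye(8)
scene[10:18, 20:28] = line
peak = correlate_2d(scene, line)
```

The optical correlator computes the same product-of-spectra in the Fourier plane of a lens at the speed of light, which is what makes the real-time processing the abstract mentions feasible.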
Chairs: Roman Trobec and Uroš Stanič
|U. Stanič (Kosezi d.o.o., BRIS, Ljubljana, Slovenia), S. Dolinšek (Innovation Development Institute University of Ljubljana, Ljubljana, Slovenia), A. Škafar (Hospital for gynaecology and obstetrics, Kranj, Slovenia), I. Grmek-Košnik (Community Healthcare Centre Network Gorenjska, Kranj, Slovenia), J. Stanič (Iskratehno research and development institute, Podnart, Slovenia), G. Cerinšek (Innovation Development Institute University of Ljubljana, Ljubljana, Slovenia)
The Specific Approach for Establishment of Innovation Hospitals in Gorenjska Region
Converting hospitals in EU regions to an innovative type is a high priority for economic growth, employment and the wellbeing of EU citizens. The results presented in this paper were obtained in the Gorenjska region through the implementation of the InTraMed C2C Central Europe project. The systematic approach consisted of identifying the state of the art in innovation at each of the involved hospitals, a SWOT analysis, and seminars and workshops on the innovation process and culture. Among the large spectrum of ideas gathered at these events, the hospital managers agreed that first priority should be given to the most needed BME prototypes, which were also supported by the business interest of SMEs. Finally, through an open innovation approach, five ideas were selected for prototype development and later testing. Some of these selected BME ideas are presented in the paper.
|D. Zavec Pavlinic (Zavod BRIS, Ljubljana, Slovenia), A. Oder (Prevent Deloza d.o.o., Celje, Slovenia), V. Grm (Etra d.o.o., Celje, Slovenia), U. Stanič (Zavod BRIS, Ljubljana, Slovenia)
Multifunctional Protective Clothing System: Development vs. Functionality
Development of personal protective equipment (PPE) requires knowledge of the fire fighter's working environment and working activities. The primary function of PPE is the protection of the user from dangers in the environment. As such, it must be ensured first, while secondary functions can be added afterwards. Secondary functions are additional protection and the monitoring of vital life functions and of the surrounding environment. Besides the abovementioned functions, the PPE clothing system must be functional, i.e. easy to use and care for, and above all it should not limit the fire fighter's range of movement. The primary function is fulfilled by choosing optimal protection materials, because non-optimal materials with a non-optimal thermal balance can impair the fire fighter's working performance. Secondary functions are presented in the paper in the form of guidelines for the development of a fire fighter's PPE clothing system. The paper also addresses rescue problems in connection with the monitoring of vital functions and with a robust communication system. The presented engineering approach takes into account the requirements of fire fighters, who are exposed to extreme environmental working conditions and risk their lives daily.
|D. Zavec Pavlinić (Zavod BRIS, Ljubljana, Slovenia)
Functional Textiles Have an Important Role in Human Life
Textiles are elements that coexist with humans. Some are called functional because they assist humans in many ways, ensuring their comfort and protection. They are divided according to the environment in which humans live, move and/or perform their working activities. These textiles touch human skin irrespective of environment and work type; it is therefore logical that some functions are fulfilled by the textiles placed next to the skin surface. Some of these functions provide sweat management, while others ensure therapeutic, healing, antibacterial and antimicrobial effects and prevent infections. The latter is important in medicine, where patients' bodies are enveloped by different textiles with the aim to cool, heat and/or maintain body temperature, as well as for other specific care effects, for example preventing infections spread by textiles used in hospitals. The inner textile layer with sweat management ability is the most important for establishing the thermal balance in the multilayer system "human-clothing-environment", but it is usually ignored. It is nevertheless obvious that functional textiles that touch our skin affect human life: comfort, well-being and performance at work and in daily activities. In this paper we show how human life can depend on these humble textiles.
|J. Stanič (Iskra Techno research and development institute, Podnart, Slovenia), J. Jelenc (Iskra Medical d.o.o., Ljubljana, Slovenia), U. Stanič (Biomedical Research and Innovative Society, Ljubljana, Slovenia)
I-TEHMED Biomedical Technology Platform – Structure and Results
The Slovene Biomedical Technology Platform I-TEHMED was established in 2005 at the initiative of leading Slovene industrial companies, research organisations and universities in the field of biomedicine and biomedical engineering, as well as the university clinical centre and other secondary and tertiary hospitals. I-TEHMED is an open structure which uses a bottom-up approach to research and develop new generations of biomedical devices, medical procedures, innovative drugs, therapeutic methods, and medical and other services in the sphere of biomedicine. The platform provides a basis for the constant generation and exchange of knowledge and experience between its members. It also enables the creation of synergistic effects with other entities, both foreign and domestic. The platform is thus ideal for the implementation of research projects funded by national or EU grants.
In this paper we present the reasoning behind I-TEHMED, its mission and vision, take a look at its multidisciplinary members and, above all, focus on three case studies: research projects that were successfully implemented by its founding members. The results of these projects were the basis for new high-added-value products and processes.
|K. Bregar, V. Avbelj (Jožef Stefan Institute, Ljubljana, Slovenia)
Multi-Functional Wireless Body Sensor - Analysis of Autonomy
A wireless multi-function biosensor that measures the potential difference between two proximal electrodes on the skin enables monitoring of vital functions: heart activity and respiration. The sensor is designed as a small plaster-like reusable unit that can easily be fixed onto the body surface and is therefore minimally obtrusive for users. It is equipped with a signal acquisition unit, a processor for on-line data analysis and enough memory for temporary storage of measured data. An incorporated low-power radio system transmits the measured data to a radio receiver installed either in a dedicated personal terminal, a smartphone or a ward gateway. The sensor is powered by a small coin battery. Visualization, archiving and detailed interpretation of the data can be implemented on a remote computer server. The autonomy of the monitoring system, with regard to its power consumption, depends significantly on the portion of signal processing done locally on the sensor, the frequency of data transmission and the amount of transmitted data. Different test scenarios have been evaluated with respect to power consumption, reliability and robustness.
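The autonomy trade-off described above can be illustrated with a back-of-the-envelope estimate (all numbers below are hypothetical, not measurements from the sensor in the paper):

```python
def autonomy_hours(battery_mah, base_ma, radio_ma, duty_cycle):
    """Estimated autonomy: battery capacity divided by the average current,
    i.e. always-on acquisition/processing plus a duty-cycled radio."""
    avg_ma = base_ma + radio_ma * duty_cycle
    return battery_mah / avg_ma

# e.g. a 220 mAh coin cell, 0.5 mA baseline, 12 mA radio active 2% of the time
hours = autonomy_hours(220.0, 0.5, 12.0, 0.02)
```

Raising the radio duty cycle (frequent transmission of raw data) shortens the autonomy, while more local processing on the sensor reduces the transmitted volume, which is exactly the trade-off the abstract points out.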
|M. Kozlovszky, L. Bartalis, B. Jókai, J. Ferenczi, P. Bogdanov, Z. Meixner, L. Németh (OU NIK, Budapest, Hungary), K. Karóczkai (MTA SZTAKI, Budapest, Hungary)
Personal Health Monitoring with Android Based Mobile Devices
We have developed an Android-based mobile data acquisition (DAQ) solution which collects personalized health information of the end-user; stores, analyzes and visualizes it on the device; and optionally sends it to the datacenter for further processing. The mobile device is capable of collecting information from a large set of various wireless (Bluetooth and WiFi) and wired (USB) sensors. Embedded sensors of the mobile device provide additional useful status information (such as user location, magnetic field or noise level, acceleration, temperature, etc.). The user interface of our software solution is suitable for users of different skill levels, is highly configurable and provides diary functionality: it can store information about sleep problems, act as a diet log, or even serve as a pain diary. The software enables correlation analysis between the various sensor data sets. The developed system has been tested successfully within our Living Lab facility. Sensor data acquisition on the personal mobile device enables both end-users and caregivers to provide better and more effective health monitoring and facilitates prevention. The paper describes the internal architecture of the software solution and its main functionalities. We also provide results obtained from our Living Lab tests.
|A. Rashkovska, V. Avbelj (Jožef Stefan Institute, Ljubljana, Slovenia)
Signal Processing Methods for ST Variability Assessment in ECG
The beat-to-beat ST variability in the ECG signal is becoming an important indicator in neurocardiology. However, accurate determination of the ST variability is difficult because of uncertainty in determining the second point of the ST interval on the T wave. T waves change their form because of breathing, heart movements, changes of lead positions, etc. If the ST variability is small, as in most neurological patients, its reliable assessment is even more difficult. In this paper, we propose two methods for the assessment of the ST variability: the beat-to-beat variability of the RT interval and of the TTs interval. The first interval, RT, is defined as the interval between the peak of the R wave and the peak of the T wave. The second interval, TTs, is defined as the interval between the peak of the T wave, i.e. the T wave maximum amplitude, and the T wave maximum slope, i.e. the T wave point with the maximal negative slope. The paper elaborates the methods for the determination of these three decisive points on the ECG signal: the R peak, the T peak and the maximum negative slope on the T wave. The methods are analyzed by estimating their noise sensitivity.
|J. Pavlic (Institute Jožef Stefan, Ljubljana, Slovenia)
Coarse Grain Molecular Dynamics Study of Voids Present in the Membrane of a Lipid Vesicle
The advent of contemporary computers enables in silico studies of biological nanostructures. One of the important application fields in biomedicine is the simulation of proteins, which can help to understand the nature of their interactions with lipid nanostructures. Such studies can help to improve and accelerate new drug design. Lipid bilayers have often been used as the simplest model of the cell membrane for the investigation of basic phenomena, such as membrane-protein interactions, mechanical and structural properties, or conformational changes of membranes. The bilayers can form vesicles that model living cells. The vesicles are composed of lipids arranged in a closed, bubble-like shaped bilayer. In our study, we simulate and visualise the course of building up a lipid vesicle in an aqueous environment, using a coarse grain molecular dynamics (CG-MD) simulation carried out with the open source software GROMACS. Running the CG-MD simulation on multicore parallel computers, we confirmed that the effective size of lipid molecules in the inner and outer lipid layers plays an important role in forming proper lipid vesicles: ones that do not possess voids between the two lipid monolayers composing the lipid membrane.
|J. Jelenc, J. Jelenc (Iskra Medical d.o.o., 1000 Ljubljana, Slovenia), D. Miklavčič, A. Maček Lebar (University of Ljubljana, Faculty of Electrical Engineering, 1000 Ljubljana, Slovenia)
Low-Frequency Ultrasound in vitro: Experimental System and Ultrasound-Induced Changes of Cell Morphology
Ultrasound can temporarily or permanently increase the membrane's permeability for molecules that would otherwise be unable to enter the cell. Temporary changes are called sonoporation and can be used to introduce foreign material into the cell interior. A permanent increase of cell membrane permeability causes cell death, which can be used for water purification, waste water treatment, ballast water treatment in shipping, and food production processes, to name just a few.
To study these effects in an in vitro setting, we have built a custom low-frequency ultrasound experimental system based on an ultrasound transducer submerged in a water bath. Ultrasound pressure is one of the most important parameters in such a system. Using a hydrophone, we have evaluated the ultrasound pressure in the water bath with and without an ultrasound-absorbing material lining the bath walls. To gain knowledge of the spatial and temporal ultrasound distribution inaccessible to conventional hydrophone measurements, we built a finite-element model of the system. We now have a low-frequency ultrasound experimental system with known and controllable ultrasound pressure.
|I. Tomašić (Jožef Stefan Institute, Ljubljana, Slovenia), A. Rashkovska, M. Depolli (Jozef Stefan Institute, Ljubljana, Slovenia)
Using Hadoop MapReduce in a Multicluster Environment
Hadoop MapReduce has become one of the most popular tools for processing and generating large datasets, mostly because it allows users to build complex distributed programs using a very simple model. For storing and retrieving the data, Hadoop MapReduce relies primarily on the Hadoop Distributed File System (HDFS), which is normally installed on a cluster of computers. When the cluster becomes undersized, it can be scaled by adding new computers and storage devices, but it can also be extended by resources on another computer cluster. In this paper we present a utilization of the MapReduce paradigm on a multicluster Hadoop installation extended across two clusters connected over the Internet. The specific networking and MapReduce configuration parameters needed for a multicluster installation are presented. We have benchmarked single- and dual-cluster installations with the same networking and configuration parameters. The benchmark results are presented and compared for the purpose of evaluating the efficiency of multicluster MapReduce utilization.
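The "very simple model" the abstract refers to can be illustrated with a Hadoop-Streaming-style word count, driven here by a tiny local stand-in for the shuffle phase (a sketch only; a real job would run over HDFS across the clusters):

```python
from collections import defaultdict

def mapper(line):
    """Map phase: emit a (word, 1) pair for every word in an input line."""
    for word in line.split():
        yield word.lower(), 1

def reducer(word, counts):
    """Reduce phase: sum all the counts emitted for a single word."""
    return word, sum(counts)

def run_job(lines):
    """Local stand-in for the Hadoop runtime: map, shuffle by key, reduce."""
    shuffled = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            shuffled[key].append(value)
    return dict(reducer(k, v) for k, v in sorted(shuffled.items()))

counts = run_job(["big data big clusters", "data over the internet"])
```

In the multicluster setting the paper studies, it is precisely the shuffle step, moving intermediate (key, value) pairs between nodes, that crosses the Internet link and dominates the efficiency question.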
|I. Sović, K. Skala (Institut Ruđer Bošković, Zagreb, Croatia), M. Šikić (University of Zagreb Faculty of Electrical Engineering and Computing/Department of Electronic System, Zagreb, Croatia)
Approaches to DNA de novo Assembly
DNA is the basic building block of all known life, accounting for all the diversity in nature. Determining the DNA of an individual organism is performed through a process called DNA sequencing. Although several different sequencing technologies exist, they are limited to acquiring relatively short sequence reads. One approach to sequencing involves randomly breaking a long DNA molecule into small fragments and sequencing only those fragments. Due to the random positioning of the fragments on the source DNA, the majority of them overlap, providing the necessary information to combine them back together. The process of reconstructing the original DNA sequence from fragment reads is called DNA assembly. Assembly is a very computationally intensive process that may take days, or even weeks, to produce the sequence of a more complex organism. Reconstructing a DNA sequence in the absence of a previously reconstructed reference sequence from a similar organism is called de novo assembly. De novo assembly methods currently provide the only means to discover new, previously unknown sequences, and are indispensable in biological research.
In this paper, short descriptions of the sequencing process and the current sequencing platforms are given. The DNA assembly process is thoroughly described, and an analysis of several de novo approaches used for assembly is presented. An overview and description of existing software tools is given, including some parallel implementations. In conclusion, aspects of the possible future development of DNA assembly are considered.
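The overlap idea at the heart of de novo assembly can be sketched with a greedy suffix-prefix merger (a toy illustration of the overlap-based family of approaches, far from a production assembler):

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of read `a` equal to a prefix of read `b`."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a[-k:] == b[:k]:
            best = k
    return best

def greedy_assemble(reads):
    """Greedy sketch of overlap-based assembly: repeatedly merge the pair of
    reads with the largest suffix-prefix overlap until none remains."""
    reads = list(reads)
    while len(reads) > 1:
        best_k, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b)
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        if best_k == 0:
            break                      # no overlaps left; keep the fragments
        merged = reads[best_i] + reads[best_j][best_k:]
        reads = [r for n, r in enumerate(reads) if n not in (best_i, best_j)]
        reads.append(merged)
    return reads

contigs = greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"])
```

Even this toy shows why assembly is computationally heavy: the all-pairs overlap search is quadratic in the number of reads, and real datasets contain millions of reads with sequencing errors and repeats, which is what motivates the parallel implementations surveyed in the paper.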
|R. Trobec (Institute Jožef Stefan, Ljubljana, Slovenia), I. Belehar (-, Ljubljana, Slovenia), J. Polajnar (Agencija RS za okolje, Ljubljana, Slovenia), M. Veselko (University Medical Centre , Ljubljana, Slovenia)
Ski Injury Triggers of Tibial Plateau Compression Fracture
Fractures of the tibial plateau occur in 3.4% of skiing-related fractures, and this number has significantly increased over time. The fracture often occurs during an accident with no serious fall or collision, despite experimental results which suggest that significant forces (>30 kN) are needed for a compression fracture of the tibial plateau. A sudden outward turn of the inner ski initiates tibial rotation, leg extension and its backward swing. The accident ends with a sudden stop, supported by the surrounding knee tissues, and results in an extensive compression force on the tibial plateau. We propose a new injury mechanism, modelled by a double pendulum, that could potentially reproduce the sufficient forces. We show that at typical skiing velocities of 40 km/h the compression forces can reach 70 kN. The proposed mechanism could also explain more frequent sport injuries that result in ligament damage and less serious bone injuries, including those connected with other sport activities.
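The order of magnitude of the reported forces can be checked with a simple energy argument (hypothetical effective mass and stopping distance, not the paper's double-pendulum model):

```python
def impact_force(mass_kg, v_kmh, stop_dist_m):
    """Average force needed to stop a moving mass over a short distance,
    from equating kinetic energy and work: F = m * v^2 / (2 * d)."""
    v = v_kmh / 3.6                       # km/h -> m/s
    return mass_kg * v ** 2 / (2 * stop_dist_m)

# e.g. an effective mass of 15 kg stopped over 1.5 cm at 40 km/h
force_n = impact_force(15.0, 40.0, 0.015)
```

With these assumed numbers the estimate lands around 62 kN, the same order as the 70 kN the paper derives and well above the >30 kN fracture threshold, showing how a sudden stop at ordinary skiing speed can be sufficient without any fall or collision.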
|P. Pečlin, J. Rozman (ITIS d. o. o. Ljubljana, Ljubljana, Slovenia)
A Model of Selective Stimulation and ENG Recording in the Human Left Vagus Nerve
In this study we have developed a model of using a thirty-nine-electrode spiral nerve cuff for selective stimulation of fibres in the human left vagus nerve to control the heart rate, and tachycardia in particular. Furthermore, based on our recent experimental work in humans, we predicted a precisely defined stimulus shape and parameters. In a forthcoming study we plan to conduct experiments to determine the effect of cervical selective vagus nerve stimulation (VNS) on atrial fibrillation in humans using the developed model and the designed implantable thirty-nine-electrode spiral cuff. Specifically, we intend to examine the effects on atrial fibrillation when the superficial compartments of the left vagus nerve, including the cardiac branches containing nerve fibres A, B and C, are selectively stimulated with precisely defined stimulation parameters.
Karolj Skala (Croatia), Roman Trobec (Slovenia), Uroš Stanič (Slovenia)
Piotr Bala (Poland), Leo Budin (Croatia), Yike Guo (United Kingdom), Gordan Gulan (Croatia), Ladislav Hluchy (Slovakia), Peter Kacsuk (Hungary), Aneta Karaivanova (Bulgaria), Charles Loomis (France), Ludek Matyska (Czech Republic), Laszlo Szirmay-Kalos (Hungary), Tibor Vámos (Hungary)
Chairman of the International Program Committee:
Petar Biljanović (Croatia)
International Program Committee:
Alberto Abello Gamazo (Spain), Slavko Amon (Slovenia), Vesna Anđelić (Croatia), Michael E. Auer (Austria), Mirta Baranović (Croatia), Ladjel Bellatreche (France), Nikola Bogunović (Croatia), Andrea Budin (Croatia), Željko Butković (Croatia), Željka Car (Croatia), Matjaž Colnarič (Slovenia), Alfredo Cuzzocrea (Italy), Marina Čičin-Šain (Croatia), Dragan Čišić (Croatia), Marko Delimar (Croatia), Todd Eavis (Canada), Maurizio Ferrari (Italy), Bekim Fetaji (Macedonia), Tihana Galinac Grbac (Croatia), Liljana Gavrilovska (Macedonia), Matteo Golfarelli (Italy), Stjepan Golubić (Croatia), Francesco Gregoretti (Italy), Stjepan Groš (Croatia), Niko Guid (Slovenia), Yike Guo (United Kingdom), Jaak Henno (Estonia), Ladislav Hluchy (Slovakia), Vlasta Hudek (Croatia), Željko Hutinski (Croatia), Mile Ivanda (Croatia), Hannu Jaakkola (Finland), Robert Jones (Switzerland), Peter Kacsuk (Hungary), Aneta Karaivanova (Bulgaria), Bernhard Katzy (Germany), Christian Kittl (Austria), Dragan Knežević (Croatia), Mladen Mauher (Croatia), Branko Mikac (Croatia), Veljko Milutinović (Serbia), Alexandru-Ioan Mincu (Slovenia), Vladimir Mrvoš (Croatia), Jadranko F. Novak (Croatia), Jesus Pardillo (Spain), Nikola Pavešić (Slovenia), Ivan Petrović (Croatia), Joško Radej (Croatia), Goran Radić (Croatia), Slobodan Ribarić (Croatia), Karolj Skala (Croatia), Ivanka Sluganović (Croatia), Vanja Smokvina (Croatia), Vlado Sruk (Croatia), Ninoslav Stojadinović (Serbia), Jadranka Šunde (Australia), Aleksandar Szabo (Croatia), Laszlo Szirmay-Kalos (Hungary), Dina Šimunić (Croatia), Goran Škvarč (Croatia), Antonio Teixeira (Portugal), Edvard Tijan (Croatia), A Min Tjoa (Austria), Roman Trobec (Slovenia), Ivana Turčić Prstačić (Croatia), Walter Ukovich (Italy), Ivan Uroda (Croatia), Tibor Vámos (Hungary), Mladen Varga (Croatia), Boris Vrdoljak (Croatia), Robert Wrembel (Poland), Baldomir Zajc (Slovenia)
REGISTRATION / FEES
PRICE IN EUR
|MIPRO and IEEE members
|Students (graduate) and primary and secondary school teachers
Institut Ruđer Bošković
10000 Zagreb, Croatia
GSM: +385 99 3833 888
Fax: +385 1 4680 212
Opatija, often called the "pearl of the Adriatic", is one of the most popular tourist destinations in Croatia, with the longest tourist tradition on the north-eastern Adriatic coast. Its offer includes some twenty hotels, a large number of restaurants, and numerous sports and recreational facilities. More detailed information can be found at www.opatija.hr and www.opatija-tourism.hr.