
Event program
Thursday, 5/24/2012 9:00 AM - 1:00 PM,
Camelia 2, Grand hotel Adriatic, Opatija
Distributed Computing
Chair: Karolj Skala
 
Invited Paper

  

9:00 AM - 9:15 AM  O. Mencer (Maxeler Technologies, London, UK)
Delivering Next Generation Scientific Computing with Dataflow Supercomputing

  

Papers

  

9:15 AM - 9:30 AM  Z. Krpić, G. Martinović (Faculty of Electrical Engineering, J.J. Strossmayer Univ. of Osijek, Osijek, Croatia), I. Crnković (Mälardalen University, Västerås, Sweden)
Green HPC: MPI vs OpenMP on a Shared Memory System 
The power consumption of high performance computing (HPC) systems has lately become an issue. Many programming techniques still aim at performance gains, but only a few consider the energy footprint of the increased computing power. MPI and OpenMP are considered the core scientific HPC programming libraries for distributed-memory and shared-memory computer systems, respectively. Each of them brings performance on a parallel system, but they differ in their performance/W ratio. The key is to find the best use for each of them on a shared-memory computer system.
9:30 AM - 9:45 AM  S. Ristov, M. Gusev (Ss. Cyril and Methodius University, Faculty of Information Sciences and Computer Engineering, Skopje, Macedonia)
Matrix Multiplication Performance in Virtualized Shared Memory Multiprocessor 
In this paper we analyze the performance discrepancy of a multiprocessor in a virtualized environment. Previous papers analyzing this phenomenon conclude that it is caused by the L2 cache; our conclusion, based on observation and experimental research, is that in this case the main reason is the L3 cache. Our hypothesis, to be confirmed by experimental research, is that the virtual environment will have longer execution time and lower speed, as well as smaller speedup, compared to a traditional server. However, analyzing the speedup graphs, one can conclude that the speedup in the virtual environment is greater than in the traditional environment. There is a region of problem sizes where the virtual environment provides better speed than the traditional environment. The experiments showed that the achieved performance in the virtual environment is even better than in the traditional environment, but only in the L1, L2 and L3 regions. In the L4 region there is a huge performance drawback in the virtual environment, down to an average of 66.58% of the traditional environment.
9:45 AM - 10:00 AM  B. Chitsaz, M. Razzazi (Amirkabir University of Technology, Tehran, Iran)
Preventing State Divergence in Duplex Systems Using Causal Memory 
Replicated execution of distributed programs provides a means of masking hardware or software failures in a distributed system. Application-level entities (processes, objects) are replicated to execute on distinct processors. Such replica entities communicate via message passing. Non-determinism within the replicas can cause messages to be processed in non-identical order, producing a divergence of state. The replicas may thereafter produce inconsistent responses to identical messages and hence appear faulty. The partial-order model of distributed computations based on the happened-before relation, as used in the primary-backup approach, has been criticized for allowing false causality between messages; this false causality causes unnecessary blocking of processes and results in high time overhead for replicated entities. In this paper we use the concepts of causal memory and multi-version states to reduce the false causality between messages. We capture the read/write operations on the variables of each process to find the dependencies between messages, and save old values of variables for use in cases where read operations might cause divergence of the replicas' states. Simulation results show that this approach has lower execution time than the primary-backup approach.
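The happened-before ordering the abstract builds on can be illustrated with a small vector-clock sketch (illustrative Python, not the authors' implementation; class and method names are invented): a message is delivered only when every message that causally precedes it has already been delivered, which is exactly the ordering whose over-approximation ("false causality") the paper aims to reduce.

```python
# Illustrative causal message delivery with vector clocks (invented names,
# not from the paper): a message is buffered until every causally earlier
# message has been delivered.

class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.vc = [0] * n          # vector clock: events seen per process
        self.buffer = []           # messages awaiting causal delivery

    def send(self):
        self.vc[self.pid] += 1
        return (self.pid, list(self.vc))

    def deliverable(self, sender, msg_vc):
        # Deliverable iff this is the next message from sender and we have
        # already seen everything the sender had seen when it sent it.
        if msg_vc[sender] != self.vc[sender] + 1:
            return False
        return all(msg_vc[k] <= self.vc[k]
                   for k in range(len(self.vc)) if k != sender)

    def receive(self, sender, msg_vc):
        self.buffer.append((sender, msg_vc))
        delivered, progress = [], True
        while progress:
            progress = False
            for m in list(self.buffer):
                s, v = m
                if self.deliverable(s, v):
                    self.vc[s] = v[s]
                    self.buffer.remove(m)
                    delivered.append(m)
                    progress = True
        return delivered
```

If a later message arrives first, it stays buffered until the earlier one shows up, at which point both are delivered in causal order.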
10:15 AM - 10:30 AM  I. Vukasinovic (School of Electrical Engineering, University of Belgrade, Belgrade, Serbia), G. Rakocevic (Mathematical Institute, Serbian Academy of Sciences and Arts, Belgrade, Serbia)
An Improved Approach to Track Forest Fires and to Predict the Spread Direction with WSNs Using Mobile Agents 
Fires in large areas of land or forest can be tracked by Wireless Sensor Networks (WSNs) embedded in those areas. Mobile Agents (MAs) injected into the WSNs can assist with fire detection and can maintain information about the perimeter surrounding the fire. This paper proposes an improvement over the state of the art in predicting the direction in which the fire might spread. The proposed solution uses land humidity sensors and MAs that are sent to the edge of the previously created perimeter to gather humidity data and to make a prediction based on the gathered data. The prediction is sent to the appropriate fire department unit, enabling it to react quickly.
10:30 AM - 10:45 AM  J. Kovacs (MTA SZTAKI, Budapest, Hungary), F. Araujo, S. Boychenko (CISUC, Dept. of Informatics Engineering, University of Coimbra, Coimbra, Portugal), M. Keller (Univ. of Paderborn, Paderborn, Germany), A. Brinkmann (JGU Mainz, Mainz, Germany)
Monitoring UNICORE Jobs Executed on Desktop Grid Resources 
In the EDGI EU FP7 project, a federation of Desktop Grid sites is built and maintained by the infrastructure team. These sites provide volunteer and institutional resources numbering in the hundreds of thousands. The EDGI project offers these resources to the various gLite, ARC and UNICORE user communities for their research and supports job execution with transparent bridging for gLite, ARC and UNICORE middleware. The paper focuses on the UNICORE side, especially on the monitoring aspect. A bridging mechanism has been implemented in a modified UNICORE computing element to forward jobs towards the Desktop Grid servers. This component is part of the EDGI infrastructure, where all bridge-related components and resources are continuously monitored by a central monitoring system. Based on the monitoring system, users can obtain information about the traffic among the sites in the infrastructure. The paper gives a short overview of the EDGI infrastructure, especially the monitoring system, and introduces the technical details of how monitoring is integrated with and supported by the modified UNICORE computing element. The introduced solution is also part of the EMI software stack, making it easily accessible to UNICORE infrastructure providers.
10:45 AM - 11:00 AM  C. Soettrup (Univ. of Copenhagen, Copenhagen, Denmark), J. Kovacs (MTA SZTAKI, Budapest, Hungary), A. Waananen (Univ. of Copenhagen, Copenhagen, Denmark)
Transparent Execution of ARC Jobs on Desktop Grid Resources 
In the EDGI EU FP7 project, a federation of Desktop Grid sites is built and maintained by the infrastructure team. These sites provide volunteer and institutional resources numbering in the hundreds of thousands. The EDGI project offers these resources to the various gLite, ARC and UNICORE user communities and supports job execution with transparent bridging for gLite, ARC and UNICORE middleware. The paper focuses on the ARC side. A special bridging mechanism has been implemented to forward ARC jobs towards the Desktop Grid servers. The ARC bridge works in such a way that the A-REX execution service sees the Desktop Grid as a virtual batch system, and users are able to submit to the Desktop resources seamlessly. The solution is scalable and handles large numbers of files through a p2p distributed file system called Attic. The paper gives a short overview of the EDGI infrastructure and introduces the technical details of the bridging mechanism. The introduced solution is also part of the EMI software stack, making it easily accessible to ARC infrastructure providers.
11:00 AM - 11:15 AM  D. Rojković (Comping, Zagreb, Croatia), T. Crnić (SRCE, Zagreb, Croatia), I. Čavrak (FER, Zagreb, Croatia)
Agent-Based Topology Control for Wireless Sensor Network Applications 
The complexity of design, development and deployment of software for wireless sensor networks is high, and is further exacerbated by the ad-hoc nature of some network deployments and the need for high adaptability to varying internal and environmental factors. In this paper the authors present a model of a distributed application based on a set of application-specific roles/functional modules. Deployment and runtime adaptation of the application topology is achieved by controlling role activity on each of the network nodes, thus forming a functional application instance for a specific sensor network. Application control decisions are delegated to agents located on sensor nodes and are based on a set of role-specific utility functions. An existing wireless sensor network system for vehicle detection and classification is refactored according to the proposed model, modeled in a sensor network simulator, and the simulation results are presented.
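The utility-driven role activation described in the abstract can be sketched as follows (a hypothetical example: the role names, node-state fields and utility functions are invented for illustration and are not taken from the paper). Each node's agent scores every role against the node's current state and activates the highest-scoring one.

```python
# Hypothetical sketch of utility-based role selection on a sensor node:
# the agent evaluates each role's utility function on the node state and
# activates the role with the highest score.

def choose_role(node_state, utility_functions):
    """Pick the role with the highest utility for this node."""
    scores = {role: u(node_state) for role, u in utility_functions.items()}
    return max(scores, key=scores.get)

# Example role utilities for a vehicle-detection network (made up):
utilities = {
    "sensing":     lambda s: s["battery"] * (1.0 if s["has_sensor"] else 0.0),
    "aggregation": lambda s: 0.5 * s["battery"] + 0.5 * s["neighbors"] / 10,
    "routing":     lambda s: s["neighbors"] / 10,
}
```

A well-connected node with a dead sensor would thus drift toward a routing role, while a fresh node with a working sensor keeps sensing, which is how the per-node decisions compose into an application topology.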
11:15 AM - 11:30 AM  Break 
11:30 AM - 11:45 AM  T. Petrović, M. Žagar (Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia)
Security in Distributed Wireless Sensor Networks 
Distributed computer systems of microcontrollers with environmental sensors, known as wireless sensor networks, allow real-time monitoring of complex events. Over the last ten years this technology has slowly been applied in many areas, such as agriculture, environmental monitoring and urban planning. However, the technology has yet to see strong commercial application because of a lack of universal standards, short lifetime cycles and security concerns. This paper describes the basic characteristics of a wireless sensor network, followed by a list of different types of security problems that wireless sensor networks face. For each problem, a recommended method for improving security is given, along with the downsides of such a method. Finally, a new method is proposed to improve security and intruder detection for certain types of networks.
11:45 AM - 12:00 PM  M. Mišić, Đ. Đurđević, M. Tomašević (School of EE, University of Belgrade, Belgrade, Serbia)
Evolution and Trends in GPU Computing 
Introduction to GPU Computing and CUDA (the tutorial presentation may span from 3 hours to 3 days). The first part of the tutorial gives a brief introduction to general-purpose computation on Graphics Processing Units (GPUs). It concentrates on issues in traditional high performance computing and explains the differences between task-parallel and data-parallel processors. GPUs are specialized for compute-intensive, highly parallel applications such as computational physics, computational chemistry, life sciences and signal processing, as well as finance, oil and gas exploration, etc. The second part of the tutorial presents an overview of the Compute Unified Device Architecture (CUDA) that enables GPUs to execute programs at much higher execution speeds. It explains the CUDA architecture and programming model, as well as the CUDA application programming interface, and describes in detail the threading model, memory architecture, and API features. The third part discusses the CUDA hardware and execution model. CUDA programming is relatively easy to learn, but achieving the highest possible execution performance requires an understanding of the hardware architecture and execution model; this part concentrates on issues related to program execution and performance maximization. The fourth part of the tutorial presents a simple case study to demonstrate the benefits of GPU computing and data-parallel processors. The CUDA programming paradigm is presented through two common problems: matrix multiplication and vector reduction. The fifth part briefly discusses future trends of GPU computing, covering topics such as OpenCL, the Fermi architecture and hybrid CPU/GPU solutions, and reveals some of the experiences and lessons learned from an internship at NVIDIA by one of the co-authors of this tutorial. 
About the tutorial co-authors: Marko Misic is with the University of Belgrade. He is a research/teaching assistant and PhD student with interests in high performance computing, parallel programming, and computer architecture. His major research contributions are in the field of GPU computing and parallel algorithms. He spent three months with the NVIDIA CUDA team as a system software intern during 2009. Professor Milo Tomasevic is with the University of Belgrade. During the 1990s he contributed to cache coherence algorithms and techniques, and to a number of R&D projects for NCR, Encore, DEC, and HP. He was the project leader for the Encore RMS (reflective memory system) board, which was advertised by the research sponsor as the fastest I/O board on Earth. His research now focuses on multiprocessors, cache coherence, and interconnection networks for supercomputers. Outline/Table of contents: 1. Introduction to GPU computing; 2. GPU vs CPU; 3. CUDA overview; 4. CUDA architecture and programming model; 5. CUDA memory model; 6. CUDA hardware and execution model; 7. CUDA parallel algorithms (case studies: matrix multiplication and vector reduction); 8. GPU computing trends. Note: This tutorial is used as an integral part of the Multiprocessor Systems course at the School of Electrical Engineering, University of Belgrade, Serbia. Previous updates of this and related topics were presented as in-house tutorials for major industry in the USA, EU, and Japan (Intel, Philips, Oki, etc.), and in a number of invited talks at major universities (Stanford, Purdue, UPC, Pisa, Seoul, Singapore, etc.). Attachment: selected 11 pages of the 166-page tutorial.
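The vector-reduction case study mentioned in the tutorial outline follows a tree-shaped pattern. A plain-Python sketch of the same access pattern a CUDA reduction kernel uses (each index `i` here would be a separate thread on the GPU; this is an illustration, not the tutorial's code) might look like:

```python
# Plain-Python sketch of the tree-based reduction pattern used in CUDA
# vector-reduction kernels: at each step, element i accumulates element
# i + stride, halving the active range until a[0] holds the total.

def tree_reduce(data):
    a = list(data)
    n = len(a)
    stride = 1
    while stride < n:
        for i in range(0, n - stride, 2 * stride):
            a[i] += a[i + stride]   # on a GPU, each i runs in parallel
        stride *= 2
    return a[0]
```

The loop over `i` is the part a GPU executes concurrently; the log2(n) sequential steps over `stride` are what remain inherently serial.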
12:00 PM - 12:15 PM  Á. Visegrádi, S. Ács, J. Kovács (MTA SZTAKI, Budapest, Hungary), G. Terstyánszky (University of Westminster, London, United Kingdom)
Application Repository Based Evaluation of the EDGI Infrastructure 
The infrastructure set up by the EDGI EU FP7 project contains Desktop Grid (DG) sites (BOINC or XtremWeb) that execute jobs coming from gLite, ARC or UNICORE service grids. The infrastructure contains an Application Repository (AR) as a central service storing all relevant information about applications. This AR is also the key to the gateways of the Desktop Grid sites, since enabling the execution of a given application on a DG site is performed through the AR. The entire infrastructure has a monitoring system developed to collect statistical information about job execution. However, validation of the information in the AR and testing of job execution against the DG sites were still missing. In order to evaluate the operation of the EDGI infrastructure, a new service has been designed and prototyped. This service collects all relevant information and submits jobs to Desktop Grids to gather data about their current state. This data can then be used by monitoring agents. A reporting webpage for the administrators has been implemented. We will also show how reporting can be integrated with the Nagios system.
12:15 PM - 12:30 PM  I. Tomašić, A. Rashkovska, J. Ugovšek (Jožef Stefan Institute, Ljubljana, Slovenia)
Multicluster Hadoop Distributed File System 
One of the important subprojects of the Apache Hadoop project is the Hadoop Distributed File System (HDFS). HDFS is primarily a distributed storage used by Hadoop applications and is normally installed on a cluster of computers. When the cluster becomes undersized, one commonly used option is to scale the cluster by adding new computers and storage devices. Another option, not exploited so far, is to resort to the resources of another computer cluster. In this paper we present a multicluster HDFS installation extended across two clusters with different operating systems, connected over the Internet. The specific networking parameters and HDFS configuration parameters needed for a multicluster installation are presented. We have benchmarked single- and dual-cluster installations with the same networking and configuration parameters. The benchmark results are presented and compared for the purpose of evaluating the efficiency of multicluster HDFS.
12:30 PM - 12:45 PM  I. Nuhić, M. Šterk (XLAB d.o.o., Ljubljana, Slovenia), T. Cortes (Barcelona Supercomputing Centre, Barcelona, Spain)
AbacusFS - Integrated Storage and Computational Platform 
Today's applications, especially those in the scientific community, deal with an ever growing amount of data. Among the problems that arise from this explosion of data are how to organize the data so that the information about how it was produced (e.g. which version of which application produced the file, what the input parameters were) is not lost, how to ensure repeatability of calculations, how to automate calculations, and how to save computational and storage resources. We propose a solution which changes the paradigm in which we view storage, or more specifically, file systems. Applications currently use storage without any interaction: the storage is a passive component that neither imposes any rules on data organization nor helps with associating a calculation with its result. The basic idea of the solution presented in this paper is to make file systems application-aware in order to address the problems enumerated above. We thus introduce an integrated storage and computational platform which interacts with the file system and in a way connects the calculations and the storage. Whenever a user runs a calculation, the full command line and other information is stored as metadata associated with the calculation's output file(s). Then, each time one of the output files is opened for reading, it is first checked whether the file is up to date, i.e. whether the input files have changed. If they have changed, an appropriate action can be taken, such as automatically re-calculating the file or warning the user that the file is out of date. All existing command-line applications that read their input files and create new output files can be used without any modifications. 
The fact that input files can also be results of previous calculations does not pose any problem, so workflows are supported natively. Running the calculation when the command is issued and then re-calculating whenever results are needed but not up to date is not always optimal. For example, if the calculation is time-consuming but quick access to results is critical, it may be better to speculatively re-calculate as soon as an input file changes. On the other hand, we plan to later extend our platform to grid and cloud systems, where fetching data from the storage to the computing nodes may be more time- and resource-consuming than recomputing it when idle CPUs are available. We have thus integrated a rule-based decision-making process that can take into account parameters such as file size, number of read accesses, number of re-calculations of the file, and a user-assigned file importance. The process is based on fuzzy logic and, while currently consisting only of obvious rules, can easily be improved by adding new rules. Further parameters can be added as well, such as predicted calculation runtime or available storage space. We have implemented the computational platform as a custom file system built with FUSE (Filesystem in Userspace) [1]. So far we have implemented the first-time run of the calculation, automatic re-calculation of files, and the decision-making process (which still needs some refinement). The solution is currently implemented for single-machine usage. It uses an ordinary file system as backing storage and adds special operations such as potential re-calculation of a file on access. The extended metadata are stored in a NoSQL database. Our FUSE module is written in Python. Preliminary tests show that the overhead of the platform is negligible for any non-trivial calculation. The scalability to a large number of files and calculations is currently being assessed. 
In the future we plan to extend the platform to clusters, clouds, and other distributed systems, where the performance benefits and the importance of automation and interaction between the application and the storage will be even more pronounced [2]. Such a parallel computational platform will have to not only decide among the above file-handling scenarios but also appropriately schedule the computational tasks, manage resources, and distribute files over the distributed system. [1] http://fuse.sourceforge.net/ [2] L. Stein, D. Holland, M. Seltzer and Z. Zhang, "Can a file system virtualize processors?", Association for Computing Machinery, Inc., March 2007.
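The up-to-date check at the heart of the platform, comparing the recorded state of the input files against their current state whenever an output is opened, can be sketched roughly as follows (function and variable names are invented for illustration; AbacusFS itself stores richer metadata in a NoSQL database and runs inside a FUSE module):

```python
# Rough sketch (invented names, not AbacusFS code) of the core idea:
# record the input files' modification times as metadata when a
# calculation runs, then on read check whether any input changed since.

import os
import time

def run_calculation(cmd_fn, inputs, output, metadata):
    """Run a calculation and record provenance metadata for its output."""
    cmd_fn()
    metadata[output] = {
        "inputs": {p: os.path.getmtime(p) for p in inputs},
        "recorded_at": time.time(),
    }

def is_up_to_date(output, metadata):
    """True if no recorded input file changed since the output was made."""
    meta = metadata.get(output)
    if meta is None:
        return False
    return all(os.path.getmtime(p) == mtime
               for p, mtime in meta["inputs"].items())
```

When `is_up_to_date` returns False, the platform's rule-based process decides whether to re-run the recorded command or merely warn the reader, which is the design choice the abstract discusses.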
12:45 PM - 1:00 PM  A. Martinić (Končar - Inženjering za energetiku i transport d.d., Zagreb, Croatia), K. Fertalj, D. Kalpić (Fakultet elektrotehnike i računarstva, Zagreb, Croatia)
Tool Orchestration Framework for Virtual Team Environments 
Virtual teams are increasingly used to run projects in globalized business environments. Virtual teams primarily rely on information and communication technologies to support many of the communicative and collaborative activities that traditional collocated teams take for granted. To be efficient, virtual teams need more than a plain set of tools dealing with different aspects of their operations: a comprehensive, integrated project management and project execution environment is required to support project activities. By integrating tools and methodologies, such an environment must enable efficient execution of projects involving virtual teams. One of the fundamental components of an efficient virtual team environment is a tool orchestration framework that enables seamless integration and value-added utilization of tools. This paper proposes such a tool orchestration framework based on the concepts of a common information model, service-oriented architecture and project processes. The proposed framework enables tool integration and orchestration in the heterogeneous environments inherent to virtual teams. Utilization of the tool orchestration framework in conjunction with adequate project management techniques and methodologies improves virtual team project awareness while reducing the need for frequent interaction among team members, consequently resulting in increased virtual team performance. This paper describes the basic concepts, architecture and utilization of the proposed tool orchestration framework for virtual team environments.
1:00 PM - 1:15 PM  A. Poghosyan, L. Arsenyan (International Scientific Educational Center, National Academy of Sciences of the Republic of Armenia, Yerevan, Armenia), H. Astsatryan (Institute for Informatics and Automation Problems, National Academy of Sciences of the Republic of Armenia, Yerevan, Armenia)
Comparative NAMD Benchmarking of Complex System on Bulgarian BlueGene/P 
A series of benchmarks of a 210K-atom complex system has been carried out on the Bulgarian BlueGene/P (IBM Blue Gene/P: PowerPC 450 processors, a total of 8192 cores) and on computational resources of the Armenian National Grid Initiative (ArmGrid). The experiments show a linear increase of calculation speed with the number of processors. It should be noted that no saturation is observed; we finally achieved 0.121 day/ns using 4096 processors of the BlueGene/P, while the data from the ArmGrid infrastructures show a breakdown in scaling, probably due to the interconnection type, i.e. the program spends more time on communication between processors. It is also planned to compare the existing NAMD data with GROMACS package findings.
Thursday, 5/24/2012 3:00 PM - 8:30 PM,
Camelia 2, Grand hotel Adriatic, Opatija
Distributed Programming and Cloud Service
Chair: Miklos Kozlovszky
 
3:00 PM - 3:15 PM  S. Ye, P. Chen, P. Brezany, I. Janciak (Research Group Scientific Computing, Vienna, Austria)
Accessing and Steering the Elastic OLAP Cloud 
A typical cloud platform provides scalability, elasticity and fault tolerance. Moreover, it is designed to deal with high volumes of data on a nearly unlimited number of machines. On-Line Analytical Processing (OLAP), a kernel part of modern decision support systems, allows interactive analysis of multidimensional data of varied granularity. The combination of Cloud Computing and OLAP technologies brings challenges in providing OLAP analysis services in distributed environments. This paper presents an overview of our ongoing research on the Elastic OLAP Cloud Platform, which was described in detail in [1]. The paper presents our design and implementation of a multi-tier client system which, on the one hand, interacts with the Elastic OLAP Cloud platform and provides Web-based Graphical User Interfaces (GUIs) for administration of virtual OLAP cubes and for performing OLAP analysis queries, and, on the other hand, interacts with an Open Grid Services Architecture - Data Access and Integration (OGSA-DAI) server to access heterogeneous data sources and load integrated data into the Elastic OLAP Cloud platform. The communication between the client and the platform is based on our home-grown OLAP Modeling Markup Language (OMML), which is also presented in the paper. [1] P. Brezany, Y. Zhang, I. Janciak, P. Chen, and S. Ye. An Elastic OLAP Cloud Platform. In Proceedings of the International Conference on Cloud and Green Computing, CGC 2011, Dec 12-14 2011, Sydney, Australia. Best Conference Paper Award.
3:15 PM - 3:30 PM  E. Atanassov, D. Georgiev (IICT - BAS, Sofia, Bulgaria), N. Manev (IMI - BAS, Sofia, Bulgaria)
ECM Integer Factorization on a GPU Cluster 
The problem of integer factorization is of high practical importance because of the widespread use of public key cryptosystems for encryption and authentication of Internet connections. In this paper we describe our implementation of the elliptic curve method of integer factorization on an NVIDIA GPU-based cluster using the CUDA technology and report the results of our experiments. The performance of our implementation is compared with other known experiments and software.
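For reference, stage 1 of the elliptic curve method can be sketched in a few lines (a serial, illustrative toy with arbitrary small parameters; the paper's implementation runs many curves in parallel on GPUs with CUDA and is far more sophisticated). The key trick is that a modular inversion that fails during point arithmetic exposes a nontrivial divisor of n:

```python
# Serial toy sketch of ECM stage 1 (illustrative parameters; the paper's
# GPU version runs many curves concurrently). A failed modular inversion
# during point addition reveals a nontrivial gcd with n, i.e. a factor.

from math import gcd

class FactorFound(Exception):
    def __init__(self, d):
        self.d = d

def inv_mod(a, n):
    d = gcd(a, n)
    if d != 1:
        raise FactorFound(d)      # inversion impossible: d divides n
    return pow(a, -1, n)

def ec_add(P, Q, a, n):
    """Add points on y^2 = x^3 + a*x + b over Z_n (b is implied by P)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None               # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1 % n, n) % n
    else:
        lam = (y2 - y1) * inv_mod((x2 - x1) % n, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

def ec_mul(k, P, a, n):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def ecm_stage1(n, B=100, curves=50):
    """Try several random-ish curves; return a nontrivial factor or None."""
    for s in range(1, curves + 1):
        try:
            P = (s, s + 1)        # point (s, s+1) on curve with a = s
            for k in range(2, B + 1):
                P = ec_mul(k, P, s, n)
        except FactorFound as e:
            if e.d != n:
                return e.d
    return None
```

A factor is found when the point's order modulo one prime factor of n is B-smooth but its order modulo another is not, so the "failure" happens modulo only one prime; the GPU implementation exploits the fact that every curve can be tried independently.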
3:30 PM - 3:45 PM  M. Talebi, M. Razzazi (Amirkabir University of Technology, Tehran, Iran)
An I/O Cost Efficient and Progressive Algorithm for Computing Massive Skyline Points  
We describe an I/O-cost-efficient algorithm for computing the skyline points among a set of points in d-dimensional space. Our progressive (or online) algorithm can quickly return the first skyline points without having to read the entire data file; this property is important in the database community. In this paper we develop Partitioning, a progressive algorithm also based on branch-and-bound search, which is I/O-cost efficient. Previous algorithms for the skyline problem are not I/O-cost efficient when the number of skyline points is O(n), but in the Partitioning algorithm this cost is significantly reduced. An important issue is the size of the to-do list, especially when there are numerous skyline points; our algorithm significantly reduces the size of the to-do list.
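For contrast with the I/O-efficient progressive algorithm, the skyline itself is easy to state with a naive in-memory computation (a reference sketch, not the paper's algorithm): a point belongs to the skyline iff no other point dominates it, where "dominates" means at least as good in every dimension and strictly better in one (here, "better" = smaller).

```python
# Naive in-memory skyline for reference (quadratic, reads all data; the
# paper's progressive algorithm avoids exactly these costs).

def dominates(p, q):
    """True if p is <= q in every coordinate and < q in at least one."""
    return (all(a <= b for a, b in zip(p, q)) and
            any(a < b for a, b in zip(p, q)))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

The quadratic cost and the need to scan the whole file before emitting any result are precisely what the progressive, I/O-efficient approach avoids.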
3:45 PM - 4:00 PM  B. Chitsaz, M. Razzazi (Amirkabir University of Technology, Tehran, Iran)
Non-Blocking Roll-Forward Recovery for Message Passing Systems 
Due to message transmission between processes in a distributed system, an error in one process can propagate to another via faulty messages, causing a global failure. In the absence of built-in fault detection methods, the rollback recovery approach is not useful. To avoid error propagation and rollback overhead, roll-forward recovery schemes based on redundancy techniques such as N-version programming have been presented. The disadvantage of these schemes is that they need to block the receiver process and its message transmission until each received message is confirmed by its sender's replica, which results in high time overhead. In the case of variable response latencies, consisting of processing time and message transmission delay, these techniques are not efficient. In this paper, a non-blocking roll-forward recovery approach with some changes to the duplex system is proposed. This approach does not prevent fault propagation; instead, it performs an additional test using a copy of a failed module version to discover the faulty process, replaces its state with that of the fault-free process, and masks the faults which were propagated to other processes, so it does not need to block processing or message transmission in any phase. This scheme has lower execution time than existing roll-forward techniques.
4:00 PM - 4:15 PM  B. Chitsaz, M. Razzazi (Amirkabir University of Technology, Tehran, Iran)
Non-Blocking N-Version Programming for Message Passing Systems 
N-version programming (NVP) employs masking redundancy: N equivalent modules (called versions) are implemented independently and run concurrently. The results of their execution are adjudicated by a special component that determines the correct majority result and eliminates the results of the versions in which design faults have been triggered. The disadvantage of such schemes is that they need to block the receiver process and its message transmission until each received message is confirmed by its sender's replica, which results in high time overhead. In the case of variable response latencies, consisting of processing time and message transmission delay, these techniques are not efficient. In this paper a new non-blocking NVP approach based on capturing the causality between requests and responses is proposed that does not need to block the versions to confirm the output. The simulation results show that, for acceptable values of probability of failure per demand (pfd) and of simultaneous active requests, our approach has lower execution time.
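The adjudication step of NVP, accepting the strict-majority result of the N versions, can be sketched as follows (an illustrative fragment only; the paper's actual contribution, the non-blocking causality tracking between requests and responses, is not reproduced here):

```python
# Minimal sketch of NVP majority adjudication: N independently implemented
# versions return results; the adjudicator accepts the value produced by a
# strict majority, masking versions whose design faults were triggered.

from collections import Counter

def adjudicate(results):
    """Return the strict-majority result, or None if there is none."""
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) // 2 else None
```

With three versions, one faulty output is masked; with no majority the adjudicator signals failure (here, `None`), and the blocking question the abstract raises is about when this vote may be taken relative to message confirmation.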
4:15 PM - 4:30 PM  M. Talebi (Amirkabir University of Technology, Tehran, Iran), A. Mohammadalimaddi (University of Science and Culture, Tehran, Iran), M. Razzazi (Amirkabir University of Technology, Tehran, Iran)
A Cellular Automata Based Algorithm for Voronoi Diagram of Arbitrary Shapes 
Presented in this paper is a cellular automata based algorithm for computing the Voronoi diagram of arbitrary shapes. Our algorithm outperforms previous work in speed and accuracy. The proposed algorithm constructs the correct Voronoi diagram as a wave from each site is propagated through the environment. Our algorithm can be used in distributed systems and can be regarded as a parallel strategy for building the Voronoi diagram of arbitrary shapes.
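The wave-propagation idea can be illustrated on a discrete grid (an illustrative sketch, not the authors' cellular automaton): each site's label spreads one cell per step, which a breadth-first search emulates, so every cell ends up labelled by its nearest site; seeding all cells of a shape with the same label handles arbitrary shapes.

```python
# Discrete-Voronoi sketch: labels spread outward one cell per step (BFS,
# which a synchronous cellular-automaton update emulates), so each cell
# is claimed by the nearest seed. Ties go to whichever wave arrives first.

from collections import deque

def grid_voronoi(width, height, sites):
    """sites: {label: [(x, y), ...]} seed cells. Returns a label grid."""
    label = [[None] * width for _ in range(height)]
    q = deque()
    for lab, cells in sites.items():
        for (x, y) in cells:
            label[y][x] = lab
            q.append((x, y))
    while q:
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and label[ny][nx] is None:
                label[ny][nx] = label[y][x]   # wavefront claims the cell
                q.append((nx, ny))
    return label
```

A true cellular automaton would update all cells synchronously each generation rather than via a queue, but the resulting labelling is the same; distributing the grid across processors gives the parallel strategy the abstract mentions.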
4:30 PM - 4:45 PM  L. Környei (Széchenyi University, Győr, Hungary)
Parallel Implementation of a Combustion Chamber Simulation With MPI-OpenMP Hybrid Techniques 
Parallelization techniques used in a study of gas flow in a combustion chamber are described and discussed in this paper. Models of compressible fluid dynamics are solved with the finite volume method, with an additional algorithm, called "snapper", to handle piston and valve movement. In order to achieve acceptable scaling on a 240-core CPU cluster, a two-stage parallelization with MPI in conjunction with OpenMP is used and benchmarked. For some types of physical investigations, the actual spatial region of interest changes, deforms, or moves in time in a predefined fashion. Handling gas dynamics with piston motion requires precaution even with the simplest models. Apart from numerical and physical corrections, there are challenges where multiple types of unstructured, specially generated deforming grids are handled in a computer system with distributed memory. In the present work the results of the first implementations are presented, which prove to scale well on this modest-sized cluster.
4:45 PM - 5:00 PMP. Škoda, B. Medved Rogina (Ruđer Bošković Institute, Zagreb, Croatia), V. Sruk (Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia)
FPGA Implementations of Data Mining Algorithms 
In recent decades there has been an exponential growth in the quantity of collected data. Various data mining procedures have been developed to extract information from such large amounts of data. Handling the ever-increasing amount of data generates an increasing demand for compute power. There are several ways of meeting this demand, such as multiprocessor systems and the use of graphics processing units (GPUs). Another is the use of field-programmable gate array (FPGA) devices as hardware accelerators. This paper gives a survey of the application of FPGAs as hardware accelerators for data mining. Three data mining algorithms were selected for this survey: classification and regression trees, support vector machines, and k-means clustering. A literature review and analysis of FPGA implementations was conducted for the three selected algorithms. Conclusions on methods of implementation, common problems and limitations, and means of overcoming them were drawn from the analysis.
5:00 PM - 5:15 PME. Atanassov, S. Ivanovska, D. Dimitrov (Institute for Information and Communication Technologies - Bulgarian Academy of Sciences, Sofia, Bulgaria)
Parallel Implementation of Option Pricing Methods on Multiple GPUs 
The Heston stochastic volatility model is one of the most popular models for the evolution of stock and futures prices, as it includes a stochastic process for the volatility. In practice it is usually enhanced by adding a Poisson jump process, which improves the overall correspondence with the observed behaviour of prices in the marketplace. The pricing of financial options by means of Monte Carlo or quasi-Monte Carlo methods can greatly benefit from GPU computing due to the inherent parallelism of the computations. In this work we describe efficient parallel implementations of several popular option pricing schemes using CUDA-enabled graphics cards. Our quasi-Monte Carlo algorithms make use of modifications of the Sobol and Halton sequences. The numerical and timing results demonstrate the excellent efficiency of our approach on the target computational platforms.
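A minimal CPU sketch of the per-path computation that such GPU implementations parallelize; the Euler scheme with full truncation and the parameter values below are our assumptions, and the jump process and quasi-Monte Carlo sequences from the paper are omitted:

```python
import math
import random

def heston_mc_call(s0, k, t, r, v0, kappa, theta, xi, rho,
                   paths=4000, steps=50, seed=1):
    """Monte Carlo price of a European call under the Heston model
    (Euler scheme, full truncation for the variance process). A GPU
    version would run the per-path loop as one thread per path.
    """
    rng = random.Random(seed)
    dt = t / steps
    payoff_sum = 0.0
    for _ in range(paths):
        s, v = s0, v0
        for _ in range(steps):
            z1 = rng.gauss(0.0, 1.0)
            z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0.0, 1.0)
            vp = max(v, 0.0)                       # full truncation
            s *= math.exp((r - 0.5 * vp) * dt + math.sqrt(vp * dt) * z1)
            v += kappa * (theta - vp) * dt + xi * math.sqrt(vp * dt) * z2
        payoff_sum += max(s - k, 0.0)
    return math.exp(-r * t) * payoff_sum / paths

# Illustrative at-the-money call; all parameter values are invented.
price = heston_mc_call(100, 100, 1.0, 0.02, 0.04, 1.5, 0.04, 0.3, -0.7)
print(round(price, 2))
```

Each path is independent of the others, which is exactly what makes a one-thread-per-path CUDA mapping effective.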
5:15 PM - 5:30 PMD. Helić (Graz University of Technology, Graz, Austria)
Analysing User Click Paths in a Wikipedia Navigation Game 
With the emergence of sophisticated Web search technology, navigation became only a second-class information-seeking strategy on the Web. Recently, however, navigation has again become attractive to users and researchers alike. Numerous studies highlight the importance of navigation as an alternative information retrieval technique to search. These studies also provide evidence that the most efficient information finding occurs in settings where search and navigation seamlessly integrate and complement each other. Moreover, the research community has recognized the importance of understanding human navigation behavior, as this understanding helps in designing navigation structures that maximize navigation efficiency. In this paper we present an initial analysis of user click paths from a Wikipedia navigation game. In addition, we compare the structure of navigational paths in an information network with similar analyses of human search in social networks and routing in complex networks.
5:30 PM - 5:45 PMBreak 
5:45 PM - 6:00 PMJ. Matković (JP Elektroprivreda HZ-HB d.d. Mostar, Mostar, Bosnia and Herzegovina), K. Fertalj (Fakultet elektrotehnike i računarstva, Sveučilište u Zagrebu, Zagreb, Croatia)
Models for the Development of Web Service Orchestrations 
The adoption of Web services delivered interoperability among different platforms. Nevertheless, the real contribution of service-oriented systems is not only interoperability but also powerful ways of inter- and intra-corporate integration of autonomous IT systems. The most popular term mentioned in this context is orchestration. Orchestration is a strongly supported mechanism that enables the integration of component Web services into executable workflow processes. It uses the XML-based WS-BPEL language as a de facto standard. This article inspects the interaction patterns offered by the WS-BPEL language and examines different proposals for the development of WS-BPEL models. Most of them are based on model-driven development, where workflows are specified using languages with visual annotation or a higher level of abstraction, which enables easier development or formal checking. Those models are afterwards automatically or semi-automatically translated into abstract and later into executable WS-BPEL models.
6:00 PM - 6:15 PMG. Kosec, M. Depolli (Institut "Jožef Stefan", Ljubljana, Slovenia)
Superlinear Speedup in OpenMP Parallelization of a Local PDE Solver 
This paper analyses the application of OpenMP parallelization on shared-memory systems, such as the increasingly available multicore systems. The parallelization of the local meshless numerical method is considered. The presented solution procedure is suitable for solving systems of coupled partial differential equations. The superlinear speedup is demonstrated on a solution of the fluid mechanics problem. Local core caches are identified as the source of superlinearity and a set of experiments is performed for the analysis of a cache induced superlinear speedup. For the experiments, a simple algorithm simulating the workload of the local meshless numerical method is used for the multidimensional method complexity assessment.
6:15 PM - 6:30 PMV. Giedrimas, L. Sakalauskas (Siauliai University, Siauliai, Lithuania)
Simulated Annealing and Variable Neighborhood Search Algorithm for Automated Composition of Software Services 
The process of software-as-a-service (SaaS) development is critical in terms of both development time and quality. Tools for semiautomatic or even automatic composition of software services have already been implemented. However, the problem of the quality of the resulting system still exists. There is a huge number of services-components (subsystems) with the same functionality but different non-functional attributes, and relatively little time to evaluate them. In this paper a method for selecting an optimal set of services-components, taking into account their non-functional properties, is proposed. This method can be used as an extension of other proofs-as-programs methods and service-oriented software development systems. We present a simulated annealing algorithm with variable neighborhood search for automated software composition and evaluate it experimentally, comparing it to the classic SA and greedy algorithms. The algorithm proposed in this paper for evaluating and improving the set of services-components represents a class of algorithms, since the globality of the optimization can be tuned using different parameter values.
6:30 PM - 6:45 PMD. Tomić (Hewlett-Packard, Zagreb, Croatia), D. Ogrizović (Faculty of Maritime Studies, University of Rijeka, Rijeka, Croatia)
Running High Performance Linpack on CPUGPU Clusters 
A trend is developing in high-performance computing toward cluster nodes built of general-purpose CPUs and GPU accelerators; the common name for these systems is CPUGPU clusters. High Performance Linpack (HPL) benchmarking of clusters whose nodes contain both CPUs and GPUs is still a challenging task and deserves close attention. In order to make HPL on such clusters more efficient, a multi-layered programming model consisting of at least the Message Passing Interface (MPI), multiprocessing (MP) and stream programming (Streams) needs to be utilized. Besides a multi-layered programming model, it is crucial to deploy the right load-balancing scheme to run HPL efficiently on CPUGPU systems. That means that, besides the highest possible utilization rate, both fast and slow processors need to receive an appropriate portion of the load, in order to avoid faster resources waiting for slower ones to finish their jobs. Moreover, in HPC clusters in the Cloud, one has to take into account not only computing nodes of different processing power but also communication links of different speed between nodes. For these reasons we propose a load-balancing method based on semidefinite optimization. We hope that this method, coupled with multi-layered programming, can run the HPL benchmark on CPUGPU clusters and HPC Cloud systems more efficiently than the methods used today.
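The load-balancing requirement can be illustrated with a much simpler baseline than the semidefinite method the abstract proposes: splitting the workload in proportion to per-node speed so that fast and slow nodes finish at roughly the same time (the node speeds below are invented for illustration):

```python
def balance_rows(total_rows, node_speeds):
    """Split an HPL-style workload across heterogeneous nodes in
    proportion to their measured speed. A simplified proportional
    baseline, not the semidefinite-optimization scheme of the paper,
    which also accounts for heterogeneous link speeds.
    """
    total_speed = sum(node_speeds)
    shares = [int(total_rows * s / total_speed) for s in node_speeds]
    shares[-1] += total_rows - sum(shares)  # give rounding remainder to last node
    return shares

# A hypothetical cluster: two GPU-accelerated nodes, two CPU-only nodes.
speeds = [400.0, 400.0, 100.0, 100.0]       # assumed GFLOP/s per node
print(balance_rows(10000, speeds))          # [4000, 4000, 1000, 1000]
```

Under this rule no node idles waiting for the others, provided the speed estimates are accurate and communication costs are uniform, which is precisely the assumption that breaks down in cloud deployments.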
6:45 PM - 7:00 PMI. Lukić, M. Köhler, N. Slavek (Faculty of Electrical Engineering, Osijek, Croatia)
Segmentation of Data Set Area Method in Clustering of Uncertain Data 
Clustering uncertain objects is a well-researched field. This paper is concerned with clustering uncertain objects with 2D location uncertainty due to object movement. The location of a moving object is reported periodically; it is therefore uncertain and described by a probability density function. Data about moving objects and their locations are placed in distributed databases. The number of objects in a database can be large, so clustering them properly is a challenging task. A survey of existing clustering methods is given in this paper and a new clustering method, called Segmentation of Data Set Area, is proposed. Using this method, the execution times of object clustering are improved compared to previous methods. The data-set area is divided into sixteen segments; each segment is observed separately, and only clusters and objects in that segment and its neighbouring segments are considered. Experiments were conducted to evaluate the effectiveness of the new method and showed that it outperforms previous methods by up to 28% in computing time while using the same memory space.
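The segment lookup at the heart of such a method can be sketched as follows; the 4×4 grid and the 8-neighbourhood rule follow the abstract's description of sixteen segments with neighbouring-segment checks, while the function names and the coordinate normalization are our assumptions:

```python
def segment_of(x, y, xmin, ymin, xmax, ymax, grid=4):
    """Map a 2D object location to one of grid*grid (here sixteen)
    segments of the data-set area. Clustering then only needs to
    examine an object's own segment and its neighbours, instead of
    the whole data set.
    """
    col = min(int((x - xmin) / (xmax - xmin) * grid), grid - 1)
    row = min(int((y - ymin) / (ymax - ymin) * grid), grid - 1)
    return row * grid + col

def neighbour_segments(seg, grid=4):
    """Indices of a segment and its 8-neighbourhood, clipped at the
    grid border."""
    row, col = divmod(seg, grid)
    return sorted(r * grid + c
                  for r in range(max(row - 1, 0), min(row + 2, grid))
                  for c in range(max(col - 1, 0), min(col + 2, grid)))

s = segment_of(0.9, 0.1, 0.0, 0.0, 1.0, 1.0)
print(s, neighbour_segments(s))
```

Restricting the search to at most nine of the sixteen segments is what bounds the per-object work and yields the reported speedup.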
7:00 PM - 7:15 PMS. Memon, J. Rybicki, M. Riedel, S. Memon (Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Jülich, Germany), E. Yen (Academia Sinica Grid Computing, Taipei, Taiwan)
Bridging the Gaps: Federation of Clouds Using Grid Services and Standards 
Clouds have emerged as a new paradigm for accessing compute, storage, and network resources in a secure and cost-effective manner. Their major benefits are seen in the commercial domain, with key features such as on-demand and more flexible resource provisioning, pay-per-use, and customized application environments. Research communities such as High Energy Physics (HEP), biology, and neuroscience are also investigating the applicability of Clouds, with their strengths and weaknesses, in scientific environments. In this paper we show that in scientific environments there are certain areas where cloud services should be exploited to support challenging e-Science requirements. Among them are support for virtual communities, dynamic service and resource discovery, identity and resource federation, and access to data catalogues. The Grid community has actively contributed to addressing some of these issues, so we propose to reuse existing efforts to complement Cloud services with Grid computing best practices, production services, and experience, including standardization. We provide guidelines on how to realize multi-cloud federated deployments based on a survey of existing Grid technologies, augmented with lessons learned in scientific environments. The contribution focuses on the areas of compute, data, information, and security. We also show potential benefits that scientists can gain by adopting the proposed solutions in cloud-based deployments.
7:15 PM - 7:30 PME. Atanassov, T. Gurov, A. Karaivanova (IICT-BAS, Sofia, Bulgaria)
Security Issues of the Combined Usage of Grid and Cloud Resources 
The efficient use of e-Infrastructures requires researchers to combine Grid, High Performance Computing and Cloud resources in a way that rationally allocates the physical resources among research tasks and activities and minimizes data movement and waiting times. The high complexity of the computing infrastructure available to researchers underlines the importance of a clear understanding of the security threats and of devoting sufficient resources to their mitigation. In Eastern Europe there is a tradition of joint operation and usage of Grid and High Performance Computing resources for the needs of research and education. In this paper we identify some important use cases of combined usage of Grid and Cloud resources and consider the security threats and attack vectors that should be the prime targets for investigation and mitigation in this context. By leveraging our experience from the so-called "Security challenges" in the EGEE Grid infrastructure, we identify steps and best practices that could decrease the risk of penetration by "random" as well as determined attackers, and outline services that could bring down the overall security risks.
7:30 PM - 7:45 PMK. Kroflin, S. Prstačić, M. Žagar (University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia)
Framework for Implementation of Complex Dynamic Web Forms 
Implementation, usage, and maintenance of complex dynamic web forms can be very time consuming, especially when they have a large number of fields which change very often. In combination with fields that require programming to implement, the complexity increases further. In this article we describe the problem and propose a framework for easier implementation of such forms. We then describe the current implementation and its benefits, and propose future work on this subject.
7:45 PM - 8:00 PMS. Prstačić, M. Žagar, K. Kroflin (FER, Zagreb, Croatia)
Interfaces of a Nested Web Application Components Framework as a Reusable Software Component 
The framework is designed to provide a simple and powerful way of developing Web application components. Components of the framework can nest, and the framework itself is designed to be easy to reuse as a component, thus allowing its components to be used by the host application. We explain the interfaces required to make this possible – what they provide, what issues they solve, and how the need for them was discovered. We also identify shortcomings of this approach and issues that will be addressed in future work.
8:00 PM - 8:15 PMM. Perkovac (I. Technical School Tesla, Zagreb, Croatia)
Maxwell's Equations for Nanotechnology 
A hundred years ago classical physics, which includes Maxwell's equations, could not explain the stability of atoms, the periodic table of elements, the chemical bond, the discrete excitation energies of atoms and their energetic states, the ionization of atoms, the spectra (including their fine structure and transition rules), experimental evidence such as X-ray spectra and the behavior of atoms in electric and magnetic fields, or the properties of matter in the solid state. That was the reason classical physics failed to apply to the atomic domain, i.e. the domain of nanometers or below. Now the situation has changed, because within the framework of classical physics, with the help of Maxwell's equations, we can derive Schrödinger's equation, which is the foundation of quantum physics. With this new knowledge, all of the above can now be explained within the framework of classical physics. The article describes the procedure for obtaining Schrödinger's equation using classical physics.
8:15 PM - 8:30 PME. Afgan, K. Skala, D. Davidović, T. Lipić, I. Sović (Ruđer Bošković Institute, Zagreb, Croatia)
CloudMan as a Tool Execution Framework for the Cloud 
Cloud computing has revolutionized how availability and access to computing and storage resources is realized; it has made it possible to provision a large computational infrastructure in a matter of minutes, all through a web browser. What it has not yet solved is accessibility of a tool execution environment where tools and data can easily be added and used in non-trivial scenarios. In this paper, we demonstrate how CloudMan (http://usecloudman.org) can be used to provide complete and complex tool execution environments for making cloud resources functional for a desired domain.
Friday, 5/25/2012 9:00 AM - 12:00 PM,
Camelia 2, Grand hotel Adriatic, Opatija
Bioinformatics, Medicine and Visualisation
Chair: Roman Trobec
 
Invited Paper 
9:00 AM - 9:15 AMU. Stanič (Biomedical Research and Innovative Society, Ljubljana, Slovenia), F. Strle (University Clinical Centre Ljubljana, Ljubljana, Slovenia), J. Stanič (Kosezi d.o.o., Iskra Techno, Ljubljana, Slovenia), S. Dolinšek (Innovation Development Institute University of Ljubljana, Ljubljana, Slovenia)
Activation of Innovation Potentials of Medical Staff in Design of New Telemedical Applications in Slovene Hospitals 
In several leading EU regions it was independently found that clinics have high potential for innovation in product, process and service development in the biomedical industry. In the paper, the ongoing process of transforming hospitals from non-innovative to innovative ones is presented through the implementation of the InTraMed project in Slovenia. The ultimate goal of the project is to activate the innovation potential of hospitals, which should become major actors in regional development, employment and wellbeing through the transfer of knowledge to SMEs and industry. The Action Plan and Implementation Plan will be presented, as well as the case of activating and realizing innovative ideas of medical doctors at the University Clinical Centre (UCC) of Ljubljana and the Pulmonary Clinic Golnik related to ICT/telemedical applications.
Papers 
9:15 AM - 9:30 AMM. Kozlovszky (MTA SZTAKI, Budapest, Hungary), G. Windisch (Obuda University, Budapest, Hungary), A. Balasko (MTA SZTAKI, Budapest, Hungary)
Short Fragment Sequence Alignment on the HP-SEE Infrastructure 
Recently adopted deep sequencing techniques present a new data-processing challenge: mapping short fragment reads to open-access eukaryotic genomes at the scale of several hundred thousand reads. This task is solvable with BLAST, BWA and other sequence alignment tools. BLAST is one of the most frequently used tools in bioinformatics, and BWA is a relatively new, fast, lightweight tool that aligns short sequences. Local installations of these algorithms are typically not able to handle such problem sizes, so the procedure runs slowly, while web-based implementations cannot accept a high number of queries. The HP-SEE infrastructure provides access to massively parallel architectures, and the sequence alignment code is distributed free for academia. Using workflows, we have successfully ported the BLAST/BWA algorithms to the massively parallel HP-SEE infrastructure and have created an online service capable of serving the short fragment sequence alignment demand of the regional bioinformatics communities within the SEE region. With our service, researchers can perform high-throughput short fragment sequence alignments against eukaryotic genomes to search for regulatory mechanisms controlled by short fragments.
9:30 AM - 9:45 AMA. Rashkovska, I. Tomašić, K. Bregar, R. Trobec (Institut Jožef Stefan, Ljubljana, Slovenia)
Remote Monitoring of Vital Functions - Proof-of-Concept System 
Modern information and communication technologies (ICT) supporting medical activities are one way to increase the efficacy of the health care system and to decrease its costs. We developed a proof-of-concept system which uses modern, moderately priced and user-friendly technology: wireless body sensors for data acquisition, advanced algorithms for local analysis of data, widely available personal terminals for visualisation of measurements, and the existing communication infrastructure for data transfer, either to medical experts or to a personal database. We pay special attention to the fusion of different body sensors in order to keep their number at a minimum. On the other hand, we improve the reliability of the system by introducing a simple video sensor to prevent false alarms. We show that such a system can achieve the reliability needed for commercial implementation in applications offering services that can contribute significantly to improved quality and efficiency of medical care.
9:45 AM - 10:00 AMD. Petrova, G. Spasov (Technical University - Sofia, Plovdiv Branch, Plovdiv, Bulgaria), P. Stefanova-Peeva (Medical University Plovdiv, Plovdiv, Bulgaria)
A Distributed Clinical Information System for Pediatric Surgery – Basics and Specifics 
The basic concepts and specific requirements for design of a distributed clinical information system for pediatric surgery are presented in this paper. The system is based on the Service Oriented Architecture, which ensures its interoperability. Web service adapters are proposed as a means to provide integration of the system with other existing e-health systems. The specific health information (investigations, treatments, diagnoses, etc.) for the patients of a pediatric surgery department is organized and stored in electronic health records. The main part of this information, concerning the undertaken surgical interventions and the treatment’s results, is submitted as free text. The system described in the paper is designed to provide support for the physicians and PhD students in the department conducting scientific research in the area by allowing extended (different criteria) searches and analysis of stored information for pediatric surgery cases.
10:00 AM - 10:15 AMM. Shopov, G. Spasov, G. Petrova (Technical University of Sofia, Plovdiv branch, Plovdiv, Bulgaria)
Modeling and Analysis of Coordinator Functions in Body Sensor Networks 
The paper suggests a model for the implementation of a body sensor network's coordinator functions. The coordinator is based on a mobile smartphone with a Linux kernel, and different prioritization schemes based on the HTB and CBQ queueing disciplines are analyzed. Since the coordinator is modeled as a standard mobile device, the typical traffic flows for such devices are taken into account, and their influence on the medical sensor data flow traversing the coordinator is considered. The paper presents results from both simulation and experimental analysis of performance parameters. For the simulations the network simulator (ns-2) is used, while experiments are conducted on a Nexus One smartphone platform with the traffic control (tc) tool.
10:15 AM - 10:30 AMJ. Pavlič, R. Trobec (Institute Jožef Stefan, Ljubljana, Slovenia)
Simulation and Visualisation of Lipids and Water Molecules Ensemble 
The advent of contemporary computers enables in silico studies of biological nanostructures. One of the important application fields in medicine is the simulation of proteins and the study of their physical phenomena, which can help us understand their behavior and improve or accelerate new drug design. Lipid bilayers have often been used to model living cell membranes in the investigation of basic phenomena such as membrane-protein interactions, mechanical and structural properties, and conformational changes of membranes. The bilayers can form vesicles that model living cells; vesicles are composed of lipids arranged in a closed, bubble-shaped bilayer. In our study, we simulate and visualise the formation of lipid bilayers and vesicles in an aqueous environment using coarse-grained molecular dynamics (CG-MD) simulation carried out with the open source software GROMACS. Running the CG-MD simulation on multicore parallel computers, we confirmed the formation of bilayers and vesicles from ensembles of a few tens of thousands of lipid and water molecules, simulated for time periods from several hundred nanoseconds to a few microseconds.
10:30 AM - 10:45 AMĐ. Pečarić (The University of Zagreb/Faculty of Humanities and Social Science, Zagreb, Croatia)
What is the Destiny of Doctors of Information Science? 
This paper analyses the professional orientation and activities of scientists awarded a doctoral degree in information science at the University of Zagreb. We give an overview of master's theses and doctoral dissertations by period and discipline of information science. The overview of the scientific engagement of information science doctors in scientific and educational institutions, and of their promotions during their scientific careers, is only a formal indicator of their contribution to information science and the educational system. Bibliometric analyses and methods are used in order to: a) research their scientific activities by scientific area and scientific production (analyses are done on data collected from CROSBI); b) identify and show the basic areas of their interest by co-word analysis (the results are visualized as keyword clusters). The results of this research provide quantitative indicators of the scientific productivity of information science doctors, their position and role in scientific and educational institutions, and an overview of their scientific interests. These results are a precondition for evaluating the current development of the scientific community in information science, and also the basis for predicting its future development.
10:45 AM - 11:00 AMV. Avbelj (INSTITUT "JOŽEF STEFAN", LJUBLJANA, Slovenia)
Auditory Display of Biomedical Signals through a Sonic Representation: ECG and EEG Sonification 
Visualization is a common way of presenting data in many fields, with further processing of the visual information done by the human visual system. Signals from the heart (ECG) and from the brain (EEG) are normally presented as graphs and processed visually as well. These signals can also be transformed into sound (sonification) and then processed by the human auditory system. We sonified ECG and EEG recordings to demonstrate the processing capabilities of the auditory system for this type of signal.
11:00 AM - 11:15 AMT. Harasthy, Ľ. Ovseník, J. Turán (Department of Electronics and Multimedia Communications, Faculty of Electrical Engineering and Infor, Košice, Slovakia)
Video Driver Assistance System Using Optical Correlator 
This paper presents experiments and results of a video driver assistance system using an optical correlator (OC); in our system, the Cambridge Correlator was used. The traffic scene is captured by a color camera and the extracted frames are the input for the recognition system. The system consists of three main blocks: preprocessing, optical correlation and traffic sign identification. The first (preprocessing) block defines and chooses the region of interest (ROI) in the captured frame. The preprocessed ROI (color filtered and resized) goes to the optical correlator, where it is compared with a database of reference images (traffic signs). The images are compared during the correlation process in terms of two criteria: similarity and relative position. The output of the correlation consists of highly localized intensities, known as correlation peaks. The intensity of the peaks provides a measure of similarity, and their position shows how the images are aligned relative to the input scene. Several experiments have been done with this system; the results and conclusions are discussed.
11:15 AM - 11:30 AMBreak 
11:30 AM - 11:45 AMD. Živkov, M. Davidović, M. Vidović, N. Žmukić, A. Anđelković (RT-RK LCC, Institute for Computer Based Systems, Novi Sad, Serbia)
Integration of Skype Client and Performance Analysis on Television Platform 
The focus of this paper is to analyze the complexity of implementing a Skype client on a modern digital television platform. Considering the popularity of the Skype platform, this is one of the features most users would prefer to have available on their TV sets. The main goal of this paper is to determine what impact Skype has on resources, which are very limited in a television, and whether an average television platform can meet the requirements. Besides this, the paper demonstrates one way to implement a Skype client in an existing television software stack as an independent module. The implemented features cover chat, user management and voice calls. The complete concept has been successfully implemented, tested and measured on a real platform used in a commercial television set.
11:45 AM - 12:00 PMM. Mahnič, D. Skočaj (Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia)
A Visualization and User Interface Framework for Heterogeneous Distributed Environments 
Systems that require complex computations are frequently implemented in a distributed manner. Such systems are often split into components, where each component performs a specific type of processing. The components of a system may be implemented in different programming languages, because some languages are better suited to expressing and solving certain kinds of problems. The user of the system must have a way to monitor the state of individual components and to modify their execution parameters through a user interface while the system is running. The distributed execution and programming language diversity represent a problem for the development of graphical user interfaces. In this paper we describe a framework in which a server provides two types of services to the components of a distributed system. First, it manages visualization objects provided by individual components and combines and displays those objects in various views. Second, it displays and executes graphical user interface objects defined at runtime by the components and communicates with the components when changes occur in the user interface or in the internal state of the components. The framework was successfully used in a distributed robotic environment.
12:00 PM - 12:15 PMJ. Sirotković (Siemens d.d., Split, Croatia), H. Dujmić, V. Papić (FESB, Split, Croatia)
K-Means Image Segmentation on Massively Parallel GPU Architecture 
Image segmentation can be computationally demanding and therefore requires powerful hardware to meet performance requirements. The recent rapid increase in the performance of GPU hardware, coupled with simplified programming methods, has made the GPU an efficient coprocessor for executing a variety of highly parallel applications. This paper presents an implementation of modified k-means image segmentation on a highly parallel GPGPU platform based on the CUDA programming model. In order to increase the performance gain, emphasis is placed on optimizing the algorithm to exploit the underlying GPU hardware architecture. The numerical experiments demonstrated up to a 30-fold advantage in execution time compared to the CPU-based version of the algorithm.
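For orientation, here is a plain CPU reference of the k-means loop that such a GPU version accelerates, on grey-level intensities only; the paper's modifications and the CUDA mapping are not reproduced here:

```python
import random

def kmeans_segment(pixels, k, iters=20, seed=0):
    """Plain k-means over grey-level pixel intensities - a CPU
    reference for the GPU-parallel version. The assignment step
    (one thread per pixel on a GPU) dominates the cost and
    parallelizes trivially.
    """
    rng = random.Random(seed)
    centroids = rng.sample(pixels, k)
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assignment step: nearest centroid per pixel.
        for i, p in enumerate(pixels):
            labels[i] = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
        # Update step: each centroid moves to the mean of its cluster.
        for c in range(k):
            members = [p for i, p in enumerate(pixels) if labels[i] == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return labels, centroids

# Two well-separated intensity populations segment into two clusters.
pixels = [10, 12, 11, 200, 205, 198, 14, 202]
labels, centroids = kmeans_segment(pixels, k=2)
print(sorted(round(c) for c in centroids))  # ≈ [12, 201]
```

On a real image the pixel list has millions of entries, which is where the reported 30-fold GPU speedup comes from.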

Basic information:
Chairs:

Karolj Skala (Croatia), Roman Trobec (Slovenia)

Steering Committee:

Piotr Bala (Poland), Leo Budin (Croatia), Yike Guo (United Kingdom), Ladislav Hluchy (Slovakia), Peter Kacsuk (Hungary), Aneta Karaivanova (Bulgaria), Charles Loomis (France), Ludek Matyska (Czech Republic), Laszlo Szirmay-Kalos (Hungary), Tibor Vámos (Hungary), Branka Zovko-Cihlar (Croatia)

International Program Committee Chairman:

Petar Biljanović (Croatia)

International Program Committee:

Alberto Abello Gamazo (Spain), Slavko Amon (Slovenia), Michael E. Auer (Austria), Mirta Baranović (Croatia), Ladjel Bellatreche (France), Nikola Bogunović (Croatia), Andrea Budin (Croatia), Željko Butković (Croatia), Željka Car (Croatia), Matjaž Colnarič (Slovenia), Alfredo Cuzzocrea (Italy), Marina Čičin-Šain (Croatia), Dragan Čišić (Croatia), Todd Eavis (Canada), Maurizio Ferrari (Italy), Bekim Fetaji (Macedonia), Tihana Galinac Grbac (Croatia), Liljana Gavrilovska (Macedonia), Matteo Golfarelli (Italy), Stjepan Golubić (Croatia), Francesco Gregoretti (Italy), Niko Guid (Slovenia), Yike Guo (United Kingdom), Jaak Henno (Estonia), Ladislav Hluchy (Slovakia), Vlasta Hudek (Croatia), Željko Hutinski (Croatia), Mile Ivanda (Croatia), Hannu Jaakkola (Finland), Robert Jones (Switzerland), Peter Kacsuk (Hungary), Aneta Karaivanova (Bulgaria), Miroslav Karasek (Czech Republic), Bernhard Katzy (Germany), Christian Kittl (Austria), Dragan Knežević (Croatia), Mladen Mauher (Croatia), Branko Mikac (Croatia), Veljko Milutinović (Serbia), Alexandru-Ioan Mincu (Slovenia), Vladimir Mrvoš (Croatia), Jadranko F. Novak (Croatia), Jesus Pardillo (Spain), Nikola Pavešić (Slovenia), Ivan Petrović (Croatia), Radivoje S. Popović (Switzerland), Goran Radić (Croatia), Slobodan Ribarić (Croatia), Karolj Skala (Croatia), Ivanka Sluganović (Croatia), Vanja Smokvina (Croatia), Ninoslav Stojadinović (Serbia), Aleksandar Szabo (Croatia), Laszlo Szirmay-Kalos (Hungary), Dina Šimunić (Croatia), Jadranka Šunde (Australia), Antonio Teixeira (Portugal), Ivana Turčić Prstačić (Croatia), A. Min Tjoa (Austria), Roman Trobec (Slovenia), Walter Ukovich (Italy), Ivan Uroda (Croatia), Mladen Varga (Croatia), Tibor Vámos (Hungary), Boris Vrdoljak (Croatia), Robert Wrembel (Poland), Baldomir Zajc (Slovenia)

Registration / Fees:
Price in EUR                                      Before May 7, 2012   After May 7, 2012
Members of MIPRO and IEEE                         180                  200
Students (undergraduate), primary and
secondary school teachers                         100                  110
Others                                            200                  220

Contact:

Karolj Skala
Rudjer Boskovic Institute
Bijenicka 54
HR-10000 Zagreb, Croatia

GSM: +385 99 3833 888
Fax: +385 1 4680 212
E-mail: skala@irb.hr 

Location:

Opatija, often called the Nice of the Adriatic, is one of the most popular tourist resorts in Croatia and a place with the longest tourist tradition on the eastern part of Adriatic coast. Opatija is so attractive that at the end of the 19th and beginning of the 20th centuries it was visited by the most prominent personalities: Giacomo Puccini, Pietro Mascagni, A. P. Čehov, James Joyce, Isidora Duncan, Beniamino Gigli, Primo Carnera, Emperor Franz Joseph, German Emperor Wilhelm II, Swedish Royal Couple Oscar and Sophia, King George of Greece.

The offer includes 20-odd hotels, a large number of catering establishments, sports and recreational facilities.
For more details please look at www.opatija.hr/ and www.opatija-tourism.hr/.

Patrons:
Sveučilište u Zagrebu, Sveučilište u Rijeci, FER Zagreb, Pomorski fakultet Rijeka, Tehnički fakultet Rijeka