|H. Abajyan (National Academy of Sciences of Armenia, Yerevan, Armenia), G. Ashot (Institute for Informatics and Automation Problems, NAS of Armenia, Yerevan, Armenia), S. Haik (Institute of Mathematics, NAS of Armenia, Yerevan, Armenia)
New Algorithm for Simulation of Spin-Glass in External Fields
Spin glasses are prototypical models for disordered systems and provide a rich source of important and difficult applied problems in physics, chemistry, materials science, biology, evolution, organization dynamics, hard optimization, environmental and social structures, human logic systems, financial mathematics, etc. Numerical studies of spin-glass systems are difficult to accomplish, and in general only small to moderate system sizes can be accessed.
An effective algorithm for parallel simulation of spin-glass systems is developed. In contrast to well-known Monte Carlo-based algorithms such as Metropolis, the developed algorithm efficiently constructs stable spin chains of arbitrary length in parallel and calculates all statistical parameters of large spin-glass systems. We have implemented the software both for cluster computation (MPI technology) and for GPUs (using the CUDA programming language). Since GPUs follow the SIMD (Single Instruction, Multiple Data) paradigm and our algorithm belongs to this class of problems, a fully parallel implementation was obtained. We have tested the developed code on the simulation of a 1D spin glass in external fields and verified the reliability and efficiency of the calculations.
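As a point of reference for the model being simulated, a minimal sketch of a 1D spin glass in a random external field follows. This is a toy Python illustration, not the authors' parallel algorithm; the Hamiltonian form and the exhaustive search over small chains are our assumptions.

```python
import itertools
import random

random.seed(42)

N = 10                                               # chain length (small, for exhaustive search)
J = [random.gauss(0.0, 1.0) for _ in range(N - 1)]   # random nearest-neighbour couplings
h = [random.gauss(0.0, 0.5) for _ in range(N)]       # random external field

def energy(spins):
    """Energy of a 1D spin-glass chain: E = -sum J_i s_i s_{i+1} - sum h_i s_i."""
    e = -sum(J[i] * spins[i] * spins[i + 1] for i in range(N - 1))
    e -= sum(h[i] * spins[i] for i in range(N))
    return e

# Exhaustive search over all 2^N configurations (feasible only for small N);
# large systems require the kind of parallel chain construction the paper describes.
best = min(itertools.product((-1, 1), repeat=N), key=energy)
```

For N beyond a few dozen spins the 2^N search is hopeless, which is exactly why scalable parallel algorithms of the kind described above are needed.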
|L. Perkov (Iskon Internet d.d., Zagreb, Croatia), N. Pavković (Institut Ruđer Bošković, Zagreb, Croatia), J. Petrović (Fakultet elektronike i računarstva, Zagreb, Croatia)
High Availability Using Open Source Software
There are several approaches one could take to deliver a reliable, robust and resilient software solution. Given the nature of the problem, there is no single method which can be applied in all cases, but rather a set of techniques to choose from. Additionally, implementing a highly available software solution using existing resources, with little or no investment, is a common requirement in many business environments. Constructing a production-ready system to ensure high availability is not an easy task; however, once such a system is realized, the organization gains a number of benefits.
This paper gives an overview of the most commonly used techniques for ensuring high availability using only open source software. One of them is certainly the construction of a highly available cluster solution which automatically initiates recovery in the event of a failure. Furthermore, a number of open source software packages have built-in functionality to ensure high availability. Linux, the most widely used open source operating system, offers a wide range of diverse solutions, of which only a few are presented.
|J. Matkovic (JP Elektroprivreda HZ-HB d.d. Mostar, Mostar, Bosnia and Herzegovina)
Inside Composite Web Service Development
One of the most frequent questions enterprise architects address is how big a service should be. This question is reminiscent of the issue many developers grappled with as they transitioned from procedural programming approaches, in which a single program encapsulated one application, to object-oriented forms of programming, in which developers separated logical units of application functionality into objects.
Just as object orientation brought the issue of encapsulation to the fore (but didn’t invent the concept), service orientation highlights the challenge of granularity. Granularity can be defined as the overall quantity of functionality encapsulated by a service, where the level of service granularity is set by the service’s functional context, not by the actual amount of functionality that resides within the physically implemented service. When talking about levels of granularity, there are two opposite concepts: fine-grained vs. coarse-grained services. A fine-grained service addresses a relatively small unit of functionality, while a coarse-grained service abstracts larger chunks of capability within a single interaction. These terms are not absolute, i.e., there is no single measure of fine or coarse granularity. Besides service granularity, there are also other types of design granularity, correlated with other service design characteristics.
The concept of service granularity is important to enterprise architects. Generally there are two major approaches to designing service granularity: top-down and bottom-up. In a top-down approach, the business model is decomposed into sub-models until a condition is met beyond which it cannot be decomposed any further. A bottom-up approach starts with the IT systems already implemented, exposes service interfaces from the APIs that already exist, creates service contracts on top of those interfaces, and then composes them together until the business process requirements are met. In practice, the two approaches are usually combined.
Granularity is not the only measure when talking about Web services. Services can be divided into two categories, atomic and composite, which represent yet another measure. Atomic services are composed by an enterprise architect to create new composite services. The consumer of a service sees any service as one entity, i.e., as an atomic service, no matter how complex it is. No service consumer should be required to know how a service is implemented, so that changes to the implementation do not affect its consumers.
When architects define their business services in a top-down manner, what they are actually doing is defining which activities are implemented as composite services, and thus must be further decomposed, and which are identified as atomic services, at which point decomposition stops and service implementation begins. Atomic services can be implemented using platform-specific approaches, while composite ones are developed using emerging service composition metadata languages and runtime tools.
When talking about composite Web service development, two main concepts are used: orchestration and choreography. Choreography is defined by the W3C consortium as “the sequence and conditions in which data are exchanged between two or more participants in order to meet some useful purpose”. Applied to Web services, this definition makes the participants Web services and their clients, where clients can be other Web services, applications, or human beings. Choreography is a pattern of interactions that a Web service client must follow for its goal to be achieved. Choreography has no central coordinator, and that is what distinguishes it from orchestration, described below. A real-world analogy: choreography is the composition and arrangement of dances, especially for ballet. [Anne Thomas Manes]
Orchestration refers to the automated execution of a workflow, defined using an execution language. The workflow is then executed at runtime by an orchestration engine. An orchestrated workflow is typically exposed as a service that can be invoked through an API. It does not describe a coordinated set of interactions between two or more parties. [Anne Thomas Manes] A real-world analogy: orchestration is the arrangement of a musical composition for performance by an orchestra. From these definitions it is clear that orchestration has a central conductor, i.e. the orchestration engine.
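The contrast between the two styles can be sketched in a few lines of Python (a toy illustration with hypothetical service and event names, not from the paper): the orchestrator invokes each service from one central place, while in the choreographed version each participant simply reacts to events on a shared bus, with no central coordinator.

```python
# Two toy "services" as plain functions.
def validate_order(order):
    return {**order, "valid": True}

def charge_payment(order):
    return {**order, "charged": True}

# Orchestration: a central conductor (the engine) invokes each service in turn.
def orchestrate(order):
    order = validate_order(order)
    order = charge_payment(order)
    return order

# Choreography: no central coordinator -- each participant reacts to events.
handlers = {
    "order.placed":    lambda o, bus: bus("order.validated", validate_order(o)),
    "order.validated": lambda o, bus: bus("order.charged", charge_payment(o)),
}
log = []
def bus(event, order):
    log.append(event)                    # record the interaction pattern
    if event in handlers:
        handlers[event](order, bus)      # the next participant reacts on its own

result = orchestrate({"id": 1})          # central control flow
bus("order.placed", {"id": 2})           # decentralized event cascade
```

Both runs perform the same two steps; the difference is purely in who decides what happens next, which is exactly the orchestration/choreography distinction made above.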
|T. Vámos (Computer and Automation Inst. (SzTAKI) Hungarian Academy of Sciences, Budapest, Hungary)
Understanding in distributed social-administrative network systems
Understanding is a key aspect of artificial intelligence, i.e. of all kinds of artifact design. Understanding the task, the intentions of creator and user, the attributes of the artifact, its materials and dynamics, and understanding among the creators and the users are all individual yet coherent problems, from the simplest regular task to the most complex system design.
What is understood was and is to be communicated, even by a creator who is the sole executor and user. In our technological age this should also have a computer-understandable form, due to the role of computers in every machine of production, process operation and control.
I focused on understanding, taken as the key concept. This is really a matter of perspective; the focus could equally be on knowledge, or on features related to intelligence, as the buzzword "artificial intelligence" indicates. Understanding emphasizes the communicative relations described above.
A relevant philosophical and no less practical remark: the understanding problem is an infinite one. Even the understanding of our self is incomplete and changes in time; similarly, we cannot exhaust the infinite depth of understanding another person, though he or she may live with us for a whole lifetime. Understanding the phenomena of nature is the endless approximative pursuit of science.
Consequently, understanding is an approximative process, the great challenge of our intelligent activity, related as well to any kind of automation.
Our current experimental field is progress in the automatic, information-technology-based support of civil processes related to administration. A relevant part of the exercise is the analysis of letters written by people with rather low educational backgrounds, containing queries and claims against government offices, courts, and persons, especially neighbors. The subjects are retirement pensions, court decisions, administrative delays, corruption, social support, and environmental issues.
The problem is complicated not only by the difficulties of understanding remote people of dubious educational background but also by all kinds of distributed databases, information bases, and local regulation systems. The ideal solution for database access would be a general, international identification and privacy-combined security system comprising personal data and information on health, education, and employment. This ideal is still rather far from being accomplished, though several attempts and theoretical-legal considerations have been initiated. The differences in administrative procedures and software systems are open problems, too. This is a general problem in the EU and all over the world which will require much effort in the future; consider only international health services or retirement regulations.
Our approach, starting with the understanding of citizens’ claims, is intended to demonstrate the possibilities and related problems of such highly distributed systems.
A person in any kind of need in an alien environment is a case prototype for the future social-administrative information society world.
The linguistic corpus of this kind, containing about 1000 letters, was chosen intentionally to probe the limits of understanding in ill-defined subjects having no preformed structures and texts. Easily consumable text types, e.g. consumer profiles or media news, are the regular objectives of information mining.
The process starts with human teaching: annotation of the letters for vocabulary statistics, with a search for subject-typical nouns, verbs, and expressions. To support the annotation, a very practical tool was developed, based on several available software tools.
This phase initializes the development of Minsky-type frame structures which characterize the main concepts of the enumerated subjects. The next step is the creation of rather general schemes of the letter subjects, e.g. general scheme of a retirement case.
These schemes follow the script ideas of Schank and build two to three levels of semantic-net descriptions, from the very general scheme down to the individual cases. This dialog-type process is similar to a human inquiry and includes information requests from the client, from other databases, and from specialists.
The learned frames are the nodes of the scripts to be learnt and are the knowledge material for the further, automatic or semiautomatic understanding procedures. The first result is always a categorization, which helps the office handling the complaints to distribute the letters. In several cases this also requires expertise and is not a simple task. The next step is the selection of those items which can be understood and settled in a simple, straightforward way. Not less important is the selection of apparently insane material and the separation of relevant information from the rest, e.g. passages tugging at the heart-strings.
All these are steps towards understanding. If we wished to create understanding in the ideal sense, we would need not only the whole history of the person who wrote the letter and all his or her knowledge and emotional experience, but a simulacrum of the whole environment, situation, and biological constitution. Understanding, in our sense and for application in the information society, means the outlined, very modest but practical applications.
Statistics of the results and a demonstration of the procedure will be added.
|A. Eroftiev, N. Timofeeva, A. Savin (Saratov State University, Saratov, Russian Federation)
Parallel Computing in Application to Global Optimization Problem Solving
This paper presents the results of an investigation into the parallelization potential of different optimization methods used to find the global extremum.
A parallel algorithm for finding the global extremum, based on a modified Box complex method with explicit and implicit constraints, is proposed. We also determined the optimal number of computing nodes needed to provide a predefined calculation accuracy and computational stability.
The results of a parallel adaptation of a simulated-annealing algorithm for finding the global minimum of a multi-extremal objective function are presented. The reliability of finding the global minimum, depending on the number of nodes in the parallel computing system, is investigated. We show that the parallel version of the simulated annealing method makes it possible to reliably locate the region of the global minimum in a short time.
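As an illustration of the multi-chain idea (not the authors' implementation), the sketch below runs several independent simulated-annealing chains and keeps the best result; in a real parallel system each chain would occupy its own computing node. The Rastrigin test function and the linear cooling schedule are our assumptions.

```python
import math
import random

def rastrigin(x):
    # A standard multi-extremal test function; global minimum 0 at x = 0.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def anneal(seed, dim=2, steps=4000, t0=5.0):
    """One simulated-annealing chain with its own random stream."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = rastrigin(x)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3                 # linear cooling schedule
        cand = [xi + rng.gauss(0, 0.5) for xi in x]     # random perturbation
        fc = rastrigin(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
    return fx, x

# Each chain would run on its own node; here they are simulated sequentially.
results = [anneal(seed) for seed in range(8)]
best_f, best_x = min(results)
```

Running more chains raises the probability that at least one lands in the basin of the global minimum, which is the reliability-versus-node-count effect investigated above.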
The option of using genetic algorithms on parallel computing systems for global optimization is also described.
A comparative assessment of the efficiency of these methods is made, and recommendations for their use in different tasks are given.
|T. Schmitt (Ulm University, Ulm, Germany), P. Schulthess (Institute of Distributed Systems, Ulm University, Ulm, Germany)
Use Case Induced Relaxed Transactional Memory Consistency Models
The consistency models of in-memory data cluster and grid systems attempt to reconcile safe and comfortable programming on one side with efficient and highly concurrent program execution on the other. Finding a way to maintain the consistency of replicated data in parallel and distributed programs that is convenient to use but still offers good performance is a complex challenge. The transactional memory approach promises to keep the programming effort low while still providing safe and efficient execution. In many cases, though, transactional semantics are too rigid and create unnecessary transaction restarts. Our objective is to partially relax the consistency model of the application while maintaining strong transactional consistency as the base model. In this paper we present a platform that allows the coexistence of multiple consistency models, while employing the transactional memory paradigm in a distributed fashion as the base consistency.
|A. Weggerle, T. Schmitt, C. Himpel, P. Schulthess (University of Ulm, Ulm, Germany)
Transaction based device driver development
Stability is one of the main design goals in the development process of operating systems. Device drivers often cause crashes of the whole operating system because they usually run in kernel mode. Generally, the synchronization and locking mechanisms of device drivers could be a critical issue in terms of performance and stability.
Inspired by the transaction mechanisms widely used in databases and cluster systems, we adopt the technique and introduce a transaction-based device driver model. This transfers the advantages of the transaction mechanism to driver development. All accesses to the devices are encapsulated in transactions. These might be defined through explicit declaration or be inherently implicit by means of the driver model. Especially in systems which elevate transactional processing to a matter of principle – like Transactional Memory – extending the transaction paradigm to device communication might be an important step. The device driver transaction can guarantee that device communication is carried out atomically, and hence that the device is left in a consistent state.
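A minimal sketch of the idea in Python (a mock device and hypothetical register names, not the paper's driver model): writes are staged inside a transaction and applied to the device atomically on commit, or discarded entirely when the enclosed work fails.

```python
class MockDevice:
    """Stand-in for a hardware device: a set of writable registers."""
    def __init__(self):
        self.registers = {}

class DeviceTransaction:
    """Buffers register writes and applies them atomically on commit.
    On any exception the device state is left untouched (rollback)."""
    def __init__(self, device):
        self.device = device
        self.pending = {}

    def write(self, reg, value):
        self.pending[reg] = value        # staged, not yet visible on the device

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.device.registers.update(self.pending)   # commit: all or nothing
        return False                     # on error: discard buffer, re-raise

dev = MockDevice()
with DeviceTransaction(dev) as tx:       # committed transaction
    tx.write("CTRL", 1)
    tx.write("BAUD", 115200)

try:
    with DeviceTransaction(dev) as tx:   # aborted transaction
        tx.write("CTRL", 0)
        raise IOError("device busy")
except IOError:
    pass                                 # device never saw the partial write
```

The aborted transaction leaves the device exactly as the committed one configured it, which is the consistency guarantee the driver model aims for.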
|T. Schmitt (Ulm University, Ulm, Germany), P. Schmidt, N. Kaemmer, A. Weggerle, P. Schulthess (Institute of Distributed Systems, Ulm University, Ulm, Germany)
Rainbow OS: A Distributed STM for In-Memory Data Clusters
Making parallel programming safe and intuitive still remains a major challenge in view of the fact that the exploitation of parallelism is not only desirable for multicore architectures but also for distributed systems such as clusters and grids. We claim that Software Transactional Memory (STM) is one of the more promising approaches to simplify the parallel programmer’s job. Our Rainbow OS offers a streamlined in-memory data facility, accommodating many distributed application scenarios using a lean and teachable STM approach.
The paper gives an overview and then focuses on memory management strategies and on how Rainbow OS can host multiple, coexisting, custom-tailored memory consistency models within its transactional environment. We explain the benefits of our cluster-wide STM for distributed programming, for consistent memory image checkpointing and for system-level recompilation with on-the-fly code replacement. We also describe how the transactional programming paradigm can be used inside the operating system, particularly for device driver development.
|I. Voras, I. Čavrak, B. Mihaljević, M. Orlić, V. Paunović (Fakultet elektrotehnike i računarstva, Zagreb, Croatia), T. Pavić, M. Pletikosa (T-Hrvatski Telekom, Zagreb, Croatia), S. Tomić, K. Zimmer, M. Žagar, I. Bosnić (Fakultet elektrotehnike i računarstva, Zagreb, Croatia)
Evaluating Open-Source Cloud Computing Solutions
Cloud computing is becoming a mainstream technology in enterprise environments, promising more efficient use of hardware resources through virtualization, elastic computing facilities and secure management of user applications. Various cloud computing architectures are emerging, and several commercial and open-source products on the market advertise a rich feature set. While commercial vendors try to give potential users the (not necessarily unbiased) tools to reason about the comparative advantages of their product, the open-source community trusts its users to make a well-informed selection on their own.
In this paper, we look at open-source cloud solutions and discuss the criteria that can be used to evaluate the stability, performance and features of open-source clouds and to compare the available solutions. The evaluation criteria focus on three main components of a cloud solution: the storage layer, the virtualization layer, and the management layer. In addition, we explain the motivation for applying open-source cloud solutions from an enterprise perspective, and discuss the potential benefits of these solutions for private cloud computing environments.
|I. Janciak, L. Novakova (CVUT, Prague, Czech Republic), M. Lenart (University of Vienna, Vienna, Austria), P. Brezany (Faculty of Computer Science, Vienna, Austria), O. Habala (Slovak Academy of Sciences, Bratislava, Slovakia)
Visualization of the Mining Models on a Data Mining and Integration Platform
An appropriate and understandable data visualization is a key feature for the usability of a data mining system. Proper visualization methods for data exploration increase the overall acceptability of the system and support the selection of further data processing tasks. This paper introduces the RadViz data visualization approach as it is being implemented in the European (FP7) project ADMIRE (Advanced Data Mining and Integration Research for Europe). The approach maps a set of m-dimensional points onto a two-dimensional space, which is important for identifying clusters in multidimensional data. The feasibility of the method is presented on a real scenario using data from one of the ADMIRE project use cases. Two visualization components are fully implemented and deployed in a distributed environment. The first is implemented on the ADMIRE Gateway and processes input data, producing the information required by the second component, which resides on the ADMIRE Workbench and provides the actual graphical visualization. The Gateway and Workbench are the two basic components of the distributed ADMIRE environment, also described in this paper.
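The RadViz mapping itself is simple enough to sketch in a few lines (a generic illustration, not the ADMIRE implementation): each of the m dimensions gets an anchor on the unit circle, and every point is placed at the weighted average of the anchors, weighted by its normalised feature values.

```python
import math

def radviz(points):
    """Map m-dimensional points onto the 2D unit disc.
    Each dimension gets an anchor on the unit circle; a point is the
    weighted average of the anchors, weighted by its min-max normalised
    feature values (a common RadViz convention)."""
    m = len(points[0])
    anchors = [(math.cos(2 * math.pi * j / m), math.sin(2 * math.pi * j / m))
               for j in range(m)]
    # Min-max normalise each dimension to [0, 1].
    lo = [min(p[j] for p in points) for j in range(m)]
    hi = [max(p[j] for p in points) for j in range(m)]
    out = []
    for p in points:
        w = [(p[j] - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 0.0
             for j in range(m)]
        s = sum(w) or 1.0                # avoid division by zero
        out.append((sum(wj * ax for wj, (ax, ay) in zip(w, anchors)) / s,
                    sum(wj * ay for wj, (ax, ay) in zip(w, anchors)) / s))
    return out

pts = radviz([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.3, 0.3, 0.3)])
```

A point dominated by one dimension is pulled towards that dimension's anchor, which is what makes clusters of similar multidimensional points visually coherent on the disc.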
|M. Riedel (Forschungszentrum Juelich GmbH, Juelich, Germany)
Requirements of an e-science Infrastructure Interoperability Reference Model
For the past several years, many production Grid and e-science infrastructures have offered their broad range of resources to end-users via services, with an increasing number of scientific applications requiring access to a wide variety of resources and services in multiple Grids. But the vision of world-wide federated Grid infrastructures, in analogy to the electrical power grid, is still not seamlessly realized today. This is partly due to the fact that Grids provide a much greater variety of services (job management, data management, data transfer, etc.) than the electrical power grid, but we also observe a rather slow adoption of the Open Grid Services Architecture (OGSA) concept, initially defined as the major Grid reference model architecture a decade ago. This contribution critically reviews OGSA and other related reference models and architectures in the field, while pointing to significant requirements of an e-science infrastructure interoperability reference model that satisfies the needs of end-users today. We give insights into our findings on the core building blocks of such a reference model, including its important properties, benefits, and impacts for real-world pan-European Grid infrastructures today.
|Ľ. Ovseník (Technical University of Košice, Košice, Slovakia), J. Turán, A. Kažimírová Kolesárová (Department of Electronics and Multimedia Communications, Faculty of Electrical Engineering and Infor, Košice, Slovakia)
Video Surveillance Systems with Optical Correlator
This paper reviews many existing video surveillance systems. With the growing quantity of security video, it becomes vital that a video surveillance system be able to support security personnel in monitoring and tracking activities. The aim of surveillance applications is to detect, track and classify targets. This paper describes object modelling, activity analysis and change detection, and also presents the design of our video surveillance system with optical correlators.
|M. Savic (Univerzitet u Banjoj Luci Elektrotehnički fakultet, Banja Luka, Bosnia and Herzegovina), S. Gajin (University of Belgrade Computing Centre, Belgrade, Serbia), M. Bozic (University of Banja Luka Faculty of Electrical Engineering, Banja Luka, Bosnia and Herzegovina)
SNMP Based Grid Infrastructure Monitoring System
When we talk about monitoring in IP-based systems, more often than not we are talking about systems based primarily on the SNMP standard, coupled with different auxiliary technologies when needed. On the other hand, monitoring in a grid environment is far from such a level of standardization and typically consists of several loosely connected components, either custom made or adapted for grid use. This paper presents a solution for monitoring resources and services in gLite-based computing grids through the use of SNMP, thus enabling integration into standard and widely used network monitoring systems.
|Z. Stančić, J. Frey Škrinjar, M. Ljubešić (Edukacijsko-rehabilitacijski fakultet Sveučilišta u Zagrebu, Zagreb, Croatia), Ž. Car (Fakultet elektrotehnike i računarstva Sveučilišta u Zagrebu, Zagreb, Croatia)
Multidisciplinary Collaboration and ICT Services for People with Complex Communication Needs
The aim of this paper is to present the possibilities of multidisciplinary collaboration among university teachers from the fields of electrical engineering, computing, education and rehabilitation sciences, speech and language pathology, psychology, and graphic technology, directed at researching and solving complex problems of alternative communication based on information and communication technology. The target groups are people with complex communication needs, such as those with Down syndrome, autism spectrum disorders, Alzheimer's disease, severe intellectual disabilities, and complex motor disorders and impairments. The paper will present the potential of collaboration among university teachers, to be realized through a pilot ICT service of a multi-purpose model of communication with alternative symbols, the application of communication using computers, the Internet and mobile user devices, and the possibilities of presenting and adapting information on web pages. The final results of the collaboration will be implemented within the curricula of the individual study programmes.
|V. Bojović, I. Sović, B. Lučić, K. Skala (Institut Ruđer Bošković, Zagreb, Croatia), A. Bačić (Siemens, Split, Croatia)
A novel tool/method for visualization of orientations of side chains relative to the protein's main chain
A novel visualization tool (method) is developed to give more efficient insight into protein structure and the relative orientations of a protein's side chains. Using arrows as vectors, which start (for all amino acid residues in a protein) at the C-alpha atom and end at the most distant non-hydrogen atom in the side chain belonging to the same amino acid residue, this method visualizes the orientations of side chains relative to the protein's main chain (defined by the positions of the C-alpha atoms). The vector for each amino acid residue thus points from the C-alpha atom to the end of its side chain. The only exception is glycine, whose residue is represented by a sphere, because its side chain consists of a single hydrogen atom. In all cases coloring follows the Kyte-Doolittle hydrophobicity scale. With this method one can obtain better insight into the orientation of a protein's side chains relative to its interior or its outer surface area.
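The core computation can be sketched as follows (the atom coordinates and residue names are hypothetical toy data; real input would come from a PDB structure, and backbone atoms other than C-alpha would also be excluded):

```python
import math

# Toy coordinates for two residues: each residue maps its heavy-atom names
# to (x, y, z) positions in angstroms; hydrogens are excluded.
residues = {
    "LEU1": {"CA": (0.0, 0.0, 0.0), "CB": (1.5, 0.0, 0.0),
             "CG": (2.2, 1.2, 0.0), "CD1": (3.7, 1.2, 0.3)},
    "GLY2": {"CA": (3.8, -1.0, 0.0)},   # glycine: no heavy side-chain atoms
}

def side_chain_vector(atoms):
    """Vector from the C-alpha atom to the most distant heavy side-chain atom,
    or None for glycine (rendered as a sphere instead of an arrow)."""
    ca = atoms["CA"]
    side = [xyz for name, xyz in atoms.items() if name != "CA"]
    if not side:
        return None
    tip = max(side, key=lambda p: math.dist(ca, p))
    return tuple(t - c for t, c in zip(tip, ca))

vectors = {res: side_chain_vector(atoms) for res, atoms in residues.items()}
```

Each returned vector would then be drawn as an arrow anchored at the C-alpha position and colored by the residue's Kyte-Doolittle hydrophobicity value.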
|M. Pańka, M. Chlebiej, K. Benedyczak, P. Bała (Nicolaus Copernicus University, Torun, Poland)
Visualization of Multidimensional Data on distributed mobile devices using interactive video streaming techniques
Remote visualization of large datasets has long been a challenge for distributed systems. The challenge is even bigger when visualization targets devices with limited capabilities, such as CPU and GPU power, amount of RAM, or screen size. In this paper we present a distributed system we have developed for interactive visualization of remote datasets on a variety of modern mobile devices, including laptops, tablets and phones. In our system all data are rendered on dedicated servers, compressed on-the-fly using a video codec and pushed to the client as a single video stream. Based on this model, we offload most of the computation from client devices, leaving them only video decompression. We were also able to achieve very high frame rates and video quality, dynamically adapted to the device capabilities and the client's current network bandwidth. Our system can be used with almost any kind of data, including 2D, 3D and even animated 3D data. All of it is processed in real time based on user input, with minor latency, allowing interactive visualization. At the end of this paper we also present preliminary results on system performance, obtained using sample multidimensional medical datasets.
|I. Sović, T. Lipić, L. Gjenero, I. Grubišić, K. Skala (Institut Ruđer Bošković, Zagreb, Croatia)
Heat source parameter estimation from scanned 3D thermal models
Thermal imaging is a non-invasive, non-contact functional imaging method used in temperature measurements. It provides insight into metabolic and other processes within the human body. In this paper a general simulation model is presented that can be used to estimate the depth and size of a heat source embedded underneath the surface of an object. Simulations are performed on two sets of input data acquired with a 3D thermography system consisting of a 3D scanner and a thermal camera. The procedure is based on describing the heat source radiation with a function (in this paper a Gaussian function was used) and searching the parameter space. For every parameter configuration, the color of each vertex of the scanned 3D model is changed according to the defined function. The model is rendered and the image compared to the one taken with the thermal camera. The results of this process are presented as a sorted list, with the most likely configuration in first place. The method described in this paper has a wide range of possible applications in technology and engineering as well as in medicine. For example, if a tumour can be approximated by a point source, the procedure is also applicable to the analysis of breast and other types of tumours.
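The parameter-space search can be sketched in simplified 1D form (a toy illustration with an assumed Gaussian footprint and synthetic "observed" data, not the paper's 3D rendering pipeline):

```python
import math

def surface_temp(x, depth, strength):
    """Gaussian surface-temperature footprint of a point source: a deeper
    source produces a wider, weaker pattern (a modelling assumption here)."""
    sigma = depth                        # assume spread grows linearly with depth
    return strength / depth * math.exp(-x * x / (2 * sigma * sigma))

xs = [i * 0.5 for i in range(-10, 11)]  # sample positions along the surface

# "Observed" profile synthesised from known ground-truth parameters.
true_depth, true_strength = 2.0, 3.0
observed = [surface_temp(x, true_depth, true_strength) for x in xs]

# Exhaustive sweep of the parameter space, ranked by sum-of-squares error;
# the paper ranks rendered images against the thermal-camera image instead.
candidates = [(d * 0.5, s * 0.5) for d in range(1, 10) for s in range(1, 10)]
def sse(params):
    d, s = params
    return sum((surface_temp(x, d, s) - o) ** 2 for x, o in zip(xs, observed))

ranked = sorted(candidates, key=sse)
best = ranked[0]                         # most likely (depth, strength) pair
```

The sorted `ranked` list plays the role of the paper's result list, with the most likely configuration in first place.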
|H. Furtado, R. Trobec (Jožef Stefan Institute, Ljubljana, Slovenia)
Applications of Wireless Sensors in Medicine
Wireless sensor networks (WSNs) are one of the fastest developing ICT areas, with applications in various fields of human activity. A WSN consists of a collection of autonomous sensors or actuators capable of collecting and manipulating data, performing simple local analysis and communicating wirelessly with a higher WSN level. In this paper we investigate various fields of medicine with potential opportunities to apply WSNs in order to improve the management of medical resources, to assist medical personnel, or to provide improved and cost-effective personal health. We show in more detail a possible application of WSNs in minimally invasive surgery. In spite of all the potential advantages, a few barriers will be identified which have to be resolved before WSNs can be successfully applied in medicine and personal health solutions: new business and marketing models must be found, a standard, open-source low-power wireless device is needed, and social aspects including security and privacy must be further elaborated.
|A. Rashkovska, I. Tomašić, R. Trobec (Jožef Stefan Institute, Ljubljana, Slovenia)
A Telemedicine Application: ECG Data from Wireless Body Sensors on a Smartphone
The development of information technology and telecommunications has reached a level where it can serve health care needs through telemedicine and telecare. Using the advantages of ICT, we propose in this paper an application that provides a comfortable option for telemonitoring of heart activity. We use a wireless bipolar body electrode to record the ECG, coupled with the advantages of existing portable smart devices to display the real-time data from the electrode. Additionally, from three wireless bipolar body electrodes the standard 12-lead ECG can be reconstructed, displayed, and stored for further analysis.
|Ž. Jeričević (Tehnički fakultet u Rijeci, Rijeka, Croatia)
Using eigenanalysis to classify proteins and protein motifs
Eigenanalysis is a common name for linear-algebra-based multivariate analysis procedures such as Principal Component Analysis (PCA), Correspondence Analysis (CA), Factor Analysis (FA), etc. The common idea in these methods is to compute the eigenvalues and corresponding eigenvectors of a real symmetric matrix. The orthogonality of the eigenvectors ensures that the information contained in one vector is excluded from all other vectors, and provides the basis for ordering and filtering the information in the original data set.
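For the two-dimensional case the whole idea fits in a few lines: a generic PCA sketch on toy data, using the closed-form eigendecomposition of the symmetric 2x2 covariance matrix (the data values are illustrative, not from the protein analysis).

```python
import math

# Toy data: two correlated features for six samples.
data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0), (2.3, 2.7)]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
# Real symmetric 2x2 covariance matrix [[a, b], [b, c]].
a = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
c = sum((y - my) ** 2 for _, y in data) / (n - 1)
b = sum((x - mx) * (y - my) for x, y in data) / (n - 1)

# Closed-form eigenvalues of a 2x2 symmetric matrix.
tr, det = a + c, a * c - b * b
disc = math.sqrt(tr * tr / 4 - det)
l1, l2 = tr / 2 + disc, tr / 2 - disc    # l1 >= l2

# Unit eigenvector for the dominant eigenvalue (the principal component):
# (b, l1 - a) satisfies (A - l1*I)v = 0 whenever b is nonzero.
v = (b, l1 - a) if abs(b) > 1e-12 else (1.0, 0.0)
norm = math.hypot(*v)
pc1 = (v[0] / norm, v[1] / norm)
```

Projecting the centred data onto `pc1` (and onto the orthogonal second eigenvector) orders the information by variance, which is the filtering property the text describes; real sequence data simply yields a much larger symmetric matrix.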
We applied this methodology, together with freely accessible sequence information in open-access biological databases, to classify proteins and their motifs in a variety of situations: families of functionally related proteins, classifying functionally unknown proteins and/or finding new members of a known protein family. The performance of the proposed methodology is illustrated on the analysis of the nuclear receptor protein family and on the functional family of coregulators of nuclear receptors.
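The eigenanalysis idea described above can be sketched in a few lines. The following is an illustrative PCA on a toy numeric matrix (NumPy sketch, not the authors' code; the data values are made up):

```python
import numpy as np

def pca(data, n_components=2):
    """Project rows of `data` onto the leading principal components.

    The eigenvectors of the (real symmetric) covariance matrix are
    orthogonal, so each retained component carries information that is
    excluded from all the others.
    """
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)      # real symmetric matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]         # re-sort descending
    components = eigvecs[:, order[:n_components]]
    return centered @ components

# Toy example: 5 observations, 3 features (hypothetical numbers)
X = np.array([[2.0, 0.1, 1.0],
              [4.0, 0.2, 2.1],
              [6.0, 0.1, 2.9],
              [8.0, 0.3, 4.2],
              [10.0, 0.2, 5.0]])
scores = pca(X, n_components=2)
print(scores.shape)  # (5, 2)
```

In a protein-classification setting the rows would be sequences (or motifs) encoded numerically, and the low-dimensional scores would be used for clustering and family assignment.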
|I. Sović, L. Gjenero, I. Grubišić, T. Lipić (Institut Ruđer Bošković, Zagreb, Croatia), T. Skala (Grafički fakultet, Zagreb, Croatia)
Active 3D scanning based 3D thermography system and medical applications
Infrared (IR) thermography determines the surface temperature of an object or human body using an IR camera. It is a contactless and completely non-invasive imaging technology. These properties make thermography a useful method of analysis, used in various industrial applications to detect, monitor and predict irregularities in many fields, from engineering to medical and biological observations. This paper presents 4D thermography based on the combination of active visual 3D imaging technology and passive thermal imaging technology. We describe the development of a 4D thermography system integrating thermal imaging with 3D geometrical data from an active 3D scanner. We also outline the potential benefits of this system in medical applications; in particular, we emphasize its possible use for the detection of breast cancer.
|P. Škoda (Rudjer Boskovic Institute, Zagreb, Croatia), T. Lipić (Ruđer Bošković Institute, Zagreb, Croatia), Á. Srp (Budapest University of Technology and Economics, Budapest, Hungary), B. Medved Rogina, K. Skala (Ruđer Bošković Institute, Zagreb, Croatia), F. Vajda (Budapest University of Technology and Economics, Budapest, Hungary)
Implementation Framework for Artificial Neural Networks on FPGA
In an Artificial Neural Network (ANN), a large number of highly interconnected simple nonlinear processing units work in parallel to solve a specific problem. Parallelism, modularity and dynamic adaptation are three characteristics typically associated with ANNs. Field Programmable Gate Array (FPGA) based reconfigurable computing architectures are well suited to implementing ANNs, as one can exploit concurrency and rapidly reconfigure the device to adapt the weights and topologies of an ANN. ANNs are suitable for and widely used in various real-life applications, a large portion of which are realized as embedded computer systems. With continuous advancements in VLSI technology, FPGAs have become more powerful and power efficient, enabling the FPGA implementation of ANNs in embedded systems. This paper proposes an FPGA ANN framework which facilitates such implementation in embedded systems. A case study of an ANN implementation in an embedded fall-detection system is presented to demonstrate the advantages of the proposed framework.
|N. Kiss, G. Patai, P. Hanák (Budapest University of Technology and Economics, Budapest, Hungary), T. Lipic, P. Skoda, L. Gjenero, A. Dubravic, I. Michieli (Rudjer Boskovic Institute, Zagreb, Croatia)
Vital Fitness and Health Telemonitoring of Elderly People
Modern advances in technology allow new telemonitoring systems for prevention, early diagnosis and management of chronic and degenerative conditions. These remote monitoring systems reduce the need for recurring visits to hospital, allow physicians better insight into the patient’s state and help determine necessary level of care for each individual patient.
Telemonitoring can also be applied on a long-term basis to elderly persons to detect gradual deterioration in their health status, which may imply a reduction in their ability to live independently. Functional potential of the upper extremities is a good indicator of how well a person can live independently and also helps determine their health status.
This article presents an OSGi residential gateway based telemonitoring system. In addition, two new electronic devices used for monitoring functional degradation of the upper extremities are introduced into the presented system. One of the two devices records dynamic properties of the hand grip (e.g. grip speed, grip stability, grip endurance) and the other records hand movement dynamics (e.g. speed of movement, spectral components).
|A. Mateska, M. Pavloski, L. Gavrilovska (Faculty of Electrical Engineering and Information Technologies, Skopje, Macedonia)
RFID and Sensors Enabled In-Home Elderly Care
The growing number of elderly people yields an increased demand for assisted living solutions that permit elderly persons to live safely and independently in their own homes. The paper proposes an integrated in-home elderly care solution enabled by a wireless sensor network and RFID technology. The solution provides assisted living and improved care of the elderly through home surveillance, item and medication usage reminders and early warning of potentially dangerous situations (e.g., fire, gas leakage). The paper presents the proposed system architecture and demo implementation results realized with Sun SPOT sensor modules, diverse sensors and RFID tags.
|A. Reményi (Óbuda University, Budapest, Hungary), I. Bándi (Óbuda University – John von Neumann Faculty of Informatics, Biotech Group, Budapest, Hungary), G. Valcz (Semmelweis University, 2nd Department of Internal Medicine, Budapest, Hungary), P. Bogdanov (Óbuda University – John von Neumann Faculty of Informatics, Biotech Group, Budapest, Hungary), M. Kozlovszky (MTA SZTAKI, Laboratory of Parallel and Distributed Computing, Budapest, Hungary)
Biomedical image processing with GPGPU using CUDA
The main aim of this work is to show how GPGPUs can be used to speed up certain image processing methods. The algorithm explained in this paper is used to detect nuclei in hematoxylin-eosin (HE) stained colon tissue sample images, and includes Gaussian blurring, an RGB-HSV color space conversion, a fixed binarization, an ultimate erode procedure and a local maximum search. Since the images retrieved from the digital slides require significant storage space (up to a few hundred megapixels), the use of GPGPUs to speed up image processing operations is necessary in order to achieve reasonable processing times. The CUDA software development kit was used to develop the algorithms for GPUs made by NVIDIA. This work focuses on how to achieve coalesced global memory access when working with three-channel RGB images, and how to use the on-die shared memory efficiently. The test algorithm also included linear connected-component labeling running on the CPU; with iterative optimization of the GPU code, we achieved a significant speed-up in a well defined test environment.
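Two of the pipeline stages named above, Gaussian blurring and value-channel thresholding, can be sketched as a CPU reference in NumPy (illustrative only; the paper runs these steps as CUDA kernels, which this sketch does not reproduce, and the threshold value here is made up):

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D Gaussian kernel, truncated at 3 sigma and normalized."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(gray, sigma=1.0):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(np.convolve, 1, gray, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")

def hsv_value(rgb):
    """The V channel of HSV is simply max(R, G, B) per pixel."""
    return rgb.max(axis=-1)

def fixed_binarize(channel, threshold=0.5):
    """Fixed-threshold binarization to a 0/1 mask."""
    return (channel >= threshold).astype(np.uint8)

# Toy 2x2 RGB image with values in [0, 1]
img = np.array([[[0.9, 0.1, 0.1], [0.2, 0.2, 0.2]],
                [[0.1, 0.8, 0.1], [0.0, 0.0, 0.0]]])
mask = fixed_binarize(hsv_value(img), threshold=0.5)
print(mask)  # bright red and green pixels survive the threshold
```

On the GPU, the coalescing issue the paper addresses arises because interleaved RGB triples make per-channel loads strided; a common remedy is a planar (structure-of-arrays) layout, but the paper's exact technique is not restated here.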
|R. Trobec (Institut Jožef Stefan, Ljubljana, Slovenia), U. Stanič (Kosezi d.o.o., Ljubljana, Slovenia)
Telehealth: A Myth or Reality?
Decreased birth rates and improved medical treatments result in growth of the elderly population and a constant increase in health care costs. The breakthrough of information and communication technologies in health care could help diminish these trends in the near future, as new generations of potential patients and medical personnel come to accept new technologies on a day-to-day basis. Several studies have been conducted over the last five years to assess the applicability, financial relevance, patient satisfaction and feasibility of telehealth in all its variants: telemedicine, telecare, telemonitoring and self-monitoring. Telehealth has been introduced using various modalities of equipment, including fixed and mobile phones, touch pads, the internet and specialized hardware, with various benefits and obstacles. In this paper, we analyze various circumstances that could stimulate or obstruct the further development of telehealth. We evaluate its efficacy and cost-effectiveness on a test case from early post-hospital care, taking into account direct and indirect costs and savings, together with health care quality, patient comfort and perception, safety and privacy, and new options for permanent personalized treatment.
|G. Strinić, S. Čupić, N. Domijan, N. Leder, H. Mihanović (Hydrographic Institute of the Republic of Croatia, Split, Croatia)
Distributed System for Remote Wave Data Collection and Visualization as a Part of Operational Oceanography in Croatia
The Hydrographic Institute of the Republic of Croatia (HHI) has modernised its oceanographic equipment by buying a new DATAWELL DWR-MkIII Waverider, capable of measuring wave height, direction and period. The Waverider buoy deployed in the survey sea area is anchored to the sea bottom, but an anchored buoy retains a certain freedom of motion. Due to the risk of losing the buoy, HHI has developed an SMS (Short Message Service) based system for alarms and graphic representation of the buoy motion using the Google Earth application, to facilitate tracking the buoy and searching for it in case of loss. The paper also describes the communication technologies and methods of remote data collection and production of backup copies. In addition, a sensor for measuring wind speed and direction is usually installed in the survey area concurrently with the wave measurements. By simultaneously collecting the Waverider data and the wind direction and speed data, the correlation between the parameters can be analysed for each survey area. The measured and analysed Waverider data are essential input parameters for numerical oceanographic models, important for designing coastal and marine structures, as well as for improving the safety of navigation at sea.
|S. Čupić (Hydrographic Institute of the Republic of Croatia - Split, Split, Croatia), G. Strinić, N. Domijan, H. Mihanović (Hydrographic Institute of the Republic of Croatia, Split, Croatia)
Tide Gauge Network of the Hydrographic Institute of the Republic of Croatia
Sea level measurements at tide gauge stations on the eastern Adriatic coast have been conducted for many years now. The Hydrographic Institute of the Republic of Croatia (HHI) has modernized its tide gauge network by installing Thalimedes digital instruments. Communication with the tide gauge stations is performed through the GSM network. Data are collected automatically on a daily basis, with the possibility of more frequent collection in special situations, such as possible flooding of the coastal area, emergency meteorological situations, assessment of risk to people and their property, and scientific and technical investigations. A first check of data integrity, communication availability and gaps in the time sequences is performed automatically; if a problem occurs, the system warns administrators about possible errors. The analysed data are prepared for display on the HHI website. The website for displaying predicted and measured sea levels at the tide gauge stations on the eastern Adriatic coast is currently being designed and developed, and the first results are presented in this paper. Tide gauge measurements and the data obtained from them are important for the safety of navigation, marine construction works and the development of oceanographic models.
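The automatic integrity check described above amounts to scanning each time sequence for missing samples. A minimal illustrative sketch (a hypothetical helper, not HHI's actual software; timestamps are made up):

```python
import datetime

def find_gaps(timestamps, expected_step):
    """Return (before, after) pairs where consecutive samples are
    further apart than the expected sampling step."""
    gaps = []
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > expected_step:
            gaps.append((prev, cur))
    return gaps

# Hourly series with samples missing at 03:00 and 04:00
ts = [datetime.datetime(2011, 1, 1, h) for h in (0, 1, 2, 5, 6)]
gaps = find_gaps(ts, datetime.timedelta(hours=1))
print(gaps)  # one gap, between 02:00 and 05:00
```

A production system would additionally flag communication failures and out-of-range values, and notify administrators as the abstract describes.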
|I. Kožar (Faculty of Civil Engineering, University of Rijeka, Rijeka, Croatia), D. Lozzi-Kožar (Hrvatske vode, Rijeka, Croatia), Ž. Jeričević (Tehnički fakultet, Rijeka, Croatia)
Limit kriging in finite element environmental modeling
A finite element model is the basis of an environmental model built on some type of calculation, such as water circulation. The biggest obstacle in producing the model is the scarcity of geometric data, so in most cases interpolations are required. A well established method of geostatistical interpolation is kriging. The kriging predictor is the best linear unbiased predictor, minimizing the mean squared prediction error.
There are several versions of the kriging predictor (e.g. simple, ordinary, limit). The limit kriging predictor performs better than the existing kriging predictors when the constant-mean assumption of the kriging model is unreasonable. Moreover, it appears to be robust to misspecifications in the correlation parameters.
This paper proposes the use of limit kriging for interpolation over divided domains. The application of the limit predictor is demonstrated on an example of finite element lake modeling. The lake is first divided into several parts that are interpolated separately, so that different variants of anisotropic kriging can be applied. The parts are later assembled into one compact model. The results are compared with the results from simple and ordinary kriging, as well as with results from the potential function approach.
Excellent results confirm the superior behavior of limit kriging when applied to a divided domain.
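For readers unfamiliar with the predictor mentioned above, here is a minimal one-dimensional sketch of ordinary kriging (illustrative only: it assumes a Gaussian covariance with a made-up length scale, and it is not the limit kriging of the paper):

```python
import numpy as np

def gauss_cov(d, length=1.0):
    """Assumed Gaussian covariance as a function of distance."""
    return np.exp(-(d / length) ** 2)

def ordinary_krige(xs, ys, x0, length=1.0):
    """Best linear unbiased prediction at x0 from samples (xs, ys).

    A Lagrange multiplier enforces the unbiasedness constraint
    (weights summing to 1) under an unknown constant mean.
    """
    n = len(xs)
    K = gauss_cov(np.abs(xs[:, None] - xs[None, :]), length)
    k = gauss_cov(np.abs(xs - x0), length)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = K
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(k, 1.0)
    w = np.linalg.solve(A, b)[:n]     # kriging weights
    return w @ ys

# Made-up 1D samples; kriging interpolates the data exactly
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 2.0, 0.5, 1.5])
print(ordinary_krige(xs, ys, 1.5))
```

The limit kriging variant replaces the constant-mean assumption with a different weighting scheme; the point of the sketch is only to show the structure of the linear system behind the predictor.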
|I. Međugorac (Prirodoslovno-matematički fakultet, Zagreb, Croatia), M. Pasarić, M. Orlić (Department of Geophysics, Faculty of Science, University of Zagreb, Zagreb, Croatia)
An analysis of the Adriatic storm surge of 1 December 2008
Extremely high sea levels, known as acqua alta, occasionally occur in the Adriatic in late autumn and winter, causing floods along the northern coastline and significant damage, especially in Venice. An exceptionally strong event of this kind was observed on 1 December 2008, when the high sea level flooded a number of cities. During this event the oldest Croatian tide gauge station, Bakar, recorded the highest sea level, 121 cm, in its operating history (since 1929). In order to examine this particular event, we have analyzed time series of sea level, air pressure and wind recorded at stations along both sides of the Adriatic. Harmonic analysis was performed on the sea-level records to remove the tidal signal. The data were low-pass, band-pass and high-pass filtered in order to determine the low-frequency variability, the Adriatic-wide seiche and the local seiche activity, respectively. It turned out that the event was the result of the superposition of several phenomena: a storm surge caused by synoptic atmospheric systems, the tide, the basin-wide seiche, a sea-level rise due to a low-frequency atmospheric disturbance, and the local seiche. The present analysis may serve as an example of a possible contribution to the NVO
|B. Ivančan-Picek, K. Horvath, S. Ivatek Šahdan, M. Tudor, A. Bajić, I. Stiperski, A. Stanešić (Meteorological and Hydrological Service, Zagreb, Croatia)
Operational and research applications in the Meteorological and Hydrological Service of Croatia
The applications developed and used at the Meteorological and Hydrological Service of Croatia cover broad areas of meteorology, climatology, renewable energy, hydrology, air quality modelling and others. The applications used for operational weather forecasting include the numerical weather prediction model ALADIN, data pre-processing and model output post-processing tools, as well as visualization applications. Other numerical meteorological models (such as WRF, COAMPS, WAsP, MM5, RegCM) are utilized for research and additional applicative purposes. The operational system is automatic and controlled by a set of scripts that coordinate the execution of the model and related modules with the availability of the input data. The final products are disseminated to the end-users as soon as the module responsible for their generation finishes.
|L. Grubisic (Prirodoslovno matematički fakultet, Sveučilište u Zagrebu, Zagreb, Croatia), Z. Drmač (Matematički odsjek, Prirodoslovno matematički fakultet, Sveučilište u Zagrebu, Zagreb, Croatia), Ž. Jeričević (Department of Computer Engineering, Engineering Faculty, University of Rijeka, Rijeka, Croatia)
Pseudospectral Picture of the Seismic Algorithm for Surface Multiple Attenuation
Given a two-dimensional signal, e.g. a surface-recorded 2-D wavefield representing marine seismic data, we consider the task of removing the interference due to multiple reflections (the so-called multiple events). This problem was successfully modeled in previous work by the third author as a matrix optimization problem. Seen from an abstract perspective, this optimization problem is related to the mathematical notion of the pseudospectrum of a matrix. The pseudospectrum is sometimes also called the spectral portrait of a matrix, and it is a powerful visualization tool for analyzing the spectral properties of a nonhermitian matrix. Furthermore, there are several freely available visualization tools for plotting the pseudospectrum, most notably Tom Wright's eigtool. The original mathematical problem can be seen as a quest for a minimum of the pseudospectrum of a certain nonhermitian matrix pair in the Frobenius matrix norm. The main contribution of the original algorithm was the use of the matrix eigenvalue decomposition in the preprocessing stage to reduce the computational cost of the inner optimization loop to O(N*N), where N is the dimension of the matrix (e.g. of the 2-D signal). Motivated by the work of Trefethen and Embree, we present several alternatives to this algorithm which also result in inner-loop procedures with O(N*N) cost. These algorithms are based on Lanczos-type singular value algorithms and solve the optimization problem with respect to the spectral norm. We further discuss issues related to the numerical stability of the complete algorithm with regard to the choice of matrix factorization in the preprocessing module. Notably, we consider the cost and numerical stability of the Schur, Hessenberg and eigenvalue matrix decompositions in this context. Let us close by noting that the structure of the computational problem falls into the class of "embarrassingly parallel" algorithms; this is also exploited in the presented work.
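As an illustrative aside, the scalar field whose level sets eigtool draws can be computed directly: the eps-pseudospectrum of a matrix A is the region of the complex plane where the smallest singular value of zI - A falls below eps. A minimal NumPy sketch (not the authors' algorithm, and using a single matrix rather than the matrix pair from the paper; the example matrix is made up):

```python
import numpy as np

def sigma_min_grid(A, re, im):
    """Smallest singular value of (zI - A) over a complex grid.

    Contour lines of this field at level eps are the boundaries of
    the eps-pseudospectrum (cf. Trefethen & Embree).
    """
    n = A.shape[0]
    out = np.empty((len(im), len(re)))
    for i, y in enumerate(im):
        for j, x in enumerate(re):
            z = (x + 1j * y) * np.eye(n) - A
            out[i, j] = np.linalg.svd(z, compute_uv=False)[-1]
    return out

# A nonnormal (Jordan-like) matrix whose pseudospectra are much
# larger than its spectrum, which is the single point 0
A = np.diag(np.ones(4), k=1)
grid = np.linspace(-1.5, 1.5, 31)
field = sigma_min_grid(A, grid, grid)
```

Each grid point is independent of the others, which is exactly the "embarrassingly parallel" structure the abstract mentions; the Lanczos-based alternatives in the paper replace the dense SVD per point with an iterative smallest-singular-value computation.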
|T. Davitashvili (Tbilisi State University, Tbilisi, Georgia)
Weather Prediction Over Caucasus Region Using WRF-ARW Model
Global atmospheric models, which describe weather processes, give the general character of the weather but cannot capture smaller-scale processes, especially the local weather of territories with complex topography. Small-scale processes such as convection often dominate the local weather and cannot be explicitly represented in models with a grid size larger than 10 km. A much finer grid is required to properly simulate frontal structures and represent cumulus convection.
Georgia lies to the south of the Major Caucasus Ridge, and the Lesser Caucasus mountains occupy the southern part of the country. Complex mountain ranges occupy about 85 percent of the total land area. Therefore, for the territory of Georgia it is necessary to use atmospheric models with a very high resolution nested grid system that takes into account the main orographic features of the area.
We have elaborated and configured the Weather Research and Forecasting - Advanced Research WRF (WRF-ARW) model for the Caucasus region, considering its geographical-landscape character, topography height, land use, soil type and temperature in deep layers, monthly vegetation distribution, albedo and other factors. Porting the WRF-ARW application to the grid was a good opportunity to run the model on a larger number of CPUs and to store large amounts of data on grid storage elements. On the grid, WRF was compiled for both OpenMP and MPI (shared + distributed memory) environments, and WPS was compiled for a serial environment on the Linux-x86 platform. In search of optimal execution times, different model directory structures and storage schemas were used. Simulations were performed using a set of 2 domains with horizontal grid-point resolutions of 15 and 5 km, both defined as those currently used for operational forecasts. The coarser domain is a grid of 94x102 points covering the South Caucasus region, while the nested inner domain has a grid of 70x70 points covering mainly the territory of Georgia. Both use the default 31 vertical levels. Some results of calculations of the interaction of airflow with the complex orography of the Caucasus at horizontal grid-point resolutions of 15 and 5 km are presented.
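The two-domain setup described above (15 km and 5 km resolutions, 94x102 and 70x70 grids, 31 vertical levels, parent-to-nest ratio 3) could be expressed in a WRF `namelist.input` `&domains` fragment along these lines (illustrative sketch only; the study's actual namelist is not given in the abstract):

```fortran
&domains
 max_dom            = 2,
 e_we               = 94,    70,
 e_sn               = 102,   70,
 e_vert             = 31,    31,
 dx                 = 15000, 5000,
 dy                 = 15000, 5000,
 parent_id          = 1,     1,
 parent_grid_ratio  = 1,     3,
/
```

Each column corresponds to one domain; the nest refines the parent grid by the factor `parent_grid_ratio`, consistent with the 15 km to 5 km step described in the text.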
|D. Janković (AVL-AST d.o.o , Zagreb, Croatia), Z. Mihajlovic (University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia)
Relief mapping in terrain rendering
Terrain models typically contain huge amounts of data, so they are very time consuming to visualize. This especially comes to the forefront when urban environments are included. The main compromise in representing complex environments is between achieved quality and time consumption: with a simple texture representation of complex environments we obtain a fast application, and with large polygonal meshes, high quality of the rendered scene.
In this paper we propose rendering urban and natural environments using parallax and relief mapping. This approach combines the benefits of rendering polygonal meshes with those of the texture approach, combining improved quality on the one side with increased speed on the other. The applicability of the method is demonstrated through parallax and relief mapping within the Irrlicht open source graphics engine. The shader programs were written in the GLSL shading language. Finally, tests were made to determine the possible use of parallax and relief mapping in the display of natural and urban environments.
|C. Mocan (Technical University of Cluj-Napoca , Cluj-Napoca, Romania), T. Stefănuț, D. Gorgan (Technical University Cluj-Napoca, Cluj-Napoca, Romania)
Virtual Geographical Space Visualization based on a High-Performance Graphics Cluster
The technical developments of the last few years and the usage of GPU and multi-GPU systems allow the implementation of complex 3D Virtual Geographical Space (VGS) scenarios. However, the high costs involved in the acquisition and maintenance of these specialized architectures represent a major drawback, preventing large-scale access of geographical specialists to this kind of resource. In this research paper we propose a scalable solution for creating a high-performance distributed architecture specialized in spatial data modelling, processing and visualization. Our main objective is to use the power of multi-GPU systems and visualization clusters to run different complex 3D VGS scenarios, with the maximization of GPU utilization in mind. We have also performed evaluation studies of the compatibility between the graphics cluster framework that runs GPU-enabled applications and parallel rendering frameworks, aiming to develop an optimal architecture at a lower cost. The performance is evaluated for various cluster configurations and different spatial data models. We also briefly discuss the efficiency of load balancing in our graphics cluster, taking into consideration different combinations of distributed rendering algorithms.
Karolj Skala (Croatia)
Piotr Bala (Poland), Leo Budin (Croatia), Yike Guo (United Kingdom), Ladislav Hluchy (Slovakia), Peter Kacsuk (Hungary), Aneta Karaivanova (Bulgaria), Charles Loomis (France), Ludek Matyska (Czech Republic), Laszlo Szirmay-Kalos (Hungary), Roman Trobec (Slovenia), Tibor Vámos (Hungary), Branka Zovko-Cihlar (Croatia)
Chair of the International Program Committee:
Petar Biljanović (Croatia)
International Program Committee:
Alberto Abello Gamazo (Spain), Slavko Amon (Slovenia), Michael E. Auer (Austria), Mirta Baranović (Croatia), Ladjel Bellatreche (France), Nikola Bogunović (Croatia), Andrea Budin (Croatia), Željko Butković (Croatia), Željka Car (Croatia), Matjaž Colnarič (Slovenia), Alfredo Cuzzocrea (Italy), Marina Čičin-Šain (Croatia), Dragan Čišić (Croatia), Todd Eavis (Canada), Maurizio Ferrari (Italy), Bekim Fetaji (Macedonia), Liljana Gavrilovska (Macedonia), Matteo Golfarelli (Italy), Stjepan Golubić (Croatia), Francesco Gregoretti (Italy), Niko Guid (Slovenia), Yike Guo (United Kingdom), Jaak Henno (Estonia), Ladislav Hluchy (Slovakia), Vlasta Hudek (Croatia), Željko Hutinski (Croatia), Mile Ivanda (Croatia), Hannu Jaakkola (Finland), Robert Jones (Switzerland), Peter Kacsuk (Hungary), Aneta Karaivanova (Bulgaria), Miroslav Karasek (Czech Republic), Bernhard Katzy (Germany), Christian Kittl (Austria), Dragan Knežević (Croatia), Mladen Mauher (Croatia), Branko Mikac (Croatia), Veljko Milutinović (Serbia), Alexandru-Ioan Mincu (Slovenia), Vladimir Mrvoš (Croatia), Jadranko F. Novak (Croatia), Jesus Pardillo (Spain), Nikola Pavešić (Slovenia), Ivan Petrović (Croatia), Radivoje S. Popović (Switzerland), Slobodan Ribarić (Croatia), Karolj Skala (Croatia), Ivanka Sluganović (Croatia), Vanja Smokvina (Croatia), Ninoslav Stojadinović (Serbia), Aleksandar Szabo (Croatia), Laszlo Szirmay-Kalos (Hungary), Dina Šimunić (Croatia), Jadranka Šunde (Australia), Antonio Teixeira (Portugal), Ivana Turčić Prstačić (Croatia), A. Min Tjoa (Austria), Roman Trobec (Slovenia), Walter Ukovich (Italy), Mladen Varga (Croatia), Tibor Vámos (Hungary), Boris Vrdoljak (Croatia), Robert Wrembel (Poland), Baldomir Zajc (Slovenia)
The conference is devoted to the presentation and exploration of scientific and technological achievements and original innovative applications in the field of grid and visualization systems. The listed topic areas do not exclude other related topics:
- Distributed computing topics:
- Grid applications
- Grid systems
- Cluster computing and applications
- Parallel program development
- Web and Grid services
- Development of distributed and parallel programming models and tools
- Web services and applications
- Remote network collaboration
- Virtual organizations
- eScience technologies
- Multimedia and hypermedia technologies
- Applications in atmospheric sciences and geosciences
- Visualization topics:
- Scientific visualization
- Visualization in engineering and medicine
- Parallel visualization methods and algorithms
- Distributed visualization
- Visualization processes and systems
- Parallel modeling and rendering
- Human-computer interaction and visualization applications
- Computer-aided design
In addition to the conference program, professional presentations of products and services from the conference fields are also possible.
Opatija, often called the "Adriatic beauty", is one of the most popular tourist destinations in Croatia, with the longest tourist tradition on the north-eastern Adriatic coast. Its offer includes some twenty hotels, a large number of restaurants, and numerous sports and recreational facilities. More detailed information can be found at www.opatija.hr and www.opatija-tourism.hr.
REGISTRATION / FEES
PRICE IN EUR
|Members of MIPRO HU and IEEE
|Students (undergraduate) and primary and secondary school teachers
Institut Ruđer Bošković
10000 Zagreb, Croatia
GSM: +385 99 3833 888
Fax: +385 1 4680 212