By Professor Armando De Giusti, Faculty of Informatics, Universidad Nacional de La Plata, Argentina, Director of the Institute for Research in Computer Science III-LIDI, and Professor Marcela Printista, Computer Science Department, Universidad Nacional de San Luis.
Professor De Giusti will present the progress of the research lines developed at the III-LIDI, with particular focus on the areas of HPC, Cluster & Cloud Computing, Distributed & Parallel Systems, and Parallel Algorithms.
Professor Marcela Printista is a staff member of the R&D Laboratory in Computational Intelligence (LIDIC) and performs research on evacuation simulations using Cellular Automata. Computer simulations using Cellular Automata (CA) have been applied with considerable success in different scientific areas, such as chemistry, biochemistry, economics and physics. Professor Printista will present the work developed using CA to specify and implement a simulation model that makes it possible to investigate the behavioural dynamics of pedestrians during an emergency evacuation.
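As a rough, hedged illustration of the kind of model the talk covers, the sketch below implements a minimal cellular automaton in which pedestrians occupy grid cells and move one cell per step toward a single exit, with an exclusion rule of one pedestrian per cell. The grid size, exit position, number of pedestrians and movement rule are illustrative assumptions, not the LIDIC evacuation model.

```python
import random

ROWS, COLS = 10, 10          # assumed room size (cells)
EXIT = (0, 5)                # assumed exit cell on the top wall

def step(pedestrians):
    """Move each pedestrian one cell toward the exit if the target cell is free."""
    occupied = set(pedestrians)
    remaining = []
    for (r, c) in pedestrians:
        dr = -1 if r > EXIT[0] else (1 if r < EXIT[0] else 0)
        dc = -1 if c > EXIT[1] else (1 if c < EXIT[1] else 0)
        target = (r + dr, c + dc)
        if target == EXIT:                  # the pedestrian reaches the exit and leaves
            occupied.discard((r, c))
        elif target not in occupied:        # exclusion rule: at most one pedestrian per cell
            occupied.discard((r, c))
            occupied.add(target)
            remaining.append(target)
        else:                               # blocked: stay in place this step
            remaining.append((r, c))
    return remaining

# Place 20 pedestrians at random and count the steps needed to empty the room.
cells = [(r, c) for r in range(ROWS) for c in range(COLS) if (r, c) != EXIT]
peds = random.sample(cells, 20)
steps = 0
while peds:
    peds = step(peds)
    steps += 1
print("evacuation completed in", steps, "steps")
```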
Conference by Barton P. Miller, Professor in the Computer Sciences Department at the University of Wisconsin.
Research in Cyber Security
The security team led by Prof. Barton Miller at the University of Wisconsin is working on several areas of research that we will describe during the talk. The first area is the analysis, monitoring, and control of malicious programs. The research is based on a hybrid static/dynamic technique that monitors known code in a binary and then discovers new code as it is decoded, unpacked, and modified, guaranteeing that we can monitor and control the code before it is executed. This research is being incorporated into the Dyninst binary instrumentation tool suite. The second area is the use of advanced machine learning techniques to expose the provenance of binary programs. These techniques can report the source language(s) in which the program was written, the compiler used, and the optimization level. In addition, we can also identify the author of the program solely from the binary code. The last area of research is our joint work with the Autonomous University of Barcelona on the in-depth vulnerability assessment of middleware and services. The work at Wisconsin includes a technique called self-propelled instrumentation, which can be injected into a running program to trace its behavior and propagate this tracing into other processes, even on other hosts.
The Workshop will be held at the Escola d’Enginyeria, Universitat Autònoma de Barcelona between April 8 and April 12, 2013. The main objective of this workshop is to present the state of the art and the research advances in the CAPITA Project.
The CAPITA project integrates a series of interrelated lines of research developed in the context of high performance computing (HPC), such as:
1. Performance and Efficiency in the use of HPC resources
2. User Availability of HPC Resources
3. Design and optimization of HPC systems for specific workloads (application-specific domains)
4. Applications with social projection (impact)
(Consult Agenda for schedule).
Location: Escola d’Enginyeria, Universitat Autònoma de Barcelona
Conference by Dr Mauricio Marin, Yahoo! Labs Santiago Researcher, Professor at Universidad de Santiago, Chile
Abstract: Large scale Web search engines are complex and highly optimized systems devised to operate on dedicated clusters of processors. Any gain in performance, even a small one, is beneficial to economical operation given the large amount of hardware resources deployed in the respective data centers. Performance is fully dependent on user behavior, which is characterized by unpredictable and drastic variations in trending topics and arrival rate intensity. In this context, discrete event simulation is a powerful tool either to predict the performance of new optimizations introduced in search engine components or to evaluate different scenarios under which alternative component configurations are able to process demanding workloads. These simulators must be fast, memory efficient and parallel to cope with the execution of millions of events in a small running time on a few processors. We propose achieving this objective at the expense of performing approximate optimistic parallel discrete-event simulation.
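As a minimal, hedged illustration of the discrete event simulation machinery the abstract refers to (a plain sequential event loop, not the approximate optimistic parallel simulator being proposed), the sketch below processes query arrival and completion events for a single search node through a priority queue; the arrival rate and service time are invented parameters.

```python
import heapq
import random

random.seed(0)
ARRIVAL_RATE = 100.0     # queries per second (illustrative)
SERVICE_TIME = 0.008     # seconds per query on one node (illustrative)

events = []              # min-heap of (time, kind, query_id)
t = 0.0
for q in range(1000):    # schedule 1000 Poisson query arrivals
    t += random.expovariate(ARRIVAL_RATE)
    heapq.heappush(events, (t, "arrival", q))

busy_until = 0.0         # time at which the node becomes free
response = {}            # query_id -> response time
while events:
    now, kind, q = heapq.heappop(events)
    if kind == "arrival":
        start = max(now, busy_until)               # queue behind the query in service
        busy_until = start + SERVICE_TIME
        heapq.heappush(events, (busy_until, "done", q))
        response[q] = -now                         # record the arrival time
    else:                                          # completion event
        response[q] += now                         # response = completion - arrival

print("mean response time: %.4f s" % (sum(response.values()) / len(response)))
```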
Raising to the clouds, by Eduardo Argollo (Research Scientist, HP Exascale Computing Lab, Sant Cugat del Vallès)
Eduardo Argollo has been a Research Scientist at HP Labs since 2007 (Computer Science MS degree in 1997 from the Catholic University of Salvador, Brazil; PhD in Computer Science from the Universitat Autònoma de Barcelona, Spain).
Eduardo has more than 10 years of experience in ERP systems, databases and data warehousing development. Prior to joining HP, he worked on methodologies to efficiently attain high performance from a collection of internet-connected, geographically distributed multi-clusters, and in the CoreGRID Network of Excellence in Passau, Germany, where he was involved in the automatic parallelization of applications using higher-order Grid components.
His current research interests include system-level simulation, virtualization and parallel computing middleware.
Researchers of our group (HPC4EAS), in collaboration with the team at the Emergency Services Unit of the Hospital de Sabadell (Parc Taulí Healthcare Corporation), have developed an advanced computer simulator to help in decision-making processes (a DSS, or decision support system) which could aid emergency service units in their operations management.
The model was designed based on real data provided by the Parc Taulí Healthcare Corporation, using modelling and simulation techniques adapted to each individual, which require the application of high performance computing. The system analyses the reaction of the emergency unit when faced with different scenarios and optimises the resources available.
Researchers defined different types of patients according to their emergency level, and doctors, nursing teams and admissions staff according to different levels of experience. This permitted studying the duration of processes such as triage (when the emergency level is determined), the number and type of patients arriving at each moment, the waiting period for each stage or phase of the service, the costs associated with each process, the amount of staff needed for each type of assistance and, in general, all other quantifiable variables. The system not only helps to make decisions in real time; it can also help by making forecasts and improving the functioning of the service.
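A toy-scale, hedged sketch of this kind of individual-oriented analysis is shown below: patients queue for triage and the triage duration depends on the experience level of the staff member serving them, so two staffing scenarios can be compared by their average waiting time. The arrival rate, triage durations and experience levels are invented for illustration and are not the Parc Taulí data or the HPC4EAS model.

```python
import random

# Illustrative triage durations (minutes) per staff experience level.
TRIAGE_TIME = {"junior": 12, "senior": 7}

def simulate_triage(n_patients, staff, seed=1):
    """Serve patients in arrival order; return the average waiting time in minutes."""
    random.seed(seed)                             # same arrival stream for every scenario
    free_at = [0.0] * len(staff)                  # minute at which each staff member is free
    clock, total_wait = 0.0, 0.0
    for _ in range(n_patients):
        clock += random.expovariate(1 / 5.0)      # one arrival every ~5 minutes on average
        i = min(range(len(staff)), key=lambda k: free_at[k])
        start = max(clock, free_at[i])            # wait until that staff member is free
        total_wait += start - clock
        free_at[i] = start + TRIAGE_TIME[staff[i]]
    return total_wait / n_patients

# Compare two staffing scenarios, as a decision support system would.
print("two juniors         :", simulate_triage(200, ["junior", "junior"]))
print("one senior + junior :", simulate_triage(200, ["senior", "junior"]))
```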
PhD Thesis Defense: MªMar López, Date: Sept. 13, 2012, 11:00 hrs. DAG Scheduling in Opportunistic Environments (Planificación de DAGs en entornos oportunísticos).
Workflow-type applications are characterized by a high computation time and a large amount of data transfer. As a consequence, the execution time, or makespan, of a workflow is high. In order to reduce the makespan of the workflow, its tasks are executed on different machines interconnected through a network. Correctly assigning the tasks of the DAG to the machines available in the execution environment improves the makespan. The component in charge of assigning the workflow tasks to the machines is the scheduler.
The problem with a static scheduler is that it does not take into account the changes that occur in the execution environment while the DAG is running.
The solution to this problem has been the development of a new dynamic scheduler.
The dynamic scheduler improves the makespan of the DAG because it considers the changes that occur in the execution environment during the execution of the workflow; as a counterpart, however, it generates overhead as a consequence of reacting to the detected changes. The goal of this work is to provide strategies that reduce the overhead of the dynamic scheduler without affecting the makespan of the DAG. To reduce the overhead, the algorithm reacts to the changes detected during the execution of the DAG only if it anticipates that the makespan will improve.
The dynamic policy developed has been evaluated through simulated executions and executions carried out in a real opportunistic environment. In the simulated experiments the makespan improved by between 5% and 30%, and in the real experiments the makespan improvement was between 5% and 15%. As for the overhead, it was reduced by at least 20% with respect to other dynamic scheduling policies.
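A schematic, hedged sketch of the decision rule described above: a tentative re-mapping is only adopted when the anticipated makespan gain outweighs the re-scheduling overhead. The functions estimate_makespan and remap are placeholders standing in for the thesis components, not their actual implementation.

```python
def maybe_reschedule(plan, environment, estimate_makespan, remap, overhead):
    """Adopt a new task-to-machine mapping only if the anticipated makespan
    improvement is larger than the overhead of re-scheduling."""
    candidate = remap(plan, environment)      # tentative new mapping for the changed environment
    gain = (estimate_makespan(plan, environment)
            - estimate_makespan(candidate, environment))
    return candidate if gain > overhead else plan   # otherwise keep the current plan
```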
Journal of Computational Science. Best Paper Award. 2012
For the second consecutive year, the HPC4EAS research group has received the Best Paper Award, for the article Simulation Optimization for Healthcare Emergency Departments. Authors: Cabrera E., Taboada M., Iglesias ML., Epelde F., Luque E.
In 2011 the group was awarded the Outstanding Paper Award at ICCS 2011 (Singapore) for the article High Performance Distributed Cluster-Based Individual Oriented Fish Schools Simulation. Authors: Solar R., Suppi R., Luque E.
More than 25 young professionals from different universities of Argentina have participated in the Postgraduate courses organized by the Facultad de Informática de la Universidad Nacional de La Plata (Argentina). http://postgrado.info.unlp.edu.ar/Cursos/Cursos_08_Agosto.html
These courses are organized by the Secretaría de Postgrado in the context of the PhD in Computer Science Programme and the Postgraduate Specialization Programme of the UNLP. The quality of these courses is guaranteed according to academic standards supervised by the Research and Graduate Advisory Commission, the Ministry of Science, Technology and Graduate Studies, and the UNLP Academic School Board.
The course Performance Prediction and Efficient Execution of Parallel Programs describes the main concepts of parallel systems, characterization of applications, evaluation and performance prediction, and tuning and visualization of parallel executions. The course was taught by professors E. Luque and D. Rexachs (UAB). The course Computational Science High Performance Simulation describes the main concepts of simulation in computational science, including system and model characterization, simulation techniques, discrete event simulation and parallel DES, as well as examples and tools to develop these simulation models. This course was taught by professors R. Suppi and E. Luque (UAB).
The on-site part of the course follows a conceptual methodology based on specific knowledge of the relevant issues of the course, using examples and tools to guide the student through the main aspects of the theoretical contents. In the online part (e-learning), which includes more than 100 hours, students must analyse and solve different case studies in the area of performance evaluation and model simulation, generating a final report with the results and conclusions.
PhD Thesis Defense: Vicente Ivars, Date: Sept. 6, 2012. TDP-Shell: An environment for coupling queue managers and monitoring tools (TDP-Shell: Entorno para acoplar gestores de colas y herramientas de monitorización).
Escola d’Enginyeria – Universitat Autònoma de Barcelona
Abstract:
Nowadays most distributed applications are executed on computer clusters managed by a queue manager. At the same time, users can rely on current monitoring tools to detect problems in their distributed applications. For these users, however, it is a problem to use these monitoring tools when the cluster is controlled by a queue manager. This problem stems from the fact that queue managers and monitoring tools do not adequately manage the resources they must share when executing and operating on distributed applications.
We call this problem "lack of interoperability", and to solve it a framework called TDP-Shell has been developed. This environment supports, without altering their source code, different queue managers, such as Condor or SGE, and different monitoring tools, such as Paradyn, Gdb and Totalview. This work describes the development of the TDP-Shell framework, which allows the monitoring of sequential and distributed applications on a cluster controlled by a queue manager, as well as a new type of monitoring called "delayed" monitoring.
PhD Thesis Defense: Roberto Solar, Date: July 16, 2012. Partitioning and Load Balancing in Distributed Fish School Simulations (Particionamiento y Balance de Carga en Simulaciones Distribuidas de Bancos de Peces). Escola d’Enginyeria – Universitat Autònoma de Barcelona
Abstract:
Partitioning and load balancing are issues of great interest in distributed simulations based on spatially explicit individual-oriented models. The decomposition of the problem domain and the efficient distribution of the data over the computing nodes of the parallel/distributed architecture are crucial factors in the performance of the distributed simulation.
In this work we have developed a new methodology for partitioning and load balancing for large-scale distributed simulations of individual-oriented models that show spatially explicit movement patterns. In order to validate our strategies, the model of Huth & Wissel, which represents the coordinated and polarized movement of fish, has been used.
The partitioning method decomposes the problem domain into compact partitions generated from the radial blanket approach and Voronoi diagrams. The distribution of partitions is performed by clustering nearby partitions into meta-partitions, a new construct whose number equals the number of computing cores. The strategy for dynamic load balancing detects imbalance through a threshold-based algorithm and re-configures the meta-partitions to restore balance. Finally, extensive experimentation has been carried out to validate and verify the viability of the distributed simulation in different scenarios.
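A minimal, hedged sketch of two of the ideas named above, under invented parameters: each fish is assigned to the Voronoi region of the nearest of K seed points (the partitions), and a threshold on the ratio between the most loaded partition and the average load flags when rebalancing is needed. The actual radial blanket construction and the meta-partition re-configuration of the thesis are not reproduced here.

```python
import random

random.seed(0)
K = 4                     # number of partitions (illustrative)
THRESHOLD = 1.5           # imbalance ratio that triggers rebalancing (illustrative)

seeds = [(random.random(), random.random()) for _ in range(K)]     # partition centers
fish  = [(random.random(), random.random()) for _ in range(1000)]  # fish positions

def nearest_seed(p):
    """Index of the seed whose Voronoi region contains point p."""
    return min(range(K), key=lambda i: (p[0] - seeds[i][0]) ** 2 + (p[1] - seeds[i][1]) ** 2)

loads = [0] * K
for f in fish:
    loads[nearest_seed(f)] += 1          # fish per partition, taken as computational load

imbalance = max(loads) / (sum(loads) / K)
print("loads:", loads, "imbalance ratio: %.2f" % imbalance)
if imbalance > THRESHOLD:
    print("threshold exceeded -> re-configure the meta-partitions (load balancing step)")
```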