Lecture by Dr. Mauricio Marin, Researcher at Yahoo! Labs Santiago and Professor at the Universidad de Santiago, Chile

Abstract: Large-scale Web search engines are complex and highly optimized systems devised to operate on dedicated clusters of processors. Given the large amount of hardware deployed in the respective data centers, even a small gain in performance is beneficial to economical operation. Performance depends heavily on user behaviour, which is characterized by unpredictable and drastic variations in trending topics and arrival-rate intensity. In this context, discrete-event simulation is a powerful tool, either to predict the performance of new optimizations introduced in search engine components or to evaluate different scenarios under which alternative component configurations are able to process demanding workloads. These simulators must be fast, memory-efficient and parallel in order to execute millions of events in a small running time on a few processors. We propose achieving this objective at the expense of performing approximate optimistic parallel discrete-event simulation.
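To make the setting concrete, the following is a minimal, purely illustrative sketch of a sequential discrete-event simulation of queries arriving at a set of index partitions; the event types, parameters and throughput metric are assumptions for illustration only and are not the simulator described in the talk.

```python
import heapq
import random

# Minimal sequential discrete-event simulation of queries arriving at a set
# of index partitions; purely illustrative, with made-up parameters.
def simulate(num_partitions=4, num_queries=10_000, arrival_rate=100.0,
             mean_service=0.02, seed=1):
    rng = random.Random(seed)
    events, busy_until, t = [], [0.0] * num_partitions, 0.0
    for _ in range(num_queries):                       # Poisson query arrivals
        t += rng.expovariate(arrival_rate)
        heapq.heappush(events, (t, "arrival", rng.randrange(num_partitions)))
    completed, last = 0, 0.0
    while events:
        time, kind, p = heapq.heappop(events)
        if kind == "arrival":
            start = max(time, busy_until[p])           # queue if the partition is busy
            busy_until[p] = start + rng.expovariate(1.0 / mean_service)
            heapq.heappush(events, (busy_until[p], "departure", p))
        else:
            completed, last = completed + 1, time
    return completed / last                            # throughput (queries per time unit)

print(f"simulated throughput ≈ {simulate():.1f} queries per time unit")
```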

Raising to the clouds, by Eduardo Argollo (Research Scientist, HP Exascale Computing Lab, Sant Cugat del Vallès)

Eduardo Argollo has been a Research Scientist at HP Labs since 2007. He holds an MS in Computer Science (1997) from the Catholic University of Salvador, Brazil, and a PhD in Computer Science from the Universitat Autònoma de Barcelona, Spain.

Eduardo has more than 10 years of experience in ERP systems, databases and data-warehousing development. Prior to joining HP, he worked on methodologies for efficiently attaining high performance from collections of Internet-connected, geographically distributed multi-clusters, and in the CoreGRID Network of Excellence in Passau, Germany, where he was involved in the automatic parallelization of applications using higher-order Grid components.

His current research interests include system-level simulation, virtualization and parallel computing middleware.

Researchers of our group (HPC4EAS), in collaboration with the team at the Emergency Services Unit of Hospital de Sabadell (Parc Taulí Healthcare Corporation), have developed an advanced computer simulator to support decision-making (a decision support system, or DSS) that could aid emergency service units in their operations management.

The model was designed from real data provided by the Parc Taulí Healthcare Corporation, using individual-oriented modelling and simulation techniques that require high-performance computing. The system analyses how the emergency unit reacts to different scenarios and optimises the available resources.

Researchers defined different types of patients according to their emergency level, and doctors, nursing teams and admissions staff according to their level of experience. This made it possible to study the duration of processes such as triage (when the emergency level is determined), the number and type of patients arriving at each moment, the waiting time for each stage of the service, the costs associated with each process, the amount of staff needed for each type of assistance and, in general, all other quantifiable variables. The system not only helps to make decisions in real time; it can also help by making forecasts and improving the functioning of the service.
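As an illustration of this kind of individual-oriented model, the following toy sketch (using the simpy discrete-event library) simulates patient arrivals and triage with acuity levels and nurse-experience factors; all names and parameter values are hypothetical and are not the calibrated Hospital de Sabadell model.

```python
import random
import simpy

# Hypothetical parameters: acuity levels 1 (critical) .. 5 (minor) and
# triage-nurse experience slowdown factors, purely for illustration.
TRIAGE_MEAN = {1: 3.0, 2: 4.0, 3: 6.0, 4: 8.0, 5: 10.0}   # minutes per acuity level
EXPERIENCE = {"junior": 1.3, "senior": 1.0}

def patient(env, acuity, nurses, rng, log):
    arrived = env.now
    with nurses.request() as req:              # wait for a free triage nurse
        yield req
        factor = EXPERIENCE[rng.choice(list(EXPERIENCE))]
        yield env.timeout(rng.expovariate(1.0 / (TRIAGE_MEAN[acuity] * factor)))
    log.append(env.now - arrived)              # arrival-to-end-of-triage time

def arrivals(env, nurses, rng, log):
    while True:
        yield env.timeout(rng.expovariate(1 / 5.0))    # one arrival every ~5 minutes
        env.process(patient(env, rng.randint(1, 5), nurses, rng, log))

rng, log = random.Random(0), []
env = simpy.Environment()
nurses = simpy.Resource(env, capacity=2)       # two triage nurses on duty
env.process(arrivals(env, nurses, rng, log))
env.run(until=8 * 60)                          # one 8-hour shift, in minutes
print(f"patients triaged: {len(log)}, mean arrival-to-triage time: {sum(log)/len(log):.1f} min")
```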

[full article]

 

PhD Thesis Defense: Mª Mar López. Date: Sept. 13, 2012, 11:00 hrs.

Planificación de DAGs en entornos oportunísticos (DAG scheduling in opportunistic environments).

Escola d’Enginyeria – Universitat Autònoma de Barcelona

Abstract:
Workflow-type applications are characterized by long computation times and heavy data transfers. As a consequence, the execution time, or makespan, of a workflow is high. In order to reduce the workflow makespan, its tasks are executed on different machines interconnected through a network. Correctly assigning the DAG tasks to the machines available in the execution environment improves the makespan. The component in charge of assigning the workflow tasks to machines is the scheduler.

The problem with a static scheduler is that it does not take into account the changes that occur in the execution environment while the DAG is running.
The solution to this problem has been the development of a new dynamic scheduler.

The dynamic scheduler improves the makespan of the DAG because it considers the changes that occur in the execution environment during the execution of the workflow; in return, however, it incurs an overhead caused by reacting to the detected changes. The goal of this work is to provide strategies that reduce the overhead of the dynamic scheduler without affecting the makespan of the DAG. To reduce the overhead, the algorithm reacts to changes detected during the execution of the DAG only if it anticipates that the makespan will improve.
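The decision rule can be illustrated with a short sketch: on each detected change, the scheduler compares an estimated makespan of the current mapping against a candidate remapping and reschedules only when the anticipated gain outweighs the rescheduling overhead. The helper functions and the threshold below are assumptions for illustration, not the algorithms of the thesis.

```python
# Illustrative sketch of the "react only if the makespan improves" idea.
# estimate_makespan() and remap() are hypothetical helpers passed in by
# the caller, not the methods developed in the thesis.

def on_environment_change(dag, current_mapping, resources,
                          estimate_makespan, remap,
                          reschedule_overhead, min_gain=0.05):
    """Return the task-to-machine mapping to keep using after a change is detected."""
    current = estimate_makespan(dag, current_mapping, resources)
    candidate_mapping = remap(dag, resources)            # tentative new schedule
    candidate = estimate_makespan(dag, candidate_mapping, resources)

    # Reschedule only if the anticipated gain outweighs the overhead of
    # reacting (migration, re-dispatch, bookkeeping).
    gain = current - (candidate + reschedule_overhead)
    if gain > min_gain * current:
        return candidate_mapping
    return current_mapping
```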

The dynamic policy developed has been evaluated through simulated executions and through executions carried out in a real opportunistic environment. In the simulated experiments the makespan improved by between 5% and 30%, and in the real experiments the makespan improvement was between 5% and 15%. As for the overhead, it was reduced by at least 20% with respect to other dynamic scheduling policies.

Journal of Computational Science. Best Paper Award. 2012

For the second consecutive year, the HPC4EAS research group receives the Best Paper Award, for the article Simulation Optimization for Healthcare Emergency Departments. Authors: Cabrera E., Taboada M., Iglesias M.L., Epelde F., Luque E.

In 2011 the group was awarded the Outstanding Paper Award at ICCS 2011 (Singapore) for the article High Performance Distributed Cluster-Based Individual Oriented Fish Schools Simulation. Authors: Solar R., Suppi R., Luque E.


More than 25 young professionals from different universities of Argentina have participated in the Postgraduate courses organized by the Facultad de Informática de la Universidad Nacional de La Plata (Argentina). http://postgrado.info.unlp.edu.ar/Cursos/Cursos_08_Agosto.html 

These courses are organized by the Secretaría de Postgrado in the context of the PhD in Computer Science Programme and the Postgraduate Specialization Programme of the UNLP. The quality of these courses is guaranteed by academic standards supervised by the Research and Graduate Advisory Commission, the Ministry of Science, Technology and Graduate Studies, and the UNLP Academic School Board.

The course Performance Prediction and Efficient Execution of Parallel Programs covers the main concepts of parallel systems, the characterization of applications, evaluation and performance prediction, and the tuning and visualization of parallel executions. It was taught by professors E. Luque and D. Rexachs (UAB). The course Computational Science High Performance Simulation covers the main concepts of simulation in computational science, including system and model characterization, simulation techniques, discrete-event simulation and parallel DES, as well as examples and tools for developing these simulation models. It was taught by professors R. Suppi and E. Luque (UAB).

The on-site part of each course follows a conceptual methodology centred on the relevant issues of the course, using examples and tools to guide the student through the main aspects of the theoretical contents. In the online (e-learning) part, which comprises more than 100 hours, the students must analyse and solve different case studies in the areas of performance evaluation and model simulation, producing a final report with their results and conclusions.

PhD Thesis Defense: Vicente Ivars. Date: Sept. 6, 2012.

TDP-Shell: Entorno para acoplar gestores de colas y herramientas de monitorización (TDP-Shell: an environment for coupling queue managers and monitoring tools).


Escola d’Enginyeria – Universitat Autònoma de Barcelona

Abstract:

Nowadays most distributed applications run on computer clusters managed by a queue manager. In turn, users can rely on current monitoring tools to detect problems in their distributed applications. For these users, however, it is a problem to use such monitoring tools when the cluster is controlled by a queue manager. This problem stems from the fact that queue managers and monitoring tools do not adequately manage the resources they must share when executing and operating on distributed applications.

We call this problem the "lack of interoperability", and to solve it a framework called TDP-Shell has been developed. This environment supports, without altering their source code, different queue managers, such as Condor or SGE, and different monitoring tools, such as Paradyn, Gdb and Totalview. This work describes the development of the TDP-Shell framework, which allows sequential and distributed applications to be monitored on a cluster controlled by a queue manager, as well as a new type of monitoring called "delayed" monitoring.
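A toy sketch of the coupling idea (not TDP-Shell itself): a wrapper that the queue manager launches as the job and that, in turn, starts the application under a monitoring tool, so both the queue manager and the tool operate on the same node and process. The command names below are hypothetical placeholders.

```python
import os
import subprocess
import sys

# Toy wrapper illustrating the coupling idea; not the TDP-Shell implementation.
def run_monitored(app_cmd, tool_cmd=None):
    """Launch app_cmd, optionally under a monitoring tool, inside the batch job."""
    cmd = (list(tool_cmd) + list(app_cmd)) if tool_cmd else list(app_cmd)
    proc = subprocess.Popen(cmd, env=dict(os.environ))   # inherit the job environment
    return proc.wait()                                   # exit code goes back to the queue manager

if __name__ == "__main__":
    # Hypothetical usage from a batch job script:
    #   python wrapper.py ./my_app --input data.cfg
    # A debugger could be prepended, e.g. ["gdb", "--batch", "-ex", "run", "--args"].
    sys.exit(run_monitored(sys.argv[1:]))
```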


PhD Thesis Defense: Roberto Solar. Date: July 16, 2012.

Particionamiento y Balance de Carga en Simulaciones Distribuidas de Bancos de Peces (Partitioning and Load Balancing in Distributed Fish-School Simulations)

Escola d’Enginyeria – Universitat Autònoma de Barcelona

Abstract:

Partitioning and load balancing are issues of great interest in distributed simulations based on spatially explicit individual-oriented models. The decomposition of the problem domain and the efficient distribution of data over the computing nodes of the parallel/distributed architecture are crucial factors in the performance of a distributed simulation.

In this work we have developed a new methodology for partitioning and load balancing in large-scale distributed simulations of individual-oriented models that exhibit spatially explicit movement patterns. In order to validate our strategies, the model of Huth & Wissel, which represents the coordinated and polarized movement of fish, has been used.

The partitioning method decomposes the problem domain into compact partitions generated from the radial blanket approach and Voronoi diagrams. The partitions are then distributed by clustering nearby partitions into a number of meta-partitions equal to the number of computing cores. The dynamic load-balancing strategy detects imbalance through a threshold-based algorithm and reconfigures the meta-partitions to restore balance. Finally, extensive experimentation has been carried out to validate and verify the viability of the distributed simulation in different scenarios.
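Two of the ingredients above can be illustrated with a short sketch: a Voronoi-style assignment of individuals to the nearest partition centre, and a threshold test over per-core loads to decide when to rebalance. Centre positions, the load metric and the threshold are made-up illustrative values, not the method of the thesis.

```python
import random

# (1) Voronoi-style partitioning: each fish goes to the nearest partition centre.
# (2) Threshold-based imbalance detection over per-core load (individual counts).
def nearest_centre(pos, centres):
    return min(range(len(centres)),
               key=lambda i: (pos[0] - centres[i][0]) ** 2 + (pos[1] - centres[i][1]) ** 2)

def partition(fish_positions, centres):
    loads = [0] * len(centres)
    assignment = []
    for pos in fish_positions:
        c = nearest_centre(pos, centres)
        assignment.append(c)
        loads[c] += 1
    return assignment, loads

def imbalanced(loads, threshold=0.25):
    """True if the most loaded core exceeds the average load by more than `threshold`."""
    avg = sum(loads) / len(loads)
    return max(loads) > (1.0 + threshold) * avg

rng = random.Random(0)
fish = [(rng.random(), rng.random()) for _ in range(10_000)]
centres = [(rng.random(), rng.random()) for _ in range(8)]   # one centre per core
_, loads = partition(fish, centres)
print(loads, "-> rebalance needed:", imbalanced(loads))
```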
