Current Projects and Directions
The Macro-Micro-Coupling Tool (MaMiCo) is meant to ease the development, and to increase the flexibility and maintainability, of coupled molecular-continuum flow simulations. These multiscale simulations resolve only parts of the considered computational domain with computationally expensive molecular dynamics (MD) simulations; the remaining parts are treated by a coarse-grained continuum solver (e.g., Lattice Boltzmann or Navier-Stokes).
MaMiCo strictly separates the MD solver, the continuum solver, and the coupling components. This makes it easy to exchange solver implementations, and it enables coupling algorithms, once implemented within the tool, to be reused immediately with one's favorite MD/continuum solver combination on a supercomputer. The tool currently interfaces with eight different solver frameworks (four LB simulation codes, including Palabos and OpenLB, and four MD packages, including ESPResSo and LAMMPS) and supports Single-LB-Multi-MD coupling, in which a single LB simulation is coupled to multiple quasi-identical MD simulations. Running several MD instances simultaneously is favorable in terms of parallelism: it allows averaged quantities to be evaluated over independent MD samples, which in turn shortens the coupling time intervals between the LB and MD solvers. In addition, MaMiCo incorporates noise filters that remove potentially unwanted thermodynamic fluctuations from the MD data before they are passed to the continuum solver.
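The statistical benefit of Single-LB-Multi-MD coupling, i.e. averaging one quantity over many independent, quasi-identical MD instances, can be sketched in a few lines. This is a minimal Python illustration with hypothetical names, not MaMiCo's actual C++ API; each "MD instance" is replaced by a simple Gaussian noise model around a reference value:

```python
import random

def md_instance_sample(true_velocity, noise_std, rng):
    """One noisy cell-averaged quantity from a single MD instance
    (hypothetical stand-in for a full MD simulation)."""
    return true_velocity + rng.gauss(0.0, noise_std)

def multi_instance_average(true_velocity, noise_std, num_instances, rng):
    """Average the same quantity over independent, quasi-identical
    MD instances, as in Single-LB-Multi-MD coupling."""
    samples = [md_instance_sample(true_velocity, noise_std, rng)
               for _ in range(num_instances)]
    return sum(samples) / num_instances

rng = random.Random(42)
# The statistical error of the averaged quantity shrinks roughly with
# 1/sqrt(num_instances), which is what permits shorter coupling intervals.
for n in (1, 4, 16, 64):
    estimates = [multi_instance_average(1.0, 0.5, n, rng) for _ in range(1000)]
    mean = sum(estimates) / len(estimates)
    spread = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(f"{n:3d} instances: empirical std of estimate = {spread:.3f}")
```

In the real tool, the averaging happens per coupling cell and per time interval; the sketch only illustrates why more instances reduce the thermal noise that the filters otherwise have to remove.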
The rise of machine learning has a strong impact on high performance computing, particularly on hardware development. At the same time, data analytics and machine learning have found their way into numerical simulation, e.g., to replace expensive non-linear kernels or to evaluate statistics in uncertainty quantification and related fields.
Some research at the chair is dedicated to detecting synergies between HPC, numerical simulation, and machine learning, for example
- in the context of performance prediction for simulations whose run time depends on a large number of parameters (corresponding to a high-dimensional parameter space exploration), or
- in coupled molecular-continuum simulations, to model molecular behavior via machine learning methods.
In collaboration with researchers from the University Medical Center Hamburg-Eppendorf (UKE), we work on novel high-performance approaches to analyzing large, heterogeneous data sets relevant to the classification and diagnostics of brain tumors (omics data, gene expression, NGS data, etc.), with a focus on pediatric tumors.
In this project, we investigate how to control errors in molecular-continuum flow simulations, that is,
- physical errors due to thermal fluctuations, and
- errors due to failing hardware or operating systems.
The methodology to be developed will be incorporated into the macro-micro-coupling tool (MaMiCo) and will lay the foundation for further research at the intersection of data science and high performance computing for multiscale simulation.
In this project, ships (e.g., sea rescue vessels) are to be equipped with novel sensor/camera systems and IT/AI systems. Using the data generated and processed by these systems, digital twins of the ships will be built and then further enriched over each ship's life cycle. In a second step, this approach is to be extended to fleets of ships. Verification is planned in terms of anomaly detection cases and fleet optimization. The HPC challenges arise from the real-time requirements of the digital system.
The main goal of TaLPas is to provide a solution for the fast and robust simulation of many, potentially dependent particle systems in a distributed environment. This is required in many applications, including, but not limited to,
- sampling in molecular dynamics: so-called "rare events", e.g., droplet formation, require a multitude of molecular dynamics simulations to investigate the actual conditions of the phase transition,
- uncertainty quantification: various simulations are performed with different parametrizations to investigate the sensitivity of the solution to the parameters,
- parameter identification: given, e.g., a set of experimental data and a molecular model, an optimal set of model parameters needs to be found to fit the model to the experiment.
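The uncertainty quantification scenario above can be sketched in a few lines: the same model is evaluated for many parametrizations (these evaluations are exactly the independent tasks that a scheduler can distribute), and the spread of the results quantifies the sensitivity. This is a toy Python illustration; `toy_particle_model` is a hypothetical stand-in for an expensive particle simulation, not part of TaLPas:

```python
import random

def toy_particle_model(epsilon, sigma):
    """Hypothetical stand-in for an expensive particle simulation:
    maps two model parameters to one scalar observable."""
    return epsilon * sigma ** 2

def uq_ensemble(param_samples):
    """Evaluate the model for every parametrization and summarize the
    spread of the resulting observable (mean and variance)."""
    results = [toy_particle_model(eps, sig) for eps, sig in param_samples]
    mean = sum(results) / len(results)
    var = sum((r - mean) ** 2 for r in results) / len(results)
    return mean, var

# Sample parametrizations from a +/-10% uncertainty range around 1.0.
rng = random.Random(0)
samples = [(rng.uniform(0.9, 1.1), rng.uniform(0.9, 1.1)) for _ in range(100)]
mean, var = uq_ensemble(samples)
print(f"mean observable: {mean:.3f}, variance: {var:.5f}")
```

Since each `toy_particle_model` call is independent of the others, the loop inside `uq_ensemble` is trivially parallel, which is what makes such sampling workloads attractive targets for a task scheduler.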
For this purpose, TaLPas targets
- the development of innovative, auto-tuning-based particle simulation software in the form of an open-source library that leverages optimal node-level performance. This will guarantee an optimal time-to-solution for small- to mid-sized particle simulations,
- the development of a scalable task scheduler that yields an optimal distribution of potentially dependent simulation tasks across the available HPC compute resources,
- the combination of the auto-tuning-based particle simulation and the scalable task scheduler, augmented by an approach to resilience. This will guarantee robust, that is, fault-tolerant, sampling evaluations on peta- and future exascale platforms.
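The auto-tuning idea, i.e. measuring candidate compute kernels on a representative input and selecting the fastest, can be sketched as follows. This is a Python illustration with two hypothetical kernels that compute the same quantity in different ways; the actual library tunes over node-level implementation variants (e.g., different particle-container and traversal choices), not over toy functions like these:

```python
import timeit

def kernel_direct(positions):
    """All-pairs sum of 1D distances, O(N^2) (placeholder for a
    straightforward force-kernel implementation)."""
    total = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            total += abs(positions[i] - positions[j])
    return total

def kernel_sorted(positions):
    """Same result via sorting and a running prefix sum, O(N log N).
    Only valid for this 1D toy quantity, but it shows that candidate
    kernels may differ drastically in cost while agreeing in output."""
    xs = sorted(positions)
    total, prefix = 0.0, 0.0
    for k, x in enumerate(xs):
        total += k * x - prefix
        prefix += x
    return total

def autotune(candidates, sample_input, repeats=3):
    """Time each candidate on a sample input and return the fastest."""
    best, best_time = None, float("inf")
    for kernel in candidates:
        t = min(timeit.repeat(lambda: kernel(sample_input),
                              number=1, repeat=repeats))
        if t < best_time:
            best, best_time = kernel, t
    return best

positions = [i * 0.1 for i in range(400)]
chosen = autotune([kernel_direct, kernel_sorted], positions)
print("selected kernel:", chosen.__name__)
```

A real auto-tuner would additionally re-tune at run time, since the optimal variant can change as the particle distribution evolves.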
Last modified: February 2, 2021