Codes

AVBP

The AVBP project started in 1993 on the initiative of Michael Rudgyard and Thilo Schönfeld, with the aim of building within CERFACS a modern software tool for Computational Fluid Dynamics (CFD) offering high flexibility, efficiency and modularity. Since then the project has grown rapidly, and today, under the leadership of Thierry Poinsot, AVBP is one of the most advanced CFD tools worldwide for the numerical simulation of unsteady turbulent reacting flows. AVBP is widely used both for basic research and for applied research of industrial interest. The AVBP project currently comprises around 30 research scientists and engineers.

AVBP is a parallel CFD code that solves the three-dimensional compressible Navier-Stokes equations on unstructured and hybrid grids. Initially conceived for steady-state aerodynamic flows, the code is today applied exclusively to the modelling of unsteady reacting flows in combustor configurations. These activities are partly driven by the growing importance attached to understanding the flow structures and mechanisms that lead to turbulence. The prediction of these unsteady turbulent flows is based on the Large Eddy Simulation (LES) approach, which has emerged as a promising technique for problems involving time-dependent phenomena and coherent eddy structures. An Arrhenius-law reduced chemistry model allows combustion to be investigated in complex configurations.
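
As an illustration of this kind of model, a one-step reduced mechanism typically expresses the reaction rate through an Arrhenius law of the generic form (the notation below is generic and not the specific AVBP formulation):

 q = A \left( \frac{\rho Y_F}{W_F} \right)^{n_F} \left( \frac{\rho Y_O}{W_O} \right)^{n_O} \exp\left( -\frac{E_a}{R T} \right)

where A is the pre-exponential constant, E_a the activation energy, Y_F and Y_O the fuel and oxidizer mass fractions, W_F and W_O the corresponding molar masses, and n_F, n_O the reaction exponents.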

The extensive development of the physical models done at CERFACS is complemented by academic studies carried out at the EM2C laboratory of Ecole Centrale Paris (ECP) and at the Institut de Mécanique des Fluides de Toulouse (IMFT). Today, the ownership of AVBP is shared with the Institut Français du Pétrole (IFP), located in the Paris area, following an agreement on joint code development oriented towards piston engine applications. Important links to industry have been established with the Safran Group (Snecma, Turbomeca), Air Liquide and Gaz de France, as well as with Alstom and Siemens Power Generation.

The handling of unstructured and hybrid grids is one key feature of AVBP. Hybrid grids, in which elements of several different types are combined within the same mesh, bring together the advantages of the structured and unstructured grid methodologies in terms of gridding flexibility and solution accuracy. In order to handle such arbitrary hybrid grids, the data structure of AVBP employs a cell-vertex finite-volume approximation. The basic numerical methods rely on a Lax-Wendroff or a finite-element-type low-dissipation Taylor-Galerkin discretization, in combination with a linear-preserving artificial viscosity model.
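
To illustrate the kind of scheme involved, the minimal Python sketch below applies a classical Lax-Wendroff update to the one-dimensional linear advection equation on a periodic grid. It is a textbook illustration of the general technique only, not AVBP's cell-vertex finite-volume implementation on unstructured three-dimensional grids.

 import numpy as np

 # Lax-Wendroff update for u_t + a u_x = 0 on a uniform periodic 1D grid.
 def lax_wendroff_step(u, a, dx, dt):
     c = a * dt / dx                               # CFL number
     up = np.roll(u, -1)                           # u_{i+1}
     um = np.roll(u, +1)                           # u_{i-1}
     return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

 # Advect a Gaussian pulse at unit speed with CFL = 0.4
 x = np.linspace(0.0, 1.0, 200, endpoint=False)
 u = np.exp(-200.0 * (x - 0.3) ** 2)
 dx = x[1] - x[0]
 for _ in range(100):
     u = lax_wendroff_step(u, a=1.0, dx=dx, dt=0.4 * dx)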

AVBP is built upon a modular software library that provides integrated parallel domain partitioning and data-reordering tools, handles message passing, and supplies supporting routines for dynamic memory allocation, parallel I/O and iterative methods. AVBP is highly portable to most standard platforms, including PCs, workstations and mainframes, and has proven efficient on most parallel architectures.
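
As a rough illustration of the partitioning and message-passing pattern described above (a generic sketch using the mpi4py bindings, not AVBP's own library), the example below gives each MPI rank a slice of a one-dimensional field and exchanges one halo cell with each periodic neighbour.

 from mpi4py import MPI
 import numpy as np

 # Each rank owns n_local cells plus two halo cells filled from its neighbours.
 comm = MPI.COMM_WORLD
 rank, size = comm.Get_rank(), comm.Get_size()
 n_local = 10
 u = np.full(n_local + 2, float(rank))     # local field, halos at u[0] and u[-1]

 left = (rank - 1) % size                  # periodic neighbour ranks
 right = (rank + 1) % size

 # Send the last owned cell to the right neighbour while receiving the left
 # halo, and vice versa.
 u[0] = comm.sendrecv(u[n_local], dest=right, source=left)
 u[n_local + 1] = comm.sendrecv(u[1], dest=left, source=right)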

Mesh related aspects of AVBP are handled by the multi-function grid-preprocessor HIP. This grid manipulation tool allows various operations such as generic solution interpolation between two grids, grid cutting or merging, grid validation, adaptive local grid refinement, grid extrusion or the creation of axi-symmetric grids.

The AVBP solver is used in many bilateral industrial collaborations and national research programmes, such as the joint R&D combustion initiative INCA. At the European level, AVBP is used in several projects of the 5th and 6th EC Framework Programmes: PRECCINSTA and INTELLECT-DM on low-NOx studies for gas turbines, MOLECULES and DESIRE on gas turbine flows and fluid-structure interaction in liners, FUELCHIEF on fuel-staged combustion instabilities, and LESSCO2 for piston engines. Several research fellows use AVBP within the Marie Curie actions FLUISTCOM (2004-2007) and ECCOMET (to start in 2006).

Finally, AVBP is used by members of the CFD team for research within the demanding Summer Program of the Center for Turbulence Research at Stanford University.


YALES2

YALES2 is a research code started in 2007 that aims at simulating two-phase combustion, from primary atomization to pollutant prediction, on massive complex meshes. V. Moureau and G. Lartigue at CORIA are the two maintainers of the code. It is used and developed by more than 100 people at CORIA and in several laboratories grouped in the joint SUCCESS initiative, which promotes super-computing and supports the training on and porting of the AVBP and YALES2 codes. YALES2 is also used in the aeronautical, automotive and process engineering industries through research projects.

YALES2 has been ported to the major HPC platforms: Intel clusters (Curie and Airain at CEA, Antares at CRIHAN), Blue Gene machines (P and Q at IDRIS and JUELICH), an ARM cluster (MONTBLANC project) and Intel Xeon Phi.

YALES2 can efficiently handle unstructured meshes with several billion elements, thus enabling the DNS and LES of laboratory and semi-industrial configurations. The solvers of YALES2 cover a wide range of phenomena and applications, and they may be assembled to address multi-physics problems. YALES2 is written in Fortran 90, with some Fortran 2008 features used for memory contiguity, and is parallelized with MPI-1. The external libraries used in the code are PARMETIS, PT-SCOTCH, HDF5 and FFTW3.

YALES2 solves the low-Mach-number Navier-Stokes equations with a projection method for constant- and variable-density flows. These equations are discretized with a 4th-order central scheme in space and a 4th-order Runge-Kutta-like scheme in time. The efficiency of projection approaches is usually driven by the performance of the Poisson solver. In YALES2, the linear solver is a highly efficient Deflated Preconditioned Conjugate Gradient (DPCG) method based on two mesh levels. As a result, YALES2 is currently used for production runs with meshes of 18 billion cells on 16,384 cores.
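
For illustration, the Python sketch below shows a generic preconditioned conjugate gradient iteration for a symmetric positive-definite system. The deflation step and the two-level (coarse-mesh) structure used in YALES2 are not shown, and the function names and test problem are hypothetical.

 import numpy as np

 def pcg(A, b, apply_Minv, tol=1e-8, maxiter=1000):
     """Generic preconditioned conjugate gradient for A x = b (illustration
     only; YALES2's solver additionally deflates the system with a coarse
     mesh level)."""
     x = np.zeros_like(b)
     r = b - A @ x                      # initial residual
     z = apply_Minv(r)                  # preconditioned residual
     p = z.copy()                       # first search direction
     rz = r @ z
     for _ in range(maxiter):
         Ap = A @ p
         alpha = rz / (p @ Ap)          # step length along p
         x += alpha * p
         r -= alpha * Ap
         if np.linalg.norm(r) < tol * np.linalg.norm(b):
             break
         z = apply_Minv(r)
         rz_new = r @ z
         beta = rz_new / rz             # conjugacy coefficient
         p = z + beta * p
         rz = rz_new
     return x

 # Hypothetical test: 1D Poisson-like matrix with a Jacobi preconditioner
 n = 100
 A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
 b = np.ones(n)
 x = pcg(A, b, apply_Minv=lambda r: r / np.diag(A))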