Computational Modelling Group

Multiscale modelling of neutron star oceans

Homepage
https://github.com/harpolea/swerve
Started
25th September 2014
Research Team
Alice Harpole
Investigators
Ian Hawke

Interference of waves in two layers of fluid modelled using the swerve code.

We are interested in better understanding Type I X-ray bursts - thermonuclear explosions which occur in neutron star oceans. Neutron stars have very strong gravitational fields, so we are investigating the effects this may have on the burst physics by using a general relativistic model of their gravity. By studying bursts, we will be able to place tighter constraints on the neutron star radius and therefore better understand the star's interior physics and composition.

The physics of these bursts covers a wide range of scales, so our simulation code, swerve, must capture the physics across this whole range. To do this, we use a series of meshes of different resolutions, each with its own physical model chosen to capture the physics at that scale. The coarsest grid covers the whole domain; there we evolve equations that capture the large-scale physics - the relativistic shallow water equations. In areas of interest, we refine to a finer grid and apply equations that capture smaller-scale physics - the general relativistic compressible fluid equations. In the future, we plan to add a third level of refinement, where we will use the relativistic low Mach number equations (an approximation of the compressible equations which filters out the fast-moving sound waves) to capture the physics on the smallest scales.
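The coupling between grid levels can be illustrated with a minimal sketch. This is not swerve's implementation - the function names, the 1D setup, and the piecewise-constant interpolation are illustrative assumptions - but it shows the two transfer operations any such scheme needs: prolongation (filling a fine patch from coarse data) and restriction (averaging the fine solution back onto the coarse grid).

```python
import numpy as np

def prolong(coarse, r=2):
    """Interpolate coarse-grid data onto a grid r times finer
    (piecewise-constant injection, for simplicity)."""
    return np.repeat(coarse, r)

def restrict(fine, r=2):
    """Average fine-grid data back onto the underlying coarse cells."""
    return fine.reshape(-1, r).mean(axis=1)

# Coarse grid covering the whole domain (stand-in for the layer
# evolved with the relativistic shallow water equations).
nc = 16
coarse = np.sin(2 * np.pi * np.arange(nc) / nc)

# Refine a patch of interest (cells 4..8) by a factor of 2; in the
# real code this patch would evolve the compressible equations.
i0, i1, r = 4, 8, 2
fine = prolong(coarse[i0:i1], r)

# ... evolve `coarse` and `fine` with their respective models ...

# After the fine step, restrict back so the coarse grid stays
# consistent with the better-resolved solution.
coarse[i0:i1] = restrict(fine, r)
```

In practice each level also needs ghost cells filled from the level below at every timestep, and the two physical models must agree on the variables exchanged at the interface.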

Given the complexity of the equations being evolved, the size of the multidimensional domain, and the need to communicate data between grids of different resolutions and different physical models, this simulation is very computationally expensive. Consequently, we use parallel computing to make the problem feasible: the code is parallelised using CUDA so that it can be run on Iridis' GPUs, with MPI distributing the work across multiple GPUs.
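The core communication pattern in this kind of domain decomposition is the halo (ghost-cell) exchange: each GPU owns one subdomain and must receive its neighbours' edge cells before applying a stencil. A serial Python sketch of that pattern is below - the function names and the 1D, one-ghost-cell setup are illustrative assumptions, and the array copies stand in for the MPI send/receive calls that would run between GPUs.

```python
import numpy as np

NGHOST = 1  # ghost (halo) cells on each side of a subdomain

def split_domain(u, nranks):
    """Split a 1D array into per-rank chunks padded with ghost cells."""
    chunks = np.array_split(u, nranks)
    return [np.pad(c, NGHOST) for c in chunks]

def exchange_halos(subs):
    """Copy each subdomain's edge cells into its neighbours' ghosts -
    the step that MPI messages would perform across GPUs."""
    for i in range(len(subs) - 1):
        # fill the right ghost of rank i from rank i+1's left edge
        subs[i][-NGHOST:] = subs[i + 1][NGHOST:2 * NGHOST]
        # fill the left ghost of rank i+1 from rank i's right edge
        subs[i + 1][:NGHOST] = subs[i][-2 * NGHOST:-NGHOST]

u = np.arange(12, dtype=float)
subs = split_domain(u, nranks=3)
exchange_halos(subs)
# Each rank can now update its interior cells using up-to-date
# neighbour data, then exchange again before the next timestep.
```

On the real machine each exchange also crosses the host-device boundary (or uses GPU-aware MPI), which is why minimising the number and size of these messages matters for performance.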

Categories

Physical Systems and Engineering simulation: Astrophysics, CFD, Combustion, General Relativity, Turbulence

Algorithms and computational methods: Finite volume, Multi-physics, Multi-scale, Multigrid solvers

Visualisation and data handling software: HDF5, VisIt

Software Engineering Tools: Git

Programming languages and libraries: C, C++, CUDA, MPI, Python

Computational platforms: GPU, Iridis, Linux

Transdisciplinary tags: Scientific Computing