This repository reproduces the results of the paper Deep learning for fast simulation of seismic waves in complex media, B. Moseley, T. Nissen-Meyer and A. Markham, 2020, Solid Earth.

- Seismic simulation is crucial for many geophysical applications, yet traditional approaches are computationally expensive.
- We present two deep neural networks which are able to simulate seismic waves in complex media.
- The first network simulates seismic waves in horizontally layered media and uses a WaveNet design.
- The second network is significantly more general: it simulates seismic waves in faulted media with arbitrary layers, fault properties and an arbitrary location of the seismic source on the surface of the media, using a conditional autoencoder design.
- Both networks are multiple orders of magnitude faster than traditional simulation methods.
- We discuss the challenges of extending deep learning approaches to the more complex, elastic and 3-D Earth models required for practical simulation tasks.
In this study our goal is to understand whether deep neural networks can simulate seismic waves in synthetic media.
To help answer this question, we consider simulating the seismic response from a single fixed point source propagating through a 2D acoustic velocity model at multiple receiver locations horizontally offset from the source, shown below:
Our task is then as follows:
Given a randomly selected velocity model as input, can we train a neural network to simulate the pressure response recorded at each receiver location?
We wish the network to generalise well to velocity models unseen during training. In the paper we also discuss the challenges of extending this approach to more complex, elastic and 3-D Earth models required for practical simulation tasks.
We design two neural networks to complete this task:
- The first network simulates the seismic response in horizontally layered media and uses a WaveNet design. The input to the network is a velocity model converted to its corresponding reflectivity series and its output is a prediction of the seismic response at 11 fixed receiver locations.
- The second network simulates the seismic response in faulted media with arbitrary layers, fault properties and an arbitrary location of the seismic source on the surface of the media, and uses a conditional autoencoder design. The input to this network is the velocity model without any preprocessing applied and its output is a prediction of the seismic response at 32 fixed receiver locations. The input source location is concatenated onto the network's latent vector, allowing the network to learn the effect of the source location.
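The way the second network conditions on the source location can be pictured with a minimal PyTorch sketch; all layer sizes, array shapes and names below are illustrative assumptions, not the repository's actual implementation:

```python
import torch
import torch.nn as nn

class ConditionalAutoencoderSketch(nn.Module):
    """Illustrative sketch only: encode the velocity model, concatenate the
    source location onto the latent vector, then decode receiver gathers."""

    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),  # (1,128,128) -> (16,64,64)
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 64 * 64, latent_dim),
        )
        # +1 input feature: the (normalised) source position on the surface
        self.decoder = nn.Linear(latent_dim + 1, 32 * 300)  # 32 receivers x 300 time samples

    def forward(self, velocity, source_x):
        z = self.encoder(velocity)            # (batch, latent_dim)
        z = torch.cat([z, source_x], dim=1)   # condition on the source location
        return self.decoder(z).view(-1, 32, 300)

model = ConditionalAutoencoderSketch()
v = torch.randn(4, 1, 128, 128)   # batch of 2D velocity models
sx = torch.rand(4, 1)             # source x-positions, normalised to [0, 1]
print(model(v, sx).shape)         # torch.Size([4, 32, 300])
```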
Both networks are trained using many ground truth finite-difference (FD) simulation examples.
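Concretely, training is supervised regression of predicted gathers against the FD gathers; here is a minimal loop sketch reusing the `ConditionalAutoencoderSketch` class above, with dummy data and an illustrative loss and optimiser (not necessarily those used in the paper):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# dummy (velocity model, source position, FD gather) triples standing in for real training data
data = TensorDataset(torch.randn(16, 1, 128, 128),
                     torch.rand(16, 1),
                     torch.randn(16, 32, 300))
loader = DataLoader(data, batch_size=4)

model = ConditionalAutoencoderSketch()  # defined in the sketch above
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for velocity, source_x, gather_true in loader:
    gather_pred = model(velocity, source_x)
    loss = torch.nn.functional.l1_loss(gather_pred, gather_true)  # illustrative misfit
    opt.zero_grad()
    loss.backward()
    opt.step()
```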
`seismic-simulation-complex-media` requires Python (for deep learning) and Fortran (for running finite-difference simulations using the SEISMIC CPML library) libraries to run.
We recommend setting up a new environment, for example:
```bash
conda create -n seismic_sim python=3.6  # use the Anaconda package manager
conda activate seismic_sim
```
and then installing the following Python dependencies:
```bash
pip install --ignore-installed --upgrade [packageURL]  # install TensorFlow (get packageURL from https://www.tensorflow.org/install/pip)
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
conda install scipy matplotlib jupyter
pip install tensorboardX
```
All of our work was completed using TensorFlow version 1.14.0 (for the WaveNet network) and PyTorch version 1.5.0 (for the conditional autoencoder network).
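To confirm your environment matches these versions, a quick check (nothing repository-specific):

```python
import tensorflow as tf
import torch

print(tf.__version__)     # we used 1.14.0 for the WaveNet network
print(torch.__version__)  # we used 1.5.0 for the conditional autoencoder network
```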
Next, download the source code:
```bash
git clone https://github.com/benmoseley/seismic-simulation-complex-media.git
```
Finally, compile the SEISMIC CPML Fortran library using:
```bash
cd seismic-simulation-complex-media/generate_data/
make all
```
This should create an executable file called `xmodified_seismic_CPML_2D_pressure_second_order` in `generate_data`.
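As a quick sanity check that the build succeeded, you can test for the executable from the repository root (a trivial check; the path assumes the default build location):

```python
import os

# expected location of the compiled SEISMIC CPML executable, relative to the repository root
exe = "generate_data/xmodified_seismic_CPML_2D_pressure_second_order"
print("build OK" if os.path.isfile(exe) else "executable missing - re-run make all")
```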
The purpose of each folder in the repository is as follows:
- `generate_data`: generates the input velocity models and ground truth FD simulation data for training both networks.
- `marmousi`: parses and generates FD simulations using the Marmousi velocity model (Martin et al., 2006) for testing the generalisation ability of the conditional autoencoder network.
- `pyray`: provides code for carrying out 2D ray tracing, which is used as a benchmark for the WaveNet network.
- `wavenet`: defines and trains the WaveNet network.
- `autoencoder`: defines and trains the conditional autoencoder network.
- `shared_modules`: provides various helper Python modules.
To reproduce our workflow, run the following scripts in this order:
1. `generate_data/generate_velocity_models.py`: generates random velocity models for training and testing the networks.
2. `generate_data/generate_forward_simulations.py`: generates FD simulations using these velocity models.
3. `generate_data/preprocess_wavenet.py`: preprocesses the velocity models used for training the WaveNet into their corresponding reflectivity series (a sketch of this conversion is given after this list).
4. `generate_data/convert_to_flat_binary_wavenet.py`: converts the velocity models and FD simulations used for training the WaveNet into a flat binary file for efficient training.
5. `generate_data/convert_to_flat_binary_autoencoder.py`: converts the velocity models and FD simulations used for training the conditional autoencoder into a flat binary file for efficient training.
6. `wavenet/main.py`: trains the WaveNet (and inverse WaveNet) networks, outputting training summaries to TensorBoard.
7. `autoencoder/main.py`: trains the conditional autoencoder networks, outputting training summaries to TensorBoard.
8. Finally, `generate_data`, `marmousi`, `wavenet` and `autoencoder` contain Jupyter notebooks which carry out analysis of our results and will reproduce the results plots in the Solid Earth paper.
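For step 3, the conversion to a reflectivity series can be pictured as computing normal-incidence reflection coefficients at each depth sample; a minimal NumPy sketch, assuming constant density (a hypothetical helper, not the repository's exact preprocessing):

```python
import numpy as np

def reflectivity_series(velocity_profile):
    """Normal-incidence reflection coefficients for a 1D velocity profile.

    Assumes constant density, so acoustic impedance is proportional to
    velocity: r_i = (v_{i+1} - v_i) / (v_{i+1} + v_i).
    """
    v = np.asarray(velocity_profile, dtype=float)
    return (v[1:] - v[:-1]) / (v[1:] + v[:-1])

# example: a three-layer model sampled on a regular depth grid
v = np.array([1500.0, 1500.0, 2000.0, 2000.0, 2500.0])
print(reflectivity_series(v))  # non-zero only at the two layer boundaries
```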
Steps 1-5 need to be re-run for each type of train and test dataset used in the paper. See the scripts saved in the subfolders `generate_data/velocity` and `generate_data/gather` for the configurations we used for each type of dataset, and the paper for a description of each dataset. For steps 6-7, the `constants.py` file in `wavenet` and `autoencoder` can be used to set different network hyperparameters; see the saved files in `wavenet/server` and `autoencoder/server` for the configurations we used in the paper.
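As an illustration only, a `constants.py`-style hyperparameter override might look like the snippet below; these names are hypothetical, so check the saved files in `wavenet/server` and `autoencoder/server` for the real options:

```python
# hypothetical hyperparameter values, for illustration only -
# see wavenet/server and autoencoder/server for the configurations used in the paper
BATCH_SIZE = 64
LEARNING_RATE = 1e-4
N_EPOCHS = 200
```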
For further questions please contact us.