This guide provides steps and code samples to build and run Barracuda Virtual Reactor on AWS using AWS ParallelCluster.
Barracuda Virtual Reactor simulates the 3D, transient behavior of fluid-particle systems, including multiphase hydrodynamics, heat balance, and chemical reactions. It is a product of CPFD Software; visit their webpage for more information.
AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. You can run AWS CLI commands against AWS services using your preferred shell, such as Bash, PowerShell, or Z shell, without needing to download or install command line tools.
You can launch AWS CloudShell from the AWS Management Console, and the AWS credentials that you used to sign in to the console are automatically available in a new shell session. This pre-authentication of AWS CloudShell users allows you to skip configuring credentials when interacting with AWS services using AWS CLI version 2. The AWS CLI is pre-installed on the shell's compute environment.
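As a quick sanity check, you can confirm that the shell session is already authenticated before moving on; aws sts get-caller-identity is a standard AWS CLI call that prints the account and identity in use:
# Optional sanity check: print the AWS account and identity this session uses.
aws sts get-caller-identity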
Let's start by downloading the repository containing the Infrastructure as Code for Barracuda into your AWS CloudShell environment.
In AWS CloudShell, run the commands below to download the repository and install the prerequisite software:
wget https://github.com/aws-samples/awsome-hpc/archive/refs/heads/main.tar.gz
mkdir -p AWSome-hpc
tar -xvzf main.tar.gz -C AWSome-hpc --strip-components 1
cd AWSome-hpc/apps/barracuda
bash ./scripts/setup/install_prerequisites.sh
The script installs the prerequisite software in your AWS CloudShell environment.
Create and activate your Python3 virtual environment
python3 -m venv .env
source .env/bin/activate
Install AWS ParallelCluster
pip3 install aws-parallelcluster==3.4.1
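If you want to confirm the installation succeeded, the ParallelCluster CLI can report its own version:
# Optional: verify the ParallelCluster CLI is installed and matches the pinned version.
pcluster version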
Set the AWS Region. The command below queries the EC2 instance metadata to determine the region in which the environment was created.
export AWS_REGION=`curl --silent http://169.254.169.254/latest/meta-data/placement/region`
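You can verify that the variable was set:
# Confirm the region was captured from the instance metadata.
echo ${AWS_REGION}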
Create the AWS ParallelCluster configuration file. The GPU queue uses NVIDIA GPU-accelerated Amazon EC2 instances such as p3.2xlarge and p4de.24xlarge.
. ./scripts/setup/create_parallelcluster_config.sh
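The script writes the configuration to config/barracuda-cluster.yaml. For orientation only, a minimal AWS ParallelCluster 3 configuration with a Slurm GPU queue looks roughly like the sketch below; the region, subnet ID, key name, instance types, and counts are placeholders, and the file generated by the script will differ:
# Illustrative sketch only -- the generated config/barracuda-cluster.yaml will differ.
Region: us-east-1                        # placeholder
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.2xlarge               # placeholder
  Networking:
    SubnetId: subnet-0123456789abcdef0   # placeholder
  Ssh:
    KeyName: my-key                      # placeholder
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: gpu-od-queue
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0     # placeholder
      ComputeResources:
        - Name: p3
          InstanceType: p3.2xlarge
          MaxCount: 4                    # placeholder
SharedStorage:
  - MountDir: /shared
    Name: shared-ebs
    StorageType: Ebs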
Create the Barracuda Cluster
CLUSTER_NAME="barracuda-cluster"
echo "export CLUSTER_NAME=${CLUSTER_NAME}" >> ~/.bashrc
pcluster create-cluster -n ${CLUSTER_NAME} -c config/barracuda-cluster.yaml --region ${AWS_REGION}
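Cluster creation takes several minutes. You can poll the provisioning status and wait for clusterStatus to report CREATE_COMPLETE before connecting:
# Poll the cluster status; connect once clusterStatus is CREATE_COMPLETE.
pcluster describe-cluster -n ${CLUSTER_NAME} --region ${AWS_REGION}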
Connect to the cluster
pcluster ssh -n ${CLUSTER_NAME} -i ~/.ssh/${SSH_KEY_NAME} --region ${AWS_REGION}
Download Barracuda
wget -P /shared https://cpfd-software.com/wp-content/uploads/2022/11/barracuda_virtual_reactor-22.1.0-Linux.tar.gz
Extract archive
tar -xvzf /shared/barracuda_virtual_reactor-22.1.0-Linux.tar.gz -C /shared
Install Barracuda
/shared/barracuda_virtual_reactor-22.1.0-Linux/barracuda_virtual_reactor-22.1.0-Linux.run install --default-answer --accept-licenses --confirm-command --root /shared/Barracuda/22.1.0
echo "export PATH=/shared/Barracuda/22.1.0/bin:$PATH" >> ~/.bashrc
In this section, you will go through the steps to run a Gasifier test case provided by Barracuda on AWS ParallelCluster.
Download the sample case.
wget -P /shared https://cpfd-software.com/wp-content/uploads/2023/02/barracuda_sample_case.zip
Place your license file at /shared/ls.rlmcloud.com.lic
Create a submission script that runs the simulation on one p3.2xlarge Amazon EC2 instance using an NVIDIA V100 GPU.
cat > barracuda-gasifier-sub.sh << EOF
#!/bin/bash
#SBATCH --job-name=barracuda-gasifier
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err
#SBATCH --partition=gpu-od-queue
#SBATCH --ntasks=1
#SBATCH --gpus=v100:1
#SBATCH --constraint=p3
# Use /scratch as the work directory if local storage exists,
# otherwise fall back to /tmp.
export WORK_DIR=/scratch/\$SLURM_JOB_ID
if [ ! -d /scratch ]; then
    export WORK_DIR=/tmp/\$SLURM_JOB_ID
fi
echo \$WORK_DIR
unzip -j /shared/barracuda_sample_case.zip -d \${WORK_DIR}
cd \${WORK_DIR}
export cpfd_LICENSE="/shared/ls.rlmcloud.com.lic"
/shared/Barracuda/22.1.0/bin/cpfd.x -ow -cc -ct -cbc -cic -qmdp -qll -qfe -gpu -d0 -fallback quit gasifier.prj
tar -czf /shared/barracuda-gasifier-results.tar.gz \${WORK_DIR}
EOF
You can also run on p4d.24xlarge or p4de.24xlarge Amazon EC2 instances by modifying the submission script above. For example, to run on a p4de.24xlarge instance, replace --gpus=v100:1 with --gpus=a100:1 and --constraint=p3 with --constraint=p4de.
Submit the job to the Slurm scheduler.
sbatch barracuda-gasifier-sub.sh
The job should complete in ~4 hours on one p3.2xlarge Amazon EC2 instance.
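While the job runs, you can track it with standard Slurm tooling; replace <jobid> with the job ID that sbatch printed:
# List your queued and running jobs.
squeue -u $USER
# Follow the solver output (file name comes from the #SBATCH --output pattern).
tail -f barracuda-gasifier_<jobid>.out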
Once the simulation completes, you can visualize the results. Extract the results archive
tar -xvzf /shared/barracuda-gasifier-results.tar.gz
Install the xdg-utils package.
sudo yum install -y xdg-utils
Let's exit the head node of AWS ParallelCluster to return to the AWS CloudShell environment.
exit
To visualize the results of the Gasifier test case, you will create a remote visualization session using NICE DCV.
pcluster dcv-connect -n ${CLUSTER_NAME} --key-path ~/.ssh/${SSH_KEY_NAME} --region ${AWS_REGION}
The command returns an HTTPS link to the DCV session.
Copy and paste the HTTPS link into a new tab of your web browser. It will open a remote visualization session.
Open a terminal.
Launch Barracuda by typing barracuda in the terminal.
Open the gasifier project file, gasifier.prj.
Visualize the results by selecting Post-processing > View results.
You can now view the results of the gasifier simulation.
To avoid unexpected charges to your account related to the Barracuda cluster, make sure you delete the cluster and its associated resources.
pcluster delete-cluster -n ${CLUSTER_NAME} --region ${AWS_REGION}
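Deletion also takes a few minutes; you can confirm that it finished once the cluster no longer appears in the list:
# Confirm deletion: the cluster should disappear from this list when done.
pcluster list-clusters --region ${AWS_REGION}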
The steps below are optional: you can skip them if you plan to deploy a cluster with Barracuda again in the future.
Delete the remaining components of the Barracuda solution
. ./scripts/cleanup/cleanup_solution_components.sh