Commit

Minor changes to end of notebook
donaldcummins committed Mar 7, 2024
1 parent 65f5d0d commit 9cee401
Showing 1 changed file with 39 additions and 41 deletions.
80 changes: 39 additions & 41 deletions PINNs_NavierStokes_example.ipynb
@@ -246,14 +246,14 @@
"\n",
"<h1> Navier-Stokes inverse data-driven discovery of PDEs </h1>\n",
"\n",
"Navier-Stokes equations describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier-Stokes equations in their full and simplified forms help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of the dispersion of pollutants, and many other applications. Let us consider the Navier-Stokes equations in two dimensions (2D) given explicitly by\n",
"The Navier-Stokes equations describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier-Stokes equations in their full and simplified forms help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of the dispersion of pollutants, and many other applications. Let us consider the Navier-Stokes equations in two dimensions (2D) given explicitly by\n",
"\n",
"\\begin{equation} \n",
"u_t + \\lambda_1 (u u_x + v u_y) = -p_x + \\lambda_2(u_{xx} + u_{yy}),\\\\\n",
"v_t + \\lambda_1 (u v_x + v v_y) = -p_y + \\lambda_2(v_{xx} + v_{yy}),\n",
"\\end{equation}\n",
" \n",
"where $u(t, x, y)$ denotes the $x$-component of the velocity field, $v(t, x, y)$ the $y$-component, and $p(t, x, y)$ the pressure. Here, $\\lambda = (\\lambda_1, \\lambda_2)$ are the unknown parameters. Solutions to the Navier-Stokes equations are searched in the set of divergence-free functions; i.e.,\n",
"where $u(t, x, y)$ denotes the $x$-component of the velocity field, $v(t, x, y)$ the $y$-component, and $p(t, x, y)$ the pressure. Here, $\\lambda = (\\lambda_1, \\lambda_2)$ are the unknown parameters. Solutions to the Navier-Stokes equations are sought in the set of divergence-free functions; i.e.,\n",
"\n",
"\\begin{equation} \n",
"u_x + v_y = 0.\n",
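As a quick editorial illustration of the incompressibility condition above (not part of the notebook's diff), a hand-picked divergence-free field can be checked numerically. The field u = sin(x)cos(y), v = -cos(x)sin(y) is a standard analytic example chosen here purely for illustration:

```python
import numpy as np

# Hypothetical divergence-free field (not from the notebook):
# u = sin(x)cos(y), v = -cos(x)sin(y) satisfies u_x + v_y = 0 exactly.
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.linspace(0.0, 2.0 * np.pi, 200)
X, Y = np.meshgrid(x, y, indexing="ij")
u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)

# Central finite differences approximate the spatial derivatives.
u_x = np.gradient(u, x, axis=0)
v_y = np.gradient(v, y, axis=1)
divergence = u_x + v_y

max_div = np.abs(divergence).max()
print(max_div)  # small, limited only by finite-difference error
```

Any candidate solution field should satisfy this constraint to within discretisation error.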
@@ -321,7 +321,7 @@
"\\end{bmatrix}\n",
"\\end{equation}\n",
" \n",
"can be trained by minimizing the mean squared error loss$\n",
"can be trained by minimizing the mean squared error loss\n",
"\n",
"\\begin{equation}\n",
"\\begin{array}{rl}\n",
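The mean squared error loss referred to above combines a data-misfit term with a penalty on the PDE residuals. The following NumPy sketch (with made-up stand-in arrays, not the notebook's variables) shows that structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: network outputs vs. measured data, plus the
# PDE residuals f_u, f_v evaluated at the training points.
u_pred, u_data = rng.normal(size=100), rng.normal(size=100)
v_pred, v_data = rng.normal(size=100), rng.normal(size=100)
f_u, f_v = rng.normal(size=100), rng.normal(size=100)

def mse(a, b=0.0):
    """Mean squared error; with b=0 this penalises the PDE residual itself."""
    return np.mean((a - b) ** 2)

# Total loss = data misfit + residual penalty.
loss = mse(u_pred, u_data) + mse(v_pred, v_data) + mse(f_u) + mse(f_v)
print(loss)
```

Driving the residual terms to zero is what forces the network to respect the PDE, while the data terms fit the observations.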
@@ -374,13 +374,13 @@
"source": [
"<div style=\"background-color: #ccffcc; padding: 10px;\">\n",
"\n",
"# Initalise the neural network \n",
"# Initialise the neural network \n",
" \n",
"`init` is called passing in the training data `x_train`, `y_train`, `t_train`, `u_train` and `v_train` with information about the neural network layers\n",
"`init` is called passing in the training data `x_train`, `y_train`, `t_train`, `u_train` and `v_train` with information about the neural network layers.\n",
" \n",
"# Extract vars\n",
" \n",
"`init` reformats some of the data and outputs model features that we need to pass into the training function `train`\n",
"`init` reformats some of the data and outputs model features that we need to pass into the training function `train`.\n",
"\n",
"</div>"
]
@@ -395,16 +395,16 @@
"# Advanced \n",
" \n",
" \n",
"Once you have run through the notebook once you may wish to alter the optamizer used in the `init()` function to see the large effect optamizer choice may have. \n",
"Once you have run through the notebook once you may wish to alter the optimizer used in the `init()` function to see the large effect optimizer choice may have.\n",
" \n",
"We've highlighted in the comments a number of possible optamizers to use from the [tf.compat.v1.train](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train) module. \n",
"*This method was chosen to limit tensorflow version modifications required from the original source code*\n",
"We've highlighted in the comments a number of possible optimizers to use from the [tf.compat.v1.train](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train) module. \n",
"*This method was chosen to limit TensorFlow version modifications required from the original source code.*\n",
" \n",
"You can learn more about different optamizers [here](https://towardsdatascience.com/optimizers-for-training-neural-network-59450d71caf6)\n",
"You can learn more about different optimizers [here](https://towardsdatascience.com/optimizers-for-training-neural-network-59450d71caf6).\n",
" \n",
"</div>\n",
"\n",
"# init"
"# Init"
]
},
{
@@ -415,8 +415,8 @@
"outputs": [],
"source": [
"def init(x, y, t, u, v, layers):\n",
" # This line of code is required to prevent some tensorflow errors arrising from the\n",
" # inclusion of some tensorflw v 1 code \n",
" # This line of code is required to prevent some TensorFlow errors arising from the\n",
" # inclusion of some TensorFlow v1 code \n",
" tf.compat.v1.disable_eager_execution()\n",
" X = np.concatenate([x, y, t], 1)\n",
" # lb and ub denote lower and upper bounds on the inputs to the network\n",
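The `lb` and `ub` bounds mentioned in the comment are used to rescale the network inputs into [-1, 1]. A minimal NumPy sketch of that mapping (toy data; the column layout is hypothetical):

```python
import numpy as np

# Toy inputs, e.g. columns (x, t); lb/ub are per-column bounds.
X = np.array([[0.0, 10.0],
              [5.0, 20.0],
              [10.0, 30.0]])
lb = X.min(axis=0)
ub = X.max(axis=0)

# Affine map of each column onto [-1, 1], as assumed in the text above.
H = 2.0 * (X - lb) / (ub - lb) - 1.0
print(H.min(), H.max())  # -1.0 1.0
```

Keeping inputs in a fixed range prevents the tanh activations from saturating early in training.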
@@ -465,12 +465,12 @@
" \n",
" ##############################################################################################\n",
" # #\n",
" ## the optimizer is something that can be tuned to different requirements #\n",
" ## we have not investigated using different optimizers, the orignal code uses L-BFGS-B which # \n",
" ## is not tensorflow 2 compatible #\n",
" # The optimizer is something that can be tuned to different requirements. #\n",
" # We have not investigated using different optimizers. The original code uses L-BFGS-B, which # \n",
" # is not TensorFlow 2 compatible. #\n",
" # #\n",
" # SELECT OPTAMIZER BY UNCOMMENTING OUT one of the below lines AND RERUNNING CODE #\n",
" # You can alsoe edit the learning rate to see the effect of that #\n",
" # SELECT OPTIMIZER BY UNCOMMENTING one of the below lines AND RERUNNING CODE. #\n",
" # You can also edit the learning rate to see the effect of that. #\n",
" # #\n",
" ##############################################################################################\n",
" \n",
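The notebook swaps optimizers from `tf.compat.v1.train`; as a framework-free illustration of why optimizer and learning-rate choice matter (a toy NumPy example, not the notebook's training loop), compare plain gradient descent against gradient descent with momentum on a stiff quadratic:

```python
import numpy as np

def optimise(lr, momentum=0.0, steps=100):
    """Minimise the stiff quadratic f(w) = 0.5 * sum(scales * w**2)."""
    scales = np.array([1.0, 50.0])   # ill-conditioned: curvatures differ by 50x
    w = np.array([1.0, 1.0])
    velocity = np.zeros_like(w)
    for _ in range(steps):
        grad = scales * w
        velocity = momentum * velocity - lr * grad
        w = w + velocity
    return 0.5 * np.sum(scales * w**2)

plain = optimise(lr=0.01)
with_momentum = optimise(lr=0.01, momentum=0.9)
print(plain, with_momentum)
```

On this problem momentum reaches a much lower loss in the same number of steps, which is the kind of gap you may also see between optimizers on the PINN loss.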
@@ -506,9 +506,9 @@
"source": [
"<div style=\"background-color: #ccffcc; padding: 10px;\">\n",
"\n",
"`neural_net()` constructs the network Y where X is a matrix containing the input and output coordinates, i.e. x,t,u and X is normalised so that all values lie between -1 and 1, this improves training\n",
"`neural_net()` constructs the network Y, where X is a matrix containing the input and output coordinates, i.e. x, t, u. X is normalised so that all values lie between -1 and 1 (this improves training).\n",
"\n",
"`net_NS()` is where the PDE is encoded:\n",
"`net_NS()` is where the PDE is encoded.\n",
" \n",
"</div>"
]
@@ -520,7 +520,7 @@
"metadata": {},
"outputs": [],
"source": [
"def neural_net( X, weights, biases,lb, ub):\n",
"def neural_net(X, weights, biases, lb, ub):\n",
" \n",
" num_layers = len(weights) + 1\n",
"\n",
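For readers skimming the diff, the overall shape of `neural_net()` is a fully connected tanh network applied to normalised inputs. A hypothetical NumPy sketch (names, initialisation and layer sizes are illustrative, not the notebook's exact code):

```python
import numpy as np

def forward(X, weights, biases, lb, ub):
    """Sketch of a fully connected tanh network like neural_net():
    normalise inputs to [-1, 1], tanh hidden layers, linear output."""
    H = 2.0 * (X - lb) / (ub - lb) - 1.0
    for W, b in zip(weights[:-1], biases[:-1]):
        H = np.tanh(H @ W + b)
    return H @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
layers = [3, 20, 20, 2]  # e.g. inputs (x, y, t) -> two outputs
weights = [rng.normal(size=(m, n)) for m, n in zip(layers[:-1], layers[1:])]
biases = [np.zeros(n) for n in layers[1:]]

X = rng.uniform(size=(5, 3))
Y = forward(X, weights, biases, lb=np.zeros(3), ub=np.ones(3))
print(Y.shape)  # (5, 2)
```

The real implementation builds the same structure with TensorFlow variables so that automatic differentiation can supply the PDE derivatives.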
@@ -588,13 +588,13 @@
"\n",
"<div style=\"background-color: #cce5ff; padding: 10px;\">\n",
"\n",
"Once you have run through the notebook once you may wish to alter any the following \n",
"Once you have run through the notebook once you may wish to alter any of the following:\n",
" \n",
"- number of data training points `N_train`\n",
"- number of layers in the network `layers`\n",
"- number of neurons per layer `layers`\n",
"- number of data training points `N_train`,\n",
"- number of layers in the network `layers`,\n",
"- number of neurons per layer `layers`,\n",
" \n",
"to see the impact on the results\n",
"to see the impact on the results.\n",
"\n",
"</div>"
]
@@ -671,8 +671,6 @@
"\n",
"If this fails you may need to restart the notebook with a flag:\n",
"```bash\n",
"\n",
"\n",
"jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10\n",
"\n",
"```\n",
@@ -686,13 +684,13 @@
"source": [
"<div style=\"background-color: #ccffcc; padding: 10px;\">\n",
"\n",
"# Initalise the nerual network \n",
"# Initialise the neural network \n",
" \n",
"`init` is called passing in the training data `x_train`, `y_train`, `u_train` and `v_train` with information about the neural network layers. The bound information `lb` `ub` is included in the `init()` function\n",
"`init` is called passing in the training data `x_train`, `y_train`, `u_train` and `v_train` with information about the neural network layers. The bound information `lb` and `ub` is included in the `init()` function.\n",
" \n",
"# Extract vars\n",
" \n",
"`init` reformats some of the data and outputs model features that we need to pass into the training function `train`\n",
"`init` reformats some of the data and outputs model features that we need to pass into the training function `train`.\n",
" \n",
"</div>"
]
@@ -845,9 +843,9 @@
"source": [
"<div style=\"background-color: #cce5ff; padding: 10px;\">\n",
" \n",
"# Loading Pre trained model option \n",
"# Loading pre-trained model option \n",
" \n",
"If the training time is too slow you can skip the following line and load in a pretrained model instead set `loadweights = True` in the next cell. You can play around with different number of iterations to see the effects e.g. setting `saver.restore(sess, netSaveDir + 'model_at_iter15000.ckpt')`\n",
"If the training time is too slow you can skip the following line and load in a pretrained model instead: set `loadweights = True` in the next cell. You can play around with different numbers of iterations to see the effects, e.g. setting `saver.restore(sess, netSaveDir + 'model_at_iter15000.ckpt')`.\n",
"\n",
"</div>"
]
@@ -886,7 +884,7 @@
"\n",
"# Use trained model to predict from data sample\n",
" \n",
"`predict` will predict `u`, `v` and `p` using the trained model\n",
"`predict` will predict `u`, `v` and `p` using the trained model.\n",
"\n",
"</div>"
]
@@ -898,7 +896,7 @@
"source": [
"<div style=\"background-color: #cce5ff; padding: 10px;\">\n",
"\n",
"The `predict` function has an option `load=False` set by default. Alter this to `load=True` if you wish to load the previously trained model\n",
"The `predict` function has an option `load=False` set by default. Alter this to `load=True` if you wish to load the previously trained model.\n",
" \n",
"</div>"
]
@@ -939,7 +937,7 @@
"\n",
"# Calculate Errors\n",
" \n",
"if you have set the number of training iterations large enough the errors should be small.\n",
"If you have set the number of training iterations large enough the errors should be small.\n",
"\n",
"</div>"
]
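The error calculation referred to above is typically reported as a relative L2 norm between the exact and predicted fields; a small sketch of that metric (treat it as an assumption about the notebook's exact formula):

```python
import numpy as np

def relative_l2_error(exact, predicted):
    """Relative L2 error commonly reported for PINN predictions (a sketch,
    not necessarily the notebook's exact metric)."""
    return np.linalg.norm(exact - predicted) / np.linalg.norm(exact)

exact = np.array([1.0, 2.0, 3.0])
predicted = np.array([1.1, 1.9, 3.0])
print(relative_l2_error(exact, predicted))
```

A value well below 1% for `u` and `v` generally indicates the network has converged.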
@@ -1006,7 +1004,7 @@
"\n",
"# Using Noisy Data\n",
" \n",
"We're now going to repeat the previous steps but include some noise in our data to see the effect of that on our results\n",
"We're now going to repeat the previous steps but include some noise in our data to see the effect of that on our results.\n",
"\n",
"</div>"
]
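A common way to add the noise described above is Gaussian perturbation scaled by the data's standard deviation (this mirrors the recipe in Raissi et al.'s original code; treat the scale and variable names as assumptions, not a quote from this notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
u_train = rng.uniform(size=(1000, 1))  # stand-in for the clean training data

# 1% Gaussian noise relative to the data's standard deviation (assumed scale).
noise = 0.01
u_noisy = u_train + noise * np.std(u_train) * rng.standard_normal(u_train.shape)

print(np.abs(u_noisy - u_train).mean())
```

Repeating the training with `u_noisy` shows how robust the parameter estimates are to measurement error.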
@@ -1068,7 +1066,7 @@
" \n",
"# Loading pre-trained model option \n",
" \n",
"If the training time is too slow you can skip the following line and load in a pretrained model instead set `loadweights = True` in the next cell. You can play around with different number of iterations to see the effects e.g. setting `saver.restore(sess, netSaveDir + 'model_at_iter15000.ckpt')`\n",
"If the training time is too slow you can skip the following line and load in a pretrained model instead: set `loadweights = True` in the next cell. You can play around with different numbers of iterations to see the effects, e.g. setting `saver.restore(sess, netSaveDir + 'model_at_iter15000.ckpt')`.\n",
"\n",
"</div>"
]
@@ -1340,7 +1338,7 @@
"source": [
"<div style=\"background-color: #ccffcc; padding: 10px;\"> \n",
" \n",
"if you have not been able to run enough training iterations the figures produced running 10000 iterations can be found:\n",
"If you have not been able to run enough training iterations the figures produced running 10000 iterations can be found:\n",
" \n",
"* [Solution with network trained over 10000 iterations](figures/PINNS_NS_10000_PDE.png)\n",
"* [Figure comparing predicted vs exact with network trained over 10000 iterations](figures/PINNS_NS_10000_predict_vs_exact.png)\n",
@@ -1357,7 +1355,7 @@
"\n",
"It is also possible to use different sampling techniques for training data points. We randomly select $N_u$ data points, but alternative methods could be choosing only boundary points or choosing more points near the $t=0$ boundary.\n",
"\n",
"return [here](#init) to alter optimization method used\n",
"Return [here](#init) to alter the optimization method used.\n",
" \n",
"</div>"
]
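The alternative sampling strategies suggested above can be sketched in NumPy; the early-time weighting below is a hypothetical illustration, not something implemented in the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)
N_total, N_u = 5000, 500

# Random sampling, as used above: N_u indices uniformly without replacement.
idx_random = rng.choice(N_total, N_u, replace=False)

# A hypothetical alternative: bias sampling toward early times (near t = 0).
t = rng.uniform(0.0, 10.0, size=N_total)
weights = np.exp(-t)          # more weight near t = 0
weights /= weights.sum()
idx_early = rng.choice(N_total, N_u, replace=False, p=weights)

print(t[idx_random].mean(), t[idx_early].mean())
```

The mean time of the biased sample is much smaller, confirming the sampler concentrates points near the $t=0$ boundary.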
@@ -1373,7 +1371,7 @@
"\n",
"## Next steps\n",
"\n",
"Now we've demonstrated using PINNs for more complex equations we can take a breif look at Hidden Fluid Mechanics (*this final notebook is beyond the scope of these tutorials but provided to give a breif example of the methodology*)\n",
"Now we've demonstrated using PINNs for more complex equations we can take a brief look at Hidden Fluid Mechanics (*this final notebook is beyond the scope of these tutorials but provided to give a brief example of the methodology*).\n",
" \n",
"[Navier-Stokes PINNs Hidden Fluid Mechanics](PINNs_NavierStokes_HFM.ipynb)\n",
"</div>"
