diff --git a/.gitignore b/.gitignore
index 2249e08..3d71b33 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,4 +4,5 @@ ConvolutionalNeuralNetworks/**/*.npz
ConvolutionalNeuralNetworks/data/real/
ConvolutionalNeuralNetworks/data/train_full_model/
ConvolutionalNeuralNetworks/data/train_fully_connected_model/
-ConvolutionalNeuralNetworks/vgg16_2head.png
\ No newline at end of file
+ConvolutionalNeuralNetworks/vgg16_2head.png
+Data/Stenosis2D.mat
diff --git a/PINNs_1DHeatEquationExample.ipynb b/PINNs_1DHeatEquationExample.ipynb
index 50ab951..6b1d79d 100644
--- a/PINNs_1DHeatEquationExample.ipynb
+++ b/PINNs_1DHeatEquationExample.ipynb
@@ -19,22 +19,22 @@
"source": [
"# Overview\n",
"\n",
- "This notebook is based on two papers: *[Physics-Informed Neural Networks: A Deep LearningFramework for Solving Forward and Inverse ProblemsInvolving Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999118307125)* and *[Hidden Physics Models: Machine Learning of NonlinearPartial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999117309014)* with the help of Fergus Shone and Michael Macraild.\n",
+ "This notebook is based on two papers: *[Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999118307125)* and *[Hidden Physics Models: Machine Learning of Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999117309014)* with the help of Fergus Shone and Michael Macraild.\n",
"\n",
- "These tutorials will go through solving Partial Differential Equations using Physics Informed Neuaral Networks focusing on the 1D Heat Equation and a more complex example using the Navier Stokes Equation\n",
+ "These tutorials will go through solving Partial Differential Equations using Physics Informed Neural Networks, focusing on the 1D Heat Equation and a more complex example using the Navier Stokes Equations.\n",
"\n",
- "**This introduction section is replicated in all PINN tutorial notebooks (please skip if you've already been through)** \n",
+ "**This introduction section is replicated in all PINN tutorial notebooks (please skip if you've already been through).** \n",
"\n",
"
\n",
"\n",
- "If you have not already then in your gitbash or terminal please run the following code in the LIFD_ENV_ML_NOTEBOOKS directory via the terminal(mac or linux) or git bash (windows) \n",
+ "If you have not already then please run the following code in the LIFD_ENV_ML_NOTEBOOKS directory via the terminal (mac or linux) or git bash (windows).\n",
" \n",
"```bash\n",
"git submodule init\n",
"git submodule update --init --recursive\n",
"```\n",
"\n",
- "**If this does not work please clone the [PINNs](https://github.com/maziarraissi/PINNs) repository into your Physics_Informed_Neural_Networks folder**\n",
+ "**If this does not work please clone the [PINNs](https://github.com/maziarraissi/PINNs) repository into your Physics_Informed_Neural_Networks folder.**\n",
"\n",
" \n",
"
"
@@ -49,20 +49,23 @@
"\n",
"Physics Informed Neural Networks
\n",
"\n",
- "For a typical Neural Network using algorithims like gradient descent to look for a hypothesis, data is the only guide, however if the data is noisy or sparse and we already have governing physical models we can use the knowledge we already know to optamize and inform the algoithms. This can be done via [feature enginnering]() or by adding a physicall inconsistency term to the loss function.\n",
+ "For a typical Neural Network using algorithms like gradient descent to look for a hypothesis, data is the only guide. However, if the data are noisy or sparse and we already have governing physical models, we can use the knowledge we already know to optimize and inform the algorithms. This can be done via [feature engineering](https://en.wikipedia.org/wiki/Feature_engineering) or by adding a physical inconsistency term to the loss function.\n",
+ "\n",
+ "\n",
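+ "As a rough illustration (not taken from the papers; `u_pred`, `u_data` and `f_pred` are placeholder names), such a combined loss might look like:\n",
+ "\n",
+ "```python\n",
+ "# hypothetical sketch: data misfit plus a PDE-residual (physics inconsistency) penalty\n",
+ "loss = tf.reduce_mean(tf.square(u_pred - u_data)) + tf.reduce_mean(tf.square(f_pred))\n",
+ "```\n",
+ "\n",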
"\n",
"\n",
" \n",
- " \n",
+ " \n",
" \n",
"## The very basics\n",
"\n",
- "If you know nothing about neural networks there is a [toy neural network python code example](https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS/tree/main/ToyNeuralNetwork) included in the [LIFD ENV ML Notebooks Repository]( https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS). Creating a 2 layer neural network to illustrate the fundamentals of how Neural Networks work and the equivlent code using the python machine learning library [tensorflow](https://keras.io/). \n",
+ "If you are new to neural networks there is a [toy neural network python code example](https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS/tree/main/ToyNeuralNetwork) included in the [LIFD ENV ML Notebooks Repository]( https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS). There we build a two-layer neural network to illustrate the fundamentals of how Neural Networks work, and give equivalent code using the Python machine learning library [TensorFlow](https://keras.io/).\n",
"\n",
" \n",
- "## Recommended reading \n",
+ "## Recommended reading\n",
" \n",
- "The in-depth theory behind neural networks will not be covered here as this tutorial is focusing on application of machine learning methods. If you wish to learn more here are some great starting points. \n",
+ "The in-depth theory behind neural networks will not be covered here as this tutorial is focusing on application of machine learning methods. If you wish to learn more, here are some great starting points. \n",
+ " \n",
"\n",
"* [All you need to know on Neural networks](https://towardsdatascience.com/nns-aynk-c34efe37f15a) \n",
"* [Introduction to Neural Networks](https://victorzhou.com/blog/intro-to-neural-networks/)\n",
@@ -90,18 +93,22 @@
" \n",
"## Physics informed Neural Networks\n",
"\n",
- "Neural networks work by using lots of data to calculate weights and biases from data alone to minimise the loss function enabling them to act as universal fuction approximators. However these loose their robustness when data is limited. However by using know physical laws or empirical validated relationships the solutions from neural networks can be sufficiently constrianed by disregardins no realistic solutions.\n",
+ "Neural networks work by using lots of data to calculate weights and biases which minimise the loss function, enabling them to act as universal function approximators. Because they rely purely on observed relationships in the data, these networks lose their robustness when data is limited. However, by using known physical laws or empirically validated relationships, the solutions from neural networks can be sufficiently constrained by disregarding unrealistic solutions.\n",
" \n",
- "A Physics Informed Nueral Network considers a parameterized and nonlinear partial differential equation in the genral form;\n",
+ "A Physics Informed Neural Network considers a parameterized and nonlinear partial differential equation in the general form\n",
"\n",
+ "\n",
+ "\n",
+ " \n",
"\\begin{align}\n",
- "u_t + \\mathcal{N}[u; \\lambda] = 0, x \\in \\Omega, t \\in [0,T],\\\\\n",
+ " u_t + \\mathcal{N}[u; \\lambda] &= 0, && x \\in \\Omega, t \\in [0,T],\\\\\n",
"\\end{align}\n",
+ " \n",
"\n",
"\n",
- "where $\\mathcal{u(t,x)}$ denores the hidden solution, $\\mathcal{N}$ is a nonlinear differential operator acting on $u$, $\\mathcal{\\lambda}$ and $\\Omega$ is a subset of $\\mathbb{R}^D$ (the perscribed data). This set up an encapuslate a wide range of problems such as diffusion processes, conservation laws, advection-diffusion-reaction systems, and kinetic equations and conservation laws. \n",
+ "where $u(t,x)$ denotes the hidden solution, $\\mathcal{N}[\\cdot;\\lambda]$ is a nonlinear differential operator acting on $u$ and parameterised by $\\lambda$, and $\\Omega$ is a subset of $\\mathbb{R}^D$ (the prescribed data domain). This setup encapsulates a wide range of problems such as diffusion processes, conservation laws, advection-diffusion-reaction systems, and kinetic equations.\n",
"\n",
- "Here we will go though this for the 1D headt equation and Navier stokes equations\n",
+ "Here we will go through this for the 1D Heat equation and for the Navier Stokes equations.\n",
"\n",
"\n",
" "
@@ -117,14 +124,14 @@
" Python
\n",
"\n",
" \n",
- "## Tensorflow \n",
+ "## TensorFlow \n",
" \n",
- "There are many machine learning python libraries available, [TensorFlow](https://www.tensorflow.org/) a is one such library. If you have GPUs on the machine you are using TensorFlow will automatically use them and run the code even faster!\n",
+ "There are many machine learning Python libraries available. [TensorFlow](https://www.tensorflow.org/) is one such library. If you have GPUs on your machine, TensorFlow will automatically use them and run the code even faster!\n",
"\n",
"## Further Reading\n",
"\n",
"* [Running Jupyter Notebooks](https://jupyter.readthedocs.io/en/latest/running.html#running)\n",
- "* [Tensorflow optimizers](https://www.tutorialspoint.com/tensorflow/tensorflow_optimizers.htm)\n",
+ "* [TensorFlow optimizers](https://www.tutorialspoint.com/tensorflow/tensorflow_optimizers.htm)\n",
"\n",
"\n",
" \n",
@@ -140,19 +147,19 @@
" \n",
" Requirements
\n",
"\n",
- "These notebooks should run with the following requirements satisfied\n",
+ "These notebooks should run with the following requirements satisfied.\n",
"\n",
" Python Packages:
\n",
"\n",
"* Python 3\n",
- "* tensorflow > 2\n",
+ "* TensorFlow > 2\n",
"* numpy \n",
"* matplotlib\n",
"* scipy\n",
"\n",
" Data Requirements
\n",
" \n",
- "This notebook referes to some data included in the git hub repositroy\n",
+ "This notebook refers to some data included in the GitHub repository, imported via the git submodules command mentioned in the installation instructions.\n",
" \n",
"\n"
]
@@ -179,7 +186,7 @@
"metadata": {},
"source": [
"\n",
- "Load in all required modules (includig some auxillary code) and turn off warnings. \n",
+ "Load in all required modules (including some auxiliary code) and turn off warnings. \n",
"
"
]
},
@@ -249,9 +256,9 @@
"\\frac{\\partial u}{\\partial t} = k \\frac{\\partial^2 u}{\\partial x^2 },\n",
"\\end{equation}\n",
" \n",
- "where $k$ is a material parameter called the coefficient of thermal diffusivitiy.\n",
+ "where $k$ is a material parameter called the coefficient of thermal diffusivity.\n",
"\n",
- "This equation can be solved using numerical methods, such as finite differences or finite elements. For this notebook, we have solved the above equation numerically on a domain of $x \\in [0,1]$ and $t \\in [0, 0.25]$. Solving this equation numerically gives us a spatiomtemporal domain $(x,t)$ and corresponding values of the solution $u$.\n",
+ "This equation can be solved using numerical methods, such as finite differences or finite elements. For this notebook, we have solved the above equation numerically on a domain of $x \\in [0,1]$ and $t \\in [0, 0.25]$. Solving this equation numerically gives us a spatiotemporal domain $(x,t)$ and corresponding values of the solution $u$.\n",
"\n",
"\n",
" \n",
@@ -262,7 +269,7 @@
"![PINNS.png](PINNS.png)\n",
" \n",
" \n",
- "Net U in the above diagram approximations a function that maps from $(x,t) \\mapsto u$. $\\sigma$ represents the biases and weights for the each neuron of the network. These $\\sigma$ values are the network parameters that are updated after each iteration. AD means Automatic Differentiation - this is the chain rule-based differentiation procedure that allows for differentiation of network outputs with respect to its inputs, e.g. differenting $u$ with respect to $x$, or calculating $\\frac{\\partial u}{\\partial x}$. The I node in the AD section represents the identity operation, i.e. keeping $u$ fixed without applying any differentiation. \n",
+ "Net U in the above diagram approximates a function that maps from $(x,t) \\mapsto u$. $\\sigma$ represents the biases and weights for each neuron of the network. These $\\sigma$ values are the network parameters that are updated after each iteration. AD means Automatic Differentiation - this is the chain rule-based differentiation procedure that allows for differentiation of network outputs with respect to its inputs, e.g. differentiating $u$ with respect to $x$, or calculating $\\frac{\\partial u}{\\partial x}$. The I node in the AD section represents the identity operation, i.e. keeping $u$ fixed without applying any differentiation. \n",
"\n",
"After the automatic differentiation part of the network, we have two separate loss function components - the data loss and the PDE loss. The data loss term is calculated by finding the difference between the network outputs/predictions $u$ and the ground truth values of $u$, which could come from simulation or experiment. The data loss term enforces the network outputs to match known data points, which are represented by the pink box labelled \"Data\". The PDE loss term is where we add the \"physics-informed\" part of the network. Using automatic differentiation, we are able to calculate derivatives of our network outputs, and so we are able to construct a loss function that enforces the network to match the PDE that is known to govern the system. In this case, the PDE loss term is defined as:\n",
" \n",
@@ -272,7 +279,7 @@
" \n",
"where f is the residual of the 1D heat equation. By demanding that $f$ is minimised as our network train, we ensure that the network outputs obey the underlying PDE that governs the system. We then calculate the total loss of the system as a sum of the data loss and the PDE loss.\n",
"\n",
- "The loss is calculated after each pass through the network and when it is above a certain tolerance, the weights and biases are updated using a gradient descent step. When the loss falls below the tolerance the network is trained. In inference mode, we can then input a fine mesh of spatiomteporal coordinates and the network will find the solution at each of these points.\n",
+ "The loss is calculated after each pass through the network and when it is above a certain tolerance, the weights and biases are updated using a gradient descent step. When the loss falls below the tolerance the network is trained. In inference mode, we can then input a fine mesh of spatiotemporal coordinates and the network will find the solution at each of these points.\n",
" \n",
""
]
@@ -285,10 +292,10 @@
"
\n",
"\n",
"\n",
- "**$u(x,t)$** can then be defined below as the function `net_u` and the physics informed neural network **$f(x,t)$** is outline in function `net_f`\n",
+ "**$u(x,t)$** can then be defined below as the function `net_u` and the physics informed neural network **$f(x,t)$** is outlined in function `net_f`\n",
"\n",
"`neural_net()` constructs the network U where X is a matrix containing the input and output coordinates, i.e. x,t,u\n",
- "and X is normalised so that all values lie between -1 and 1, this improves training\n",
+ "and X is normalised so that all values lie between -1 and 1 (this improves training)\n",
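+ "\n",
+ "A minimal sketch of that scaling (using the lower/upper bound vectors `lb` and `ub` passed into the function):\n",
+ "\n",
+ "```python\n",
+ "# map each input column into [-1, 1] using the domain bounds\n",
+ "H = 2.0 * (X - lb) / (ub - lb) - 1.0\n",
+ "```\n",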
"\n",
"`net_u()` constructs a network that takes input x,t and outputs the solution u \n",
" \n",
@@ -309,7 +316,6 @@
"metadata": {},
"outputs": [],
"source": [
- "\n",
"def neural_net(X, weights, biases,lb,ub):\n",
" num_layers = len(weights) + 1\n",
"\n",
@@ -347,9 +353,9 @@
" \n",
"### Intialise everying \n",
" \n",
- "the `init` function will take our gridded data X and U initialised it building our neural networks from the functions defined above ready to train the model\n",
+ "The `init` function will take our gridded data X and U, and build our neural networks from the functions defined above ready to train the model.\n",
"\n",
- "Variables to be deffined here:\n",
+ "Variables to be defined here:\n",
"\n",
"`X_u`: Input coordinates, e.g. spatial and temporal coordinates.\n",
"\n",
@@ -357,7 +363,7 @@
"\n",
"`X_f`: Collocation points at which the governing equations are satisfied. These coordinates will have the same format as the X_u coordinates, e.g. $(x,t)$.\n",
"\n",
- "layers: Specifies the structure of the u network.\n",
+ "`layers`: Specifies the structure of the u network.\n",
"\n",
"`lb`: Vector containing the lower bound of all of the coordinate variables, e.g. $x_{min}$, $t_{min}$.\n",
"\n",
@@ -380,16 +386,16 @@
"# Advanced \n",
" \n",
" \n",
- "Once you have run through the notebook once you may wish to alter the optamizer used in the `init()` function to see the large effect optamizer choice may have. \n",
- " \n",
- "We've highlighted in the comments a number of possible optamizers to use from the [tf.compat.v1.train](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train) module. \n",
- "*This method was chosen to limit tensorflow version modifications required from the original source code*\n",
+ "Once you have run through the notebook, you may wish to alter the optimizer used in the `init()` function to see the large effect that optimizer choice may have.\n",
+ "\n",
+ "We've highlighted in the comments a number of possible optimizers to use from the [tf.compat.v1.train](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train) module. \n",
+ "*This method was chosen to limit TensorFlow version modifications required from the original source code.*\n",
" \n",
- "You can learn more about different optamizers [here](https://towardsdatascience.com/optimizers-for-training-neural-network-59450d71caf6)\n",
+ "You can learn more about different optimizers [here](https://towardsdatascience.com/optimizers-for-training-neural-network-59450d71caf6).\n",
" \n",
"
\n",
"\n",
- "# init"
+ "# Init"
]
},
{
@@ -418,8 +424,8 @@
" return weights, biases\n",
"\n",
"def init(X, u, layers, lb, ub, k):\n",
- " # This line of code is required to prevent some tensorflow errors arrising from the\n",
- " # inclusion of some tensorflw v 1 code \n",
+ " # This line of code is required to prevent some TensorFlow errors arising from the\n",
+ " # inclusion of some TensorFlow v1 code\n",
" tf.compat.v1.disable_eager_execution()\n",
" \n",
" \n",
@@ -438,7 +444,7 @@
" weights, biases = initialize_NN(layers) \n",
" \n",
" # tf placeholders and graph\n",
- " ## This converts the data into a Tensorflow format\n",
+ " ## This converts the data into a TensorFlow format\n",
" sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(allow_soft_placement=True,\n",
" log_device_placement=True))\n",
" \n",
@@ -462,10 +468,10 @@
" # #\n",
" ## the optimizer is something that can be tuned to different requirements #\n",
" ## we have not investigated using different optimizers, the orignal code uses L-BFGS-B which # \n",
- " ## is not tensorflow 2 compatible #\n",
+ " ## is not TensorFlow 2 compatible #\n",
" # #\n",
- " # SELECT OPTAMIZER BY UNCOMMENTING OUT one of the below lines AND RERUNNING CODE #\n",
- " # You can alsoe edit the learning rate to see the effect of that #\n",
+ " # SELECT OPTIMIZER BY UNCOMMENTING one of the below lines AND RERUNNING CODE #\n",
+ " # You can also edit the learning rate to see the effect of that #\n",
" # #\n",
" ##############################################################################################\n",
" \n",
@@ -474,13 +480,13 @@
" # optimizer = tf.compat.v1.train.AdagradOptimizer(learning_rate) # 8 %\n",
" # optimizer = tf.compat.v1.train.ProximalGradientDescentOptimizer(learning_rate) \n",
" # optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate) \n",
- " # optimizer = tf.compat.v1.train.AdadeltaOptimizer(learning_rate) # yeilds poor results\n",
+ " # optimizer = tf.compat.v1.train.AdadeltaOptimizer(learning_rate) # yields poor results\n",
" # ptimizer = tf.compat.v1.train.FtrlOptimizer(learning_rate) \n",
" \n",
" \n",
" \n",
" \n",
- " # LEAVE THESE OPIMISERS ALONE\n",
+ " # LEAVE THESE OPTIMIZERS ALONE\n",
" optimizer_Adam = tf.compat.v1.train.AdamOptimizer()\n",
" train_op_Adam = optimizer_Adam.minimize(loss) \n",
"\n",
@@ -508,7 +514,7 @@
"- followed by 8 fully connected layers each containing 20 neurons and each followed by a hyperbolic tangent activation function,\n",
"- one fully connected output layer.\n",
"\n",
- "This setting results in a network with a first hidden layer: $2 \\cdot 20 + 20 = 60$; $9$ intermediate layers: each $20 \\cdot 20 + 20 = 540$; output layer: $20 \\cdot 1 + 1 = 21$).\n",
+ "This setting results in a network with a first hidden layer: $2 \\cdot 20 + 20 = 60$ parameters; $7$ intermediate layers: each $20 \\cdot 20 + 20 = 420$ parameters; output layer: $20 \\cdot 1 + 1 = 21$ parameters.\n",
" \n",
"\n",
""
@@ -523,7 +529,7 @@
" \n",
"# Number of collocation points \n",
" \n",
- "`2000` colloction points is the default setting for this example this can be increased to improve results at cost of computational speed. The original work set this `N_u=10000` running on GPU's in a few minutes. \n",
+ "`2000` collocation points is the default setting for this example. This can be increased to improve results at the cost of computational speed. The original work set this to `N_f=10000`, running on GPUs in a few minutes.\n",
" \n",
" \n",
"The network takes in data in coordinate pairs: $(x,t) \\mapsto u$. \n",
@@ -531,7 +537,7 @@
"\n",
"\n",
"\n",
- "Once you have run through the notebook once you may wish to alter any the following \n",
+ "Once you have run through the notebook, you may wish to alter any of the following:\n",
" \n",
"- number of data training points `N_u`\n",
"- number of collocation training points `N_f`\n",
@@ -550,7 +556,7 @@
"source": [
"k = 1\n",
"N_u = 100 #100 # number of data points\n",
- "N_f = 2000 # Coloaction points \n",
+ "N_f = 2000 # Collocation points \n",
"# structure of network: two inputs (x,t) and one output u\n",
"# 8 fully connected layers with 20 nodes per layer\n",
"layers = [2, 20, 20, 20, 20, 20, 20, 20, 20, 1]"
@@ -560,9 +566,7 @@
"cell_type": "code",
"execution_count": null,
"id": "a13e702a",
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
"outputs": [],
"source": [
"data = scipy.io.loadmat(\"Data/heatEquation_data.mat\")\n",
@@ -636,11 +640,11 @@
"\n",
"# Initalise the neural network \n",
" \n",
- "`init` is called passing in the training data `X_u_train` and `u_train` with information about the neural network layers and bounds `lb` `ub`\n",
+ "`init` is called passing in the training data `X_u_train` and `u_train` with information about the neural network layers and bounds `lb` `ub`.\n",
" \n",
"# Extract vars\n",
" \n",
- "`init` reformats some of the data and outputs model features that we need to pass into the training function `train`\n",
+ "`init` reformats some of the data and outputs model features that we need to pass into the training function `train`.\n",
"\n",
"
"
]
@@ -667,7 +671,6 @@
"metadata": {},
"outputs": [],
"source": [
- "\n",
"def train(sess, nIter,x_tf, t_tf, u_tf,x, t,u_train, loss, train_op_Adam, optimizer_Adam): \n",
" tf_dict = {x_tf: x, t_tf: t, u_tf: u}\n",
"\n",
@@ -680,14 +683,12 @@
" elapsed = time() - start_time\n",
" loss_value = sess.run(loss, tf_dict)\n",
"\n",
- " print('It: %d, Loss: %.3e, l1: %.3f, l2: %.5f, Time: %.2f' % \n",
+ " print('It: %d, Loss: %.3e, Time: %.2f' % \n",
" (it, loss_value, elapsed))\n",
" start_time = time()\n",
"\n",
" \n",
- " optimizer.minimize(loss)\n",
- " \n",
- "\n"
+ " optimizer_Adam.minimize(loss)"
]
},
{
@@ -740,7 +741,7 @@
"\n",
"# Use trained model to predict from data sample\n",
" \n",
- "`predict` will predict `u` using the trained model\n",
+ "`predict` will predict `u` using the trained model.\n",
"\n",
""
]
@@ -752,7 +753,6 @@
"metadata": {},
"outputs": [],
"source": [
- "\n",
"def predict(sess, x_star,u_star, u_pred, f_pred):\n",
" tf_dict = {x_tf: x_star, t_tf: u_star}\n",
" u_star = sess.run(u_pred, tf_dict)\n",
@@ -829,7 +829,7 @@
"source": [
"\n",
"\n",
- "# Plot Exact and Precticed $(u,t)$\n",
+ "# Plot Exact and Predicted $(u,t)$\n",
" \n",
"\n",
"\n",
@@ -969,7 +969,7 @@
"\n",
"**Results**\n",
"\n",
- "Above are the results of the PINN. The error for recreating the full solution field is $\\approx 10 \\%$, despite using only $N_u = 100$ data points. This shows the power of PINNs to learn from sparse measurements by augmanting the available observational data with knowledge of the underlying physics (i.e. governing equations). \n",
+ "Above are the results of the PINN. The error for recreating the full solution field is $\\approx 10 \\%$, despite using only $N_u = 100$ data points. This shows the power of PINNs to learn from sparse measurements by augmenting the available observational data with knowledge of the underlying physics (i.e. governing equations).\n",
"\n",
"The three colourmaps show the PINN prediction, the exact solution from the numerical method and the relative error between these two fields. We can see that the errors are largest near $t=0$ and $x=0$, but that overall the agreement is very good.\n",
"\n",
@@ -1007,7 +1007,7 @@
"Remembering that in 1D, the heat equation can be written as:\n",
"\n",
"\\begin{equation}\n",
- "\\frac{\\partial u}{\\partial t} = k \\frac{\\partial^2 u}{\\partial x^2 }\n",
+ "\\frac{\\partial u}{\\partial t} = k \\frac{\\partial^2 u}{\\partial x^2 },\n",
"\\end{equation}\n",
"\n",
"where $k$ is a material parameter called the coefficient of thermal diffusivitiy. For this notebook, we have solved the above equation numerically on a domain of $x \\in [0,1]$ and $t \\in [0, 0.25]$. Solving this equation numerically gives us a spatiomtemporal domain $(x,t)$ and corresponding values of the solution $u$.\n",
@@ -1056,7 +1056,7 @@
"outputs": [],
"source": [
"N_u = 100 #100 # number of data points\n",
- "N_f = 2000 # Coloaction points \n",
+ "N_f = 2000 # Collocation points \n",
"# structure of network: two inputs (x,t) and one output u\n",
"# 8 fully connected layers with 20 nodes per layer\n",
"layers = [2, 20, 20, 20, 20, 20, 20, 20, 20, 1]"
@@ -1069,7 +1069,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# This code duplicated from above incase you have been playing with parameters\n",
+ "# This code duplicated from above in case you have been playing with parameters\n",
"data = scipy.io.loadmat(\"Data/heatEquation_data.mat\")\n",
"t = data['t'].flatten()[:,None] # read in t and flatten into column vector\n",
"x = data['x'].flatten()[:,None] # read in x and flatten into column vector\n",
@@ -1124,7 +1124,7 @@
"source": [
"
\n",
"\n",
- "now we will use all the same fuctions as before except we will modify `k` and the train function to handle a changing `k` value \n",
+ "Now we will use all the same functions as before except we will modify `k` and the train function to handle a changing `k` value.\n",
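+ "\n",
+ "As a rough sketch (assumed here, not verbatim from the code below), one way to make `k` trainable is to declare it as a TensorFlow variable so the optimizer can update it alongside the network weights:\n",
+ "\n",
+ "```python\n",
+ "# hypothetical initial guess for the thermal diffusivity; refined during training\n",
+ "k = tf.Variable([0.5], dtype=tf.float32)\n",
+ "```\n",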
" \n",
"
"
]
@@ -1164,10 +1164,10 @@
" if it % 50 == 0:\n",
" elapsed = time() - start_time\n",
" loss_value = sess.run(loss, tf_dict)\n",
- " k_value = self.sess.run(self.k)\n",
+ " k_value = sess.run(k)\n",
" print('It: %d, Loss: %.3e, k: %.3f, Time: %.2f' % \n",
" (it, loss_value, k_value, elapsed))\n",
- " start_time = time.time()\n",
+ " start_time = time()\n",
"\n",
" optimizer_Adam.minimize(loss)"
]
@@ -1388,17 +1388,17 @@
"Congratulations, you have now trained your second physics-informed neural network!\n",
"\n",
"This network contains a number of hyper-parameters that could be tuned to give better results. Various hyper-parameters include:\n",
- "- number of data training points N_u\n",
- "- number of collocation training points N_f\n",
- "- number of layers in the network\n",
- "- number of neurons per layer\n",
- "- weightings for the data and PDE loss terms in the loss function (currently we use loss = loss_PDE + 5*loss_data)\n",
- "- initialisation value for k\n",
- "- optimisation \n",
+ "- number of data training points N_u,\n",
+ "- number of collocation training points N_f,\n",
+ "- number of layers in the network,\n",
+ "- number of neurons per layer,\n",
+ "- weightings for the data and PDE loss terms in the loss function (currently we use loss = loss_PDE + 5*loss_data),\n",
+ "- initialisation value for k,\n",
+ "- optimisation.\n",
"\n",
"It is also possible to use different sampling techniques for training data points. We randomly select $N_u$ data points, but alternative methods could be choosing only boundary points or choosing more points near the $t=0$ boundary.\n",
"\n",
- "return [here](#1D-Heat-Equation) to try out some of these changes if you like, or [here](#init) to alter optimization method used\n",
+ "Return [here](#1D-Heat-Equation) to try out some of these changes if you like, or [here](#Init) to alter the optimization method used.\n",
" \n",
" \n",
"
"
@@ -1433,19 +1433,11 @@
" \n",
""
]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "bda320b0",
- "metadata": {},
- "outputs": [],
- "source": []
}
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -1459,7 +1451,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.10"
+ "version": "3.9.18"
}
},
"nbformat": 4,
diff --git a/PINNs_1DHeatEquation_nonML.ipynb b/PINNs_1DHeatEquation_nonML.ipynb
index 756a439..0160d8f 100644
--- a/PINNs_1DHeatEquation_nonML.ipynb
+++ b/PINNs_1DHeatEquation_nonML.ipynb
@@ -19,21 +19,21 @@
"source": [
"# Overview\n",
"\n",
- "This notebook is based on two papers: *[Physics-Informed Neural Networks: A Deep LearningFramework for Solving Forward and Inverse ProblemsInvolving Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999118307125)* and *[Hidden Physics Models: Machine Learning of NonlinearPartial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999117309014)* with the help of Fergus Shone and Michael Macraild.\n",
+ "This notebook is based on two papers: *[Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999118307125)* and *[Hidden Physics Models: Machine Learning of Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999117309014)* with the help of Fergus Shone and Michael Macraild.\n",
"\n",
- "These tutorials will go through solving Partial Differential Equations using Physics Informed Neural Networks focusing on the Burgers Equation and a more complex example using the Navier Stokes Equation\n",
+ "These tutorials will go through solving Partial Differential Equations using Physics Informed Neural Networks, focusing on the Burgers Equation and a more complex example using the Navier Stokes Equations.\n",
"\n",
- "**This introduction section is replicated in all PINN tutorial notebooks (please skip if you've already been through)** \n",
+ "**This introduction section is replicated in all PINN tutorial notebooks (please skip if you've already been through).**\n",
"\n",
"\n",
- "If you have not already then in your repositoy directory please run the following code. \n",
+ "If you have not already then in your repository directory please run the following code.\n",
" \n",
"```bash\n",
"git submodule init\n",
"git submodule update --init --recursive\n",
"```\n",
" \n",
- "**If this does not work please clone the [PINNs](https://github.com/maziarraissi/PINNs) repository into your Physics_Informed_Neural_Networks folder**\n",
+ "**If this does not work please clone the [PINNs](https://github.com/maziarraissi/PINNs) repository into your Physics_Informed_Neural_Networks folder.**\n",
"
"
]
},
@@ -46,7 +46,9 @@
"\n",
"Physics Informed Neural Networks
\n",
"\n",
- "For a typical Neural Network using algorithms like gradient descent to look for a hypothesis, data is the only guide, however if the data is noisy or sparse and we already have governing physical models we can use the knowledge we already know to optimize and inform the algorithms. This can be done via [feature engineering]() or by adding a physical inconsistency term to the loss function.\n",
+ "For a typical Neural Network using algorithms like gradient descent to look for a hypothesis, data is the only guide. However, if the data are noisy or sparse and we already have governing physical models, we can use the knowledge we already know to optimize and inform the algorithms. This can be done via [feature engineering](https://en.wikipedia.org/wiki/Feature_engineering) or by adding a physical inconsistency term to the loss function.\n",
+ "\n",
+ "\n",
"\n",
"\n",
" \n",
@@ -54,12 +56,12 @@
" \n",
"## The very basics\n",
"\n",
- "If you know nothing about neural networks there is a [toy neural network python code example](https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS/tree/main/ToyNeuralNetwork) included in the [LIFD ENV ML Notebooks Repository]( https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS). Creating a 2 layer neural network to illustrate the fundamentals of how Neural Networks work and the equivalent code using the python machine learning library [tensorflow](https://keras.io/).\n",
+ "If you are new to neural networks there is a [toy neural network python code example](https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS/tree/main/ToyNeuralNetwork) included in the [LIFD ENV ML Notebooks Repository]( https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS). There we build a two-layer neural network to illustrate the fundamentals of how Neural Networks work, and give equivalent code using the Python machine learning library [TensorFlow](https://keras.io/).\n",
"\n",
" \n",
"## Recommended reading\n",
" \n",
- "The in-depth theory behind neural networks will not be covered here as this tutorial is focusing on application of machine learning methods. If you wish to learn more here are some great starting points. \n",
+ "The in-depth theory behind neural networks will not be covered here as this tutorial is focusing on application of machine learning methods. If you wish to learn more, here are some great starting points. \n",
" \n",
"\n",
"* [All you need to know on Neural networks](https://towardsdatascience.com/nns-aynk-c34efe37f15a) \n",
@@ -88,9 +90,9 @@
" \n",
"## Physics informed Neural Networks\n",
"\n",
- "Neural networks work by using lots of data to calculate weights and biases from data alone to minimise the loss function enabling them to act as universal function approximators. However these lose their robustness when data is limited. However by using known physical laws or empirical validated relationships the solutions from neural networks can be sufficiently constrained by disregarding no realistic solutions.\n",
+ "Neural networks work by using lots of data to calculate weights and biases which minimise the loss function, enabling them to act as universal function approximators. Because they rely purely on observed relationships in the data, these networks lose their robustness when data is limited. However, by using known physical laws or empirically validated relationships, the solutions from neural networks can be sufficiently constrained by disregarding unrealistic solutions.\n",
" \n",
- "A Physics Informed Neural Network considers a parameterized and nonlinear partial differential equation in the general form;\n",
+ "A Physics Informed Neural Network considers a parameterized and nonlinear partial differential equation in the general form\n",
"\n",
"\n",
"\n",
@@ -101,9 +103,9 @@
" \n",
"\n",
"\n",
- "where $\\mathcal{u(t,x)}$ denores the hidden solution, $\\mathcal{N}$ is a nonlinear differential operator acting on $u$, $\\mathcal{\\lambda}$ and $\\Omega$ is a subset of $\\mathbb{R}^D$ (the prescribed data). This set up an encapsulation of a wide range of problems such as diffusion processes, conservation laws, advection-diffusion-reaction systems, and kinetic equations and conservation laws.\n",
+ "where $u(t,x)$ denotes the hidden solution, $\\mathcal{N}[\\cdot;\\lambda]$ is a nonlinear differential operator acting on $u$ and parameterised by $\\lambda$, and $\\Omega$ is a subset of $\\mathbb{R}^D$ (the prescribed data domain). This setup encapsulates a wide range of problems such as diffusion processes, conservation laws, advection-diffusion-reaction systems, and kinetic equations.\n",
"\n",
- "Here we will go though this for the 1 Heat equation and Navier stokes equations\n",
+ "Here we will go through this for the 1D Heat equation and for the Navier Stokes equations.\n",
"\n",
"\n",
" "
@@ -119,14 +121,14 @@
" Python
\n",
"\n",
" \n",
- "## Tensorflow \n",
+ "## TensorFlow \n",
" \n",
- "There are many machine learning python libraries available, [TensorFlow](https://www.tensorflow.org/) a is one such library. If you have GPUs on the machine you are using TensorFlow will automatically use them and run the code even faster!\n",
+ "There are many machine learning Python libraries available. [TensorFlow](https://www.tensorflow.org/) is one such library. If you have GPUs on your machine, TensorFlow will automatically use them and run the code even faster!\n",
"\n",
"## Further Reading\n",
"\n",
"* [Running Jupyter Notebooks](https://jupyter.readthedocs.io/en/latest/running.html#running)\n",
- "* [Tensorflow optimizers](https://www.tutorialspoint.com/tensorflow/tensorflow_optimizers.htm)\n",
+ "* [TensorFlow optimizers](https://www.tutorialspoint.com/tensorflow/tensorflow_optimizers.htm)\n",
"\n",
"\n",
" \n",
@@ -142,19 +144,19 @@
" \n",
" Requirements
\n",
"\n",
- "These notebooks should run with the following requirements satisfied\n",
+ "These notebooks should run with the following requirements satisfied.\n",
"\n",
" Python Packages:
\n",
"\n",
"* Python 3\n",
- "* tensorflow > 2\n",
+ "* TensorFlow > 2\n",
"* numpy \n",
"* matplotlib\n",
"* scipy\n",
"\n",
" Data Requirements
\n",
" \n",
- "This notebook referes to some data included in the git hub repositroy imported via the git submodules command mentioned in the installation instructions\n",
+ "This notebook refers to some data included in the GitHub repository, imported via the git submodules command mentioned in the installation instructions.\n",
" \n",
"\n"
]
@@ -273,7 +275,7 @@
"\n",
" 1D Heat Equation (inverse)
\n",
"\n",
- "Given the forward model F and a noisy measurament d of the temperature profile at time T, find the initial temperature profile m\n",
+ "Given the forward model F and a noisy measurement d of the temperature profile at time T, find the initial temperature profile m\n",
"\n",
"such that\n",
"\\begin{equation}\n",
@@ -466,14 +468,14 @@
"plot(u_true, \"-b\", label = 'u(T)')\n",
"plot(d, \"og\", label = 'd')\n",
"plt.legend()\n",
- "plt.title('sample data to be used in Niave solver ')\n",
+ "plt.title('Sample data to be used in Naive solver ')\n",
"plt.show()\n",
"\n",
"\n",
"plot(m_true, \"-r\", label = 'm_true')\n",
"plot(m, \"-b\", label = 'm')\n",
"plt.legend()\n",
- "plt.title('Naive Solution coarse mesh')\n",
+ "plt.title('Naive solution coarse mesh')\n",
"plt.show()\n"
]
},
@@ -485,8 +487,8 @@
"\n",
"\n",
"If you have played around with the code above you will see that:\n",
- "- for a very coarse mesh (`nx = 20`) and no measurement noise (`noise_std_dev = 0.0`) the naive solution is quite good\n",
- "- for a finer mesh (`nx = 100`) and/or even small measurement noise (`noise_std_dev = 0.0001`) the naive solution is very poor\n",
+ "- for a very coarse mesh (`nx = 20`) and no measurement noise (`noise_std_dev = 0.0`) the naive solution is quite good;\n",
+ "- for a finer mesh (`nx = 100`) and/or even small measurement noise (`noise_std_dev = 0.0001`) the naive solution is very poor.\n",
"\n",
"
"
]
@@ -508,10 +510,10 @@
"\\begin{equation} \\mathcal{F} v_n = \\lambda_n v_n, \\quad \\text{where the eigenvalues } \\lambda_n = e^{-kT\\left(\\frac{\\pi}{L} n \\right)^2}. \\end{equation}\n",
"\n",
"**Note 1**:\n",
- "- Large eigenvalues $\\lambda_n$ corresponds to smooth eigenfunctions $v_n$;\n",
- "- Small eigenvalues $\\lambda_n$ corresponds to oscillatory eigenfuctions $v_n$.\n",
+ "- Large eigenvalues $\\lambda_n$ correspond to smooth eigenfunctions $v_n$;\n",
+ "- Small eigenvalues $\\lambda_n$ correspond to oscillatory eigenfunctions $v_n$.\n",
"\n",
- "The figure below shows that the eigenvalues $\\lambda_n$ decays extremely fast, that is the matrix $F$ (discretization of the forward model $\\mathcal{F}$) is extremely ill conditioned.\n",
+ "The figure below shows that the eigenvalues $\\lambda_n$ decay extremely fast; that is, the matrix $F$ (the discretization of the forward model $\\mathcal{F}$) is extremely ill-conditioned.\n",
"\n",
""
]
@@ -533,7 +535,7 @@
"plt.semilogy(i, lambdas, 'ob')\n",
"plt.xlabel('i')\n",
"plt.ylabel('lambda_i')\n",
- "plt.title('Eigen Value decay')\n",
+ "plt.title('Eigenvalue decay')\n",
"plt.show()"
]
},
@@ -556,10 +558,10 @@
"\\begin{equation} d = \\mathcal{F}m_{\\rm true} + \\eta, \\end{equation}\n",
"\n",
"where\n",
- "- $d$ is the data (noisy measurements)\n",
- "- $\\eta$ is the noise: $\\eta(x) = \\sum_{n=1}^\\infty \\eta_n v_n(x)$\n",
- "- $m_{\\rm true}$ is the true value of the parameter that generated the data\n",
- "- $\\mathcal{F}$ is the forward heat equation\n",
+ "- $d$ is the data (noisy measurements),\n",
+ "- $\\eta$ is the noise: $\\eta(x) = \\sum_{n=1}^\\infty \\eta_n v_n(x)$,\n",
+ "- $m_{\\rm true}$ is the true value of the parameter that generated the data,\n",
+ "- $\\mathcal{F}$ is the forward heat equation.\n",
"\n",
"Then, the naive solution to the inverse problem $\\mathcal{F}m = d$ is\n",
"\n",
@@ -583,33 +585,17 @@
"\n",
"## Next steps\n",
"\n",
- "Now we've gone through a Naive manual approach to solving a simple 1D Heat equation we look at the benefits of using neural networks to solve more complex equations starting with the next notebook linked below: \n",
+ "Now that we've gone through a naive manual approach to solving a simple 1D Heat equation, we can look at the benefits of using neural networks to solve more complex equations, starting with the next notebook linked below: \n",
" \n",
"[1D Heat Equation PINN Example](PINNs_1DHeatEquationExample.ipynb)\n",
" \n",
""
]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "3fcc6214",
- "metadata": {},
- "outputs": [],
- "source": []
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "fb2c3b34",
- "metadata": {},
- "outputs": [],
- "source": []
}
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -623,7 +609,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.10"
+ "version": "3.9.18"
}
},
"nbformat": 4,
diff --git a/PINNs_NavierStokes_HFM.ipynb b/PINNs_NavierStokes_HFM.ipynb
index a953999..a872476 100644
--- a/PINNs_NavierStokes_HFM.ipynb
+++ b/PINNs_NavierStokes_HFM.ipynb
@@ -19,11 +19,11 @@
"source": [
"# Overview\n",
"\n",
- "This notebook is based on two papers: *[Physics-Informed Neural Networks: A Deep LearningFramework for Solving Forward and Inverse ProblemsInvolving Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999118307125)* and *[Hidden Physics Models: Machine Learning of NonlinearPartial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999117309014)* with the help of Fergus Shone and Michael Macraild.\n",
+ "This notebook is based on two papers: *[Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999118307125)* and *[Hidden Physics Models: Machine Learning of Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999117309014)* with the help of Fergus Shone and Michael Macraild.\n",
"\n",
- "These tutorials will go through solving Partial Differential Equations using Physics Informed Neuaral Networks focusing on the 1D Heat Equation and a more complex example using the Navier Stokes Equation\n",
+ "These tutorials will go through solving Partial Differential Equations using Physics Informed Neural Networks, focusing on the Burgers Equation and a more complex example using the Navier Stokes Equations.\n",
"\n",
- "**This notebook is a breif illustrative overview of Hidden Physics Models beyond the scope of these tutorials**"
+ "**This introduction section is replicated in all PINN tutorial notebooks (please skip if you've already been through).** "
]
},
{
@@ -32,15 +32,15 @@
"metadata": {},
"source": [
"\n",
- " \n",
- "If you have not already then in your repositoy directory please run the following code in your terminal (linux or Mac) or via git bash. \n",
+ "If you have not already, then in your repository directory please run the following code via the terminal (mac or linux) or git bash (windows).\n",
" \n",
"```bash\n",
"git submodule init\n",
"git submodule update --init --recursive\n",
"```\n",
+ "\n",
+ "**If this does not work please clone the [PINNs](https://github.com/maziarraissi/PINNs) repository into your Physics_Informed_Neural_Networks folder.**\n",
" \n",
- " **If this does not work please clone the [PINNs](https://github.com/maziarraissi/PINNs) repository into your Physics_Informed_Neural_Networks folder**\n",
"
"
]
},
@@ -53,20 +53,23 @@
"\n",
"Physics Informed Neural Networks
\n",
"\n",
- "For a typical Neural Network using algorithims like gradient descent to look for a hypothesis, data is the only guide, however if the data is noisy or sparse and we already have governing physical models we can use the knowledge we already know to optamize and inform the algoithms. This can be done via [feature enginnering]() or by adding a physicall inconsistency term to the loss function.\n",
+ "For a typical Neural Network using algorithms like gradient descent to look for a hypothesis, data is the only guide. However, if the data are noisy or sparse and we already have governing physical models, we can use the knowledge we already know to optimize and inform the algorithms. This can be done via [feature engineering](https://en.wikipedia.org/wiki/Feature_engineering) or by adding a physical inconsistency term to the loss function.\n",
+ "\n",
+ "\n",
"\n",
"\n",
" \n",
- " \n",
+ " \n",
" \n",
"## The very basics\n",
"\n",
- "If you know nothing about neural networks there is a [toy neural network python code example](https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS/tree/main/ToyNeuralNetwork) included in the [LIFD ENV ML Notebooks Repository]( https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS). Creating a 2 layer neural network to illustrate the fundamentals of how Neural Networks work and the equivlent code using the python machine learning library [tensorflow](https://keras.io/). \n",
+ "If you are new to neural networks there is a [toy neural network python code example](https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS/tree/main/ToyNeuralNetwork) included in the [LIFD ENV ML Notebooks Repository]( https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS). There we build a two-layer neural network to illustrate the fundamentals of how Neural Networks work, and give equivalent code using the Python machine learning library [TensorFlow](https://keras.io/).\n",
"\n",
" \n",
- "## Recommended reading \n",
+ "## Recommended reading\n",
" \n",
- "The in-depth theory behind neural networks will not be covered here as this tutorial is focusing on application of machine learning methods. If you wish to learn more here are some great starting points. \n",
+ "The in-depth theory behind neural networks will not be covered here as this tutorial is focusing on application of machine learning methods. If you wish to learn more, here are some great starting points. \n",
+ " \n",
"\n",
"* [All you need to know on Neural networks](https://towardsdatascience.com/nns-aynk-c34efe37f15a) \n",
"* [Introduction to Neural Networks](https://victorzhou.com/blog/intro-to-neural-networks/)\n",
@@ -85,7 +88,7 @@
"\n",
"# Hidden Fluid Mechanics #\n",
"\n",
- "In this notebook, we will utilise a more advanced implementation of PINNs, taken from Maziar Raissi's paper Hidden Fluid Mechanics. In many fluid flow scenarios, direct measurement of variables such as velocity and pressure is not possible; However, we may have access to measurements of some passive scalar field (such as smoke concentration in wind tunnel testing). This work aims to use PINNs to uncover hidden velocity and pressure fields for flow problems, utilising data drawn only from measurements of some passive scalar, $c(t,x,y)$. This scalar is governed by the transport equation:\n",
+ "In this notebook, we will utilise a more advanced implementation of PINNs, taken from Maziar Raissi's paper Hidden Fluid Mechanics. In many fluid flow scenarios, direct measurement of variables such as velocity and pressure is not possible. However, we may have access to measurements of some passive scalar field (such as smoke concentration in wind tunnel testing). This work aims to use PINNs to uncover hidden velocity and pressure fields for flow problems, utilising data drawn only from measurements of some passive scalar, $c(t,x,y)$. This scalar is governed by the transport equation:\n",
"\n",
"\\begin{equation}\n",
"c_t + u c_x + v c_y = \\text{Pec}^{-1} \\left(c_{xx} + c_{yy}\\right)\n",
@@ -107,9 +110,9 @@
"\n",
"![](../images/Network.png)\n",
"\n",
- "The network has four inputs, as expected, and six outputs, namely the three velocity components, the pressure, the concentration, c, and one final variable, d. d(t,x,y,z) is an 'auxilliary variable', defined to be the complement of c (i.e. d = 1 - c), and is governed by the same transport equation as c. Its inclusion improves prediction accuracy, and helps in detecting boundary locations.\n",
+ "The network has four inputs, as expected, and six outputs, namely the three velocity components, the pressure, the concentration, c, and one final variable, d. d(t,x,y,z) is an 'auxiliary variable', defined to be the complement of c (i.e. d = 1 - c), and is governed by the same transport equation as c. Its inclusion improves prediction accuracy, and helps in detecting boundary locations.\n",
"\n",
- "The data for this notebook was generated by Raissi et al. using a spectral element sovler called NekTar. The Navier-Stokes and transport equations are numerically approximated to a high degree of accuracy. \n",
+ "The data for this notebook was generated by Raissi et al. using a spectral element solver called NekTar. The Navier-Stokes and transport equations are numerically approximated to a high degree of accuracy.\n",
"\n",
"The fluid problem at hand is 2D channel flow over an obstacle. A crude diagram of the flow domain can be seen below:\n",
"![](../images/hfminfo2.png)\n",
@@ -130,14 +133,14 @@
" Python
\n",
"\n",
" \n",
- "## Tensorflow \n",
+ "## TensorFlow \n",
" \n",
- "There are many machine learning python libraries available, [TensorFlow](https://www.tensorflow.org/) a is one such library. If you have GPUs on the machine you are using TensorFlow will automatically use them and run the code even faster!\n",
+ "There are many machine learning Python libraries available. [TensorFlow](https://www.tensorflow.org/) is one such library. If you have GPUs on your machine, TensorFlow will automatically use them and run the code even faster!\n",
"\n",
"## Further Reading\n",
"\n",
"* [Running Jupyter Notebooks](https://jupyter.readthedocs.io/en/latest/running.html#running)\n",
- "* [Tensorflow optimizers](https://www.tutorialspoint.com/tensorflow/tensorflow_optimizers.htm)\n",
+ "* [TensorFlow optimizers](https://www.tutorialspoint.com/tensorflow/tensorflow_optimizers.htm)\n",
"\n",
"\n",
" \n",
@@ -153,19 +156,19 @@
" \n",
" Requirements
\n",
"\n",
- "These notebooks should run with the following requirements satisfied\n",
+ "These notebooks should run with the following requirements satisfied.\n",
"\n",
" Python Packages:
\n",
"\n",
"* Python 3\n",
- "* tensorflow > 2\n",
+ "* TensorFlow > 2\n",
"* numpy \n",
"* matplotlib\n",
"* scipy\n",
"\n",
" Data Requirements
\n",
" \n",
- "This notebook referes to some data included in the git hub repositroy\n",
+ "This notebook refers to some data included in the GitHub repository, imported via the git submodules command mentioned in the installation instructions.\n",
" \n",
"\n"
]
@@ -197,7 +200,7 @@
"metadata": {},
"source": [
"\n",
- "Load in all required modules (includig some auxillary code) and turn off warnings. Make sure Keras session is clear\n",
+ "Load in all required modules (including some auxiliary code) and turn off warnings. Make sure the Keras session is clear.\n",
"
"
]
},
@@ -221,6 +224,9 @@
"outputs": [],
"source": [
"import sys\n",
+ "import os\n",
+ "import requests\n",
+ "import zipfile\n",
"sys.path.insert(0, 'PINNs/Utilities/')\n",
"import tensorflow as tf\n",
"import numpy as np\n",
@@ -662,6 +668,41 @@
"fi"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "16acb23e-60d5-4fe7-83e6-5330ce71fd45",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Check if the file 'Data/Stenosis2D.mat' exists\n",
+ "if not os.path.exists('Data/Stenosis2D.mat'):\n",
+ " print(\"Grabbing additional data...\")\n",
+ " \n",
+ " # Download the ZIP file\n",
+ " url = \"https://gitlab.com/CEMACHELEN1/lifd_ml_large_file_store/-/archive/main/lifd_ml_large_file_store-main.zip\"\n",
+ " response = requests.get(url)\n",
+ " \n",
+ " # Save the ZIP file\n",
+ " with open(\"lifd_ml_large_file_store-main.zip\", \"wb\") as zip_file:\n",
+ " zip_file.write(response.content)\n",
+ " \n",
+ " # Extract the contents\n",
+ " with zipfile.ZipFile(\"lifd_ml_large_file_store-main.zip\", \"r\") as zip_ref:\n",
+ " zip_ref.extractall(\"Data\")\n",
+ " \n",
+ " # Clean up\n",
+ " os.remove(\"lifd_ml_large_file_store-main.zip\")\n",
+ " \n",
+ " # Move the required file\n",
+ " os.rename(\"Data/lifd_ml_large_file_store-main/Physics_Informed_Neural_Networks/Data/Stenosis2D.mat\", \"Data/Stenosis2D.mat\")\n",
+ " \n",
+ " # List the contents of the 'Data' directory\n",
+ " print(\"Contents of 'Data':\")\n",
+ " for item in os.listdir(\"Data\"):\n",
+ " print(item)"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
@@ -949,19 +990,11 @@
" \n",
""
]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "ad094b0c",
- "metadata": {},
- "outputs": [],
- "source": []
}
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -975,7 +1008,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.10"
+ "version": "3.9.18"
}
},
"nbformat": 4,
diff --git a/PINNs_NavierStokes_example.ipynb b/PINNs_NavierStokes_example.ipynb
index cb93206..b76eabe 100644
--- a/PINNs_NavierStokes_example.ipynb
+++ b/PINNs_NavierStokes_example.ipynb
@@ -19,11 +19,11 @@
"source": [
"# Overview\n",
"\n",
- "This notebook is based on two papers: *[Physics-Informed Neural Networks: A Deep LearningFramework for Solving Forward and Inverse ProblemsInvolving Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999118307125)* and *[Hidden Physics Models: Machine Learning of NonlinearPartial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999117309014)* with the help of Fergus Shone and Michael Macraild.\n",
+ "This notebook is based on two papers: *[Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999118307125)* and *[Hidden Physics Models: Machine Learning of Nonlinear Partial Differential Equations](https://www.sciencedirect.com/science/article/pii/S0021999117309014)* with the help of Fergus Shone and Michael Macraild.\n",
"\n",
- "These tutorials will go through solving Partial Differential Equations using Physics Informed Neuaral Networks focusing on the Burgers Equation and a more complex example using the Navier Stokes Equation\n",
+ "These tutorials will go through solving Partial Differential Equations using Physics Informed Neural Networks, focusing on the Burgers Equation and a more complex example using the Navier Stokes Equations.\n",
"\n",
- "**This introduction section is replicated in all PINN tutorial notebooks (please skip if you've already been through)** "
+ "**This introduction section is replicated in all PINN tutorial notebooks (please skip if you've already been through).** "
]
},
{
@@ -32,13 +32,14 @@
"metadata": {},
"source": [
"\n",
- "If you have not already then in your repositoy directory please run the following code. Via the terminal (mac or linux) or gitbash (windows)\n",
+ "If you have not already, then in your repository directory please run the following code via the terminal (mac or linux) or git bash (windows).\n",
" \n",
"```bash\n",
"git submodule init\n",
"git submodule update --init --recursive\n",
"```\n",
- "**If this does not work please clone the [PINNs](https://github.com/maziarraissi/PINNs) repository into your Physics_Informed_Neural_Networks folder**\n",
+ "\n",
+ "**If this does not work please clone the [PINNs](https://github.com/maziarraissi/PINNs) repository into your Physics_Informed_Neural_Networks folder.**\n",
" \n",
"
"
]
@@ -52,20 +53,23 @@
"\n",
"Physics Informed Neural Networks
\n",
"\n",
- "For a typical Neural Network using algorithims like gradient descent to look for a hypothesis, data is the only guide, however if the data is noisy or sparse and we already have governing physical models we can use the knowledge we already know to optamize and inform the algoithms. This can be done via [feature enginnering]() or by adding a physicall inconsistency term to the loss function.\n",
+ "For a typical Neural Network using algorithms like gradient descent to look for a hypothesis, data is the only guide. However, if the data are noisy or sparse and we already have governing physical models, we can use the knowledge we already know to optimize and inform the algorithms. This can be done via [feature engineering](https://en.wikipedia.org/wiki/Feature_engineering) or by adding a physical inconsistency term to the loss function.\n",
+ "\n",
+ "\n",
"\n",
"\n",
" \n",
- " \n",
+ " \n",
" \n",
"## The very basics\n",
"\n",
- "If you know nothing about neural networks there is a [toy neural network python code example](https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS/tree/main/ToyNeuralNetwork) included in the [LIFD ENV ML Notebooks Repository]( https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS). Creating a 2 layer neural network to illustrate the fundamentals of how Neural Networks work and the equivlent code using the python machine learning library [tensorflow](https://keras.io/). \n",
+ "If you are new to neural networks there is a [toy neural network python code example](https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS/tree/main/ToyNeuralNetwork) included in the [LIFD ENV ML Notebooks Repository]( https://github.com/cemac/LIFD_ENV_ML_NOTEBOOKS). There we build a two-layer neural network to illustrate the fundamentals of how Neural Networks work, and give equivalent code using the Python machine learning library [TensorFlow](https://keras.io/).\n",
"\n",
" \n",
- "## Recommended reading \n",
+ "## Recommended reading\n",
" \n",
- "The in-depth theory behind neural networks will not be covered here as this tutorial is focusing on application of machine learning methods. If you wish to learn more here are some great starting points. \n",
+ "The in-depth theory behind neural networks will not be covered here as this tutorial is focusing on application of machine learning methods. If you wish to learn more, here are some great starting points. \n",
+ " \n",
"\n",
"* [All you need to know on Neural networks](https://towardsdatascience.com/nns-aynk-c34efe37f15a) \n",
"* [Introduction to Neural Networks](https://victorzhou.com/blog/intro-to-neural-networks/)\n",
@@ -93,18 +97,22 @@
" \n",
"## Physics informed Neural Networks\n",
"\n",
- "Neural networks work by using lots of data to calculate weights and biases from data alone to minimise the loss function enabling them to act as universal fuction approximators. However these loose their robustness when data is limited. However by using know physical laws or empirical validated relationships the solutions from neural networks can be sufficiently constrianed by disregardins no realistic solutions.\n",
+ "Neural networks work by using lots of data to calculate weights and biases which minimise the loss function, enabling them to act as universal function approximators. Because they rely purely on observed relationships in the data, these networks lose their robustness when data is limited. However, by using known physical laws or empirically validated relationships, the solutions from neural networks can be sufficiently constrained by disregarding unrealistic solutions.\n",
" \n",
- "A Physics Informed Nueral Network considers a parameterized and nonlinear partial differential equation in the genral form;\n",
+ "A Physics Informed Neural Network considers a parameterized and nonlinear partial differential equation in the general form\n",
+ "\n",
+ "\n",
"\n",
+ " \n",
"\\begin{align}\n",
- "u_t + \\mathcal{N}[u; \\lambda] = 0, x \\in \\Omega, t \\in [0,T],\\\\\n",
+ " u_t + \\mathcal{N}[u; \\lambda] &= 0, && x \\in \\Omega, t \\in [0,T],\\\\\n",
"\\end{align}\n",
+ " \n",
"\n",
"\n",
- "where $\\mathcal{u(t,x)}$ denores the hidden solution, $\\mathcal{N}$ is a nonlinear differential operator acting on $u$, $\\mathcal{\\lambda}$ and $\\Omega$ is a subset of $\\mathbb{R}^D$ (the perscribed data). This set up an encapuslate a wide range of problems such as diffusion processes, conservation laws, advection-diffusion-reaction systems, and kinetic equations and conservation laws. \n",
+ "where $\\mathcal{u(t,x)}$ denotes the hidden solution, $\\mathcal{N}$ is a nonlinear differential operator acting on $u$, $\\mathcal{\\lambda}$ and $\\Omega$ is a subset of $\\mathbb{R}^D$ (the prescribed data). This set up encapsulates a wide range of problems such as diffusion processes, conservation laws, advection-diffusion-reaction systems, and kinetic equations and conservation laws.\n",
"\n",
- "Here we will go though this for the 1D Heat equation and Navier stokes equations\n",
+ "Here we will go though this for the 1D Heat equation and for the Navier Stokes equations.\n",
"\n",
"\n",
" "
@@ -120,14 +128,14 @@
" Python
\n",
"\n",
" \n",
- "## Tensorflow \n",
+ "## TensorFlow \n",
" \n",
- "There are many machine learning python libraries available, [TensorFlow](https://www.tensorflow.org/) a is one such library. If you have GPUs on the machine you are using TensorFlow will automatically use them and run the code even faster!\n",
+ "There are many machine learning Python libraries available. [TensorFlow](https://www.tensorflow.org/) is one such library. If you have GPUs on your machine, TensorFlow will automatically use them and run the code even faster!\n",
"\n",
"## Further Reading\n",
"\n",
"* [Running Jupyter Notebooks](https://jupyter.readthedocs.io/en/latest/running.html#running)\n",
- "* [Tensorflow optimizers](https://www.tutorialspoint.com/tensorflow/tensorflow_optimizers.htm)\n",
+ "* [TensorFlow optimizers](https://www.tutorialspoint.com/tensorflow/tensorflow_optimizers.htm)\n",
"\n",
"\n",
" \n",
@@ -143,19 +151,19 @@
" \n",
" Requirements
\n",
"\n",
- "These notebooks should run with the following requirements satisfied\n",
+ "These notebooks should run with the following requirements satisfied.\n",
"\n",
" Python Packages:
\n",
"\n",
"* Python 3\n",
- "* tensorflow > 2\n",
+ "* TensorFlow > 2\n",
"* numpy \n",
"* matplotlib\n",
"* scipy\n",
"\n",
" Data Requirements
\n",
" \n",
- "This notebook referes to some data included in the git hub repositroy\n",
+ "This notebook refers to some data included in the GitHub repository, imported via the git submodules command mentioned in the installation instructions.\n",
" \n",
"\n"
]
@@ -187,7 +195,7 @@
"metadata": {},
"source": [
"\n",
- "Load in all required modules (includig some auxillary code) and turn off warnings. \n",
+ "Load in all required modules (including some auxiliary code) and turn off warnings. \n",
"
"
]
},
@@ -238,14 +246,14 @@
"\n",
" Navier-Stokes inverse data driven discovery of PDE’s
\n",
"\n",
- "Navier-Stokes equations describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier-Stokes equations in their full and simplified forms help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of the dispersion of pollutants, and many other applications. Let us consider the Navier-Stokes equations in two dimensions (2D) given explicitly by\n",
+ "The Navier-Stokes equations describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier-Stokes equations in their full and simplified forms help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of the dispersion of pollutants, and many other applications. Let us consider the Navier-Stokes equations in two dimensions (2D) given explicitly by\n",
"\n",
"\\begin{equation} \n",
"u_t + \\lambda_1 (u u_x + v u_y) = -p_x + \\lambda_2(u_{xx} + u_{yy}),\\\\\n",
"v_t + \\lambda_1 (u v_x + v v_y) = -p_y + \\lambda_2(v_{xx} + v_{yy}),\n",
"\\end{equation}\n",
" \n",
- "where $u(t, x, y)$ denotes the $x$-component of the velocity field, $v(t, x, y)$ the $y$-component, and $p(t, x, y)$ the pressure. Here, $\\lambda = (\\lambda_1, \\lambda_2)$ are the unknown parameters. Solutions to the Navier-Stokes equations are searched in the set of divergence-free functions; i.e.,\n",
+ "where $u(t, x, y)$ denotes the $x$-component of the velocity field, $v(t, x, y)$ the $y$-component, and $p(t, x, y)$ the pressure. Here, $\\lambda = (\\lambda_1, \\lambda_2)$ are the unknown parameters. Solutions to the Navier-Stokes equations are sought in the set of divergence-free functions; i.e.,\n",
"\n",
"\\begin{equation} \n",
"u_x + v_y = 0.\n",
@@ -313,7 +321,7 @@
"\\end{bmatrix}\n",
"\\end{equation}\n",
" \n",
- "can be trained by minimizing the mean squared error loss$\n",
+ "can be trained by minimizing the mean squared error loss\n",
"\n",
"\\begin{equation}\n",
"\\begin{array}{rl}\n",
@@ -366,13 +374,13 @@
"source": [
"\n",
"\n",
- "# Initalise the neural network \n",
+ "# Initialise the neural network \n",
" \n",
- "`init` is called passing in the training data `x_train`, `y_train`, `t_train`, `u_train` and `v_train` with information about the neural network layers\n",
+ "`init` is called passing in the training data `x_train`, `y_train`, `t_train`, `u_train` and `v_train` with information about the neural network layers.\n",
" \n",
"# Extract vars\n",
" \n",
- "`init` reformats some of the data and outputs model features that we need to pass into the training function `train`\n",
+ "`init` reformats some of the data and outputs model features that we need to pass into the training function `train`.\n",
"\n",
"
"
]
@@ -387,16 +395,16 @@
"# Advanced \n",
" \n",
" \n",
- "Once you have run through the notebook once you may wish to alter the optamizer used in the `init()` function to see the large effect optamizer choice may have. \n",
+ "Once you have run through the notebook once you may wish to alter the optimizer used in the `init()` function to see the large effect optimizer choice may have.\n",
" \n",
- "We've highlighted in the comments a number of possible optamizers to use from the [tf.compat.v1.train](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train) module. \n",
- "*This method was chosen to limit tensorflow version modifications required from the original source code*\n",
+ "We've highlighted in the comments a number of possible optimizers to use from the [tf.compat.v1.train](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train) module. \n",
+ "*This method was chosen to limit TensorFlow version modifications required from the original source code.*\n",
" \n",
- "You can learn more about different optamizers [here](https://towardsdatascience.com/optimizers-for-training-neural-network-59450d71caf6)\n",
+ "You can learn more about different optimizers [here](https://towardsdatascience.com/optimizers-for-training-neural-network-59450d71caf6).\n",
" \n",
"\n",
"\n",
- "# init"
+ "# Init"
]
},
{
@@ -407,8 +415,8 @@
"outputs": [],
"source": [
"def init(x, y, t, u, v, layers):\n",
- " # This line of code is required to prevent some tensorflow errors arrising from the\n",
- " # inclusion of some tensorflw v 1 code \n",
+ " # This line of code is required to prevent some TensorFlow errors arrising from the\n",
+ " # inclusion of some TensorFlow v1 code \n",
" tf.compat.v1.disable_eager_execution()\n",
" X = np.concatenate([x, y, t], 1)\n",
" # lb and ub denote lower and upper bounds on the inputs to the network\n",
@@ -457,12 +465,12 @@
" \n",
" ##############################################################################################\n",
" # #\n",
- " ## the optimizer is something that can be tuned to different requirements #\n",
- " ## we have not investigated using different optimizers, the orignal code uses L-BFGS-B which # \n",
- " ## is not tensorflow 2 compatible #\n",
+ " # The optimizer is something that can be tuned to different requirements. #\n",
+ " # We have not investigated using different optimizers. The orignal code uses L-BFGS-B which # \n",
+ " # is not TensorFlow 2 compatible. #\n",
" # #\n",
- " # SELECT OPTAMIZER BY UNCOMMENTING OUT one of the below lines AND RERUNNING CODE #\n",
- " # You can alsoe edit the learning rate to see the effect of that #\n",
+ " # SELECT OPTIMIZER BY UNCOMMENTING one of the below lines AND RERUNNING CODE. #\n",
+ " # You can also edit the learning rate to see the effect of that. #\n",
" # #\n",
" ##############################################################################################\n",
" \n",
@@ -498,9 +506,9 @@
"source": [
"\n",
"\n",
- "`neural_net()` constructs the network Y where X is a matrix containing the input and output coordinates, i.e. x,t,u and X is normalised so that all values lie between -1 and 1, this improves training\n",
+ "`neural_net()` constructs the network Y where X is a matrix containing the input and output coordinates, i.e. x,t,u and X is normalised so that all values lie between -1 and 1 (this improves training)\n",
"\n",
- "`net_NS()` is where the PDE is encoded:\n",
+ "`net_NS()` is where the PDE is encoded\n",
" \n",
"
"
]
@@ -512,7 +520,7 @@
"metadata": {},
"outputs": [],
"source": [
- "def neural_net( X, weights, biases,lb, ub):\n",
+ "def neural_net(X, weights, biases,lb, ub):\n",
" \n",
" num_layers = len(weights) + 1\n",
"\n",
@@ -580,13 +588,13 @@
"\n",
"\n",
"\n",
- "Once you have run through the notebook once you may wish to alter any the following \n",
+ "Once you have run through the notebook once you may wish to alter any the following:\n",
" \n",
- "- number of data training points `N_train`\n",
- "- number of layers in the network `layers`\n",
- "- number of neurons per layer `layers`\n",
+ "- number of data training points `N_train`,\n",
+ "- number of layers in the network `layers`,\n",
+ "- number of neurons per layer `layers`,\n",
" \n",
- "to see the impact on the results\n",
+ "to see the impact on the results.\n",
"\n",
"
"
]
@@ -663,8 +671,6 @@
"\n",
"If this fails you may need to restarted the notebook with a flag:\n",
"```bash\n",
- "\n",
- "\n",
"jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10\n",
"\n",
"```\n",
@@ -678,13 +684,13 @@
"source": [
"\n",
"\n",
- "# Initalise the nerual network \n",
+ "# Initalize the neural network \n",
" \n",
- "`init` is called passing in the training data `x_train`, `y_train`, `u_train` and `v_train` with information about the neural network layers. The bound information `lb` `ub` is included in the `init()` function\n",
+ "`init` is called passing in the training data `x_train`, `y_train`, `u_train` and `v_train` with information about the neural network layers. The bound information `lb` `ub` is included in the `init()` function.\n",
" \n",
"# Extract vars\n",
" \n",
- "`init` reformats some of the data and outputs model features that we need to pass into the training function `train`\n",
+ "`init` reformats some of the data and outputs model features that we need to pass into the training function `train`.\n",
" \n",
"
"
]
@@ -837,9 +843,9 @@
"source": [
"\n",
" \n",
- "# Loading Pre trained model option \n",
+ "# Loading pre-trained model option \n",
" \n",
- "If the training time is too slow you can skip the following line and load in a pretrained model instead set `loadweights = True` in the next cell. You can play around with different number of iterations to see the effects e.g. setting `saver.restore(sess, netSaveDir + 'model_at_iter15000.ckpt')`\n",
+ "If the training time is too slow you can skip the following line and load in a pretrained model instead set `loadweights = True` in the next cell. You can play around with different number of iterations to see the effects e.g. setting `saver.restore(sess, netSaveDir + 'model_at_iter15000.ckpt')`.\n",
"\n",
"
"
]
@@ -878,7 +884,7 @@
"\n",
"# Use trained model to predict from data sample\n",
" \n",
- "`predict` will predict `u`, `v` and `p` using the trained model\n",
+ "`predict` will predict `u`, `v` and `p` using the trained model.\n",
"\n",
""
]
@@ -890,7 +896,7 @@
"source": [
"\n",
"\n",
- "The `predict` function has an option `load=False` set by default. Alter this to `load=True` if you wish to load the previously trained model\n",
+ "The `predict` function has an option `load=False` set by default. Alter this to `load=True` if you wish to load the previously trained model.\n",
" \n",
"
"
]
@@ -931,7 +937,7 @@
"\n",
"# Calculate Errors\n",
" \n",
- "if you have set the number of training iterations large enough the errors should be small.\n",
+ "If you have set the number of training iterations large enough the errors should be small.\n",
"\n",
""
]
@@ -998,7 +1004,7 @@
"\n",
"# Using Noisy Data\n",
" \n",
- "We're now going to repeat the previous steps but include some noise in our data to see the effect of that on our results\n",
+ "We're now going to repeat the previous steps but include some noise in our data to see the effect of that on our results.\n",
"\n",
""
]
@@ -1060,7 +1066,7 @@
" \n",
"# Loading Pre trained model option \n",
" \n",
- "If the training time is too slow you can skip the following line and load in a pretrained model instead set `loadweights = True` in the next cell. You can play around with different number of iterations to see the effects e.g. setting `saver.restore(sess, netSaveDir + 'model_at_iter15000.ckpt')`\n",
+ "If the training time is too slow you can skip the following line and load in a pretrained model instead set `loadweights = True` in the next cell. You can play around with different number of iterations to see the effects e.g. setting `saver.restore(sess, netSaveDir + 'model_at_iter15000.ckpt')`.\n",
"\n",
""
]
@@ -1073,7 +1079,7 @@
"outputs": [],
"source": [
"# Training\n",
- "train(sess, 20000, x_tf, y_tf, t_tf, u_tf, v_tf, x, y, t, u_train, v_train, loss, train_op_Adam, optimizer_Adam,\"modelckpts/NSn/\")"
+ "train(sess, Train_iterations, x_tf, y_tf, t_tf, u_tf, v_tf, x, y, t, u_train, v_train, loss, train_op_Adam, optimizer_Adam,\"modelckpts/NSn/\")"
]
},
{
@@ -1124,8 +1130,8 @@
"x_vort = data_vort['x'] \n",
"y_vort = data_vort['y'] \n",
"w_vort = data_vort['w'] \n",
- "modes = np.asscalar(data_vort['modes'])\n",
- "nel = np.asscalar(data_vort['nel']) \n",
+ "modes = np.ndarray.item(data_vort['modes'])\n",
+ "nel = np.ndarray.item(data_vort['nel']) \n",
"\n",
"xx_vort = np.reshape(x_vort, (modes+1,modes+1,nel), order = 'F')\n",
"yy_vort = np.reshape(y_vort, (modes+1,modes+1,nel), order = 'F')\n",
@@ -1332,7 +1338,7 @@
"source": [
" \n",
" \n",
- "if you have not been able to run enough training iterations the figures produced running 10000 iterations can be found:\n",
+ "If you have not been able to run enough training iterations the figures produced running 10000 iterations can be found:\n",
" \n",
"* [Solution with network trained over 10000 iterations](figures/PINNS_NS_10000_PDE.png)\n",
"* [Figure comparing predicted vs exact with network trained over 10000 iterations](figures/PINNS_NS_10000_predict_vs_exact.png)\n",
@@ -1349,7 +1355,7 @@
"\n",
"It is also possible to use different sampling techniques for training data points. We randomly select $N_u$ data points, but alternative methods could be choosing only boundary points or choosing more points near the $t=0$ boundary.\n",
"\n",
- "return [here](#init) to alter optimization method used\n",
+ "Return [here](#init) to alter optimization method used.\n",
" \n",
"
"
]
@@ -1365,24 +1371,16 @@
"\n",
"## Next steps\n",
"\n",
- "Now we've demonstrated using PINNs for more complex equations we can take a breif look at Hidden Fluid Mechanics (*this final notebook is beyond the scope of these tutorials but provided to give a breif example of the methodology*)\n",
+ "Now we've demonstrated using PINNs for more complex equations we can take a brief look at Hidden Fluid Mechanics (*this final notebook is beyond the scope of these tutorials but provided to give a brief example of the methodology*).\n",
" \n",
"[Navier-Stokes PINNs Hidden Fluid Mechanics](PINNs_NavierStokes_HFM.ipynb)\n",
""
]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "4f4b2310",
- "metadata": {},
- "outputs": [],
- "source": []
}
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -1396,7 +1394,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.10"
+ "version": "3.9.18"
}
},
"nbformat": 4,