Commit
GitHub Actions build openturns/otbenchmark 10957094266
GitHub Actions committed Sep 20, 2024
1 parent ff52c6f · commit 63a816f
Showing 369 changed files with 27,646 additions and 206 deletions.
104 changes: 104 additions & 0 deletions
.../_downloads/011e9011be6c44d1edb1eaeb85a11c55/plot_sensitivity_distribution_ishigami.ipynb
@@ -0,0 +1,104 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Distribution of the Sobol' indices on the Ishigami function\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this document, we consider the Ishigami function and check the distribution of the sensitivity indices.\nWe check that it is consistent with the actual distribution of the estimator.\n\nThe problem is that the exact distribution of the estimator is unknown in general.\nThe asymptotic distribution is known, but it may not reflect the true distribution\nwhen the sample size is not large enough.\nIn order to get a reference distribution of the estimator, we generate a Monte-Carlo sample of the Sobol' indices\nby repeating the estimation of the Sobol' indices on independent sub-samples.\nThen we use kernel smoothing to approximate the actual distribution of the Sobol' indices.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import openturns as ot\nimport otbenchmark as otb\nimport openturns.viewer as otv\nimport pylab as pl"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When we estimate Sobol' indices, we may encounter the following warning messages:\n\n```\nWRN - The estimated first order Sobol index (2) is greater than its total order index ...\nWRN - The estimated total order Sobol index (2) is lesser than its first order index ...\n```\n\nMany of these messages are printed in the current notebook. This is why we disable them with:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"ot.Log.Show(ot.Log.NONE)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"problem = otb.IshigamiSensitivity()\nprint(problem)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the following loop, we compare the distribution of the sensitivity indices computed by the library with the actual distribution\nof the estimator computed by Monte-Carlo sampling (using kernel smoothing).\nThe distribution of the sensitivity indices can be computed either from bootstrap\n(using kernel smoothing to approximate the distribution)\nor from asymptotic analysis (using a Gaussian distribution).\n\nIn both cases, the distribution is estimated using one sample only.\nOn the contrary, the actual distribution of the estimator (i.e. the reference distribution) is computed\nby generating independent realizations of the estimator.\nHence, the computed distribution is not expected to be centered on the true value of the sensitivity indices.\nInstead, the distribution based on the sample of estimators must be centered on the true value of the index,\nsince these estimators are consistent: they converge to the true value when the sample size increases\nand have no observable bias (although this is not proven by theory).\n\nThe two essential parameters in the script are the following:\n\n- `sampleSize` is the size of the sample used to estimate one set of sensitivity\n indices (the number of sensitivity indices is equal to twice the number of input variables,\n because both first order and total order Sobol' indices are estimated),\n- `numberOfRepetitions` is the size of the Monte-Carlo sample of sensitivity indices.\n\nWe do not necessarily want to use a large value of `sampleSize`.\nThis may be required, however, if we want to check the computed asymptotic distribution,\nbecause the asymptotic regime may not be reached for small sample sizes, and the code cannot be blamed for that.\nThis is why the asymptotic option may fail if `sampleSize` is not large enough.\nThe bootstrap option may fail too, because the sample size may be so small that re-sampling\nfrom the basic sample may not provide enough variability.\n\nThe value of `numberOfRepetitions` must be as large as possible, because it ensures that\nthe reference distribution used for this verification is accurate enough.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"numberOfRepetitions = 100 # Larger is better\nproblem = otb.IshigamiSensitivity()\nmetaSAAlgorithm = otb.SensitivityBenchmarkMetaAlgorithm(problem)\nfor sampleSize in [100, 200, 400, 800]:\n print(\"sampleSize=\", sampleSize)\n for useAsymptotic in [False, True]:\n if useAsymptotic:\n label = \"Asymptotic\"\n else:\n label = \"Bootstrap\"\n print(label)\n ot.ResourceMap.SetAsBool(\n \"SobolIndicesAlgorithm-DefaultUseAsymptoticDistribution\", useAsymptotic\n )\n for estimator in [\"Saltelli\", \"Martinez\", \"Jansen\", \"MauntzKucherenko\"]:\n print(\"Estimator:\", estimator)\n benchmark = otb.SensitivityDistribution(\n problem,\n metaSAAlgorithm,\n sampleSize,\n numberOfRepetitions=numberOfRepetitions,\n estimator=estimator,\n )\n grid = benchmark.draw()\n view = otv.View(grid)\n figure = view.getFigure()\n _ = figure.suptitle(\n \"n = %d, %s, %s. %s.\"\n % (sampleSize, problem.getName(), estimator, label)\n )\n _ = figure.set_figwidth(8.0)\n _ = figure.set_figheight(6.0)\n _ = figure.subplots_adjust(wspace=0.4, hspace=0.4)\n # Customize legends\n ax = figure.get_axes()\n for i in range(len(ax) - 1):\n _ = ax[i].legend(\"\")\n _ = ax[-1].legend(bbox_to_anchor=(1.0, 1.0))\n _ = pl.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The plots compare two distributions.\n\n- The \"Computed\" distribution is the one computed by OpenTURNS.\n- The \"Sample\" distribution is the one generated by Monte-Carlo sampling.\n\nThe fact that the \"Computed\" distribution is not centered on the true value is an\nexpected property of the way the distribution is computed.\nWhat must be checked, instead, is the variance of the distribution.\nMore precisely, we check that the asymptotic covariance is correctly computed by the library.\nIn other words, we focus on the spread of the distribution and check that it is consistent with the actual spread.\nThis comparison is, however, limited by the fact that the re-sampling size is\nequal to the `numberOfRepetitions` parameter. Increasing this parameter makes the check more accurate,\nbut increases the elapsed simulation time.\n\nWe see that these distributions are never far away from each other.\nThis shows that the computation of the distribution is correct, for both the asymptotic and bootstrap options.\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.20"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
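As an aside, the benchmark class `otb.SensitivityDistribution` used in this notebook hides the mechanics of building the reference distribution. A minimal sketch of the underlying idea described above (repeated estimation on independent designs, then kernel smoothing) could look as follows. It assumes only standard OpenTURNS classes (`SobolIndicesExperiment`, `SaltelliSensitivityAlgorithm`, `KernelSmoothing`) and the `getFunction()` / `getInputDistribution()` accessors of the otbenchmark problem; it is an illustration, not the library's actual implementation:

```python
# Sketch: build a reference distribution for the first-order Sobol' index of x1
# by repeating the estimation on independent designs, then kernel smoothing.
import openturns as ot
import otbenchmark as otb

ot.Log.Show(ot.Log.NONE)  # silence the first/total order warnings, as above

problem = otb.IshigamiSensitivity()
model = problem.getFunction()  # assumed accessor of the otbenchmark problem
distribution = problem.getInputDistribution()  # assumed accessor
sampleSize = 100
numberOfRepetitions = 50  # larger is better, as noted above

sobolEstimates = ot.Sample(0, 1)
for _ in range(numberOfRepetitions):
    # Independent pick-freeze design of size sampleSize * (d + 2)
    inputDesign = ot.SobolIndicesExperiment(distribution, sampleSize).generate()
    outputDesign = model(inputDesign)
    algo = ot.SaltelliSensitivityAlgorithm(inputDesign, outputDesign, sampleSize)
    sobolEstimates.add([algo.getFirstOrderIndices()[0]])  # index of x1

# Kernel smoothing approximates the actual (reference) distribution of the estimator
referenceDistribution = ot.KernelSmoothing().build(sobolEstimates)
print("Mean:", referenceDistribution.getMean())
print("Std :", referenceDistribution.getStandardDeviation())
```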
Binary file modified
BIN +0 Bytes (100%)
...mark/master/_downloads/02ebaaff0fbfc2d6c398818b1ca6b2f8/plot_nloscillator_sensitivity.zip
Binary file not shown.
274 changes: 274 additions & 0 deletions
...hmark/master/_downloads/038b9ccad3f6f9c5a018a9685e9f905b/plot_reliability_benchmark.ipynb
@@ -0,0 +1,274 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Benchmark on a given set of problems\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we show how to make a loop over the problems in the benchmark.\nWe also show how to run various reliability algorithms on a given problem, so that\nwe can score the methods using the number of correct digits or the performance.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import openturns as ot\nimport numpy as np\nimport otbenchmark as otb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Browse the reliability problems\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We present the BBRC test cases using the otbenchmark module.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"benchmarkProblemList = otb.ReliabilityBenchmarkProblemList()\nnumberOfProblems = len(benchmarkProblemList)\nnumberOfProblems"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"for i in range(numberOfProblems):\n problem = benchmarkProblemList[i]\n name = problem.getName()\n pf = problem.getProbability()\n event = problem.getEvent()\n antecedent = event.getAntecedent()\n distribution = antecedent.getDistribution()\n dimension = distribution.getDimension()\n print(\"#\", i, \":\", name, \" : pf = \", pf, \", dimension=\", dimension)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"maximumEvaluationNumber = 1000\nmaximumAbsoluteError = 1.0e-3\nmaximumRelativeError = 1.0e-3\nmaximumResidualError = 1.0e-3\nmaximumConstraintError = 1.0e-3\nnearestPointAlgorithm = ot.AbdoRackwitz()\nnearestPointAlgorithm.setMaximumCallsNumber(maximumEvaluationNumber)\nnearestPointAlgorithm.setMaximumAbsoluteError(maximumAbsoluteError)\nnearestPointAlgorithm.setMaximumRelativeError(maximumRelativeError)\nnearestPointAlgorithm.setMaximumResidualError(maximumResidualError)\nnearestPointAlgorithm.setMaximumConstraintError(maximumConstraintError)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The FORM method\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"problem = otb.ReliabilityProblem8()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"metaAlgorithm = otb.ReliabilityBenchmarkMetaAlgorithm(problem)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"benchmarkResult = metaAlgorithm.runFORM(nearestPointAlgorithm)\nbenchmarkResult.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The SORM method\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"benchmarkResult = metaAlgorithm.runSORM(nearestPointAlgorithm)\nbenchmarkResult.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The LHS method\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"benchmarkResult = metaAlgorithm.runLHS(maximumOuterSampling=10000)\nbenchmarkResult.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The MonteCarloSampling method\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"benchmarkResult = metaAlgorithm.runMonteCarlo(maximumOuterSampling=10000)\nbenchmarkResult.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The FORM - Importance Sampling method\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"benchmarkResult = metaAlgorithm.runFORMImportanceSampling(nearestPointAlgorithm)\nbenchmarkResult.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The Subset method\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"benchmarkResult = metaAlgorithm.runSubsetSampling()\nbenchmarkResult.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following function computes the number of correct base-10 digits\nin the computed result compared to the exact result.\nThe `CompareMethods` function takes a problem as a parameter\nand returns the probabilities estimated by each method.\nIn addition, it returns the performance of these methods.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def PrintResults(name, benchmarkResult):\n print(\"------------------------------------------------------------------\")\n print(name)\n numberOfDigitsPerEvaluation = (\n benchmarkResult.numberOfCorrectDigits\n / benchmarkResult.numberOfFunctionEvaluations\n )\n print(\"Estimated probability:\", benchmarkResult.computedProbability)\n print(\"Number of function calls:\", benchmarkResult.numberOfFunctionEvaluations)\n print(\"Number of correct digits=%.1f\" % (benchmarkResult.numberOfCorrectDigits))\n print(\n \"Performance=%.2e (correct digits/evaluation)\" % (numberOfDigitsPerEvaluation)\n )\n return [name, benchmarkResult.numberOfCorrectDigits, numberOfDigitsPerEvaluation]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def CompareMethods(problem, nearestPointAlgorithm, maximumOuterSampling=10000):\n \"\"\"\n Runs various algorithms on a given problem.\n \"\"\"\n summaryList = []\n pfReference = problem.getProbability()\n print(\"Exact probability:\", pfReference)\n metaAlgorithm = otb.ReliabilityBenchmarkMetaAlgorithm(problem)\n # SubsetSampling\n benchmarkResult = metaAlgorithm.runSubsetSampling()\n summaryList.append(PrintResults(\"SubsetSampling\", benchmarkResult))\n # FORM\n benchmarkResult = metaAlgorithm.runFORM(nearestPointAlgorithm)\n summaryList.append(PrintResults(\"FORM\", benchmarkResult))\n # SORM\n benchmarkResult = metaAlgorithm.runSORM(nearestPointAlgorithm)\n summaryList.append(PrintResults(\"SORM\", benchmarkResult))\n # FORM - ImportanceSampling\n benchmarkResult = metaAlgorithm.runFORMImportanceSampling(\n nearestPointAlgorithm, maximumOuterSampling=maximumOuterSampling\n )\n summaryList.append(PrintResults(\"FORM-IS\", benchmarkResult))\n # MonteCarloSampling\n benchmarkResult = metaAlgorithm.runMonteCarlo(\n maximumOuterSampling=maximumOuterSampling\n )\n summaryList.append(PrintResults(\"MonteCarloSampling\", benchmarkResult))\n # LHS\n benchmarkResult = metaAlgorithm.runLHS()\n summaryList.append(PrintResults(\"LHS\", benchmarkResult))\n # Gather results\n numberOfMethods = len(summaryList)\n correctDigitsList = []\n performanceList = []\n algorithmNames = []\n for i in range(numberOfMethods):\n [name, numberOfCorrectDigits, numberOfDigitsPerEvaluation] = summaryList[i]\n algorithmNames.append(name)\n correctDigitsList.append(numberOfCorrectDigits)\n performanceList.append(numberOfDigitsPerEvaluation)\n print(\"------------------------------------------------------------------------\")\n print(\"Scoring by number of correct digits\")\n indices = np.argsort(correctDigitsList)\n rank = list(indices)\n for i in range(numberOfMethods):\n j = rank[i]\n print(\"%d : %s (%.1f)\" % (j, algorithmNames[j], correctDigitsList[j]))\n print(\"------------------------------------------------------------------------\")\n print(\"Scoring by performance (digits/evaluation)\")\n indices = np.argsort(performanceList)\n rank = list(indices)\n for i in range(len(indices)):\n j = rank[i]\n print(\"%d : %s (%.1e)\" % (j, algorithmNames[j], performanceList[j]))\n return correctDigitsList, performanceList"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"problem = otb.ReliabilityProblem8()\n_ = CompareMethods(problem, nearestPointAlgorithm)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remarks\n\n* We note that the FORM and SORM methods are faster, but they do not converge to the exact probability.\n* We also notice the effectiveness of the FORM-ImportanceSampling method (an inexpensive method which converges).\n* The convergence of the MonteCarlo method requires a large number of simulations.\n* SubsetSampling converges even if the probability is very low.\n\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.20"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
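For reference, the "number of correct digits" score reported by `PrintResults` is, in essence, a base-10 log-relative-error between the computed and exact probabilities. A rough reconstruction of that metric (a sketch only; the clamping bounds are assumptions and otbenchmark's exact conventions may differ) is:

```python
# Sketch of the correct-digits score: -log10 of the relative error,
# clamped to [0, 15] (15 is roughly double precision; the bounds are assumptions).
import numpy as np

def numberOfCorrectDigits(computedProbability, exactProbability):
    if computedProbability == exactProbability:
        return 15.0
    relativeError = abs(computedProbability - exactProbability) / abs(exactProbability)
    return min(15.0, max(0.0, -np.log10(relativeError)))

# Example: a 5% relative error yields about 1.3 correct digits
print(numberOfCorrectDigits(1.05e-3, 1.0e-3))
# The performance score then divides this by the number of function evaluations.
```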