
Develop->master #82

GitHub Actions / Test report (Python 3.11) failed Sep 7, 2023 in 0s

Test report (Python 3.11) ❌

Tests failed

❌ test-results-3.11.xml

407 tests were completed in 1640s (about 27 minutes), with 345 passed, 42 failed, and 20 skipped.

Test suite Passed Failed Skipped Time
pytest 345✅ 42❌ 20⚪ 1640s
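The summary above is derived from the uploaded JUnit-style artifact, test-results-3.11.xml. As a minimal sketch, assuming the standard JUnit XML layout that pytest emits (`<testcase>` elements, with failures and skips marked by nested `<failure>`/`<error>`/`<skipped>` children), the pass/fail/skip counts can be reproduced like this; the inline XML here is an illustrative stand-in, not the actual artifact:

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for test-results-3.11.xml (not the real artifact).
SAMPLE = """
<testsuite name="pytest" tests="4" time="12.3">
  <testcase classname="tests.test_base" name="testGMG" time="1.0"/>
  <testcase classname="tests.test_base" name="testHelmholtz[4-cube]" time="2.0">
    <failure message="assertion failed">traceback here</failure>
  </testcase>
  <testcase classname="tests.test_drivers_intFracLapl"
            name="testNonlocal[square-fractional-poly-Neumann-cg-mg-dense]" time="0.0">
    <skipped message="skipped"/>
  </testcase>
  <testcase classname="tests.test_base" name="test_bitArray" time="0.5"/>
</testsuite>
"""

def summarize(xml_text):
    """Count passed/failed/skipped testcases in a JUnit-style report."""
    root = ET.fromstring(xml_text)
    counts = {"passed": 0, "failed": 0, "skipped": 0}
    for case in root.iter("testcase"):
        # A testcase is failed if it contains a <failure> or <error> child,
        # skipped if it contains <skipped>, and passed otherwise.
        if case.find("failure") is not None or case.find("error") is not None:
            counts["failed"] += 1
        elif case.find("skipped") is not None:
            counts["skipped"] += 1
        else:
            counts["passed"] += 1
    return counts

print(summarize(SAMPLE))  # → {'passed': 2, 'failed': 1, 'skipped': 1}
```

Run against the real artifact, the same function should yield 345/42/20 per the table above.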

❌ pytest

tests.test_base
  ✅ testGMG
  ✅ testParallelGMG[1-interval-P1-False]
  ✅ testParallelGMG[1-interval-P1-True]
  ✅ testParallelGMG[4-interval-P1-True]
  ✅ testParallelGMG[4-interval-P2-True]
  ✅ testParallelGMG[4-square-P2-True]
  ✅ testParallelGMG[4-square-P2-False]
  ✅ testParallelGMG[1-square-P2-True]
  ✅ testParallelGMG[1-square-P2-False]
  ✅ testParallelGMG[4-square-P1-True]
  ✅ testParallelGMG[4-square-P3-True]
  ✅ testParallelGMG[4-square-P3-False]
  ✅ testParallelGMG[1-square-P3-True]
  ✅ testParallelGMG[1-square-P3-False]
  ✅ testParallelGMG[4-interval-P3-True]
  ✅ testParallelGMG[4-cube-P3-True]
  ✅ testParallelGMG[4-cube-P3-False]
  ✅ testParallelGMG[1-cube-P3-True]
  ✅ testParallelGMG[1-cube-P3-False]
  ✅ testParallelGMG[4-cube-P2-True]
  ✅ testParallelGMG[4-cube-P2-False]
  ✅ testParallelGMG[1-cube-P2-True]
  ✅ testParallelGMG[1-cube-P2-False]
  ✅ testParallelGMG[4-cube-P1-True]
  ✅ testParallelGMG[4-cube-P1-False]
  ❌ testHelmholtz[4-cube]
	ranks = 4, domain = 'cube', extra = []
  ✅ testParallelGMG[1-cube-P1-True]
  ✅ testParallelGMG[1-cube-P1-False]
  ❌ testHelmholtz[1-cube]
	ranks = 1, domain = 'cube', extra = []
  ✅ testParallelGMG[4-interval-P3-False]
  ✅ testParallelGMG[1-interval-P3-True]
  ✅ testParallelGMG[1-interval-P3-False]
  ✅ testParallelGMG[4-square-P1-False]
  ❌ testHelmholtz[4-square]
	ranks = 4, domain = 'square', extra = []
  ✅ testParallelGMG[1-square-P1-True]
  ✅ testParallelGMG[1-square-P1-False]
  ❌ testHelmholtz[1-square]
	ranks = 1, domain = 'square', extra = []
  ✅ testParallelGMG[4-interval-P2-False]
  ✅ testParallelGMG[1-interval-P2-True]
  ❌ testParallelGMG[1-interval-P2-False]
	ranks = 1, domain = 'interval', element = 'P2', symmetric = False, extra = []
  ✅ testParallelGMG[4-interval-P1-False]
  ✅ testHelmholtz[4-interval]
  ✅ testHelmholtz[1-interval]
  ✅ testInterface[domainNoRef0]
  ✅ testInterface[domainNoRef1]
  ✅ test_tupleDict
  ✅ test_arrayIndexSet
  ✅ test_bitArray
  ✅ test_integrals_drift[square]
  ✅ test_integrals_grad[square]
tests.test_drivers_intFracLapl
  ✅ testNonlocal[interval-fractional-poly-Dirichlet-lu-dense]
  ❌ testNonlocal[interval-fractional-poly-Neumann-lu-dense]
	runNonlocal_params = ('interval', 'fractional', 'poly-Neumann', 'lu', 'dense')
  ✅ testNonlocal[interval-constant-poly-Dirichlet-lu-dense]
  ❌ testNonlocal[interval-constant-poly-Neumann-lu-dense]
	runNonlocal_params = ('interval', 'constant', 'poly-Neumann', 'lu', 'dense')
  ✅ testNonlocal[interval-inverseDistance-poly-Dirichlet-lu-dense]
  ❌ testNonlocal[interval-inverseDistance-poly-Neumann-lu-dense]
	runNonlocal_params = ('interval', 'inverseDistance', 'poly-Neumann', 'lu', 'dense')
  ✅ testNonlocal[square-fractional-poly-Dirichlet-cg-mg-dense]
  ⚪ testNonlocal[square-fractional-poly-Neumann-cg-mg-dense]
  ✅ testNonlocal[square-constant-poly-Dirichlet-cg-mg-dense]
  ⚪ testNonlocal[square-constant-poly-Neumann-cg-mg-dense]
  ✅ testNonlocal[square-inverseDistance-poly-Dirichlet-cg-mg-dense]
  ⚪ testNonlocal[square-inverseDistance-poly-Neumann-cg-mg-dense]
  ✅ testNonlocal[interval-fractional-poly-Dirichlet-lu-H2]
  ❌ testNonlocal[interval-fractional-poly-Neumann-lu-H2]
	runNonlocal_params = ('interval', 'fractional', 'poly-Neumann', 'lu', 'H2')
  ✅ testNonlocal[interval-constant-poly-Dirichlet-lu-H2]
  ❌ testNonlocal[interval-constant-poly-Neumann-lu-H2]
	runNonlocal_params = ('interval', 'constant', 'poly-Neumann', 'lu', 'H2')
  ❌ testNonlocal[interval-inverseDistance-poly-Dirichlet-lu-H2]
	runNonlocal_params = ('interval', 'inverseDistance', 'poly-Dirichlet', 'lu', 'H2')
  ❌ testNonlocal[interval-inverseDistance-poly-Neumann-lu-H2]
	runNonlocal_params = ('interval', 'inverseDistance', 'poly-Neumann', 'lu', 'H2')
  ✅ testNonlocal[square-fractional-poly-Dirichlet-cg-mg-H2]
  ⚪ testNonlocal[square-fractional-poly-Neumann-cg-mg-H2]
  ✅ testNonlocal[square-constant-poly-Dirichlet-cg-mg-H2]
  ⚪ testNonlocal[square-constant-poly-Neumann-cg-mg-H2]
  ✅ testNonlocal[square-inverseDistance-poly-Dirichlet-cg-mg-H2]
  ⚪ testNonlocal[square-inverseDistance-poly-Neumann-cg-mg-H2]
  ✅ testFractional[interval-const0.25-constant-P0-cg-mg-dense]
  ❌ testFractionalHeat[interval-const0.25-constant-P0-cg-mg-dense]
	runFractional_params = ('interval', 'const(0.25)', 'constant', 'P0', 'cg-mg', 'dense')
  ✅ testFractional[interval-const0.25-constant-P0-cg-mg-H2]
  ❌ testFractionalHeat[interval-const0.25-constant-P0-cg-mg-H2]
	runFractional_params = ('interval', 'const(0.25)', 'constant', 'P0', 'cg-mg', 'H2')
  ✅ testFractional[interval-const0.25-constant-P1-cg-mg-dense]
  ❌ testFractionalHeat[interval-const0.25-constant-P1-cg-mg-dense]
	runFractional_params = ('interval', 'const(0.25)', 'constant', 'P1', 'cg-mg', 'dense')
  ✅ testFractional[interval-const0.25-constant-P1-cg-mg-H2]
  ❌ testFractionalHeat[interval-const0.25-constant-P1-cg-mg-H2]
	runFractional_params = ('interval', 'const(0.25)', 'constant', 'P1', 'cg-mg', 'H2')
  ❌ testFractional[interval-const0.25-zeroFlux-P1-lu-H2]
	runFractional_params = ('interval', 'const(0.25)', 'zeroFlux', 'P1', 'lu', 'H2')
  ❌ testFractionalHeat[interval-const0.25-zeroFlux-P1-lu-H2]
	runFractional_params = ('interval', 'const(0.25)', 'zeroFlux', 'P1', 'lu', 'H2')
  ✅ testFractional[interval-const0.25-knownSolution-P1-cg-jacobi-H2]
  ❌ testFractionalHeat[interval-const0.25-knownSolution-P1-cg-jacobi-H2]
	runFractional_params = ('interval', 'const(0.25)', 'knownSolution', 'P1', 'cg-jacobi', 'H2')
  ✅ testFractional[interval-const0.75-constant-P1-lu-dense]
  ❌ testFractionalHeat[interval-const0.75-constant-P1-lu-dense]
	runFractional_params = ('interval', 'const(0.75)', 'constant', 'P1', 'lu', 'dense')
  ✅ testFractional[interval-const0.75-constant-P1-lu-H2]
  ❌ testFractionalHeat[interval-const0.75-constant-P1-lu-H2]
	runFractional_params = ('interval', 'const(0.75)', 'constant', 'P1', 'lu', 'H2')
  ❌ testFractional[interval-const0.75-zeroFlux-P1-cg-jacobi-H2]
	runFractional_params = ('interval', 'const(0.75)', 'zeroFlux', 'P1', 'cg-jacobi', 'H2')
  ❌ testFractionalHeat[interval-const0.75-zeroFlux-P1-cg-jacobi-H2]
	runFractional_params = ('interval', 'const(0.75)', 'zeroFlux', 'P1', 'cg-jacobi', 'H2')
  ✅ testFractional[interval-const0.75-knownSolution-P1-cg-mg-H2]
  ❌ testFractionalHeat[interval-const0.75-knownSolution-P1-cg-mg-H2]
	runFractional_params = ('interval', 'const(0.75)', 'knownSolution', 'P1', 'cg-mg', 'H2')
  ✅ testFractional[interval-varconst0.75-constant-P1-cg-jacobi-dense]
  ❌ testFractionalHeat[interval-varconst0.75-constant-P1-cg-jacobi-dense]
	runFractional_params = ('interval', 'varconst(0.75)', 'constant', 'P1', 'cg-jacobi', 'dense')
  ✅ testFractional[interval-varconst0.75-constant-P1-cg-jacobi-H2]
  ❌ testFractionalHeat[interval-varconst0.75-constant-P1-cg-jacobi-H2]
	runFractional_params = ('interval', 'varconst(0.75)', 'constant', 'P1', 'cg-jacobi', 'H2')
  ✅ testFractional[interval-varconst0.75-zeroFlux-P1-cg-mg-H2]
  ❌ testFractionalHeat[interval-varconst0.75-zeroFlux-P1-cg-mg-H2]
	runFractional_params = ('interval', 'varconst(0.75)', 'zeroFlux', 'P1', 'cg-mg', 'H2')
  ✅ testFractional[interval-varconst0.75-knownSolution-P1-lu-H2]
  ❌ testFractionalHeat[interval-varconst0.75-knownSolution-P1-lu-H2]
	runFractional_params = ('interval', 'varconst(0.75)', 'knownSolution', 'P1', 'lu', 'H2')
  ✅ testFractional[interval-const0.25-constant-P2-cg-mg-dense]
  ❌ testFractionalHeat[interval-const0.25-constant-P2-cg-mg-dense]
	runFractional_params = ('interval', 'const(0.25)', 'constant', 'P2', 'cg-mg', 'dense')
  ✅ testFractional[interval-const0.25-constant-P2-cg-mg-H2]
  ❌ testFractionalHeat[interval-const0.25-constant-P2-cg-mg-H2]
	runFractional_params = ('interval', 'const(0.25)', 'constant', 'P2', 'cg-mg', 'H2')
  ✅ testFractional[interval-const0.75-constant-P2-cg-mg-dense]
  ❌ testFractionalHeat[interval-const0.75-constant-P2-cg-mg-dense]
	runFractional_params = ('interval', 'const(0.75)', 'constant', 'P2', 'cg-mg', 'dense')
  ✅ testFractional[interval-const0.75-constant-P2-cg-mg-H2]
  ❌ testFractionalHeat[interval-const0.75-constant-P2-cg-mg-H2]
	runFractional_params = ('interval', 'const(0.75)', 'constant', 'P2', 'cg-mg', 'H2')
  ✅ testFractional[interval-const0.25-constant-P3-cg-mg-dense]
  ❌ testFractionalHeat[interval-const0.25-constant-P3-cg-mg-dense]
	runFractional_params = ('interval', 'const(0.25)', 'constant', 'P3', 'cg-mg', 'dense')
  ✅ testFractional[interval-const0.25-constant-P3-cg-mg-H2]
  ❌ testFractionalHeat[interval-const0.25-constant-P3-cg-mg-H2]
	runFractional_params = ('interval', 'const(0.25)', 'constant', 'P3', 'cg-mg', 'H2')
  ✅ testFractional[interval-const0.75-constant-P3-cg-mg-dense]
  ❌ testFractionalHeat[interval-const0.75-constant-P3-cg-mg-dense]
	runFractional_params = ('interval', 'const(0.75)', 'constant', 'P3', 'cg-mg', 'dense')
  ✅ testFractional[interval-const0.75-constant-P3-cg-mg-H2]
  ❌ testFractionalHeat[interval-const0.75-constant-P3-cg-mg-H2]
	runFractional_params = ('interval', 'const(0.75)', 'constant', 'P3', 'cg-mg', 'H2')
  ✅ testFractional[disc-const0.25-constant-P0-cg-mg-dense]
  ❌ testFractionalHeat[disc-const0.25-constant-P0-cg-mg-dense]
	runFractional_params = ('disc', 'const(0.25)', 'constant', 'P0', 'cg-mg', 'dense')
  ✅ testFractional[disc-const0.25-constant-P0-cg-mg-H2]
  ❌ testFractionalHeat[disc-const0.25-constant-P0-cg-mg-H2]
	runFractional_params = ('disc', 'const(0.25)', 'constant', 'P0', 'cg-mg', 'H2')
  ✅ testFractional[disc-const0.25-constant-P1-cg-mg-dense]
  ❌ testFractionalHeat[disc-const0.25-constant-P1-cg-mg-dense]
	runFractional_params = ('disc', 'const(0.25)', 'constant', 'P1', 'cg-mg', 'dense')
  ✅ testFractional[disc-const0.25-constant-P1-cg-mg-H2]
  ❌ testFractionalHeat[disc-const0.25-constant-P1-cg-mg-H2]
	runFractional_params = ('disc', 'const(0.25)', 'constant', 'P1', 'cg-mg', 'H2')
  ✅ testFractional[disc-const0.75-constant-P1-cg-mg-dense]
  ❌ testFractionalHeat[disc-const0.75-constant-P1-cg-mg-dense]
	runFractional_params = ('disc', 'const(0.75)', 'constant', 'P1', 'cg-mg', 'dense')
  ✅ testFractional[disc-const0.75-constant-P1-cg-mg-H2]
  ❌ testFractionalHeat[disc-const0.75-constant-P1-cg-mg-H2]
	runFractional_params = ('disc', 'const(0.75)', 'constant', 'P1', 'cg-mg', 'H2')
  ✅ testVariableOrder
  ✅ testMatvecs[interval-const0.25]
  ✅ testMatvecs[interval-const0.75]
  ✅ testMatvecs[interval-varconst0.25]
  ✅ testMatvecs[interval-varconst0.75]
  ✅ testMatvecs[square-const0.25]
  ✅ testMatvecs[square-const0.75]
  ✅ testMatvecs[square-varconst0.25]
  ✅ testMatvecs[square-varconst0.75]
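A pattern worth noting in the listing above: the interval poly-Neumann variants fail while most poly-Dirichlet counterparts pass, and every testFractionalHeat case fails while the matching testFractional case usually passes. Grouping failures by test function makes that clustering visible; the sketch below uses a hand-copied subset of the failing node IDs from this report, stripping the parametrized ID suffix (the bracketed part) to group by function:

```python
from collections import Counter

# A subset of the failed node IDs copied from the report above.
failed = [
    "testHelmholtz[4-cube]",
    "testHelmholtz[1-cube]",
    "testParallelGMG[1-interval-P2-False]",
    "testNonlocal[interval-fractional-poly-Neumann-lu-dense]",
    "testNonlocal[interval-constant-poly-Neumann-lu-dense]",
    "testFractionalHeat[interval-const0.25-constant-P0-cg-mg-dense]",
    "testFractionalHeat[disc-const0.75-constant-P1-cg-mg-H2]",
    "testFractional[interval-const0.25-zeroFlux-P1-lu-H2]",
]

# Strip the "[...]" parametrization suffix to group failures by test function.
by_function = Counter(name.split("[", 1)[0] for name in failed)
for func, n in by_function.most_common():
    print(f"{func}: {n}")
```

Over the full report, the 42 failures break down as 28 testFractionalHeat, 7 testNonlocal, 4 testHelmholtz, 2 testFractional, and 1 testParallelGMG, which suggests the time-dependent (heat) driver and the Neumann/zeroFlux boundary handling as the places to look first.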
tests.test_fracLapl
  ✅ testFracLapl[1-P1-0.3]
  ✅ testFracLapl[1-P1-0.7]
  ✅ testFracLapl[2-P1-0.3]
  ✅ testFracLapl[2-P1-0.7]
  ✅ testScaling[1-0.25-inf]
  ✅ testScaling[1-0.25-1]
  ✅ testScaling[1-0.75-inf]
  ✅ testScaling[1-0.75-1]
  ✅ testScaling[2-0.25-inf]
  ✅ testScaling[2-0.25-1]
  ✅ testScaling[2-0.75-inf]
  ✅ testScaling[2-0.75-1]
  ✅ testH2[1-0.3-0.0001-P1]
  ✅ testH2[1-0.7-0.01-P1]
  ✅ testH2[2-0.3-0.00012-P1]
  ✅ testH2[2-0.7-0.01-P1]
tests.test_h2finiteHorizon
  ✅ test_h2_finite[1-0.25-1.0-0.5-True]
  ✅ test_h2_finite[1-0.75-1.0-0.5-True]
  ✅ test_h2_finite[1-0.25-1.0-0.5-False]
  ✅ test_h2_finite[1-0.75-1.0-0.5-False]
  ✅ test_h2_finite[1-0.25-1.0-2.5-False]
  ✅ test_h2_finite[1-0.75-1.0-2.5-False]
tests.test_kernels
  ✅ testIntegrableKernel[dim1-kernelTypeconstanthorizon0.5-normalizedTrue]
  ✅ testIntegrableKernel[dim1-kernelTypeconstanthorizon0.5-normalizedFalse]
  ✅ testIntegrableKernel[dim1-kernelTypeinverseDistancehorizon0.5-normalizedTrue]
  ✅ testIntegrableKernel[dim1-kernelTypeinverseDistancehorizon0.5-normalizedFalse]
  ✅ testIntegrableKernel[dim1-kernelTypeGaussianhorizon0.5-normalizedTrue]
  ✅ testIntegrableKernel[dim1-kernelTypeGaussianhorizon0.5-normalizedFalse]
  ✅ testIntegrableKernel[dim2-kernelTypeconstanthorizon0.5-normalizedTrue]
  ✅ testIntegrableKernel[dim2-kernelTypeconstanthorizon0.5-normalizedFalse]
  ✅ testIntegrableKernel[dim2-kernelTypeinverseDistancehorizon0.5-normalizedTrue]
  ✅ testIntegrableKernel[dim2-kernelTypeinverseDistancehorizon0.5-normalizedFalse]
  ✅ testIntegrableKernel[dim2-kernelTypeGaussianhorizon0.5-normalizedTrue]
  ✅ testIntegrableKernel[dim2-kernelTypeGaussianhorizon0.5-normalizedFalse]
  ✅ testFractionalKernel[dim1-s0.25-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-s0.25-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-s0.25-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-s0.25-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-s0.75-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-s0.75-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-s0.75-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-s0.75-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedTrue-phiNone-derivative00]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedFalse-phiNone-derivative00]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedTrue-phiNone-derivative00]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedFalse-phiNone-derivative00]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedTrue-phiNone-derivative01]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedFalse-phiNone-derivative01]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedTrue-phiNone-derivative01]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedFalse-phiNone-derivative01]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedTrue-phiNone-derivative02]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedFalse-phiNone-derivative02]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedTrue-phiNone-derivative02]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedFalse-phiNone-derivative02]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedTrue-phiNone-derivative03]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedFalse-phiNone-derivative03]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedTrue-phiNone-derivative03]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedFalse-phiNone-derivative03]
  ✅ testFractionalKernel[dim1-s0.25-horizoninf-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-s0.25-horizoninf-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-s0.25-horizon0.5-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-s0.25-horizon0.5-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-s0.75-horizoninf-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-s0.75-horizoninf-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-s0.75-horizon0.5-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-s0.75-horizon0.5-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizoninf-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizoninf-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizon0.5-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizon0.5-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizoninf-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizoninf-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizon0.5-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizon0.5-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizoninf-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizoninf-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizon0.5-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizon0.5-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizoninf-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizoninf-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizon0.5-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizon0.5-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizoninf-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizoninf-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizon0.5-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizon0.5-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizoninf-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizoninf-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizon0.5-normalizedTrue-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizon0.5-normalizedFalse-phi2.0-derivative0]
  ✅ testFractionalKernel[dim1-s0.25-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-s0.25-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-s0.25-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-s0.25-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-s0.75-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-s0.75-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-s0.75-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-s0.75-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.25,sym=1)-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-svariableConstFractionalOrder(s=0.75,sym=1)-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.25)-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-sconstantNonSymFractionalOrder(0.75)-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedTrue-phiNone-derivative10]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedFalse-phiNone-derivative10]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedTrue-phiNone-derivative10]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedFalse-phiNone-derivative10]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedTrue-phiNone-derivative11]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedFalse-phiNone-derivative11]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedTrue-phiNone-derivative11]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedFalse-phiNone-derivative11]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedTrue-phiNone-derivative12]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedFalse-phiNone-derivative12]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedTrue-phiNone-derivative12]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedFalse-phiNone-derivative12]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedTrue-phiNone-derivative13]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizoninf-normalizedFalse-phiNone-derivative13]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedTrue-phiNone-derivative13]
  ✅ testFractionalKernel[dim1-sfeFractionalOrder(lookupExtended)-horizon0.5-normalizedFalse-phiNone-derivative13]
  ✅ testFractionalKernel[dim2-s0.25-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-s0.25-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-s0.25-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-s0.25-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-s0.75-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-s0.75-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-s0.75-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-s0.75-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.25,sym=1)-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.25,sym=1)-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.25,sym=1)-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.25,sym=1)-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.75,sym=1)-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.75,sym=1)-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.75,sym=1)-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.75,sym=1)-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.25)-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.25)-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.25)-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.25)-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.75)-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.75)-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.75)-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.75)-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizoninf-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizoninf-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizon0.5-normalizedTrue-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizon0.5-normalizedFalse-phiNone-derivative0]
  ✅ testFractionalKernel[dim2-s0.25-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-s0.25-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-s0.25-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-s0.25-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-s0.75-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-s0.75-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-s0.75-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-s0.75-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.25,sym=1)-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.25,sym=1)-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.25,sym=1)-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.25,sym=1)-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.75,sym=1)-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.75,sym=1)-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.75,sym=1)-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-svariableConstFractionalOrder(s=0.75,sym=1)-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.25)-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.25)-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.25)-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.25)-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.75)-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.75)-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.75)-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-sconstantNonSymFractionalOrder(0.75)-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.75,sr=0.25,r=0.1,interface=0.0))-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizoninf-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizoninf-normalizedFalse-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizon0.5-normalizedTrue-phiNone-derivative1]
  ✅ testFractionalKernel[dim2-ssmoothedLeftRightFractionalOrder(smoothStep(sl=0.25,sr=0.75,r=0.1,interface=0.0))-horizon0.5-normalizedFalse-phiNone-derivative1]
  ✅ test_discrete_s_const
  ✅ test_discrete_leftRight
tests.test_nearField.const1D_025
  ✅ testConstCluster
  ✅ testConstH2
  ✅ testVarDense
  ✅ testVarCluster
tests.test_nearField.const1D_075
  ✅ testConstCluster
  ✅ testConstH2
  ✅ testVarDense
  ✅ testVarCluster
tests.test_nearField.const1D_025_finiteHorizon
  ✅ testConstCluster
  ✅ testConstH2
  ✅ testVarDense
  ✅ testVarCluster
tests.test_nearField.const1D_075_finiteHorizon
  ✅ testConstCluster
  ✅ testConstH2
  ✅ testVarDense
  ✅ testVarCluster
tests.test_nearField.leftRight1D
  ⚪ testConstCluster
  ⚪ testConstH2
  ⚪ testVarDense
  ✅ testVarCluster
tests.test_nearField.leftRight1DfiniteHorizon
  ⚪ testConstCluster
  ⚪ testConstH2
  ⚪ testVarDense
  ✅ testVarCluster
tests.test_nearField.const2D_025
  ✅ testConstCluster
  ⚪ testConstH2
  ✅ testVarDense
  ✅ testVarCluster
tests.test_nearField.const2D_075
  ✅ testConstCluster
  ⚪ testConstH2
  ✅ testVarDense
  ✅ testVarCluster
tests.test_nearField.leftRight2DinfiniteHorizon
  ⚪ testConstCluster
  ⚪ testConstH2
  ⚪ testVarDense
  ✅ testVarCluster
tests.test_nearField.layers2D
  ⚪ testConstCluster
  ⚪ testConstH2
  ⚪ testVarDense
  ✅ testVarCluster

Annotations

Check failure on line 0 in test-results-3.11.xml

See this annotation in the file changed.

@github-actions github-actions / Test report (Python 3.11)

pytest ► tests.test_base ► testHelmholtz[4-cube]

Failed test found in:
  test-results-3.11.xml
Error:
  ranks = 4, domain = 'cube', extra = []
Raw output
ranks = 4, domain = 'cube', extra = []

    def testHelmholtz(ranks, domain, extra):
        base = getPath()+'/../'
        py = ['runHelmholtz.py', '--domain', domain]
        path = base+'drivers'
        cacheDir = getPath()+'/'
>       runDriver(path, py, ranks=ranks, cacheDir=cacheDir, extra=extra)

tests/drivers_base.py:73: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runHelmholtz.py', '--domain', 'cube', '--test', '--testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runHelmholtz.py--domaincube4', '--skipPlots']
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 4, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.01, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:20:06  root                                     
E       1: Traceback (most recent call last):
E       1:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runHelmholtz.py", line 45, in <module>
E           hierarchies, connectors = paramsForMG(p.noRef,
E                                                 ^^^^^^^
E       1:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1624, in getter
E           builder()
E       1:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1491, in __call__
E           self.fun(*args)
E       1:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_fem/pdeProblems.py", line 255, in processProblem
E           self.rhs = (np.vdot(xi, xi)-self.frequency**2) * self.solEx
E                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
E       1: TypeError: unsupported operand type(s) for *: 'float' and 'PyNucleus_fem.functions.waveFunction'
E       
E       2023-09-07 13:20:06  root                                     
E       3: Traceback (most recent call last):
E       3:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runHelmholtz.py", line 45, in <module>
E           hierarchies, connectors = paramsForMG(p.noRef,
E                                                 ^^^^^^^
E       3:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1624, in getter
E           builder()
E       3:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1491, in __call__
E           self.fun(*args)
E       3:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_fem/pdeProblems.py", line 255, in processProblem
E           self.rhs = (np.vdot(xi, xi)-self.frequency**2) * self.solEx
E                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
E       3: TypeError: unsupported operand type(s) for *: 'float' and 'PyNucleus_fem.functions.waveFunction'
E       
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------
E       [fv-az283-77:05199] 1 more process has sent help message help-mpi-api.txt / mpi-abort
E       [fv-az283-77:05199] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError
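The root cause in this traceback is a plain Python semantics issue: `np.vdot(xi, xi) - self.frequency**2` evaluates to a float, and multiplying a float by an instance of a class that defines neither `__mul__` nor `__rmul__` raises exactly this `TypeError`. A minimal sketch, using a hypothetical stand-in class (not the actual `PyNucleus_fem.functions.waveFunction` implementation), reproduces the failure and shows how defining `__rmul__` would make scalar-times-function work:

```python
class WaveFunctionNoScaling:
    """Hypothetical stand-in lacking scalar multiplication, like the failing class."""
    def __call__(self, x):
        return 1.0


class WaveFunctionScalable:
    """Hypothetical stand-in that supports scaling by a float via __mul__/__rmul__."""
    def __init__(self, scale=1.0):
        self.scale = scale

    def __call__(self, x):
        return self.scale * 1.0

    def __mul__(self, c):
        # Return a new, scaled function object.
        return WaveFunctionScalable(self.scale * c)

    # float * f delegates to f.__rmul__(float); reuse __mul__ since scaling commutes.
    __rmul__ = __mul__


# Plays the role of (np.vdot(xi, xi) - self.frequency**2) from the traceback.
coeff = 2.5

# Reproduces: TypeError: unsupported operand type(s) for *: 'float' and '...'
try:
    coeff * WaveFunctionNoScaling()
    raised = False
except TypeError:
    raised = True

# With __rmul__ defined, the same expression succeeds.
scaled = coeff * WaveFunctionScalable()
print(raised, scaled(0.0))
```

The actual fix in PyNucleus may differ (e.g. wrapping the scalar in a constant-function object before multiplying), but the mechanism of the error is the missing reflected-multiplication hook.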


pytest ► tests.test_base ► testHelmholtz[1-cube]

Failed test found in:
  test-results-3.11.xml
Error:
  ranks = 1, domain = 'cube', extra = []
Raw output
ranks = 1, domain = 'cube', extra = []

    def testHelmholtz(ranks, domain, extra):
        base = getPath()+'/../'
        py = ['runHelmholtz.py', '--domain', domain]
        path = base+'drivers'
        cacheDir = getPath()+'/'
>       runDriver(path, py, ranks=ranks, cacheDir=cacheDir, extra=extra)

tests/drivers_base.py:73: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runHelmholtz.py', '--domain', 'cube', '--test', '--testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runHelmholtz.py--domaincube1', '--skipPlots']
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 1, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.01, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:21:20  __main__                                 
E       {'debugOverlaps': False,
E        'disableFileLog': False,
E        'disableHeader': False,
E        'disableTimeStamps': False,
E        'displayConfig': True,
E        'displayRanks': False,
E        'domain': 'cube',
E        'element': 'P1',
E        'frequency': 40.0,
E        'hdf5Input': '',
E        'hdf5Output': '',
E        'logDependencies': False,
E        'logProperties': '',
E        'maxiter': 300,
E        'mpiGlobalCommSize': 1,
E        'overwriteCache': False,
E        'partitioner': 'regular',
E        'partitionerParams': {},
E        'plotFolder': '',
E        'plotFormat': 'pdf',
E        'plot_error': True,
E        'plot_solution': True,
E        'problem': 'wave',
E        'reorder': False,
E        'showDependencyGraph': False,
E        'showMemory': False,
E        'showTimers': True,
E        'skipPlots': True,
E        'symmetric': False,
E        'test': True,
E        'testCache': '/home/runner/work/PyNucleus/PyNucleus/tests//cache_runHelmholtz.py--domaincube1',
E        'yamlInput': '',
E        'yamlOutput': ''}
E       2023-09-07 13:21:20  __main__                                 
E       Running:                                              /opt/hostedtoolcache/Python/3.11.5/x64/bin/python3 runHelmholtz.py --domain cube --test --testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runHelmholtz.py--domaincube1 --skipPlots
E       Open MPI v4.1.2, package: Debian OpenMPI, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021
E       MPI standard supported:                               (3, 1)
E       Vendor:                                               ('Open MPI', (4, 1, 2))
E       Level of thread support:                              multiple
E       Is threaded:                                          True
E       Threads requested:                                    True
E       Thread level requested:                               multiple
E       Hosts:                                                fv-az283-77
E       Communicator size:                                    1
E       OMP_NUM_THREADS:                                      not set
E       numpy:                                                1.25.2
E       scipy:                                                1.11.2
E       mpi4py:                                               3.1.4
E       cython:                                               3.0.2
E       PyNucleus_base,fem,metisCy,multilevelSolver,nl,packageTools:1.1rc0
E       
E       2023-09-07 13:21:20  __main__                                 setup levels in 0.000257 s
E       2023-09-07 13:21:20  root                                     
E       0: Traceback (most recent call last):
E       0:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runHelmholtz.py", line 45, in <module>
E           hierarchies, connectors = paramsForMG(p.noRef,
E                                                 ^^^^^^^
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1624, in getter
E           builder()
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1491, in __call__
E           self.fun(*args)
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_fem/pdeProblems.py", line 255, in processProblem
E           self.rhs = (np.vdot(xi, xi)-self.frequency**2) * self.solEx
E                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
E       0: TypeError: unsupported operand type(s) for *: 'float' and 'PyNucleus_fem.functions.waveFunction'
E       
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError


pytest ► tests.test_base ► testHelmholtz[4-square]

Failed test found in:
  test-results-3.11.xml
Error:
  ranks = 4, domain = 'square', extra = []
Raw output
ranks = 4, domain = 'square', extra = []

    def testHelmholtz(ranks, domain, extra):
        base = getPath()+'/../'
        py = ['runHelmholtz.py', '--domain', domain]
        path = base+'drivers'
        cacheDir = getPath()+'/'
>       runDriver(path, py, ranks=ranks, cacheDir=cacheDir, extra=extra)

tests/drivers_base.py:73: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runHelmholtz.py', '--domain', 'square', '--test', '--testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runHelmholtz.py--domainsquare4', '--skipPlots']
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 4, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.01, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:21:35  __main__                                 
E       {'debugOverlaps': False,
E        'disableFileLog': False,
E        'disableHeader': False,
E        'disableTimeStamps': False,
E        'displayConfig': True,
E        'displayRanks': False,
E        'domain': 'square',
E        'element': 'P1',
E        'frequency': 40.0,
E        'hdf5Input': '',
E        'hdf5Output': '',
E        'logDependencies': False,
E        'logProperties': '',
E        'maxiter': 300,
E        'mpiGlobalCommSize': 4,
E        'overwriteCache': False,
E        'partitioner': 'regular',
E        'partitionerParams': {},
E        'plotFolder': '',
E        'plotFormat': 'pdf',
E        'plot_error': True,
E        'plot_solution': True,
E        'problem': 'wave',
E        'reorder': False,
E        'showDependencyGraph': False,
E        'showMemory': False,
E        'showTimers': True,
E        'skipPlots': True,
E        'symmetric': False,
E        'test': True,
E        'testCache': '/home/runner/work/PyNucleus/PyNucleus/tests//cache_runHelmholtz.py--domainsquare4',
E        'yamlInput': '',
E        'yamlOutput': ''}
E       2023-09-07 13:21:35  root                                     
E       1: Traceback (most recent call last):
E       1:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runHelmholtz.py", line 45, in <module>
E           hierarchies, connectors = paramsForMG(p.noRef,
E                                                 ^^^^^^^
E       1:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1624, in getter
E           builder()
E       1:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1491, in __call__
E           self.fun(*args)
E       1:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_fem/pdeProblems.py", line 225, in processProblem
E           self.rhs = (np.vdot(xi, xi)-self.frequency**2) * self.solEx
E                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
E       1: TypeError: unsupported operand type(s) for *: 'float' and 'PyNucleus_fem.functions.waveFunction'
E       
E       2023-09-07 13:21:35  root                                     
E       2: Traceback (most recent call last):
E       2:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runHelmholtz.py", line 45, in <module>
E           hierarchies, connectors = paramsForMG(p.noRef,
E                                                 ^^^^^^^
E       2:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1624, in getter
E           builder()
E       2:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1491, in __call__
E           self.fun(*args)
E       2:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_fem/pdeProblems.py", line 225, in processProblem
E           self.rhs = (np.vdot(xi, xi)-self.frequency**2) * self.solEx
E                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
E       2: TypeError: unsupported operand type(s) for *: 'float' and 'PyNucleus_fem.functions.waveFunction'
E       
E       2023-09-07 13:21:35  __main__                                 
E       Running:                                              /opt/hostedtoolcache/Python/3.11.5/x64/bin/python3 runHelmholtz.py --domain square --test --testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runHelmholtz.py--domainsquare4 --skipPlots
E       Open MPI v4.1.2, package: Debian OpenMPI, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021
E       MPI standard supported:                               (3, 1)
E       Vendor:                                               ('Open MPI', (4, 1, 2))
E       Level of thread support:                              multiple
E       Is threaded:                                          True
E       Threads requested:                                    True
E       Thread level requested:                               multiple
E       Hosts:                                                fv-az283-77
E       Communicator size:                                    4
E       OMP_NUM_THREADS:                                      not set
E       numpy:                                                1.25.2
E       scipy:                                                1.11.2
E       mpi4py:                                               3.1.4
E       cython:                                               3.0.2
E       PyNucleus_base,fem,metisCy,multilevelSolver,nl,packageTools:1.1rc0
E       
E       2023-09-07 13:21:35  __main__                                 setup levels in 0.000229 s
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------
E       2023-09-07 13:21:35  root                                     
E       0: Traceback (most recent call last):
E       0:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runHelmholtz.py", line 45, in <module>
E           hierarchies, connectors = paramsForMG(p.noRef,
E                                                 ^^^^^^^
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1624, in getter
E           builder()
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1491, in __call__
E           self.fun(*args)
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_fem/pdeProblems.py", line 225, in processProblem
E           self.rhs = (np.vdot(xi, xi)-self.frequency**2) * self.solEx
E                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
E       0: TypeError: unsupported operand type(s) for *: 'float' and 'PyNucleus_fem.functions.waveFunction'
E       
E       2023-09-07 13:21:35  root                                     
E       3: Traceback (most recent call last):
E       3:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runHelmholtz.py", line 45, in <module>
E           hierarchies, connectors = paramsForMG(p.noRef,
E                                                 ^^^^^^^
E       3:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1624, in getter
E           builder()
E       3:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1491, in __call__
E           self.fun(*args)
E       3:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_fem/pdeProblems.py", line 225, in processProblem
E           self.rhs = (np.vdot(xi, xi)-self.frequency**2) * self.solEx
E                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
E       3: TypeError: unsupported operand type(s) for *: 'float' and 'PyNucleus_fem.functions.waveFunction'
E       
E       [fv-az283-77:05319] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
E       [fv-az283-77:05319] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError


pytest ► tests.test_base ► testHelmholtz[1-square]

Failed test found in:
  test-results-3.11.xml
Error:
  ranks = 1, domain = 'square', extra = []
Raw output
ranks = 1, domain = 'square', extra = []

    def testHelmholtz(ranks, domain, extra):
        base = getPath()+'/../'
        py = ['runHelmholtz.py', '--domain', domain]
        path = base+'drivers'
        cacheDir = getPath()+'/'
>       runDriver(path, py, ranks=ranks, cacheDir=cacheDir, extra=extra)

tests/drivers_base.py:73: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runHelmholtz.py', '--domain', 'square', '--test', '--testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runHelmholtz.py--domainsquare1', '--skipPlots']
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 1, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.01, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:21:45  __main__                                 
E       {'debugOverlaps': False,
E        'disableFileLog': False,
E        'disableHeader': False,
E        'disableTimeStamps': False,
E        'displayConfig': True,
E        'displayRanks': False,
E        'domain': 'square',
E        'element': 'P1',
E        'frequency': 40.0,
E        'hdf5Input': '',
E        'hdf5Output': '',
E        'logDependencies': False,
E        'logProperties': '',
E        'maxiter': 300,
E        'mpiGlobalCommSize': 1,
E        'overwriteCache': False,
E        'partitioner': 'regular',
E        'partitionerParams': {},
E        'plotFolder': '',
E        'plotFormat': 'pdf',
E        'plot_error': True,
E        'plot_solution': True,
E        'problem': 'wave',
E        'reorder': False,
E        'showDependencyGraph': False,
E        'showMemory': False,
E        'showTimers': True,
E        'skipPlots': True,
E        'symmetric': False,
E        'test': True,
E        'testCache': '/home/runner/work/PyNucleus/PyNucleus/tests//cache_runHelmholtz.py--domainsquare1',
E        'yamlInput': '',
E        'yamlOutput': ''}
E       2023-09-07 13:21:45  __main__                                 
E       Running:                                              /opt/hostedtoolcache/Python/3.11.5/x64/bin/python3 runHelmholtz.py --domain square --test --testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runHelmholtz.py--domainsquare1 --skipPlots
E       Open MPI v4.1.2, package: Debian OpenMPI, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021
E       MPI standard supported:                               (3, 1)
E       Vendor:                                               ('Open MPI', (4, 1, 2))
E       Level of thread support:                              multiple
E       Is threaded:                                          True
E       Threads requested:                                    True
E       Thread level requested:                               multiple
E       Hosts:                                                fv-az283-77
E       Communicator size:                                    1
E       OMP_NUM_THREADS:                                      not set
E       numpy:                                                1.25.2
E       scipy:                                                1.11.2
E       mpi4py:                                               3.1.4
E       cython:                                               3.0.2
E       PyNucleus_base,fem,metisCy,multilevelSolver,nl,packageTools:1.1rc0
E       
E       2023-09-07 13:21:45  __main__                                 setup levels in 0.000229 s
E       2023-09-07 13:21:45  root                                     
E       0: Traceback (most recent call last):
E       0:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runHelmholtz.py", line 45, in <module>
E           hierarchies, connectors = paramsForMG(p.noRef,
E                                                 ^^^^^^^
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1624, in getter
E           builder()
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1491, in __call__
E           self.fun(*args)
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_fem/pdeProblems.py", line 225, in processProblem
E           self.rhs = (np.vdot(xi, xi)-self.frequency**2) * self.solEx
E                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
E       0: TypeError: unsupported operand type(s) for *: 'float' and 'PyNucleus_fem.functions.waveFunction'
E       
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError
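The traceback above shows the root cause of both Helmholtz failures: in `pdeProblems.py:225`, the scalar `(np.vdot(xi, xi) - self.frequency**2)` is a float, and the right-hand operand is a compiled function object (`PyNucleus_fem.functions.waveFunction`) that apparently does not implement `__rmul__`, so `float * function` raises. A minimal stand-alone reproduction, with `WaveLike` as a hypothetical stand-in for the actual PyNucleus class:

```python
import numpy as np

class WaveLike:
    """Hypothetical stand-in for a compiled FEM function object that
    defines no scalar multiplication (no __mul__/__rmul__)."""
    def __call__(self, x):
        return np.sin(x)

xi = np.array([1.0, 2.0])
frequency = 40.0
solEx = WaveLike()

try:
    # Same shape of expression as pdeProblems.py line 225:
    rhs = (np.vdot(xi, xi) - frequency**2) * solEx
    caught = None
except TypeError as exc:
    # float.__mul__ returns NotImplemented and WaveLike has no __rmul__,
    # so Python raises TypeError, matching the log message.
    caught = type(exc).__name__

print(caught)  # -> TypeError
```

A plausible fix (an assumption, not verified against the PyNucleus sources) would be to implement `__rmul__`/`__mul__` on the function class so that scalar-times-function returns a new function object, or to route the product through whatever scalar-scaling wrapper the library already provides.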


pytest ► tests.test_base ► testParallelGMG[1-interval-P2-False]

Failed test found in:
  test-results-3.11.xml
Error:
  ranks = 1, domain = 'interval', element = 'P2', symmetric = False, extra = []
Raw output
ranks = 1, domain = 'interval', element = 'P2', symmetric = False, extra = []

    def testParallelGMG(ranks, domain, element, symmetric, extra):
        base = getPath()+'/../'
        py = ['runParallelGMG.py',
              '--domain', domain,
              '--element', element]
        if symmetric:
            py.append('--symmetric')
        path = base+'drivers'
        cacheDir = getPath()+'/'
>       runDriver(path, py, ranks=ranks, cacheDir=cacheDir, relTol=3e-2, extra=extra)

tests/drivers_base.py:62: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runParallelGMG.py', '--domain', 'interval', '--element', 'P2', '--test', ...]
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 1, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.03, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:21:51  __main__                                 
E       {'checkSolution': False,
E        'commType': 'standard',
E        'debugOverlaps': False,
E        'disableFileLog': False,
E        'disableHeader': False,
E        'disableTimeStamps': False,
E        'displayConfig': True,
E        'displayRanks': False,
E        'doBICGSTAB': False,
E        'doCG': False,
E        'doFMG': True,
E        'doFMGPCG': True,
E        'doFMGPGMRES': True,
E        'doGMRES': False,
E        'doMG': True,
E        'doPBICGSTAB': True,
E        'doPCG': True,
E        'doPCoarsen': False,
E        'doPGMRES': True,
E        'domain': 'interval',
E        'element': 'P2',
E        'hdf5Input': '',
E        'hdf5Output': '',
E        'logDependencies': False,
E        'logProperties': '',
E        'maxiter': 50,
E        'mpiGlobalCommSize': 1,
E        'noRef': 14,
E        'overwriteCache': False,
E        'partitioner': 'regular',
E        'partitionerParams': {},
E        'plotFolder': '',
E        'plotFormat': 'pdf',
E        'plot_residuals': False,
E        'plot_spSolve': True,
E        'plot_spSolveError': True,
E        'plot_spSolveExactSolution': True,
E        'problem': 'sin',
E        'reorder': False,
E        'saveVTK': False,
E        'showDependencyGraph': False,
E        'showMemory': False,
E        'showTimers': True,
E        'skipPlots': True,
E        'smoother': 'jacobi',
E        'symmetric': False,
E        'test': True,
E        'testCache': '/home/runner/work/PyNucleus/PyNucleus/tests//cache_runParallelGMG.py--domaininterval--elementP21',
E        'yamlInput': '',
E        'yamlOutput': ''}
E       2023-09-07 13:21:51  __main__                                 
E       Running:                                              /opt/hostedtoolcache/Python/3.11.5/x64/bin/python3 runParallelGMG.py --domain interval --element P2 --test --testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runParallelGMG.py--domaininterval--elementP21 --skipPlots
E       Open MPI v4.1.2, package: Debian OpenMPI, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021
E       MPI standard supported:                               (3, 1)
E       Vendor:                                               ('Open MPI', (4, 1, 2))
E       Level of thread support:                              multiple
E       Is threaded:                                          True
E       Threads requested:                                    True
E       Thread level requested:                               multiple
E       Hosts:                                                fv-az283-77
E       Communicator size:                                    1
E       OMP_NUM_THREADS:                                      not set
E       numpy:                                                1.25.2
E       scipy:                                                1.11.2
E       mpi4py:                                               3.1.4
E       cython:                                               3.0.2
E       PyNucleus_base,fem,metisCy,multilevelSolver,nl,packageTools:1.1rc0
E       
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 0.000174 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.connectors    Initializing mesh on 'seed' in 0.00131 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000147 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 6.9e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000153 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 7e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000148 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 8.91e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000143 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 0.000102 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000149 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 0.000105 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000164 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 0.000135 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000179 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 0.000206 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000297 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 0.000394 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000353 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 0.000627 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000544 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Assembled matrices in 0.00167 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 0.000681 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 0.0003 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 0.000163 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 7.56e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 4.02e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 2.26e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 1.36e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 9.6e-06 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 7.4e-06 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 5.9e-06 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Build algebraic overlaps of type 'standard' in 5.2e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 0.00167 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.connectors    Repartitioning from 'seed' to 'fine' using 'regular' in 0.0346 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.connectors    Building algebraic overlaps of type 'standard' from 'seed' to 'fine' using Alltoallv in 0.00137 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.connectors    Building distribute from 'seed' to 'fine' in 0.000376 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000752 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined interfaces in 4.6e-06 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Build algebraic overlaps of type 'standard' in 2.75e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 0.00229 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00135 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined interfaces in 2.8e-06 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Build algebraic overlaps of type 'standard' in 3.01e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Prepared sparsity patterns in 0.00375 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00265 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Refined interfaces in 2.9e-06 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Build algebraic overlaps of type 'standard' in 3.94e-05 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Assembled matrices in 0.0121 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 0.00489 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 0.00244 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.levels        Restrict stiffness matrix in 0.00134 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.hierarchies   Build multilevel overlaps in 0.000154 s
E       2023-09-07 13:21:51  PyNucleus_multilevelSolver.hierarchies   
E                                      0
E       input                          -
E       seed 0                         o
E       seed 1                         o
E       seed 2                         o
E       seed 3                         o
E       seed 4                         o
E       seed 5                         o
E       seed 6                         o
E       seed 7                         o
E       seed 8                         o
E       seed 9                         o
E       seed 10                        o
E       breakUp_seed:1                 -
E       fine 10                        o
E       fine 11                        o
E       fine 12                        o
E       fine 13                        o
E       2023-09-07 13:21:51  __main__                                 setup levels in 0.102 s
E       2023-09-07 13:21:51  __main__                                 Assemble rhs on finest grid in 0.00692 s
E       2023-09-07 13:21:51  __main__                                 Setup solver in 0.00742 s
E       2023-09-07 13:21:51  __main__                                 
E       Subdomains:       1
E       Refinement steps: 14
E       Elements:         16,384
E       DoFs:             32,767
E       h:                6.1e-05
E       hmin:             6.1e-05
E       Tolerance:        2e-09
E       
E       2023-09-07 13:21:51  __main__                                 
E         level  unknowns    nnz        nnz/row  solver
E       -------  ----------  -------  ---------  ----------------------------------
E             3  32,767      131,063    3.99985  Jacobi (2/2 sweeps, 0.667 damping)
E             2  16,383      65,527     3.99969  Jacobi (2/2 sweeps, 0.667 damping)
E             1  8,191       32,759     3.99939  Jacobi (2/2 sweeps, 0.667 damping)
E             0  4,095       16,375     3.99878  LU
E       
E       2023-09-07 13:21:51  __main__                                 Solve MG in 0.0163 s
E       2023-09-07 13:21:51  __main__                                 Solve FMG in 0.00843 s
E       2023-09-07 13:21:51  __main__                                 Solve PCG in 0.0136 s
E       2023-09-07 13:21:51  __main__                                 Solve PGMRES in 0.0169 s
E       2023-09-07 13:21:51  __main__                                 Solve PBICGSTAB in 0.0231 s
E       2023-09-07 13:21:51  __main__                                 Solve FMG-PCG in 0.00834 s
E       2023-09-07 13:21:51  __main__                                 Solve FMG-PGMRES in 0.00847 s
E       2023-09-07 13:21:51  __main__                                 Mass matrix in 0.0108 s
E       2023-09-07 13:21:51  __main__                                 
E       Rate of convergence MG:          0.0575
E       Rate of convergence FMG:         0.00325
E       Rate of convergence PCG:         0.0052
E       Rate of convergence PGMRES:      0.0149
E       Rate of convergence PBICGSTAB:   0.000201
E       Rate of convergence FMG-PCG:     1.26e-07
E       Rate of convergence FMG-PGMRES:  3.74e-08
E       Number of iterations MG:         6
E       Number of iterations FMG:        3
E       Number of iterations PCG:        3
E       Number of iterations PGMRES:     4
E       Number of iterations PBICGSTAB:  2
E       Number of iterations FMG-PCG:    1
E       Number of iterations FMG-PGMRES: 1
E       Residual norm MG:                1.47e-09
E       Residual norm FMG:               1.4e-09
E       Residual norm PCG:               5.72e-09
E       Residual norm PGMRES:            1.99e-09
E       Residual norm PBICGSTAB:         1.64e-09
E       Residual norm FMG-PCG:           5.11e-09
E       Residual norm FMG-PGMRES:        1.52e-09
E       L^2 error:                       3.49e-08
E       H^1_0 error:                     0.000106
E       2023-09-07 13:21:51  __main__                                 
E       timer                          numCalls     minCall    meanCall     maxCall         sum
E       ---------------------------  ----------  ----------  ----------  ----------  ----------
E       setup levels                          1  0.101905    0.101905    0.101905    0.101905
E       Assemble rhs on finest grid           1  0.00691755  0.00691755  0.00691755  0.00691755
E       Setup solver                          1  0.00742096  0.00742096  0.00742096  0.00742096
E       Solve MG                              1  0.0162656   0.0162656   0.0162656   0.0162656
E       Solve FMG                             1  0.00842936  0.00842936  0.00842936  0.00842936
E       Solve PCG                             1  0.0135736   0.0135736   0.0135736   0.0135736
E       Solve PGMRES                          1  0.0169438   0.0169438   0.0169438   0.0169438
E       Solve PBICGSTAB                       1  0.0230533   0.0230533   0.0230533   0.0230533
E       Solve FMG-PCG                         1  0.00834186  0.00834186  0.00834186  0.00834186
E       Solve FMG-PGMRES                      1  0.00846966  0.00846966  0.00846966  0.00846966
E       Mass matrix                           1  0.0108055   0.0108055   0.0108055   0.0108055
E       total                                 1  0.248221    0.248221    0.248221    0.248221
E       2023-09-07 13:21:51  root                                     
E       0: Traceback (most recent call last):
E       0:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runParallelGMG.py", line 307, in <module>
E           d.finish()
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1307, in finish
E           self.saveOutput()
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1190, in saveOutput
E           assert False, 'No match (observed, expected)\n' + str(pformat(diff))
E       0: AssertionError: No match (observed, expected)
E       {'errors': {'L^2 error': (3.494632065164618e-08, 1.0536712127723509e-08)}}
E       
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------
E       
E       
E       cells kept local on rank 0 in repartitioning: 1.0 / target: 1.0
E       L^2 error 3.494632065164618e-08 1.0536712127723509e-08 2.0 1e-12 2.0 None

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError
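Unlike the Helmholtz crashes, this failure is a regression check: the driver ran to completion, but the observed L^2 error (3.49e-08) no longer matches the value stored in the test cache (1.05e-08) within the test's `relTol=3e-2`. A hedged sketch of the comparison logic (the exact formula used by PyNucleus's `saveOutput` is an assumption; only the tolerances `aTol=1e-12` and `relTol=3e-2` are taken from the report):

```python
def matches(observed, expected, aTol=1e-12, relTol=3e-2):
    """Return True if observed agrees with the cached expected value
    within an absolute or relative tolerance (assumed comparison rule)."""
    return abs(observed - expected) <= max(aTol, relTol * abs(expected))

# Values from the failing entry in the report:
observed = 3.494632065164618e-08   # L^2 error computed in this run
expected = 1.0536712127723509e-08  # value stored in the test cache

# Difference ~2.44e-08, allowed slack only ~3.2e-10, so the check fails
# and saveOutput raises the "No match (observed, expected)" assertion.
print(matches(observed, expected))  # -> False
```

Given that the discrepancy is a factor of ~3 in a discretization error, either the P2 interval discretization changed legitimately (and the cache needs regenerating with `--overwriteCache`) or a genuine accuracy regression was introduced; the report alone cannot distinguish the two.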


pytest ► tests.test_drivers_intFracLapl ► testNonlocal[interval-fractional-poly-Neumann-lu-dense]

Failed test found in:
  test-results-3.11.xml
Error:
  runNonlocal_params = ('interval', 'fractional', 'poly-Neumann', 'lu', 'dense')
Raw output
runNonlocal_params = ('interval', 'fractional', 'poly-Neumann', 'lu', 'dense')
extra = []

    @pytest.mark.slow
    def testNonlocal(runNonlocal_params, extra):
        domain, kernel, problem, solver, matrixFormat = runNonlocal_params
        base = getPath()+'/../'
        py = ['runNonlocal.py',
              '--domain', domain,
              '--kernelType', kernel,
              '--problem', problem,
              '--solver', solver,
              '--matrixFormat', matrixFormat]
        # if kernel != 'fractional':
        path = base+'drivers'
        cacheDir = getPath()+'/'
        if problem == 'poly-Neumann' and domain == 'square':
            return pytest.skip('not implemented')
>       runDriver(path, py, cacheDir=cacheDir, extra=extra)

tests/test_drivers_intFracLapl.py:69: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runNonlocal.py', '--domain', 'interval', '--kernelType', 'fractional', '--problem', ...]
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 1, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.01, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:22:10  __main__                                 
E       {'debugAssemblyTimes': False,
E        'disableFileLog': False,
E        'disableHeader': False,
E        'disableTimeStamps': False,
E        'discretizedOrder': False,
E        'displayConfig': True,
E        'displayRanks': False,
E        'domain': 'interval',
E        'element': 'P1',
E        'hdf5Input': '',
E        'hdf5Output': '',
E        'horizon': 0.2,
E        'interaction': 'ball2',
E        'kernelType': 'fractional',
E        'logDependencies': False,
E        'logProperties': '',
E        'matrixFormat': 'dense',
E        'maxiter': 100,
E        'mpiGlobalCommSize': 1,
E        'noRef': 8,
E        'normalized': True,
E        'overwriteCache': False,
E        'phi': 'const(1.)',
E        'plotFolder': '',
E        'plotFormat': 'pdf',
E        'plot_analyticSolution': True,
E        'plot_error': True,
E        'plot_solution': True,
E        'problem': 'poly-Neumann',
E        'quadType': 'auto',
E        'quadTypeBoundary': 'auto',
E        's': 'const(0.4)',
E        'showDependencyGraph': False,
E        'showMemory': False,
E        'showTimers': True,
E        'skipPlots': True,
E        'solverType': 'lu',
E        'target_order': -1.0,
E        'test': True,
E        'testCache': '/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypefractional--problempoly-Neumann--solverlu--matrixFormatdense',
E        'tol': 1e-06,
E        'yamlInput': '',
E        'yamlOutput': ''}
E       2023-09-07 13:22:10  __main__                                 
E       Running:                                              /opt/hostedtoolcache/Python/3.11.5/x64/bin/python3 runNonlocal.py --domain interval --kernelType fractional --problem poly-Neumann --solver lu --matrixFormat dense --test --testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypefractional--problempoly-Neumann--solverlu--matrixFormatdense --skipPlots
E       Open MPI v4.1.2, package: Debian OpenMPI, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021
E       MPI standard supported:                               (3, 1)
E       Vendor:                                               ('Open MPI', (4, 1, 2))
E       Level of thread support:                              multiple
E       Is threaded:                                          True
E       Threads requested:                                    True
E       Thread level requested:                               multiple
E       Hosts:                                                fv-az283-77
E       Communicator size:                                    1
E       OMP_NUM_THREADS:                                      not set
E       numpy:                                                1.25.2
E       scipy:                                                1.11.2
E       mpi4py:                                               3.1.4
E       cython:                                               3.0.2
E       PyNucleus_base,fem,metisCy,multilevelSolver,nl,packageTools:1.1rc0
E       
E       2023-09-07 13:22:10  PyNucleus_multilevelSolver.connectors    Initializing mesh on 'fine' in 0.000205 s
E       2023-09-07 13:22:10  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000173 s
E       2023-09-07 13:22:10  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000149 s
E       2023-09-07 13:22:10  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000151 s
E       2023-09-07 13:22:10  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000208 s
E       2023-09-07 13:22:10  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000251 s
E       2023-09-07 13:22:10  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000317 s
E       2023-09-07 13:22:10  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000401 s
E       2023-09-07 13:22:10  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00071 s
E       2023-09-07 13:22:10  __main__                                 hierarchy - meshes in 0.0161 s
E       2023-09-07 13:22:12  PyNucleus_nl.helpers                     Assemble dense matrix kernel(fractional, 0.4, |x-y|_2 <= 0.2, constantFractionalLaplacianScaling(0.4,0.2 -> 4.139188984383644)), zeroExterior=False in 2.03 s
E       2023-09-07 13:22:12  PyNucleus_multilevelSolver.levels        Assembled matrices in 2.03 s
E       2023-09-07 13:22:12  __main__                                 hierarchy - matrices in 2.03 s
E       2023-09-07 13:22:13  __main__                                 solve discretizedNonlocalProblem in 0.00991 s
E       2023-09-07 13:22:13  root                                     
E       0: Traceback (most recent call last):
E       0:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runNonlocal.py", line 44, in <module>
E           discrProblem.report(results)
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_nl/discretizedProblems.py", line 595, in report
E           group.add('matrix memory size', self.A.getMemorySize())
E                                           ^^^^^^^^^^^^^^^^^^^^
E       0: AttributeError: 'PyNucleus_base.linear_operators.TimeStepperLinearO' object has no attribute 'getMemorySize'
E       
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError
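All three failures below share the root cause shown in the traceback above: `report` calls `self.A.getMemorySize()`, but the wrapped time-stepper operator class does not implement that method. A minimal sketch of one possible defensive pattern, using a `getattr` fallback so a missing method degrades to a missing report entry instead of aborting the run under `mpiexec`. Note the class names `DenseOperator` and `TimeStepperWrapper` and the helper `matrix_memory_size` are hypothetical stand-ins for illustration, not actual PyNucleus APIs.

```python
# Hedged sketch only: DenseOperator, TimeStepperWrapper, and
# matrix_memory_size are hypothetical stand-ins, not PyNucleus classes.

class DenseOperator:
    """Stand-in for an operator that can report its memory footprint."""

    def getMemorySize(self):
        # Bytes for a dense 3073x3073 float64 matrix, matching the log above.
        return 8 * 3073 * 3073


class TimeStepperWrapper:
    """Stand-in for a wrapper operator that lacks getMemorySize()."""


def matrix_memory_size(A):
    # Fall back to None instead of raising AttributeError when the
    # wrapped operator does not expose getMemorySize().
    getter = getattr(A, 'getMemorySize', None)
    return getter() if getter is not None else None


print(matrix_memory_size(DenseOperator()))       # 75546632
print(matrix_memory_size(TimeStepperWrapper()))  # None
```

With such a guard, the report line would be skipped (or record `None`) for wrapper operators, and the driver would exit cleanly instead of triggering `MPI_ABORT`.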

Check failure on line 0 in test-results-3.11.xml


github-actions / Test report (Python 3.11)

pytest ► tests.test_drivers_intFracLapl ► testNonlocal[interval-constant-poly-Neumann-lu-dense]

Failed test found in:
  test-results-3.11.xml
Error:
  runNonlocal_params = ('interval', 'constant', 'poly-Neumann', 'lu', 'dense')
Raw output
runNonlocal_params = ('interval', 'constant', 'poly-Neumann', 'lu', 'dense')
extra = []

    @pytest.mark.slow
    def testNonlocal(runNonlocal_params, extra):
        domain, kernel, problem, solver, matrixFormat = runNonlocal_params
        base = getPath()+'/../'
        py = ['runNonlocal.py',
              '--domain', domain,
              '--kernelType', kernel,
              '--problem', problem,
              '--solver', solver,
              '--matrixFormat', matrixFormat]
        # if kernel != 'fractional':
        path = base+'drivers'
        cacheDir = getPath()+'/'
        if problem == 'poly-Neumann' and domain == 'square':
            return pytest.skip('not implemented')
>       runDriver(path, py, cacheDir=cacheDir, extra=extra)

tests/test_drivers_intFracLapl.py:69: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runNonlocal.py', '--domain', 'interval', '--kernelType', 'constant', '--problem', ...]
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 1, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.01, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:22:22  __main__                                 
E       {'debugAssemblyTimes': False,
E        'disableFileLog': False,
E        'disableHeader': False,
E        'disableTimeStamps': False,
E        'discretizedOrder': False,
E        'displayConfig': True,
E        'displayRanks': False,
E        'domain': 'interval',
E        'element': 'P1',
E        'hdf5Input': '',
E        'hdf5Output': '',
E        'horizon': 0.2,
E        'interaction': 'ball2',
E        'kernelType': 'constant',
E        'logDependencies': False,
E        'logProperties': '',
E        'matrixFormat': 'dense',
E        'maxiter': 100,
E        'mpiGlobalCommSize': 1,
E        'noRef': 8,
E        'normalized': True,
E        'overwriteCache': False,
E        'phi': 'const(1.)',
E        'plotFolder': '',
E        'plotFormat': 'pdf',
E        'plot_analyticSolution': True,
E        'plot_error': True,
E        'plot_solution': True,
E        'problem': 'poly-Neumann',
E        'quadType': 'auto',
E        'quadTypeBoundary': 'auto',
E        's': 'const(0.4)',
E        'showDependencyGraph': False,
E        'showMemory': False,
E        'showTimers': True,
E        'skipPlots': True,
E        'solverType': 'lu',
E        'target_order': -1.0,
E        'test': True,
E        'testCache': '/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypeconstant--problempoly-Neumann--solverlu--matrixFormatdense',
E        'tol': 1e-06,
E        'yamlInput': '',
E        'yamlOutput': ''}
E       2023-09-07 13:22:22  __main__                                 
E       Running:                                              /opt/hostedtoolcache/Python/3.11.5/x64/bin/python3 runNonlocal.py --domain interval --kernelType constant --problem poly-Neumann --solver lu --matrixFormat dense --test --testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypeconstant--problempoly-Neumann--solverlu--matrixFormatdense --skipPlots
E       Open MPI v4.1.2, package: Debian OpenMPI, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021
E       MPI standard supported:                               (3, 1)
E       Vendor:                                               ('Open MPI', (4, 1, 2))
E       Level of thread support:                              multiple
E       Is threaded:                                          True
E       Threads requested:                                    True
E       Thread level requested:                               multiple
E       Hosts:                                                fv-az283-77
E       Communicator size:                                    1
E       OMP_NUM_THREADS:                                      not set
E       numpy:                                                1.25.2
E       scipy:                                                1.11.2
E       mpi4py:                                               3.1.4
E       cython:                                               3.0.2
E       PyNucleus_base,fem,metisCy,multilevelSolver,nl,packageTools:1.1rc0
E       
E       2023-09-07 13:22:22  PyNucleus_multilevelSolver.connectors    Initializing mesh on 'fine' in 0.000106 s
E       2023-09-07 13:22:22  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000365 s
E       2023-09-07 13:22:22  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00022 s
E       2023-09-07 13:22:22  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000161 s
E       2023-09-07 13:22:22  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000176 s
E       2023-09-07 13:22:22  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00021 s
E       2023-09-07 13:22:22  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000323 s
E       2023-09-07 13:22:22  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000413 s
E       2023-09-07 13:22:22  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000662 s
E       2023-09-07 13:22:22  __main__                                 hierarchy - meshes in 0.0173 s
E       2023-09-07 13:22:24  PyNucleus_nl.helpers                     Assemble dense matrix Kernel(indicator, |x-y|_2 <= 0.2, constantIntegrableScaling(0.2 -> 187.49999999999994)), zeroExterior=False in 1.75 s
E       2023-09-07 13:22:24  PyNucleus_multilevelSolver.levels        Assembled matrices in 1.76 s
E       2023-09-07 13:22:24  __main__                                 hierarchy - matrices in 1.76 s
E       2023-09-07 13:22:25  __main__                                 solve discretizedNonlocalProblem in 0.00994 s
E       2023-09-07 13:22:25  root                                     
E       0: Traceback (most recent call last):
E       0:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runNonlocal.py", line 44, in <module>
E           discrProblem.report(results)
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_nl/discretizedProblems.py", line 595, in report
E           group.add('matrix memory size', self.A.getMemorySize())
E                                           ^^^^^^^^^^^^^^^^^^^^
E       0: AttributeError: 'PyNucleus_base.linear_operators.TimeStepperLinearO' object has no attribute 'getMemorySize'
E       
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError

Check failure on line 0 in test-results-3.11.xml


github-actions / Test report (Python 3.11)

pytest ► tests.test_drivers_intFracLapl ► testNonlocal[interval-inverseDistance-poly-Neumann-lu-dense]

Failed test found in:
  test-results-3.11.xml
Error:
  runNonlocal_params = ('interval', 'inverseDistance', 'poly-Neumann', 'lu', 'dense')
Raw output
runNonlocal_params = ('interval', 'inverseDistance', 'poly-Neumann', 'lu', 'dense')
extra = []

    @pytest.mark.slow
    def testNonlocal(runNonlocal_params, extra):
        domain, kernel, problem, solver, matrixFormat = runNonlocal_params
        base = getPath()+'/../'
        py = ['runNonlocal.py',
              '--domain', domain,
              '--kernelType', kernel,
              '--problem', problem,
              '--solver', solver,
              '--matrixFormat', matrixFormat]
        # if kernel != 'fractional':
        path = base+'drivers'
        cacheDir = getPath()+'/'
        if problem == 'poly-Neumann' and domain == 'square':
            return pytest.skip('not implemented')
>       runDriver(path, py, cacheDir=cacheDir, extra=extra)

tests/test_drivers_intFracLapl.py:69: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runNonlocal.py', '--domain', 'interval', '--kernelType', 'inverseDistance', '--problem', ...]
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 1, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.01, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:22:33  __main__                                 
E       {'debugAssemblyTimes': False,
E        'disableFileLog': False,
E        'disableHeader': False,
E        'disableTimeStamps': False,
E        'discretizedOrder': False,
E        'displayConfig': True,
E        'displayRanks': False,
E        'domain': 'interval',
E        'element': 'P1',
E        'hdf5Input': '',
E        'hdf5Output': '',
E        'horizon': 0.2,
E        'interaction': 'ball2',
E        'kernelType': 'inverseDistance',
E        'logDependencies': False,
E        'logProperties': '',
E        'matrixFormat': 'dense',
E        'maxiter': 100,
E        'mpiGlobalCommSize': 1,
E        'noRef': 8,
E        'normalized': True,
E        'overwriteCache': False,
E        'phi': 'const(1.)',
E        'plotFolder': '',
E        'plotFormat': 'pdf',
E        'plot_analyticSolution': True,
E        'plot_error': True,
E        'plot_solution': True,
E        'problem': 'poly-Neumann',
E        'quadType': 'auto',
E        'quadTypeBoundary': 'auto',
E        's': 'const(0.4)',
E        'showDependencyGraph': False,
E        'showMemory': False,
E        'showTimers': True,
E        'skipPlots': True,
E        'solverType': 'lu',
E        'target_order': -1.0,
E        'test': True,
E        'testCache': '/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypeinverseDistance--problempoly-Neumann--solverlu--matrixFormatdense',
E        'tol': 1e-06,
E        'yamlInput': '',
E        'yamlOutput': ''}
E       2023-09-07 13:22:33  __main__                                 
E       Running:                                              /opt/hostedtoolcache/Python/3.11.5/x64/bin/python3 runNonlocal.py --domain interval --kernelType inverseDistance --problem poly-Neumann --solver lu --matrixFormat dense --test --testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypeinverseDistance--problempoly-Neumann--solverlu--matrixFormatdense --skipPlots
E       Open MPI v4.1.2, package: Debian OpenMPI, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021
E       MPI standard supported:                               (3, 1)
E       Vendor:                                               ('Open MPI', (4, 1, 2))
E       Level of thread support:                              multiple
E       Is threaded:                                          True
E       Threads requested:                                    True
E       Thread level requested:                               multiple
E       Hosts:                                                fv-az283-77
E       Communicator size:                                    1
E       OMP_NUM_THREADS:                                      not set
E       numpy:                                                1.25.2
E       scipy:                                                1.11.2
E       mpi4py:                                               3.1.4
E       cython:                                               3.0.2
E       PyNucleus_base,fem,metisCy,multilevelSolver,nl,packageTools:1.1rc0
E       
E       2023-09-07 13:22:33  PyNucleus_multilevelSolver.connectors    Initializing mesh on 'fine' in 0.000131 s
E       2023-09-07 13:22:33  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000258 s
E       2023-09-07 13:22:33  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00015 s
E       2023-09-07 13:22:33  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000154 s
E       2023-09-07 13:22:33  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00019 s
E       2023-09-07 13:22:33  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000185 s
E       2023-09-07 13:22:33  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00041 s
E       2023-09-07 13:22:33  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000366 s
E       2023-09-07 13:22:33  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000587 s
E       2023-09-07 13:22:33  __main__                                 hierarchy - meshes in 0.0151 s
E       2023-09-07 13:22:35  PyNucleus_nl.helpers                     Assemble dense matrix Kernel(peridynamic, |x-y|_2 <= 0.2, constantIntegrableScaling(0.2 -> 24.999999999999996)), zeroExterior=False in 1.78 s
E       2023-09-07 13:22:35  PyNucleus_multilevelSolver.levels        Assembled matrices in 1.79 s
E       2023-09-07 13:22:35  __main__                                 hierarchy - matrices in 1.79 s
E       2023-09-07 13:22:36  __main__                                 solve discretizedNonlocalProblem in 0.0107 s
E       2023-09-07 13:22:36  root                                     
E       0: Traceback (most recent call last):
E       0:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runNonlocal.py", line 44, in <module>
E           discrProblem.report(results)
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_nl/discretizedProblems.py", line 595, in report
E           group.add('matrix memory size', self.A.getMemorySize())
E                                           ^^^^^^^^^^^^^^^^^^^^
E       0: AttributeError: 'PyNucleus_base.linear_operators.TimeStepperLinearO' object has no attribute 'getMemorySize'
E       
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError

Check failure on line 0 in test-results-3.11.xml


github-actions / Test report (Python 3.11)

pytest ► tests.test_drivers_intFracLapl ► testNonlocal[interval-fractional-poly-Neumann-lu-H2]

Failed test found in:
  test-results-3.11.xml
Error:
  runNonlocal_params = ('interval', 'fractional', 'poly-Neumann', 'lu', 'H2')
Raw output
runNonlocal_params = ('interval', 'fractional', 'poly-Neumann', 'lu', 'H2')
extra = []

    @pytest.mark.slow
    def testNonlocal(runNonlocal_params, extra):
        domain, kernel, problem, solver, matrixFormat = runNonlocal_params
        base = getPath()+'/../'
        py = ['runNonlocal.py',
              '--domain', domain,
              '--kernelType', kernel,
              '--problem', problem,
              '--solver', solver,
              '--matrixFormat', matrixFormat]
        # if kernel != 'fractional':
        path = base+'drivers'
        cacheDir = getPath()+'/'
        if problem == 'poly-Neumann' and domain == 'square':
            return pytest.skip('not implemented')
>       runDriver(path, py, cacheDir=cacheDir, extra=extra)

tests/test_drivers_intFracLapl.py:69: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runNonlocal.py', '--domain', 'interval', '--kernelType', 'fractional', '--problem', ...]
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 1, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.01, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:24:47  __main__                                 
E       {'debugAssemblyTimes': False,
E        'disableFileLog': False,
E        'disableHeader': False,
E        'disableTimeStamps': False,
E        'discretizedOrder': False,
E        'displayConfig': True,
E        'displayRanks': False,
E        'domain': 'interval',
E        'element': 'P1',
E        'hdf5Input': '',
E        'hdf5Output': '',
E        'horizon': 0.2,
E        'interaction': 'ball2',
E        'kernelType': 'fractional',
E        'logDependencies': False,
E        'logProperties': '',
E        'matrixFormat': 'H2',
E        'maxiter': 100,
E        'mpiGlobalCommSize': 1,
E        'noRef': 8,
E        'normalized': True,
E        'overwriteCache': False,
E        'phi': 'const(1.)',
E        'plotFolder': '',
E        'plotFormat': 'pdf',
E        'plot_analyticSolution': True,
E        'plot_error': True,
E        'plot_solution': True,
E        'problem': 'poly-Neumann',
E        'quadType': 'auto',
E        'quadTypeBoundary': 'auto',
E        's': 'const(0.4)',
E        'showDependencyGraph': False,
E        'showMemory': False,
E        'showTimers': True,
E        'skipPlots': True,
E        'solverType': 'lu',
E        'target_order': -1.0,
E        'test': True,
E        'testCache': '/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypefractional--problempoly-Neumann--solverlu--matrixFormatH2',
E        'tol': 1e-06,
E        'yamlInput': '',
E        'yamlOutput': ''}
E       2023-09-07 13:24:47  __main__                                 
E       Running:                                              /opt/hostedtoolcache/Python/3.11.5/x64/bin/python3 runNonlocal.py --domain interval --kernelType fractional --problem poly-Neumann --solver lu --matrixFormat H2 --test --testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypefractional--problempoly-Neumann--solverlu--matrixFormatH2 --skipPlots
E       Open MPI v4.1.2, package: Debian OpenMPI, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021
E       MPI standard supported:                               (3, 1)
E       Vendor:                                               ('Open MPI', (4, 1, 2))
E       Level of thread support:                              multiple
E       Is threaded:                                          True
E       Threads requested:                                    True
E       Thread level requested:                               multiple
E       Hosts:                                                fv-az283-77
E       Communicator size:                                    1
E       OMP_NUM_THREADS:                                      not set
E       numpy:                                                1.25.2
E       scipy:                                                1.11.2
E       mpi4py:                                               3.1.4
E       cython:                                               3.0.2
E       PyNucleus_base,fem,metisCy,multilevelSolver,nl,packageTools:1.1rc0
E       
E       2023-09-07 13:24:47  PyNucleus_multilevelSolver.connectors    Initializing mesh on 'fine' in 0.000151 s
E       2023-09-07 13:24:47  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000167 s
E       2023-09-07 13:24:47  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000184 s
E       2023-09-07 13:24:47  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000146 s
E       2023-09-07 13:24:47  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000163 s
E       2023-09-07 13:24:47  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00019 s
E       2023-09-07 13:24:47  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000274 s
E       2023-09-07 13:24:47  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000537 s
E       2023-09-07 13:24:47  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000605 s
E       2023-09-07 13:24:47  __main__                                 hierarchy - meshes in 0.0162 s
E       2023-09-07 13:24:49  PyNucleus_nl.nonlocalLaplacian           interpolation_order: 11, maxLevels: 200, minClusterSize: 5, minMixedClusterSize: 5, minFarFieldBlockSize: 121, eta: 1.0
E       2023-09-07 13:24:49  PyNucleus_nl.nonlocalLaplacian           Anear: <3073x3073 SSS_LinearOperator with 135577 stored elements>
E       2023-09-07 13:24:49  PyNucleus_nl.nonlocalLaplacian           <3073x3073 H2Matrix 0.014357 fill from near field, 0.016675 fill from tree, 0.037825 fill from clusters, 1023 tree nodes, 2952 far-field cluster pairs>
E       2023-09-07 13:24:49  PyNucleus_nl.helpers                     Assemble H2 matrix kernel(fractional, 0.4, |x-y|_2 <= 0.2, constantFractionalLaplacianScaling(0.4,0.2 -> 4.139188984383644)), zeroExterior=False in 1.86 s
E       2023-09-07 13:24:49  PyNucleus_multilevelSolver.levels        Assembled matrices in 1.87 s
E       2023-09-07 13:24:49  __main__                                 hierarchy - matrices in 1.87 s
E       2023-09-07 13:24:50  __main__                                 solve discretizedNonlocalProblem in 0.0101 s
E       2023-09-07 13:24:50  root                                     
E       0: Traceback (most recent call last):
E       0:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runNonlocal.py", line 44, in <module>
E           discrProblem.report(results)
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_nl/discretizedProblems.py", line 595, in report
E           group.add('matrix memory size', self.A.getMemorySize())
E                                           ^^^^^^^^^^^^^^^^^^^^
E       0: AttributeError: 'PyNucleus_base.linear_operators.TimeStepperLinearO' object has no attribute 'getMemorySize'
E       
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError


pytest ► tests.test_drivers_intFracLapl ► testNonlocal[interval-constant-poly-Neumann-lu-H2]

Failed test found in:
  test-results-3.11.xml
Raw output
runNonlocal_params = ('interval', 'constant', 'poly-Neumann', 'lu', 'H2')
extra = []

    @pytest.mark.slow
    def testNonlocal(runNonlocal_params, extra):
        domain, kernel, problem, solver, matrixFormat = runNonlocal_params
        base = getPath()+'/../'
        py = ['runNonlocal.py',
              '--domain', domain,
              '--kernelType', kernel,
              '--problem', problem,
              '--solver', solver,
              '--matrixFormat', matrixFormat]
        # if kernel != 'fractional':
        path = base+'drivers'
        cacheDir = getPath()+'/'
        if problem == 'poly-Neumann' and domain == 'square':
            return pytest.skip('not implemented')
>       runDriver(path, py, cacheDir=cacheDir, extra=extra)

tests/test_drivers_intFracLapl.py:69: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runNonlocal.py', '--domain', 'interval', '--kernelType', 'constant', '--problem', ...]
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 1, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.01, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:24:56  __main__                                 
E       {'debugAssemblyTimes': False,
E        'disableFileLog': False,
E        'disableHeader': False,
E        'disableTimeStamps': False,
E        'discretizedOrder': False,
E        'displayConfig': True,
E        'displayRanks': False,
E        'domain': 'interval',
E        'element': 'P1',
E        'hdf5Input': '',
E        'hdf5Output': '',
E        'horizon': 0.2,
E        'interaction': 'ball2',
E        'kernelType': 'constant',
E        'logDependencies': False,
E        'logProperties': '',
E        'matrixFormat': 'H2',
E        'maxiter': 100,
E        'mpiGlobalCommSize': 1,
E        'noRef': 8,
E        'normalized': True,
E        'overwriteCache': False,
E        'phi': 'const(1.)',
E        'plotFolder': '',
E        'plotFormat': 'pdf',
E        'plot_analyticSolution': True,
E        'plot_error': True,
E        'plot_solution': True,
E        'problem': 'poly-Neumann',
E        'quadType': 'auto',
E        'quadTypeBoundary': 'auto',
E        's': 'const(0.4)',
E        'showDependencyGraph': False,
E        'showMemory': False,
E        'showTimers': True,
E        'skipPlots': True,
E        'solverType': 'lu',
E        'target_order': -1.0,
E        'test': True,
E        'testCache': '/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypeconstant--problempoly-Neumann--solverlu--matrixFormatH2',
E        'tol': 1e-06,
E        'yamlInput': '',
E        'yamlOutput': ''}
E       2023-09-07 13:24:56  __main__                                 
E       Running:                                              /opt/hostedtoolcache/Python/3.11.5/x64/bin/python3 runNonlocal.py --domain interval --kernelType constant --problem poly-Neumann --solver lu --matrixFormat H2 --test --testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypeconstant--problempoly-Neumann--solverlu--matrixFormatH2 --skipPlots
E       Open MPI v4.1.2, package: Debian OpenMPI, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021
E       MPI standard supported:                               (3, 1)
E       Vendor:                                               ('Open MPI', (4, 1, 2))
E       Level of thread support:                              multiple
E       Is threaded:                                          True
E       Threads requested:                                    True
E       Thread level requested:                               multiple
E       Hosts:                                                fv-az283-77
E       Communicator size:                                    1
E       OMP_NUM_THREADS:                                      not set
E       numpy:                                                1.25.2
E       scipy:                                                1.11.2
E       mpi4py:                                               3.1.4
E       cython:                                               3.0.2
E       PyNucleus_base,fem,metisCy,multilevelSolver,nl,packageTools:1.1rc0
E       
E       2023-09-07 13:24:56  PyNucleus_multilevelSolver.connectors    Initializing mesh on 'fine' in 0.000111 s
E       2023-09-07 13:24:56  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00019 s
E       2023-09-07 13:24:56  PyNucleus_multilevelSolver.levels        Refined mesh in 0.00014 s
E       2023-09-07 13:24:56  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000174 s
E       2023-09-07 13:24:56  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000214 s
E       2023-09-07 13:24:56  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000205 s
E       2023-09-07 13:24:56  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000249 s
E       2023-09-07 13:24:56  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000472 s
E       2023-09-07 13:24:56  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000678 s
E       2023-09-07 13:24:56  __main__                                 hierarchy - meshes in 0.0158 s
E       2023-09-07 13:24:57  PyNucleus_nl.nonlocalLaplacian           interpolation_order: 12, maxLevels: 200, minClusterSize: 6, minMixedClusterSize: 6, minFarFieldBlockSize: 144, eta: 1.0
E       2023-09-07 13:24:57  PyNucleus_nl.nonlocalLaplacian           Anear: <3073x3073 SSS_LinearOperator with 135577 stored elements>
E       2023-09-07 13:24:57  PyNucleus_nl.nonlocalLaplacian           <3073x3073 H2Matrix 0.014357 fill from near field, 0.019489 fill from tree, 0.045015 fill from clusters, 1023 tree nodes, 2952 far-field cluster pairs>
E       2023-09-07 13:24:57  PyNucleus_nl.helpers                     Assemble H2 matrix Kernel(indicator, |x-y|_2 <= 0.2, constantIntegrableScaling(0.2 -> 187.49999999999994)), zeroExterior=False in 0.74 s
E       2023-09-07 13:24:57  PyNucleus_multilevelSolver.levels        Assembled matrices in 0.75 s
E       2023-09-07 13:24:57  __main__                                 hierarchy - matrices in 0.75 s
E       2023-09-07 13:24:58  __main__                                 solve discretizedNonlocalProblem in 0.00939 s
E       2023-09-07 13:24:58  root                                     
E       0: Traceback (most recent call last):
E       0:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runNonlocal.py", line 44, in <module>
E           discrProblem.report(results)
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_nl/discretizedProblems.py", line 595, in report
E           group.add('matrix memory size', self.A.getMemorySize())
E                                           ^^^^^^^^^^^^^^^^^^^^
E       0: AttributeError: 'PyNucleus_base.linear_operators.TimeStepperLinearO' object has no attribute 'getMemorySize'
E       
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError


pytest ► tests.test_drivers_intFracLapl ► testNonlocal[interval-inverseDistance-poly-Dirichlet-lu-H2]

Failed test found in:
  test-results-3.11.xml
Raw output
runNonlocal_params = ('interval', 'inverseDistance', 'poly-Dirichlet', 'lu', 'H2')
extra = []

    @pytest.mark.slow
    def testNonlocal(runNonlocal_params, extra):
        domain, kernel, problem, solver, matrixFormat = runNonlocal_params
        base = getPath()+'/../'
        py = ['runNonlocal.py',
              '--domain', domain,
              '--kernelType', kernel,
              '--problem', problem,
              '--solver', solver,
              '--matrixFormat', matrixFormat]
        # if kernel != 'fractional':
        path = base+'drivers'
        cacheDir = getPath()+'/'
        if problem == 'poly-Neumann' and domain == 'square':
            return pytest.skip('not implemented')
>       runDriver(path, py, cacheDir=cacheDir, extra=extra)

tests/test_drivers_intFracLapl.py:69: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = '/home/runner/work/PyNucleus/PyNucleus/tests/../drivers'
py = ['runNonlocal.py', '--domain', 'interval', '--kernelType', 'inverseDistance', '--problem', ...]
python = '/opt/hostedtoolcache/Python/3.11.5/x64/bin/python3', timeout = 600
ranks = 1, cacheDir = '/home/runner/work/PyNucleus/PyNucleus/tests/'
overwriteCache = False, aTol = 1e-12, relTol = 0.01, extra = None

    def runDriver(path, py, python=None, timeout=600, ranks=None, cacheDir='',
                  overwriteCache=False,
                  aTol=1e-12, relTol=1e-2, extra=None):
        from subprocess import Popen, PIPE, TimeoutExpired
        import logging
        import os
        from pathlib import Path
        logger = logging.getLogger('__main__')
        if not isinstance(py, (list, tuple)):
            py = [py]
        autotesterOutput = Path('/home/caglusa/autotester/html')
        if autotesterOutput.exists():
            plotDir = autotesterOutput/('test-plots/'+''.join(py)+'/')
        else:
            extra = None
        if cacheDir != '':
            cache = cacheDir+'/cache_' + ''.join(py)
            runOutput = cacheDir+'/run_' + ''.join(py)
            if ranks is not None:
                cache += str(ranks)
                runOutput += str(ranks)
            py += ['--test', '--testCache={}'.format(cache)]
            if overwriteCache:
                py += ['--overwriteCache']
        else:
            py += ['--test']
        if extra is not None:
            plotDir.mkdir(exist_ok=True, parents=True)
            py += ['--plotFolder={}'.format(plotDir), '--plotFormat=png']
        else:
            py += ['--skipPlots']
        assert (Path(path)/py[0]).exists(), 'Driver \"{}\" does not exist'.format(Path(path)/py[0])
        if ranks is None:
            ranks = 1
        if python is None:
            import sys
            python = sys.executable
        cmd = [python] + py
        if 'MPIEXEC_FLAGS' in os.environ:
            mpi_flags = str(os.environ['MPIEXEC_FLAGS'])
        else:
            mpi_flags = '--bind-to none'
        cmd = ['mpiexec'] + mpi_flags.split(' ') + ['-n', str(ranks)]+cmd
        logger.info('Launching "{}" from "{}"'.format(' '.join(cmd), path))
        my_env = {}
        for key in os.environ:
            if key.find('OMPI') == -1:
                my_env[key] = os.environ[key]
        proc = Popen(cmd, cwd=path,
                     stdout=PIPE, stderr=PIPE,
                     universal_newlines=True,
                     env=my_env)
        try:
            stdout, stderr = proc.communicate(timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            raise
        if len(stdout) > 0:
            logger.info(stdout)
        if len(stderr) > 0:
            logger.error(stderr)
>       assert proc.returncode == 0, stderr+'\n\n'+stdout
E       AssertionError: 2023-09-07 13:25:00  __main__                                 
E       {'debugAssemblyTimes': False,
E        'disableFileLog': False,
E        'disableHeader': False,
E        'disableTimeStamps': False,
E        'discretizedOrder': False,
E        'displayConfig': True,
E        'displayRanks': False,
E        'domain': 'interval',
E        'element': 'P1',
E        'hdf5Input': '',
E        'hdf5Output': '',
E        'horizon': 0.2,
E        'interaction': 'ball2',
E        'kernelType': 'inverseDistance',
E        'logDependencies': False,
E        'logProperties': '',
E        'matrixFormat': 'H2',
E        'maxiter': 100,
E        'mpiGlobalCommSize': 1,
E        'noRef': 8,
E        'normalized': True,
E        'overwriteCache': False,
E        'phi': 'const(1.)',
E        'plotFolder': '',
E        'plotFormat': 'pdf',
E        'plot_analyticSolution': True,
E        'plot_error': True,
E        'plot_solution': True,
E        'problem': 'poly-Dirichlet',
E        'quadType': 'auto',
E        'quadTypeBoundary': 'auto',
E        's': 'const(0.4)',
E        'showDependencyGraph': False,
E        'showMemory': False,
E        'showTimers': True,
E        'skipPlots': True,
E        'solverType': 'lu',
E        'target_order': -1.0,
E        'test': True,
E        'testCache': '/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypeinverseDistance--problempoly-Dirichlet--solverlu--matrixFormatH2',
E        'tol': 1e-06,
E        'yamlInput': '',
E        'yamlOutput': ''}
E       2023-09-07 13:25:00  __main__                                 
E       Running:                                              /opt/hostedtoolcache/Python/3.11.5/x64/bin/python3 runNonlocal.py --domain interval --kernelType inverseDistance --problem poly-Dirichlet --solver lu --matrixFormat H2 --test --testCache=/home/runner/work/PyNucleus/PyNucleus/tests//cache_runNonlocal.py--domaininterval--kernelTypeinverseDistance--problempoly-Dirichlet--solverlu--matrixFormatH2 --skipPlots
E       Open MPI v4.1.2, package: Debian OpenMPI, ident: 4.1.2, repo rev: v4.1.2, Nov 24, 2021
E       MPI standard supported:                               (3, 1)
E       Vendor:                                               ('Open MPI', (4, 1, 2))
E       Level of thread support:                              multiple
E       Is threaded:                                          True
E       Threads requested:                                    True
E       Thread level requested:                               multiple
E       Hosts:                                                fv-az283-77
E       Communicator size:                                    1
E       OMP_NUM_THREADS:                                      not set
E       numpy:                                                1.25.2
E       scipy:                                                1.11.2
E       mpi4py:                                               3.1.4
E       cython:                                               3.0.2
E       PyNucleus_base,fem,metisCy,multilevelSolver,nl,packageTools:1.1rc0
E       
E       2023-09-07 13:25:00  PyNucleus_multilevelSolver.connectors    Initializing mesh on 'fine' in 0.000102 s
E       2023-09-07 13:25:00  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000253 s
E       2023-09-07 13:25:00  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000155 s
E       2023-09-07 13:25:00  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000144 s
E       2023-09-07 13:25:00  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000148 s
E       2023-09-07 13:25:00  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000219 s
E       2023-09-07 13:25:00  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000255 s
E       2023-09-07 13:25:00  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000383 s
E       2023-09-07 13:25:00  PyNucleus_multilevelSolver.levels        Refined mesh in 0.000936 s
E       2023-09-07 13:25:00  __main__                                 hierarchy - meshes in 0.00926 s
E       2023-09-07 13:25:01  PyNucleus_nl.nonlocalLaplacian           interpolation_order: 12, maxLevels: 200, minClusterSize: 6, minMixedClusterSize: 6, minFarFieldBlockSize: 144, eta: 1.0
E       2023-09-07 13:25:01  PyNucleus_nl.nonlocalLaplacian           Anear: <3073x3073 SSS_LinearOperator with 135577 stored elements>
E       2023-09-07 13:25:01  PyNucleus_nl.nonlocalLaplacian           <3073x3073 H2Matrix 0.014357 fill from near field, 0.019489 fill from tree, 0.045015 fill from clusters, 1023 tree nodes, 2952 far-field cluster pairs>
E       2023-09-07 13:25:01  PyNucleus_nl.helpers                     Assemble H2 matrix Kernel(peridynamic, |x-y|_2 <= 0.2, constantIntegrableScaling(0.2 -> 24.999999999999996)), zeroExterior=False in 0.75 s
E       2023-09-07 13:25:02  PyNucleus_nl.nonlocalLaplacian           interpolation_order: 12, maxLevels: 200, minClusterSize: 6, minMixedClusterSize: 6, minFarFieldBlockSize: 144, eta: 1.0
E       2023-09-07 13:25:02  PyNucleus_nl.nonlocalLaplacian           Anear: <2559x2559 SSS_LinearOperator with 208890 stored elements>
E       2023-09-07 13:25:03  PyNucleus_nl.nonlocalLaplacian           <2559x2559 H2Matrix 0.031899 fill from near field, 0.015904 fill from tree, 0.024145 fill from clusters, 511 tree nodes, 1098 far-field cluster pairs>
E       2023-09-07 13:25:03  PyNucleus_nl.helpers                     Assemble H2 matrix Kernel(peridynamic, |x-y|_2 <= 0.2, constantIntegrableScaling(0.2 -> 24.999999999999996)), zeroExterior=False in 1.4 s
E       2023-09-07 13:25:03  PyNucleus_multilevelSolver.levels        Assembled matrices in 1.41 s
E       2023-09-07 13:25:03  __main__                                 hierarchy - matrices in 1.41 s
E       2023-09-07 13:25:03  __main__                                 solve discretizedNonlocalProblem in 0.00648 s
E       2023-09-07 13:25:03  __main__                                 
E       h:                                0.000781
E       hmin:                             0.000781
E       mesh quality:                     1.0
E       DoFMap:                           P1 DoFMap with 3073 DoFs and 0 boundary DoFs.
E       Interior DoFMap:                  P1 DoFMap with 2559 DoFs and 514 boundary DoFs.
E       Dirichlet DoFMap:                 P1 DoFMap with 514 DoFs and 2559 boundary DoFs.
E       matrix memory size:               4,604,764
E       solver:                           lu
E       L2 error interpolated:            3.65e-08
E       relative interpolated L2 error:   3.49e-08
E       Linf error interpolated:          3.89e-08
E       relative interpolated Linf error: 3.89e-08
E       2023-09-07 13:25:03  __main__                                 
E       timer                               numCalls     minCall    meanCall     maxCall         sum
E       --------------------------------  ----------  ----------  ----------  ----------  ----------
E       hierarchy - meshes                         1  0.00925726  0.00925726  0.00925726  0.00925726
E       hierarchy - matrices                       1  1.40726     1.40726     1.40726     1.40726
E       solve discretizedNonlocalProblem           1  0.00648034  0.00648034  0.00648034  0.00648034
E       total                                      1  2.79349     2.79349     2.79349     2.79349
E       2023-09-07 13:25:03  root                                     
E       0: Traceback (most recent call last):
E       0:   File "/home/runner/work/PyNucleus/PyNucleus/drivers/runNonlocal.py", line 65, in <module>
E           d.finish()
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1307, in finish
E           self.saveOutput()
E       0:   File "/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py", line 1190, in saveOutput
E           assert False, 'No match (observed, expected)\n' + str(pformat(diff))
E       0: AssertionError: No match (observed, expected)
E       {'errors': {'L2 error interpolated': (3.645334428710421e-08,
E                                             2.0677307416367313e-07),
E                   'Linf error interpolated': (3.885486876686883e-08,
E                                               2.2039557501241092e-07),
E                   'relative interpolated L2 error': (3.4894724904771207e-08,
E                                                      1.9793217005902312e-07),
E                   'relative interpolated Linf error': (3.885486876686883e-08,
E                                                        2.2039557501241092e-07)}}
E       
E       --------------------------------------------------------------------------
E       MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
E       with errorcode 1234.
E       
E       NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
E       You may or may not see output from other processes, depending on
E       exactly when Open MPI kills them.
E       --------------------------------------------------------------------------
E       
E       
E       L2 error interpolated 3.645334428710421e-08 2.0677307416367313e-07 0.03 1e-08 0.03 1e-08
E       relative interpolated L2 error 3.4894724904771207e-08 1.9793217005902312e-07 0.03 1e-08 0.03 1e-08
E       Linf error interpolated 3.885486876686883e-08 2.2039557501241092e-07 0.03 1e-08 0.03 1e-08
E       relative interpolated Linf error 3.885486876686883e-08 2.2039557501241092e-07 0.03 1e-08 0.03 1e-08

/opt/hostedtoolcache/Python/3.11.5/x64/lib/python3.11/site-packages/PyNucleus_base/utilsFem.py:1412: AssertionError