On 2016-11-07, Victor committed code modifications to address three trouble tickets: 664250f

In his commit note, he listed an additional fourth item: "4. Simplified code in refitL()". That change, in Sti/StiKalmanTrack.cxx::refitL(), did a little more than simplify: it modified the behavior, though I cannot tell from the comment whether that was intentional. The new behavior appears to allow the re-fit to continue along a track when invalid track nodes are encountered (the invalid nodes are, I believe, excluded from the re-fit), for whatever reason a node may be considered invalid. Previously, the entire remainder of the track was ignored once an invalid node was encountered; a rough sketch of the difference is shown after the log excerpt below.

The modification introduced a potential crash for the case where the first node fails the validity checks but the second node, which is not checked to the same degree, also contains invalid content. This shows up in logs as:
```
root4star: .sl73_gcc485/OBJ/StRoot/Sti/StiTrackNodeHelper.cxx:670: int StiTrackNodeHelper::save(): Assertion 'fabs(mPredPars.hz()-mTargetHz)<=1e-10' failed.
```

(a test of the predicted parameters from the previous node versus the current node)
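For illustration, here is a minimal sketch of the behavioral difference. This is not the actual refitL() code: the loop structure and the names `nextNode()`, `isValid()`, and `refitNode()` are hypothetical stand-ins for what StiKalmanTrack.cxx actually does.

```cpp
// Hypothetical simplification of a Kalman re-fit loop over track nodes.
// The names nextNode, isValid, and refitNode are illustrative only.
for (StiKalmanTrackNode* node = firstNode; node; node = nextNode(node)) {
    if (!node->isValid()) {
        // Pre-664250f behavior: abandon the remainder of the track
        // at the first invalid node.
        //   break;

        // Post-664250f behavior: skip the invalid node and keep going.
        // If the *first* node is skipped, a later node can reach
        // StiTrackNodeHelper::save() with inconsistent predicted
        // parameters, tripping the hz() assertion quoted above.
        continue;
    }
    refitNode(node);  // propagate and update the fit through this node
}
```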
The crash happens for only a tiny fraction of tracks, but a single failure among all the tracks in a single DAQ file is enough to crash a production job and lose all of its data. This seems particularly common in pp500 data, where there are more invalid nodes due to large TPC distortion corrections (though whether all of those nodes should be invalid is a separate question, not to be answered here), but I have seen this crash in various other datasets from time to time over recent years too. A very rough estimate is that it may cost us ~2% of all our Run 22 pp500 data.
Two candidate fixes were considered. I ran a track-by-track comparison with each on ~200k tracks, and the impacted tracks are very few: ~10 tracks under solution 1, and 1 track under solution 2 (that one track changing by only a single hit). Given that solution 2 appears to have essentially no impact beyond allowing jobs to run that would otherwise crash, this PR implements that solution.