next.release #101
Comments
Anders, I have some Matlab/Scilab code for several things discussed here:
I would welcome a conda package. Actually, we already build it from the PyPI package for our PTB-internal conda repo.
Thanks for the comments.
If you can, please test the conda package and report any issues:
Tested installation of the conda package; it works. No issues found so far, thanks!
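For reference, the kind of quick smoke test meant here can be as simple as importing the package and running one deviation. A minimal sketch (the white-noise test data and parameter choices are just an illustration, not the actual test used):

```python
import numpy as np
import allantools

# simulated white phase noise, sampled at 1 Hz
phase = np.random.randn(10000)

# overlapping Allan deviation at octave-spaced averaging times
taus, adevs, errors, ns = allantools.oadev(
    phase, rate=1.0, data_type="phase", taus="octave")

print(taus)
print(adevs)
```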
Hi @aewallin! Over the past couple of weeks I have been working on an updated version of allantools. I don't really use GitHub anymore, so it's unofficially forked on GitLab: https://gitlab.com/amv213/allantoolkit

As far as I have tested, the code exactly replicates Stable32 results (v1.61). The only difference is in the confidence intervals: for some reason the chi-squared confidence intervals calculated by Stable32 are slightly different from what I get with SciPy's chi-squared function, even though the equivalent degrees of freedom I get are the same. Don't hesitate to let me know what you think!

Changelog:
Next steps (?):
Merry Christmas to all!
Wow, that seems like a lot of work - good job! I would encourage you to submit pull requests in smaller pieces, so that I or others can review code/tests/results in much more manageable chunks. I don't see a lot of discussion on the low-level API in the issues or on the mailing list; I suggest keeping the low-level API and perhaps adding a higher-level one, if needed. How large are the differences in confidence intervals compared to Stable32? If the differences are in the 2nd or later digit, this could be because of an approximating function for the (inverse) cumulative chi-squared distribution. I can try to dig out the approximation used in Stable32 - if you are interested in exploring this further?
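For reference, this is how a two-sided chi-squared confidence interval typically follows from a deviation estimate and its equivalent degrees of freedom (edf). A minimal sketch using SciPy's exact inverse CDF (the function name and default confidence level are illustrative, not allantools API):

```python
import scipy.stats as st

def chi2_confidence_interval(dev, edf, ci=0.683):
    """Two-sided confidence interval for a deviation estimate `dev`
    with `edf` equivalent degrees of freedom, using SciPy's exact
    inverse chi-squared CDF (ppf)."""
    alpha = 1.0 - ci
    chi2_hi = st.chi2.ppf(1.0 - alpha / 2.0, edf)
    chi2_lo = st.chi2.ppf(alpha / 2.0, edf)
    # the variance estimate scales as edf / chi2,
    # so the deviation scales as the square root of that factor
    dev_lo = dev * (edf / chi2_hi) ** 0.5
    dev_hi = dev * (edf / chi2_lo) ** 0.5
    return dev_lo, dev_hi
```

Any systematic difference between an exact ppf and an approximation to it propagates directly into the interval endpoints through the square-root factor above.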
Perfect, thanks! Yes, the confidence interval differences I'm seeing are generally in the 3rd decimal digit for most deviation types. If you have time to find the approximation that Stable32 uses, that would be amazing! I'm more than happy to then try to see if we can match results completely. The low-level API change was just to handle the increased number of dev outputs, because for every stability run I am now also returning averaging factors, identified noise type, and lower and upper confidence intervals. Now you could just do

Of course, this is all quite flexible. Once the code is all ready I will certainly get in touch again to get help on how best to go about integrating the changes, and to see which changes you would like and which are not necessary :)
@amv213: my current understanding of how Stable32 approximates the inverse chi-squared cumulative distribution, and how this leads to differences in computed confidence intervals compared to allantools, is now posted on my blog: http://www.anderswallin.net/2020/12/fun-with-chi-squared/ It probably makes sense to split the confidence-interval discussion into a separate issue, if others feel that comparing confidence intervals with Stable32 is useful now and in the future for the allantools codebase.
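To get a feel for how a closed-form approximation of the inverse chi-squared CDF can move interval endpoints at the level of the third digit, one can compare SciPy's exact ppf against, for example, the Wilson-Hilferty approximation. Whether Stable32 uses this particular formula is not claimed here; see the blog post above for the actual analysis. A small comparison sketch:

```python
import numpy as np
from scipy.stats import chi2, norm

def chi2_ppf_wilson_hilferty(p, nu):
    """Wilson-Hilferty closed-form approximation to the inverse
    chi-squared CDF, used here only to illustrate how an approximation
    can shift confidence intervals slightly."""
    z = norm.ppf(p)
    return nu * (1.0 - 2.0 / (9.0 * nu) + z * np.sqrt(2.0 / (9.0 * nu))) ** 3

p = 0.975
for nu in (10, 30, 100):
    exact = chi2.ppf(p, nu)
    approx = chi2_ppf_wilson_hilferty(p, nu)
    print(nu, exact, approx, (approx - exact) / exact)
```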
Groslambert covariance is now (June 2021) included. Documentation and tests are still to be added.
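For context, the Groslambert covariance is the Allan-variance analogue of a cross-covariance between two measurement channels that observe the same clock against independent references. A minimal non-overlapping sketch under that assumed textbook definition (an illustration only, not the code added to allantools):

```python
import numpy as np

def gcov(y_a, y_b, m):
    """Sketch of a Groslambert covariance estimate at averaging factor m
    (tau = m * tau0), assuming the cross-covariance of first differences
    of tau-averaged fractional frequencies from two channels that share
    a common clock."""
    n = min(len(y_a), len(y_b)) // m
    # non-overlapping tau-averages of the fractional frequency series
    ya = np.mean(np.reshape(y_a[:n * m], (n, m)), axis=1)
    yb = np.mean(np.reshape(y_b[:n * m], (n, m)), axis=1)
    # first differences, as in the Allan variance
    da = np.diff(ya)
    db = np.diff(yb)
    return 0.5 * np.mean(da * db)
```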
This workflow should now run test coverage analysis and upload the results to coveralls:
This is a list of to-do features/issues that will not make it into 2019.07:
Algorithms:
Test and Build:
Dependencies: