
Support for BigFloat arguments #98

Open
fda-tome opened this issue May 9, 2023 · 4 comments
Labels
enhancement New feature or request

Comments

@fda-tome

fda-tome commented May 9, 2023

Why can't we get arbitrary precision from the `besselj` functions?

@heltonmc
Member

Thanks for the interest! This is definitely on the radar but not implemented. Your best bet for arbitrary precision is to use ArbNumerics.jl, which wraps the fantastic Arb library; there you can specify any precision you want and rigorously compute `besselj`.

The reason they are not currently implemented is that the implementations for arbitrary-precision routines and for fixed-precision (up to double-precision) routines are different. I have no plans to support arbitrary precision in this library, as I think Arb is excellent. However, I do have some plans to support Float128 or Double64 in the future. I can't say for sure I'll ever get there, but I have wanted that functionality.

For now, ArbNumerics.jl or QuadMath.jl will be your best bet ☺️
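For anyone landing here, a minimal sketch of the ArbNumerics.jl route mentioned above (not part of this repo; it assumes ArbNumerics.jl's exported `besselj` and `setworkingprecision` — check that package's docs for the current API):

```julia
using ArbNumerics  # wraps the Arb library for rigorous ball arithmetic

# Request roughly 256 bits of working precision for ArbFloat values.
setworkingprecision(ArbFloat, bits = 256)

x = ArbFloat(1) / 3
besselj(0, x)   # J_0(1/3) evaluated with Arb's rigorous routines
```

The precision is a property of the `ArbFloat` type here, so the same call computes to whatever working precision was last set.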

@heltonmc heltonmc added the enhancement New feature or request label May 10, 2023
@fda-tome
Author

Awesome! I am very interested in implementing Float128 and Double64 support. I have a lot on my plate right now with GSoC, but once I'm finished with it I'll be happy to tag in and contribute. 😄

@heltonmc
Member

Sounds good! Happy to help with that so please feel free to ask any questions when you have more time!

@nsajko

nsajko commented Oct 12, 2023

Regarding the original topic of this issue (BigFloat support), note that MPFR already implements the relevant functionality and the SpecialFunctions package already wraps the MPFR interface. E.g.:

julia> using SpecialFunctions

julia> let x = one(BigFloat) / 3
         besselj0(x) == besselj(0, x)
       end
true

Not to discourage a pure-Julia implementation, though.

Regarding support for fixed-precision FP types like Float128, maybe I'll give implementing those a try myself, after improving some parts of the Float32 and Float64 implementations.
