Substantial Python loop overhead for VBBL #104
Comments
@kmzzhang We are about to do a major refactor for Version 3, so we can add this to the list. In particular, we are talking about having subclasses for models, which could include creating MagnificationCurve subclasses such as MagnificationCurveVBBL()
We don't have to wait till v3. This seems like a relatively easy thing. I'm guessing we did it on an epoch-by-epoch basis because it's easier to pass a specific number of floats between Python and C++, rather than arrays of unknown sizes.
You're welcome. I believe this also applies to 2L1S point-source calculations when the SG12 solver in VBBL is used. Since the finite-source method is specified in time intervals in MulensModel, perhaps one way to refactor the code is to aggregate epochs by those intervals and calculate the magnifications in batches?
Yes, that's what I plan to do.
@kmzzhang Can you please share how you pass pointers/vectors of floats between Python and C++? I see that there are different approaches to …
@rpoleski Apologies for the late reply. I don't have a particular way of doing this, but perhaps you could use Valerio's way of making Python wrappers: https://github.com/valboz/VBBinaryLensing/tree/master/VBBinaryLensing/lib. He has a Python wrapper for the BinaryLightCurve function that takes an array of times (ts). One could easily modify this function (and its wrapper) to take arrays of source locations y1s and y2s instead of times. Everything else would stay the same.
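For what it's worth, a generic way to hand whole numpy arrays to a C/C++ routine is ctypes together with numpy.ctypeslib. The sketch below is only illustrative: the shared library name and the function vbbl_magnification_batch are hypothetical placeholders, not part of VBBL's or MulensModel's actual API.

```python
# Illustrative sketch only: `libvbbl_wrapper.so` and `vbbl_magnification_batch`
# are hypothetical placeholders, not part of VBBinaryLensing's real API.
import ctypes

import numpy as np

lib = ctypes.CDLL("./libvbbl_wrapper.so")  # hypothetical compiled C++ wrapper
lib.vbbl_magnification_batch.restype = None
lib.vbbl_magnification_batch.argtypes = [
    ctypes.c_double, ctypes.c_double, ctypes.c_double,         # s, q, rho
    np.ctypeslib.ndpointer(np.float64, flags="C_CONTIGUOUS"),  # y1s (input)
    np.ctypeslib.ndpointer(np.float64, flags="C_CONTIGUOUS"),  # y2s (input)
    np.ctypeslib.ndpointer(np.float64, flags="C_CONTIGUOUS"),  # mags (output)
    ctypes.c_int,                                              # number of epochs
]

def magnification_batch(s, q, rho, y1s, y2s):
    """Call the (hypothetical) C++ loop once for a whole array of source positions."""
    y1s = np.ascontiguousarray(y1s, dtype=np.float64)
    y2s = np.ascontiguousarray(y2s, dtype=np.float64)
    mags = np.empty_like(y1s)
    lib.vbbl_magnification_batch(s, q, rho, y1s, y2s, mags, len(y1s))
    return mags
```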
@kmzzhang Can you provide example code that shows significant differences in execution time?
@rpoleski I did some quick tests. If MM only uses VBBL for points that need full finite source, it is around 0%--40% slower. Note that in the "Point source" example, VBBL's native Python wrapper is faster than the MM point-source method, even when VBBL automatically decides whether to do FS or not for each point (I kept a non-zero rho, but none of the points actually needed FS). So once the loop is moved into C++, one doesn't need to specify the time interval needing full FS and it's still a lot faster.
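For anyone who wants to reproduce that kind of comparison, here is a minimal timing sketch. It assumes the VBBinaryLensing Python package is installed; the call signatures and the trajectory convention should be checked against its documentation, and the parameters are arbitrary.

```python
# Rough timing sketch; parameters are arbitrary, and the straight-line
# trajectory below may not exactly match VBBL's internal convention --
# the point is only the per-call overhead.
import time

import numpy as np
import VBBinaryLensing

VBBL = VBBinaryLensing.VBBinaryLensing()
VBBL.Tol = 1e-3

s, q, rho = 1.0, 0.01, 0.001             # separation, mass ratio, source radius
t0, tE, u0, alpha = 0.0, 30.0, 0.1, 0.7  # time of peak, timescale, impact parameter, angle
times = np.linspace(-50.0, 50.0, 5000)

# Source positions for the per-epoch loop.
tau = (times - t0) / tE
y1 = tau * np.cos(alpha) - u0 * np.sin(alpha)
y2 = tau * np.sin(alpha) + u0 * np.cos(alpha)

# 1) One Python-level wrapper call per epoch (the pattern used in MulensModel).
start = time.time()
mags_loop = [VBBL.BinaryMag2(s, q, float(y1[i]), float(y2[i]), rho)
             for i in range(len(times))]
t_loop = time.time() - start

# 2) A single call with the loop inside C++ (VBBL's own light-curve routine).
params = [np.log(s), np.log(q), u0, alpha, np.log(rho), np.log(tE), t0]
start = time.time()
results = VBBL.BinaryLightCurve(params, times)  # results[0] holds the magnifications
t_batch = time.time() - start

print(f"Python loop: {t_loop:.2f} s   single C++ call: {t_batch:.2f} s")
```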
Also, if I recall correctly, for each subsequent time sampling VBBL initializes the lens-equation root finding from the roots of the previous time sampling. But since MulensModel calls the VBBL Python wrapper separately for each time stamp, VBBL has to initialize the roots from scratch every single time, making it much slower. So it's most likely more than just loop overhead.
There is one more aspect: MM still uses an older version of VBBL, not VBM. I don't know how much they differ in speed.
I guess one reason is that MulensModel doesn't have the keyword for relative tolerance control (right?), which significantly affects the speed.
I've started a new branch for that: vbbl_reltol
Setting the relative tolerance is ready to be merged. The only question is what default values we should set for the absolute and relative tolerance. I'd like the code to work well for Roman data, so I propose RelTol = 0.001 and Tol = 0 (i.e., the latter is ignored).
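For illustration, the proposed defaults would correspond to something like the following when the VBBinaryLensing Python wrapper is used directly (attribute and method names as I understand them from that package; the lens/source parameters here are arbitrary):

```python
# Proposed defaults, illustrated on the VBBinaryLensing Python wrapper;
# the lens/source parameters below are arbitrary.
import VBBinaryLensing

VBBL = VBBinaryLensing.VBBinaryLensing()
VBBL.RelTol = 0.001  # relative precision goal (proposed default)
VBBL.Tol = 0.0       # absolute accuracy goal; effectively ignored when RelTol dominates

# Arguments: s, q, y1, y2, rho
magnification = VBBL.BinaryMag2(1.0, 0.01, 0.05, 0.02, 0.001)
print(magnification)
```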
Currently, the binary-lens light curve is calculated in a Python loop where each point is evaluated separately:
https://github.com/rpoleski/MulensModel/blob/master/source/MulensModel/magnificationcurve.py#L421
When using VBBL as the finite-source method, the Python loop overhead can slow things down by up to ~7 times compared to VBBL's own Python wrapper, where the loop occurs in C++. This is most apparent when VBBL is used for the full light curve (where it decides automatically whether to trigger the full FS calculation for each point). Perhaps one could aggregate the points that use VBBL and move the loop into C++? I considered making a pull request, but I realize this may involve refactoring larger parts of the code.
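As a rough sketch of the aggregation idea (all helper names here are hypothetical, not actual MulensModel internals): collect the epochs that fall inside the finite-source intervals and hand them to a single batched call, with the loop on the C++ side.

```python
# Sketch only: `point_source_mag` and `vbbl_batch_magnification` are
# hypothetical callables, not existing MulensModel or VBBL functions.
import numpy as np

def get_magnifications(times, trajectory_x, trajectory_y, fs_mask,
                       s, q, rho, point_source_mag, vbbl_batch_magnification):
    """Compute magnifications, batching the epochs that need full finite source."""
    magnification = np.empty_like(times, dtype=float)

    # Epochs outside the finite-source intervals: cheap point-source formula.
    magnification[~fs_mask] = point_source_mag(
        s, q, trajectory_x[~fs_mask], trajectory_y[~fs_mask])

    # Epochs inside the intervals: one call, with the loop moved into C++,
    # which also lets VBBL reuse the previous epoch's roots as a starting guess.
    magnification[fs_mask] = vbbl_batch_magnification(
        s, q, rho, trajectory_x[fs_mask], trajectory_y[fs_mask])

    return magnification
```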