PyTorch backend slow with PyMC model #1110
The script I used for reference:

```python
import arviz as az
import numpy as np
import multiprocessing
import pandas as pd
import pymc as pm
import pytensor as pt
import pytensor.tensor.random as ptr
import time


def main():
    # Load the radon dataset
    data = pd.read_csv(pm.get_data("radon.csv"))
    data["log_radon"] = data["log_radon"].astype(np.float64)
    county_idx, counties = pd.factorize(data.county)
    coords = {"county": counties, "obs_id": np.arange(len(county_idx))}

    # Create a simple hierarchical model for the radon dataset
    with pm.Model(coords=coords, check_bounds=False) as model:
        intercept = pm.Normal("intercept", sigma=10)

        # County effects
        raw = pm.ZeroSumNormal("county_raw", dims="county")
        sd = pm.HalfNormal("county_sd")
        county_effect = pm.Deterministic("county_effect", raw * sd, dims="county")

        # Global floor effect
        floor_effect = pm.Normal("floor_effect", sigma=2)

        # County:floor interaction
        raw = pm.ZeroSumNormal("county_floor_raw", dims="county")
        sd = pm.HalfNormal("county_floor_sd")
        county_floor_effect = pm.Deterministic(
            "county_floor_effect", raw * sd, dims="county"
        )

        mu = (
            intercept
            + county_effect[county_idx]
            + floor_effect * data.floor.values
            + county_floor_effect[county_idx] * data.floor.values
        )

        sigma = pm.HalfNormal("sigma", sigma=1.5)
        pm.Normal(
            "log_radon", mu=mu, sigma=sigma, observed=data.log_radon.values, dims="obs_id"
        )

    from pymc.model.transform.optimization import freeze_dims_and_data

    model = freeze_dims_and_data(model)

    for mode in ("NUMBA", "PYTORCH"):
        start = time.perf_counter()
        trace = pm.sample(
            model=model,
            cores=1,
            chains=1,
            tune=500,
            draws=500,
            progressbar=False,
            compute_convergence_checks=False,
            return_inferencedata=False,
            compile_kwargs=dict(mode=mode),
        )
        end = time.perf_counter()
        idata = pm.to_inference_data(trace, model=model)
        print(az.summary(idata, kind="diagnostics"))
        print(mode, trace._report.t_sampling, end - start)


if __name__ == "__main__":
    multiprocessing.freeze_support()
    main()
```
I called the compiled logp and dlogp functions as well, to narrow down where the time goes.
You have to call it once to JIT-compile (and perhaps assert that both backends output the same results), and only then use timeit.
@ricardoV94 can you assign this issue to me by chance? I profiled a bit more. The logp and dlogp pytensor functions don't take long to generate, but executing them is slower. Both numba and torch get faster if you execute the functions multiple times, but numba is still much faster.
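For reference, a minimal sketch of the warm-up-then-time pattern described above. It assumes `model.compile_logp` / `model.compile_dlogp` forward `mode` through their compile kwargs and that `model.initial_point()` gives a valid point to evaluate at:

```python
import timeit
import numpy as np

def time_backend(model, mode, repeats=100):
    # Compile logp and dlogp for the requested backend.
    logp_fn = model.compile_logp(mode=mode)
    dlogp_fn = model.compile_dlogp(mode=mode)
    point = model.initial_point()

    # First call triggers JIT compilation; keep it out of the timings.
    logp0 = logp_fn(point)
    dlogp_fn(point)

    logp_t = timeit.timeit(lambda: logp_fn(point), number=repeats) / repeats
    dlogp_t = timeit.timeit(lambda: dlogp_fn(point), number=repeats) / repeats
    return logp0, logp_t, dlogp_t

results = {mode: time_backend(model, mode) for mode in ("NUMBA", "PYTORCH")}

# Check the backends agree before comparing speed.
np.testing.assert_allclose(results["NUMBA"][0], results["PYTORCH"][0], rtol=1e-6)
for mode, (_, logp_t, dlogp_t) in results.items():
    print(f"{mode}: logp {logp_t * 1e6:.1f} µs/call, dlogp {dlogp_t * 1e6:.1f} µs/call")
```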
I'm seeing some interesting data in the graph-break report:

```
Graph Count: 8
Graph Break Count: 7
Op Count: 198
Break Reasons:
  Break Reason 1:
    Reason: data dependent operator: aten._local_scalar_dense.default; to enable, set torch._dynamo.config.capture_scalar_outputs = True
    User Stack:
      <FrameSummary file /var/folders/2z/y4z6q41j7l93ysr2y2k_gsnh0000gn/T/tmpxndm7s5l, line 35 in pytorch_funcified_fgraph>
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/link/pytorch/dispatch/basic.py, line 129 in join>
  Break Reason 2:
    Reason: data dependent operator: aten._local_scalar_dense.default; to enable, set torch._dynamo.config.capture_scalar_outputs = True
    User Stack:
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/link/pytorch/dispatch/basic.py, line 129 in join>
  Break Reason 3:
    Reason: data dependent operator: aten._local_scalar_dense.default; to enable, set torch._dynamo.config.capture_scalar_outputs = True
    User Stack:
      <FrameSummary file /var/folders/2z/y4z6q41j7l93ysr2y2k_gsnh0000gn/T/tmpxndm7s5l, line 75 in torch_dynamo_resume_in_pytorch_funcified_fgraph_at_35>
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/link/pytorch/dispatch/basic.py, line 129 in join>
  Break Reason 4:
    Reason: Dynamic slicing on data-dependent value is not supported
    User Stack:
      <FrameSummary file /var/folders/2z/y4z6q41j7l93ysr2y2k_gsnh0000gn/T/tmpxndm7s5l, line 133 in torch_dynamo_resume_in_pytorch_funcified_fgraph_at_75>
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/link/pytorch/dispatch/subtensor.py, line 77 in inc_subtensor>
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/tensor/subtensor.py, line 142 in indices_from_subtensor>
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/tensor/subtensor.py, line 142 in <genexpr>>
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/tensor/subtensor.py, line 131 in convert_indices>
  Break Reason 5:
    Reason: Dynamic slicing on data-dependent value is not supported
    User Stack:
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/link/pytorch/dispatch/subtensor.py, line 81 in torch_dynamo_resume_in_inc_subtensor_at_78>
  Break Reason 6:
    Reason: Dynamic slicing on data-dependent value is not supported
    User Stack:
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/link/pytorch/dispatch/subtensor.py, line 81 in torch_dynamo_resume_in_inc_subtensor_at_81>
  Break Reason 7:
    Reason: Dynamic slicing on data-dependent value is not supported
    User Stack:
      <FrameSummary file /var/folders/2z/y4z6q41j7l93ysr2y2k_gsnh0000gn/T/tmpxndm7s5l, line 193 in torch_dynamo_resume_in_pytorch_funcified_fgraph_at_133>
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/link/pytorch/dispatch/subtensor.py, line 77 in inc_subtensor>
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/tensor/subtensor.py, line 142 in indices_from_subtensor>
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/tensor/subtensor.py, line 142 in <genexpr>>
      <FrameSummary file /Users/ch0ronomato/dev/pytensor/pytensor/tensor/subtensor.py, line 131 in convert_indices>
```

The first thing I'm going to do is clean up some of the warnings. These timings are only possible because I'm sending the warning logs to /dev/null; if you don't, the timings balloon a bit.
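For anyone who wants to reproduce this, a rough sketch of how such a report can be pulled out of dynamo while keeping the warning spam out of the timings. `torch._dynamo.explain` is a real API, but `fn` and `example_inputs` are placeholders here: how exactly to grab the underlying python callable and valid inputs out of the PYTORCH-mode compiled function is an assumption, not something from this thread:

```python
import logging
import torch._dynamo as dynamo

# Keep dynamo/inductor warnings off stderr so they don't distort wall-clock timings.
logging.getLogger("torch._dynamo").setLevel(logging.ERROR)
logging.getLogger("torch._inductor").setLevel(logging.ERROR)

# fn: the python callable that PYTORCH mode compiles; example_inputs: valid inputs for it.
explanation = dynamo.explain(fn)(*example_inputs)
print(explanation)  # graph count, graph break count, op count
for reason in explanation.break_reasons:
    print(reason)
```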
The first call is the compilation; it's not as relevant since it's a one-time thing. If we're recompiling multiple times, that's a different story. Also, are you using ...? Did you confirm the outputs match?
Those breaks are interesting, but most shouldn't be data-dependent? The slices are constant in this model, for instance. Can you enable that capture-scalar-outputs option? Also, are you freezing the model data and dims like in the original example? When we have static shapes we could forward those to the dispatch.
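For concreteness, a sketch of both suggestions. `torch._dynamo.config.capture_scalar_outputs` is a real flag, but whether it actually removes these particular breaks is exactly what needs testing; the sampling call just mirrors the original script:

```python
import torch._dynamo
import pymc as pm
from pymc.model.transform.optimization import freeze_dims_and_data

# Let dynamo trace tensor-to-scalar conversions instead of breaking the graph on them.
torch._dynamo.config.capture_scalar_outputs = True

# Freeze dims and data so the torch backend sees static shapes and constant slices.
frozen_model = freeze_dims_and_data(model)

trace = pm.sample(
    model=frozen_model,
    cores=1,
    chains=1,
    tune=500,
    draws=500,
    compile_kwargs=dict(mode="PYTORCH"),
)
```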
Description
@ricardoV94 did a nice perf improvement in pymc-devs/pymc#7578 to try to speed up the jitted backends. I tried out torch as well. The model performed quite slowly.
We need to investigate why.
When doing perf evaluations, keep in mind that torch does a lot of caching. If you want a truly cache-less eval, you can either call
torch.compiler.reset()
or set the environment variable that disables the dynamo cache (google it).
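A sketch of what a cache-less comparison could look like. `torch.compiler.reset()` is a real API; `compiled_fn` and `example_inputs` are hypothetical stand-ins for whatever function is being benchmarked, and the environment-variable route isn't shown since the exact variable name isn't given here:

```python
import time
import torch

def timed_run(compiled_fn, example_inputs, label):
    # Clear previously compiled artifacts so this run pays the compilation cost again.
    torch.compiler.reset()

    start = time.perf_counter()
    compiled_fn(*example_inputs)  # first call includes (re)compilation
    first = time.perf_counter() - start

    start = time.perf_counter()
    compiled_fn(*example_inputs)  # steady-state call, served from the fresh cache
    steady = time.perf_counter() - start

    print(f"{label}: first call {first:.3f} s, steady state {steady * 1e3:.3f} ms")
```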