Replies: 5 comments
-
@jc00ke, it's a great conversation. The tradeoff between compile-time and runtime performance is important - and in this project a core consideration.

### 💡 don't precompile?

The canonical libicu doesn't precompile. But then neither the C version nor the Java version runs on a platform that would even make that easy. In the end, CLDR is a very large and complex data structure that needs to be interpreted in the context of an equally large and complex specification. Those implementations make different tradeoffs. In my opinion the result is a library that is somewhat unapproachable, difficult to navigate, and has a high barrier to entry. My goal has always been the opposite: approachable, a low barrier to entry (a relative thing, I know) and easy to navigate. In short, to be a good fit in the Elixir ecosystem. So in the end, precompilation serves two goals: be easy to adopt and use, and deliver good performance. I truly don't believe that runtime performance would be acceptable without precompilation of some artefacts.

### 💡 cache precompiled BEAM files?

That's an interesting idea to experiment with. I would be concerned about version management, both of the underlying CLDR-based data and of the different BEAM versions. An additional complexity is that the various generated modules include data from multiple locales, so it would require quite a lot of rearchitecting to make BEAM caching possible.

### 💡 port what gets precompiled to Rust?

There is an emerging Rust icu, so that could be an option for sure. One possible misunderstanding about what gets precompiled is the assumption that it's "just" baking data into modules. That's certainly true, but there is also quite a lot of actual code generation going on. For example:
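For illustration, here is a minimal, hypothetical sketch of compile-time code generation in that spirit. The module name and data are made up for the example and are not ex_cldr's actual generated code.

```elixir
defmodule Example.Generated do
  # Placeholder data standing in for a slice of CLDR locale data.
  @plural_categories %{
    "en" => [:one, :other],
    "fr" => [:one, :many, :other],
    "ja" => [:other]
  }

  # At compile time, unroll the map into one function clause per locale
  # so a runtime lookup is a plain pattern match on a binary literal.
  for {locale, categories} <- @plural_categories do
    def plural_categories(unquote(locale)), do: unquote(categories)
  end

  def plural_categories(_locale), do: {:error, :unknown_locale}
end
```

With that in place, `Example.Generated.plural_categories("fr")` returns `[:one, :many, :other]` without traversing any data structure at runtime - the cost is paid once, at compile time.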
-
In my earlier investigations into why compilation is slow - and why it grows exponentially with large numbers of locales - my naive conclusion has been that it's not Elixir compilation that is slow. It's the BEAM SSA phase that is slow, apparently related to how it tries to optimise functions with many clauses. My intended next round of investigation into this challenge of balancing runtime and compile time is to see what the impact would be of collapsing the number of function heads by moving some of the core data into maps that are accessed at runtime (a sketch of the idea is below). I think that may deliver some compile-time improvement without material runtime impact. As always the challenge is time, and I don't expect to be able to work on this aspect for a few months yet. But I would be very happy to collaborate on experiments to try and improve compile-time performance - as long as it doesn't negatively impact runtime performance.
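A hypothetical sketch of that idea, using made-up module names and data rather than actual ex_cldr code: the first module is the current style (one clause per key), the second keeps a single clause and looks the value up in a literal map at runtime.

```elixir
defmodule Example.ManyHeads do
  # One clause per key: fast pattern-matched lookups at runtime, but
  # modules with very many clauses are expensive for the BEAM SSA
  # optimisation passes at compile time.
  for {code, name} <- %{"US" => "United States", "DE" => "Germany", "JP" => "Japan"} do
    def territory_name(unquote(code)), do: unquote(name)
  end
end

defmodule Example.MapLookup do
  # A single clause over a literal map: far fewer function heads for the
  # compiler to optimise, at the cost of a map lookup at runtime.
  @territory_names %{"US" => "United States", "DE" => "Germany", "JP" => "Japan"}

  def territory_name(code), do: Map.get(@territory_names, code)
end
```

The open question is how the `Map.get/2` call compares with a pattern-matched clause once the data is large, which is where benchmarking comes in.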
-
I think a good starting point is to set up CI to run integration tests of all the libraries, plus benchmarks (see the sketch below). This reminds me that I need to move cldr_territories into the organization. But time is not currently on my side.
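As an illustration, a benchmark along these lines could run in CI to guard runtime performance while experimenting with compile-time changes. This is a hypothetical sketch using Benchee (a common Elixir benchmarking library, assumed to be added as a dependency) and placeholder data, not CLDR data.

```elixir
# Placeholder lookup data; in a real benchmark this would be generated
# from the actual locale data under test.
locale_map = Map.new(1..500, fn i -> {"locale_#{i}", i} end)
locale_list = Map.to_list(locale_map)

Benchee.run(%{
  "map lookup"  => fn -> Map.get(locale_map, "locale_250") end,
  "list lookup" => fn -> List.keyfind(locale_list, "locale_250", 0) end
})
```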
-
I would hope the JIT could optimize this so you don't have to do it yourself, but I'd assume map lookups are already well tuned.

@mtrudel has done some really good benchmarking work with GH Actions for Bandit, so maybe we can use that for inspiration. One thing I've noticed recently is that this package seems to be recompiled more often than it needs to be. I'm back in the office on Monday (was on vacation and then traveling for work) and will see if I can test that.
-
Once compiled, I wouldn't expect a "backend module" to be recompiled unless you change the configuration. Curious to see what you are seeing. And I definitely appreciate any help reducing transitive compile-time dependencies (not a strength of mine, for sure).
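For reference, standard Mix tooling (nothing specific to ex_cldr) can help pin down what is recompiling and why: `mix xref graph --label compile-connected` lists files whose compile-time dependencies cause other files to be recompiled when they change, and `mix compile.elixir --verbose` prints each file as it is compiled, which makes unexpected recompilation visible.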
-
Hi there! First off, thank you to everyone (especially @kipcole9!) for your work on this project. I've relied on it for a while now and it's been great.
One thing I've noticed more recently than before is long compilation times. I saw #214 and thought a discussion could be a better place to brainstorm ideas than in an issue.
💡 don't precompile?
I don't know what the benchmarks look like, and this probably wouldn't work, but wanted to throw out the idea anyway.
💡 cache precompiled BEAM files?
Is there something like RustlerPrecompiled for BEAM files? If not, this could be a good opportunity to build it, though I don't know if other projects would have this need.
💡 port what gets precompiled to Rust?
... and then pull down with RustlerPrecompiled. This could benefit a lot of other projects that can pull in Rust deps, but would likely be A LOT of work that may not align with the goals of this project or its authors.
Looking forward to hearing thoughts on those options and other ideas!