This repository has been archived by the owner on Jan 21, 2022. It is now read-only.
Hello. I found this project very interesting. Reading the README, it says:
The aim of this exercise is to prove that interpreted languages can be just as fast as C.
I'm surprised to read that, though, because I had learned that interpreted code would always be slower than compiled code because of interpreter overhead. I would be very excited to see this project reach close to 1.0x performance, but I'm curious why you believe the interpreter overhead would not hold it back?
If I may: @rain-1, the short version is that all software needs some form of runtime abstraction to be developed in a scalable and maintainable way.
C-based software like redis is no exception. It has many of its own types and conventions, which introduce their own overhead. Embracing an interpreter, like CPython, is really not much different from using a standard runtime. It might be light for some, and heavy for others, but the key is that it works.
Python is on the lighter, simpler side of interpreted runtimes. In contrast, Java is usually run with a very heavy bytecode interpreter. CPython behaves more like C than Java, and because Python lets us get very close to system calls, simple applications like redis can approach the speed of the system itself.
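To make that concrete, here is a minimal sketch (not this project's actual code; the command handling and the port are made up for illustration) of a tiny key/value server. Per request, the interpreter only executes a handful of bytecodes for parsing and dictionary lookups; almost all of the wall-clock time is spent inside `recv()` and `sendall()`, which drop straight into C and the kernel, which is why a simple Python server can sit surprisingly close to the speed of the system itself.

```python
import socket

store = {}  # toy in-memory key/value store, for illustration only

def handle(conn):
    with conn:
        while True:
            data = conn.recv(4096)   # blocking read: time spent in the kernel, not the interpreter
            if not data:
                break
            parts = data.decode().split()
            if parts and parts[0].upper() == "SET" and len(parts) == 3:
                store[parts[1]] = parts[2]        # one dict assignment: a few bytecodes
                conn.sendall(b"+OK\r\n")
            elif parts and parts[0].upper() == "GET" and len(parts) == 2:
                value = store.get(parts[1])
                reply = f"${len(value)}\r\n{value}\r\n" if value is not None else "$-1\r\n"
                conn.sendall(reply.encode())
            else:
                conn.sendall(b"-ERR unknown command\r\n")

def serve(host="127.0.0.1", port=6380):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            handle(conn)  # single-threaded loop, in the spirit of redis's single event loop

if __name__ == "__main__":
    serve()
```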
I wrote about how we achieved submillisecond service performance in Python at PayPal here. Hope this helps!