There is an idea to optimize the code for json.lua #34
I completely agree with you. ".." should be avoided for large amounts of data. |
Took advice from @Dnsls and @Faserr to optimize the code from rxi#34 (see the linked comments in that issue).
String concatenation using `..` in hot loops was replaced with collecting parts in a buffer table and a single `table.concat`.
|
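A minimal sketch of the difference described above (illustrative only, not code from either library):

```lua
-- Quadratic: each `..` copies the whole accumulated string, so building an
-- n-piece string costs O(n^2) character copies and creates n temporary strings.
local function join_slow(pieces)
  local out = ""
  for i = 1, #pieces do
    out = out .. pieces[i]
  end
  return out
end

-- Linear: collect the pieces in a buffer table and concatenate once.
local function join_fast(pieces)
  local buf = {}
  for i = 1, #pieces do
    buf[#buf + 1] = pieces[i]
  end
  return table.concat(buf)
end
```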
If you want a fast implementation: https://github.com/Vurv78/qjson.lua
|
Sorry, but that claim in that form simply isn't true (or at best highly misleading). On which inputs? Since this is a complexity issue, any linear-time implementation can be arbitrarily faster. Also, your parser doesn't seem spec-compliant to me: I see no provisions for unicode escape sequences, and your string formatting looks dubious as well. TL;DR: your JSON parser is broken and has a time complexity issue as well. Your performance claims in their current form can't be taken seriously. Besides, my PR fixes the complexity issue. |
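For illustration, the kind of `\uXXXX` handling a compliant decoder needs, as a minimal sketch (assumptions mine, not code from either library; it covers the BMP only and ignores surrogate pairs for brevity):

```lua
-- Decode a JSON "\uXXXX" escape (4 hex digits) into a UTF-8 byte sequence.
-- A real decoder must additionally combine \uD800-\uDBFF with a following
-- low surrogate to form code points above U+FFFF.
local function decode_unicode_escape(hex4)
  local cp = tonumber(hex4, 16)
  if cp < 0x80 then
    return string.char(cp)
  elseif cp < 0x800 then
    return string.char(0xC0 + math.floor(cp / 0x40), 0x80 + cp % 0x40)
  else
    return string.char(0xE0 + math.floor(cp / 0x1000),
                       0x80 + math.floor(cp / 0x40) % 0x40,
                       0x80 + cp % 0x40)
  end
end

print(decode_unicode_escape("00e9")) --> é
```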
There are benchmarks. It is better. |
I don't see how you are replying to any of my points. |
I did. All of your points are invalidated by the benchmarks. Theory is theory, but it comes down to the actual performance. |
Oh yeah? So something being fast somehow magically makes it correct? Look, I have the fastest JSON decoder: `function read_json(str) return 1 end`. It's easy to "optimize" e.g. string decoding if you give up correctness, and "the" benchmarks prove nothing on their own. |
Just to be sure, I ran the benchmarks of qjson.lua myself, and here's LuaJIT: oops, looks like it's slower. I don't know what you did wrong when benchmarking; perhaps do some benchmarks before citing them? |
It's probably because you have a horribly slow CPU. |
Really...?
|
Have you tried turning your computer off and on again? Sometimes that affects my benchmarks. |
Would probably help to have a shared benchmark script, so that people on both sides of the argument can verify that the tests are being conducted correctly (or at the very least consistently...) |
The benchmark script is available in this repository. Just add the library you want to test to it. |
It has some odd contents such as this:

```lua
if name == "jfjson.lua" then
  local _encode, _decode = json.encode, json.decode
  json.encode = function(...) return _encode(json, ...) end
  json.decode = function(...) return _decode(json, ...) end
end
```

This will affect the LuaJIT results (I have not tested by how much), as LuaJIT will not JIT-compile variadic functions. |
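For what it's worth, a sketch of how such a wrapper could avoid varargs entirely (a suggestion, not code from the repository; it assumes the methods take a single argument besides the implicit self):

```lua
-- Fixed-arity wrappers: no varargs involved, so there is nothing here
-- for the JIT to fall back on.
if name == "jfjson.lua" then
  local _encode, _decode = json.encode, json.decode
  json.encode = function(value) return _encode(json, value) end
  json.decode = function(str) return _decode(json, str) end
end
```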
How would that affect the comparison here, though? |
[citation needed] A much more likely explanation is that this hardly matters; as far as I know, only vararg results are NYI. |
Addressing my incorrect claim that it is "thousands of times faster": this was very wrong and a dumb mistake on my end. I've largely rewritten the code to be even further optimized and taken advantage of the rope optimization (pushed that a bit further too). It's now faster in every case besides decoding on LuaJIT, since it relies on patterns, which cause trace stitching. I've fixed my comments to reflect this; thank you for bringing it to my attention.
See the link I posted above |
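For readers unfamiliar with the "rope" optimization mentioned above, a rough sketch of the general idea (assuming nothing about qjson.lua's actual implementation): emit output fragments into one shared buffer and flatten it once at the end, rather than building intermediate strings at every level of recursion.

```lua
-- A recursive encoder appending fragments to a shared buffer table.
local function encode_value(buf, v)
  if type(v) == "table" then
    buf[#buf + 1] = "["
    for i = 1, #v do
      if i > 1 then buf[#buf + 1] = "," end
      encode_value(buf, v[i])
    end
    buf[#buf + 1] = "]"
  else
    buf[#buf + 1] = tostring(v) -- placeholder; real encoders escape strings etc.
  end
end

local buf = {}
encode_value(buf, { 1, 2, { 3, 4 } })
print(table.concat(buf)) --> [1,2,[3,4]]
```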
Good. What remains to be addressed:
|
@appgurueu https://web.archive.org/web/20220708225331/http://wiki.luajit.org/NYI Took some time to find where the hell this was (the wiki has been deleted). Looks like only vararg results are unsupported based on that table, but I think in the context of a benchmark any potential for overhead should be removed.
It doesn't; I'm just saying that, generally, the benchmarking script in this repo is not giving each project being benchmarked an equal footing (regardless of whether they're being directly compared here or not). |
On an unrelated note, regarding the array detection:

```lua
local function isArray(value)
  return getmetatable(value) == Array
end

-- ...

local myArray = someJsonLib.Array({ 1, 2, 3, 4 })
print(isArray(myArray))
```

If detecting whether a table is an array or not is a severe bottleneck, isn't the above approach the best way to tackle this? (It seems especially useful for exceptionally large arrays.) |
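(`someJsonLib.Array` above is hypothetical; a minimal sketch of what such a constructor could look like:)

```lua
-- Hypothetical Array marker: tag a table with a known metatable so the
-- encoder can recognize it in O(1) instead of inspecting its keys.
local Array = {}

local function newArray(t)
  return setmetatable(t or {}, Array)
end
```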
From a user's perspective, probably nobody would want to have to adjust all their input data, but performance-wise it would probably be much faster. A faster way could be a lookup table which holds references to all of the array tables:

```lua
local function encode(t, arrays)
  if arrays[t] then
    -- encode t as a JSON array
  else
    -- encode t as a JSON object
  end
end

local data = { 1, 2, {} }
encode(data, { [data] = true, [data[3]] = true })
```

But still, this would require action on the user's part. |
I imagine that a user encoding the data is probably in control of how that data is generated (or can write a transformer easily). Ignoring that, though, I'm not sure a user of a pure-Lua JSON library really cares about splitting hairs over performance in the first place...
Not sure how that's a more ergonomic API than wrapping the arrays at source, or with a transformer. That solution still requires knowing where every single array in the dataset is, and provides a duplicate source of truth for what is or isn't an array. |
This is intended. I don't think tables with gaps should be treated as arrays; or at least, they should be cut off. Checking how other JSON libraries handle encoding such tables, there's no real clear pick here. In the future maybe I'd like to export a configuration function which lets you tweak things like this to allow or disallow sparse arrays, but for now I prefer the current behavior, or swapping to rapidjson/lunajson's method. |
I think throwing an error is the best course of action. A table with holes simply can't be represented properly without accepting the possibility of sparse tables being absurdly wasteful to encode (e.g. a single entry at index 1000000 would become a million-element array of nulls). |
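(A minimal sketch of one way an encoder could detect holes and throw, illustrative only:)

```lua
-- A proper sequence of length n has exactly n keys, namely 1..n.
-- Count the keys, then verify that 1..count are all present.
local function checked_array_length(t)
  local count = 0
  for _ in pairs(t) do
    count = count + 1
  end
  for i = 1, count do
    if t[i] == nil then
      error("table has holes or non-array keys; refusing to encode as an array")
    end
  end
  return count
end
```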
When `..` is used in loops, Lua will create a lot of garbage and thus hurt performance. I changed your script (json-fix.lua).
I created a .HAR file (313971312 bytes) using Firefox and tested it with a simple Lua script.
I got results like this on my computer (Intel(R) Core(TM) i3-9100 CPU @ 3.60GHz).
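(A minimal sketch of the kind of timing script described, with placeholder file and module names rather than the actual setup:)

```lua
-- Hypothetical harness: load a large JSON/.HAR file and time decode/encode.
local json = require("json") -- placeholder module name

local f = assert(io.open("data.har", "rb")) -- placeholder file name
local text = f:read("*a")
f:close()

local start = os.clock()
local decoded = json.decode(text)
print(("decode: %.3f s"):format(os.clock() - start))

start = os.clock()
local encoded = json.encode(decoded)
print(("encode: %.3f s, %d bytes"):format(os.clock() - start, #encoded))
```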