It looks like the Overpass system performs very poorly on any nontrivial query, and seems unable to return results in reasonable time even for relatively straightforward requests like "find all X worldwide which are within distance A of a Y and within distance B of a Z" (where X, Y and Z are queries that yield closed-way areas).
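For illustration, a query of the shape described above might look like the sketch below. The tags (`amenity=cafe`, `leisure=park`, `amenity=school`) and the radii are hypothetical stand-ins for X, Y, Z, A and B; a truly worldwide run of this would be exactly the kind of request that times out:

```
// Hypothetical stand-in for the described query shape:
// all cafes (X) within 100 m of a park (Y) and 200 m of a school (Z).
nwr[leisure=park]->.parks;
nwr[amenity=school]->.schools;
node[amenity=cafe](around.parks:100)(around.schools:200);
out;
```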
It also looks like the project doesn't implement any sort of spatial index such as an R-tree or k-d tree (or, if it does, it is done poorly), which would make it unsuitable for anything beyond trivial queries.
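To make the point concrete, here is a minimal pure-Python k-d tree sketch (illustrative only, nothing to do with Overpass's actual internals): it answers "all points within distance r of q" while pruning whole subtrees, instead of scanning every point.

```python
import math

def build(points, depth=0):
    """Recursively build a 2-d k-d tree; returns (point, left, right) or None."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def within(node, q, r, depth=0, found=None):
    """Collect all points within Euclidean distance r of q."""
    if found is None:
        found = []
    if node is None:
        return found
    point, left, right = node
    if math.dist(point, q) <= r:
        found.append(point)
    axis = depth % 2
    diff = q[axis] - point[axis]
    # Always descend the near side; descend the far side only if the
    # splitting plane is within r of q (this pruning is the speedup).
    near, far = (left, right) if diff <= 0 else (right, left)
    within(near, q, r, depth + 1, found)
    if abs(diff) <= r:
        within(far, q, r, depth + 1, found)
    return found

tree = build([(0, 0), (5, 5), (9, 1), (5.5, 5.5), (100, 100)])
print(sorted(within(tree, (5, 5), 1.0)))  # -> [(5, 5), (5.5, 5.5)]
```

The same pruning idea is what an R-tree applies to bounding boxes of ways and relations; without some index of this kind, every "around"-style filter degenerates into a scan.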
Also, it seems the public instances run on SSDs, whereas for proper performance they should have enough RAM to hold the whole data set in memory; 256-1024 GB of RAM looks sufficient for that.
As a first step, you should probably post a real query that reproduces the issue. A verbose description of what you're trying to do is of only limited use.
And no, the database wouldn't fit in main memory; OSM is simply too large for that. Even 1 TB would not suffice (assuming no compression): the compressed version already takes up more than 600 GB. Very likely disk access isn't the bottleneck here anyway, so we can simply set this point aside.