
Make sure a large gp hash table is used #38

Open · ralphlange opened this issue Feb 24, 2022 · 2 comments

@ralphlange (Contributor):

Reported by Lana Abadie (ITER):

While investigating long reconnect times through a CA Gateway, profiling showed that the Gateway spends a lot of time inside GPHENTRY * epicsStdCall gphFindParse(gphPvt *pgphPvt, const char *name, size_t len, void *pvtid), mostly doing string comparisons.

In that general-purpose hash table, a collision (a non-empty hash bucket) is resolved by a linear search over the bucket, doing a string comparison for each entry.
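
To make the cost concrete, here is a minimal, self-contained sketch of that lookup pattern. It is not the EPICS gpHash source; the names (Entry, Table, hashString, findEntry) are illustrative only:

```cpp
#include <cstring>

// Illustrative chained hash table: each bucket is a singly linked list.
struct Entry {
    Entry *next;
    const char *name;   // PV name
    void *pvtid;        // owner/namespace id, as in gphFindParse()
};

struct Table {
    Entry **buckets;
    unsigned nBuckets;
};

// Toy string hash; the real implementation differs.
static unsigned hashString(const char *s) {
    unsigned h = 0;
    while (*s)
        h = h * 31u + (unsigned char)*s++;
    return h;
}

// With too few buckets, many PV names land in the same bucket, so every
// lookup degenerates into a chain walk of strcmp() calls, which is
// exactly the hot spot the profile attributes to gphFindParse().
static Entry *findEntry(const Table &t, const char *name, void *pvtid) {
    for (Entry *e = t.buckets[hashString(name) % t.nBuckets]; e; e = e->next) {
        if (e->pvtid == pvtid && std::strcmp(e->name, name) == 0)
            return e;
    }
    return 0;
}
```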

The CA Gateway, being an application that may serve many PVs and always runs on a virtual memory system, should use a large gp hash table.
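
A minimal sketch of that change, assuming the Gateway builds its PV name directory with the gpHash API from EPICS Base (gpHash.h). The name pvNameDirectory is invented here, and the exact accepted range for tableSize (a power of two, up to 65536 as far as I recall) is documented in that header:

```cpp
#include <gpHash.h>   // EPICS Base general-purpose hash table API

static gphPvt *pvNameDirectory;   // hypothetical name for the Gateway's directory

void initPvNameDirectory(void)
{
    // Ask for a large table up front: more buckets mean fewer collisions,
    // so gphFindParse() does fewer linear searches and string comparisons.
    gphInitPvt(&pvNameDirectory, 65536);
}
```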

ralphlange self-assigned this Feb 24, 2022
@anjohnson (Member):

Interesting. What hash table size does it use? Jeff Hill provided and uses his own C++ hash table implementation in resourceLib.h, which adjusts its table size dynamically; maybe it would be worth looking at switching to that instead?

@mdavidsaver commented Feb 24, 2022:

IMO any reworking should favor std::map as the more sustainable, and perhaps better performing, alternative. Maybe with ifdefs to use std::unordered_map when available.
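
A sketch of what that could look like; the PvDirectory name and the void * mapped type are placeholders for whatever per-PV data the Gateway actually stores:

```cpp
#include <string>

#if __cplusplus >= 201103L   // reasonable proxy for <unordered_map> availability
  #include <unordered_map>
  typedef std::unordered_map<std::string, void *> PvDirectory;
#else
  #include <map>
  typedef std::map<std::string, void *> PvDirectory;
#endif

// std::unordered_map rehashes automatically as it grows, keeping buckets
// short; std::map gives O(log n) lookups with no hashing at all. Either
// way there is no fixed-size table to outgrow.
static PvDirectory pvDirectory;
```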
