fix: jwt cache is not purged #3801
base: main
@@ -1632,6 +1632,8 @@ def test_admin_metrics(defaultenv):
     assert "pgrst_db_pool_available" in response.text
     assert "pgrst_db_pool_timeouts_total" in response.text
 
+    assert "pgrst_jwt_cache_size" in response.text
+
 
 def test_schema_cache_startup_load_with_in_db_config(defaultenv, metapostgrest):
     "verify that the Schema Cache loads correctly at startup, using the in-db `pgrst.db_schemas` config"

@@ -1648,3 +1650,58 @@ def test_schema_cache_startup_load_with_in_db_config(defaultenv, metapostgrest):
     response = metapostgrest.session.post("/rpc/reset_db_schemas_config")
     assert response.text == ""
     assert response.status_code == 204
+
+
+def test_jwt_cache_size_decreases_after_expiry(defaultenv):
+    "verify that the JWT cache purges expired JWTs"
+
+    relativeSeconds = lambda sec: int(
+        (datetime.now(timezone.utc) + timedelta(seconds=sec)).timestamp()
+    )
+
+    headers = lambda sec: jwtauthheader(
+        {"role": "postgrest_test_author", "exp": relativeSeconds(sec)},
+        SECRET,
+    )
+
+    env = {
+        **defaultenv,
+        "PGRST_JWT_CACHE_MAX_LIFETIME": "86400",
+        "PGRST_JWT_SECRET": SECRET,
+        "PGRST_DB_CONFIG": "false",
+    }
+
+    with run(env=env, port=freeport()) as postgrest:
+
+        # Generate three unique JWT tokens.
+        # The 1 second sleep between tokens is needed so that each one gets
+        # a different "exp" claim and is therefore unique.
+        hdrs1 = headers(5)
+        postgrest.session.get("/authors_only", headers=hdrs1)
+
+        time.sleep(1)
+
+        hdrs2 = headers(5)
+        postgrest.session.get("/authors_only", headers=hdrs2)
+
+        time.sleep(1)
+
+        hdrs3 = headers(5)
+        postgrest.session.get("/authors_only", headers=hdrs3)
+
+        # the cache should now hold three tokens
+        response = postgrest.admin.get("/metrics")
+        assert response.status_code == 200
+        assert "pgrst_jwt_cache_size 3.0" in response.text
@taimoorzaeem Is it possible to have the size measured in bytes?

Maybe. There is no built-in function for this. We can try manually writing some logic to calculate the size in bytes, but I guess that would require some effort.

There doesn't seem to be much code in that library: https://github.com/def-/ghc-datasize/blob/master/src/GHC/DataSize.hs. So maybe we can just vendor the code and use it.

Btw, this metric could be a new PR so it's easier to review.
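As a rough illustration of the vendoring idea floated above (not something this PR contains): assuming GHC.DataSize from ghc-datasize were vendored unchanged, its recursiveSizeNF could report the cache's heap footprint in bytes. The module and helper names below are made up for the sketch.

module JwtCacheSize (jwtCacheSizeInBytes) where

-- Sketch only: assumes GHC.DataSize is vendored from ghc-datasize.
-- recursiveSizeNF forces the value to normal form and then walks its
-- closure, so the reported byte count is not inflated by unevaluated
-- thunks.  It requires an NFData instance for the cache structure.
import Control.DeepSeq (NFData)
import GHC.DataSize    (recursiveSizeNF)

-- Hypothetical helper; PostgREST's real cache type is not shown here.
jwtCacheSizeInBytes :: NFData cache => cache -> IO Word
jwtCacheSizeInBytes = recursiveSizeNF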
+
+        # Wait 5 seconds for the tokens to expire
+        time.sleep(5)
+
+        hdrs4 = headers(5)
+
+        # Make another request to force a call to the purgeExpired method.
+        # This should remove the 3 expired tokens and add 1 new token to the cache.
+        postgrest.session.get("/authors_only", headers=hdrs4)
+
+        response = postgrest.admin.get("/metrics")
+        assert response.status_code == 200
+        assert "pgrst_jwt_cache_size 1.0" in response.text
Hm, this is going to start a new thread on every request? That doesn't look efficient.
I think we should have a background thread that is notified to do the purge.
To be honest - for starters I would implement a simple solution that executes purgeExpired periodically (frequency possibly subject to configuration, but not necessarily).
Triggering the purge upon every request might mean the purging thread is running constantly, wasting cycles, as there is nothing to do most of the time.
The ultimate solution would be to have some kind of scheduler that triggers purging upon the next nearest expiry - but that would require more complex bookkeeping.
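A minimal sketch of the simpler periodic option (the first of those three), assuming purgeExpired can be passed in as a plain action over the cache; only forkIO, forever and threadDelay are real library functions here, everything else is a placeholder.

module PeriodicPurge (startPurgeLoop) where

import Control.Concurrent (ThreadId, forkIO, threadDelay)
import Control.Monad      (forever)

-- One long-lived background thread that purges every intervalSeconds,
-- instead of spawning a fresh purge thread on every request.
startPurgeLoop :: Int -> (cache -> IO ()) -> cache -> IO ThreadId
startPurgeLoop intervalSeconds purgeExpired cache =
  forkIO . forever $ do
    threadDelay (intervalSeconds * 1000000)  -- threadDelay takes microseconds
    purgeExpired cache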
Thinking about it some more - it seems to me the best moment to trigger the purge is upon a cache miss - just before/after inserting a new entry.
The reasoning is that:
This seems like a much better idea. This way, we wouldn't need to maintain any new thread state, hence avoiding extra overhead. @steve-chavez WDYT?
Agree, looks like a great idea 👍
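For concreteness, a minimal sketch of the purge-on-miss flow agreed on above, with the cache modeled as an IORef holding a Map; every name in it (JwtCache, lookupOrDecode, expiryOf, ...) is invented for the example and none of it is PostgREST's actual code.

module PurgeOnMiss (lookupOrDecode) where

import qualified Data.ByteString as BS
import           Data.IORef      (IORef, atomicModifyIORef', readIORef)
import qualified Data.Map.Strict as M
import           Data.Time.Clock (UTCTime, getCurrentTime)

type Token        = BS.ByteString
type CachedClaims = ()   -- stand-in for the decoded JWT claims
type JwtCache     = IORef (M.Map Token (CachedClaims, UTCTime))

-- On a hit, return the cached claims.  On a miss, decode the token, drop
-- every expired entry and insert the fresh one, so purge work only happens
-- when the cache is being written to anyway.  (A real version would also
-- treat a hit on an already-expired entry as a miss.)
lookupOrDecode :: JwtCache -> (Token -> IO CachedClaims) -> (Token -> UTCTime) -> Token -> IO CachedClaims
lookupOrDecode cacheRef decode expiryOf tok = do
  now    <- getCurrentTime
  cached <- M.lookup tok <$> readIORef cacheRef
  case cached of
    Just (claims, _) -> pure claims
    Nothing          -> do
      claims <- decode tok
      atomicModifyIORef' cacheRef $ \m ->
        (M.insert tok (claims, expiryOf tok) (M.filter (\(_, e) -> e > now) m), ())
      pure claims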