Add ability to replicate state immediately during normal shutdown #190

Merged 6 commits on Sep 27, 2024
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,11 @@

This new version of Phoenix.PubSub provides a simpler, more extensible, and more performant Phoenix.PubSub API. For users of Phoenix.PubSub, the API is the same, although frameworks and other adapters will have to migrate accordingly (which often means less code).

## 2.1.4 (2024-09-27)

### Enhancements
- Add `:permdown_on_shutdown` option.

## 2.1.3 (2023-06-14)

### Bug fixes
40 changes: 38 additions & 2 deletions lib/phoenix/tracker.ex
@@ -62,13 +62,34 @@ defmodule Phoenix.Tracker do
An optional `handle_info/2` callback may also be invoked to handle
application specific messages within your tracker.

## Special Considerations
## Stability and Performance Considerations

Operations within `handle_diff/2` happen *in the tracker server's context*.
Therefore, blocking operations should be avoided when possible, and offloaded
to a supervised task when required. Also, a crash in `handle_diff/2` will
crash the tracker server, so operations that may crash should be offloaded
to a `Task.Supervisor`-spawned process.
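
As a sketch (assuming a `MyApp.TaskSupervisor` started elsewhere in your
supervision tree), a `handle_diff/2` that offloads its work might look like:

```elixir
def handle_diff(diff, state) do
  Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
    # Runs outside the tracker server, so slow or crashing code here
    # cannot block or take down the tracker itself.
    for {topic, {joins, leaves}} <- diff do
      IO.inspect({topic, join_count: length(joins), leave_count: length(leaves)})
    end
  end)

  {:ok, state}
end
```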

## Application Shutdown

A tracker does not automatically replicate its state across the cluster as it
shuts down. This means that when your supervision tree shuts down normally - as
it does when you call `System.stop()` or the BEAM receives a `SIGTERM` - any
presences that the local tracker instance has will continue to be seen as
present by other trackers in the cluster until the `:down_period` for the
instance has passed.

If you want a normal shutdown to immediately cause other nodes to see that
tracker's presences as leaving, pass `permdown_on_shutdown: true`. On the
other hand, if you are using `Phoenix.Presence` for clients which will
immediately attempt to connect to a new node, it may be preferable to use
`permdown_on_shutdown: false`, allowing the disconnected clients time to
reconnect before removing their old presences, to avoid overwhelming clients
with notifications that many users left and immediately rejoined.

If the application crashes or is halted non-gracefully (for instance, with a
`SIGKILL` or a `Ctrl+C` in `iex`), other nodes will still have to wait the
`:down_period` to notice that the tracker's presences are gone.
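
For example (a minimal sketch; `MyApp.Tracker` and `MyApp.PubSub` are
placeholder names), a tracker can opt in from its child spec:

```elixir
children = [
  {Phoenix.PubSub, name: MyApp.PubSub},
  {MyApp.Tracker,
   name: MyApp.Tracker,
   pubsub_server: MyApp.PubSub,
   permdown_on_shutdown: true}
]
```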
"""
use Supervisor
require Logger
@@ -267,6 +288,9 @@ defmodule Phoenix.Tracker do
* `:down_period` - The interval in milliseconds to flag a replica
as temporarily down. Default `broadcast_period * max_silent_periods * 2`
(30s down detection). Note: This must be at least 2x the `broadcast_period`.
* `:permdown_on_shutdown` - Whether to immediately call `graceful_permdown/1`
on the tracker during a graceful shutdown. See the 'Application Shutdown'
section. Default `false`.
* `:permdown_period` - The interval in milliseconds to flag a replica
as permanently down, and discard its state.
Note: This must be greater than the `down_period`.
@@ -287,6 +311,7 @@
@impl true
def init([tracker, tracker_opts, opts, name]) do
pool_size = Keyword.get(opts, :pool_size, 1)
permdown_on_shutdown = Keyword.get(opts, :permdown_on_shutdown, false)
^name = :ets.new(name, [:set, :named_table, read_concurrency: true])
true = :ets.insert(name, {:pool_size, pool_size})

@@ -301,13 +326,24 @@
}
end

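# The shutdown handler is appended after the shards: supervisors stop
# children in reverse start order, so on shutdown it terminates first,
# while the shards it needs are still running.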
children = if permdown_on_shutdown do
shards ++ [
%{
id: :shutdown_handler,
start: {Phoenix.Tracker.ShutdownHandler, :start_link, [[tracker: tracker]]}
}
]
else
shards
end

opts = [
strategy: :one_for_one,
max_restarts: pool_size * 2,
max_seconds: 1
]

Supervisor.init(shards, opts)
Supervisor.init(children, opts)
end

defp pool_size(tracker_name) do
21 changes: 21 additions & 0 deletions lib/phoenix/tracker/shutdown_handler.ex
@@ -0,0 +1,21 @@
defmodule Phoenix.Tracker.ShutdownHandler do
@moduledoc false
use GenServer

def start_link(opts) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end

@impl GenServer
def init(opts) do
tracker = Keyword.fetch!(opts, :tracker)
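# Trap exits so a graceful supervisor shutdown is delivered as a
# message and terminate/2 runs before the process stops.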
Process.flag(:trap_exit, true)
{:ok, %{tracker: tracker}}
end

@impl GenServer
def terminate(_reason, state) do
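# Broadcast this tracker's permanent down so other replicas drop its
# presences immediately instead of waiting out the :down_period.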
Phoenix.Tracker.graceful_permdown(state.tracker)
:ok
end
end