Propose Architecture and Probe <-> Server API #11

Merged · 2 commits · Apr 20, 2023
README.md: 76 changes (64 additions, 12 deletions)

Let your software roar!

## Assumptions and Requirements
Let's start with an analogy: the profiled program is like a mechanical gearbox with many cogs, each of which can make a sound. The sound a cog makes depends on 1) how fast it spins, 2) how much power it transmits, and 3) the shape/material/kind of the cog.

The acoustic profiler makes it fairly easy to
1. implement probes for various types of cogs
2. vary the sound effects given to individual cogs
3. hear cogs spinning in a remote machine (host)

This necessitates an interface somewhere between the probes and the sound synthesis so that they can be mixed and matched. This interface also needs to work over the network to satisfy point 3.

## Interface

On an abstract level, the interface is just a stream of Events, where an event is

```rust
struct Event {
    event_type: EventType,
    // only values in [0.0, 1.0]
    quality: f32,
}

enum EventType {
    ActorMessageSent,
    FilesystemWriteOrRead,
    // ...
}
```

The frequency of events encodes how fast a cog spins, and the quality encodes the power transmitted through it. (We can add more dimensions later.)

@goodhoko (Member, Author) commented on Apr 19, 2023:

Instead of relying on the event frequency to be transmitted over the wire, we may instead make the probes send messages regularly (e.g. every 0.1 s) and encode the actual event frequency as an attribute of the regularly sent message.

I.e. a message would then be something like

```rust
struct Message {
    event_type: EventType,
    // number of occurrences of the events in the last 0.1 s
    occurrences: u32,
    // the sum of qualities
    quality: f32,
}
```

This could

1. save bandwidth
2. circumvent timing/ordering/buffering issues in the network layer

OTOH we'd also lose some resolution that could be used for sound synthesis.

A member replied:

> Instead of relying on the event frequency to be transmitted over the wire, we may instead make the probes send messages regularly (e.g. every 0.1 s) and encode the actual event frequency as an attribute of the regularly sent message.

Another alternative would be to let probes (aggregators) encode the frequency itself (the synthesis server would assume it stays the same until an update?). That has an advantage over an assumed fixed interval: probes could send updates as fast or as slow as the value actually changes.

The disadvantage is that we could keep playing for probes that die.

Also, is the interface (Rust type) the same before aggregation and after it? I.e. do we have some Event and AggregatedEvent, or does Event by itself (at least some types) need to support aggregation internally? I can imagine both variants.

Another member replied:

I like starting with the simplest version (as in the original proposal) and evolving over time. We can revisit the idea of occurrences and frequency later.

A thought for later: I think I'm starting to prefer making probes as dumb as possible and letting the server handle most of the complexity.

- The server knows more about the kinds of sound sources it can play and about other events from concurrent probes, which may influence how to interpret a particular series of events. Disk reads could be thought of as a changing frequency value, or as a series of discrete events.
- We can record incoming probe events and play them back on the server side to quickly iterate on aggregation and synthesis.
- We probably want to encode a timestamp in the event struct as well, so we are not influenced by network conditions. It also allows us to send Vec<Event> to reduce serialization and network overhead.

We also assume that

- any cog may not turn at all (no incoming events), in which case it shouldn't make any sound.
- no cog can turn faster than a human can hear (~20 000 events per second).

Every probe should fit into these limits, and every sound effect should account for the entire range.

## Implementation

Each probe is an individual binary that connects to a server and streams events to it over UDP, serialized as bincode. The address of the server is passed to the probe via command-line arguments. The probe should aggregate events to ensure the rate fits into the above range. Probes are free to send multiple types of events.
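
As a sketch of what such a probe could look like (not a prescribed implementation), the snippet below assumes the `serde` (with derive) and `bincode` 1.x crates and restates the `Event`/`EventType` types from the interface above; the event values and send rate are placeholders.

```rust
// Minimal probe sketch. Assumes `serde` (derive feature) and `bincode` 1.x.
use serde::Serialize;
use std::net::UdpSocket;
use std::{env, thread, time::Duration};

#[derive(Serialize)]
enum EventType {
    ActorMessageSent,
    FilesystemWriteOrRead,
}

#[derive(Serialize)]
struct Event {
    event_type: EventType,
    // only values in [0.0, 1.0]
    quality: f32,
}

fn main() -> std::io::Result<()> {
    // The server address arrives as the first command-line argument.
    let server = env::args().nth(1).expect("usage: probe <server-address>");
    // Bind to an ephemeral local port; UDP needs no connection setup.
    let socket = UdpSocket::bind("0.0.0.0:0")?;

    loop {
        // A real probe would collect events from the profiled program here.
        let event = Event {
            event_type: EventType::FilesystemWriteOrRead,
            quality: 0.5,
        };
        let bytes = bincode::serialize(&event).expect("bincode serialization failed");
        socket.send_to(&bytes, server.as_str())?;
        // Stay well below the ~20 000 events/s ceiling.
        thread::sleep(Duration::from_millis(10));
    }
}
```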

The server is a binary that accepts events and assigns a sound effect to every event type it receives. This mapping is set statically in code; later we can let it be configured with command-line options or a config file.
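
A rough sketch of that receive-and-map loop, under the same `serde`/`bincode` 1.x assumptions as the probe sketch; the listen port and effect names are arbitrary stand-ins.

```rust
// Minimal server-side sketch. Mirrors the probe's Event/EventType definitions.
use serde::Deserialize;
use std::collections::HashMap;
use std::net::UdpSocket;

#[derive(Deserialize, PartialEq, Eq, Hash)]
enum EventType {
    ActorMessageSent,
    FilesystemWriteOrRead,
}

#[derive(Deserialize)]
struct Event {
    event_type: EventType,
    quality: f32,
}

fn main() -> std::io::Result<()> {
    // Arbitrary listen port for the sketch.
    let socket = UdpSocket::bind("0.0.0.0:7331")?;

    // Static event-type -> sound-effect mapping, hard-coded as proposed above.
    let effects: HashMap<EventType, &str> = HashMap::from([
        (EventType::ActorMessageSent, "whistle"),
        (EventType::FilesystemWriteOrRead, "bagpipes"),
    ]);

    let mut buf = [0u8; 1024];
    loop {
        let (len, _src) = socket.recv_from(&mut buf)?;
        if let Ok(event) = bincode::deserialize::<Event>(&buf[..len]) {
            if let Some(effect) = effects.get(&event.event_type) {
                // A real server would trigger sound synthesis; we just log.
                println!("play {effect} with intensity {}", event.quality);
            }
        }
    }
}
```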

We assume that the probe knows best how any given event type should be aggregated. The server is only concerned with assigning sound effects to event types, executing them, and composing them all together into the overall sound that is played out.
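
Purely as an illustration of probe-side aggregation (the actual strategy is up to each probe), a windowed averaging helper might look like this; the window length and names are made up for the sketch.

```rust
use std::time::{Duration, Instant};

// Illustrative probe-side aggregator: collapses raw occurrences within a short
// window into a single quality value, so the emitted event rate stays bounded.
struct Aggregator {
    window: Duration,
    started: Instant,
    quality_sum: f32,
    occurrences: u32,
}

impl Aggregator {
    fn new(window: Duration) -> Self {
        Self { window, started: Instant::now(), quality_sum: 0.0, occurrences: 0 }
    }

    /// Record one raw occurrence; returns an aggregated quality in [0.0, 1.0]
    /// at most once per window, or None while the window is still open.
    fn record(&mut self, quality: f32) -> Option<f32> {
        self.quality_sum += quality.clamp(0.0, 1.0);
        self.occurrences += 1;
        if self.started.elapsed() < self.window {
            return None;
        }
        let average = self.quality_sum / self.occurrences as f32;
        *self = Self::new(self.window);
        Some(average)
    }
}
```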

With this setup we can also scatter probes across multiple machines and "listen to a datacenter".

Can we have a diagram? Sure!

```mermaid
flowchart TD
subgraph server [Server]
S1[map event types to sound effects]-->SE1
S1-->SE2
S1-->SE3
SE1[bagpipes]-->S3[compose]
SE2[whistle]-->S3[compose]
SE3[Geiger counter]-->S3[compose]
S3-->S4[speakers]
end

subgraph probe1 [Probe 1]
P1[collect events]-->P2[aggregate]
P2-->|events over UDP| S1
end

subgraph probe2 [Probe 2]
P3[collect events]-->P4[aggregate]
P4-->|events over UDP| S1
end
```