
Better visibility into running scenarios and better debugging facilities #127

hassy opened this issue Jul 10, 2016 · 3 comments

@hassy
Member

hassy commented Jul 10, 2016

A virtual user / request tracing feature would be useful for getting real-time visibility into what (a subset of) virtual users are doing: the requests being sent, the responses coming back, and so on.

This would make it easier both to write new scripts and to figure out why a particular scenario has stopped working.

@colceagus

I'm subscribing to this issue and can commit to working on a solution.

@jdarling

My initial thought was that the core should emit events related to scenarios; the runner or other tools could then subscribe to those events and do whatever they want with them. Plugins could be allowed to extend that functionality, e.g. pushing all events to Mongo, Cassandra, or Redis.

That would allow users to go from a report to the underlying events: "Ah, I got a 500 from xxx; let me query for that and see what happened."
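
Something along these lines is what I have in mind (a rough sketch only; the names and events below are illustrative, not an existing Artillery API):

```typescript
// Hypothetical sketch: the core emits per-virtual-user events that the runner,
// a plugin, or an external tool can subscribe to and forward elsewhere
// (console, Mongo, Cassandra, Redis, ...).
import { EventEmitter } from "events";

// Illustrative event bus; not part of Artillery today.
const traceEvents = new EventEmitter();

// Somewhere in the core, as a virtual user executes its scenario:
function onRequestSent(vuId: number, method: string, url: string): void {
  traceEvents.emit("request", { vuId, method, url, ts: Date.now() });
}

function onResponseReceived(vuId: number, url: string, status: number): void {
  traceEvents.emit("response", { vuId, url, status, ts: Date.now() });
}

// A plugin or tool subscribes and decides what to do with the events,
// e.g. flag server errors or ship every event to a datastore.
traceEvents.on("response", (evt) => {
  if (evt.status >= 500) {
    console.error("server error observed:", evt);
  }
});
```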

I think the basic format should be similar to Bunyan's log records, giving hostname, pid, eventName, and data.

I know it isn't supported yet, but that way, when Artillery is running in something like a containerized environment, better information is available about the host that saw the errors (maybe it's a host issue) as well as the errors themselves.
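
A rough sketch of what such a record could look like (the field names are an assumption on my part, not a defined format):

```typescript
// Bunyan-style, one-JSON-line-per-event trace record (hypothetical shape).
import * as os from "os";

interface TraceRecord {
  hostname: string;   // which host saw the event (useful in containerized runs)
  pid: number;        // worker process that produced it
  eventName: string;  // e.g. "request", "response", "capture"
  time: string;       // ISO timestamp
  data: unknown;      // event-specific payload
}

function makeRecord(eventName: string, data: unknown): TraceRecord {
  return {
    hostname: os.hostname(),
    pid: process.pid,
    eventName,
    time: new Date().toISOString(),
    data,
  };
}

// Example: emit one JSON line per event, much like Bunyan's output.
console.log(JSON.stringify(makeRecord("response", { url: "/login", status: 500 })));
```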

@hassy
Member Author

hassy commented Jul 22, 2016

@jdarling There are two related but distinct use cases to support. In the first, you want visibility into every action a virtual user has taken (requests sent, what was captured, etc.) to help debug complex scenarios or to verify they work as intended. In the second, you want to see the details of a request that caused the server to return an error response. Both could be solved by logging absolutely everything that happens, but that would hurt performance when running large-scale tests.
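
One way to square that circle might be to trace only a sampled subset of virtual users. A minimal sketch, assuming a hypothetical `sampleRate` knob (not an existing option):

```typescript
// Sketch: trace roughly 1% of virtual users so detailed request/response
// logging doesn't slow down large-scale runs. The sampleRate option is
// illustrative only.
const sampleRate = 0.01;

function shouldTrace(): boolean {
  return Math.random() < sampleRate;
}

function runVirtualUser(vuId: number): void {
  const traced = shouldTrace();
  // ... execute the scenario, emitting detailed trace events only
  // when `traced` is true.
  if (traced) {
    console.log(`tracing VU ${vuId}`);
  }
}
```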
