
Add documentation for tests #5

Open
liquidnya opened this issue May 24, 2022 · 4 comments
Labels
enhancement New feature or request

Comments

@liquidnya

This is something I plan on doing eventually:

  • Explain how to set up new test cases and how to understand the test logs.
  • Explain how to read what went wrong when a test fails.
  • Additionally, explain how the tests are set up internally.
@ToransuShoujo ToransuShoujo added the enhancement New feature or request label May 24, 2022
@skybloo

skybloo commented Jun 15, 2022

Would you be open to changing up how they're formatted? I think I see how they work, but having them in interpreted log files makes debugging a bit harder. Happy to take on the work and document as I go.

@liquidnya
Author

The files are formatted this way because it makes creating test cases very easy: all you need to do is connect the bot to a Twitch channel, open Chatty, enter your commands, and, when you are done manually testing, copy the message log from Chatty into a file.
These log files are then replayed (with time and timers simulated), and the bot's output messages are checked.
There are also some further checks, e.g. that certain files have certain content.
I also tried to add a lot of debugging functionality to the test cases, since I was debugging with them myself.
I think the problem is just a lack of documentation on how these tests work, and a lack of guidance on how to debug using them.
But I also agree that all the checks apart from the chat message replay could use some work.
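To illustrate the replay idea, here is a minimal, self-contained sketch: parse Chatty-style log lines (assumed format `[HH:MM:SS] <user> message`), feed them to a handler, and collect the bot's responses for comparison. All names here (`parseLine`, `replay`, the toy handler) are illustrative, not the real simulation.js API.

```javascript
// Example Chatty-style log lines (format assumed for illustration).
const logLines = [
  "[12:00:01] <viewer1> !add 1A2-B3C-4D5",
  "[12:00:02] <viewer2> !list",
];

// Tiny parser for the assumed "[HH:MM:SS] <user> message" format.
function parseLine(line) {
  const match = /^\[(\d{2}:\d{2}:\d{2})\] <([^>]+)> (.*)$/.exec(line);
  if (!match) return null;
  return { time: match[1], sender: match[2], message: match[3] };
}

// Replay every parsed line through a handler, collecting every message
// the "bot" would send so it can be checked against expected output.
function replay(lines, handleMessage) {
  const responses = [];
  const respond = (text) => responses.push(text);
  for (const line of lines) {
    const parsed = parseLine(line);
    if (parsed) handleMessage(parsed.message, parsed.sender, respond);
  }
  return responses;
}

// Toy handler standing in for the real bot logic.
const responses = replay(logLines, (message, sender, respond) => {
  if (message.startsWith("!add")) respond(`${sender}, your level was added.`);
  if (message === "!list") respond("1 level in the queue.");
});

console.log(responses);
// → [ 'viewer1, your level was added.', '1 level in the queue.' ]
```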

> Would you be open to changing up how they're formatted?

@skybloo What would you change to make them better? What makes debugging harder? Is it not being able to set breakpoints? Is there something I can do to make debugging with the log replay easier?

I made it so that if a chat replay message is wrong, both the test log location and the code location where the message was emitted are shown:
[screenshot: test failure output showing the log location and the code location]

@skybloo

skybloo commented Jun 16, 2022

> The reason of how the files are formatted is because it is super easy to create these test cases. Since all you need to do is connect the bot to a twitch channel and then have Chatty open and then you can enter your commands and then when you are done manually testing you can copy the log of messages from chatty into a file.

This was the piece I was missing: how the logs are generated. I'm personally not a huge fan of "one file reads a bunch of other files", but that's much more a personal preference than any kind of value difference. I'll look at adding some more specific documentation, and I might write a couple of helpers to generate logs, just because I'd like to be able to do TDD.

@liquidnya
Author

@skybloo With the merge of #22 into master, I moved the code for setting up a test case into https://github.com/ToransuShoujo/quesoqueue_plus/blob/master/tests/simulation.js.
I also wrote a new test case (https://github.com/ToransuShoujo/quesoqueue_plus/blob/master/tests/data-conversion/data-conversion.test.js) that uses the queue programmatically, without the logs.
In that specific test case, HandleMessage (in index.js) is never called, but you could use it as a template and call test.handle_func(message, sender, mockedFunction) yourself, then check that mockedFunction was called with the correct response.
In theory, we could also set up test cases that only test queue.js instead of going through index.js.
The log tests are really meant as integration/regression tests; so far the only unit tests we have cover twitch.js.
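The programmatic approach above can be sketched with a hand-rolled mock. This mirrors the (message, sender, respond) call shape mentioned for test.handle_func, but the handler here is a stand-in, not the real index.js, and `makeMock` is a hypothetical helper:

```javascript
// Minimal mock: a function that records every call's arguments.
function makeMock() {
  const calls = [];
  const fn = (...args) => calls.push(args);
  fn.calls = calls;
  return fn;
}

// Stand-in handler with the same (message, sender, respond) shape
// as the handle_func described above.
function handle_func(message, sender, respond) {
  if (message === "!queue") respond(`${sender}, the queue is open.`);
}

// Drive the handler directly, then inspect what the mock received.
const mockedRespond = makeMock();
handle_func("!queue", "viewer1", mockedRespond);

console.log(mockedRespond.calls);
// → [ [ 'viewer1, the queue is open.' ] ]
```

With a real test runner (e.g. Jest, which data-conversion.test.js suggests the repo uses), `jest.fn()` would replace the hand-rolled mock and the check would become an `expect(...).toHaveBeenCalledWith(...)` assertion.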
