Testing this library isn't super fun right now. The two main issues are:

1. We need to come up with a way to automatically test that the classified output the library produces for a given failure input file is correct.
2. I think testing individual modules will be quite easy, but this main functionality test will require some thinking.
For one, we're working specifically with RSpec output. I do want to expand this lib to handle output from other testing frameworks (Elixir ones in particular, for obvious reasons), but for now I think we can focus on RSpec.
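One way to keep that expansion cheap later is to hide the RSpec-specific parsing behind a small adapter that emits a framework-agnostic failure record. This is purely a sketch of the idea, assuming the lib is written in Ruby; `Parsers::Rspec` and the `Failure` struct are hypothetical names, not the lib's real API, and real RSpec output would need sturdier parsing:

```ruby
# Hypothetical parser interface: each framework adapter turns raw test
# output into a common list of Failure records, so the grouping logic
# never needs to know which framework produced them.
Failure = Struct.new(:spec, :message, :backtrace, keyword_init: true)

module Parsers
  class Rspec
    # Splits the failure section of RSpec output on its "1) ", "2) "
    # markers and pulls out a rough description, message, and backtrace.
    def parse(raw)
      raw.split(/^\s*\d+\)\s*/).drop(1).map do |chunk|
        lines = chunk.lines.map(&:strip)
        Failure.new(
          spec: lines.first,
          message: lines.drop(1).take_while { |l| !l.start_with?("# ") }.join(" "),
          backtrace: lines.select { |l| l.start_with?("# ") }
        )
      end
    end
  end
  # An ExUnit adapter could later implement the same #parse contract.
end
```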
We want the lib to grab test output files and group the errors by their root cause. Right now we actually group them by message similarity, which isn't great, but I think the first step is to (a sketch of what that could look like follows this list):
- Come up with a mock test output file, or a few of them
- Write out what we want the library to output for each one
- Have our test framework compare the actual output with the expected output
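Concretely, those three steps could take the shape of a fixture-driven spec: pair each canned failure output file with a checked-in expected result and loop over the pairs. A minimal sketch, assuming the lib is Ruby and tested with RSpec itself; `FailureClassifier.classify` and the fixture layout are made-up names, not the lib's real API:

```ruby
# spec/classification_spec.rb
# Fixture-based "golden file" test: each *.out fixture pairs a raw
# failure output file with the classification we expect back.
require "json"

RSpec.describe "failure classification" do
  Dir.glob("spec/fixtures/*.out").each do |input_path|
    expected_path = input_path.sub(/\.out\z/, ".expected.json")

    it "classifies #{File.basename(input_path)} as expected" do
      # FailureClassifier.classify is a hypothetical entry point.
      actual   = FailureClassifier.classify(File.read(input_path))
      expected = JSON.parse(File.read(expected_path))

      expect(actual).to eq(expected)
    end
  end
end
```

Keeping the expected side as JSON would also make it cheap to regenerate fixtures when the classification changes on purpose.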
Maybe there's a better way, though. Maybe we don't want to test the rendered output at all and instead just test how the data is grouped. That way we avoid string comparisons and leave output testing for when we decide to test our formatting options.
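If we go that route, the specs could assert on the grouped data structure directly, before any formatting happens. Another rough sketch with hypothetical names (`Grouper.group`, plus the `Failure` struct from the earlier sketch):

```ruby
# Asserts on the grouped data itself rather than on rendered output,
# so formatting changes can't break these specs. Grouper.group is a
# hypothetical stand-in for the lib's grouping logic.
RSpec.describe "grouping by root cause" do
  it "puts failures with the same underlying error in one group" do
    failures = [
      Failure.new(spec: "User#save persists", message: "NoMethodError: undefined method `save' for nil"),
      Failure.new(spec: "User#update persists", message: "NoMethodError: undefined method `update' for nil"),
      Failure.new(spec: "totals add up", message: "expected 3, got 2")
    ]

    groups = Grouper.group(failures)

    # Order-independent check: the two nil errors share a group, the
    # unrelated assertion failure sits alone.
    expect(groups.map { |g| g.map(&:spec) }).to contain_exactly(
      a_collection_containing_exactly("User#save persists", "User#update persists"),
      a_collection_containing_exactly("totals add up")
    )
  end
end
```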