Purpose of this repository #81
Comments
I would classify this as a broadening of those: it doesn't address them directly but a more fundamental aspect of the conversation. The name "rosetta" was chosen to convey a comparative analysis. In particular, my focus for the rosetta repos I maintain is on example code and metrics. This is in part because it would be impractical for me to keep up with what is going on with each library in the different repos to do a more thorough analysis. Steps past the most basic (data driven |
So, comparative analysis. There are several levels on which we can compare parsers, as far as parsing is concerned.
Parsing for the happy path only is easy; at my $dayjob we have a test task where an applicant needs to write a simple console app that gets passed a file name and an optional Making sure that an app can only be used correctly comes at a complexity cost which needs to live somewhere: in the parser or in the user code. Should the samples show what this complexity would look like? Invalid or mutually exclusive options, or groups of them, for example. I've seen code like that many times and fixed bugs related to it:

// without extra annotations the parser would happily take `--do-this` and `--do-that` at the same time
// happens when the command line is generated from chunks in some script; whatever is executed
// depends on how the options are consumed. They can even be consumed in a different order
// depending on the code path taken, seen that too.
struct Options {
    do_this: bool,
    do_that: bool,
    do_something_else: bool,
}

fn main() {
    ...
    // and this explodes as soon as new options are added, seen that too :)
    if opts.do_this {
        do_this()
    } else if opts.do_that {
        do_that()
    } else if opts.do_something_else {
        do_something_else()
    }
    // optionally, with an else branch, but often without it
    else { unreachable!() }
}

Then there are design quirks. Say we add a |
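To make the contrast concrete, here is a minimal sketch (plain std Rust, no parser library; the flag names are borrowed from the snippet above, everything else is hypothetical) of a parser that rejects the conflicting combination up front, so the check lives in the parser rather than in user code:

```rust
// Minimal sketch: mutual exclusion enforced inside the parser itself,
// so `--do-this --do-that` is rejected before any application logic runs.
fn parse(args: &[&str]) -> Result<(bool, bool, bool), String> {
    let do_this = args.contains(&"--do-this");
    let do_that = args.contains(&"--do-that");
    let do_something_else = args.contains(&"--do-something-else");
    // the conflict is rejected here, not deep inside the app
    if do_this && do_that {
        return Err("`--do-this` and `--do-that` are mutually exclusive".to_string());
    }
    Ok((do_this, do_that, do_something_else))
}

fn main() {
    // correct usage is accepted
    assert!(parse(&["--do-this"]).is_ok());
    // incorrect usage is rejected instead of silently taking one branch
    assert!(parse(&["--do-this", "--do-that"]).is_err());
}
```

In a real library such as clap this kind of constraint is expressed declaratively (e.g. via conflict annotations) rather than hand-rolled; the sketch only illustrates where the complexity ends up living.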
I think I'm missing what you meant to get at with the code samples and the discussion of issues at your $dayjob. |
It is very easy to make a parser if you only consider situations where it will be given correct data, and a lot of applicants do exactly that to save time. It is harder to make a parser that will accept correct usage and reject incorrect usage. From the user's point of view, rejecting incorrect usage is just as important as accepting the correct one. |
How is that tied into the purpose of this repository? |
Is this repo comparing just the happy path or is it comparing parsers in a state you would actually want to use in your app? |
It is doing an automatically generated, metric-based comparison. There are a lot of different design trade-offs, and we leave it to people to dig in and decide between them. In a lot of cases, |
Well, it's not comparing apples to apples then. Parsers have different behavior. I'm not saying … Without making sure the parsers behave identically, this project is basically … the figures on the main page are a bit misleading. A check for correctly (or incorrectly) handling sample inputs could be done fully automatically too. Anyway, feel free to close the ticket if you are okay with the current state. |
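The "fully automatically" claim can be sketched in a few lines (plain std Rust; all names here are hypothetical, not part of this repo): a shared table of sample command lines with expected accept/reject verdicts, run against each parser under comparison, scoring how many verdicts match.

```rust
// Sketch of an automatic correctness check: every parser under comparison
// is run against the same sample inputs, and we count matching verdicts.
type Parser = fn(&[&str]) -> Result<(), String>;

fn score(parser: Parser, cases: &[(&[&str], bool)]) -> usize {
    cases
        .iter()
        .filter(|&&(args, should_accept)| parser(args).is_ok() == should_accept)
        .count()
}

// toy stand-in for one of the compared parsers
fn toy_parser(args: &[&str]) -> Result<(), String> {
    if args.contains(&"--do-this") && args.contains(&"--do-that") {
        Err("conflicting options".to_string())
    } else {
        Ok(())
    }
}

fn main() {
    let cases: &[(&[&str], bool)] = &[
        (&["--do-this"], true),               // valid usage must be accepted
        (&["--do-this", "--do-that"], false), // invalid usage must be rejected
    ];
    // the toy parser gets both verdicts right
    assert_eq!(score(toy_parser, cases), 2);
}
```

Such a table could sit next to the existing metric generation and be rerun for each library, so the "does it reject bad input" dimension would show up alongside the current numbers.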
Trust me, as the maintainer of clap, I fully understand that the numbers don't stand on their own, like comparing |
What is the purpose of this repository?
I guess it's a continuation from #80 and #77