
failed assertion lexical_scope == cur_module #1174

Open
luizademelo opened this issue Oct 1, 2024 · 8 comments
Labels
Enhancement, fuzzing (Automatic tool generated code that is not expected to always be valid)

Comments

@luizademelo

When running iverilog on the following program:

module module_0 #(
    parameter id_1  = 32'd92,
    parameter id_3  = 32'd50,
    parameter id_4  = 32'd25,
    parameter id_8  = 32'd99,
    parameter id_9  = 32'd40
) ();

    case ((1))
      1: begin
        if (id_3) begin
          else begin
            end else begin
              if (1)
                  id_3 = id_9[1];
              
            end
          end
        end 
    endcase
  assign id_8 = 1;

endmodule

iverilog outputs the following:

test656.v:16: error: Invalid module item.
test656.v:20: syntax error
test656.v:22: error: Invalid module item.
test656.v:2: assert: pform.cc:1478: failed assertion lexical_scope == cur_module
Aborted

I'm using this Icarus Verilog version:
Icarus Verilog version 13.0 (devel) (s20221226-526-g5cbdff202)
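
For reference, the assert can presumably be reproduced by saving the program above as test656.v (the file name shown in the error messages) and compiling it directly:

    iverilog test656.v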

@caryr
Collaborator

caryr commented Oct 4, 2024

Assigning to parameters in procedural code is undefined, and the example also has mismatched begin/end blocks and consecutive else statements. Your code generator needs some serious work.
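
For context, a structurally legal version of the test case might look roughly like the sketch below. It only illustrates the problems described above (a variable instead of a parameter as the target of the procedural assignment, the case moved into a procedural block, and balanced begin/end with every else paired to an if); it does not preserve whatever behavior the fuzzer intended:

module module_0 #(
    parameter id_1 = 32'd92,
    parameter id_9 = 32'd40
) ();

    // id_3 is now a variable, so the procedural assignment below is legal
    reg  [31:0] id_3;
    // id_8 is now a net, so the continuous assignment below is legal
    wire        id_8;

    initial begin
        case (1)
          1: begin
            if (id_1 != 0)
                id_3 = id_9;   // if/else pairs and begin/end are balanced
            else
                id_3 = 0;
          end
          default: id_3 = 0;
        endcase
    end

    assign id_8 = 1;

endmodule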

@larsclausen
Collaborator

This looks like it is generated by a fuzzer that tries to find inputs that cause crashes/asserts in the program. I think we should look into fixing this, but maybe it is not the highest priority.

@pronesto

pronesto commented Oct 4, 2024

Hi larsclausen, that's correct: the programs are generated via fuzzing. We are experimenting with different fuzzing techniques and reporting issues in various tools. The generator is ongoing work: we use feedback from the developers to improve the fuzzer. Notice that the fuzzer produces both valid and invalid code (by design). Incidentally, we have found more confirmed issues with invalid code.

@larsclausen
Collaborator

Hi larsclausen, that's correct: the programs are generated via fuzzing. We are experimenting with different fuzzing techniques and reporting issues in various tools. The generator is ongoing work: we use feedback from the developers to improve the fuzzer. Notice that the fuzzer produces both valid and invalid code (by design). Incidentally, we have found more confirmed issues with invalid code.

This is good work, keep the bug reports coming. We might not have the bandwidth to fix them all immediately, but stress testing the system and finding issues is much appreciated.

@pronesto

pronesto commented Oct 5, 2024

Hi larsclausen, thank you! We plan to run bug-finding campaigns whenever we implement significant changes in ChiGen. In the meantime, we would like to ask for your permission to post a link to one of the issues we've reported. The link will be posted in our README. Please note that we will label them as 'issues,' not 'bugs.' Including these issues will help attract more users to ChiGen.

@larsclausen
Collaborator

Feel free to link

@caryr
Collaborator

caryr commented Oct 11, 2024

Incidentally, we have found more confirmed issues with invalid code.

When you think about the psychology of code development, this likely makes sense. Developers are usually time constrained, and they tend to focus on implementing functionality rather than checking what the code does when complete nonsense is given to it. Sure, there are checks for certain invalid cases, but we often assume that our users will not stray too far from valid code. When you add fuzzing, you often end up with horribly invalid code that sends the tool into a bad state where it may report some issues but eventually asserts/core dumps because things are so far from valid. These are worthwhile things to fix, but from a user's perspective, would they rather we generate good error messages for the invalid code and not assert or core dump, or would they prefer that we focus on adding new features?

We do appreciate the bug reports and some of them may be easy to fix so they could get fixed quicker than expected. I was thinking we should add a Fuzzer tag to distinguish these from ones that are reported by users who actually ran into an issue while coding.

@pronesto

Hi caryr,

I was thinking we should add a Fuzzer tag to distinguish these from ones that are reported by users who actually ran into an issue while coding.

That seems like a good idea to me. If we run a new fuzzing campaign on Icarus, that would be a nice tag to have.

caryr added the fuzzing (Automatic tool generated code that is not expected to always be valid) label on Nov 10, 2024