Replies: 3 comments
-
It's been a long time since I looked at that, but IIRC the thresholds come from the original definitions and/or recommended thresholds in related software engineering publications (like you, I wondered about those highlight thresholds when I first used it in a work/reporting context).

One thing you can try for a different function-level calculation is cyclo: https://github.com/sarnold/cyclo. Although it's a lot more basic than cccc, I remember the function-level complexities computed by both being pretty much identical (though that wasn't a formal comparison/evaluation). There are cases where C code confuses the cccc parser and it rejects a lot of the source code, but I haven't dug into that; I think it's mostly preprocessor abuse in the code being analyzed.

Sorry, I've been busy lately, and apparently GitHub is bad at getting my attention when the notification traffic is high. I've talked to the original cccc developer a couple of times recently, and he might have a window to work on this again for a short time; I'll see what kinds of tasks he has in mind. Thanks for the feedback.
-
So, back to your original question... The McCabe paper uses "module" pretty generically, so you should just think of it as a single function/procedure/method in real code. Hope this helps...
-
Hi, Tim Littlefair here, original author of CCCC.

On how this relates to the thresholds, specifically for MVG: McCabe's original work presents MVG as a measure of the difficulty of testing a program, and for me it is clear that the word 'program' encompasses subprograms (i.e. individual methods). From memory, McCabe's graphs only ever covered a single method at a time, but obviously if an overall system contains tens or hundreds of methods, the individual MVG values of the methods can be summed to give a useful metric of the effort to test the system as a whole.

As I've commented before in an email exchange with Steve Arnold, one of the reasons I decided to stop maintaining CCCC is that I came to the conclusion that the most useful thing it gave me (MVG, broken down on a module/function basis) is essentially the denominator of the result I would extract by using a coverage measurement tool on the same body of source code. I concluded it was more useful to concentrate on coverage measurement tools (provided by others) and get both the numerator and the denominator.
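(For a concrete, made-up illustration of that summing: three methods with individual MVG values of 4, 7, and 2 would give a system-level MVG of 13, i.e. roughly 13 basis paths to exercise when testing the system as a whole.)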
-
It's a great tool with a lot of metrics, but there are still some buggy issues I would like to point out in order to help make it a better application.
I'm not sure what standard is used to define a module; one module is not equal to one class, nor to one function. Meanwhile, the note shown on the web page says of MVG: "The analyser counts this by recording the number of distinct decision outcomes contained within each function".
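To make that counting rule concrete, here is a hypothetical function (my own example, not taken from CCCC's documentation) annotated with the usual McCabe-style counting of one per decision plus one for the entry path; CCCC's "distinct decision outcomes" bookkeeping may differ by a small constant for constructs such as switch:

```c
/* Hypothetical example: three decisions, so v(G) = 3 + 1 = 4.
 * CCCC's reported MVG for this function may differ slightly,
 * depending on how it counts decision outcomes. */
int count_even(int n)
{
    int even = 0;
    if (n <= 0)                    /* decision 1 */
        return 0;
    for (int i = 0; i < n; i++) {  /* decision 2 */
        if (i % 2 == 0)            /* decision 3 */
            even++;
    }
    return even;
}
```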
It was kind of scary to see our overall Cyclomatic Number in the tens of thousands when we had "under 15" in mind as the preferred range. I understand that the bigger a class gets, with more lines of code or more functions included, the higher its cyclomatic complexity tends to be.
I might be wrong, but from my understanding after reading multiple articles, I would recommend providing the Cyclomatic Complexity score on a per-function basis. I know we can click on the module name and get detailed information about each function, including the number of functions and the MVG score of each. However, from a user's perspective, the accumulated MVG score is not the one we care about, and it is annoying to compute the average by moving from one page to another. It would be beneficial to show the average MVG of the functions, perhaps marking it yellow if it exceeds 10 and red if it exceeds 50 or so.
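A minimal sketch of the calculation being proposed (the per-function MVG values and the highlight thresholds are made up for illustration; this is not CCCC code):

```c
#include <stdio.h>

/* Pick a highlight colour for an average MVG, using the thresholds
 * suggested above (yellow above 10, red above 50). */
static const char *highlight(double avg_mvg)
{
    if (avg_mvg > 50.0) return "red";
    if (avg_mvg > 10.0) return "yellow";
    return "none";
}

int main(void)
{
    /* Made-up per-function MVG values for one module. */
    int mvg[] = { 4, 12, 7, 2, 9 };
    int n = (int)(sizeof mvg / sizeof mvg[0]);
    int total = 0;
    for (int i = 0; i < n; i++)
        total += mvg[i];
    double avg = (double)total / n;
    printf("functions=%d total_mvg=%d average=%.1f highlight=%s\n",
           n, total, avg, highlight(avg));
    return 0;
}
```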