Hrm. Maybe I was mistaken… I thought fuzzing referred to altering a set of test cases, whether those alterations are random or predicted. I'm not sure my concept would work 100%, but I was thinking of a deliberate, prioritized set of changes rather than purely random ones.
Why, you might ask? Good question: to help with reproducibility. Without being able to reproduce an issue, it gets much harder to troubleshoot. Of course, things like logs and output help, but nothing helps more than reproducing the failure so you can step through the code to the trouble spot. Randomness makes that harder to track down.
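One common middle ground, as a rough sketch: keep the mutations random-looking but drive them from an explicit seed, so any failing input can be replayed exactly. The `mutate` function and its parameters here are hypothetical, just to illustrate the idea:

```python
import random

def mutate(data: bytes, seed: int) -> bytes:
    """Deterministically mutate `data`: the same seed always
    produces the same output, so a failing case can be replayed."""
    rng = random.Random(seed)           # seeded PRNG, not global randomness
    out = bytearray(data)
    for _ in range(rng.randint(1, 4)):  # flip a handful of bytes
        i = rng.randrange(len(out))
        out[i] = rng.randrange(256)
    return bytes(out)

# Logging only the seed is enough to reproduce the exact mutated input.
first = mutate(b"hello world", seed=42)
replay = mutate(b"hello world", seed=42)
assert first == replay
```

With that, the fuzzer only needs to record the seed of a failing run, and you can regenerate the exact input and step through the code from there.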
Come to think of it, you could put in an AI, or at least a simple metric system, to understand the risk areas better: place some sort of weight on the lines/steps that have caused issues before.
Then you could see, from the data, which areas of code being touched would be risky.
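That weighting idea could be sketched very simply: count past failures per code area and rank areas so the riskiest get fuzzed first. Everything here (the `RiskTracker` class and the area labels) is hypothetical, not any particular tool's API:

```python
from collections import Counter

class RiskTracker:
    """Toy weight system: tally failures per code area, then rank
    areas so the ones with the most past issues get priority."""
    def __init__(self):
        self.failures = Counter()

    def record_failure(self, area: str):
        self.failures[area] += 1

    def priorities(self):
        # areas touched by the most past failures come first
        return [area for area, _ in self.failures.most_common()]

tracker = RiskTracker()
tracker.record_failure("parser.c:parse_header")
tracker.record_failure("parser.c:parse_header")
tracker.record_failure("net.c:read_frame")
print(tracker.priorities())  # parser.c:parse_header ranked first
```

A real system would presumably weight by recency and severity too, but even a plain counter gives you the "which code areas are risky" view from data.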
Hrm. Not sure if this is one of those ideas that sounds better on paper than in practice, or if it's actually feasible.
Filed under: Uncategorized