As stated before, our analysis attempts to verify whether there are errors in your program. To accomplish that, we rely on two main processes. The first, naturally, is detecting the errors themselves. The second is turning the program, together with the analysis results, into something that can be easily inspected by the user. The first two sections treat these subjects in turn.
Afterwards comes a brief section explaining how to read the result of this analysis. Finally, at the foot of the page, you will find a link to the article that served as inspiration for this work.
This analysis works by verifying the possible values of the program variables, either when they are used in expressions or, in some specific cases, by looking into the characteristics of the pointers being used. Therefore, two things are required. The first is a range analysis, which lets us estimate the possible values the variables can assume. The page describing the range analysis we used can be found here: Range Analysis
The second one is an alias analysis. It gives us an idea of which pointers might (or do) actually point to the same places in memory. Knowing the possible memory addresses a pointer can refer to helps us in the cascaded error detection (we will talk about this later).
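Just to illustrate the kind of question we ask these two supporting analyses, here is a minimal sketch. The `Range` type and the `getRange`/`getLower`/`getUpper` names are assumptions made to keep the example short (the real interface is documented on the Range Analysis page linked above); the alias query uses LLVM's standard `AAResults` interface, whose result enum spelling varies a bit between LLVM versions.

```cpp
#include "RangeAnalysis.h" // assumed header name of the range analysis pass linked above
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Analysis/MemoryLocation.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

// Interval query: for every SSA value the range analysis gives a
// conservative interval of the values it may assume.
void printRange(RangeAnalysis &RA, const Value *V) {
  Range R = RA.getRange(V); // assumed interface
  errs() << V->getName() << " lies in [" << R.getLower() << ", "
         << R.getUpper() << "]\n";
}

// Alias query: may the memory read by Load be the same memory written by
// Store? Anything that is not a proven NoAlias has to be treated as a
// possible alias (conservative).
bool mayReferToSameMemory(AAResults &AA, LoadInst *Load, StoreInst *Store) {
  return AA.alias(MemoryLocation::get(Load), MemoryLocation::get(Store)) !=
         AliasResult::NoAlias;
}
```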
Finally, the pass itself. As we enter each function, we run the range analysis to obtain the range of values each variable can assume. After that, we analyze every instruction in the function. The order in which we do this is not important, since all the ranges were computed beforehand. After detecting the instruction type, we currently handle two cases: divisions and memory accesses (loads and stores).
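A sketch of how such a pass could be organised; the helper functions are placeholders for the checks described in the next paragraphs, and the pass registration and analysis requests (`getAnalysisUsage`) are omitted. This is only an illustration, not the pass' actual code.

```cpp
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Pass.h"
using namespace llvm;

struct ErrorDetection : public FunctionPass {
  static char ID;
  ErrorDetection() : FunctionPass(ID) {}

  bool runOnFunction(Function &F) override {
    // Ranges are computed once per function, so the visiting order of the
    // instructions below does not matter.
    for (BasicBlock &BB : F)
      for (Instruction &I : BB) {
        switch (I.getOpcode()) {
        case Instruction::SDiv:
        case Instruction::UDiv:
          checkDivision(I);     // placeholder, see the division paragraph
          break;
        case Instruction::Load:
        case Instruction::Store:
          checkMemoryAccess(I); // placeholder, see the memory paragraphs
          break;
        default:
          break;
        }
      }
    return false; // analysis only, the IR is never modified
  }

  void checkDivision(Instruction &I);
  void checkMemoryAccess(Instruction &I);
};
char ErrorDetection::ID = 0;
```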
If it is a division instruction, all that is needed is to check the divisor. If zero is among the possible values it can take, then we may have an error. It is very important to state here that this is a static analysis, so the range does not necessarily predict the program values with complete accuracy. The actual values are guaranteed to be within the computed range, but some values in that range may be impossible to reach (one example is a variable that only ever holds -1 or 1: zero lies within its range, but unless there is an error in the program it will never actually be 0). This pass is conservative in the sense that, if no extra information is available, we consider the instruction to be potentially flawed.
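Checking whether the divisor's interval contains zero could then look like this, using the same assumed range interface as in the sketch above:

```cpp
#include "RangeAnalysis.h" // assumed header, see the Range Analysis page
#include "llvm/IR/Instruction.h"
using namespace llvm;

// True when zero lies inside the interval [Lower, Upper] computed for the
// divisor; in that case the division is potentially flawed. With no extra
// information we stay conservative and report it.
bool divisorMayBeZero(RangeAnalysis &RA, Instruction &Div) {
  Value *Divisor = Div.getOperand(1); // second operand of sdiv/udiv
  Range R = RA.getRange(Divisor);     // assumed interface
  APInt Zero(R.getLower().getBitWidth(), 0);
  return R.getLower().sle(Zero) && Zero.sle(R.getUpper());
}
```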
As for NULL pointer errors, a variable is flagged either because no allocating instruction for it was found, or because it was assigned from one such (unallocated) variable and then used. If the memory allocation (or assignment) occurs inside a conditional block, then the error will still be considered possible (this is a conservative analysis).
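As a rough illustration of that test, assuming local pointer variables live in `alloca`'d slots: only a store of an actual allocation that dominates the use removes the suspicion, so an allocation inside a conditional block leaves the error marked as possible. Again, this is a sketch of the idea, not the pass' actual code.

```cpp
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Hypothetical helper: may the pointer kept in Slot still be unallocated
// (NULL) when Use executes?
bool mayBeNullAt(AllocaInst *Slot, Instruction *Use, DominatorTree &DT) {
  for (User *U : Slot->users()) {
    auto *St = dyn_cast<StoreInst>(U);
    if (!St || St->getPointerOperand() != Slot)
      continue;
    Value *Stored = St->getValueOperand()->stripPointerCasts();
    // A "real" allocation: another local object or the result of a call
    // such as malloc (a complete version would inspect the callee).
    bool IsAllocation = isa<AllocaInst>(Stored) || isa<CallInst>(Stored);
    // Only an allocation that dominates the use removes the doubt; a store
    // inside a conditional block does not dominate it, so the error stays
    // "possible" (conservative).
    if (IsAllocation && DT.dominates(St, Use))
      return false;
  }
  return true;
}
```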
Index out of bounds errors, meaning accesses to an array with an index that is either too big or too small, are also detected using the range analysis, combined with an array size evaluation. The latter attempts to discover the array bounds as the program progresses, and the former checks whether the values of the variables used as indexes can fall outside those boundaries.
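For a fixed-size array addressed through a `getelementptr`, the comparison between the index interval and the array bounds could be sketched as follows (same assumed range interface; `getSourceElementType` exists in recent LLVM versions, and real code would have to handle other addressing patterns as well):

```cpp
#include "RangeAnalysis.h" // assumed header, see the Range Analysis page
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// True when the index used to address a [N x T] array may fall outside
// [0, N-1] according to the range analysis.
bool indexMayBeOutOfBounds(RangeAnalysis &RA, GetElementPtrInst *GEP) {
  auto *ArrTy = dyn_cast<ArrayType>(GEP->getSourceElementType());
  if (!ArrTy || GEP->getNumOperands() < 3)
    return true;                     // unknown shape: stay conservative
  uint64_t Size = ArrTy->getNumElements();
  Value *Index = GEP->getOperand(2); // array index of "gep [N x T], p, 0, i"
  Range R = RA.getRange(Index);      // assumed interface
  return R.getLower().isNegative() || R.getUpper().uge(Size);
}
```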
When an instruction is detected as potentially flawed, it is saved in a list. At the end of the instruction evaluation, a depth-first search in the use-chain graph is performed to find cascaded errors: every time a flawed value is used, the variable that uses it becomes flawed as well. This chain of errors is our Cascaded Error Detection. This process is repeated for every variable flagged because of memory errors, zero division errors or previously cascaded errors; since we cascade errors from previously cascaded errors, we call it a use-chain graph and not simply a use list. If a flawed variable is stored in memory, we keep the pointer to that memory in another list, so that, for every following load, we can verify whether the loaded value might be an alias of the stored error. That avoids marking every load as flawed and improves our use-chain following algorithm.
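The cascade step itself amounts to a depth-first search over LLVM's def-use edges. A simplified sketch, in which the `Flawed` set and the `TaintedMemory` list stand in for the pass' real data structures:

```cpp
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Starting from a value already known to be flawed, mark everything that
// (transitively) uses it as flawed too. Stores of flawed values record the
// store so that later loads can be checked with alias analysis instead of
// blindly flagging every load.
void cascadeErrors(Value *Root, SmallPtrSetImpl<Value *> &Flawed,
                   SmallVectorImpl<StoreInst *> &TaintedMemory) {
  SmallVector<Value *, 16> Worklist;
  Worklist.push_back(Root);
  while (!Worklist.empty()) {
    Value *V = Worklist.pop_back_val();
    if (!Flawed.insert(V).second)
      continue; // already visited: the use graph may revisit values
    for (User *U : V->users()) {
      if (auto *St = dyn_cast<StoreInst>(U))
        if (St->getValueOperand() == V)
          TaintedMemory.push_back(St); // remember where the flawed value went
      Worklist.push_back(U);
    }
  }
}
```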
According to compiler theory, a rather simple and efficient way to represent a program is to divide it into modules, which are then divided into functions, which themselves are divided into basic blocks; finally, the instructions sit inside these basic blocks.
It is logical, therefore, to assume that using this structure would give a simple and easy to understand visualization. This diagram, together with the edges that show how the program progresses, is called the control flow graph (CFG) of the program.
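In LLVM this hierarchy maps directly onto nested iteration, which is also how a printing pass would walk the program:

```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Module.h"
using namespace llvm;

// The module contains functions, functions contain basic blocks, and
// basic blocks contain the instructions themselves.
void walkProgram(Module &M) {
  for (Function &F : M)
    for (BasicBlock &BB : F)
      for (Instruction &I : BB)
        (void)I; // visit / print / color the instruction here
}
```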
Given that our results are per instruction (each one is either potentially flawed or not), it is important to tie the error visualization to the instructions themselves. The CFG often abstracts them away for the sake of clarity, since their bulk rather clutters the graph; in our case, however, it was important to show all instructions and to highlight them.
That means the PDF you receive will contain several graphs, each representing a different function. In each of them there will be nodes representing the basic blocks, and inside each block there will be a subgraph whose nodes are the instructions. These nodes carry the error information, using a color code.
The result, as stated above, is a forest: more specifically, the control flow graph of each function of the program or, even more specifically, the set of basic blocks that compose it, with edges showing the paths that can be taken between the blocks as the program progresses. The color code shown below is applied to the instructions in order to represent the potential errors at each one. It is important to remember that an instruction shown as flawed is not necessarily flawed; our analysis only states that there may be an error at that point. The only guarantee we give is that when an instruction is shown as correct, there is certainly no error at that point.
Red indicates a zero division error. This means that the divisor might be zero.
As stated above, there are two possible situations in which a memory error can occur. These instructions will have a green background.
If a previous error occurred, it is very likely that others will follow, and in these cases errors can be found in virtually any instruction that uses a program variable. We have decided to paint these cascaded errors with the same color as the error that caused them, to make it simpler to follow the error chain. Note that even when an instruction does not appear directly connected to the original error, it may still be connected to it through another variable.
In the case where there is no error associated with the instruction, the node will be colored white.
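The graphs themselves are ordinary Graphviz (DOT) text. The sketch below is not the pass' actual printer, only an illustration of the general idea: one cluster per basic block, with one filled node per instruction, colored according to the code above (the `colorOf` helper is a placeholder for a lookup in the error lists, and the edges between blocks are omitted).

```cpp
#include "llvm/IR/Function.h"
#include "llvm/IR/Instruction.h"
#include "llvm/Support/raw_ostream.h"
#include <string>
using namespace llvm;

// Placeholder: the real pass would look the instruction up in its error
// lists and answer "red", "green" or "white" accordingly.
static std::string colorOf(const Instruction &I) { (void)I; return "white"; }

void emitDot(const Function &F, raw_ostream &OS) {
  OS << "digraph \"" << F.getName() << "\" {\n";
  unsigned BlockId = 0, InstId = 0;
  for (const BasicBlock &BB : F) {
    OS << "  subgraph cluster_" << BlockId++ << " {\n"; // one box per basic block
    for (const Instruction &I : BB)
      OS << "    n" << InstId++ << " [style=filled, fillcolor=" << colorOf(I)
         << ", label=\"" << I.getOpcodeName() << "\"];\n";
    OS << "  }\n";
  }
  OS << "}\n"; // edges between blocks omitted for brevity
}
```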
This article, written by Fan Long et al., studies the possibility of dynamic error analysis, and of leading the program to a safe ending instead of having it terminate with errors such as segmentation faults or zero division errors. It was from their error detection that the idea of a static error analysis originated.
Automatic Runtime Error Repair and Containment via Recovery Shepherding
We have come to the end of this description. We hope you have enjoyed reading it, and that this tool may come to be useful to you in the future!
If you have any doubts, or suggestions, you can find us at:
Elton M. Cardoso -> eltonm.cardoso@gmail.com
Fernando Magno Quintao Pereira -> pronesto@gmail.com
Thiago Rodrigues Vasconcellos -> thiagorova@gmail.com