Let’s say we have a simple application A that can multiply and divide two given numbers, as represented in figure 1.
Now let’s think about the possible scenarios in which this application might fail. Alternatively, you can think of this as the number of ways to navigate through the application’s code, or the number of code paths for short.
You might say that there are two: one scenario where the user selects the multiply operation and one where the divide operation is selected. But then you are also claiming that the application will continue to run flawlessly once both operations have been successfully tested a single time. This isn’t necessarily true. With input numbers 5 and 4 as x and y respectively, both operations might execute as expected. But with numbers 5 and 0, the divide operation might not, if division by zero wasn’t properly covered in the code.
So in this example, if numbers x and y were 8-bit numbers, there would be at least 131072 possible code paths, or 256 times 256 times 2 (figure 2). Meaning that, in theory, each of these code paths could contain a flaw.
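To make this concrete, here is a minimal sketch of what application A might look like; the function name and structure are assumptions, since the article gives no actual code. It also shows where the code path count comes from:

```python
# Hypothetical sketch of application A: multiply or divide two 8-bit numbers.
def apply_operation(op: str, x: int, y: int) -> float:
    if op == "multiply":
        return x * y
    elif op == "divide":
        return x / y  # flaw: division by zero is not covered here
    raise ValueError("unknown operation")

# Each 8-bit input has 256 possible values, and there are 2 operations,
# so the number of code paths is 256 * 256 * 2:
code_paths = 256 * 256 * 2
print(code_paths)  # 131072
```

Calling `apply_operation("divide", 5, 0)` raises an unhandled `ZeroDivisionError`, which is exactly the kind of flaw that only surfaces on a specific code path.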
The probability of flaws in application code
For a given application we can formulate the probability of (undiscovered) flaws as follows:

P(F) = (U / C) × P(Fu)

Where P(F) is the resulting probability, U the number of untested code paths, C the possible number of code paths and P(Fu) the probability of a flaw in an untested code path.
The latter is theoretically always 50 percent because, well, it’s not tested. In practice however, this greatly depends on the quality of the developer (team).
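The formula translates directly into code. A small sketch (the function name is illustrative):

```python
def probability_of_flaw(untested: int, total: int, p_flaw_untested: float) -> float:
    """P(F) = (U / C) * P(Fu): the probability of an undiscovered flaw,
    given U untested code paths out of C total."""
    return (untested / total) * p_flaw_untested

# With half of application A's 131072 paths untested and a 50 percent
# chance of a flaw per untested path:
print(probability_of_flaw(65536, 131072, 0.5))  # 0.25
```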
Alternatively, the following formula produces the mentioned probability after a specific number of code path executions:

P(Fe) = (1 − 1/C)^e × P(Fu)

Where P(Fe) is the resulting probability after e executions, and e the number of executions.
Note that this formula applies to applications where each code path has an equal probability of being executed, like it is the case for our sample application A.
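In code, again as a sketch with illustrative names:

```python
def probability_after_executions(e: int, total_paths: int, p_flaw_untested: float) -> float:
    """P(Fe) = (1 - 1/C)^e * P(Fu): the probability of an undiscovered flaw
    remaining after e executions, assuming every code path is equally
    likely to be executed on each run."""
    return (1 - 1 / total_paths) ** e * p_flaw_untested

C = 256 * 256 * 2  # code paths of sample application A
# Before any execution, the probability is simply P(Fu):
print(probability_after_executions(0, C, 0.5))  # 0.5
```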
Using the formula we get the following results with regard to application A.
The yellow line in the graph represents the probability of encountering an untested code path after e executions. The blue and green lines represent the probability of encountering an undiscovered flaw after e executions, where P(Fu) is 50 percent and 10 percent respectively.
Interestingly enough, if application A was written by a developer who delivers 90 percent correct code on average (P(Fu) = 10 percent), there would still be a 5 percent chance of discovering a flaw even after 80000 application executions, or a 1 percent chance after 300000 executions. If a thousand people made use of this application once a day, it would take 80 days to reach this 5 percent and almost a year to reach the 1 percent.
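These figures can be checked directly against the formula for P(Fe):

```python
# Verifying the quoted figures with P(Fe) = (1 - 1/C)^e * P(Fu),
# for application A (C = 131072) and a developer with P(Fu) = 10 percent:
C = 256 * 256 * 2

for e in (80000, 300000):
    p = (1 - 1 / C) ** e * 0.10
    print(e, round(p, 3))  # roughly 0.05 after 80000, 0.01 after 300000
```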
More concretely, if division by zero wasn’t properly covered upon release of the application, there would be a 50 percent chance that this flaw would remain hidden after 350 executions, and a 5 percent chance after 1500 executions.
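These numbers can be reproduced under the assumption that the division-by-zero flaw occupies 256 of application A’s 131072 code paths (one for each value of x, combined with y = 0 and the divide operation), each equally likely to be hit:

```python
# Probability that the division-by-zero flaw stays hidden after e executions,
# assuming it sits in 256 of the 131072 equally likely code paths:
C = 256 * 256 * 2
flawed = 256  # any x, with y == 0 and the divide operation

for e in (350, 1500):
    p_hidden = (1 - flawed / C) ** e
    print(e, round(p_hidden, 2))  # roughly 0.5 after 350, 0.05 after 1500
```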
And yet we are talking here about the simplest of applications.
Smart contracts and The DAO
Let it now be clear that (structural) testing is essential before releasing a new contract, or any other application where value is at stake. Testing to the maximum extent possible, that is. As we saw, the possible number of code paths depends on the number of variables an application or code block uses (a smart contract is actually a code block in the greater Ethereum application). More specifically, it depends on the number of possible variable values, whether those variables are block variables, application variables or external variables. Even asynchronous execution of the same code should be considered a different code path.
In practice, it isn’t always feasible to test each and every possible code path. We can and should however test with carefully selected values for each variable and their possible permutations.
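As a sketch of what such a selection could look like for application A (the values and helper function below are illustrative, not from the article), testing a handful of boundary values per variable and all their permutations covers the dangerous edges with only a few dozen cases instead of 131072 paths:

```python
from itertools import product

boundary_values = [0, 1, 127, 128, 255]  # edges of the 8-bit input range
operations = ["multiply", "divide"]

def run(op, x, y):
    # Corrected version of application A, with the zero case guarded:
    if op == "multiply":
        return x * y
    if y == 0:
        raise ZeroDivisionError("y must not be zero when dividing")
    return x / y

# All permutations of operation and carefully selected input values:
cases = list(product(operations, boundary_values, boundary_values))
print(len(cases))  # 50 test cases

for op, x, y in cases:
    try:
        run(op, x, y)
    except ZeroDivisionError:
        pass  # expected for divide with y == 0
```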
After release, the contract has yet to stand the test of time before it can be considered safe, and before any real value should be put in. Generally, the amount of time is dependent on the complexity of the contract.
The DAO was essentially a dumb contract upon release. It had yet to grow smarter. With that amount of money in it within such a short timeframe, it was an accident waiting to happen. And we didn’t have to wait very long. The deus ex machina appearing and simply erasing any trace of it doesn’t help improve the quality of smart contracts. On the contrary, it takes away the motivation to do better next time. Not to mention that it destroys, in the process, the essence of the blockchain technology that got us this far in the first place.
Equipping contracts with an indicator telling the user how safe or smart they actually are could help avoid future failures of this magnitude. Some proof of intelligence, if you will.