
Automated Testing in MQL4
The problem of coding trading strategies
Any decent strategy will easily grow to several thousand lines of code over time. The code of RobinVOL v2, for example, is above 7,000 lines of C++ and MQL4. As you spend time studying the strategy and the markets, you figure out ways to improve it, for example increasing capital safety, raising profitability or reducing drawdown.
All this is good, but it requires changes to the code, and that is always a risk. Every change becomes more difficult, as it could affect other parts of the code as a side effect.
To overcome this problem I programmed a basic automated testing framework in MQL4. Then, for every feature of the EA, I write a few lines of code that test that functionality.
Unit and functional testing
Unit tests
A unit test is written to validate that an internal function returns the correct result. For example, suppose we have created a function that returns the division of two numbers. In pseudo-code:
double division( double a, double b ) {
   if( b == 0 ) raise error( "division by zero" );
   return( a / b );
}
We would write the following unit tests to validate that it works correctly:
AssertEqual( 10, 5, 2)
AssertEqual( 20, 20, 1)
AssertError( 10, 0, true)
Every time the code is compiled, all the unit tests are executed, so the build cannot complete unless every test passes. If I make a change that affects this function, the tests will fail, and that lets me quickly locate where the problem is.
If, for example, we deleted line number 2 (the division-by-zero check), the third assertion would fail and the build would not complete.
Functional tests
Functional tests are similar to unit tests but are focused on features from the user's perspective. When programming strategies, these functional tests are run with the help of the backtester / optimizer.
For example, this is one of the many functional tests written for RobinVOL to validate that a two-year backtest gives the expected results. It needs the testing framework to run; I will publish that in a future post.
bool initTest_10() {
   int testid = 10;
   testName = "Overall results default settings";
   testStart = 946685000; // 01-01-2000
   testEnd = 978377100; // 30-12-2001
   writeTestHeader( testid, testName, testStart, testEnd );
   startAsserts(10);
   return( true );
}
bool assertsTest_10() {
   assertEqual( "Ending profit matches", 47119.33, SummaryProfit );
   assertEqual( "Number of trades", 454, SummaryTrades );
   assertEqual( "Max consecutive wins", 18, ConProfitTrades1 );
   assertEqual( "Max consecutive losses", 11, ConLossTrades1 );
   endAsserts(10);
   return( true );
}
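The framework helpers used above (startAsserts, assertEqual, endAsserts) will be published separately; until then, here is one plausible shape for the bookkeeping they imply. This is purely an illustration and again written in C++ rather than MQL4 for the reasons noted earlier (an MQL4 version would use Print and MathAbs):

```cpp
#include <cmath>
#include <cstdio>

// Illustrative bookkeeping for a batch of assertions; the article's
// actual framework may differ.
int assertsRun = 0;
int assertsFailed = 0;

void startAsserts(int testid) {
   assertsRun = 0;
   assertsFailed = 0;
   std::printf("--- test %d: running asserts ---\n", testid);
}

// Backtest figures are floating point, so compare with a tolerance.
void assertEqual(const char *name, double expected, double actual) {
   assertsRun++;
   if (std::fabs(expected - actual) > 0.01) {
      assertsFailed++;
      std::printf("FAIL %s: expected %.2f, got %.2f\n", name, expected, actual);
   }
}

// Returns true only when every assertion in the batch passed, so the
// test runner can refuse to complete the build otherwise.
bool endAsserts(int testid) {
   std::printf("--- test %d: %d asserts, %d failed ---\n",
               testid, assertsRun, assertsFailed);
   return assertsFailed == 0;
}
```

The key design point is that endAsserts reports success or failure for the whole batch, which is what lets the build stop automatically when a backtest no longer produces the expected numbers.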
The improved bug fixing process
A good practice when you find a bug is not to fix it immediately. Instead, it is better to write one or more test cases that reproduce the bug, so that running the automated tests shows a failure at that point. Once the tests are written, it is time to fix the bug until the system passes all the tests again.
Conclusion
One of the nice things about automated testing is that it almost completely prevents regression errors. If you have well-written tests covering all the functionality and all the fixed bugs, it is very unlikely that a modification will break a part of the code that already works. It is also the safest way to carry out major refactoring tasks.
Currently RobinVOL has more than 3,000 lines of test code alone, which brings the whole project above 10,000 lines of code.
This is the first part of the article about unit and functional tests. In a future second article I will publish the MQL4 code of the framework, along with documentation on how to use it.