Performance comparison of X++ compiled into CIL

In Ax2012 it's now possible to compile X++ and run it under the .NET run-time, as opposed to having the X++ kernel interpret its custom p-code format. This is a fairly major development from a technical standpoint, but I was interested in testing the actual performance difference between the two execution methods.

The basic code for the test is as follows. Note: this method is defined within the class ProcessTimeTest, which extends RunBaseBatch (*). The main method creates an instance of the class, then calls this method 50 times, the idea being to average out the results.
* The suggested best practice for Ax2012 and beyond is to use the Business Operation Framework, but that's overkill for this job.
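The "call it 50 times and average" harness described above can be sketched generically. This is a hypothetical Python analogue of that pattern (the names here are mine, not the actual X++ main method):

```python
import time

def run_timed(test, runs=50):
    """Run `test` repeatedly and return the mean elapsed time in
    milliseconds -- the same averaging idea as the main method
    described above (hypothetical harness, not the author's code)."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        test()
        timings.append((time.perf_counter() - start) * 1000.0)
    return sum(timings) / len(timings)

# Example: time a trivial workload over 5 runs.
avg_ms = run_timed(lambda: sum(range(10_000)), runs=5)
```

Averaging over repeated runs smooths out one-off costs such as cold caches and JIT warm-up, which matters here since the first CIL run pays a compilation cost.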

protected void runSingleTest()
{

    int64               startTime,endTime,dt;
    System.DateTime     dateTime;
    int                 loopCount,innerCount;
    real                dummyReal = 1;
    str                 stringBuf;
    ProcessTimeTestLog  log;

    NoYesID             runningAOS  = Global::isRunningOnServer();
    NoYesID             runningCLR  = xSession::isCLRSession();

    // Start the timer
    dateTime = System.DateTime::get_Now();
    startTime = dateTime.get_Ticks();

    // Do pointless activity, lots of times.
    for(loopCount = 1;loopCount <= 100;loopCount++)
    {
        InventTable::find(strFmt("__%1",loopCount));    // cache-miss
        InventTable::find('1000');                      // cache-hit

        dummyReal = 1;
        stringBuf = "";
        for(innerCount = 1;innerCount < 100;innerCount++)
        {
            // FP arithmetic
            dummyReal = dummyReal * 3.14152;
            dummyReal = dummyReal / 2.89812;
            dummyReal = dummyReal - 0.00310;
            dummyReal = dummyReal + 1.21982;

            // String concatenation + metadata
        stringBuf += strFmt("%1-%2",
            innerCount,
            tableId2name(tablenum(SalesLine)));

            // Construction+removal of object (GC overhead)
            this.newObject();
        }

        this.recursiveFunctionCall();
    }

    // Stop timer and save results
    dateTime            = System.DateTime::get_Now();
    endTime             = dateTime.get_Ticks();
    dt                  = endTime - startTime;  // in ticks


    log.clear();
    log.RunningInCLR    = runningCLR;
    log.RunningOnAOS    = runningAOS;
    log.RunningTime     = dt / 10000; // tick = 1/10,000th of a ms
    log.insert();

}

Basic performance test code

The above code is completely pointless, but it is testing the following aspects of the run-time:

  • Record querying/selection. The first find on the item table (InventTable) is a cache miss, and will cause an actual query against the database. The second find should be served by the AX record cache.
  • A bit of floating point arithmetic, and some string concatenation, which also includes an AOT/meta-data query (resolving table ID/name).
  • Construction and removal of a new object via method newObject, which just creates an instance of the same class. It's in a separate method to ensure it's fully out of scope and destroyed. Note the actual removal is subject to the garbage collection cycle, which is completely different for code running under the CIL.
  • A recursive function call, which goes 10 deep. I wouldn't expect the recursion itself to make a huge difference; it's more to test the overhead of function calls in general.
So, each test does this basic sequence of operations 100 times. To run the test under the AX run-time, it's just a case of running the class (i.e. right-click, Open). To run under the .NET run-time, the CIL needs to be updated first (a full or incremental CIL generation from the development workspace).

Then the job is run as a batch process (make sure your AOS is correctly configured). Towards the end of the code you'll see that it writes the timing information to a table, along with flags indicating whether the code is running on the AOS and whether it was executed by the AX or the .NET run-time.
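On the units used in the logging code: a .NET tick is 100 ns, so there are 10,000 ticks per millisecond, which is what the dt / 10000 expression relies on. A quick sanity check of that arithmetic (a Python sketch, using integer division to roughly mirror the assignment to an integer field):

```python
TICKS_PER_MS = 10_000  # a .NET tick is 100 ns, so 10,000 ticks per ms

def ticks_to_ms(dt_ticks):
    """Convert a .NET tick delta to whole milliseconds,
    mirroring the dt / 10000 expression in the X++ listing."""
    return dt_ticks // TICKS_PER_MS

# One second is 10,000,000 ticks -> 1000 ms.
assert ticks_to_ms(10_000_000) == 1000
```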

The results are encouraging:


That's about a 75% improvement in execution time when running under the .NET run-time! There are plenty of likely reasons for this: .NET has a JIT compiler, meaning the code executes as native machine code; the garbage collector is more sophisticated; the C# code optimizer is more advanced; and so on.

This is certainly good news, but keep in mind that CPU is rarely the bottleneck in Ax implementations. As a developer, the main things you should be focusing on are (still):
  • Table and index structure
  • Using caching effectively
  • Minimal recalculation of data that can be pre-stored for reporting and inquiries
  • and all the other stuff that isn't apparent until it implodes during go-live!

This is definitely a great effort from the technical team at Microsoft, as it would have been no mean feat to accurately translate the X++ p-code into CIL.

I'll be following up soon with a bit more information on code running under the CIL. If interested there's also a good (if confusingly coloured) blog posting here on debugging IL code.
