Saturday, January 28, 2017

Profiling Apex, Part 2 (Display The Data)

LimitsProfiler is on GitHub

(Install Version 1.1)

So now that I have a mechanism to take snapshots of governor limit usage, I need a way to display it. This display mechanism should be easy to connect to any org, extend, and run multiple times. I'm starting out with a Visualforce Page for the front end interface. Perhaps down the road I will develop a Lightning UI, but that's definitely a bigger investment of my time.

Anyway, after thrashing about and scrapping a few rough drafts, I eventually settled on an approach I'm pretty happy with. I want to be able to specify which limits to display, select a profiler type and number of iterations, and then run the measurement as many times as needed. From both a technical and a convenience standpoint, it seems simplest to persist this configuration in a Hierarchy Custom Setting. While it's not strictly necessary, I added a small interface to manage the setting so you can switch back and forth between configuration and profiling in one consistent UI.

The configuration page isn't very interesting, so I won't cover it here (you can see it on GitHub). But the display page was a breeze to write, since all of the limits data points are already set up as properties on the LimitsSnapshot class. It's a bit tedious to write out, but mostly just copy pasta. The basic idea:

public with sharing class LimitsProfilerController
{
    public Boolean isTypeDefined { get; private set; }
    public List<LimitsSnapshot> diffs { get; private set; }
    public String summary { get; private set; }

    @TestVisible final LimitsProfiler profiler;

    public LimitsProfilerController()
    {
        diffs = new List<LimitsSnapshot>();
        summary = getProfilerType();
        try
        {
            profiler = (LimitsProfiler)Type.forName(getProfilerType()).newInstance();
            isTypeDefined = true;
        }
        catch (NullPointerException n) { isTypeDefined = false; }
        catch (TypeException t) { isTypeDefined = false; }
    }

    public PageReference configure() { return Page.LimitsProfilerConfig; }

    public void measure() { diffs.add(profiler.measure(getIterations())); }

    public String getProfilerType()
    {
        return LimitsProfilerConfig__c.getInstance().ProfilerType__c;
    }

    public Integer getIterations()
    {
        return (Integer)LimitsProfilerConfig__c.getInstance().Iterations__c;
    }
}
            <apex:commandButton value="Measure" action="{!measure}" rendered="{!isTypeDefined}" />
            <apex:commandButton value="Configure" action="{!configure}" />
    <apex:pageBlockTable value="{!diffs}" var="snapshot" rendered="{!isTypeDefined}">
        <apex:column rendered="{!$Setup.LimitsProfilerConfig__c.DisplayAggregateQueries__c}"
                     headerValue="Aggregate Queries"
                     value="{!snapshot.aggregateQueries}" />
        <apex:column rendered="{!$Setup.LimitsProfilerConfig__c.DisplayAsyncCalls__c}"
                     headerValue="Async Calls"
                     value="{!snapshot.asyncCalls}" />
        <!-- ...and so on: one column per limit, each toggled by its Display field -->
    </apex:pageBlockTable>

The code changes are small, but their impact on what can be measured is immense. Set up this way, each trial runs in its own transaction, so it becomes possible to profile code whose trials consume more than ten seconds in total without leaving the tool. It also means you should be able to profile your triggers more consistently, since code like lazy-loaded properties will have to re-initialize each time it is profiled.

// when profiled multiple times from execute anonymous,
// this code will only consume a query on the first run,
// but from Visualforce it will consume one on each run
public static List<User> users
{
    get
    {
        if (users == null)
            users = [SELECT Id FROM User];
        return users;
    }
    private set;
}

Previous Post (Gather Usage Data)

Monday, January 9, 2017

Profiling Apex, Part 1 (Gather Usage Data)

LimitsProfiler is on GitHub

(Install Version 1.0)

I think more than I should about how to optimize my code. I know, I know:

Premature optimization is the root of all evil.
- Donald Knuth

Yet if you look at the surrounding context of that quote, you will see that Knuth didn't caution us to never optimize, simply that about 97% of the time, small inefficiencies can (and should) be ignored. My question is, what do we do in that other 3% of the time? How do we arm ourselves with knowledge about the performance of our Apex code?

I found my first rudiments of how to profile code on the Salesforce Stack Exchange, and they went along these lines:

Integer iterations = 1000;
Long start = System.currentTimeMillis();
for (Integer i = 0; i < iterations; i++)
{
    // code being profiled goes here
}
system.debug(System.currentTimeMillis() - start);

We care about many aspects of performance, though. CPU Time is probably more important to measure than real time elapsed, and we may also want to observe queries, DML, and so on. Why not try to capture them all in one comprehensive snapshot?
global class LimitsSnapshot
{
    global Integer aggregateQueries { get; private set; }
    global Integer asyncCalls { get; private set; }
    global Integer callouts { get; private set; }
    global Integer cpuTime { get; private set; }
    global Integer dmlRows { get; private set; }
    global Integer dmlStatements { get; private set; }
    global Integer emailInvocations { get; private set; }
    global Integer futureCalls { get; private set; }
    global Integer heapSize { get; private set; }
    global Integer mobilePushApexCalls { get; private set; }
    global Integer queries { get; private set; }
    global Integer queryLocatorRows { get; private set; }
    global Integer queryRows { get; private set; }
    global Integer queueableJobs { get; private set; }
    global Integer soslQueries { get; private set; }
    global Long rawTime { get; private set; }

    global LimitsSnapshot(
        Integer aggregateQueries, Integer asyncCalls, Integer callouts, Integer dmlRows,
        Integer dmlStatements, Integer emailInvocations, Integer futureCalls,
        Integer mobilePushApexCalls, Integer queries, Integer queryLocatorRows, Integer queryRows,
        Integer queueableJobs, Integer soslQueries, Integer heapSize, Integer cpuTime, Long rawTime
    )
    {
        this.aggregateQueries = aggregateQueries;
        this.asyncCalls = asyncCalls;
        this.callouts = callouts;
        this.dmlRows = dmlRows;
        this.dmlStatements = dmlStatements;
        this.emailInvocations = emailInvocations;
        this.futureCalls = futureCalls;
        this.mobilePushApexCalls = mobilePushApexCalls;
        this.queries = queries;
        this.queryLocatorRows = queryLocatorRows;
        this.queryRows = queryRows;
        this.queueableJobs = queueableJobs;
        this.soslQueries = soslQueries;
        this.heapSize = heapSize;
        this.cpuTime = cpuTime;
        this.rawTime = rawTime;
    }
}

Woof. That got unwieldy in a hurry! Working with this class is actually pretty straightforward, though, once you add in a factory pattern. To that class I added these three methods:

global static LimitsSnapshot getInstance()
{
    return new LimitsSnapshot(
        Limits.getAggregateQueries(), Limits.getAsyncCalls(), Limits.getCallouts(),
        Limits.getDMLRows(), Limits.getDMLStatements(), Limits.getEmailInvocations(),
        Limits.getFutureCalls(), Limits.getMobilePushApexCalls(), Limits.getQueries(),
        Limits.getQueryLocatorRows(), Limits.getQueryRows(), Limits.getQueueableJobs(),
        Limits.getSoslQueries(), Limits.getHeapSize(), Limits.getCpuTime(),
        System.currentTimeMillis()
    );
}

global static LimitsSnapshot since(LimitsSnapshot start) { return diff(start, getInstance()); }

global static LimitsSnapshot diff(LimitsSnapshot a, LimitsSnapshot b)
{
    return new LimitsSnapshot(
        b.aggregateQueries - a.aggregateQueries, b.asyncCalls - a.asyncCalls,
        b.callouts - a.callouts, b.dmlRows - a.dmlRows, b.dmlStatements - a.dmlStatements,
        b.emailInvocations - a.emailInvocations, b.futureCalls - a.futureCalls,
        b.mobilePushApexCalls - a.mobilePushApexCalls, b.queries - a.queries,
        b.queryLocatorRows - a.queryLocatorRows, b.queryRows - a.queryRows,
        b.queueableJobs - a.queueableJobs, b.soslQueries - a.soslQueries,
        b.heapSize - a.heapSize, b.cpuTime - a.cpuTime, b.rawTime - a.rawTime
    );
}

Now we have some scaffolding on which we can build a simple framework to profile our code. This version looks a lot like those early attempts:

global abstract class LimitsProfiler
{
    global abstract void execute();
    global virtual void setup() { }
    global virtual void tearDown() { }

    global LimitsSnapshot measure() { return measure(1); }

    global LimitsSnapshot measure(Integer iterations)
    {
        LimitsSnapshot start = LimitsSnapshot.getInstance();
        setup();
        for (Integer i = 0; i < iterations; i++) execute();
        tearDown();
        return LimitsSnapshot.since(start);
    }
}

I added setup and tearDown methods because a lot of what I want to test revolves around trigger performance, where you might need to insert data beforehand, perform a rollback, etc. I haven't decided whether they should go inside or outside of the snapshots, but I was getting some really weird heap readings when I tried putting them outside, so I tabled further investigation of that question for now.
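To make that concrete, here is a hypothetical shape such a subclass might take for trigger profiling. The object, field values, and class name are all illustrative, and a Database savepoint/rollback is just one way to undo the DML between runs:

```
// hypothetical sketch: measuring the cost of Account insert triggers
class AccountInsertProfiler extends LimitsProfiler
{
    List<Account> accounts;
    Savepoint sp;

    public override void setup()
    {
        sp = Database.setSavepoint();
        accounts = new List<Account>();
        for (Integer i = 0; i < 200; i++)
            accounts.add(new Account(Name = 'Profiling ' + i));
    }

    public override void execute() { insert accounts; }

    public override void tearDown() { Database.rollback(sp); }
}

system.debug(new AccountInsertProfiler().measure());
```

Note that the rollback itself then lands inside the measured window, which is part of why the inside-versus-outside question is worth settling.
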

Using this profiler makes it a bit easier to get results in bulk. The following script can be run via Execute Anonymous to illustrate:

List<Integer> numbers = new List<Integer>();
for (Integer i = 0; i < 1000000; i++) numbers.add(i);

class IsEmptyProfiler extends LimitsProfiler
{
    public override void execute()
    {
        Boolean value = numbers.isEmpty();
    }
}

class SizeProfiler extends LimitsProfiler
{
    public override void execute()
    {
        Boolean value = numbers.size() == 0;
    }
}

system.debug(new IsEmptyProfiler().measure(1000000));
system.debug(new SizeProfiler().measure(1000000));

That's a great start, and can get me some useful data on performance. But if your profiler depends on debug logs, you can't do your profiling with the logging levels set to NONE. And those logging levels add noise to your measurements. For instance, set all your logging levels to the highest setting and see if the above code will even run.

When the framework is complete, I should be able to plug it into a Visualforce Page and turn off all logging. Hence all the snapshot attributes being set up as properties. It will take me several more posts to get there, though, and more work on it to boot.

Next Post (Display The Data)