Code Profilers - Choosing a Tool for Analyzing Performance

A software application usually progresses through a series of phases as it goes from concept to reality. Those phases—analysis, design, implementation, and test—have been (and continue to be) well explored by hosts of software methodologists. However, an omission becomes apparent if you translate those formal names into more casual terms: figure out what to build, figure out how to build it, build it, and verify that it does its job.

The missing ingredient is a matter of degree, and it occurs (or should occur) at the last step.

You have verified that the program does its job...but have you verified that it does it well? Is the program efficient? Have you verified that its execution speed cannot be improved? What tool could you use to answer these questions?

The answer to this last question—and, therefore, the means of answering the others—is a profiler. The name gives its purpose away: a profiler creates a profile of an application’s execution. To explain what we mean by that, put yourself in the situation described at the outset. You have written an application, it works, and now you want to improve its performance, to make it run faster. How do you do that?

Well, you could create a large amount of test data, sit down in front of a computer, grab a stopwatch, feed the data to the application and time how long it takes to execute. This gets you partway to your answer, but there’s a problem with this approach.
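To make the stopwatch approach concrete, here is a minimal sketch in C. It is purely illustrative: the busy loop is a hypothetical stand-in for whatever your real application does.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for the real application: a busy loop
       playing the role of the actual workload. */
    static void run_application(void)
    {
        volatile double sum = 0.0;
        for (long i = 0; i < 200000000L; ++i)
            sum += (double)i * 0.5;
    }

    int main(void)
    {
        /* The "stopwatch": note the wall-clock time before and after the run. */
        time_t start = time(NULL);
        run_application();
        time_t end = time(NULL);

        /* All you learn is a single total -- no breakdown by routine. */
        printf("Total elapsed time: %.0f seconds\n", difftime(end, start));
        return 0;
    }

All this tells you is how long the entire run took, which leads directly to the problem described next.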

A program is not a monolithic entity; it is composed of routines that call other routines that call other routines, and so on in a complex web of interdependent functions. Also, if you have used statically linked or dynamically linked libraries in your application, then you have added more layers of functional interdependency that involve code for which you do not even have the source.

So, the problem is this: if your application is running slowly, what parts of it are running slowly? From the outside, it looks like the whole thing is running slowly. But it could be that a single, fundamental algorithm used repeatedly throughout your application is the only bottleneck. That algorithm may constitute a small fraction of your source code, so searching other parts of your application for opportunities for improvement is simply a waste of time. You need to focus on that algorithm exclusively.

To do that, you need a profile, a map of your application’s performance, so that you can see each part’s contribution to the application’s overall execution time.

That is the job of a profiling tool. It lets you examine the execution behavior of your application so that you can determine where your application is spending its time. Most profiling tools provide graphical output that lets you quickly identify performance "hotspots," so you do not waste time wading through source code that was never the problem in the first place.
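As a sketch of what that looks like in practice, consider the following C program, whose time is dominated by one small routine. The profiler named in the comments, gprof, is just one example of such a tool (this paper does not assume any particular product); any profiler would expose the same hotspot.

    /* Illustrative only. With gprof as the example profiler you might:
         cc -pg -o demo demo.c      (compile and link with profiling enabled)
         ./demo                     (run; this writes gmon.out)
         gprof demo gmon.out        (print the profile)
       and expect the flat profile to attribute nearly all of the time
       to hot_sum(). */
    #include <stdio.h>
    #include <stddef.h>

    /* A small "fundamental algorithm" called over and over. */
    static double hot_sum(const double *v, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; ++i)
            s += v[i];
        return s;
    }

    /* The rest of the program is thin glue around that one routine. */
    static double analyze(const double *v, size_t n)
    {
        double total = 0.0;
        for (int pass = 0; pass < 20000; ++pass)
            total += hot_sum(v, n);
        return total;
    }

    int main(void)
    {
        static double data[100000];
        for (size_t i = 0; i < 100000; ++i)
            data[i] = (double)i;

        printf("%f\n", analyze(data, 100000));
        return 0;
    }

Whether the output is textual or graphical, the profile shows at a glance that hot_sum is where the time goes, so that is the only code worth tuning.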

In this white paper, we are going to examine how profilers do what they do. As you will see, how a profiler works determines that profiler’s capabilities. Not all profiling technologies are equal, and, therefore, not all profilers are equal.

We will begin by defining and describing the two broad categories into which profilers fall: passive profilers and active profilers. In our descriptions, we will examine the characteristics—positive and negative—inherent in each category.
