January 30, 2008
I recently had an interesting conversation with a colleague about optimization, profiling and telemetry as they relate to application performance.
“We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.” (Knuth, Donald. Structured Programming with go to Statements, ACM Journal Computing Surveys, Vol 6, No. 4, Dec. 1974. p.268.)
This makes perfect sense, since you don't know where the bottlenecks are going to be before building a system. On the other hand, it does not mean you should throw performant algorithms out the window. You still have to think about performance at a local level when you write your code.
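As a hypothetical illustration of local-level performance thinking (the function names are my own, not from the post): something as small as the choice of data structure can change the complexity of a loop. Membership tests against a list scan the whole list, while tests against a set are a hash lookup.

```python
def find_duplicates_slow(items):
    """Quadratic: 'seen' is a list, so 'in' scans it on every iteration."""
    seen, dupes = [], []
    for item in items:
        if item in seen:
            dupes.append(item)
        else:
            seen.append(item)
    return dupes


def find_duplicates_fast(items):
    """Linear on average: 'seen' is a set, so 'in' is a hash lookup."""
    seen, dupes = set(), []
    for item in items:
        if item in seen:
            dupes.append(item)
        else:
            seen.add(item)
    return dupes
```

Both return the same result; only the cost differs, and no profiler is needed to make this call while writing the code.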
Profiling the application means running it with preset data and inputs to see where the performance bottlenecks are, at which point you can optimize the application. A profile will tell you where time is being spent, down to the function, and sometimes down to the line. I have found it useful to always reuse the same set of data and inputs so I can track the performance effects of changes. Of course, you need to make sure that the data set and inputs provide good code coverage.
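A minimal sketch of this workflow using Python's built-in cProfile module (the workload itself is hypothetical). The fixed seed keeps the data set identical between runs, so timing changes can be attributed to code changes rather than input changes.

```python
import cProfile
import io
import pstats
import random


def build_report(rows):
    # Hypothetical workload standing in for real application code.
    return sorted(rows, key=lambda r: (r % 100, r))


def main():
    random.seed(42)  # same preset data and inputs on every run
    rows = [random.randrange(10_000) for _ in range(50_000)]
    build_report(rows)


profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Report the functions where the most cumulative time was spent.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Comparing this report before and after a change shows exactly which functions got cheaper or more expensive.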
Profiling should also give you a good idea of how long it takes to perform operations. This becomes important when you take your system to production, and gather telemetry.
Telemetry (credit goes to Dan Pritchett for the term) is the collection of data on a running application, tracking how long certain operations take, such as getting data from a database or running a search. This data can be collected and monitored.
For example, if you have to contact a number of web services to construct a web page, you should capture the time each of those services takes. That way you will be able to see the maximum, minimum, mean and median times for each of those services. By monitoring this data you will be able to tell when a service is underperforming and take action.
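The idea can be sketched as a small recorder that keeps per-operation timings and reports the statistics above. The service name and the timings here are made up for illustration; in production the samples would come from wrapping real web-service calls.

```python
import statistics
from collections import defaultdict


class Telemetry:
    """Collects elapsed-time samples per named operation."""

    def __init__(self):
        self._samples = defaultdict(list)

    def record(self, operation, elapsed_seconds):
        self._samples[operation].append(elapsed_seconds)

    def summary(self, operation):
        samples = self._samples[operation]
        return {
            "min": min(samples),
            "max": max(samples),
            "mean": statistics.mean(samples),
            "median": statistics.median(samples),
        }


telemetry = Telemetry()
for elapsed in (0.12, 0.15, 0.11, 0.90):  # one slow outlier
    telemetry.record("search-service", elapsed)

stats = telemetry.summary("search-service")
print(stats)
```

Note how the median stays low while the maximum exposes the outlier call: watching both is what lets you spot an underperforming service.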