Profiling is probably the most overlooked step in the modern application development process. Often the reason is that during the development cycle the application looks and performs OK, so there is no obvious reason to worry, right? In most cases, however, potential bottlenecks and hotspots will not hit you until the application goes live. Even if you run stress tests, you can still miss scenarios that will cost you user experience and, often, money. Developers enjoy how quickly they can build applications from blocks these days: there is a bit of code for everything on StackOverflow and there are components for all your needs as well, so why not just bodge them together quickly, perhaps adding some architecture to make things seem more professional? While this is fine for some systems, it may cost you dearly if you ever expect, or are forced, to scale. Well-optimized code can easily be dozens of times faster than something just pieced together. Why pay for five application servers and two sets of database shards if you could profile your system, optimize the obvious bottlenecks and run it all on one or two machines? You can save a fortune on licences and maintenance, which I think is well worth a few extra profiling and tuning sessions.
In the beginning there was the request…
Since user experience is affected first by the time it takes to load the application, we should start our profiling there. There is an amazing tool called Glimpse which, once added to the application (the integration is painless), renders itself on top of your website, allowing a Firebug-like analysis of the most interesting aspects of page requesting and rendering. Below are just a few things it can tell you:
Server and client time consumed for the request
Active database connections
Queries executed (including actual SQL)
MVC Routing information
Model binding data
There are tons of plugins for profiling: Azure, Entity Framework, SignalR, Knockout, IoC containers and many more – have a look here.
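Getting Glimpse into an ASP.NET MVC application really is a matter of a couple of NuGet packages. As a sketch (the package names below assume MVC 5 with Entity Framework 6 – pick the variants matching your stack):

```powershell
# From the Visual Studio Package Manager Console
PM> Install-Package Glimpse.Mvc5   # routing, model binding, view timings
PM> Install-Package Glimpse.EF6    # EF connections and executed SQL
```

Then browse to `/Glimpse.axd` on your site and turn Glimpse on – the HUD will appear at the bottom of every page.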
We need to go deeper – The Data Layer
In some scenarios you may need to look for performance issues deeper, in your custom server logic. For this task I recommend CLR profilers; there are some free ones like this one. It doesn’t take mad skills to understand the way profilers work: you attach to the app pool process, click “start profiling”, do your thing (the profiled action) and click “stop profiling”. The result should be some sort of tree-view report in which you can explore the method calls invoked (each one with useful CPU and memory utilization data displayed). It may literally take minutes to find the exact method, and often even the specific line of code, which is your bottleneck or hotspot. You can also debug your software straight from Visual Studio if that’s more convenient for you – just check this great blog article.
Going even deeper: Database
Often performance issues originate at the data source. Whether it is a badly written stored procedure with too many joins or heavy use of cursors where they are not required, the tool that will help you uncover the bad guy here is SQL Server Profiler. You can find it under the Tools menu in SQL Server Management Studio. Using this profiler you are effectively setting up a proxy on top of the database engine. It allows you to filter all the queries going into the database by text, name, client machine and more. It shows what is queried and how often, and, most importantly, how the queries are structured. If you cannot spot anything wrong with a particular bit of SQL, it is worth checking its execution plan, which should be enough to pinpoint any inefficiencies. Additionally, SSMS ships with an extremely useful set of tools that help optimize your database based on current usage and querying patterns. For example, while it may not be obvious when you first design a table, once you go live and there is more traffic it may turn out that you would benefit greatly from adding a specific index on one of the columns, because a few stored procedures often use it to locate data. To read more about this, check this great article.
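To make the indexing example concrete: suppose the profiler shows several stored procedures repeatedly filtering a table by a customer e-mail column (the table and column names below are hypothetical). A non-clustered index, possibly with included columns to cover the query, is the usual fix:

```sql
-- Hypothetical schema: many procedures locate rows by CustomerEmail,
-- which without an index forces a scan on every call.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerEmail
    ON dbo.Orders (CustomerEmail)
    INCLUDE (OrderDate, TotalAmount); -- covering columns avoid key lookups
```

Compare the execution plan before and after – the table scan should turn into an index seek.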
Gotta catch them all! PerfView!
This one would be my recommendation for the toughest performance detective sessions. While slightly more advanced and complex than the tools above, it is still possible to learn how to use it in about an hour – just check this great tutorial video and embrace the skill! One big advantage of this tool is the ability to profile all processes at once, which can be useful in multi-service/multi-app-pool setups where you know something is very inefficient but you don’t know exactly where to start profiling. A few of the things PerfView offers are:
Tracing/counting exceptions swallowed by your app and CLR itself.
Memory dump analysis
Much more – these are just the ones I have used, so explore and find out what else it can do! Let me know in the comments box if you find something important I haven’t mentioned.
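If you prefer scripted collection over the GUI, PerfView can also be driven from the command line. A rough sketch (flag support varies by version – check `PerfView /?`):

```bat
rem Machine-wide ETW collection; run from an elevated prompt,
rem reproduce the slow scenario, then stop the collection.
PerfView.exe collect MyTrace.etl

rem GC-focused, lower-overhead collection for memory investigations
PerfView.exe /GCOnly collect MyTrace.etl
```

The resulting .etl file can then be opened in the PerfView GUI to browse CPU stacks, exceptions and GC events.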
Additionally, below is a list of the most common bottlenecks introduced by less experienced devs, based on my code-detective sessions:
Unnecessary use of cursors in SQL
Lack of proper indexing in SQL (there is more than just the primary clustered index!)
Joins explosion in T-SQL
Not using string builders and string helper methods in parsers and “hot” methods (this will trash your heap and make your GC collect more often and more slowly)
Not using parallel extensions (tasks, parallel collections and loops)
Using raw threads instead of thread pooled workers – ideally tasks
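To make the string-building point concrete, here is a minimal sketch (the iteration count is arbitrary and timings are illustrative – measure on your own workload): naive `+=` concatenation allocates a brand new string on every pass, while `StringBuilder` appends into a growing internal buffer.

```csharp
using System;
using System.Diagnostics;
using System.Text;

class StringBuildingDemo
{
    static void Main()
    {
        const int iterations = 50000;

        // Naive concatenation: each += allocates a new string,
        // trashing the heap and making the GC collect more often.
        var sw = Stopwatch.StartNew();
        string s = "";
        for (int i = 0; i < iterations; i++)
        {
            s += "x";
        }
        sw.Stop();
        Console.WriteLine("string +=     : " + sw.ElapsedMilliseconds + " ms");

        // StringBuilder reuses an internal buffer and allocates far less.
        sw.Restart();
        var sb = new StringBuilder();
        for (int i = 0; i < iterations; i++)
        {
            sb.Append("x");
        }
        string result = sb.ToString();
        sw.Stop();
        Console.WriteLine("StringBuilder : " + sw.ElapsedMilliseconds + " ms");
    }
}
```

On a typical machine the difference in a hot path is orders of magnitude – exactly the kind of hotspot a CLR profiler will surface for you.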
To sum up: there is a profiling tool for everything out there. The correct process should be as follows: pick an application layer to profile, narrow down the suspects, then locate and eliminate performance issues. If you don’t know where to look for your performance bottlenecks, use PerfView and just browse the tabs looking for anomalies.
If you haven’t worked with profiling tools yet, spend an afternoon testing a few and see what they can show you. It doesn’t take much to learn, and it is crucial to know your tools when it comes to quickly identifying performance issues.
Please leave your comments and perhaps your favourite profiler description below!
Software developer at Goyello. Problem solver. The more complicated the problem is, the more motivated he gets. Whether it’s designing, improving processes, architecture or coding, he will be the first one to jump right in.