ASP.NET Performance Optimization

The performance of an application, from the client's perspective, is critical: if it is degraded by too many round trips, too many resources, or too many Ajax or server calls, the end user has no option but to leave your otherwise useful site. To avoid this, you must keep an eye on ways to boost your application's performance.

Performance is the aspect you need to monitor every day to provide uninterrupted service to your clients; the better the approaches your system uses, the better the results you will get. I will write a series on optimization and performance techniques for SQL, jQuery, Ajax, JavaScript, C#, query optimization, and website optimization in upcoming articles, but for now I am covering ASP.NET performance metrics.

But wait: how do you judge where your web application needs optimization?

Below is the list of optimization and performance metrics you should know about.

  1. Speed
  2. Use Logs
  3. Proper Exception Handling
  4. View State
  5. Proper Use of Caching
  6. Avoid Server Side Validation
  7. Minify and Compress JS, CSS Resources
  8. Session Management
  9. Paging for Large Result set
  10. Avoid Un-necessary RoundTrips to Server
  11. Pages Must be Batch Compiled
  12. Partition Application Logically
  13. HTTP Compression
  14. Resource Management
  15. String Handling

We will discuss each one in detail, so let's start.


Speed

The speed of your application is the most important factor to keep an eye on. Several factors are involved in boosting it:

Reduce Page Size:

  • Reduce page size by using external CSS and JavaScript files instead of inline styles and scripts.
  • Another way to reduce page size is to use only the minified versions of JavaScript files; you can minify and beautify CSS and JS with online tools.

To beautify Cascading Style Sheet files, follow this Link.

To beautify JavaScript files, follow this Link.

To minify Cascading Style Sheet files, follow this Link.

To minify JavaScript files, follow this Link.

Beautify means to format unformatted files (files stripped of white space, comments, and indentation) back into a readable form.

Minify means to remove white space, comments, and indentation, and to reduce size further by renaming your functions and variables to single characters, for example function a(b){ if (b == '4') { b = 'good'; } }, with the whole function body moved onto a single line.

  • It is also very efficient to separate the logic of your page. Just as we create an application by separating the data access and business layers, at the page level you can separate the header, body, and footer into user controls.

Reduce Number of Requests to the Server:

The fewer requests the server has to handle, the more responsive your page will be.

Reduce the number of requests by reducing the number of resources: move the inline CSS of all files into a single CSS file, and likewise with JavaScript. Also cache static resources, and remove unnecessary headers from responses, such as the version number and X-Powered-By. Use a CDN (Content Delivery Network) so that files are downloaded from the nearest available server, and so the browser can reuse a cached copy when other websites use the same jQuery plugin files.
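As a sketch, the unnecessary response headers mentioned above can be stripped in web.config (this assumes IIS 7+ running in integrated pipeline mode):

```xml
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- Drop the X-Powered-By header that IIS adds by default -->
      <remove name="X-Powered-By" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
```

The ASP.NET version header can likewise be suppressed with <httpRuntime enableVersionHeader="false" /> under system.web.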

Use Logs

Use the IIS log to trace out issues in your application on a weekly or monthly basis; better still, watch it daily. The IIS log contains information about your server, the date and time span, the referring page, the original URL, the HTTP status response codes, and much more, through which you can understand the nature of an issue.

I have written a post on HTTP status response codes; you can read it Here.

Another way to customize what you log is to create a database table and insert the exception details into it. I have done it this way in my organization, along with a tracking module that can search, generate daily reports, and insert log entries, so that you don't need to query the table again and again to check daily errors or build reports.

There are other good approaches to tracking errors; one of the best available tools is ELMAH, which has config-based settings through which you can, for example, email each error to the responsible person.

ELMAH (Error Logging Modules and Handlers) is an open source debugging tool for ASP.NET web applications. When added to a running web application, it logs the exceptions that are thrown.

You can get it from the Google Code Link, or from the NuGet Packages Link.

Proper Exception Handling

Many developers do not apply proper exception handling techniques, and as a result the final outcome is unsatisfying: they cannot tell why the application is crashing. The best approach is to use a try…catch block appropriately so the exception is handled in the right way. You can use an if statement to check whether a database connection is open and close it if so, or you can use a try…catch block around the connection and throw an exception if it cannot be closed. It is also best practice to pair the try…catch with a finally block, as below, so that unused resources are properly disposed of whether or not an exception occurs.

try
{
    // Code that may throw, e.g. opening a connection and running a command.
}
catch (Exception)
{
    // Log the error, then rethrow so it is not silently swallowed.
    throw;
}
finally
{
    // Runs whether or not an exception occurred: release connections,
    // file handles, and other resources here.
}

Exception handling is the most important technique for finding the original runtime unhandled exception in an application, but it should be used wisely and judiciously.
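A deterministic alternative to writing finally blocks by hand is the using statement, which calls Dispose() automatically. This is only a sketch; the connStr variable and Orders table are hypothetical:

```csharp
using System.Data.SqlClient;

// Equivalent to try/finally with Dispose(): the connection is closed
// even if ExecuteScalar throws.
using (var conn = new SqlConnection(connStr))
using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
{
    conn.Open();
    int count = (int)cmd.ExecuteScalar();
}
```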

View State

View state is an encoded (and optionally encrypted) component of Web Forms that maintains the state of pages across postbacks. It is stored in hidden fields, which you can see by viewing the page source. If you use view state to maintain the state of a large form, it bloats your page and, as a result, you have a performance issue.

The potential issue with view state is large page load times due to the increased page size.

So what are the best practices to avoid these kinds of headaches?

Here are some performance paradigms that must be accounted for while using view state.

  • Use it whenever needed on a page, but keep it as small as possible.
  • Don't use multiple forms on a single page with state management enabled.
  • Enable it only where it is required: at the control, page, or application level.
  • Monitor the size of view state by enabling tracing.
  • Avoid storing large objects, as view state size is directly proportional to the objects stored.
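As a sketch, view state can be switched off per page or per control in Web Forms markup (the OrdersGrid control name is hypothetical):

```aspx
<%-- Page level: --%>
<%@ Page Language="C#" EnableViewState="false" %>

<%-- Control level: a grid that is rebound on every request
     does not need view state. --%>
<asp:GridView ID="OrdersGrid" runat="server" EnableViewState="false" />
```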

Proper Use of Caching

With the proper use of caching you can get a lot of benefits: fewer round trips to the server, fewer server resources used, and faster rendering than in normal mode. Caching can improve performance manyfold by reusing data across multiple HTTP requests, and it can store a page partially for a specified time with an expiration value. It boosts performance by keeping data in memory so that it can be accessed quickly, much like RAM. A cache is normally accessible only within a single application; to use it in a web farm, you can use a distributed cache manager such as memcached to share cached data across the farm.

Best practices for the cache are as follows:

  • Use it in all layers (data access, business, and UI); used properly in each, it gives a performance boost.
  • Cache static or rarely used resources for a long time, and always add an expiration to cached items.
  • Don't store expensive objects such as connections and similar resources in the cache.
  • Use output caching for static pages, with an expiration time and location set as per your needs.
  • Use partial (fragment) caching to cache part of a page.
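A minimal sketch of the Cache API from the list above; the ProductList key and the LoadProducts helper are hypothetical:

```csharp
// Cache the result of an expensive query for five minutes.
var products = Cache["ProductList"] as List<Product>;
if (products == null)
{
    products = LoadProducts();                      // hypothetical data-access call
    Cache.Insert("ProductList", products, null,
                 DateTime.UtcNow.AddMinutes(5),     // absolute expiration
                 System.Web.Caching.Cache.NoSlidingExpiration);
}
```

Output caching for a whole page is declared in markup instead, e.g. <%@ OutputCache Duration="60" VaryByParam="none" %>.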

Avoid Server Side Validation

Validation is an important part of your system, as it filters what goes into the database; with free-form text input, chances are the required data will not be received correctly. That is why validation is required.

Validations are of two types: client-side and server-side validation.

Server-side validation is important from the perspective of securing sensitive information, such as saving passwords and other sensitive data. It is not always required, however, as it always submits a request and sends a response back to the client, which costs the user time. This type of validation occurs when Submit is hit.

The best tip is to use server-side validation whenever you need to ensure that security is not bypassed; otherwise, it is better to check formats on the client side: email, URL, phone number, masking, and other input that needs to be correct.
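As a sketch, a client-side format check in Web Forms can be expressed with a validator control (the EmailBox control name is hypothetical):

```aspx
<asp:TextBox ID="EmailBox" runat="server" />
<%-- Runs in the browser before the form is submitted;
     EnableClientScript="true" is the default. --%>
<asp:RegularExpressionValidator runat="server"
    ControlToValidate="EmailBox"
    ValidationExpression="\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*"
    ErrorMessage="Please enter a valid email address." />
```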

Minify and Compress JS, CSS Resources

The best approach with static content files is to minify them, which means making them small. Just as jQuery recommends the min.js version of its library in production, you too should include minified versions of your application's JavaScript files. The number of requests greatly increases page load time, and even where file size seems not to matter, it can still be reduced by using minified JavaScript and CSS files.

IIS also has a setting to compress static and dynamic content; you can try enabling compression for your website there.

It is also best practice to include Cascading Style Sheet files in the head of the page, while scripts should be included at the bottom of the page for faster processing.

As you are all aware, we use more and more bundles of JavaScript libraries day by day to do our work, but we forget about their behavior and the impact they have on our system. To address this, Microsoft announced its Web Optimization framework, which is also useful.
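A sketch of bundling with the Web Optimization framework; the bundle names and file paths here are hypothetical:

```csharp
using System.Web.Optimization;

public static void RegisterBundles(BundleCollection bundles)
{
    // Served as one minified request at the virtual path ~/bundles/site.
    bundles.Add(new ScriptBundle("~/bundles/site").Include(
        "~/Scripts/jquery-{version}.js",
        "~/Scripts/site.js"));

    bundles.Add(new StyleBundle("~/Content/css").Include(
        "~/Content/site.css"));

    // Minification also requires <compilation debug="false" />
    // unless forced on here.
    BundleTable.EnableOptimizations = true;
}
```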

You can read more about this framework from Here.

You may refer to section 1 of this article about the speed concerns.

Session Management

Session state is an important part of applications, but its effects are adverse if not handled properly.

Here are some best practices to use it intelligently.

  • Do not store large amounts of data in session.
  • Store basic types of data, not complex types such as objects.
  • Choose wisely among the available session states: in-process, out-of-process using the state server, and out-of-process using SQL Server.
  • Out-of-process session state is the most resilient option, as the data survives application restarts caused by configuration changes, but it is slower because the data lives on a state server or in SQL Server. In-process session state is fast, as it uses the same memory as the application and retrieval is quick, but it is lost on restart.
  • Do not put sensitive data in session state.
  • Always use the Abandon() method to sign the user out when session is enabled.
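As a sketch, the session mode is chosen in web.config (the state server host below is a placeholder):

```xml
<system.web>
  <!-- mode="InProc" (fast, lost on recycle), "StateServer",
       or "SQLServer" (slowest, most durable) -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=stateserverhost:42424"
                timeout="20" />
</system.web>
```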

Paging for Large Result set

Paging large result sets is an extremely useful approach: restrict the result to, say, 10 to 30 records per page, and load more records only when the next page is requested. This reduces the extra load the server bears when it fetches and returns all the records at once, which increases page load times, costs your users, and can leave the whole page unresponsive for a long time. So the best approach is to keep your result sets as small as possible; SQL ranking functions also enhance paging a lot.

If your client has limited resources, receiving a large result set also has an impact on the client.

The basic backbone of paging is the ROW_NUMBER ranking function. If you compare the time taken with all records versus only the first page of records, you will see a great improvement.
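A minimal paging sketch using ROW_NUMBER; the Orders table and its columns are hypothetical:

```csharp
// Returns one page of 20 rows; @Page is 1-based and is passed
// as a SqlParameter to avoid SQL injection.
const string pagedQuery = @"
    SELECT OrderId, CustomerName
    FROM (SELECT OrderId, CustomerName,
                 ROW_NUMBER() OVER (ORDER BY OrderId) AS RowNum
          FROM Orders) AS Numbered
    WHERE RowNum BETWEEN (@Page - 1) * 20 + 1 AND @Page * 20;";
```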

Avoid Un-necessary Roundtrips to Server

The best method to avoid round trips to the server is to ensure that no unnecessary calls are sent: as the number of requests sent to the server increases, the page load time increases and, as a result, the client suffers.

So you should use client-side mechanisms to validate data before requesting it from the server, since they do not cause postbacks and do not involve server callbacks, which trigger the request/response cycle.

You can use the following techniques to minimize the round trips between the web server and the browser.

Use Server.Transfer instead of Response.Redirect when redirecting within the current application; Server.Transfer's scope is the current application, so for redirecting outside your application use Response.Redirect.

If your data is static, you can use caching for best performance. Use output buffering, as it reduces round trips by building the whole page before making it available to the client. If you want to send data only while the client is still connected, check HttpResponse.IsClientConnected, as it avoids wasting server work on a client that has gone away.
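A sketch of the two redirect styles from the points above (the page paths are hypothetical):

```csharp
// One round trip: execution moves to Summary.aspx on the server;
// the browser URL does not change.
Server.Transfer("~/Orders/Summary.aspx");

// Two round trips: the server answers with a 302 and the browser
// issues a new request; required for URLs outside the application.
Response.Redirect("https://example.com/", false);
```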

Pages must be Batch Compiled

The more the assemblies in a process grow, the greater the chance that the process runs out of memory and throws an OutOfMemoryException. To overcome this, pages should be batch compiled: when the first request comes in and pages are compiled, all the pages in the same directory are batch compiled into a single assembly. The basic advantage is that the number of assemblies loaded into the process stays low, so server load is not compromised and only the single batch-compiled assembly is loaded.

You can also ensure some things while doing this like:

  • The debug property in the configuration file should always be set to false in the production environment; if it is set to true, pages are not batch compiled.
  • With debug set to false, pages also do not hang without timing out when a web service the page calls fails to respond in the expected time.
  • Make sure that different languages are not used in the same directory, as that reduces the chances of batch compilation.
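As a sketch, the relevant web.config switch:

```xml
<system.web>
  <!-- false in production: enables batch compilation,
       full optimization, and normal request timeouts. -->
  <compilation debug="false" batch="true" />
</system.web>
```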

Partition Application Logically

This means logically partitioning your application logic into business, presentation, and data access layers. This is very useful, as you have control over what happens where, and each layer can do its respective work. It does not mean you have to write more lines of code; proper code with reusability and scalability is a key property of your application's overall performance.

Don't confuse this with physical separation of the logic; it only separates the code logically.

Below are the key pros of the separate application logic.

  • The main advantage is that you have the choice of hosting the layers on separate servers in a web farm environment, though that increases the latency of calls.
  • The closer your logical layers are, the more benefit you get; for example, keep all the logic assemblies in the bin directory.

HTTP Compression

As the name suggests, HTTP compression means compressing the content, usually in gzip or deflate format, and sending it with the appropriate content headers after compression is applied. It provides faster transmission between IIS and the browser.

There are two types of compression supported in IIS:

Static Compression:

Static compression compresses static content and caches it, for the directories specified in configuration. After the first request has been compressed, subsequent requests use the cached compressed copy, decreasing the time taken to serve the content and increasing the application's throughput. You should compress only static content that does not change, not dynamic content.

Dynamic Compression:

Unlike static content, dynamic content often changes, so IIS compresses each response without adding it to the compression cache.
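As a sketch, both kinds of compression can be switched on in web.config for IIS 7+:

```xml
<system.webServer>
  <!-- Static responses are compressed once and cached;
       dynamic responses are compressed per request. -->
  <urlCompression doStaticCompression="true"
                  doDynamicCompression="true" />
</system.webServer>
```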

Resource Management

Resource management means managing the overall resources of your application, and it is directly related to performance. Poor resource management decreases performance and loads your server's CPU.

Below is the list of the most useful techniques for resource management.

  • Good use of pooling.
  • Proper use of connection objects.
  • Disposal of unused resources after using them.
  • Handling memory leaks.
  • Removing unused variables.

String Handling

String handling is one of the keys to managing the memory of your application.

There are many techniques that are very useful in handling strings; some of them are listed below:

  • Use Response.Write() for the fastest way to send output to the browser.
  • Use StringBuilder when you don't know the number of iterations needed to concatenate strings.
  • Use the += operator to concatenate strings only when you know the number of strings is small.
  • Do not call .ToLower() on strings you are comparing, as it creates a temporary string; instead use a comparison method with a built-in case-insensitive option, such as String.Compare() with the appropriate CultureInfo.
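A sketch of the last two points; the lines array and the strings a and b are hypothetical inputs:

```csharp
using System;
using System.Text;

// Unknown iteration count: StringBuilder avoids allocating
// a new string on every loop pass.
var sb = new StringBuilder();
foreach (string line in lines)   // 'lines' is a hypothetical string[]
{
    sb.Append(line);
}
string page = sb.ToString();

// Case-insensitive compare without the temporary ToLower() string:
bool same = string.Equals(a, b, StringComparison.OrdinalIgnoreCase);
```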

Introduction Of Visual Studio Code Lens

What is Code Lens

CodeLens shows you code changes in a fantastic way, and the best thing about this feature is that the editor history it surfaces used to take a lot of time to dig up before CodeLens existed.

With CodeLens you keep a deep focus on your code, since you know what changes were last made to the file, with reference history, and who made them. CodeLens combines references, changes to your code, code reviews, bugs associated with a work item, and unit tests, with detail on how many unit tests passed or failed, in a unique fashion.

You can check all of the above without leaving your editor, and in-page references make life easier, since all of these are associated directly with your code. But wait, before you get too excited after reading about its indicators, just read the note.

Note: CodeLens is available only in Visual Studio Enterprise and Visual Studio Professional editions. It is not available in Visual Studio Community edition.

So let's look at the CodeLens indicators in detail. You can toggle the indicators on or off from Tools, Options, Text Editor, All Languages, CodeLens.

As you can see in the image above, the CodeLens option is enabled by default in the editor. If you are connected to Team Foundation Server or another version control system, you have all the options available and can customize specific ones.

If you want to disable all the options, uncheck Enable CodeLens; if you want to enable CodeLens, check Enable CodeLens and then choose which indicators you want to see in the editor.

There are a number of options available, as elaborated below:

Show Test Status

By turning this option on, you can see an indicator showing the last test run status, displayed to the left of the references. By clicking the status icon you can see the test result information.

Show References

If your code has no references, it shows 0 References. You can view the code references by pressing Alt + 2. If your code has references, you can view them by hovering the mouse over the indicator, and double-clicking takes you to the reference definition.

After the references are opened, the information is displayed with a parent element containing the file location, and the method definitions with line numbers as children of each file.

Show Tested By


The Show Tested By option shows the tests associated with your code, along with the number and status of tests run. If you have a unit test project, this shows the unit test status as a ratio, such as 1/2, meaning that out of 2 tests, 1 passed and 1 failed, plus detailed information on each unit test with its method and duration in ms (milliseconds).

You can run all the tests, or a specific test, directly, and you can review the test code by pressing Alt + 3.

The tests are shown with cross and check icons indicating which failed or passed. If you see the warning icon, it means you have not run the tests yet, and you can run them through the CodeLens unit test indicator.

To review the test definition double click on the test item.

Show Authors


The Show Authors option shows you the last author who changed the file and checked in the latest changes. If more than one author is associated with your code, it shows the last author's name, such as MUHAMMAD AQIB + 2; the count next to the author means there are two more authors associated with the file. You can show this option by pressing Alt + 4.

Show Changes


This option gives you an overview of the changes and history associated with the code. The code changes indicator shown in the VS editor looks like 3 Changes. You can show this option by pressing Alt + 5. By opening the changes window you can see the details of each change, with changeset ID, changeset description, author name, and date of change.

In the changes window, by right clicking on any item it will give you the following three more options:

  1. View Diff of Changeset with [Number]: shows the difference between the selected changeset and the earlier changeset.
  2. Changeset Details: shows the detail of the changeset in the Team Explorer window.
  3. Send Email to [Author Name]: opens your default email program and fills the specific change details into the To, Subject, and Body sections of the email.

Show Bugs


If some bugs are reported against your work items, the indicator shows something like 2 Bugs, and opening the bugs detail window shows the Bug ID, type, description, author, and date reported.

Show Work Items


It shows the work items related to the piece of code, just before the code review indicator. 3 work items means there are 3 work items associated with the checked-in changes.

Show Code Reviews


It gives you information about the code reviews associated with a specific method, shown last among the CodeLens indicators as, for example, 4 reviews. After opening the review window, it shows the Review ID, type, title, author, and date of each review.

Code Lens Formatting


With this option you can format CodeLens itself: fonts, colors, and so on, under Tools, Options, Environment, Fonts and Colors. Select Show settings for: CodeLens, and the Items list will display options such as Indicator Text, Indicator Text (Hovered), Indicator Text (Selected), Indicator Text (Disabled), and Indicator Separator. If you make formatting changes, hit the OK button and enjoy the applied colors and fonts.

Code Lens Accessibility


You can easily access code lens indicators by mouse and with the keyboard keys.

Just point the mouse to the specified indicator and click on it to show indicator options.

From the keyboard, move the cursor to the desired method, press and hold the Alt key for 2 seconds, and it gives you numbered options; select the one you want by pressing its number.

Code Lens Options From Visual Studio Editor

You can also open the CodeLens options from the Visual Studio editor. To do so, just right-click on any one of the CodeLens indicators and you will see two options:

Refresh CodeLens Team Indicators.

CodeLens Options

The CodeLens Options item opens the same window that was elaborated at the start of this article. Hope you enjoyed this article; happy coding with the Visual Studio IDE.

Best Ever Performance And Debugging Tools In Visual Studio 2015

The performance of an application is of utmost importance, and this becomes very apparent from the perspective of clients when your application has performance leaks. Identifying a performance leak before the production environment is not an easy task, but the new Visual Studio 2015 Diagnostic Tools window makes it easier to do deep performance analysis before the application goes live.

Instead of running a full profiling tool, you might take one or more of the following steps.

Insert code into the app (such as System.Diagnostics.Stopwatch) to measure how long it takes to run between various points, iteratively adding stopwatches as needed to narrow down the hot path.

Step through the code to see if any particular step “feels slow.”
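The Stopwatch step above can be sketched like this (ProcessOrders is a hypothetical slow call):

```csharp
using System;
using System.Diagnostics;

var sw = Stopwatch.StartNew();
ProcessOrders();                     // suspected hot path
sw.Stop();

// Printed timings narrow down where the time goes.
Console.WriteLine($"ProcessOrders took {sw.ElapsedMilliseconds} ms");
```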

These practices are typically not accurate, not a good use of time, or both. That's why there are now performance tools in the debugger; they help you understand your app's performance during normal debugging.

Why not try this new and improved way to diagnose?

Diagnostic Tools Window

The primary difference you'll notice when debugging code in Visual Studio 2015 is the new Diagnostic Tools window that appears. To open it, go to Debug and click the menu item Show Diagnostic Tools, as in Figure 1.

Figure 1: How to open Diagnostic tools Window in Visual Studio 2015

These diagnostic tools present information in two complementary ways: they add graphs to the timeline in the upper half of the window, and they provide detailed information in the tabs on the bottom, as in Figure 2.

Figure 2: the New Diagnostic Tools Window in Visual Studio 2015

  • Debugger Events (with IntelliTrace) gives you access to all Break, Output, and IntelliTrace events collected during your debugging session. The data is presented both as a timeline and as a tabular view. The two views are synchronized and can interact with each other.
  • The Memory Usage tool allows you to monitor the memory usage of your app while you are debugging. You can also take and compare detailed snapshots of native and managed memory to analyze the cause of memory growth and memory leaks.
  • The CPU Usage tool allows you to monitor the CPU usage of your application while you are debugging.

New IntelliTrace UI and experience:

  • Record specific events.
  • Examine related code, data that appears in the Locals window during debugger events, and function call information.
  • Debug errors that are hard to reproduce or that happen in deployment.

Why debug with IntelliTrace? For example, when:

  • An exception happens.

Figure 3: PerfTips

There is also a new option to show the CPU consumption time with PerfTips by clicking on that checkbox.


In Visual Studio 2015, you’ll see three tools in the Diagnostics Tools window: Debugger (includes IntelliTrace), Memory Usage and CPU Usage.

You can enable or disable the CPU Usage and Memory Usage tools by clicking on the Select Tools dropdown as in the following image:

The Debugger tool has three tracks that show Break Events, Output Events and IntelliTrace Events.

Figure 4: Diagnostic Tools

This feature is used to trace the root cause of a bug or error quickly and reliably, using events such as clicks, post-click changes, and lots of other trace information. This rich tool holds a lot of information about your running events.

You can spend less time debugging your application when you use IntelliTrace to record and trace your code's execution history; you can find bugs easily because of what IntelliTrace lets you do.

Note: You can use IntelliTrace in Visual Studio Enterprise edition (but not the Professional or Community editions).

IntelliTrace enhances the debugging experience and saves you valuable debugging time! It does that by capturing additional events with useful information about your program's execution, allowing you to identify potential root causes with fewer debugging iterations. The data it collects appears as events in the IntelliTrace track and in the table in the Events details tab.

In the Enterprise edition of Visual Studio 2015, you will see that we have completely revamped the user interface of IntelliTrace by bringing it into the Diagnostic Tools window. IntelliTrace vastly enhances the debugger by automatically capturing interesting events in your application and surfacing them in the Events graph and the Events tab as in the following image:

Figure 5: Event

Icons in the graph area are the interesting events captured while debugging.

Hovering the mouse over an icon gives detailed information in a tooltip, and double-clicking an event jumps to it in the Events tab.

Traditional, or live, debugging shows only your application's current state, with limited data about past events. You either have to infer these events from the application's current state, or you have to recreate them by rerunning your application.

IntelliTrace expands this traditional debugging experience by recording specific events and data at these points in time. This lets you see what happened in your application without restarting it, especially if you step past where the bug is. IntelliTrace is turned on by default during traditional debugging and collects data automatically and invisibly. This lets you switch easily between traditional debugging and IntelliTrace debugging to see the recorded information. See IntelliTrace Features and what data does IntelliTrace collect?

IntelliTrace can also help you debug errors that are hard to reproduce or that happen in deployment. You can collect IntelliTrace data and save it to an IntelliTrace log file (.iTrace file). An .iTrace file contains details about exceptions, performance events, web requests, test data, threads, modules, and other system information. You can open this file in Visual Studio Enterprise, select an item, and start debugging with IntelliTrace. This lets you go to any event in the file and see specific details about your application at that point in time.

These events make it easier to find where the bug happens, as IntelliTrace gives full stack traces in the historical debugging section, as in the following:


Without IntelliTrace, you get a message about an exception but you don’t have much information about the events that led to the exception. You can examine the call stack to see the chain of calls that led to the exception, but you can’t see the sequence of events that happened during those calls. With IntelliTrace, you can examine the events that happened before the exception.

Videos Link

PerfTips (performance information in tooltips) is a new feature introduced in Visual Studio 2015 through which you can see the time elapsed between one breakpoint hit and the next, or while you step through the code.

It gives you tips on the elapsed time as you debug through code, as in the following image; hovering the mouse gives a detailed analysis, and the value it estimates also includes debugger overhead.

Clicking the elapsed time opens the Diagnostic Tools window and shows the current event with process memory and, if selected, CPU utilization.

Besides the elapsed time, if you want to show CPU utilization, this can be set by right-clicking the elapsed time link.

When you right-click on that time, it shows the PerfTips options to configure it.

By default, elapsed time is shown in milliseconds.

Figure 6: Performance Tools


PerfTips Video