2.05.2012

Reviewing uCertify 70-515-CSharp MCTS exam prepkit (Part 2)

Hi again!

As I wrote in the last post, I'm reviewing a preparation kit for the 70-515-CSharp MCTS exam from uCertify.
Today, I'm here to write about what I've found to be the strengths and weaknesses of this product.

This is a full-featured product that covers pretty much everything tools of this kind should cover. I would highlight the following strengths of the product:
  • Easy and intuitive user-interface;
  • Rather complete explanations of why an answer is either correct or incorrect. Some other products I've seen before are only concerned with wrong answers and dismiss explanations for the correct ones, which may be just as relevant;
  • Extensive study materials covering most topics targeted by the exam. I didn't use them much myself, because I would rather take a peek at the documentation supplied in MSDN (links to these online resources are also supplied in a section of this product) and try the old "learn by doing" method. However, there are people who would rather read the study materials and retake the tests, so I believe this is a useful feature.
  • Several kinds of test "layouts", from the traditional format to adaptive tests (which dynamically adapt the question complexity to your answers; that is, if you're getting your answers right, the complexity will gradually increase, otherwise it will decrease, thus showing you questions according to your knowledge level).
and my favorite feature:
  • The ability to take quick tests. Say you only have a handful of minutes and you want to study a bit. Just open the application and start a fixed-time test, specifying how many minutes you wish to spend. Lacking enough free time to take complete tests, I found myself using this kind of test a lot.

I would like to point out that although the goal of this application is to prepare you to take the 70-515 exam, you can also use it as a way of enriching and testing your knowledge. For instance, I found out that I have some serious gaps in ASP.NET MVC (which doesn't surprise me, knowing that I haven't developed any serious application in MVC). I just might target MVC as one of the next things to investigate more deeply.
Bottom line: if you've decided to take this exam, tools of this kind are definitely a good way of testing and evaluating your knowledge, and I would recommend the uCertify product. But always remember that you might find different questions and topics in the real exam, so real-world experience is a must-have. Don't take your exam for granted just because you get good results in the preparation tool.

By the way, any readers of my blog are entitled to a 10% discount (in addition to any existing sale) on any test preparation software from uCertify (just use the discount code 'UCPREP' in the uCertify shopping cart).

1.02.2012

Reviewing uCertify 70-515-CSharp MCTS exam prepkit

Hi! Long time since my last post, right? Let's hope I can make a few more posts in this brand new year than I did in 2011.

I'll start the year by reviewing a preparation kit for the 70-515-CSharp MCTS exam from uCertify.
I got an offer from them to review their PrepKit (which you can find here) and I accepted the challenge. At first glance it looks full-featured and has a nice UI, but I haven't had the time to take a few tests yet, and that's what counts most, so I'll leave any further opinions for later. As soon as I'm done with the full review I'll post it for you all.

Best regards and a Happy New Year!

8.04.2011

Redistributing user controls

Ever needed a recipe to build a user control library to use across several web applications? Here's one!
First of all, we should realise that this isn't something you get out-of-the-box, and as such there are several half-solutions out there; as far as I know, there isn't a perfect one. The one I'm presenting here suits what I'm trying to achieve best, which doesn't mean it is the best for other people's situations.

My main motivation, other than user control reusability, was to create a project structure that allowed me to migrate a web application developed in VB.NET to C#. This has to be done over a long period and I don't want to keep adding VB.NET code every time a new feature is needed. Hence I decided to build user controls in C# that I could use in VB.NET pages, thus avoiding any new code-behind in VB.NET.

There is, however, a big problem with creating user controls outside web applications: we're not supposed to do it! When you build a page (aspx) or a user control (ascx) in a regular web application, the compiler creates an assembly where each of these files produces a class that inherits from our "code-behind" class. This class is where some of the magic happens. When we create a user control outside the web application where it is used, the afore-mentioned classes don't exist. So how do we cope with this? We'll create a base class for all our user controls and inject some magic into it as well! This base class will be responsible for:
  - manually loading the ascx file and parsing it to dynamically create the elements/controls present in the ascx;
  - "binding" these elements/controls to the members that represent them in the code-behind class (these members are usually declared in the designer file).
Here's the code to such a base class:
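The original snippet isn't reproduced here, but a minimal sketch of what such a base class might look like follows. The resource-naming convention and the binding details are assumptions and may need adjusting to your project:

```csharp
// Sketch of a BaseUserControl: loads the ascx markup from an embedded
// resource, parses it into a control tree, and binds the resulting child
// controls to same-named fields of the code-behind class.
using System;
using System.IO;
using System.Reflection;
using System.Web.UI;

public class BaseUserControl : UserControl
{
    protected override void OnInit(EventArgs e)
    {
        LoadControlFromEmbeddedResource();
        base.OnInit(e);
    }

    private void LoadControlFromEmbeddedResource()
    {
        Type type = GetType();
        // Assumes the ascx is embedded with a resource name matching the
        // full type name; adjust to your project's naming.
        string resourceName = type.FullName + ".ascx";
        using (Stream stream = type.Assembly.GetManifestResourceStream(resourceName))
        using (StreamReader reader = new StreamReader(stream))
        {
            // ParseControl builds the control tree the page parser would
            // normally generate for us.
            Control parsed = ParseControl(reader.ReadToEnd());
            Controls.Add(parsed);
            BindChildControls(parsed, type);
        }
    }

    private void BindChildControls(Control root, Type type)
    {
        // Bind each child control to the field with the same ID in the
        // code-behind class (the fields copied from the designer file).
        foreach (Control child in root.Controls)
        {
            if (!string.IsNullOrEmpty(child.ID))
            {
                FieldInfo field = type.GetField(child.ID,
                    BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public);
                if (field != null && field.FieldType.IsInstanceOfType(child))
                    field.SetValue(this, child);
            }
            BindChildControls(child, type);
        }
    }
}
```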


And here's the rest of my recipe:

1- Create a new ASP.NET web application project. This makes sure we can create new user controls directly through the "new item" dialog, which would not be possible if we were to create a regular class library project.
  1.1- Clear all the files that Visual Studio placed inside the new project (leave only the AssemblyInfo.cs)

2- Creating user controls in your new project
  2.1 - Add a new "Web User Control" item to your project
  2.2 - Design your control as you always do
  2.3 - Open the designer files and copy the declarations of the controls you'll want to access later on to the code-behind class. (You can also keep the designer files as partial classes of your code-behind class, but there's not much point in it, as Visual Studio won't be able to update them later on)
  2.4 - Delete the designer files (unless you've decided to keep them as partial classes)
  2.5 - The control header (in the ascx file) should be cleaned to keep only the Language attribute, thus leaving only the following header:
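Assuming a C# control, the cleaned header would look like this:

```aspx
<%@ Control Language="C#" %>
```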

  2.6 - Change the ascx file build action from "Content" to "Embedded Resource"
  2.7 - Change the control base class from "UserControl" to our own "BaseUserControl"

3- Using the user controls in your web application
  3.1 - Reference the new project in your old web application
  3.2 - Change the web.config to avoid having to register the control in every page where you'll use it:
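A sketch of the relevant web.config section; the tagPrefix, namespace and assembly names are placeholders for your own library:

```xml
<system.web>
  <pages>
    <controls>
      <add tagPrefix="uc" namespace="MyControlLibrary" assembly="MyControlLibrary" />
    </controls>
  </pages>
</system.web>
```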


Alternatively, you could register the control in the pages where you want to use it by adding the following line:
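Something along these lines (the prefix, namespace and assembly names are placeholders):

```aspx
<%@ Register TagPrefix="uc" Namespace="MyControlLibrary" Assembly="MyControlLibrary" %>
```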


And we're done! Here's a list of the issues found with this solution (so far):
- Couldn't get the control skins to work with my user controls
- Local and Global Resources are currently in the web application (instead of being properly encapsulated in the user controls that use them), as ASP.NET loads them from the page, not the user control. There are at least two workarounds for this. We could create a resource provider and resolve the resources manually, or we could use resources like we do in a regular WinForms application, instead of using the local/global resources ASP.NET approach.

That's all folks!

6.23.2011

Testing emails with neptune

Most of us have already developed some feature that sends emails, and when we finish the development phase we need to test it. But how? Surely we can force the application to send a mail to ourselves, but sometimes we don't even have an available SMTP server to use, or enabling it requires further setup steps such as some authentication mechanism.
An alternative is to use a lightweight local SMTP server focused on these development needs. So today I'll show you how to use Neptune. (There is another product with similar features, which I've never used, so feel free to explore it as an alternative to Neptune. It's called smtp4dev and you can find it at smtp4dev.codeplex.com.)
The latest Neptune version (downloadable at http://donovanbrown.com/post/Neptune-with-POP3.aspx) even supplies a POP port so that you can easily check the emails you sent, instead of only acknowledging that an email was successfully sent.
Neptune runs in the Windows tray. With a right click on its icon you access a context menu which allows you to stop the SMTP server and open a window showing further details, such as the SMTP/POP ports and the number of email messages processed.




As mentioned earlier, you can easily set up an email account in Outlook (or another email client) to use the POP server supplied by Neptune to take a look at the emails sent from your application. Most of the account details (such as the username, password and email address) are dummy values and make no difference to the POP server. You just have to specify that the mail server is 127.0.0.1.


If you’re interested in unit testing your emails, Neptune also supplies some extensibility mechanisms for that purpose.

2.25.2011

Using Keyboard Hooks in .NET

Here's another code snippet. This one shows you how to use keyboard hooks in .NET.
I took hints from a few online articles on this subject and stumbled upon an issue that arises when you try to use hooks in .NET 4.0. Typically, when you invoke SetWindowsHookEx, you supply it with a handle to the current module. In most articles you'll find that this can be done with the following line:
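The line in question is usually some variation of the following sketch (hookId, proc and WH_KEYBOARD_LL come from the surrounding hook code):

```csharp
// Typical snippet from older articles. Under .NET 4 a managed module no
// longer maps to an unmanaged HINSTANCE, so the handle obtained this way
// is no longer valid for SetWindowsHookEx.
IntPtr hInstance = Marshal.GetHINSTANCE(
    Assembly.GetExecutingAssembly().GetModules()[0]);
hookId = SetWindowsHookEx(WH_KEYBOARD_LL, proc, hInstance, 0);
```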

However, this does not work in .NET 4. For further information see this discussion and this one as well. As stated in the latter, an alternative way of doing this so that it works in .NET 4 (as well as in older framework versions) is:
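A sketch of the .NET 4 friendly alternative, which asks kernel32 for the module handle instead:

```csharp
// Declared once alongside the other P/Invoke signatures:
[DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
private static extern IntPtr GetModuleHandle(string lpModuleName);

// And used like this when installing the hook:
using (Process curProcess = Process.GetCurrentProcess())
using (ProcessModule curModule = curProcess.MainModule)
{
    hookId = SetWindowsHookEx(WH_KEYBOARD_LL, proc,
        GetModuleHandle(curModule.ModuleName), 0);
}
```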




Here's the code snippet for my HookManager class:
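The original snippet isn't reproduced here; below is a minimal sketch of what such a HookManager might look like (names and details are illustrative). It installs a low-level keyboard hook and raises a .NET event for every key press:

```csharp
// Minimal low-level keyboard hook manager for a WinForms context.
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public static class HookManager
{
    private const int WH_KEYBOARD_LL = 13;
    private const int WM_KEYDOWN = 0x0100;

    private delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);

    // Keep a reference to the delegate so the GC doesn't collect it while
    // the unmanaged hook still points at it.
    private static LowLevelKeyboardProc proc = HookCallback;
    private static IntPtr hookId = IntPtr.Zero;

    public static event EventHandler<KeyEventArgs> KeyDown;

    public static void Install()
    {
        using (Process curProcess = Process.GetCurrentProcess())
        using (ProcessModule curModule = curProcess.MainModule)
        {
            // The .NET 4 friendly way of getting the module handle.
            hookId = SetWindowsHookEx(WH_KEYBOARD_LL, proc,
                GetModuleHandle(curModule.ModuleName), 0);
        }
    }

    public static void Uninstall()
    {
        UnhookWindowsHookEx(hookId);
    }

    private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
        {
            int vkCode = Marshal.ReadInt32(lParam);
            var handler = KeyDown;
            if (handler != null)
                handler(null, new KeyEventArgs((Keys)vkCode));
        }
        // Always let the other hooks in the chain run.
        return CallNextHookEx(hookId, nCode, wParam, lParam);
    }

    [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr SetWindowsHookEx(int idHook,
        LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode,
        IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr GetModuleHandle(string lpModuleName);
}
```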



And a usage example:
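For instance, from a WinForms application (a message loop is needed for the low-level hook callbacks to be delivered):

```csharp
// Hypothetical usage: subscribe, install, run the message loop, uninstall.
HookManager.KeyDown += (sender, e) =>
    Console.WriteLine("Key pressed: " + e.KeyCode);
HookManager.Install();
Application.Run(new Form());
HookManager.Uninstall();
```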

2.18.2011

Brand new Reflector alternatives

So, now that our beloved free .NET Reflector is about to run out of juice, what alternatives do we have? I've said before that I'm willing to pay for it, but there are many of you out there who are just too pissed off at Redgate to buy anything from them!
Who's about to step in and take away some of Redgate's future clients? JetBrains! Yes, the same people who develop another much loved tool: ReSharper. They're introducing decompiling capabilities in their tool and also promising to release the decompiler as a free standalone tool by the end of the year! Check it out here: http://blogs.jetbrains.com/dotnet/
That got me wondering if it's finally time to buy ReSharper! What about you?

Also, there's another free tool on the horizon: ILSpy

Edit (24/02/2011): It seems another alternative is about to rise on the horizon! If you look at Telerik's JustCode pre-release blog post (where they provide a special preview of the release highlights for their Q1'2011 release) you can read the following paragraph:
The new decompiling functionality will enable you to recover lost source code and allow you to explore and analyze compiled .NET assemblies. We know a lot of you will welcome this addition as it comes in response to your feedback and recent developments in the industry.

2.16.2011

Test INotifyPropertyChanged Pattern implementations

Today, I'll cover a well-known topic: different implementations of the INotifyPropertyChanged interface (henceforth referred to as INPC). Its most common use is when you're implementing the MVVM pattern. This interface makes sure that your viewmodel properties update the bound interface components when you change a property value. So, with so many WPF and Silverlight applications out there, it's a well-known interface, and you probably also know that its regular implementation is somewhat tedious and verbose! That's why there are so many different alternatives out there. I decided to collect a few and test them for performance.
The main motivation was to validate the suspected performance penalty of implementations based on expression trees. So I'm pitting that kind of implementation against the most common and basic implementation, two AOP (see http://en.wikipedia.org/wiki/Aspect-oriented_programming) implementations (one using IL weaving and another using a dynamic proxy) and the usage of dependency properties. Dependency properties may seem a little out of place, but they are one of the common alternatives to using INotifyPropertyChanged.

The main reasons to research different INPC implementations (the aforementioned tedium and verbosity) should obviously be taken into account. Most developers don't like that they can't use the simpler syntax of automatic properties if they want to use INPC.
It's not my objective to discuss the advantages and disadvantages of each, as you can easily find several articles that delve deep into that subject. However, there are a few points we should keep in mind when choosing the kind of implementation to use.
A perfectly good example is the set of disadvantages of using dependency properties: the need to inherit from DependencyObject, and the fact that these properties can't be changed from non-UI dispatcher threads. These may be two show-stoppers depending on your situation. The first also applies to the usage of expression trees. If your viewmodels depend on some kind of ViewModelBase class then that's not much of a fuss, but if you wish to achieve some kind of Naked MVVM implementation (see http://blog.vuscode.com/malovicn/archive/2010/11/07/naked-mvvm-simplest-possible-mvvm-approach.aspx) then you can't rely on a base class. Anyway, if you can avoid it, there's no need to burn your base class (see http://www.artima.com/intv/dotnet.html).

On the other hand, most AOP approaches only make sense if you're using an IoC container to resolve your viewmodels. This is due to the fact that you can't instantiate your viewmodel class directly; you'll need to call a method that creates a proxy (where the INPC is injected) for your viewmodel. One of the AOP approaches used here (the one using PostSharp) avoids this by injecting the INPC into your class in a post-build action through IL weaving (see http://www.sharpcrafters.com/aop.net/msil-injection).

The Test

The test is pretty simple. Each of the implementations resides in a separate class which has the integer property (conveniently named "Property") we'll be using in the test. This property will be assigned 200,000 times in a loop, and we'll measure the time it takes for this loop to complete. We'll also make sure that the PropertyChanged event is handled by an event handler that does absolutely nothing (we just want to measure the event handler invocation time, not the time the event handler takes to execute, so we're making it as simple as possible).

Example source for a specific test:
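The original listing isn't reproduced here; a sketch of the PostSharp test could look like this (class and property names are illustrative). The PropertyChanged event is injected at post-build time, so it doesn't exist at compile time and must be wired up through reflection:

```csharp
// Attach a do-nothing handler to the injected event via reflection,
// then time 200,000 property assignments.
var instance = new PostSharpTestClass();

EventInfo eventInfo = instance.GetType().GetEvent("PropertyChanged");
PropertyChangedEventHandler handler = (s, e) => { };  // do-nothing handler
eventInfo.AddEventHandler(instance, handler);

var watch = Stopwatch.StartNew();
for (int i = 0; i < 200000; i++)
    instance.Property = i;
watch.Stop();
Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);
```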


I've shown the example of the PostSharp implementation because the AOP test implementations require you to add the event handler through reflection, as the event doesn't exist at compile time. That's the only difference from the non-AOP implementations. Notice that you won't need to do this in real-life applications, because you won't be explicitly registering the PropertyChanged event; that's the job of the framework's binding mechanism.
Also note that in the Castle Dynamic Proxy implementation, the instantiation of the test class is replaced by a call that creates the proxy (which, as stated above, would not be visible if we were using an IoC container to resolve the class).
Example:
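A sketch of that instantiation (names are illustrative):

```csharp
// Instead of new-ing up the class, ask the proxy generator for a proxy
// with the INPC interceptor attached.
var generator = new ProxyGenerator();
CastleTestClass instance = generator.CreateClassProxy<CastleTestClass>(
    new NotifyPropertyChangedInterceptor());
```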


Test Results


Implementations
Here's the code for the implementations used in the test, which, by the way, I'm not claiming ownership of; I've just gathered them and altered them slightly where needed.
Common INPC implementation (which I'm calling the "Plain old" way)
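The original listing isn't reproduced here; a typical version of this implementation looks like the following (class name is illustrative):

```csharp
// The traditional implementation: a backing field per property plus an
// explicit notification call with the property name as a string.
using System.ComponentModel;

public class PlainOldTestClass : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private int property;
    public int Property
    {
        get { return property; }
        set
        {
            if (property != value)
            {
                property = value;
                OnPropertyChanged("Property");
            }
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```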



Expression Trees implementation
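The original listing isn't reproduced here; a typical expression-tree variant looks like this (class name is illustrative). It extracts the property name from a lambda, trading runtime cost for compile-time safety on renames:

```csharp
using System;
using System.ComponentModel;
using System.Linq.Expressions;

public class ExpressionTreesTestClass : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private int property;
    public int Property
    {
        get { return property; }
        set { property = value; OnPropertyChanged(() => Property); }
    }

    protected void OnPropertyChanged<T>(Expression<Func<T>> propertyExpression)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            // Inspecting the expression tree on every change is what makes
            // this variant slow.
            var memberExpression = (MemberExpression)propertyExpression.Body;
            handler(this, new PropertyChangedEventArgs(memberExpression.Member.Name));
        }
    }
}
```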

Dependency Properties implementation
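The original listing isn't reproduced here; a typical dependency property variant looks like this (class name is illustrative). Change notification is handled by the WPF property system, at the cost of inheriting from DependencyObject:

```csharp
using System.Windows;

public class DependencyPropertyTestClass : DependencyObject
{
    public static readonly DependencyProperty PropertyProperty =
        DependencyProperty.Register("Property", typeof(int),
            typeof(DependencyPropertyTestClass), new PropertyMetadata(0));

    public int Property
    {
        get { return (int)GetValue(PropertyProperty); }
        set { SetValue(PropertyProperty, value); }
    }
}
```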

PostSharp AOP implementation
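The original listing isn't reproduced here. With an IL-weaving aspect the test class shrinks to an automatic property; the NotifyPropertyChanged attribute below stands for a PostSharp aspect that injects the INotifyPropertyChanged plumbing at post-build time (the aspect itself is not shown, and the names are illustrative):

```csharp
// The whole class: the aspect weaves in the event and the notification
// calls after compilation.
[NotifyPropertyChanged]
public class PostSharpTestClass
{
    public int Property { get; set; }
}
```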

Castle Dynamic Proxy AOP implementation
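The original listing isn't reproduced here. With Castle Dynamic Proxy the INPC logic lives in an interceptor, and the test class only needs virtual properties so the generated proxy can intercept the setters. A simplified sketch (names are illustrative; real implementations usually also make the proxy expose INotifyPropertyChanged as an additional interface):

```csharp
using System.ComponentModel;
using Castle.DynamicProxy;

public class CastleTestClass
{
    public virtual int Property { get; set; }
}

public class NotifyPropertyChangedInterceptor : IInterceptor
{
    public event PropertyChangedEventHandler PropertyChanged;

    public void Intercept(IInvocation invocation)
    {
        invocation.Proceed();
        // Raise PropertyChanged after every property setter call.
        if (invocation.Method.Name.StartsWith("set_"))
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(invocation.InvocationTarget,
                    new PropertyChangedEventArgs(invocation.Method.Name.Substring(4)));
        }
    }
}
```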



Conclusions
After this, I think it's safe to conclude that the expression trees implementation implies a severe performance penalty, and although it's far simpler than the "Plain old" way of doing it, it still loses in simplicity when compared to the AOP implementations. That said, I've always been a fan of PostSharp and I currently consider it to be the best way of implementing INPC, although it is a paid tool.

2.04.2011

RIP .NET Reflector (as a free version)

If you still haven't heard about it, here's some bad news: Redgate is about to put an end to .NET Reflector's free version as soon as they release the new version (which will be .NET Reflector 7). This new version seems to be scheduled for late February/early March. Knowing that the free version will only work until May 30, you'll soon have to make a decision: either buy it or stop using it. Well, I guess you could maintain a virtual machine with a frozen date and no internet connectivity at all to avoid connections with Redgate servers, but where's the practicality in that?


I've made my decision, and it's an easy one: $35 is not that much for one of the most important tools I use on a daily basis! Otherwise, I would have to make ildasm my new best friend!
As a developer I understand the importance of getting paid for what you develop, and that the corporate side demands return on investment for the new features being developed. Don't get me wrong, I do like the idea of open source, I just don't think we can apply it everywhere. The fact that the .NET community isn't taking this decision very well isn't much of a surprise to me. We all had the expectation that we'd never have to pay for it; you may even say that Redgate led us to believe that, and that's what all the fuss is about!
At the end of the day, what really matters is that we're talking about a fantastic product that deserves the investment. Also, Redgate states that "Version 7 will be sold as a perpetual license, with no time bomb or forced updates", so it's definitely worth the money. I may even consider buying the pro edition...

What worries me is knowing that there are a few companies that will not buy it for their developers! Let's just hope my company buys the necessary licenses ASAP, so I'm not forced to use my personal license at work!

1.19.2011

Stack Walking

Have you ever needed to determine who's calling your methods or merely needed to inspect the call stack?

There's a fairly simple way of doing it using the StackTrace class. I'll show you a usage example.
First, let's start by creating a small class that creates a stack of 3 method calls, just so that we have some call stack to look into.
This should do it:
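The original listing isn't reproduced here; a sketch of such a class could look like this (names are illustrative). It nests three method calls and raises an event from the deepest one:

```csharp
using System;

public class StackCreator
{
    public event EventHandler ReachedDeepestMethod;

    public void Method1() { Method2(); }
    private void Method2() { Method3(); }

    private void Method3()
    {
        // At this point the call stack contains Method1 -> Method2 -> Method3.
        var handler = ReachedDeepestMethod;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}
```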



Now, let's call Method1 and hook up an event handler that will gather the stack trace and print some information about each stack frame.
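A sketch of that handler, using the hypothetical StackCreator class above (names are illustrative):

```csharp
using System;
using System.Diagnostics;

static void ShowStackTrace(bool needFileInfo)
{
    // 'true' asks for file, line and column info, which is slower and
    // requires debug symbols.
    StackTrace stackTrace = new StackTrace(needFileInfo);
    foreach (StackFrame frame in stackTrace.GetFrames())
    {
        Console.WriteLine("{0} ({1}:{2})",
            frame.GetMethod().Name,
            frame.GetFileName(),
            frame.GetFileLineNumber());
    }
}

// Hooking it up:
var creator = new StackCreator();
creator.ReachedDeepestMethod += (s, e) => ShowStackTrace(true);
creator.Method1();
```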


You should pay special attention to the boolean parameter of the StackTrace constructor. It defines whether file information should be gathered (the file, line and column of the method being called). As we'll see below, gathering this information incurs a performance penalty! Also, these values will likely be absent from a release build, as it is built without debug symbols.

You may be wondering about the performance of such a feature. I've done some testing and got the values below. Note that these tests are merely indicative: my laptop had dozens of applications running that could interfere with processor availability. Anyway, each value in the results below is the best out of three attempts, and the average values for a single execution are consistent. Also, for testing purposes, the Console.WriteLine instructions were removed.
These tests were made using a Debug version of the application.
The graph above shows the time it takes to run several executions of the ShowStackTrace method. The same tests were made with and without gathering file information. As you can see, there is a substantial difference.
The graph below shows the average time of a single execution.
As you can see, a single run without file information takes on average 0.05 milliseconds! On a debug build! That seems pretty darn fast. It might be interesting to point out that this is the same way exceptions gather their stack trace.

I wonder how these times stack up against the times achieved in a stack walk using the dbghelp.dll library. Maybe in a future blog post...

12.10.2010

.NET Encryption - Part 3

Now, for the third and last part of this series (the first and second parts can be found here and here) about encryption. I promised before that I would supply example source code for the previously mentioned encryption operations, and that's what this post is all about. The source below shows how to perform each of these operations with a handful of lines of code. The only thing covered here that I haven't mentioned before is how to derive algorithm keys (which are arrays of bytes) from a string, typically a password or pass-phrase. That's covered in the first few lines and you should pay special attention to it, because it is something you'll need plenty of times.
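The original listing isn't reproduced here; a condensed sketch of the operations discussed in this series could look like the following (the salt and messages are placeholder values; in real code the salt should be random and stored along with the ciphertext):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

class EncryptionExamples
{
    static void Main()
    {
        // Derive key material from a pass-phrase (PBKDF2).
        byte[] salt = Encoding.UTF8.GetBytes("12345678");
        var derive = new Rfc2898DeriveBytes("my pass-phrase", salt);
        byte[] key = derive.GetBytes(32);   // 256-bit key
        byte[] iv = derive.GetBytes(16);    // 128-bit IV

        byte[] plain = Encoding.UTF8.GetBytes("secret message");

        // Symmetric encryption.
        byte[] encrypted;
        using (var rijndael = new RijndaelManaged())
        using (var encryptor = rijndael.CreateEncryptor(key, iv))
        using (var ms = new MemoryStream())
        {
            using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
                cs.Write(plain, 0, plain.Length);
            encrypted = ms.ToArray();
        }

        // Symmetric decryption.
        byte[] decrypted;
        using (var rijndael = new RijndaelManaged())
        using (var decryptor = rijndael.CreateDecryptor(key, iv))
        using (var ms = new MemoryStream(encrypted))
        using (var cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
        using (var result = new MemoryStream())
        {
            cs.CopyTo(result);
            decrypted = result.ToArray();
        }

        // Non-keyed hashing.
        byte[] hash;
        using (var sha = new SHA256Managed())
            hash = sha.ComputeHash(plain);

        Console.WriteLine(Encoding.UTF8.GetString(decrypted));
    }
}
```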


Hope you've enjoyed these cryptography sessions.

12.08.2010

Software Building vs Software Maintenance



I've been reading "The Mythical Man-Month" over the last few days and I'd like to highlight here a particular paragraph:
Systems program building is an entropy-decreasing process, hence inherently metastable. Program maintenance is an entropy-increasing process, and even its most skillful execution only delays the subsidence of the system into unfixable obsolescence.
So, for those of us working on program maintenance, this means that even if from time to time we introduce new features into the system, we should keep in mind that someday (maybe not as far into the future as we would like to believe) the system we're working on will be deemed obsolete and eventually replaced! Isn't it so much better to design and implement brand new systems? Maybe that's why we're so eager to refactor existing sub-systems and sometimes even throw them away and rebuild them from scratch...

By the way, this is one of those books that should be regarded as a "must-read" for all software engineers/architects!

12.01.2010

.NET Encryption - Part 2

In the first article of this series I briefly covered some topics about .NET encryption. Now that we've all remembered how encryption works, let's move on to the real deal.
Let's start by seeing what .NET provides us with. In the table below I've grouped every algorithm (please let me know if I've missed any!) and categorized them by purpose (symmetric encryption, asymmetric encryption, non-keyed hashing and keyed hashing) and implementation.
There are three kinds of implementations:
  • Managed: pure .NET implementations
  • CryptoServiceProvider: managed wrappers to the Microsoft Crypto API native code implementations
  • CNG: managed wrappers to the Next Generation Cryptography API designed to replace the previously mentioned CryptoAPI (also known as CAPI)


In the table above, I’ve highlighted in red a few classes which were only introduced in .NET 3.5. However these new classes (except AesManaged) can only be used on Windows Vista and later operating systems.  This is due to the fact that the CNG API was first released along with Windows Vista.
Please note that .NET framework supports only a few of the CNG features. If you wish to use CNG more extensively in .NET you may be interested in delving into the CLR Security Library.

So, the first big question is: with so many flavors what should we choose? Of course there’s no absolute and definitive response, there are too many factors involved, but we can start by pointing some of the pros and cons of each kind of implementation.
CNG: it has the downside of only running on the latest operating systems. On the upside, it is the newer API (you should regard CAPI as deprecated), it's FIPS-Certified and it's native code (hence likely to be faster than the managed implementations).
Managed: it has the downside of not being FIPS-Certified and is likely to be slower than the native implementations. On the upside, this approach has increased portability, as it works across all platforms (and apart from AesManaged you don't even need the latest .NET version).
CSP: CryptoServiceProviders supply a bunch of FIPS-Certified algorithms and even allow you to use cryptographic hardware devices. Note that .NET support for Crypto Service Providers is a wrapper for the CAPI features and doesn't cover all of CAPI's features.

You may ask "What does FIPS-Certified mean?". FIPS (Federal Information Processing Standards) is a set of security guidelines demanded by several federal institutions and governments. Your system can be configured to allow only the use of FIPS-Certified algorithms. When faced with such a requirement, using a non-FIPS-Certified algorithm is considered the same as using no encryption at all!

So, now that you know how to choose among the different kinds of implementations, another (perhaps more relevant and important) question is how to choose the algorithm to use. It mostly depends upon the encryption strategy you are using.
  • For a symmetric algorithm, Rijndael is mostly recommended. AES is no more than a Rijndael implementation with fixed block and key sizes.
  • For asymmetric algorithms, RSA is the common option.
  • For hashing purposes, the SHA-2 family of algorithms (SHA256, SHA384, SHA512) is recommended. MD5 is considered to have several flaws and is considered insecure; SHA1 has also recently been considered insecure.
That's all for this blog post. In the next part of the series I'll show you source code examples and maybe delve into a few common scenarios and necessities.

11.16.2010

.NET Encryption - Part 1

In this series, I'll target .NET encryption. It will be a series of articles, to avoid extensive blog posts.
I want to make a kind of a personal bookmark for whenever I need to use it. Heck, that’s one of the reasons most of us keep technical blogs, right?
Encryption is one of those things we don’t tend to use on a daily basis, so it’s nice to have this kind of info stored somewhere to help our memory!

In this Part 1, let’s start by remembering a few concepts:

Symmetric encryption
In this method, a single key is used to encrypt and decrypt data, hence, both the sender and the receiver need to use the exact same key.
Pros:
  • Faster than asymmetric encryption
  • Consumes less computer resources
  • Simpler to implement
Cons:
  • The shared key must be exchanged between both parties, which itself poses a security risk: the key exchange must not be compromised!

Asymmetric encryption
This method uses two keys: the private key and the public key. The public key is publicly available for everyone who wishes to send encrypted messages. These encrypted messages can only be decrypted by the private key. This provides a scenario where everyone can send encrypted messages, but only the receiver bearing the private key is able to decrypt the received message. 
Pros:
  • Safer, because there’s no need to exchange any secret key between the parties involved in the communication
Cons:
  • Slower than symmetric encryption
  • Bad performance for large sets of data
  • Requires a Key management system to handle all saved keys

Hashing
Hashing isn't encryption per se, but it's typically associated with it. Hashing is a mechanism that, given some input data, generates a hash value from which the original data cannot be deduced. Typically a small change in the original message produces a completely different hash value. Hash values are typically used to validate that some data hasn't been tampered with. Also, when sensitive data (like a password) needs to be saved but its value is never to be read (only validated against), the hash can be saved instead of the data.


That's all folks! In the next part(s) I'll cover the encryption algorithms available in .NET, how to choose among them, some real-life scenarios and source code examples.

11.04.2010

Silverlight Strategy Tweaked to handle HTML5

Recently, there has been quite some buzz about Silverlight and HTML5. Microsoft has stated over and over that they are supporting and investing a lot in HTML5. Have you heard the PDC keynote? If you have, then you've surely noticed the emphasis Ballmer placed on HTML5! Where was Silverlight in that keynote? Yes, I know PDC's Silverlight focus was all about WP7, but what about all the other platforms?

So where does that leave Silverlight?
- Are they dropping the investment in SL?
- Does it still make sense to support both platforms knowing that their targets and objectives are slightly different?
- Will SL make sense only for smaller devices as WP7?
- Is SL going to be the future WPF Client Profile (just as we nowadays have lighter .NET versions called "Client Profile"), with the gap between WPF and SL continuously reduced until only WPF exists? Will it be named "WPFLight"?
- Can HTML5 completely replace SL? Does it make sense to build complex UI applications in plain HTML5? Can/Should I build something like CRM Dynamics in HTML5?
- Is it the best alternative if you want to invest in developing cloud applications?

Much has been written about this subject; there are many opinions and, mostly, many unanswered questions. These are just a few of the questions I've heard and read in the last few weeks/months; they are hardly questions that just popped out of my mind... Let's call them cloud questions: they are all over the web!
What I would like to point out is that Microsoft is aware of this, and in response they've published an announcement on the Silverlight team blog (http://team.silverlight.net) talking about changes in the Silverlight strategy (http://team.silverlight.net/announcement/pdc-and-silverlight). I would like to quote the last part of it:
We think HTML will provide the broadest, cross-platform reach across all these devices. At Microsoft, we’re committed to building the world’s best implementation of HTML 5 for devices running Windows, and at the PDC, we showed the great progress we’re making on this with IE 9.
The purpose of Silverlight has never been to replace HTML, but rather to do the things that HTML (and other technologies) can’t, and to do so in a way that’s easy for developers to use. Silverlight enables great client app and media experiences. It’s now installed on two-thirds of the world’s computers, and more than 600,000 developers currently build software using it. Make no mistake; we’ll continue to invest in Silverlight and enable developers to build great apps and experiences with it in the future

So, what they are saying here is:
- We acknowledge that HTML5 is the best cross-platform technology for the web
- We think there’s still some room for Silverlight, namely complex user interface client apps

And finally: “we’ll continue investing in it!”. The question that might pop into your mind is: “Will they? Really? A long-term investment? Or rather a short one?”

What do I think? I think there’s still room for SL applications and I’m looking forward to seeing the developments and how SL will continue to close the gap to the full WPF framework.

9.20.2010

Memory Dump to the Rescue

Surely you’ve been through those situations where a user or a tester reports an issue that you can’t reproduce in your development environment. Even if your environment is properly set up, some bugs (such as a Heisenbug) can be very difficult to reproduce. Sometimes you even have to diagnose the program in the testing/production environment! Those are the times you wish you had a dump of the process state at the moment the bug occurred. Heck, you’ve even read recently that Visual Studio 2010 allows you to open memory dumps, so you don’t even have to deal with tools like windbg, cdb or ntsd!
Wouldn’t it be great if you could instruct your program to generate a minidump when it stumbles upon a catastrophic failure? 
Microsoft supplies the Debug Help Library which, among other features, allows you to write these memory dumps. The code below is a .NET wrapper around this particular feature, which allows you to easily create a memory dump.
Note, however, that the target assembly must be compiled against .NET 4.0; otherwise Visual Studio will only be able to do native debugging.
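A minimal sketch of such a wrapper (the class and method names here are my own; the P/Invoke signature is the `MiniDumpWriteDump` function from dbghelp.dll, and only a subset of the `MINIDUMP_TYPE` flags is shown):

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

static class MiniDump
{
    // Subset of the MINIDUMP_TYPE flags from DbgHelp.h.
    [Flags]
    enum MiniDumpType : uint
    {
        Normal = 0x00000000,
        WithFullMemory = 0x00000002,
        WithHandleData = 0x00000004
    }

    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool MiniDumpWriteDump(
        IntPtr hProcess,
        uint processId,
        SafeHandle hFile,
        MiniDumpType dumpType,
        IntPtr exceptionParam,
        IntPtr userStreamParam,
        IntPtr callbackParam);

    // Writes a full-memory dump of the current process to the given path.
    public static void Write(string path)
    {
        using (var process = Process.GetCurrentProcess())
        using (var file = new FileStream(path, FileMode.Create))
        {
            if (!MiniDumpWriteDump(process.Handle, (uint)process.Id,
                    file.SafeFileHandle, MiniDumpType.WithFullMemory,
                    IntPtr.Zero, IntPtr.Zero, IntPtr.Zero))
                throw new InvalidOperationException("MiniDumpWriteDump failed");
        }
    }
}
```

Being a dbghelp.dll P/Invoke, this is of course Windows-only.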




Now that we have the code to write a dump, let's create an error situation to trigger the memory dump creation. Let's just write a small program that tries to perform a division by zero.
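A sketch of such a program (the `MiniDump.Write` call stands in for whatever dump-writing wrapper you use; the name is illustrative):

```csharp
using System;

class Program
{
    static void Main()
    {
        try
        {
            int zero = 0;
            int result = 10 / zero; // throws DivideByZeroException
            Console.WriteLine(result);
        }
        catch (Exception)
        {
            // On a catastrophic failure, capture the process state for
            // later post-mortem analysis, then let the program exit.
            MiniDump.Write(@"C:\dump.dmp");
        }
    }
}
```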

Running this program will throw an obvious DivideByZeroException, which will be caught by the try/catch block and handled by generating a memory dump.

Let's go through the process of opening this memory dump in Visual Studio 2010. When you open a dump file in VS, it presents a Summary page with some data about the system where the dump was generated, the loaded modules and the exception information.
Note that this summary has a panel on the right that allows you to start debugging. We’ll press the “Debug with Mixed” action to start debugging.


Starting to debug a memory dump will launch the debugger with the exception that triggered the dump, so you'll get the following exception dialog:


After you press the "Break" button, you'll be shown the source code as below. Note the tooltip with the watch expression "$exception" (you can use this expression in the Immediate or Watch window whenever you want to check the value of the thrown exception, assuming somewhere up the stack you are running code inside a catch block), where you can see the divide by zero exception that triggered the memory dump.

You can also see the call stack and navigate through it to the Main method where the exception was caught. All the local and global variables are available for inspection in the "Locals" and "Watch" tool windows. You can use this to help in your bug diagnosis.


The approach explained above automatically writes a memory dump when such an error occurs. However, there are times when you are required to do post-mortem debugging of applications that don't automatically create these memory dumps for you. In such situations you have to resort to other approaches to generating memory dumps.
One of the most common approaches is to use the windbg debugger. Let's see how we could have generated a similar memory dump for this application. If we comment out the try/catch block and leave only the division-by-zero code in our application, it will throw the exception and the application will immediately terminate. Using windbg to capture such an exception and create a memory dump is as simple as opening windbg and selecting the executable we'll be running:


This will start the application and break immediately (this behaviour is useful if we want to set any breakpoints in our code). We’ll order the application to continue by entering the command “g” (you could also select “Go” from the “Debug” menu). 


This will execute until the division by zero throws the exception below. When this happens, we’ll order windbg to produce our memory dump with the command ‘.dump /mf “C:\dump.dmp”’.


After this, your memory dump is produced in the indicated path and you can open it in Visual Studio as we've previously done.
Post-mortem debugging is a valuable resource for diagnosing some complex situations. This kind of debugging is now supported by Visual Studio, making it a lot easier, even though for trickier situations you'll still need to resort to the power of native debuggers and extensions like SOS.

9.16.2010

Code Access Security Cheat Sheet

Here’s one thing most developers like: Cheat Sheets!
I've made a simple cheat sheet about .NET Code Access Security, more specifically about the declarative and imperative ways of dealing with permissions.
Bear in mind that this cheat sheet doesn't cover any of the new features brought by the .NET 4.0 security model.
This cheat sheet may be handy for someone who doesn't use these features often and tends to forget how to use them, or for someone studying for the 70-536 exam.
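As a quick refresher on the two styles the cheat sheet covers, here is a minimal sketch using the pre-.NET 4.0 CAS model (the method names and paths are arbitrary examples):

```csharp
using System.Security.Permissions;

class CasExamples
{
    // Declarative: the permission demand is expressed as an attribute,
    // and the CLR checks it when the method is invoked.
    [FileIOPermission(SecurityAction.Demand, Read = @"C:\logs")]
    public static void ReadLogsDeclarative()
    {
        // ... code that reads from C:\logs ...
    }

    // Imperative: the same demand built in code, which lets you use
    // runtime values (like a path supplied by the user).
    public static void ReadLogsImperative(string path)
    {
        var permission = new FileIOPermission(FileIOPermissionAccess.Read, path);
        permission.Demand(); // throws SecurityException if a caller lacks the permission
        // ... code that reads from path ...
    }
}
```

The declarative form is fixed at compile time, while the imperative form is the one to reach for when the resource isn't known until runtime.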

9.07.2010

Viewing __ComObject with a Dynamic View

If you ever had to debug a .NET application that uses COM objects, you've probably added such an object to the Watch window or tried to inspect it in the Immediate window, just to find out it was of type __ComObject, which wouldn't give you any hint about its members.
Visual Studio 2010 gives you a solution for this problem: a feature called Dynamic View. As long as your project is using the .NET 4.0 framework, you can now inspect these elements.

Let's take a look at this code:
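The code is along these lines (a sketch that assumes Visual Studio 2010 is installed, hence the "VisualStudio.DTE.10.0" ProgID):

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Create a Visual Studio automation (DTE) object through COM.
        // From managed code this comes back as a __ComObject.
        Type dteType = Type.GetTypeFromProgID("VisualStudio.DTE.10.0");
        object vsObj = Activator.CreateInstance(dteType);

        // Break here so we can inspect vsObj in the debugger.
        Debugger.Break();
    }
}
```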



This code is creating a Visual Studio instance and instructing the debugger to break execution right after the creation so that we can inspect the __ComObject.

There are two ways of doing this:
  • adding "vsObj" to the watch window and expanding "Dynamic View" node 
  • adding "vsObj, dynamic" to the watch window


Also note that if your object contains other __ComObject instances, you can inspect them in the same way, as shown here with the "AddIns" property.


This "Dynamic View" feature was developed mainly to inspect dynamic objects (another new feature from .NET 4.0). That's where its name comes from.

9.06.2010

RIP : Object Test Bench

After spending some time looking for Object Test Bench in VS2010 menus, I've decided to do a quick search and found this VS2010 RIP list where Habib supplies a partial list of the features removed in VS2010. I was surprised to find out they have removed Object Test Bench!
It's not as if I really needed it; there's nothing I would do with it that I can't do with the Immediate window, but I did use it from time to time for some lightweight testing. I guess I won't be using it anymore, will I?
But maybe you'll be more surprised to find out they also removed intellisense for C++/CLI!!!

9.03.2010

Refactoring with NDepend

I’ve recently been granted an NDepend Professional License, so I’ve decided to write a review about it. I took this opportunity to refactor one application I’ve been working on, which suddenly grew out of control and needed a major refactor.

Note: This article is mainly focused on NDepend, a tool that supplies great code metrics in a configurable way through its Code Query Language. If you don’t know this tool, jump to http://www.ndepend.com/ and find out more about it.

This application had grown out of control pretty quickly, and one of the things I wanted to do in this refactoring was to split the assembly into two or three logical assemblies. For this, I decided to use NDepend's dependency matrix and dependency graph to evaluate the best place to split the assemblies, validating the number of connections I would have between the resulting assemblies. For a handful of classes I decided some refactoring was needed in order to reduce the number of dependencies between assemblies. Refactoring these classes allowed me to reduce the coupling between the assemblies while increasing their relational cohesion (the relations between the assembly types), which was lower than it should be. There is a metric for this relational cohesion, which I used to evaluate how far I should go.
Dependency graph of all the assemblies involved (I've blurred the names of my assemblies for obvious reasons)

Further inspection of the dependency matrix led me to the conclusion that my model-view-controller separation was being violated in a few places, so I added a TODO comment in those places. I'll fix that after this major refactoring is done, to avoid doing too many refactorings at once (remember that one of the rules of thumb about refactoring is to do it in baby steps, to avoid introducing errors into a previously working codebase).

Next it was time for some fine-grained refactorings. This is where the NDepend code metrics were most valuable. I don’t want to go into too much detail here, so I’ll just pick a few of the metrics I used and talk about them.

The first metrics I decided to look at were the typical ones targeting code maintainability, such as "Cyclomatic Complexity" and "Number of lines of code". No big surprises here. I found two methods that tend to show up in the queries, but these contain complex algorithm implementations which aren't easy to change in a way that would improve these statistics. These are important metrics, so I tend to check them first.

Onward to some other interesting metrics...
Some Code Metrics
Efferent coupling
This metric told me something I had forgotten: I suck at designing user interfaces! I tend to cram a window full of controls, instead of creating separate user controls where appropriate. That generally produces a class (the window) with too many responsibilities and too much code (event handling/wiring). Also, the automatically generated code-behind for these classes tends to show up repeatedly across several code metrics.

Dead code metrics
These allow you to clean up dead code that's rotting right in the middle of your code base, polluting the surrounding code and increasing complexity and maintenance costs. Beware, however, that NDepend only marks *potentially* dead code. Say you are overriding a method that's only called from a base class in a referenced assembly: since you have no calls to that method, and the source of the base class where the method is called is absent from the analysis, NDepend will conclude that the method is potentially dead code. That doesn't mean, however, that you are free to delete it. Also remember that a method can be called through reflection (which pretty much makes it impossible for a static analysis tool to say your method is definitely dead code), so think twice before deleting a method. But think! If it's dead code, it's got to go!

Naming policy metrics
Here I started by changing some of the rules to fit my needs. For example, the naming convention constraints define that instance fields should be prefixed by "m_", while I tend to use "_". These are the kind of metrics that are useful for doing some code cleanup and making sure you keep a consistent code style. NDepend's pre-defined naming conventions may need to be customized to suit the code style you and your co-workers opted for. As good as these naming convention rules can be, they're still far from what you can achieve with other tools suited specifically for this. If you're interested in enforcing a code style across your projects I would suggest opting for alternatives like StyleCop; however, for lightweight naming conventions NDepend might just suit your needs.

Note: Don’t get me wrong, I’m not saying that my convention is better or worse than the one NDepend uses. When it comes to conventions I believe that what’s most important is to have one convention! Whether the convention specifies that instance fields are prefixed by “m_” or “_” or something else is another completely different discussion…

A Nice Feature not to be overlooked
I know the hassle of introducing this kind of tool into a pre-existing large code base. You just get swamped in warnings, which in turn leads to ignoring any warning that may arise in the old code or in the code you develop from now on. Once again NDepend metrics come to the rescue: meet "CodeQuality From Now!". This is a small group of metrics that applies only to the new code you create, so this is the one you should pay attention to if you don't want to fix all your code immediately. Improving the code over time is most of the time the only possible decision (we can't stop a project for a week and just do cleanup and refactorings, right?), so while you're in that transition period make sure you keep a close eye on those "CodeQuality From Now!" metrics, because you don't want to introduce further complexity.

Visual NDepend vs NDepend Visual Studio addin
I also would like to make a brief comment about Visual Studio integration and how I tend to use NDepend.
While having NDepend embedded in Visual Studio is fantastic, I found that I regularly go back to Visual NDepend whenever I want to investigate further. I would say that I tend to use it inside Visual Studio whenever I want to do some quick inspection or consider the impacts of a possible refactoring. For anything more complex I tend to go back to Visual NDepend, mainly because having an entire window focused on the issue I'm tracking is better than having a small docked panel in Visual Studio (wish I had a dual monitor for Visual Studio!).

Feature requests
I do have a feature request. Just one! I would like to be able to exclude types/members/etc. from analysis, and I would like to do it in the NDepend project properties area. A treeview with checkboxes for all the members we want to include/exclude from the process would be great. I'm thinking of the treeview used in some obfuscator tools (e.g. Xenocode Obfuscator) that contain the same kind of tree to select which types/members should be obfuscated.
This is useful for automatically generated code. I know the existing CQL rules can be edited in a way that excludes auto-generated code, but it's a bit of a hassle to handle all those CQL queries. Another alternative would be to "pollute" my code with some NDepend attributes marking the code as automatically generated, but I don't really want to redistribute my code along with the NDepend redistributable library.
Also, my project has a reference that is set to embed interop types in the assembly (a new .NET feature), and currently there is no way (at least that I know of) to exclude these types from analysis.

Conclusions
NDepend will:
- make you a better developer
- enforce some best practices we tend to forget
- improve your code style and, most importantly, code maintainability

There are many other interesting NDepend metrics and features besides the ones I mentioned. I picked just a bunch of the features I like and use; I could have picked others, but I think this is already too long. Maybe some other time!
Also note that NDepend is useful as an everyday tool, not just for when you're doing refactorings as I've exemplified.

If you think I’ve missed something important go ahead and throw in a comment.

8.26.2010

WSDL Flattener

If you ever developed a web service in WCF, you probably already know that WCF automatically generates WSDL documents split across several files according to their namespaces, using imports to link the files. This is all very nice and standards-compliant, as it should be; the problem is that there are still many tools out there that don't support this. If you ever had to deal with this issue before, you probably used Christian Weyer's blog post as a solution to make WCF output single-file WSDL documents.

A few days ago I had a similar problem. I didn't want to change the way WCF produces WSDL, but I did want a single-file WSDL for documentation purposes. I searched for a tool to do this but didn't find any, so I did what a good developer should do: I developed a small tool for it. I'm also making it available under an open source license, so if you ever need it you can find it here: http://wsdlflattener.codeplex.com/
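This is not the actual code of the tool, but the core idea can be sketched with the System.Web.Services.Description API from the full .NET Framework: read the root WSDL and merge every wsdl:import into it. (A real flattener also has to inline the XSD imports inside the &lt;types&gt; section and resolve relative import locations; those are left out of this sketch.)

```csharp
using System.Web.Services.Description;
using System.Xml;

class WsdlFlattener
{
    // Reads a WSDL document and merges every wsdl:import into the root,
    // producing a single self-contained ServiceDescription.
    public static ServiceDescription Flatten(string location)
    {
        ServiceDescription root;
        using (var reader = XmlReader.Create(location))
            root = ServiceDescription.Read(reader);

        while (root.Imports.Count > 0)
        {
            Import import = root.Imports[0];
            root.Imports.Remove(import);

            ServiceDescription imported;
            using (var reader = XmlReader.Create(import.Location))
                imported = ServiceDescription.Read(reader);

            // Merge the imported definitions into the root document.
            foreach (Message m in imported.Messages) root.Messages.Add(m);
            foreach (PortType p in imported.PortTypes) root.PortTypes.Add(p);
            foreach (Binding b in imported.Bindings) root.Bindings.Add(b);
            foreach (Service s in imported.Services) root.Services.Add(s);

            // Imports of imports go back on the work list.
            foreach (Import i in imported.Imports) root.Imports.Add(i);
        }
        return root;
    }
}
```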