Being a "DevOp"

I've read an interesting post today about a current trend in the software development industry: http://jeffknupp.com/blog/2014/04/15/how-devops-is-killing-the-developer/.
The post rants about the "full-stack" developer: the "one-man band" that is responsible for pretty much everything, from writing code and managing databases to being a system administrator.
Raise your hand if you ever felt like a "DevOp" :) I know I have!


Transforming web.config in TFS builds

XDT transforms (mostly known as web.config transformations) are useful to transform your web.config at deploy time (for more details about web.config transformations you can take a peek at http://msdn.microsoft.com/en-us/library/dd465326.aspx). Say you're publishing a web application in the "Release" project configuration. If your application contains a "web.Release.config" file, that file holds the transformation to be applied over the web.config. However, if you are using TFS builds to create the deployable outputs, you might find yourself in unexplored territory, because your XDTs will not be run as they are when you explicitly publish from within Visual Studio.
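As a quick refresher, here's what a minimal web.Release.config transform looks like (this is the transform Visual Studio generates by default: the XDT namespace drives the transformation, and this particular rule strips the debug attribute on release builds):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <!-- Remove debug="true" from <compilation> when publishing in Release -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```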
Microsoft might have inadvertently shed some light on this subject, as they have recently made available a prerelease of an API to perform these XDT transformations, available through NuGet here: https://nuget.org/packages/Microsoft.Web.Xdt.
Knowing this, I've decided to explore this new API and put it to good use. I've created a CodeActivity that uses this API to integrate the XDT transformations into my build process. An alternative (which I was previously using) is to rely on an external tool to do the transformations for you and invoke it from within your build process (see the Config Transformation Tool here: http://ctt.codeplex.com).

So here’s a code activity to do just that: run XDT on your web.config depending on the project configuration being built:
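A sketch of what such an activity can look like, built on the Microsoft.Web.XmlTransform namespace from the prerelease package mentioned above. The argument names here are my own, and you'll want to wire them up to your build process template; treat this as the idea, not the exact activity:

```csharp
using System;
using System.Activities;
using Microsoft.Web.XmlTransform;

// Build activity sketch: applies a transform file (e.g. web.Release.config)
// over the source web.config produced by the build.
public sealed class TransformXml : CodeActivity
{
    // Full path to the web.config to transform
    public InArgument<string> SourceFile { get; set; }

    // Full path to the transform file, e.g. web.Release.config
    public InArgument<string> TransformFile { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        string source = SourceFile.Get(context);
        string transformFile = TransformFile.Get(context);

        using (var document = new XmlTransformableDocument())
        using (var transformation = new XmlTransformation(transformFile))
        {
            document.PreserveWhitespace = true;
            document.Load(source);

            // Apply returns false if the transformation failed
            if (!transformation.Apply(document))
                throw new InvalidOperationException(
                    "XDT transformation failed for " + source);

            document.Save(source);
        }
    }
}
```

In the build template, the transform file path can be composed from the project configuration being built (web.$(Configuration).config), which gives you the same per-configuration behaviour as publishing from Visual Studio.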


Premature optimization

First of all, happy New Year!
Now, on the first post of the year, I would like to quote the great Donald Knuth: "We should forget about small efficiencies, say, about 97% of the time: Premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified."

I've been advocating this for a long time, but I keep stumbling upon developers who disregard this rule of thumb. When faced with performance issues, most tend to use an empirical approach: read the source code and attempt to make informed guesses about what the critical code might be. Most forget about code profilers, which can accurately identify the critical code. I believe this is not only their fault, but also a fault of the education system, which has failed to emphasize the relevance of such tools.

Here's another quote (from Continuous Delivery): "Don't guess; measure". My experience tells me that, most of the time, our informed guesses do not point at the real culprits of our performance issues. So, the best approach is to clearly identify the culprits and deal with them. What's the point in spending time solving an issue when there's another issue that by itself may represent a 50% improvement? We rarely have the time to polish our code until we deem it as good and fast as we'd like, so in such situations a programmer should be pragmatic (which I believe to be one of the most relevant qualities in a programmer; by the way, if you haven't read The Pragmatic Programmer yet, it's time to start thinking about it).
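To make "don't guess; measure" concrete, here's a toy example of my own (not from the book): before rewriting anything, time the competing implementations and let the numbers decide.

```csharp
using System;
using System.Diagnostics;
using System.Text;

public static class Program
{
    // Times a piece of code with Stopwatch, the cheapest form of "measure"
    public static TimeSpan Measure(Action action)
    {
        var stopwatch = Stopwatch.StartNew();
        action();
        stopwatch.Stop();
        return stopwatch.Elapsed;
    }

    public static void Main()
    {
        // Two candidate implementations of the same task: compare, don't guess
        TimeSpan concat = Measure(() =>
        {
            var s = "";
            for (int i = 0; i < 10000; i++) s += i;
        });
        TimeSpan builder = Measure(() =>
        {
            var sb = new StringBuilder();
            for (int i = 0; i < 10000; i++) sb.Append(i);
        });

        Console.WriteLine("string concat:  {0} ms", concat.TotalMilliseconds);
        Console.WriteLine("StringBuilder:  {0} ms", builder.TotalMilliseconds);
    }
}
```

A profiler does the same thing at the scale of a whole application: it tells you where the time actually goes, so you only optimize the critical 3%.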

Happy coding in 2013!


Webinar on Software Best Practices and ROI

I've recently seen a webinar that I would like to share with you. It concerns software best practices and their impact on companies, mainly focusing on these practices' ROI. If you're trying to understand whether best practices really work, or struggling to convince someone that they do, then this presentation just might be what you're looking for. I would like to highlight not only the content of the presentation, but also the Q&A session at the end of the talk.
The presenter is Steve McConnell, author of several renowned books. I've recently read his "Software Estimation: Demystifying the Black Art" and strongly recommend it!

I've asked for permission to publish the link for this webinar and Steve has given his blessing, so here's the link to a recording of Steve's talk.


Customizing your application resources

Picture this: you have an ASP.NET application in which you support multiple languages through the use of resources (global and local resources) and depend on the .NET framework to do all the hard work of loading the correct resources depending on the current UI culture. All is well until you need to customize a handful of resources (per user, per customer, or any other criteria you may think of).
Anyone with enough knowledge of the .NET framework will tell you right away that you can create your own resource provider and take control of the aforementioned mechanism. You can find several articles online, including MSDN articles, with a detailed explanation of how to provide your own resource provider (here's an interesting article about it).
There are several online resources supplying examples of resource providers that load resources from the database and other similar approaches. However, in this specific situation we would like to preserve the default provider behaviour and override it only when the target resource has been customized. It seems pretty easy, right? We override the default provider, check whether the target resource has been customized; if so, we return the new value, otherwise we call the base class implementation, right?
Wrong! You can't extend it, because the GlobalResXResourceProvider class (the default provider responsible for resolving global resources) is internal, the LocalResXResourceProvider class (the default provider responsible for resolving local resources) is internal, and the ResXResourceProviderFactory class (the provider factory responsible for creating instances of the global and local resource providers) is... guess what... internal! So, it seems Microsoft did not want us to extend these classes...
Right now we could opt to rewrite these providers ourselves OR we could avoid that hard work and create our resource providers as proxies/surrogates (see the design pattern here) that end up invoking the default providers. The only question is how to initialize these default providers if they are internal... It's not pretty, but I can only think of reflection. In this case, instead of accessing both default providers through reflection, I chose to minimize the points of failure by creating only the default resource provider factory through reflection and calling its public interface to retrieve the default provider instances.
So here's the how our brand new resource provider factory looks like:

Plain and simple: just create our brand new customizable resource providers and pass them a "fallback" provider, which will be used to resolve any resource that hasn't been customized. Our new resource providers just have to determine whether they should resolve the resource themselves or use the "fallback" provider (which will be the default provider). Something like this:
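A sketch of the proxy provider (TryGetCustomizedValue is a placeholder of my own for whatever customization lookup you implement, per user, per customer or otherwise):

```csharp
using System.Globalization;
using System.Resources;
using System.Web.Compilation;

// Proxy/surrogate provider: returns a customized value when one exists,
// otherwise delegates to the default ("fallback") provider.
public class CustomizableResourceProvider : IResourceProvider
{
    private readonly IResourceProvider _fallback;

    public CustomizableResourceProvider(IResourceProvider fallback)
    {
        _fallback = fallback;
    }

    public object GetObject(string resourceKey, CultureInfo culture)
    {
        object customized;
        if (TryGetCustomizedValue(resourceKey, culture, out customized))
            return customized;

        // Not customized: preserve the default provider behaviour
        return _fallback.GetObject(resourceKey, culture);
    }

    public IResourceReader ResourceReader
    {
        get { return _fallback.ResourceReader; }
    }

    private bool TryGetCustomizedValue(string resourceKey, CultureInfo culture, out object value)
    {
        // Placeholder: replace with your own customization lookup
        // (database table, per-user store, ...)
        value = null;
        return false;
    }
}
```

The factory is then registered in web.config via the resourceProviderFactoryType attribute on the globalization element, as with any custom resource provider factory.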

Nice and easy! Accessing internal stuff in the framework is obviously something to be avoided, but I think we're pretty safe here. Resource providers have been around for a while and I don't expect Microsoft to remove these classes any time soon.


Bug Hunting with NDepend

Here's another post about NDepend (one of my favorite tools of the trade)! I've recently installed the new NDepend v4 and here's how I've used it for the first time.
I was going through the logs of the ASP.NET application I'm developing and found reasons to believe that somewhere in the application an object was being written to the ViewState that wouldn't serialize as expected, throwing an exception.
Now, how do we find and fix this bug easily? Surely we could search the solution for all uses of the ViewState and read that code, but that would be too much code to read, so I decided to use NDepend to aid me.
So here's what I thought:
- most of the time, when we use the ViewState in this application, it's through a property on a page or user control;
- if it's a serialization error, we can ignore properties whose type is a primitive type, as we're likely dealing with a class defined in our application;

Here's how I've done it:
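In CQLinq terms, the idea above translates to something like this (a sketch of the query, not necessarily the exact one I ran):

```csharp
// CQLinq (NDepend v4): property getters on pages/user controls whose
// return type is not a system type — candidates for non-serializable
// objects being stored in the ViewState
from m in Application.Methods
where m.IsPropertyGetter
   && (m.ParentType.DeriveFrom("System.Web.UI.Page")
       || m.ParentType.DeriveFrom("System.Web.UI.UserControl"))
   && m.ReturnType != null
   && !m.ReturnType.FullName.StartsWith("System.")
select new { m, m.ReturnType }
```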

From here I only had 3 source code locations to investigate! Much easier than going through the entire application, isn't it?

The ability to query your code (and now with a syntax much like the LINQ queries I write day after day) is something very powerful, and every once in a while we might find a new way to use this power!



NCrunch

I think I can say that I know most of the relevant development tools in the .NET world, so it's obviously not every day that I stumble upon a new tool worthy of notice. But today (or rather tonight) was one of those days (or nights). Even more relevant is the fact that this one is free (at least for now, while it's considered a beta version).
I'm talking about NCrunch (http://www.ncrunch.net/). It's a tool targeted at developers using Visual Studio and doing TDD. It aims to decrease the amount of time we lose compiling and running our unit tests by not requiring us to do it at all! That is, it compiles and runs our tests in the background automatically, even before we save the file with the unit tests' source code!
I'm not going to write a full review, but I dare you to spend a handful of minutes watching the video on the NCrunch homepage to see the awesome features available.

Happy coding and testing!