Linking .NET Assemblies with IoC

Suppose you like Inversion of Control (IoC). I do. And suppose you like to encapsulate shared assemblies so the internal details are just that– internal. I like that, too. But those two likes can conflict when a consuming application needs to register all the dependencies in an IoC container: those internal types are going to be invisible to the container.

Well, I want what I want when I want it. So here are my demands…

  1. An application– MyApplication– using an IoC container to fill dependencies via constructor-based injection
  2. A shared assembly– SharedAssembly– filling its own dependencies however it well pleases (it’s none of my business)
  3. The shared assembly exposing just a few interfaces– not classes– to work with, save one factory. No implementations; no visible internals/guts.

So NONE of this nonsense:

namespace MyApplication
...
   var myClass = new SharedAssembly.MyClass();

Instead, MyApplication classes will depend on only the public interfaces defined in SharedAssembly.

My friend Drew once created a policy where each shared assembly came with a basic factory to create instances of public interfaces. I liked how that gave consumers the ability to work with just the interfaces and thereby respected the boundaries of the assemblies. I’ll use that same idea here and create a factory in SharedAssembly to return instances of those public interfaces.

The IoC container in MyApplication will use the factory from SharedAssembly to obtain instantiated implementations of SharedAssembly’s public interfaces. The IoC container will then inject those implementations into the dependent classes in MyApplication.

The Code

In SharedAssembly, I’ll create a basic factory that accepts a type and returns an object…

public static class Factory
{
    public static Object Resolve( Type type )
    {
        return type == typeof( IBusinessEntity ) ? new BusinessEntity() : null;
    }
}
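
Consuming it is a one-liner. (Shown only for illustration– in this design, MyApplication won’t call the factory directly; the IoC container will, as you’ll see below.)

// Illustration only: the container, not application code, will make this call
var entity = (IBusinessEntity)Factory.Resolve( typeof( IBusinessEntity ) );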

I could design it to be more “manual” (e.g., CreateBusinessEntity() ). Also, here is where SharedAssembly could use its own IoC container of whatever type it wants, or none at all. This factory is a static class. If an instance would be better, do that. Let the details of the situation be your guide.

Now, over in MyApplication, I’ll reference SharedAssembly. And, in the IoC container initialization, I’ll register IBusinessEntity with a call to Factory.Resolve( typeof( IBusinessEntity ) ). For now, I’m going to use StructureMap 3.1.0.133, mostly because it’s an awesome IoC container, but don’t get bogged down in religious or political battles here. Use whichever one you want; just set it up properly so you can easily switch it out. If the container is getting passed around or you see more than just a couple of references to it in an application, you’ve already been hit. Get help now.
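
Spelled out for that single interface, the registration boils down to one line (a sketch using the same StructureMap For/Use overload as the convention code below, placed inside the container’s configuration lambda– the config parameter you’ll see in the IoC class)…

// One-off version: wire IBusinessEntity directly to the SharedAssembly factory
config.For( typeof( IBusinessEntity ) ).Use( "SharedAssembly factory", ctx => Factory.Resolve( typeof( IBusinessEntity ) ) );

That doesn’t scale past a handful of interfaces, though, so I’ll let StructureMap do the work.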

Taking some pointers from code I’ve read, I like to bury all the IoC registration in a folder/namespace called DependencyResolution. In there, I have this class…

public class IoC
{
    public static IContainer Initialize()
    {
        var container = new Container( config =>
        {
            config.AddRegistry<DefaultRegistry>();
            config.AddRegistry<SharedAssemblyRegistry>();
        } );

        return container;
    }
}

StructureMap allows us to define multiple registries, so I’ve defined one for MyApplication– DefaultRegistry– and one for SharedAssembly– SharedAssemblyRegistry…

public class SharedAssemblyRegistry : Registry
{
    public SharedAssemblyRegistry()
    {
        Scan(scanner =>
        {
            scanner.Assembly( "SharedAssembly" );
            scanner.With( new CustomConvention() );
        });
    }
}

Additionally, StructureMap allows us to use custom-defined conventions when scanning an assembly. During a scan, the convention is called for each public type discovered (internals are skipped). It accepts the Type and the Registry and decides how to register that type in that registry. Here, I’ve defined CustomConvention, which registers every discovered type except Factory against the Factory.Resolve() method…

public class CustomConvention : IRegistrationConvention
{
    public void Process( Type type, Registry registry )
    {
        if ( type != typeof( Factory ) )
            registry.For( type ).Use( "SharedAssembly types", ctx => Factory.Resolve( type ) );
    }
}

So, when the IoC container in MyApplication attempts to resolve some class called MyBusinessObject, which declares a dependency on IBusinessEntity, the IoC container itself will first call Factory.Resolve(), and then use the returned object to construct MyBusinessObject.
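
For illustration, that hypothetical MyBusinessObject is just an ordinary constructor-injected class…

namespace MyApplication
{
    public class MyBusinessObject
    {
        private readonly IBusinessEntity _businessEntity;

        // StructureMap sees this parameter, finds the registration created by
        // CustomConvention, and calls Factory.Resolve() to supply the instance
        public MyBusinessObject( IBusinessEntity businessEntity )
        {
            _businessEntity = businessEntity;
        }
    }
}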

 

The Theory

Don’t let the auto-scanning coolness of StructureMap distract you– the solution was really the factory. This hinge point gives SharedAssembly control of instantiating its internals, which means it can use dependency injection if desired, or not. SharedAssembly can avoid using an IoC container today and easily add one when complexity warrants it. SharedAssembly can declare an ILog or IRepo used by internal classes and then provide a consumer (e.g., MyApplication) with the ability to submit an implementation (e.g., Factory.Register(ILog, MyAppLog) ). And if SharedAssembly depends on other assemblies that have their own factories (do you have a policy?), here is where they can be linked, too. The point is, you can do whatever you want! So start dreaming and open your mind to new possibilities.
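
To sketch that last idea (the Func&lt;Object&gt; signature and dictionary-based storage here are my assumptions, not a prescription)…

public static class Factory
{
    private static readonly Dictionary<Type, Func<Object>> Registrations = new Dictionary<Type, Func<Object>>();

    // A consumer (e.g., MyApplication) submits an implementation for an
    // interface that SharedAssembly's internals depend on (e.g., ILog)
    public static void Register( Type type, Func<Object> creator )
    {
        Registrations[ type ] = creator;
    }

    public static Object Resolve( Type type )
    {
        if ( Registrations.ContainsKey( type ) )
            return Registrations[ type ]();

        return type == typeof( IBusinessEntity ) ? new BusinessEntity() : null;
    }
}

MyApplication could then call something like Factory.Register( typeof( ILog ), () => new MyAppLog() ) at startup.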

"The power is yours!" -Captain Planet

 

Improved Object Oriented Design – Isolate and Model Business Processes

Early in my career, I focused on modeling the business nouns– Customer, Account, Product, etc. My designs sorely underestimated the importance of the business processes– Place Order, Open Account, Bill Customer.

My first official OO class went something like this: read through the requirements, circle the nouns, and underline the verbs. The nouns are the objects, and the verbs are the methods. So easy, and so wrong.

I actually heard Jeffrey Palermo criticize this same oversimplification at a user group presentation. I wanted to jump up and shout, “Yes!!! We had the same class!” I think many of us took that same class and received this false gospel.

This shortsighted yet traditional OO design method focuses on a deep analysis of how all the data in the system is related, which is somewhat wasted effort as everything will eventually connect to everything else without a context to establish boundaries of relevance. The resulting model is large and elaborate, chock-full of has-a’s and is-a’s. My biggest OO mistake was designing such a structure, putting in a lot of automatic behavior, and sharing the same model from the GUI to the data access! Yikes. It was and is a brittle, monolithic structure.

Crystallizing all the relationships and behaviors into one large model says a lot about the business, almost always more than intended or realized. It introduces implications that business analysts would say are either inaccurate or unnecessary.

When the product owners request a change that fundamentally shifts some of the relationships– even unimportant relationships they didn’t even know were modeled– it can easily fracture the model in a way difficult to resolve…or explain to the product owners.

I want less implication, fewer connections, less obstruction to change.

I’m a big fan of fixing a problem at its root cause. I hate downstream patches and hacks, the shortcuts; they always lead to further headaches. I’ve grumbled to a team member or two, “In making the harder fix easier, we made all the easy things hard.”

The root problem here is that the model encompasses far too much. It establishes too many relationships and connects too much behavior. Everything was tied together…”for completeness.” In my mind, the tendency for me to do this is closely related to the tendency to gold-plate software and to build frameworks. It’s a good ol’ fashioned power trip. I think, “I’ll build this super-powerful framework or API that can do everything!” This can sound like, “Then, whenever I need to do anything for an [x], all I need to do is get an instance and it’s all right there!” The power! The POWER!!! The naivety.

We don’t need one system-wide model, one framework, one all-seeing construct to rule them all, all the system behavior, that is. We don’t even want one model to encompass all the behavior of a customer, or an employee, or an order. We only want enough for what’s happening at the moment. Notice that I’ve transitioned into an action perspective: system behavior; what’s happening; what’s changing. First, define the action.

Take, for example, the process of billing a customer. Traditionally, it sounds like a method begging to be put in a Customer class: Customer.Bill( order ). The first problem I’ve seen here is that I don’t need a whole customer object to bill them– probably just a customer ID and an order ID. Everything else that went into creating the customer class was wasted effort. The second problem is that all the other customer capabilities are noise. Customer.UpdateAddress(), .PlaceOrder(), .IsPreferred(), etc. The thing is, nobody needs all that at once.

Instead, imagine an interface (I)BillCustomer that contains an Execute() method. Its responsibility becomes immediately clear. A potential consumer knows exactly what to expect. The BillCustomer implementation can have very specific dependencies, namely, the actions or steps in the customer billing process. Drill downward in this manner until you hit infrastructure (database, I/O, external systems, etc.).
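
A sketch of that shape (ICreateInvoiceProcess is an invented example of one such step)…

public interface IBillCustomer
{
    void Execute( Int32 customerId, Int32 orderId );
}

public class BillCustomer : IBillCustomer
{
    // The dependencies are the steps of the billing process–
    // not a grab bag of unrelated customer behavior
    private readonly ICreateInvoiceProcess _createInvoice;

    public BillCustomer( ICreateInvoiceProcess createInvoice )
    {
        _createInvoice = createInvoice;
    }

    public void Execute( Int32 customerId, Int32 orderId )
    {
        _createInvoice.Execute( customerId, orderId );
        // ...other steps, drilling down until you hit infrastructure
    }
}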

There are many reasons I like such modeling. For one, it liberates grouping. Before, all customer-oriented processes gravitated toward the Customer class black hole. But now, I can group all those processes however and wherever I want. They can be in the same namespace, or different namespaces; the same assembly, or different assemblies; the same service, or different services.

But generally, I like that process-modeling is process-oriented. TDD and especially BDD have driven my development in such a way that I want to address fundamental system requirements, and those requirements always come down to doing something. Users are never content with a graph of their system, never impressed with elaborate UML diagrams. They expect the computer to do something— to act.

Processing is the fundamental purpose of a computer. I remember back in college a whole class chanting repeatedly with our professor, “Input, Output, Processing, Storage!” Data goes in, data comes out. Often, it’s stored. But the magic is the processing.

Focusing first on individual and independent processes and just the data needed to perform each one separately seems simplistic. I used to turn up my nose at such a simpleton’s idea. But this simplicity, this directness, is exactly what a business analyst expects to see conveyed during requirements gathering. Stick to this simplicity, and requirements will be more directly represented in the system.

Behavior should be a first class citizen in the system, and as such a behavior can be a class. Bill Customer can be a class. Place Order can be a class. Change Password, Register New User, Post Feedback– every process can be its own class. The advantage is that each process becomes independent of the rest of the system.

As I’ve traveled down this path, I find myself moving closer to some of the concepts in Domain Driven Design (DDD), Command Query Responsibility Segregation (CQRS), and especially Event Sourcing (ES). I like that ES designs for atomic events with specific handlers. I like that CQRS is quick to separate processing that changes the data from processing that queries for the data. The “bounded context” concept from DDD is, perhaps, the most important concept when developing a system of any size, largely because it represents isolation of business concepts. It’s all about isolation. In software development, isolation is paramount if a system is to be flexible, extensible, scalable, maintainable, etc. We see isolation again in the Single Responsibility Principle from the SOLID programming principles. That’s referring to object design, but notice how all of these isolate behavior. Some at a micro level (SRP), and others at a macro level (bounded context).

Used appropriately, these are all tools of isolation: methods, delegates, interfaces, classes, inheritance, namespaces, assemblies, services. Early on, I didn’t do a good job of isolating at all. Now, interfaces are small, specific, and numerous but organized. Delegates are welcome, not ostracized. I’ve changed my thinking a lot. Have I swung the pendulum too far? I suppose time will tell, but this is a far better position than where I was, and I’m excited for where I’ll go next.

ASP.NET MVC is the Frontier

A coworker emailed me a link to an “interesting article,” as he put it. Shaun Walker, creator of DotNetNuke, posted Microsoft Declares the Future of ASP.NET is Web API wherein he reports on Microsoft’s full backing behind Web API and how it’s here to replace ASP.NET MVC. He takes the opportunity to bluntly discredit the growth of ASP.NET MVC and generally devalues it. If I didn’t know better (and I guess I really don’t) I’d think he’s harboring some kind of animosity toward MVC. The post sounds like he believes ASP.NET MVC has been little more than a popular fad, that creating MVC was basically throwing a bone to the ROR camp (as if they’re unimportant), and that Microsoft over-hyped it which caused all the interest and growth in ASP.NET MVC. “Essentially by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it.” Have I misunderstood? If not, I respectfully and completely disagree.

The movement for MVC-ish web frameworks began, as Shaun acknowledges, apart from Microsoft. That was the beginning of what’s become a rapid feedback loop: 1. the developer community demanded an MVC-ish offering; 2. Microsoft built it. Wash, rinse, repeat for MVC 2 and 3. Microsoft didn’t manufacture some fad; it provided better tooling to developers whose practices were evolving toward better engineered software. In short, developers were advancing with or without Microsoft, and Microsoft decided it’d prefer “with.” So just to be clear, I’m saying that MVC was better than WebForms.

Having started a career in web development over 11 years ago, I have to say that WebForms was a complete wrong turn for the industry. The progression *should* have been (classic) ASP -> ASP.NET MVC -> Web API (and SPA). Hindsight is, of course, 20/20. WebForms should never have been in the sequence. It was a clear effort to give VB6 developers an on-ramp to the web. Unfortunately, it was also an effort to change ASP developers into VB6-in-the-browser developers. VB6 was a huge success (and I enjoyed it), so the idea of copying it was not without its merit, but it was executed with a fundamental flaw: it tried to hide the web. Putting html element events into server code is craziness. I trust we’ve all seen the absolutely ridiculous urls generated by WebForms. Forcing statefulness was a mistake. And what critique of WebForms would be complete without at least referring to the addling Page Life Cycle and the PostBack pit of despair. The effort to weld together and paint over the seams inherent and good in the client-server architecture of the web gave birth to a mess. It would have been good if it had never been born.

ASP.NET MVC was a return to and embracing of the real web. And all the real progress in the .Net web frameworks has been coming from MVC– regaining control of the html, extension-less urls, RESTful urls, separated routing, inversion of control, test-driven development, etc. No, ASP.NET MVC was no fad, no accident caused by marketing. ASP.NET MVC has been the next evolutionary step. And MVC development continues to be the area of innovation, even when it’s not about using M-V-C. What I’m continually excited about is the separation and “librarying” of web server responsibilities. In contrast to the WebForms Titanic, everything is becoming far more modularized, interfaced, testable and replaceable. IDependencyResolver? That’s ASP.NET MVC, and that’s improved software engineering– better craftsmanship.

But ASP.NET MVC isn’t the end solution to web development. It’s the frontier of web development. MVC is really about the community and its warp-speed trek toward improvement; its continuing mission to seek out new and better solutions, “to boldly go where no one has gone before.” And when “M-V-C” gets replaced by API or SPA or XYZ, it will be the MVC community that takes it there. The MVC community is, in some sense, the R&D folks of web development…except that at the same time we’re quickly shipping lots of production code.

Interestingly, some of the valuable work within the MVC crowd has been recognized and pulled backwards into WebForms (e.g., routing). But I still don’t want to develop there. Shaun mentioned, “Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline.” But it’s an afterthought, and it’s the inverse of the pit of success: a peak of success, where getting there requires much more effort and diligence, like climbing the icy slopes of Everest. Don’t misunderstand. I’m a firm believer in due diligence and discipline, but we should be making best practices easier.

And thanks to the MVC goodness engine (which is that rapid feedback loop), best practices are easier and even better tooling is coming. The frontier is smarter and varied clients. Web API and SPA are responses to this. 10 years ago, we couldn’t put all the processing on the client. The browser of 2002 just couldn’t handle the demands we place on it today (Gmail, Google Docs, etc.). A historical tradition of generating HTML on the server combined with limited browser performance encouraged us to put all the display logic on the server. From an SOA perspective, this is completely backwards. A VB6 programmer would have thought it certifiably insane to dynamically build out a form with embedded data at runtime on the server and then deliver that app to the client. But on the web that’s what we did. And it’s largely what we still do today, encouraged by tradition, the current tooling, and demos (don’t even get me started on the demos). But the sheer looseness and flexibility of MVC, the very quality that I believe is really responsible for its popularity, also enables and empowers developers to do things very differently, very easily. The idea, which is not new, not even close, is to build client applications out of HTML and JavaScript that run in the browsers and consume services offered by the server. This model kills the page life cycle completely. It even kills posting a form. Well-designed apps are binding form data into view models on the client, and then transferring DTO representations of that back and forth to the services. You know what that is exactly like? Client-server WinForms (i.e., modern VB6)! The bonus is the effortless deployment of the web.

So WebForms was a good idea, but completely misemployed. Instead of welding it all together, WebForms should have pulled it all apart and taken the responsibility for creating HTML and CSS with client JavaScript “code behind” files instead of server files. The server code is best handled by Web API and SPA. Get the display logic off the server. My current dream tooling for web development would be something similar to the WPF forms editor, except for HTML and CSS instead of XAML. No more dynamic HTML rendering. If we want data on the client, the client app gets it from a service– very SPA. Maybe WebForms will become just that.

I also have begun seeing the value in the REST movement to support this new world of client-agnostic services. (Again, from within the MVC camp and backported to WebForms.) Shaun agrees and states, “REST-based services which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques.” And we know what he means, that REST is newly popular, and that’s correct. Comically, however, REST is not new at all. In this context it’s as old as the web itself. SOAP is not older; it was built on top of HTTP. That we’re returning to the fundamentals of HTTP established some two decades ago is hilarious. Maybe it was the not-invented-here syndrome. Maybe we just never took the time to really learn HTTP because what little of it we used just worked. We certainly didn’t have the tooling because we didn’t realize we should be demanding it. Eventually we did demand it, and it’s finally upon us and will surely be quickly refined in the next few years, again by that MVC kick-started feedback loop. Tooling for REST on the Microsoft stack is taking its next step with Web API, continuing what finally began with MVC 1.

What’s funny to me is that Shaun freely admits that they built a version of DotNetNuke on MVC2, “because it had the most intuitive, light-weight REST implementation in the .NET stack.” Yes. Yes it does. In any case, Shaun is right to look with positive anticipation toward Web API and his work toward creating smart clients for DNN, but it’s coming largely because of the MVC community, not in spite of it. And is there going to be a reconciliation of web development on the .Net stack? One ring, as Shaun alludes to? I doubt it. If anything, web development is more capable than it ever was and I’m highly doubtful one technique is the best for all the situations. I think we’ll continue to see multiple technologies in tandem to choose from to meet the many and various needs. Still, to all WebForms developers, I give this always open invitation: abandon that Titanic and climb aboard with the MVC community and its spirit of innovation and rapid improvement!  🙂

Unit test patterns, Episode III : Avoid the combinatorial – cheat!

This is the third post in a series on unit test patterns.

The previous two patterns have been my go-to guys for almost everything I have to do, and they work well to pressure me to better understand the problem domain and to decompose complex processes into simple ones. However, there have been a few occasions when I cheated a bit and intentionally didn’t specify the behavior for every situation within a process.

For example, in the last post I listed the possible situations for a process that invites a user given an email address and an account number…

  • Given a valid email address and a valid account number
  • Given an invalid email address and a valid account number
  • Given a valid email address and an invalid account number
  • Given an invalid email address and an invalid account number

This is only validating 2 items and yet I have 4 possible combinations. If I had a large number of items, well, I trust you see the exponential problem (specifically, I think this one is 2^n, but that’s an oversimplification). Perhaps you also see the out: we don’t have to care about every situation. True, I want any validation routine to be user-friendly and return all the errors, not just the first one caught. But simply returning a list of individual errors can provide that without needing a dedicated error for each combination of wrongdoings. I know, duh. But the point is that I have a collection of independent scenarios whose combined situations I don’t feel a need to specify behavior for.

In the example, this means that if the invite user process, given an invalid email address, returns a collection of errors containing one for “invalid email address,” I am trusting it will still give me that error if the process was also given an invalid account number. Yeah, you see the discomfort I have. The worst situation would be to pass validation with just the right combination of invalid data, but to be fair, something would have to be messed up royally for that to happen. Theoretically, if I could write specs for each independent situation, I could then programmatically combine them for each possible combination– let the computer do the heavy lifting. Maybe I’ll do that someday, but 2^n can get big pretty fast (and I see that it’s sometimes more like 3^n).

I’m going to refactor the InviteUserProcess interface a bit from the last post. Instead of having each field accepted as a separate parameter, I’m combining them into one “InviteUserForm” object. I’ll be adding to this object later; otherwise, I probably wouldn’t bother with such a small set of data. More importantly, this will simplify the spec and the production code as well.

So instead of…

InviteUserProcess.Execute(String emailAddress, String accountNumber)

I’ll have…

InviteUserProcess.Execute(InviteUserForm form)
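
The form object itself is trivial– just the fields (a sketch; PhoneNumber is the optional field I’ll add later in this post)…

public class InviteUserForm
{
   public String EmailAddress { get; set; }
   public String AccountNumber { get; set; }
   public String PhoneNumber { get; set; } // optional; added later in this post
}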

Also, the InviteUserProcess previously took two dependencies– an IValidateEmailAddressProcess and an IValidateAccountProcess– but now I’ll replace them with a single IValidateInviteUserFormProcess that returns a list of errors– an empty list meaning no errors, i.e., the input data is valid. That fast-forwards me to looking at that new form validation process. Whereas the consumer of this process (InviteUserProcess) now has only two scenarios– a valid form or an invalid form– the validation process alone bears the full responsibility of dealing with all the different validation scenarios. To begin specifying the behavior of this process, I’ll define a “standard situation” where the form has all valid data…

namespace Validate_invite_user_form_process_specs
{
   public class ValidateFormProcessSpec
   {
      protected ValidateFormProcess ValidateFormProcess;
      protected InviteUserForm InviteUserForm;
      protected List<InviteUserError> ExpectedErrors;
      protected IEnumerable<InviteUserError> Errors;

      public virtual void SetupTestFixture()
      {
         InviteUserForm = new InviteUserForm
                              {
                                 EmailAddress = "test@example.com",
                                 AccountNumber = "1234567890"
                              };
         ValidateFormProcess = new ValidateFormProcess();
      }
   }
}

So I’m pulling out my inheritance pattern and setting up protected members (accessible from child classes), and instantiating an InviteUserForm that should pass validation. That form will serve as my “standard.” Notice that the SetupTestFixture() method is virtual (override-able).

Now I create a subclass for each “alternate situation” I want to spec. I’ll start by specifying that the all-valid situation returns an empty errors collection.

[TestFixture]
public class Given_a_valid_form : ValidateFormProcessSpec
{
   [TestFixtureSetUp]
   public override void SetupTestFixture()
   {
      base.SetupTestFixture();
      Errors = ValidateFormProcess.Execute( InviteUserForm );
   }

   [Test]
   public void The_errors_collection_should_be_empty()
   {
      CollectionAssert.IsEmpty( Errors );
   }
}

To create a spec for what should happen if the form contains an invalid email address, I tweak just that field…

[TestFixture]
public class Given_an_invalid_email_address : ValidateFormProcessSpec
{
   [TestFixtureSetUp]
   public override void SetupTestFixture()
   {
      base.SetupTestFixture();
      InviteUserForm.EmailAddress = "invalidEmail"; // tweaked the InviteUserForm for this particular situation: an invalid email address
      ExpectedErrors = new List<InviteUserError> { InviteUserError.InvalidEmailAddress };
      Errors = ValidateFormProcess.Execute( InviteUserForm );
   }

   [Test]
   public void The_errors_collection_should_contain_an_error_for_invalid_email()
   {
      CollectionAssert.AreEquivalent( ExpectedErrors, Errors ); // defined errors with an enum
   }
}

Because I know that the base form is fully valid, it’s very easy to set up a specification for just an invalid email. By using NUnit’s CollectionAssert.AreEquivalent, I can easily specify that no other errors should exist. I probably wouldn’t test for “Given_an_invalid_email_and_an_invalid_account_number”. I don’t like leaving situations unspec’ed, but as the number of input fields grows, you can see how this shortcut can save considerable time.

Suppose I add an optional phone number to the InviteUserForm. An empty phone number is technically invalid, but it’s still acceptable. So I’d test for the invalid case like above, but I may also test for the no-phone-number case just to be sure that I don’t get that invalid error.

[TestFixture]
public class Given_no_phone_number : ValidateFormProcessSpec
{
   [TestFixtureSetUp]
   public override void SetupTestFixture()
   {
      base.SetupTestFixture();
   }

   [Test]
   [TestCase( "" )]
   [TestCase( null )]
   public void The_errors_collection_should_not_contain_an_error_for_invalid_phone_number( String phoneNumber )
   {
      InviteUserForm.PhoneNumber = phoneNumber;
      Errors = ValidateFormProcess.Execute( InviteUserForm );
      CollectionAssert.DoesNotContain( Errors, InviteUserError.InvalidPhoneNumber );
   }
}
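
For context, a ValidateFormProcess satisfying these specs could look something like this sketch (the individual rules are placeholders, and InvalidAccountNumber is an enum member I’m assuming exists)…

public class ValidateFormProcess : IValidateInviteUserFormProcess
{
   public IEnumerable<InviteUserError> Execute( InviteUserForm form )
   {
      var errors = new List<InviteUserError>();

      // Each check is independent and appends its own error,
      // which is what makes the combinatorial shortcut tolerable
      if ( String.IsNullOrEmpty( form.EmailAddress ) || !form.EmailAddress.Contains( "@" ) )
         errors.Add( InviteUserError.InvalidEmailAddress );

      if ( String.IsNullOrEmpty( form.AccountNumber ) )
         errors.Add( InviteUserError.InvalidAccountNumber ); // assumed member

      // Optional field: only flag it when present and malformed
      if ( !String.IsNullOrEmpty( form.PhoneNumber ) && form.PhoneNumber.Length < 10 )
         errors.Add( InviteUserError.InvalidPhoneNumber );

      return errors;
   }
}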

That’s pretty much it. I wanted to explore how a process may have a set of data that combine to form too many scenarios to test under normal conditions. If this code sent people to the moon, I’d probably have a more comprehensive solution. But since I don’t, this works. I’ve used the validation requirements in this series’ example code, and I really have done validation this way, but I also do validation other ways I like better. Don’t use this as a template for all your validation needs.

To summarize this pattern:

  • One base class to define a “standard” situation of a process
  • Multiple sub-classes that alter the “standard” to fit individual, alternate situations and that specify behavior for those
  • Not specifying behavior for every combination of alternate situations if they’re not related

Unit test patterns, Episode II : Inheritance strikes back

This is the second post in a series on unit test patterns.

Referring to the base pattern from the previous post, I broke a test or specification down into three parts:

  • a process
  • a situation
  • and expectations

In this post, I expand the base pattern to specify behavior for different situations (the second bullet point) of the same process. There are many ways to specify behavior for different situations, including using frameworks like NSpec or SpecFlow that have more complex syntax. I don’t mind saying that I’m not there yet, but I’m eyeing them. So instead, I’ll do this with NUnit and…inheritance…

An aside: inheritance has had an interesting rise and fall in popularity. And I agree with the “market’s correction.” It seems that there was a point where OO was about objects and inheritance and little else. And when all us developers jumped out of the gate with nothing but “IS A” in our heads, well, when all you’ve got is a hammer… Anyway, good or bad, I’m using inheritance and so far it’s helping, not hurting.

Returning to the UserInvitationManager class of the last post, I now forget what that class does, exactly. What about user invitations does it manage? Blast my absent-mindedness! Well, I can look at the public methods and get a much better idea– InviteUser()– ah, ok; but I don’t like that I had to do that. Not at all. I want to rename that guy to something just in-my-face obvious: InviteUserProcess. Of course it’s interfaced, so there’s also an IInviteUserProcess now. You could also call the interface just IInviteUser. And I’ll just replace the redundancy of InviteUserProcess.InviteUser() with InviteUserProcess.Execute() or .Run(), .Invoke(), etc.

Now, InviteUserProcess is a very specific name. I’ve kind of locked myself in here. I’m limited to pretty much that one function and I don’t have the freedom to put a lot of other user invitation-related methods on it like I did with the UserInvitationManager…GREAT!!! In fact, I like that so well that I’ll go ahead and update the names of the dependencies as well, e.g., from IUserInvitationSender to ISendUserInvitationProcess.

Anyway, this renames the specification file and its namespace, but I think I can also eliminate that When_ namespace since the new name defines the “when”. Here’s the last post’s specification code with these changes…

namespace InviteUserProcessSpecs
{
   [TestFixture]
   public class Given_an_email_address_and_an_account_number
   {
      [Test]
      public void This_process_should_preregister_that_email_address_to_the_account()
      {
         // ...same arrangement and assertion as before

That defines only one situation: given an email address and an account number. The different situations that are the focus of this post are going to arise out of an increased awareness that just any ol’ string won’t do for either the email address or the account number. We need to validate those two strings. Now, if there’s anything I hate doing, it’s validation, but there’s no sense crying about it; it has to be done. So here are the situations…

  • Given a valid email address and a valid account number
  • Given an invalid email address and a valid account number
  • Given a valid email address and an invalid account number
  • Given an invalid email address and an invalid account number

In the first situation, my expectations/desired behavior is super simple– I want the process to 1) preregister that email address and 2) send the invitation. Oh, in the last version, we assumed the values were fine so we didn’t have anything to return. The real world is a messy place, ain’t it? Anyway, this adds one more expectation: 3) let the consumer of this process know that it worked. On the flip side, if either parameter is invalid, the consumer of this process is going to want to know something went wrong and what that was, and I certainly don’t want to preregister the email address or send the invitation in such a case. So, here I list these expected behaviors…

  • Given a valid email address and a valid account number
    – The process should preregister the email address with the account
    – The process should send an invitation to the email address
    – The process should return a success notice
  • Given an invalid email address and a valid account number
    – The process should not preregister anything
    – The process should not send an invitation
    – The process should return an invalid email notice
  • Given a valid email address and an invalid account number
    – The process should not preregister anything
    – The process should not send an invitation
    – The process should return an invalid account notice
  • Given an invalid email address and an invalid account number
    – The process should not preregister anything
    – The process should not send an invitation
    – The process should return an invalid email and account notice

So I haven’t written any code yet, but I’ve got an excellent idea of the problem domain I’m up against and how the system should behave. Writing these specifications for the behavior has drawn all that out.

The InviteUserProcess has gained a new responsibility: knowing how to recognize these two parameters as valid. I could load up this process with some regexes or something, but that would be harder to test. Because it will be easier to test, I’ll imagine there’s an IValidateEmailAddressProcess and an IValidateAccountProcess, and then stub them. That’s actually quite a bit easier. My specifications will not have to simultaneously deal with defining expectations for what to do because of an invalid email address and what makes for an invalid email address. Or what makes for an invalid account number. I’m dropping that extra responsibility (think the ‘S’ in SOLID). Note: “Because it will be easier to test, I’ll…” is the effect of the driving nature of TDD/BDD, and here I see how that drive pushes me toward better code. Remember, TDD is not just about having an automated test, it’s also about improving how you code (and other things, too).
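
Those two imagined validation processes are tiny– each just answers a question (signatures inferred from how I stub them below)…

public interface IValidateEmailAddressProcess
{
   Boolean Execute( String emailAddress );
}

public interface IValidateAccountProcess
{
   Boolean Execute( String accountNumber );
}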

Now that I know my situations and expected behaviors, I’ll break out my inheritance pattern for building the spec classes. The inheritance pattern creates a base class which defines all the common parties to greatly reduce redundant setup code for the derived classes. I’ll also bother giving this base class a name that will satisfy the “IS A” criteria, “InviteUserProcessSpec,” though that may be a bit superfluous.

public class InviteUserProcessSpec
{
   // System under test
   protected InviteUserProcess inviteUserProcess;

   // Dependencies
   protected IValidateEmailAddressProcess validateEmailProcess = MockRepository.GenerateMock<IValidateEmailAddressProcess>();
   protected IValidateAccountProcess validateAcctProcess = MockRepository.GenerateMock<IValidateAccountProcess>();
   protected IPreregisterEmailProcess preregisterEmailProcess = MockRepository.GenerateMock<IPreregisterEmailProcess>();
   protected ISendUserInvitationProcess sendInvitationProcess = MockRepository.GenerateMock<ISendUserInvitationProcess>();

   // Parameters
   protected String emailAddress = "doesntMatter";
   protected String accountNumber = "doesntMatterEither";

   // Return value
   protected InviteUserResponse response;

   public virtual void SetupTestFixture()
   {
       inviteUserProcess = new InviteUserProcess( validateEmailProcess, validateAcctProcess, preregisterEmailProcess, sendInvitationProcess );
   }
}

You can see that, as I mentioned in the previous post, I’ve still got four kinds of players: the system under test, the dependencies, the parameters, and the return value. You may also notice that in this class I’ve introduced a return value: InviteUserResponse. This is just an enum defined with the different response cases (e.g., Successful, InvalidEmail, etc.). It could have been as simple as a string with a message or more complex like a full blown object.
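
A sketch of that enum (the last two members are implied by the bullet points above)…

public enum InviteUserResponse
{
   Successful,
   InvalidEmail,
   InvalidAccount,
   InvalidEmailAndAccount
}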

And finally, a specification for a given situation, which “is a” InviteUserProcessSpec, looks like this. Notice that I set up for the defined situation in the TestFixtureSetUp. It’s here where I actually make the “Given” true.

[TestFixture]
public class Given_a_valid_email_address_and_a_valid_account_number : InviteUserProcessSpec
{
   [TestFixtureSetUp]
   public override void SetupTestFixture()
   {
      base.SetupTestFixture();
      validateEmailProcess.Stub( validateProc => validateProc.Execute( emailAddress ) ).Return( true ); // Given a valid email
      validateAcctProcess.Stub( validateProc => validateProc.Execute( accountNumber ) ).Return( true ); // and a valid account
      response = inviteUserProcess.Execute( emailAddress, accountNumber ); // For this spec, I can act here once for all asserts
   }

   [Test]
   public void The_process_should_pre_register_the_email_address_with_the_account()
   {
      preregisterEmailProcess.AssertWasCalled( registerProc => registerProc.Execute( emailAddress, accountNumber ) );
   }

   [Test]
   public void The_process_should_send_an_invitation_to_the_email_address()
   {
      sendInvitationProcess.AssertWasCalled( sendProc => sendProc.Execute( emailAddress ) );
   }

   [Test]
   public void The_process_should_return_successful()
   {
      Assert.AreEqual( InviteUserResponse.Successful, response ); // checking the returned enum value
   }
}

So this should look very similar to the bullet points from above. Notice how using the base class to hold the basics really cleans up the spec. I wouldn’t do this for just one situation, but we’ve got a handful of situations here. I’ll do one more situation for illustration.

[TestFixture]
public class Given_an_invalid_email_address_and_a_valid_account_number : InviteUserProcessSpec
{
   [TestFixtureSetUp]
   public override void SetupTestFixture()
   {
      base.SetupTestFixture();
      validateEmailProcess.Stub( validateProc => validateProc.Execute( emailAddress ) ).Return( false ); // Given an INvalid email
      validateAcctProcess.Stub( validateProc => validateProc.Execute( accountNumber ) ).Return( true ); // and a valid account
      response = inviteUserProcess.Execute( emailAddress, accountNumber );
   }

   [Test]
   public void The_process_should_not_pre_register_anything()
   {
      preregisterEmailProcess.AssertWasNotCalled( registerProc => registerProc.Execute( Arg<String>.Is.Anything, Arg<String>.Is.Anything ) );
   }

   [Test]
   public void The_process_should_not_send_an_invitation()
   {
      sendInvitationProcess.AssertWasNotCalled( sendProc => sendProc.Execute( Arg<String>.Is.Anything ) );
   }

   [Test]
   public void The_process_should_return_invalid_email()
   {
      Assert.AreEqual( InviteUserResponse.InvalidEmail, response );
   }
}

You can see that for the negative assertions (the process should not…) I’m using Rhino Mocks’ Arg<> class. Since this post is about the inheritance pattern and not mocking syntax, I’ll just say that this guy can be very powerful.
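
For reference, an InviteUserProcess that would satisfy all four situations could look something like this sketch (my guess at an implementation, not a prescription)…

public class InviteUserProcess : IInviteUserProcess
{
   private readonly IValidateEmailAddressProcess _validateEmail;
   private readonly IValidateAccountProcess _validateAcct;
   private readonly IPreregisterEmailProcess _preregisterEmail;
   private readonly ISendUserInvitationProcess _sendInvitation;

   public InviteUserProcess( IValidateEmailAddressProcess validateEmail, IValidateAccountProcess validateAcct,
                             IPreregisterEmailProcess preregisterEmail, ISendUserInvitationProcess sendInvitation )
   {
      _validateEmail = validateEmail;
      _validateAcct = validateAcct;
      _preregisterEmail = preregisterEmail;
      _sendInvitation = sendInvitation;
   }

   public InviteUserResponse Execute( String emailAddress, String accountNumber )
   {
      var emailIsValid = _validateEmail.Execute( emailAddress );
      var acctIsValid = _validateAcct.Execute( accountNumber );

      // Validate first; on any failure, do nothing and report why
      if ( !emailIsValid && !acctIsValid ) return InviteUserResponse.InvalidEmailAndAccount;
      if ( !emailIsValid ) return InviteUserResponse.InvalidEmail;
      if ( !acctIsValid ) return InviteUserResponse.InvalidAccount;

      _preregisterEmail.Execute( emailAddress, accountNumber );
      _sendInvitation.Execute( emailAddress );
      return InviteUserResponse.Successful;
   }
}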

So there it is, multiple situations tamed with inheritance. And as far as that pattern goes, my big takeaway is noticing the leanness of the spec classes, including how the individual [Test] methods are just one assertion line. When I can easily get away with that, I do it. Sometimes I can’t, so I “act” within the [Test] method. When I see that the spec classes are simple, then I feel like the system code is getting well written, meaning that it’s understandable, it’s maintainable, it’s delectable! Or so I think today. “Therefore let him who thinks he stands take heed lest he fall.” So I’ll see what tomorrow holds. 🙂

Unit test patterns, Episode I : A foundation emerges

Learning TDD has been a lot of work. I’ll start by saying that. If you’re new to TDD, you’re probably floundering around a bit. I did. A lot. I think the main, underlying issue is to “see the layers,” specifically just the one layer you’re writing a test for. Before TDD, I saw an application as more of a big unit with a few large layers. This really causes a struggle when writing a test. TDD really pressures you (drives you ;-)) to see the small, incremental steps. It’s seeing those little steps that’s the trick. If you bite off too much of an application to unit test because you’re a power coder who “gets things done,” then writing that test becomes like trying to put your shoes on without untying them. It will frustrate you to no end. I’ve been there and done that. But don’t get rid of the laces. Just untie the shoes. This is what kept me from giving up on TDD– something about it just felt right and smarter folks than me were doing it. Now, I’m no lemming, blindly following all the latest fads, but, well, I’m no lemming blindly following the status quo either. And to be fair, TDD is far too old to be a fad.

And I’m certainly still learning– I’m young to TDD– just barely over a year old now. I had tried it a few times before and couldn’t quite get off the ground, so I took a weekend at home and locked myself in a room until I wrote some successful tests. That’s when I finally got started and I’ve used it on my most recent project at work: a new website (ASP.Net MVC). Perhaps someday I’ll write about how to start a project with TDD, but for now I just want to share a few testing patterns I’m currently using. Feel free to critique; I’m on a journey– “Give instruction to a wise man, and he will be still wiser; Teach a just man, and he will increase in learning.”

So, about those patterns. This post will look at the basic one, which is all about

  • a process
  • a situation
  • and expectations

This fits well with Arrange, Act, and Assert for the mechanics of the tests/specs, and all the patterns share this overarching design or structure, which ends up looking like this…

namespace Process
{
   public class Situation
   {
      public void Expectation()
      {}
   }
}

I like the BDD naming convention and totally favor a perspective of “specifying behavior” rather than “passing tests.” In the past, I’ve used “When” to declare the process and “Given” to declare the situation. Because of this, I typically reorder the classic “Given-When-Then” naming pattern to “When-Given-Then.”

I’ll take a recent real-world (and common) situation for an example: a website where a user who has an account may invite others to join that account. I’ll call this a UserInvitationManager (for this post).

I feel using the “When” first to declare the activity/process establishes the context quickly. So…

namespace When_inviting_a_user_to_join_an_account
{
}

The situation is limited to just the input parameters in this case, so…

namespace When_inviting_a_user_to_join_an_account
{
   [TestFixture]
   public class Given_an_email_address_and_an_account_number
   {
   }
}

Before I move on to the “Then,” I set up for those Thens. This can be most of the work, actually, as you’ll see. Most of the time, I put the Act(ing) into the TestFixtureSetUp method rather than inside the Test method. I do this so that each TestFixture specifies the behavior for one situation and has the chance to easily Assert many things after the single Act(ion). By no means am I married to this. If a situation serves me better to re-Act before every Assert(ion), I’ll do it in an instant. But otherwise, I’ll Act once, which keeps the Assert(ion)s very simple.

So to set up, I first need to define and initialize the actors involved, including the system under test, or in other words, the process for which I’m specifying behavior. And like I said, I’m skipping some of the “TDD-how-to,” so I won’t go into how I came up with the dependencies for the UserInvitationManager. But I will go ahead and Act at the end of the setup.

namespace When_inviting_a_user_to_join_an_account
{
   [TestFixture]
   public class Given_an_email_address_and_an_account_number
   {
      UserInvitationManager invitationMgr;
      IUserInvitationSender invitationSender;
      IAccountManager accountMgr;

      String emailAddress;
      String accountNumber;

      [TestFixtureSetUp]
      public void SetupTestFixture()
      {
         // Arrange
         invitationSender = MockRepository.GenerateMock<IUserInvitationSender>();
         accountMgr = MockRepository.GenerateMock<IAccountManager>();
         invitationMgr = new UserInvitationManager(invitationSender, accountMgr);
         emailAddress = "newUserEmail@doesntmatter.stilldoesntmatter";
         accountNumber = "doesntMatterEither";

         //Act
         invitationMgr.InviteUser(emailAddress, accountNumber);
      }
   }
}

So, a lot of code, relatively speaking, just got added. For me, arranging is defining your parties and initializing them.

I generally define four groups of parties:

  • system under test (the UserInvitationManager)
  • dependencies (the invitationSender and accountMgr)
  • parameters (the emailAddress and accountNumber)
  • result (this action returned no result, but if it did, I would have set the result variable during the Act)

Then I initialize them.

To keep things separated in the above example, I initialized them in the setup method, but most times I initialize what I can on definition to reduce unnecessary lines of code.

IAccountManager accountMgr = MockRepository.GenerateMock<IAccountManager>();

Purely a preference. Perhaps a TDD puritan would refuse, but once I got used to this format, I didn’t need to always separate the definition and initialization, and I found that it cleans up some stuff for other setup efforts, like stubbing dependencies.

Now we get to the “Thens.” And, thanks to Ayende and RhinoMocks, they’re short and sweet, as, I think, they should be. Also notice one Assert per Test. Yeah, I’m generally one of those.

namespace When_inviting_a_user_to_join_an_account
{
   [TestFixture]
   public class Given_an_email_address_and_an_account_number
   {
      UserInvitationManager invitationMgr;
      IUserInvitationSender invitationSender;
      IAccountManager accountMgr;

      String emailAddress;
      String accountNumber;

      [TestFixtureSetUp]
      public void SetupTestFixture()
      {
         // Arrange
         invitationSender = MockRepository.GenerateMock<IUserInvitationSender>();
         accountMgr = MockRepository.GenerateMock<IAccountManager>();
         invitationMgr = new UserInvitationManager(invitationSender, accountMgr);
         emailAddress = "newUserEmail@doesntmatter.stilldoesntmatter";
         accountNumber = "doesntMatterEither";

         //Act
         invitationMgr.InviteUser(emailAddress, accountNumber);
      }

      [Test]
      public void Then_the_user_invitation_manager_should_pre_register_that_email_address_to_the_account()
      {
         accountMgr.AssertWasCalled(acctMgr => acctMgr.PreRegisterByEmailAddress(accountNumber, emailAddress));
      }

      [Test]
      public void Then_the_user_invitation_manager_should_send_an_invitation_to_the_given_email_address()
      {
         invitationSender.AssertWasCalled(sender => sender.SendInvitation(emailAddress));
      }
   }
}

Pretty basic, but again:

  • One [TestFixture] per situation as generally defined (“Given”) by parameters. Sometimes, the situation is additionally defined by a result from calling a dependency.
  • Usually one Act(ion) per situation. Sometimes this is not possible or preferable, and when it’s not, I don’t cry about it.
  • One Assert(ion) per [Test] method.

That’s the foundational pattern that has emerged for me and it’s good for many of my specifications. In a future post, I’ll actually change up how I name some of these elements, like the Whens and the Thens and even the system under test. See the previous post for a hint. But upon this basic structure I can easily define more complex situations. So, it’s been a little over a year for TDD/BDD and me, and this is where I am…headed.

Now read Unit test patterns, Episode II : Inheritance strikes back.

Coding forwards

So I’ve been drinking the dependency injection Kool-Aid for a while now and it got me thinking about the way I represent business logic in code. Maybe I’ve just come back to the basics of 50-year-old programming, but that’s ok because the basics are the basics. And I think they’re this: you’ve got Data and you’ve got Processes. So how did dependency injection (with an IoCContainer/DependencyResolver) get me here? (Skip this long story by jumping to the asterisks *******)

3 words: inversion of control. They flip (invert) the whole development process on its head…or maybe it’s been upside down this whole time and they’ve flipped it onto its feet. I suspect so. I used to do my analysis and start coding the data access components: writing queries, building classes and server-side processes to return this data to a client that would utilize it. Now, I start with the top of the application.

I can do this because my dependencies are abstractions that don’t have to be implemented for the consumer to be completed or even for the project to compile. The reason I do this is because it keeps me on target. I’m not building some large infrastructure back-end that ends up being a few pieces of data off. Those can be big things to fix. But by starting with the application, I know exactly what the next layer down needs to do. There’s almost no mystery at all. It feels like I’ve been driving backwards this whole time and now I’ve finally turned my truck around.

And here’s where that meets with a new fixation on “processes.” Because I start at the top, I easily recognize these big tasks to do. If the application had a “Do the yard work” feature, these big tasks would be “mow the yard,” “weed eat,” and “blow off the driveway”. These tasks are the processes. In business perhaps they’d be stuff like “bill the customer” or “report a claim.” Immediately, I see how this flows naturally with the language of my business customers and I like that…a lot.

So in my backwards driving days, because I was working from the inside out, I’d notice that lots of data and processes were related and I’d package them up inside a single class. This led to sizable classes, sometimes with many methods, often methods that I’d create because they seemed like they could be handy even though I didn’t know of a specific case that needed them. Remember, I was driving backwards from the supposed data to the application. Who knows what I might need? In reality, seldom would any one consumer use all these methods. Typically, the consumer instantiated the class for just one method.

**********************************************
**********************************************

But driving forward I see clearly what methods I actually need and I make just those. But make them where? In an interface, of course! “Bill the customer” becomes a BillCustomer() method inside the interface ICustomerBiller. “Report a claim” becomes ReportClaim() inside IClaimReporter. I was initially super excited about the separation and independence of the interfaces, but their definitions seemed a bit…contrived. ICustomerBiller? I saw that I was trying to make a noun out of all these processes, usually by swapping the words and adding an “er.” Works great, but these phony nouns seem hokey, and at times even vague. Definitely not going for vague. Then I thought: I don’t even need the noun! I’m just interested in the process. So then came IBillCustomerProcess. I love how the name is clear, the declared dependencies are equally clear (x process depends on IBillCustomerProcess), and the unit tests become clear (The BillCustomerProcess should invoke the CreateInvoiceProcess).

Excellent. But by now someone is crying, “But the name and method sound so redundant: BillCustomerProcess.BillCustomer()???” Yes, it is redundant, at least from a consumer’s point of view. The redundancy in the eyes of the consumer bothers me. I really just want to say something like “do the BillCustomerProcess.” Returning to my unit test (+1 for TDD), one process needs to call or “invoke” another process…hmmm, invoke sounds like delegates to me. What if I could depend on a delegate? Then the consumer could just BillCustomerProcess.Invoke(), or even .BeginInvoke for easy multi-threading stuff. Cool. At this point I laugh at myself. This is probably in a number of freshman-level textbooks somewhere for classes I never took. Oh well, better late than never.

So now I’m experimenting with delegate dependencies. Up front, my biggest uneasiness is that the delegate displaces the need for an interface. And I really like interfaces. They’ve become a kind of security blanket. I suppose I just really like abstraction and delegates are abstractions, but it doesn’t feel safe yet. That, and the IoC Container registration looks ugly. And I’m not sure what other consequences there might be.

I still must have my IoC Container, of course, to auto-wire these dependencies. No compromise there. So here’s some example code of how I did that…

First, the process will be a simple “print message” process defined like so…

public delegate void PrintMessageProcess();

Next, my TestClass will declare the dependency in its constructor…

private readonly PrintMessageProcess _printMessageProcess;

public TestClass( PrintMessageProcess printMessageProcess )
{
   _printMessageProcess = printMessageProcess;
}

I still have to write a function in a class somewhere, so I’ll do that…

public class PrintClass
{
   public void Print()
   {
      Console.WriteLine( "Hello world." );
   }
}

Then the ugly part, registering it in an IoC Container. I’ll use StructureMap here…

x.For<PrintMessageProcess>().Use( () => ObjectFactory.GetInstance<PrintClass>().Print );

Finally, somewhere in my TestClass…

public void RunTest()
{
   _printMessageProcess.Invoke();
}

And that’s it.

One of the things I really like is that the implementation doesn’t have to know anything about the delegate, which is unlike interfaces. This could make breaking down legacy code easier since the classes wouldn’t have to reference anything. The only one that knows what’s going on is the IoC Container.

One of the things I don’t like is that the implementation doesn’t have to know anything about the delegate, which is unlike interfaces. 😀 This could make for unintended behavior since the implementation is not subject to a defined contract. Scary.

If anybody actually read this far, any thoughts?

First blog post ever

Why? Creative outlet, I suppose, and to understand my own ideas and others better. Communicating ideas is often a great way of learning them more deeply yourself. And who knows, perhaps I’ll even get a comment to correct my thinking. After all, “The first to present his case seems right, till another comes forward and questions him.”  So hello, world. Feel free to question me.