Closures leading to “Type not marked as serializable” exception

Today I faced a SerializationException that referred to some anonymous inner class, stating it was not serializable, when IIS tried to store the session in the ASP.NET State Service:

Type ‘Xyz+<>c__DisplayClass10’ in Assembly ‘Xyz, Version=1.2.5429.24450, Culture=neutral, PublicKeyToken=null’ is not marked as serializable.

I looked for lambdas in my code and found quite a few, but most of them were not new and had never caused any serialization issues. Then I noticed that I had added a new lambda expression that “happened” to build up a closure.

I have built a very simple example to confirm that closures are not serializable whereas “normal” functions are:

[TestFixture]
public class GeneralUnderstandingTests
{
    [Serializable]
    private class ASerializableType
    {
        private readonly Func<int> thisIsAClosure;
        private readonly Func<int> thisIsNotAClosure;

        public ASerializableType()
        {
            // Normal functions are allowed: The following
            // succeeds to serialize
            const int SomeConst = 12345;
            thisIsNotAClosure = () => SomeConst;

            // But closures are compiled to non-serializable
            // inner classes and break serialization:
            var someVariable = 12345;
            thisIsAClosure = () => someVariable;
        }
    }

    [Test]
    public void ASerializableType_CanBeSerialized()
    {
        var sessionState = new SessionStateItemCollection();
        sessionState["sut"] = new ASerializableType();
        sessionState.Serialize(
            new BinaryWriter(new MemoryStream()));
    }
}

This test fails, but goes green as soon as the line thisIsAClosure = … is commented out. The line thisIsNotAClosure = …, however, does not cause any issues: SomeConst is not a variable but a constant, that is, it does not build a closure but is inlined by the compiler.
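If the delegate field really must live in a serializable type, one way out is to capture only “this”: a lambda that references only fields of the enclosing class is compiled to an instance method on that class instead of a display class. The following is a sketch under the assumption that the captured state can be promoted to a field (all names here are my own):

```csharp
using System;

[Serializable]
class CapturesOnlyThis
{
    private readonly int someValue;
    private readonly Func<int> func;

    public CapturesOnlyThis(int value)
    {
        // Promote the would-be captured local variable to a field...
        someValue = value;

        // ...so the lambda captures only "this". The compiler emits an
        // instance method on CapturesOnlyThis rather than a
        // "<>c__DisplayClass", and serialization succeeds because the
        // delegate's target is this [Serializable] instance.
        func = () => someValue;
    }

    public int Invoke() { return func(); }
}
```

The key difference to the failing example: no local variable is captured, so no non-serializable compiler-generated class is involved.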

New Feature Request: [Rotten] attribute

This is an unofficial new feature request – I am floating between sarcasm and seriously considering implementing this.

Here it is, I proudly present:

The [Rotten] attribute*

* Other candidate names are: [Smells], [CodeSmell], [Stinks]

The principle is simple:

  1. If you find a piece of code that is of very low quality, annotate it with the [Rotten] attribute.
  2. The tooling (e.g., ReSharper, CI, any quality management tools such as SonarQube) does the rest for you:
    • The properties, methods, or classes (short: code units) marked with the attribute are considered “rotten” and are marked as such – this could be anything ranging from a simple ReSharper warning to a “rotten lines quotient” or the like, to gamification tools such as the Jenkins game plugin, which would punish the introduction of rotten code and reward its removal (though the introduction of the attribute itself and the cascading markings should not count).
    • Any code unit that directly depends on a rotten code unit is itself considered rotten.
    • Any code unit that directly inherits from rotten code is itself considered rotten, too.
    • These rules are applied recursively so as to propagate the rot to the outermost code depending on the rotten cluster.
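The attribute itself would be trivial to define. Here is a hypothetical sketch – the attribute targets and the optional Reason property are my own assumptions, nothing here exists in any real tool:

```csharp
using System;

[AttributeUsage(
    AttributeTargets.Class | AttributeTargets.Method | AttributeTargets.Property,
    Inherited = true, AllowMultiple = false)]
public sealed class RottenAttribute : Attribute
{
    // Optional note explaining why the code unit is considered rotten.
    public string Reason { get; set; }
}

// Hypothetical usage:
[Rotten(Reason = "God class, no tests, magic numbers everywhere")]
public class LegacyImportManager
{
    // ...
}
```

The real work, of course, would be in the tooling that walks the dependency graph and propagates the marking.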

Help – my code is marked “rotten” – what can I do?

You can take the following measures to bring down your “rotten code quotient”:

1. Fix the rotten code.

Cover it with unit tests, refactor it, fix the bugs, remove the smell, make it readable. Once you are finished, remove the attribute – all of your dependent code will immediately get rid of its “rotten” status, too.

2. If 1. is not (yet) possible: Protect yourself against the rotten code

Remove direct dependencies on rotten units. For “has-a” dependencies: Invert your dependency. Put a well-defined interface between the two units. Then you can cover both sides with unit tests to document the behavior of the rotten unit, including correct and incorrect parts, and verify that your “clean” unit handles both properly. This will also loosen the coupling of your not-in-itself-rotten code to the rotten part, and it will improve the odds that you will one day be able to simply replace the rotten unit with a newly written one.

In severe cases you could even consider introducing a mediator or adapter class between the rotten and the clean units. It could “even the bumps” and prevent the clean part from assimilating to the rotten part.
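Sketched in code, the interface-plus-adapter idea might look like this. All names here are hypothetical, and the “quirk” of the rotten unit is invented for illustration:

```csharp
// The rotten unit (stub): say it is known to return -1 instead of 0
// for a quantity of zero.
public class RottenPriceCalculator
{
    public decimal DoCalcStuff(int quantity)
    {
        return quantity == 0 ? -1 : quantity * 9.99m;
    }
}

// The well-defined interface that the clean code depends on.
public interface IPriceCalculator
{
    decimal CalculatePrice(int quantity);
}

// The adapter "evens the bumps": it documents and corrects the quirk,
// so the clean side never has to know about it.
public class RottenPriceCalculatorAdapter : IPriceCalculator
{
    private readonly RottenPriceCalculator rotten = new RottenPriceCalculator();

    public decimal CalculatePrice(int quantity)
    {
        var price = rotten.DoCalcStuff(quantity);
        return price < 0 ? 0m : price; // documented quirk of the rotten unit
    }
}
```

Unit tests against IPriceCalculator then pin down the corrected behavior, and the rotten class can later be swapped out behind the interface.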

For “is-a” dependencies (i.e., subclasses): Stop inheriting from that rotten base class. Introduce an interface for other code pieces so you can satisfy them without inheriting from a base class. Introduce a new, clean base class, or stop subclassing altogether – it might be part of the problem that led to the code rot.

3. Are you kidding?

Only kind of. If you take the measures above, you will keep as much of your code as possible safe from being infected by the rot of the already-rotten parts. If you do not, and code that you might consider clean depends on rotten code, your clean code will misbehave and become hard to maintain just as the rotten code does, because code rot is infectious. This is why such code is harshly considered “rotten”, too, thus providing an incentive to fix that.

Polymorph Razor Views

As an object-orientation-spoiled programmer, when you come to use an HTML rendering engine such as Razor, you will soon start wishing for the same design advantages that object orientation brings along.

Using polymorphism in “class-ic” object orientation, e.g., you can do things like this (simplified code snippet – don’t try to copy-paste):

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public virtual string FormatName()
    {
        return FirstName + " " + LastName;
    }
}

class PersonWithMiddleName : Person
{
    public string MiddleName { get; set; }

    public override string FormatName()
    {
        return FirstName + " " + MiddleName + " " + LastName;
    }
}

That means you can write generic code that iterates over a couple of Person objects and prints their names without caring how they are actually built. By default, a Person yields its first and last name when FormatName is invoked, but if you need to print a middle name, too, you can simply subclass Person and override FormatName. When invoking FormatName on an instance of PersonWithMiddleName, even when it is known only as a Person, the most specific implementation found will apply – here, the one that includes the middle name.
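The generic loop described above can be sketched as follows. The two classes are repeated in condensed form so the snippet stands alone (the sample names are, of course, made up):

```csharp
using System;
using System.Collections.Generic;

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public virtual string FormatName() { return FirstName + " " + LastName; }
}

class PersonWithMiddleName : Person
{
    public string MiddleName { get; set; }
    public override string FormatName()
    {
        return FirstName + " " + MiddleName + " " + LastName;
    }
}

// The calling code only knows about Person, yet the most specific
// FormatName implementation is applied for each element:
var people = new List<Person>
{
    new Person { FirstName = "Ada", LastName = "Lovelace" },
    new PersonWithMiddleName
        { FirstName = "George", MiddleName = "Gordon", LastName = "Byron" }
};

foreach (var person in people)
{
    Console.WriteLine(person.FormatName());
}
// Prints:
// Ada Lovelace
// George Gordon Byron
```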

The good news – you can get that in Razor, too

Razor has a concept that reminds me of the same type of polymorphism. Using Display Templates and Editor Templates you can define a general template to render a Person:

@* ~/EditorTemplates/Person.cshtml *@
@model Person
<div class="Person">@Model.FirstName @Model.LastName</div>

This template applies each time you use EditorFor with a Person type property:

@* suppose a class Book { Person Author { get; set; } } *@
@model Book
@Html.EditorFor(m => m.Author)

Now say we have a book with a PersonWithMiddleName author. Then the Person template is still applied, because PersonWithMiddleName inherits from Person. But we have the option to override the template for the special class:

@* ~/EditorTemplates/PersonWithMiddleName.cshtml *@
@model PersonWithMiddleName
<div class="Person">
@Model.FirstName @Model.MiddleName @Model.LastName
</div>

Now, whenever the book has a PersonWithMiddleName author, this template applies. For all other Person objects, the Person.cshtml applies.

We can take it a step further! “Base templates”

If you did not know of Editor Templates before, you might be so excited about them now that you want to try them out right away and cannot think clearly enough to read on. Just come back later.

If you already did know Editor Templates you might have been wondering: Can we take it a step further? In C# the overriding method can call its base method to reuse the code of the base class:

class PersonWithMiddleName : Person
{
    public string MiddleName { get; set; }

    public override string FormatName()
    {
        return base.FormatName() + ", middle name: " + MiddleName;
    }
}

Can we do that with Editor Templates?

We can, making use of a helpful answer on Stack Overflow.

@* ~/EditorTemplates/PersonWithMiddleName.cshtml *@
@model PersonWithMiddleName
@Html.Partial("~/EditorTemplates/Person.cshtml", Model)
, middle name:
@Model.MiddleName

At first glance you might think it looks a bit clunky, but it is not: Just as a subclass needs to reference its base class with its full class name including the namespace, we reference the “base template” using its full path. We simply pass that template the same model. Thus it also uses the exact same ViewData (e.g., the HtmlFieldPrefix). In other words, we pass it the context, which is roughly analogous to the “this” keyword in C#.

Template Method Pattern

Now we get to an even more interesting point: If we have inheritance and polymorphism at hand, can we use any of the popular object oriented design patterns?

I tried the template method pattern. It is very useful here because when you render a view it is pretty likely that you want to “inject” some additional HTML in the middle of the rendering output of the base template.

Taking the example from above, let us say we want to render FirstName + MiddleName + LastName again:

@* ~/EditorTemplates/Person.cshtml *@
@model Person
@{
    var extension1 = ViewData.ContainsKey("ExtensionPoint1")
        ? ViewData["ExtensionPoint1"] : string.Empty;
}
<div class="Person">
    @Model.FirstName
    @Html.Raw(extension1)
    @Model.LastName
</div>

The template basically renders FirstName + LastName but provides an “extension point” for inserting HTML between the two names. A pretty silly application of the pattern, but, you know, it is an example only.

The extension must be a string containing HTML. It is only rendered if you provide it.

@* ~/EditorTemplates/PersonWithMiddleName.cshtml *@
@model PersonWithMiddleName

@{
    var extension1 = RenderMiddleName().ToString();
    ViewData["ExtensionPoint1"] = extension1;
}
@Html.Partial("~/EditorTemplates/Person.cshtml", Model)

@helper RenderMiddleName()
{
    @Model.MiddleName
}

Our specialized template makes use of the Person.cshtml “base template” for rendering the parts that the base template already knows how to render best. It combines that with its own specialized rendering by injecting only the part that is really different from the base template – the middle name.

We don’t define the extension1 string directly but use a Razor helper instead. This allows us to use Razor syntax to define the part to inject rather than writing lengthy worms of plain HTML strings.

Conclusion

Using Editor Templates (and Display Templates) in a smart combination with Html.Partial and ViewData, you can leverage a (maybe) unknown range of object-oriented techniques for building your web pages. This allows for a modular, clean design in this part of your web application, just like in the rest of it, thus improving code reuse and the maintainability of your templates.

I think this is great news! What do you think? Have you been using this approach before? Do you have better ideas? Or do you see any issues with this approach?

No more copy & paste in my IDE

This is day one of my new trial.

I have removed the keyboard shortcuts for copy & paste from Visual Studio. I have seen too much code duplication, and some of it bears my very own signature. More often than not it is evil. I am aware that there are times when copying is fine. But in the few cases where it is, having to navigate to the menu should not be too heavy a burden. I hope that the time and inconvenience of navigating there will trigger a thought process about what I am about to do, why I am doing it, and whether there might be a better way.

Depending on the outcome I might re-add the shortcuts or else remove the menu options, too.


Do not fix code duplication*

(* Full title: Do not fix code duplication – solve it)

When you find two identical or near-identical pieces of code in your codebase, this is called code duplication. There are many resources explaining why code duplication is an issue.

When you find duplicate code, the intuitive reaction is to fix it right away. Don’t. Here is why:

There is a subtle difference between fixing an issue and solving it. Fixing means to remove it. Solving means to analyze the root cause and fix that. When you fix code duplication the typical thought process is: “Oh, here is a piece of code, and here is the same again. That’s bad. Let us merge them. Hm, maybe we can extract a new private method. Or we create a class for it. Or we place it in this class here, since this is already used in both places anyway. Oh yes, let’s do that.”

However, duplicate pieces of code are usually not the root issue. They are only a smell. There might be a flaw in your design. Your code might live in a different place from where the associated semantic concept belongs.

Take this example:

We have a model class that represents a service product that comes in three tiers: Basic, Medium, Top. We find this property:

class Product
{
    public int ProductType { get; set; }
}

Then we have a document generation module for printing contracts or offers. Therein we find multiple blocks of the form

if (product.ProductType == Constants.MEDIUM
    || product.ProductType == Constants.TOP)
{
    ...
}

We could fix that by extracting a local variable:

var isMediumOrTop = product.ProductType == Constants.MEDIUM
                    || product.ProductType == Constants.TOP;

Or we could extract a method

private static bool IsMediumOrTop(Product product) { ... }

No, much better. Let us introduce a helper class.

public class AllPurposeDocumentGenerationHelper { ... }

We won’t do that.

Instead we lean back and think:

  • We have duplicate code. Why is that?
  • Is there a business meaning attached to this condition?
  • What is it?

We find that the business actually wants to “unlock” a certain special offer if you buy the medium version or above. So it is a business thing. It is not specific to generating a document, nor is “IsMediumOrTop” necessarily a good name to represent it.

Recognizing that, we decide that the model needs to be extended:

public bool SpecialOfferIncluded
{
    get
    {
        return ProductType == Constants.MEDIUM
               || ProductType == Constants.TOP;
    }
}

The conditionals in the document generation will now look like this:

if (product.SpecialOfferIncluded) { ... }

The semantic meaning will be immediately clear to the reader (and coder). Also, by looking at the new model property you will learn a business rule: “If the product type is medium or top, the special offer is unlocked.”

Next we notice that in our front end code we have the same conditional but there it was not so obvious:

if (product.ProductType == Constants.BASIC) { ... }

Gotcha!

if (!product.SpecialOfferIncluded) { ... }

No, wait! We wanted to lean back and think first. While doing that, it might turn out that this piece of code is actually not related to the special offer at all. Instead, it is really specific to the product type. Luckily, we now have a way to make a semantic distinction between the two meanings.
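To make the distinction concrete, here is a sketch of the model carrying both notions side by side. The Constants values are my own assumption – the post never shows them:

```csharp
static class Constants
{
    // Hypothetical values for illustration only.
    public const int BASIC = 0;
    public const int MEDIUM = 1;
    public const int TOP = 2;
}

class Product
{
    public int ProductType { get; set; }

    // The business rule, named after its business meaning.
    public bool SpecialOfferIncluded
    {
        get
        {
            return ProductType == Constants.MEDIUM
                   || ProductType == Constants.TOP;
        }
    }
}
```

Document generation now asks product.SpecialOfferIncluded, while the front end keeps asking product.ProductType == Constants.BASIC where it genuinely cares about the product type.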

Soon the business will come to you and tell you that in the next version, the SpecialOffer should be included only if the price exceeds a certain limit. Easy:

bool SpecialOfferIncluded
{
    get
    {
        return Price > Constants.MIN_PRICE_FOR_SPECIAL_OFFER;
    }
}

Done.

Conclusion

When you face duplicate code, take it as an opportunity to think about your design, the business, and how well the design represents the semantics of your application. Chances are good you will find a way to improve it, thus really solving the root issue of the duplicate code rather than just fixing it in an arbitrary way.

Generic Types are prettier than Dictionaries

If you need to pass key-value pairs to a method, you will find that many modern implementations prefer to take them as an “object” rather than a dictionary. This allows you to create an anonymous object, which is prettier than a dictionary: the notation is shorter, and it brings along some of the advantages of static typing:

var result = MyMethodThatTakesAnAnonymousObject(
    new { foo = "bar", baz = 1 });

The method then reflects on the object and uses the property names as keys and the property values as values.
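Internally, such a method might do something like the following. This is a minimal sketch, and the method name is hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;

// Reflect over the anonymous object's public properties and read them
// into a dictionary of keys and values.
static IDictionary<string, object> ToDictionary(object keysAndValues)
{
    return keysAndValues.GetType()
        .GetProperties()
        .ToDictionary(p => p.Name, p => p.GetValue(keysAndValues, null));
}
```

For example, ToDictionary(new { foo = "bar", baz = 1 }) yields a dictionary mapping “foo” to “bar” and “baz” to 1.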

What you might not have been aware of: the same can be done for return values. If the caller of the method knows which keys it expects, you can support it by taking this expectation as an anonymous object:

var result = PleaseReturnTheResultInAnObjectLikeThis(
    new { foo = "", bar = 0 });
Console.WriteLine(result.foo);


And this is how it works:

[TestClass]
public class LittleTest
{
    [TestMethod]
    public void Test_A_Little()
    {
        var t = PleaseReturnTheResultInAnObjectLikeThis(new { Foo = "" });
        Assert.AreEqual("hello", t.Foo);
    }
    private T PleaseReturnTheResultInAnObjectLikeThis<T>(T input)
    {
        // The following line simulates any logic that creates keys and
        // values in any form. You can use LINQ to transform them into
        // a dictionary as a pre-step to fill them into your T object.
        var values = new Dictionary<string, object> { { "Foo", "hello" } };

        // The next line brings us to the heart of the idea
        return MapToGenericType(input, values);
    }
    private static T MapToGenericType<T>(T input, IDictionary<string, object> values)
    {
        var constructorInfo = input.GetType().GetConstructors().Single();
        var parameterInfos = constructorInfo.GetParameters().ToArray();
        var parameters = new object[parameterInfos.Length];
        foreach (var parameterInfo in parameterInfos)
        {
            parameters[parameterInfo.Position] = values[parameterInfo.Name];
        }
        var output = (T) constructorInfo.Invoke(parameters);
        return output;
    }
}

The trick is that an anonymous object always brings along its anonymous type, which has a constructor whose parameters correspond 1:1 to the properties you defined for the anonymous object. You can use generics to transport knowledge of the type of an anonymous object beyond the scope where it was defined.

This way you kind of define an expectation of the result in a statically typed manner. Your compiler will not guarantee that the expectation holds true, though; you will only find out at runtime, when MapToGenericType is executed.
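To see where a broken expectation surfaces, consider this sketch, which reuses the MapToGenericType helper from the listing above with a dictionary that only contains “Foo” while the caller also expects “Bar”:

```csharp
// Compiles fine - the mismatch only shows at runtime:
var values = new Dictionary<string, object> { { "Foo", "hello" } };
try
{
    var result = MapToGenericType(new { Foo = "", Bar = 0 }, values);
}
catch (KeyNotFoundException)
{
    // "Bar" has no entry in the dictionary - this is the moment
    // the unmet expectation finally surfaces.
}
```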

Unit Tests: An AAA-conform way to expect exceptions

I like the way NUnit supports expected exceptions in unit tests – as opposed to some other frameworks, where you have to misuse a try-catch block

try
{
    DoSomething();
    Assert.Fail("exception expected");
}
catch (ExpectedException)
{
    // this is the good case, do nothing
}

in NUnit you can

Assert.Throws<ExpectedException>(() => DoSomething());

Or, if you want to make more assertions on the exception thrown, Assert.Throws returns the caught exception:

var ex = Assert.Throws<ExpectedException>(() => DoSomething());
Assert.That(ex.Message, Is.EqualTo("Failed to do this"));

However, this unfortunately (still) breaks the AAA structure of a unit test: the act happens inside the assert. With only a tiny helper you can get an AAA-structured unit test that expects an exception:

// Arrange
var x = "foo";

// Act
var actualException = Catch<ExpectedException>(() => DoSomething(x));

// Assert
Assert.That(actualException, Is.Not.Null);
Assert.That(actualException.Message, Is.EqualTo("Failed to do this"));

And this is the helper:

public TException Catch<TException>(Action action)
    where TException : Exception
{
    try
    {
        action();
        return null;
    }
    catch (TException ex)
    {
        return ex;
    }
}
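Two properties of the helper are worth spelling out; the following sketch uses the Catch method defined above:

```csharp
// 1. If no exception is thrown, Catch returns null, so the
//    Is.Not.Null assertion fails the test as intended.
var none = Catch<InvalidOperationException>(() => { });
// none is null here

// 2. An exception of a *different* type is not matched by the generic
//    catch clause; it propagates and fails the test on its own.
var caught = Catch<InvalidOperationException>(
    () => { throw new InvalidOperationException("boom"); });
// caught.Message is "boom"
```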

I think this is a big plus. What do you think? Does this approach have any drawbacks or caveats?