Implementing a Strategy to “rule them all…”

The Strategy pattern

The Strategy pattern is one of the OOP design patterns I like the most.

According to Wikipedia, “the strategy pattern (also known as the policy pattern) is a behavioral software design pattern that enables selecting an algorithm at runtime.” – source

This UML diagram showcases it pretty well:

Why do I like it?

I believe there are several reasons that make this design pattern one of the most useful ones around.

  • KISS – It promotes the KISS (Keep It Simple, Stupid) principle in the code.
  • LSP – The “Strategies” are interchangeable and can be substituted for each other. This is a clear application of the L in SOLID, the Liskov Substitution Principle (LSP): “Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.” – Source
  • Open-Closed – Implementing the Strategy through an interface is a clear application of the Open-Closed Principle: “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”. In this case we can extend it by writing a class that extends the behavior, but we cannot modify the interface. Even better, if we implement the Strategy using a plugin mechanism we do not even need to modify the main application’s source code. It is a very clean implementation that also helps decouple the code and its responsibilities. – Source
  • SRP – We can also affirm that the Strategy promotes the Single Responsibility Principle, as each Strategy implementation should live in a single class.
  • DIP – And finally, the Dependency Inversion Principle: “One should depend upon abstractions, [not] concretions.” – Source. This holds because the strategies depend on an abstraction: the interface that defines the Strategy.

 

We can easily implement the Strategy pattern with Dependency Injection, but this forces the code of the Strategy to live in the same assembly or executable, and thus be coupled to it. For this reason, I consider that a sub-optimal implementation which does not fulfill the Open-Closed Principle at 100%, if we consider the main executable a “software entity”.
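To make the coupling concrete, here is a minimal sketch of the DI-based variant (all names are mine, purely illustrative): the strategy arrives through the constructor, but the class implementing it must still be compiled into, or referenced by, the main executable.

using System;

// Hypothetical strategy contract and a concrete strategy, for illustration only.
public interface IMessageStrategy
{
    void Send(string message);
}

public class EmailStrategy : IMessageStrategy
{
    public void Send(string message) => Console.WriteLine($"Email: {message}");
}

public class Notifier
{
    private readonly IMessageStrategy _strategy;

    // The strategy is injected, but its assembly is still referenced at compile time.
    public Notifier(IMessageStrategy strategy) => _strategy = strategy;

    public void Notify(string message) => _strategy.Send(message);
}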

Moreover, in a highly regulated environment this means we can add functionality without altering “the main software”, which might be subject to a regulated process such as FDA approval in the case of a medical system… a process that can mean several months of documentation, testing and waiting for the FDA to sign everything off.

Do you like it already? Wait – there are more benefits!

In my previous job, at RUF Telematik, I proposed the application of this pattern with a plugin system as part of the technical product roadmap, basically to decouple the code which interfaces with a concrete piece of hardware (type of HW, manufacturer, version…). That way, the main software would not need to know how to talk with a camera, monitor or communication device in the train system. The responsibility is delegated to a .dll plugin that knows how to do that work, and we can dynamically add these features without altering the main software.

In addition to the software architecture and code quality benefits, we get some more:

  • We can parallelize the development of the hardware manager DLLs among different developers, who can test them separately.
  • We can separate the release and test workflows and accelerate the development time.
  • We do not need to test the main software every time we add support for a new device or a new version of a device firmware.
  • We do not need to revalidate the full software against industry standards over and over again (usually at a substantial cost in time and money).

In a train, we could categorize the different hardware into the following four categories:

  • TFT screens
  • LCD Screens
  • RCOM Communication devices
  • Camera devices

Each one has different vendors, models and version numbers, so a somewhat more complex implementation would be needed, but this is an example, so we do not need to build “the real thing”.

So we could implement an interface like ITrainInstrumentManager that supports methods like the following (a sketch follows the list):

  • Connect
  • GetVersion
  • Update
  • Check
  • ExecuteTests
  • UpdateCredentials
  • and so on…
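A minimal sketch of how such an interface could look (the signatures are my guesses for illustration, not the real thing):

// Hypothetical contract for any train instrument, regardless of vendor or model.
public interface ITrainInstrumentManager
{
    bool Connect(string address);
    string GetVersion();
    void Update(string firmwarePackagePath);
    bool Check();
    bool ExecuteTests();
    void UpdateCredentials(string user, string password);
}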

 

And then implement a Strategy that fulfills this interface for every type of equipment, for every brand/vendor and for every model and version…

With the added benefit that I can parallelize this work and get several people working on different Strategies, one for each new device. This would enable us to add support for new hardware devices in no time at all.

And they could be tested separately, with the guarantee that if the tests pass, the main tool will work as well.

All without altering or re-releasing the tool: just add the plugins to the corresponding folder, or load them dynamically from an online service or location (if we implement the Strategy using the plugin technique, of course).

This presentation showcases some of the points mentioned above, if you are still curious.

 

Implementation of the Strategy Pattern

One of the best implementations I have ever been part of dates from 2011 and 2012, when I worked at Parlam Software, where a plug-in architecture was designed and implemented by my now friend Xavier Casals. Back then he was my customer and CTO of Parlam (and still is).

<Commercial mode>

If you are in need of translations, do check out their solution. It is a full-fledged TMS (Translation Management System) that automates your language translation process. More on this here and here.

</Commercial mode>

This plugin system enabled dynamically adding data converters for third-party systems, such as CMS systems like SDL Tridion, which his service connects to and works with. Basically, he can deliver an interface to anybody who wants to integrate with his system, which enables easy implementation as well as testing and deployment. Once the DLL is tested and verified, it can be signed for security reasons and added to a folder where it is magically loaded, and we get a perfect implementation of the Open-Closed Principle…

“software entities… should be open for extension, but closed for modification”

I know that is a big claim, but let’s get it done and you can tell me afterwards 😉

 

Structure

We will create a .NET Standard solution which will implement three projects:

  • StrategyInterface –> A .NET Core Class Library that will hold the Strategy interface, two custom attributes and a custom exception for managing the plugin. This is the basic contract that we will share between the main application that will use the plugin(s) and the plugins themselves.
  • Plugins –> This is a project with a simple class that implements the interface in the StrategyInterface project/assembly. I will use the two custom attributes to add a name and a description, so I can programmatically go through them with reflection before creating an instance, which is convenient if I want to avoid creating unnecessary objects. Note that this project has different implementations; in our case I created 4: CAM, LED, RCOM and TFT. Each one will output a DLL to a concrete directory, “D:\Jola\Plugins”.
  • StrategyPatternDoneRight –> feel free to discuss the name with me, comments are open to all ;). This is the main customer/user of the plugins that implement the Strategy, and it will load the plugins that match the interface from a concrete location in the filesystem. At the moment I did not put in much logic, just enough to load all the matching assemblies and execute a simple method that all the plugins provide.

The solution looks like:

Strategy01 structure

StrategyInterface project

The most important piece here is the interface that defines the Strategy:

Strategy02 interface
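The repository’s exact code is in the screenshot; roughly, it boils down to a single-method contract like this sketch (the real names may differ):

// Shared contract between the main application and every plugin.
public interface IStrategy
{
    // The single operation that every plugin must provide.
    void Execute();
}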

There we will also create the custom attributes, one for the name and another for the description:

Strategy03 cust attrib name
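A sketch of what those two attributes might look like (again, the actual names in the repository may differ):

using System;

// Exposes a human-readable plugin name.
[AttributeUsage(AttributeTargets.Class)]
public class StrategyNameAttribute : Attribute
{
    public string Name { get; }
    public StrategyNameAttribute(string name) => Name = name;
}

// Exposes a short plugin description.
[AttributeUsage(AttributeTargets.Class)]
public class StrategyDescriptionAttribute : Attribute
{
    public string Description { get; }
    public StrategyDescriptionAttribute(string description) => Description = description;
}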

Plugin project(s)

I created a Plugins folder to contain them all, then created the .NET Standard assemblies and added a reference to the StrategyInterface project.

Let’s say we create the CAM.Generic project to implement support for the train network cameras… there we add a class which implements the Strategy interface, decorated with the two custom attributes:

Strategy04 Plugin Strategy Implementation
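Along these lines (a sketch with made-up names and values, reusing the hypothetical IStrategy and attributes from above):

using System;

// Hypothetical camera plugin: the hardware-specific logic would live here.
[StrategyName("CAM.Generic")]
[StrategyDescription("Generic manager for train network cameras")]
public class CameraStrategy : IStrategy
{
    public void Execute()
    {
        // Placeholder for the real camera-handling code.
        Console.WriteLine("Connecting to the train network camera...");
    }
}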

Obviously this is a simplification, but here we would put all the hardware-dependent code for handling complex network operations with the camera…

All the plugin projects are configured to share the same build output path, to avoid manual work:

Strategy05 Plugin Build properties
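In a .NET Standard project this boils down to a couple of properties in the .csproj; something like the following (the path mirrors the one used in this example, and disabling the per-framework subfolder is my assumption to keep all DLLs in one flat folder):

<PropertyGroup>
  <!-- Every plugin project emits its DLL into the folder the loader scans. -->
  <OutputPath>D:\Jola\Plugins</OutputPath>
  <AppendTargetFrameworkToOutputPath>false</AppendTargetFrameworkToOutputPath>
</PropertyGroup>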

Just be aware that the output path you use must exist and be the same for all plugins.

Main Application

So, all we have left is to implement the mechanism that retrieves the assemblies from a concrete filesystem path and loads them dynamically into our current process. We will do this using reflection.

I am creating a wrapper class for exposing the strategies implemented by our plugin assemblies.

This class is named StrategyPluginContainer and will expose the two custom attributes and an instance of the plugin (really, an instance of the class that implements the Strategy interface).

The two key techniques used here are the following (a sketch combining them follows the list):

  1. Activator.CreateInstance(Type) – creates an instance of the specified type using its default constructor. Note that, strictly speaking, this is not part of the reflection API; it comes directly from the System namespace.
  2. Type.GetCustomAttributes(Type attributeType, bool inherit) – obtains the custom attributes of the given attribute type that are applied to the inspected type.
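Put together, the wrapper could look roughly like this sketch (the real class in the repository surely has more error handling; the names reuse the hypothetical ones from earlier):

using System;

// Wraps a plugin type: exposes its name, description and a lazily created instance.
public class StrategyPluginContainer
{
    private readonly Type _pluginType;
    private IStrategy _instance;

    public StrategyPluginContainer(Type pluginType)
    {
        _pluginType = pluginType;

        // Read the metadata attributes without instantiating the plugin yet.
        var nameAttr = (StrategyNameAttribute)_pluginType
            .GetCustomAttributes(typeof(StrategyNameAttribute), false)[0];
        var descAttr = (StrategyDescriptionAttribute)_pluginType
            .GetCustomAttributes(typeof(StrategyDescriptionAttribute), false)[0];

        Name = nameAttr.Name;
        Description = descAttr.Description;
    }

    public string Name { get; }
    public string Description { get; }

    // Created on demand through the default constructor.
    public IStrategy Instance =>
        _instance ?? (_instance = (IStrategy)Activator.CreateInstance(_pluginType));
}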

Note: the green underlining is due to style suggestions from my VS installation, with which I do not agree if I want clarity. Expression-bodied properties or the ?? operator are good and reduce space, but if somebody is not used to this syntax, readability and understandability suffer.

Strategy06 Plugin Wrapper

Now we can implement the StrategyPluginLoader. This class’s responsibility is to keep a list of the plugins that implement the Strategy, which it does by loading them from the filesystem (it could also get them from a web service or some other means).

Basically, it holds a List of the StrategyPluginContainer we just created, exposed through a property.

It populates the list by getting all the DLLs from a specific hard disk folder and loading them with Assembly.LoadFrom(filename).

Then we get the types contained in each assembly and iterate through them, matching them against the Strategy interface. I also check that the two custom attributes are present and, if everything matches, I create a StrategyPluginContainer instance for this concrete type.

As a final check, I verify whether the plugin is already in the plugin list to avoid duplicates, and if it already exists I update it properly.
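In code, the loader could look roughly like this (a sketch under the same assumed names; the folder is the one the plugins build into):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

// Scans a folder for DLLs and keeps one container per plugin type found.
public class StrategyPluginLoader
{
    private const string PluginsPath = @"D:\Jola\Plugins";

    public List<StrategyPluginContainer> Plugins { get; } = new List<StrategyPluginContainer>();

    public void LoadPlugins()
    {
        foreach (string file in Directory.GetFiles(PluginsPath, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(file);

            foreach (Type type in assembly.GetTypes())
            {
                // Keep only concrete classes that implement the Strategy
                // interface and carry both metadata attributes.
                bool isStrategy = typeof(IStrategy).IsAssignableFrom(type) && !type.IsAbstract;
                bool hasMetadata =
                    type.GetCustomAttributes(typeof(StrategyNameAttribute), false).Any() &&
                    type.GetCustomAttributes(typeof(StrategyDescriptionAttribute), false).Any();

                if (!isStrategy || !hasMetadata) continue;

                var container = new StrategyPluginContainer(type);

                // Update an already-loaded plugin with the same name, otherwise add it.
                int existing = Plugins.FindIndex(p => p.Name == container.Name);
                if (existing >= 0)
                    Plugins[existing] = container;
                else
                    Plugins.Add(container);
            }
        }
    }
}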

Strategy07 Plugin loader

Last but not least, I use all of this from a simple console application: I create the StrategyPluginLoader, execute the command to load all the plugins, and iterate through them, invoking the only command in the interface, which is implemented in separate, decoupled assemblies and loaded dynamically at runtime, without any knowledge of or coupling to the main application.
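A minimal sketch of that console application, under the same assumed names:

using System;

public static class Program
{
    public static void Main()
    {
        var loader = new StrategyPluginLoader();
        loader.LoadPlugins();

        // Every plugin was discovered and instantiated at runtime;
        // the host only knows the shared IStrategy contract.
        foreach (var plugin in loader.Plugins)
        {
            Console.WriteLine($"{plugin.Name}: {plugin.Description}");
            plugin.Instance.Execute();
        }
    }
}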

Strategy08 bringing it together

The full code can be found on GitHub here.

 

Happy coding!

 

 

NDepend, a review

A short while ago I got my hands on NDepend, thanks to Patrick Smacchia, its lead developer. NDepend is a static code analyzer for performing static analysis on .NET code.

In this blog post, I am going to explain a bit what it is and what it does.

What is NDepend?

As mentioned, it is a static analysis tool for .NET & .NET Core. Static means the analysis is performed on the code without executing it.

Usually, static code analysis is performed to ensure that the code adheres to some guidelines or metrics, such as limits on the number of warnings or certain errors…

Probably, if you work professionally with .NET, you have worked with static analyzers from Visual Studio itself, the most common being FxCop, StyleCop, or the more advanced SonarQube.

That said, the fact is that these code analyzers do not compete with NDepend as they are fundamentally different. In fact, they complement each other.

What is different?

Basically, the rule set implemented by NDepend is essentially different from that of the other static analyzers, like SonarQube or other Roslyn analyzers. Those are good at analyzing what happens inside a method: code, syntax and code flow… whilst NDepend is good at seeing things from a wider, higher-level perspective. It is really focused on analyzing the architecture, OOP structure and implementation, dependencies – which is where the product name comes from 😉 –, metrics, breaking changes and mutability – and many others too.

The strength of NDepend lies in analyzing software architectures and their components, their complexity and interrelations, whilst the other products’ strengths are at a different level, focusing more on code modules; all of them are, of course, excellent products.

NDepend is designed to integrate with some of these products, like SonarQube.

To know more, see here.

What does it do?

It performs static code analysis on .NET & .NET Core and, upon that, delivers the following information about your code and, importantly, its architecture:

  • Technical Debt Estimation
  • Issue identification
  • Code quality
  • Test Coverage
  • Complexity and diagrams
  • Architecture overview

(Note: it does way more, but I have shortened the list to what I think is important.)

And it presents all this in a very smart way. Let me show you the NDepend dashboard:

ndepend 01.JPG

Additionally, it integrates seamlessly with Visual Studio, TFS and VSTS. It integrates especially well with the build process, providing the ability to analyze this data over time, comparing builds, test coverage and the build process itself.

To know more, see here.

Another important feature for communicating with management and reasoning about “passing a milestone” or “fixing the technical debt” (read technical debt as the total of the issues we leave in the code knowing they are there… but software has to ship, right?) is that it provides a smart estimation of that debt.

 

A quick example

To get some hands-on experience with .NET Core, I recently implemented a simple service, for which I wrote some tests just for fun and which I also made asynchronous. Let’s see how it faces the truth! Just bear in mind it was a just-for-fun project and time was limited 😉

It’s quite easy: I followed the steps in the “getting started” video here, installed NDepend and its Visual Studio plug-in, and opened my VS2017, where an NDepend tab now appears.

Let’s open my RomanConverter coding-dojo self-practice project and click to attach a new NDepend project.

ndepend 02.JPG

The following window appears, and we can already click the green “play” button.

ndepend 03.JPG

In the bottom right corner there is a sphere indicating NDepend’s status. Clicking play starts the analysis, and the indicator will show that it is analyzing.

Once finished, our report is displayed in a browser.

ndepend 03a.JPG

From the dashboard, click around and explore the different information exposed.

A simple click in the Rules pane, for example on the violated rules, gives us this dashboard:

ndepend 03b.JPG

I find it brilliant: not only are the issues identified, but stacked data bars are used to showcase the rules with the most issues or the longest times to fix, and they are also subtly color-coded, so understanding which issues are the most critical and deciding which ones to tackle first – or right away – is pretty damn intuitive.

A note on this: I also realized – thanks, Patrick, for pointing it out – that clicking on the issues will show them, so what seems like a presentation UI becomes a fully interactive dashboard that gets you into action – or helps you understand the underlying issues better.

It makes it easy to identify what our managers would call “low-hanging fruit”: issues that are easy to fix and save trouble later.

Another useful panel is the Queries and Rules Explorer, which we can open from the circle menu in the bottom right corner, or via the usual menu: NDepend > Rules > View Explorer Panel.

ndepend 04a.JPG

And it will appear:

ndepend 04.JPG

With this panel, we can explore the rules that the solution has violated, grouped into categories like Code Smells, Object-Oriented Design, Architecture, Code Coverage, Naming Conventions, the predefined Quality Gates, Dead Code, and many more… If we click on a rule, we can explore the C# LINQ query, aka “CQLinq”, that defines it.

CQLinq queries run against a code model dedicated to code quality, and they can be edited, compiled and executed live.

An example of such rule follows:

// <Name>Interfaces must start with an I</Name>
warnif count > 0
Application.Types.Where(t => t.IsInterface && !t.SimpleName.StartsWith("I"))

And it seems damn simple, even to me… 😉
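For instance, a homemade rule along these lines (my own illustrative query, not one of the stock rules) would flag overly long methods:

// <Name>Methods should not have too many lines of code</Name>
warnif count > 0
Application.Methods.Where(m => m.NbLinesOfCode > 30)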

From here we can quickly access the offensive code and fix it.

Other visualizations that must be explored are the dependency graph, the dependency matrix, and the metrics view.

All are advanced visualizations that give great insight into our solution and how it is structured. Let’s see them.

Code Metrics View

ndepend 05.JPG

Here we can see that by default we are analyzing our methods, plotting lines of code versus cyclomatic complexity. This visualization, being a treemap, helps greatly in understanding the information.

We can configure the metrics, with a wide selection of granularity levels, sizes (of the treemap boxes) and colors. An especially useful one is code coverage.

An example can be seen next, based on NDepend’s own source 😉 – see here.

TreemapColor.png

Dependency Graph

Here we can see a graph representation, initially of our namespaces, which uses LOC as the sizing factor for the nodes and the number of members for the width of the edges connecting them. We can also include third-party assemblies.

ndepend 06.JPG

It is great for knowing whether the interfaces are respected from a software architecture viewpoint, or for seeing whether a certain assembly is used only where it should be.

I saw no way to group several assemblies into one; for example, the explosion of Microsoft.AspNetCore into several assemblies is of no use to me, and I would like to be able to group them into a single node to make the graph clearer. Otherwise this adds noise, which might make other relations I want to visualize harder to detect… (Update: Patrick Smacchia mentioned that this is in the works – cool!)

 

The Dependency Matrix

Large solutions would make the previous graph representation a mess: too many namespaces and assemblies. Here we can select namespaces or assemblies and restrict the view, drilling down to the elements we want to dig into, and then jump to their graph view.

ndepend 07.JPG

There we can even select a method and right-click to see it either in a dependency graph view or as a dependency matrix.

 

What is special about it?

Simply said, its estimation is based not only on the source code but also on solution-level analysis, as mentioned earlier.

I also mentioned the C# LINQ queries, and it seems to me like quite a flexible approach: everything is interlinked, all the analyses are performed through queries, and much of the data presented is based on queries beyond just the rules: trends, quality gates, searches…

Its visualizations are special, period: it shows the right visualization for the job in a simple yet efficient way. Yes, if we are not used to graphs, dependency matrices or treemaps, this tool can look intimidating, but don’t be put off. Once you get used to it, it becomes second nature. I used it some years ago to fix a mess and it helped greatly, even though I did not use it fully, just for two visualizations.

Another aspect I really like is that whenever you visualize some information, all the relevant context comes along. An example is the rules! I like the detail that even in the top menu we can see what the current solution is violating.

Or the fact that in the rules panel I see the issues next to the debt, the annual interest and more.

Fundamentally, it helps by showing important information where we need it.

Should we use it?

First, to have a visual reference of your project and what is good (or wrong) in it. It can show a lot of things in a very visual way, which helps greatly in:

  1. Knowing the state of our solution
  2. Understanding the issues, where they are, and the effort it will take to fix them
  3. Communicating with managers

 

Concrete features:

  • Enforce technical debt measurement in your coding guidelines and processes, especially regarding the cost to fix issues and the cost of leaving them unfixed.
  • Understand the entanglement of our software
  • Analyze Software architecture and Code Quality
  • Accurately track the state of our software over time, being able to determine its improvement (or worsening) across its different attributes.

 

Summarizing – I like it a lot!

It is easy to get lost in it, but it is a rare jewel with an entry barrier you should push through to see its beauty or, said plainly, its usefulness.

To me, its best feature is being able to showcase already-implemented software architectures in a different, kinesthetic way, with visualizations tailored to highlight important aspects of your software. This is great for seeing the real state of your code, understanding it – and fixing it.

 

References: