NDepend, a review

Recently I got my hands on NDepend, thanks to Patrick Smacchia, its lead developer. NDepend is a static code analyzer for .NET code.

In this blog post, I am going to explain a bit what it is and what it does.

What is NDepend?

As mentioned, it is a static code analyzer for .NET & .NET Core. Static means that the analysis is performed on the code while it is not being executed.

Usually, static code analysis is performed to ensure that the code adheres to guidelines or metrics, such as the number of warnings or the presence of certain errors.

Probably, if you work professionally with .NET, you have worked with static analyzers from Visual Studio itself, the most common being FxCop, StyleCop or the more advanced SonarQube.

That said, the fact is that these code analyzers do not compete with NDepend as they are fundamentally different. In fact, they complement each other.

What is different?

Basically, the rule set implemented by NDepend is essentially different from that of other static analyzers, like SonarQube or other Roslyn analyzers. Those are good at analyzing what happens inside a method – code, syntax and code flow – whilst NDepend is good at seeing things from a wider, higher-level perspective. It is really focused on analyzing the architecture, OOP structure and implementation, dependencies – where the product name comes from 😉 –, metrics, breaking changes and mutability – and many others too.

The strength of NDepend lies in analyzing software architectures and their components, complexity and interrelations, whilst the strengths of the other products are at a different level, focusing more on code modules – all of them being, of course, excellent products.

NDepend is designed to integrate with some of these products, like SonarQube.

To know more, here

What does it do?

It performs static code analysis on .NET & .NET Core and, upon that, delivers the following information about your code and, importantly, its architecture:

  • Technical Debt Estimation
  • Issue identification
  • Code quality
  • Test Coverage
  • Complexity and diagrams
  • Architecture overview

(Note: it does way more, but I’ve shortened the list to what I think is important.)

And it shows this in a very smart way – let me show you the NDepend Dashboard:

ndepend 01.JPG

Additionally, it integrates seamlessly with Visual Studio, TFS and VSTS. It integrates especially well with the build process, providing the ability to analyze this data over time, comparing builds and test coverage.

To know more, here 

Another feature is important for communicating with management and reasoning about “passing a milestone” or “fixing the technical debt” (read Technical Debt as the total issues we knowingly leave in the code… but software has to ship, right?): NDepend provides a smart estimation of it.


A quick example

To get some hands-on experience with .NET Core, I recently implemented a simple service, wrote some tests just for fun and also made it asynchronous. Let’s see how it faces the truth! – Just bear in mind it was a just-for-fun project and time was limited 😉

It’s quite easy: I followed the steps in the “getting started” video here, installed NDepend and its Visual Studio plug-in, and opened VS2017, where an NDepend tab now appears.

Let’s open my RomanConverter coding dojo self-practice project and click to attach a new NDepend project.

ndepend 02.JPG

The following window appears, and we can already click the green “play” button.

ndepend 03.JPG

In the bottom right corner there is a sphere indicating the status of NDepend. The analysis will start and the indicator will show that it is analyzing.

Once finished, the report opens in a browser.

ndepend 03a.JPG

From the Dashboard, click and explore the different information exposed.

A simple click in the Rules pane – for example, on the violated rules – gives us this dashboard:

ndepend 03b.JPG

I find it brilliant: not only are the issues identified, but stacked data bars are used to showcase the rules with more issues or longer times to fix, and they are slightly color-coded, so understanding which issue(s) are the most critical and deciding which ones to tackle first – or right away – is pretty damn intuitive.

Note to this: I also realized – thanks, Patrick, for pointing it out – that clicking on the issues will show them, so what seems like a presentation UI is actually a fully interactive dashboard that gets you into action – or helps you understand the underlying issues better.

It is easy to identify what our managers would call “low-hanging fruit”: easy to fix, and saving trouble for later…

Another useful panel is the Queries and Rules Explorer, which we can open from the circle menu in the bottom right corner, or via the usual menu: NDepend → Rules → View Explorer Panel.

ndepend 04a.JPG

And it will appear:

ndepend 04.JPG

With this panel, we can explore the rules that the solution has violated, which are grouped into categories like Code Smells, Object Oriented Design, Architecture, Code Coverage, Naming Conventions, predefined “Quality Gates”, Dead Code, and many more… If we click on a rule, we can explore the C# LINQ query – aka “CQLinq” – that defines it.

CQLinq queries run against a code model dedicated to code quality, and they can be edited, compiled and executed live.

An example of such rule follows:

// <Name>Interfaces must start with an I</Name>
warnif count > 0
Application.Types.Where(t => t.IsInterface && !t.SimpleName.StartsWith("I"))

And it seems damn simple, even to me.. 😉

From here we can quickly access the offending code and fix it.
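To give a feel for writing a custom rule of our own, here is a hedged sketch of a CQLinq query flagging overly complex methods – the threshold of 20 is an arbitrary value I chose for illustration, not an NDepend default:

```csharp
// <Name>Methods too complex (illustrative custom rule)</Name>
warnif count > 0
from m in Application.Methods
// CyclomaticComplexity comes from NDepend's code model;
// the threshold below is chosen purely for illustration
where m.CyclomaticComplexity > 20
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }
```

Note that CQLinq compiles only inside NDepend, against its code model, so a snippet like this is meant to be pasted into the Queries and Rules editor, not into a regular C# project.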

Other visualizations worth exploring are the Dependency Graph, the Dependency Matrix and the Metrics view.

All are advanced visualizations which provide great insight into our solution and how it is structured. Let’s see them.

Code Metrics View

ndepend 05.JPG

Here we can see that, by default, we are analyzing our methods using Lines of Code versus Cyclomatic Complexity. This visualization, being a treemap, helps greatly to understand the information.

We can configure the metrics, with a wide selection of granularity levels, sizes (of the treemap boxes) and colors. An especially useful one is code coverage.

An example can be seen next, based on NDepend’s own source 😉 – here.
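Since coverage is also exposed in the code model, the treemap can be complemented with a query. Here is a hedged sketch of my own, assuming coverage data has been imported so that NDepend’s `PercentageCoverage` metric is populated – the 80% and 10 thresholds are illustrative choices, not defaults:

```csharp
// <Name>Complex methods with poor coverage (sketch)</Name>
warnif count > 0
from m in Application.Methods
// both metrics come from NDepend's code model; thresholds are illustrative
where m.PercentageCoverage < 80 && m.CyclomaticComplexity > 10
select new { m, m.PercentageCoverage, m.CyclomaticComplexity }
```

As with any CQLinq query, this only compiles inside NDepend’s query editor.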


Dependency Graph

Here we can see a graph representation, initially of our namespaces, which uses LOC as the sizing factor for the nodes and the number of members for the width of the edges connecting them. We can also include third-party assemblies.

ndepend 06.JPG

It is great for checking whether interfaces are respected from a software architecture viewpoint, or for seeing whether a certain assembly is used only where it should be.

I saw no possibility to group several assemblies into one. For example, the explosion of Microsoft.AspNetCore into several assemblies is of no use to me; I would like to group them into a single node to make the graph clearer. Otherwise this adds noise, which might make other relations I want to visualize harder to detect. (Update: Patrick Smacchia mentioned that this is in the works – cool!)


The Dependency Matrix

Large solutions would make the previous graph representation a mess: too many namespaces and assemblies. Here we can select namespaces or assemblies, restrict them, drill down to the elements we want to dig into, and go to their graph view.

ndepend 07.JPG

We can even select a method and right-click to see it either in a dependency graph view or as a dependency matrix.


What is special about it?

Simply said, its estimation is based not only on the source code, but also on solution-level analysis, as mentioned earlier.

I also mentioned the C# LINQ queries, which seem to me like quite a flexible approach: everything is interlinked and all the analyses are performed through queries. Apart from the rules, a lot of the data presented – trends, quality gates, searches – is based on queries too.

Its visualizations are special, period. It shows the right visualization for the job in a simple, yet efficient, way. Yes, if we are not used to graphs, dependency matrices or treemaps, this tool can look intimidating, but don’t be put off: once you get used to it, it becomes second nature. I used it some years ago to fix a mess and it helped greatly, even though I only used two of its visualizations.

Another aspect I really like is that whenever you visualize some information, all the relevant context comes along. An example are the rules! I like the detail that even in the top menu we can see what the current solution is violating.

Or the fact that in the rules panel I see the issues next to the debt, the annual interest and more.

Fundamentally, it helps by showing important information where we need it.

Should we use it?

First, to have a visual reference of your project and what is good (or wrong) in it. It can show a lot of things in a very visual way, which can help greatly in:

  1. Knowing the state of our solution
  2. Understanding (the issues, where are they, and the effort it will take to fix them)
  3. Communicating to managers


Concrete features:

  • Enforce Technical Debt measurement in your coding guidelines and processes, especially regarding the cost to fix issues and the cost of leaving them unfixed.
  • Understand the entanglement of our software
  • Analyze Software architecture and Code Quality
  • Accurately track the state of our software over time, being able to determine its improvement (or worsening) in its different attributes.


Summarizing – I like it a lot!

It is easy to get lost in, but it is a rare jewel with an entry barrier that you should push through to see its beauty or, put plainly, its usefulness.

To me, its best feature is being able to showcase already-implemented software architectures in a different, kinesthetic way, with different visualizations tailored to showcase and highlight important aspects of your software. This is great for seeing the real state of your code, understanding it – and fixing it.










Microsoft’s news at Ignite: A lot About AI..

I haven’t been at Ignite myself, but I am overwhelmed by the vast amount of announcements made there and around these dates… and the vast amount of content: 119 talks on AI, 313 sessions on Machine Learning – whoa, it’s getting crazy, and the feeling is that all the Microsoft technologies around AI/Machine Learning/Data Science are accelerating – fast!

So, let’s catch up!

  • Microsoft ML Server 9.2 released – Microsoft’s platform offering for Machine Learning and Advanced Analytics for enterprises. As a big improvement, it now supports the full data science lifecycle, including ETL (Extract, Transform and Load) operations, in both R & Python. And yes, this is what was formerly known as Microsoft R Server, whose name was no longer fully accurate after ‘adopting’ Python 😉 Oh, and now it’s fully integrated with SQL Server 2017… you can read more at the official source, or watch a quick 2′ introductory video here. I think it is pretty damn important to the full operationalization offering that Microsoft proposes…
  • Azure SQL Database now supports real-time scoring for R and Python.
  • Yay! The much-expected next-gen SQL database server from Microsoft has been released: SQL Server 2017, with full support for R & Python and including the aforementioned ML Server.
  • Azure Machine Learning has been greatly updated… this service now brings us the AML Workbench, a client application for AI-powered data wrangling and “ML fun”, which I have reserved some time this weekend to download and “have some time together” with… Also, the AML Experimentation Service has been launched to help data scientists experiment faster and better, as well as the AML Model Management Service. Here you can find an overview, as “short” as it can be…
  • Microsoft R Client 3.4.1 released – supporting the matching version of R, providing desktop capabilities for R development with the ability to deploy models and computations to an ML Server. Original source here. Note that to properly use this, it is vital to use the Visual Studio IDE and to install the R Tools for Visual Studio, which are free.
  • Talking about Visual Studio, we have now the Visual Studio Code Tools For AI which provide extensions to build, test and deploy Deep Learning / AI solutions, being fully integrated with Azure Machine Learning. Github is here.
  • Visual Studio Code for AI has been launched as well, meant for developing for Tensorflow, CNTK, AML and more.
  • The “Quantum Revolution” happened 😉 – the initial move from Microsoft to embrace the next “shiny light” in computing: quantum computing. This has not been released, only announced: there will be a quantum programming language inheriting from C#, Python and F#, a quantum computer simulator, and everything integrated with our favorite developer IDE. Short introduction (2′) here. To follow it and subscribe to news on this, go here – at the bottom there is a “signup” button which I’ve already pressed – so, what are you waiting for? 😉

Hope it was interesting!

Let me know if you liked it..  And.. Would you like a hands-on overview on the AML “Workbench”? 😉


It’s been a while…

Yup, I haven’t blogged at all since 2013… nothing… I guess I gave it all to my book and needed some rest… just joking – I moved to Switzerland in late 2012 and it has been an intense ride…

..and with a 2:30h commute, so that did not help too much…

So, by the end of 2015 I had ended up at 94.5 kg (being 1.77 m tall), so I was, in fact, obese, and with some health issues, stress, wrong habits, etc…

Basically “not having time”… which is wrong – you do have time, 24 hours a day. We just prioritize it wrong. And justify ourselves, that is…

2016 was a game changer: I said stop and put myself to work. By April 4th I was 76 kg (same height, though) and somewhat fitter… Hey, I even got into the 20 finalists of the Bodybuilding.com 12-week 250K USD transformation contest! (No prize was won, though… I won back my health – yay!)

As of today, I am jumping up and down in weight between 80 and 84 kg, but that will change shortly…

Professionally, I have had some fun: initially mostly fixing code and putting proper architecture practices in place (and implementing them hands-on), and when I was tired of fixing and fixing and fixing… I went into the realm of testing as “Performance Test Lead”… and loved it! Doing something I had never done before forced me to learn fast, applying business analysis and planning skills to define the performance test architecture and, why not, also the test architecture, and to implement it in a POC 😉

That was a great experience and I enjoyed it; it made me better, so now I can think as a developer and as a tester… from a low level (coder, tester) and from a high level (SW architect, test manager), while retaining the ability to go deep – which I enjoy. (You know, the ability to affect the quality of a product that much – and even in earlier stages, if you are allowed to – is a great feeling 🙂)

If I had to describe myself right now, I’d say I’m a dev architect with the ability to see things from a high-level, system perspective down to a low level. From a “gamer” point of view, I’d say I am a sniper that can zoom from afar and aim at the weakest point – and get “the shot”.

After this, shortly into 2016, I entered commando mode, and that broke my recently acquired healthy habits – damn! I had up to 4 assignments in 2016… adventuring myself into unknown “code pools” – going to teams to fix issues they were not able to fix in their own code, or an intriguing “piece of art” whose DAL was executing transactions in a funny way, or that was, let’s say, just “not behaving as expected”…

Later in 2016 I joined the CoreLab team as Test Analyst & SW Engineer.

By the end of 2016 I started learning Machine Learning, which helped me greatly to focus and realize how much I like to get “engaged” in learning a technology or topic (even if this one is pretty wide…).

I truly believe that Machine Learning / Data Science and AI programming are a key toolset: a game-changing technology and body of knowledge that, if applied properly, can change our world for good. Also for bad, sadly, as a weapons race seems to have already started…

but that is the topic of another post 😉


Thank you for reading and let’s meet again shortly…





Win A Free Copy of Packt’s Microsoft .NET Framework 4.5 Quickstart Cookbook


[UPDATE: The contest will finish at the end of August]

I am pleased to announce that, with support from Packt Publishing, we are organizing a giveaway especially for you. All you need to do is comment below this post for a chance to win a free e-copy of Microsoft .NET Framework 4.5 Quickstart Cookbook. Two lucky winners stand a chance to win an e-copy of the book. Keep reading to find out how you can be one of the lucky ones.


Overview of Microsoft .NET Framework 4.5 Quickstart Cookbook

  • Designed for the fastest jump into .NET 4.5, with a clearly designed roadmap of progressive chapters and detailed examples.
  • A great and efficient way to get into .NET 4.5 and not only understand its features but clearly know how to use them, when, how and why.
  • Covers Windows 8 XAML development, core framework improvements (async/await & reflection), EF Code First & Migrations, ASP.NET, WF, and WPF

Note: I am posting this also on my new blog, http://xamlguy.com; feel free to comment there (preferably) or here, as I will watch both sites.