Recently I got my hands on NDepend, a static code analyzer for .NET, thanks to Patrick Smacchia, its lead developer.
In this blog post, I am going to explain a bit what it is and what it does.
What is NDepend?
As mentioned, it is a static code analyzer for .NET & .NET Core. Static means that the analysis is performed on the code while it is not being executed.
Usually, static code analysis is performed to ensure that the code adheres to certain guidelines or metrics, such as limits on the number of warnings or specific classes of errors.
If you work professionally with .NET, you have probably used static analyzers from Visual Studio itself, the most common being FxCop and StyleCop, or the more advanced SonarQube.
That said, these code analyzers do not compete with NDepend, as they are fundamentally different. In fact, they complement each other.
What is different?
Basically, the rule set implemented by NDepend is essentially different from that of other static analyzers, such as SonarQube or other Roslyn analyzers. Those are good at analyzing what happens inside a method: the code, the syntax, and the control flow. NDepend, in contrast, is good at seeing things from a wider, higher-level perspective. It is really focused on analyzing the architecture, OOP structure and implementation, dependencies (which is where the product name comes from 😉), metrics, breaking changes, mutability, and many other aspects.
The strength of NDepend lies in analyzing software architectures and their components, their complexity, and their interrelations, whereas the other products' strengths are at a different level, focusing more on code modules. All of them are, of course, excellent products.
NDepend is designed to integrate with some of these products, like SonarQube.
To learn more, see here.
What does it do?
It performs static code analysis on .NET & .NET Core and, from that analysis, delivers the following information about your code and, importantly, its architecture:
- Technical Debt Estimation
- Issue identification
- Code quality
- Test Coverage
- Complexity and diagrams
- Architecture overview
(Note: it does way more, but I have shortened the list to what I think is important.)
And it presents all this in a very smart way. Let me show you the NDepend Dashboard:
Additionally, it integrates seamlessly with Visual Studio, TFS, and VSTS. It integrates especially well with the build process, providing the ability to analyze this data over time, comparing builds, test coverage, and the build processes themselves.
To learn more, see here.
Another important feature, especially for communicating with management and reasoning about "passing a milestone" or "fixing the technical debt", is its technical debt estimation (read technical debt as the total of the issues we knowingly leave in the code... but software has to ship, right?). NDepend provides a smart estimation of it.
A quick example
To get some hands-on experience with .NET Core, I recently implemented a simple service, for which I also wrote some tests just for fun and made it asynchronous. Let's see how it faces the truth! Just bear in mind that it was a just-for-fun project and time was limited 😉
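The project itself is not shown in this post, so just to give an idea of the kind of code under analysis, here is a hypothetical sketch of such a converter service (the class and method names are my own invention, not the actual project's code):

```csharp
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;

public class RomanConverter
{
    // Value/symbol pairs ordered from largest to smallest.
    private static readonly List<(int Value, string Symbol)> Map = new List<(int, string)>
    {
        (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
        (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
        (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
    };

    // Async wrapper, since the post mentions the service was made asynchronous.
    public Task<string> ConvertAsync(int number) => Task.FromResult(Convert(number));

    private static string Convert(int number)
    {
        var result = new StringBuilder();
        foreach (var (value, symbol) in Map)
        {
            // Greedily append the largest symbol that still fits.
            while (number >= value)
            {
                result.Append(symbol);
                number -= value;
            }
        }
        return result.ToString();
    }
}
```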
It's quite easy: I followed the steps in the "getting started" video here, installed NDepend and its Visual Studio plug-in, and opened VS2017, where an NDepend tab now appears.
Let's open my RomanConverter coding-dojo self-practice project and click to attach a new NDepend project.
The following window appears, and we can already click the green "play" button.
This starts the analysis. In the bottom-right corner there is a sphere indicating NDepend's status; while the analysis runs, the indicator will show that it is analyzing.
Once it finishes, our report will open in a browser.
From the Dashboard, click and explore the different information exposed.
A simple click in the Rules pane, for example on the violated rules, gives us this dashboard:
I find it brilliant: not only are the issues identified, but stacked data bars are used to highlight the rules with the most issues or the longest estimated times to fix. The rules are also slightly color-coded, so understanding which issues are the most critical and deciding which ones to tackle first (or right away) is pretty damn intuitive.
A note on this: I also realized (thanks, Patrick, for pointing it out) that clicking on the issues will show them, so what looks like a presentation UI is really a fully interactive dashboard that gets you into action, or helps you understand the underlying issues better.
It makes it easy to identify what our managers would call "low-hanging fruit": issues that are easy to fix now and save trouble later.
Another useful panel is the Queries and Rules Explorer, which we can open from the circle menu in the bottom-right corner, or through the usual menu: NDepend > Rules > View Explorer Panel.
And it will appear:
With this panel, we can explore the rules that the solution has violated, grouped into categories like Code Smells, Object-Oriented Design, Architecture, Code Coverage, Naming Conventions, predefined Quality Gates, Dead Code, and many more. If we click on a rule, we can explore the C# LINQ query, aka "CQLinq", that defines it.
CQLinq queries a code model dedicated to code quality, and a query can be edited live and also compiled and executed live.
An example of such a rule follows:

```csharp
// <Name>Interfaces must start with an I</Name>
warnif count > 0
Application.Types.Where(t => t.IsInterface && !t.SimpleName.StartsWith("I"))
```
And it seems damn simple, even to me 😉
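For a taste of something slightly richer, here is a sketch of another rule. The name and thresholds are my own invention, but the shape follows documented CQLinq patterns: a `warnif` prefix plus a LINQ query over the code model.

```csharp
// <Name>Methods too big and too complex (sketch)</Name>
// Hypothetical thresholds; adjust to your own guidelines.
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30 &&
      m.CyclomaticComplexity > 10
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }
```

Because rules are just queries, the violating methods come back as clickable query results.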
From here we can quickly access the offensive code and fix it.
Other visualizations that must be explored are the Dependency Graph, the Dependency Matrix, and the Metrics view.
All of them are advanced visualizations that provide great insight into our solution and how it is structured. Let's see them.
Code Metrics View
Here we can see that, by default, we are analyzing our methods by lines of code versus cyclomatic complexity. This visualization, a treemap, helps greatly in understanding the information.
We can configure the metrics, with a wide selection of granularity levels, sizes (of the treemap boxes), and colors. An especially useful one is code coverage.
An example can be seen next, based on NDepend's own source 😉, here:
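As a side note, the metrics the treemap plots are the same ones the query language exposes, so they can also be pulled out as a plain CQLinq search (a sketch, not one of the predefined queries):

```csharp
// Sketch of a CQLinq search (not a rule, so no warnif):
// list methods with the two metrics the treemap shows.
from m in JustMyCode.Methods
where m.NbLinesOfCode > 0
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }
```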
The Dependency Graph
Here we can see a graph representation, initially of our namespaces, which uses LOC as the sizing factor for the nodes and the number of members for the width of the edges connecting them. We can also include third-party assemblies.
It is great for checking whether interfaces are respected from a software architecture viewpoint, or for seeing whether a certain assembly is used only where it should be.
I saw no possibility to group several assemblies into one. For example, the explosion of Microsoft.AspNetCore into several assemblies is of no use to me; I would like to be able to group them into a single node to make the graph clearer. Otherwise, this adds noise which might make other relations I want to visualize harder to detect. (Update: Patrick Smacchia mentioned that this is in the works. Cool!)
The Dependency Matrix
A large solution would make the previous graph representation a mess: too many namespaces and assemblies. Here we can select namespaces or assemblies, restrict the view, drill down to the elements we want to dig into, and go to their graph view.
There we can even select a single method and right-click to see it either in a dependency graph view or as a dependency matrix.
What is special about it?
Simply said, its estimation is based not only on the source code but also on solution-level analysis, as mentioned earlier.
I also mentioned the C# LINQ queries, and this seems to me like a quite flexible approach: everything is interlinked, all the analyses are performed through queries, and much of the data presented is based on queries beyond just the rules: trends, quality gates, searches...
Its visualizations are special, period. It shows the right visualization for the job in a simple yet efficient way. Yes, if we are not used to graphs, dependency matrices, or treemaps, this tool can look intimidating, but don't be put off: once you get used to it, it becomes second nature. I used it some years ago to fix a mess, and it helped greatly, even though I did not use it fully, just two of the visualizations.
Another aspect I really like is that whenever you visualize some information, all the relevant information comes along. An example is the rules: I like the detail that even in the top menu we can see what the current solution is violating.
Or the fact that when I look at the rules panel, I see the issues next to the debt, the annual interest, and more.
Fundamentally, it helps by showing important information where we need it.
Should we use it?
First, to have a visual reference of your project and what is good (or wrong) in it. It can show a lot of things in a very visual way, which can help greatly in:
- Knowing the state of our solution
- Understanding the issues, where they are, and the effort it will take to fix them
- Communicating to managers
- Enforcing technical debt measurement in your coding guidelines and processes, especially regarding the cost to fix issues and the cost of leaving unfixed issues in
- Understanding the entanglement of our software
- Analyzing software architecture and code quality
- Accurately tracking the state of our software over time, being able to determine its improvement (or worsening) across its different attributes
Summarizing: I like it a lot!
It is easy to get lost in, but it is a rare jewel with an entry barrier that you should push through until it breaks, to see its beauty or, put plainly, its usefulness.
To me, its best feature is being able to showcase already-implemented software architectures in a different, kinesthetic way, with visualizations tailored to showcase and highlight the important aspects of your software. This is great for seeing the real state of your code, understanding it, and fixing it.