Also, from that same talk: I am confident that watching Steve Sanderson's talks is a good way to keep yourself up to date with this quite interesting technology, so "Shall we?" 😉
PS: I did a PR with the three modifications, but most likely there will be no time to review it, or it will be ignored 😉
I have been a 3D interface geek for a long time, with a passion for 2.5D and 3D interface design and programming… I got one of the first Kinects (for PC) and the Kinect 2, and also played with the Oculus SDK1 and SDK2…
I hit a solid wall with my nausea when entering VR… so I tried to focus on AR with HoloLens. I got tickets and a flight for the Build conference when a possible showcase of HoloLens at Build 2015 was announced, which finally happened, and I even got invited to a private preview and hands-on programming… but it was still lacking some key factors for me.
Since the Oculus crowdfunding campaign I have been watching and trying all the different headsets (the HTC Vive, later versions of the Rift…), only to find issues: nausea, lack of responsiveness, not enough resolution, etc.. Until now.
Meet the Oculus Quest
In short, the Quest is a standalone VR headset with OLED displays at 1440 x 1600 px per eye, a 72 Hz refresh rate, a Qualcomm Snapdragon 835 processor with 4 GB RAM, and 6 DOF tracking. 571 g in total.
Secure: you can define a safety "play/interaction" area around you, and if you approach its edge you receive haptic and visual feedback. It works really well.
Wireless. No cables. No obstacles. Nothing but freedom.
PC-free: it is a standalone device.
This last point comes at a price in raw power, but it is barely noticeable given what the Qualcomm chipset delivers.
Surprisingly good audio.
Some “Mixed Reality” support.
And not a feature as such, but I have severe motion sickness, and I can stand the Quest for half an hour to an hour. This is achieved by the combination of all the factors (the features, plus how some of the games I have played are programmed). And after an hour, I am not dizzy and have no nausea.
Verdict:
In short: a VR revolution, a sum of good ideas brought together and implemented to work together in a brilliant way. Simply wow! ..and I mean a bold WOW!!
If you are interested in VR for gaming or for developing, this is it. It is easy to set up, has no wires, needs no expensive PC, it is FAST and accurate, the visuals are really good, and it is also fairly easy to start programming with.
Also, the inside-out tracking makes it not only easier to set up, but far cheaper than earlier VR systems, where you needed a couple of external sensors ("lighthouses").
In my opinion, cost-efficiency-wise the Oculus Quest is the best VR headset on the market. It does the job, with good resolution and refresh rate, and without some of the major showstoppers of the recent past (needing a PC and its setup).
To my mind, it makes VR easy and mainstream. I guess that's the reason why it has run out of stock at many suppliers and has sold $5 million in content in the first two weeks… or why Superhot sales are 300% higher for the OQ than they were at the original 2016 Rift launch, which gives an estimate of how the Quest is performing relative to the original Rift.
Development for the OQ is great: easy to set up and dive in. It simply works.
Over the past two weeks I have de-rusted my Unity skills, and I am now learning how to use the "Oculus Integration" tools and integrating the VR side with a service I am setting up in the cloud with Azure, to interact with it in brilliant 3D.
And so far, the experience is brilliant and I am having a lot of fun 🙂
The only hint I can give is to invest in learning how to apply object pooling, as resources are limited; but that is common sense anyway. A minimal sketch of the idea follows.
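To illustrate, here is a minimal object pool sketch in Unity C#. This is just my own illustration of the technique (class and field names are hypothetical), not code from any Oculus or Unity sample:

using System.Collections.Generic;
using UnityEngine;

// Minimal object pool: pre-instantiate once, then reuse instances
// instead of calling Instantiate()/Destroy() during gameplay.
public class SimplePool : MonoBehaviour
{
    [SerializeField] private GameObject prefab;     // the prefab to pool
    [SerializeField] private int initialSize = 20;  // pre-allocated instances

    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    private void Awake()
    {
        for (int i = 0; i < initialSize; i++)
        {
            var go = Instantiate(prefab);
            go.SetActive(false);
            pool.Enqueue(go);
        }
    }

    public GameObject Get(Vector3 position, Quaternion rotation)
    {
        // Reuse a pooled instance if available; grow only as a fallback.
        var go = pool.Count > 0 ? pool.Dequeue() : Instantiate(prefab);
        go.transform.SetPositionAndRotation(position, rotation);
        go.SetActive(true);
        return go;
    }

    public void Return(GameObject go)
    {
        go.SetActive(false);
        pool.Enqueue(go);
    }
}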
The OQ, Oculus Quest, supports some mixed reality: it has a mode in which you can locate the controllers within a generated view of your surroundings, plus some other minor applications.
BUT, and it is a big BUT, Oculus is actively working on mixed-reality scenarios as well as collaborative interaction, as shown in the following video from the recent "Oculus Connect":
And..
It is also interesting to see how the Oculus Insight technology works:
Note: Insight was earlier referred to as "Project Santa Cruz", which Oculus has been working on since 2016…
Some time ago, about a month and a half, I decided to focus on Microsoft Azure technology and acquire expertise in it…
This is a bit of an account of what I have decided to do and how I am doing it.
I should say that I do not like taking chances and I usually overprepare… which is convenient given how TRICKY some of these exams are (at least to me..).
This is the current Exam & Certification roadmap:
I disagree a bit with the Architecture path, the green one in the picture, towards getting the "Azure Solutions Architect" certification. Even if you should be able to "paint boxes and connect them", to me a Software Architect is somebody who also knows very well what is inside those boxes and how they work.
So for me, the roadmap towards the Azure Solutions Architect has the AZ-203 before the AZ-300.
So, in short, my initial roadmap is:
Get AZ-900 (Update: got it!).
Get AZ-203.
Get AZ-300.
Get AZ-301.
I like to have solid foundations, so I focus on a good understanding of the basics; to me, AZ-900 is a must-have. There are simply too many "things" (services, types of services, concepts…) lying around… so having a clear ground-level basis is a must.
https://www.udemy.com/az900-azure/
Update: I did not like this course; it had some misleading points and does not seem to match the author's usual quality (sorry). One of the examples given in the SaaS section is SQL… when it is PaaS. So do not invest 3h or more in it.
And some exam preparation to get the “hands on” feeling.
(Now I am waiting to have some time, hopefully this week, to prepare for and take the exam, which you can do online through here:
Update: The exam is done and passed! I will shortly post some of my comments and thoughts on it..
For the AZ-203 I am halfway through the preparation, and I have done/plan to do the following:
Worth highlighting: their paths have a "Role IQ", an in-portal exam system that helps measure your level and where to focus. This is what I got when I started, just after Scott Duffy's training and some hands-on practice:
Support from some of the Microsoft Learn resources; but if you filter by "azure developer" there are way too many… I found that this link helped me greatly to focus. Here you can see its picture recommendation for a learning roadmap:
Basically, all the following learning paths in Microsoft Learn. But I plan to do them just as support when I consider I am not confident on a topic.
I am setting up some projects of my own to put some things together, so I can glue them in a way that makes sense; but this has some work implications, and thus I cannot share it in full detail. One of them is implementing a full REST API with Azure Functions and exposing it through Azure API Management, to finally consume it from an Azure App (a web app). I still have not decided whether the data will be stored in Cosmos DB or an Azure SQL database… but for sure it will have AD authentication. A sketch of the idea follows.
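As a taste of the Azure Functions part, here is a minimal sketch of one endpoint of such a REST API, assuming the standard C# HTTP trigger from the Microsoft.NET.Sdk.Functions packages; all names and the payload are hypothetical:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ItemsApi
{
    // GET /api/items/{id} -- one endpoint of the REST API; the others would
    // follow the same pattern and be fronted by Azure API Management.
    [FunctionName("GetItem")]
    public static Task<IActionResult> GetItem(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "items/{id}")] HttpRequest req,
        string id,
        ILogger log)
    {
        log.LogInformation("Fetching item {Id}", id);
        // Hypothetical payload; the real data would come from Cosmos DB or Azure SQL.
        return Task.FromResult<IActionResult>(new OkObjectResult(new { id, name = "sample" }));
    }
}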
And, of course, some exam preparation to get a hands-on feeling and to get to know some of the tricks and traps you might face 😉
If you have any tip or recommendation, just shoot in the comments or contact me directly; it would be greatly appreciated. I know some people just take the Scott course plus some exam practice and pass, but I want some more hands-on experience under my belt before moving forward.
For the “Azure Solutions Architect” certification, I would like to have some real experience and practice, but for now I plan to do:
Two courses from Nick Colyer (I haven't taken any of his courses, so I can't judge yet, but they are highly rated).
That is what I said yesterday to one of my interviewers 😉
Yesterday I had a damn good interview: four hours of what became an interesting technical conversation on mostly coding and software architecture topics, with some interesting challenges which I will not disclose.
But I had fun! And at the end of it, the no-longer interviewer but future colleague asked me if I knew what a Quine was… I had no clue, so I asked.
Basically, it is a program that produces a copy of its own source code as its output, without any other purpose, apart from the "AHA" moment when you understand what is happening 😉
A simple example, create a console application and paste the following code:
using System;

namespace WelcomeTo
{
    class Program
    {
        static void Main()
        {
            string s = @"using System;
namespace WelcomeTo{{
class Zühlke{{static void Main()
{{
string s=@{0}{1}{0};
Console.Write(s,'{0}',s);
Console.ReadLine();
}}}}}}";
            Console.Write(s, '"', s);
            Console.ReadLine();
        }
    }
}
Once we execute it, we get the following result:
Which is basically the same code that generated it.
So, now it is your turn to figure out why.
Hint: there is a very easy-to-catch Easter egg in it…
We all have components in our software or network whose behavior we want to monitor.
This is usually done by a watchdog component. I was not that happy with some of the implementations I had seen, so I decided to put a bit of my private time into it, as a small "technical challenge".
Quick introduction
Recently at my company we had to implement a watchdog to monitor hardware resources and their availability.
The reasoning is that network connectivity is "eventual" and we must react to these states (basically we are talking about trains and their wagons, accessed from a central system; wagons get separated and reunited, there are tunnels, and our wonderful networks work so flawlessly when we need them the most, right?).
But this could be migrated to other scenarios, like watching over microservices to identify whether they are behaving properly and, if not, restarting them as necessary…
I decided to put in some fun tech time, gather some of the best features I could find on the net and from my colleagues, and create something close to the best possible watchdog system, one that could notify "some other system" when disconnection and/or re-connection events happen 😉
Yes, I wanted this to be efficient but also as decoupled as possible.
Maybe too much for a single blog post? Maybe you are right, but come with me till the end of the post to see if I managed to get there…
Let’s get to it!!
An early beginning..
To monitor network resources we will use a ping; for this we use the Ping class from the System.Net.NetworkInformation namespace.
Ping sends an ICMP (Internet Control Message Protocol) echo message to the computer with the specified IP address, and takes a timeout parameter in milliseconds. Note, though, that if this number is very small, the reply can still be received even after the timeout has elapsed, so the parameter is a bit "loose".
Then we can use its basic form, so let's try to ping ourselves..
using System;
using System.Net.NetworkInformation;

int timeout = 10;
Ping p = new Ping();
PingReply rep = p.Send("192.168.178.1", timeout);
if (rep.Status == IPStatus.Success)
{
    Console.WriteLine("It's alive!");
    Console.ReadLine();
}
Not very exciting yet, but it works (or it should… as we are pinging ourselves… unless we have a very restrictive firewall..)
Now some timers.. and pinging asynchronously…
Basically, Ping tries to raise PingCompleted on the same thread that SendAsync() was invoked on. But if we block that thread with a CountdownEvent, the ping cannot complete, as the thread is blocked. Welcome, deadlock (I am speaking about the case of executing this code in WinForms).
In a console application it works, because there is no synchronization context and PingCompleted is raised on a thread-pool thread.
A solution would be to run the code on a worker thread, but then PingCompleted will also be raised on that thread..
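For clarity, this is roughly the blocking pattern I am describing, as a sketch with hypothetical IP and timeout values; on a UI thread with a synchronization context this deadlocks, while in a console app it completes:

using System;
using System.Net.NetworkInformation;
using System.Threading;

class PingBlockingSketch
{
    static void Main()
    {
        var done = new CountdownEvent(1);
        var p = new Ping();
        p.PingCompleted += (s, e) =>
        {
            // On WinForms this callback is posted to the blocked UI thread -> deadlock.
            Console.WriteLine("Reply status: {0}", e.Reply != null ? e.Reply.Status.ToString() : "none");
            done.Signal();
        };
        p.SendAsync("192.168.178.1", 500, null);
        done.Wait(); // blocks the calling thread until the callback signals
    }
}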
Digging further clarifies what we saw regarding Ping.SendAsync(). Basically, the class has another method with an extremely similar name, SendPingAsync(). At first glance it looks redundant and does not sound right… so what does each of them do, and how do they differ?
According to MSDN
SendAsync method Asynchronously attempts to send an Internet Control Message Protocol (ICMP) echo message to a computer, and receive a corresponding ICMP echo reply message from that computer.
SendPingAsync Sends an Internet Control Message Protocol (ICMP) echo message to a computer, and receives a corresponding ICMP echo reply message from that computer as an asynchronous operation.
So, translating a bit: SendAsync sends the ICMP message asynchronously, but the reception is not asynchronous. SendPingAsync ensures that the reception is asynchronous too.
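As a minimal sketch of the fully awaitable variant (hypothetical IP and timeout values):

using System;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

class SendPingAsyncSketch
{
    static async Task Main()
    {
        // Send and reception are wrapped in a single awaitable operation.
        PingReply reply = await new Ping().SendPingAsync("192.168.178.1", 500);
        Console.WriteLine("Status: {0}, RTT: {1} ms", reply.Status, reply.RoundtripTime);
    }
}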
Putting it all together with our “ol’ friend”, the “Timer”..
Initial watchdog implementations I have seen usually follow the pattern of creating a timer that triggers a "watch this resource" task at a given interval.
Each tick of the timer triggers the ping process we have seen.
I got two very good tips for this implementation:
Once the timer tick fires, we pause the timer by setting its period to infinite. We resume it only when the work to perform, in this case a single ping, is done. This ensures no overlap happens and the number of threads does not escalate.
Set the timer to a random period, to add a bit of variability to the execution and help distribute the load, i.e., so the pings do not all happen at the same time.
Some code:
using System;
using System.Net.NetworkInformation;
using System.Threading;

namespace cappPingWithTimer
{
    class Program
    {
        static Random rnd = new Random();
        static int min_period = 30000; // in ms
        static int max_period = 50000;
        static Timer t;
        static string ipToPing = "192.168.178.1";

        static void Main(string[] args)
        {
            Console.WriteLine("About to set a timer for the ping... press enter to execute >.<");
            Console.ReadLine();
            Console.WriteLine("Creating new timer and executing it right away...");
            t = new Timer(new TimerCallback(timerTick), null, 0, 0);
            Console.WriteLine("press Enter to stop execution!");
            Console.ReadLine();
        }

        private static void timerTick(object state)
        {
            Console.WriteLine("Timer created, first tick here");
            t.Change(Timeout.Infinite, Timeout.Infinite); // this will pause the timer
            Ping p = new Ping();
            p.PingCompleted += new PingCompletedEventHandler(p_PingCompleted);
            Console.WriteLine("Sending the ping inside the timer tick...");
            p.SendAsync(ipToPing, 500, ipToPing);
        }

        private static void p_PingCompleted(object sender, PingCompletedEventArgs e)
        {
            string ip = (string)e.UserState;
            if (e.Reply != null && e.Reply.Status == IPStatus.Success)
            {
                Console.WriteLine("{0} is up: ({1} ms)", ip, e.Reply.RoundtripTime);
            }
            else if (e.Reply == null) // if the IP address is incorrectly specified, the Reply object can be null, so handle it to keep the code resilient
            {
                Console.WriteLine("Pinging {0} failed. (Null Reply object?)", ip);
            }
            else
            {
                Console.WriteLine("response: {0}", e.Reply.Status.ToString());
            }
            int TimerPeriod = getTimeToWait();
            Console.WriteLine(String.Format("rescheduling the timer to execute in... {0}", TimerPeriod));
            t.Change(TimerPeriod, TimerPeriod); // this will resume the timer
        }

        static int getTimeToWait()
        {
            return rnd.Next(min_period, max_period);
        }
    }
}
The main issue here is that this technique spawns a timer "watchdog" for every component/resource we want to monitor, so architecture-wise it can become an uncontrolled mess that creates a lot of threads..
But this is a Task-driven world… since .NET 4.5 at least..
So, now we know how to implement a truly asynchronous ping with SendPingAsync(), which we can call recurrently with a Timer object instance..
But as of this writing we have better tools in .NET for asynchronous/parallel work.. and it is not BackgroundWorker (which would be the choice if we needed to deploy a pre-.NET 4.5 solution)…
…but async/await Tasks, which we have had since .NET 4.5 (aka TAP, the Task-based Asynchronous Pattern).
Basically, it becomes a matter of creating a task for every ping operation and waiting for them all to complete in parallel.
And if this runs inside a watchdog, we call it every so often.. maybe not with a Timer… if we have .NET 4.5 we should avoid creating extra threads if we can, right?
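Before the full watchdog, a minimal sketch of that idea (hypothetical IPs and intervals): one task per ping, awaited together, with Task.Delay instead of a timer:

using System;
using System.Linq;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

class ParallelPingSketch
{
    static async Task Main()
    {
        var ips = new[] { "192.168.178.1", "192.168.178.20" };
        while (true)
        {
            // One truly asynchronous ping per resource...
            var tasks = ips.Select(ip => new Ping().SendPingAsync(ip, 500)).ToList();
            // ...awaited together, so they run in parallel.
            var replies = await Task.WhenAll(tasks);
            foreach (var reply in replies)
                Console.WriteLine("{0}: {1}", reply.Address, reply.Status);
            // No timer: just delay until the next round.
            await Task.Delay(30000);
        }
    }
}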
And now, discussing our Watchdog implementation…
Wouldn't it be ideal to have a decoupled watchdog system that we can plug and play anywhere, in a SOLID way?
Basically, I try to think in simple patterns and provide simple solutions, so the first thing that comes to mind is implementing the watchdog with the Observer pattern, where observers register to get updates on the "connectivity" status of the different network resources.
To me, only two simple connectivity states matter:
Connected
Disconnected
So the code can react to them…
We could add "Reconnecting", but that would mean the watchdog knows the implementation of whoever consumes it; we want to stay decoupled (abstracted), so this is something the "user" app should manage by itself. So no.
For our watchdog, if we get an IP response back, we are connected. The consumer app reacts to these two basic facts. Simple, right?
Another thing we would need is a way to add the elements to be watched by the Watchdog.
For now, I believe this list should hold the following information for each element to watch:
IP
ConnectionStatus
LastPingResponse (why not keep the latest ping reply)
ElapsedFromLastConnected (time elapsed since we last had a connection)
TotalPings
TotalPingsSuccessful
Timer or no timer? A timer will always consume a thread, while we also have a way to do this with TAP, and we can make the process cancellable, so the decision is easy.
And we put the Observer pattern in the mix too, right?
So, for the benefit of our "network" watchdog subscribers (observers) we will provide means to:
Attach or Register
Detach or Unregister
Be Notified
We also have the fundamental question of how we want to do this… we can let subscribers observe the full list of resources, which makes sense if they are centralized; otherwise, a concrete end component might want to observe a single resource, being interested only in that one.. so… what to do?
Basically, I implemented the Subject interface on both: the resource list and the concrete resource. That way we have a flexible design that fits both uses and proper software quality standards.
The code is simple: a .NET Core component project plus a simple console application that showcases its usage:
Five interfaces are declared, one for the Network Watchdog so we can:
Add resources to be watched
Configure the Watchdog
Start it
Stop it
Yes, a future addition would be removing resources, but I have not fully thought that through yet. I will probably get to it shortly.
The other four are the interfaces for the Observer pattern: one pair for the Subject and one for the Observer. I am more familiar with the concept of a Publisher and a Subscriber, and those names sound better and feel more meaningful to me than Subject/Observer, so I use them instead.
One pair is meant for the Network Watchdog, covering all the resources, and the other pair is meant for a concrete resource itself.
Then we have an enum with the connectivity state, which holds Connected and Disconnected.
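The interface definitions themselves are not listed in this post, but from how the classes below implement them, they plausibly look like this (my reconstruction, not the exact repo code):

using System.Collections.Generic;

public enum ConnectivityState { Connected, Disconnected }

// Publisher/subscriber pair for the whole watchdog (all watched resources).
public interface IWatchdogSubscriber
{
    void Update(List<ResourceConnection> data);
}

public interface IWatchdogPublisher
{
    void RegisterSubscriber(IWatchdogSubscriber subscriber);
    void RemoveSubscriber(IWatchdogSubscriber subscriber);
    void NotifySubscribers();
}

// Publisher/subscriber pair for a single concrete resource.
public interface IResourceSubscriber
{
    void Update(ResourceConnection resource);
}

public interface IResourcePublisher
{
    void RegisterSubscriber(IResourceSubscriber subscriber);
    void RemoveSubscriber(IResourceSubscriber subscriber);
    void NotifySubscribers();
}

// The watchdog's own operations.
public interface INetworkWatchdog
{
    void AddResourceToWatch(string IP);
    void ConfigureWatchdog(int RefreshTime = 30000, int PingTimeout = 500, bool notifyOnlyWhenChanges = true);
    void Start();
    void Stop();
}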
The next thing to look at is the implementation of ResourceConnection:
using ConnWatchDog.Interfaces;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

namespace ConnWatchDog
{
    public class ResourceConnection : IResourcePublisher
    {
        public string IP { get; private set; }
        public ConnectivityState ConnectionState { get; private set; }
        public IPStatus LastStatus { get; private set; } // Technically it can be obtained from the PingReply..
        public PingReply LastPingReply { get; private set; }
        public TimeSpan LastConnectionTime { get; private set; }
        public long TotalPings { get; set; }
        public long TotalSuccessfulPings { get; set; }
        private Stopwatch stopWatch = new Stopwatch();
        public bool StateChanged { get; private set; }

        // Member for subscriber management
        List<IResourceSubscriber> ListOfSubscribers;

        public ResourceConnection(string ip)
        {
            ConnectionState = ConnectivityState.Disconnected; // we assume disconnection until proven otherwise
            LastStatus = IPStatus.Unknown;
            TotalPings = 0;
            TotalSuccessfulPings = 0;
            stopWatch.Start();
            IP = ip;
            StateChanged = false;
            ListOfSubscribers = new List<IResourceSubscriber>();
        }

        public void AddPingResult(PingReply pr)
        {
            StateChanged = false;
            TotalPings++;
            LastPingReply = pr;
            LastStatus = pr.Status;
            if (pr.Status == IPStatus.Success)
            {
                stopWatch.Stop();
                LastConnectionTime = stopWatch.Elapsed;
                TotalSuccessfulPings++;
                stopWatch.Restart();
                if (ConnectionState == ConnectivityState.Disconnected)
                    StateChanged = true;
                ConnectionState = ConnectivityState.Connected;
            }
            else // no success..
            {
                if (ConnectionState == ConnectivityState.Connected)
                    StateChanged = true;
                ConnectionState = ConnectivityState.Disconnected;
            }

            // We trigger the observer event so everybody subscribed gets notified
            if (StateChanged)
            {
                NotifySubscribers();
            }
        }

        /// <summary>
        /// Interface implementation for the Observer pattern (IPublisher)
        /// </summary>
        public void RegisterSubscriber(IResourceSubscriber subscriber)
        {
            ListOfSubscribers.Add(subscriber);
        }

        public void RemoveSubscriber(IResourceSubscriber subscriber)
        {
            ListOfSubscribers.Remove(subscriber);
        }

        public void NotifySubscribers()
        {
            Parallel.ForEach(ListOfSubscribers, subscriber => {
                subscriber.Update(this);
            });
        }
    }
}
This is a simple beast that implements the Observer pattern, in case any other component wants to watch over a concrete resource.
The NetworkWatchdog code:
using ConnWatchDog.Interfaces;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.NetworkInformation;
using System.Threading;
using System.Threading.Tasks;

namespace ConnWatchDog
{
    public class NetworkWatchdogService : IWatchdogPublisher, INetworkWatchdog
    {
        List<ResourceConnection> ListOfConnectionsWatched;
        List<IWatchdogSubscriber> ListOfSubscribers;
        int RefreshTime = 0;
        int PingTimeout = 0;
        public CancellationToken cancellationToken { get; set; }
        bool NotifyOnlyWhenChanges = false;
        bool IsConfigured = false;

        public NetworkWatchdogService()
        {
            ListOfConnectionsWatched = new List<ResourceConnection>();
            ListOfSubscribers = new List<IWatchdogSubscriber>();
        }

        /// <summary>
        /// Interface implementation for the Observer pattern (IPublisher)
        /// </summary>
        public void RegisterSubscriber(IWatchdogSubscriber subscriber)
        {
            ListOfSubscribers.Add(subscriber);
        }

        public void RemoveSubscriber(IWatchdogSubscriber subscriber)
        {
            ListOfSubscribers.Remove(subscriber);
        }

        public void NotifySubscribers()
        {
            Parallel.ForEach(ListOfSubscribers, subscriber => {
                subscriber.Update(ListOfConnectionsWatched);
            });
        }

        /// <summary>
        /// Interfaces for the Network Watchdog
        /// </summary>
        public void AddResourceToWatch(string IP)
        {
            ResourceConnection rc = new ResourceConnection(IP);
            ListOfConnectionsWatched.Add(rc);
        }

        public void ConfigureWatchdog(int RefreshTime = 30000, int PingTimeout = 500, bool notifyOnlyWhenChanges = true)
        {
            this.RefreshTime = RefreshTime;
            this.PingTimeout = PingTimeout;
            this.NotifyOnlyWhenChanges = notifyOnlyWhenChanges;
            cancellationToken = new CancellationToken();
            IsConfigured = true;
        }

        public void Start()
        {
            StartWatchdogService();
        }

        public void Stop()
        {
            cancellationToken = new CancellationToken(true);
        }

        private async void StartWatchdogService()
        {
            if (IsConfigured)
            {
                while (!cancellationToken.IsCancellationRequested)
                {
                    // A fresh task list per pass, so we never await stale tasks from earlier rounds.
                    var tasks = new List<Task>();
                    foreach (var resConn in ListOfConnectionsWatched)
                    {
                        Ping p = new Ping();
                        var t = PingAndUpdateAsync(p, resConn.IP, PingTimeout);
                        tasks.Add(t);
                    }
                    if (this.NotifyOnlyWhenChanges)
                    {
                        await Task.WhenAll(tasks).ContinueWith(t =>
                        {
                            // Send the notification only if any resource changed its state (connected <==> disconnected)
                            if (ListOfConnectionsWatched.Any(res => res.StateChanged == true))
                            {
                                NotifySubscribers();
                            }
                        });
                    }
                    else NotifySubscribers();

                    // After all resources are monitored, we delay until the next planned execution.
                    await Task.Delay(RefreshTime).ConfigureAwait(false);
                }
            }
            else
            {
                throw new Exception("Cannot start Watchdog: it is not configured");
            }
        }

        private async Task PingAndUpdateAsync(Ping ping, string ip, int timeout)
        {
            var reply = await ping.SendPingAsync(ip, timeout);
            var res = ListOfConnectionsWatched.First(item => item.IP == ip);
            res.AddPingResult(reply);
        }
    }
}
Yes, reading it again, ListOfConnectionsWatched could be named differently, like ListOfNetworkResourcesWatched… I will keep that in mind and update it in the Git repo later on.
As a detail: while writing the demo I thought that in some cases we want every update to be notified, so we have a permanent heartbeat that we can bind to a UI. NotifyOnlyWhenChanges controls this; when it is set, we only send notifications if there has been a change in the connected/disconnected state.
And.. no timer is used; in the end we rely on async/await and Task.Delay only.
The last piece of code is an example application that uses the presented software component. My "use case" is: "I want a ping sweep over my local home network to see who is responding or not".
The code adds several resources to be watched and sets up the class as a subscriber to receive connection status updates.
For this, we have to implement the IWatchdogSubscriber interface with its Update method.
using ConnWatchDog;
using System;
using System.Collections.Generic;
using System.Threading;

public class AsyncPinger : IWatchdogSubscriber
{
    private string BaseIP = "192.168.178.";
    private int StartIP = 1;
    private int StopIP = 255;
    private string ip;
    private int timeout = 1000;
    private int heartbeat = 10000;
    private NetworkWatchdogService nws;

    public AsyncPinger()
    {
        nws = new NetworkWatchdogService();
        nws.ConfigureWatchdog(heartbeat, timeout, false);
    }

    public void RunPingSweep_Async()
    {
        // Register every IP of the local subnet as a watched resource.
        for (int i = StartIP; i < StopIP; i++)
        {
            ip = BaseIP + i.ToString();
            nws.AddResourceToWatch(ip);
        }
        nws.RegisterSubscriber(this);

        // Stop the watchdog automatically after 60 seconds.
        var cts = new CancellationTokenSource();
        cts.CancelAfter(60000);
        nws.cancellationToken = cts.Token;
        nws.Start();
    }

    public void Update(List<ResourceConnection> data)
    {
        Console.WriteLine("Update from the Network watcher!");
        foreach (var res in data)
        {
            if (res.ConnectionState == ConnectivityState.Connected)
            {
                Console.WriteLine("Received from " + res.IP + " total pings: " + res.TotalPings.ToString() + " successful pings: " + res.TotalSuccessfulPings.ToString());
            }
        }
        Console.WriteLine("End of Update ");
    }
}
Note that the CancellationTokenSource is there for convenience; it will stop the watchdog after 60 seconds, but we could bind that to the UI or any other logic.
From my Main method, only two lines of code are needed:
var ap = new AsyncPinger();
ap.RunPingSweep_Async();
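For completeness, a sketch of how I keep the console alive; the assumption here is that Start() kicks off the async loop in the background and returns immediately:

static void Main(string[] args)
{
    var ap = new AsyncPinger();
    ap.RunPingSweep_Async();
    Console.ReadLine(); // keep the process alive while the updates arrive
}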
I am happy with the results and the time dedicated, and I am already thinking about how such a system could be extended:
Extending it to other scenarios, like monitoring services
Extending it to provide custom actions (ping, service monitor, application monitor, etc…), though this would also require custom data
Enabling the watchdog to perform simple actions on its own, like a "keep alive" or, in case of a service issue, "stop and restart" protocols to try to solve the issue…
Improving the above points by implementing the Strategy pattern "properly" (see the sketch below)
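A rough sketch of what that Strategy idea could look like (all names are hypothetical, just to make the point):

using System.Net.NetworkInformation;
using System.Threading.Tasks;

// Each kind of check becomes an interchangeable strategy; the watchdog
// would depend only on this interface instead of on Ping directly.
public interface IWatchStrategy
{
    Task<bool> CheckAsync(string target); // true = healthy
}

public class PingStrategy : IWatchStrategy
{
    public async Task<bool> CheckAsync(string target)
    {
        var reply = await new Ping().SendPingAsync(target, 500);
        return reply.Status == IPStatus.Success;
    }
}

// A service monitor or application monitor would implement the same
// interface, possibly carrying its own custom data.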
So what do you think? What would you change, add or remove from the proposed solution to make it better?
A short while ago I got my hands on NDepend, thanks to Patrick Smacchia, its lead developer. NDepend is a static code analyzer for performing static analysis on .NET code.
In this blog post, I am going to explain a bit what it is and what it does.
What is NDepend?
As mentioned, it is a static analysis tool for .NET & .NET Core. Static means that the analysis is performed on the code while it is not being executed.
Usually, static code analysis is performed to ensure that the code adheres to certain guidelines or metrics, like the number of warnings or certain classes of errors..
If you work professionally with .NET, you have probably worked with static analyzers from Visual Studio itself, the most common being FxCop, StyleCop, or the more advanced SonarQube.
That said, these code analyzers do not compete with NDepend, as they are fundamentally different. In fact, they complement each other.
What is different?
Basically, the rule set implemented by NDepend is essentially different from those of the other static analyzers, like SonarQube or other Roslyn analyzers. Those are good at analyzing what happens in a method, the code, the syntax, and the code flow… whilst NDepend is good at seeing things from a wider, higher-level perspective. It is really focused on analyzing the architecture, OOP structure and implementation, dependencies (that is where the product name comes from 😉), metrics, breaking changes, mutability, and many other things too.
The strength of NDepend lies in analyzing software architectures and their components, complexity, and interrelations, whilst the other products' strengths are at a different level, focusing more on code modules; all of them are of course excellent products.
NDepend is designed to integrate with some of these products, like SonarQube.
It performs static code analysis on .NET & .NET Core and, upon that, delivers the following information about your code and, importantly, its architecture:
Technical Debt Estimation
Issue identification
Code quality
Test Coverage
Complexity and diagrams
Architecture overview
(Note: it does way more, but I have shortened the list to what I think is important.)
And it shows it in a very smart way, let me show you the NDepend Dashboard:
Additionally, it integrates seamlessly with Visual Studio, TFS, and VSTS. It integrates especially well with the build process, and provides the ability to analyze this data over time, comparing builds, test coverage, and the build processes themselves.
Another feature is important for communicating with management and reasoning about "passing a milestone" or "fixing the technical debt" (read technical debt as the total of issues we leave in the code knowing they are there… but software has to ship, right?): it provides a smart estimation of that debt.
A quick example
To get some hands-on experience with .NET Core, I recently implemented a simple service, wrote some tests just for fun, and also made it asynchronous. Let's see how it faces the truth! Just bear in mind it was a just-for-fun project and time was limited 😉
It's quite easy: I followed the steps in the "getting started" video here, installed NDepend and its Visual Studio plug-in, and opened VS2017, where an NDepend tab now appears.
Let's open my RomanConverter coding-dojo self-practice project and click to attach a new NDepend project.
The following window appears, and we can already click the green "play" button.
In the bottom-right corner there is a sphere indicating the status of NDepend. The analysis starts and the indicator shows that it is analyzing.
Once finished, our report displays itself in a browser.
From the Dashboard, click and explore the different information exposed.
A simple click in the Rules pane, for example on the violated rules, gives us this dashboard:
I find it brilliant: not only are the issues identified, but stacked data bars showcase the rules with more issues or with bigger times-to-fix, and they are subtly color-coded, so understanding which issues are the most critical and deciding which ones to tackle first (or right away) is pretty damn intuitive.
A note on this: I also realized (thanks Patrick for pointing it out) that clicking on the issues shows them, so what seems like a presentation UI turns out to be a fully interactive dashboard that gets you into action, or helps you understand the underlying issues better.
It is easy to identify what our managers would call the "low-hanging fruit": easy to fix, and saving trouble for later..
Another useful panel is the Queries and Rules Explorer, which we can open from the circle menu in the bottom-right corner, or through the usual menu: NDepend > Rules > View Explorer Panel.
And it will appear:
With this panel, we can explore the rules that the solution has violated, grouped into categories like Code Smells, Object Oriented Design, Architecture, Code Coverage, Naming Conventions, predefined "Quality Gates", Dead Code, and many more… If we click on a rule, we can explore the C# LINQ query, aka "CQLinq", that defines it.
CQLinq queries a code model dedicated to code quality, and rules can be edited, compiled, and executed live.
An example of such rule follows:
// <Name>Interfaces must start with an I</Name>
warnif count > 0
Application.Types.Where(t => t.IsInterface && !t.SimpleName.StartsWith("I"))
And it seems damn simple, even to me.. 😉
From here we can quickly access the offending code and fix it.
Other visualizations that must be explored are the dependency graph, the dependency matrix, and the metrics view.
All are advanced visualizations that give great insight into our solution and how it is structured. Let's see them.
Code Metrics View
Here we can see that by default we are analyzing our methods by lines of code versus cyclomatic complexity. This visualization, being a treemap, helps greatly in understanding the information.
We can configure the metrics, with a wide selection of levels of granularity, size (of the treemap boxes), and color. An especially useful one is code coverage.
An example can be seen next, based on NDepend's own source 😉 here
Dependency Graph
Here we see a graph representation, initially of our namespaces, which uses LOC as the sizing factor for the nodes and the number of members for the width of the edges connecting them. We can also include third-party assemblies.
It is great for checking whether the interfaces are respected from a software architecture viewpoint, or for seeing whether a certain assembly is used only where it should be.
I saw no way to group several assemblies into one; for example, the explosion of Microsoft.AspNetCore into several assemblies is of no use to me, and I would like to group them into a single node to make the graph clearer. Otherwise this adds noise, which might make other relations I want to visualize harder to detect.. (Update: Patrick Smacchia mentioned that this is in the works. Cool!)
The Dependency Matrix
Large solutions would make the previous graph representation a mess: too many namespaces and assemblies. Here we can select namespaces or assemblies and restrict the view, drilling down to the elements we want to dig into, and jump to their graph view.
There we can select even a single method and right-click to see it either in a dependency graph view or in a dependency matrix.
What is special about it?
Simply said, its estimation is based not only on the source code, but also on solution-level analysis, as mentioned earlier.
I also mentioned the C# LINQ queries, and it seems to me a quite flexible approach: everything is interlinked, all the analyses are performed through queries, and a lot of the data presented is query-based beyond the rules themselves: trends, quality gates, searches..
Its visualizations are special, period: the right visualization for the job, in a simple yet efficient way. Yes, if we are not used to graphs, dependency matrices, or treemaps, this tool can look intimidating, but don't be put off. Once you get used to it, it becomes second nature. I used it some years ago to fix a mess and it helped greatly, even though I only used two of its visualizations back then.
Another aspect I really like is that whenever you visualize some information, all the relevant context comes along. An example: the rules! I like the detail that even in the top menu we can see what the current solution is violating.
Or the fact that in the rules panel I see the issues next to the debt, the annual interest, and more.
Fundamentally, it helps by showing important information where we need it.
Should we use it?
First, to have a visual reference of your project and what is good (or wrong) in it. It can show a lot of things in a very visual way, which can help greatly in:
Knowing the state of our solution
Understanding (the issues, where are they, and the effort it will take to fix them)
Communicating to managers
Concrete features:
Enforce technical debt measurement in your coding guidelines and processes, especially regarding the cost to fix and the cost of leaving unfixed issues "in".
Understand the entanglement of our software
Analyze Software architecture and Code Quality
Accurately track the state of our software over time, being able to determine its improvement (or worsening) across its different attributes.
Summarizing – I like it a lot!
It is easy to get lost in, but it is a rare jewel, with an entry barrier you should push until it breaks, to see its beauty or, said plainly, its usefulness.
To me its best feature is being able to showcase already-implemented software architectures in a different, kinesthetic way, with different visualizations tailored to showcase and highlight important aspects of your software. This is great for seeing the real state of your code, understanding it, and fixing it.
That said, I haven't been at Ignite, but I am overwhelmed by the vast amount of announcements made there and around these dates… and the vast amount of content: 119 talks on AI, 313 sessions on Machine Learning. Whoa, it's getting crazy, and the feeling is that all the Microsoft technologies around AI/Machine Learning/Data Science are accelerating. Fast!
So, let’s catch up!
Microsoft ML Server 9.2 released: Microsoft's platform offering for Machine Learning and Advanced Analytics for enterprises. As a big improvement, it now supports the full data science lifecycle, including ETL (Extract, Transform and Load) operations, in both R & Python. And yes, this is what was earlier known as Microsoft R Server, whose name was no longer fully correct after 'adopting' Python 😉 Oh, and now it's fully integrated with SQL Server 2017… you can read more at the official source, or watch a quick 2-minute introductory video here. I think it is pretty damn important to the full operationalization offering that Microsoft proposes…
Yay! The much-anticipated next-gen SQL database server from Microsoft has been released: SQL Server 2017, with full support for R & Python and including the previously mentioned ML Server.
Azure Machine Learning has been greatly updated… the service now brings us the AML Workbench, a client application for AI-powered data wrangling and "ML fun", which I have reserved some time this weekend to download and "have some time together" with.. Also, the AML Experimentation service has been launched to help data scientists experiment faster and better, as well as the AML Model Management service. Here you can find an overview, as "short" as it can be..
Microsoft R Client 3.4.1 released: supporting the matching version of R, it provides desktop capabilities for R development, with the ability to deploy models and computations to an ML Server. Original source here. Note that to properly use this, it is vital to use the Visual Studio IDE and to install the R Tools for Visual Studio, which are free.
Talking about Visual Studio, we now have the Visual Studio Code Tools for AI, which provide extensions to build, test, and deploy Deep Learning / AI solutions, fully integrated with Azure Machine Learning. The GitHub repo is here.
Visual Studio Code for AI has been launched as well, meant for developing with TensorFlow, CNTK, AML, and more.
The "Quantum Revolution" happened 😉 with the initial move from Microsoft to embrace the next "shiny light" in computing: quantum computing. This is not released yet, only announced… there will be a quantum programming language inheriting from C#, Python, and F#, a quantum computer simulator, and everything integrated into our favorite developer IDE. A short introduction (2′) is here. To follow it and subscribe to news on this, go here; at the bottom there is a "signup" button, which I've already pressed. So, what are you waiting for? 😉
Hope it was interesting!
Let me know if you liked it.. And.. Would you like a hands-on overview on the AML “Workbench”? 😉
Yup, since 2013 I haven't blogged at all… nothing… I guess I gave it all to my book and needed some rest… just joking: I moved to Switzerland in late 2012 and it has been an intense ride…
..and with a 2:30h commute, so that did not help too much…
So I ended 2015 at 94.5 kg (being 1.77 m tall), which made me, in fact, obese, with some health issues, stress, wrong habits, etc…
Basically, "not having time"… which is wrong: you have time, 24 hours a day. We just prioritize it wrong. And justify ourselves, that is..
2016 was a game changer: I said stop and put myself to work. By April 4th I was 76 kg (same height, though) and somewhat fitter… Hey, I even got into the 20 finalists of the Bodybuilding.com 12-week 250K USD transformation contest! (No prize was won though.. I won back my health. Yay!)
As of today, I am jumping up and down in weight around 80-84 kg, but that will change shortly..
Professionally, I have had some fun: initially mostly fixing code and putting proper architecture practices in place (and implementing them hands-on), and when I was tired of fixing and fixing and fixing…. I went into the realm of testing as a "Performance Test Lead"… And I loved that! Doing something I had never done forced me to learn fast, applying business analysis and planning skills to define the performance test architecture and, why not, the test architecture too, and to implement it in a POC 😉
That was a great experience and I enjoyed it; it made me better, so now I can think as a developer and as a tester… from a low level (coder, tester) and from a high level (SW architect, Test Manager), while retaining the ability to go deep, which I enjoy. (You know, the ability to affect the quality of a product that much… and even in earlier stages, if you are allowed to, is a great feeling 🙂
If I had to describe myself right now, I'd say I'm a Dev Architect with the ability to see things from a high, system-level perspective down to a low level. From a "gamer" point of view, I'd say I am a sniper who can zoom from afar and aim at the weakest point, and get "the shot".
After this, shortly into 2016, I entered commando mode, and that broke my recently acquired healthy habits. Damn! I had up to 4 assignments in 2016.. venturing into unknown "code pools", going to teams to fix issues they were not able to fix themselves, whether in their own code or in an intriguing "piece of art" whose DAL was executing transactions in a funny way, or let's just say things were "not behaving as expected"…
Later in 2016 I joined the CoreLab team as a Test Analyst & SW Engineer.
By the end of 2016 I started learning Machine Learning, which helped me greatly to focus and realize how much I like to get "engaged" in learning a technology or topic (even if this one is pretty wide…).
I truly believe that Machine Learning / Data Science and AI programming are a key toolset, a game-changing technology and body of knowledge that, if applied properly, can change our world for good. Also for bad, sadly, as a weapons race seems to have already started…
but that is the topic of another post 😉
Thank you for reading and let’s meet again shortly…
[UPDATE: The contest will finish at the end of August]
I am pleased to announce that, with support from Packt Publishing, we are organizing a giveaway especially for you. All you need to do is comment below the post for a chance to win a free e-copy of Microsoft .NET Framework 4.5 Quickstart Cookbook. Two lucky winners stand a chance to win an e-copy of the book. Keep reading to find out how you can be one of the lucky ones.
Overview of Microsoft .NET Framework 4.5 Quickstart Cookbook
Designed for the fastest jump into .NET 4.5, with a clearly designed roadmap of progressive chapters and detailed examples.
A great and efficient way to get into .NET 4.5 and not only understand its features but clearly know how to use them: when, how, and why.
Covers Windows 8 XAML development, the core .NET improvements (Async/Await & reflection), EF Code First & Migrations, ASP.NET, WF, and WPF.
Note: I am posting this also on my new blog, http://xamlguy.com; feel free to comment there (preferably) or here, as I will watch both sites.