Creating a Serverless RESTful API with Azure Functions Part 1 – Introduction

Azure’s latest (and in my opinion greatest) trend is serverless. It’s cool: anything serverless can scale seamlessly and, if done right, be elastic and adapt to a dynamically changing load… which makes it ideal for implementing microservices.

That said, I am pretty new to Azure at the moment but learning nonstop, and in one or two weeks I will face the AZ-203 exam, as mentioned some weeks ago.

But I like to play, and this is the outcome of some experimenting that got me something close to a “proper” REST API endpoint in Azure, along with OpenAPI support and some Azure-provided goodies.

What and how?

So, what exactly will we be doing and how will we do it?

I will be implementing a REST API endpoint using Azure Functions through an HTTP trigger and C#, built with Visual Studio 2019.
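To set expectations, here is a minimal sketch of the kind of HTTP-triggered function we will end up writing (the function name and route are placeholders, not the final API):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HelloApi
{
    // A GET endpoint at /api/hello, runnable locally with the Azure Functions tools.
    [FunctionName("HelloApi")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "hello")] HttpRequest req)
    {
        // Read an optional query parameter and answer with a simple JSON payload.
        string name = req.Query["name"];
        return new OkObjectResult(new { message = $"Hello, {name ?? "world"}!" });
    }
}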

Then I plan to implement OpenAPI support with Azure API Management, which has several benefits, such as getting the documentation and test layer done almost automatically for us.

This said, I initially thought of using an Azure Functions Proxy (introduced by the end of 2017) as the front-end for the implemented API. It has some benefits, like redirecting to a mock while the API is in development, or switching between a staging version and an experimental new version of the API. But it seems that API Management does all this and much more, so I am unsure I will be using this feature. To understand it, this diagram might be of help:

Solution-Architecture-Diagram

So basically the proxy acts as a facade between a user/consumer/publisher of our API and our API (the Azure Function on the diagram). There we can transform the request and response, redirect to another Azure Function or service, etc.

As said, I’ll use API Management and not dig into Azure Functions Proxies.

Also, I do not plan to dig into authentication and authorization, but that might change as we progress.

 

Azure Functions 101

Azure Functions are a great solution for running quick pieces of logic. On the Consumption plan the default timeout is 5 minutes, modifiable up to 10 minutes in host.json, and an HTTP-triggered function needs to respond within about 230 seconds regardless, due to the Azure load balancer’s idle timeout.
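For reference, raising the timeout is a one-line setting in host.json (a sketch for the v2 runtime):

{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}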

They can be implemented in several languages, including C#, and support dependencies via NuGet.

They support integrated security with OAuth providers such as Azure Active Directory, Facebook, Google, Twitter, etc.

They integrate nicely with various Azure and third-party services.

FaaS/LaaS or PaaS – they are primarily serverless, meaning “Function as a Service” (also named “Logic as a Service”), but we can change the pricing plan from the default “Consumption” to an “App Service” plan, turning it basically into a “Web API”. That said, the Consumption plan right now comes with 1 million free executions a month, which is not bad, right?

 

Azure Functions Proxies

The Azure Functions Proxies act as a single URI, a unified facade on which we can apply redirections and transformations to our convenience (a sketch follows the list), like:

  • Request and Response overrides.
  • Modify input request to send different HTTP verbs, headers or query parameters to a hidden backend service.
  • Replace the backend response status code, headers and body.
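As a sketch, a proxy is declared in a proxies.json file at the root of the function app; the route, backend URI and header below are made-up examples:

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "HelloProxy": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/api/hello"
      },
      "backendUri": "https://myfunctionapp.azurewebsites.net/api/hello",
      "responseOverrides": {
        "response.headers.X-Served-By": "Azure Functions Proxies"
      }
    }
  }
}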

This said, API Management supersedes Azure Functions Proxies by far. Still, proxies have their role: if we only want to unify separate functions or API services into a single API, we should use Azure Functions Proxies.

If we want full API management, then we know what to use.

That said, proxies might be interesting if we have a somewhat complex environment. As said, an image is worth a thousand words:

api-management_500x259

So they could be used to unify several related API microservices, whilst having a greater API facade which manages them all.

Or to mock an Azure Function..

As we will implement something way simpler, I’ll go for API Management. Also, it seems more fun.

 

API Management

API Management enables a full-fledged, professional facade/gateway for our API, providing built-in services which support all that the Azure Functions Proxies can do, plus:

  • Load Balancing
  • Hot Swapping
  • Rate limiting
  • Authentication and Authorization
  • IP Whitelisting
  • Access Policies
  • Caching
  • Subscriptions, including user and RBAC management
  • Licensing
  • Business Insights & Analytics
  • Developer Portal (for ISV partners and/or internal use), including API documentation, an interactive console, account creation and analytics on one’s own usage

The Consumption tier of API Management (which I plan to use) entered preview in December 2018 and was released by the end of May 2019. The announcement clearly says that it is specifically meant for microservices.

Read more on the official Microsoft documentation on API Management or just get the whitepaper.

 

Next..

And that’s it for today. In our next entry we will create and verify our first REST API in C# with Azure Functions.

Happy coding!

Implementing a Strategy to “rule them all…”

The Strategy pattern

The Strategy pattern is one of the OOP design patterns which I like the most..

According to Wikipedia, “the strategy pattern (also known as the policy pattern) is a behavioral software design pattern that enables selecting an algorithm at runtime.” – source
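In code, the idea boils down to something like this minimal C# sketch (all names are illustrative):

// The abstraction every algorithm implements.
public interface ICompressionStrategy
{
    byte[] Compress(byte[] data);
}

// Two interchangeable implementations.
public class GzipCompression : ICompressionStrategy
{
    public byte[] Compress(byte[] data) { /* gzip the data here */ return data; }
}

public class NoCompression : ICompressionStrategy
{
    public byte[] Compress(byte[] data) => data;
}

// The context only knows the abstraction, so the algorithm
// can be selected (or swapped) at runtime.
public class Archiver
{
    private readonly ICompressionStrategy _strategy;

    public Archiver(ICompressionStrategy strategy) => _strategy = strategy;

    public byte[] Archive(byte[] data) => _strategy.Compress(data);
}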

This UML diagram showcases it pretty well:

Why do I like it?

I believe there are several reasons that make this design pattern one of the most useful ones around:

  • KISS – It improves adherence to the KISS (“keep it simple, stupid”) principle in the code.
  • LSP – The “strategies” are interchangeable and can be substituted for one another. This is a clear application of the L in SOLID, the Liskov Substitution Principle (LSP): “Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.” – Source
  • Open-Closed – Implementing the Strategy through an interface is a clear application of the Open-Closed Principle: “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”. In this case we can extend it by writing a class that extends the behavior, but we cannot modify the interface. Even more, if we implement the Strategy using a plugin mechanism, we do not even need to modify the main application’s source code. It is a very clean implementation, and it also helps decouple the code and responsibilities. – Source
  • SRP – We can also affirm that the Strategy pattern promotes the Single Responsibility Principle, as each strategy implementation lives in a single class.
  • DIP – And also the Dependency Inversion Principle: “One should depend upon abstractions, [not] concretions.” – Source. This holds because the strategies depend on an abstraction: the interface which defines the strategy.

 

We can easily implement the Strategy pattern with dependency injection, but that forces the strategy code to live in the same assembly or executable, and thus be coupled to it. Because of this, I consider it a sub-optimal implementation which does not fulfill the Open-Closed Principle at 100%, if we consider the main executable a “software entity”.

Even more, if we are in a highly regulated environment, this means we can add functionality without altering “the main software”, which might be subject to a regulated process like FDA approval in the case of a medical system… a process that means several months of documentation, testing and waiting for the FDA to sign everything.

Do you like it already? Wait – there are more benefits!

In my previous job, at RUF Telematik, I proposed the application of this pattern with a plugin system as part of the technical product roadmap: basically, to decouple the code which interfaces with a concrete piece of hardware (type of HW, manufacturer, version…), so the main software would not need to know how to talk with a camera, monitor or communication device in the train system. The responsibility is delegated to a .dll plugin that knows how to do that work, and we can dynamically add these features without altering the main software.

In addition to the software architecture benefits and the code quality, we have some more benefits:

  • We can parallelize the development of the hardware manager DLLs across different developers, who can test them separately.
  • We can separate the release and test workflows and accelerate development.
  • We do not need to test the main software every time we add support for a new device or a new version of a device firmware.
  • We do not need to revalidate the full software against industry standards over and over again (usually at a substantial cost in time and money).

In a train we could categorize the different hardware into the following four categories:

  • TFT screens
  • LCD Screens
  • RCOM Communication devices
  • Camera devices

Each one has different vendors, models and version numbers, so a somewhat more complex implementation would be needed, but this is an example, so we do not need to build “the real thing”.

So we could implement an interface like ITrainInstrumentManager that supports methods like the following (sketched in code right after the list):

  • Connect
  • GetVersion
  • Update
  • Check
  • ExecuteTests
  • UpdateCredentials
  • and so on…
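Translated into a C# sketch (the parameter and return types are my assumption; the real contract could differ):

public interface ITrainInstrumentManager
{
    bool Connect(string address);
    string GetVersion();
    void Update(string firmwarePackagePath);
    bool Check();
    bool ExecuteTests();
    void UpdateCredentials(string user, string password);
    // and so on...
}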

 

And then implement a strategy that fulfills this interface for every type of equipment, for every brand/vendor, and for every model and version…

With the added benefit that I can parallelize this work and get several people working on different strategies, one for each new device. This would enable adding support for new hardware devices in no time at all.

And they could be tested separately, with the guarantee that if the tests pass, the main tool will work as well.

Without altering or releasing the tool: just add the plugins to the corresponding folder, or load them dynamically from an online service or location (if we implement the strategy using the plugin technique, of course).

This presentation showcases some of the points mentioned above, if you are still curious.

 

Implementation of the Strategy Pattern

One of the best implementations I have ever been part of dates from when I worked during 2011 and 2012 at Parlam Software, where a plug-in architecture was designed and implemented by my now friend Xavier Casals. Back then he was my customer and CTO of Parlam (and he still is).

<Commercial mode>

If you are in need of translations, do check their solution. Basically it is a full-fledged TMS (Translation Management System) which automates your language translation process. More on this here and here.

</Commercial mode>

This plugin system enabled dynamically adding data converters to third-party systems, like CMSs such as SDL Tridion, which his service connects to and works with. Basically, he can deliver an interface to anybody who wants to integrate with his system, enabling easy implementation as well as testing and deployment. Once the DLL is tested and verified, it can be signed for security reasons and added to a folder where it is magically loaded, and we get the perfect implementation of the Open-Closed Principle…

“software entities… should be open for extension, but closed for modification”

I know it is a lot to claim, but let’s get it done and you can tell me afterwards 😉

 

Structure

We will create a .NET solution which will contain three projects:

  • StrategyInterface –> A .NET Core class library that holds the Strategy interface, two custom attributes and a custom exception for managing the plugins. This is the basic contract shared between the main application that uses the plugin(s) and the plugins themselves.
  • Plugins –> A project with a simple class that implements the interface from the StrategyInterface project/assembly. I use the two custom attributes to add a name and a description, so I can programmatically go through them with reflection before creating an instance, which is convenient if I want to avoid creating excessive objects. Note that this project has several implementations; in our case I created four: CAM, LED, RCOM and TFT. Each one outputs its DLL into a concrete directory, “D:\Jola\Plugins”.
  • StrategyPatternDoneRight –> feel free to discuss the name with me, comments are open to all ;). This is the main consumer of the plugins that implement the Strategy, and it loads the plugins matching the interface from a concrete location in the filesystem. At the moment I did not put in much logic: it just loads all the matching assemblies and executes a simple method which all the plugins provide.

The solution looks like:

Strategy01 structure

StrategyInterface project

The most important piece here is the interface that defines the Strategy:

Strategy02 interface

There we also create the custom attributes, one for the name and another for the description:

Strategy03 cust attrib name
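Since the screenshots may not read well, this is roughly what the StrategyInterface assembly contains; the exact member names (Execute(), the attribute class names…) are my reconstruction and may differ from the ones in the images:

using System;

namespace StrategyInterface
{
    // The contract shared between the main application and the plugins.
    public interface IStrategy
    {
        void Execute();
    }

    // Custom attribute exposing a human-readable plugin name.
    [AttributeUsage(AttributeTargets.Class)]
    public class StrategyNameAttribute : Attribute
    {
        public string Name { get; }
        public StrategyNameAttribute(string name) => Name = name;
    }

    // Custom attribute exposing a short plugin description.
    [AttributeUsage(AttributeTargets.Class)]
    public class StrategyDescriptionAttribute : Attribute
    {
        public string Description { get; }
        public StrategyDescriptionAttribute(string description) => Description = description;
    }

    // Custom exception for plugin loading and validation problems.
    public class StrategyPluginException : Exception
    {
        public StrategyPluginException(string message) : base(message) { }
    }
}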

Plugin project(s)

I created a Plugins folder to contain them all, then created .NET Standard assemblies and added a reference to the StrategyInterface project.

Let’s say we create the CAM.Generic project to implement support for the train network cameras… there we add a class which implements the Strategy interface, and we decorate it with the two custom attributes:

Strategy04 Plugin Strategy Implementation

Obviously this is a simplification, but here we would put all the hardware-dependent code for handling complex network operations with the camera…
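As a sketch (assuming the contract reconstructed above), the camera plugin could look like this:

using StrategyInterface;

namespace CAM.Generic
{
    [StrategyName("CAM")]
    [StrategyDescription("Handles the train network cameras")]
    public class CameraStrategy : IStrategy
    {
        public void Execute()
        {
            // All the hardware-dependent camera code would go here:
            // connecting, querying versions, updating firmware, running tests...
            System.Console.WriteLine("Camera strategy executing...");
        }
    }
}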

All the plugin projects are customized to have the same build output path, to avoid manual work:

Strategy05 Plugin Build properties

Just be aware that the output path you use must exist and be the same for all plugins.

Main Application

So, all we have left is to implement the mechanism that discovers the assemblies at a concrete filesystem path and loads them dynamically into our current process. We will do this using reflection.

I am creating a wrapper class for exposing the strategies implemented by our plugin assemblies.

This class is named StrategyPluginContainer and exposes the two custom attributes and an instance of the plugin (really, an instance of the class that implements the Strategy interface).

The two key reflection techniques used here are:

  1. Activator.CreateInstance(Type) – creates an instance of the specified type using the default constructor. Note this is not strictly reflection; it comes directly from the System namespace.
  2. Type.GetCustomAttributes(attributeType, inherit) – obtains the values of a custom attribute from a type.

Note: the green underlining is due to style suggestions from my VS installation, with which I do not agree when I want clarity. Expression-bodied properties or the ?? operator are fine and save space, but if somebody is not used to that syntax, readability and understandability suffer.

Strategy06 Plugin Wrapper
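A reconstruction of the wrapper, assuming the interface and attributes sketched earlier:

using System;
using StrategyInterface;

public class StrategyPluginContainer
{
    public string Name { get; }
    public string Description { get; }
    public IStrategy Instance { get; }

    // The loader guarantees both attributes are present before calling this.
    public StrategyPluginContainer(Type pluginType)
    {
        // Read the two custom attributes from the type before instantiating it.
        var nameAttr = (StrategyNameAttribute)pluginType
            .GetCustomAttributes(typeof(StrategyNameAttribute), false)[0];
        var descriptionAttr = (StrategyDescriptionAttribute)pluginType
            .GetCustomAttributes(typeof(StrategyDescriptionAttribute), false)[0];

        Name = nameAttr.Name;
        Description = descriptionAttr.Description;

        // Create the instance through the default constructor.
        Instance = (IStrategy)Activator.CreateInstance(pluginType);
    }
}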

Now we can implement the StrategyPluginLoader. This class’s responsibility is to keep a list of the plugins that implement the Strategy, and it does so by loading them from the filesystem (it could also get them from a web service or other means).

Basically it has a list of the StrategyPluginContainer objects we just created, exposed through a property.

It populates the list by getting all the DLLs from a specific folder on disk and loading them with reflection’s Assembly.LoadFrom(filename).

Then we get the types contained in each assembly and iterate through them, matching them against the Strategy interface. I also check that the two custom attributes are present, and if everything matches, I create a StrategyPluginContainer instance for this concrete type.

As a final check, I verify whether the plugin is already on the plugin list to avoid duplicates, and if it already exists, I update it properly.

Strategy07 Plugin loader
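A reconstruction of the loader along the lines described above (the folder path comes from the build setup; the duplicate handling is simplified):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using StrategyInterface;

public class StrategyPluginLoader
{
    private const string PluginPath = @"D:\Jola\Plugins";

    public List<StrategyPluginContainer> Plugins { get; } = new List<StrategyPluginContainer>();

    public void LoadPlugins()
    {
        foreach (string file in Directory.GetFiles(PluginPath, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(file);

            // Keep only concrete types that implement the Strategy interface
            // and carry both custom attributes.
            var candidates = assembly.GetTypes().Where(t =>
                typeof(IStrategy).IsAssignableFrom(t) &&
                !t.IsAbstract &&
                t.GetCustomAttributes(typeof(StrategyNameAttribute), false).Any() &&
                t.GetCustomAttributes(typeof(StrategyDescriptionAttribute), false).Any());

            foreach (Type type in candidates)
            {
                var container = new StrategyPluginContainer(type);

                // If a plugin with the same name is already loaded,
                // update it instead of adding a duplicate.
                int existing = Plugins.FindIndex(p => p.Name == container.Name);
                if (existing >= 0)
                    Plugins[existing] = container;
                else
                    Plugins.Add(container);
            }
        }
    }
}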

Last but not least, I use all of this from a nice console application: I create the StrategyPluginLoader, execute the command to load all the plugins and iterate through them, invoking the only command in the interface, which is implemented in separate, decoupled assemblies and loaded dynamically at runtime, with no knowledge or coupling of any kind in the main application.

Strategy08 bringing it together
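A sketch of the console application bringing it all together:

using System;

public static class Program
{
    public static void Main()
    {
        var loader = new StrategyPluginLoader();
        loader.LoadPlugins();

        // Invoke the single interface method on every dynamically loaded strategy.
        foreach (var plugin in loader.Plugins)
        {
            Console.WriteLine($"{plugin.Name}: {plugin.Description}");
            plugin.Instance.Execute();
        }
    }
}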

The full code can be found on GitHub here.

 

Happy coding!

 

 

Introduction to the Azure Machine Learning Workbench

Following the announcements post published some days ago here, we will dig deeper into this new tool, the Workbench. It is also called AML Workbench, which is shorter, and that is the term I will use from now on to refer to the Azure Machine Learning Workbench (glad about the acronym, as I do not want to type that again :P).

 

But, what’s the AML Workbench?

It is a desktop application for Windows and macOS with built-in data preparation that learns the data preparation steps as we perform them, and it is able to take advantage of the best open-source frameworks, including TensorFlow, Cognitive Toolkit, Spark ML and scikit-learn.

This also means that if you have a GPU that supports AI (read my earlier blog post on the topic here https://joslat.blog/2017/10/15/give-me-power-pegasus-or-the-state-of-hardware-in-ai/ ), you will benefit from that power head-on.

Oh, it also has a command-line interface for those who like them 😀

 

Sounds interesting? Then let’s get started!

 

Concepts first!

AML – Azure Machine Learning

This needs to be described early on, as it might be a bit confusing. AML is a solution proposal from Microsoft that encompasses different components and services to provide an integrated end-to-end solution for data science and advanced analytics.

With it we can prepare data, develop experiments and deploy models at cloud scale (read: massive scalability).

AML consists of a few components:

  • AML Workbench – Desktop tool to “do-it-all” from a single location.
  • AML Experimentation Service – I “suppose” this will enable us to validate hypotheses in a protected scenario.
  • AML Model Management Service – I suppose this will enable us to manage our models.
  • Microsoft Machine Learning Libraries for Apache Spark (MMLSpark library) – I read Spark/Hadoop integration here, probably with Azure servers.
  • Visual Studio Code Tools for AI – I read R & Python integration with Visual Studio here.

 

This picture showcases how AML Workbench fits in the Microsoft AI Ecosystem:

AML intro architec high level.JPG

Note that AML fully integrates with open-source initiatives such as scikit-learn, TensorFlow, Microsoft Cognitive Toolkit and Spark ML.

The created experiments can be run in managed environments such as Docker containers and clusters running Hadoop with Spark (I was wondering why Microsoft only mentions Spark there if they work together… ok! As Spark was built as an improvement over MapReduce, it can also run standalone in the cloud, that’s why!). They can also use advanced hardware like GPU-enabled VMs in Azure.

AML is built on top of the following technologies:

  • Jupyter Notebooks
  • Apache Spark
  • Docker
  • Kubernetes
  • Python
  • Conda

 

AML Workbench (yeah, finally!)

A desktop application with a command-line interface for Windows and macOS to manage ML solutions through the entire data science life cycle:

  • ETL
  • Model development and experiment management
  • Model Deployment

 

It provides the following functionalities:

  • Data Preparation that can learn by example (Wow!)
  • Data source abstraction
  • Python SDK for invoking visually constructed data preparation packages (SSIS anyone?)
  • Built-in Jupyter Notebook service and client UX (like Anaconda?)
  • Experiment monitoring and management
  • Role-based access to support sharing and collaboration
  • Automatic project snapshots for each run and version control (traceability on “experiments” at last!), along with Git integration
  • Integration with popular Python IDEs

 

Let’s install it!

First things first: do you have a computer with Windows 10 or macOS Sierra? (I guess you won’t have a Windows Server 2016 at home, do you?) If so, proceed… else go update https://www.microsoft.com/en-us/store/b/windows 😉

 

Oh, well… before installing we need to set up an ML experimentation account..

Log in to the Azure portal here: https://portal.azure.com/

Click on the new button (+) in the top left corner of the Azure portal, type “machine learning” and select “Machine Learning experimentation (preview)”.

Azure 01 MLE preview.JPG

Click create and fill in the nice form:

Azure 02 MLE preview.JPG

Be sure to select “DevTest” as the cost-saving Model Management pricing tier, otherwise it will have a cost. DevTest appears as “0.00”. Otherwise you might forget and get a non-pleasant surprise…

Azure 03 MLE preview.JPG

As I am not that much into playing with Azure at a personal level (mostly HOLs and learning), I had deleted all my resources, including a DB I created at a HOL that suddenly had a cost (luckily a very low one), and created all the required elements from the ground up: resource group, experimentation account, storage account, workspace, model management account, account name… My recommendation is that you keep that data safe and close to you, as this is all protected by Microsoft’s Azure security.

Oh, and click “Create” to create all these components in the cloud. We should see a “Deployment in progress..” message, which should be over in a couple of minutes, as shown in the picture below.

Azure 04 MLE preview.JPG

We should also see the details of the resources created (storage, resource group, etc.), along with some useful tools to download (at last!) the AML Workbench.

We can also download it from here https://aka.ms/azureml-wb-msi

Then double-click it, or right-click and select “Install”.

After the installer loads we should see the gorgeous installer…

Azure 05 MLE preview.JPG

It’s clean, it’s Metro… oops! I meant Modern!

As usual, click continue and you will be presented with the dependencies and installation path, shown next.

Azure 06 MLE preview.JPG

There I did not like that I could not change the installation path… so we can only click the install button… and cross our fingers that this does not create any conflict with an existing Anaconda installation… as this is clearly a preview (read here: use at your own risk).

Oh, I do NOT recommend waiting for the installation to finish…

…go watch a series or go to the gym (as I did, in fact) – the installation takes about half an hour to download and install all the required components…

 

Now AML Workbench (preview) is installed on your computer, congratulations!!

Azure 07 MLE preview.JPG

Note that you can find it at C:\Users\<user>\AppData\Local\AmlWorkbench

Oh, and get used to this icon, I have the feeling we will be visiting it for a while 😉

But let’s continue, we are not yet finished!!

 

First steps!

So, let’s do something! Baby steps of course..

Start it and log in with your Azure/Microsoft account. You should automatically see the workspace we created earlier in the Azure portal.

 

Click on the plus sign next to the “Projects” panel in the top left, or in the text menu select File and then “New Project…”.

We will give our project a name, like “Iris”, then select a directory where your Azure Machine Learning projects will be saved on your local computer, and add a description.

We have to select our workspace, which by default will be the one we just created.

We will select the “Classifying Iris” template and click the “Create” button below. This template is a companion sample project for Azure Machine Learning which uses the iris flower dataset.

Azure 07b MLE preview.JPG

Once the project has been created, we will see its dashboard.

We can see several sections in our dashboard: the home section of our project, the data sources, notebooks, runs and files.

On the project dashboard panel we can see a description of our project with instructions on how to set it up and follow the Quick start and tutorials, as well as an execution section.

The Data panel showcases the data sources and the preparations for obtaining them. This is a pretty special section with truly amazing features that can only be found in the AML Workbench, but we will look at it in a later post.

It is worth noting that the Notebooks panel is basically a Jupyter notebook container; during installation a custom Anaconda environment was set up, which did not seem to tamper with my existing Anaconda installation…

We can also open the project in Visual Studio Code or other configured IDE.

If we do not have one, we can install it now from here: https://code.visualstudio.com/

In the text menu, select File, then “Configure Project IDE”, and input a name and the path of your IDE. I selected VS Code, as we can see next:

Azure 08 MLE preview.JPG

Once this is installed, we should add Python support to VS Code, so we go to the extensions menu and select one of the Python extensions. In my case I selected Python 0.7.0 from Don Jayamanne, which seems the most complete one.

Azure 09 MLE preview.JPG

Once this is set up, we can go to the text menu and click on File, then on “Open Project”; next to it our configured IDE should appear in brackets, “(Visual Studio Code)”. We then see VS Code with the project loaded, and we can click on one of the Python source files, for example iris_sklearn.py. We should see the syntax highlighting and IntelliSense at work, among other features.

Azure 10 MLE preview.JPG

Now let’s execute it: go to the project dashboard panel, select “local”, then the source file “iris_sklearn.py”, add “0.01” as the argument and click Run.

We could also execute it from the Files panel by selecting “iris_sklearn.py”.

On the right side of AML Workbench, the Jobs pane should appear and showcase the execution(s) that we have started.

While we are at it, we could try some other executions changing the argument to range from 0.001 to 10.

Azure 11 MLE preview.JPG

What we are executing is a logistic regression algorithm using the scikit-learn library; the argument we are varying is its regularization rate.

Once the different executions have finished, we could go to the Runs panel. There, select “iris_sklearn.py” on the run list and the run history dashboard should show the different runs.

Azure 12 MLE preview.JPG

We can click on the different executions and see the details.

By now we have grasped the concepts of AML and its ecosystem, configured our environment in Azure, installed the Workbench, created a sample project and executed it locally.

Hope this was a good introduction and you enjoyed it.

 

 


Managed version (2.0) of the 1.0 Silverlight Media Quickstart

Hi, this is my humble Silverlight 1.1 (oops, sorry, 2.0) version of the Silverlight 1.0 media quickstart, something I found lacking from the 2.0 Quickstarts :P. Source code, even if there is not much of it, is included. Enjoy!

The link is this one for the stand-alone version, and this other one for the Silverlight Streaming version.

Note that for the Silverlight Streaming version you will have to create an account and put your Silverlight Streaming Live Account Id in the CreateSilverlight() JavaScript function.

This code was written for an article in a magazine we here in Spain adore, DotNetMania. If you want a detailed explanation of this “Quickstart” version, I recommend you visit the magazine’s website and subscribe to it (or buy the issue), as I will not provide further support for it beyond the download, sorry.

Anyway, it’s a great magazine and I’m honored to be able to write for it 🙂

Oops, I forgot to put in the code 😛 Here it is.