Creating a Serverless RESTful API with Azure Functions Part 2 – Creating the API

In today’s post we will create our serverless REST API with Azure Functions, using Visual Studio.

This post is part of a series; the first part should be read beforehand.

Creating the “basic MM Service”

For this we need the latest Azure Functions tools and an active Azure subscription. We can set this up from Visual Studio: in VS 2017, go to “Tools” and then “Extensions and Updates” and install or update the “Azure Functions and Web Jobs Tools” extension; in VS 2019, go to “Tools”, then “Get Tools and Features”, and install the “Azure development” workload.

We should create a new “Azure Functions” template based Project.

01 create Azure Function

And select the following options:

  • Azure Functions v2 (.NET Core)
  • Http trigger
  • Storage Account: Storage Emulator
  • Authorization level: Anonymous

02 AZ Func creation config.PNG

Then we can click create and just run it to see it working. Pressing F5 starts the Azure Functions Core Tools, which let us test our functions locally, a convenient feature I’d say. Once started, we should see a console window with the cool Azure Functions logo flashing.

03 AZ functions core tools.PNG

Then, once the startup has finished, copy the function URL, which in our case is “http://localhost:7071/api/Function1”.

04 AZ functions core tools URL.PNG

To try this from a client perspective we can use a browser and type “http://localhost:7071/api/Function1?name=we are going to build something better than this…”, but I’d rather use a serious tool like Postman and issue a GET request with a query parameter with key “name” and value “we are going to build something better than this…”. This should work right away, and some tracing should appear in the Azure Functions Core Tools console. The outcome in Postman should look like:

05 consuming our Micky Mouse with Postman.PNG

Or, if you opted for the browser test, “Hello, we are going to build something better than this…” – which we are going to do shortly.

We just saw a “Mickey Mouse” implementation, but if you are like me, that won’t satisfy you unless we add some layers to decouple responsibilities, and some interfaces because, er… we like complication, right? 😉 – Not really, but I thought it would be good to write code that does something simple yet is still as SOLID as I can get without too much effort.

What are we going to build?

We are going to create a REST API that exposes a simple list of categories (a simple POCO with Id and Name). The API controller will be the main API interface, and it will delegate obtaining the data to a service, and the service to a repository.

And we will stop there, so everything will be completely in memory: no database will be hurt in the process, and no Unit of Work will need to be implemented either.

We will create some interfaces and configure/register them with dependency injection, which basically uses the ASP.NET Core DI container.

I have a disclaimer: WordPress has somehow disabled HTML editing, so I can’t embed code from Git or have a nice “code view” in place, which is… making me think of switching to another blog provider. So if you have any suggestion, I’d appreciate it if you mention it to me in a comment or directly.

 

Creating the “boilerplate”

We will start by creating the POCO (Plain Old CLR Object) class, Category. For this we can create a “Domain” folder and, inside it, a “Models” folder where we will place this class:

    public class Category 
    {
        public int Id { get; set; } 
        public string Name { get; set; }
    }
Next we will implement the Repository pattern, which is normally used to abstract data management for a concrete entity. Usually we encapsulate in these classes all the logic to handle data: methods to list, create, edit, delete or fetch a concrete element. These operations are commonly named CRUD, for Create/Read/Update/Delete, and are meant to isolate data access from the rest of the application.
Internally, these methods would talk to the data storage but, as we mentioned, we will create an in-memory list and that will be it.
Inside the “Domain” folder we will create a “Repositories” folder, and in it we will add an “ICategoryRepository” interface:
    public interface ICategoryRepository
    {
        Task<IEnumerable<Category>> ListAsync();
        Task AddCategory(Category c);
        Task<Category> GetCategoryById(int id);
        Task RemoveCategory(int id);
        Task UpdateCategory(Category c);
    }
And a CategoryRepository class which implements this interface:
    public class CategoryRepository : ICategoryRepository
    {
        private static List<Category> _categories;
        public static List<Category> Categories
        {
            get
            {
                if (_categories is null)
                {
                    _categories = InitializeCategories();
                }
                return _categories;
            }
            set { _categories = value; }
        }

        public async Task<IEnumerable<Category>> ListAsync()
        {
            return Categories; 
        }

        public async Task AddCategory(Category c)
        {
            Categories.Add(c);
        }

        public async Task<Category> GetCategoryById(int id)
        {
            // id is already an int, so no conversion is needed
            return Categories.FirstOrDefault(t => t.Id == id);
        }
        public async Task UpdateCategory(Category c)
        {
            await this.RemoveCategory(c.Id);
            Categories.Add(c);
        }

        public async Task RemoveCategory(int id)
        {
            // Go through the Categories property (not the backing field) so the
            // list is initialized even if this is the first call
            var cat = Categories.FirstOrDefault(t => t.Id == id);
            Categories.Remove(cat);
        }

        private static List<Category> InitializeCategories()
        {
            var categories = new List<Category>();

            for (int i = 0; i < 25; i++)
            {
                categories.Add(new Category()
                {
                    Id = i,
                    Name = "Category_name_" + i.ToString()
                });
            }

            return categories;
        }
    }
This repository will be talked to only by the service, whose responsibility is to ensure the data-related tasks are performed agnostically of where the data is stored. As we mentioned, the service could access more than one repository if the data were more complex, but as of now it is a mere pass-through or man-in-the-middle between the API controller and the repository. Just trust me on this… (or search for “service and repository layer” for a good read on the topic).
All this is wired to an in-memory list of categories, List&lt;Category&gt;, opportunely named Categories, which is populated by a function named InitializeCategories().
So, now we can go and create our “Services” Folder inside the “Domain” Folder…
We should add a ICategoryService interface:
    public interface ICategoryService
    {
        Task<IEnumerable<Category>> ListAsync();
        Task AddCategory(Category c);
        Task<Category> GetCategoryById(int id);
        Task RemoveCategory(int id);
        Task UpdateCategory(Category c);
    }
Which matches the functionality of the repository perfectly, for the reasons stated, but the objectives and responsibilities are different.
We will add a CategoryService class which implements the ICategoryService interface:
    public class CategoryService : ICategoryService
    {
        private readonly ICategoryRepository _categoryRepository;

        public CategoryService(ICategoryRepository categoryRepository)
        {
            _categoryRepository = categoryRepository;
        }

        public async Task<IEnumerable<Category>> ListAsync()
        {
            return await _categoryRepository.ListAsync();
        }

        public async Task AddCategory(Category c)
        {
            await _categoryRepository.AddCategory(c);
        }
        public async Task<Category> GetCategoryById(int id)
        {
            return await _categoryRepository.GetCategoryById(id);
        }
        public async Task UpdateCategory(Category c)
        {
            await _categoryRepository.UpdateCategory(c);
        }
        public async Task RemoveCategory(int id)
        {
            await _categoryRepository.RemoveCategory(id);
        }
    }
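To sanity-check the layering before wiring anything into Azure, the chain can be exercised from a plain console app. The sketch below inlines trimmed copies of the article’s classes so it compiles on its own; the method bodies return Task.FromResult/Task.CompletedTask instead of being marked async, which also avoids the “async method lacks await” compiler warnings the article’s version produces:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public class Category { public int Id { get; set; } public string Name { get; set; } }

public interface ICategoryRepository
{
    Task AddCategory(Category c);
    Task<Category> GetCategoryById(int id);
    Task RemoveCategory(int id);
}

public class CategoryRepository : ICategoryRepository
{
    private readonly List<Category> _categories = new List<Category>();

    public Task AddCategory(Category c) { _categories.Add(c); return Task.CompletedTask; }

    public Task<Category> GetCategoryById(int id) =>
        Task.FromResult(_categories.FirstOrDefault(t => t.Id == id));

    public Task RemoveCategory(int id)
    {
        var cat = _categories.FirstOrDefault(t => t.Id == id);
        if (cat != null) _categories.Remove(cat);
        return Task.CompletedTask;
    }
}

// The service is a pass-through here, but it is the seam where caching,
// validation or multi-repository logic would later live.
public class CategoryService
{
    private readonly ICategoryRepository _repo;
    public CategoryService(ICategoryRepository repo) => _repo = repo;

    public Task AddCategory(Category c) => _repo.AddCategory(c);
    public Task<Category> GetCategoryById(int id) => _repo.GetCategoryById(id);
    public Task RemoveCategory(int id) => _repo.RemoveCategory(id);
}

public static class LayeringDemo
{
    public static async Task Main()
    {
        var service = new CategoryService(new CategoryRepository());

        await service.AddCategory(new Category { Id = 999, Name = ".NET Core 3.0" });
        Console.WriteLine((await service.GetCategoryById(999)).Name); // ".NET Core 3.0"

        await service.RemoveCategory(999);
        Console.WriteLine(await service.GetCategoryById(999) is null); // "True"
    }
}
```

In the real project the console Main goes away; the Azure Functions DI container (set up in the next section) performs exactly the `new CategoryService(new CategoryRepository())` wiring shown here.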
With this we should be able to implement the controller without any showstopper.

Adding the necessary services wiring up

This is where everything comes together. There are some changes though, because we will be adding dependency injection to set up the services. Basically we are following the steps from the official Microsoft how-to guide, which in short are:

1. Add the following NuGet packages:

  • Microsoft.Azure.Functions.Extensions
  • Microsoft.NET.Sdk.Functions version 1.0.28 or later (at the moment of writing this, it was 1.0.29).

2. Register the services.

For this we have to add a Startup class to configure and add components to an IFunctionsHostBuilder instance. For this to work we have to add a FunctionsStartup assembly attribute that specifies the type used during startup.

In this class we override the Configure method, which receives the IFunctionsHostBuilder as a parameter, and use it to register the services.

For this we will create a C# class named Startup with the following code:

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(ZuhlkeServerlessApp.Startup))]

namespace ZuhlkeServerlessApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddSingleton<ICategoryService, CategoryService>();
            builder.Services.AddSingleton<ICategoryRepository, CategoryRepository>();
        }
    }
}

Now we can use constructor injection to make our dependencies available to another class. For example, the REST API, which is implemented as an HTTP-triggered Azure Function, will resolve the ICategoryService in its constructor.

For this to happen we have to change how the HTTP trigger is originally generated (hint: it’s static…), since we need it to call its constructor.

We will remove the static keyword from the generated HTTP trigger function and from the class that contains it, or else generate a completely new file.

We will either edit it or add a new one with the name “CategoriesController”. It should look like follows:

    public class CategoriesController
    {
        private readonly ICategoryService _categoryService;
        public CategoriesController(ICategoryService categoryService) //, IHttpClientFactory httpClientFactory
        {
            _categoryService = categoryService;
        }
..

The dependencies will be registered at application startup and resolved through the constructor each time a concrete Azure Function is called, as its containing class will now be instantiated, since it is no longer static.

So we cannot use static functions if we want the benefits of dependency injection; IMHO a fair trade-off.

In the code I am using DI in two places: one in the CategoriesController and another in an earlier class already shown, the CategoryService. Did you notice the constructor injection?

 

Creating the REST API

As stated on the earlier section, we removed the static from the class and the functions.

Now we are ready to finish implementing the final part of the REST API… Here we will implement the following:

  1. Get all the Categories (GET)
  2. Create a new Category (POST)
  3. Get a Category by ID (GET)
  4. Delete a Category (DELETE)
  5. Update a Category (PUT)

Get all the Categories

Starting with getting all the categories, we will create the following function:
        [FunctionName("MMAPI_GetCategories")]        
        public async Task<IActionResult> GetCategories(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "categories")]
            HttpRequest req, ILogger log)
        {
            log.LogInformation("Getting categories from the service");
            var categories = await _categoryService.ListAsync();
            return new OkObjectResult(categories);
        }
Here we are using “MMAPI_GetCategories” as the function name, with the “MMAPI_” prefix to group related functions together in the Azure UI: everything we deploy as a package will be listed together, so a naming convention is convenient for ordering and grouping.
The route for our HttpTrigger is “categories” and we associate this function with the HTTP GET verb, without any parameters.
Note we are using async here, as the service might take some time and not be immediate.
Once we get the list of categories from our service, we return it inside an OkObjectResult.

Create a new Category

For creating a new Category we will create the following function:
        [FunctionName("MMAPI_CreateCategory")]
        public async Task<IActionResult> CreateCategory(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "categories")] HttpRequest req,
    ILogger log)
        {
            log.LogInformation("Creating a new Category");

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            var newCategory = JsonConvert.DeserializeObject<Category>(requestBody);

            await _categoryService.AddCategory(newCategory);

            return new OkObjectResult(newCategory);
        }

This will use the POST HTTP verb, has the same route and will expect the body of the request to contain the Category as JSON.

Our code reads the body and deserializes it – I am using the Newtonsoft.Json library for this – and delegates to our service interface the task of adding the new category.

Then we return the object to the caller with an OkObjectResult.
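For reference, the POST body is plain JSON whose property names match the Category POCO (Newtonsoft.Json binds names case-insensitively), for example:

```json
{
  "Id": 999,
  "Name": ".NET Core 3.0"
}
```

Sending this from Postman with a Content-Type of application/json should return the same object back in the 200 response.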

Get a Category by ID

The code is as follows, using a GET HTTP verb and a custom route that will include the ID:

        [FunctionName("MMAPI_GetCategoryById")]
        public async Task<IActionResult> GetCategoryById(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "categories/{id}")]HttpRequest req,
            ILogger log, string id)
        {
            var categ = await _categoryService.GetCategoryById(Convert.ToInt32(id));            
            if (categ == null)
            {
                return new NotFoundResult();
            }

            return new OkObjectResult(categ);
        }

To get a concrete category by its ID, we adjust the route to include the ID, which we retrieve and hand over to the CategoryService to look it up.

Then, if found, we return our already well-known OkObjectResult with it or, if the service cannot find it, a NotFoundResult.
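One caveat: the route parameter arrives as a string, and Convert.ToInt32 throws a FormatException on non-numeric input, which surfaces as a 500 error. A hedged improvement (not in the article’s code) would be to guard with int.TryParse and return a BadRequestObjectResult instead; the parsing behaviour itself is easy to check in isolation:

```csharp
using System;

static class IdParsing
{
    // Mirrors the guard you could add before calling the service:
    // Convert.ToInt32 throws on bad input, int.TryParse reports failure instead.
    public static bool TryParseId(string raw, out int id) => int.TryParse(raw, out id);

    public static void Main()
    {
        Console.WriteLine(TryParseId("42", out var ok) ? ok.ToString() : "bad"); // "42"
        Console.WriteLine(TryParseId("abc", out _) ? "parsed" : "bad");          // "bad"
    }
}
```

Inside the function, the guard would look roughly like `if (!int.TryParse(id, out var categoryId)) return new BadRequestObjectResult(...);` before handing the id to the service.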

 

Delete a Category

The delete is similar to the get-by-ID but uses the DELETE HTTP verb and obviously different logic. The code is as follows:
        [FunctionName("MMAPI_DeleteCategory")]
        public async Task<IActionResult> DeleteCategory(
            [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "categories/{id}")]HttpRequest req,
            ILogger log, string id)
        {
            var OldCategory = await _categoryService.GetCategoryById(Convert.ToInt32(id));
            if (OldCategory == null)
            {
                return new NotFoundResult();
            }

            await _categoryService.RemoveCategory(Convert.ToInt32(id));
            
            return new OkResult();
        }
The route is the same as in the previous HTTP trigger, and we also retrieve the category by ID first. The difference is that, if we find it, we ask the service to remove the category.
Once this is complete we return an OkResult, without any category, as we have just deleted it.

Update a Category (PUT)

The update process is similar to the delete, with some changes. The code is as follows:

        [FunctionName("MMAPI_UpdateCategory")]
        public async Task<IActionResult> UpdateCategory(
            [HttpTrigger(AuthorizationLevel.Anonymous, "put", Route = "categories/{id}")]HttpRequest req,
            ILogger log, string id)
        {
            var OldCategory = await _categoryService.GetCategoryById(Convert.ToInt32(id));
            if (OldCategory == null)
            {
                return new NotFoundResult();
            }

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            var updatedCategory = JsonConvert.DeserializeObject<Category>(requestBody);
            await _categoryService.UpdateCategory(updatedCategory);

            return new OkObjectResult(updatedCategory);
        }

Here we use the PUT HTTP verb and receive an ID that indicates the category we want to update; the body, just as with the creation of a new category, contains the category as JSON.

We check that the category exists, then deserialize the body into a Category and delegate the update to the service. Note that the service replaces the category whose Id comes in the body, so the body’s Id should match the one in the route.

Finally, we return the OkObjectResult containing the updated category.

 

Deploying the REST API

We could now test it locally as we did at the beginning, but what’s the fun in that, right?
So, in order to deploy it to the cloud, we need a Microsoft account logged in to Visual Studio with an Azure subscription associated to it.
We will build in Release, then right-click the project and choose “Publish…”.
06 Publishing our REST API.PNG
We will choose “Azure Function App” as the publish target and “Create New”. It is also recommended to check “Run from package file”.
07 Publishing configuration 01.PNG
And click on “Create Profile”. We will be presented with the profile configuration, where I recommend creating all the resources anew rather than reusing existing ones.
Regarding the hosting plan, be sure to pick the “Consumption plan”, which includes around 1 million executions per month before any charge is made; convenient for testing purposes, I’d say.
08 Publishing configuration 02.PNG
Note that by the time I publish this post, this app service will be long gone, so no issues in posting it :).
Once configured, we click “Create” and wait for all the resources to be created; once this is done, the project will be built and deployed to Azure.

Checking our Azure REST API

Now it is a good moment to move to our favorite web browser and go to portal.azure.com.
We can go straight to the resource group we created and into the App Service. Sometimes, when publishing, the functions are not populated.
If that happens, don’t panic, just re-publish it again. The publish view should look like:
09 Publishing again if no functions.PNG
A bit later, we should be able to see our functions in our App Service:
10 Our REST API in azure.PNG
To try this quickly, we should go to the GetCategories, click on the “Get function URL” and copy it.
11 Get function URL
Then we can go into our friend Postman, set up a GET HTTP request and paste the URL we just copied in the, er, URL field. And click Send.
We should see something like:
12 Trying from Postman out new Azure Function REST API
We could also test this in a web browser or in the Test section of the Azure portal, but I like to test things as close to customer conditions as possible.
As a next step, I throw you the challenge of testing this serverless REST API we just built in Azure with Azure Function Apps. I’d propose creating a new category such as “999” – “.NET Core 3.0”, then retrieving it by ID, modifying it to “.NET 5.0”, retrieving it again, and finally deleting it and trying to retrieve it once more, just to get the NotFoundResult.
So, what do you think about Azure Functions and what we have just built?

Next..

And that’s it for today; in our next entry we will jump into API Management.

Happy Coding & until the next entry!

 

 

 

 

 

Creating a Serverless RESTful API with Azure Functions Part 1 – Introduction

Azure’s latest (and in my opinion greatest) trend is serverless: it’s cool, anything serverless can scale seamlessly and, if done right, be elastic and adapt to a dynamically changing load… which makes it ideal for implementing microservices.

That said, I am pretty new to Azure at the moment but learning nonstop, about one or two weeks away from facing the AZ-203 exam, as mentioned some weeks ago.

But I like to play, and this is the outcome of some experimenting and managing to get something close to a “proper” REST API endpoint in Azure, along with OpenAPI and some Azure-provided goodies.

What and how?

So, what exactly will we be doing, and how will we do it?

I will be implementing a REST API endpoint using Azure Functions through an HTTP trigger and C#, with Visual Studio 2019.

Then I plan to implement OpenAPI support with Azure API Management, which has several benefits, such as getting the documentation and test layer done almost automatically for us.

This said, I initially thought of using an Azure Functions Proxy (introduced at the end of 2017) as the front end for the implemented API, which offers benefits like redirecting to a mock while the API is in development, or being able to switch between a staging and an experimental new version of the API, etc. But it seems that API Management does this and much more, so I am unsure I will be using this feature… to understand it, this diagram might be of help:

Solution-Architecture-Diagram

So basically the proxy acts as a facade between a user/consumer of our API and our API (the Azure Function in the diagram). There we can transform the request or response, redirect to another Azure Function or service, etc.

As said, I’ll use API Management and not dig into Azure Functions Proxies.

Also, I do not plan into digging into authentication-authorization but that might change as we progress into it.

 

Azure Functions 101

Azure Functions are a great solution for running short-lived functions: an HTTP-triggered function must respond within roughly 2.5 minutes, and the general execution timeout is 5 minutes by default, configurable up to 10 minutes.
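The execution timeout just mentioned can be raised in the function app’s host.json; a minimal sketch (the 10-minute value assumes the Consumption plan, where that is the ceiling):

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```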

They can be implemented in several languages, including C#, and support dependencies such as NuGet packages.

They support integrated security with OAuth providers such as Azure Active Directory, Facebook, Google, Twitter, etc.

They integrate perfectly with various Azure and third-party services.

FaaS/LaaS or PaaS – mainly they are serverless, meaning “Function as a Service” (also named “Logic as a Service”), but we can change the pricing plan from the default “Consumption” to “App Service”, making it basically a “Web API”. That said, the Consumption plan right now comes with 1 million free executions a month, which is not bad, right?

 

Azure Functions Proxies

Azure Functions Proxies act as a single URI, a unified facade on which we can apply redirections and transformations to our convenience, like:

  • Request and Response overrides.
  • Modify input request to send different HTTP verbs, headers or query parameters to a hidden backend service.
  • Replace Backend response status code, headers and body.
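As a sketch of what such a proxy definition looks like, a proxies.json entry could be roughly as follows (the route, backend URL and header below are hypothetical examples):

```json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "CategoriesFacade": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/api/categories"
      },
      "backendUri": "https://my-backend-app.azurewebsites.net/api/categories",
      "responseOverrides": {
        "response.headers.X-Served-By": "azure-functions-proxy"
      }
    }
  }
}
```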

This said, API Management supersedes Azure Functions Proxies by far. Still, proxies have their role: if we only want to unify separate functions or API services into a single API, we should use Azure Functions Proxies.

If we want full API management, then we know what to use.

That said, proxies might be interesting in a somewhat complex environment. As said, an image is worth 1000 words:

api-management_500x259

So they could be used to unify several related API microservices, whilst having a greater API facade which manages them all.

Or to mock an Azure Function..

As we will implement something way simpler, I’d go for the API Management. Also, it seems more fun.

 

API Management

API Management enables a full-fledged professional facade/gateway for our API, providing built-in services which support all that Azure Functions Proxies can do, plus:

  • Load Balancing
  • Hot Swapping
  • Rate limiting
  • Authentication and Authorization
  • IP Whitelisting
  • Access Policies
  • Caching
  • Subscriptions, including user and RBAC management
  • Licensing
  • Business Insights & Analytics
  • Developer Portal (for ISV partners and/or internal use, including API documentation, an interactive console, account creation and analytics on own usage)

The API Management consumption tier was announced in December 2018 and was recently released (it was in preview) at the end of May 2019; the announcement clearly says that it is specifically meant for microservices.

Read more on the official Microsoft documentation on API Management or just get the whitepaper.

 

Next..

And that’s it for today; in our next entry we will create and verify our first REST API in C# with Azure Functions.

Happy coding!

Introduction to the Azure Machine Learning Workbench

Following the announcements post published some days ago here, we will dig deeper into this new tool, the Workbench. It is also called AML Workbench, which is shorter, and that term will be used from now on to refer to the Azure Machine Learning Workbench (glad about the acronym as I do not want to type that again :P).

 

But, what’s the AML Workbench?

It is a desktop application for Windows and macOS with built-in data preparation that learns the data preparation steps as we perform them, and it is able to take advantage of the best open-source frameworks, including TensorFlow, Cognitive Toolkit, Spark ML and scikit-learn.

This also means that if you have a GPU that supports AI (read my earlier blog post on the topic here https://joslat.blog/2017/10/15/give-me-power-pegasus-or-the-state-of-hardware-in-ai/) you will benefit from that power head-on.

Oh, it has also a command line interface for those who like them 😀

 

Sounds interesting? Then let’s get started!

 

Concepts first!

AML – Azure Machine Learning

This needs to be described early as it might be a bit confusing. It is a solution proposal from Microsoft that encompasses different components and services to provide an integrated end-to-end solution for data science and advanced analytics.

With it we can prepare data, develop experiments and deploy models at cloud scale (read: massive scalability).

AML consists of a few components:

  • AML Workbench – Desktop tool to “do-it-all” from a single location.
  • AML Experimentation Service – I “suppose” this will enable us to validate hypothesis in a protected scenario.
  • AML Model Management Service – I suppose this will enable us to manage our models
  • Microsoft Machine Learning Libraries for Apache Spark (MML Spark Library) – I read Spark/Hadoop integration here, probably to Azure servers
  • Visual Studio Code Tools for AI – I read here R & Python integration with Visual Studio

 

This picture showcases how AML Workbench fits in the Microsoft AI Ecosystem:

AML intro architec high level.JPG

Suffice to say that AML fully integrates with OS (open source) initiatives such as scikit-learn, TensorFlow, Microsoft Cognitive Toolkit or Spark ML.

The created experiments can be run in managed environments such as Docker containers and clusters running Hadoop with Spark (I was wondering why Microsoft only mentions Spark there if they work together – OK, as Spark was built as an improvement over MapReduce, it can also run standalone in the cloud, that’s why!). They can also use advanced hardware like GPU-enabled VMs in Azure.

AML is built on top of the following technologies:

  • Jupyter Notebooks
  • Apache Spark
  • Docker
  • Kubernetes
  • Python
  • Conda

 

AML Workbench (yeah, finally!)

Desktop application with a command-line for Windows & MacOS to manage ML solutions through the entire data science life cycle.

  • ETL
  • Model development and experiment management
  • Model Deployment

 

It provides the following functionalities:

  • Data Preparation that can learn by example (Wow!)
  • Data source abstraction
  • Python SDK for invoking visually constructed data preparation packages (SSIS anyone?)
  • Built-in Jupyter Notebook service and Client UX (like anaconda?)
  • Experiment monitoring and management
  • Role-based access to support sharing and collaboration
  • Automatic project snapshots for each run and version control (traceability on “experiments” at last!) along with Git integration
  • Integration with popular Python IDEs

 

Let’s install it!

First things first: do you have a computer with Windows 10 or macOS Sierra? (I guess you won’t have a Windows Server 2016 at home, do you?) If so, proceed… else go update https://www.microsoft.com/en-us/store/b/windows 😉

 

Oh, well… before installing we need to set up an ML experimentation account..

Go log-in in the Azure portal here https://portal.azure.com/

Click on the new button (+) on the top left corner of the Azure portal, type “machine learning” and select “Machine Learning Experimentation (preview)”.

Azure 01 MLE preview.JPG

Click create and fill in the nice form:

Azure 02 MLE preview.JPG

Be sure to select “DevTest” as the cost-saving Model Management pricing tier, otherwise it will have a cost. DevTest appears as “0.00”. Otherwise you might forget and have a non-pleasant surprise…

Azure 03 MLE preview.JPG

As I am not that much into playing with Azure at a personal level (mostly HOLs and learning), I deleted all my resources, including a DB I created at a HOL that suddenly had a cost… luckily very low… and created all the required elements from the ground up: resource group, experimentation account, storage account, workspace, model management account, account name… My recommendation is that you keep that data safe and close to you, as this is all protected by Microsoft’s Azure security.

Oh, and click “Create” to create all these components in the cloud. We should see a “Deployment in progress…” message which should be over in a couple of minutes, as shown in the picture below.

Azure 04 MLE preview.JPG

We should also see the details of the resources created – storage, resource group, etc. – along with some useful tools to download (at last!) the AML Workbench.

We can also download it from here https://aka.ms/azureml-wb-msi

And double click it or right click and select “install”.

After the installer loads we should see the gorgeous installer…

Azure 05 MLE preview.JPG

It’s clean, it’s Metro..ups! I meant modern!

As usual, click continue and you will be presented the dependencies and installation path, shown next.

Azure 06 MLE preview.JPG

There, I did not like that I could not change the installation path… so we can only click the install button… and cross fingers that this does not create any conflict with your Anaconda installation, as this is clearly a preview (read here: under your own responsibility).

Oh, I do NOT recommend you to wait for the installation to finish…

…go watch a series or to the gym (as I did in fact) – the installation will take about half an hour to download and install all the required components…

 

Now AML Workbench (preview) is installed in your computer, congratulations!!

Azure 07 MLE preview.JPG

Note that you can find it at C:\Users\<user>\AppData\Local\AmlWorkbench

Oh, and get used to this icon, I have the feeling we will be visiting it for a while 😉

But let’s continue, we are not yet finished!!

 

First steps!

So, let’s do something! Baby steps of course..

Start it and log in to your Azure/Microsoft account. You should automatically see the workspace we just created in the Azure portal.

 

Click on the plus sign next to the “Projects” panel in the top left or, in the text menu, select File and then “New Project…”.

We will give our project a name, like “Iris”, then select a directory to save your Azure Machine Learning Projects in your local computer and add a description.

We have to select our workspace, which by default will be the one we just created in the first place.

We will select the template “Classifying Iris” and click on the “Create” button below. This template is a companion sample project for Azure Machine Learning which uses the iris flower dataset.

Azure 07b MLE preview.JPG

Once the project has been created we will see the dashboard of the recently created project.

We can see several sections from our dashboard: the home section of our project, the data sources, notebooks, runs and Files.

On the project dashboard panel we can see a description of our project with instructions on how to set it up and follow the Quick start and tutorials, as well as an execution section.

The Data panel showcases the data sources and the preparations for obtaining them. This is a pretty special section with truly amazing features that can only be found on the AML Workbench, but we will see it on a next post.

It is worth noting that the Notebooks panel is basically a Jupyter notebook container; during installation a custom Anaconda installation was made, which did not seem to tamper with my own Anaconda installation…

We can also open the project in Visual Studio Code or other configured IDE.

If we do not have one, we can install it now from https://code.visualstudio.com/

In the text menu, select File, then “Configure Project IDE” and input a name and the path of your IDE; I selected VS Code, as we can see next:

Azure 08 MLE preview.JPG

Once this is installed, we should install Python support for VS Code, so we go to the extensions menu and select one of the Python extensions. In my case I selected Python 0.7.0 from Don Jayamanne, which seems the most complete.

Azure 09 MLE preview.JPG

Once this is set up, we can go to the text menu, click on File, then on “Open Project”; next to it our configured IDE should appear in brackets, “(Visual Studio Code)”. We will see VS Code with the project loaded, and we can click on one of the Python source files, for example iris_sklearn.py. We should see the syntax highlighting and IntelliSense at work, among other features.

Azure 10 MLE preview.JPG

Now, let’s execute it: go to the project dashboard panel and select “local”, then the source file “iris_sklearn.py”, add “0.01” in the arguments and click Run.

We could also execute it from the Files panel by selecting “iris_sklearn.py”.

On the right side of AML Workbench, the Jobs pane should appear and showcase the execution(s) that we have started.

While we are at it, we could try some other executions, changing the argument in the range from 0.001 to 10.

Azure 11 MLE preview.JPG

What we are doing is executing a logistic regression algorithm which uses the scikit-learn library.

Once the different executions have finished, we can go to the Runs panel. There, select “iris_sklearn.py” in the run list and the run history dashboard will show the different runs.

Azure 12 MLE preview.JPG

We can click on the different executions and see the details.

By now we have grasped the concepts of AML and its ecosystem, configured our environment in Azure, installed the Workbench, created a sample project and executed it locally.

Hope this was a good introduction and you enjoyed it.

 

 

Sources: