Monday, April 27, 2020

What is Helm for Kubernetes?

In this post, I would like to talk about what Helm is and why we need it, how to install and uninstall a chart, and the difference between a repo and a hub.

For this post, I am also assuming you are familiar with Kubernetes at a high level.

Before we dive into the details, let’s first understand what the word helm means in English.

“Helm is a lever or wheel controlling the rudder of a ship for steering.” – Merriam-Webster

In a ship, you might have seen a wheel-like mechanism used by the captain to steer the ship.

The logo for Helm (in the Kubernetes context) on the helm.sh website is also a ship’s wheel, so we can now connect the dots between the name and the logo.


If you want to set up WordPress inside a Kubernetes cluster, you will have to find the relevant Docker images for the WordPress front end and the MySQL database, and then set up networking, configuration, secrets, load balancing, etc., by applying multiple .yaml files.

Once everything is set up and working, you will not want to touch your setup. But life doesn’t end there; eventually, you will have to worry about things such as:

1. Deleting deployments

2. Setting up another WordPress instance for another customer

3. Updating your images with new WordPress or MySQL images

4. Rolling back an installation manually

and more.

Wouldn’t it be nice if we didn’t have to worry about any of those .yaml files?

Wouldn’t it be nice if we could leverage a WordPress expert’s knowledge of installing and configuring WordPress in a cluster?

What if we could just execute a few commands to install, uninstall, and upgrade software?

That’s what Helm does for Kubernetes. Helm helps you steer your software into your cluster.

You can execute Helm commands against your K8S cluster, such as:

helm search hub wordpress

helm install my-wordpress bitnami/wordpress

helm uninstall my-wordpress 

These commands will look familiar if you have used apt-get, Chocolatey, or Homebrew.

Helm, just like apt-get, Chocolatey, or Homebrew, is a package manager.

From the Helm website: “Helm is the package manager for Kubernetes. It is the best place to find, share and install software for Kubernetes.”

It is the package manager for your Kubernetes cluster, not for your machine.

You can install helm on your machine by following your operating system specific instructions as shown on the helm site - https://helm.sh/docs/intro/install/


Under the hood, Helm uses the standard Kubernetes APIs to install software into the K8S cluster.

A package manager is responsible for installing, uninstalling, and upgrading software packages into a destination from a remote or local package repository.

Likewise, Helm works against repositories hosted locally or remotely, and a repo can be hosted by anyone. For example, Google has its own Helm repository, and Bitnami hosts its own repository.

A repository contains many software packages. Each package has multiple versions.

Within the Helm context, a package is called a chart. From now on, we will refer to packages as charts.

By default, Helm doesn’t know about any repositories. If you want to use a particular repository, you first have to add it to Helm:

helm repo add bitnami https://charts.bitnami.com/bitnami

After you have done that, you can install any chart from the bitnami repository:

helm install my-wordpress bitnami/wordpress

When you execute the above command, you are telling helm to install the wordpress chart from the bitnami repository.

You can uninstall a chart with:

helm uninstall my-wordpress

A small recap – Helm is a package manager that works against one or more repositories (hosted by anyone) to install, uninstall, or upgrade charts in a Kubernetes cluster.

Are you with me so far? If yes, then let’s continue.

Since repositories can be hosted by anyone (some private, some public), how do we find and search for charts within all these repositories?

Do we have to add every single repository out there?

How do we discover these repositories?

Helm Hub is a central location to easily find charts that are hosted outside the helm project.

When you execute the command helm search --help, you are presented with two options: search hub and search repo. In the commands below, we search for the wordpress chart against the hub and then against the repository.

helm search hub wordpress

helm search repo wordpress

What happens when you install a chart (remember, we called it a package initially)?

A chart is nothing but a collection of files that describes a related set of Kubernetes resources. For example, a wordpress chart will have all the .yaml files required to install WordPress. In addition, it will have metadata about the chart itself.
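As a sketch, a chart on disk is just a directory of files. A simplified layout looks like the following (the file names are typical of Helm charts in general, not the exact contents of the bitnami wordpress chart):

```text
wordpress/
  Chart.yaml          # metadata about the chart: name, version, description
  values.yaml         # default configuration values, overridable at install time
  charts/             # charts this chart depends on (e.g. a database chart)
  templates/          # the templated Kubernetes .yaml manifests
    deployment.yaml
    service.yaml
    secrets.yaml
```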

When you install a chart, it creates a new release in your K8S cluster. A release is an instance of a chart. For example, when you install the wordpress chart using the install command mentioned earlier, it creates a new release named my-wordpress. That release is unique to this cluster.

You can install a chart multiple times to create multiple releases. For example, if you install the wordpress chart 3 times, you will have 3 WordPress instances configured, all with their own unique URLs, usernames, and passwords. By executing the commands below, we will have 3 releases installed in our cluster.

helm install my-wordpress1 bitnami/wordpress

helm install my-wordpress2 bitnami/wordpress

helm install my-wordpress3 bitnami/wordpress

Clear?

Take two: let’s say you install the mysql chart 3 times in your cluster; then you have 3 MySQL database instances configured in your cluster.

You can execute the helm list command to see what is installed in your cluster.

Finally, you can uninstall a release by executing the command helm uninstall my-wordpress
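Putting the whole lifecycle together, the commands look like this sketch (the upgrade and rollback steps address the worries from the beginning of the post; the release name is illustrative):

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami   # one-time repo setup
helm repo update                                           # refresh the local chart index

helm install my-wordpress bitnami/wordpress   # create a release
helm list                                     # see what is installed
helm upgrade my-wordpress bitnami/wordpress   # move to a newer chart version
helm rollback my-wordpress 1                  # go back to revision 1 if the upgrade misbehaves
helm uninstall my-wordpress                   # delete the release
```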

I hope this helped you understand Helm a bit better.

Wednesday, July 17, 2019

Passing multiple parameters in an Angular Route using RouteLink

In Angular, let’s say you have a route defined like this.

{ path : 'user/:userId/building/:buildingId', component: UserScheduleDetailsComponent, pathMatch : 'full'}

and you want to navigate the user to this component using this route. I couldn’t easily find a way to pass these parameters using routerLink.

So this is how you do it.

<a [routerLink]="['/user', userId, 'building', buildingId]">I am a link</a>

The userId and buildingId are public properties in the component’s .ts file. There is just one gotcha: the hard-coded intermediate segments, e.g. 'building', must not have a '/' at the beginning or the end. Only the first segment of the route can start with '/'.
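To make the slash rule concrete, here is a minimal sketch of a helper that builds the commands array (userBuildingLink is my own illustrative name, not an Angular API). The same array works for both [routerLink] and router.navigate(...):

```typescript
// Builds the commands array for [routerLink] or router.navigate().
// Note: only the first segment starts with '/'; the literal 'building'
// segment has no leading or trailing slash.
function userBuildingLink(userId: number, buildingId: number): (string | number)[] {
  return ['/user', userId, 'building', buildingId];
}

// Usage in a template (userId and buildingId are component properties):
// <a [routerLink]="userBuildingLink(userId, buildingId)">I am a link</a>
```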

Sunday, June 16, 2019

Unit Testing ASP.NET Core Web API using XUnit and FakeItEasy

Let’s take one ASP.NET Core Web API method and add unit tests using xUnit and FakeItEasy.

My environment: Visual Studio 2019 Enterprise, ASP.NET Core 2.2, FakeItEasy, AutoFixture and XUnit.

Source code

I have a Before and After version of the source code hosted on github. In the After folder, you can view the completed solution.

System Under Test

There is a ProductsController with one HTTP GET method returning a list of products, and a unit test project. A ProductService is injected into ProductsController to return the products. In the following sections, we will add tests around this code and make it more production ready. One goal of this post is to show how we often start with a code snippet like the one shown below, and then, when the code goes to production, there are all sorts of conditions we have to account for. We will add all those conditions, but let’s do it by adding a test for each condition.

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly IProductService _productService;

    public ProductsController(IProductService productService)
    {
        _productService = productService;
    }

    [HttpGet]
    public ActionResult<IEnumerable<Product>> Get()
    {
        return _productService.GetProducts();
    }
}

Let’s write our first test to validate the Get() method shown above.

I like to start with a simple empty method whose name is split into three parts. This helps me understand how the code should behave under a given condition. The three parts are as follows:

1. The actual name of the method being tested – Get

2. The condition – WhenThereAreProducts

3. The expected outcome – ShouldReturnActionResultOfProductsWith200StatusCode

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{

}

We will add all the dependencies required by ProductsController; below is the code that does that.

using AutoFixture;
using System;
using UnitTestingDemo.Controllers;
using UnitTestingDemo.Services;
using Xunit;
using FakeItEasy;
namespace UnitTestingDemo.Tests
{
    public class ProductControllerTest
    {
        //Fakes
        private readonly IProductService _productService;

        //Dummy Data Generator
        private readonly Fixture _fixture;
        
        //System under test
        private readonly ProductsController _sut;
        public ProductControllerTest()
        {
            _productService = A.Fake<IProductService>();
            _sut = new ProductsController(_productService);

        _fixture = new Fixture();
        }

        [Fact]
        public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
        {
            //Arrange


            //Act


            //Assert
        }
    }
}

In the above code, I have added comments that explain what each piece of code does. The public constructor is responsible for setting up our private objects. The system under test is called _sut, so it is easy to locate in every test method. Each unit test has three sections: Arrange, Act, and Assert. I like to put in these comments so it is easier to scan the different pieces of logic.

Next, we will add code to Arrange and Act as shown below,

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{
    //Arrange
    var products = _fixture.CreateMany<Product>(3).ToList();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get();

    //Assert

}

In the above code, we create 3 fake products using _fixture, and then, using FakeItEasy, we define that when GetProducts() is called it should return those three fake products. Then we call _sut.Get().

In the Assert part, we want to make sure that _productService.GetProducts() was called, that it returned a result of type ActionResult, and that the status code is 200. If it doesn’t return that status code, the test should fail, and then we will refactor our ProductsController code.

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{
    //Arrange
    var products = _fixture.CreateMany<Product>(3).ToList();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get() as ActionResult<IEnumerable<Product>>;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.IsType<ActionResult<IEnumerable<Product>>>(result);
    Assert.NotNull(result);
    Assert.Equal(products.Count, result.Value.Count());
}

In the above code, in the Assert section, we are making sure that _productService.GetProducts() was called exactly once, that the result is of type ActionResult, that the result is not null, and that the count of products returned is the same as what we created in the Arrange section. Using this approach, we are not able to validate the status code of the result. In order to test the status code, we will have to modify the controller code.

Lesson learned: use ActionResult in the method signature and, instead of returning a List directly, return Ok(products). The OkObjectResult contains a status code, and using ActionResult makes it easier to test for different status codes.

Controller Method is now modified as below and the test is now modified to test for a valid 200 status code.

[HttpGet]
public ActionResult Get()
{
    return Ok(_productService.GetProducts());
}

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{
    //Arrange
    var products = _fixture.CreateMany<Product>(3).ToList();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get() as OkObjectResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);
    var returnValue = Assert.IsType<List<Product>>(result.Value);
    Assert.Equal(products.Count, returnValue.Count());
    Assert.Equal(StatusCodes.Status200OK, result.StatusCode);
}

Now that all the Assert statements pass, we ask ourselves another question: is this the only test case to test against? Can there be more test cases that we haven’t accounted for? What if there is an unhandled exception, or no product is found?

Let’s add another test that covers the case of an unhandled exception. Before you look at the following code, ask yourself: what should the expected outcome be when the ProductService throws an exception? It should probably return a 500 status code and possibly log the error.

First, let’s begin by writing an empty test with a descriptive name as shown below.

[Fact]
public void Get_WhenThereIsUnhandledException_ShouldReturn500StatusCode()
{
    //Arrange

    //Act
    

    //Assert
    
}

Next, we define the behavior of ProductService so that it throws an exception whenever GetProducts is called. If an exception is thrown, we want to ensure that a 500 HTTP status code is returned from the web service. The following test will fail, since we are not yet handling that case properly.

[Fact]
public void Get_WhenThereIsUnhandledException_ShouldReturn500StatusCode()
{
    //Arrange
    A.CallTo(() => _productService.GetProducts()).Throws<Exception>();

    //Act
    var result = _sut.Get() as StatusCodeResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);
    Assert.Equal(StatusCodes.Status500InternalServerError, result.StatusCode);
}

Let’s modify the Get method to handle the unhandled exception by putting the processing logic into a try/catch block. After modifying the code as shown below, you can run the test again, and this time it will pass.

[HttpGet]
public ActionResult Get()
{
    try
    {
        return Ok(_productService.GetProducts());
    }
    catch (Exception ex)
    {

    }
    return StatusCode(StatusCodes.Status500InternalServerError);
}

We would like to add some kind of logging in the catch block. Logging using ILogger is the way to go; however, unit testing with ILogger is a bit problematic, because you have to use the Adapter pattern to create your own logger that wraps ILogger. For this post, I created a simple logger called MyLogger with just a Log method to demonstrate unit testing.

The MyLogger.cs code is shown below.

using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace UnitTestingDemo.Services
{
    public interface IMyLogger
    {       
        void Log(string message, Exception ex);
    }
    public class MyLogger : IMyLogger
    {
        public void Log(string message, Exception ex)
        {
            //Log to database or use application insights.
        }
    }
}

The ProductsController.cs is modified to log exception as shown below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using UnitTestingDemo.Models;
using UnitTestingDemo.Services;

namespace UnitTestingDemo.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class ProductsController : ControllerBase
    {
        private readonly IProductService _productService;
        private readonly IMyLogger _logger;

        public ProductsController(IProductService productService, IMyLogger logger)
        {
            _productService = productService;
            _logger = logger;
        }

        [HttpGet]
        public ActionResult Get()
        {
            try
            {
                return Ok(_productService.GetProducts());
            }
            catch (Exception ex)
            {
                _logger.Log($"The method {nameof(ProductService.GetProducts)} caused an exception", ex);
            }
            return StatusCode(StatusCodes.Status500InternalServerError);
        }
    }
}

Startup.cs is modified as shown below:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);

    services.AddTransient<IProductService, ProductService>();
    services.AddSingleton<IMyLogger, MyLogger>();
}

We now modify our unit test in the following ways to make it pass:

1. Renamed the method to include the logging piece.

2. Mocked the behavior of the MyLogger class (the test class gets a new _logger fake, created with A.Fake<IMyLogger>() and passed to the ProductsController constructor).

3. Asserted that MyLogger’s Log method must have been called when there was an exception.

[Fact]
public void Get_WhenThereIsUnhandledException_ShouldReturn500StatusCodeAndLogAnException()
{
    //Arrange
    A.CallTo(() => _productService.GetProducts()).Throws<Exception>();
    A.CallTo(() => _logger.Log(A<string>._, A<Exception>._)).DoesNothing();

    //Act
    var result = _sut.Get() as StatusCodeResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    A.CallTo(() => _logger.Log(A<string>._, A<Exception>._)).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);
    Assert.Equal(StatusCodes.Status500InternalServerError, result.StatusCode);
}

Let’s add another test case for when no products are found; in that case, we would like to return a 404 Not Found result.

[Fact]
public void Get_WhenThereAreNoProductsFound_ShouldReturn404NotFoundResult()
{
    //Arrange
    

    //Act
    

    //Assert
    
}

Let’s write a failing unit test and then add the condition in our controller’s Get method.

[Fact]
public void Get_WhenThereAreNoProductsFound_ShouldReturn404NotFoundResult()
{
    //Arrange
    var products = new List<Product>();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get() as NotFoundResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);                        
    Assert.Equal(StatusCodes.Status404NotFound, result.StatusCode);

}

Modify the Get method as follows

[HttpGet]
public ActionResult Get()
{
    try
    {
        var products = _productService.GetProducts();
        if (products?.Count > 0)
        {
            return Ok(products);
        }
        return NotFound();
    }
    catch (Exception ex)
    {
        _logger.Log($"The method {nameof(ProductService.GetProducts)} caused an exception", ex);
    }
    return StatusCode(StatusCodes.Status500InternalServerError);
}

Finally, if you are using Swagger, then adding the ProducesResponseType attributes will result in better documentation. I know this isn’t related to unit testing, but it is nice to have.

[HttpGet]        
[ProducesResponseType(typeof(List<Product>), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
[ProducesResponseType(StatusCodes.Status500InternalServerError)]        
public ActionResult Get()
{
    try
    {
        var products = _productService.GetProducts();
        if (products?.Count > 0)
        {
            return Ok(products);
        }
        return NotFound();
    }
    catch (Exception ex)
    {
        _logger.Log($"The method {nameof(ProductService.GetProducts)} caused an exception", ex);
    }
    return StatusCode(StatusCodes.Status500InternalServerError);
}

We have accounted for all the test cases to make this code ready for Production.

If you can think of any test case that I haven’t accounted for, then please let me know in the comments section below. The final version of the test can be found here.

Thursday, May 9, 2019

PowerApps and Flow Deployment Issues

In this article, we take a look at Flow deployment issues when it comes to deploying multiple PowerApps applications. The issues are encountered when you have the following setup.

  • You have multiple applications within the same environment. For instance, in the same environment you have PowerApps called AppDEV, AppQA, AppUAT, and AppPROD.
  • You have applications connected to their respective SQL Servers.
  • You have a separate Flow for each App and they are named as FlowDEV, FlowQA, FlowUAT, and FlowPROD
  • Each Flow will use a different connection such as SQLConnectionDEV, SQLConnectionQA, SQLConnectionUAT and SQLConnectionPROD.

To deploy the application, please follow procedure listed below:

  • Export the application; on the Export package screen, under Related Resources for the app, select either the Update or the Create as New option as the Import Setup for each resource.
  • Import the application; on the Import package screen, under Related Resources for the app, select either the Update or the Create as New option as the Import Setup for each resource.

The ideal deployment scenario:

Export the AppDEV app and import it into the AppQA app, with AppDEV and AppQA each pointing to their respective connections, including their Flows.

Deployment 1.

When we export the app for the first time, we select the Update option for the SQL Connector and Create as New for the Flow, since the Flow doesn’t exist in QA.

Expected Behavior

1. AppQA should be connected to SQLConnectionQA

2. FlowQA should be connected to SQLConnectionQA.

Actual Behavior

1. AppQA is connected to SQLConnectionQA

2. FlowQA is connected to SQLConnectionQA.

So this is good, as everything is as expected. Alright, follow along: new requirements have come up that require a new deployment.

Deployment 2.

Now assume that we have the environment setup as shown below.

1. AppDEV and FlowDEV connected to SQLConnectionDEV

2. AppQA and FlowQA connected to SQLConnectionQA.

When we export the application, our gut instinct is that since the SQL connector and the Flow both exist in QA, we do not need to select “Create as New” for the Flow. So we select the Update option during both export and import of the application.

Expected Behavior

1. AppQA should be connected to SQLConnectionQA

2. FlowQA should be connected to SQLConnectionQA.

Actual Behavior

1. AppQA is connected to SQLConnectionQA

2. FlowQA is now renamed to FlowDEV and is connected to SQLConnectionQA.

Our environment now looks like as follows:

1. AppDEV and FlowDEV connected to SQLConnectionDEV

2. AppQA and FlowDEV connected to SQLConnectionQA.

To recap, in case you didn’t notice: we now have two Flows with the same name, FlowDEV, one pointing to SQLConnectionDEV and another pointing to SQLConnectionQA, and just by looking at the name we don’t know which Flow is which.

The issues now start to compound.

Deployment 3.

On the Import screen, for the Import Setup option, click the Update option. When asked to select the Flow, you will be presented with two Flows with the same name, FlowDEV. At this point, you do not know which Flow is pointing to SQLConnectionQA. Accidentally, you click on the wrong Flow, and then you deploy.

The issues happen because you do not know which Flow is being used by your application. If you delete the wrong Flow, the PowerApp starts misbehaving. And if more people create additional Flows, it becomes a huge mess trying to chase down the bug.

Solution

The solution we have come up with is to always select the “Create as New” option during the import process and then follow a naming convention. When multiple Flows accumulate from this method, you can delete the older ones.

The naming convention

Following the naming convention, a deployment on 2019-05-12 at 10:47 would be named:

<Environment>_NameoftheFlow_20190512_1047.

<Environment> is either DEV, QA, UAT or PROD.

What are the benefits of using this naming convention?

The different parts of the name provide different benefits and they are as follows:

Environment – When multiple Flows exist, it helps determine which environment a Flow is targeting. However, always verify on the Flow details page which connection string is actually being used.

Name of Flow – Provides a distinct name that is unique across different applications and gives a hint about the purpose of the Flow.

Current Date – Add the date formatted as 20190512 (YYYYMMDD). It helps identify when the Flow was created and when it is safe to delete it. For instance, if a newer Flow exists for the same environment, then a cleanup can be performed.

Current Time – Add the current time formatted as 1047 (HHmm). It helps distinguish Flows that were created on the same day.

You can check for Flows older than the most recent one and delete them. So far this approach has worked for us; please suggest an alternative if you have another way of deploying PowerApps with Flows in this kind of setup.
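The convention is easy to generate in a script; here is a small shell sketch (the environment QA and the Flow name OrderApproval are made up for illustration):

```shell
# Compose a Flow name following <Environment>_NameoftheFlow_YYYYMMDD_HHmm
ENVIRONMENT="QA"
FLOW="OrderApproval"            # illustrative flow name
STAMP="$(date +%Y%m%d_%H%M)"    # e.g. 20190512_1047
echo "${ENVIRONMENT}_${FLOW}_${STAMP}"
```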

What are the disadvantages of this approach?

The Flow execution history will be lost, if you are concerned about that. However, if everything ran successfully, then after a certain period you probably do not care about the old execution runs.

It also has an administration overhead, as you have to remember to do the cleanup, and it is definitely not a DevOps-friendly approach.

Thursday, February 21, 2019

A22-Use Azure DevOps to build a docker image and push to private repository

In this post, I want to explain the steps I took to create a Docker image and push it to a private Docker Hub repository using Azure DevOps. In the previous post, we looked at adding Docker support to an existing ASP.NET Core SPA application that uses Angular 7.

It took me 17 attempts to get the build to work. In hindsight, it is easy to say that I could have read the logs or documentation, but just by following the docs or the Azure DevOps web interface, it isn’t obvious for a new user to figure this out. You can argue that Azure DevOps is great, or this and that, but it has its quirks and issues. I have been using it since it was called TFS 2010, so when I said a new user, I meant a new user from the standpoint of creating and publishing Docker images.

Initially, I started with just two tasks in the build pipeline: Build an image and Push an image. It didn’t work because I wasn’t building and publishing using the dotnet build and dotnet publish tasks. After adding those two tasks, it still didn’t work, because the files weren’t available inside the Docker container. To fix that, I had to make sure that dotnet publish used --output src as an argument, because in the Dockerfile for the project, I switch the working directory to /src. You can take a look at the Dockerfile here.

After doing that, it still didn’t work, because I had to make sure that the build context pointed to the folder containing the Dockerfile. Who came up with the name “build context”? Seriously. They could have named it “folder containing the Dockerfile”. The tooltips in Azure DevOps build tasks are the worst; when you are stuck, none of them make any sense. For instance, while I was trying to figure out what build context means (Is it a variable? Some build folder?), the tooltip said, “Path to build context”. But what does context mean? Crazy.

Alright, on to the next hurdle I overcame: access denied. This one I tried multiple times. First, I added a Docker login step, and it didn’t work. Then I added a separate step to add a tag. It still didn’t work, because the tag you provide has to match your Docker Hub repository name. Even after adding the tag Docker expected, it didn’t work. The reason was that Azure DevOps uses $(Build.BuildNumber) or something similar as the image name, and that same image name is used when it tries to push to the Docker repository. The last step that made it work was setting the image name in the build tasks to the format Docker expects: <yourid>/<repositoryname>:tagname.

I ensured that all the docker build tasks are using the same name <yourid>/<repositoryname>:tagname. Finally, I was able to push a docker image to docker hub. What a relief!
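I built this in the classic editor, but in YAML form the working setup might look roughly like the following sketch (task versions, the service connection name, and yourid/repositoryname are placeholders, not my actual values):

```yaml
steps:
- task: DotNetCoreCLI@2
  inputs:
    command: 'publish'
    publishWebProjects: true
    zipAfterPublish: false
    arguments: '--output src'          # matches the /src working directory in the Dockerfile
- task: Docker@2
  inputs:
    containerRegistry: 'dockerHubServiceConnection'   # placeholder service connection
    repository: 'yourid/repositoryname'               # must match the Docker Hub repo
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
    buildContext: '$(Build.SourcesDirectory)'         # the folder containing the Dockerfile
    tags: 'tagname'
```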

Monday, February 18, 2019

A21-Adding docker support to ASP.NET Core SPA Application

This post is about my experience adding docker support to the A100 website. You can find the github repo here.

Disclaimer: I am a beginner with Docker and Linux, and I didn’t quite know what I was doing while trying to configure Docker, but I wanted to share the steps I took and where my frustrations were with respect to dockerizing an existing ASP.NET Core SPA application.

I thought I would take Docker for a spin and decided to dockerize the A100 website. I knew that you could just right-click the project and click “Add Docker Support” to dockerize an existing application. So I did that and then pressed F5 to debug the website inside a Docker container from Visual Studio. It didn’t work because node was not installed.

No node installed on the base dotnet runtime image

The A100 ASP.NET Core application is a SPA built using the default Angular template. The back end is ASP.NET Web API and the front end is Angular, all within a single Visual Studio project. The SPA depends on npm and node at compile time as well as at runtime. The generated Dockerfile uses Nano Server as the base image, which has the .NET Core runtime and nothing else. There is no node installed in the base Nano Server image.

Not an easy way to install node on nano server without PowerShell

This led me to the path of installing node on the nano server, but there is no easy way to install node because PowerShell is not available in those nano server images.

“Mitul, curl.exe and tar.exe are available on Nano Server, why didn’t you try that?” – as you might suggest. I did try that; however, I couldn’t unzip the .zip file I had downloaded using tar.exe. I tried tar.exe -xf node.zip multiple times, but it kept saying “Unrecognized format”. Maybe there is a bug in the tar.exe on Nano Server, or maybe tar doesn’t support .zip files. At this point, I gave up on the idea of using a Windows container image, since I couldn’t get it to work.

“But Mitul, you could have used a multi-stage Docker build to download the file inside Server Core and then copy it into the Nano Server image. Why didn’t you try that?” – as you might suggest again. I didn’t realize this at the time, and I later learned that that’s how the dotnet team builds their base images. This is an option I will try next; I hope it works.

I followed this github issue to successfully install nodejs.

Node-sass doesn’t work on Linux container

I switched to Linux containers and regenerated the Dockerfile, and this time too the base image didn’t have node installed. So I had to install node on both the SDK image and the runtime image, because you need it once for compiling and again at runtime. Finally, I was able to install node. Yay! But when I tried to build the Docker image, it errored out complaining that node-sass is not suitable or not available for my environment. I came across a suggestion that I might have to rebuild it, and I was able to rebuild the node-sass package successfully. However, at runtime I kept getting an error that the .css could not be generated from the Sass files, a node-sass related issue. It didn’t work.

Error: Missing binding /app/ClientApp/node_modules/node-sass/vendor/linux-x64-67/binding.node
Node Sass could not find a binding for your current environment: Linux 64-bit with Node.js 11.x

Found bindings for the following environments:
   - Windows 64-bit with Node.js 10.x
   - Windows 64-bit with Node.js 11.x

This usually happens because your environment has changed since running `npm install`.
Run `npm rebuild node-sass` to download the binding for your current environment.
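For the Linux side, the shape of what I was attempting looks roughly like the sketch below: install Node.js in both the SDK (build) stage and the runtime stage, and run npm rebuild node-sass inside the container so the native binding matches the Linux environment rather than my Windows machine. The image tags, Node.js setup script, folder layout, and the A100.dll name are illustrative assumptions:

```dockerfile
# Hypothetical sketch: Node.js installed in both stages of a multi-stage
# ASP.NET Core build, with node-sass rebuilt inside the Linux container.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
RUN apt-get update && apt-get install -y curl \
    && curl -sL https://deb.nodesource.com/setup_10.x | bash - \
    && apt-get install -y nodejs
WORKDIR /src
COPY . .
# Rebuild node-sass inside the container so the Linux binding is fetched,
# instead of reusing a Windows binding restored on the host.
RUN cd ClientApp && npm install && npm rebuild node-sass
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
# Node.js is needed at runtime too, because the SPA template invokes it.
RUN apt-get update && apt-get install -y curl \
    && curl -sL https://deb.nodesource.com/setup_10.x | bash - \
    && apt-get install -y nodejs
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "A100.dll"]
```

The crucial detail, as the error message itself hints, is that node-sass ships a platform-specific native binding, so any install or rebuild of it must happen inside the same environment that will run it.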

My cross-platform dream was vanishing quickly at this point. I was questioning my approach of developing on Windows and hosting on Linux; should I have developed inside a Linux environment like the Windows Subsystem for Linux (WSL) instead? Since node-sass was giving me trouble, I decided to ditch Sass and use SCSS, only to realize that SCSS is Sass itself: SCSS is the newer syntax, and it still requires node-sass. I was getting really frustrated. Finally, I switched to plain .css for the A100 website.

This step worked, and I was able to press F5 and run the website from inside a Linux container. It might seem like this worked for me in one or two tries. It didn’t. I spent a day and a half figuring it out, with multiple attempts at building a Docker image. I realized there are a few areas where this experience could be improved for new users.

Provide alternate ways of creating images with Node.js installed

I understand that the ASP.NET Core team wants to keep the Docker images as lean as possible, but please provide documentation on how to easily install Node.js on Nano Server. Maybe such an article exists and I was simply not able to find it. A better approach would be this: when you click “Add Docker Support” inside Visual Studio and the project is a SPA application, show a message or add comments to the generated Dockerfile indicating that Node.js is not installed, with a pointer to an article on how to install it in the container.

The node-sass npm package needs to be fixed

The node-sass package gave me the most trouble. I do not understand why node-sass would not work on Linux. There is nothing Windows-specific in package.json, so when npm installs node-sass on Linux, it should install the binding native to Linux. Why it fails to work properly on Linux, I have no idea. It would be nice if this were addressed, because for now I am not using Sass or SCSS at all. I could be wrong, but in tutorials and YouTube videos most people seem to be using Sass or SCSS; if that is the case, then this has to be improved.

I learned that I need to read more documentation on Docker, Linux, and ASP.NET Core. If you have suggestions for improving my understanding of Docker, please share them.

Tuesday, January 15, 2019

PowerApps From A DevOps Perspective

In this post, I would like to look at PowerApps from a DevOps perspective. When we think about DevOps, we think about source control, continuous integration, automation, configuration management, and so on. Let’s take a look.

No Source Control for PowerApps

When you are developing with PowerApps, there is no way to do source control. There are no source files; the only artifact you can version is the .zip file that you can export. The application you create and edit in the browser is continuously updated, and whenever you feel it is ready for publishing, you make that version available to users in your organization. The lack of version control causes problems when you cannot track down which version of the application introduced a particular bug. The only way to guarantee no bugs is thorough testing of the application.

Multiple Environments Beware

Creating separate Dev, Test, UAT, and Production environments is a common DevOps practice. You can create separate environments in PowerApps, but doing so comes with its own problems. For instance, to create and administer environments you need a PowerApps Plan 2 license. That is not a big deal, since you can manage with one license. However, if your application uses the On-Premises Data Gateway, only applications in the Default Environment can connect to the Gateway. If you thought that was a bummer, you are right. All hope is not lost: if you really need it, you can create a support ticket with Microsoft. Another limitation is that you cannot create as many environments as you like, because you can only create two production environments. Another bummer. If you wanted Test, UAT, and Production environments, you are toast; you would have to request another user license before you could create them. In my opinion, having multiple environments in PowerApps creates more issues from a DevOps perspective; it is not as seamless as one would expect from a traditional DevOps practice. If you are building only model-driven apps, multiple environments make sense. If you are building only canvas apps, just create different apps with DEV, TEST, UAT, and PROD suffixes. Having multiple apps for different purposes creates its own issues, as detailed below.

CI/CD Pipeline

In PowerApps, you don’t have to build your code; any change you make to the application is immediately live for you to test, which is very productive. To publish the application, you just click the Publish button and it is live. And if you botch an update, you have the option of going back to a previous version of the application. But the workflow of getting the application to end users all the way to production is not seamless. Since we created a different app for DEV, TEST, and UAT, we had to export and import every time we propagated a change to these apps. For one of the applications we built, every export and import into another app (dev/test/prod) was quite a lot of work: we had to ensure all connections pointed to environment-specific connection strings, verify data was going in accurately, perform manual checks, and create a new Flow each time.

Configuration Management

Well, there is none. You cannot manage the configuration of your PowerApps application in one place. In a typical ASP.NET web application, for example, you can put app settings inside the web.config file, and when you change environments you just update web.config. In PowerApps, if you want to persist your app settings, it is better to put them inside a SQL table if you are using on-premises databases. In other words, store your app settings in an external persistent store.

Collaboration

It is very common to collaborate with brilliant teammates on different projects. With any modern source control system, you can easily collaborate with team members and work on features simultaneously. In PowerApps, only one developer can work on an application at a given time. Good luck building your next ERP front end! One of the selling points of PowerApps is that it is very good for quick, small, single-purpose applications that a single developer can churn out in days, but it is not good when you want to build a large application.

Since we are on the topic of collaboration, let me add this here: if you are developing an application for an enterprise and expect people to use it, please don’t use your personal account to develop it. The user who creates the application becomes its owner, and guess what: you cannot change the owner of the application later on. If you are the ops person, you want that ownership. Please use a service account to develop PowerApps applications.

That’s all I have for today. If you would like to share anything regarding PowerApps, please let me know in the comments below.