Thursday, September 16, 2021

DevOps Links for 16/9/2021

Many people don't know the difference between Git and GitHub, and it is a constant source of confusion for first-timers. This post is a simple getting-started guide to Git and GitHub.

How to get started with GitHub and Git

SQL Injection is still in the OWASP Top 10. Broken Access Control sits at the top of the 2021 list, which you can find below.

Here is the OWASP Top 10 for 2021


A different take on branching strategy. In my opinion, you need a high degree of trust, small changes, and a strategy that works for your team.

Branching Strategy - Ship / Show / Ask


All the Azure DevOps features visualized in a mind map.

Azure DevOps In a Nutshell Mind Map


I am trying to find interesting GitHub Actions questions on Stack Overflow. This is one of the most highly voted ones.

How to get current branch within GitHub actions


This post lists the four key metrics (Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Time to Restore Service) that you need to track to measure DevOps performance in your organization.

Use Four Keys metrics like change failure rate to measure your DevOps performance | Google Cloud Blog




Thursday, September 9, 2021

DevOps Links for 9/9/2021

If you are learning GitHub Actions, then this is a good place to start.

Introduction to GitHub Actions 


Currently, I am deploying different kinds of .NET applications to Azure using GitHub Actions. You can find more information below on how to deploy to Azure App Service.

An interesting post on GitHub Actions limitations and gotchas. The workflow_dispatch feature needs major improvements; if you don't already know what it is, you will keep guessing what the feature does and how to discover it. It lets you trigger a GitHub Actions workflow manually.
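
For context, this is roughly what a manually triggered workflow looks like (a sketch; the workflow name and input are made up for illustration):

name: manual-demo
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment'
        required: false
        default: 'staging'

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to ${{ github.event.inputs.environment }}"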

Monday, April 27, 2020

What is Helm for Kubernetes?

In this post, I would like to talk about what Helm is and why we need it, how to install and uninstall a chart, and the difference between a repo and a hub.

For this post, I am also assuming you are familiar with Kubernetes at a high level.

Before we dive into the details, let's first understand what the word Helm means in English.

“Helm is a lever or wheel controlling the rudder of a ship for steering.” – Merriam-Webster

On a ship, you might have seen a wheel-like mechanism used by the captain to steer, as shown below.


The logo for Helm (in the Kubernetes context), as seen on the helm.sh website, is shown below. We can now somewhat connect the dots between the two.


If you want to set up WordPress inside a Kubernetes cluster, you will have to find the relevant Docker images for the WordPress front end and the MySQL database, and then set up networking, configuration, secrets, load balancing, etc., by installing multiple .yaml files.
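
Just to make the pain concrete, the manual route looks something like this (the file names are illustrative):

kubectl apply -f mysql-secret.yaml
kubectl apply -f mysql-deployment.yaml
kubectl apply -f mysql-service.yaml
kubectl apply -f wordpress-deployment.yaml
kubectl apply -f wordpress-service.yaml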

After everything is set up and working, you will feel like you don't want to touch your setup. But life doesn't end there, and eventually you will have to worry about the things listed below.

1. Delete deployments

2. Setup another wordpress instance for another customer

3. Update your images with new wordpress or mysql images

4. Roll back an installation manually

and more.

Wouldn't it be nice if we didn't have to worry about any of those .yaml files?

Wouldn't it be nice if we could leverage a WordPress expert's knowledge of installing and configuring WordPress in a cluster?

What if we could just execute a few commands to install, uninstall, and upgrade a piece of software?

That’s what Helm does for Kubernetes. Helm helps you steer your software into your cluster.

You can execute helm commands against your K8S cluster such as

helm search hub wordpress

helm install my-wordpress bitnami/wordpress

helm uninstall my-wordpress 

These commands will look familiar if you have used apt-get, Chocolatey, or Homebrew.

Helm, just like apt-get, Chocolatey, or Homebrew, is a package manager.

From the Helm website - “Helm is the package manager for Kubernetes. It is the best place to find, share and install software for Kubernetes”

It is the package manager for your Kubernetes cluster, not for your machine.

You can install Helm on your machine by following the operating-system-specific instructions on the Helm site: https://helm.sh/docs/intro/install/
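
For example, on macOS via Homebrew or on Windows via Chocolatey (both taken from that page):

brew install helm

choco install kubernetes-helm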


Helm uses the same Kubernetes APIs to install software into the cluster.

A package manager is responsible for installing, uninstalling, and upgrading software packages at a destination from a remote or local package repository.

Likewise, Helm works against a repository hosted locally or remotely. A repo can be hosted by anyone. For example, Google has its own helm repository. Bitnami hosts its own repository.

A repository contains many software packages. Each package has multiple versions.

Within the Helm context, a package is called a chart. From now on, we will refer to packages as charts.

By default, helm doesn’t know about any repository. If you want to use a particular repository, then you have to first add that repository to helm.

helm repo add bitnami https://charts.bitnami.com/bitnami
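
After adding a repo, it is worth refreshing your local copy of its chart index so that you see the latest chart versions:

helm repo update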

After you have done that, you can install any software from the bitnami repository.

helm install my-wordpress bitnami/wordpress

When you execute the above command, you are telling helm to install the wordpress chart from the bitnami repository.

You can uninstall a chart by

helm uninstall my-wordpress
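
Upgrades and rollbacks, which were on our wish list earlier, follow the same pattern (a sketch; the chart version number is illustrative):

helm upgrade my-wordpress bitnami/wordpress --version 9.0.3

helm rollback my-wordpress 1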

A small recap – Helm is a package manager that works against one or more repositories hosted by anyone to install/uninstall/upgrade a chart into a Kubernetes Cluster.

Are you with me so far? If yes, then let’s continue.

Since repositories can be hosted by anyone (some public, some private), how do we find and search charts within these repositories?

Do we have to add every single repository out there?

How do we discover these repositories?

Helm Hub is a central location to easily find charts that are hosted outside the helm project.

When you execute the command helm search --help, you are presented with two options: search hub or search repo. In the commands below, we search for the wordpress chart against the hub and then against the repository.

helm search hub wordpress

helm search repo wordpress

What happens when you install a chart (remember we called it package initially)?

A chart is nothing but a collection of files that describe a related set of Kubernetes resources. For example, a wordpress chart will have all the .yaml files required to install WordPress. In addition, it has metadata about the chart itself.

When you install a chart, it creates a new release in your cluster. A release is like a running instance of the chart. For example, when you install the wordpress chart using the install command mentioned earlier, it creates a new release named my-wordpress. That release is unique to this cluster.

You can install a chart multiple times to create multiple releases. For example, if you install the wordpress chart 3 times, you will have 3 WordPress instances configured, each with its own unique URLs, usernames, and passwords. By executing the commands below, we will have 3 releases installed in our cluster.

helm install my-wordpress1 bitnami/wordpress

helm install my-wordpress2 bitnami/wordpress

helm install my-wordpress3 bitnami/wordpress

Clear?

Take two: let's say you install the mysql chart 3 times in your cluster; you then have 3 MySQL database instances configured in your cluster.

You can execute the helm list command to see what is installed in your cluster.

Finally, you can uninstall a release by executing the command helm uninstall my-wordpress.

I hope this helped you understand Helm a bit better.

Wednesday, July 17, 2019

Passing multiple parameters in an Angular Route using RouteLink

In Angular, let’s say you have a route defined like this.

{ path : 'user/:userId/building/:buildingId', component: UserScheduleDetailsComponent, pathMatch : 'full'}

and you want to navigate the user to this component using this route. I couldn't easily find a way to pass these parameters using routerLink.

So this is how you do it.

<a [routerLink]="[ '/user/', userId, 'building', buildingId]">I am a link</a>

The userId is a public property in the component's .ts file. There is just one gotcha: the hard-coded strings in the middle, e.g. 'building', must not have a '/' at the beginning or the end. Only the first segment of this route can have a '/' in it.
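
If you need to do the same navigation from code instead of the template, the equivalent Router call would look roughly like this (a sketch; the component, property values, and method name are made up for illustration):

import { Component } from '@angular/core';
import { Router } from '@angular/router';

@Component({
  selector: 'app-user-link',
  template: `<button (click)="goToSchedule()">Go</button>`
})
export class UserLinkComponent {
  public userId = 42;
  public buildingId = 7;

  constructor(private router: Router) {}

  goToSchedule(): void {
    // Same gotcha applies: only the first segment starts with '/'.
    this.router.navigate(['/user', this.userId, 'building', this.buildingId]);
  }
}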

Sunday, June 16, 2019

Unit Testing ASP.NET Core Web API using XUnit and FakeItEasy

Let’s take one ASP.NET Core Web API method and add unit tests using XUnit and FakeItEasy.

My environment: Visual Studio 2019 Enterprise, ASP.NET Core 2.2, FakeItEasy, AutoFixture and XUnit.

Source code

I have a Before and After version of the source code hosted on GitHub. In the After folder, you can view the completed solution.

System Under Test

There is a ProductsController with one HTTP GET method returning a list of products, and a unit test project. A ProductService is injected into ProductsController and returns the products. In the following sections, we will add tests around this code and make it more production ready. One of the goals of this post is to show how often we start with a code snippet like the one below, and then, when the code goes into production, there are all sorts of conditions we have to account for. We will add all those conditions, but let's do it by adding a test for each condition.

[Route("api/[controller]")]
[ApiController]
public class ProductController : ControllerBase
{
    private readonly IProductService _productService;
    [HttpGet]
    public ActionResult<IEnumerable<Product>> Get()
    {
        return _productService.GetProducts();
    }
}

Let’s write our first test to validate the Get() method shown above.

I like to start with a simple empty method whose name is split into three parts. This helps me understand how the code should behave under a given condition. The three parts are as follows:

1. Actual name of the method being tested – Get

2. Condition – WhenThereAreProducts

3. Expected outcome – ShouldReturnActionResultOfProductsWith200StatusCode

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{

}

We will add all the dependencies that are required for ProductsController; the code below does that.

using AutoFixture;
using System;
using UnitTestingDemo.Controllers;
using UnitTestingDemo.Services;
using Xunit;
using FakeItEasy;
namespace UnitTestingDemo.Tests
{
    public class ProductControllerTest
    {
        //Fakes
        private readonly IProductService _productService;

        //Dummy Data Generator
        private readonly Fixture _fixture;
        
        //System under test
        private readonly ProductsController _sut;
        public ProductControllerTest()
        {
            _productService = A.Fake<IProductService>();
            _sut = new ProductsController(_productService);
            _fixture = new Fixture();
        }

        [Fact]
        public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
        {
            //Arrange


            //Act


            //Assert
        }
    }
}

In the above code, I have added comments that explain what each piece does. The constructor is responsible for setting up our private objects. The system under test is called _sut, so it is easy to locate in every test method. Each unit test has three sections: Arrange, Act, and Assert. I like to put in these comments so it is easier to scan the different pieces of logic.

Next, we will add code to Arrange and Act, as shown below.

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{
    //Arrange
    var products = _fixture.CreateMany<Product>(3).ToList();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get();

    //Assert

}

In the above code, we create 3 fake products using _fixture, and then, using FakeItEasy, we define that when GetProducts() is called it should return those three fake products. Then we call _sut.Get().

In the Assert part, we want to make sure that _productService.GetProducts() was called and that it returned a result of type ActionResult with a 200 status code. If it doesn't, the test should fail, and we will refactor our ProductsController code.

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{
    //Arrange
    var products = _fixture.CreateMany<Product>(3).ToList();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get() as ActionResult<IEnumerable<Product>>;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.IsType<ActionResult<IEnumerable<Product>>>(result);
    Assert.NotNull(result);
    Assert.Equal(products.Count, result.Value.Count());
}

In the above code, in the Assert section, we make sure that _productService.GetProducts() was called exactly once, that the result is of type ActionResult, that the result is not null, and that the count of products returned is the same as what we created in the Arrange section. Using this approach, we are not able to validate the status code of the result. In order to test the status code, we will have to modify the controller code.

Lesson learned: use ActionResult in the method signature and, instead of returning a List directly, return Ok(products). The OkObjectResult contains a status code, and using ActionResult makes it easier to test for different status codes.

The controller method is now modified as shown below, and the test now checks for a valid 200 status code.

[HttpGet]
public ActionResult Get()
{
    return Ok(_productService.GetProducts());
}

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{
    //Arrange
    var products = _fixture.CreateMany<Product>(3).ToList();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get() as OkObjectResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);
    var returnValue = Assert.IsType<List<Product>>(result.Value);
    Assert.Equal(products.Count, returnValue.Count());
    Assert.Equal(StatusCodes.Status200OK, result.StatusCode);
}

Now that we have all the Assert statements passing, we ask ourselves another question: is this the only test case to test against? Can there be more test cases that we haven't accounted for? What if there is an unhandled exception, or no products are found?

Let’s add another test that covers the case of an unhandled exception. Before you look at the following code, ask yourself: what should the expected outcome be when the ProductService throws an exception? It should probably return a 500 status code and log the error.

First, let’s begin by writing an empty test with a descriptive name as shown below.

[Fact]
public void Get_WhenThereIsUnhandledException_ShouldReturn500StatusCode()
{
    //Arrange

    //Act
    

    //Assert
    
}

Next, we define the behavior of ProductService to throw an exception whenever GetProducts is called. If an exception is thrown, we want to ensure that a 500 HTTP status code is returned from the web service. The following test will fail, since we are not yet handling that case properly.

[Fact]
public void Get_WhenThereIsUnhandledException_ShouldReturn500StatusCode()
{
    //Arrange
    A.CallTo(() => _productService.GetProducts()).Throws<Exception>();

    //Act
    var result = _sut.Get() as StatusCodeResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);
    Assert.Equal(StatusCodes.Status500InternalServerError, result.StatusCode);
}

Let’s modify the Get method to handle unhandled exceptions by putting the processing logic into a try/catch block. After modifying the code as shown below, you can run the test again, and this time it will pass.

[HttpGet]
public ActionResult Get()
{
    try
    {
        return Ok(_productService.GetProducts());
    }
    catch (Exception ex)
    {
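        //Swallow for now; logging is added in the next step.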

    }
    return StatusCode(StatusCodes.Status500InternalServerError);
}

We would like to add some logging in the catch block. Logging using ILogger is the way to go; however, unit testing with ILogger is a bit problematic, because you have to use the adapter pattern to create your own logger that wraps ILogger. For this post, I created a simple logger called MyLogger, with just a Log method, to demonstrate unit testing.

The MyLogger.cs code is shown below.

using System;

namespace UnitTestingDemo.Services
{
    public interface IMyLogger
    {       
        void Log(string message, Exception ex);
    }
    public class MyLogger : IMyLogger
    {
        public void Log(string message, Exception ex)
        {
            //Log to database or use application insights.
        }
    }
}

ProductsController.cs is modified to log the exception as shown below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using UnitTestingDemo.Models;
using UnitTestingDemo.Services;

namespace UnitTestingDemo.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class ProductsController : ControllerBase
    {
        private readonly IProductService _productService;
        private readonly IMyLogger _logger;

        public ProductsController(IProductService productService, IMyLogger logger)
        {
            _productService = productService;
            _logger = logger;
        }

        [HttpGet]
        public ActionResult Get()
        {
            try
            {
                return Ok(_productService.GetProducts());
            }
            catch (Exception ex)
            {
                _logger.Log($"The method {nameof(ProductService.GetProducts)} caused an exception", ex);
            }
            return StatusCode(StatusCodes.Status500InternalServerError);
        }
    }
}

Startup.cs is modified as shown below.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);

    services.AddTransient<IProductService, ProductService>();
    services.AddSingleton<IMyLogger, MyLogger>();
}

We now modify our unit test in the following ways to make it pass (the updated test setup is shown after this list):

1. Renamed the test method to include the logging aspect.

2. Mocked the behavior of the MyLogger class.

3. Asserted that MyLogger’s Log method was called when there was an exception.
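
For reference, the test class now needs a fake logger passed into the controller as well. A sketch of the updated fields and constructor:

//Fake for the new logger dependency
private readonly IMyLogger _logger;

public ProductControllerTest()
{
    _productService = A.Fake<IProductService>();
    _logger = A.Fake<IMyLogger>();
    _sut = new ProductsController(_productService, _logger);
    _fixture = new Fixture();
}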

[Fact]
public void Get_WhenThereIsUnhandledException_ShouldReturn500StatusCodeAndLogAnException()
{
    //Arrange
    A.CallTo(() => _productService.GetProducts()).Throws<Exception>();
    A.CallTo(() => _logger.Log(A<string>._, A<Exception>._)).DoesNothing();

    //Act
    var result = _sut.Get() as StatusCodeResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    A.CallTo(() => _logger.Log(A<string>._, A<Exception>._)).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);
    Assert.Equal(StatusCodes.Status500InternalServerError, result.StatusCode);
}

Let’s add another test case to account for when no products are found; in that case, we would like to return a 404 Not Found result.

[Fact]
public void Get_WhenThereAreNoProductsFound_ShouldReturn404NotFoundResult()
{
    //Arrange
    

    //Act
    

    //Assert
    
}

Let’s write a failing unit test and then add the condition to our controller's Get method.

[Fact]
public void Get_WhenThereAreNoProductsFound_ShouldReturn404NotFoundResult()
{
    //Arrange
    var products = new List<Product>();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get() as NotFoundResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);                        
    Assert.Equal(StatusCodes.Status404NotFound, result.StatusCode);

}

Modify the Get method as follows

[HttpGet]
public ActionResult Get()
{
    try
    {
        var products = _productService.GetProducts();
        if (products?.Count > 0)
        {
            return Ok(products);
        }
        return NotFound();
    }
    catch (Exception ex)
    {
        _logger.Log($"The method {nameof(ProductService.GetProducts)} caused an exception", ex);
    }
    return StatusCode(StatusCodes.Status500InternalServerError);
}

Finally, if you are using Swagger, then adding the ProducesResponseType attributes will result in better documentation. I know this isn't related to unit testing, but it is nice to have.

[HttpGet]        
[ProducesResponseType(typeof(List<Product>), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
[ProducesResponseType(StatusCodes.Status500InternalServerError)]        
public ActionResult Get()
{
    try
    {
        var products = _productService.GetProducts();
        if (products?.Count > 0)
        {
            return Ok(products);
        }
        return NotFound();
    }
    catch (Exception ex)
    {
        _logger.Log($"The method {nameof(ProductService.GetProducts)} caused an exception", ex);
    }
    return StatusCode(StatusCodes.Status500InternalServerError);
}

We have now accounted for all the test cases needed to make this code ready for production.

If you can think of any test case that I haven’t accounted for, then please let me know in the comments section below. The final version of the test can be found here.

Thursday, May 9, 2019

PowerApps and Flow Deployment Issues

In this article, we take a look at Flow deployment issues when it comes to deploying multiple PowerApps applications. The issues are encountered when you have the following setup.

  • You have multiple applications within the same environment. For instance, in the same environment you have PowerApps called AppDEV, AppQA, AppUAT, and AppPROD.
  • You have applications connected to their respective SQL Servers.
  • You have a separate Flow for each App and they are named as FlowDEV, FlowQA, FlowUAT, and FlowPROD
  • Each Flow will use a different connection such as SQLConnectionDEV, SQLConnectionQA, SQLConnectionUAT and SQLConnectionPROD.

To deploy the application, please follow procedure listed below:

  • Export the application; on the Export package screen, under Related Resources for the app, select either the Update or Create as New option for each resource's Import Setup.
  • Import the application; on the Import package screen, under Related Resources for the app, select either the Update or Create as New option for each resource's Import Setup.

The ideal deployment scenario:

Export the AppDEV app and import it into the AppQA app, with AppDEV and AppQA each pointing to their respective connections, including Flows.

Deployment 1.

When we export the app for the first time, we select the Update option for the SQL Connector and Create as New for the Flow, since the Flow doesn’t exist in QA.

Expected Behavior

1. AppQA should be connected to SQLConnectionQA

2. FlowQA should be connected to SQLConnectionQA.

Actual Behavior

1. AppQA is connected to SQLConnectionQA

2. FlowQA is connected to SQLConnectionQA.

So this is good; everything is as expected. Alright, follow along: new requirements have come up and require a new deployment.

Deployment 2.

Now assume that we have the environment setup as shown below.

1. AppDEV and FlowDEV connected to SQLConnectionDEV

2. AppQA and FlowQA connected to SQLConnectionQA.

When we export the application, our gut instinct says that since the SQL connector and the Flow both exist in QA, we need not select “Create as New” for the Flow. So we select the Update option during export and import of the application.

Expected Behavior

1. AppQA should be connected to SQLConnectionQA

2. FlowQA should be connected to SQLConnectionQA.

Actual Behavior

1. AppQA is connected to SQLConnectionQA

2. FlowQA is now renamed to FlowDEV and is connected to SQLConnectionQA.

Our environment now looks as follows:

1. AppDEV and FlowDEV connected to SQLConnectionDEV

2. AppQA and FlowDEV connected to SQLConnectionQA.

To recap, in case you didn't notice: we now have two Flows with the same name, FlowDEV, one pointing to SQLConnectionDEV and another pointing to SQLConnectionQA, and just by looking at the names we can't tell which Flow is which.

The issues now start to compound.

Deployment 3.

On the Import screen, for the Import Setup option, click the Update option. When asked to select the Flow, you are presented with two Flows with the same name, FlowDEV. At this point you do not know which Flow points to SQLConnectionQA. You accidentally click the wrong Flow, and then you deploy.

The issues happen because you do not know which Flow is being used by your application. If you delete the wrong Flow, the PowerApp starts misbehaving. And if more people create additional Flows, it becomes a huge mess trying to chase down the bug.

Solution

The solution we have come up with is to always select the “Create as New” option during the import process and then follow a naming convention. When you have multiple Flows created by this method, you can delete the older ones.

The naming convention

For example, if you are doing a deployment on 2019-05-12 at 10:47, follow this naming convention:

<Environment>_NameoftheFlow_20190512_1047.

<Environment> is either DEV, QA, UAT or PROD.

What are the benefits of using this naming convention?

The different parts of the name provide different benefits and they are as follows:

Environment – When there are multiple Flows, this helps determine which environment a Flow targets. However, always verify on the Flow details page which connection string is being used.

Name of Flow – Provide a distinct name that is unique across different applications. This gives a hint about the purpose of the Flow.

Current Date – Add the date formatted as 20190512 (YYYYMMDD). It helps identify when the Flow was created and when it is safe to delete it. For instance, if a newer Flow exists for the same environment, then a cleanup can be performed.

Current Time – Add the current time formatted as 1047 (HHmm). It helps distinguish Flows that were created on the same day.

You can find Flows that are older than the most recent one and delete them. So far this approach has worked for us; please suggest an alternative if you have another way of deploying PowerApps with Flows in this kind of setup.

What are the disadvantages of this approach?

The Flow execution history will be lost, if you are concerned with that. However, if everything was successful, then after a certain period of time you probably do not care about the execution runs.

It also has administration overhead, as you have to remember to clean things up, and it is definitely not a DevOps-friendly approach.

Thursday, February 21, 2019

A22-Use Azure DevOps to build a docker image and push to private repository

In this post, I want to explain the steps I took to create a Docker image and push it to a private Docker Hub repository using Azure DevOps. In the previous post, we looked at adding Docker support to an existing ASP.NET Core SPA application that uses Angular 7.

It took me 17 attempts to get the build to work. In hindsight, it is always easy to say that I could have read the logs or documentation, but just by following the docs or the web interface of Azure DevOps, it isn't obvious for a new user to figure this out. You can argue that Azure DevOps is great, but it has its quirks and issues. I have been using it since it was called TFS 2010, so when I say a new user, I mean a new user from the standpoint of creating and publishing Docker images.

Initially, I started with just two tasks in the build pipeline: Build an image and Push an image. It didn't work because I wasn't building and publishing using the dotnet build and dotnet publish tasks. After adding those two tasks, it still didn't work, because the files weren't available inside the Docker container. To fix that, I had to make sure that dotnet publish was using --output src as an argument, because the Dockerfile for the project switches the working directory to /src. You can take a look at the Dockerfile here.

After doing that it still didn't work, because I had to make sure that the build context was pointing to the folder that had the Dockerfile. Who came up with the name build context? Seriously. They could have named it “folder containing dockerfile”. The tooltips are the worst in Azure DevOps build tasks; when you are stuck, none of them make any sense. For instance, I am trying to figure out what build context means. Is it a variable? Some build folder? And the tooltip says, “Path to build context”. But what does context mean? Crazy.

Alright, on to the next hurdle I overcame: access denied. This one I tried multiple times. First, I added Docker Login, and it didn't work. Then I added a separate step to add a tag. It still didn't work, because the tag you provide has to match your Docker Hub repository name. Even after adding the tag Docker expects, it didn't work. The reason was that Azure DevOps uses $(Build.BuildNumber) or something similar as the image name, and that same image name is used when it tries to push to the Docker repository. The last step that made it work was setting the image name in the build tasks to what Docker expects: <yourid>/<repositoryname>:tagname.

I ensured that all the Docker build tasks were using the same name, <yourid>/<repositoryname>:tagname. Finally, I was able to push a Docker image to Docker Hub. What a relief!
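
For anyone retracing these steps, here is roughly what the working setup amounts to in today's YAML pipeline syntax (a sketch; the service connection name, repository, and tag are placeholders):

steps:
# Build and publish the app so the output lands in src,
# matching the WORKDIR used by the Dockerfile.
- task: DotNetCoreCLI@2
  inputs:
    command: publish
    publishWebProjects: true
    arguments: '--output src'
    zipAfterPublish: false

# Build the image and push it to Docker Hub in one step.
# The image name must match <yourid>/<repositoryname>:tagname.
- task: Docker@2
  inputs:
    command: buildAndPush
    containerRegistry: 'my-dockerhub-connection'   # placeholder service connection
    repository: '<yourid>/<repositoryname>'
    Dockerfile: '**/Dockerfile'
    buildContext: '$(Build.SourcesDirectory)'      # the folder containing the Dockerfile
    tags: 'tagname'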