Sunday, June 16, 2019

Unit Testing ASP.NET Core Web API using XUnit and FakeItEasy

Let’s take one ASP.NET Core Web API method and add unit tests to it using XUnit and FakeItEasy.

My environment: Visual Studio 2019 Enterprise, ASP.NET Core 2.2, FakeItEasy, AutoFixture and XUnit.

Source code

I have Before and After versions of the source code hosted on GitHub. In the After folder, you can view the completed solution.

System Under Test

There is a ProductsController with one HTTP GET method that returns a list of products, and a unit test project. A ProductService is injected into ProductsController to supply the products. In the following sections, we will add tests around the code and make it more production ready. One of the goals of this post is to show that we often start with a code snippet like the one below, and only once the code goes into production do we discover all sorts of conditions we have to account for. We will handle all those conditions, but let’s do it by adding a test for each one.

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly IProductService _productService;

    public ProductsController(IProductService productService)
    {
        _productService = productService;
    }

    [HttpGet]
    public ActionResult<IEnumerable<Product>> Get()
    {
        return _productService.GetProducts();
    }
}

Let’s write our first test to validate the Get() method shown above.

I like to start with a simple empty method whose name is split into three parts. This helps me understand how the code should behave under a given condition. The three parts are as follows:

1. Name of the method being tested – Get

2. Condition – WhenThereAreProducts

3. Expected outcome – ShouldReturnActionResultOfProductsWith200StatusCode

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{

}

Next, we add all the dependencies required by ProductsController; the code below does that.

using AutoFixture;
using System;
using UnitTestingDemo.Controllers;
using UnitTestingDemo.Services;
using Xunit;
using FakeItEasy;
namespace UnitTestingDemo.Tests
{
    public class ProductControllerTest
    {
        //Fakes
        private readonly IProductService _productService;

        //Dummy Data Generator
        private readonly Fixture _fixture;
        
        //System under test
        private readonly ProductsController _sut;
        public ProductControllerTest()
        {
            _productService = A.Fake<IProductService>();
            _sut = new ProductsController(_productService);
            _fixture = new Fixture();
        }

        [Fact]
        public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
        {
            //Arrange


            //Act


            //Assert
        }
    }
}

In the above code, I have added comments that explain what each piece of code does. The public constructor is responsible for setting up our private objects. The system under test is called _sut, so it is easy to locate in all the test methods. Each unit test has three sections: Arrange, Act and Assert. I like to put in these comments so it is easier to scan the different pieces of logic.

Next, we will add code to the Arrange and Act sections, as shown below.

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{
    //Arrange
    var products = _fixture.CreateMany<Product>(3).ToList();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get();

    //Assert

}

In the above code, we create 3 fake products using _fixture, and then using FakeItEasy we define that when GetProducts() is called it should return those three fake products. Then we call _sut.Get().

In the Assert section, we want to make sure that _productService.GetProducts() was called, that the result is of type ActionResult, and that a 200 status code is returned. If it doesn’t return that status code, the test should fail and we will refactor our ProductsController code.

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{
    //Arrange
    var products = _fixture.CreateMany<Product>(3).ToList();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get() as ActionResult<IEnumerable<Product>>;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.IsType<ActionResult<IEnumerable<Product>>>(result);
    Assert.NotNull(result);
    Assert.Equal(products.Count, result.Value.Count());
}

In the Assert section of the above code, we are making sure that _productService.GetProducts() was called exactly once, that the result is of type ActionResult, that the result is not null, and that the count of products returned is the same as the count we created in the Arrange section. Using this approach, we are not able to validate the status code of the result. In order to test the status code, we will have to modify the controller code.

Lesson learned: use ActionResult in the method signature and, instead of returning a List directly, return Ok(products). The OkObjectResult contains the status code. Using ActionResult makes it easier to test for different status codes.

The controller method is now modified as shown below, and the test is updated to check for a valid 200 status code.

[HttpGet]
public ActionResult Get()
{
    return Ok(_productService.GetProducts());
}

[Fact]
public void Get_WhenThereAreProducts_ShouldReturnActionResultOfProductsWith200StatusCode()
{
    //Arrange
    var products = _fixture.CreateMany<Product>(3).ToList();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get() as OkObjectResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);
    var returnValue = Assert.IsType<List<Product>>(result.Value);
    Assert.Equal(products.Count, returnValue.Count());
    Assert.Equal(StatusCodes.Status200OK, result.StatusCode);
}

Now that we have all the Assert statements passing, we ask ourselves another question: is this the only test case to test against? Can there be more test cases that we haven’t accounted for? What if there is an unhandled exception, or no products are found?

Let’s add another test that covers the case of an unhandled exception. Before you look at the following code, ask yourself: what should the expected outcome be when the ProductService throws an exception? It should probably return a 500 status code and possibly log the error.

First, let’s begin by writing an empty test with a descriptive name as shown below.

[Fact]
public void Get_WhenThereIsUnhandledException_ShouldReturn500StatusCode()
{
    //Arrange

    //Act
    

    //Assert
    
}

Next, we define the behavior of ProductService to throw an exception whenever GetProducts() is called. If an exception is thrown, we want to ensure that a 500 HTTP status code is returned from the web service. The following test will fail, since we are not yet handling that case properly.

[Fact]
public void Get_WhenThereIsUnhandledException_ShouldReturn500StatusCode()
{
    //Arrange
    A.CallTo(() => _productService.GetProducts()).Throws<Exception>();

    //Act
    var result = _sut.Get() as StatusCodeResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);
    Assert.Equal(StatusCodes.Status500InternalServerError, result.StatusCode);
}

Let’s modify the Get method to handle the unhandled exception by putting the processing logic into a try/catch block. After modifying the code as shown below, you can run the test again, and this time it will pass.

[HttpGet]
public ActionResult Get()
{
    try
    {
        return Ok(_productService.GetProducts());
    }
    catch (Exception)
    {
        //Logging will be added here in the next step.
    }
    return StatusCode(StatusCodes.Status500InternalServerError);
}

We would like to add some kind of logging in the catch block. Logging using ILogger is the way to go; however, unit testing with ILogger is a bit problematic, because you have to use the Adapter pattern to create your own logger that wraps ILogger. For this post, I created a simple logger called MyLogger with just a Log method to demonstrate unit testing.

The MyLogger.cs code is shown below.

using System;

namespace UnitTestingDemo.Services
{
    public interface IMyLogger
    {       
        void Log(string message, Exception ex);
    }
    public class MyLogger : IMyLogger
    {
        public void Log(string message, Exception ex)
        {
            //Log to database or use application insights.
        }
    }
}

The ProductsController.cs is modified to log the exception, as shown below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using UnitTestingDemo.Models;
using UnitTestingDemo.Services;

namespace UnitTestingDemo.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class ProductsController : ControllerBase
    {
        private readonly IProductService _productService;
        private readonly IMyLogger _logger;

        public ProductsController(IProductService productService, IMyLogger logger)
        {
            _productService = productService;
            _logger = logger;
        }

        [HttpGet]
        public ActionResult Get()
        {
            try
            {
                return Ok(_productService.GetProducts());
            }
            catch (Exception ex)
            {
                _logger.Log($"The method {nameof(ProductService.GetProducts)} caused an exception", ex);
            }
            return StatusCode(StatusCodes.Status500InternalServerError);
        }
    }
}

The Startup.cs is modified as shown below.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);

    services.AddTransient<IProductService, ProductService>();
    services.AddSingleton<IMyLogger, MyLogger>();
}

We now modify our unit test in the following ways to make it pass:

1. We renamed the method to include the logging piece.

2. We mocked the behavior of the MyLogger class.

3. We asserted that MyLogger’s Log method must have been called when there was an exception.

[Fact]
public void Get_WhenThereIsUnhandledException_ShouldReturn500StatusCodeAndLogAnException()
{
    //Arrange
    A.CallTo(() => _productService.GetProducts()).Throws<Exception>();
    A.CallTo(() => _logger.Log(A<string>._, A<Exception>._)).DoesNothing();

    //Act
    var result = _sut.Get() as StatusCodeResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    A.CallTo(() => _logger.Log(A<string>._, A<Exception>._)).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);
    Assert.Equal(StatusCodes.Status500InternalServerError, result.StatusCode);
}
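One thing the snippets above gloss over: since ProductsController now takes two constructor arguments, the test fixture’s constructor has to create a fake IMyLogger as well, otherwise the class will no longer compile. A minimal sketch, assuming the field names used earlier in this post:

```csharp
//Updated test fixture setup (sketch). IMyLogger is faked the same way
//as IProductService and passed into the controller.
private readonly IProductService _productService;
private readonly IMyLogger _logger;
private readonly Fixture _fixture;
private readonly ProductsController _sut;

public ProductControllerTest()
{
    _productService = A.Fake<IProductService>();
    _logger = A.Fake<IMyLogger>();
    _sut = new ProductsController(_productService, _logger);
    _fixture = new Fixture();
}
```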

Let’s add another test case to account for the situation when no products are found; in that case we would like to return a 404 Not Found result.

[Fact]
public void Get_WhenThereAreNoProductsFound_ShouldReturn404NotFoundResult()
{
    //Arrange
    

    //Act
    

    //Assert
    
}

Let’s write a failing unit test and then add the condition to our controller’s Get method.

[Fact]
public void Get_WhenThereAreNoProductsFound_ShouldReturn404NotFoundResult()
{
    //Arrange
    var products = new List<Product>();
    A.CallTo(() => _productService.GetProducts()).Returns(products);

    //Act
    var result = _sut.Get() as NotFoundResult;

    //Assert
    A.CallTo(() => _productService.GetProducts()).MustHaveHappenedOnceExactly();
    Assert.NotNull(result);                        
    Assert.Equal(StatusCodes.Status404NotFound, result.StatusCode);

}

Modify the Get method as follows:

[HttpGet]
public ActionResult Get()
{
    try
    {
        var products = _productService.GetProducts();
        if (products?.Count > 0)
        {
            return Ok(products);
        }
        return NotFound();
    }
    catch (Exception ex)
    {
        _logger.Log($"The method {nameof(ProductService.GetProducts)} caused an exception", ex);
    }
    return StatusCode(StatusCodes.Status500InternalServerError);
}

Finally, if you are using Swagger, adding the ProducesResponseType attributes results in better documentation. I know this isn’t related to unit testing, but it is a nice-to-have.

[HttpGet]        
[ProducesResponseType(typeof(List<Product>), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
[ProducesResponseType(StatusCodes.Status500InternalServerError)]        
public ActionResult Get()
{
    try
    {
        var products = _productService.GetProducts();
        if (products?.Count > 0)
        {
            return Ok(products);
        }
        return NotFound();
    }
    catch (Exception ex)
    {
        _logger.Log($"The method {nameof(ProductService.GetProducts)} caused an exception", ex);
    }
    return StatusCode(StatusCodes.Status500InternalServerError);
}

We have now accounted for all the test cases needed to make this code ready for production.

If you can think of any test case that I haven’t accounted for, please let me know in the comments section below. The final version of the test can be found here.

Thursday, May 9, 2019

PowerApps and Flow Deployment Issues

In this article, we take a look at Flow deployment issues that arise when deploying multiple PowerApps applications. The issues are encountered when you have the following setup.

  • You have multiple applications within the same environment. For instance, in the same environment you have PowerApps called AppDEV, AppQA, AppUAT and AppPROD.
  • You have applications connected to their respective SQL Servers.
  • You have a separate Flow for each App and they are named as FlowDEV, FlowQA, FlowUAT, and FlowPROD
  • Each Flow will use a different connection such as SQLConnectionDEV, SQLConnectionQA, SQLConnectionUAT and SQLConnectionPROD.

To deploy the application, please follow procedure listed below:

  • Export the application; on the Export package screen, under Related Resources for the app, select either the Update or Create as New option as the Import Setup for each resource.
  • Import the application; on the Import package screen, under Related Resources for the app, select either the Update or Create as New option as the Import Setup for each resource.

The ideal deployment scenario:

Export the AppDEV app and import it into the AppQA app, with AppDEV and AppQA each pointing to their respective connections, including their Flows.

Deployment 1.

When we export the app for the first time, we select the Update option for the SQL Connector and Create as New for the Flow, since the Flow doesn’t exist in QA.

Expected Behavior

1. AppQA should be connected to SQLConnectionQA

2. FlowQA should be connected to SQLConnectionQA.

Actual Behavior

1. AppQA is connected to SQLConnectionQA

2. FlowQA is connected to SQLConnectionQA.

So this is good; everything is as expected. Alright, follow along, as new requirements have come up that require a new deployment.

Deployment 2.

Now assume that we have the environment setup as shown below.

1. AppDEV and FlowDEV connected to SQLConnectionDEV

2. AppQA and FlowQA connected to SQLConnectionQA.

When we export the application, our gut instinct is that since the SQL connector and the Flow both exist in QA, we need not select “Create as New” for the Flow. So we select the Update option during both Export and Import of the application.

Expected Behavior

1. AppQA should be connected to SQLConnectionQA

2. FlowQA should be connected to SQLConnectionQA.

Actual Behavior

1. AppQA is connected to SQLConnectionQA

2. FlowQA is now renamed to FlowDEV and is connected to SQLConnectionQA.

Our environment now looks as follows:

1. AppDEV and FlowDEV connected to SQLConnectionDEV

2. AppQA and FlowDEV connected to SQLConnectionQA.

To recap, in case you didn’t notice: we now have two Flows with the same name, FlowDEV, one pointing to SQLConnectionDEV and the other pointing to SQLConnectionQA, and just by looking at the name we don’t know which Flow is which.

The issues now start to compound.

Deployment 3.

On the Import screen, for the Import Setup option, click the Update option. When presented with the Flow to select, you will see two Flows with the same name, FlowDEV. At this point you do not know which Flow is pointing to SQLConnectionQA. Accidentally, you click on the wrong Flow and then deploy.

The issues happen because you do not know which Flow is being used by your application. If you delete the wrong Flow, the PowerApp starts misbehaving. And if more people create additional Flows, it becomes a huge mess trying to chase down the bug.

Solution

The solution we have come up with is to always select the “Create as New” option during the Import process and then follow a naming convention. When you have multiple Flows created by this method, you can delete the older ones.

The naming convention

Follow this naming convention. For example, if you are doing a deployment on 2019-05-12 at 10:47:

<Environment>_NameoftheFlow_20190512_1047.

<Environment> is either DEV, QA, UAT or PROD.

What are the benefits of using this naming convention?

The different parts of the name provide different benefits and they are as follows:

Environment – When multiple Flows exist, this helps determine which environment a Flow is targeting. However, always verify on the Flow details page which connection string is being used.

Name of the Flow – Provide a distinct name that will be unique across different applications. This gives us some hint of the purpose of the Flow.

Current Date – Add the date formatted as 20190512 (YYYYMMDD). It helps identify when the Flow was created and when it is safe to delete it. For instance, if a newer Flow exists for the same environment, then a cleanup can be performed.

Current Time – Add the current time formatted as 1047 (HHmm). It helps distinguish Flows that were created on the same day.

You can check for Flows that are older than the most recent one and delete them. So far this approach has worked for us; if you have an alternate way of deploying PowerApps with Flows in this kind of setup, please suggest it.

What are the disadvantages of this approach?

The Flow execution history will be lost, if you are concerned about that. However, if everything was successful, then after a certain period of time you probably do not care about the execution runs.

It also has administration overhead, as you have to remember to do things, and it is definitely not a DevOps-friendly approach.

Thursday, February 21, 2019

A22-Use Azure DevOps to build a docker image and push to private repository

In this post, I want to explain the steps I took to create a docker image and push to a private docker hub repository using Azure DevOps. In the previous post, we looked at adding docker support to an existing ASP.NET Core SPA application that uses Angular 7.

It took me 17 attempts to get the build to work. In hindsight, it is always easy to say that I could have read the logs or documentation, but just by following the docs or the web interface of Azure DevOps, it isn’t obvious for a new user to figure this out. You can argue that Azure DevOps is great, or this and that, but it has its quirks and issues. I have been using it since it was called TFS 2010, so when I say a new user, I mean a new user from the standpoint of creating and publishing docker images.

Initially, I started with just two tasks in the build pipeline: Build an image and Push an image. It didn’t work because I wasn’t building and publishing using the dotnet build and dotnet publish tasks. After adding those two tasks, it still didn’t work because the files weren’t available inside the docker container. To fix that, I had to make sure that dotnet publish was using --output src as an argument, because in the dockerfile for the project I am switching the working directory to /src. You can take a look at the dockerfile here.

After doing that it still didn’t work, because I had to make sure the build context was pointing to the folder that contained the dockerfile. Who came up with the name build context? Seriously. They could have named it “folder containing dockerfile”. The tooltips are the worst in Azure DevOps build tasks. When you are stuck in a situation, none of these tooltips make any sense. For instance, I was trying to figure out what build context means. Is it a variable? Some build folder? And the tooltip says, “Path to build context”. But what does context mean? Crazy.

Alright, on to the next hurdle I overcame: access denied. This one took me multiple tries. First of all, I added a Docker Login task, and it didn’t work. Then I added a separate step to add a tag. It still didn’t work, because the tag you provide has to match your docker hub repository name. Even after adding the tag docker was expecting, it didn’t work. The reason was that Azure DevOps uses $(Build.BuildNumber) or something similar as the image name, and that same image name is used when it tries to push to the docker repository. The last step that made it work was setting the image name in the build tasks to the one docker expects: <yourid>/<repositoryname>:tagname.

I ensured that all the docker build tasks were using the same name, <yourid>/<repositoryname>:tagname. Finally, I was able to push a docker image to docker hub. What a relief!
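Putting those lessons together, the equivalent pipeline expressed in YAML might look roughly like the sketch below. This is an illustration, not my actual pipeline; the service connection name, repository name, and paths are all assumed placeholders:

```yaml
# Hypothetical azure-pipelines.yml sketch; connection, repository and paths
# are placeholders, not the exact values from the post.
steps:
  - task: DotNetCoreCLI@2
    displayName: dotnet build
    inputs:
      command: build

  - task: DotNetCoreCLI@2
    displayName: dotnet publish
    inputs:
      command: publish
      arguments: '--output src'    # matches WORKDIR /src in the dockerfile

  - task: Docker@2
    displayName: Build and push image
    inputs:
      containerRegistry: 'dockerHubConnection'   # Docker Hub service connection (assumed name)
      repository: 'yourid/repositoryname'        # must match the Docker Hub repository
      command: buildAndPush
      Dockerfile: '**/Dockerfile'
      buildContext: '$(Build.SourcesDirectory)'  # the folder containing the dockerfile
      tags: 'latest'
```

The key points from the post show up here explicitly: the dotnet publish output folder, the repository name matching Docker Hub, and the build context pointing at the dockerfile’s folder.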

Monday, February 18, 2019

A21-Adding docker support to ASP.NET Core SPA Application

This post is about my experience adding docker support to the A100 website. You can find the github repo here.

Disclaimer: I am a beginner to docker and Linux, and I don’t know what I am doing while trying to configure docker but I just wanted to share what steps I took and where my frustrations were with respect to dockerizing existing ASP.NET Core SPA applications.

I thought I would take docker for a spin and decided to dockerize the A100 website. I knew that you could just right-click the project and click “Add Docker Support” to dockerize an existing application. So I did that and then pressed F5 to debug the website inside a docker container from Visual Studio. It didn’t work because there was no node installed.

No node installed on the base dotnet runtime image

The A100 ASP.NET Core application is a SPA application built using the default Angular template. The backend is ASP.NET Web API and the front end uses Angular, all within a single Visual Studio project. The SPA application depends on npm and node at compile time as well as at runtime. The generated dockerfile uses nano server as the base image, which has the dotnet core runtime and nothing else. There is no node installed in the base nano server image.

Not an easy way to install node on nano server without PowerShell

This led me down the path of installing node on nano server, but there is no easy way to install node because PowerShell is not available in those nano server images.

“Mitul, curl.exe and tar.exe are available on nano server, why didn’t you try that?” – as you might suggest. I did try that; however, I couldn’t unzip the .zip file that I downloaded from the website using tar.exe. I tried tar.exe -xf node.zip multiple times, but it didn’t work. Maybe there is a bug in tar.exe on nano server; it kept saying “Unrecognized format”. Maybe tar doesn’t support .zip files. At this point, I gave up on the idea of using a Windows Server container image, since I couldn’t get it to work.

“But Mitul, you could have used a multi-stage docker build to download the file inside server core and then copy it into the nano server image. Why didn’t you try that?” – as you might suggest again. I didn’t realize this at the time, and I later learned that that’s how dotnet builds their base images. This is an option that I will try next. I hope it will work.

I followed this github issue to successfully install nodejs.

Node-sass doesn’t work on Linux container

I switched to Linux containers and regenerated the dockerfile, and this time too the base image didn’t have node installed. So I had to install nodejs on both the SDK image and the runtime image, because you need it once for compiling and again at runtime. Finally, I was able to install node. Yay! But when I tried to build the docker image, it errored out complaining that node-sass is not suitable for or not available in your environment. I came across a suggestion that I might have to rebuild it. I tried rebuilding the node-sass package, and this time the rebuild succeeded. However, at runtime I kept getting a node-sass-related error saying it could not generate .css from sass. It didn’t work.
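For reference, installing node in both stages of a Linux dockerfile ends up looking roughly like the sketch below. The NodeSource setup script, the Node version, the availability of curl in both images, and the A100.dll entry point are all assumptions for illustration, not the exact contents of my dockerfile:

```dockerfile
# Sketch: adding Node.js to both the build (SDK) and runtime stages of a
# Linux dockerfile for an ASP.NET Core SPA app. Versions, image tags and
# paths are illustrative assumptions; assumes curl is present in both images.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - \
    && apt-get install -y nodejs          # needed at compile time (npm build)
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS final
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - \
    && apt-get install -y nodejs          # needed at runtime by the SPA middleware
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "A100.dll"]
```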

Error: Missing binding /app/ClientApp/node_modules/node-sass/vendor/linux-x64-67/binding.node
Node Sass could not find a binding for your current environment: Linux 64-bit with Node.js 11.x

Found bindings for the following environments:
   - Windows 64-bit with Node.js 10.x
   - Windows 64-bit with Node.js 11.x

This usually happens because your environment has changed since running `npm install`.
Run `npm rebuild node-sass` to download the binding for your current environment.

My cross-platform dream was vanishing quickly at this point. I was questioning my approach of developing on Windows and hosting on Linux. Should I have developed inside a Linux environment like the Windows Subsystem for Linux (WSL) instead? Since node-sass was giving me issues, I decided to ditch sass and use scss. But then I realized that scss is sass itself; its newer syntax is called scss, and it still requires node-sass. I was getting really frustrated. Finally, I switched to plain .css for the A100 website.

This step worked, and I was able to press F5 and run the website from inside a Linux container. It might seem like this thing worked for me in one or two tries. No. I spent a day and a half trying to figure it out, with multiple attempts at building a docker image. I realized that there are a few things where we can improve this experience for new users.

Provide alternate ways of creating images with node installed

I understand that the ASP.NET Core team wants to keep the docker images as lean as possible, but please provide documentation on how we can easily install node on nano server. Maybe there is some and I was not able to find it. A better approach would be that when you click “Add Docker Support” inside Visual Studio, and the project is a SPA application, a message or comments are added to the dockerfile indicating that node is not installed, along with a pointer to an article on installing node in the container.

Node-sass npm package to be fixed

The node-sass package has given me the most issues. I do not understand why node-sass would not work on Linux. There is nothing Windows-specific inside package.json, so when npm installs node-sass on Linux, it should install the binding that is native to Linux. Why does it fail to work properly on Linux? I have no idea. It would be nice if this were addressed, as I am not using sass or scss for now. I could be wrong, but from tutorials and youtube videos it seems most people are using sass or scss. If that is the case, then this has to be improved.

I learned that I need to read more documentation regarding docker, Linux and ASP.NET Core. If you have suggestions for improving my understanding of docker, please share them.

Tuesday, January 15, 2019

PowerApps From A DevOps Perspective

In this post, I would like to talk about PowerApps from a DevOps perspective. When we think about DevOps, we think about source control, continuous integration, automation, configuration management, etc. Let’s take a look.

No Source Control for PowerApps

When you are developing with PowerApps, there is no way to do source control. There are no source files. The only artifact you can version control is the .zip file that you can export. The application that you create/edit in the browser is continuously updated, and whenever you feel it is ready for publishing, you make that version of the application available to users in your organization. The lack of version control causes problems when you cannot track down which version of the application caused a particular bug. The only way to guarantee no bugs is thorough testing of the application.

Multiple Environments Beware

Creating separate Dev, Test, UAT and Production environments is quite common in DevOps practice. You can create separate environments in PowerApps, but doing so comes with its own problems. For instance, in order to create and administer environments you need a Plan 2 license in PowerApps. This is not a big deal, since you can manage with one license. However, if you are creating an application that uses the On-Premises Data Gateway, then only applications in the Default environment can connect to the Gateway. Yeah, if you thought that was a bummer, then you are right. All hope is not lost: if you really need it, you can create a support ticket with Microsoft.

Another limitation is that you cannot create as many environments as you like, because you can only create two production environments. Another bummer. If you wanted Test, UAT and Production environments, then you are toast. You will have to request another user license before you can create them.

In my opinion, having multiple environments in PowerApps creates more issues from a DevOps perspective. It is not as seamless as one would expect from a traditional DevOps practice. If you are creating Model Driven apps only, then it makes sense to create multiple environments. If you are creating only Canvas apps, then just create different apps with DEV, TEST, UAT and PROD suffixes. Having multiple apps for different purposes creates its own issues, as detailed below.

CI/CD Pipeline

In PowerApps, you don’t have to build your code. Any change you make to the application is live for you to test, and in that way it is very productive. To publish the application, you just click the Publish button and it is live. If you screw up an update, you have the option of going back to a previous version of the application. However, the workflow of getting the application to end users all the way to production is not seamless. Since we create different apps for DEV, TEST and UAT, we had to export and import every time we propagated a change to these apps. For one of the applications we built, it was quite a lot of work every time we exported and imported into another application (dev/test/prod). For instance, we had to ensure that all the connections were pointing to environment-specific connection strings, verify that data was going in accurately, perform manual checks, and create a new Flow every time.

Configuration Management

Well, there is none. You cannot manage the configuration of your PowerApps application in one place. For example, in a typical asp.net web application you can put app settings inside the web.config file, and when you change environments you just update web.config. If you want to persist your app settings for PowerApps, it is better to put them inside a SQL table if you are utilizing on-premises databases. In other words, store your app settings in an external persistent store.
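A minimal sketch of such an external settings store, keyed by environment, might look like this. The table and column names are hypothetical, not from the application described in this post:

```sql
-- Hypothetical app-settings table for per-environment PowerApps configuration.
-- The app reads its own environment's rows via a SQL connection.
CREATE TABLE AppSettings (
    Environment  NVARCHAR(10)  NOT NULL,  -- DEV, TEST, UAT or PROD
    SettingKey   NVARCHAR(100) NOT NULL,
    SettingValue NVARCHAR(400) NOT NULL,
    CONSTRAINT PK_AppSettings PRIMARY KEY (Environment, SettingKey)
);
```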

Collaboration

It is very common to collaborate with brilliant teammates on different projects. If you are using any modern source control system, you can easily collaborate with team members and work on features simultaneously. In PowerApps, only one developer can work on the application at a given point in time. Good luck building your next ERP front end! One of the selling points of PowerApps is that it is very good for quick, single-purpose applications that a single developer can churn out in days, but it is not good when you want to build a large application.

Since we are on the topic of collaboration, let me add this here. If you are developing an application for an enterprise and expect people to use it, please don’t use your own personal account to develop it. The user who creates the application becomes the owner of the application, and guess what, you cannot change the owner later on. If you are an ops person, you want to have that ownership. Please use a service account to develop PowerApps applications.

That’s all I have for today. If you would like to share anything regarding PowerApps, please let me know in the comments below.

Monday, December 17, 2018

5 Takeaways for Building PowerApps Application

In this post, I would like to share some of the pitfalls we encountered while developing an enterprise grade PowerApps Application.

First, some background about the application. Its key components were an on-premises SQL Server, an Enterprise Data Gateway, Microsoft Flow, and a Windows service. The SQL Server database held enterprise ERP data, so its entities had lots of columns, just like a typical ERP system. We built a PowerApps canvas application, where you have more control over the look and feel. We were surprised by how much you can customize the look of the application and make it look enterprizey [see, I made up a new word].

Limit the number of controls
There are a couple of reasons why you want to limit the number of controls on a page and in the whole application.

1. Performance. In PowerApps canvas apps, adding lots of controls to a page degrades the performance of the application. We saw dropdown and calendar controls slow down: dropdowns would open after a delay, and the calendar popup would take a moment to appear.

2. In PowerApps, any control you add can be accessed from anywhere in the application, unlike a Windows Forms page, where you can only access the controls present on that page. Oh, that’s powerful, you might say, and that is why it is called PowerApps. But as you add more controls, it becomes difficult to keep track of them all, and soon you will run out of creative, unique names for your controls.

3. At some point, when you have lots of controls, you will forget control names and reference the wrong control. Why didn’t that popup disappear when it was supposed to?

Limit the logic you put into the Functions bar
Separation of concerns is a fundamental design principle (a close cousin of the Single Responsibility principle in SOLID), and when you go against it, it bites you where it hurts the most. In PowerApps, the presentation layer is meant for display, and Microsoft Flow is meant for business logic processing. The function bar at the top is so powerful, with so many functions at your disposal, that you can accomplish a whole lot with it. So, no wonder, we added a ton of business logic into it, to the point where it grew to 1,000 lines of code. No kidding. In hindsight it is always easy to point at the mistake and call it obvious, but in the moment it is not. Let’s go through the issues with putting that many lines of code into the tiny function bar at the top. People familiar with Excel will be able to relate to this.

1. Every single time you want to view the code, you have to expand the function bar.

2. Endless scrolling of numerous lines of code.

3. As the lines of code increased, error messages started to look like ancient Egyptian symbols. We were spending hours debugging them. Luckily we knew how to comment out code :) and narrow down the issues.

4. The smallest mistake or syntax issue can take hours to resolve, because the error message doesn’t point to the offending line of code. Oh, and did I mention that there are no line numbers for the code you write in the function bar?

5. The function bar freezes when there are many lines of code. Even a few characters of text can take a few seconds to appear, and commenting out a section takes a while to take effect.

6. Bonus! There is no source control for that code, so if you break something, good luck. Like a good programmer, you should always keep any kind of code under source control. If you had a lot of logic associated with a button and you clicked on Associated Flow with this button, then oops, you just lost all that logic. Please put your code into source control.

Limit the number of columns and records you display
PowerApps is not meant for building the front end of an ERP system and should not be used for that purpose. PowerApps shines when you build an application that fills that last-mile gap. The more columns you put on a page, the slower the page responds, and the UX suffers when a user has to scroll horizontally or vertically to see the full picture.

Put application settings into external table

In PowerApps, there is no App.config or Web.config file where you can put application settings and change them, so it is important to store your app settings in an external table. If you hard-code a key into the application, then every time you export and import the app to create a new application, you have to edit that key again. Do that in multiple places and the process quickly becomes inefficient. Again, this is basic 101 if you are a developer.

Do not fear Flow because of Licensing issues

Early on, we wanted to be cost effective and hence avoided using Flow. The moment we found ourselves stuck debugging 1,000 lines of code again and again, we decided to give Flow a chance. What we learned after reading the docs is that, for our purposes, the Flow runs allotted per user per month were plenty. With an Office 365 license you get 2,000 runs per user per month, aggregated at the tenant level. Once we moved to Flow and extracted the key business logic into a SQL Server stored procedure, the application was stable again.

So that’s all I have to share this Monday late night. If you have learnings of your own regarding PowerApps, please share them in the comments below.

Tuesday, December 4, 2018

A20-Upgrading to Angular 7 and ASP.NET Core

This post is a part of a series of posts that I am writing as I am building an app using Angular and ASP.NET Core 2.1. Links to previous posts: A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19. Github Repo

Due to personal issues (new job, new country, India trip), I wasn’t able to blog.

Since I posted last time, Angular 7 has come out, along with a new SDK for .NET Core. Instead of continuing on the older version of Angular, I decided to update everything.

I updated:

- Node to the latest version, v11.3.0
- npm to 6.4.1
- Visual Studio and VS Code to their latest versions

To update Angular to 7, I followed the steps outlined at https://update.angular.io/:

- Updated the Angular CLI globally and locally
- Updated the Angular core and RxJS packages

GitHub warns you of vulnerabilities, and every time you run npm commands it shows you issues with packages. There were a lot of package vulnerabilities, so I removed them one by one, and npm audit is not complaining anymore.

After updating all the packages and running ng update for angular packages, I was getting errors when running dotnet run.


I had to update the NuGet package Microsoft.AspNetCore.SpaServices.Extensions. After updating it, I had to fix the start and build commands in the scripts section of package.json to remove the --extract-css flag.
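For reference, the fix amounts to deleting the flag from the scripts section. A minimal sketch of the result, assuming the default script names from the ASP.NET Core Angular template (your file will contain more entries):

```json
{
  "scripts": {
    "ng": "ng",
    "start": "ng serve",
    "build": "ng build"
  }
}
```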

The last issue I had to fix was related to certain RxJS operator usages in the application, e.g. of, map, and catch.


After fixing those, I was able to compile and run dotnet run to view the application in the browser as before.