Thursday, April 19, 2018

Session : Jumpstart to Azure Functions at Sri Lanka Developer Forum

Delivered an hour-long session titled Jumpstart to Azure Functions at the Sri Lanka Developer Forum meetup for April 2018. There I went through the following areas, along with a demo:
  • Serverless Computing
  • Azure Functions
  • Azure Functions Proxies
  • Performance Considerations
[Image: Sri Lanka Developer Forum - April, 2018]
For more information,
   Meetup Event

Happy Coding.

Regards,
Jaliya

Tuesday, April 10, 2018

Durable Functions in Azure Functions for Function Chaining

When designing Azure Functions, one of the best practices Microsoft recommends is that functions should be stateless.

But imagine you have this requirement: you need to call an Azure function, and the output of that function needs to be applied as the input of another function, and so on and so forth.
[Image: Function Chaining]
This scenario is known as function chaining. The very first function (say F0), which orchestrates F1, F2, F3, and F4, needs to maintain the return values and control the flow. But what if, after some amount of time, the process running F0 recycles, or the VM F0 is running on reboots? With regular Azure Functions, since we aren't maintaining state, we can't tell at which point F0 failed.

Enter Durable Functions.

Durable Functions are designed to do all the hard work of maintaining the function state for us. They are built on top of the Durable Task Framework. As of today (10th April 2018), Durable Functions are in preview.

In this post let's see how Durable Functions can be used to chain functions. It's always best to go by an example, so first, let's see how we can create Durable Functions. Please note that all the instructions given are as of today; these steps might change over time.

If you are using the Azure Portal and you create a new Function App, by default it uses Runtime 1.x. Because of that, you won't see any Durable Function templates when creating new functions. You need to go to Function App Settings and change the runtime to beta.
[Image: Function App Settings]
[Image: Runtime Version]
Here I have already switched to beta. Alternatively, you can go to Application Settings and change the FUNCTIONS_EXTENSION_VERSION to beta.

And then if you try to create a new function, you can see all the Durable Functions related templates.
[Image: Durable Functions Templates]
When you try to create a new function using one of the above templates, you will be prompted to install Durable Functions extension.

If you are using Visual Studio, make sure you are on version 15.3 or later and that the Azure development workload is included in your installation.

And when you are creating new Function App, make sure to select Azure Functions V2 Preview (.NET Core).
[Image: New Project]
For this demo, let’s go ahead with Visual Studio. Once the project is created, let’s add a new function using Durable Functions Orchestration template.

[Image: Templates]
Once it’s created, you can see that the Microsoft.Azure.WebJobs.Extensions.DurableTask NuGet package has been added for us.
[Image: project.csproj]
I have modified the default file to something like below.
public static class Function1
{
    [FunctionName("Function1")]
    public static async Task<int> RunOrchestrator(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        int x = await context.CallActivityAsync<int>("GetSum", new SumModel(1, 10));
        int y = await context.CallActivityAsync<int>("GetSum", new SumModel(x, 10));
        int z = await context.CallActivityAsync<int>("GetSum", new SumModel(y, 10));
 
        return z; // z = 31
    }
 
    [FunctionName("GetSum")]
    public static int GetSum([ActivityTrigger] SumModel model, TraceWriter log)
    {
        // Time consuming operation
        return model.Number1 + model.Number2;
    }
 
    [FunctionName("Function1_HttpStart")]
    public static async Task<HttpResponseMessage> HttpStart(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")]HttpRequestMessage req,
        [OrchestrationClient]DurableOrchestrationClient starter,
        TraceWriter log)
    {
        // Function input comes from the request content.
        string instanceId = await starter.StartNewAsync("Function1", null);
 
        log.Info($"Started orchestration with ID = '{instanceId}'.");
 
        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}
 
public class SumModel
{
    public int Number1 { get; private set; }
    public int Number2 { get; private set; }
 
    public SumModel(int number1, int number2)
    {
        Number1 = number1;
        Number2 = number2;
    }
}
Here at the bottom, we have an HttpTrigger function, Function1_HttpStart, to trigger the orchestrator function, Function1. Function1 calls GetSum multiple times to form a chain.

There are a couple of interesting things happening here. Imagine GetSum takes 10 seconds to complete; that would suggest Function1 runs for more than 30 seconds. But it doesn’t. After calling GetSum, Function1 goes to sleep. When GetSum completes, it notifies Function1 with the return value, and Function1 resumes its operation. All this time, the state is managed by the runtime.
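This sleep/resume behavior works by replaying the orchestrator from its recorded history, which is why orchestrator code must be deterministic. For example, instead of DateTime.UtcNow (which would return a different value on every replay), the orchestration context exposes a replay-safe timestamp. A small sketch based on the example above:

```csharp
[FunctionName("Function1")]
public static async Task<int> RunOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    // DateTime.UtcNow would change on every replay; CurrentUtcDateTime
    // is replay-safe because it is sourced from the orchestration history
    DateTime startedAt = context.CurrentUtcDateTime;

    int x = await context.CallActivityAsync<int>("GetSum", new SumModel(1, 10));
    return x;
}
```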

Isn’t it fascinating?

There are different scenarios where Durable Functions come in really handy; please do check them out using the link below.
https://docs.microsoft.com/en-us/azure/azure-functions/durable-functions-overview

Happy Coding.

Regards,
Jaliya

Tuesday, March 27, 2018

Visual C# Technical Guru - February 2018

Another month as a judge in Microsoft TechNet Guru Awards under Visual C# category. The TechNet Guru Awards celebrate the technical articles on Microsoft TechNet.

[Image: Visual C# Technical Guru - February 2018]
Happy Coding.

Regards,
Jaliya

Monday, March 26, 2018

Azure App Service: Enabling Static Content

I had an AngularJS application running as an Azure App Service and noted that it doesn’t serve JSON files.

The fix was simple: manually adding a web.config file and allowing the static content.
<configuration>
  <system.webServer>
    <staticContent>
    <mimeMap fileExtension=".json" mimeType="application/json" />
    </staticContent>
  </system.webServer>
</configuration>
Happy Coding.

Regards,
Jaliya

Tuesday, March 20, 2018

Azure Functions : Map Data to Output Bindings

From Azure Functions, we can map data to output bindings using the following three options.
  1. Using function return value
  2. Using out parameter
  3. Using ICollector or IAsyncCollector
In this post let’s have a look at them in detail. I have created an Azure Function App using the Azure Portal, and I will be using the portal’s editor in this post.

The scenario I am going to use: I will have an Azure Function which can be triggered manually, and from there I will output messages to an Azure Storage Queue.

So first let’s create a Queue trigger.
[Image: Queue trigger]
[Image: Queue trigger Properties]
using System;
 
public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Queue trigger function processed: {myQueueItem}");
}
So every time a message gets queued to myqueue-demo, the QueueTriggerCSharp1 function will get triggered. It just logs the message, so we can see whether the data is getting mapped correctly when we use output bindings in our manual trigger.

Now let’s create the Manual trigger.
[Image: Manual trigger]
[Image: Manual trigger Properties]
using System;
 
public static void Run(string input, TraceWriter log)
{
    log.Info($"Manually triggered function called with input: {input}");
}


1. Using function return value


The easiest way to map data to an output binding is using the function’s return value.

I am changing our ManualTriggerCSharp1 as follows.
using System;
 
public static string Run(string input, TraceWriter log)
{
    log.Info($"Manually triggered function with input: {input}");
    return $"Using Return: {input}";
}
Now we have a function which returns a string. Our requirement is to map this return value to an output binding.

For that, let’s go to Integrate menu under our ManualTriggerCSharp1 function.
[Image: Integrate]
And you will see something like below.
[Image: Integrate]
Click on New Output under Outputs. Select Azure Queue Storage.
[Image: Select Output Type]
On the next wizard page, you can simply select “Use function return value”. Then you need to select the queue name and the storage account connection you want the output written to.
[Image: New Output Properties]
Click on Save, and run our Manual trigger. After running it, if you go to Monitor menu under QueueTriggerCSharp1, you can see the return value has been read.
[Image: Monitor]
[Image: Monitor]


2. Using out parameter


Now let’s modify our ManualTriggerCSharp1 function to have an out parameter and set its value in the function body.
using System;
 
public static string Run(string input, TraceWriter log, out string outParameter)
{
    log.Info($"Manually triggered function with input: {input}");
 
    outParameter = $"Using Out: {input}";

    return $"Using Return: {input}";
}
Now let’s add this Output binding.

Again add a New Output, select Azure Queue Storage, and then configure it as follows (remember to specify the correct queue name our QueueTriggerCSharp1 is watching).
[Image: New Output Properties]
The important thing to keep in mind is that the Message parameter name needs to be the same as your out parameter name.

Save and run the manual trigger and examine the Monitor menu under QueueTriggerCSharp1.

Now you should see that the value of the out parameter has been added to the queue and read.
[Image: Monitor]
Now, what if the function is async? Then we can’t use out parameters.

3. Using ICollector or IAsyncCollector


This is my preferred way of mapping data to output bindings. One reason, of course, is that most of the time functions are async. The other reason is that using collectors we can write multiple items. Let’s modify ManualTriggerCSharp1 as follows.
using System;
 
public static async Task<string> Run(string input, TraceWriter log, IAsyncCollector<string> queueCollector)
{
    log.Info($"Manually triggered function with input: {input}");
 
    await queueCollector.AddAsync($"Using Collector: {input} 1");
    await queueCollector.AddAsync($"Using Collector: {input} 2");
 
    return $"Using Return: {input}";
}
Right now, since we no longer have the outParameter and its binding, let’s change the previous output binding.
[Image: Output Properties]
Again, we need to make sure the Message parameter name is the same as the function parameter name.

And if we save and run, we should see the message queued and read by QueueTriggerCSharp1.
[Image: Monitor]
And note: ICollector is the synchronous counterpart of IAsyncCollector, and I don’t think it deserves a separate demonstration.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Friday, March 2, 2018

Visual C# Technical Guru - January 2018

Another month as a judge in Microsoft TechNet Guru Awards under Visual C# category. The TechNet Guru Awards celebrate the technical articles on Microsoft TechNet.

Post in WikiNinjas Official Blog,
https://blogs.technet.microsoft.com/wikininjas/2018/03/02/microsoft-technet-guru-winners-january-2018/
[Image: Visual C# Technical Guru - January 2018]
Happy Coding.

Regards,
Jaliya

Tuesday, February 13, 2018

Wrote a post on Wiki Life at Official Blog of TechNet Wiki

Wrote a post in Wiki Ninjas - Official Blog of TechNet Wiki. The title of the post is TNWiki Article Spotlight – Azure DevOps Project – A review of the preview.
[Image: TNWiki Article Spotlight – Azure DevOps Project – A review of the preview]

Happy Coding.

Regards,
Jaliya

Wednesday, January 31, 2018

Visual C# Technical Guru - December 2017

Another month as a judge in Microsoft TechNet Guru Awards under Visual C# category. The TechNet Guru Awards celebrate the technical articles on Microsoft TechNet.

Post in WikiNinjas Official Blog,
[Image: Visual C# Technical Guru - December 2017]
Happy Coding.

Regards,
Jaliya

Tuesday, January 23, 2018

Rename/Move Files and Directories using git-mv

If you have used git, I am sure you must have experienced this: when you rename files/folders, especially a case change, it won't be reflected in the remote. For instance, if you rename a file from helloworld.txt to helloWorld.txt or HelloWorld.txt, even though it's changed locally, it won't get changed in the remote.

In such cases, git-mv is a really handy command. Patrick Wied wrote a wonderful post back in 2012 explaining the use of git-mv in different scenarios. Great post!
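As a concrete illustration (file and directory names here are made up), this is how git mv records a case-only rename in a throwaway repository:

```shell
# Create a throwaway repository
mkdir gitmv-demo && cd gitmv-demo
git init --quiet
git config user.email "demo@example.com"
git config user.name "Demo"

echo "hello" > helloworld.txt
git add helloworld.txt
git commit --quiet -m "add helloworld.txt"

# A plain OS-level rename that changes only the casing is often invisible
# to git on case-insensitive file systems; git mv stages it explicitly:
git mv helloworld.txt HelloWorld.txt
git commit --quiet -m "rename to HelloWorld.txt"

git ls-files   # prints HelloWorld.txt
```

On a case-insensitive file system where even git mv complains the target exists, the rename can be done in two steps via a temporary name (git mv helloworld.txt tmp, then git mv tmp HelloWorld.txt).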

Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, January 17, 2018

Getting Started with Azure Functions

Serverless Computing is a more trending area in Azure than ever before. Basically, Serverless Computing completely abstracts the required resources away from your concerns. It acts in an event-driven manner: resources are utilized only when you want them to be, and they are not up and running when you are not using them. So you are billed only for what is actually used; you don’t have to worry about anything else.

When talking about Serverless Computing in Azure, as of today Azure provides the following three sets of features.
  1. Functions - Serverless Compute
  2. Logic Apps - Serverless Workflow
  3. Event Grid - Serverless Events
In this post, let’s see how we can get started with Azure Functions.

First, open up the Azure Portal. Go to New and search for “functions”. And click on Function App.
[Image: Function App]
From the next panel, click on Create.
[Image: Create]
From the next screen, fill in the details as required. For the Hosting Plan, let’s go with the Consumption Plan, because then we are billed per execution of our functions.
[Image: New Function App Details]
Once the function app is created, find it by going to More services and searching.
[Image: Find Function Apps]
And you should see all your function apps.
[Image: Function App]
Now when you click on the created function app, you will see something like below.
[Image: Function App: Overview]
You can find the function app URL under the Overview tab. If you click the URL, an application opens, just as with an App Service.
[Image: Function App]
The function app’s settings are under Platform features.
[Image: Function App: Platform Features]
Now, to keep us going, let’s create a function. Click on Functions and then New Function as below.
[Image: New Function]
You are presented with a set of templates.
[Image: Select Template]
All the available languages are as follows (please note that not all templates are available for every language).
[Image: Available Languages]
First, let’s create an HTTP Trigger.
[Image: New HTTP Trigger]
I have selected C# as the Language. For the Name, you can give whatever name you want. And then we need to select the Authorization level.
  • Function - A function-specific API key is required. This is the default value if none is provided.
  • Anonymous - No API key is required.
  • Admin - The master key is required.
Let’s go ahead with Anonymous for the purpose of the demo.

Once the function is created, you are presented with a page with a csx file (csx stands for C# Script file). Here you can edit the body of the Run method. Please note, you need to keep the method signature as it is.

On the Test panel, let’s change the HTTP Method to GET and add a new query parameter “name”. Give it some value and click Run, and you can see your Azure Function running.
[Image: run.csx]
You can get the function’s URL as above, let’s just copy it into a notepad.

To make the demo nicer, let’s add another function, this time a Timer trigger.
[Image: New Timer Trigger]
I have selected C# as the Language and given it a name, and you can see the Schedule is expressed as a CRON expression. I have updated the CRON expression to run every second.
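Note that the timer trigger schedule uses a six-field format ({second} {minute} {hour} {day} {month} {day-of-week}) rather than the classic five-field cron. A few sketches:

```
* * * * * *      every second
0 */5 * * * *    once every five minutes
0 0 9 * * 1-5    at 9:00 AM, Monday through Friday
```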

Like with the previous function, you will see the Run method. Here, let’s modify the method to send an HTTP GET to our previously created function.
using System;
using System.Net.Http;
 
// Reuse a single HttpClient across invocations instead of creating one per run
private static readonly HttpClient client = new HttpClient();
 
public static async Task Run(TimerInfo myTimer, TraceWriter log)
{
    var response = await client.GetAsync("{FunctionUrl}?name=John");
    var result = await response.Content.ReadAsStringAsync();
 
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}, Result: {result}");
}
Replace {FunctionUrl} with the URL we copied to notepad, and let’s run it. You can see our timer trigger function executing every second, calling the HTTP trigger function.
[Image: Timer Trigger Running Log]
Isn’t it nice?

Happy Coding.

Regards,
Jaliya

Monday, January 15, 2018

Passing Nullable Value for a DbCommand Parameter

I had this requirement where I wanted to pass a nullable property for a Parameter in DbCommand.
public void Execute(int? someId)
{
    using (DbCommand dbCommand = _context.Database.GetDbConnection().CreateCommand())
    {
        dbCommand.CommandType = CommandType.StoredProcedure;
        dbCommand.CommandText = "sp_SomeStoredProcedure";
        dbCommand.Parameters.Add(new SqlParameter("ParameterId", someId));

        // some code
    }

    // some code
}
I was expecting that when someId is null, ADO.NET would pass null for the parameter. But apparently, that isn't the case: I got a "required parameter is not supplied" error. I even tried the following, which I felt would work,
dbCommand.Parameters.Add(new SqlParameter("ParameterId", someId.HasValue ? someId.Value : null));
But I kept getting the error. Finally, the null-coalescing operator with DBNull came to my rescue.
dbCommand.Parameters.Add(new SqlParameter("ParameterId", someId ?? (object)DBNull.Value));
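If this comes up in several places, the cast-to-object dance can be hidden behind a small extension method (ToDbValue here is a made-up helper, not part of ADO.NET):

```csharp
using System;

public static class DbValueExtensions
{
    // Made-up helper: maps null (including an empty Nullable<T>, which
    // boxes to a null object reference) to DBNull.Value
    public static object ToDbValue(this object value) => value ?? DBNull.Value;
}

// Usage:
// dbCommand.Parameters.Add(new SqlParameter("ParameterId", someId.ToDbValue()));
```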
Happy Coding.

Regards,
Jaliya

Monday, January 8, 2018

C# 7.2 : in Parameters

With C# 7.2, a nice set of syntax improvements that enable working with value types was introduced. My favorite among these is in parameters.

In this post let’s see what in parameters really are.

As you already know, C# has had ref and out for quite a while now. If we recall what ref and out do (in the parameter modifier context), it’s basically as follows.
static void Main(string[] args)
{
    Method1(out int i);
    Console.WriteLine(i); // 10
 
    Method2(ref i);
    Console.WriteLine(i); // 20
}
 
static void Method1(out int i)
{
    // Variable i needs to be assigned a value before leaving the method
    i = 10;
}
 
static void Method2(ref int i)
{
    // Variable i might/might not be assigned a value before leaving the method
    i = 20;
}
Both are used to pass the parameter by reference. The difference is that with an out parameter, the variable needs to be assigned a value before the method returns. With a ref parameter, there is no such requirement: within the called method, you may or may not assign a value to the parameter. But since there is a possibility of a value not being set there, the ref parameter should have a value assigned before being passed.

But here, from the caller’s side, there is no way to say that within the called method the parameter should stay read-only (if we make the variable itself readonly, that affects its use outside the call as well).

This is where in parameters come in.
static void Main(string[] args)
{
    Method1(out int i);
    Console.WriteLine(i); // 10
 
    Method2(ref i);
    Console.WriteLine(i); // 20
 
    Method3(i);
    Console.WriteLine(i); // 20
}
 
static void Method1(out int i)
{
    // Variable i needs to be assigned a value before leaving the method
    i = 10;
}
 
static void Method2(ref int i)
{
    // Variable i might/might not be assigned a value before leaving the method
    i = 20;
}
 
static void Method3(in int i)
{
    // Variable i is 20 and cannot assign a value
}
You can see we have Method3, which accepts int i with the in modifier. Unlike out and ref, when calling a method which has in parameters, we don’t have to call it like Method3(in i); we can omit the in modifier, because the variable is going to be read-only within the called method. Trying to set a value for an in parameter from within the called method is illegal.
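Beyond enforcing read-only semantics, in shines with larger value types: the struct is passed by reference, so no copy is made per call. A small sketch (the Point3D type is made up for illustration):

```csharp
public readonly struct Point3D
{
    public double X { get; }
    public double Y { get; }
    public double Z { get; }

    public Point3D(double x, double y, double z)
    {
        X = x; Y = y; Z = z;
    }
}

public static class Geometry
{
    // 'in' passes the 24-byte struct by reference, avoiding a copy per call,
    // while still guaranteeing the callee cannot modify it
    public static double SquaredLength(in Point3D p)
    {
        // p = default;  // illegal: cannot assign to an 'in' parameter
        return p.X * p.X + p.Y * p.Y + p.Z * p.Z;
    }
}
```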

Isn’t it nice!

Happy Coding.

Regards,
Jaliya

Saturday, January 6, 2018

No CREATE OR ALTER before Microsoft SQL Server 2016 SP1

In one of the applications that I am working on, there was a stored procedure pushed in by a fellow developer as follows.
ALTER PROCEDURE [someSchema].[someProcedureName]
//--implementation
Since I didn’t have the procedure in my local Microsoft SQL Server database, I changed the procedure to the following and pushed it back.
CREATE OR ALTER PROCEDURE [someSchema].[someProcedureName]
//--implementation
The change I made is from ALTER PROCEDURE to CREATE OR ALTER PROCEDURE. Then, to my surprise, I got to know that the stored procedure was breaking in my fellow developer's environment. He was on Microsoft SQL Server 2014 and I am on Microsoft SQL Server 2017.

Did some googling and got to know that CREATE OR ALTER was introduced as part of the Microsoft SQL Server 2016 SP1 enhancements.
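For reference, on servers older than 2016 SP1 the same effect needs the older two-step idiom (a common sketch; the object names follow the example above):

```sql
IF OBJECT_ID('[someSchema].[someProcedureName]', 'P') IS NULL
    EXEC ('CREATE PROCEDURE [someSchema].[someProcedureName] AS RETURN');
GO

ALTER PROCEDURE [someSchema].[someProcedureName]
AS
BEGIN
    -- implementation
END
GO
```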

For more information, read the following post,
Developers Choice: CREATE OR ALTER

Happy Coding.

Regards,
Jaliya

Friday, January 5, 2018

C# 7 Point Releases

C# 7.0 was publicly released in March 2017 with the release of Visual Studio 2017. Prior to C# 7, there was little to no discussion about point releases of the C# language (citation needed). But with C# 7, the story is not the same. As of today, we already have C# 7.1 and 7.2.

C# 7.1 was released in August 2017 with Visual Studio 15.3, while 7.2 was released in December 2017 with Visual Studio 15.5.

Here is a list of features which became available with the point releases.

C# 7.1
  • async Main method
  • default literal expressions
  • Inferred tuple element names
C# 7.2
  • Reference semantics with value types
  • Non-trailing named arguments
  • Leading underscores in numeric literals
  • private protected access modifier
Visual Studio 2017 lets you select the language version for your project: go to Project Properties -> Build -> Advanced. You can decide whether you are going to live on the edge or not.
[Image: Project Language Version]
Happy Coding.

Regards,
Jaliya

Sunday, December 31, 2017

Visual C# Technical Guru - November 2017

Another month as a judge in Microsoft TechNet Guru Awards under Visual C# category. The TechNet Guru Awards celebrate the technical articles on Microsoft TechNet.

Post in WikiNinjas Official Blog,
[Image: Visual C# Technical Guru - November 2017]
Happy Coding.

Regards,
Jaliya

Tuesday, December 5, 2017

AutoMapper : Handling Profile Dependencies using Custom Value Resolvers

If you are using, or if you have used (I am sure you have), AutoMapper, Profiles let you organize your mapping configurations in an easy manner.

In this post, let’s see how we can handle AutoMapper Profile dependencies using Custom Value Resolvers. As it’s always good to go with an example, let’s go with an example ASP.NET Core Web Application.

For an ASP.NET Core Web Application, AutoMapper can be configured in a few easy steps. You can add a class deriving from the Profile class, and in its constructor set up your mapping configurations.
public class SomeProfile : Profile
{
    public SomeProfile()
    {
        CreateMap<MyClass, MyClassDTO>();
        // likewise
    }
}
Next, in the Startup.ConfigureServices method, you just need to add the following line (note: you will need to install the required AutoMapper NuGet package).
public void ConfigureServices(IServiceCollection services)
{
    // some code
    services.AddAutoMapper();
}
Now you can use IMapper in your required classes as follows.
public class MyController : Controller
{
    private IMapper _mapper;
 
    public MyController(IMapper mapper)
    {
        _mapper = mapper;
    }
}
Now consider we have following two classes.
public class MyClass
{
    public int Id { get; set; }
}
 
public class MyClassDTO
{
    public int Id { get; set; }

    public string SomeProperty { get; set; }
}
And here, on MyClassDTO, SomeProperty can't be mapped directly; we will need to get the value by calling ISomeService.GetSomeProperty(int id). Imagine ISomeService is registered for dependency injection.
public interface ISomeService
{
    string GetSomeProperty(int id);
}
So what we would expect is that we can get the SomeProperty value as follows.
public class SomeProfile : Profile
{
    private ISomeService _someService;
 
    public SomeProfile(ISomeService someService)
    {
        _someService = someService;
 
        CreateMap<MyClass, MyClassDTO>()
            .ForMember(obj => obj.SomeProperty,
                exp => exp.MapFrom(prop => _someService.GetSomeProperty(prop.Id)));
    }
}
But unfortunately, if we run this, we will get the error “No parameterless constructor defined for this object” on services.AddAutoMapper().

In these kinds of scenarios, we can use AutoMapper Custom Value Resolvers, where dependency injection works without any issues.
public class MyPropertyResolver : IValueResolver<MyClass, MyClassDTO, string>
{
    private ISomeService _someService;
 
    public MyPropertyResolver(ISomeService someService)
    {
        _someService = someService;
    }
 
    public string Resolve(MyClass source, MyClassDTO destination, string destMember, ResolutionContext context)
    {
        return _someService.GetSomeProperty(source.Id);
    }
}
And the usage would be as follows.
public class SomeProfile : Profile
{
    public SomeProfile()
    {
        CreateMap<MyClass, MyClassDTO>()
            .ForMember(obj => obj.SomeProperty,
                exp => exp.ResolveUsing<MyPropertyResolver>());
    }
}
Now SomeProperty value should get resolved without any errors.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, November 28, 2017

Wrote a post on Wiki Life at Official Blog of TechNet Wiki

Wrote a post in Wiki Ninjas - Official Blog of TechNet Wiki. The title of the post is TNWiki Article Spotlight – Implementing Server Side validations in AngularJS.
[Image: TNWiki Article Spotlight – Implementing Server Side validations in AngularJS]
Read the rest on,
TNWiki Article Spotlight – Implementing Server Side validations in AngularJS

Happy Coding.

Regards,
Jaliya

Sunday, November 26, 2017

Fiddler Doesn’t Capture All RavenDB REST API Calls

I was trying to figure out an issue in a Web Application that I am currently working on and it has a RavenDB backend.

The code was doing some operations on the RavenDB database (behind the scenes, those method calls get translated into calls to the RavenDB REST API), and I was using Fiddler to trace the API calls to RavenDB. But apparently, only some of the API calls were listed in Fiddler, and most of the calls were not present. RavenDB logs showed that it was receiving calls, so I was a bit confused as to why Fiddler wasn't capturing them.

I spent a couple of hours changing Fiddler settings and RavenDB IIS Web Application settings, but the output was still the same.

I had the RavenDB IIS Web Application running on port 8080, and I was using Url=http://localhost:8080;Database={MyDatabaseName} as the connection string. I just replaced localhost with my machine name, and that was it: I am seeing all the API calls to RavenDB in Fiddler.

So if you want Fiddler to capture all RavenDB calls being made from your application, instead of using localhost or 127.0.0.1, use the machine name.

So now everything is in place to find the real issue; I am back to it.

Hope someone will find this helpful.

Happy Coding.

Regards,
Jaliya

Tuesday, November 21, 2017

Visual C# Technical Guru - October 2017

Another month as a judge in Microsoft TechNet Guru Awards under Visual C# category. The TechNet Guru Awards celebrate the technical articles on Microsoft TechNet.

Post in WikiNinjas Official Blog,
[Image: Visual C# Technical Guru - October 2017]
Happy Coding.

Regards,
Jaliya

Friday, November 3, 2017

Azure PowerShell - Cloning App Service Slots

This is a set of scripts I keep using for cloning Azure App Service slots. Hope someone will find it useful.
# Login
Login-AzureRmAccount
 
# List all subscriptions (this step is only useful if you have multiple subscriptions)
Get-AzureRmSubscription
 
# Select azure subscription (this step is only useful if you have multiple subscriptions)
Get-AzureRmSubscription -SubscriptionName "<SubscriptionName>" | Select-AzureRmSubscription
 
# Listing all slots for an app service
Get-AzureRmWebAppSlot -ResourceGroupName "<ResourceGroupName>" -Name "<AppServiceName>"
 
# Cloning web app to a new slot
$srcWebApp = Get-AzureRmWebApp -ResourceGroupName "<ResourceGroupName>" -Name "<AppServiceName>"
New-AzureRmWebAppSlot -ResourceGroupName "<ResourceGroupName>" -Name "<AppServiceName>" -AppServicePlan "<AppServicePlan>" -Slot "<NewSlotName>" -SourceWebApp $srcWebApp
 
# Cloning web app slot to a new slot
$srcWebAppSlot = Get-AzureRmWebAppSlot -ResourceGroupName "<ResourceGroupName>" -Name "<AppServiceName>" -Slot "<SourceSlotName>"
New-AzureRmWebAppSlot -ResourceGroupName "<ResourceGroupName>" -Name "<AppServiceName>" -AppServicePlan "<AppServicePlan>" -Slot "<NewSlotName>" -SourceWebApp $srcWebAppSlot
Happy Coding.

Regards,
Jaliya

Wednesday, November 1, 2017

ASP.NET Core 2.0: Why it’s important to have Program.BuildWebHost method?

I was doing some refactoring on an ASP.NET Core 2.0 Web Application which was migrated from ASP.NET Core 1.1.

In ASP.NET Core 1.1 Web Applications, program.cs was like this.
public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .UseApplicationInsights()
            .Build();
 
        host.Run();
    }
}
But with ASP.NET Core 2.0, it has to be something like this.
public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }
 
    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}
I felt like I can eliminate BuildWebHost method, so I did like follows.
public class Program
{
    public static void Main(string[] args)
    {
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
}
All seemed to be good; the application was running well. After some time, I wanted to add a database migration, and when I tried to do Add-Migration, I was getting this weird error.
Unable to create an object of type 'T'. Add an implementation of 'IDesignTimeDbContextFactory<T>' to the project, or see https://go.microsoft.com/fwlink/?linkid=851728 for additional patterns supported at design time.
Apparently, the error turned out to be caused by program.cs not having the BuildWebHost method.

In the ASP.NET Core 1.x to ASP.NET Core 2.0 Migration Guide, it specifically says: “The adoption of this new 2.0 pattern is highly recommended and is required for product features like Entity Framework (EF) Core Migrations to work.”

Once I added it back, I was able to add database migrations again.

Lesson learnt: when refactoring, some things are better left alone!

Happy Coding.

Regards,
Jaliya