Thursday, November 14, 2024

Blazor Web App: Authentication Redirect URI is not HTTPS

.NET 9 is finally out and I was playing around with Blazor. I was setting up authentication in a .NET 9 Blazor Web App. The authentication is configured with Azure AD, and locally everything was working fine: the application was running on HTTPS and the redirect_uri was HTTPS too.

When the application was deployed to Azure, the authentication was failing because the redirect_uri was HTTP. In the Azure AD App Registration, I had configured it with HTTPS (HTTP is allowed only when using localhost). The application was running inside a Linux container in an Azure Web App.

In order for redirect_uri to be HTTPS, I had to do the following:

1. Enable UseForwardedHeaders

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});
// Other service registrations

WebApplication app = builder.Build();

// Note: Forwarded Headers Middleware should run before other middleware.
// This ordering ensures that middleware relying on forwarded headers information can consume the header values for processing.
// Forwarded Headers Middleware can run after diagnostics and error handling, but it MUST BE RUN before calling UseHsts.
app.UseForwardedHeaders();

if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error"createScopeForErrorstrue);
    app.UseHsts();
}

app.UseHttpsRedirection();
// Other middleware

app.Run();
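
Side note: by default, the Forwarded Headers Middleware only trusts proxies running on loopback. If the forwarded scheme still isn't being picked up, a common approach is to clear the known networks and proxies so headers from the fronting proxy are accepted. This is a minimal sketch, assuming you trust whatever proxy sits in front of the app:

builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    // By default only loopback proxies are trusted; since the proxy's IP
    // isn't known in advance here, clear the allow-lists (only do this
    // when a trusted proxy fronts the app)
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
});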

2. Add the following app setting in Azure (More: Forward the scheme for Linux and non-IIS reverse proxies)

{
  "name": "ASPNETCORE_FORWARDEDHEADERS_ENABLED",
  "value": "true",
  "slotSetting": false
}
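
When ASPNETCORE_FORWARDEDHEADERS_ENABLED is set to true, ASP.NET Core enables the Forwarded Headers Middleware with KnownNetworks and KnownProxies cleared (since the proxy's IP address isn't known in advance), which covers the same ground as the clearing shown in the sketch above.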

And that did it.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, November 6, 2024

Azure APIM: Policy Expression to Read application/x-www-form-urlencoded Request Data

Recently I had a requirement where a particular client sends some request data as application/x-www-form-urlencoded, and I needed to get those values moved to request headers before the request gets forwarded to the Backend.

In this post, let's see how we can read request data sent via application/x-www-form-urlencoded.

If we have a look at .NET Framework types allowed in policy expressions, we have access to System.Net.WebUtility.

So we can make use of that as follows:
<policies>
    <inbound>
        <!--Extract URL Encoded Form Data (if any)-->
        <set-variable name="serializedFormData" value="@{
            var formData = new System.Collections.Generic.Dictionary<String, String>();
            if(context.Request.Headers.GetValueOrDefault("Content-Type", "") != "application/x-www-form-urlencoded")
            {
                return JsonConvert.SerializeObject(formData);
            }

            string encodedBody = context.Request.Body.As<String>(preserveContent: true);
            string decodedBody = System.Net.WebUtility.UrlDecode(encodedBody);
            foreach (string key in decodedBody.Split('&'))
            {
                string[] keyValue = key.Split('=');
                formData.Add(keyValue[0], keyValue[1]);
            }

            return JsonConvert.SerializeObject(formData);
        }" />
        <!--Check if the interested headers are sent in the form data-->
        <set-variable name="isRequiredHeadersSentInFormData" value="@{
            string serializedFormData = context.Variables.GetValueOrDefault<String>("serializedFormData");
            System.Collections.Generic.Dictionary<String, String> formData = 
                JsonConvert.DeserializeObject<System.Collections.Generic.Dictionary<String, String>>(serializedFormData);

             return formData.ContainsKey("key1") && formData.ContainsKey("key2");
        }" />
        <!--Set the headers from the form data if present-->
        <choose>
            <when condition="@(context.Variables.GetValueOrDefault<bool>("isRequiredHeadersSentInFormData"))">
                <set-header name="x-key1" exists-action="override">
                    <value>@{
                        string serializedFormData = context.Variables.GetValueOrDefault<String>("serializedFormData");
                        System.Collections.Generic.Dictionary<String, String> formData = 
                            JsonConvert.DeserializeObject<System.Collections.Generic.Dictionary<String, String>>(serializedFormData);

                        return formData["key1"];
                    }</value>
                </set-header>
                <set-header name="x-key2" exists-action="override">
                    <value>@{
                        string serializedFormData = context.Variables.GetValueOrDefault<String>("serializedFormData");
                        System.Collections.Generic.Dictionary<String, String> formData = 
                            JsonConvert.DeserializeObject<System.Collections.Generic.Dictionary<String, String>>(serializedFormData);

                        return formData["key2"];
                    }</value>
                </set-header>
            </when>
        </choose>
        ...
    </inbound>
    ...
</policies>
And now when we test with trace, we can see the request is being correctly transformed.
Trace
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, October 23, 2024

.NET 9.0: Out-of-order Metadata Reading in System.Text.Json

The release of .NET 9.0 is about 3 weeks away, and in this post, let's have a look at one of the most awaited System.Text.Json features.

Let's have a look at an example.

Consider the below types.
[JsonDerivedType(typeof(Student), "student")]
record Person(string Name);

record Student(string Name, string StudentId) : Person(Name);
The following code will throw an error on Deserialize.
using System.Text.Json.Serialization;
using System.Text.Json;

JsonSerializerOptions options = new();

Person person = new Student("John Doe", "STU001");
string serializedPerson = JsonSerializer.Serialize(person, options);
// {"$type":"student","StudentId":"STU001","Name":"John Doe"}

// Change the order of $type
serializedPerson = """
{
    
"StudentId":"STU001",
    
"Name":"John Doe",
    
"$type":"student"
}
"""
;

person = JsonSerializer.Deserialize<Person>(serializedPerson, options)!; // Exception
// System.Text.Json.JsonException: The metadata property is either not supported by the type or is not the first property in the deserialized JSON object.
The reason is (as the exception says) that the metadata property $type is not the first property in the JSON string, and it has to be. While there is a reason for that requirement, until .NET 9, it was a known limitation (see #dotnet/runtime/issues/72604).

Finally, in .NET 9, we can enable AllowOutOfOrderMetadataProperties in JsonSerializerOptions.
JsonSerializerOptions options = new()
{
    AllowOutOfOrderMetadataProperties = true
};
And now the above would work just fine.
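
For example, a quick sanity check (a minimal sketch reusing the types above):
person = JsonSerializer.Deserialize<Person>(serializedPerson, options)!;
Console.WriteLine(person is Student);           // True
Console.WriteLine(((Student)person).StudentId); // STU001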

Hope this helps.

Happy Coding.

Regards,
Jaliya

Monday, October 21, 2024

Ingressing Custom Text Log Files from an Azure Virtual Machine into Log Analytics

I recently had a requirement where I wanted to see some logs in an Azure Virtual Machine, but didn't want to SSH into the VM to see them. Instead, it would have been nice if I could see those in a Log Analytics workspace (LAW) in Azure. This gives a lot of advantages; for example, I can set up alerts on those logs if I want to.

So in this post, let's see how we can ingress logs that are being written to a custom log file in an Azure Virtual Machine into LAW.

Assuming we already have a LAW created, the first step is creating a custom table there to ingress the logs into.

# Connect-AzAccount
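# Note: custom log table names must end with the _CL suffix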
$tableName = "<TableName>_CL"
$tableParams = @"
{
    "properties": {
        "schema": {
            "name": "$
tableName",
            "columns": [
                {
                    "name": "TimeGenerated",
                    "type": "DateTime"
                },
                {
                    "name": "RawData",
                    "type": "String"
                },
                {
                    "name": "FilePath",
                    "type": "String"
                },
                {
                    "name": "Computer",
                    "type": "String"
                }
            ]
        }
    }
}
"@


Invoke-AzRestMethod `
    -Path "/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCE_GROUP>/providers/microsoft.operationalinsights/workspaces/<LAW_NAME>/tables/$($tableName)?api-version=2021-12-01-preview" `
    -Method PUT `
    -payload $tableParams
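
If the call succeeds, Invoke-AzRestMethod should return a response with a 200 StatusCode and the created table's definition in the Content.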

Here I am using a default template, as for demo purposes I am not doing any transformations.

Make sure the table is created in your LAW.
Log Analytics Workspace
Next, I am going to Azure Monitor (search for Monitor in Azure Portal) and creating a Data Collection Endpoint (DCE).
Create data collection endpoint
It's pretty straightforward.

Now from Azure Monitor, I am creating a Data Collection Rule (DCR).

Create Data Collection Rule
My VM is on Linux so I am selecting Linux as the Platform Type, and I am selecting the DCE I created in the previous step as the Data Collection Endpoint.

Next on Resources, I am adding a new Resource and selecting the target VM and the DCE.
Create Data Collection Rule: Resources
Now going to Collection and deliver tab and selecting a Data Source of type Custom Text Logs.
Add data source: Custom Text Logs

Add data source: Custom Text Logs
Here, for the File pattern, I am giving the path of the file the logs should be ingested from; for the Table name, the name of the LAW table I created before; and for the Transform, I am leaving it as is.

Now configuring the destination as Azure Monitor Logs as follows.

Add data source: Custom Text Logs
And that's it. I am creating the DCR, and Azure will deploy the Azure Monitor Agent to the target VM and configure the ingestion.

And once deployment is completed, after some time, I can see the logs in LAW.
Ingested Logs
Further reading:
   Data collection rules (DCRs) in Azure Monitor
   Collect data with Azure Monitor Agent
   Troubleshooting guidance for the Azure Monitor agent on Linux virtual machines and scale sets

Hope this helps.

Happy Coding.

Regards,
Jaliya

Monday, October 14, 2024

Azure AD B2C: Copy signInNames.emailAddress to email Claim in a Custom Policy

I had a requirement where, in an Azure AD B2C custom policy, as part of a UserJourney, there was an OrchestrationStep to invoke a REST endpoint passing in the email claim (if it exists). While the REST endpoint was executed for AzureADSignUpOrSignIn, it wasn't being executed for LocalSignUpOrSignIn. So obviously email was missing in the claims bag. But for LocalSignUpOrSignIn, the claims bag does have signInNames.emailAddress.

In this post, let's see how we can make use of a TechnicalProfile to copy the signInNames.emailAddress claim to the email claim.

The first step is creating a ClaimsTransformation.
<ClaimsTransformations>
  ...
  <ClaimsTransformation Id="CopySignInNamesEmailAddressToEmail" TransformationMethod="CopyClaim">
    <InputClaims>
      <InputClaim ClaimTypeReferenceId="signInNames.emailAddress" TransformationClaimType="inputClaim" />
    </InputClaims>
    <OutputClaims>
      <OutputClaim ClaimTypeReferenceId="email" TransformationClaimType="outputClaim" />
    </OutputClaims>
  </ClaimsTransformation>
</ClaimsTransformations>
Then create a TechnicalProfile to use the ClaimsTransformation.
<ClaimsProviders>
  ...
  <ClaimsProvider>
    ...
    <TechnicalProfiles> ...
      <TechnicalProfile Id="Transform-SignInNamesEmailAddressToEmail">
        <DisplayName>Transform SignInNames.EmailAddress To Email</DisplayName>
        <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.ClaimsTransformationProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        <OutputClaims>
          <OutputClaim ClaimTypeReferenceId="email" />
        </OutputClaims>
        <OutputClaimsTransformations>
          <OutputClaimsTransformation ReferenceId="CopySignInNamesEmailAddressToEmail" />
        </OutputClaimsTransformations>
        <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
      </TechnicalProfile>
    </TechnicalProfiles>
  </ClaimsProvider>
</ClaimsProviders>
Then update the UserJourney, adding an OrchestrationStep.
<UserJourneys>
  ...
  <UserJourney Id="LocalSignUpOrSignIn">
    <OrchestrationSteps>
      ...
      <OrchestrationStep Order="6" Type="ClaimsExchange"> <!--only execute if email claim does not exist AND signInNames.emailAddress claim exists -->
        <Preconditions>
          <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
            <Value>email</Value>
            <Action>SkipThisOrchestrationStep</Action>
          </Precondition>
          <Precondition Type="ClaimsExist" ExecuteActionsIf="false">
            <Value>signInNames.emailAddress</Value>
            <Action>SkipThisOrchestrationStep</Action>
          </Precondition>
        </Preconditions>
        <ClaimsExchanges>
          <ClaimsExchange Id="TransformSignInNamesEmailAddressToEmail" TechnicalProfileReferenceId="Transform-SignInNamesEmailAddressToEmail" />
        </ClaimsExchanges>
      </OrchestrationStep> ...
    </OrchestrationSteps>
  </UserJourney>
</UserJourneys>
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, October 2, 2024

DNS Resolution in Internal Container Apps Environments (CAE)

In this post, let's see how an application (App A) can communicate with a container app (App B) deployed on a Container Apps Environment (CAE) that is integrated into a VNet using an internal virtual IP. App A is running inside the same VNet.
Create Container Apps Environment
The container app (App B) has Ingress enabled and accepts traffic from anywhere within the VNet.
Container App: Ingress settings
Now from App A, let's try to communicate with App B.
Server can't find
App A can't resolve the address.

The reason is that with an internal CAE, the environment's default domain isn't known to the VNet's DNS, so we need to manage DNS resolution ourselves. Let's create and configure an Azure Private DNS Zone for domain resolution.

I am using the Azure CLI (on Windows).
# Declare Variables
$RESOURCE_GROUP = '<RESOURCE_GROUP>'
$CA_ENV_NAME = '<CONTAINER_APP_ENVIRONMENT_NAME>'
$VNET_NAME = '<VNET_NAME>'

# Retrieve the default domain of the Container App Environment
$CA_ENV_DEFAULT_DOMAIN = az containerapp env show `
  --resource-group $RESOURCE_GROUP `
  --name $CA_ENV_NAME `
  --query properties.defaultDomain `
  --output tsv

# Retrieve the static IP of the Container App Environment
$CA_ENV_STATIC_IP = az containerapp env show `
  --resource-group $RESOURCE_GROUP `
  --name $CA_ENV_NAME `
  --query properties.staticIp `
  --output tsv
Let's create the Private DNS Zone first.
# Create the Private DNS Zone
az network private-dns zone create `
  --resource-group $RESOURCE_GROUP `
  --name $CA_ENV_DEFAULT_DOMAIN
Create the Private DNS Zone
Next, we need to add a Virtual Network Link to the target VNet.
# Create the Virtual Network Link
az network private-dns link vnet create `
  --resource-group $RESOURCE_GROUP `
  --name $VNET_NAME `
  --virtual-network $VNET_NAME `
  --zone-name $CA_ENV_DEFAULT_DOMAIN `
  --registration-enabled true
Create the Virtual Network Link
Finally, create a wildcard A record in the zone, pointing to the static IP of the CAE.
# Create the A Record
az network private-dns record-set a add-record `
  --resource-group $RESOURCE_GROUP `
  --zone-name $CA_ENV_DEFAULT_DOMAIN `
  --record-set-name '*' `
  --ipv4-address $CA_ENV_STATIC_IP
Create the A Record
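At this point, resolving App B's FQDN from within the VNet (for example, with nslookup) should return the static IP of the CAE.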
Now let's try to communicate with App B again.
Success
And that's it.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, September 26, 2024

Azure AD B2C: Single Tenant to Serve Multiple Environments using Custom Policies

In this post, let's see how we can use a single Azure AD B2C tenant to serve multiple environments using custom policies. This is using the nice Inheritance Model support in Azure AD B2C custom policies.

Let's go by an example scenario: 

Say I have an AADB2C tenant that is serving 2 environments. As part of a user journey, the policy calls a REST endpoint and the REST endpoint is environment-specific. 

AADB2C doesn't have a concept like app settings; however, we can override what's defined in the base policy files.

Say, I have the following ClaimsProvider in TrustFrameworkExtensions.xml.

<ClaimsProviders>
  ...
  <ClaimsProvider>
    <DisplayName>Get Additional Claims via REST</DisplayName>
    <TechnicalProfiles>
      <TechnicalProfile Id="REST-GetAdditionalClaims">
        <DisplayName>Get Additional Claims via REST call and transform claims</DisplayName>
        <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        <Metadata>
          <!--TODO: Override in RP File -->
          <Item Key="ServiceUrl">{{ServiceUrl}}</Item>
          <Item Key="SendClaimsIn">QueryString</Item>
          <Item Key="AuthenticationType">None</Item>
          <Item Key="AllowInsecureAuthInProduction">true</Item>
        </Metadata>
        <InputClaims>
          <InputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="objectId" />
        </InputClaims>
        <OutputClaims>
          ...
        </OutputClaims>
        <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
      </TechnicalProfile>
    </TechnicalProfiles>
  </ClaimsProvider>
</ClaimsProviders>

Here the ServiceUrl is environment-specific, so in this case, we can have 2 RP (Relying Party) files.

SignUpOrSignin_AD_DEV_001.xml

<TrustFrameworkPolicy ...>

  <BasePolicy>
    <TenantId>some-tenant.onmicrosoft.com</TenantId>
    <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
  </BasePolicy>

  <ClaimsProviders>
    <ClaimsProvider>
      <DisplayName>Get Additional Claims via REST</DisplayName>
      <TechnicalProfiles>
        <TechnicalProfile Id="REST-GetAdditionalClaims">
          <Metadata>
            <!--Override ServiceUrl-->
            <Item Key="ServiceUrl">https://app-user-api-dev-001.azurewebsites.net/api/users</Item>
          </Metadata>
        </TechnicalProfile>
      </TechnicalProfiles>
    </ClaimsProvider>
  </ClaimsProviders>

  <RelyingParty>
    ...
  </RelyingParty>

</TrustFrameworkPolicy>

SignUpOrSignin_AD_QA_001.xml

<TrustFrameworkPolicy ...>

  <BasePolicy>
    <TenantId>some-tenant.onmicrosoft.com</TenantId>
    <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
  </BasePolicy>

  <ClaimsProviders>
    <ClaimsProvider>
      <DisplayName>Get Additional Claims via REST</DisplayName>
      <TechnicalProfiles>
        <TechnicalProfile Id="REST-GetAdditionalClaims">
          <Metadata>
            <!--Override ServiceUrl-->
            <Item Key="ServiceUrl">https://app-user-api-qa-001.azurewebsites.net/api/users</Item>
          </Metadata>
        </TechnicalProfile>
      </TechnicalProfiles>
    </ClaimsProvider>
  </ClaimsProviders>

  <RelyingParty>
    ...
  </RelyingParty>

</TrustFrameworkPolicy>

Now the consumer can use the correct RP file based on the environment.
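
Because the TechnicalProfile in each RP file uses the same Id (REST-GetAdditionalClaims) as the one in TrustFrameworkExtensions.xml, Azure AD B2C's inheritance model merges them, and the ServiceUrl metadata item in the RP file overrides the value from the base policy.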

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, September 17, 2024

Azure Pipelines: Passing Variables Between Stages

In my last post I wrote about Azure Pipelines: Passing Variables Between Jobs, and in this post, let's see how we can pass variables between stages in Azure Pipelines.

Let's say we need to pass a variable from StageA to StageB.

trigger:
- main

pool:
  vmImage: ubuntu-latest

stages:
- stage: StageA
  displayName: Stage A
  variables:
    someVariable: ''
  jobs:  
  - job: JobA
    displayName: Job A
    steps:    
    # Create output variable and set value
    - script: |
        echo "##vso[task.setvariable variable=stageAVariable;isOutput=true]someValue"
      name: outputVariable
      displayName: Output Variable

- stage: StageB
  displayName: Stage B
  variables:
    # Map the output variable from Stage A into Stage B
    stageBVariable: $[ stageDependencies.StageA.JobA.outputs['outputVariable.stageAVariable'] ]
  jobs:  
  - job: JobB
    displayName: Job B
    steps:
    # stageBVariable has the value set in JobA
    - script: |
        echo $(stageBVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

Note how StageA.JobA.outputVariable is outputting the variable using isOutput=true, and then StageB is getting it mapped via stageDependencies.
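
The general format for referencing an output variable from another stage is stageDependencies.<StageName>.<JobName>.outputs['<StepName>.<VariableName>'].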

When we have multiple stages in a pipeline, by default, they run sequentially in the order in which they are defined in the YAML file. So in the above use case where we have 2 stages, we don't explicitly need to use dependsOn.

Now let's add another stage: StageC.

trigger:
- main

pool:
  vmImage: ubuntu-latest

stages:
- stage: StageA
  displayName: Stage A
  variables:
    someVariable: ''
  jobs:  
  - job: JobA
    displayName: Job A
    steps:
    # Create output variable      
    - script: |
        echo "##vso[task.setvariable variable=stageAVariable;isOutput=true]someValue"
      name: outputVariable
      displayName: Output Variable

- stage: StageB
  displayName: Stage B
  variables:
    # Map the output variable from Stage A into Stage B
    stageBVariable: $[ stageDependencies.StageA.JobA.outputs['outputVariable.stageAVariable'] ]
  jobs:  
  - job: JobB
    displayName: Job B
    steps:
    # stageBVariable has the value set in JobA
    - script: |
        echo $(stageBVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

- stage: StageC
  displayName: Stage C
  # Need to explicitly depend on Stage A
  dependsOn: 
  - StageA
  - StageB
  variables:
    # Map the output variable from Stage A into Stage C
    stageCVariable: $[ stageDependencies.StageA.JobA.outputs['outputVariable.stageAVariable'] ]
  jobs:  
  - job: JobC
    displayName: Job C
    steps:
    # stageCVariable has the value set in JobA
    - script: |
        echo $(stageCVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

Here, especially for StageC, we need to explicitly depend on StageA. You might be tempted to skip it, thinking the stages run sequentially (StageA -> StageB -> StageC), so StageC technically depends on StageA. But unfortunately, depending just on the previous stage is not enough: you need to explicitly declare the dependency on the stage you are mapping the variable from.

Hope this helps.

Happy Coding.

Regards,
Jaliya