Monday, October 14, 2024

Azure AD B2C: Copy signInNames.emailAddress to email Claim in a Custom Policy

I had a requirement where in an Azure AD B2C custom policy, as part of a UserJourney, there was an OrchestrationStep to invoke a REST endpoint passing in the email claim (if it exists). While the REST endpoint was executed for AzureADSignUpOrSignIn, it wasn't being executed for LocalSignUpOrSignIn, because the email claim was missing from the claims bag. For LocalSignUpOrSignIn, however, the email is available as signInNames.emailAddress.

In this post, let's see how we can make use of a TechnicalProfile to copy the signInNames.emailAddress claim to the email claim.

The first step is creating a ClaimsTransformation.
<ClaimsTransformations>
  ...
  <ClaimsTransformation Id="CopySignInNamesEmailAddressToEmail" TransformationMethod="CopyClaim">
    <InputClaims>
      <InputClaim ClaimTypeReferenceId="signInNames.emailAddress" TransformationClaimType="inputClaim" />
    </InputClaims>
    <OutputClaims>
      <OutputClaim ClaimTypeReferenceId="email" TransformationClaimType="outputClaim" />
    </OutputClaims>
  </ClaimsTransformation>
</ClaimsTransformations>
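Note: both claim types referenced by the transformation must be declared in the ClaimsSchema. The Azure AD B2C starter pack's base file already declares them; if your policy doesn't, the declarations would look roughly like this:

```xml
<BuildingBlocks>
  <ClaimsSchema>
    <!-- Usually already declared by the starter pack; shown here for completeness -->
    <ClaimType Id="signInNames.emailAddress">
      <DisplayName>Email Address</DisplayName>
      <DataType>string</DataType>
    </ClaimType>
    <ClaimType Id="email">
      <DisplayName>Email Address</DisplayName>
      <DataType>string</DataType>
    </ClaimType>
  </ClaimsSchema>
</BuildingBlocks>
```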
Then create a TechnicalProfile to use the ClaimsTransformation.
<ClaimsProviders>
  ...
  <ClaimsProvider>
    ...
    <TechnicalProfiles> ...
      <TechnicalProfile Id="Transform-SignInNamesEmailAddressToEmail">
        <DisplayName>Transform SignInNames.EmailAddress To Email</DisplayName>
        <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.ClaimsTransformationProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        <OutputClaims>
          <OutputClaim ClaimTypeReferenceId="email" />
        </OutputClaims>
        <OutputClaimsTransformations>
          <OutputClaimsTransformation ReferenceId="CopySignInNamesEmailAddressToEmail" />
        </OutputClaimsTransformations>
        <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
      </TechnicalProfile>
    </TechnicalProfiles>
  </ClaimsProvider>
</ClaimsProviders>
Then update the UserJourney, adding an OrchestrationStep.
<UserJourneys>
  ...
  <UserJourney Id="LocalSignUpOrSignIn">
    <OrchestrationSteps>
      ...
      <OrchestrationStep Order="6" Type="ClaimsExchange"> <!--only execute if email claim does not exist AND signInNames.emailAddress claim exists -->
        <Preconditions>
          <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
            <Value>email</Value>
            <Action>SkipThisOrchestrationStep</Action>
          </Precondition>
          <Precondition Type="ClaimsExist" ExecuteActionsIf="false">
            <Value>signInNames.emailAddress</Value>
            <Action>SkipThisOrchestrationStep</Action>
          </Precondition>
        </Preconditions>
        <ClaimsExchanges>
          <ClaimsExchange Id="TransformSignInNamesEmailAddressToEmail" TechnicalProfileReferenceId="Transform-SignInNamesEmailAddressToEmail" />
        </ClaimsExchanges>
      </OrchestrationStep> ...
    </OrchestrationSteps>
  </UserJourney>
</UserJourneys>
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, October 2, 2024

DNS Resolution in Internal Container Apps Environments (CAE)

In this post, let's see how an application (App A) can communicate with a container app (App B) deployed on a Container Apps Environment (CAE) that is integrated into a VNet using an internal virtual IP. App A is running inside the same VNet.
(Screenshot: creating the Container Apps Environment)
The container app App B has Ingress enabled and accepts traffic from anywhere within the VNet.
(Screenshot: Container App Ingress settings)
Now from App A, let's try to communicate with App B.
(Screenshot: nslookup failing with "server can't find")
App A can't resolve the address.

The reason: App B's FQDN is under the CAE's default domain, and the VNet has no DNS configuration to resolve that domain. So let's create and configure an Azure Private DNS Zone for domain resolution.

I am using the Azure CLI (on Windows).
# Declare Variables
$RESOURCE_GROUP = '<RESOURCE_GROUP>'
$CA_ENV_NAME = '<CONTAINER_APP_ENVIRONMENT_NAME>'
$VNET_NAME = '<VNET_NAME>'

# Retrieve the default domain of the Container App Environment
$CA_ENV_DEFAULT_DOMAIN = az containerapp env show `
  --resource-group $RESOURCE_GROUP `
  --name $CA_ENV_NAME `
  --query properties.defaultDomain `
  --output tsv

# Retrieve the static IP of the Container App Environment
$CA_ENV_STATIC_IP = az containerapp env show `
  --resource-group $RESOURCE_GROUP `
  --name $CA_ENV_NAME `
  --query properties.staticIp `
  --output tsv
Let's create the Private DNS Zone first.
# Create the Private DNS Zone
az network private-dns zone create `
  --resource-group $RESOURCE_GROUP `
  --name $CA_ENV_DEFAULT_DOMAIN
Next, we need to add a Virtual Network Link to the target VNet.
# Create the Virtual Network Link
az network private-dns link vnet create `
  --resource-group $RESOURCE_GROUP `
  --name $VNET_NAME `
  --virtual-network $VNET_NAME `
  --zone-name $CA_ENV_DEFAULT_DOMAIN `
  --registration-enabled true
Finally, create a wildcard A record in the zone, pointing to the static IP of the CAE.
# Create the A Record
az network private-dns record-set a add-record `
  --resource-group $RESOURCE_GROUP `
  --zone-name $CA_ENV_DEFAULT_DOMAIN `
  --record-set-name '*' `
  --ipv4-address $CA_ENV_STATIC_IP
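Optionally, you can sanity-check the record before testing again (using the same variables declared above):

```powershell
# Show the wildcard A record we just created
az network private-dns record-set a show `
  --resource-group $RESOURCE_GROUP `
  --zone-name $CA_ENV_DEFAULT_DOMAIN `
  --name '*'
```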
Now let's try to communicate with App B again.
(Screenshot: App A successfully reaching App B)
And that's it.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, September 26, 2024

Azure AD B2C: Single Tenant to Serve Multiple Environments using Custom Policies

In this post, let's see how we can use a single Azure AD B2C tenant to serve multiple environments using custom policies. This is using the nice Inheritance Model support in Azure AD B2C custom policies.

Let's go by an example scenario: 

Say I have an AADB2C tenant that is serving 2 environments. As part of a user journey, the policy calls a REST endpoint and the REST endpoint is environment-specific. 

AADB2C doesn't have a concept like app settings; however, we can override policies defined in the base files.

Say I have the following ClaimsProvider in TrustFrameworkExtensions.xml.

<ClaimsProviders>
  ...
  <ClaimsProvider>
    <DisplayName>Get Additional Claims via REST</DisplayName>
    <TechnicalProfiles>
      <TechnicalProfile Id="REST-GetAdditionalClaims">
        <DisplayName>Get Additional Claims via REST call and transform claims</DisplayName>
        <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        <Metadata>
          <!--TODO: Override in RP File -->
          <Item Key="ServiceUrl">{{ServiceUrl}}</Item>
          <Item Key="SendClaimsIn">QueryString</Item>
          <Item Key="AuthenticationType">None</Item>
          <Item Key="AllowInsecureAuthInProduction">true</Item>
        </Metadata>
        <InputClaims>
          <InputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="objectId" />
        </InputClaims>
        <OutputClaims>
          ...
        </OutputClaims>
        <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
      </TechnicalProfile>
    </TechnicalProfiles>
  </ClaimsProvider>
</ClaimsProviders>

Here the ServiceUrl is environment-specific. So in this case, we can have 2 RP (Relying Party) files.

SignUpOrSignin_AD_DEV_001.xml

<TrustFrameworkPolicy ...>

  <BasePolicy>
    <TenantId>some-tenant.onmicrosoft.com</TenantId>
    <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
  </BasePolicy>

  <ClaimsProviders>
    <ClaimsProvider>
      <DisplayName>Get Additional Claims via REST</DisplayName>
      <TechnicalProfiles>
        <TechnicalProfile Id="REST-GetAdditionalClaims">
          <Metadata>
            <!--Override ServiceUrl-->
            <Item Key="ServiceUrl">https://app-user-api-dev-001.azurewebsites.net/api/users</Item>
          </Metadata>
        </TechnicalProfile>
      </TechnicalProfiles>
    </ClaimsProvider>
  </ClaimsProviders>

  <RelyingParty>
    ...
  </RelyingParty>

</TrustFrameworkPolicy>

SignUpOrSignin_AD_QA_001.xml

<TrustFrameworkPolicy ...>

  <BasePolicy>
    <TenantId>some-tenant.onmicrosoft.com</TenantId>
    <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
  </BasePolicy>

  <ClaimsProviders>
    <ClaimsProvider>
      <DisplayName>Get Additional Claims via REST</DisplayName>
      <TechnicalProfiles>
        <TechnicalProfile Id="REST-GetAdditionalClaims">
          <Metadata>
            <!--Override ServiceUrl-->
            <Item Key="ServiceUrl">https://app-user-api-qa-001.azurewebsites.net/api/users</Item>
          </Metadata>
        </TechnicalProfile>
      </TechnicalProfiles>
    </ClaimsProvider>
  </ClaimsProviders>

  <RelyingParty>
    ...
  </RelyingParty>

</TrustFrameworkPolicy>

Now the consumer can use the correct RP file based on the environment.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, September 17, 2024

Azure Pipelines: Passing Variables Between Stages

In my last post I wrote about Azure Pipelines: Passing Variables Between Jobs, and in this post let's see how we can pass variables between stages in Azure Pipelines.

Let's say we need to pass a variable from StageA to StageB.

trigger:
- main

pool:
  vmImage: ubuntu-latest

stages:
- stage: StageA
  displayName: Stage A
  variables:
    someVariable: ''
  jobs:  
  - job: JobA
    displayName: Job A
    steps:    
    # Create output variable and set value
    - script: |
        echo "##vso[task.setvariable variable=stageAVariable;isOutput=true]someValue"
      name: outputVariable
      displayName: Output Variable

- stage: StageB
  displayName: Stage B
  variables:
    # Map the output variable from Stage A into Stage B
    stageBVariable: $[ stageDependencies.StageA.JobA.outputs['outputVariable.stageAVariable'] ]
  jobs:  
  - job: JobB
    displayName: Job B
    steps:
    # stageBVariable has the value set in JobA
    - script: |
        echo $(stageBVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

Note how StageA.JobA.outputVariable is outputting the variable using isOutput=true. And then StageB maps it in via stageDependencies.

When we have multiple stages in a pipeline, by default, they run sequentially in the order in which they are defined in the YAML file. So in the above use case, where we have 2 stages, we don't explicitly need to use dependsOn.
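In other words, the two-stage pipeline above behaves as if the dependency were declared explicitly:

```yaml
- stage: StageB
  displayName: Stage B
  dependsOn: StageA # implicit when omitted, since StageB is declared right after StageA
```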

Now let's add another stage: StageC.

trigger:
- main

pool:
  vmImage: ubuntu-latest

stages:
- stage: StageA
  displayName: Stage A
  variables:
    someVariable: ''
  jobs:  
  - job: JobA
    displayName: Job A
    steps:
    # Create output variable      
    - script: |
        echo "##vso[task.setvariable variable=stageAVariable;isOutput=true]someValue"
      name: outputVariable
      displayName: Output Variable

- stage: StageB
  displayName: Stage B
  variables:
    # Map the output variable from Stage A into Stage B
    stageBVariable: $[ stageDependencies.StageA.JobA.outputs['outputVariable.stageAVariable'] ]
  jobs:  
  - job: JobB
    displayName: Job B
    steps:
    # stageBVariable has the value set in JobA
    - script: |
        echo $(stageBVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

- stage: StageC
  displayName: Stage C
  # Need to explicitly depend on Stage A
  dependsOn: 
  - StageA
  - StageB
  variables:
    # Map the output variable from Stage A into Stage C
    stageCVariable: $[ stageDependencies.StageA.JobA.outputs['outputVariable.stageAVariable'] ]
  jobs:  
  - job: JobC
    displayName: Job C
    steps:
    # stageCVariable has the value set in JobA
    - script: |
        echo $(stageCVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

Here, especially for StageC, we need to explicitly depend on StageA. You might be tempted to skip it, thinking the stages run sequentially (StageA -> StageB -> StageC), so StageC technically depends on StageA. But unfortunately, depending just on the previous stage is not enough: you need to explicitly declare a dependency on the stage from which you are mapping the variable.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, September 12, 2024

Azure Pipelines: Passing Variables Between Jobs

In this post let's see how we can pass variables between jobs in Azure Pipelines.

Let's say we need to pass a variable from JobA to future jobs.

trigger:
- main/*

pool:
  vmImage: ubuntu-latest

stages:
- stage: StageA
  displayName: Stage A
  jobs:  
  - job: JobA
    displayName: Job A
    steps:
    # Create output variable and set value
    - script: |
        echo "##vso[task.setvariable variable=jobAVariable;isOutput=true]someValue"
      name: outputVariable
      displayName: Output Variable

  - job: JobB
    displayName: Job B
    dependsOn: JobA
    variables:
      # Map the output variable from Job A into Job B
      jobBVariable: $[ dependencies.JobA.outputs['outputVariable.jobAVariable'] ]
    steps:
    # This will print the updated value of the variable
    - script: |
        echo $(jobBVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

  - job: JobC
    displayName: Job C
    dependsOn: JobA
    variables:
      # Map the output variable from Job A into Job C
      jobCVariable: $[ dependencies.JobA.outputs['outputVariable.jobAVariable'] ]
    steps:
    # This will print the updated value of the variable
    - script: |
        echo $(jobCVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

Note how StageA.JobA.outputVariable is outputting the variable using isOutput=true. By default, jobs run in parallel, so first we need to wait till JobA completes using dependsOn. Then we can map the variable into any dependent job through dependencies.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Sunday, September 8, 2024

Azure Pipelines: DurableTask.Netherite: Netherite backend requires 64bit, but current process is 32bit.

Recently one of our .NET Isolated Azure Durable Functions got switched to use Netherite as its storage provider and suddenly all the integration tests started to fail in the Azure DevOps CI pipeline. 

The underlying error was:  

DurableTask.Netherite: Netherite backend requires 64bit, but current process is 32bit

In the pipeline, we were using FuncToolsInstaller@0 to install Azure Functions Core Tools.

- task: FuncToolsInstaller@0
  displayName: Install Azure Func Core Tools

Upon investigating, I noticed FuncToolsInstaller@0 is using the x86 version of Azure Functions Core Tools.

(Screenshot: FuncToolsInstaller@0 installing the x86 version)
Checked choco (Chocolatey); it was also using x86. Fortunately, npm was using x64 by default.
- bash: |
    npm i -g azure-functions-core-tools@4 --unsafe-perm true
  displayName: Install Azure Func Core Tools

And that solved the issue.

Created microsoft/azure-pipelines-tasks enhancement request for FuncToolsInstaller@0.
   [enhancement]: FuncToolsInstaller@0: Use win-x64 for Windows

Hope this helps.

Happy Coding.

Regards,
Jaliya

Friday, August 30, 2024

ASP.NET Core 9.0: HybridCache in ASP.NET Core

With ASP.NET Core 9.0, we have access to a new Caching API: HybridCache, and it's designed to replace both IDistributedCache and IMemoryCache.

Let's go through with an example code.

I have the following code:

public interface IDataService
{
    Task<ConfigurationData> GetConfigurationData(CancellationToken cancellationToken = default);
}

public abstract class DataServiceBase : IDataService
{
    protected const string CacheKey = "configuration-cache-key";

    public abstract Task<ConfigurationData> GetConfigurationData(CancellationToken cancellationToken = default);

    protected async Task<ConfigurationData> GetConfigurationFromSource(CancellationToken cancellationToken = default)
    {
        return await Task.FromResult(new ConfigurationData
        {
            SomeConfig1 = "Some Config1",
            SomeConfig2 = "Some Config2"
        });
    }
}
First, let's see how IDistributedCache works and then let's see how HybridCache can simplify it.
public class DataServiceWithIDistributedCache(IDistributedCache distributedCache)
    : DataServiceBase
{
    public async override Task<ConfigurationData> GetConfigurationData(CancellationToken cancellationToken = default)
    {
        byte[]? bytes = await distributedCache.GetAsync(CacheKey, cancellationToken); // Try to get from cache.

        // Cache hit; return the deserialized data.
        if (bytes is not null)
        {
            return JsonSerializer.Deserialize<ConfigurationData>(bytes)!;
        }

        // Cache miss; get the data from the real source and cache it.
        ConfigurationData configurationData = await GetConfigurationFromSource(cancellationToken);

        bytes = JsonSerializer.SerializeToUtf8Bytes(configurationData);

        await distributedCache.SetAsync(CacheKey, bytes, cancellationToken);

        return configurationData;
    }
}
I have IDistributedCache configured with Redis.
WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "<ConnectionString>";
});
builder.Services.AddScoped<IDataService, DataServiceWithIDistributedCache>();
Now here in DataServiceWithIDistributedCache, we are first checking the cache to see whether the item exists; if it does, we return the item from the cache; if not, we retrieve the item from the original source, cache it, and then return it.

There are potential problems here. Say the item does not exist in the cache and more than one thread attempts to read the configuration simultaneously. In that case, multiple threads are going to call the original source and cache the item.

On top of that, we had to first check whether the item exists in the cache, and if not, retrieve it from the original source and cache it. That's boilerplate we end up writing almost every time we retrieve an item from a cache.

Now let's see how HybridCache works. First, we need to install a new package: Microsoft.Extensions.Caching.Hybrid (it's still pre-release as of today)
<PackageReference Include="Microsoft.Extensions.Caching.Hybrid" Version="9.0.0-preview.7.24406.2" />
public class DataServiceWithHybridCache(HybridCache hybridCache)
    : DataServiceBase
{
    public async override Task<ConfigurationData> GetConfigurationData(CancellationToken cancellationToken = default)
    {
        return await hybridCache.GetOrCreateAsync(
            CacheKey, factory: async token => await GetConfigurationFromSource(token),
            cancellationToken: cancellationToken
        );
    }
}
Now we need to register HybridCache.
WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "<ConnectionString>";
});

builder.Services.AddScoped<IDataService, DataServiceWithHybridCache>();
builder.Services.AddHybridCache(options =>
{
    // TODO: customize options if required
});
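As an example of customizing those options, here's a hedged sketch setting default expirations (property names are from the preview package; the values are arbitrary and should be tuned for your workload):

```csharp
builder.Services.AddHybridCache(options =>
{
    // Applies when no per-entry options are passed to GetOrCreateAsync
    options.DefaultEntryOptions = new HybridCacheEntryOptions
    {
        // TTL in the secondary (distributed) cache
        Expiration = TimeSpan.FromMinutes(5),
        // TTL in the primary (in-memory) cache; typically shorter
        LocalCacheExpiration = TimeSpan.FromMinutes(1)
    };
});
```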
So here HybridCache is created with a primary cache and a secondary cache. 
(Diagram: HybridCache with a primary and a secondary cache)
HybridCache by default uses MemoryCache for its primary cache, and for secondary cache, it uses any IDistributedCache implementation that is configured. Since I have Redis configured, Redis is registered as the secondary cache here.

HybridCache exposes GetOrCreateAsync with two overloads, taking a key and: 
  1. A factory method.
  2. State, and a factory method.
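The state-based overload lets the factory be a static lambda, avoiding closure allocations. A rough sketch of how DataServiceWithHybridCache could use it (same members as above; treat this as illustrative, not the post's actual code):

```csharp
public async override Task<ConfigurationData> GetConfigurationData(CancellationToken cancellationToken = default)
{
    return await hybridCache.GetOrCreateAsync(
        CacheKey,
        // Pass the service itself as state so the factory lambda can be static
        state: this,
        factory: static async (service, token) => await service.GetConfigurationFromSource(token),
        cancellationToken: cancellationToken);
}
```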
The method uses the key to retrieve the item from the primary cache. If it's not there (cache miss), it then checks the secondary cache (if it's configured). If it doesn't find the item there (another cache miss), it calls the factory method to get the item from the original data source. It then caches the item in both primary and secondary caches. The factory method is never called if the item is found in the primary or secondary cache (a cache hit).

The HybridCache service ensures that only one concurrent caller for a given key calls the factory method, and all other callers wait for the result of that call. The CancellationToken passed to GetOrCreateAsync represents the combined cancellation of all concurrent callers.

I love this.

Hope this helps.


Happy Coding.

Regards,
Jaliya