Thursday, September 26, 2024

Azure AD B2C: Single Tenant to Serve Multiple Environments using Custom Policies

In this post, let's see how we can use a single Azure AD B2C tenant to serve multiple environments using custom policies. This makes use of the Inheritance Model support in Azure AD B2C custom policies.

Let's go by an example scenario: 

Say I have an Azure AD B2C tenant that is serving two environments. As part of a user journey, the policy calls a REST endpoint, and that REST endpoint is environment-specific.

Azure AD B2C doesn't have a concept like app settings; however, a policy file can override values defined in its base files.

Say I have the following ClaimsProvider in TrustFrameworkExtensions.xml.

<ClaimsProviders>
  ...
  <ClaimsProvider>
    <DisplayName>Get Additional Claims via REST</DisplayName>
    <TechnicalProfiles>
      <TechnicalProfile Id="REST-GetAdditionalClaims">
        <DisplayName>Get Additional Claims via REST call and transform claims</DisplayName>
        <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        <Metadata>
          <!--TODO: Override in RP File -->
          <Item Key="ServiceUrl">{{ServiceUrl}}</Item>
          <Item Key="SendClaimsIn">QueryString</Item>
          <Item Key="AuthenticationType">None</Item>
          <Item Key="AllowInsecureAuthInProduction">true</Item>
        </Metadata>
        <InputClaims>
          <InputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="objectId" />
        </InputClaims>
        <OutputClaims>
          ...
        </OutputClaims>
        <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
      </TechnicalProfile>
    </TechnicalProfiles>
  </ClaimsProvider>
</ClaimsProviders>

Here the ServiceUrl is environment-specific, so we can have two RP (Relying Party) files, one per environment.

SignUpOrSignin_AD_DEV_001.xml

<TrustFrameworkPolicy ...>

  <BasePolicy>
    <TenantId>some-tenant.onmicrosoft.com</TenantId>
    <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
  </BasePolicy>

  <ClaimsProviders>
    <ClaimsProvider>
      <DisplayName>Get Additional Claims via REST</DisplayName>
      <TechnicalProfiles>
        <TechnicalProfile Id="REST-GetAdditionalClaims">
          <Metadata>
            <!--Override ServiceUrl-->
            <Item Key="ServiceUrl">https://app-user-api-dev-001.azurewebsites.net/api/users</Item>
          </Metadata>
        </TechnicalProfile>
      </TechnicalProfiles>
    </ClaimsProvider>
  </ClaimsProviders>

  <RelyingParty>
    ...
  </RelyingParty>

</TrustFrameworkPolicy>

SignUpOrSignin_AD_QA_001.xml

<TrustFrameworkPolicy ...>

  <BasePolicy>
    <TenantId>some-tenant.onmicrosoft.com</TenantId>
    <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
  </BasePolicy>

  <ClaimsProviders>
    <ClaimsProvider>
      <DisplayName>Get Additional Claims via REST</DisplayName>
      <TechnicalProfiles>
        <TechnicalProfile Id="REST-GetAdditionalClaims">
          <Metadata>
            <!--Override ServiceUrl-->
            <Item Key="ServiceUrl">https://app-user-api-qa-001.azurewebsites.net/api/users</Item>
          </Metadata>
        </TechnicalProfile>
      </TechnicalProfiles>
    </ClaimsProvider>
  </ClaimsProviders>

  <RelyingParty>
    ...
  </RelyingParty>

</TrustFrameworkPolicy>

Now the consumer can use the correct RP file based on the environment.
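On the application side, the correct RP policy can be picked up from configuration. Below is a minimal sketch of a per-environment appsettings.json, assuming the uploaded policies get the IDs B2C_1A_SignUpOrSignin_AD_DEV_001 and B2C_1A_SignUpOrSignin_AD_QA_001; the section and key names here are illustrative, not prescribed.

```json
{
  "AzureAdB2C": {
    "Instance": "https://some-tenant.b2clogin.com",
    "Domain": "some-tenant.onmicrosoft.com",
    "SignUpSignInPolicyId": "B2C_1A_SignUpOrSignin_AD_DEV_001"
  }
}
```

The QA configuration would be identical except for SignUpSignInPolicyId pointing at the QA policy.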

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, September 17, 2024

Azure Pipelines: Passing Variables Between Stages

In my last post I wrote about Azure Pipelines: Passing Variables Between Jobs, and in this post, let's see how we can pass variables between stages in Azure Pipelines.

Let's say we need to pass a variable from StageA to StageB.

trigger:
- main

pool:
  vmImage: ubuntu-latest

stages:
- stage: StageA
  displayName: Stage A
  variables:
    someVariable: ''
  jobs:  
  - job: JobA
    displayName: Job A
    steps:    
    # Create output variable and set value
    - script: |
        echo "##vso[task.setvariable variable=stageAVariable;isOutput=true]someValue"
      name: outputVariable
      displayName: Output Variable

- stage: StageB
  displayName: Stage B
  variables:
    # Map the output variable from Stage A into Stage B
    stageBVariable: $[ stageDependencies.StageA.JobA.outputs['outputVariable.stageAVariable'] ]
  jobs:  
  - job: JobB
    displayName: Job B
    steps:
    # stageBVariable has the value set in JobA
    - script: |
        echo $(stageBVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

Note how StageA.JobA.outputVariable outputs the variable using isOutput=true, and StageB then maps it in via stageDependencies.

When we have multiple stages in a pipeline, by default, they run sequentially in the order in which they are defined in the YAML file. So in the above use case where we have two stages, we don't explicitly need to use dependsOn.
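That said, the implicit sequential dependency can also be declared explicitly with dependsOn. A minimal sketch, equivalent to StageB above:

```yaml
- stage: StageB
  displayName: Stage B
  # Redundant while stages run sequentially, but it makes
  # the source of the mapped variable explicit
  dependsOn: StageA
  variables:
    stageBVariable: $[ stageDependencies.StageA.JobA.outputs['outputVariable.stageAVariable'] ]
```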

Now let's add another stage: StageC.

trigger:
- main

pool:
  vmImage: ubuntu-latest

stages:
- stage: StageA
  displayName: Stage A
  variables:
    someVariable: ''
  jobs:  
  - job: JobA
    displayName: Job A
    steps:
    # Create output variable      
    - script: |
        echo "##vso[task.setvariable variable=stageAVariable;isOutput=true]someValue"
      name: outputVariable
      displayName: Output Variable

- stage: StageB
  displayName: Stage B
  variables:
    # Map the output variable from Stage A into Stage B
    stageBVariable: $[ stageDependencies.StageA.JobA.outputs['outputVariable.stageAVariable'] ]
  jobs:  
  - job: JobB
    displayName: Job B
    steps:
    # stageBVariable has the value set in JobA
    - script: |
        echo $(stageBVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

- stage: StageC
  displayName: Stage C
  # Need to explicitly depend on Stage A
  dependsOn: 
  - StageA
  - StageB
  variables:
    # Map the output variable from Stage A into Stage C
    stageCVariable: $[ stageDependencies.StageA.JobA.outputs['outputVariable.stageAVariable'] ]
  jobs:  
  - job: JobC
    displayName: Job C
    steps:
    # stageCVariable has the value set in JobA
    - script: |
        echo $(stageCVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

Here, especially for StageC, we need to explicitly depend on StageA. You might skip it, thinking the stages run sequentially (StageA -> StageB -> StageC), so StageC technically already depends on StageA. But unfortunately, depending just on the previous stage is not enough: you need to explicitly declare a dependency on the stage from which you are mapping the variable.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Thursday, September 12, 2024

Azure Pipelines: Passing Variables Between Jobs

In this post let's see how we can pass variables between jobs in Azure Pipelines.

Let's say we need to pass a variable from JobA to future jobs.

trigger:
- main/*

pool:
  vmImage: ubuntu-latest

stages:
- stage: StageA
  displayName: Stage A
  jobs:  
  - job: JobA
    displayName: Job A
    steps:
    # Create output variable and set value
    - script: |
        echo "##vso[task.setvariable variable=jobAVariable;isOutput=true]someValue"
      name: outputVariable
      displayName: Output Variable

  - job: JobB
    displayName: Job B
    dependsOn: JobA
    variables:
      # Map the output variable from Job A into Job B
      jobBVariable: $[ dependencies.JobA.outputs['outputVariable.jobAVariable'] ]
    steps:
    # This will print the updated value of the variable
    - script: |
        echo $(jobBVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

  - job: JobC
    displayName: Job C
    dependsOn: JobA
    variables:
      # Map the output variable from Job A into Job C
      jobCVariable: $[ dependencies.JobA.outputs['outputVariable.jobAVariable'] ]
    steps:
    # This will print the updated value of the variable
    - script: |
        echo $(jobCVariable)
      name: printUpdatedVariable
      displayName: Print Updated Variable

Note how StageA.JobA.outputVariable outputs the variable using isOutput=true. By default, jobs run in parallel, so first we need to wait until JobA completes using dependsOn. Then we can map the variable into any dependent job through dependencies.
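For comparison, when isOutput=true is omitted, a variable set via task.setvariable is only visible to subsequent steps within the same job. A minimal sketch:

```yaml
- job: JobD
  displayName: Job D
  steps:
  # Without isOutput=true, this is a job-scoped variable
  - script: |
      echo "##vso[task.setvariable variable=localVariable]someValue"
    name: setVariable
    displayName: Set Variable
  # Later steps in the same job can read it as $(localVariable);
  # other jobs cannot map it via dependencies
  - script: |
      echo $(localVariable)
    displayName: Print Variable
```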

Hope this helps.

Happy Coding.

Regards,
Jaliya

Sunday, September 8, 2024

Azure Pipelines: DurableTask.Netherite: Netherite backend requires 64bit, but current process is 32bit.

Recently, one of our .NET Isolated Azure Durable Functions apps got switched to use Netherite as its storage provider, and suddenly all the integration tests started failing in the Azure DevOps CI pipeline.

The underlying error was:  

DurableTask.Netherite: Netherite backend requires 64bit, but current process is 32bit

In the pipeline, we were using FuncToolsInstaller@0 to install Azure Functions Core Tools.

- task: FuncToolsInstaller@0
  displayName: Install Azure Func Core Tools

Upon investigating, I noticed FuncToolsInstaller@0 installs the x86 version of Azure Functions Core Tools.

I checked choco (Chocolatey), and it was also using x86. Fortunately, npm was using x64 by default, so I switched to installing Azure Functions Core Tools via npm.

- bash: |
    npm i -g azure-functions-core-tools@4 --unsafe-perm true
  displayName: Install Azure Func Core Tools

And that solved the issue.

I created an enhancement request on microsoft/azure-pipelines-tasks for FuncToolsInstaller@0:
[enhancement]: FuncToolsInstaller@0: Use win-x64 for Windows

Hope this helps.

Happy Coding.

Regards,
Jaliya