Wednesday, October 23, 2024

.NET 9.0: Out-of-order Metadata Reading in System.Text.Json

The release of .NET 9.0 is about three weeks away, and in this post, let's have a look at one of the most awaited System.Text.Json features.

Let's start with an example.

Consider the below types.
[JsonDerivedType(typeof(Student), "student")]
record Person(string Name);

record Student(string Name, string StudentId) : Person(Name);
The following code will throw an exception on Deserialize.
using System.Text.Json.Serialization;
using System.Text.Json;

JsonSerializerOptions options = new();

Person person = new Student("John Doe", "STU001");
string serializedPerson = JsonSerializer.Serialize(person, options);
// {"$type":"student","StudentId":"STU001","Name":"John Doe"}

// Change the order of $type
serializedPerson = """
{
    "StudentId":"STU001",
    "Name":"John Doe",
    "$type":"student"
}
""";

person = JsonSerializer.Deserialize<Person>(serializedPerson, options)!; // Exception
// System.Text.Json.JsonException: The metadata property is either not supported by the type or is not the first property in the deserialized JSON object.
The reason is (as the exception says) that the metadata property $type is not the first property in the JSON string, and it has to be. There is a reason for that requirement (it lets the serializer deserialize in a single forward pass without buffering), but until .NET 9, it was a known limitation (see dotnet/runtime#72604).

Finally, in .NET 9, we can enable AllowOutOfOrderMetadataProperties in JsonSerializerOptions.
JsonSerializerOptions options = new()
{
    AllowOutOfOrderMetadataProperties = true
};
And now the above would work just fine.
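Putting it all together, here is the complete example, using the same types and JSON from above, now deserializing successfully with the new option:

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

JsonSerializerOptions options = new()
{
    AllowOutOfOrderMetadataProperties = true
};

// $type is the last property, which is no longer a problem
string serializedPerson = """
{
    "StudentId":"STU001",
    "Name":"John Doe",
    "$type":"student"
}
""";

Person person = JsonSerializer.Deserialize<Person>(serializedPerson, options)!;
Console.WriteLine(person); // Student { Name = John Doe, StudentId = STU001 }

[JsonDerivedType(typeof(Student), "student")]
record Person(string Name);

record Student(string Name, string StudentId) : Person(Name);
```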

Hope this helps.

Happy Coding.

Regards,
Jaliya

Monday, October 21, 2024

Ingesting Custom Text Log Files from an Azure Virtual Machine into Log Analytics

I recently had a requirement where I wanted to see some logs in an Azure Virtual Machine, but didn't want to SSH into the VM to see them. Instead, it would have been nice if I could see those in a Log Analytics workspace (LAW) in Azure. That gives a lot of advantages; for example, I can set up alerts on those logs if I want to.

So in this post, let's see how we can ingest logs that are being written to a custom log file on an Azure Virtual Machine into LAW.

Assuming we already have a LAW created, the first step is creating a custom table there to ingest the logs into.

# Connect-AzAccount
$tableName = "<TableName>_CL"
$tableParams = @"
{
    "properties": {
        "schema": {
            "name": "$tableName",
            "columns": [
                {
                    "name": "TimeGenerated",
                    "type": "DateTime"
                },
                {
                    "name": "RawData",
                    "type": "String"
                },
                {
                    "name": "FilePath",
                    "type": "String"
                },
                {
                    "name": "Computer",
                    "type": "String"
                }
            ]
        }
    }
}
"@


Invoke-AzRestMethod `
    -Path "/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCE_GROUP>/providers/microsoft.operationalinsights/workspaces/<LAW_NAME>/tables/$($tableName)?api-version=2021-12-01-preview" `
    -Method PUT `
    -Payload $tableParams

Here I am using the default schema, as for demo purposes I am not doing any transformations.

Make sure the table is created in your LAW.
Log Analytics Workspace
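The table creation can also be verified from PowerShell by calling the same endpoint with GET (a sketch, using the same placeholders as above):

```powershell
Invoke-AzRestMethod `
    -Path "/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCE_GROUP>/providers/microsoft.operationalinsights/workspaces/<LAW_NAME>/tables/$($tableName)?api-version=2021-12-01-preview" `
    -Method GET
```

A 200 response containing the table schema confirms the table exists.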
Next, I am going to Azure Monitor (search for Monitor in the Azure Portal) and creating a Data Collection Endpoint (DCE).
Create data collection endpoint
It's pretty straightforward.

Now from Azure Monitor, I am creating a Data Collection Rule (DCR).

Create Data Collection Rule
My VM is on Linux, so I am selecting Linux as the Platform Type, and I am selecting the DCE I created in the previous step as the Data Collection Endpoint.

Next on Resources, I am adding a new Resource and selecting the target VM and the DCE.
Create Data Collection Rule: Resources
Now I am going to the Collect and deliver tab and adding a Data Source of type Custom Text Logs.
Add data source: Custom Text Logs

Add data source: Custom Text Logs
Here, for the File pattern, I am giving the path of the file the logs should be ingested from, and for the Table name, the name of the LAW table I created before. I am leaving the Transform as it is.

Now configuring the destination as Azure Monitor Logs as follows.

Add data source: Custom Text Logs
And that's it. I am creating the DCR, and Azure will deploy the Azure Monitor Agent to the target VM and configure the ingestion.
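To generate a few test entries on the VM, appending lines to the watched file is enough; each line becomes one record. The path below is an assumption — use the path you configured as the File pattern:

```shell
# Path is an assumption — match it to the DCR's File pattern
LOG_FILE="${LOG_FILE:-/var/log/myapp/app.log}"
mkdir -p "$(dirname "$LOG_FILE")"

# One plain-text line per entry; each line ends up in the RawData column
echo "$(date -u '+%Y-%m-%dT%H:%M:%SZ') INFO application started" >> "$LOG_FILE"
echo "$(date -u '+%Y-%m-%dT%H:%M:%SZ') WARN disk usage above 80%" >> "$LOG_FILE"
```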

And once the deployment is completed, after some time, I can see the logs in LAW.
Ingested Logs
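The ingested entries can be queried in the LAW with a simple KQL query like the following (the table name is the placeholder from earlier — substitute your own):

```kusto
<TableName>_CL
| where TimeGenerated > ago(1h)
| project TimeGenerated, RawData, FilePath, Computer
```

Each line of the log file lands in the RawData column, along with the source file path and computer name.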
More reading:
   Data collection rules (DCRs) in Azure Monitor
   Collect data with Azure Monitor Agent
   Troubleshooting guidance for the Azure Monitor agent on Linux virtual machines and scale sets

Hope this helps.

Happy Coding.

Regards,
Jaliya

Monday, October 14, 2024

Azure AD B2C: Copy signInNames.emailAddress to email Claim in a Custom Policy

I had a requirement where in an Azure AD B2C custom policy, as part of a UserJourney, there was an OrchestrationStep to invoke a REST endpoint passing in the email claim (if it exists). While the REST endpoint was executed for AzureADSignUpOrSignIn, it wasn't being executed for LocalSignUpOrSignIn; the email claim was missing from the claims bag. For LocalSignUpOrSignIn, the claims bag has signInNames.emailAddress instead.

In this post, let's see how we can make use of a TechnicalProfile to copy the signInNames.emailAddress claim to the email claim.

The first step is creating a ClaimsTransformation.
<ClaimsTransformations>
  ...
  <ClaimsTransformation Id="CopySignInNamesEmailAddressToEmail" TransformationMethod="CopyClaim">
    <InputClaims>
      <InputClaim ClaimTypeReferenceId="signInNames.emailAddress" TransformationClaimType="inputClaim" />
    </InputClaims>
    <OutputClaims>
      <OutputClaim ClaimTypeReferenceId="email" TransformationClaimType="outputClaim" />
    </OutputClaims>
  </ClaimsTransformation>
</ClaimsTransformations>
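Note: this assumes both claims are declared in the ClaimsSchema. signInNames.emailAddress and email are typically already defined in the base policy; if not, a minimal declaration would look like this:

```xml
<BuildingBlocks>
  <ClaimsSchema>
    <ClaimType Id="signInNames.emailAddress">
      <DisplayName>Email Address</DisplayName>
      <DataType>string</DataType>
    </ClaimType>
    <ClaimType Id="email">
      <DisplayName>Email Address</DisplayName>
      <DataType>string</DataType>
    </ClaimType>
  </ClaimsSchema>
</BuildingBlocks>
```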
Then create a TechnicalProfile that uses the ClaimsTransformation.
<ClaimsProviders>
  ...
  <ClaimsProvider>
    ...
    <TechnicalProfiles>
      ...
      <TechnicalProfile Id="Transform-SignInNamesEmailAddressToEmail">
        <DisplayName>Transform SignInNames.EmailAddress To Email</DisplayName>
        <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.ClaimsTransformationProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        <OutputClaims>
          <OutputClaim ClaimTypeReferenceId="email" />
        </OutputClaims>
        <OutputClaimsTransformations>
          <OutputClaimsTransformation ReferenceId="CopySignInNamesEmailAddressToEmail" />
        </OutputClaimsTransformations>
        <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
      </TechnicalProfile>
    </TechnicalProfiles>
  </ClaimsProvider>
</ClaimsProviders>
Then update the UserJourney by adding an OrchestrationStep.
<UserJourneys>
  ...
  <UserJourney Id="LocalSignUpOrSignIn">
    <OrchestrationSteps>
      ...
      <OrchestrationStep Order="6" Type="ClaimsExchange"> <!--only execute if email claim does not exist AND signInNames.emailAddress claim exists -->
        <Preconditions>
          <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
            <Value>email</Value>
            <Action>SkipThisOrchestrationStep</Action>
          </Precondition>
          <Precondition Type="ClaimsExist" ExecuteActionsIf="false">
            <Value>signInNames.emailAddress</Value>
            <Action>SkipThisOrchestrationStep</Action>
          </Precondition>
        </Preconditions>
        <ClaimsExchanges>
          <ClaimsExchange Id="TransformSignInNamesEmailAddressToEmail" TechnicalProfileReferenceId="Transform-SignInNamesEmailAddressToEmail" />
        </ClaimsExchanges>
      </OrchestrationStep>
      ...
    </OrchestrationSteps>
  </UserJourney>
</UserJourneys>
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, October 2, 2024

DNS Resolution in Internal Container Apps Environments (CAE)

In this post, let's see how an app, App A, can communicate with a container app, App B, deployed on a Container Apps Environment (CAE) that is integrated into a VNet using an internal virtual IP. App A is running inside the same VNet.
Create Container Apps Environment
The container app App B has Ingress enabled and accepts traffic from anywhere within the VNet.
Container App: Ingress settings
Now from App A, let's try to communicate with App B.
Server can't find
App A can't resolve the address. 

The reason: since the CAE is internal, Azure doesn't automatically provide name resolution for the environment's default domain inside the VNet. So let's create and configure an Azure Private DNS Zone for domain resolution.

I am using the Azure CLI (on Windows).
# Declare Variables
$RESOURCE_GROUP = '<RESOURCE_GROUP>'
$CA_ENV_NAME = '<CONTAINER_APP_ENVIRONMENT_NAME>'
$VNET_NAME = '<VNET_NAME>'

# Retrieve the default domain of the Container App Environment
$CA_ENV_DEFAULT_DOMAIN = az containerapp env show `
  --resource-group $RESOURCE_GROUP `
  --name $CA_ENV_NAME `
  --query properties.defaultDomain `
  --output tsv

# Retrieve the static IP of the Container App Environment
$CA_ENV_STATIC_IP = az containerapp env show `
  --resource-group $RESOURCE_GROUP `
  --name $CA_ENV_NAME `
  --query properties.staticIp `
  --output tsv
Let's create the Private DNS Zone first.
# Create the Private DNS Zone
az network private-dns zone create `
  --resource-group $RESOURCE_GROUP `
  --name $CA_ENV_DEFAULT_DOMAIN
Create the Private DNS Zone
Next, we need to add a Virtual Network Link to the target VNet.
# Create the Virtual Network Link
az network private-dns link vnet create `
  --resource-group $RESOURCE_GROUP `
  --name $VNET_NAME `
  --virtual-network $VNET_NAME `
  --zone-name $CA_ENV_DEFAULT_DOMAIN `
  --registration-enabled true
Create the Virtual Network Link
Finally, create a wildcard A record pointing the default domain of the CAE to its static IP.
# Create the A Record
az network private-dns record-set a add-record `
  --resource-group $RESOURCE_GROUP `
  --zone-name $CA_ENV_DEFAULT_DOMAIN `
  --record-set-name '*' `
  --ipv4-address $CA_ENV_STATIC_IP
Create the A Record
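Optionally, name resolution can be sanity-checked from App A before calling the app. The hostname below is a placeholder — a container app's FQDN is the app name under the CAE default domain:

```shell
# Resolve the container app's FQDN; it should return the CAE static IP
nslookup <APP_B_NAME>.<CA_ENV_DEFAULT_DOMAIN>

# And call the app through its ingress
curl https://<APP_B_NAME>.<CA_ENV_DEFAULT_DOMAIN>
```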
Now let's try to communicate with App B again.
Success
And that's it.

Hope this helps.

Happy Coding.

Regards,
Jaliya