Friday, January 9, 2026

Isolated Azure Functions: Multiple Output Bindings with HTTP Functions

In this post, let's look at how we can use multiple output bindings with HTTP-triggered Azure Functions running on the isolated worker model.

When working with HTTP-triggered functions, you often need to return an HTTP response to the caller while also sending data to other services like Azure Service Bus, Azure Queue Storage, etc. This is where multiple output bindings come in handy.

HttpRequestData vs HttpRequest

With the Azure Functions isolated worker model, by default we have the HttpRequestData and HttpResponseData types from the Microsoft.Azure.Functions.Worker.Extensions.Http package.

However, we can also use ASP.NET Core integration via the Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore NuGet package. This enables using standard ASP.NET Core types, including HttpRequest, HttpResponse, and IActionResult, in HTTP triggers.

Let's see how multiple output bindings work with both approaches.

To have multiple outputs, we need to create a class that contains our output binding properties.

Using HttpRequestData

using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using System.Net;

namespace FunctionApp1;

public class HttpStartWithHttpRequestData
{
    [Function(nameof(HttpStartWithHttpRequestData))]
    public async Task<HttpStartOutputWithHttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req)
    {
        HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteStringAsync("Message Sent");

        return new HttpStartOutputWithHttpResponseData()
        {
            ServiceBusMessageContent = "Some Content",
            HttpResponse = response
        };
    }
}

public class HttpStartOutputWithHttpResponseData
{
    [ServiceBusOutput("sbt-test-topic", ServiceBusEntityType.Topic, Connection = "ServiceBusConnection")]
    public string? ServiceBusMessageContent { get; set; }

    public HttpResponseData HttpResponse { get; set; }
}

With HttpRequestData, the HttpResponseData property doesn't require any special attribute. The function runtime automatically recognizes it as the HTTP response.

Note: The ServiceBusMessageContent property is nullable. If the value isn't set (i.e., it's null), no message will be sent to the Service Bus. This allows you to conditionally send messages based on your business logic.
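For example, a minimal sketch of that conditional pattern (shouldNotify is a hypothetical flag standing in for your business logic):

bool shouldNotify = false; // e.g. derived from the request or business rules

return new HttpStartOutputWithHttpResponseData()
{
    // When this stays null, no message is sent to the Service Bus topic.
    ServiceBusMessageContent = shouldNotify ? "Some Content" : null,
    HttpResponse = response
};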

Using HttpRequest (ASP.NET Core Integration)

When using HttpRequest from the ASP.NET Core integration, things are slightly different. We need to use the [HttpResult] attribute to indicate which property is the HTTP response.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using System.Net;

namespace FunctionApp1;

public class HttpStartWithHttpRequest
{
    [Function(nameof(HttpStartWithHttpRequest))]
    public HttpStartOutputWithHttpResponse Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
    {
        return new HttpStartOutputWithHttpResponse()
        {
            ServiceBusMessageContent = "Some Content",
            HttpResponse = new ObjectResult("Message Sent")
            {
                StatusCode = (int)HttpStatusCode.OK
            }
        };
    }
}

public class HttpStartOutputWithHttpResponse
{
    [ServiceBusOutput("sbt-test-topic", ServiceBusEntityType.Topic, Connection = "ServiceBusConnection")]
    public string? ServiceBusMessageContent { get; set; }

    [HttpResult]
    public IActionResult HttpResponse { get; set; }
}

The [HttpResult] attribute is required here because without it, the runtime won't know which property represents the HTTP response.

Note: Similar to the previous example, the ServiceBusMessageContent property is nullable. If the value isn't set, no message will be sent to the Service Bus.

Limitations

There's a significant limitation when it comes to Service Bus output bindings in the isolated worker model. The output value has to be the message body itself: it can be a simple type (like string) or a complex type (a POCO) that gets serialized into the body.
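For instance, a POCO output could look like the following (a minimal sketch; OrderCreated is a hypothetical type):

public record OrderCreated(string OrderId, decimal Amount);

public class HttpStartOutputWithPoco
{
    // The POCO gets serialized into the message body (JSON by default).
    [ServiceBusOutput("sbt-test-topic", ServiceBusEntityType.Topic, Connection = "ServiceBusConnection")]
    public OrderCreated? Message { get; set; }

    public HttpResponseData? HttpResponse { get; set; }
}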

You cannot use ServiceBusMessage from the Azure.Messaging.ServiceBus SDK as the output type. If you try to do so, the entire ServiceBusMessage object gets serialized as JSON, resulting in something like this:

{
  "Body": "eyJDb250ZW50IjoiU29tZSBDb250ZW50In0=",
  "MessageId": null,
  "PartitionKey": null,
  "SessionId": null,
  "TimeToLive": "10675199.02:48:05.4775807",
  "CorrelationId": null,
  "Subject": null,
  "ContentType": null,
  "ApplicationProperties": {}
}

This means you cannot set message properties like CorrelationId, SessionId, Subject, or ApplicationProperties using output bindings in the isolated worker model. If you need to set these properties, you'll have to use the ServiceBusClient directly.
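Here's a minimal sketch of that ServiceBusClient approach (assuming the Azure.Messaging.ServiceBus package and a DefaultAzureCredential-based connection; the placeholder values are illustrative):

using Azure.Identity;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<namespace>.servicebus.windows.net", new DefaultAzureCredential());
ServiceBusSender sender = client.CreateSender("sbt-test-topic");

var message = new ServiceBusMessage("Some Content")
{
    CorrelationId = "<correlation-id>",
    Subject = "<subject>"
};
message.ApplicationProperties.Add("SomeKey", "SomeValue");

await sender.SendMessageAsync(message);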

This is a known limitation and has been discussed in several GitHub issues.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, January 6, 2026

Copying Azure Cosmos DB NoSQL Containers to a Database in another Account

In this post, let's see how to copy Azure Cosmos DB NoSQL containers from one database to another database in a different Azure Cosmos DB NoSQL account.

Currently within the Azure Portal itself, only Container Copy within the same account is supported.

Container Copy

There are other options like Azure Data Factory, but it's much easier to do this using az cosmosdb copy from the Azure CLI cosmosdb-preview extension.

Let's have a look.

The first step is adding the cosmosdb-preview extension.

az extension add --name cosmosdb-preview

Now I am declaring script level variables.

$sourceSubscriptionId = "<sourceSubscriptionId>"
$sourceAccountResourceGroup = "<sourceAccountResourceGroup>"
$sourceAccountName = "<sourceAccountName>"

$destinationSubscriptionId = "<destinationSubscriptionId>"
$destinationAccountResourceGroup = "<destinationResourceGroup>"
$destinationAccountName = "<destinationAccountName>"

# ManagedIdentity of destination Cosmos DB account
$destinationUserAssignedManagedIdentityName = "<destinationUserAssignedManagedIdentityName>"

$destinationUserAssignedManagedIdentityPrincipalId = az identity show `
    --resource-group $destinationAccountResourceGroup `
    --name $destinationUserAssignedManagedIdentityName `
    --query principalId `
    --output tsv

$destinationUserAssignedManagedIdentityResourceId = az identity show `
    --resource-group $destinationAccountResourceGroup `
    --name $destinationUserAssignedManagedIdentityName `
    --query id `
    --output tsv

We can copy containers in two modes: Online copy and Offline copy. With Offline copy, we need to stop operations on the source container, which I don't want to do. So in this post, we are going to be doing an Online copy. I am also using User-Assigned managed identities for database access.

Requirements

Source Account

  1. Enable continuous backup on source Azure Cosmos DB account.
  2. Enable All version and delete change feed mode (preview) feature on the source account.
    Features: All version and delete change feed mode (preview)
  3. Add capability: EnableOnlineContainerCopy

# Existing capabilities of your account.
$cosmosdb = az cosmosdb show `
    --resource-group $sourceAccountResourceGroup `
    --name $sourceAccountName

$capabilities = (($cosmosdb | ConvertFrom-Json).capabilities)

# Append EnableOnlineContainerCopy capability in the list of capabilities.
$capabilitiesToAdd = @()
foreach ($item in $capabilities) {
    $capabilitiesToAdd += $item.name
}
$capabilitiesToAdd += "EnableOnlineContainerCopy"

# Update Cosmos DB account
az cosmosdb update `
    --resource-group $sourceAccountResourceGroup `
    --name $sourceAccountName `
    --capabilities $capabilitiesToAdd

Verify the capabilities and ensure EnableOnlineContainerCopy is added.

Source Account Capabilities

Destination Account
  1. Currently cross-account container copy is only supported for accounts with a System-Assigned or User-Assigned default identity. So make sure the destination account's default identity is set to the destination User-Assigned managed identity.

# Show the current default identity of the destination Cosmos DB account
az cosmosdb show `
    --resource-group $destinationAccountResourceGroup `
    --name $destinationAccountName `
    --query defaultIdentity

# Update default identity for the destination Cosmos DB account
az cosmosdb update `
    --resource-group $destinationAccountResourceGroup `
    --name $destinationAccountName `
    --default-identity=UserAssignedIdentity=$destinationUserAssignedManagedIdentityResourceId

Now we need to grant the destination Cosmos DB account's managed identity read-only access to the source Cosmos DB account, so we can read the data.

az account set --subscription $sourceSubscriptionId

$roleDefinitionId = "00000000-0000-0000-0000-000000000001" # Read-Only Role Definition Id
az cosmosdb sql role assignment create `
    --resource-group $sourceAccountResourceGroup `
    --account-name $sourceAccountName `
    --role-definition-id $roleDefinitionId `
    --scope "/" `
    --principal-id $destinationUserAssignedManagedIdentityPrincipalId

Now we are all set. The next step is creating a job to copy the container.

Create Container Copy Job:

az account set --subscription $destinationSubscriptionId

$jobName = "<jobName>"

$sourceDatabase = "<sourceDatabase>"
$sourceContainer = "<sourceContainer>"

$destinationDatabase = "<destinationDatabase>"
$destinationContainer = "<destinationContainer>"

az cosmosdb copy create `
    --resource-group $destinationAccountResourceGroup `
    --job-name $jobName `
    --src-account $sourceAccountName `
    --src-nosql database=$sourceDatabase container=$sourceContainer `
    --dest-account $destinationAccountName `
    --dest-nosql database=$destinationDatabase container=$destinationContainer `
    --mode Online

az cosmosdb copy create

Once it's started, we can query the job status.

Query Job Status:

az cosmosdb copy show `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName `
    --job-name $jobName

az cosmosdb copy show

When the processedCount becomes greater than or equal to the totalCount, complete the job.

Complete the Job:

az cosmosdb copy complete `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName `
    --job-name $jobName

After some time, query the status again and make sure it's Completed.

az cosmosdb copy show

That's it.

Some useful commands

# List all copy jobs
az cosmosdb copy list `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName

# Pause copy job
az cosmosdb copy pause `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName `
    --job-name $jobName

# Resume copy job
az cosmosdb copy resume `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName `
    --job-name $jobName

# Cancel copy job
az cosmosdb copy cancel `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName `
    --job-name $jobName

Hope this helps.

More read:
   Copy jobs in Azure Cosmos DB (preview)
   Create and manage container copy jobs in Azure Cosmos DB (Preview)

Happy Coding.

Regards,
Jaliya

Sunday, January 4, 2026

.NET 10: Default Container Images Changed from Debian to Ubuntu

When you are upgrading your .NET application to .NET 10 and using Docker, you might run into build errors where certain apt-get packages are no longer available.

Starting with .NET 10, the default Linux distribution for .NET container images has changed from Debian to Ubuntu 24.04 "Noble Numbat". This change was driven by the poor match between Debian's release lifecycle and .NET's.

This means if your Dockerfile installs any Debian-specific packages using apt-get, those packages might not be available or might have different names in Ubuntu.

FROM mcr.microsoft.com/dotnet/aspnet:10.0 AS base
RUN apt-get update -y \
    && apt-get install -y some-debian-package  # This might fail!

The fix is to find the equivalent Ubuntu package name and update your Dockerfile accordingly. You can search for Ubuntu packages at https://packages.ubuntu.com/.
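For example, ICU package names are versioned per distribution: Debian 12 provides libicu72, while Ubuntu 24.04 provides libicu74, so an apt-get install -y libicu72 line would need to become libicu74.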

Note that Debian container images will NOT be shipped for .NET 10. If you specifically need Debian-based images, you will need to create custom container images.

So this is something to look out for when upgrading.

More information:
   .NET 10: Default .NET images use Ubuntu

Hope this helps.

Happy Coding.

Regards,
Jaliya

Friday, January 2, 2026

Sending Emails using Azure Communication Services in .NET

In this post, let's see how to send emails using Azure Communication Services (ACS) in a .NET application. With SendGrid's free tier being discontinued, ACS has become an attractive alternative for sending transactional emails from Azure-hosted applications.

First, you need to create two resources in Azure:
  1. Email Communication Services - This handles email domain configuration
  2. Communication Services - This is the main resource your application connects to

Creating Email Communication Services

In the Azure Portal, search for Email Communication Services and create a new resource. Once created, you need to add a domain. You have two options:
  1. Azure Managed Domain: Quick setup; gives you a sender address like DoNotReply@xxxxxxxx.azurecomm.net on an Azure-managed subdomain
  2. Custom Domain: Use your own domain like noreply@yourdomain.com

For a custom domain, you'll need to add the following DNS records for verification; Azure will provide the values.

  • TXT: Domain ownership verification
  • TXT: SPF (Sender Policy Framework)
  • CNAME: DKIM key 1
  • CNAME: DKIM key 2

You can do this by clicking on Provision domains -> Custom domain and following the steps.

Email Communication Services: Add Custom Domain

Note: If you have an existing SPF record (e.g. from email forwarding), you need to merge them, since you can only have one SPF record per domain:

v=spf1 include:spf.protection.outlook.com include:other.service.com -all

After domain verification, add a MailFrom address under your domain.
Email Communication Services Domain: MailFrom addresses

Linking Domain to Communication Services

Create a Communication Services resource, then navigate to Email -> Domains -> Connect domains and link your verified domain.

Communication Services: Link Email Domain

Implementing in .NET

First, we need to install the following NuGet package:

dotnet add package Azure.Communication.Email

Add the following to your appsettings.json:

{
  "Email": {
    "Endpoint": "https://<your-acs-resource>.communication.azure.com",
    "SenderAddress": "<configured_mailfrom_address>"
  }
}

We can define an Options Class to map Email settings.

namespace YourApp.Options;

public record Email
{
    public required string Endpoint { get; init; }

    public required string SenderAddress { get; init; }
}

Now let's register the Email options, EmailClient and an EmailService.

// In Program.cs

using Azure.Communication.Email;
using Azure.Identity;
using Microsoft.Extensions.Options;
using YourApp.Options;
using YourApp.Services;

builder.Services.AddOptions<Email>()
    .Bind(builder.Configuration.GetSection("Email"));

builder.Services.AddSingleton<EmailClient>(sp =>
{
    var emailOptions = sp.GetRequiredService<IOptions<Email>>();
    var credential = new DefaultAzureCredential();

    return new EmailClient(new Uri(emailOptions.Value.Endpoint), credential);
});

builder.Services.AddScoped<EmailService>();

Now let's create a simple EmailService.

using Azure.Communication.Email;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using YourApp.Options;

namespace YourApp.Services;

public class EmailService
{
    private readonly EmailClient _emailClient;
    private readonly Email _emailOptions;
    private readonly ILogger<EmailService> _logger;

    public EmailService(
        EmailClient emailClient,
        IOptions<Email> emailOptions,
        ILogger<EmailService> logger)
    {
        _emailClient = emailClient;
        _emailOptions = emailOptions.Value;
        _logger = logger;
    }

    public async Task SendEmailAsync(
        string recipientEmail,
        string subject,
        string htmlContent,
        string plainTextContent,
        CancellationToken cancellationToken = default)
    {
        var emailMessage = new EmailMessage(
            senderAddress: _emailOptions.SenderAddress,
            recipientAddress: recipientEmail,
            content: new EmailContent(subject)
            {
                Html = htmlContent,
                PlainText = plainTextContent
            });

        try
        {
            EmailSendOperation operation = await _emailClient.SendAsync(
                Azure.WaitUntil.Completed,
                emailMessage,
                cancellationToken);

            _logger.LogInformation(
                "Email sent successfully. MessageId: {MessageId}",
                operation.Id);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Failed to send email");
            throw;
        }
    }
}
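As a usage sketch (assuming a minimal API endpoint; the route and values are illustrative), the service can then be injected and called like this:

app.MapPost("/notify", async (EmailService emailService, CancellationToken cancellationToken) =>
{
    await emailService.SendEmailAsync(
        recipientEmail: "someone@example.com",
        subject: "Welcome!",
        htmlContent: "<h1>Hello from ACS</h1>",
        plainTextContent: "Hello from ACS",
        cancellationToken: cancellationToken);

    return Results.Ok();
});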

Managed Identity RBAC Role

When using DefaultAzureCredential with a Managed Identity in production, you must assign the Communication and Email Service Owner RBAC role to your identity on the Communication Services resource.

az role assignment create `
  --assignee <managed-identity-principal-id> `
  --role "Communication and Email Service Owner" `
  --scope /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Communication/CommunicationServices/<acsName>

Once email is sent:

Received Email

Hope this helps.

More read:
   Azure Communication Services Email Overview
   Quickstart: Send Email
   Email Pricing

Happy Coding.

Regards,
Jaliya

Thursday, December 25, 2025

Content Understanding Studio: Adding Microsoft Foundry Resource: Defaults have not yet been set Error

I had a fresh Microsoft Foundry resource, and when attempting to add it into Content Understanding Studio, I made sure to check "Enable auto-deployment for required models if no defaults are available".

Content Understanding Studio: Add new connected resource

But I was still getting the following error.

Content Understanding Studio: Add new connected resource

Upon checking, the underlying error was:
{
  "error": {
    "code": "InvalidRequest",
    "message": "Invalid request.",
    "innererror": {
      "code": "DefaultsNotSet",
      "message": "Defaults have not yet been set. Call 'PATCH /contentunderstanding/defaults' first."
    }
  }
}

The fix (as mentioned in the error message) is to manually call the PATCH endpoint with an empty body before adding the resource.

curl --location --request PATCH 'https://{foundry-resource}.cognitiveservices.azure.com/contentunderstanding/defaults?api-version=2025-11-01' `
--header 'ocp-apim-subscription-key: {ocp-apim-subscription-key}' `
--header 'Content-Type: application/json' `
--data '{}'

Once this PATCH request is made, the Foundry resource gets added successfully without any errors.

This appears to be a bug. When "Enable auto-deployment for required models if no defaults are available" is checked, the PATCH call should be handled internally by the Studio.

Happy Coding.

Regards,
Jaliya

Tuesday, December 23, 2025

Studio 3T and Azure Cosmos DB for MongoDB

When working with MongoDB, having a reliable GUI tool makes all the difference. Studio 3T has been my go-to tool for years, and it obviously works seamlessly with MongoDB. And most importantly, it works very well with Azure Cosmos DB for MongoDB.

There are several MongoDB GUI tools available, but Studio 3T stands out for a few reasons:
  • Visual Query Builder: Build queries visually if you prefer not to write JSON
  • IntelliShell: Auto-completion for MongoDB queries with syntax highlighting
  • Aggregation Editor: Step-by-step pipeline builder with stage-by-stage output preview
  • SQL Query: Write SQL and have it translated to MongoDB query language
  • Import/Export: Easily move data between MongoDB, JSON, CSV, and SQL databases

Connecting to Azure Cosmos DB for MongoDB from Studio 3T is just as easy as connecting to any other MongoDB instance. Just copy the Connection String, paste it in, and you are connected.

Note: Azure Cosmos DB for MongoDB requires SSL; the ssl=true option is already included in the connection string.

SQL to MongoDB

If you are coming from a SQL background, the SQL Query feature is a lifesaver. Write a query like:

SELECT * 
FROM employees 
WHERE department = 'IT'
ORDER BY name 
LIMIT 10

And Studio 3T translates it to:

db.employees
    .find({ department: "IT" })
    .sort({ name: 1 })
    .limit(10);

Do try it out.

Happy Coding.

Regards,
Jaliya

Thursday, December 18, 2025

DefaultAzureCredential: Troubleshooting Local Development Issues

DefaultAzureCredential is the recommended approach for authenticating with Azure services, and in most cases we rarely rely on access keys anymore; authentication is typically handled through managed identities.

However, during local development, authentication falls back to the developer’s user account, which can occasionally introduce unexpected complexity and frustration.

I usually use the following DefaultAzureCredentialOptions:

DefaultAzureCredentialOptions credentialOptions = new()
{
    // Explicitly specify the tenant to avoid cross-tenant issues
    TenantId = "<TenantId>",

    // Prioritize local development credentials
    ExcludeAzureCliCredential = false,          // Azure CLI (az login)
    ExcludeAzureDeveloperCliCredential = false, // Azure Developer CLI (azd auth login)
    ExcludeVisualStudioCredential = true,

    // Exclude irrelevant credentials
    ExcludeInteractiveBrowserCredential = true,
    ExcludeWorkloadIdentityCredential = true,

    // Keep managed identity for production.
    ExcludeManagedIdentityCredential = false,
};

DefaultAzureCredential defaultAzureCredential = new(credentialOptions);

Key points:
  • Always specify TenantId to avoid cross-tenant issues
  • Exclude VisualStudioCredential and rely on the Azure CLI and Azure Developer CLI credentials for local development
  • Keep ManagedIdentityCredential enabled so the same code works in production
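The credential can then be passed to any Azure SDK client. For example, a minimal sketch assuming the Azure.Storage.Blobs package (the account URL is a placeholder):

using Azure.Storage.Blobs;

BlobServiceClient blobServiceClient = new(
    new Uri("https://<storage-account>.blob.core.windows.net"),
    defaultAzureCredential);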
If you want to enable logging for troubleshooting:

using Azure.Core.Diagnostics;
using System.Diagnostics.Tracing;

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);

// Add services to the container.

WebApplication app = builder.Build();

ILoggerFactory loggerFactory = app.Services.GetRequiredService<ILoggerFactory>();
ILogger azureIdentityLogger = loggerFactory.CreateLogger("Azure.Identity");

using var listener = new AzureEventSourceListener((args, message) =>
{
    if (args.EventSource.Name == "Azure-Identity")
    {
        azureIdentityLogger.LogInformation("{Message}", message);
    }
}, EventLevel.Verbose);

// Configure the HTTP request pipeline.
app.Run();

Hope this helps.

Happy Coding.

Regards,
Jaliya

Monday, December 15, 2025

Azure Functions: Running with Production App Settings Locally

In this post, let's have a look at how to run Azure Functions locally with Production app settings and why some explicit configuration is required compared to ASP.NET Core applications.

When developing Azure Functions, you might want to test your function app locally using Production configuration. This is a common requirement when you want to verify settings before deployment or troubleshoot production-specific issues.

This is usually how we add appsettings.*.json files to Configuration:

using Microsoft.Azure.Functions.Worker.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

FunctionsApplicationBuilder builder = FunctionsApplication.CreateBuilder(args);

builder.Configuration
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
    .AddJsonFile($"appsettings.{builder.Environment.EnvironmentName}.json", optional: true, reloadOnChange: false)
    .AddEnvironmentVariables();

// Other configurations

IHost host = builder.Build();

host.Run();

In ASP.NET Core applications, you can simply set the ASPNETCORE_ENVIRONMENT or DOTNET_ENVIRONMENT environment variable, and the application will automatically load appsettings.<EnvironmentName>.json.

For Azure Functions, things work a bit differently.

First, Azure Functions uses AZURE_FUNCTIONS_ENVIRONMENT to determine the current environment. If not set, then it falls back to DOTNET_ENVIRONMENT.
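To quickly verify which environment the host resolved, you can, for example, log it at startup (a minimal sketch):

FunctionsApplicationBuilder builder = FunctionsApplication.CreateBuilder(args);

// Prints "Production" when AZURE_FUNCTIONS_ENVIRONMENT (or DOTNET_ENVIRONMENT) is set to Production.
Console.WriteLine($"Environment: {builder.Environment.EnvironmentName}");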

You can set it in your launchSettings.json:

{
  "profiles": {
    "MyProfile": {
      "commandName": "Project",
      "commandLineArgs": "--port 7052",
      "environmentVariables": {
        "AZURE_FUNCTIONS_ENVIRONMENT": "Production"
      }
    }
  }
}
Or in your local.settings.json:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "AZURE_FUNCTIONS_ENVIRONMENT": "Production"
  }
}

You can also use DOTNET_ENVIRONMENT instead of AZURE_FUNCTIONS_ENVIRONMENT; both will work.

Even after setting the environment variable, you might notice that your appsettings.<EnvironmentName>.json settings are not being loaded. This is because the file is not being copied to the output directory.

Here's the important difference:
  • Microsoft.NET.Sdk.Web (used by ASP.NET Core apps) automatically copies all appsettings.*.json files to the output directory.
  • Microsoft.NET.Sdk (used by Azure Functions, Class Libraries, Console apps) does not.
Azure Functions projects use Microsoft.NET.Sdk by default, which means you need to explicitly configure the project file to copy your environment-specific settings files.

Something like below:
<ItemGroup>
  <None Update="appsettings.Production.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
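If you have multiple environment-specific files, a wildcard item like <None Update="appsettings*.json"> with PreserveNewest should also work, saving the project file from listing each file individually.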
Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, December 2, 2025

Microsoft Entra External ID: Disable Sign Up in a User Flow

I was setting up an application on Microsoft Entra External ID and in my User Flow, I didn't want to enable Sign Up.
Sign Up/Sign In

So I wanted to remove the "No account? Create one" option.

Apparently the Microsoft Entra admin center doesn't have functionality to remove this within the portal.

It can, however, be done using the Microsoft Graph Beta API.

# Install the Microsoft Graph Beta module (required for authentication events flow management)
Install-Module Microsoft.Graph.Beta -Scope CurrentUser -Force
 
# Print version of Microsoft Graph Beta module
$mgBetaModule = Get-Module Microsoft.Graph.Beta -ListAvailable `
    | Sort-Object Version -Descending `
    | Select-Object -First 1
Write-Output "Using Microsoft.Graph.Beta: $($mgBetaModule.Version)" # As of today: 2.32.0
 
# Connect to Azure Account
Write-Output "Connecting to Azure Account..."
Connect-AzAccount
 
$tenantId = "<tenant-id>"
$targetFlowName = "<user-flow-name>"
 
# Connect to Microsoft Graph with required permissions
# Required scopes:
#   - Policy.ReadWrite.AuthenticationFlows: To read and modify authentication flows
#   - EventListener.Read.All/ReadWrite.All: To read and modify event listeners
#   - Application.Read.All/ReadWrite.All: To read and modify applications
Connect-MgGraph `
    -TenantId $tenantId `
    -Scopes "Policy.ReadWrite.AuthenticationFlows", `
        "EventListener.Read.All", `
        "EventListener.ReadWrite.All", `
        "Application.Read.All", `
        "Application.ReadWrite.All"
 
# Verify the connected tenant
$tenantId = (Get-MgContext).TenantId
Write-Output "Successfully connected to tenant: $tenantId"
 
# Retrieve all authentication events flows
$authenticationEventsFlows = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/identity/authenticationEventsFlows"
 
# Find the ID of the target flow
$targetFlowId = ($authenticationEventsFlows.value `
    | Where-Object { $_.displayName -eq $targetFlowName }).id
 
if (-not $targetFlowId) {
    Write-Output "ERROR: Flow '$targetFlowName' not found."
    exit 1
}
 
# Get the target flow
$targetFlow = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/identity/authenticationEventsFlows/$targetFlowId"
  
if ($targetFlow.onInteractiveAuthFlowStart.isSignUpAllowed -eq $false) {
    Write-Output "Sign-up is already disabled for this flow $targetFlowName."
    exit 0
}

Write-Output "Disabling sign-up for flow $targetFlowName..."
 
# Request body to disable sign-up
$body = @{
    "@odata.type" = "#microsoft.graph.externalUsersSelfServiceSignUpEventsFlow"
    "onInteractiveAuthFlowStart" = @{
        "@odata.type" = "#microsoft.graph.onInteractiveAuthFlowStartExternalUsersSelfServiceSignUp"
        "isSignUpAllowed" = $false
    }
} | ConvertTo-Json -Depth 5
 
# PATCH
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/identity/authenticationEventsFlows/$targetFlowId" `
    -Body $body `
    -ContentType "application/json"
 
# Verify the update by retrieving the flow again
$updatedFlow = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/identity/authenticationEventsFlows/$targetFlowId"
 
Write-Output "Updated: $($updatedFlow.onInteractiveAuthFlowStart.isSignUpAllowed)"
And that's it.
Sign In
Hope this helps.

Happy Coding.

Regards,
Jaliya