Saturday, February 28, 2026

Connecting Azure to MongoDB Atlas via Private Endpoints

In this post, let's see how we can set up Azure Private Endpoints to connect to a MongoDB Atlas cluster.

When we connect to a MongoDB Atlas cluster, we typically use a connection string like this:

mongodb+srv://<CLUSTER_NAME>.bzmphh.mongodb.net

This goes over the public internet. If we are on Azure and want traffic to stay private and routed through Azure's backbone network, we need a Private Endpoint.

Setting up a Private Endpoint between Azure and Atlas involves both sides:
  • Atlas
    • A Private Link Service that our Azure VNet can connect to
  • Azure
    • A Private Endpoint in our VNet's subnet that gets a private IP
    • An Azure Private DNS Zone so our apps resolve the Atlas hostname to the private IP instead of the public one

Let's walk through how to achieve this step by step using PowerShell. Note that we need the Azure CLI (az) and the Atlas CLI (atlas) for this.

$atlasProjectId = "<ATLAS_PROJECT_ID>"
$atlasClusterName = "<ATLAS_CLUSTER_NAME>"
$atlasRegion = "<ATLAS_REGION>"

$subscription = "<AZURE_SUBSCRIPTION_ID>"
$resourceGroup = "<RESOURCE_GROUP>"
$location = "<REGION>"
$vnetName = "<VNET>"
$subnetName = "<SNET_FOR_PRIVATE_ENDPOINT>"

$peName = "<PE_NAME>"
$peNicName = "<PE_NAME>_nic"
First, we need to create the Private Link Service in Atlas. This is the resource that Azure will connect to.
$peService = atlas privateEndpoints azure create `
    --projectId $atlasProjectId `
    --region $atlasRegion -o json | ConvertFrom-Json

$endpointServiceId = $peService.id
$privateLinkServiceResourceId = $peService.privateLinkServiceResourceId
Now we create the Private Endpoint in our Azure VNet. 
az account set --subscription $subscription

$pe = az network private-endpoint create `
    --resource-group $resourceGroup `
    --location $location `
    --name $peName `
    --nic-name $peNicName `
    --vnet-name $vnetName `
    --subnet $subnetName `
    --private-connection-resource-id $privateLinkServiceResourceId `
    --connection-name "$peName-connection" `
    --manual-request true | ConvertFrom-Json

$peResourceId = $pe.id
Note that the "--manual-request true" flag is required because Atlas needs to accept the connection on its side.

The Private Endpoint creates a NIC in our subnet. We need its private IP for our next step.
$pePrivateIp = az network nic show `
    --subscription $subscription `
    --resource-group $resourceGroup `
    --name $peNicName `
    --query "ipConfigurations[0].privateIPAddress" -o tsv
Now we need to ask Atlas to accept the connection.
atlas privateEndpoints azure interfaces create $endpointServiceId `
    --privateEndpointId $peResourceId `
    --privateEndpointIpAddress $pePrivateIp `
    --projectId $atlasProjectId
Now we need a Private DNS Zone so that apps inside the VNet resolve Atlas hostnames to our private IP. We can derive the DNS Zone Name from the cluster's connection string.
$cluster = atlas clusters describe $atlasClusterName `
    --projectId $atlasProjectId -o json | ConvertFrom-Json

$srvHost = $cluster.connectionStrings.standardSrv -replace "mongodb\+srv://", ""
$dnsZoneName = $srvHost.Substring($srvHost.IndexOf('.') + 1)
# Result: bzmphh.mongodb.net

az network private-dns zone create `
    --resource-group $resourceGroup `
    --name $dnsZoneName
Note that we use a specific subdomain (bzmphh.mongodb.net) rather than mongodb.net. This avoids hijacking DNS resolution for all MongoDB Atlas clusters; only our cluster's traffic goes through the private endpoint.

The DNS zone needs to be linked to our VNet so resources inside it can resolve the private records.
$vnetResourceId = az network vnet show `
    --subscription $subscription `
    --resource-group $resourceGroup `
    --name $vnetName `
    --query "id" -o tsv

az network private-dns link vnet create `
    --resource-group $resourceGroup `
    --zone-name $dnsZoneName `
    --name $vnetName `
    --virtual-network $vnetResourceId `
    --registration-enabled false
After the private endpoint is registered, Atlas generates private endpoint-specific connection strings. We can retrieve them using the following:
$connectionStrings = atlas clusters connectionStrings describe $atlasClusterName `
    --projectId $atlasProjectId `
    -o json | ConvertFrom-Json
This gives us everything we need: the PE-specific hostnames, ports, and replica set info.

Now we parse the Atlas connection strings and create the DNS records in our Private DNS Zone.
$peConnStr = $connectionStrings.privateEndpoint[0]

$srvHostFull = $peConnStr.srvConnectionString -replace "mongodb\+srv://", ""
$srvPrefix = $srvHostFull.Split('.')[0]

$connPart = ($peConnStr.connectionString -replace "mongodb://", "").Split('?')[0].TrimEnd('/')
$hostPortEntries = $connPart.Split(',')
$aRecordHostFull = $hostPortEntries[0].Split(':')[0]
$aRecordName = $aRecordHostFull -replace "\.$dnsZoneName$", ""
$ports = $hostPortEntries | ForEach-Object { $_.Split(':')[1] }

$queryParams = ($peConnStr.connectionString -split '\?')[1]
$txtValue = ($queryParams -split '&' | Where-Object {
    $_ -match "authSource|replicaSet"
}) -join '&'
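To make the parsing concrete, here's the same logic sketched in Python against a made-up private-endpoint connection string in the shape Atlas returns (the cluster name, ports, and replica set name are illustrative only):

```python
# A made-up private-endpoint connection string in the shape Atlas returns
# (hostname, ports, and replica set name are illustrative only).
conn = ("mongodb://cluster0-pl-0-0.bzmphh.mongodb.net:1024,"
        "cluster0-pl-0-0.bzmphh.mongodb.net:1025,"
        "cluster0-pl-0-0.bzmphh.mongodb.net:1026"
        "/?ssl=true&authSource=admin&replicaSet=atlas-5x58u7-shard-0")
dns_zone = "bzmphh.mongodb.net"

host_part, _, query = conn.removeprefix("mongodb://").partition("?")
entries = host_part.rstrip("/").split(",")

a_record_host = entries[0].split(":")[0]                    # full host for the A record
a_record_name = a_record_host.removesuffix("." + dns_zone)  # record-set name in the zone
ports = [e.split(":")[1] for e in entries]                  # one SRV entry per port

# Keep only the parameters the TXT record carries
txt_value = "&".join(p for p in query.split("&")
                     if p.startswith(("authSource", "replicaSet")))
```

The same pieces fall out: the record-set name for the A record, one port per SRV entry, and the TXT value.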
Now create the actual records:
# A Record
az network private-dns record-set a add-record `
    --resource-group $resourceGroup `
    --zone-name $dnsZoneName `
    --record-set-name $aRecordName `
    --ipv4-address $pePrivateIp

# SRV Records
foreach ($port in $ports) {
    az network private-dns record-set srv add-record `
        --resource-group $resourceGroup `
        --zone-name $dnsZoneName `
        --record-set-name "_mongodb._tcp.$srvPrefix" `
        --target $aRecordHostFull `
        --priority 0 --weight 0 --port $port
}

# TXT Record
az network private-dns record-set txt add-record `
    --resource-group $resourceGroup `
    --zone-name $dnsZoneName `
    --record-set-name $srvPrefix `
    --value "`"$txtValue`""
Once done, we can see something like this in our Private DNS Zone.
Private DNS Zone: Recordsets

Why Not Just an A Record?

If the private endpoint gives us a single private IP, why do we need three types of DNS records?

When our app uses a connection string like:
mongodb+srv://<CLUSTER_NAME>-pl-0.bzmphh.mongodb.net
The MongoDB driver doesn't just do a simple hostname lookup. It performs three DNS queries:
  • SRV Record
    • This tells the driver which hosts and ports to connect to. For private endpoints, the ports start at 1024 (not the standard 27017). The SRV record returns:
<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1024
<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1025
<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1026
Without SRV records, the driver wouldn't know which port to use and would default to 27017, which won't work over Private Link.
  • TXT Record
    • This provides the replica set name and auth database:
authSource=admin&replicaSet=atlas-5x58u7-shard-0
Without this, the driver wouldn't know which replica set to join or where to authenticate.
  • A Record
    • This resolves the hostname to the Private IP address of our Private Endpoint. This is what actually routes traffic through Azure Private Link instead of the public internet.
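To make that lookup sequence concrete, here's a rough Python simulation of how a driver turns the three records into a seed list and connection options. The record values are the examples from above, not live DNS answers, and the private IP is made up:

```python
# Example record values from above (simulated, not live DNS answers)
srv_answers = [("cluster0-pl-0-0.bzmphh.mongodb.net", 1024),
               ("cluster0-pl-0-0.bzmphh.mongodb.net", 1025),
               ("cluster0-pl-0-0.bzmphh.mongodb.net", 1026)]
txt_answer = "authSource=admin&replicaSet=atlas-5x58u7-shard-0"
private_ip = "10.0.1.4"  # what the A record resolves to (made up)

# 1. SRV lookup -> host:port seed list (Private Link ports, not 27017)
seeds = [f"{host}:{port}" for host, port in srv_answers]

# 2. TXT lookup -> default connection options
options = dict(kv.split("=", 1) for kv in txt_answer.split("&"))

# 3. A lookup on each seed host -> the private endpoint's IP,
#    which is what keeps the traffic on Azure's backbone
```

With our Private DNS Zone in place, all three lookups are answered privately inside the VNet.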
What If You Skip SRV and TXT?

You could technically use a "mongodb://" connection string instead of "mongodb+srv://" and hardcode everything:
mongodb://<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1024,<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1025,<CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net:1026/?authSource=admin&replicaSet=atlas-5x58u7-shard-0
But that means your application configuration now contains infrastructure details: ports, replica set names, and host entries. If anything changes on the Atlas side, you'd need to update and redeploy your app. With "mongodb+srv://", your app connection string is just "mongodb+srv://<CLUSTER_NAME>-pl-0.bzmphh.mongodb.net", which is clean and stable. If something changes, you update the DNS records (infrastructure), not the app config.

So all three are required for "mongodb+srv://" to work over Private Link.

Once everything is set up, you can run the following from inside the VNet to verify:
# Check DNS resolution
nslookup <CLUSTER_NAME>-pl-0-0.bzmphh.mongodb.net

# Test connection
mongosh 'mongodb+srv://<CLUSTER_NAME>-pl-0.bzmphh.mongodb.net' \
    --username dbadmin --password 'yourpassword'
The A record should resolve to your private IP, and mongosh should connect without going over the public internet.

Note: The domain (bzmphh.mongodb.net), -pl-x suffix, and port numbers shown here are examples. Update them to match the values for your own Atlas cluster and Private Endpoint setup.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, February 25, 2026

Connecting to Azure Managed Redis Instance from Redis Insight

Redis Insight is one of the must-have tools if you are working with Redis. It provides a nice GUI to browse keys, run commands, and monitor your Redis instance.

A few years ago, I blogged about Connecting to Azure Cache for Redis Instance from RedisInsight.

Azure Cache for Redis is scheduled for retirement on September 30, 2028. Microsoft recommends migrating existing Azure Cache for Redis instances to Azure Managed Redis well before the retirement date to ensure continued support and access to the latest features and improvements.

In this post, let's see how to connect to Azure Managed Redis instance from Redis Insight.

First, you need to configure your Azure Managed Redis instance to allow the connection. If you are connecting from your local machine, navigate to your Redis instance in the Azure Portal. Go to Networking and make sure you have Public Access enabled.

Under Authentication -> Access Keys, make sure Access Keys Authentication is enabled and copy the Access Key. It's going to be the password for the Redis Insight connection.
Connection settings: General
Enable Use TLS.
Connection settings: Security
Now test the connection.
Success
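If you also want to connect from code, the same settings map to a standard TLS (rediss://) connection URL. Here's a minimal sketch; the hostname and key are hypothetical, and port 10000 is the usual Azure Managed Redis port, so confirm both against your instance's Overview blade:

```python
from urllib.parse import quote

def managed_redis_url(host: str, access_key: str, port: int = 10000) -> str:
    """Build a TLS (rediss://) URL for an Azure Managed Redis instance.

    The access key is used as the password. Port 10000 is the usual
    Azure Managed Redis port; confirm it on your instance's Overview blade.
    """
    return f"rediss://:{quote(access_key, safe='')}@{host}:{port}"

# Hypothetical host and key, for illustration only:
url = managed_redis_url("myredis.eastus.redis.azure.net", "myAccessKey==")
# A client such as redis-py could then connect via redis.from_url(url)
```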
Hope this helps.

Happy Coding.

Regards,
Jaliya

Wednesday, February 4, 2026

Sending messages to Azure Service Bus from Azure API Management

With Azure API Management (APIM), you can now send/publish messages directly to Azure Service Bus. Note: at the time of writing this post, this feature is still in preview. I am very excited about this feature, because now we can have a simpler architecture; we don't need a separate backend code/service to send/publish messages.

This is done via the send-service-bus-message policy.

Let's have a look.

I have a Service Bus namespace and a user assigned Managed Identity. The Managed Identity has the Azure Service Bus Data Sender role assigned to the namespace. In the Service Bus namespace, I have a topic: sbt-apim

In my APIM, I have an API with an operation configured with the following policy.
<!-- Add policies as children to the <inbound>, <outbound>, <backend>, and <on-error> elements -->
<policies>
  <!-- Throttle, authorize, validate, cache, or transform the requests -->
  <inbound>
    <send-service-bus-message 
      topic-name="sbt-apim" 
      namespace="<ServiceBusNamespaceName>.servicebus.windows.net" 
      client-id="<ManagedIdentityClientId>">
      <message-properties>
        <message-property name="TenantId">@(context.Request.Headers.GetValueOrDefault("x-tenantId",""))</message-property>
      </message-properties>
      <payload>
        @(context.Request.Body.As<string>(preserveContent: true))
      </payload>
    </send-service-bus-message>
    <return-response>
      <set-status code="202" reason="Accepted" />
      <set-header name="Content-Type" exists-action="override">
        <value>application/json</value>
      </set-header>
      <set-body>{ "status": "Message Accepted" }</set-body>
    </return-response>
  </inbound>
  <!-- Control if and how the requests are forwarded to services  -->
  <backend>
    <base />
  </backend>
  <!-- Customize the responses -->
  <outbound>
    <base />
  </outbound>
  <!-- Handle exceptions and customize error responses  -->
  <on-error>
    <base />
  </on-error>
</policies>
Here I am using the send-service-bus-message policy in the inbound section to publish messages to a Service Bus topic. The client-id refers to the user assigned Managed Identity. You can also add custom message properties. Here I am passing the value of x-tenantId header as a custom property for demo purposes. Since we don't need a backend service, I am using return-response to immediately return a 202 Accepted response.
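As a mental model, the mapping the policy performs can be sketched like this in Python (this is an illustration of the mapping, not APIM's actual implementation):

```python
import json

def to_service_bus_message(headers: dict, body: str) -> dict:
    """Sketch of the mapping the policy performs: the request body becomes
    the message payload, and the x-tenantId header (empty string if missing)
    becomes a custom message property named TenantId."""
    return {
        "payload": body,
        "properties": {"TenantId": headers.get("x-tenantId", "")},
    }

message = to_service_bus_message(
    {"x-tenantId": "cfecb449-b568-4d0f-b24f-ea9762fb2bf6"},
    json.dumps({"firstName": "John", "lastName": "Doe"}),
)
```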

Let's test this.
curl 'https://<APIM_BASE_URL>/messaging/publish' `
--header 'x-tenantId: cfecb449-b568-4d0f-b24f-ea9762fb2bf6' `
--header 'Content-Type: application/json' `
--data '{
    "firstName": "John",
    "lastName": "Doe"
}'
Test cURL
And I can see a message published under the topic.
Published Message
Hope this helps.


Happy Coding.

Regards,
Jaliya

Monday, February 2, 2026

.NET Isolated Azure Functions: Dependency Injection in Durable Entities

In this post, let's have a look at how we can use Dependency Injection (DI) with Azure Durable Entities in .NET 10 Isolated Durable Functions.

Say we have the following service.
public interface ISomeService
{
    Task DoSomething();
}

public class SomeService : ISomeService
{
    public async Task DoSomething()
    {
        await Task.Delay(1000);
    }
}
And in Program.cs, this service is registered in the DI container.

Using TaskEntity<T> with Constructor Injection


The TaskEntity<T> base class from Microsoft.Azure.Functions.Worker.Extensions.DurableTask provides a clean way to implement entities with DI support and is the recommended approach.

We can have a State and Entity like the following.
public class HelloHistoryState
{
    [JsonPropertyName("messages")]
    public List<string> Messages { get; set; } = [];
}

public class HelloHistoryEntity : TaskEntity<HelloHistoryState>
{
    private readonly ISomeService _someService;

    public HelloHistoryEntity(ISomeService someService)
    {
        _someService = someService;
    }
    }

    public async Task Add(string message)
    {
        await _someService.DoSomething();

        State.Messages.Add(message);
    }

    // Async for future proofing
    public async Task Reset()
    {
        State = new HelloHistoryState();
    }

    // Async for future proofing
    public Task<HelloHistoryState> GetState() => Task.FromResult(State);

    [Function(nameof(HelloHistoryEntity))]
    public Task DispatchAsync([EntityTrigger] TaskEntityDispatcher dispatcher)
    {
        return dispatcher.DispatchAsync(this);
    }
}
In this approach:
  • The entity class inherits from TaskEntity<T> where T is the state type
  • Constructor injection works as expected
  • The State property provides access to the entity's state
  • The DispatchAsync method uses dispatcher.DispatchAsync(this) to dispatch to the entity instance
And we can call the entity as follows.
[Function(nameof(RunOrchestrator))]
public static async Task<HelloHistoryState> RunOrchestrator(
    [OrchestrationTrigger] TaskOrchestrationContext context)
{
    EntityInstanceId entityId = new(nameof(HelloHistoryEntity), "helloHistory");

    await context.Entities.CallEntityAsync(entityId,
        nameof(HelloHistoryEntity.Reset));

    string result = await context.CallActivityAsync<string>(nameof(SayHello), "Tokyo");
    await context.Entities.CallEntityAsync(entityId,
        nameof(HelloHistoryEntity.Add),
        result);

    result = await context.CallActivityAsync<string>(nameof(SayHello), "Seattle");
    await context.Entities.CallEntityAsync(entityId,
        nameof(HelloHistoryEntity.Add),
        result);

    result = await context.CallActivityAsync<string>(nameof(SayHello), "London");
    await context.Entities.CallEntityAsync(entityId,
        nameof(HelloHistoryEntity.Add),
        result);

    HelloHistoryState state =
        await context.Entities.CallEntityAsync<HelloHistoryState>(entityId, nameof(HelloHistoryEntity.GetState));

    return state;
}

[Function(nameof(SayHello))]
public static string SayHello([ActivityTrigger] string name,
    FunctionContext executionContext)
{
    return $"Hello {name}!";
}

Using TaskEntity<T> with ActivatorUtilities


Now let's look at another approach where we need to pass additional parameters that are not registered in the DI container, for example, the CancellationToken from FunctionContext.

We can use ActivatorUtilities.CreateInstance<T>() to create the entity instance. It will resolve services from DI and use any manual arguments for the rest.
public class HelloHistoryEntity : TaskEntity<HelloHistoryState>
{
    private readonly ISomeService _someService;
    private readonly CancellationToken _cancellationToken;

    // Constructor: ISomeService from DI, CancellationToken as manual parameter
    public HelloHistoryEntity(ISomeService someService, CancellationToken cancellationToken)
    {
        _someService = someService;
        _cancellationToken = cancellationToken;
    }

    public async Task Add(string message)
    {
        _cancellationToken.ThrowIfCancellationRequested();

        await _someService.DoSomething();

        State.Messages.Add(message);
    }

    // Async for future proofing
    public async Task Reset()
    {
        State = new HelloHistoryState();
    }

    // Async for future proofing
    public Task<HelloHistoryState> GetState() => Task.FromResult(State);

    [Function(nameof(HelloHistoryEntity))]
    public static Task Run([EntityTrigger] TaskEntityDispatcher dispatcher, FunctionContext functionContext)
    {
        // Create entity using ActivatorUtilities
        // Resolving services from DI using FunctionContext and also can pass any manual args
        HelloHistoryEntity entity = ActivatorUtilities.CreateInstance<HelloHistoryEntity>(
            functionContext.InstanceServices,
            functionContext.CancellationToken);

        return dispatcher.DispatchAsync(entity);
    }
}
Note that the function method is now static. When the function method is non-static (like in the previous approach), the runtime needs to create an instance of the class to call the method, so it uses DI to resolve constructor dependencies. With a static method, no instance is needed, so we manually create it using ActivatorUtilities.
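The ActivatorUtilities idea isn't .NET-specific: "satisfy what the container knows, fill the rest from manual arguments" can be sketched in a few lines of Python (a rough language-agnostic analogue, not the actual .NET implementation; the class names are hypothetical):

```python
import inspect

def create_instance(services: dict, cls, *manual_args):
    """Rough analogue of ActivatorUtilities.CreateInstance: satisfy each
    constructor parameter from the service container by annotated type,
    and fill the remaining parameters from manual_args in order."""
    manual = list(manual_args)
    args = []
    for param in inspect.signature(cls.__init__).parameters.values():
        if param.name == "self":
            continue
        if param.annotation in services:
            args.append(services[param.annotation])  # resolved from the "container"
        else:
            args.append(manual.pop(0))               # supplied manually
    return cls(*args)

class SomeService:
    pass

class Entity:
    def __init__(self, service: SomeService, token: str):
        self.service = service
        self.token = token

container = {SomeService: SomeService()}
entity = create_instance(container, Entity, "cancel-token")
```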

Hope this helps.

Happy Coding.

Regards,
Jaliya

Friday, January 9, 2026

.NET Isolated Azure Functions: Multiple Output Bindings with HTTP Functions

In this post, let's look at how we can use multiple output bindings with HTTP-triggered Azure Functions running on the isolated worker model.

When working with HTTP-triggered functions, you often need to return an HTTP response to the caller while also sending data to other services like Azure Service Bus, Azure Queue Storage, etc. This is where multiple output bindings come in handy.

HttpRequestData vs HttpRequest

With Azure Functions isolated worker model, by default we have HttpRequestData and HttpResponseData types from the Microsoft.Azure.Functions.Worker.Extensions.Http package.

However, we can also use ASP.NET Core integration via the Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore NuGet package. This enables using standard ASP.NET Core types including HttpRequest, HttpResponse, and IActionResult in HTTP triggers.

Let's see how multiple output bindings work with both approaches.

To have multiple outputs, we need to create a class that contains our output binding properties.

Using HttpRequestData

using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using System.Net;

namespace FunctionApp1;

public class HttpStartWithHttpRequestData
{
    [Function(nameof(HttpStartWithHttpRequestData))]
    public async Task<HttpStartOutputWithHttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req)
    {
        HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteStringAsync("Message Sent");

        return new HttpStartOutputWithHttpResponseData()
        {
            ServiceBusMessageContent = "Some Content",
            HttpResponse = response
        };
    }
}

public class HttpStartOutputWithHttpResponseData
{
    [ServiceBusOutput("sbt-test-topic", ServiceBusEntityType.Topic, Connection = "ServiceBusConnection")]
    public string? ServiceBusMessageContent { get; set; }

    public HttpResponseData HttpResponse { get; set; }
}

With HttpRequestData, the HttpResponseData property doesn't require any special attribute. The function runtime automatically recognizes it as the HTTP response.

Note: The ServiceBusMessageContent property is nullable. If the value isn't set (i.e., it's null), no message will be sent to the Service Bus. This allows you to conditionally send messages based on your business logic.

Using HttpRequest (ASP.NET Core Integration)

When using HttpRequest from the ASP.NET Core integration, things are slightly different. We need to use the [HttpResult] attribute to indicate which property is the HTTP response.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using System.Net;

namespace FunctionApp1;

public class HttpStartWithHttpRequest
{
    [Function(nameof(HttpStartWithHttpRequest))]
    public HttpStartOutputWithHttpResponse Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
    {
        return new HttpStartOutputWithHttpResponse()
        {
            ServiceBusMessageContent = "Some Content",
            HttpResponse = new ObjectResult("Message Sent")
            {
                StatusCode = (int)HttpStatusCode.OK
            }
        };
    }
}

public class HttpStartOutputWithHttpResponse
{
    [ServiceBusOutput("sbt-test-topic", ServiceBusEntityType.Topic, Connection = "ServiceBusConnection")]
    public string? ServiceBusMessageContent { get; set; }

    [HttpResult]
    public IActionResult HttpResponse { get; set; }
}

The [HttpResult] attribute is required here because without it, the runtime won't know which property represents the HTTP response.

Note: Similar to the previous example, the ServiceBusMessageContent property is nullable. If the value isn't set, no message will be sent to the Service Bus.

Limitations

There's a significant limitation when it comes to Service Bus output bindings in the isolated worker model. The output value has to be the message body itself; it can be a simple type (like string) or a complex type (a POCO).

You cannot use ServiceBusMessage from the Azure.Messaging.ServiceBus SDK as the output type. If you try to do so, the entire ServiceBusMessage object gets serialized as JSON, resulting in something like this:

{
  "Body": "eyJDb250ZW50IjoiU29tZSBDb250ZW50In0=",
  "MessageId": null,
  "PartitionKey": null,
  "SessionId": null,
  "TimeToLive": "10675199.02:48:05.4775807",
  "CorrelationId": null,
  "Subject": null,
  "ContentType": null,
  "ApplicationProperties": {}
}

This means you cannot set message properties like CorrelationId, SessionId, Subject, or ApplicationProperties using output bindings in the isolated worker model. If you need to set these properties, you'll have to use the ServiceBusClient directly.

This is a known limitation and has been discussed in several GitHub issues.

Hope this helps.

Happy Coding.

Regards,
Jaliya

Tuesday, January 6, 2026

Copying Azure Cosmos DB NoSQL Containers to a Database in another Account

In this post, let's see how to copy Azure Cosmos DB NoSQL containers from one database to another database in a different Azure Cosmos DB NoSQL account.

Currently within the Azure Portal itself, only Container Copy within the same account is supported.

Container Copy

There are other options like Azure Data Factory etc., but it's much easier to do this using az cosmosdb copy from the az CLI cosmosdb-preview extension.

Let's have a look.

The first step is adding the cosmosdb-preview extension.

az extension add --name cosmosdb-preview

Now I am declaring script-level variables.

$sourceSubsciptionId = "<sourceSubscriptionId>"
$sourceAccountResourceGroup = "<sourceAccountResourceGroup>"
$sourceAccountName = "<sourceAccountName>"

$destinationSubsciptionId = "<destinationSubscriptionId>"
$destinationAccountResourceGroup = "<destinationResourceGroup>"
$destinationAccountName = "<destinationAccountName>"

# ManagedIdentity of destination Cosmos DB account
$destinationUserAssignedManagedIdentityName = "<destinationUserAssignedManagedIdentityName>"

$destinationUserAssignedManagedIdentityPrincipalId = az identity show `
    --resource-group $destinationAccountResourceGroup `
    --name $destinationUserAssignedManagedIdentityName `
    --query principalId `
    --output tsv

$destinationUserAssignedManagedIdentityResourceId = az identity show `
    --resource-group $destinationAccountResourceGroup `
    --name $destinationUserAssignedManagedIdentityName `
    --query id `
    --output tsv

We can copy containers in two modes: Online copy and Offline copy. With Offline copy, we need to stop operations on the source container, and I don't want to do that. So in this post, we are going to be doing an Online copy. I am also using User-Assigned managed identities for database access.

Requirements

Source Account

  1. Enable continuous backup on source Azure Cosmos DB account.
  2. Enable All version and delete change feed mode (preview) feature on the source account.
    Features: All version and delete change feed mode (preview)
  3. Add capability: EnableOnlineContainerCopy

# Existing capabilities of your account.
$cosmosdb = az cosmosdb show `
    --resource-group $sourceAccountResourceGroup `
    --name $sourceAccountName

$capabilities = (($cosmosdb | ConvertFrom-Json).capabilities)

# Append EnableOnlineContainerCopy capability in the list of capabilities.
$capabilitiesToAdd = @()
foreach ($item in $capabilities) {
    $capabilitiesToAdd += $item.name
}
$capabilitiesToAdd += "EnableOnlineContainerCopy"

# Update Cosmos DB account
az cosmosdb update `
    --resource-group $sourceAccountResourceGroup `
    --name $sourceAccountName `
    --capabilities $capabilitiesToAdd

Verify the capabilities and ensure EnableOnlineContainerCopy is added.

Source Account Capabilities

Destination Account
  1. Currently, cross account container copy is only supported for accounts with a System-Assigned or User-Assigned default identity. So make sure that in the destination account, the default identity is set to the destination User-Assigned managed identity.
# Show the current default identity of the destination Cosmos DB account
az cosmosdb show `
    --resource-group $destinationAccountResourceGroup `
    --name $destinationAccountName `
    --query defaultIdentity

# Update default identity for the destination Cosmos DB account
az cosmosdb update `
    --resource-group $destinationAccountResourceGroup `
    --name $destinationAccountName `
    --default-identity=UserAssignedIdentity=$destinationUserAssignedManagedIdentityResourceId
Now we need to grant the destination Cosmos DB account’s managed identity read-only access to the source Cosmos DB account, so we can read the data.
az account set --subscription $sourceSubsciptionId

$roleDefinitionId = "00000000-0000-0000-0000-000000000001" # Read-Only Role Definition Id
az cosmosdb sql role assignment create `
    --resource-group $sourceAccountResourceGroup `
    --account-name $sourceAccountName `
    --role-definition-id $roleDefinitionId `
    --scope "/" `
    --principal-id $destinationUserAssignedManagedIdentityPrincipalId

Now we are all set. Next step is creating a job to copy container.

Create Container Copy Job:

az account set --subscription $destinationSubsciptionId

$jobName = "<jobName>"

$sourceDatabase = "<sourceDatabase>"
$sourceContainer = "<sourceContainer>"

$destinationDatabase = "<destinationDatabase>"
$destinationContainer = "<destinationContainer>"

az cosmosdb copy create `
    --resource-group $destinationAccountResourceGroup `
    --job-name $jobName `
    --src-account $sourceAccountName `
    --src-nosql database=$sourceDatabase container=$sourceContainer `
    --dest-account $destinationAccountName `
    --dest-nosql database=$destinationDatabase container=$destinationContainer `
    --mode Online

az cosmosdb copy create
Once it's started, we can query the job status.

Query Job Status:

az cosmosdb copy show `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName `
    --job-name $jobName

az cosmosdb copy show

When the processedCount becomes greater than or equal to the totalCount, complete the job.
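That check can be scripted. Here's a small Python sketch of the completion decision, reading the processedCount/totalCount fields mentioned above from the parsed JSON of az cosmosdb copy show (verify the exact JSON shape against your own job output):

```python
def ready_to_complete(job_status: dict) -> bool:
    """Return True once an online copy job has caught up with the source.

    Expects the parsed JSON of `az cosmosdb copy show`; reads only the
    processedCount and totalCount fields referenced above (check the exact
    JSON shape of your own job output).
    """
    processed = job_status.get("processedCount", 0)
    total = job_status.get("totalCount", 0)
    return total > 0 and processed >= total
```

You could poll az cosmosdb copy show in a loop and run az cosmosdb copy complete once this returns True.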

Complete the Job:

az cosmosdb copy complete `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName `
    --job-name $jobName

After some time, query the status again and make sure it's Completed.

az cosmosdb copy show
That's it.

Some useful commands

# List all copy jobs
az cosmosdb copy list `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName

# Pause copy job
az cosmosdb copy pause `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName `
    --job-name $jobName

# Resume copy job
az cosmosdb copy resume `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName `
    --job-name $jobName

# Cancel copy job
az cosmosdb copy cancel `
    --resource-group $destinationAccountResourceGroup `
    --account-name $destinationAccountName `
    --job-name $jobName

Hope this helps.

More read:
   Copy jobs in Azure Cosmos DB (preview)
   Create and manage container copy jobs in Azure Cosmos DB (Preview)

Happy Coding.

Regards,
Jaliya