why we need the outbox pattern and idempotent receivers

  1. Infrastructure failures, lease expiry, or exceptions: if the receiver processes a message successfully but fails to acknowledge it, most message brokers will re-enlist the message into the queue because of the at-least-once delivery model. As a result, the receiver will process the duplicate message again!
  2. Say a method contains the following logic: a) SaveToDb, b) send an OrderSaved message. What if one of them fails? We could publish an OrderSaved message even though SaveToDb failed, or SaveToDb could succeed but the OrderSaved message is never sent.

so how do we solve it ?

  1. Make all receivers idempotent: if we process a message N times, the result should be the same. Maintain a list of processed message IDs (or some message property) in a cache or a DB, and before processing a message, check whether it is listed there; if it is, don't process it.
  2. Use the outbox pattern: in a nutshell, the message to be sent is logged into a table (the outbox) in the same single datastore transaction as SaveToDb, so if SaveToDb fails the message is never sent out. If the datastore transaction succeeds, the message is then published in its own transaction, along with marking the outbox row as sent. If publishing fails, the outbox row stays marked as not sent, so it can be retried.
OutBox (particular.net)
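The core of the outbox pattern can be sketched in a few lines. This is an illustrative Python/sqlite sketch, not a production implementation; the table and function names (orders, outbox, save_order, dispatch_outbox) are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE outbox (message_id TEXT PRIMARY KEY, body TEXT, sent INTEGER DEFAULT 0)")

def save_order(order_id, total):
    # SaveToDb and the OrderSaved message are written in ONE transaction:
    # if the order insert fails, the outbox row is rolled back with it.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        conn.execute("INSERT INTO outbox (message_id, body) VALUES (?, ?)",
                     (order_id, f"OrderSaved:{order_id}"))

def dispatch_outbox(publish):
    # A separate dispatcher publishes unsent rows and marks them as sent.
    # If publish() throws, the row stays unsent and is retried later.
    for message_id, body in conn.execute(
            "SELECT message_id, body FROM outbox WHERE sent = 0").fetchall():
        publish(body)
        with conn:
            conn.execute("UPDATE outbox SET sent = 1 WHERE message_id = ?", (message_id,))

save_order("order-1", 99.0)
sent = []
dispatch_outbox(sent.append)
```

Frameworks like NServiceBus implement exactly this bookkeeping for you, plus deduplication on the receiving side.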

patterns for message ordering in azure service bus

by default asb is FIFO in nature: first in, first out. But the business-required message order is something the publishing and consuming applications have to handle themselves.

the following are patterns we can use to accomplish this.

  1. Have only a single publisher and a single subscriber, ensure the publisher always publishes the business-required messages in order, and then rely on the FIFO nature of asb to process messages in order. The issue with this approach is that we cannot scale out the consumers or the publisher.
  2. Deferred messaging: if we receive an order line item before its order, we can catch the "OrderNotFoundException" and defer the message for a few minutes.
  3. Message sessions: with sessions we can group all related messages using a sessionId; azure service bus will then deliver all related messages to a single consumer, which decides how to process them.
  4. Use a messaging framework like NServiceBus to create a saga and process the line items only when the order is already present in the db. https://particular.net/blog/you-dont-need-ordered-delivery
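The "defer and retry" idea from option 2 can be simulated without any broker at all. This is an illustrative Python sketch (the queue, message tuples, and names are invented, not an ASB API): an out-of-order line item is simply put back on the queue until its order has been processed.

```python
import collections

orders = set()           # orders we have already processed
queue = collections.deque([
    ("OrderLine", "order-1"),   # arrives out of order
    ("Order", "order-1"),
])
processed = []

while queue:
    kind, order_id = queue.popleft()
    if kind == "Order":
        orders.add(order_id)
        processed.append((kind, order_id))
    elif order_id not in orders:
        # Equivalent of catching OrderNotFoundException and deferring:
        # put the message back and try again later.
        queue.append((kind, order_id))
    else:
        processed.append((kind, order_id))
```

After the loop, `processed` holds the order before its line item, even though they arrived in the opposite sequence. In real ASB you would defer with a delay (or use the scheduled-enqueue feature) instead of immediately requeueing.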

Managed Identity for Azure Cosmos DB

the only way to assign a managed identity a data-plane role is via PowerShell or the az cli (command below). The UI role assignments, for example "Owner", are only for the management plane; for any data-plane role, use the following.

readOnlyRoleDefinitionId=""
principalId=""
az cosmosdb sql role assignment create --account-name $accountName --resource-group $resourceGroupName --scope "/" --principal-id $principalId --role-definition-id $readOnlyRoleDefinitionId

from: https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cosmos-db/how-to-setup-rbac.md#using-azure-resource-manager-templates-1

one of the main (of many) reasons why we need CQRS

When we start building distributed systems/services, we quite often end up using some sort of messaging.

Any DB writes should definitely go through a durable messaging system, with either sagas or transactions.

Any DB reads DO NOT need to go through messaging or a transaction/saga; we should always have a separate flow and separate models for queries.

Azure storage firewall configuration with app service.

Azure storage and Azure web apps are among the most widely used services in azure. Storage account security is implemented using client secrets, managed identity (not yet supported for storage tables) and SAS tokens.

However, if we want security at the network level, we have the following options:

  1. using a private endpoint
  2. using a service endpoint
  3. using the storage account firewall

This blog focuses on the firewall option, with an app service talking to an azure storage account.

by default "Allow access from all networks" is enabled, unless you specified otherwise while creating the account. To enable the firewall, select the "Selected networks" option; azure then shows a firewall list box where we can enter the IPs of the services that should be able to access the storage account.

If the services calling into this storage account are in the same region, azure ignores these settings. Yep, you read that correctly: if the caller service and the azure storage account are in the same region, azure does not respect the firewall entries. The setting only works if the storage account is in a different region. (We should probably ask MS why.)

So if your org is fine with having the app service and the storage account in different regions, and latency is not an issue, this is the approach to take, unless you want to PAY for the VNet option, which is only available from the "Standard" app service sku and above, at about $100 per month, excluding the storage account charges.

For the app service and storage account firewall configuration, get the outbound IPs of the app service and add them to the storage account's firewall list. If any other "Azure" service needs access to the storage account, for example Azure Resource Manager, then its public IPs need to be whitelisted as well.

Concurrency and scaling in a scenario involving Azure Table storage + external APIs

The scenario is a background processor which does the following :

  1. A method writes a new entity into table storage with status "unprocessed"
  2. Get unprocessed records from table storage
  3. Call service A
  4. Call service B (if A succeeds)
  5. Update Azure Table storage with the results of A and B (irrespective of whether A and B succeeded)

A and B are idempotent, so even when we scaled the app we did not see any issues, as the Azure Table storage updates were not using any concurrency control (last write wins).

Now the new scenario is

  1. A method writes a new entity into table storage with status "unprocessed"
  2. Get unprocessed records from table storage
  3. Call service A
  4. Call service B (if A succeeds)
  5. Update Azure Table storage with the results of A and B (irrespective of whether A and B succeeded)
  6. Finally, send an email

Now the last-write-wins strategy is not going to work, because sending a mail is not idempotent. With only a single instance running it is still fine, but if we scale to, say, 2 instances, users will receive mails twice.

Solution 1: using a blob lease

We can create a blob and acquire a lease on it, which the background processor checks, retrieving records only when it holds the lease. You can correlate this with a SQL stored procedure that fetches records under a lock, using some flag, say "IsProcessed".

This solution works, but even though we scale out the processors, at any point in time only a single processor would be working; the rest would just be waiting for the lease to become available.
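The drawback is easy to demonstrate. In this Python sketch the blob lease is modeled as a non-blocking lock (an illustrative simulation, not the Azure SDK): four "processors" run concurrently, but the peak number doing actual work at any instant is one.

```python
import threading, time

lease = threading.Lock()   # stands in for the blob lease
active, peak = 0, 0

def processor():
    global active, peak
    for _ in range(5):
        if lease.acquire(blocking=False):   # try to take the "lease"
            try:
                active += 1
                peak = max(peak, active)    # processors working at once
                time.sleep(0.005)           # fetch + process records
                active -= 1
            finally:
                lease.release()
        else:
            time.sleep(0.005)               # lease taken: just wait

threads = [threading.Thread(target=processor) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # → 1: the other processors only ever waited
```

Scaling out buys you failover here, not throughput.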

Solution 2: using a storage queue

  1. A method writes a new entity into table storage with status "unprocessed"
  2. In the same call, after writing to table storage, put a message on the storage queue
  3. A processor reads a message from the queue and
  4. Calls service A
  5. Calls service B (if A succeeds)
  6. Updates Azure Table storage with the results of A and B (irrespective of whether A and B succeeded)
  7. Finally, sends an email

The advantages of this over the blob approach are:

  1. All the scaled-out processors are always working
  2. No concurrency issues
  3. Only one mail is sent per record.
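The flow can be sketched with Python's stdlib queue standing in for the storage queue (an illustrative simulation, not the Azure SDK; record IDs and names are invented). Each message is delivered to exactly one of the scaled-out processors, so the non-idempotent email goes out once per record:

```python
import queue, threading

work = queue.Queue()       # stands in for the storage queue
emails_sent = []
emails_lock = threading.Lock()

for record_id in ["rec-1", "rec-2", "rec-3"]:
    work.put(record_id)    # step 2: enqueue after writing the entity

def processor():
    while True:
        try:
            record_id = work.get_nowait()   # each message goes to ONE processor
        except queue.Empty:
            return
        # steps 4-6 would go here: call service A, then B, update table storage
        with emails_lock:
            emails_sent.append(record_id)   # step 7: send the email, exactly once
        work.task_done()

threads = [threading.Thread(target=processor) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
```

With a real storage queue you get the same property via message visibility timeouts: a message is invisible to other consumers while one of them processes it, and is deleted once processing succeeds.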

docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "--env": executable file not found in $PATH: unknown.

while running the following docker command, I was running into the issue '"--env": executable file not found in $PATH: unknown'. As you see below, all I am trying to do is pass a few environment variables to the run command.

docker run 84aa8c74fbc8  --env azclientId='00000000000000' --env azclientSecret='0000000000' --env aztenantId='00000000000'  

but the gotcha is that the options must come before the image name (docker run [OPTIONS] IMAGE); anything after the image is treated as the command to run inside the container. So the following run command fixed the issue.

docker run  --env azclientId='00000000000000' --env azclientSecret='0000000000' --env aztenantId='00000000000'  84aa8c74fbc8

download files with httpclient

the title of the blog post is self-explanatory, but what I want to highlight here is the significance of Path.Combine. Most of the time I see code where we concatenate the folder name and the file name, for example:

var path = "C:\\testfolder\\filename.txt";

this works as expected, but imagine moving the code to a linux environment: this small line of code will break the app completely.

Path.Combine, like Environment.NewLine, builds the path for your respective OS; on linux it would be /mnt/ss/filename.txt, on windows it would be c:\ss\filename.txt.

 await DownloadFile("http://sfasfsa.com/safdas/main.txt", Path.Combine(Environment.CurrentDirectory, "filename.txt"));

  private static async Task DownloadFile(string uri, string outputPath)
  {
      if (!Uri.TryCreate(uri, UriKind.Absolute, out _))
          throw new InvalidOperationException("URI is invalid.");

      var stream = await _httpClient.GetStreamAsync(uri);
      await using var fileStream = new FileStream(outputPath, FileMode.Create);
      await stream.CopyToAsync(fileStream);
  }
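For comparison, the same idea exists in every language; in Python, os.path.join plays the role of Path.Combine, picking the separator for the current OS instead of hard-coding "\\" or "/" (illustrative sketch; the file name is made up):

```python
import os

hardcoded = "C:\\testfolder\\filename.txt"          # breaks outside Windows
portable = os.path.join(os.getcwd(), "filename.txt")  # correct on any OS

# os.path.join inserts os.sep ("\\" on Windows, "/" on linux) between parts
print(portable.endswith(os.sep + "filename.txt"))   # → True on any OS
```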


Azure functions isolated host findings and adding appsettings.json for configuration.

  1. "Starting worker process failed, the operation has timed out": Visual Studio hasn't caught up with the function tooling yet. To overcome this issue, install the func cli and run the command func start; if you are comfortable with VS Code, you will be at home.
  2. There are a lot of great articles out there on how to get started with the azure isolated host functions, but the most complete one I found is the post "Azure Functions and .NET 5: Dependency Injection" by Kenichiro Nakamura on DEV Community.
  3. There is no documentation out there, at least from what I found, on how to add configuration (e.g. appsettings.json) to the isolated functions project. So let's see how we can add this feature.
    1. Add an appsettings.json file to the root folder and change the file's "Copy to Output Directory" property to copy
    2. Update the Program.cs class to include the following
  public static void Main()
  {
      var host = new HostBuilder()
          .ConfigureAppConfiguration(e =>
              e.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
               .AddEnvironmentVariables())
          .ConfigureServices(services => { })
          .Build();

      host.Run();
  }

    3. Update the function with the following

  public class ConfigurationTest
  {
      private readonly DomainFacade _domainFacade;
      private readonly IConfiguration _configuration;

      public ConfigurationTest(DomainFacade domainFacade, IConfiguration configuration)
      {
          _domainFacade = domainFacade;
          _configuration = configuration;
      }

      public async Task Run([TimerTrigger("0 */1 * * * *")] FunctionContext context)
      {
          var logger = context.GetLogger("Function1");
          logger.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}" + _configuration["CosmosDb:AccountPrimaryKey"]);

          await _domainFacade.DomainMethod(someValue); // someValue: a string defined elsewhere
      }
  }

    4. And this will work locally as well as in the cloud, without tinkering with the path of the appsettings file. The example is for a timer-trigger function, but this will work for any trigger.