One of the many reasons why we need CQRS

When we start building distributed systems and services, we quite often end up using some sort of messaging.

Database writes should go through a durable messaging system, coordinated with sagas or transactions.

Database reads do NOT need to go through messaging or a transaction/saga; we should always have a separate flow and separate models for queries.
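As a rough sketch of that separation (all the types here, such as IMessageBus, IOrderReadStore, and OrderView, are hypothetical placeholders, not a real library), the command path publishes through durable messaging while the query path reads its own model directly:

```csharp
using System;
using System.Threading.Tasks;

// Assumed abstractions: a durable message bus and a denormalized read store.
public interface IMessageBus { Task PublishAsync(object message); }
public interface IOrderReadStore { Task<OrderView> GetAsync(Guid orderId); }
public record OrderView(Guid OrderId, decimal Total);

public record CreateOrderCommand(Guid OrderId, decimal Total);

public class OrderCommandHandler
{
    private readonly IMessageBus _bus;
    public OrderCommandHandler(IMessageBus bus) => _bus = bus;

    // Writes go onto the bus; a saga/transaction downstream coordinates the DB write.
    public Task Handle(CreateOrderCommand command) => _bus.PublishAsync(command);
}

public class OrderQueryHandler
{
    private readonly IOrderReadStore _readStore;
    public OrderQueryHandler(IOrderReadStore readStore) => _readStore = readStore;

    // Reads bypass messaging entirely and hit the query model directly.
    public Task<OrderView> GetOrder(Guid orderId) => _readStore.GetAsync(orderId);
}
```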

Azure Storage firewall configuration with App Service.

Azure Storage and Azure Web Apps are among the most widely used services in Azure. Storage account security is implemented via client secrets, managed identities (not yet supported for Table storage), and SAS tokens.

However, if we want to secure access at the network level, we have the following options:

  1. using a private endpoint
  2. using a service endpoint
  3. using the storage account firewall

This post focuses on the firewall option, where an App Service is talking to an Azure Storage account.

By default, “Allow access from all networks” is enabled, unless you specified otherwise while creating the account. To enable the firewall, select the “Selected networks” option; Azure then shows a firewall list box where we can enter the IPs of the services that are allowed to access the storage account.

If the services calling into this storage account are in the same region, Azure ignores these settings. Yes, you read that correctly: if the caller service and the storage account are in the same region, Azure does not respect the firewall entries. The setting only takes effect when the storage account is in a different region. (We should probably ask Microsoft why.)

So if your org is fine with having the App Service and storage account in different regions, and latency is not an issue, this is the approach to take, unless you want to pay for the VNet option, which is only available from the “Standard” App Service SKU and above, at about $100 per month, excluding the storage account charges.

For the App Service and storage account firewall configuration, get the outbound IPs of the App Service and add them to the storage account’s firewall list. If any other Azure service needs access to the storage account, for example Azure Resource Manager, then its public IPs need to be whitelisted as well.

Concurrency and scaling in a scenario involving Azure Table storage + external APIs

The scenario is a background processor which does the following:

  1. A method writes a new entity into Table storage with status “unprocessed”
  2. Get unprocessed records from Table storage
  3. Call service A
  4. Call service B (if A succeeds)
  5. Update Azure Table storage with the results of A and B (irrespective of whether A and B succeeded)

A and B are idempotent, so even when we scaled the app out we did not see any issues, since Azure Table storage was not enforcing any concurrency control (last write wins).

Now the new scenario is

  1. A method writes a new entity into Table storage with status “unprocessed”
  2. Get unprocessed records from Table storage
  3. Call service A
  4. Call service B (if A succeeds)
  5. Update Azure Table storage with the results of A and B (irrespective of whether A and B succeeded)
  6. Finally send an email

Now the last-write-wins strategy is not going to work, because sending a mail is not idempotent. With only a single instance running it is still fine, but if we scale to, say, two instances, users will receive the mail twice.

Solution 1: Using a blob lease

We can create a blob and acquire a lease on it; the background processor checks the lease and only retrieves records when it can acquire the lease. You can compare this to a SQL stored procedure that fetches records under a lock, using some flag, say “IsProcessed”.

This solution works, but even though we scale out the processors, at any point in time only a single processor is actually working while the rest just wait for the lease to become available.
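A minimal sketch of that lease check using the Azure.Storage.Blobs SDK (the connection string, container name “locks”, and blob name “processor-lock” are assumptions, and the marker blob is assumed to already exist):

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

// connectionString is a placeholder for your storage connection string.
var container = new BlobContainerClient(connectionString, "locks");
var leaseClient = container.GetBlobClient("processor-lock").GetBlobLeaseClient();

try
{
    // Lease durations must be 15-60 seconds (or infinite);
    // only one processor at a time can hold the lease.
    await leaseClient.AcquireAsync(TimeSpan.FromSeconds(60));
}
catch (RequestFailedException ex) when (ex.Status == 409)
{
    return; // another processor holds the lease; retry later
}

try
{
    // ... fetch "unprocessed" records, call A and B, update the table, send the mail ...
}
finally
{
    await leaseClient.ReleaseAsync();
}
```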

Solution 2: Using a storage queue

  1. A method writes a new entity into Table storage with status “unprocessed”
  2. In the same call, after writing to Table storage, it puts a message on the storage queue
  3. The processor reads a message from the queue (instead of scanning Table storage for unprocessed records) and
  4. Calls service A
  5. Calls service B (if A succeeds)
  6. Updates Azure Table storage with the results of A and B (irrespective of whether A and B succeeded)
  7. Finally sends an email

The advantages of this over the blob approach are:

  1. All the scaled-out processors are always working
  2. No concurrency issues
  3. Only one mail is sent per record.
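A sketch of the queue-based flow using the Azure.Storage.Queues SDK (the connection string, queue name, and message payload are assumptions):

```csharp
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

// connectionString is a placeholder for your storage connection string.
var queue = new QueueClient(connectionString, "unprocessed-records");

// The writer enqueues a message right after inserting the table entity;
// "row-key-of-new-entity" is a placeholder payload identifying the record.
await queue.SendMessageAsync("row-key-of-new-entity");

// Each processor instance dequeues its own message; a dequeued message is
// invisible to other instances while it is being processed.
QueueMessage[] messages = await queue.ReceiveMessagesAsync(maxMessages: 1);
foreach (QueueMessage message in messages)
{
    // ... call A, call B, update the table entity, send the mail ...
    await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
}
```

One caveat: storage queues are at-least-once, so if a processor crashes after sending the mail but before deleting the message, the mail can still go out twice; marking the entity as processed before sending the mail is a useful extra guard.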

docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: “--env”: executable file not found in $PATH: unknown.

While running the following docker command, I was running into the error “--env”: executable file not found in $PATH: unknown. As you can see below, all I am trying to do is pass a few environment variables to the run command.

docker run 84aa8c74fbc8  --env azclientId='00000000000000' --env azclientSecret='0000000000' --env aztenantId='00000000000'  

But what the docs don’t spell out is that everything after the image name is treated as the command to run inside the container, so the image has to come after the --env options. The following run command fixed the issue.

docker run  --env azclientId='00000000000000' --env azclientSecret='0000000000' --env aztenantId='00000000000'  84aa8c74fbc8

Download files with HttpClient

The title of this post is self-explanatory, but what I want to highlight here is the significance of Path.Combine. Most of the time I see code where we concatenate the folder name and the file name, for example:

var path = "C:\\testfolder\\filename.txt";

This works as expected on Windows, but imagine moving the code to a Linux environment: this single line of code will break the app completely.

Path.Combine, like Environment.NewLine, adapts to the OS the code runs on and joins path segments with the correct directory separator: on Linux you would get /mnt/ss/filename.txt, on Windows c:\ss\filename.txt.
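A quick illustration:

```csharp
using System;
using System.IO;

// Path.Combine picks the directory separator for the current OS at runtime.
var path = Path.Combine("testfolder", "filename.txt");
Console.WriteLine(path);
// Windows: testfolder\filename.txt
// Linux:   testfolder/filename.txt
```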

 private static readonly HttpClient _httpClient = new HttpClient();

 // the URI below is a placeholder for illustration
 await DownloadFile("https://example.com/file.txt", Path.Combine(Environment.CurrentDirectory, "filename.txt"));

  private static async Task DownloadFile(string uri, string outputPath)
  {
            if (!Uri.TryCreate(uri, UriKind.Absolute, out _))
                throw new InvalidOperationException("URI is invalid.");

            var stream = await _httpClient.GetStreamAsync(uri);
            await using var fileStream = new FileStream(outputPath, FileMode.Create);
            await stream.CopyToAsync(fileStream);
  }


Azure Functions isolated host findings, and adding appsettings.json for configuration.

  1. Starting worker process failed, the operation has timed out: Visual Studio hasn’t caught up with the Functions tooling yet. To overcome this issue, install the Azure Functions Core Tools (func CLI) and run the command func start. If you are comfortable with VS Code, you will feel at home.
  2. There are a lot of great articles out there on how to get started with isolated host Azure Functions, but the most complete one that I found is the post “Azure Functions and .NET 5: Dependency Injection” by Kenichiro Nakamura on DEV Community.
  3. There is no documentation out there, at least from what I found, on how to add configuration (e.g. appsettings.json) to an isolated functions project. So let’s see how we can add this feature.
    1. Add an appsettings.json file to the root folder and set its “Copy to Output Directory” property to “Copy if newer”
    2. Update the Program.cs class to include the following:
  public static void Main()
  {
      new HostBuilder()
          .ConfigureAppConfiguration(c =>
              c.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
               .AddEnvironmentVariables())
          .ConfigureServices(services => { /* register services here */ })
          .Build()
          .Run();
  }

3. Update the Function with the following

  public class ConfigurationTest
  {
        private readonly DomainFacade _domainFacade;
        private readonly IConfiguration _configuration;

        public ConfigurationTest(DomainFacade domainFacade, IConfiguration configuration)
        {
            _domainFacade = domainFacade;
            _configuration = configuration;
        }

        [Function("ConfigurationTest")]
        public async Task Run([TimerTrigger("0 */1 * * * *")] object timerInfo, FunctionContext context)
        {
            var logger = context.GetLogger("Function1");
            logger.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}" + _configuration["CosmosDb:AccountPrimaryKey"]);

            await _domainFacade.DomainMethod("some value"); // placeholder argument
        }
  }

4. This will work locally as well as in the cloud, without tinkering with the path of the appsettings file. The example is for a timer trigger function, but it will work for any trigger.

Pagination in Cosmos DB

The following code snippet shows how to do pagination in Cosmos DB. We just need to pass a page size and a continuation token: when calling the method for the first time we pass null, and for subsequent requests we pass the continuation token returned as part of this method’s response.

 public async Task<ProductsResponse> GetAllProducts(int pageSize, string continuationToken)
 {
            var products = new List<Product>();
            var productsResponse = new ProductsResponse();
            var queryDef = new QueryDefinition("select * from Products p");

            string token = string.IsNullOrWhiteSpace(continuationToken) ? null : continuationToken;

            using FeedIterator<Product> resultSet = _container.GetItemQueryIterator<Product>(
                queryDefinition: queryDef,
                continuationToken: token,
                requestOptions: new QueryRequestOptions { MaxItemCount = pageSize });

            if (resultSet.HasMoreResults)
            {
                FeedResponse<Product> items = await resultSet.ReadNextAsync();
                foreach (Product item in items)
                {
                    products.Add(item);
                }

                productsResponse.ContinuationToken = items.ContinuationToken;
            }

            productsResponse.Products = products;
            return productsResponse;
 }
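A hypothetical caller (the repository variable is a placeholder, and the response type is assumed to expose Products and ContinuationToken as in the snippet above) would simply loop until no continuation token comes back:

```csharp
using System;

// Page through all products, 10 at a time.
string continuationToken = null;
do
{
    var page = await repository.GetAllProducts(10, continuationToken);
    foreach (var product in page.Products)
    {
        Console.WriteLine(product.Id); // assumes Product has an Id property
    }
    continuationToken = page.ContinuationToken;
} while (continuationToken != null);
```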

Azure Bicep template for Azure Cosmos DB serverless

Azure Cosmos DB serverless went GA last week, but the documentation has not caught up yet. If you are looking to provision an Azure Cosmos DB serverless account using Bicep, you can use the following snippet; the magic here is the capabilities section.

resource cosmos 'Microsoft.DocumentDB/databaseAccounts@2021-04-15' = {
  name: cosmosName
  location: location
  kind: 'GlobalDocumentDB'
  properties: {
    //enableFreeTier: true
    consistencyPolicy: {
      defaultConsistencyLevel: 'Session'
    }
    locations: [
      {
        locationName: location
        failoverPriority: 0
        isZoneRedundant: false
      }
    ]
    capabilities: [
      {
        name: 'EnableServerless'
      }
    ]
    databaseAccountOfferType: 'Standard'
  }
}

Handling Azure Databricks errors in Azure Data Factory

Issue: when an exception happens in a Databricks notebook, only a URL containing the details is sent back to Data Factory, and execution does not go to the error flow.

One of the solutions we implemented was to catch the exception in the notebook and send back a JSON string containing the following:

{
  "ExceptionDetails": "some exception happened"
}

Data Factory can then parse the JSON object, log it if it wants, and choose the appropriate flow to handle the exception.

If you want to terminate the pipeline, do something like the following, where dynamic SQL is used to raise an error:

azure data factory 2 – How do I raise an error from if activity in ADFV2? – Stack Overflow

Hope this helps.