This is the seventh part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are:
- Tutorial for building Microservice Applications with Azure Container Apps and Dapr – Part 1
- Deploy backend API Microservice to Azure Container Apps – Part 2
- Communication between Microservices in Azure Container Apps – Part 3
- Dapr Integration with Azure Container Apps – Part 4
- Azure Container Apps State Store With Dapr State Management API – Part 5
- Azure Container Apps Async Communication with Dapr Pub/Sub API – Part 6
- Azure Container Apps with Dapr Bindings Building Block – (This Post)
- Azure Container Apps Monitoring and Observability with Application Insights – Part 8
- Continuous Deployment for Azure Container Apps using GitHub Actions – Part 9
- Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps – Part 10
- Azure Container Apps Auto Scaling with KEDA – Part 11
- Azure Container Apps Volume Mounts using Azure Files – Part 12
- Integrate Health probes in Azure Container Apps – Part 13
Azure Container Apps with Dapr Bindings Building Block
In this post, we are going to extend the backend background processor service named “ACA-Processor Backend” which we created in the previous post. We will rely on Dapr input and output bindings to achieve three scenarios:
- Trigger a process on the “ACA-Processor Backend” based on a configurable interval schedule. This implements a background worker that wakes up at a regular interval, checks whether any created tasks are overdue, marks them as overdue, and stores the updated state in Azure Cosmos DB.
- Trigger a process on the “ACA-Processor Backend” based on a message sent to a specific Azure Storage Queue. This is a fictitious scenario: we will assume that this Azure Storage Queue belongs to an external system to which external clients can submit tasks, and our “ACA-Processor Backend” will be configured to trigger a certain process when a new message is received.
- From the “ACA-Processor Backend” service we will invoke an external resource that stores the content of the incoming task from the external queue as a JSON blob file on Azure Blob Storage.
Let’s take a look at the high-level architecture diagram below to understand the flow of input and output bindings in Dapr:
The source code for this tutorial is available on GitHub. You can check the demo application too.
Let’s assume that there is an external system (outside of our Tasks Tracker microservice application) that needs to integrate with our Tasks Tracker application. This external system can publish a message to an Azure Storage Queue containing information about a task that needs to be stored and maintained in our Tasks Tracker application, so our system needs to react whenever a message is added to that Azure Storage Queue.
To achieve this in a simple way and without writing a lot of plumbing code to access the Azure Storage Queue, our system will expose an event handler (aka input binding) that receives and processes the messages arriving on the storage queue. Once the processing of a message completes and we have stored the task in Cosmos DB, our system will trigger an event (aka output binding) that invokes a fictitious external service which stores the content of the message in an Azure Blob Storage container.
Note: When I started looking at the Dapr Bindings Building Block, I noticed a lot of similarities with the Pub/Sub Building Block we covered in the previous post. But remember that the Pub/Sub Building Block is meant for async communication between services within your solution, while the Bindings Building Block has a wider scope: it mainly focuses on connectivity and interoperability across different systems, disparate applications, and services outside the boundaries of your own application. For a full list of supported bindings visit this link.
Overview of Dapr Bindings Building Block
Let’s take a look at the detailed architecture diagram of the Dapr Bindings Building Block setup we are going to implement in this post to fulfill the use case we discussed earlier:
Looking at the diagram we notice the following:
- In order to receive events and data from the external resource (Azure Storage Queue), our “ACA-Processor Backend” service needs to register a public endpoint that will become the event handler.
- The binding between the external resource and our service is configured using the “Input Binding Configuration” YAML file. The Dapr sidecar of the background service reads this configuration and subscribes to the endpoint defined for the external resource; in our case, it will be a specific Azure Storage Queue.
- When a message is published to the storage queue, the input binding component running in the Dapr sidecar picks it up and triggers the event.
- The Dapr sidecar invokes the endpoint (the event handler defined in the “ACA-Processor Backend” service) configured for the binding. In our case, it will be an endpoint that can be reached by sending a POST request to http://localhost:3502/ExternalTasksProcessor/Process, and the request body will be the JSON payload of the message published to the Azure Storage Queue.
- When the event is handled in our “ACA-Processor Backend” and the business logic completes, this endpoint needs to return an HTTP 200 OK response to acknowledge that processing is complete. If the event handling did not complete or there was an error, this endpoint should return an HTTP 400 or 500 status code.
- In order to enable our “ACA-Processor Backend” service to trigger an event that invokes an external resource, we need the “Output Binding Configuration” YAML file to configure the relationship between our service and the external resource (Azure Blob Storage) and how to connect to it.
- Once the Dapr sidecar has read the binding configuration file, our service can trigger an event that invokes the output binding API on the Dapr sidecar. In our case, the event will create a new blob file containing the content of the message we read from the Azure Storage Queue.
- With this in place, our “ACA-Processor Backend” service is ready to invoke the external resource by sending a POST request to the endpoint http://localhost:3502/v1.0/bindings/externaltasksblobstore with a JSON payload similar to the one below, or we can use the Dapr client SDK to invoke this output binding and store the file in Azure Blob Storage:
123456789{"data": "{"taskName": "Health Readiness Task","taskAssignedTo": "tayseer_joudeh@hotmail.com","taskCreatedBy": "tjoudeh@bitoftech.net","taskDueDate": "2022-08-19T12:45:22.0983978Z"}","operation": "create"}
Let’s now update our Backend Background Processor project and define the input and output binding configuration files and event handlers.
To proceed with this tutorial we need to provision the fictitious external service (an Azure Storage Account) so it can start receiving messages published to a queue, and we will use the same storage account to store blob files as an external event. To do so, run the PowerShell script below to create the Azure Storage Account and list its access keys.
```powershell
$STORAGE_ACCOUNT_NAME = "<replace with unique storage name>"

az storage account create `
  --name $STORAGE_ACCOUNT_NAME `
  --resource-group $RESOURCE_GROUP `
  --location $LOCATION `
  --sku Standard_LRS `
  --kind StorageV2

# list azure storage keys
az storage account keys list -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT_NAME
```
Updating the Backend Background Processor Project
Step 1: Create an event handler (API endpoint) to respond to messages published to Azure Storage Queue
Let’s add an endpoint that will be responsible for handling the event raised when a message is published to the Azure Storage Queue; this endpoint will receive the message content published by the external service. To do so, add a new controller named “ExternalTasksProcessorController.cs” under the “Controllers” folder and use the code below:
```csharp
namespace TasksTracker.Processor.Backend.Svc.Controllers
{
    [Route("ExternalTasksProcessor")]
    [ApiController]
    public class ExternalTasksProcessorController : ControllerBase
    {
        private readonly ILogger<ExternalTasksProcessorController> _logger;
        private readonly DaprClient _daprClient;

        public ExternalTasksProcessorController(ILogger<ExternalTasksProcessorController> logger,
                                                DaprClient daprClient)
        {
            _logger = logger;
            _daprClient = daprClient;
        }

        [HttpPost("process")]
        public async Task<IActionResult> ProcesseTaskAndStore([FromBody] TaskModel taskModel)
        {
            try
            {
                _logger.LogInformation("Started processing external task message from storage queue. Task Name: '{0}'", taskModel.TaskName);

                taskModel.TaskId = Guid.NewGuid();
                taskModel.TaskCreatedOn = DateTime.UtcNow;

                //Dapr SideCar Invocation (save task to a state store)
                await _daprClient.InvokeMethodAsync(HttpMethod.Post, "tasksmanager-backend-api", $"api/tasks", taskModel);

                _logger.LogInformation("Saved external task to the state store successfully. Task name: '{0}', Task Id: '{1}'", taskModel.TaskName, taskModel.TaskId);

                //TODO: invoke the output binding and store the queue message content as a blob file in Azure Storage

                return Ok();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}
```
What we have added here is simple: we defined an action method named “ProcesseTaskAndStore” which can be reached by sending an HTTP POST request to the endpoint “ExternalTasksProcessor/Process”. This action method accepts a TaskModel in the request body as a JSON payload, which is what will be received from the external service (Azure Storage Queue). Within this action method, we store the received task in Cosmos DB using the Dapr state store API covered in this post, and then we return 200 OK to acknowledge that the message was processed successfully and should be removed from the external service's queue.
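The “TaskModel” class bound from the request body is defined in the tutorial's shared code and is not reproduced in this post. Purely for illustration, and based on the properties used in the controllers and in the queue message shown later, a hypothetical sketch of it could look like this (the namespace and exact shape are assumptions):

```csharp
namespace TasksTracker.Processor.Backend.Svc.Models
{
    // Hypothetical sketch of TaskModel; the real class in the tutorial's
    // repository may contain additional members or different defaults.
    public class TaskModel
    {
        public Guid TaskId { get; set; }
        public string TaskName { get; set; } = string.Empty;
        public string TaskCreatedBy { get; set; } = string.Empty;
        public string TaskAssignedTo { get; set; } = string.Empty;
        public DateTime TaskCreatedOn { get; set; }
        public DateTime TaskDueDate { get; set; }
    }
}
```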
Step 2: Create Dapr Input Binding Component file
Now we need to create the component configuration file which describes how our backend background processor will start handling events coming from the external service (Azure Storage Queues). To do so, add a new file named “dapr-bindings-in-storagequeue.yaml” under the folder “components” and paste the content below:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: externaltasksmanager
spec:
  type: bindings.azure.storagequeues
  version: v1
  metadata:
  - name: storageAccount
    value: "taskstracker"
  - name: storageAccessKey
    value: ""
  - name: queue
    value: "external-tasks-queue"
  - name: decodeBase64
    value: "true"
  - name: route
    value: /externaltasksprocessor/process
```
The full specification of the YAML file for the Azure Storage Queues binding can be found at this link, but let’s go over the configuration we have added here:
- The type of binding is “bindings.azure.storagequeues”.
- The name of this input binding is “externaltasksmanager”.
- We are setting the “storageAccount” name, “storageAccessKey” value, and the “queue” name. Those properties will describe how the event handler we added can connect to the external service. You can create any queue you prefer on the Azure Storage Account we created to simulate an external system.
- We are setting the “route” property to the value “/externaltasksprocessor/process”, which is the address of the endpoint we have just added, so POST requests are sent to this endpoint.
- We are setting the property “decodeBase64” to “true” as the message queued in the Azure Storage Queue is Base64 encoded.
Step 3: Create Dapr Output Binding Component file
Now we need to create the component configuration file which describes how our backend background processor will invoke the external service (Azure Blob Storage) and create a JSON file containing the content of the message received from the Azure Storage Queue. To do so, add a new file named “dapr-bindings-out-blobstorage.yaml” under the folder “components” and paste the content below:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: externaltasksblobstore
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
  - name: storageAccount
    value: "taskstracker"
  - name: storageAccessKey
    value: ""
  - name: container
    value: "externaltaskscontainer"
  - name: decodeBase64
    value: false
```
The full specification of the YAML file for the Azure Blob Storage binding can be found at this link, but let’s go over the configuration we have added here:
- The type of binding is “bindings.azure.blobstorage”.
- The name of this output binding is “externaltasksblobstore”. We will use this name when we use the Dapr SDK to trigger the output binding.
- We are setting the “storageAccount” name, “storageAccessKey” value, and the “container” name. Those properties describe how our backend background service will connect to the external service and create a blob file. We will assume that there is a container named “externaltaskscontainer” already created on the external service; all the JSON blob files we create will live under this container.
- We are setting the property “decodeBase64” to “false” as we don’t want to Base64 encode the file content; we need to store the file content as is.
Step 4: Use Dapr client SDK to invoke the output binding
Now we need to invoke the output binding using the .NET SDK. To do so, open the file named “ExternalTasksProcessorController.cs” and update the action method code as shown below:
[HttpPost("process")] public async Task<IActionResult> ProcesseTaskAndStore([FromBody] TaskModel taskModel) { try { _logger.LogInformation("Started processing external task message from storage queue. Task Name: '{0}'", taskModel.TaskName); taskModel.TaskId = Guid.NewGuid(); taskModel.TaskCreatedOn = DateTime.UtcNow; //Dapr SideCar Invocation (save task to a state store) await _daprClient.InvokeMethodAsync(HttpMethod.Post, "tasksmanager-backend-api", $"api/tasks", taskModel); _logger.LogInformation("Saved external task to the state store successfuly. Task name: '{0}', Task Id: '{1}'", taskModel.TaskName, taskModel.TaskId); //code to invoke external binding and store queue message content into blob file in auzre storage IReadOnlyDictionary<string,string> metaData = new Dictionary<string, string>() { { "blobName", $"{taskModel.TaskId}.json" }, }; await _daprClient.InvokeBindingAsync("externaltasksblobstore", "create", taskModel, metaData); _logger.LogInformation("Invoked output binding '{0}' for external task. Task name: '{1}', Task Id: '{2}'", OUTPUT_BINDING_NAME, taskModel.TaskName, taskModel.TaskId); return Ok(); } catch (Exception) { throw; } } |
Looking at the code above, you will see that we are calling the method “InvokeBindingAsync” and passing the binding name “externaltasksblobstore” defined in the configuration file, while the second parameter, “create”, is the operation we want to perform against the external blob storage. You can, for example, delete or get the content of a certain file. For a full list of supported operations on Azure Blob Storage, visit this link.
Notice how we are setting the name of the file we are storing on the external service: we need the file name to be created using the Task identifier, so all we need to do is pass the key “blobName” with the file name value in the “metaData” dictionary.
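The other operations mentioned above follow the same pattern. As a rough, hypothetical sketch (not part of the tutorial's code), reading the stored blob back from inside the same controller could look like this, using the “get” operation with the same “blobName” metadata key:

```csharp
// Hypothetical sketch: read the blob we just created back through the same
// output binding using the "get" operation. BindingRequest exposes the raw
// bytes returned by the binding, which we decode back into a string here.
var request = new Dapr.Client.BindingRequest("externaltasksblobstore", "get");
request.Metadata.Add("blobName", $"{taskModel.TaskId}.json");

var response = await _daprClient.InvokeBindingAsync(request);
var blobContent = System.Text.Encoding.UTF8.GetString(response.Data.Span);
_logger.LogInformation("Blob content read back from storage: {0}", blobContent);
```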
Step 5: Test Dapr bindings locally
Now we are ready to give it an end-to-end test on our dev machines. To do so, run the three applications together using the Run and Debug button in VS Code. You can read how we configured the three apps to run together in this post.
Open Azure Storage Explorer; if you don’t have it, you can install it from here. Log in to your Azure subscription, navigate to the storage account already created, and create a queue with the same name you used in the Dapr input binding configuration file.
The content of the message that the Azure Storage Queue expects should be as below, so try to queue a new message using the tool as shown in the image below:
{ "taskName": "Task from External System", "taskAssignedTo": "tayseer_joudeh@hotmail.com", "taskCreatedBy": "tjoudeh@bitoftech.net", "taskDueDate": "2022-08-19T12:45:22.0983978Z" } |
If all is configured successfully you should be able to see a JSON file created as a blob in the Azure Storage Container named “externaltaskscontainer” based on your configuration.
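If you prefer to queue test messages from code rather than from Azure Storage Explorer, a small, hypothetical console sketch like the one below should do the trick. It assumes the Azure.Storage.Queues NuGet package and a connection string for the storage account created earlier; the body is Base64 encoded because the input binding sets “decodeBase64” to “true”:

```csharp
using System;
using System.Text;
using System.Text.Json;
using Azure.Storage.Queues;

// Hypothetical test producer simulating the external system.
// Replace the placeholder with the connection string of your storage account.
var connectionString = "<your-storage-account-connection-string>";
var queueClient = new QueueClient(connectionString, "external-tasks-queue");
await queueClient.CreateIfNotExistsAsync();

var messageJson = JsonSerializer.Serialize(new
{
    taskName = "Task from External System",
    taskAssignedTo = "tayseer_joudeh@hotmail.com",
    taskCreatedBy = "tjoudeh@bitoftech.net",
    taskDueDate = "2022-08-19T12:45:22.0983978Z"
});

// The input binding is configured with decodeBase64: "true",
// so the message body is Base64 encoded before being queued.
await queueClient.SendMessageAsync(Convert.ToBase64String(Encoding.UTF8.GetBytes(messageJson)));
```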
Step 6: Create Input and Output Binding Component files matching Azure Container Apps Specs
Go ahead and add a new file named “containerapps-bindings-in-storagequeue.yaml” under the folder “aca-components” and paste the code below:
```yaml
componentType: bindings.azure.storagequeues
version: v1
metadata:
- name: storageAccount
  value: "taskstracker"
- name: storageAccessKey
  secretRef: storagekey
- name: queue
  value: "external-tasks-queue"
- name: decodeBase64
  value: "true"
- name: route
  value: /externaltasksprocessor/process
secrets:
- name: storagekey
  value: "<value>"
scopes:
- tasksmanager-backend-processor
```
The properties of this file match the ones used in the Dapr component-specific file; it is a component of type “bindings.azure.storagequeues”.
The only difference is that we are using “secretRef” when setting the “storageAccessKey”, and we will be setting the actual value from the Azure Portal after we add this Dapr input binding component to the Azure Container Apps environment.
Let’s add a new file named “containerapps-bindings-out-blobstorage.yaml” under the folder “aca-components” and paste the code below:
```yaml
componentType: bindings.azure.blobstorage
version: v1
metadata:
- name: storageAccount
  value: "taskstracker"
- name: storageAccessKey
  secretRef: storagekey
- name: container
  value: "externaltaskscontainer"
- name: decodeBase64
  value: "false"
- name: publicAccessLevel
  value: "none"
secrets:
- name: storagekey
  value: "<value>"
scopes:
- tasksmanager-backend-processor
```
The properties of this file match the ones used in the Dapr component-specific file; it is a component of type “bindings.azure.blobstorage”.
The only difference is that we are using “secretRef” when setting the “storageAccessKey”, and we will be setting the actual value from the Azure Portal after we add this Dapr output binding component to the Azure Container Apps environment.
With those changes in place, we are ready to rebuild the backend background processor container image, update the Azure Container Apps environment, and deploy a new revision. But first I want to add one small piece and introduce a special type of input binding: the Cron binding. So let’s do this before deploying.
Overview of Cron Input Binding
The Cron binding is a special type of input binding: it doesn’t subscribe to events coming from an external system. Instead, the Cron binding can be used to trigger application code in our service periodically based on a configurable interval. For example, if we want to trigger certain code every 4 hours to scan all the tasks in the system and mark the ones that are overdue, the Cron binding is suitable for this.
Step 1: Add Cron binding configuration
The first step in configuring the Cron binding is to add a component file that describes which code needs to be triggered and at which interval it should be triggered. To do so, add a new file named “dapr-scheduled-cron.yaml” under the folder “components” and use the code below:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ScheduledTasksManager
  namespace: default
spec:
  type: bindings.cron
  version: v1
  metadata:
  - name: schedule
    value: "@every 10m"
scopes:
- tasksmanager-backend-processor
```
What we have done here is the following:
- Added a new input binding of type “bindings.cron”.
- Provided the name “ScheduledTasksManager” for this binding. This means that an HTTP POST endpoint on the URL “/ScheduledTasksManager” should be added, as it will be invoked when the job is triggered based on the Cron interval.
- Set the interval for this Cron job to trigger every 10 minutes. For full details and the available options for setting this value, visit the Cron binding documentation.
Step 2: Add Cron binding configuration matching Azure Container Apps Specs
Now we will add a new file named “containerapps-scheduled-cron.yaml” under the folder “aca-components”; this file will be used when updating the Azure Container Apps environment to enable this binding. Use the code below:
```yaml
componentType: bindings.cron
version: v1
metadata:
- name: schedule
  value: "@every 6h"
# Application scopes
scopes:
- tasksmanager-backend-processor
```
Note that the name of the binding is not part of the file metadata; we are going to set the name of the binding to the value “ScheduledTasksManager” when we update the Azure Container Apps environment.
Step 3: Add the endpoint which will be invoked by Cron binding
As we saw in the previous steps, the Cron job configuration is very simple. We now need to add an endpoint that accepts POST requests when the Cron job is triggered. To do so, add a new controller named “ScheduledTasksManagerController.cs” under the project “TasksTracker.Processor.Backend.Svc” and use the code below:
```csharp
namespace TasksTracker.Processor.Backend.Svc.Controllers
{
    [Route("ScheduledTasksManager")]
    [ApiController]
    public class ScheduledTasksManagerController : ControllerBase
    {
        private static string STORE_NAME = "periodicjobstatestore";
        private static string WATERMARK_KEY = "PeriodicSvcWatermark";

        private readonly ILogger<ScheduledTasksManagerController> _logger;
        private readonly DaprClient _daprClient;

        public ScheduledTasksManagerController(ILogger<ScheduledTasksManagerController> logger,
                                               DaprClient daprClient)
        {
            _logger = logger;
            _daprClient = daprClient;
        }

        [HttpPost]
        public async Task CheckOverDueTasksJob()
        {
            var currentWatermark = DateTime.UtcNow;
            _logger.LogInformation($"ScheduledTasksManager::Timer Services triggered at: {currentWatermark}");

            var overdueTasksList = new List<TaskModel>();

            var waterMark = await _daprClient.GetStateAsync<DateTime>(STORE_NAME, WATERMARK_KEY);
            _logger.LogInformation($"ScheduledTasksManager::reading watermark from state store, watermark value: {waterMark}");

            var tasksList = await _daprClient.InvokeMethodAsync<List<TaskModel>>(HttpMethod.Get, "tasksmanager-backend-api", $"api/overduetasks?waterMark={waterMark}");
            _logger.LogInformation($"ScheduledTasksManager::completed query state store for tasks, retrieved tasks count: {tasksList.Count()}");

            foreach (var taskModel in tasksList)
            {
                if (currentWatermark.Date > taskModel.TaskDueDate.Date)
                {
                    overdueTasksList.Add(taskModel);
                }
            }

            if (overdueTasksList.Count > 0)
            {
                _logger.LogInformation($"ScheduledTasksManager::marking {overdueTasksList.Count()} as overdue tasks");
                await _daprClient.InvokeMethodAsync(HttpMethod.Post, "tasksmanager-backend-api", $"api/overduetasks/markoverdue", overdueTasksList);
            }

            _logger.LogInformation($"ScheduledTasksManager::storing watermark to state store, watermark value: {currentWatermark}");
            await _daprClient.SaveStateAsync(STORE_NAME, WATERMARK_KEY, currentWatermark);
        }
    }
}
```
Let’s highlight what we have added to this controller:
- A new action method named “CheckOverDueTasksJob” contains the business logic which will be triggered by the Cron job configuration at the specified interval.
- I’m using a watermark value that holds the timestamp of the last run of this job. The watermark is stored in a persistent store (in my case as a blob file) using the Dapr State Management API on Azure Blob Storage; you could use an output binding for this as well. This is the beauty of Dapr Building Blocks: they are flexible, and you can choose different ways to achieve the same thing. We will add the state management Dapr component file in the next step.
- I have added 2 methods named “GetTasksByTime” and “MarkOverdueTasks” to the interface “ITasksManager” in project “TasksTracker.TasksManager.Backend.Api”. Those methods are called via the Dapr Service Invocation Building Block; a rough sketch of their signatures is shown right after this list. You can see the complete code of the methods “GetTasksByTime” and “MarkOverdueTasks” by visiting the links.
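Purely for illustration (the exact signatures live in the tutorial's repository and may differ from what is shown here), the two additions to “ITasksManager” might look something like this:

```csharp
// Hypothetical sketch of the two members added to ITasksManager in the
// TasksTracker.TasksManager.Backend.Api project; existing members are
// omitted and the real signatures may differ.
public interface ITasksManager
{
    // Returns the tasks to evaluate, based on the supplied watermark timestamp.
    Task<List<TaskModel>> GetTasksByTime(DateTime waterMark);

    // Marks the supplied tasks as overdue and persists the change.
    Task MarkOverdueTasks(List<TaskModel> overdueTasksList);
}
```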
Step 4: Add Azure Blob Storage State Store Components to store and retrieve Watermark
Add a new file named “dapr-statestore-blobstorage-periodic.yaml” under the folder “components” and paste the code below:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: periodicjobstatestore
spec:
  type: state.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: "taskstracker"
  - name: accountKey
    value: ""
  - name: containerName
    value: "periodicjobcontainer"
```
Next, we need to add a component file that matches Azure Container Apps Specs, so add a new file named “containerapps-statestore-blobstorage-periodic.yaml” under the folder “aca-components” and paste the code below:
```yaml
componentType: state.azure.blobstorage
version: v1
metadata:
- name: accountName
  value: "taskstracker"
- name: accountKey
  secretRef: storagekey
- name: containerName
  value: "periodicjobcontainer"
secrets:
- name: storagekey
  value: ""
scopes:
- tasksmanager-backend-processor
```
What we have done here should be familiar to you by now: we have added a state store component file of type “state.azure.blobstorage”. This component file describes to our service how it will store blob files in the storage account named “taskstracker” under the container “periodicjobcontainer”. We will be setting the actual “storagekey” secret from the Azure Portal after we update the Azure Container Apps environment.
Deploy the Backend Background Processor and the Backend API Projects to Azure Container Apps
Step 1: Build the Backend Background Processor and the Backend API App images and push them to ACR
As we have done previously, we need to build and push both app images to ACR so they are ready to be deployed to Azure Container Apps. To do so, continue using the same PowerShell console and paste the code below (make sure you are in the directory “TasksTracker.ContainerApps”):
```powershell
az acr build --registry $ACR_NAME --image "tasksmanager/$BACKEND_API_NAME" --file 'TasksTracker.TasksManager.Backend.Api/Dockerfile' .

az acr build --registry $ACR_NAME --image "tasksmanager/$BACKEND_SVC_NAME" --file 'TasksTracker.Processor.Backend.Svc/Dockerfile' .
```
Step 2: Add 4 Dapr Components to Azure Container Apps Environment
We need to run the commands below to add the 4 component files we have defined in this post. To do so, use the same PowerShell console and paste the code below:
```powershell
##Input binding component for Azure Storage Queue
az containerapp env dapr-component set `
  --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
  --dapr-component-name externaltasksmanager `
  --yaml '.\aca-components\containerapps-bindings-in-storagequeue.yaml'

##Output binding component for Azure Blob Storage
az containerapp env dapr-component set `
  --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
  --dapr-component-name externaltasksblobstore `
  --yaml '.\aca-components\containerapps-bindings-out-blobstorage.yaml'

##Cron binding component
az containerapp env dapr-component set `
  --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
  --dapr-component-name scheduledtasksmanager `
  --yaml '.\aca-components\containerapps-scheduled-cron.yaml'

##State Store component for Azure Blob Storage
az containerapp env dapr-component set `
  --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
  --dapr-component-name periodicjobstatestore `
  --yaml '.\aca-components\containerapps-statestore-blobstorage-periodic.yaml'
```
Once the components are added to the Azure Container Apps environment, don’t forget to update the secret referenced by “secretRef” where it is needed from the Azure Portal. It will be similar to what we have done previously in this post.
Step 3: Deploy new revisions of the Backend API and Backend Background Processor to Azure Container Apps
As we’ve done multiple times, we need to update the Azure Container Apps hosting the Backend API & Backend Background Processor with a new revision so our code changes are available for end users. To do so, run the PowerShell script below:
```powershell
## Update Backend API App container app and create a new revision
az containerapp update `
  --name $BACKEND_API_NAME `
  --resource-group $RESOURCE_GROUP `
  --revision-suffix v20220829-1 `
  --cpu 0.25 --memory 0.5Gi `
  --min-replicas 1 `
  --max-replicas 2

## Update Backend Background Processor container app and create a new revision
az containerapp update `
  --name $BACKEND_SVC_NAME `
  --resource-group $RESOURCE_GROUP `
  --revision-suffix v20220829-1 `
  --cpu 0.25 --memory 0.5Gi `
  --min-replicas 1 `
  --max-replicas 5
```
With those changes in place and deployed, you can open the log stream of the container app hosting the “ACA-Processor Backend” from the Azure Portal and check the logs generated after queuing a message into the Azure Storage Queue as an external system; you should see logs similar to the below.
That’s it for now, this post turned out to be a bit lengthy 🙂 stay tuned and happy coding!
Hi Taiseer, I am loving this blog. It's very detailed, and as someone who is currently building a system with Azure Container Apps and Dapr I am finding it incredibly useful. One thing I have noticed from the Dapr docs is that they are very sparse when it comes to gRPC. We are using gRPC in our system, and things like input bindings and pub/sub I believe should work, but there is no documentation about how to configure the Dapr component or gRPC services to receive the things that come through the binding. If you have any experience of this I believe it would be a great addition to your blog!
Hello Sam, thanks for your message.
I totally agree with you regarding the Dapr documentation related to gRPC, both in the Dapr docs and for ACA as well. I’m planning to add a blog post covering 2 services that talk over gRPC and have some Dapr building blocks enabled, such as Pub/Sub and simple bindings; I believe this could be available by the end of next week. So stay tuned and I will let you know when I publish this post. It might not be part of this series, but it will be a dedicated post.
Hi, I can’t get the Cron Binding working. Has the syntax changed?
Hello Robson,
There should be no change in syntax. Make sure there is always 1 replica running and no scale-down rule that sets the number of replicas to zero.
https://docs.dapr.io/reference/components-reference/supported-bindings/cron/