This is the tenth part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are:
- Tutorial for building Microservice Applications with Azure Container Apps and Dapr – Part 1
- Deploy backend API Microservice to Azure Container Apps – Part 2
- Communication between Microservices in Azure Container Apps – Part 3
- Dapr Integration with Azure Container Apps – Part 4
- Azure Container Apps State Store With Dapr State Management API – Part 5
- Azure Container Apps Async Communication with Dapr Pub/Sub API – Part 6
- Azure Container Apps with Dapr Bindings Building Block – Part 7
- Azure Container Apps Monitoring and Observability with Application Insights – Part 8
- Continuous Deployment for Azure Container Apps using GitHub Actions – Part 9
- Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps – (This Post)
- Azure Container Apps Auto Scaling with KEDA – Part 11
- Azure Container Apps Volume Mounts using Azure Files – Part 12
- Integrate Health probes in Azure Container Apps – Part 13
Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps
In the previous post, we used GitHub Actions to continuously build and deploy the three Azure Container Apps after any code commit to a specific branch. In this post, we will define the process to automate infrastructure provisioning by creating the scripts/templates that provision the resources. This practice is known as IaC (Infrastructure as Code).
Once we have this in place, IaC deployments will benefit us in key ways such as:
- Increase confidence in deployments, reduce human error in resource provisioning, and ensure consistent results.
- Avoid configuration drift: IaC is idempotent, which means it provides the same result each time it’s run.
- Provision new environments quickly: during the lifecycle of the application you might need to run penetration testing or load testing for a short period of time in a totally isolated environment. With IaC in place, recreating an environment identical to production is a matter of executing the scripts.
- When you provision resources from the Azure portal, many steps are abstracted away. In our case, think of creating an Azure Container Apps environment from the portal: behind the scenes it creates a Log Analytics workspace and associates it with the environment. IaC helps you build a better understanding of how Azure works and how to troubleshoot issues that might arise.
ARM Templates in Azure
ARM templates are files that define the infrastructure and configuration for your deployment. The templates use declarative syntax, which lets you state what you intend to deploy without writing the sequence of programming commands to create it.
Within Azure there are two ways to author IaC: JSON ARM templates or Bicep (a domain-specific language). From my personal experience, I’ve used JSON ARM templates in real-world scenarios, and they tend to be complex to manage and maintain, especially as the project grows and the number of components and dependencies increases.
This is my first time using Bicep; so far I have found it easier to work with and more productive than ARM templates. It is worth mentioning that Bicep code is eventually compiled into ARM templates, a process called “transpilation”.
If you want to learn more about Bicep, I highly recommend checking the Microsoft Learn module Fundamentals of Bicep and the post Getting started with Azure Bicep by Tobias Zimmergren.
The source code for this tutorial is available on GitHub. You can check the demo application too.
Creating the Infrastructure as Code using Bicep
Let’s get started by defining the Bicep modules needed to create the infrastructure code. What we want to achieve by the end of this post is a new resource group containing all the resources and configuration (connection strings, secrets, environment variables, Dapr components, etc.) we used to build our solution. The new resource group should contain the resources shown below.
Step 1: Add the needed extension to VS Code or Visual Studio
If you are using VS Code, you need to install an extension named “Bicep”. If you are using Visual Studio, the product group announced the release of a Bicep extension for Visual Studio version 17.3 and higher, which you can get from here. Both extensions simplify authoring Bicep files by offering IntelliSense, validation, a listing of all available resource types, etc.
Step 2: Define an Azure Log Analytics Workspace
Add a new folder named “deploy” in the root project directory, then add a new file named “logAnalyticsWorkspace.bicep” with the content below:
param logAnalyticsWorkspaceName string
param location string = resourceGroup().location

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: logAnalyticsWorkspaceName
  location: location
  properties: any({
    retentionInDays: 30
    features: {
      searchVersion: 1
    }
    sku: {
      name: 'PerGB2018'
    }
  })
}

output workspaceResourceId string = logAnalyticsWorkspace.id
output logAnalyticsWorkspaceCustomerId string = logAnalyticsWorkspace.properties.customerId

//Do not use outputs to return secrets, we will use a different way
//var sharedKey = listKeys(logAnalyticsWorkspace.id, logAnalyticsWorkspace.apiVersion).primarySharedKey
//output logAnalyticsWorkspacePrimarySharedKey string = sharedKey
This module takes two input parameters, “logAnalyticsWorkspaceName” and “location”. The location defaults to the containing resource group’s location; Bicep has a function named resourceGroup() from which you can read the location, yet you can override this default by providing a location when calling the module.
This module has two outputs: “workspaceResourceId” and “logAnalyticsWorkspaceCustomerId”. Both outputs are needed as inputs for the Azure Container Apps environment and the Application Insights resource, which we will provision next.
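To make the usage concrete, here is a minimal sketch (not the exact code from the final main.bicep; the deployment and resource names are illustrative) of how this module could be consumed from a parent Bicep file and how its outputs flow onward:

// Hypothetical snippet from a parent file (e.g. main.bicep); names are illustrative.
module logAnalyticsWorkspaceModule 'logAnalyticsWorkspace.bicep' = {
  name: 'logAnalyticsWorkspaceDeployment'
  params: {
    logAnalyticsWorkspaceName: 'logs-tasksmanager'
    // location is omitted on purpose, so it falls back to resourceGroup().location
  }
}

// The module outputs can then be passed as inputs to the modules that need them.
var workspaceResourceId = logAnalyticsWorkspaceModule.outputs.workspaceResourceId
var workspaceCustomerId = logAnalyticsWorkspaceModule.outputs.logAnalyticsWorkspaceCustomerId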
Important note: You can see that I left commented code showing how to return the secret/shared key using outputs. This was the initial approach I followed, but it turned out not to be secure, as those outputs are available at the deployment level (in the resource group’s Deployments section) in the Azure portal in plain text, so anyone with access to the resource group can see those keys and secrets.
I will be using a different approach to return the secrets from modules. Keep reading to see how. You can read more about the two possible approaches in the official documentation.
In my personal opinion, I wish there were a “@secure” decorator we could put on output parameters to keep things modular. There is an open GitHub issue, and hopefully the product group will add it.
Step 3: Define an Azure Application Insights resource
Add a new file named “appInsights.bicep” under the folder “deploy” and use the content below:
param location string = resourceGroup().location
param workspaceResourceId string
param appInsightsName string

resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
  name: appInsightsName
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    WorkspaceResourceId: workspaceResourceId
  }
}

//Do not use output params to pass keys for other resources
//output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
This module accepts three parameters: “location”, “workspaceResourceId”, and “appInsightsName”. We associate the Log Analytics workspace with this Application Insights resource by setting the property “WorkspaceResourceId”.
Step 4: Define an Azure Container Apps Environment resource
Add a new file named “acaEnvironment.bicep” under the folder “deploy” and use the content below:
param acaEnvironmentName string
param location string = resourceGroup().location
@secure()
param instrumentationKey string
param logAnalyticsWorkspaceCustomerId string
@secure()
param logAnalyticsWorkspacePrimarySharedKey string

resource environment 'Microsoft.App/managedEnvironments@2022-03-01' = {
  name: acaEnvironmentName
  location: location
  properties: {
    daprAIInstrumentationKey: instrumentationKey
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: logAnalyticsWorkspaceCustomerId
        sharedKey: logAnalyticsWorkspacePrimarySharedKey
      }
    }
  }
}

output acaEnvironmentId string = environment.id
This module accepts five input parameters, mainly those used to associate the Log Analytics workspace with the environment and to set the Application Insights key for Dapr distributed tracing and telemetry. It outputs the Container Apps environment ID, which will be used when defining the three Azure Container Apps. More about the “@secure” attribute below.
Step 5: Define Azure CosmosDB resources
Next we will define the supporting resources needed for our solution, starting with the module that provisions the Azure Cosmos DB account, database, and container. To do so, add a new file named “cosmosdb.bicep” under the folder “deploy” and use the content below:
@description('Cosmos DB account name, max length 44 characters, lowercase')
param accountName string

@description('Location for the Cosmos DB account.')
param location string = resourceGroup().location

@description('The primary replica region for the Cosmos DB account.')
param primaryRegion string

@description('The default consistency level of the Cosmos DB account.')
@allowed([
  'Eventual'
  'ConsistentPrefix'
  'Session'
  'BoundedStaleness'
  'Strong'
])
param defaultConsistencyLevel string = 'Session'

@description('The name for the database')
param databaseName string = 'tasksmanagerdb'

@description('The name for the container')
param containerName string = 'taskscollection'

var accountName_var = toLower(accountName)
var consistencyPolicy = {
  Eventual: {
    defaultConsistencyLevel: 'Eventual'
  }
  ConsistentPrefix: {
    defaultConsistencyLevel: 'ConsistentPrefix'
  }
  Session: {
    defaultConsistencyLevel: 'Session'
  }
  BoundedStaleness: {
    defaultConsistencyLevel: 'BoundedStaleness'
    maxStalenessPrefix: 100000
    maxIntervalInSeconds: 300
  }
  Strong: {
    defaultConsistencyLevel: 'Strong'
  }
}
var locations = [
  {
    locationName: primaryRegion
    failoverPriority: 0
    isZoneRedundant: false
  }
]

resource databaseAccount 'Microsoft.DocumentDB/databaseAccounts@2021-01-15' = {
  name: accountName_var
  kind: 'GlobalDocumentDB'
  location: location
  properties: {
    consistencyPolicy: consistencyPolicy[defaultConsistencyLevel]
    locations: locations
    databaseAccountOfferType: 'Standard'
  }
}

resource accountName_databaseName 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases@2021-01-15' = {
  parent: databaseAccount
  name: databaseName
  properties: {
    resource: {
      id: databaseName
    }
  }
}

resource accountName_databaseName_containerName 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers@2021-01-15' = {
  parent: accountName_databaseName
  name: containerName
  properties: {
    resource: {
      id: containerName
      partitionKey: {
        paths: [
          '/partitionKey'
        ]
        kind: 'Hash'
      }
    }
    options: {
      autoscaleSettings: {
        maxThroughput: 4000
      }
    }
  }
}

output documentEndpoint string = databaseAccount.properties.documentEndpoint

// Do not use output param for returning cosmos db master key
//var primaryMasterKeyValue = listKeys(accountName_resource.id, accountName_resource.apiVersion).primaryMasterKey
//output primaryMasterKey string = primaryMasterKeyValue
In this file, we are creating the Azure Cosmos DB account, database, and collection. I’m defaulting the database and collection names to the same names we’ve previously used in the tutorial, but we are going to use a different Cosmos DB account. For full details on the Cosmos DB Bicep module parameters, you can check here.
Step 6: Define Azure Service Bus resource
Add a new file named “serviceBus.bicep” under the folder “deploy” and use the content below:
@description('The location where we will deploy our resources to. Default is the location of the resource group')
param location string = resourceGroup().location

@description('The name of the service bus namespace')
param serviceBusName string

var topicName = 'tasksavedtopic'

resource serviceBus 'Microsoft.ServiceBus/namespaces@2021-11-01' = {
  name: serviceBusName
  location: location
  sku: {
    name: 'Standard'
  }
}

resource topic 'Microsoft.ServiceBus/namespaces/topics@2021-11-01' = {
  name: topicName
  parent: serviceBus
}

//var listKeysEndpoint = '${serviceBus.id}/AuthorizationRules/RootManageSharedAccessKey'
//var sharedAccessKey = '${listKeys(listKeysEndpoint, serviceBus.apiVersion).primaryKey}'
//var connectionStringValue = 'Endpoint=sb://${serviceBus.name}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=${sharedAccessKey}'
//output connectionString string = connectionStringValue
This module provisions the Azure Service Bus namespace and a topic named “tasksavedtopic” by default. The Service Bus connection string will be retrieved in a different way, not via outputs as in the commented code.
Step 7: Define Azure Storage Account resource
Add a new file named “storageAccount.bicep” under the folder “deploy” and use the content below:
param storageAccountName string
param location string = resourceGroup().location

var externalTasksQueueName = 'external-tasks-queue'

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

resource storageQueues 'Microsoft.Storage/storageAccounts/queueServices@2021-09-01' = {
  name: 'default'
  parent: storageAccount
}

resource external_queue 'Microsoft.Storage/storageAccounts/queueServices/queues@2021-09-01' = {
  name: externalTasksQueueName
  parent: storageQueues
}

//var storageAccountKeyValue = storageAccount.listKeys().keys[0].value
//output storageAccountKey string = storageAccountKeyValue
This module will be needed to provision an Azure Storage Account and a queue named “external-tasks-queue”.
Step 8: Define Azure Container Apps module
Now we will create a reusable Bicep module that will be used to define the three Azure Container Apps in our solution. Add a new file named “containerApp.bicep” under the folder “deploy” and use the content below:
param containerAppName string
param location string = resourceGroup().location
param environmentId string
param containerImage string
param targetPort int
param isExternalIngress bool
param containerRegistry string
param containerRegistryUsername string
param isPrivateRegistry bool
param enableIngress bool
param registryPassName string
param minReplicas int = 0
param maxReplicas int = 1
@secure()
param secListObj object
param envList array = []
param revisionMode string = 'Single'
param useProbes bool = false

resource containerApp 'Microsoft.App/containerApps@2022-03-01' = {
  name: containerAppName
  location: location
  properties: {
    managedEnvironmentId: environmentId
    configuration: {
      activeRevisionsMode: revisionMode
      secrets: secListObj.secArray
      registries: isPrivateRegistry ? [
        {
          server: containerRegistry
          username: containerRegistryUsername
          passwordSecretRef: registryPassName
        }
      ] : null
      ingress: enableIngress ? {
        external: isExternalIngress
        targetPort: targetPort
        transport: 'auto'
        traffic: [
          {
            latestRevision: true
            weight: 100
          }
        ]
      } : null
      dapr: {
        enabled: true
        appPort: targetPort
        appId: containerAppName
        appProtocol: 'http'
      }
    }
    template: {
      containers: [
        {
          image: containerImage
          name: containerAppName
          env: envList
          probes: useProbes ? [
            {
              type: 'Readiness'
              httpGet: {
                port: 80
                path: '/api/health/readiness'
                scheme: 'HTTP'
              }
              periodSeconds: 240
              timeoutSeconds: 5
              initialDelaySeconds: 5
              successThreshold: 1
              failureThreshold: 3
            }
          ] : null
        }
      ]
      scale: {
        minReplicas: minReplicas
        maxReplicas: maxReplicas
      }
    }
  }
}

output fqdn string = enableIngress ? containerApp.properties.configuration.ingress.fqdn : 'Ingress not enabled'
The Azure Container Apps module is reusable and accepts input parameters for almost all Azure Container Apps properties; we will use it to provision the three container apps. The module’s output parameter is the fully qualified domain name (FQDN) of the container app when the “enableIngress” parameter is set to true.
Notice how conditional boolean parameters such as “useProbes” and “enableIngress” control how the resource is provisioned based on their values.
The parameter “envList” is an array of environment variable entries; we will see how we pass those values next.
The parameter “secListObj” is an object decorated with the “@secure” attribute. This attribute can be applied to string and object parameters that might contain secret values; when it is used, Azure won’t expose the parameter values in the deployment logs nor in the terminal if you are using the Azure CLI. A sketch of the shape of both parameters follows below.
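Since the actual wiring happens in main.bicep in the next step, here is only a rough, hypothetical sketch (all names and values below are illustrative, not the exact ones used in the repo) of the shape that “envList” and “secListObj” are expected to have when calling this module:

// Hypothetical sketch: the shape of envList and secListObj when calling containerApp.bicep.
// All names and values are illustrative, not the actual ones used in main.bicep.
@secure()
param containerRegistryPassword string
@secure()
param appInsightsInstrumentationKey string
param containerRegistry string
param containerRegistryUsername string
param containerImage string
param acaEnvironmentId string

var envVars = [
  {
    name: 'ApplicationInsights__InstrumentationKey'
    secretRef: 'appinsights-key'
  }
  {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
  }
]

// Secrets are wrapped in an object exposing a "secArray" array,
// because @secure() only applies to string and object parameters.
var secretsObject = {
  secArray: [
    {
      name: 'appinsights-key'
      value: appInsightsInstrumentationKey
    }
    {
      name: 'registry-password'
      value: containerRegistryPassword
    }
  ]
}

module backendApiApp 'containerApp.bicep' = {
  name: 'backendApiAppDeployment'
  params: {
    containerAppName: 'tasksmanager-backend-api'
    environmentId: acaEnvironmentId
    containerImage: containerImage
    targetPort: 80
    isExternalIngress: false
    enableIngress: true
    isPrivateRegistry: true
    containerRegistry: containerRegistry
    containerRegistryUsername: containerRegistryUsername
    registryPassName: 'registry-password'
    envList: envVars
    secListObj: secretsObject
  }
}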
Step 9: Define the Main module for the solution
Lastly, we need to define the main Bicep module, which links all the other modules together. This is the file referenced from the Azure CLI command or the GitHub Actions workflow when creating all the resources. Add a new file named “main.bicep” under the folder “deploy”. I will not list the file content here as it is more than 500 lines, but I will go over it step by step; you can always get the “main.bicep” file from the GitHub repo.
What we have added to the file is the following:
- We have defined many parameters to make the deployment flexible, but most of them have default values, so there is no need to provide values for those. The only mandatory parameters are “containerRegistryPassword” and “sendGridApiKey”.
- We used the “@secure” attribute on the parameters “containerRegistryPassword” and “sendGridApiKey”, as these are secrets and should not be logged or visible when typed into the Azure CLI.
- We are calling the modules defined earlier (“cosmosdb.bicep”, “serviceBus.bicep”, “storageAccount.bicep”, “logAnalyticsWorkspace.bicep”, and “appInsights.bicep”) by bundling them into a single module named “primaryResources.bicep”.
- We want to get the secrets, keys, and connection strings from the resources without using output parameters, so we use the existing keyword to obtain a strongly typed reference to the already-created resource, then use the listKeys() function to obtain the key or connection string. For example, we get a reference to the Application Insights resource by providing its name as input, then use the resource named “appInsightsResource” to read the instrumentation key via “appInsightsResource.properties.InstrumentationKey” (see the sketch after this list).
- Looking at how we are creating the Azure Container App “Backend API App” using the module “containerApp.bicep”, we notice the following:
- We set the “dependsOn” array to help Bicep understand the dependencies between components; this would not be needed if we were not using the “existing” keyword.
- We pass the various mandatory parameters for Azure Container Apps; the value of “environmentId” comes from the output parameter defined in the ACA environment module.
- We pass the information needed to access the ACR so each container app can pull the right image. The registry password will be passed from the Azure CLI.
- We pass the parameter “secListObj” by adding the secrets to an array named “secArray” wrapped inside an object; we did this because the @secure attribute is only applicable to strings and objects, so wrapping the array in an object is a workaround. Thanks Alex for the hint.
- Looking at one of the Dapr components, for example “stateStoreDaprComponent”, you will notice the following:
- The component name is prefixed with the ACA environment name, “${environmentName}/statestore”, which expresses the parent/child relation in Bicep.
- The “secrets”, “metadata”, and “scopes” sections of this component are set in a way that is consistent across all the other Dapr components (see the sketch after this list).
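To make these last two points more concrete, here is a rough, hypothetical sketch (the real main.bicep in the GitHub repo differs in names and contains more components) of how an already-provisioned resource can be referenced with the existing keyword and how a Dapr component such as the Cosmos DB state store can be declared under the ACA environment:

// Hypothetical sketch only; parameter names, resource names, and metadata values are illustrative.
param environmentName string
param appInsightsName string
param cosmosDbAccountName string

// 'existing' gives a strongly typed reference to the already-created resources,
// so keys can be read here instead of being exposed as module outputs.
resource appInsightsResource 'Microsoft.Insights/components@2020-02-02' existing = {
  name: appInsightsName
}

resource cosmosDbAccount 'Microsoft.DocumentDB/databaseAccounts@2021-01-15' existing = {
  name: cosmosDbAccountName
}

// Can be fed into acaEnvironment.bicep as the 'instrumentationKey' parameter.
var appInsightsInstrumentationKey = appInsightsResource.properties.InstrumentationKey
var cosmosDbPrimaryKey = listKeys(cosmosDbAccount.id, cosmosDbAccount.apiVersion).primaryMasterKey

// A Dapr component is declared as a child of the managed environment,
// hence the '${environmentName}/statestore' (parent/child) naming.
resource stateStoreDaprComponent 'Microsoft.App/managedEnvironments/daprComponents@2022-03-01' = {
  name: '${environmentName}/statestore'
  properties: {
    componentType: 'state.azure.cosmosdb'
    version: 'v1'
    secrets: [
      {
        name: 'cosmoskey'
        value: cosmosDbPrimaryKey
      }
    ]
    metadata: [
      {
        name: 'url'
        value: cosmosDbAccount.properties.documentEndpoint
      }
      {
        name: 'database'
        value: 'tasksmanagerdb'
      }
      {
        name: 'collection'
        value: 'taskscollection'
      }
      {
        name: 'masterKey'
        secretRef: 'cosmoskey'
      }
    ]
    scopes: [
      'tasksmanager-backend-api'
    ]
  }
}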
Step 10: Call the main.bicep file from the Azure CLI and deploy the infrastructure
Now we are ready to start the actual deployment by calling az deployment group create. Open a PowerShell console and use the commands below; don’t forget to replace the “containerRegistryPassword” and SendGrid API key with your own values. You need to create an empty resource group first; I’m using a resource group named “tasks-bicep-rg”:
az group create `
  --name "tasks-bicep-rg" `
  --location "eastus"

az deployment group create `
  --resource-group "tasks-bicep-rg" `
  --template-file ./deploy/main.bicep `
  --parameters '{ \"containerRegistryPassword\": {\"value\":\"XXXX\"}, \"sendGridApiKey\": {\"value\":\"SG.XXXXX\"} }'
The Azure CLI will take the Bicep template and start creating the deployment in the resource group “tasks-bicep-rg”.
Step 11: Verify the final results
If the deployment succeeded, you should see all the resources created under the resource group. You can also navigate to the “Deployments” tab to verify the deployed ARM templates; it should look like the image below:
That’s it for now. Bicep is quite a powerful DSL for managing your infrastructure, and I highly recommend checking the reference below to get more familiar with Bicep and IaC.