This is the twelfth part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are:
- Tutorial for building Microservice Applications with Azure Container Apps and Dapr – Part 1
- Deploy backend API Microservice to Azure Container Apps – Part 2
- Communication between Microservices in Azure Container Apps – Part 3
- Dapr Integration with Azure Container Apps – Part 4
- Azure Container Apps State Store With Dapr State Management API – Part 5
- Azure Container Apps Async Communication with Dapr Pub/Sub API – Part 6
- Azure Container Apps with Dapr Bindings Building Block – Part 7
- Azure Container Apps Monitoring and Observability with Application Insights – Part 8
- Continuous Deployment for Azure Container Apps using GitHub Actions – Part 9
- Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps – Part 10
- Azure Container Apps Auto Scaling with KEDA – Part 11
- Azure Container Apps Volume Mounts using Azure Files – (This Post)
- Integrate Health probes in Azure Container Apps – Part 13
Azure Container Apps Volume Mounts using Azure Files
In the previous post, we saw how we can store and persist files on an external service such as Azure Blob Storage. Still, there are cases in which persisting files on the container itself is a must for the service to be deployed successfully. For example, in the last post, titled “Deploy Meilisearch into Azure Container Apps”, I showed that deploying a Meilisearch instance on Container Apps requires persisting different files (Lightning Memory-Mapped Database files, configuration, etc.) on container storage, with a volume mounted to a file share from Azure Files, for the service to function correctly.
The source code for this tutorial is available on GitHub. You can check the demo application too.
Storage options for Azure Container Apps
There are 3 different types of storage that Container Apps can utilize; you can use any or all of them depending on the use case, and the official documentation covers this thoroughly. In this post, I will be focusing on configuring Container Apps volume mounts using Azure Files as permanent storage, writing files to an Azure Files share and making them accessible to other containers.
| Storage type | Description | Usage examples |
|---|---|---|
| Container file system | Temporary storage scoped to the local container | Writing a local app cache. |
| Temporary storage | Temporary storage scoped to an individual replica | Sharing files between containers in a replica. For instance, the main app container can write log files that are processed by a sidecar container. |
| Azure Files | Permanent storage | Writing files to a file share to make data accessible by other systems. |
The use case we will cover today is the following: once the user creates a new task, the Backend API service will capture the raw JSON of the task, create a JSON file, and store it locally on the service's storage, which is volume-mounted to an Azure Files share. Then we'll configure the Frontend Web App Container App to use the same volume mount and file share, so the files written by the Backend API will be available to the Frontend Web App for download/view from the UI. We will see that creating new revisions or restarting the containers has no impact on the persistence of the files; they will always be preserved and accessible by the container apps.
Updating the Backend API Project
Step 1: Save files locally on container storage
We will now introduce a few changes to the Backend API code base to write files to the container's local storage. To do so, open the file named “TasksStoreManager.cs” in the project “TasksTracker.TasksManager.Backend.Api” and add the method below:
```csharp
private async Task WriteFileAsync(TaskModel taskModel)
{
    var options = new JsonSerializerOptions() { WriteIndented = true };
    var jsonString = System.Text.Json.JsonSerializer.Serialize(taskModel, options);

    // Store files under the "attachments" folder inside the working directory
    var directory = Path.Combine(Directory.GetCurrentDirectory(), "attachments");
    if (!Directory.Exists(directory))
    {
        Directory.CreateDirectory(directory);
    }

    // Use the task id as the file name, with a .json extension
    var filePath = Path.ChangeExtension(Path.Combine(directory, taskModel.TaskId.ToString()), ".json");

    _logger.LogInformation("Trying to write file for task with id '{0}' on path {1}", taskModel.TaskId.ToString(), filePath);

    await File.WriteAllTextAsync(filePath, jsonString);
}
```
What we have done here is simple: we are taking the task JSON content and writing it to a file locally. When we call the method Directory.GetCurrentDirectory() while running on Container Apps, it returns the working directory of the service, which is /app, so the files will be stored locally on the container storage under the directory /app/attachments.
We need to call the method “WriteFileAsync”, so update the method named “CreateNewTask” to invoke “WriteFileAsync” as shown below:
```csharp
public async Task<Guid> CreateNewTask(string taskName, string createdBy, string assignedTo, DateTime dueDate)
{
    //code removed for brevity

    await _daprClient.SaveStateAsync<TaskModel>(STORE_NAME, taskModel.TaskId.ToString(), taskModel);

    _logger.LogInformation("Write task file as json with name: '{0}' to permanent file storage", taskModel.TaskName);

    // Persist the task as a JSON file on the mounted volume
    await WriteFileAsync(taskModel);

    await PublishTaskSavedEvent(taskModel);

    //code removed for brevity
}
```
Updating the Frontend Web App Project
Step 1: Download files from the container storage
Next, we will update the Frontend Web App to read/download the saved JSON task file. We will add a new button on the edit page that is responsible for reading the stored file. To do this, open the page named “Tasks/Edit.cshtml.cs” in the project named “TasksTracker.WebPortal.Frontend.Ui” and add the method below:
```csharp
public IActionResult OnGetDownloadFile(string fileNameWithoutExtension)
{
    byte[] bytes;
    var fileName = Path.ChangeExtension(fileNameWithoutExtension, ".json");
    var directory = Path.Combine(Directory.GetCurrentDirectory(), "attachments");
    var filePath = Path.Combine(directory, fileName);

    try
    {
        //Read the file data into a byte array.
        bytes = System.IO.File.ReadAllBytes(filePath);

        //Send the file to download.
        return File(bytes, "application/octet-stream", fileName);
    }
    catch (FileNotFoundException)
    {
        var result = new NotFoundObjectResult(new { message = "File Not Found" });
        return result;
    }
}
```
As you can see above, we are reading the tasks' JSON files from the same storage directory /app/attachments. Remember that these are 2 different services and each one has its own local storage, but once we configure both services' local storage as volumes mounted to the same Azure Files share, both services will see the same files under this directory. More about this in the next steps.
Now, we just need to add a link on the Edit screen to download the raw JSON for each task. To do this, open the file named “Tasks/Edit.cshtml” and add the anchor element shown below:
```html
<div class="col-md-4">
    <form method="post">
        @* code removed for brevity *@
        <div class="form-group">
            <input type="submit" value="Save" class="btn btn-primary" />
            <a class="btn btn-primary" download
               href="@Url.Page("Edit", "DownloadFile", new { fileNameWithoutExtension = Model.TaskUpdate!.TaskId })">Download Raw</a>
        </div>
    </form>
</div>
```
Create the Azure File Share
If you are following along with the tutorial, we can use the same Azure Storage account used earlier to create the Azure Files share (skip step 1 below); if not, you can create a new storage account as in step 1 below.
Step 1: Create a new Storage account
From a PowerShell console run the command below:
```powershell
$RESOURCE_GROUP="tasks-tracker-rg"
$STORAGE_ACCOUNT="taskstracker"
$LOCATION="eastus"

az storage account create `
  --resource-group $RESOURCE_GROUP `
  --name $STORAGE_ACCOUNT `
  --location "$LOCATION" `
  --kind StorageV2 `
  --sku Standard_LRS `
  --enable-large-file-share `
  --query provisioningState
```
Step 2: Create Azure Storage File Share
Now we will create the file share that the containers' local storage/volumes will be mounted to. To do so, use PowerShell and run the command below:
```powershell
$STORAGE_ACCOUNT="taskstracker"
$SHARE_NAME="permanent-file-share"
$RESOURCE_GROUP="tasks-tracker-rg"

az storage share-rm create `
  --resource-group $RESOURCE_GROUP `
  --storage-account $STORAGE_ACCOUNT `
  --name $SHARE_NAME `
  --quota 1024 `
  --enabled-protocols SMB `
  --output table
```
Step 3: Link the Storage File Share to Container Apps Environment
The step below defines the storage mount link between our Container Apps environment and the Azure Storage account:
```powershell
## Get the storage account key
$STORAGE_ACCOUNT_KEY=$(az storage account keys list -n $STORAGE_ACCOUNT --query "[0].value" -o tsv)

## Create the storage link in the environment.
## $ENVIRONMENT holds the Container Apps environment name set in earlier parts of this series.
$STORAGE_MOUNT_NAME="permanent-storage-mount"

az containerapp env storage set `
  --name $ENVIRONMENT `
  --access-mode ReadWrite `
  --azure-file-account-name $STORAGE_ACCOUNT `
  --azure-file-account-key $STORAGE_ACCOUNT_KEY `
  --azure-file-share-name $SHARE_NAME `
  --storage-name $STORAGE_MOUNT_NAME `
  --resource-group $RESOURCE_GROUP `
  --output table
```
Basically, the command above creates a link between the Container Apps environment and the file share we created in step 2 with the az storage share-rm command.
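If you want to double-check that the link was created, you can list the storage definitions on the environment; the command below is a quick sanity check using the same variables defined above:

```powershell
## List the storages linked to the Container Apps environment
az containerapp env storage list `
  --name $ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
  --output table
```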
Step 4: Update the Backend API service to use the storage mount
Now we are ready to configure the Backend API service to use this storage mount; to do so we need to do 2 things:
- Define a storage volume
- Define a volume mount
To achieve this, the az containerapp CLI has no direct command to update the storage configuration, so what we are going to do is download the YAML file of the service, make the modifications manually on the YAML file, then update the container app using the updated YAML. If you are using ARM/Bicep templates, you can do this directly from there.
But first, we need to push our code changes to ACR. To do this, run the PowerShell command below:
```powershell
$BACKEND_API_NAME="tasksmanager-backend-api"
$ACR_NAME="taskstrackeracr"

az acr build --registry $ACR_NAME --image "tasksmanager/$BACKEND_API_NAME" --file 'TasksTracker.TasksManager.Backend.Api/Dockerfile' .
```
Next, we will download the Backend API YAML file so we can make the manual edits. The command below will download a file named “app-backend-api.yaml” to the directory you are in:
```powershell
az containerapp show `
  --name $BACKEND_API_NAME `
  --resource-group $RESOURCE_GROUP `
  --output yaml > app-backend-api.yaml
```
Once the file “app-backend-api.yaml” is downloaded, open it with VS Code or another editor that supports editing YAML files; we need to do 3 things in the file:
- Define a “volumes” array at the template level with a single entry/item named “azure-file-volume” for the storage linked to the Container Apps environment (“permanent-storage-mount”); this volume entry should be of storage type “AzureFile”
- Define a “volumeMounts” array for the container with an entry/item with the volume name “azure-file-volume” and a mount path of “/app/attachments”. This path should match the path our Backend API writes JSON files to and our Frontend Web App reads JSON files from. You can define more volume mounts if needed to support other storage use cases
- Update the “revisionSuffix” to a unique value, so the container app will not complain that the revisionSuffix is already in use
Your modified application YAML should look similar to the sketch below. Don’t forget to save the file 🙂
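Here is a trimmed sketch of the relevant part of the “template” section; the property names follow the Container Apps YAML schema, while the revisionSuffix and image values below are just illustrative:

```yaml
properties:
  template:
    revisionSuffix: vol-mount-1   # any value not used by a previous revision
    containers:
      - name: tasksmanager-backend-api
        image: taskstrackeracr.azurecr.io/tasksmanager/tasksmanager-backend-api:latest
        volumeMounts:
          - volumeName: azure-file-volume
            mountPath: /app/attachments
    volumes:
      - name: azure-file-volume
        storageName: permanent-storage-mount
        storageType: AzureFile
```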
Next, we need to update the container app and create a new revision using the updated YAML file. To do so, run the command below:
```powershell
az containerapp update `
  --name $BACKEND_API_NAME `
  --resource-group $RESOURCE_GROUP `
  --yaml app-backend-api.yaml `
  --output table
```
Step 5: Update the Frontend Web App service to use the storage mount
Now we have to repeat the steps we did for the Backend API so that the Frontend Web App Container App's storage volume is mounted to the same Azure Files share and it can read the JSON files written by the Backend API.
Push code changes to ACR as below:
```powershell
## $FRONTEND_WEBAPP_NAME was set in an earlier part of this series.
az acr build --registry $ACR_NAME --image "tasksmanager/$FRONTEND_WEBAPP_NAME" --file 'TasksTracker.WebPortal.Frontend.Ui/Dockerfile' .
```
Download the YAML file of the Frontend Web App
```powershell
az containerapp show `
  --name $FRONTEND_WEBAPP_NAME `
  --resource-group $RESOURCE_GROUP `
  --output yaml > app-frontend-ui.yaml
```
Manually update the YAML entries to create the “volumes” and “volumeMounts” arrays; the entries will be identical to those used in the Backend API.
Lastly, update the Frontend Web App container app to create a new revision using the command below:
```powershell
## Update the container app using the yaml file
az containerapp update `
  --name $FRONTEND_WEBAPP_NAME `
  --resource-group $RESOURCE_GROUP `
  --yaml app-frontend-ui.yaml `
  --output table
```
With both changes in place, we can give it a try and save a new task from the UI. This should create a JSON file in the backend, and when you edit the task, you should see a download button that lets you download the raw JSON file from the shared Azure Files share.
You can access both containers by using “exec” and navigating to the directory “/app/attachments”; you should see the files stored in this local location, which is mounted to Azure Files.
```powershell
## Access the Backend API container, then list the mounted directory
az containerapp exec --name $BACKEND_API_NAME --resource-group $RESOURCE_GROUP

ls /app/attachments

## Access the Frontend WebApp container, then list the mounted directory
az containerapp exec --name $FRONTEND_WEBAPP_NAME --resource-group $RESOURCE_GROUP

ls /app/attachments
```
And you can verify that the files are stored in the shared Azure Files storage from the Azure Portal or Azure Storage Explorer.
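You can do the same check from the CLI; one way is to list the files in the share using the account key we retrieved earlier:

```powershell
## List the task JSON files stored in the Azure Files share
az storage file list `
  --share-name $SHARE_NAME `
  --account-name $STORAGE_ACCOUNT `
  --account-key $STORAGE_ACCOUNT_KEY `
  --query "[].name" `
  --output table
```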
Update the Bicep template to reflect changes in solution components
To keep things consistent, we need to update the Bicep template to match the changes we made using “az containerapp”. The changes are on GitHub, so I will just put links to the impacted files with a brief description of what changed:
- Update on file “storageAccount.bicep“: Adding the new file service and the file share
- Update on file “acaEnvironment.bicep“: Creating the environment storage which creates the link between the Azure Files share and the Container Apps environment
- Update on file “containerApp.bicep“: Creating the “volumeMounts” array and the “volumes” array for all containers (see the sketch after this list)
- Update on file “main.bicep” by updating the values for some parameters and passing the storage mount name for the container apps module.
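For reference, the snippet below is a minimal sketch of what the volume definitions in “containerApp.bicep” could look like; the parameter names and default values are illustrative and not necessarily the exact ones used in the repo:

```bicep
// Illustrative parameters; the repo's actual parameter names may differ.
param storageMountName string = 'permanent-storage-mount'
param mountPath string = '/app/attachments'

resource backendApi 'Microsoft.App/containerApps@2022-03-01' = {
  name: 'tasksmanager-backend-api'
  location: resourceGroup().location
  properties: {
    // managedEnvironmentId, configuration, etc. removed for brevity
    template: {
      containers: [
        {
          name: 'tasksmanager-backend-api'
          image: 'taskstrackeracr.azurecr.io/tasksmanager/tasksmanager-backend-api:latest'
          volumeMounts: [
            {
              volumeName: 'azure-file-volume'
              mountPath: mountPath
            }
          ]
        }
      ]
      volumes: [
        {
          name: 'azure-file-volume'
          storageName: storageMountName
          storageType: 'AzureFile'
        }
      ]
    }
  }
}
```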
Conclusion
As we saw in this post, we can mount a file share from Azure Files as a volume inside a container, and once this is set up we can achieve the following:
- Files written under the mount location are persisted to the file share.
- Files in the share are available via the mount location.
- Multiple containers can mount the same file share, including ones that are in another replica, revision, or container app.
- All containers that mount the share can access files written by any other container or method.
- More than one Azure Files volume can be mounted in a single container.