This post is inspired by the amazing article Monitoring Azure Container Apps With Azure Managed Grafana written by Paul Yu.
In his post, Paul walks us through provisioning an Azure Managed Grafana instance and Container Apps using the Terraform AzAPI provider, and shows how to add Container Apps dashboards to the Managed Grafana instance.
In this post, I’ll walk you through monitoring a simple microservices app in Grafana. The app consists of 2 Azure Container Apps, Azure Service Bus, and Azure Storage. I will also put the system under some load so we can see how the Container Apps scale based on KEDA scaling rules and how these metrics are reflected beautifully in the Managed Grafana dashboards.
The source code is available on GitHub. A detailed tutorial of Container Apps can be accessed here.
The simple Microservices App components are shown below:
An overview of Azure Managed Grafana
Grafana is an open and composable observability and data visualization platform that lets you visualize metrics, logs, and traces from multiple sources. Grafana is also available as Azure Managed Grafana, a managed service optimized for the Azure environment that enables us to run Grafana natively within the Azure cloud platform.
As we will see in this post, Azure Managed Grafana allows us to bring together all our telemetry data in one place. It can access a wide variety of data sources, including our data stores in Azure, as it has built-in support for Azure Monitor and Azure Data Explorer.
Setting up Azure Managed Grafana
We will now start by creating an Azure Managed Grafana workspace using the Azure CLI.
Step 1: Create a new resource group
We need to create a resource group that will contain our resources. Open a PowerShell terminal and use the commands below:
$RESOURCE_GROUP="grafana-poc-rg"
$LOCATION="eastus"

az group create `
    --name $RESOURCE_GROUP `
    --location $LOCATION
Step 2: Create an Azure Managed Grafana workspace
Run the command below to create an Azure Managed Grafana instance with a system-assigned managed identity, which is the default authentication method for Azure Managed Grafana instances:
$GRAFANA_NAME = "grafana-aca-poc"

az grafana create `
    --name $GRAFANA_NAME `
    --resource-group $RESOURCE_GROUP
When running the command above, you must be logged in to the Azure CLI with a user that has the Owner role assigned on the subscription. This is needed because, when a Grafana instance is created, Azure Managed Grafana grants it the Monitoring Reader role for all Azure Monitor data and Log Analytics resources within the subscription.
With this in place, any resource provisioned in the subscription inherits that subscription-scoped role assignment, so the Grafana instance can read all of its monitoring data.
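Once the instance is created, you can verify this role assignment yourself. The sketch below (the variable name $GRAFANA_PRINCIPAL_ID is just illustrative) reads the instance's system-assigned managed identity and lists the roles granted to it; you should see Monitoring Reader at the subscription scope:

## Get the principal Id of the Grafana instance's system-assigned managed identity
$GRAFANA_PRINCIPAL_ID = az grafana show `
    --name $GRAFANA_NAME `
    --resource-group $RESOURCE_GROUP `
    --query identity.principalId `
    --output tsv

## List the role assignments granted to that identity
az role assignment list `
    --assignee $GRAFANA_PRINCIPAL_ID `
    --all `
    --query "[].{Role:roleDefinitionName, Scope:scope}" `
    --output table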
Once the deployment is complete, you’ll see a note in the command output stating that the instance was successfully created. Take note of the endpoint URL listed in the CLI output; based on the resource naming and location we’ve picked, it will be similar to the following URL: https://grafana-aca-poc-<randomstring>.eus.grafana.azure.com/
Now we are ready to log in to the Managed Grafana instance and start defining our dashboards; we will do this after we deploy our simple microservices app.
Deploy microservices application resources
In the steps below, we will deploy the application components listed here. If you need thorough details on how those components are set up, you can visit the corresponding links:
- An Azure Service Bus namespace, a topic, and a subscription acting as the message broker
- An Azure Storage account to store processed messages
- An Azure Container Apps environment
- 2 Dapr components: a State Management API component and a Pub/Sub API component
- An Azure Container App hosting the Backend Processor API with internal ingress
- An Azure Container App hosting the Frontend API with external ingress
Step 1: Deploy Azure Service Bus and Azure Storage Account
Use the commands below to deploy the Service Bus and Storage account:
$NamespaceName="shipmentsservices"
$TopicName="shipmentstopic"
$TopicSubscription="shipments-processor-subscription"

##Create Service Bus namespace
az servicebus namespace create `
    --resource-group $RESOURCE_GROUP `
    --name $NamespaceName `
    --location $LOCATION

##Create a topic under namespace
az servicebus topic create `
    --resource-group $RESOURCE_GROUP `
    --namespace-name $NamespaceName `
    --name $TopicName

##Create a topic subscription
az servicebus topic subscription create `
    --resource-group $RESOURCE_GROUP `
    --namespace-name $NamespaceName `
    --topic-name $TopicName `
    --name $TopicSubscription

##Create Storage Account
$STORAGE_ACCOUNT_NAME = "shipmentstore"

az storage account create `
    --name $STORAGE_ACCOUNT_NAME `
    --resource-group $RESOURCE_GROUP `
    --location $LOCATION `
    --sku Standard_LRS `
    --kind StorageV2
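The Backend Processor deployed later expects the Service Bus connection string as a secret, and the Dapr state store component needs a storage account key. The exact wiring is covered in the linked tutorial; as a rough sketch (the variable names here are illustrative), both values can be retrieved like this:

## Get the Service Bus connection string (used later as the svcbus-connstring secret)
$SERVICE_BUS_CONNECTION_STRING = az servicebus namespace authorization-rule keys list `
    --resource-group $RESOURCE_GROUP `
    --namespace-name $NamespaceName `
    --name RootManageSharedAccessKey `
    --query primaryConnectionString `
    --output tsv

## Get the storage account key (referenced by the Dapr state store component)
$STORAGE_ACCOUNT_KEY = az storage account keys list `
    --resource-group $RESOURCE_GROUP `
    --account-name $STORAGE_ACCOUNT_NAME `
    --query "[0].value" `
    --output tsv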
Step 2: Deploy Azure Container App Environment
Now we will create the ACA environment and associate 2 Dapr components, pubsub-servicebus and shipmentsstatestore. The content of the Dapr component files pubsub-svcbus.yaml and statestore-tablestorage.yaml can be found at the links.
Creating the ACA environment will, by default, create a Log Analytics workspace which will contain all the metrics, telemetry, and logs produced by any container apps deployed within this environment. Because the Managed Grafana instance already holds the Monitoring Reader role, the container apps metrics will be visible on the Grafana dashboards.
$ENVIRONMENT="orders-services-aca-env"

az containerapp env create `
    --name $ENVIRONMENT `
    --resource-group $RESOURCE_GROUP `
    --location $LOCATION

## Set Dapr Components
az containerapp env dapr-component set `
    --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
    --dapr-component-name pubsub-servicebus `
    --yaml '.\components\pubsub-svcbus.yaml'

az containerapp env dapr-component set `
    --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
    --dapr-component-name shipmentsstatestore `
    --yaml '.\components\statestore-tablestorage.yaml'
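If you are curious which Log Analytics workspace the environment was wired to, a quick check like the sketch below (the property path may differ slightly between CLI versions) returns the workspace customer Id:

## Show the Log Analytics workspace (customer) Id backing the ACA environment
az containerapp env show `
    --name $ENVIRONMENT `
    --resource-group $RESOURCE_GROUP `
    --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId `
    --output tsv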
Step 3: Deploy Azure Container App Backend Processor
We will now deploy the backend API, which is responsible for processing messages published to Service Bus and storing the processed messages in Table Storage. This API will be configured with the KEDA Service Bus scaler to autoscale up to 10 replicas when the number of messages in the Service Bus topic exceeds 10. The service will also expose an internal endpoint so the Frontend API can invoke it over HTTP using Dapr service-to-service invocation. To read more about KEDA autoscaling, you can visit my previous post.
I’m building and pushing the Docker images to Azure Container Registry; you can use the same approach or another registry such as Docker Hub. The source code of the Backend Processor can be found here.
To deploy the Backend processor, use the commands below:
$BACKEND_SVC_NAME="shipments-backend-processor"
$ACR_NAME="taskstrackeracr"

## Create Backend App
az containerapp create `
    --name $BACKEND_SVC_NAME `
    --resource-group $RESOURCE_GROUP `
    --environment $ENVIRONMENT `
    --registry-server "$ACR_NAME.azurecr.io" `
    --image "$ACR_NAME.azurecr.io/$BACKEND_SVC_NAME" `
    --cpu 0.50 --memory 1.0Gi `
    --target-port 80 `
    --ingress 'internal' `
    --secrets "svcbus-connstring=<Conn string>" `
    --enable-dapr `
    --dapr-app-id $BACKEND_SVC_NAME `
    --dapr-app-port 80 `
    --min-replicas 1 `
    --max-replicas 10 `
    --scale-rule-name "queue-length-autoscaling" `
    --scale-rule-type "azure-servicebus" `
    --scale-rule-metadata "topicName=shipmentstopic" `
                          "subscriptionName=shipments-processor-subscription" `
                          "namespace=shipmentsservices" `
                          "messageCount=10" `
                          "connectionFromEnv=svcbus-connstring" `
    --scale-rule-auth "connection=svcbus-connstring"
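To double-check that the KEDA scale rule landed on the container app as expected, you can inspect the scale section of the app template; a minimal sketch:

## Inspect the scale settings (min/max replicas and the azure-servicebus rule)
az containerapp show `
    --name $BACKEND_SVC_NAME `
    --resource-group $RESOURCE_GROUP `
    --query properties.template.scale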
Step 4: Deploy Azure Container App Frontend API
This API is very simple: it exposes a single external endpoint, which we are going to load test using k6. The endpoint reads data from Azure Table Storage by invoking an internal endpoint on the Backend Processor. An HTTP autoscaling rule will be configured to trigger when a container app replica receives 10 or more concurrent requests, and the app can scale out to a maximum of 10 replicas as per the configuration.
I’m building and pushing the Docker images to Azure Container Registry; you can use the same approach or another registry. The source code of the Frontend API can be found here.
To deploy the Frontend API, use the commands below:
$FRONTEND_WEBAPP_NAME="shipments-frontend-api"

## Create Frontend App
az containerapp create `
    --name $FRONTEND_WEBAPP_NAME `
    --resource-group $RESOURCE_GROUP `
    --environment $ENVIRONMENT `
    --registry-server "$ACR_NAME.azurecr.io" `
    --image "$ACR_NAME.azurecr.io/$FRONTEND_WEBAPP_NAME" `
    --cpu 0.25 --memory 0.5Gi `
    --target-port 80 `
    --ingress 'external' `
    --enable-dapr `
    --dapr-app-id $FRONTEND_WEBAPP_NAME `
    --dapr-app-port 80 `
    --min-replicas 1 `
    --max-replicas 10 `
    --scale-rule-name "http-requests-rule" `
    --scale-rule-http-concurrency 10 `
    --scale-rule-type "http"
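The k6 script in the next section needs the Frontend API's public URL. You can pull the external ingress FQDN with something like the sketch below (your FQDN will differ from the one shown later in this post):

## Get the external ingress FQDN of the Frontend API (used as the base URL in the k6 script)
$FRONTEND_FQDN = az containerapp show `
    --name $FRONTEND_WEBAPP_NAME `
    --resource-group $RESOURCE_GROUP `
    --query properties.configuration.ingress.fqdn `
    --output tsv

echo "https://$FRONTEND_FQDN"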
Generate Load to scale up the Container Apps
In order to see some realistic dashboard data in Grafana, we need to simulate some load on the microservices app. To do so, we will publish a large number of messages to Service Bus and simulate high internet traffic hitting the Frontend API. To achieve this, we will do the following:
Step 1: Publish a large number of messages to Service Bus Topic
I’ve created a sample console application that asks the user how many messages to publish to the topic and sends them in batches of 25. You can provide, for example, 10,000 messages and the console application will publish them in seconds. This fills the topic up quickly, so the KEDA Service Bus scaler which is monitoring the topic will trigger autoscaling for the Backend Processor and start spinning up more replicas to cope with this sudden number of messages. The source code of this console application can be found at the link.
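While the console application is publishing, you can watch the backlog build up on the subscription that the KEDA scaler monitors; assuming your CLI version returns the messageCount property, a minimal sketch looks like this:

## Watch the number of messages sitting in the subscription monitored by KEDA
az servicebus topic subscription show `
    --resource-group $RESOURCE_GROUP `
    --namespace-name $NamespaceName `
    --topic-name $TopicName `
    --name $TopicSubscription `
    --query messageCount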
Step 2: Generate HTTP traffic load to scale up Container App Frontend API
I will be using an open-source tool named k6 to simulate virtual user traffic to the Frontend API. The traffic we generate should trigger the HTTP autoscaling rule on the Frontend API and start spinning up more replicas to handle the sudden number of concurrent HTTP requests.
Using the tool is very simple: you create a JavaScript file and then invoke it from the k6 CLI. To install the k6 CLI, follow the instructions documented at this link. On Windows, there is an .msi package too, which makes installation very simple.
The content of the JavaScript file will contain the following:
import http from 'k6/http';
import { sleep } from 'k6';

const baseUrl = 'https://shipments-frontend-api.happysea-573aaf45.eastus.azurecontainerapps.io';

export let options = {
    vus: 100,
    duration: '180s'
};

export default function () {
    var idfrom = getRandomInt(1, 10000);
    http.get(`${baseUrl}/api/shipments?idfrom=${idfrom}`);
    sleep(1);
}

function getRandomInt(min, max) {
    min = Math.ceil(min);
    max = Math.floor(max);
    return Math.floor(Math.random() * (max - min) + min); // The maximum is exclusive and the minimum is inclusive
}
This script will use 100 virtual users for a duration of 3 minutes and will keep sending GET requests to the public endpoint /api/shipments?idfrom=<RandomId>.
To invoke the script, open the command line and use the command below:
k6 run script.js
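While k6 is running (and while the console app is flooding the topic), you can also watch replicas being provisioned from the CLI; a sketch along these lines, repeated for the Backend Processor if you like:

## List the running replicas of the Frontend API
az containerapp replica list `
    --name $FRONTEND_WEBAPP_NAME `
    --resource-group $RESOURCE_GROUP `
    --output table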
By now we’ve generated a load of data on every component within our simple microservices app, so let’s start configuring the Grafana dashboards and exploring the metrics.
Configure Managed Grafana Dashboards
Step 1: Login to Managed Grafana Instance
Now we’ll use the Managed Grafana endpoint URL to log in to Grafana. To get the endpoint URL, you can use the command below:
az grafana show `
    --name $GRAFANA_NAME `
    --query properties.endpoint
Open the URL in the browser and log in with a corporate account, as Managed Grafana doesn’t support logging in with personal Microsoft accounts at the time of writing this post.
After you log in successfully, we need to import a few Grafana dashboards. Grafana has more than 40 Azure dashboards built by the community, and some of them by Microsoft. You can explore the Azure dashboards by clicking on this link.
In our case we are interested in the 4 dashboards below:
- Single Azure Container App view: https://grafana.com/grafana/dashboards/16592-azure-container-apps-container-app-view/
- Azure Container Apps Aggregate view: https://grafana.com/grafana/dashboards/16591-azure-container-apps-aggregate-view/
- Azure Storage account: https://grafana.com/grafana/dashboards/14469-azure-insights-storage-accounts/
- Azure Service Bus: https://grafana.com/grafana/dashboards/10533-azure-service-bus/
Importing the dashboards is a simple process. After you click on the Import menu item, you will be redirected to the page below:
Provide the URL of the dashboard or the ID of the dashboard and click on the Load button. Once the metadata of the dashboard is loaded, you need to set the Datasource to Azure Monitor as in the image below, and then click on Import:
You need to do the same steps for the remaining 3 dashboards (Azure Service Bus, Azure Storage, and the Container Apps aggregate view).
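If you prefer to script the imports rather than use the Grafana UI, the Azure CLI Grafana (amg) extension exposes a dashboard import command; as a sketch (the IDs are the Grafana.com dashboard IDs from the links above), it would look like this:

## Import the community dashboards by their Grafana.com IDs
az grafana dashboard import --name $GRAFANA_NAME --resource-group $RESOURCE_GROUP --definition 16592
az grafana dashboard import --name $GRAFANA_NAME --resource-group $RESOURCE_GROUP --definition 16591
az grafana dashboard import --name $GRAFANA_NAME --resource-group $RESOURCE_GROUP --definition 14469
az grafana dashboard import --name $GRAFANA_NAME --resource-group $RESOURCE_GROUP --definition 10533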
After you import the 4 dashboards, you can navigate to the Dashboards menu item, select Browse and you should see the 4 dashboards imported successfully as in the image below:
You can now select any of the dashboards and start seeing the beautiful visualization of metrics in one place. Let’s check a snapshot of each dashboard taken while the system was under load:
Azure Service Bus Dashboard
Pay attention to the panel titled Service Bus – Messages in Queue / Topic; it shows the spike of messages when the console application was publishing a load of messages. There are many other useful panels you can explore.
Azure Storage Dashboard
This dashboard has very useful panels showing, for example, the size of incoming data while the Backend Processor was writing processed messages to Table Storage, and the size of outgoing data while the same processor was serving read requests from Table Storage.
Backend Processor Azure Container App Dashboard
Check the panel titled Replica Count and notice how 10 replicas are provisioned at the same time the Service Bus was receiving a load of messages. There are many other useful panels, such as Memory and Network, which are not shown in this snapshot.
Frontend API Azure Container App Dashboard
Take a look at the panel titled Max Request Count, which gives an indication of the number of inbound HTTP requests and shows how the container app started to scale out when there was a sudden increase in the HTTP request count.
Azure Container Apps Aggregated View
This is a simple dashboard which will be useful if you have several Container Apps or Container Apps environments; it acts as an entry point to access the individual Container Apps.
Summary
As we saw in this post, Managed Grafana lets you bring together all your telemetry data in one place. You can customize dashboards the way you prefer, and you can rely on community dashboards to help you get started. To learn more about customization, visit the Grafana documentation.