App Services, Service Fabric, and serverless Azure Functions are becoming the flagship compute services of Microsoft Azure. I am trying to find an economical benchmark for compute in Azure. It is not a straightforward task, but I am using a simple methodology to work out a cost/compute ratio across all three platforms.
Summary
Following is the summary of the highest-throughput, zero-error load test out of 5 different load patterns.
Metric | Value | Source |
---|---|---|
Hourly Cost (3 × Nodes) | 0.305159022863118 per hour | Azure Pricing |
Hourly Test Count | 687,600 API calls | Load Test Stats & Profile |
Zero-Fault Average CPU Consumption | 88.234625% | Performance Stats |
Load Test – Detailed Breakdown
Test Methodology
I ran user loads of 600, 1300, 500, 500, 500, 50, 50 and 50 respectively across the test runs. My objective was to find the zero-fault maximum throughput of the Service Fabric cluster.

The user load was well distributed, with planned load spikes, to produce a successful run.

Service Fabric Cluster Profile
Property | Value |
---|---|
Node Type | 1 |
Node Count | 3 |
Application Type | 1 |
Application | Stateless Web API (OWIN) |
Machine Profile
"sku": {
  "name": "Standard_DS2_v2",
  "tier": "Standard"
}
Machine Configuration
Property | Value |
---|---|
Size | Standard_DS2_v2 |
CPU cores | 2 |
Memory (GiB) | 7 |
Local SSD (GiB) | 14 |
Max data disks | 4 |
Max cached disk throughput, IOPS / MBps (cache size in GiB) | 8,000 / 64 (86) |
Max uncached disk throughput, IOPS / MBps | 6,400 / 96 |
Max NICs / Network bandwidth | 2 / high |
Microsoft’s Quote DSv2-series [as of 25th April 2017]
DSv2-series* (ACU: 210-250)
The A0 size is over-subscribed on the physical hardware. For this specific size only, other customer deployments may impact the performance of your running workload. The relative performance is outlined below as the expected baseline, subject to an approximate variability of 15 percent.
Azure Pricing [as of 25th April 2017]
- 0.101719674287706 per hour per node – based on the Detailed Usage Report from the Azure Portal
- 0.101719674287706 × 3 = 0.305159022863118 per hour for the Service Fabric cluster (3 × nodes)
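To translate these hourly figures into per-call economics, here is a minimal sketch of the arithmetic, using only the numbers reported above:

```python
# Derived cost per API call, from the hourly figures reported above.
cost_per_node_hour = 0.101719674287706   # per hour per node (Azure usage report)
node_count = 3
cluster_cost_per_hour = cost_per_node_hour * node_count  # ≈ 0.305159

calls_per_hour = 687_600                 # from the load test summary

cost_per_call = cluster_cost_per_hour / calls_per_hour
cost_per_million_calls = cost_per_call * 1_000_000

print(f"{cost_per_call:.9f} per call")               # ≈ 0.000000444
print(f"{cost_per_million_calls:.2f} per 1M calls")  # ≈ 0.44
```

In other words, at this (zero-fault, near-saturation) utilization the cluster serves a million calls for roughly 0.44 of the hourly currency unit.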
Web API Harness – Sample Code
The Web API deploys two test endpoints:
- APIEndpoint1 – a light-weight, high-volume scenario: generate random Guids and apply list and hash operations.
- APIEndpoint2 – a CPU cruncher with complex graphics and mathematical calculations.
// API Endpoint 1

// Helper assumed by the original snippet: reverse a string.
string Reverse(string s)
{
    var chars = s.ToCharArray();
    Array.Reverse(chars);
    return new string(chars);
}

var guidList = new List<dynamic>();

// CPU-burn loop: the results are intentionally discarded.
for (var i = 0; i < 300000; i++)
{
    var randomGuid = Guid.NewGuid().ToString();
    var hash = randomGuid.GetHashCode();
    var reverseGuid = Reverse(randomGuid);
}

// Build the response payload.
for (var i = 0; i < 1000; i++)
{
    var randomGuid = Guid.NewGuid().ToString();
    guidList.Add(new { Guid = randomGuid, Hash = randomGuid.GetHashCode(), ReverseGuid = Reverse(randomGuid) });
}

// Return guidList serialized with JsonConvert.
Accept a Base64-encoded 2048×2048, 300 DPI high-resolution image and apply some rigorous image manipulation (using the ImageProcessor library's ImageFactory).
// API Endpoint 2
// Assumed inputs: photoBytes (the decoded Base64 image) and matrixFilter
// (a pre-selected ImageProcessor matrix filter).

ISupportedImageFormat format = new JpegFormat { Quality = 70 };
Size size = new Size(150, 0);

using (MemoryStream inStream = new MemoryStream(photoBytes))
{
    using (MemoryStream outStream = new MemoryStream())
    {
        using (ImageFactory imageFactory = new ImageFactory(preserveExifData: true))
        {
            imageFactory.Load(inStream)
                        .Hue(180, true)
                        .Brightness(40)
                        .Filter(matrixFilter)
                        .Resize(size)
                        .Pixelate(20)
                        .Format(format)
                        .Save(outStream);
        }
        // Convert outStream to Base64 and add it to the HttpResponse.
    }
}
Load Test Stats & Profile
I found TestId 2006 to be the best match for comparison. Following is the summary of that test run.
Metric | Value |
---|---|
Max User Load | 500 (no warm-up or think time) |
Tests/Sec | 191 (687,600 tests per hour) |
Tests Failed | 0 |
Avg. Test Time (sec) | 0.31 (310 ms) |
Avg. HTTP Response Size (KiB) | 137.21 |
Throughput/Sec (KiB) | 26,257.8777 (25.6425 MiB) |
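As a quick sanity check, these figures are internally consistent; the small gap between tests/sec × average response size and the reported throughput is presumably rounding in the reported averages:

```python
# Cross-checking the load test summary figures.
tests_per_sec = 191
tests_per_hour = tests_per_sec * 3600
print(tests_per_hour)  # 687600

avg_response_kib = 137.21
print(tests_per_sec * avg_response_kib)  # ≈ 26207 KiB/s (reported: 26257.8777)

print(26257.8777 / 1024)  # ≈ 25.6425 MiB/s
```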
Azure Metrics Query
https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/{{your resource group}}/providers/Microsoft.Compute/virtualMachineScaleSets/{{your sf node}}/providers/microsoft.insights/metrics?api-version=2016-09-01&$filter=(name.value eq 'Percentage CPU') and aggregationType eq 'Average' and startTime eq 2017-04-25T20:30:00Z and endTime eq 2017-04-25T20:50:00Z and timeGrain eq duration'PT1M'
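A sketch of how that query URL can be assembled programmatically; `subscription_id`, `resource_group` and `vmss_name` are placeholders you must supply, and the resulting URL is sent as a GET with an `Authorization: Bearer <token>` header:

```python
from urllib.parse import quote

def metrics_url(subscription_id, resource_group, vmss_name, start, end):
    """Build the Azure Monitor metrics query used above (Percentage CPU,
    1-minute average) for a VM scale set backing a Service Fabric node type."""
    base = ("https://management.azure.com"
            f"/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Compute/virtualMachineScaleSets/{vmss_name}"
            "/providers/microsoft.insights/metrics")
    flt = ("(name.value eq 'Percentage CPU') and aggregationType eq 'Average' "
           f"and startTime eq {start} and endTime eq {end} "
           "and timeGrain eq duration'PT1M'")
    return f"{base}?api-version=2016-09-01&$filter={quote(flt)}"

url = metrics_url("00000000-0000-0000-0000-000000000000", "my-rg", "my-sf-node",
                  "2017-04-25T20:30:00Z", "2017-04-25T20:50:00Z")
print(url)
```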
Performance Stats
You can also find the raw result set at Gist.
Timestamp | Average CPU (%) |
---|---|
2017-04-25T20:30:00Z | 89.28 |
2017-04-25T20:31:00Z | 87.41 |
2017-04-25T20:32:00Z | 83.3025 |
2017-04-25T20:33:00Z | 89.345 |
2017-04-25T20:34:00Z | 86.6525 |
2017-04-25T20:35:00Z | 80.5025 |
2017-04-25T20:36:00Z | 92.3225 |
2017-04-25T20:37:00Z | 91.505 |
2017-04-25T20:38:00Z | 85.275 |
2017-04-25T20:39:00Z | 91.205 |
2017-04-25T20:40:00Z | 85.565 |
2017-04-25T20:41:00Z | 86.255 |
2017-04-25T20:42:00Z | 92.5275 |
2017-04-25T20:43:00Z | 87.2025 |
2017-04-25T20:44:00Z | 90.84 |
2017-04-25T20:45:00Z | 90.32 |
2017-04-25T20:46:00Z | 90.3725 |
2017-04-25T20:47:00Z | 90.5775 |
2017-04-25T20:48:00Z | 83.9275 |
2017-04-25T20:49:00Z | 90.305 |
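The 88.234625% figure in the summary is simply the mean of these 20 one-minute samples:

```python
# Mean of the per-minute CPU averages from the table above.
samples = [89.28, 87.41, 83.3025, 89.345, 86.6525, 80.5025, 92.3225, 91.505,
           85.275, 91.205, 85.565, 86.255, 92.5275, 87.2025, 90.84, 90.32,
           90.3725, 90.5775, 83.9275, 90.305]
average_cpu = sum(samples) / len(samples)
print(round(average_cpu, 6))  # 88.234625
```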