Breakbeat Technology

The Bogdan & The Bugatti – Scalable vs Performant

Or How I Found An Excuse to Benchmark EBS & Instance Store

Humanity’s ability to communicate is one of our most valuable tools.

How good we are with this tool is never more apparent than when we discuss complex topics such as technology, business objectives, or process design.

In this article, I hope to help you understand the difference between Scalable/Scalability and Performance/Performant, and to run some basic benchmarks on two commonly used AWS storage products, EBS and the EC2 Instance Store, to show how this language can pop up in the wild.

Scalable and performant are often used interchangeably but are actually two completely different things.

Understanding this difference is incredibly important when discussing business objectives, reading documentation, or designing infrastructure/processes.

The first section of this article is intended for non-technical individuals (CEOs, Project Managers, etc) while the latter part is more suited to technical personnel (CTOs, Engineers, DevOps). Hopefully everyone gets a bit of value out of it :).

Basic Definitions

Scalable/Scalability

A thing (Infrastructure, process, etc) that is scalable is able to continue reliably serving its intended function as demand on it increases.

Performance/Performant

A thing that is performant is able to handle its assigned function efficiently, as judged by the metrics that thing is measured on.

The Bogdan & The Bugatti

Let’s use the Bugatti Veyron and Bogdan A401 as an analogy for easily understanding (and remembering) the difference.

For vehicles, let’s assume that our metric for performant is “Goes real fast” and our metric for scalable is “Gets lots of people from Point A to Point B”.

The Bugatti Veyron is a high-performance 2 seat sports car designed for going fast and looking really cool while it’s doing it.

It can go 0-60 MPH in 2.5 seconds (Just long enough for you to say prayers to your deity of choice) and has a max speed of 268 MPH (You said those prayers, right?).

The Bogdan A401 is a not-so-high-performance 30 seat bus designed for city transport.

It has a 0-60 MPH time of “eventually” and a max speed of “Niet!”.

The Bugatti is clearly the more performant of the two but is pretty bad on the scalability point. If you need to transport more than 2 people at a time, you’re out of luck (Maybe you can strap someone to the roof?).

The Bogdan, while slower than bureaucracy, is pretty scalable. If you need to transport up to 30 people, you’d be able to do it all in one easy trip.

When you’re thinking of scalable, remember the humble Bogdan A401 ferrying people around your city.

When you’re thinking of performant, imagine the ritzy Bugatti Veyron screaming by you at over 250 MPH.

In The Cloud

Let’s go in a slightly more technical direction to see how this can matter when designing infrastructure.

Ideally, when designing infrastructure, you want something that is both performant AND scalable, but as with anything in life, trade-offs are required.

Let’s compare two types of storage available for AWS EC2 instances, AWS EBS and EC2 Instance Store.

Common assumptions say that EBS is very scalable (and somewhat performant) while Instance Store is very performant (but not nearly as scalable).

Let’s put those assumptions to a test.

EBS Characteristics

EBS provides a few volume types when provisioning including gp2 (general purpose SSD storage), io1 (Provisioned IOPS SSD), and two HDD types (st1 & sc1).

Much of this is laid out very well in the AWS documentation, but there are a few key characteristics it doesn’t mention, such as latency, something that can be incredibly important when serving many small files (Common when serving web applications).

This comparison will focus on the gp2 volume type.

  • Are NOT directly attached to the instance (Reside on different hardware)
  • Storage size can be scaled up to 16 TiB (~17.5 TB).
  • Up to 16,000 IOPS per volume
  • Up to 250 MiB/s throughput per volume
  • Actual performance modified by a number of different considerations
  • Can be modified while in use (Size, Volume Type, etc).
  • Can be easily detached or attached to different EC2 instances.
  • Is persistent through hardware failures and instance stop/reboot/termination.
  • Can be easily backed up with snapshots.
  • Supports full volume encryption.

Instance Store Characteristics

Instance Store (Also called ephemeral storage) has a few different types and sizes, both of which are determined by the EC2 instance type chosen.

You’ll immediately notice that for EC2 Instance Store, almost no details are provided by AWS about any of the performance numbers.

Let’s look at what we already know about Instance Store volume types and then we’ll proceed to do some discovery of our own.

  • Is directly attached to the instance on the same hardware (Does not need to go over the network)
  • Is NOT persistent through hardware failures and instance stop/termination. It will persist through reboots.
  • Can NOT be attached or reattached to different instances
  • Can NOT resize an Instance Store drive.
    • Drive type and size determined by the EC2 instance type. A m5d.large for example has a 75 GB NVMe SSD.

Benchmarking Fun Times

There is a lot of talk about the maximum performance you can see with EBS and not much at all said about Instance Store.

Let’s do some benchmarking to compare the scalable EBS to the (hopefully performant) Instance Store.

The Hardware

I’ll be spinning up a single m5d.large instance with both the Instance Store and an EBS volume attached to help minimize potential differences (For example, if two instances were spun up on different hardware).

It’s also worth noting that we are testing “Out of the box” performance without any optimizations one could perform (Such as RAID striping EBS volumes).

  • System in us-east-1 (North Virginia)
  • An m5d.large instance type
  • EBS Volume
    • gp2 volume type
    • Formatted as XFS
    • 385 GiB volume size
      • This helps ensure maximum throughput without regard for EBS burst credits.
  • Instance Storage
    • 1×75 GB NVMe drive provided by the m5d.large instance type
      • As a side note, the available storage as reported by fdisk -l is 69.9 GiB
    • Formatted as XFS
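As a sanity check on that 385 GiB figure, here’s the back-of-the-envelope math based on the gp2 rules published at the time of writing (3 IOPS per provisioned GiB, and sustained 250 MiB/s throughput for volumes of roughly 334 GiB and up; these thresholds may have changed since):

```shell
# gp2 sizing sanity check (thresholds from the AWS gp2 docs at the time).
SIZE_GIB=385
BASELINE_IOPS=$((SIZE_GIB * 3))           # gp2 baseline: 3 IOPS per GiB
echo "baseline IOPS: ${BASELINE_IOPS}"    # 1155, burstable up to ~3,000
if [ "$SIZE_GIB" -ge 334 ]; then
  echo "large enough to sustain 250 MiB/s without burst credits"
fi
```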

Latency

For this test we’ll be utilizing the v1.2 release of ioping.

On each drive we’ll issue the following command:

./ioping -c 30 $DRIVEPATH

Let’s take a look at the results.

EBS

[root@ip-172-31-46-145 ioping-1.2]# ./ioping -c 30 /root/ 
# Output truncated for readability
--- /root/ (xfs /dev/nvme0n1p1 385.0 GiB) ioping statistics --- 
29 requests completed in 7.41 ms, 116 KiB read, 3.91 k iops, 15.3 MiB/s 
generated 30 requests in 29.0 s, 120 KiB, 1 iops, 4.14 KiB/s 
min/avg/max/mdev = 218.4 us / 255.6 us / 367.3 us / 32.3 us

EBS offers an incredibly consistent experience with a low mean deviation (mdev) of only 32 microseconds and a respectable 255 microseconds of latency on average. The maximum latency seen is 367 microseconds, which is still very respectable.

Instance Store

[root@ip-172-31-46-145 ioping-1.2]# ./ioping -c 30 /mnt/instancestore/ 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=1 time=143.8 us (warmup) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=2 time=183.4 us 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=3 time=196.6 us 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=4 time=174.8 us 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=5 time=172.5 us 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=6 time=163.9 us 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=7 time=166.8 us 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=8 time=207.4 us (slow) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=9 time=176.8 us 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=10 time=163.7 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=11 time=184.1 us 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=12 time=165.6 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=13 time=2.59 ms (slow) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=14 time=140.0 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=15 time=194.3 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=16 time=131.5 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=17 time=139.0 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=18 time=202.1 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=19 time=161.3 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=20 time=139.8 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=21 time=135.4 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=22 time=136.4 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=23 time=107.5 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=24 time=159.8 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=25 time=98.5 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=26 time=136.0 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=27 time=101.1 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=28 time=129.6 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=29 time=134.2 us (fast) 
4 KiB <<< /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB): request=30 time=110.2 us (fast) 

--- /mnt/instancestore/ (xfs /dev/nvme1n1 69.8 GiB) ioping statistics --- 
29 requests completed in 6.91 ms, 116 KiB read, 4.20 k iops, 16.4 MiB/s 
generated 30 requests in 29.0 s, 120 KiB, 1 iops, 4.14 KiB/s 
min/avg/max/mdev = 98.5 us / 238.1 us / 2.59 ms / 446.1 us

Instance Store does not offer nearly as consistent an experience.

While its average is slightly lower than EBS (238 microseconds) and the minimum measurement is half that of EBS (98 microseconds), it has a mean deviation of 446 microseconds.

It’s important to note that most of this can be accounted for by a single request that took ~2.5 milliseconds (Which is very high).

When that particular request isn’t included, Instance Store stats start to look much better, but its lack of consistency should definitely be considered.

Latency Conclusions

It’s clear our assumption about latency being better on Instance Store isn’t entirely correct.

This also helped us see the limitations of our testing/benchmarking method. This test could be performed later with a larger sample size to provide a better data set for more rigorous comparisons.

Despite these flaws in testing, we still get an idea that Instance Store will offer better latency (in most cases) but may experience some rather wide fluctuations.

EBS on the other hand will provide worse latency but will be incredibly consistent about doing so.

Random R/W

For random read/write stats & IOPS we’ll be utilizing fio, the tool suggested by AWS for benchmarking disk performance.

We’ll be installing this tool via the package manager. The version reported is fio-2.14.

The command we’ll be utilizing on each drive:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75

That one is a doozy! This command will randomly read from and write to a 4 GB test file at a 75 percent read mix (i.e., three reads for every write) to approximate the work a database would perform.
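A quick sanity check on what that mix should produce: with a 4 GB file and a 75/25 split, we’d expect roughly 3 GB of reads and 1 GB of writes, which closely matches the io= figures in both runs:

```shell
# Expected read/write volume for a 4 GiB file at a 75/25 mix.
TOTAL_MB=4096
READ_MB=$((TOTAL_MB * 75 / 100))
WRITE_MB=$((TOTAL_MB - READ_MB))
echo "expect ~${READ_MB} MB read, ~${WRITE_MB} MB written"   # 3072 / 1024
```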

On to the testing!

EBS

Full output for posterity.

[root@ip-172-31-46-145 ~]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75  
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64 
fio-2.14 
Starting 1 process 
test: Laying out IO file(s) (1 file(s) / 4096MB) 
Jobs: 1 (f=1): [m(1)] [100.0% done] [9108KB/2888KB/0KB /s] [2277/722/0 iops] [eta 00m:00s] 
test: (groupid=0, jobs=1): err= 0: pid=3383: Mon May  4 16:07:55 2020 
  read : io=3070.4MB, bw=9016.2KB/s, iops=2254, runt=348673msec 
  write: io=1025.8MB, bw=3012.4KB/s, iops=753, runt=348673msec 
  cpu      : usr=0.37%, sys=0.75%, ctx=104346, majf=0, minf=9 
  IO depths   : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 
   submit   : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
   complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
   issued   : total=r=785996/w=262580/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 
   latency  : target=0, window=0, percentile=100.00%, depth=64 

Run status group 0 (all jobs): 
  READ: io=3070.4MB, aggrb=9016KB/s, minb=9016KB/s, maxb=9016KB/s, mint=348673msec, maxt=348673msec 
  WRITE: io=1025.8MB, aggrb=3012KB/s, minb=3012KB/s, maxb=3012KB/s, mint=348673msec, maxt=348673msec 

Disk stats (read/write): 
  nvme0n1: ios=785502/262539, merge=0/11, ticks=10587472/3562516, in_queue=13785412, util=99.82%

Let’s break out the most relevant section so we can understand this a bit better.

test: (groupid=0, jobs=1): err= 0: pid=3383: Mon May  4 16:07:55 2020 
  read : io=3070.4MB, bw=9016.2KB/s, iops=2254, runt=348673msec 
  write: io=1025.8MB, bw=3012.4KB/s, iops=753, runt=348673msec 
  cpu      : usr=0.37%, sys=0.75%, ctx=104346, majf=0, minf=9 

Oof.jpg! These stats aren’t looking too good.

Read performance is ~9000KB/s with 2254 IOPS on average.

Write performance is ~3000KB/s with 753 IOPS on average.
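As an aside, it’s worth noticing where that number lands. Combining the read and write IOPS puts us almost exactly at gp2’s advertised 3,000 IOPS burst ceiling (per the AWS docs at the time), which suggests the run was IOPS-limited rather than throughput-limited. I haven’t confirmed this against CloudWatch, so treat it as an educated guess:

```shell
# Combined read + write IOPS from the EBS fio summary above.
echo "combined IOPS: $((2254 + 753))"   # 3007, right at the ~3,000 gp2 burst ceiling
```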

Let’s take a look at how our Instance Store volume stacks up.

Instance Store

Full output for posterity.

[root@ip-172-31-46-145 instancestore]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64 
fio-2.14 
Starting 1 process 
test: Laying out IO file(s) (1 file(s) / 4096MB) 
Jobs: 1 (f=1): [m(1)] [100.0% done] [117.2MB/40168KB/0KB /s] [29.1K/10.5K/0 iops] [eta 00m:00s] 
test: (groupid=0, jobs=1): err= 0: pid=3406: Mon May  4 16:11:08 2020 
  read : io=3070.4MB, bw=128211KB/s, iops=32052, runt= 24522msec 
  write: io=1025.8MB, bw=42832KB/s, iops=10707, runt= 24522msec 
  cpu      : usr=9.19%, sys=22.76%, ctx=647405, majf=0, minf=9 
  IO depths   : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 
   submit   : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
   complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
   issued   : total=r=785996/w=262580/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 
   latency  : target=0, window=0, percentile=100.00%, depth=64 

Run status group 0 (all jobs): 
  READ: io=3070.4MB, aggrb=128210KB/s, minb=128210KB/s, maxb=128210KB/s, mint=24522msec, maxt=24522msec 
  WRITE: io=1025.8MB, aggrb=42831KB/s, minb=42831KB/s, maxb=42831KB/s, mint=24522msec, maxt=24522msec 

Disk stats (read/write): 
  nvme1n1: ios=783491/261757, merge=0/0, ticks=1478028/68052, in_queue=1536000, util=99.82%

Let’s break out the most relevant section so we can understand this a bit better.

test: (groupid=0, jobs=1): err= 0: pid=3406: Mon May  4 16:11:08 2020 
  read : io=3070.4MB, bw=128211KB/s, iops=32052, runt= 24522msec 
  write: io=1025.8MB, bw=42832KB/s, iops=10707, runt= 24522msec 
  cpu      : usr=9.19%, sys=22.76%, ctx=647405, majf=0, minf=9 

WOW! We can immediately see that the Instance Store is drastically faster than EBS in both IOPS and bandwidth.

Read performance is ~128000KB/s with 32052 IOPS on average.

Write performance is ~43000KB/s with 10707 IOPS on average.

That’s about 14X faster for both R/W bandwidth and IOPS.

Random R/W Conclusions

From looking at these stats we can see that the Instance Store offers vastly superior performance (Of about 14X in our comparison) when compared to a single EBS Volume, particularly when cost is considered.

The Instance Store is already included in the cost of our instance, while the EBS volume would have cost an additional ~$38 a month.

Benchmarking Conclusion

When it comes to raw performance per dollar, EC2 Instance Store is a clear winner over EBS.

It provides R/W bandwidth and IOPS 14 times higher than EBS at no additional cost.

EBS is definitely more scalable and has a number of other benefits over Instance Store, but achieving even remotely similar performance would require significantly increased cost and some additional configuration (For example, RAID striping).

Final Conclusion

Language is an important tool (and so is benchmarking apparently!).

After reading this I hope you’re able to wield your tools just a little better than yesterday.

If you happen to be looking for a company with proven expertise to help improve your Cloud infrastructure, why not give us a call?

You can easily schedule a free, one hour initial consultation below.

 
