
Time for the (not exactly) yearly cloud compute VM comparison. I started testing back in October 2025, but the benchmarking scope grew, not just due to more VM families being tested (44), but also due to testing the instances across more regions to capture the possible range of performance, as in many cases not all instances are created equal. I won't spoil much if I tell you that there is one new CPU that dominates the top-end results more clearly than any previous year.
Quick Overview
Like last time, this is all about generic CPU performance and especially what you can actually get per $ spent on compute VM instances. Due to the focus on CPU workloads, burstable instances are not included. Single-thread performance is evaluated separately, as there are always workloads that cannot be further parallelized. For multi-thread, each instance type is tested in a 2x vCPU configuration, which is usually the minimum unit you can order (for SMT-enabled systems - i.e. all Intel and most AMD - it corresponds to a single core). The more threads your workload can utilize, the more multiples of that unit you can order.
The comparison should help you maximize performance or price depending on your requirements, by either using the optimal VM types of your provider, or perhaps by launching on a different provider.
If you don't need all the details, you can use the TOC below to jump to what's relevant to you.
Table Of Contents:
I kept the same 7 providers as last year (down from a maximum of 10 providers in the 2023 comparison), but expanded to 44 VM types tested.
Other changes:
As mentioned, I will focus on 2x vCPU instances, as that's the minimum scalable unit for a meaningful comparison (and generally the minimum for several VM types), given that most AMD and Intel instances use Hyper-Threading (HT) / Simultaneous Multi-Threading (SMT). For those systems a vCPU is a Hyper-Thread, or half a core, with the 2x vCPU instance giving you a full core with 2 threads. This will become clear in the scalability section.
I am skipping some very old instance types that are obviously uncompetitive. I am still aiming for 2GB of RAM per vCPU (variably considered "compute-optimized" or "general-purpose") and a 30GB SSD (not high-IOPS) boot disk, so that the price comparison makes sense (exceptions will be noted).
The pay-as-you-go/on-demand prices refer to the lowest-cost region in the US (or Europe). For providers with variable pricing, the cheapest regions are almost always in the US. Unlike last year, I will not include the 100% sustained-use discounts for GCP, as they are not technically on-demand prices, so including them may have been unfair to other providers.
For providers that offer 1-year and 3-year committed/reserved discounted prices, I list the no-upfront price for each option. The prices were valid for January 2026 - please check current prices before making final decisions.
As a guide, here is an overview of the various generations of AMD, Intel and ARM CPUs from older (top) to newer (bottom), roughly grouped horizontally in per-core performance tiers, based on this and the previous comparison results:
| Instance Type | CPU type | RAM/SSD | Price $/Month | 1Y Res. $/Month | 3Y Res. $/Month | Spot $/Month |
|---|---|---|---|---|---|---|
| C5a.large (R) | AMD Rome | 4/30 | 64.45 | 41.09 | 29.41 | 29.08 |
| C5.large (S) | Intel Skylake | 4/30 | 68.10 | 45.47 | 31.60 | 28.02 |
| C6a.large (M) | AMD Milan | 4/30 | 63.83 | 43.04 | 29.45 | 28.20 |
| C6i.large (I) | Intel Ice Lake | 4/30 | 70.66 | 47.55 | 32.45 | 29.02 |
| C6g.large (G2) | AWS Graviton2 | 4/30 | 55.03 | 36.64 | 26.86 | 26.61 |
| C7a.large (G) | AMD Genoa | 4/30 | 84.82 | 56.92 | 38.69 | 32.07 |
| C7i.large (SR) | Intel Sapphire Rapids | 4/30 | 74.07 | 49.81 | 33.95 | 24.62 |
| C7g.large (G3) | AWS Graviton3 | 4/30 | 58.46 | 40.65 | 28.97 | 29.31 |
| C8a.large (T) | AMD Turin | 4/30 | 88.94 | 64.94 | 44.19 | 31.82 |
| C8i.large (GR) | Intel Granite Rapids | 4/30 | 77.65 | 51.84 | 35.43 | 28.74 |
| C8g.large (G4) | AWS Graviton4 | 4/30 | 66.22 | 44.62 | 30.50 | 27.93 |
Amazon Web Services (AWS) pretty much originated the whole "cloud provider" business - even though smaller connected VM providers predated it significantly (e.g. Linode comes to mind) - and still dominates the market. The AWS platform offers extensive services but, of course, we are only looking at their Elastic Compute Cloud (EC2) VM offerings for this comparison.
Two new CPUs have been introduced since last year. Intel's Granite Rapids makes an appearance, while the AMD EPYC Turin-powered C8a follows the previous C7a in having SMT disabled (providing a full core per vCPU). I don't want to spoil much, but if you take the fastest CPU by a margin and disable SMT, expect some impressive "per-2x vCPU" results...
With EC2 instances you generally know what you are getting (instance type corresponds to a specific CPU), although there's a multitude of ways to pay/reserve/prepay/etc. which makes pricing very complicated, and pricing further varies by region (I used the lowest-cost US regions). The 1Y/3Y reserved prices listed include no prepayment - you can lower them a bit further if you do prepay. The spot prices vary even more by region and are updated often (especially for newly introduced types), so you'll want to keep track of them.
| Instance Type | CPU type | RAM/SSD | Price $/Month | 1Y Res. $/Month | 3Y Res. $/Month | Spot $/Month |
|---|---|---|---|---|---|---|
| n2-2* (I) | Intel Ice Lake | 4/30 | 63.45 | 40.19 | 29.65 | 22.15 |
| n2d-2* (M) | AMD Milan | 4/30 | 55.46 | 35.22 | 26.06 | 13.10 |
| c2d-2 (M) | AMD Milan | 4/30 | 68.28 | 43.76 | 31.82 | 15.87 |
| t2d-2 (M) | AMD Milan | 8/30 | 63.68 | 40.86 | 29.76 | 12.14 |
| c3-4/2** (SR) | Intel Sapphire Rapids | 4/30 | 63.69 | 40.72 | 29.54 | 11.09 |
| c3d-4/2** (G) | AMD Genoa | 4/30 | 56.32 | 36.08 | 26.23 | 9.90 |
| n4d-2 (T) | AMD Turin | 4/30 | 53.77 | 34.46 | 25.08 | 22.47 |
| c4a-2 (AX) | Google Axion (Arm) | 4/30 | 56.90 | 38.09 | 26.49 | 19.74 |
| c4d-2 (T) | AMD Turin | 3/30 | 57.57 | 36.86 | 26.79 | 23.40 |
| n4-2 (E) | Intel Emerald Rapids | 4/30 | 57.47 | 36.80 | 26.74 | 19.50 |
| c4-2 (E) | Intel Emerald Rapids | 4/30 | 63.69 | 40.72 | 29.54 | 27.20 |
| c4-lssd-4/2** (GR) | Intel Granite Rapids | 8/30+375GB SSD | 103.75 | 65.45 | 47.57 | 43.70 |
* min_cpu_platform needs to be set to get tested CPU.
** Extrapolated 2x vCPU instance - type requires 4x vCPU minimum size.
The Google Cloud Platform (GCP) follows AWS quite closely, providing mostly equivalent services, but lags in market share (3rd place, after Microsoft Azure). We are looking at the Google Compute Engine (GCE) VM offerings, one of the most interesting with respect to configurability and the range of different instance types. However, this variety makes it harder to choose the right one for the task, which is exactly what prompted me to start benchmarking all the available types. To add extra confusion, some types may come with an older (slower) CPU if you don't set min_cpu_platform to the latest available for the type - so you need the extra configuration to get a faster machine for the same price.
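For illustration only (the instance name, zone and machine type below are placeholder values, and you should check which CPU platforms each zone actually supports), pinning the platform at creation time looks something like this:

```shell
# Create an n2d and pin the CPU platform so you don't silently land
# on an older EPYC generation. All names/values here are examples.
gcloud compute instances create bench-vm-1 \
    --zone=us-central1-a \
    --machine-type=n2d-standard-2 \
    --min-cpu-platform="AMD Milan"
```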
This year we have the addition of the AMD EPYC Turin (c4d and n4d); they are not yet in all regions/zones, but availability is expanding. We also saw the introduction of two Intel-based 4th-gen instance types (n4 and c4). Both feature Emerald Rapids; however, the latter can be configured with a local SSD, in which case it comes with the newer Intel Granite Rapids. Until GCP allows setting min_cpu_platform to Granite Rapids (they are considering it AFAIK), you have to pay for the extra SSD to get the performance. Last year I covered the introduction of the Google Axion-powered c4a ARM type separately, but this is the first time it appears in a full VM comparison.
At this point, I should mention that the reason I did more extensive testing across different regions this year is the disappointing real-world performance of Emerald Rapids compared to its showing in my original benchmarks. It seems that as it started to see real use, it exhibited a performance variance that looks consistent with boost behavior plus node contention (i.e. more sensitivity to noisy neighbors). I suspect this is why GCP offers the option to turn off boost clocks on Emerald Rapids instances for "consistent performance".
GCP prices vary per region and feature some strange patterns. For example, when you reserve, t2d instances (which give you a full AMD EPYC core per vCPU) and n2d instances (which give you a Simultaneous Multi-Thread, i.e. HALF a core, per vCPU) have the same price per vCPU, yet n2d is cheaper on demand and gets a 20% discount for sustained monthly use.
Note that c3, c3d and c4-lssd types have a 4x vCPU minimum. This breaks the price comparison, so I am extrapolating to a 2x vCPU price (half the cost of CPU/RAM + full cost of 30GB SSD). GCP gives you the option to disable cores (you select "visible" cores), so while you have to pay for 4x vCPU minimum, you can still run benchmarks on a 2x vCPU instance for a fair comparison.
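The extrapolation is simple arithmetic - halve the CPU/RAM portion of the 4x vCPU quote and add back the full disk cost. A sketch with illustrative (not quoted) prices:

```shell
#!/bin/sh
# Extrapolate a 2x vCPU monthly price from a 4x vCPU minimum-size quote:
# half the CPU/RAM cost plus the full cost of the 30GB boot SSD.
# Both input prices below are made-up placeholders, not real GCP quotes.
PRICE_4VCPU=105.54   # 4x vCPU instance incl. 30GB SSD, $/month
PRICE_SSD=2.90       # 30GB SSD on its own, $/month

awk -v p4="$PRICE_4VCPU" -v ssd="$PRICE_SSD" \
    'BEGIN { printf "%.2f\n", (p4 - ssd) / 2 + ssd }'
```

With these placeholder inputs the script prints 54.22 - the monthly price a hypothetical 2x vCPU configuration would have.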
| Instance Type | CPU type | RAM/SSD | Price $/Month | 1Y Res. $/Month | 3Y Res. $/Month | Spot $/Month |
|---|---|---|---|---|---|---|
| D2as_v5 (M) | AMD Milan | 8/32 | 65.18 | 39.40 | 26.26 | 15.16 |
| D2ls_v5 (I) | Intel Ice Lake | 4/32 | 64.45 | 38.98 | 25.98 | 17.37 |
| D2pls_v5 (A) | Ampere Altra | 4/32 | 52.04 | 31.65 | 21.26 | 11.57 |
| D2pls_v6 (CO) | Azure Cobalt 100 | 4/32 | 47.66 | 29.07 | 19.59 | 11.18 |
| D2ls_v6 (E) | Intel Emerald Rapids | 4/32 | 67.59 | 42.82 | 27.82 | 20.04 |
| D2als_v6 (G) | AMD Genoa | 4/32 | 61.09 | 37.07 | 24.71 | 13.36 |
Azure is the #2 overall cloud provider and, as expected, it's the best choice for most Microsoft/Windows-based solutions. That said, it does offer many types of Linux VMs, with capabilities quite similar to AWS/GCP. The various types are not as easy to use as on AWS/GCP though; for some reason, even enterprise accounts start with zero quota on many types, so I had to request quota increases just to test tiny instances.
The v6 instances are new for the comparison, featuring AMD EPYC Genoa, Intel Emerald Rapids and Azure's own Cobalt 100 ARM CPU.
The Azure pricing is at least as complex as AWS/GCP, plus the pricing tool seems worse. They also lag behind the other two major providers in CPU releases - Turin and Granite Rapids are still in closed preview at the time of writing this.
| Instance Type | CPU type | RAM/SSD | Price $/Month | Spot $/Month |
|---|---|---|---|---|
| Standard.E6 (T) | AMD Turin | 4/30 | 29.00 | 15.13 |
| Standard.A1 (A) | Ampere Altra | 4/30 | 20.24 | 10.75 |
| Standard.A2 (AO) | Ampere AmpereOne | 4/30 | 17.32 | 17.32 |
| Standard.A4* (AM) | Ampere AmpereOne M | 4/30 | 19.22 | 10.24 |
* Limited availability currently.
Oracle Cloud Infrastructure (OCI) was the biggest surprise in my 2023 comparison test. It was a pleasant surprise: not only does Oracle offer by far the most generous free tier (A1-type ARM VM credits equivalent to a sustained 4x vCPU, 24GB RAM, 200GB disk - for free, forever), their paid ARM instances were also the best value across all providers, especially on-demand. The free resources are enough for quite a few hobby projects - they would cost you well over $100/month on the big-3 providers.
Note that registration is a bit draconian to avoid abuse: make sure you are not on a VPN, and don't use "oracle" anywhere in the email address you register with. You start with a "free" account, which gives you access to a limited selection of services; apart from the free-tier-eligible A1 VMs, you'll struggle to build any other types with the free credit you get at the start.
Upgrading to a regular paid account (which still keeps the free-tier credits) gives you access to a wider selection of VMs. New this year are the AMD EPYC Turin Standard.E6 VMs and the next-generation ARM Standard.A4 type powered by the AmpereOne M CPU. If you recall from last year, the AmpereOne A2 instances were slower in quite a few tasks than the older Altra A1. Ampere really needed a step forward, and AmpereOne M (A4) finally delivers meaningful gains in this year's dataset. I had trouble building older-gen AMD instances, so in the end I did not include them. I also could only build Standard.A4 in one region (Ashburn), even though I tried in Phoenix, which Oracle had in the availability list, to no avail.
Oracle Cloud's prices are the same across all regions, which is nice. They do not offer any reserved discounts, but do offer a 50% discount for preemptible (spot) instances. One complication is that their prices are per "Oracle CPU" (OCPU). This seemed to make sense originally, as it corresponded to physical cores - the A1 instances had 1 OCPU per core, so 1 OCPU = 1 vCPU, while SMT x86 had 1 OCPU = 2 vCPUs (threads). But then, possibly just as their users were getting comfortable with it, they threw a wrench in by making 1 OCPU for the newer (still non-SMT) ARM types A2 and A4 equal to 2 vCPUs / 2 full cores. I can't think of a reason for this other than to confuse their customers.
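To compare OCPU pricing with the per-vCPU pricing of other providers, you just divide by the vCPUs per OCPU for each type. A sketch (the hourly rate is a placeholder; the vCPU-per-OCPU mapping is as described above):

```shell
#!/bin/sh
# Normalize a $/OCPU-hour rate to $/vCPU-hour.
# vCPUs per OCPU: 1 for Altra (A1), 2 for SMT x86 (2 threads),
# and 2 for the newer non-SMT ARM A2/A4 (2 full cores).
# The 0.0100 rate is a made-up example, not a real Oracle price.
ocpu_to_vcpu() {
    awk -v p="$1" -v n="$2" 'BEGIN { printf "%.4f\n", p / n }'
}

ocpu_to_vcpu 0.0100 1   # A1: same per-vCPU price
ocpu_to_vcpu 0.0100 2   # E6 / A2 / A4: half the per-vCPU price
```

For the same headline OCPU rate, the per-vCPU cost comes out at half for the 2-vCPU-per-OCPU types, which is why a direct OCPU-price comparison is misleading.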
| Instance Type | CPU type | RAM/SSD | Price $/Month |
|---|---|---|---|
| Linode 4GB* (M) | AMD Milan | 4/80 | 24.00 |
| G7 4x2 (M) | AMD Milan | 4/80 | 43.00 |
| G8 4x2 (T) | AMD Turin | 4/40 | 45.00 |
* Shared core.
Linode, the venerable cloud provider (predating AWS by several years), has now been part of Akamai for a few years.
From the previous years we saw that their shared-core types ("Linodes") are the best bang for buck, but it depends on what CPU you are assigned on creation. It seems that currently the most common configuration features an AMD EPYC Milan - I tried building quite a few and that's what you usually get (if you land an ancient Intel or AMD Rome, try again); I did not see any newer CPUs pop up. The latest EPYC Turin, though, is available as a dedicated-CPU instance. They now mark dedicated instances with their generation, so a G8 should always be the same CPU. As always, the dedicated instances come with SMT, so you are normally getting a core per 2 vCPUs, while the shared instances are virtual cores, so twice the vCPUs gives you twice the multi-thread performance - the caveat is that performance per thread varies depending on how busy the node that holds your VM is.
It is a bit of an annoyance that without testing your VM after creation you can't be sure of what performance to expect, unless you go for the more expensive dedicated VMs, but otherwise, Akamai/Linode is still easy to set up and maintain and has fixed, simple pricing across regions.
| Instance Type | CPU type | RAM/SSD | Price $/Month |
|---|---|---|---|
| Basic 2/4* (B) | Intel Broadwell | 4/80 | 24.00 |
| PremInt 2/4* (C) | Intel Cascade Lake | 4/120 | 32.00 |
| PremAMD 2/4* (R) | AMD Rome | 4/80 | 28.00 |
* Shared core.
DigitalOcean was close to the top of the perf/value charts a few years ago, providing the best value with their shared CPU Basic "droplets". I am actually using DigitalOcean droplets to help out by hosting a free weather service called 7Timer, so feel free to use my affiliate link to sign up and get $200 free - you will help with the free project's hosting costs if you end up using the service beyond the free period. Apart from value, I chose them for the simplicity of setup, deployment, snapshots, backups.
However, they seem to have stopped upgrading their fleet for quite a while now, so you end up with some very old CPUs. If you don't mind the low per-thread performance, they are still not a bad value, given the low prices. I like their simple, region-independent and stable pricing structure, but I wish they would upgrade their shared core data centers.
| Instance Type | CPU type | RAM/SSD | Price $/Month |
|---|---|---|---|
| CCX13 (M) | AMD Milan | 8/80 | 17.27 |
| CAX11 (A**)* | Ampere Altra | 4/40 | 5.46 |
| CPX22 (G**)* | AMD Genoa | 4/80 | 8.63 |
| CX23 (R**)* | AMD Rome | 4/40 | 4.31 |
| CX23 (S**)* | Intel Skylake | 4/40 | 4.31 |
* Limited / EU-only availability.
** Shared core.
Hetzner is a quite old German data center operator and web host, with a very budget-friendly public cloud offering. They are often recommended as a reliable extra-low-budget solution, and I've had much better luck with them than other similar providers.
On the surface, their prices seem to be just a fraction of those of the larger providers, so I did extended benchmark runs over days to make sure there was no significant oversubscribing - except perhaps on the cheapest variant (CX23). Only the CCX13 claims dedicated cores. Ironically, those dedicated instances vary significantly in performance depending on which data center you create them in. In the end, the CPX22 (AMD) and CAX11 (ARM) shared-core instances are the most stable in performance across instances and regions.
Note that the cheap shared-core types are not widely available: they are not found in the US regions, and at times they show no availability even in the European regions. And while I included a CX23 with EPYC Rome, you will normally get a slower Skylake. I will not include the shared instances in the price/performance charts this time around, as their limited availability does not make them equal contenders.
In order to do many more test runs, I streamlined the test suite into a docker image which you can run yourself. Almost all instances ran 64-bit Debian 13, although I had to use Ubuntu 24.04 on a couple, and Oracle's ARM instances were only compatible with Oracle Linux. To run the entire suite on a system with docker you would do:
docker run -it --rm dkechag/cloud-bench
The suite comprises:
As every year, the main weight is on my own benchmark suite, which you can now also run from its own docker image. It has proven very good at approximating real-world performance differences for the type of workloads we run at SpareRoom, and is also good at comparing single- and multi-threaded performance (with scaling to hundreds of threads if needed). To run DKbench by itself on a system with docker:
docker run -it --rm dkechag/dkbench
I created multiple instances in different regions and recorded min and max of all runs (both single-thread and dual-thread).
I have kept Geekbench 5, both because it can help you compare results with previous years and because Geekbench 6 seems to be much worse - especially in multi-threaded testing (I'd go as far as to say it looks broken to me).
I simply kept the best of 2 runs; you can browse the results here. There's an ARM version too at https://cdn.geekbench.com/Geekbench-5.4.0-LinuxARMPreview.tar.gz.
Apart from being popular, Phoronix benchmarks can help benchmark some specific things (e.g. AVX512 extensions) and also results are openly available.
I ran the following benchmarks:
phoronix-test-suite benchmark compress-7zip
Very common application and very common benchmark - average compression/decompression scores are recorded.
phoronix-test-suite benchmark nginx
Select option 3.
phoronix-test-suite benchmark openssl
Select option 1. This benchmark uses SSE/AVX up to AVX512, which might be important for some people. Older CPUs that lack the latest extensions are at a disadvantage.
Blender's Big Buck Bunny video was transcoded to an H264 mp4 via FFmpeg, both in single and dual-thread mode.
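A minimal sketch of that kind of run (the input filename is a placeholder; `-threads 1` caps the libx264 encoder at one thread, and you'd use `-threads 2` for the dual-thread pass):

```shell
# Single-thread H.264 transcode; timing the run gives the benchmark score.
# Input/output filenames are examples only.
time ffmpeg -i big_buck_bunny.y4m -c:v libx264 -preset medium \
    -threads 1 -an -y out_1thread.mp4
```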
The raw results can be accessed on this spreadsheet (or here for the full Geekbench results).
In the graphs that follow, the y-axis lists the names of the instances, with the CPU type in parenthesis:
(GR) = Intel Granite Rapids
(E) = Intel Emerald Rapids
(SR) = Intel Sapphire Rapids
(I) = Intel Ice Lake/Cooper Lake
(C) = Intel Cascade Lake
(S) = Intel Skylake
(B) = Intel Broadwell
(T) = AMD Turin
(G) = AMD Genoa
(M) = AMD Milan
(R) = AMD Rome
(G4) = Amazon Graviton4
(G3) = Amazon Graviton3
(G2) = Amazon Graviton2
(CO) = Azure Cobalt 100
(AM) = Ampere AmpereOne M
(AO) = Ampere AmpereOne
(A) = Ampere Altra
Single-thread performance can be crucial for many workloads. If you have highly parallelizable tasks you can add more vCPUs to your deployment, but there are many common types of tasks where that is not always a solution. For example, a web server can be scaled to service any number of requests in parallel; however, a vCPU's thread speed determines the minimum response time of each request.
We start with the latest DKbench, running the 19 default benchmarks (Perl & C/XS) which cover a variety of common server workloads. I tried to build 2-3 instances at different times across at least 3 regions (if the provider allowed), to get a min/max range of performance. Here are the results for single thread:
I think it's the first time in my series of comparisons that a CPU has had this clear a performance lead. AMD's EPYC Turin is simply a tier above anything else. AWS has the fastest setup with that CPU, while GCP's more expensive c4d seems to vary a lot in performance, whereas their own, cheaper n4d gave more consistent results. Overall, if you are looking for maximum performance per thread, EPYC Turin seems to be the answer if your cloud provider has it.
In the 2024 comparison Intel Emerald Rapids did quite well, but it turns out that is only the case on non-busy nodes, where the CPU allows for a generous boost - at least on GCP. This is reflected in the range you see on the graph. The new Granite Rapids seems to fix this, providing slightly higher, but mainly more stable, performance. So, a solid step forward from Intel - it's just that Turin is really impressive.
As we are waiting for AWS to release Graviton5 publicly, GCP’s Axion is the leader for ARM solutions, impressively offering EPYC Genoa-level performance per thread. I tested Azure's own Cobalt 100 for the first time - it sits between Graviton3 and Graviton4 performance. Ampere's new AmpereOne M finally offers some tangible improvement over the aging Altra, but only matches AWS's older Graviton3.
Lastly, among the lower-cost providers, DigitalOcean has lagged behind in performance, signaling that their fleet is due for an upgrade. Both Akamai and Hetzner offer some fast Milan instances, although with neither provider are you guaranteed what performance level you will get when creating an instance - hence the variation shown in the chart. It's not oversubscribing - the performance is stable - it's just that groups of servers are set up differently.
DKbench runs the benchmark suite single-threaded and multi-threaded (2 threads in this comparison as we use 2x vCPU instances) and calculates a scalability percentage. The benchmark obviously uses highly parallelizable workloads (if that's not what you are running, you'd have to rely more on the single-thread benchmarking). In the following graph 100% scalability means that if you run 2 parallel threads, they will both run at 100% speed compared to how they would run in isolation. For systems where each vCPU is 1 core (e.g. all ARM systems), or for "shared" CPU systems where each vCPU is a thread among a shared pool, you should expect scalability near 100% - what is running on one vCPU should not affect the other when it comes to CPU-only workloads.
Most Intel/AMD systems though give you a single core that has 2x threads (Hyper-Threads / HT in Intel lingo - or Simultaneous Multi Threads / SMT if you prefer) as a 2x vCPU unit. Those will give you scalability well below 100%. A 50% scalability would mean you have the equivalent of just 1x vCPU, which would be very disappointing. Hence, the farther up you are from 50%, the more performance your 2x vCPUs give you over running on a single vCPU.
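The scalability figure itself is just the 2-thread score relative to twice the single-thread score. A sketch with made-up scores:

```shell
#!/bin/sh
# Scalability % = multi-thread score / (2 x single-thread score) * 100.
# 100% = both vCPUs run at full speed; 50% = the 2nd vCPU adds nothing.
# The two scores below are made up for illustration.
SINGLE=10.0   # single-thread benchmark score
MULTI=15.0    # 2-thread benchmark score

awk -v s="$SINGLE" -v m="$MULTI" \
    'BEGIN { printf "%.1f%%\n", m / (2 * s) * 100 }'
```

For these inputs it prints 75.0% - typical SMT territory, well above the 50% floor but short of the ~100% you'd expect from two full cores.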
As expected, the ARM and shared CPUs are near 100%, i.e. you are getting twice the multithreaded performance going from 1x to 2x vCPUs. You also get that from three x86 types: AWS's Genoa C7a and Turin C8a alongside GCP's older Milan t2d.
From the rest we note that, traditionally, AMD does SMT better than Intel, although the latter has improved from the dismal Ice Lake days when it barely managed over 50%.
Bizarrely, the Akamai AMD Turin instances give an unusually high (given SMT) scalability of 71.9%. I have verified the result several times, and I can't figure out what their setup is - at the same time, their single-threaded performance is very low compared to every other Turin.
From the single-thread performance and scalability results we can guess how running DKbench multithreaded will turn out, but in any case here it is:
Give the clearly fastest instance two full cores instead of threads and you get the Turin-powered AWS C8a completely dominating the chart. Interestingly, the Google Axion seems at least as good here as the leader from the previous comparison, the Genoa C7a - with Graviton4 very close and Cobalt 100 trailing not far behind.
The SMT-enabled Turin instances follow, with the Top-10 completed by the venerable Milan in a non-SMT Tau instance. Long-time followers of these comparisons may remember it topped the chart in the 2023 edition.
At the bottom, as expected, we have the very old Intel Broadwell/Skylake, the not-as-old Ice Lake, and AMD Rome.
The old Geekbench 5 is provided for comparison reasons (and I don't trust Geekbench 6):
Both for single and multi-core, the results are very close to what we get with DKbench. Which is a good thing, as both suites try a range of benchmarks to get a balanced generic CPU score.
Moving on to some popular specific benchmarks - starting with 7zip which is sensitive to memory latency and cache:
While Turin still leads overall, Axion and Graviton4 are impressive and actually even beat it in the decompress part of the benchmark. In fact, Cobalt 100 is the top performer for decompression, but overall the ARM solutions show great performance.
Results from the 100 connections benchmark:
Another Turin showcase, with the non-SMT AWS C8a in particular almost doubling the score of the second and tripling the score of the C7a. Granite Rapids is also making a great showing.
It's the first time I am running this popular benchmark, and I am a bit puzzled about some of the Milan types coming last.
Another first for this comparison is video compression using FFmpeg and libx264. Results for both single and dual-thread mode:
Once more, EPYC Turin comes first. If we look at single-thread performance, only Granite Rapids comes somewhat close. When using 2 full cores, Axion can pull ahead of all SMT (i.e. single-core) instances except Turin.
Lastly, in case you have software that can be accelerated by AVX512, I am including an OpenSSL RSA4096 benchmark. These are Intel's extensions, so they are present on all their server CPUs since Skylake, whereas Genoa was the first AMD CPU to implement them. Older AMD CPUs and ARM architectures will be at a disadvantage in this benchmark:
Like in our previous comparison, AMD outperforms Intel at their own game. It's quite a margin for Turin and even Genoa is ahead of anything Intel. Intel does not seem to be prioritising vector performance, as even the latest Granite Rapids does not bring much improvement over the aging Ice Lake.
As expected, ARM and older AMD CPUs that don't support AVX512 are slower than Intel Skylake and newer.
One factor that is often even more important than performance itself is the performance-to-price ratio.
I will start with the "on-demand" price quoted by every provider. While I listed monthly costs on the tables, these prices are actually charged per minute or hour, so there's no need to reserve for a full month.
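The ratio used in the charts is simply the benchmark score divided by the monthly cost. A sketch with illustrative numbers:

```shell
#!/bin/sh
# Performance/price = benchmark score per $ spent per month.
# Both values below are illustrative, not taken from the tables above.
SCORE=12.5     # e.g. a DKbench score
MONTHLY=50.0   # on-demand $/month

awk -v sc="$SCORE" -v pr="$MONTHLY" \
    'BEGIN { printf "%.3f\n", sc / pr }'
```

This yields 0.250 here; higher is better, and the same calculation applies to the reserved and spot prices in the later sections.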
The first chart is for single-thread performance/price. I will have to separate Hetzner's shared instances because they are not available in the US and sometimes run out even in Europe (esp. CX23), so I feel they are not exact competition - CCX13 though is available and is included.
Hetzner and Oracle top the list like last year. However, thanks to the incredible performance of Turin, Oracle pretty much matches Hetzner's dedicated instance in performance-to-cost. They are followed by Linode and GCP's n4d - the latter, again thanks to the leading single-thread performance of AMD's latest CPU, even manages to offer better value than DigitalOcean, which is in turn followed by in-house ARM solutions like Google Axion and Azure Cobalt 100.
AWS is definitely the worst value on-demand. Their Turin is the best they can do, while their previous gen and older CPUs are the worst values on the table. Unlike the previous comparison, even Azure seems to do better in value.
At this point I think we should see the limited availability Hetzner VMs in comparison to the best value dedicated:
The inexpensive shared-cpu types offer unbeatable value - if you manage to get them. The top one overall (Rome CX23) is actually the hardest to provision, as the CX23 type usually gives you a slow Skylake.
Moving on to 2x threads for evaluating multi-threaded performance:
All the non-SMT VMs get a bump here, hence Oracle's ARM instances take the lead with the new AmpereOne M, with Hetzner and shared-core Linode following closely. The second tier consists of Google Axion and Azure Cobalt 100, as well as DigitalOcean droplets. AWS's non-SMT Turin is not that far behind this time, although their older gen-5/6 x86 types are again at the very bottom of the chart.
The Hetzner shared-core instances get the bump as well; they provide superb on-demand value compared to the competition:
The three largest (and most expensive) providers offer significant 1-year reservation discounts. To get the maximum discount you have to lock into a specific VM type, which is why it is extra important to know what you are getting out of each. Also, on AWS you can automatically apply the 1-year prices to most on-demand instances by using third-party services like DoIT's Flexsave (included in their free tier!), so this segment may still be relevant even if you don't want to reserve.
The first chart is again for single-thread performance/price.
The 1-year discount is enough for GCP's Turin to match Oracle near the top of the value ranking. On Azure you get some good value running Cobalt 100 or Genoa. If you are on AWS, your best bet is the latest C8 family.
Moving on to evaluating multi-threaded performance using 2x vCPUs:
OCI ARM instances are still at the top, joined by Azure Cobalt 100, with Axion almost keeping up. This is the first chart where AWS can offer similar value, thanks to the C8a: the fast Turin offering twice the physical cores makes up for the higher price.
Finally, for very long term commitments, AWS, GCP and Azure provide 3-year reserved discounts:
GCP with its Turin instances finally comes out just ahead of Oracle and even Hetzner's dedicated VM. Azure also provides good value with its Cobalt 100 and Turin types. It should be noted that even though AWS lags behind the other two, at a 3-year commitment it still offers better value than the "classic" value providers Akamai and DigitalOcean.
Switching to multi-thread, the number of physical cores per vCPU makes the difference:
I didn’t expect this, but Azure Cobalt 100 tops the chart! It is followed by GCP and OCI ARM solutions, but AWS's and GCP's Turin are not far behind.
The large providers (AWS, GCP, Azure, OCI) offer their spare VM capacity at an - often heavy - discount, with the understanding that these instances can be reclaimed at any time when needed by other customers. This "spot" or "preemptible" VM instance pricing is by far the most cost-effective way to add compute to your cloud. Obviously, it is not applicable to all use cases, but if you have a fault-tolerant workload or can gracefully interrupt your processing and rebuild your server to continue, this might be for you.
AWS and OCI will give you a 2-minute warning before your instance is terminated. Azure and GCP will give you 30 seconds, which should still be enough for many use cases (e.g. web servers, batch processing etc).
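On AWS, for example, the interruption notice is exposed through the instance metadata service; a minimal sketch of a watcher (only meaningful from inside an EC2 instance, so treat this as illustrative):

```shell
#!/bin/sh
# Returns success (0) once a spot interruption has been scheduled.
# The spot/instance-action metadata path returns 404 until AWS issues
# the 2-minute warning, then 200 with a small JSON body.
check_spot_interruption() {
    code=$(curl -s -o /dev/null -w '%{http_code}' \
        http://169.254.169.254/latest/meta-data/spot/instance-action)
    [ "$code" = "200" ]
}

# Example usage on the instance:
#   while ! check_spot_interruption; do sleep 5; done
#   # ...checkpoint work / drain connections, then shut down.
```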
The discount for Oracle's instances is fixed at 50%, but for the other providers it varies wildly per region and can change often, so you have to stay on top of it to adjust your instance types accordingly.
For a longer discussion on spot instances see 2023's spot performance/price comparison. Then you can come back to this year's results below.
Applying the lowest January 2026 US spot prices we get:
Oracle's Turin will always be top value, as it has a fixed spot price. Of the big 3, GCP and Azure offer the deepest discounts (on their Genoa and Cobalt 100 types), with the former taking top place here. Compared to the 3-year reservation chart, you are getting about twice the performance per dollar. AWS is much less generous; if you are on their cloud, Turin is once more your best bet. But even on AWS, spot instances give you better value than the other low-cost providers.
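For reference, the value metric behind these charts is simply benchmark score divided by cost. A toy sketch with made-up instance names, scores, and prices (not actual quotes from any provider):

```python
# Illustrative perf-per-dollar ranking; names, scores and hourly prices
# are placeholders, not real quotes from any provider.
instances = [
    {"name": "provider-a-turin-2vcpu", "score": 2100, "usd_hr": 0.076},
    {"name": "provider-b-arm-2vcpu",   "score": 1800, "usd_hr": 0.052},
    {"name": "provider-c-genoa-2vcpu", "score": 1900, "usd_hr": 0.071},
]

def perf_per_dollar(inst, hours=730):  # ~1 month of uptime
    """Benchmark score per dollar of monthly spend."""
    return inst["score"] / (inst["usd_hr"] * hours)

ranked = sorted(instances, key=perf_per_dollar, reverse=True)
for inst in ranked:
    print(f'{inst["name"]}: {perf_per_dollar(inst):.1f} score per $/month')
```

The same arithmetic applies whether the price fed in is on-demand, 1-year, 3-year reserved, or spot; only the usd_hr column changes between charts.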
The multi-thread chart:
Azure's Cobalt 100 tops the chart with OCI's AmpereOne M following. Interestingly, in third place and best for the GCP cloud, is the aging t2d Milan which was noted as a great value in previous years. AWS once more has Turin saving the day by just making it into the top 10. You can get great value with all providers that offer spot instances, but you do have to monitor prices.
As always, I provide all the data so you can draw your own conclusions. If you have highly specialized workloads, you may want to rely less on my benchmarks. However, for most users doing general computing, web services, etc. I'd say you are getting a good idea about what to expect from each VM type. In any case, I'll share my own conclusions, some reasonably objective, others perhaps somewhat subjective.
Let’s begin with some quick take-aways, especially for things that are new this year:
I'll further comment with my picks for various usage scenarios:
Finally, I'll make some comments per provider. Besides, we can't always pick a provider or switch, so we have to try to work with what's available to us.
[…] min_cpu_platform="Intel Ice Lake") come out ahead. […] check cat /proc/cpuinfo on Linux, otherwise you might be paying the same for less. In most of my test instances I would indeed get a Milan though. At least for dedicated instances you now select a "generation", so basically you pick the CPU. They are not as good value as the Linodes, but the G8 is the best of them in both performance and value, although it's kind of a bizarre Turin setup where single-thread performance is on a lower tier than any other provider, but SMT gives a surprisingly (and, to me, inexplicably) high boost when multi-threading.

Finally, remember that choosing a cloud provider involves considering network costs, fluctuating prices, regional requirements, RAM, storage, and other factors that vary between providers. This comparison will only assist with part of your decision.
I just ran some massive tests on our own CI. I use AMD Turin for this on gcp, which was noted as one of the fastest ones in the article.
The most insane part here is that the AMD EPYC 4565P can beat the Turins used by the cloud providers, by as much as 2x in single core.
Our tests took 2 minutes on GCP, 1 minute flat on the 4565P, with its boost to 5.1GHz holding steady vs only 4.1GHz on the GCP ones.
GCP charges $130 a month for 8vcpus. ALSO this is for SPOT that can be killed at any moment.
My 4565p is a $500 cpu... 32 vcpus... racked in a datacenter. The machine cost under 2k.
i am trying hard to convince more people to rack themselves, especially for CI actions. With the cloud provider charging $130/mo for 3x fewer vCPUs, you break even in a couple months; it doesn't matter if it dies a few months later. On top of that you're getting fully dedicated hardware and 2x the perf. Anyways... glad to see I chose the right cpu type for gcloud even though nothing comes close to the cost/perf of self racking
Hetzner charge between €10 and €48 for an 8vcpu setup, depending on how many other users you're happy to share with.
For €104/mo you can get a 16-core Ryzen 9 7950X3D (basically identical to your 4565p) w/ 128GB DDR5, 2x2TB PCIE Gen4 SSD.
That's not to say you're wrong about dedicated being much better value than VPS on a performance per dollar basis, but the markup that the European companies charge is much, much lower compared to what they'd charge in the US.
In this instance you're looking at a ~17 month payback period even ignoring colo fees. Assuming the ~$100 colo fee that a sibling comment suggested, you're looking at closer to 8 years.
Great points. If we’re going to talk about dedicated servers and long lock-in contracts, you have to look at the equivalent prices for hosted alternatives.
It’s fun to start thinking about building your own server and putting in a rack, but there’s always a lot of tortured math to compare it to completely different cloud hosted solutions.
One of the great things about cloud instances is that I can scale them up or down with the load without being locked into some hardware I purchased. For products I’ve worked on that have activity curves that follow day-night cycles or spike on holidays, this has been amazing. In some cases we could auto scale down at night and then auto scale back up during the day. As the user base grows we can easily switch to larger instances. We can also geographically distribute servers and provide lower latency.
There is a long list of benefits that are omitted when people make arguments based solely on monthly cost numbers. If we’re going to talk about long term dedicated server contracts we should at least price against similar options from companies like Hetzner.
> One of the great things about cloud instances is that I can scale them up or down with the load without being locked into some hardware I purchased. For products I’ve worked on that have activity curves that follow day-night cycles or spike on holidays, this has been amazing. In some cases we could auto scale down at night and then auto scale back up during the day.
At work we have this day / night cycle. But for some reason we're married to AWS. If we provisioned 24/7/365 a bunch of servers at Hetzner or such to cover the peaks with some margin, it would still be cheaper by a notable margin. Sure, 90% of them would twiddle their thumbs from 10 PM to 10 AM. So what?
Sure, if your clients are completely unpredictable and you'll see x100 traffic without notice, the cloud is great.
But how many companies are actually in that kind of situation? Looking back over a year or two, we're quite reliably able to predict when we'll have more visitors and how many more compared to baseline. We could just adjust the headroom to be able to take in those spikes. And I suppose if you want to save the environment, you could just turn off the Hetzner servers while they sit unused.
I’ve run MP game servers that follow this pattern. A good rule of thumb is that you should cover 75% of your peak load with your cheaper steady-state pre-allocated machines, and burst for the last 25%. It really is that expensive to do.
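That 75/25 rule of thumb is easy to model. A hedged sketch with hypothetical hourly rates and a simple day/night load curve (the numbers are illustrative, not any provider's pricing):

```python
# Hypothetical hourly rates; the point is the structure, not the numbers.
RESERVED_HR = 0.05   # committed / pre-allocated capacity
ON_DEMAND_HR = 0.12  # burst capacity

def monthly_cost(peak_vcpus, load_curve, base_fraction=0.75, hours=730):
    """Cost of covering `base_fraction` of peak with reserved capacity
    and bursting the rest on demand. `load_curve` maps hour-of-day (0-23)
    to utilization as a fraction of peak."""
    base = peak_vcpus * base_fraction
    cost = base * RESERVED_HR * hours          # reserved runs 24/7
    per_day = sum(max(0.0, peak_vcpus * load_curve(h) - base)
                  for h in range(24))          # burst vCPU-hours per day
    cost += per_day * (hours / 24) * ON_DEMAND_HR
    return cost

# Day/night curve: full load for 12 hours, 30% load for the other 12
curve = lambda h: 1.0 if 8 <= h < 20 else 0.3
print(monthly_cost(100, curve))
```

Sweeping base_fraction for your actual load curve shows where the crossover sits; with steep on-demand premiums the optimum tends to land near the 75% the comment suggests.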
If you can reasonably estimate your usage and the peak total usage is less than ~5x the minimum, it still makes sense to just rent hardware at Hetzner.
You even have the possibility of managed racks, whereby you rent one or more racks, but the servers are still provided by Hetzner so you don't have to handle procurement, logistics or replacements.
I'd be terrified to run anything other than a classic web server on Hetzner, have heard too many stories of them arbitrarily terminating workloads they didn't understand.
> My 4565p is a $500 cpu... 32 vcpus... racked in a datacenter. The machine cost under 2k.
> The cloud provider charging $140 / mo for 3x less vcpus you break even in a couple months, it doesn't matter if it dies a few months later
How do you calculate break even in a couple months if the machine costs $2,000 and you still have to pay colo fees?
If your colo fees were $100/month you wouldn’t break even for over 4 years. You could try to find cheaper colocation, but even with free colocation your example doesn’t break even for over a year.
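The arithmetic being argued about here fits in a few lines. A sketch using the thread's numbers ($2k machine, $130/mo cloud bill), with colo fees as a variable:

```python
import math

def payback_months(hardware_cost, cloud_monthly, colo_monthly):
    """Months until self-racked hardware beats an equivalent cloud bill.
    Returns inf if colo fees eat the whole saving."""
    saving = cloud_monthly - colo_monthly
    if saving <= 0:
        return math.inf
    return hardware_cost / saving

# Numbers from the thread: $2k machine vs a $130/mo cloud bill
print(payback_months(2000, 130, 0))    # free colo: ~15 months
print(payback_months(2000, 130, 100))  # $100/mo colo: ~67 months
```

The comparison is only fair if the cloud_monthly figure reflects capacity you would actually have rented, which is the crux of the disagreement above.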
The $140/mo is for 3x fewer vCPUs, so it's $420/mo in savings if you use all those same cores. Sorry for the poor comparison wording there. In a few months you're already up to $1300+; by 6 months you've already paid off the machine.
colo fees are cheap if you need more than just 1U. Even with a $50-100 fee you easily get way more performance and come out ahead within a year
> by 6 months already paid the machine.
You originally said “a couple months” but now it’s 6 months and an assumption of $0 colocation fees, which isn’t realistic
In my experience situations rarely call for precisely 32 cores for a fixed period of 3 years to support calculations like this anyway. We start with a small set of cloud servers and scale them up as traffic grows. Today’s tooling makes it easy to auto scale throughout the day, even.
When trying to rack a server everyone aims higher because it sucks to start running into limits unexpectedly and be stuck on a server that wasn’t big enough to handle the load. Then you have to start considering having at least two servers in case one starts failing.
Racking a single self-built server is great for hobby projects but it’s always more complicated for serving real business workloads.
Don't nit-pick the "couple". It was used casually - like to mean not terribly long time. So the 2-6 spread, while technically big, is still just a trifle. While I'm nit-picking; up thread is talking about a limited box for CI and you're talking about scaling up real business workloads. That's just like the difference between 2 and 6. Give it a rest.
Everyone: run your scenarios and expectations in a spreadsheet and then use real data to run your CBA. Your case will be unique(ish) so make your case for your situation.
> So the 2-6 spread, while technically big, is still just a trifle.
I think you’re misreading. Even the 6-month claim was based on an invalid assumption of $0 colocation fees. Add in even cheap colocation fees and it’s pushed out even further
That’s not really a nit pick when the claims were based on impossible math. It’s more of a Motte and Bailey where they come in with a “couple of months” claim that sounds awesome on the surface but then falls back to a completely different number if anyone looks at the details.
It’s even dumber than that.
Let’s not forget that if even three engineers are working on this migration for only a week, your cost is now tens of thousands for this couple-hundred-euro cost saving.
(assuming avg all-in engineer costs in europe)
It makes no sense to optimise cost for infrastructure mostly, it does make sense to make it faster, since almost all your spend is on engineers.
Spending thousands to save hundreds is not a healthy business.
yeah thanks for that i was just meaning a very fast return
You can take a hybrid approach and use the rack for base capacity, cloud for scaling.
Minor point, but I have seen colocation costs around $30-40 in some locations as well. $100 is usually reserved for, say, colocating within Hetzner for example (iirc)
Just as a rule of thumb: if your servers cost more than $1k per month, or maybe even $500, and you are okay with colocation and everything, I have found it to break even (more so than even the cheapest providers like Hetzner; for GCP or anything that charges significantly more, a deeper analysis of colocation vs dedicated servers, or even VPS for short burstable units, may be warranted)
I used to run a site that compares prices[0]. Not only is the ecosystem pull to the cloud strong, but many developers today look at bare metal as downright daunting.
Not sure where that fear comes from. Cloud challenges can be as or more complex than bare metal ones.
> Cloud challenges can be as or more complex than bare metal ones.
Big +1 to this. For what I thought was a modest sized project it feels like an np-hard problem coordinating with gcloud account reps to figure out what regions have both enough hyperdisk capacity and compute capacity. A far cry from being able to just "download more ram" with ease.
The cloud ain't magic folks, it's just someone else's servers.
(All that said... still way easier than if I needed to procure our own hardware and colocate it. The project is complete. Just delayed more than I expected.)
> The cloud ain't magic folks, it's just someone else's servers.
The cloud is where the entire responsibility for those servers lives elsewhere.
If you're going to run a VM, sure. But when you're running a managed db with some managed compute, the cost for that might be high in comparison. But you just offloaded the whole infra management responsibility. That's their value add
But any serious deployment of "cloud" infrastructure still needs management, you're just forcing the people doing it to use the small number of knobs the cloud provider makes available rather than giving them full access to the software itself.
not sure what you mean by a serious deployment, but a lot of companies will be perfectly fine with, some compute, object storage and a managed rdbms.
Will that be more expensive than running it yourself? Absolutely. Does it allow teams to function and deliver independently, yes. As an org, you can prioritize cost or something else.
> a lot of companies will be perfectly fine with, some compute, object storage and a managed rdbms.
Right, and who or what causes those services to be provisioned, to be configured, etc.?
The cloud is magic. If it is down nobody is in trouble. You just throw your hands in the air and say oh azure / aws / gcloud is down.
But if you are the admin of a physical machine you are in deep trouble.
> Not sure where that fear comes from.
Probably because most developers these days have not known a world without using cloud providers, with AWS being 20 years old now.
Racking your own hardware doesn’t get you web UIs and APIs out of the box. At least it didn’t 2 decades ago.
Sure, now it does however (via the many OSS PaaS) so the calculus must also therefore change.
Which OSS PaaS are there that are noteworthy? Or do you mean something like Kubernetes?
Coolify is usually loved by the community.
Dokploy is another good one.
Kubero seems nice for more kubernetes oriented tasks.
But I feel like if someone is having a single piece of hardware as the OP did. Kubernetes might not be of as much help and Coolify/Dokploy are so much simpler in that regards.
Thanks. I will look into those.
I suppose kubernetes with the right operators installed and the right node labels applied could almost work as a self service control plane. But then VMs have to run in kubevirt. There is crossplane but that needs another IaaS to do its thing.
Partitioning a server! Omg lol
It’s funny, bc AWS did not start this sort of business. What they did do is make it possible to pay by the hour. The ephemeral spare compute is what they started.
Yet almost nobody understood the ephemeral part.
You might even be better off running a Mac mini at home on fiber, especially for backend processing
The fragmentation and friction! Comparing prices usually requires 10 open browser tabs and a spreadsheet, which is what keeps people locked into their default cloud. I built a tool to solve this called BlueDot (ie, Earth, where all the clouds are)[0]. It’s a TUI that aggregates 58,000+ server configurations across 6 clouds (including Hetzner). It lets you view side-by-side price comparisons and deploy instantly from the terminal. It makes grabbing a cheap Hetzner box just as easy as spinning up something on AWS/GCP.
I use serververify which is created by jbiloh from the lowendtalk forum which uses yabs (yet-another-benchmark-script) to give details about lot more things than usually meets the eye.
That being said, I have found getdeploying.com to be a decent starting point as well if you aren't too well versed within the Lowend providers who are quite diverse and that comes at both costs and profits.
Btw legendary https://vpspricetracker.com (which was one of the first websites that I personally had opened to find vps prices when I was starting out or was curious) is also created by jbiloh.
So these few websites + casually scrolling LET are enough for me nowadays to find the winner, with infinitely more customizability. I understand the point of a TUI, but the whole hosting industry has always revolved around websites from the start, so they are generally less interested in making TUIs for such projects. At least that's my opinion
Thanks! A planned next iteration is to include more non-mainstream cheap providers in the TUI. But that's not as simple as the current model which wraps official CLIs, as these alt providers typically don't have CLIs and diverse listing and control surfaces.
Self-racking lets you rack a bunch of gear you'd never find in VM/dedicated rentals, like consumer parts or older, still very good parts. Overclocking options are available as well if you DIY.
If you need single-threaded performance, colo is really the only way to go anyway.
We have two full racks and we're super happy with them.
Or under clocking and under volting for even better performance to price/power/longevity ratios
For a single rack, you really don’t have too many choices for power. You make a choice to provision and pay, I never had anyone check how much of that I used and give me money back. Maybe things have changed though.
No doubt. Especially for GPU inference at scale. We overclock/overvolt for training and tune way down for inference.
You can go on OVH and get a dedicated server with 384 threads and a Turin cpu for $1147 a month. You have to pay $1147 for installation and the default has low ram and network speeds but even after upgrading those it's going to be 1/5 of what it would cost on public clouds.
This is basically the premise of https://www.blacksmith.sh/ as far as I know, though without the need to host the hardware yourself and the potential complexity that comes with that.
I did have some MySQL servers racked for over a decade and I was afraid to restart the machines. And yes as new versions of MySQL came out I did have to compile them myself.
Similar lower specced machines that were closer to the public internet had boot disk failures, but I had a few of them, so it wasn’t an issue. Spinning metal and all.
One of the db servers dying would have required a next day colo visit… so I never rebooted.
Big cloud is ludicrously expensive. It’s truly amazing. Bandwidth is even worse. It’s like a 10000X markup.
It’s wild that no one knows just how cheap bandwidth really is. AWS pulled one over on people and it’s like the movie studios still demanding 10% of the top for VHS distribution. Today.
That’s with every industry
Make things look like a complicated black box. Make sure it feels scary to roll your own. Hide the core technical skills behind abstracted skills
Cloud has done a truly epically awesome job at this. People are now afraid to set stuff up.
"vCPUs" are a bit of a scam in my experience. You usually don't get what the hardware (according to /proc/cpuinfo) is capable of.
Just want to say something in defence of cloud providers
- sometimes you need to limit the list of available CPU features to allow live migration between different hypervisors
- even if you migrate the virtual machine to the latest state-of-the-art CPU, /proc/cpuinfo won't reflect it (Linux would go crazy if you tried to switch the CPU information on the fly); the frequency boost would be measurable though, just not via /proc/cpuinfo
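Either way, it is worth checking what model string your vCPUs actually report. A small sketch that parses the "model name" fields out of /proc/cpuinfo contents (the sample string below is a placeholder, not a real part number):

```python
def cpu_models(cpuinfo_text):
    """Extract the distinct 'model name' values from /proc/cpuinfo contents."""
    models = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("model name"):
            models.add(line.split(":", 1)[1].strip())
    return sorted(models)

# On a live Linux VM you would read the file directly:
# with open("/proc/cpuinfo") as f:
#     print(cpu_models(f.read()))

# Placeholder sample, two vCPUs reporting the same (fictional) part:
sample = (
    "processor\t: 0\nmodel name\t: AMD EPYC Example 9xx5\n"
    "processor\t: 1\nmodel name\t: AMD EPYC Example 9xx5\n"
)
print(cpu_models(sample))  # ['AMD EPYC Example 9xx5']
```

As the comment above notes, after a live migration this string can describe the original host rather than the one you are currently on, so treat it as a hint rather than ground truth.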
> i am trying hard to convince more people to rack themselves especially for CI actions.
What do you think the typical duty cycle is for a CI machine?
Raw performance is kind of meaningless if you aren't actually using the hardware. It's a lot of up front capex just to prove a point on a single metric.
Raw performance, in the sense of single core performance, is still one of the most important factors for us. Yes, we have parallelised tests, in different modules, etc. But there are still many single threaded operations in the build server. Also, especially in the cloud, IO is a bottleneck you can almost only get around by provisioning a bigger CPU.
Our CI runs smaller PR checks during the day when devs make changes. In the “downtime” we run longer/more complex tests such as resilience and performance tests. So typically a machine is utilised 20-22/7.
A 16-core 4565p is of course faster in max single thread speed than a 96-core that GCP is running at an economically optimal base clock.
A year ago I gave a talk about optimizing Cloud cost efficiency and I did a comparison of colocation vs cloud over time. You might find it interesting here, linking to the relative part: https://youtu.be/UEjMr5aUbbM?si=4QFSXKTBFJa2WrRm&t=1236
TLDR, colocation broke even in 6 to 18 months for on-demand and 3y reserve cloud respectively. But spot instances can actually be quite cheaper than colocation.
You generally don't go to the cloud for the price (except if we are talking hetzner etc).
Yeah, I expected this benchmark to include hosted "metal" hardware with the "per instruction cost" benchmark to see how providers like Hetzner fare against classic AWS VMs. It's a bit apples to oranges, I know, but I think nowadays that's what most people comparing pure performance cost are interested in. I'm not going to migrate from AWS VMs to GCP or Hetzner VMs, but I might be open to Hetzner hosted servers instead for a massive enough cost reduction.
> ... but I might be open to Hetzner hosted servers instead for a massive enough cost reduction.
Don't use Hetzner for anything actually important to you. :(
A good business would send you a warning a month before your credit card expires, not after the fact.
For some reason parent is using the word "expired" when they really mean "cancelled by the issuing bank".
To be honest, I find it hard to believe this is common. They have been around for ages and are quite beloved by many. Maybe something went wrong in this case?
Guess I will find out, think my cc expires soon.
Also, you can pay by bank transfer, at least for dedicated.
> To be honest, I find it hard to believe this is common.
I agree. But it still happened, with literally no warning (I actually checked), and their support staff refused to even call me to get updated card details when I was in the middle of an actual cyclone. ie phone service worked, internet didn't
Directly impacting our customers, who were extremely unhappy (to say the least).
"Fuck Hetzner!" is not nearly strong enough to convey the sentiment.
I mean, the context here is that a company stopped providing services after a bank cancelled a credit card they had been charging.
For all they know, your legitimate charges were the fraudulent charges that triggered the cancellation.
I cannot fathom why you keep using the term "expired" when that is a very different scenario to "cancelled by the issuing bank".
> For all they know, your legitimate charges were the fraudulent charges that triggered the cancellation.
Literally years of paying the bills. ;)
> I cannot fathom why you keep using the term "expired" when that is a very different scenario to "cancelled by the issuing bank".
That seems like a you problem. No worries, hope your day is going ok.
> That seems like a you problem.
I dunno man, it wasn't me having a breakdown in public because I forgot to update a biller after I cancelled my card.
Both Datapacket & OVH have the 4565p.
This proc is a hidden gem.
For most workloads it’s not just the most performant, but also the best bang-for-buck.
I don't see the 4565P at Datapacket or OVH. But that doesn't invalidate your comment.
They have the higher cache variant (4585PX - same clock speed & core count)
> The most insane part here is that the AMD EPYC 4565p can beat the turin's used on the cloud providers, by as much as 2x in the single core.
That is ... hard to believe for a CPU-bound task. Do you have any open benchmark which can reproduce that?
Disclosure: I work on VMs at Google Compute Engine :)
This was a really, really good write-up. I appreciated the breadth of VMs tested and the spread of benchmarks. A few random observations:
1. Turin is a beast.
2. The data on price-performance makes Hetzner look really fantastic, especially for small scale projects where region placement doesn’t matter much and big bursty scaling isn’t required.
3. I think the first ever cloud VM I ever provisioned was on DigitalOcean. I was surprised at how old their fleet was, but I guess they have some limited Emerald Rapids offerings now: https://www.digitalocean.com/blog/introducing-5th-gen-xeon-p...
Huh, I have not been able to provision any newer CPUs after dozens of tests, certainly not Emerald Rapids. And that blog post is weird, their charts don't even have a key shown, it's like they bought a few CPUs and threw that quickly together to get people's hopes up. A real shame, I am still running DO droplets, but they are behind the times...
Yeah, a $45 hetzner box would probably be at the top of all these charts, but it's a little more work to provision.
I tend to mostly use dedicated servers from Hetzner for my own projects and for my client's projects. Whenever they explicitly want US servers, I tend to go with Vultr's dedicated servers which been serving us well for many years.
I've read several reports from customers saying that their customer service is really bad. Difficult to know with online reviews of course. Does anyone have positive stories to share? I am looking at Australian hosts specifically and Hetzner doesn't have any data centers here.
We use them heavily for test boxes and running experiments. Standard off-the-shelf machines are provisioned almost instantly, and never had any problems.
More custom stuff (eg 100Gb/s NICs) takes a bit longer, but they've always been super responsive and quick to sort out any issues!
The price / performance you get from something like their AX162 is just crazy, although unfortunately with the whole RAM / NVMe shortage the setup fee has gone up quite a lot.
Using them for production for years, never disappointed.
What you should be aware of is their new exploration of S3 storage. I mean, the S3 works and everything, but it's still too early - the servers are kind of slow and sometimes fail to upload/download. They are still tuning the storage architecture. The API key management is kind of too primitive (although much more headache-free than configuring AWS), and the online file browser is lacking
But for vps servers - they are battletested veterans
Genoa was a big leap from Milan. Turin is a huge leap again. AMD really is doing spectacularly well at the moment. Kudos to Lisa Su and the team.
> Kudos to Lisa Su and the team.
They're a typical hardware maker unable to focus on software, which is why NVIDIA is now a multi-trillion dollar corporation and AMD is "just" a few hundred billion.
They've focused too much on CPUs and completely dropped the ball on AI and compute accelerators.
It's especially sad considering that the MI300 and related accelerators on paper are competitive with NVIDIA hardware, it's just that they have nowhere near the same software stack, so nobody cares.
Don’t really care.
We were stuck with Intel, its nice that we have better CPUs.
Yeah, remember when 4-core 8-thread CPUs were the high end until AMD Ryzen came out? If AMD hadn't done their best work, I imagine we'd still be stuck with the norm of 4 cores for more years.
AMD has to fight both Intel and Nvidia in the market. It chose to take on Intel and clearly it was a wise decision. You can’t win if you fight every battle at once against much stronger opponents.
And don’t get me started on the valuation of companies riding the AI bubble.
> completely dropped the ball on AI and compute accelerators
AMD produces AI chips, and they seem to be doing quite well.[0] If they didn't, AMD wouldn't be worth anywhere near what it is.
[0] https://openai.com/index/openai-amd-strategic-partnership/
Nvidia datacenter GPUs have awful software. If they focus on it they're not doing a good job.