
OrangePi 6 Plus Review: The New Frontier for ARM64 SBC Performance
After our previous SBC reviews (which started mainly around RISC-V, an architecture we are really interested in), we continue to explore what’s available these days in the world of small, versatile computers. Today it’s the OrangePi 6 Plus, following our earlier review of the OrangePi 5 Ultra board.
This is NOT a super small, credit-card-format SBC. We are talking about something definitely larger, and it comes with an integrated heatsink.
At the bottom there’s a wealth of ports, and that’s where you will install the optional wireless module if you want Wi-Fi and Bluetooth.

Now let’s dive into what you can do with this.
Here is the top view of the device. Note that most of it will be hidden from view, as the board comes pre-installed with the heatsink that covers the SoC and the memory chips.

And the bottom view.

You can tell just from the format that you should have higher expectations from the hardware.
Here are the specs in a table format:
| Component | Specification |
|---|---|
| SoC | CIX CD8180 / CD8160 (12-core 64-bit) |
| CPU Architecture | 4× Cortex-A720 (High-perf) + 4× Cortex-A720 (Main) + 4× Cortex-A520 (Efficiency) |
| GPU | Arm Immortalis-G720 MC10 (Ray Tracing & 8K Decoding support) |
| NPU (AI) | Up to 45 TOPS (System-wide); ~30 TOPS Dedicated NPU |
| RAM | 16GB / 32GB / 64GB LPDDR5 (128-bit) |
| Storage | 2× M.2 2280 slots (PCIe 4.0 x4 NVMe), 1× MicroSD (TF) slot |
| Networking | Dual 5GbE (5000Mbps) Ethernet ports |
| Wireless | M.2 Key-E (2230) slot for Wi-Fi 6E/7 & Bluetooth 5.4 |
| Video Output | 1× HDMI 2.1 (8K@60Hz), 1× DP 1.4, 2× USB-C (DP Alt Mode), 1× eDP |
| USB Ports | 2× USB 3.0 Type-A, 2× USB 2.0 Type-A, 2× Full-function USB Type-C |
| Camera (MIPI) | 2× 4-lane MIPI CSI interfaces |
| Expansion | 40-pin GPIO header (UART, I2C, SPI, PWM, etc.) |
| Audio | 3.5mm Headphone/Mic jack, 2× Speaker headers, 1× Analog MIC header |
| Power Supply | 100W Dual USB Type-C PD (20V/5A) |
| Dimensions | 115mm × 100mm |
As you can see from the specs, this is no joke: 16GB of RAM by default, a 12-core processor, a powerful GPU (Immortalis-G720), and an NPU with a claimed performance of up to 30 TOPS. Not just that, but there are numerous ports on this SBC, including two full-size M.2 2280 slots! You can tell that I/O is going to be a strong point here.
If you are wondering about the SoC itself, here is more info about what to expect from CIX.
| Feature | Specification |
|---|---|
| SoC Model | CIX CD8180 / CD8160 (Codename: CIX P1) |
| Architecture | Armv9.2-A (64-bit) |
| Total CPU Cores | 12 Cores (Tri-cluster configuration) |
| Big Cores | 4× Cortex-A720 @ Up to 2.8 GHz (Performance) |
| Medium Cores | 4× Cortex-A720 @ Up to 2.4 GHz (Mainstream) |
| Little Cores | 4× Cortex-A520 @ 1.8 GHz (Efficiency) |
| L3 Cache | 12MB Shared L3 Cache |
| GPU | Arm Immortalis-G720 MC10 |
| Graphics Features | Hardware Ray Tracing, Vulkan 1.3, OpenGL ES 3.2, OpenCL 3.0 |
| NPU (AI Engine) | Arm-China Zhouyi: 30 TOPS (Dedicated); ~45 TOPS (Total System AI) |
| AI Precision | INT4, INT8, INT16, FP16, TF32 |
| VPU (Video) | Linlon V8: 8K@60fps Decode (AV1/H.265/VP9), 8K@30fps Encode (H.265) |
| Memory Interface | 128-bit LPDDR5 / LPDDR5X (Up to 5500 MT/s) |
| Memory Bandwidth | Up to 96 GB/s (Theoretical peak) |
| PCIe Support | PCIe Gen4 (Supports x8, x4, and x2 configurations) |
| System Security | Integrated Security Engine (Standard Arm SystemReady / ACPI support) |
2.8 GHz ARM big cores! That’s no joke; this is way beyond the typical frequency we see for small boards, where things are usually below 2 GHz. The memory interface also has huge bandwidth, and we get full PCIe Gen4 with 8 lanes! This means this board could probably be attached to an external GPU (eGPU) and be able to drive it (provided adequate software support).
About the NPU, the usual problem on Linux is poor software support, and it’s certainly not in the mainline kernel either. To leverage the 30 TOPS of dedicated AI power, you cannot simply use standard builds of PyTorch or TensorFlow out of the box: you must use the NeuralONE AI SDK. This also means that the NPU cannot consume regular weights; they need to be compiled into a different format to work. On paper, the NPU is highly versatile and supports the following data types: INT4, INT8, INT16, FP16, BF16, and TF32. The board has extensive documentation covering a bunch of embedded models for vision and automation, such as YOLOv8 (vision), ResNet50, OpenPose, and DeepLabv3.
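The NeuralONE toolchain itself is proprietary, so I can’t show it here, but the idea behind "compiling" weights is standard post-training quantization: floats get mapped to a small integer range plus a scale factor. A toy sketch in plain Python (not the actual SDK, and far simpler than what a real compiler does):

```python
def quantize_int8(weights):
    """Symmetric INT8 quantization: one shared scale maps floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    """Recover approximate float weights from the integer representation."""
    return [q * scale for q in quants]

weights = [0.52, -1.24, 3.30, 0.01]
quants, scale = quantize_int8(weights)
restored = dequantize(quants, scale)
# each restored weight is within scale/2 of the original
```

The real SDK targets INT4/INT8/INT16/FP16/BF16/TF32 and does far more (per-channel scales, graph rewriting, operator mapping), but this is why a stock PyTorch checkpoint can’t be fed to the NPU directly.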
Now let’s jump into the actual user experience.
There is a Debian Bookworm (12) image available at the time of writing. I was actually waiting for an Ubuntu 24.04 image to be made available, and that was the plan at some point according to my OrangePi contact, but apparently they had to shift priorities and there is now no ETA for the Ubuntu image. So instead of waiting, I decided to go ahead and review what’s possible with this Debian image. As you know, Debian Bookworm is not that new anymore (the base is from mid-2023), and it has now been superseded by Debian Trixie (released in 2025). This Debian image comes with two kernels available, 6.1 and 6.6. The 6.6 kernel is also from 2023, and it’s not surprising you don’t get a more recent kernel: since the Linux support for various parts of the board is not upstreamed and mainlined, it is very likely to stay stuck on an older version. This is usually what causes headaches down the road: maybe some general functionality can be upstreamed, but will the NPU have working drivers for a more recent kernel? Your guess is as good as mine.

You can burn the image directly onto the NVMe drive - no need to boot from a MicroSD card this time around. Note that at first my OrangePi did not boot, but I could see from the BIOS (yes, this board comes with a BIOS-like interface!) that all the hardware was properly detected. Turns out the firmware required an update to be able to boot this Linux kernel; after imaging the latest firmware onto a USB stick and booting from it, a few minutes later things worked as expected.
In any case, we get a GNOME desktop after boot, and everything works pretty much as you’d expect. One thing that is immediately apparent when you start using the desktop is how snappy everything is. SBC boards like the Raspberry Pi 4 provide a decent desktop experience but with some general sluggishness to them. On this board, it’s pretty much like having an x86_64 experience. It immediately recognized my ultrawide display and supported the 3440 x 1440 resolution without a hitch. Everything from navigating the desktop to the settings is very fast. You get Chromium by default (and Firefox ESR in the repos) and the browser experience is very clean and fast, too. This board has absolutely no problem playing YouTube streams, even at 4K. This thing is FAST!
A quick look at vulkaninfo shows that we have working Vulkan drivers on this board! This is something that was initially very exciting, but it turns out that there are some limitations. More on that later.

Turns out the Vulkan driver is limited to some early 1.3 version. You have the option to upgrade Mesa via Debian backports - it gets you to a Mesa 25.x version where you no longer rely on the proprietary driver; this time you get Panfrost, which potentially means a more robust driver... except that you are stuck on the 6.6 Linux kernel, and Panfrost requires a more recent kernel to work properly (6.10+ apparently). So the solution would be to move to a more recent kernel, right? Do the Debian backports have one? Yes. Problem is, moving away from 6.6 will break a lot of patches that are necessary for this board’s hardware to work - such as HDMI output at resolutions higher than 1080p, and NPU support! So it’s a major trade-off.
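If you are experimenting with these trade-offs, it’s worth confirming what you are actually running before and after any upgrade. A couple of standard commands (the second assumes the vulkan-tools package is installed, and skips cleanly if not):

```shell
# Kernel version: Panfrost wants 6.10+, this image ships 6.1/6.6
uname -r
# Which Vulkan driver and API version is active
command -v vulkaninfo >/dev/null 2>&1 && vulkaninfo --summary | grep -i -m1 apiVersion || true
```

Checking these first saves you from discovering a driver/kernel mismatch only after something refuses to start.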
Even though Bluetooth shows up in the GNOME desktop controls and could see some of my peripherals, it could not connect to any of my external audio devices. But I don’t give up so fast. Turns out there’s a missing PipeWire dependency that Bluetooth audio needs. Here’s how you can get it done:
```shell
sudo apt install libspa-0.2-bluetooth pipewire-audio-client-libraries
```
Afterwards, you can launch bluetoothctl from the command line and execute the following commands one by one:
```shell
power on
agent on
default-agent
scan on
```
After the scan is activated, you should see Bluetooth devices popping up in the terminal. Note the ID of the device you want to connect to, and then:
```shell
pair <device_id>
trust <device_id>
connect <device_id>
```
Once this is fixed, things work as expected. And now I get audio!
OBS is not available as a Flatpak for ARM64, and it’s not in the repos either. This means you are in for a compilation from source. That’s not something that scares me, but in our situation it’s a little more convoluted than I expected. First, it turns out that OBS did not like that the system CMake was relatively old, so I had to grab one of the recent CMake binaries. Thankfully they have arm64 binaries readily available on their website, so that was easy.

Next, OBS complained that the FFmpeg version was too old. Not too surprising for something that depends so heavily on it. So I had to go for a full FFmpeg compilation. Here’s what I did, to save you time:
```shell
git clone https://github.com/FFmpeg/FFmpeg.git
cd FFmpeg
git checkout release/7.1  # a stable release branch, not master
./configure --prefix=/usr/local --enable-shared --disable-static --enable-gpl --enable-libx264 --enable-libx265
make -j$(nproc)
sudo make install
sudo ldconfig  # refresh the linker cache so the new shared libs are found
```
Finally, compiling OBS is a game of cat and mouse: you basically need to give up one extension after another as new errors arise. Most of the things you need to remove are not critical (NVENC is not relevant on non-NVIDIA hardware; browser support is not really our thing either), and at the end you get a fairly long series of commands.
```shell
git clone https://github.com/obsproject/obs-studio.git
cd obs-studio
# move the obs-browser plugin out of the way (run from the obs-studio root directory)
mv plugins/obs-browser plugins/obs-browser.bak
git submodule update --init --recursive
# a couple of exports to keep the build from failing on FFmpeg deprecation warnings
export CFLAGS="-Wno-error=deprecated-declarations"
export CXXFLAGS="-Wno-error=deprecated-declarations"
# since the flags can be ignored, also force the option in the plugin's CMakeLists
echo 'target_compile_options(obs-ffmpeg PRIVATE "-Wno-error=deprecated-declarations")' >> plugins/obs-ffmpeg/CMakeLists.txt
# configure from a build directory, using the newer CMake binary downloaded earlier
mkdir build && cd build
<path_to_newer_cmake_binary>/cmake -DCMAKE_PREFIX_PATH=/usr/local -DFFMPEG_INCLUDE_DIR=/usr/local/include -DPKG_CONFIG_PATH=/usr/local/lib/pkgconfig -DENABLE_AJA=OFF -DUNIX_STRUCTURE=1 -DENABLE_GIO=OFF -DENABLE_VPL=OFF -DENABLE_QSV11=OFF -DENABLE_BROWSER=OFF -DENABLE_WEBRTC=OFF -DENABLE_NATIVE_NVENC=OFF -DCMAKE_COMPILE_WARNING_AS_ERROR=OFF ..
make -j$(nproc)
sudo make install
```
It took a little while, but it worked!
I must admit, I did not expect that it would go so well in the end. Now, thanks to this, you will get a lot of videos from the board in action that no other site reviewing that board has been able to offer.
The board is very quiet by default. Even when the fan is activated, you barely hear it, at least under fairly typical conditions. Things change when you push the board to its full power, for example during a long compilation. In such conditions, the fan becomes louder, with a whoosh kind of sound. You will definitely hear it when running benchmarks, and in my case, when running LLMs, for example.

In terms of temperature, the control is very good. At 100% usage, even for long durations, while the fan becomes clearly noticeable, it keeps the temperature under 60°C (it is winter right now, and the temperature stayed at 58°C). Kudos for the good engineering there. It’s as good as the custom cooling solution on my RTX 3090.
I don’t have a way to measure the power draw currently, but based on reports from other publications, it looks like we are looking at roughly 15W at idle. So if you are looking for something with almost no footprint at idle, this is not it. At idle the OrangePi 5 Ultra consumes more than three times less, so that one sounds more like something you’d use for a server.
A quick run at Geekbench 6 shows a very strong single core score, and an exceptional multi-core score.
Now the details in single-core:
And in multi-core where we can see that the OrangePi 6 absolutely crushes what you can find on a Raspberry Pi 5.
Of course, this is a relatively cheap system that is not going to top the benchmark charts. But look at the results: on single-core, we get a score equivalent to an i5-10500 running at a similar frequency (2.3 GHz).

On multicore this is much more impressive, thanks to its twelve cores. And it gets very close, according to the bench, to what an AMD Ryzen 7 4800H (8 cores) can deliver.

This is clearly a powerhouse, CPU-wise. For a starting price at 199 USD for the 16GB version, this is a fantastic value proposition. We reviewed the OrangePi 5 Ultra a few months back and the price point is very close (around 160 USD), and unless you need something very small and fanless, the OrangePi 6 Plus is (very clearly) a better deal.
Since we have both a fairly powerful SoC and a working Vulkan driver, we can expect Box64 to do some magic for us here. I compiled Box64 to make it possible to launch Steam, but for some reason a Vulkan-related error prevents Steam from launching. Too bad. I still have a GOG account with a few games that have Linux clients. I tried the following:
And with some degree of success! Beholder 2 worked just fine and is definitely playable on this board, with the framerate staying between 20 and 30 FPS (a little slower in this video because of software OBS capture).
Here’s Oxenfree’s Linux x86_64 client running in Full HD on the OrangePi 6 Plus, using Box64:
Torchlight 2 runs nicely too, at something like 20 to 30 FPS in Full HD.
Shadow Warrior refused to launch, complaining that the graphics drivers were not recent enough (Zink provides the OpenGL support, and my guess is there is some issue detecting that the OpenGL version is properly supported). Day of the Tentacle crashes at start too, not sure why. When games work, the performance is not staggering but convincing for what is, at the end of the day, a fairly small SoC. AMD and Intel are certainly ahead in integrated graphics performance, but with much larger and much more expensive processors, too.
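For reference, compiling Box64, mentioned above for the Steam attempt, follows the project’s usual recipe. This is a sketch based on the Box64 README; the generic ARM64 flags are an assumption for this board, as there is no CIX-specific build profile that I know of:

```shell
git clone https://github.com/ptitSeb/box64.git
cd box64 && mkdir build && cd build
# ARM_DYNAREC enables the dynamic recompiler, essential for playable speeds
cmake .. -DARM_DYNAREC=ON -DCMAKE_BUILD_TYPE=RelWithDebInfo
make -j$(nproc)
sudo make install
# register the binfmt handler so x86_64 binaries launch transparently
sudo systemctl restart systemd-binfmt
```

After that, launching an x86_64 game binary from its directory is usually enough for Box64 to pick it up.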
I also tried FOSS games or engines:
GZDoom works exceptionally well with the OpenGL ES renderer at Full HD. It’s fast and responsive - stable at 60 FPS in Full HD. And Doom is still as fun as it was back in the day, or even more so if you run Brutal Doom.
While I don’t have a sample video here, Quake 3 Arena with the ioquake3 engine works perfectly, running at 60 FPS in Full HD without breaking a sweat.
Luanti (ex-Minetest) works extremely well - I turned most of the details to the max at 1080p and it kept a solid 60 FPS. Sure, that’s no AAA game, but it’s a good alternative to Minecraft.
0 A.D. works extremely well, even in full screen at my ultrawide resolution. I took a video at Full HD, and while OBS slows the framerate a little, you can still see it’s very smooth.
0 A.D. has come a long way, I really need to revisit a recent version of the game to see how much it has changed!
This is a very capable board that would make a very powerful server. The Debian image comes with Docker, and as we saw during the OrangePi 5 Ultra review, the landscape of ready-to-use Docker Hub images is huge and can get you started with numerous server-side applications in no time. Since we have at least 16GB of memory, very fast I/O (PCIe 4!), and a very fast CPU, this SBC will be able to work wonders with a wide variety of applications. The only problem is the power draw: it looks like we are stuck with 15W as an idle baseline, and that seems a bit too much for some light server use.
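As an illustration of how quickly you can stand up a service, pulling a multi-arch image from Docker Hub just works on arm64. Nextcloud here is an arbitrary example of a popular multi-arch image, not something specifically tested in this review:

```shell
# Docker resolves the arm64/v8 variant of the image automatically
docker run -d --name nextcloud -p 8080:80 --restart unless-stopped nextcloud
```

Most popular self-hosted applications publish arm64 variants these days, so the experience is close to x86 here.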
This board comes with a very large repository (60 GB!) that you can install and sync automatically, with tons of applications and demos you can try out. A large part of their documentation manual is dedicated to it. And from what I could try, it seems to work as long as you follow their instructions.
In my case I like to work with LLMs for a range of applications, so I was interested to see how fast this board could run some small-ish models. One of the major limitations is that we don’t have working NPU support in llama.cpp (oh no!). I was thinking, "no worries, we have a Vulkan driver, so let’s use Vulkan instead". I like to be optimistic sometimes. Turns out that the Vulkan driver is below the minimum Vulkan version that llama.cpp has required recently, and I would need to go back to an end-of-2024 build to be able to compile with Vulkan support. Going back one year on llama.cpp is not an option, sorry - too many models would not be supported going back that far. So we are left with the raw power of the CPU, which means it won’t be quiet: expect some good old fan noise in such use cases.
Anyway, I went for Qwen3 1.7B: I downloaded the safetensors weights, converted them to GGUF, then ran a quantization step to bring them down to IQ4_S. Running the model at this kind of quantization gives us about 14 tokens per second during inference. Definitely usable, very usable even, though this is a small model.
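For reference, here is roughly what that pipeline looks like with stock llama.cpp tooling. This is a sketch: the model path is a placeholder, and I am using IQ4_XS as the closest standard llama.cpp quant name to the IQ4_S mentioned above:

```shell
# Build llama.cpp (CPU backend only, since Vulkan is out as explained above)
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j$(nproc)

# Convert the safetensors checkpoint to GGUF, then quantize it
python convert_hf_to_gguf.py /path/to/Qwen3-1.7B --outfile qwen3-1.7b-f16.gguf --outtype f16
./build/bin/llama-quantize qwen3-1.7b-f16.gguf qwen3-1.7b-iq4.gguf IQ4_XS

# Run inference on all 12 cores
./build/bin/llama-cli -m qwen3-1.7b-iq4.gguf -t $(nproc) -p "Hello"
```

llama-cli prints its own tokens-per-second figures at the end of a run, which is where numbers like the 14 tok/s above come from.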
Ideally, you’d want to have a board that can run a 7b model with proper hardware acceleration and no fan usage. Not sure if having actual NPU support in the future would help for that or not. In other words, we are not there yet.
As usual, there is more than a single vendor providing a CIX experience. This time the main competitor is Radxa, and they have two boards called Orion 6 that feature the same chip: one that is much bigger in footprint (mini-ITX size), and another that is closer to the OrangePi 6 Plus in size (the Orion 6N). Since I don’t have access to the Radxa boards, I can’t comment on how good their support is, but they are very likely to suffer from the same limitations, software-wise.

As expected, they also only provide a Debian Bookworm image under the Radxa OS nickname, so this is no different from what you can get on the OrangePi 6 Plus. Price-wise, they are both available at similar price points, so you could make your decision based on the best deal you can get.
This is a very impressive SBC, with an exceptional value proposition. Its performance profile puts it far away from the toy category we have seen before among SBCs, and right into the desktop performance realm. It’s still relatively small, so you could easily attach it behind a monitor and power a personal computer this way. As a Linux user, things are so fast that you’d be very surprised an ARM board is running under the hood. For server use, you’d probably want to avoid this one because of the fairly heavy baseline power consumption.
As usual, the pitfalls are always going to be the same. This time around you get an older Debian image (Bookworm), an older kernel (6.6), and some proprietary drivers that you have to live with. Ultimately, upgrading the software running on your SBC will mean breaking things, and it often takes a fairly long time (if it happens at all) for some hardware components to be supported in newer distros. It’s not necessarily a deal breaker these days. This board is fast enough to compile software fairly quickly if needed. You also have Flatpak, providing good coverage for a lot of applications (even if performance may be sub-par). There are ongoing efforts to mainline the GPU found in this chip, so a year from now we may be in a better place when it comes to proper hardware support beyond kernel 6.6.
In any case, this is a surprising new entry in terms of performance per price point. It sets the bar very high for future ARM64 SBCs. On a personal note, I’d like to see some serious hardware dedicated to running AI models (instead of large desktops chaining multiple GPUs), and maybe highly customized ARM64 SBCs will become one option at some point.
If you are interested in getting one, there are many resellers, but one of the most direct routes is AliExpress:
Since the difference of price between 16 GB and 32GB is so small currently, it would make total sense to go for the 32GB.
Note: we were provided with a review unit from OrangePi (more specifically the OrangePi 6 Plus 16GB version).
My experience with the OrangePi 4 LTS has been poor, and I'm unwilling to purchase more of their hardware. Mine is now running Armbian because I didn't care for the instability, or for the Chinese repos.
They seem uninterested in trying to get their hardware supported by submitting their patches for inclusion in the Linux kernel, and popular distros. Instead, you have to trust their repos (based in PRC).
"Chinese repos" is a very charitable interpretation of the Google drive links they used to distribute the os. It seemed like it was on the free plan too, it often didn't work because it tripped the maximum downloads per month limit.
It's always better than a link in a sticky post on the manufacturer's phpBB forum. I bought some audio equipment directly from a Chinese company, and everything looked like a hobbyist/student project.
Keep in mind that for a lot of Chinese companies, it's difficult to (legally) access some outside resources.
My company hosts our docker images on quay.io and docker hub, but we also have a tarball of images that we post to our Github releases. Recently our release tooling had a glitch and didn't upload the tarballs, and we very quickly got Github issues opened about it from a user who isn't able to access either docker registry and has to download the tarball from Github instead.
It doesn't surprise me that a lot of these companies have the same "release process" as Wii U homebrew utilities, since I can imagine there's not a lot of options unless you're pretty big and well-experienced (and fluent in English).
Is it? A google drive link to an OS image is worse IMO
I bought a MiniPC directly from a Chinese company (an AOOSTAR G37) and the driver downloads on their website are MEGA links. I thought only piracy and child porn sites used those..
I am somewhat amazed how you can manufacture such expensive high tech equipment yet are too cheap to setup a proper download service for the software, which would be very simple and cheap compared to making the hardware itself.
Maybe it is a Chinese mentality thing where the first question is always "What is the absolutely cheapest way to do this?" and all other concerns are secondary at best.
..which does not inspire confidence in the hardware either.
Maybe Chinese customers are different, see this, and think "These people are smart! Why pay more if you don't have to!".
> "Chinese repos" is a very charitable interpretation of the Google drive links they used to distribute the os.
"Chinese repos" refer to the fact that the debian repos links for updates point to custom Huawei servers.
> it often didn't work because it tripped the maximum downloads per month limit.
it always work if you login into a Google account prior to downloading. If you don't, indeed the downloads will regularly fail.
> it always work[s]
That was not my experience, at least for very large files (100+ GB). There was a workaround (that has since been patched) where you could link files into your own Google drive and circumvent the bandwidth restriction that way. The current workaround is to link the files into a directory and then download the directory containing the link as an archive, which does not count against the bandwidth limit.
I see. I never had to download such large files from Drive. For files up to 10GB I never had any issue though.
I opened the review and immediately ctrl-F'd "kernel". It said no upstream support so I closed the article.
I would never buy one of these things without upstream kernel support for the SoC and a sane bootloader. Even the Raspberry Pi is not great on this front TBH (kernel is mostly OK but the fucked up boot chain is a PITA, requires special distro support).
so what would you recommend for arm which has good proper support.
I feel like the Raspberry Pi has the most community support for everything, so I had the intuition that most things would just work out of the box on it, or that it would have the best ARM support (I assumed the boot chain to be part of that as well)
what do you mean by the boot chain being painful to work with and can you provide me some examples perhaps?
I would recommend x86.
Ok that's mostly a joke, I'm just not up to date on what platforms exist these days that are done properly. Back in my day the Texas Instruments platforms (BeagleBoard) were decent. I think there are probably Rockchip-based SBCs today (Pine64 maybe?) that add up to something sensible but I dunno.
The thing with the boot chain is that e.g. the Pi has a proprietary bootloader that runs via the GPU. You cannot just load a normal distro onto the storage it needs to be a special build that matches the requirements of this proprietary bootloader. If your distro doesn't provide a build like that, well, hopefully you're OK with changing distro or ready to invest many hours getting your preferred distro working.
(Why only "mostly joking?" I recently repurposed an old ThinkPad to use as a home server and it's fucking great. Idles under 4W, dramatically more powerful than a Pi5, has proper UEFI and proper ACPI and all the drivers work properly, including the GPU. Would cost about the same on eBay as a Pi. Only remaining reason I can see for an Arm board is if you're specifically interested in Arm or have very specific space constraints).
At my last job, I found Toradex boards well-supported by Yocto. YMMV
Hm, if I may ask, what were they used for in your last job. To me they seem more entreprise focused than indie focused from a quick glance at their website.
I have this experience with most of these SBCs. The new Radxa board boots 50% of the time. The only reliable SBCs I have are RPi 3/4.
That's a shame to hear, I was looking forward to that one. So much hardware these days seems to be let down by bad software.
I mean, I'm sure there's some bad hardware out there too, but it's usually the software that is letting things down more than the hardware.
Yeah. I mean, I still hold out hope that enough of the driver support will get mainlined or just rolled into some distro I want to use that I can use it for _something_ but right now it just sits unplugged on my desk. :(
Last I checked BredOS was close to having a reasonable experience with it but I haven't had the time to poke at it again. Personally I'd prefer an Arch derivative like BredOS or FreeBSD. I don't really want to buy a GPU to put in it but it seems like that's my only option at the moment?
I don't have any Radxa model, but I have a bunch of SBCs from different makers and I have never seen a problem with boot working half of the time only.
I have a Radxa zero 3E that boots and runs fine.
I have 2 nvmes and i have tried it with several sd card.
That's always the problem with these non-Pi SBCs. They never have good software support.
Olimex does provide both open source hardware and open source software for example: https://www.olimex.com/Products/OLinuXino/STMP1/STMP157-OLin...
Open source hardware is such a fascinating concept, I had thought of such examples but I always assumed they would be the case of risc-v chips, I wonder how it's an arm chip
I always thought that one day we will get completely open source risc-v chips that if another company wants, they can create in their own chip-making process (I imagine it to be beyond extremely difficult but still it opens up a pathway)
what's the progress of risc-v nowadays?
Also Can you please link me other such projects like this, it would be good to have a bookmark/list of all such projects too
Even bigger brands such as Nvidia seem to expect us to recycle SBCs every couple years.
The Jetson Nano launched with Ubuntu 18.04, today, this is still the only officially supported distro for it. I have no reason to think this would be different with the Orin and Thor series, or even with the DGX Spark with its customized Ubuntu/"DGX OS".
I still don't understand why they couldn't support them properly. There are so many situations in which they could be better than alternatives, only to be hamstring by the poorest OS support.
You see, a small startup like NVIDIA just doesn't have the budget to support their older devices the same way a multi-trillion dollar company like Raspberry Pi can.
The NanoPi models from FriendlyElec tend to have better support.
you keep insinuating PRC yet you don't realize you're already pwned just running their hardware no matter the OS.
Directly stating something twice is not insinuating…
The review shows ARM64 software support is still painful vs x86. At $200 for the 16GB model, this is the price point where you could just get an Intel N150 mini PC in the same form factor. And those usually come with cases. They also tend to pull 5-8W at idle, while this is 15W. Cool if you really want ARM64, but at this end of the performance spectrum, why not stick with the x86 stack where everything just works a lot more easily?
From the article: "[...] the Linux support for various parts of the boards, not being upstreamed and mainlined, is very likely to be stuck on an older version. This is usually what causes headaches down the road [...]".
The problem isn't support for the ARM architecture in general, it's the support for this particular board.
Other boards like the Raspberry Pi and many boards based on Rockchip SoCs have most of the necessary support mainlined, so the experience is quite painless. Many are starting to get support for UEFI as well.
The exception (even those are questionable as running plain Debian did not work right on Pi 3B and others when I tried recently) proves the rule. You have to look really hard to find an x86 computer where things don't just basically work, the reverse is true for ARM. The power draw between the two is comparable these days, so I don't understand why anyone would bother with ARM when you've got something where you need more than minimally powerful hardware.
The Pi 3B doesn't have UEFI support, so it requires special support on the distro side for the boot process but for the 4 and newer you can flash (or it'll already be there, depending on luck and age of the device) the firmware on the board to support UEFI and USB boot, though installing is a bit of a pain since there's no easy images to do it with. https://wiki.debian.org/RaspberryPi4
I believe some other distros also have UEFI booting/installers setup for PI4 and newer devices because of this, though there's a good chance you'll want some of the other libraries that come with Raspberry PI OS (aka Raspbian) still for some of the hardware specific features like CSI/DSI and some of the GPIO features that might not be fully upstreamed yet.
There's also a port of Proxmox called PXVirt (Formerly Proxmox Port) that exists to use a number of similar ARM systems now as a virtualization host with a nice ui and automation around it.
This. The issue is the culture inside many of these HW companies that is oppositional to upstreaming changes and developing in the open in general.
Often an outright mediocre software development culture generally, that sees software as a pure cost centre, in fact. The "product" is seen to be the chip, the software "just" a side show (or worse, a channel by which their IP could leak).
The Rockchip stuff is better, but still has similar problems.
These companies need to learn that their hardware will be adopted more aggressively for products if the experience of integrating with it isn't sub-par.
They exist in a strange space. They want to be a Linux host but they also want to be an embedded host. The two cultures are pretty different in terms of expectations around kernels. A Linux sysadmin will (rightly) balk at not having an upgrade path for the kernel, while a lot of embedded stuff that just happens to use Linux often has a single kernel release… ever.
I'm not saying one approach is better than the other, but there is definitely a lot of art in each camp. I know the one I innately prefer, but I've definitely had eyebrows raised at me in a professional setting when expressing that view; some places value upgrading dependencies while others value extreme stability at the potential cost of security.
> Some places value upgrading dependencies while others value extreme stability at the potential cost of security.
Both are valid. The latter is often used as an excuse, though. No, your $50 WiFi-connected camera does not need the same level of stability as the WiFi-connected medical device that allows a doctor to remotely monitor medication. Yes, you should have a moderately robust way to build and distribute a new FW image for that camera.
I can't tell you the number of times I've gotten a shell on some device only to find that the kernel/os-image/app-binary or whatever has build strings that CLEARLY feature `some-user@their-laptop` betraying that if there's ever going to be an updated firmware, it's going to be down to that one guy's laptop still working and being able to build the artifact and not because a PR was merged.
The obvious counterpoint is that a PR system is also likely to break unless it is exercised+maintained often enough to catch little issues as they appear. Without a set of robust tests the new artifact is also potentially useless to a company that has already sold their last $50 WiFi camera. If the artifact is also used for their upcoming $54.99 camera then often they will have one good version there too. The artifact might work on the old camera but the risk/reward ratio is pretty high for updating the abandonware.
My uninformed normie view of the ecosystem suggests that it's the support for almost every particular board, and that's exactly the issue. For some reason, ARM devices always have some custom OS or Android and can't run off-the-shelf Linux. Meanwhile you can just buy an x86/amd64 device and assume it will just work. I presume there is some fundamental reason why ARM devices are so bad about this? Like they're just missing standardization and every device requires some custom firmware to be loaded by the OS that's inevitably always packaged in a hacky way?
It's the kernel drivers, not firmware. There is no BIOS or ACPI, so the kernel itself has to support a specific board. In practice that means there is a DTB file that configures it, plus the actual drivers in the kernel.
Manufacturers hack it together, flash it to the device and publish the sources, but don't bother with upstreaming and move on.
Same story as Android devices not getting updates two years after release.
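To make that concrete, here's a sketch of what a board DTB source fragment looks like. The node names, compatible string, and addresses below are invented for illustration; the point is that the kernel carries a generic driver, and the device tree tells it where this particular board's instance of the IP block lives.

```dts
/* Hypothetical fragment of a board .dts (names and addresses invented).
 * "compatible" is matched against drivers compiled into the kernel;
 * "reg" gives the MMIO base and size for this board's UART instance. */
serial@fe650000 {
        compatible = "vendor,uart-ip";
        reg = <0x0 0xfe650000 0x0 0x100>;
        interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
        status = "okay";
};
```

The same kernel binary can boot many boards as long as each ships a DTB describing its own layout; the pain starts when the driver behind that compatible string never made it upstream.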
But "no BIOS or ACPI" and requiring the kernel to support each individual board sounds exactly like the problem is the ARM architecture in general. Until that's sorted, it makes sense to be wary of ARM.
It's not a problem with ARM servers or vendors that care about building well designed ARM workstations.
It's a problem inherent to mobile computing and will likely never change unless regulation steps in, or an open-standards device line somehow hits it out of the park and sets new expectations à la the PC.
The problem is zero expectation of ever running anything other than the vendor supplied support package/image and how fast/cheap it is to just wire shit together instead of worrying about standards and interoperability with 3rd party integrators.
How so? The Steam Deck is an x86 mobile PC where everything (well, all the generic hardware, e.g. WiFi and the GPU, IIRC) works out of the box.
When I say mobile, I mean ARM SoCs in the phone, embedded and IoT lineage, not so much full featured PCs in mobile form factor.
What is ACPI other than a DTB baked into the firmware/bootloader?
Any SBC could buy an extra flash chip and burn an outdated U-Boot with the manufacturer's DTB baked in. Then U-Boot would boot Linux, just like UEFI does, and Linux would read the firmware's fixed DTB, just like it reads x86 firmware's fixed ACPI tables.
But - cui bono?
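As a sketch of that hypothetical (all addresses, offsets, and paths invented), the U-Boot side would be a boot script along these lines, handing Linux a fixed DTB much as UEFI hands it fixed ACPI tables:

```
# Hypothetical boot.scr: load the vendor's DTB from SPI flash,
# load a generic arm64 kernel from storage, and boot with both.
sf probe
sf read ${fdt_addr_r} 0x300000 0x10000       # vendor DTB baked into flash
load mmc 0:1 ${kernel_addr_r} /boot/Image    # generic arm64 kernel image
booti ${kernel_addr_r} - ${fdt_addr_r}       # boot: kernel, no initrd, fixed DTB
```

Nothing stops a vendor from shipping this today; the question, as above, is who would pay for it.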
You need drivers in your main OS either way. On x86 you are not generally relying on your EFI's drivers for storage, video or networking.
It's actually nice that you can go without, and have one less layer.
It is more or less like the WiFi problem on laptops, but multiplied by the number of chips. In a way it's more of a Linux problem than an ARM problem.
At some point the "good" boards get enough support and the situation slowly improves.
We've reached the state where you don't need to spec-check a laptop if you want to run Linux on it; I hope the same will happen to ARM SBCs.
It's a decision by Linux about how to handle hardware in the ARM world, so it's a little in the middle.
It's the shape of the delivered artifact that's driven the way things are implemented in the ecosystem, not a really fundamental architecture difference.
The shape of historically delivered ARM artifacts has been embedded devices. Embedded devices usually work once in one specific configuration. The shape of historically delivered ARM Linux products is a Thing that boots and runs. This only requires a kernel that works on one single device in one single configuration.
The shape of historically delivered x86 artifacts is socketed processors that plug into a variety of motherboards with a variety of downstream hardware, and the shape of historically delivered x86 operating systems is floppies, CDs, or install media that is expected to work on any x86 machine.
As ARM moves out of this historical system, things improve; I believe that for example you could run the same aarch64 Linux kernel on Pi 2B 1.2+, 3, and 4, with either UEFI/ACPI or just different DTBs for each device, because the drivers for these devices are mainline-quality and capable of discovering the environment in which they are running at runtime.
People commonly point to ACPI+UEFI vs DeviceTree as causes for these differences, but I think this is wrong; these are symptoms, not causes, and are broadly Not The Problem. With properly constructed drivers you could load a different DTB for each device and achieve similar results as ACPI; it's just different formats (and different levels of complexity + dynamic behavior). In some ways ACPI is "superior" since it enables runtime dynamism (ie - power events or even keystrokes can trigger behavior changes) without driver knowledge, but in some ways it's worse since it's a complex bytecode system and usually full of weird bugs and edge cases, versus DTB where what you see is what you get.
This has often been the case in the past but the situation is much improved now.
For example I have an Orange Pi 5 Plus running the totally generic aarch64 image of Home Assistant OS [0]. Zero customization was needed, it just works with mainline everything.
There's even UEFI [1].
Granted this isn't the case for all boards but Rockchip at least seems to have great upstream support.
[0]: https://github.com/home-assistant/operating-system/releases
Yeah, but you can get an N100 on sale for about the same price, and it comes with a case, NVMe storage (way better than an SD card), power supply, proper cooling solution, and less maintenance…
The Orange Pi 5 Plus on its own should be much cheaper than an N100 system. Only when you add in those extras does the price even out. I bought mine in an overpriced bundle for 182€ a few months ago.
It supports NVMe SSDs same as an N100.
Maintenance is exactly the same; they both run mainline Linux.
Where the N100 perhaps wins is in performance.
Where the Orange Pi 5 Plus (and other RK3588-based boards) wins is in power usage, especially for always-on, low-utilization applications.
You can get an N100 system for $110 on sale. The price went up but I still see $135 on eBay now. However, YMMV because Europe prices are different.
For power I don't know about the Orange Pi 5, but for many SBCs power was a mixed bag. I had pretty bad luck with random SBCs drawing way more power for random reasons and not putting devices into idle mode. Even the Raspberry Pi was pretty bad when it launched.
It's frustrating because it's hard to fix. With x64 you can often go into the BIOS and enable power modes, but that's not the case with ARM. For example, PCIe 4 can easily draw 2W+ when active. (The interface itself!)
See for example here:
https://github.com/Joshua-Riek/ubuntu-rockchip/issues/606
My N100 takes 6W and 8W (8 and 16 GB models). If a Pi 5 takes 3W, that's not a large enough difference to matter, especially when it's so inconsistent.
Now, one place where I used to like the RPi Zero was GPIO access. However, I'm transitioning to the RP2350, as it's just better suited for that kind of work, easier to find, and cheaper.
I have no idea what US prices are like but I put in a reasonable amount of effort and at least right now here in Europe, N100 and RK3588 prices are pretty similar for comparable packages (RAM, case, power etc.). One other thing to note is that the N100 is DDR4 while the RK3588 uses DDR5.
I never ran into that bug but I came to the Orange Pi 5 Plus in 2025, so there's a chance the issues were all worked out by the time I started using it.
Looking at a couple of reviews, the Orange Pi 5 Plus drew ~4W idle [0] while an N100 system drew ~10W [1].
1W over a year is 8.76kWh, which here costs ~$2. If those numbers hold (and I'm not saying they do necessarily but for the sake of argument) and with an estimated lifespan of 5 years, you might be looking at a TCO of $140 hardware + $40 power = $180 for an Orange Pi 5 vs. $140 hardware + $100 power = $240 for an N100. That would put an N100 at 33% more expensive. Even if it draws just 6W compared to 4W, that's $200 vs. $180, 11% more expensive.
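The arithmetic above can be reproduced as a short script (same assumed numbers, which are illustrative, not measured: $140 hardware for both, ~$2 per watt-year of continuous draw, 5-year lifespan):

```python
# Back-of-envelope TCO from the assumed numbers above.
COST_PER_WATT_YEAR = 2.0  # 8.76 kWh per watt-year at ~$0.23/kWh
YEARS = 5
HARDWARE = 140.0          # assumed equal hardware cost for both boxes

def tco(idle_watts: float) -> float:
    """Hardware cost plus electricity for a given continuous draw."""
    return HARDWARE + idle_watts * COST_PER_WATT_YEAR * YEARS

opi = tco(4)    # Orange Pi 5 Plus at ~4 W
n100 = tco(10)  # N100 box at ~10 W
print(opi, n100)                                  # 180.0 240.0
print(round((n100 - opi) / opi * 100), "% more")  # 33 % more
```

Swap in a 6 W figure for the N100 and the gap shrinks to the 11% mentioned above; the conclusion is sensitive to the measured idle draw.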
I'm not saying the Orange Pi 5 Plus is clearly better but I don't think it's as simple as one might think.
[0]: https://magazinmehatronika.com/en/orange-pi-5-plus-review/
[1]: https://www.servethehome.com/fanless-intel-n100-firewall-and...
Maybe this was the case a few years ago, but I would argue the landscape has changed a lot since then - with many more distro options for Arm64 devices.
So, I agree, but less than I did a few months ago. I purchased an Orange Pi 5 Ultra and was put off by the pre-built image and custom kernel. The "patch" for the provided kernel was inscrutable as well. Now I'm running a vanilla 6.18 kernel on vanilla U-Boot firmware (still a binary blob required to build that, though) with a vanilla install of Debian. That support includes the NPU, GPU, 2.5G Ethernet, and NVMe root/boot. I don't have performance numbers, but it's definitely fast enough for what I use it for.
Interesting, where did you get an image with a 6.18 kernel that has NPU support?
NPU support in general seems to be moving pretty fast, it shares a lot of code with the graphics drivers.
I started with the published Debian image and then just built my own... and then installed onto an NVMe SSD.
No it's definitely a problem with the ARM architecture, specifically that it's standard to make black box SoCs that nobody can write drivers for and the manufacturer gives you one binary version and then fucks off forever. It's a problem with the ARM ecosystem as a whole for literally every board (except Raspberry Pi), likely stemming from the bulk of ARM being throwaway smartphones with proprietary designs.
If ARM cannot outdo x86 on power draw anymore then it really is entirely pointless to use it because you're trading off a lot, and it's basically guaranteed that the board will be a useless brick a few years down the line.
There's also a risk of your DeviceTree getting pruned from the kernel in X years when it's decided that "no one uses that board anymore", which is something that's happened to several boards I bought in the 2010's, but not something that's happened to any PC I've ever owned.
It's weirded me out for a long time that we've gone from the "we will probe the hardware in a standard way and automatically load the appropriate drivers at boot" ideal we seemed to have settled on for computers in the 2000s, and still use on x86, back to "we'll write a specific description file for every configuration of hardware" for ARM.
Isn't this one of the benefits of ACPI? That the kernel asks the motherboard for the hardware information that on ARM SoCs is stored in the device tree?
Yep
That makes sense, as the Pi is as easy as x86 at this point. I almost never have to compile from scratch.
I'm not a compiler expert... but it seems each ARM64 board needs its own custom kernel support; once that is done, it can run anything compiled for ARM64 as a general target? Or will we still need separate builds for the RPi, for this board, etc.?
Little bit of both. The Pi still uses a somewhat unique boot sequence due to its heritage. Most devices have the CPU load the bootloader and then have the OS bring up the GPU. The Pi inverts this, having the GPU lead the charge with the CPU held in reset until the GPU has finished its boot sequence.
Once you get into the CPU, though, the AArch64 registers become more standardized. You still have drivers and such to worry about, and differing memory offsets for the peripherals, but since you have the kernel running it's easier to kind of poke around until you find them. The Pi 5 added some complexity to this with the RP1 south bridge, which adds another layer of abstraction.
Hopefully that all makes sense. Basically, the Pi itself is backwards while everything else should conform. It's not ARM-specific, just how the Pi does things.
Apart from very rare cases, this will run any linux-arm64 binary.
For the Pi you have to rely on the manufacturer's image too. It does not run a vanilla arm64 distro.
With this board the SoC is the main problem. CIX has been working on mainlining that stuff for over a year, and we still don't have GPU and NPU support in mainline.
I still have to run my own build of the kernel on my Opi5+, so that unfortunately tracks. At least I don't have to write the drivers this decade.
Why? I'm running an Orange Pi 5+ with a fully generic aarch64 image of Home Assistant OS and it works great. Is there some particular feature that doesn't work on mainline?
> The problem isn't support for the ARM architecture in general,
Of course it is not. That's why almost every ARM board comes with its own distro, sometimes its own bootloader and kernel version. Because "it is supported". /s
I was soured on ARM SBCs by the Orange Pi 5, which does not have an option to ignore its SD card during boot. Something trivial on basically every x86 platform I had been taking for granted.
With RAM it will cost notably more, with 4 cores instead of 12. I'd expect this to run circles around an N150 for single-threaded perf too.
They are not in the same class, which is reflected in the power envelope.
BTW what's up with people pushing N150 and N300 in every single ARM SBC thread? Y'all Intel shareholders or something? I run both but not to the exclusion of everything else. There is nothing I've failed to run successfully on my ARM ones and the only thing I haven't tried is gaming.
> I'd expect this to run circles around an N150 for single-threaded perf too
It has basically the same single-core performance as an N150 box
Random N150 result: https://browser.geekbench.com/v6/cpu/10992465
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
At this point I expect a lot of people have been enticed by niche SBCs and then discovered that driver support is a nightmare, as this article shows. So in time, everyone discovers that cheap x86-64 boxes accomplish their generic computing goals easier than these niche SBCs, even if the multi-core performance isn't the same.
Being able to install a mainline OS and common drivers and just get to work is valuable.
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
Because they have a great watt/performance ratio along with a GPU that is very well supported across a wide range of software, plus mainline kernel support. In other words, a great general-purpose SBC.
Meanwhile people are using ARM SBCs, with SoCs designed for embedded or mobile devices, as general purpose computers.
I will admit with RAM and SSD prices sky rocketing these ARM SBC look more attractive.
Because most ARM SBCs are still limited to whatever Linux distro they added support for. Intel SBCs might underperform, but you can be sure they will run anything built for x86-64.
Are you sure you don't have single-threaded and multi-threaded backwards?
Why would the A720 at 2.8 GHz run circles around the N150 that boosts up to 3.6 GHz in single-threaded workloads, while the 12-core chip wouldn't beat the 4-core chip in multi-threaded workloads?
Obviously, the Intel chip wins in single-threaded performance while losing in multi-threaded: https://www.cpubenchmark.net/compare/6304vs6617/Intel-N150-v...
I can't speak to why other people bring up the N150 in ARM SBC threads any more than "AMD doesn't compete in the ~$200 SBC segment".
FWIW, as far as SBC/NUCs go, I've had a Pi 4, an RK3399 board, an RK3568 board, an N100 NUC from GMKTec, and a N150 NUC from Geekom, and the N150 has by far been my favorite out of those for real-world workloads rather than tinkering. The gap between the x86 software ecosystem and the ARM software ecosystem is no joke.
P.S. Stay away from GMKTec. Even if you don't get burned, your SODIMM cards will. There are stoves, ovens, and hot plates with better heat dissipation and thermals than GMKTec NUCs.
ARM SBCs that cost over $90 are totally not worth it considering those Nxxx options exist
Many of the Nxxx options are, sadly, going up in price a lot right now due to the RAM shortages.
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
For 90% of use cases, ARM SBCs are not appropriate and will not meet expectations over time.
People expect them to be little PCs, and intend to use them that way, but they are not. Mini PCs, on the other hand, are literally little PCs and will meet the expectations users have when dealing with PCs.
x86 based small computers are just so much easier to work with than most second- and third-string ARM vendors. The x86 scene has had standards in place for a long time, like PCIe and the PC BIOS (now UEFI) for hardware initialization and mapping, that make it a doddle to just boot a kernel and let it get the hardware working. ARM boards don't have that yet, requiring per-board support in the kernel which board manufacturers famously drag their feet on implementing openly let alone upstreaming. Raspberry Pi has its own setup, which means kernel support for the Pi series is pretty good, but it doesn't generalize to other boards, which means users and integrators may be stuck with whatever last version of Ubuntu or Android the vendor thought to ship. Which means if you want a little network appliance like a router, firewall, Jellyfin server, etc. it often makes more sense to go with an N150 bitty box than an ARM SBC because the former is going to be price- and power-draw-competitive with the latter while being able to draw on the OS support of the well-tested PC ecosystem.
ARM actually has a spec in place called SystemReady that standardizes on UEFI, which should make bringup of ARM systems much less jank. But few have implemented it yet. I keep saying, the first cheap Chinese vendor that ships a SystemReady-compliant SBC is gonna make a killing.
> I keep saying, the first cheap Chinese vendor that ships a SystemReady-compliant SBC is gonna make a killing.
Agree. When ARM announced the initiative, I thought that the raspberry pi people would be quick but they haven't even announced a plan to eventually support it. I don't know what the hold up is! Is it really that difficult to implement?
Apparently Pine64 and Radxa sell SystemReady-compliant SBCs; even a Raspberry Pi 4 can be made compliant (presumably by booting a UEFI firmware from the Raspberry's GPU-based custom-schmustom boot procedure, which then loads your OS).
The Pi boots on its GPU, which is a closed off Broadcom design. Likely complicates things a bit.
1. Wow, never thought I'd need to do an investment disclosure for an HN comment. But sure thing: I'm sure Intel is somewhere in my 401K's index funds, but also probably Qualcomm. I'm not a corporate shill, thank you very much for the good faith. Just a hobbyist looking to not get seduced by the latest trend. If I were an ARM developer that'd be different, I get that.
2. The review says single-core Geekbench performance is 1290, the same as an i5-10500, which is also similar to the N150 at 1235.
3. You can still get N150s with 16 GB RAM in a case for $200 all-in.
> review says single core Geekbench performance is 1290, same as i5-10500 which is also similar to N150, which is 1235.
Single core, yes. Multi core score is much higher for this SBC vs the N150.
But realistically, most workloads of the kind you would run on these machines don't benefit from multithreading as much as from single-core performance. At least at home, these machines will do things like video streaming, routing, or serving files. Even if you want to use it in the living room as a console/emulator, you are better off with higher single-core performance and fewer cores than the opposite.
> But realistically, most workloads of the kind you would run on these machines don't benefit from multithreading as much as from single-core performance. At least at home, these machines will do things like video streaming, routing, or serving files.
You're probably right about "most workloads", but as a single counter-example, I added several seasons of shows to my N305 Plex server last night, and it pinned all eight threads for quite a while doing its intro/credit detection.
I actually went and checked whether it would be at all practical to move my Plex server to a VM on my bigger home server, where it could get 16 Skymont threads (at 4.6 GHz vs 8 Gracemont threads at ~3 GHz, so something like 3x the multithreaded potential on E-cores). Doesn't really seem workable to use Intel Quick Sync on Linux guests with a Hyper-V host, though.
> in the living room as a console/emulator,
if you are talking about ancient hardware, yes, it's mostly driven by single core performance. But any console more recent than the 2000s will hugely benefit from multiple cores (because of the split between CPU and GPU, and the fact that more recent consoles also had multiple cores, too).
Depends on what you need. For pure performance regardless of power usage, and 3D use cases like gaming, agreed. For performance per watt under load and video transcoding use cases, the 12th-gen E-core CPUs à la the N100 are _really_ hard to beat.
Yes x86 will win for convenience on about every metric (at least for now), but this SoC's CPU is much faster than a mere Intel N150 (especially for multicore use cases).
I've got i3 and i5 systems that do 15W or better idle, and I don't have to worry about the absolute clusterfuck of ARM hardware (and those systems used can be had for less and will probably long outlive mystery meat ARM SBCs).
One of my ARM systems idles at less than 1W and has a max TDP (10W) lower than your idle draw. I also have an N200 box, and a 16-core workstation with an obscene power draw; each platform has its pros and cons.
I've noticed nuance is the first thing discarded in the recurring x86 vs ARM flame wars, with each side minimizing the strengths of the "opposing" platform. Pick the right tool for the job; there are use cases where the Orange Pi 6 is the right choice.
Agreed, at least for a likely "home use" case, such as a TV box, router, or general purpose file server or Docker host, I don't see how this board is better than something like a Beelink mini PC. The Orange Pi does not even come with a case, power supply or cooler. Contrast that with a Beelink that has a built-in power supply (no external brick) and of course a case and cooler.
This OrangePi 6 Plus board comes with cooling and a power supply (usb-c). No case, though.
Fair enough, but I suppose it does not come with storage (NVMe). Ready-to-use NUCs that retail for around $200 typically do. That's often only about 0.5TB, so not a huge amount of storage, but more than enough for a streaming box or retro console, say.
Correct, you have to buy the NVMe storage separately.
It allows you to build for what is coming. In a couple of years, ARM hardware this powerful will be cheap and common.
I feel like SBC stuff hasn't been worth it over x86 boxes like that for a while now, other than the GPIO being useful in certain use cases.
N100 boxes are cheap and use so little power, while having normal OS support and boot setup.
I've got two RK3588 boards here doing Linux-y things around my place (Jellyfin, Forgejo builders, Immich, etc.) and ... I don't think I've run into pain? They're running various Debian installs and just ... work? I can't think of a single package that I couldn't get for ARM64.
Likewise my VPS @ Hetzner is running Aarch64. No drama. Only pain is how brutal the Rust cross-compile is from my x86 machine.
I mean, here's Geerling running a bunch of Steam games flawlessly on a Aarch64 NVIDIA GB10 machine: https://www.youtube.com/watch?v=FjRKvKC4ntw
(Those things are expensive, but I just ordered one [the ASUS variant] for myself.)
Meanwhile Apple is pushing the ARM64 architecture hard, and Windows is apparently actually quite viable now?
Personally... it's totally irrational, but I have always had a grudge against x86 since it "won" in the early 90s and I had to switch from 68k. I want diversity in ISAs. RISC-V would be nice, but I'll settle for ARM for now.
The high end of the performance is impressive, and this has idle power similar to processors in its performance range (an AMD Ryzen 7 4800H idles at 45W). This is certainly not meant for low-power computing.
I use the RPi Zero 2 for the IO pins,
and the 4B / 5 for the camera stuff.
I don't think using these boards for just compute makes a lot of sense unless it's for toy stuff like an SSH shell or Pi-hole.
We need an acronym for these types of boards: Yet Another SBC With Poor Longterm Support. YASBCWPLS. Really rolls off the tongue.
Or we should just have "STS" (Short Term Support) after the board names to let others know the board will be essentially obsolete (based on lack of software updates) in two months.
> We need an acronym for these types of boards: Yet Another SBC With Poor Longterm Support. YASBCWPLS.
Deadend is how I describe it.
STS - Shit Tier Support