We don't have AArch64 desktops because it's not possible to build an AArch64 motherboard. There are no standard sockets -- and every ARM vendor starts their system using a proprietary, inscrutable Rube Goldberg machine. On the Raspberry Pi, the GPU starts the processor for crying out loud.
I made this same complaint about Android/ARM a while back[1]. ARM is just random shit soldered to random pins. I wish Microsoft would open source all the drivers to their dead phone line, since at least that had ARM+UEFI. I suppose there's DeviceTree, but honestly that's just laughable, and I've seen very few boards even release a correctly configured/working device tree.
ARM defines SBSA and SBBR as the standard boot environment for servers, so you can take a Windows ARM ISO or a RHEL ARM ISO and boot either on the same server hardware. Desktop ARM - were there to be any demand for it - could support the same standard.
I'm not sure that the standard socket is so important. I don't remember ever changing the CPU on a motherboard: by the time significantly better CPUs are available, they need a different socket.
An AArch64 desktop could be built in the same way that the Raspberry Pi was built, just with a higher spec CPU and more connectors.
You're right about the socket, but the problem is that every chip/board basically needs a custom bootloader - you can't just plug in a USB stick with a Debian AArch64 installer and go to town, you need a custom packaged blob from the vendor, which usually comes with their own blessed Linux distribution (e.g. Raspbian), and you're out of luck if that doesn't meet your needs.
If this model works for you, then yeah, a Raspberry Pi or a similar board could work. I've been hearing pretty great things about the new Pinebook Pro laptop.
> But: Is the bootloader even unlockable to install alternative software?
IIRC, MS require the device to support Secure Boot, but whether or not you can disable it or whether you can add other signing keys is undefined (and I have no idea what vendors do). (Originally the requirement was it must be always enabled and only the OEM could add keys, but that has since changed.)
Totally agree! You'd think that vendors would agree too, since it makes it strictly easier for engineers to use their chips and design them into higher value products.
In addition to the custom bootloader mentioned above, this would require significantly more SKUs. Many combinations of motherboard and CPU in use at the moment would probably not even make it to market.
> We don't have AArch64 desktops because it's not possible to build an AArch64 motherboard.
For the most part, this doesn't matter. Most SoCs have all the features you want. Though, the article is specifically bemoaning lots of missing I/O. I think most vendors think that the USB connectivity is a big wildcard. But, I agree - NVMe would be nicer than mmc/SD, that's for sure.
> every ARM vendor starts their system using a proprietary, inscrutable Rube Goldberg machine.
This does matter, a lot. It matters so much that we didn't change how x86 booted for ~3 decades after the AT. ARM is trying to change things for their server products, at least [1]. I'm curious just how complicated/expensive the minimal product that satisfies this platform would be. Could you still do it with SoCs in the same tier as the Raspi/ODROID/etc? Server-in-name-only?
In the x86 case, it's proprietary to Intel, and the blob for each generation is universal across products using that generation of CPU. That blob then loads up a standard higher-level firmware that presents a uniform environment to operating system bootloaders. ARM boards don't boot with a blob from ARM, but with a blob from the particular SoC vendor, and everything after that point is unpredictable and unstandardized.
Oh, there's a few more levels of proprietary nonsense you didn't even mention. There's the chipset initialization, which is proprietary to either Intel or AMD (depending on which vendor you're using). But that doesn't even come straight from the CPU manufacturer; it's baked into the firmware provided by your motherboard vendor, which typically includes a bunch of middleware from a third-party BIOS manufacturer like AMI. And all of it is completely proprietary, and highly specific to the particular motherboard you're using, even though most of the hardware is identical across boards and manufacturers.
It's not quite as bad as you make it out to be. The early init stuff comes from the CPU manufacturer and is merely included verbatim into the firmware packages distributed by the motherboard vendor.
The UEFI environment is a mashup of modules from the CPU/chipset manufacturer, a BIOS vendor like AMI, and sometimes stuff from TianoCore/EDK. The end result is specific to each board model, but most of the modules going into the UEFI implementation are either universal/hardware-agnostic, or uniquely determined by the choice of components on the motherboard (primarily the chipset and superIO). The only stuff here that's excessively messy and error-prone is the ACPI tables.
The configuration UI is particular to the board vendor. The UI necessarily has some tweaks for each particular model, but vendors share most of the code and visual design across the entire product line. Occasionally, it's possible to skip the silly UI and fall back to the default old-fashioned text mode standard configuration UI provided by AMI or whoever.
It is actually, but it requires AMD or Intel to care. AMD could easily make an AArch64 CPU that is X570-compatible; it would require a BIOS update for sure, but they could do it. There is nothing physically stopping them from making a CPU with all the interconnect bits needed and a ton of AArch64 cores... other than there being really no financial incentive to do it.
Linaro has gone a step out from a "socket" to define board spec and mezzanine connectors.
Arm has a lot more variation in the processor / SoC / SoM implementation than general-purpose computing does; that is indeed the main appeal at the small end of the scale.
The author's description of what is a "desktop system" sounded a lot like a "no true Scotsman" to me. I've had several desktop systems that didn't meet all of the author's criteria - in fact, I've had desktop systems that met none of these criteria. And these systems did meet what most people would consider to be "desktop systems": box shaped, designed to be put over or under the desk, tethered (no built-in battery other than the RTC battery), internally expansible (not all-in-ones or NUCs), with a x86-32 or x86-64 processor, and booting with traditional BIOS and/or UEFI.
I disagree; the author's description matches the majority of x86 motherboards in production, and if you were to wander into a random computer store and ask for a standard desktop PC you'd be led to an array of machines that met all of those requirements. If it's not "small form factor" it almost certainly checks all the boxes.
I agree with the author, what AArch64 (or any alternative architecture) really needs to be taken seriously for more than "toy" or "appliance" type use cases is a real desktop board that an interested party could swap in place of the motherboard of one of their old PCs without having to invest in a pile of adapters.
To me, that means the following:
* Standard ATX or MicroATX formfactor
* Standard ATX power input
* At least two standard DIMM or SODIMM slots, preferably four
* At least 32 lanes of PCIe 3.0 or greater, exposed as at least one x16 and one x4 slot plus one x4 M.2, leaving eight lanes for onboard accessories or more slots as desired
* At least two SATA channels, preferably four
* At least one copper gigabit ethernet interface
* At least six USB 3.0 ports, preferably more and faster.
* Onboard audio sufficient for watching Youtube or participating in a VoIP/videoconference call.
Since I'm pretty sure that providing video from power on rather than after the OS loads requires support in the video card itself, onboard video of some sort is probably also a requirement for now as well. Specifics don't matter, but it should be able to handle an accelerated desktop and video decoding.
Yes, that does seem to be pretty much exactly what I'd want out of an ARM desktop platform. It would be nice if they'd offer more cores in the desktop variant of the board, and anything that has a "contact us for pricing" link annoys the crap out of me, but it does meet the specs.
That just means that he used a poor term or didn't qualify it. Ultimately those features are the desired ones. So that's what they are looking for, regardless of what it's called.
If you want a non-x86 desktop, but not AArch64 in particular, I'd recommend looking at the Talos POWER9 systems instead. I have a Talos II Lite, and my only two complaints (less pcie slots with only a single socket, and no built in sata) were solved with their new Blackbird motherboard. It's gotten closer to the desktop experience than any AArch64 computer I've tried, and in some ways it's even better (petitboot is awesome).
Also, those things suffer from the same issues as those pointed out in the OP: a server motherboard crammed into a desktop case; the built-in VGA graphics comes from the integrated "server management engine", limited hardware support/driver availability, etc. Lots of fun to be had for lots of money!
The cost is pretty comparable to the Ampere eMAG AArch64 system in the OP, which costs about $3k. It's an unfortunate fact of low volume production, though.
It can be useful to test code that's supposed to be portable on a system with a weaker than x86 memory model.
And a self-synchronizing instruction stream can be a security advantage and a fixed instruction width means Power and AArch64 are trivially self-synchronizing. And really just not having the dominant ISA is a pretty big security advantage in some cases.
I'm pretty sure that neither of those is a huge deal for most people but I can see wanting it.
> And a self-synchronizing instruction stream can be a security advantage and a fixed instruction width means Power and AArch64 are trivially self-synchronizing.
AArch64 is aligned, but not self-synchronizing. While it would not be possible to execute, reading code at an arbitrary offset can still be valid code. I'd imagine Power has the same issue, since it's really hard to avoid this generally if you'd like to have a sane encoding and support immediates.
Somewhat interestingly, x86 is variable length but I have heard that it is often "eventually self-synchronizing": apparently if you start it off at the wrong offset, it will decode a couple of instructions incorrectly but usually end up disassembling to the correct ones.
Generally, fixed-width instruction sets require that instructions be aligned; that is, instruction addresses end with two 0s if instructions are 32 bits wide. One benefit of this is that you can make a jump 4 times as long for a given constant size. In the case of AArch64, this lets them do ±128MB branches with a 26-bit signed constant, or ±1MB conditional branches with a 19-bit constant.
Another benefit is that the fetch stage doesn't have to handle corner cases like the instruction crossing cache line boundaries. I don't think the security implications were anything the designers cared about but they're a third benefit. Oh, and I think some language designers have stored garbage collection related information in the least significant bits of stored addresses since it doesn't affect flow control but I wouldn't swear to that.
You're right that a natural x86 instruction stream will tend to synchronize itself fairly quickly. The problem is a malicious instruction stream that can be designed not to do that for at least long enough to do its thing.
When multiple cores access the same memory location, it is expensive for one core to invalidate the cache of another or to ensure operations don't get reordered in either core. These days, when reading or writing shared data, most architectures require memory barrier instructions to guarantee that your core sees writes from another, and that other cores see your writes, in the timely, sequential fashion we expect when writing code that accesses variables.
Historically there were architectures that would reorder these accesses in a "lax" way, making very few guarantees about what you will see from another core, on the theory that it will cost less to keep things synchronized between cores (most data is not shared anyway, so why waste work trying to create a unified view across cores? The CPU can also reorder work for better efficiency.). Intel is historically one of the most conservative, strict-ordering architectures, requiring fewer barrier instructions and creating the illusion that reads and writes more or less occur on a single timeline.
I don’t know why you’re getting downvoted. It’s a fair question to ask.
“Why can’t I buy” is essentially equivalent to “why doesn’t anyone sell”, which is basically “why isn’t there a market for?”
It’s possible to have a conversation around “why isn’t there a market for ARM desktops?” or equivalent but a conversation about “why isn’t there a market for anything but x86” would basically be enumerating the alternatives and examining each.
Nobody is going to open a store aimed at the “anyone but x86” market.
It's not that crazy. In other fields, there are stores for "alternative ___". There's a record store a couple blocks from me which sells every physical medium except CD. There's a car dealer which sells cars for every fuel except gasoline. There are many restaurants which serve foods with every ingredient except meat. There are government departments, education departments, and stores which deal in every language except English.
When one way is dominant, it's not uncommon for all the others to get rolled up into an "alternative" grouping. Any one of them by itself would be too small to amount to even a small store, but all of them together are a decent set.
x86 may be locally optimal, but I don't trust it not to be a dead end, or at least, have its lunch eaten by something else; I personally am interested in non-standard configurations just to make sure that we can try new things.
You want better performance per watt. Apple sells millions of iPads and Google OEMs sell some number of ARM Chromebooks and Tablets. Amazon is selling little android tablets as low as $40.
The desktop is essentially a legacy model as it stands today. Nobody is offering ARM because Windows doesn’t support it, and there really aren’t other options. Even enterprises are shipping the cheapest crap possible for PCs and innovating on mobile.
You’ll see some change when Apple ships ultralight laptops that aren’t as compromised as the intel platform devices.
Maybe people just want to play heavily modded Minecraft on a Raspberry Pi that they can quickly set up and shotgun to eager nephews all across the country. Just throwing that out there.
maybe you detest x86 and love aarch64. it's like cars. i dunno...why do you need to understand it for it to be a "thing" that exists in the world outside of your head?
it's the 64bit leap from the acorn risc machine, for crying out loud, it's its own goal, wantable for its own sake.
pity that riscos will never be 64bit lol
I think the Jetson AGX Xavier is probably a better match out of the box to the requirements than anything listed on that page. The only thing it doesn't meet is the plus on 16+ GB of RAM. For the same price you could get a much better x86 desktop though.
- 8 core CPU
- 16 GB LPDDR4x
- Volta based GPU
- Gig ethernet (native)
- PCIe x8 (x16 physical slot)
- m.2 E key (PCIe wireless or LTE)
- m.2 M key (PCIe NVMe SSD)
- 2 USB 3.1
- HD audio header
- eSATAp through PCIe bridge (allows native 2.5" SATA drives)
But really there isn't much reason to use ARM on a desktop outside of use cases that specifically need ARM.
I'm trialing AArch64 for desktop use. I bought a Raspberry Pi 4 4GB desktop kit a while back and it sucked in the beginning. So I did what most people do with their Pi: leave it unused and unplugged, like a Nintendo Wii.
Finally, after the firmware fixes to improve cooling came out a few months ago and the advice became to just run the thing on its side, I decided to revisit running a 'desktop' on it.
Raspbian, as most people know, is 32-bit, and the idea is you can plug the microSD card into any model of Pi ever made and it'll run. This sucks for performance, and you can't run a version of Firefox made in the past few years. Chromium isn't an option on a 4GB RAM system, it's just not.
I eventually settled on using sakaki's gentoo pi64 build and it was fairly painless to get running. Unfortunately, she hasn't kept the build up to date and there's a lot of weird customization she performed to get things running in the early days.
Anyway, aside from having it run some compiles overnight and finding a new binhost to pull prebuilt stuff from, it runs pretty well. Actually better than my Intel Atom powered GPD Pocket (as it does not throttle, ever). All the hardware works, and I've had no video issues since switching to the 5.4 kernel. It checks most of the boxes that OP asked for, aside from the RAM being limited to 4GB. Unlike more powerful options on the market, it's got a wide release and a low price point, so developers will be working on it.
So yeah, there's your AArch64 desktop you can buy today that everything works on for around $50. If you want to improve performance a bit more for IO, there's SSD/nvme adapters for it, and while I'm unsure about hooking into pci-e, the device has it...
Oh, and the audio thing is dumb as hell to complain about. Just output the audio through HDMI to your receiver and call it a day. What's the big friggin deal that the desktop lacks a 7.1 analog output. If you want high quality headphone audio, USB audio is practically plug and play on any OS these days. I don't get it.
Ubuntu is probably the way to go on the Pi if you're looking for compatibility. The 32 bit version is ARMv7 instead of ARMv6 with HF packages as Raspbian is. The 64 bit version is ARMv8. It's an official version from Canonical so you don't have to worry about some person abandoning their build or compiling your own packages all the time. With the newer CPU in the Pi 4 it's possible for them to support >4G models in the future but they aren't really interested in targeting $100+ use cases so it'll never make a "great" desktop. Useful as a fanless device that can run standard code though.
If you hack into PCIe you'd lose the USB. It's only a single x1 connection. I suppose you could hack a PCIe switch on top of that but even so it's x1 so it was already oversubscribed just hosting the USB 3 interfaces.
I don't know, probably adds to my CPU overhead a little and I don't feel like doing that.
Not using chromium or electron apps helps my RAM situation pretty strongly. The only time I run OOM is if the pi is doing on-device compiling with too many threads. I run a modest 8-12 firefox tabs. Usually just over half the ram is utilized throughout the day.
> Oh, and the audio thing is dumb as hell to complain about. Just output the audio through HDMI to your receiver and call it a day. What's the big friggin deal that the desktop lacks a 7.1 analog output. If you want high quality headphone audio, USB audio is practically plug and play on any OS these days. I don't get it.
Dude, audiophiles run complicated setups. Maybe their receiver doesn't take HDMI and they'd prefer to route that to the TV. Maybe they're using analog amps from the 70s that take 1/4" balanced TRS. Maybe lots of things. That thought process is exactly the kind of thinking that leads to rigid, inflexible designs that lock people into parts or hardware they don't want to use
> Measurable with an oscilloscope though. Not your ears.
Admittedly I’m running a pretty budget setup, so modern systems may have improved measurably, but I’ve noticed on both my desktop and my laptop (with 2010 Intel 5 Series chipsets) the onboard audio line out gets really noisy, so much so that I can hear different kinds of buzzing when I move the mouse and scroll :) Wouldn’t call myself an audiophile by any means either, haha.
Some devices have decent analog outputs. Apple's had a good history with their laptops. Most suck; all of the Pi's analog outputs suck: there's too much noise, and they don't have proper amp circuits, so they aren't able to maintain full frequency response under a load. I wear IEMs that are extremely sensitive, so they pick up any noise inside the system if it's not well isolated, yet because of the amp problem mentioned earlier they won't get full bass response. This leaves the sound very thin and noisy.
But that wasn't even my point. My point was: Any sort of sound being sent out of the device should be digital (pci-e to a sound card, USB to a headphone DAC, or HDMI/toslink/coax to a receiver). All of those options mostly eliminate the noise problem when done right.
Just wait until Apple releases a Mac mini powered by an A-series CPU. If I had to bet, I’d say this will be the first Arm Mac they will release. This way developers can easily buy one for porting their apps before A-series-based MacBooks are released.
I am going to go out on a very short limb and predict that Apple does no such thing in the foreseeable future, say 3 or 4 years. In the past, Apple has only transitioned to a new architecture once it was so powerful that it could easily emulate older hardware. Apple doesn't care about backwards compatibility as much as Microsoft on desktops but even they cannot release a new desktop/laptop that doesn't run existing software.
Although ARM processors are fast in certain benchmarks, they cannot compete with x86 for performance and certainly cannot emulate the instruction set at a speed users would find acceptable.
ARM makes great sense for low-power devices and _maybe_ servers. Apple already makes tons of the former and seems uninterested in the latter.
It should be about 4 years from now that the x86-64 patents will have expired, so I wonder (not an expert on anything) if we might see Macs with dual-architecture features after that, whether full support for two instruction sets or just some mixture of software emulation and hardware acceleration for x86.
My iPad Pro is as fast on single-thread as all my Intel desktop machines, and 2/3 on multicore. In a non-thermally throttled enclosure, I would expect it to outperform.
3. Rosetta only emulated the actual app binary; eventually it called into native libs, so performance for the most part held up really well IIRC (for apps, not sure about games)
That article is overly optimistic and only works for the very specific, tailored example application. However, Bitcode is essentially repackaged LLVM IR so it's not hard to see it coming to macOS.
I find it interesting that every Apple CPU transition has started at the top, but everyone and their dog seems to know that ARM will happen and this time start at the bottom.
Apple rumors are half prediction and half wish. I don’t disagree with your wish (10 year old performance is fine for me) but it’s really not Apple’s MO. They aren’t going to release a new pro workstation and then turn around and make cheap low-end hardware aimed at developers.
My guess is a developer-focused supercar of a Macbook. 16 cores, FaceID, maybe OLED?
Jet black, announced for preorder at WWDC. They wouldn’t be able to make enough, and now a boring, disruptive CPU transition is sexy.
There have been some weird rumors out of Asia about a Mac “gaming laptop” that could easily be misinterpreting a product like this.
Also, I think folks ITT are overestimating the difficulty of acceptable performance binary translation (not direct emulation) of x64->AArch64, especially when Apple controls and has experts at every level of the stack. 32-bit apps weren’t de-supported this year just to save some memory.
But Apple's CPUs are so powerful because they have lots and lots of cache compared to most ARM SoCs. I'm not saying they can't release a Mac with ARM under the hood (they probably will), but I don't see it setting a trend for ARM desktops elsewhere.
Apple can use a lot of cache in their CPU designs because they can charge a lot for their hardware (L1/L2 cache is very expensive). But the rest of the market can't since they're mostly selling commodity hardware — with commodity chips — and price is a very sensitive point there.
So Apple could, in theory, make and sell plenty of premium ARM laptops with performant x86 compatibility for older software, fast and cool for native apps, while the Windows/Linux competitors are still lagging way behind with a poor user experience.
The Apple TV is more or less an iPad. It already supports bluetooth keyboards and other accessories, so you could really say that Apple already has an ARM desktop Mac, it just needs more user software.
Sigh, no definition of what a "desktop" would be. That would be a start. Does it have to have "on board audio" ? All my early desktops up through the Pentium 3 used an add in card (typically a Soundblaster) to provide audio. So why make it the "requirement" for a desktop? Graphics? All graphics were "plug in" in early desktops until transistors got so cheap you could throw one into the southbridge for "free." The list goes on.
It would be a much better exercise to create a systems design document for what an AArch64 desktop should have: required, extras, and nice-to-haves. Start there and work forward, rather than starting from a full-fledged PC desktop with 30 years of evolution behind it.
Desktop and workstations will be the last place to find non-x86 chips. Power use doesn’t matter, and critical consumer applications and raw performance are a must.
Its much more exciting to see what will happen with laptops and (in cloud) servers.
I had some representatives from ARM and NXP come and give a demo of some ARM boxes.
Now these were in the context of white box customer premises networking equipment, but what I found quite interesting was that these machines all supported UEFI.
There's certainly an effort to have some standardisation for ARM machines.
Yea, I think the model here (especially since the article seems to be conflating "desktop" with "high-end desktop" in terms of the ARM ecosystem) will be that we see these features developed on the server side first, and then given the "content creator" treatment. I don't think that this is as bad as the article implies. Rocking the top of the HEDT market is good for magazine covers, but it isn't exactly where the profits are. Filling datacenters will fund the work that we're interested in -- namely the robust, expandable systems that have the muscle and the mainstream features that make for a cushy desktop experience.
But also, aren't we still complaining about Linux on the desktop?
Content creation is done almost exclusively through proprietary software that's heavily optimized for a given platform, so this would be one of the worst markets to tackle first with an architecture change.
On a related note, I just got my Pine64 Pinebook Pro, and it's much better than my beloved Samsung xe303c12 in all aspects. Lightweight, FHD screen resolution. No need to fiddle with the OS.
I'm still waiting for mine; it seems like the most viable option for someone who wants to develop for ARMv8 Linux. I had the original Pinebook and found it disappointing. How would the Pinebook Pro compare to, say, a MacBook Air?
Honestly, I would like to see more RISC-V offerings instead of 64-bit ARM, although the ISA is still relatively new. Also, some of the extensions, like bit manip, are not finalized. However, the base instructions are.
I'd love to have a RISC-V tablet one day, but think ARM will be an intermediate step, which will, through gained experience and already built infrastructure, ease the second switch of ISA.
His list of requirements for a "desktop system" is ok for a developer desktop PC, but for most people it's complete overkill.
The RPi4 running Raspbian is just a tad short of being perfectly acceptable for users like my GF. It's just a tad sluggish, so either a bit more streamlining of the platform, or a bit beefier CPU and GPU and it would be just fine for her.
The lack of quality storage interfaces is a killer too. Pi 4's still eat SD cards and even when they don't it's not exactly speedy. USB to mSATA/m.2 SATA adapters provide a step up but it's still quirky and you still need a functioning SD card to boot to it with the current firmware.
Needing an SD card to boot is especially annoying because the pi 3 did have the ability to boot from USB and I think network with no SD card present.
EDIT: To be clear, non-SD boot on the pi 4 "will be added in the future via optional bootloader updates"[0], so hopefully this is a temporary complaint.
The NVidia Jetson Nano is pretty close to a useable desktop nowadays too. I run the stock Ubuntu Gnome desktop on there and mostly the lag isn't a problem unless I start doing something somewhat intensive like Gimp or Blender.
I have a cluster of 6 in my basement running Kubernetes and Glusterfs, and I have been using them for all my fun CUDA practice to learn a bit more image processing.
I think the biggest missing feature right now for everyday use is browser support for the hardware video decoding. Mostly for videos on all social media and YouTube, or even Imgur serves up h264 video rather than gifs now.
I've recently been recommended YouTube videos by ExplainingComputers and that channel introduced me to the new breed of SBC (single-board computers) including the Odroid. One of the more recent videos is for the Khadas VIM3 [1] which was a very interesting ARM based SBC.
I am blown away these days by the power of these cheap and small boards. I just don't have any use for them right now and I know I would tinker with them for a few days then they would end up in a drawer collecting dust.
> 4U rack/tower case sounds clearly like “we know it is server but will try to sell it as a desktop”. No m.2, no audio, no mention about USB ports, no 1GbE network.
Isn't every server meant to have a 1GbE port nowadays? Don't most servers have m.2?
And for PCIe slots - aren't those extensible with risers?
> That system was kind of Frankenstein’s monster. Audio over some USB stereo-only dongle, all USB devices plugged into hubs due to only two ports on mainboard I/O panel.
Perhaps one could just put a USB hub and the audio dongle inside the actual case?
Perhaps. But I haven't actually seen servers with SFP+ 10G on-board. All the servers we buy have a classic 1GbE RJ-45 built-in and we extend them with SFP+ 10G PCIe adapters when needed.
Every time I've tried to run AArch64 like a desktop (mostly via ODROIDs), it's been an exercise in misery. Endless tweaking, and almost always an important service, like 3D graphics or audio, doesn't work. Last I checked, the driver situation there is just lacking.
Support for the Rockchip RK3399-based boards seems pretty good. Audio and wireless on my NanoPi M4 have worked flawlessly for me. No problem playing YouTube videos, either, even at 1.5x speed. Editing 3D CAD files using Freecad in Debian 10 has also worked great.
The only things I miss are being able to run the latest versions of some software, which are often compiled only for x86 or MacOS. For instance, I'm currently stuck on Debian's Firefox ESR (currently Firefox 68.4) because Firefox doesn't offer precompiled binaries for AArch64 on Linux.
Why would one want an ARM desktop system (except if (s)he is an ARM developer)?
For my desktop I want as much power as money permits. And the most important thing is ergonomics, since I'd rather invest in my eyes and wrists than in the silicon.
I use a pedal-powered computer. For me, every watt counts.
My desktop system is a NanoPi M4 and a 16" Sceptre monitor. The combined system draws a total of about 6.5 W. Granted, there are some laptops that come near that level of power consumption. I simply like the form factor of a desktop PC.
I think that the main argument here is that there isn't a developer desktop machine that fits the HEDT slot for ARM. There is the Raspi / Odroid / Pine64 tinkerer scale, and the ThunderX2 scale, but nowhere in the middle.
I think the Gigabyte ThunderX Station is being marketed in some channels, like the main page of this site: https://www.phoenicselectronics.com
The new Fujitsu venture Socionext does sell stuff, but like everyone has said, they're interested in doing design-build for your factory robot controller or your edge supercomputer. But they do sell a box: http://www.socionext.com/en/products/assp/SynQuacer/Edge/
It just isn't marketed as a GP /HEDT box because that isn't their market
A Raspberry Pi 4 runs one 4K monitor pretty well, has audio, and you can attach peripherals.
It is desktop like.
Arm desktops are just single board computers. Why? Because that is all you need.
If you want to assemble overpriced parts yourself to get marginal benefits, stick with x86.
The truth is, the market for desktops, even x86 ones, has shrunk considerably because there are few benefits, except if you are rendering, and even then you should be using cloud compute.
Like an artist is supposed to use a remote desktop to do 3D modelling? Or an engineer doing CAD? Or a video editor creating 4k video?
Yes. High-end workstations are effectively obsolete for professionals rather than prosumers, it's just taking some IT departments a while to catch on. All those workloads are highly bursty and involve substantial collaboration, which makes virtualization a natural fit. I haven't tested and wouldn't necessarily recommend a Raspberry Pi as a client terminal, but it just doesn't make sense any more to put a really powerful machine under everyone's desk rather than having a rackload of servers as a pooled resource.
Without mention of Apple's plans for MacBooks, I don't buy this doom-and-gloom prediction. Remember that everyone thought AArch64 was DOA right up until Apple released an iPhone using it, and then all the other manufacturers instantly followed suit. I wouldn't be surprised if Apple is the first to release a mainstream AArch64 desktop/laptop, and then everyone else follows them.
It's vaporware until it exists. Sure, Apple seems to have the strongest ARM CPU at this time. It remains to be seen if they can single-handedly create an entire arm64 PC ecosystem.
I suspect they'd start with their server farms, just as Amazon is doing with its own ARM AWS instances based on the Graviton SOC it bought when it acquired Annapurna. Amazon is on record saying they are going to migrate their internal services like ELB over to ARM.