Arm Confidential Compute Architecture/Realms seem like both a potentially significant security tool and a good way to keep device owners from being able to look at their own data or see what their device is doing.
Sure, but hopefully this isn't enabled for consumer CPUs anytime soon. This is like how EPYC processors (IIRC since Zen 2) have 'platform secure boot', where once a CPU has been in a certain motherboard, it can't be used on a different vendor's boards. This definitely increases security for cloud providers who don't want a 0-day to compromise a bunch of customers, even if it means the market for used EPYC CPUs will be smaller and/or require sellers to list the specific board the CPU used to be in.
This is more aimed at cloud providers, specifically so they can offer customers who don't want their cloud provider spying on them some privacy. Clients could be governments who want to outsource their datacenters but run sensitive computations.
There's a rumor on the grapevine that RPi doesn't pay the license fees for their cores. There's probably more to it than that (maybe they do for compute modules, which are explicitly not for the .edu market?), but the word on the street is that the baseline licensing cost is $0 for them from ARM.
I ultimately think you're right, and we won't see a V9 in an RPi for a while, but it's a more complex situation than most SoC integrators face, with a slight chance of working out in RPi's favor. Does ARM care enough to make a tiny core in their gate-count niche? Does ARM want to give away the new cores for free to increase market share and get V9 features into the hands of tinkerers? etc.
Why would RPi pay the license fee for ARM cores? Up until the Pi Pico, they didn't make their own SoCs. You don't have to be a licensee if you're only consuming processors.
edit - the microcontroller was called Pico, not Nano
They're not actually a subsidiary, but historically they've been pretty close. Regardless, Broadcom is absolutely the entity responsible for negotiating with ARM and paying the license fee.
If there's any truth to this rumor it's probably ARM discounting the license fee for the fraction of devices that Broadcom sells to the Raspberry Pi foundation.
ARM knows that RPi is the go-to ARM SBC, and they benefit tremendously when developers treat it as a first-class platform.
Sure, I didn't say that they were a true subsidiary, I used the word "basically".
At that point, yes Broadcom is ultimately paying the fees to ARM, but separating RPi from that negotiation is an oversimplification. RPi absolutely has a seat at that table.
And ARM probably doesn't care that much about it being the go-to SBC (some cheap Chinese board would take on that mantle without RPi leading the charge), but instead that it's the practical successor to what ARM was founded to do: put hackable computers with passable performance, designed by Brits, in front of British school children as cheaply as possible.
> The RPi seems to intentionally lag around 5 years behind the Arm roadmap so maybe the RPi 7 will have Armv9 in 2027.
The chance they'll have stuff using RISC-V in 2027 is definitely above zero. It's easy to forget that the Raspberry Pi Foundation have been RISC-V Foundation members for years.
Fingers crossed! A popular raspi-like SBC is definitely something that could boost RISC-V popularity. They already exist, yes, but they're expensive for the performance they offer.
Raspberry Pi is one of the faster SBCs on the market right now. It's amazing that it's as cheap as it is.
The sad reality is that the faster ARM cores don’t trickle their way into mainstream ARM SoCs very quickly. The fastest ARM chips go into cellular phones, set top boxes, and other mass produced electronic goods from vendors who can afford to implement them.
I hope this changes in the coming years as more chipmakers embrace mainline Linux and open source drivers instead of relying on the old models.
Unfortunately that doesn't seem very likely. None of the major vendors (including ARM itself) seem terribly concerned about the open source world. Given they're dealing in hundreds of millions to billions of units and the open source world (currently) accounts for perhaps the low tens of millions (all-time total!), it's understandable from a business standpoint.
Honest question from an outsider: except for the hobbyist/techie points, is there a solid reason why you'd buy a Raspberry Pi?
Aren't there any Intel Celeron or some such super cheap x86 small boards? I imagine you'll get a ton more oomph, probably much better software support and they shouldn't cost that much more.
It's a standardized piece of kit at a great price point with broad and long-term availability around the world. There's simply not anything that hits all those points in the x86 world.
It's a good target if you want to provide something very close to a "turn-key appliance" without actually selling your own hardware: You can provide an image that a user writes to a microSD card, plugs in, and it just works.
For example: HomeAssistant [1] for home automation, OctoPi [2] 3D printer software, OpenELEC [3] media center, RetroPie [4] game console emulator, Volumio [5] music player, and lots more [6].
> It's a good target if you want to provide something very close to a "turn-key appliance" without actually selling your own hardware: You can provide an image that a user writes to a microSD card, plugs in, and it just works.
Better yet - they boot from USB now (and even NVMe if you're using a CM4).
You're missing (IMO) the fun one: PXE. Micro SD cards go bad frequently, and this turns the RPi into a Line Replaceable Unit. Should one ever have coffee spilled on it, a new one is plug and play, with (excluding hardware availability) relatively little downtime.
For many use cases, yeah, PXE is great. Classrooms/labs, digital signage, or anywhere you need distributed I/O, it makes total sense.
The biggest downside is your PXE server going down is going to kill all the Pis, so if you are doing something important you need a highly-available PXE setup. And if you have that, it's probably a better place to host a lot of the services people use Pis for in the first place.
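For anyone curious what the server side of that looks like: a rough sketch of a dnsmasq proxy-DHCP config in the spirit of the official Pi netboot docs (addresses and paths here are placeholders, and the option details are from memory, so double-check them against the docs before relying on this):

    # /etc/dnsmasq.conf -- proxy mode: the existing router keeps handing out IPs
    dhcp-range=192.168.1.255,proxy
    # Menu entry the Pi bootrom looks for by name
    pxe-service=0,"Raspberry Pi Boot"
    enable-tftp
    tftp-root=/srv/tftpboot
    log-dhcp

Each Pi then pulls its kernel and root filesystem over the network, which is what makes the board itself stateless and swappable.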
I've set up a small office with 5 lines + 10 extensions all running on one of those (+ an analog line adapter), and it's been trouble-free for the 3-4 year uptime so far.
Oh, definitely a good example, I'm embarrassed I forgot that! I was part of the core FreePBX development team for a few years (before RasPBX was a thing), and eventually moved my home PBX setup to a Pi 1 running RasPBX, which I used for 4+ years.
Sadly, as our usage of the IP phones dwindled (cell phones, texting, and now conference software taking over), my significant other began evicting them from around the house ("too big and ugly"). A couple years ago, my instance was at the point I needed to do some major OS upgrades to it, and considering it was basically my office phone and a cordless left, I ended up retiring the Pi and PBX. Now I just register my last couple phones directly to my VoIP provider.
It looks like a Pi 4 can handle "dozens" of simultaneous channels (though depends on codecs used), and that would be my starting point if I got back into doing PBX work or deploying a system today -- though I'd probably run it from an SSD instead of SD card.
This is a great example of the benefits of the Pi hardware, too: People don't tolerate PBX downtime. With the Pi, it's incredibly cheap to have hardware for a failover server (try pricing that out on a commercial PBX!), and in an emergency you can run out and buy one locally or have it shipped overnight.
In my personal niche, there's also http://stratux.me. Commercial competitors are making their own devices that cost hundreds more, but for those willing to put in the few minutes to assemble the RPi and USB dongles into an enclosure, it's very cost-effective.
I bought an Intel Galileo board a couple of years after I bought one of the first Raspberry Pi boards. Guess which one is still supported, and which one the manufacturer bailed on a couple of years after I bought it?
It's not mysterious. Making and supporting Raspberry Pis is the Raspberry Pi Foundation's thing. Wandering around in a confused state is Intel's thing.
Almost every x86 board on the market is way more expensive and/or way more power-hungry than the Raspberry Pi.
The only one I'm aware of that's in the same ballpark is the Atomic Pi, which was apparently a limited production run using heavily discounted surplus components. It's not as popular as you'd expect given the $40 price point, which I assume is because there isn't anything like the same level of community support that the Raspberry Pi has.
Used Chromeboxes are inexpensive, and pretty close to a NUC other than the BIOS. But, people have figured out how to get them to boot Linux, Windows, etc, now. There's also pretty nice used USFF boxes from Lenovo, HP and Dell: https://www.servethehome.com/introducing-project-tinyminimic... The older ones, like a Lenovo M72 tiny, can be really inexpensive.
To me, they aren't always a better fit than an RPi, but they often are.
I have installed a bunch of T620s, they're great. There's also a 4-core version with the AMD GX-415GA for ~60 USD. If you need PCIe, check out the Fujitsu Futro S920 [1]. You can fit a low-profile network card with a 5 USD riser. Under 100 USD where I buy them.
The GPU is more powerful, the CPU is comparable. Similar power usage. OTOH the RPi 4 doesn't have a built-in SSD, you need an additional case, plus a heat sink if you plan to actually load the CPU, and it's not x86. Some also take PCIe cards without extra boards/hacks. If you don't need the GPIO ports and just want a low-power server/appliance, then thin clients are still a much better option. I use them for VoIP, HomeAssistant etc.
Bought a couple hundred of them for on-site devices (intentionally avoiding IoT branding here). We use them as plug (into mains and net) and play appliances, zero knowledge required by the end user.
Cheap, robust, solid support and tools. Definitely hit the sweet spot for what we needed.
Raspberry Pis are on much older Arm architectures/manufacturing processes, with nowhere near the performance of a smartphone, and with much lower energy efficiency.
Raspberry Pi 3 was a Cortex-A53 backported outright to 40nm.
Raspberry Pi 4 is on 28nm. (for comparison, Apple A8 and Snapdragon 810 were on 20nm already, in 2014-15)
It runs a Cortex-A72 at a low clock (1.5GHz; phones are at twice that nowadays) and quite high power consumption because of the process node.
The memory interface is narrow (32-bit bus, phones ship with a 64-bit bus and laptops/desktops with a 128b one) at a low data rate (LPDDR4-3200).
In addition to that, the CPU can only use 5GB/sec of it, with the remainder being reserved for the GPU only.
Those were some of the sacrifices needed to reach this price point. (and why it isn't representative at all of the performance of higher-end ARMs)
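To put rough numbers on that memory interface (my arithmetic, from the figures above): a 32-bit bus at LPDDR4-3200 peaks at 3200 MT/s x 4 bytes = 12.8 GB/s, so a 5 GB/s CPU allocation leaves the four A72 cores sharing well under half of the already-narrow theoretical bandwidth.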
The earlier models of the Compute Stick had serious EMI issues between their USB and 2.4GHz radio, to the point that some configurations were practically unusable. This turned a lot of people away and it never got popular. Not to mention they ran pretty hot even with active cooling.
At the same time Intel was heavily subsidizing their x86 mobile SoCs to tablet manufacturers, and for a while it was actually quite competitive price-wise, but it ultimately went nowhere.
Actually, that would only be right if the Raspberry Pi were any good regarding power usage. However, the Raspberry Pi is utterly crap at power savings, with practically no power-saving modes and very power-hungry (though cheap) components (excluding the RPi Zero models, which have no peripherals whatsoever).
I have a _very old_ PN40 from Asus -- this is a full PC with an Intel Celeron, a SATA SSD and about 8GB of RAM that idles at 1.7W (serving websites via GB Ethernet) as measured _at the wall_. This is just a standard PC Linux distro with zero customization (other than `powertop --auto-tune`).
For comparison, the latest RPi 4B, without the SSD and with 1/8 the RAM, idles at around 3.5W at the wall. Even the older RPi 3 I could never get below 2W, and that was without any peripherals (no display, no WiFi, no Ethernet, minimal USB) and with significant tuning.
On the other hand the PN40 + components cost significantly more (probably close to 10x), and the CPU performance itself is not that good these days.
Intel also marketed the Quark in the past as a relatively cheap x86 CPU ($9.62 per unit).
However, it was a 400MHz 486 with some backports (first-generation Pentium features, not even MMX) and a broken LOCK prefix, so software specifically needed to be recompiled for it.
And drawing 2.2W on top of that, it never ended up succeeding anywhere, with no successor.
I'm working in embedded; admittedly I'm not a hardcore hardware guy. But everyone's using RPis as the basic driver for our little gizmos during development. It's simply a good, cheap board to use for R&D and for connecting stuff to.
The x86-based SBCs are usually a lot more expensive. Usually they start around $100 or so, but at that point they use way-outdated chips. For instance, the basic LattePanda has an Atom x5-Z8350, which is limited to 2GB of DDR3 RAM.
The Pis really can do a lot, too. Mine hosts a VPN, Pi-hole, calibre-web, and a couple of other basic things. Really the only downside to the ARM-based SBCs is they mostly use MicroSD cards, which are probably the greatest bottleneck in the system and the most likely thing to fail.
As a desktop computer, outside of perhaps education? Probably not.
As a dedicated, small, cheap computer for the “brains” of various hobbies and certain professional projects (at least where reliability isn’t the top concern)? Absolutely.
It's by far the most popular $35 Linux box out there. People create distros for all sorts of use-cases. Off the top of my head:
Pi-hole, RetroPie, Volumio, OctoPrint, Home Assistant, PiAware, Plex/Kodi, and many more.
Currently I'm using one for cross-referencing aarch64 NEON assembly with other architectures. It's a cheap way to have real, modern ARM hardware that performs decently.
The raspberry pi is pretty popular here, but as someone who works on both embedded and server applications, I find it to be a massive piece of shit. The broadcom SoCs they use are complete garbage, with really poor reliability and several badly broken peripherals.
Raspberry Pi needs to do two things to become really useful.
1) ship chips with native-AES/hardware crypto
2) get rid of SD and have onboard NAND, like a phone
I've raised this on their forums but just get flamed for some reason by senior engineers suggesting that these ideas are ridiculous and that it's for education in Africa. I was even banned for suggesting they were shortsighted.
Maybe education's where it started, but I don't see why they're blind to the reality that most current users are tinkerers and Linux hobbyists.
> 2) get rid of SD and have onboard NAND, like a phone
Oh dear. That would involve flashing the OS onto the Pi itself and would increase the risk of bricking it. That is the reason for using SD cards instead of a NAND chip.
Secondly, you couldn't upgrade the storage either and would have to choose an RPi with a fixed storage size. Might as well get an M1 Mac Mini.
I cannot imagine having to choose a future RPi 5 with either 8GB, 16GB, or 32GB of NAND and being unable to upgrade the space on it. So no thanks and no deal on (2).
I agree, I have enough bad experience with on-board NAND from other SBCs; it's no fun at all.
The SD cards are nice for most use cases, but on some boards I'd really like to see a SATA port or M.2 slot which can be booted from and a x4 PCIe port (even if through an optional board or a HAT).
I see the SD cards on the Raspis as modern floppy discs. They are good to have, but not ideal for some cases.
I personally would not trade the HAT real estate for an M.2 slot in most use cases. I use that HAT slot for peripherals that enable unique and fun applications. MicroSD is horribly unreliable, but idk what the solution is without trading a significant amount of space.
Which ones are you seeing with M.2 slots? I've clicked a good 5-8 of the top listings from that list for which it would make sense to have such slot and they're missing it.
Just because there's onboard storage doesn't mean they couldn't still give it an M.2 or something. And you can install an OS just fine by booting off USB, just as you'd do with any other computer.
On lots of Pine64 boards you can flip a switch or short some pins to prevent the bootrom from using the SPI flash or eMMC. The SPI flash is not removable, but flashing a bad payload to it is not an unrecoverable error.
The BeagleBone Black has eMMC and an SD card and a hardware switch to force it to boot off the SD card. Plus, flash is rarely truly bricked; you can just hook it up to an SPI bus...
These aren't mutually exclusive. I would vote for onboard NAND and an M.2 slot, and perhaps also a jumper or something to select which to boot from.
Normal thing might be to boot from on-board NAND but put real workloads on M.2 if present.
I have to also vote for crypto extensions. The Pi 4 is the only ARM64 I have ever seen that lacks AES and CLMUL. Makes it pretty lame for web serving, router/firewall/VPN gateway, etc.
> Yep lack of crypto makes it dog slow for so many things. Including running a little Monero node or whatever.
Good.
The last thing we need is profit-mad crypto miners buying up every single Pi and we end up with shortages and insane prices, just like with gaming GPUs.
Doesn't change the fact I'm glad they excluded features that make it unfavourable to crypto people - for any reason - so that I can buy a Pi for what it's intended for at a decent price, instead of wondering if people are buying them to shove them up their arse.
Fair warning: the RK3568/RK3566 platform is very young. You may want to be sure that upstream U-Boot and Linux support the board before ordering. But the hardware looks promising.
What's so important about hardware crypto?
For me, the real game-changer would be some type of better GPGPU support on existing and future devices. Most of the FLOPs are in the GPU and it's not very practical to use it for compute due to lack of drivers and APIs.
Hardware crypto has an incredibly low gate count, and essentially all network communication requires crypto. Without it, you waste the (much more power- and die-space-intensive) CPU on AES.
There aren't a lot of flops at all in the RPi GPU, though. They probably want some sort of Neural Network inference engine, both because it is easy enough and makes for good educational content.
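Back on the crypto point: for anyone who wants to check what their own board has, here's a minimal sketch for aarch64 Linux, using the kernel's standard HWCAP feature bits (PMULL is ARM's carry-less multiply, the CLMUL counterpart mentioned above). Per the comments above, you'd expect a Pi 4 to print "no" for all three:

    /* crypto_check.c: report ARMv8 crypto extension support at runtime */
    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>

    int main(void) {
        unsigned long caps = getauxval(AT_HWCAP);
        printf("AES:   %s\n", (caps & HWCAP_AES)   ? "yes" : "no");
        printf("PMULL: %s\n", (caps & HWCAP_PMULL) ? "yes" : "no");
        printf("SHA2:  %s\n", (caps & HWCAP_SHA2)  ? "yes" : "no");
        return 0;
    }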
RPi does seem to have already split to serve two different use cases: the normal board for hobby/educational purposes, and the compute module for embedded "real" use. That works for #2. I suspect #1 depends a lot on what Broadcom can give them at a price point that preserves RPi's history of being dirt cheap.
They'd have to get a proper power section first (the bottom-left section), and designated external power headers rather than backfeeding through USB or GPIO (the PoE pins might work for that already?). Some claim the chronic SD card longevity issues on all Pis come from the dirty power the Pi feeds them.
I'm really surprised that Apple released Mac ARM cores without SVE. It feels like NEON compat is going to be an albatross for a platform like the Mac, which can't be quite as aggressive at removing ISA features as iOS devices can be.
But hey, maybe they just say 'screw it' and remove it anyway.
As Apple and AMD are currently clearly demonstrating, SIMD just really doesn't matter much.
Only a portion of the workloads that are commonly used can be profitably vectorized using SIMD. The curiously perverse nature of SIMD is that the wider the vectors, the smaller the proportion of time used by these portions is, and therefore the less you gain from further vector width increases. AMD is currently spanking Intel in almost all practical vector workloads, despite having half the vector width. Apple isn't far off either, despite a quarter of the vector width.
It wouldn't really ever significantly hurt Apple if they just literally never implemented any flavor of SVE. Spending all that engineering effort on improving scalar throughput probably has much better real-world payoff.
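A rough Amdahl's-law framing of that point (illustrative numbers, mine): if a fraction p of runtime is vectorizable, width-w vectors cap the speedup at S(w) = 1 / ((1 - p) + p/w). With p = 0.5, doubling from w = 4 to w = 8 only moves you from 1.6x to about 1.78x, because the serial half dominates.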
Your example fails to point out why AMD and Apple are able to compete despite having smaller vector widths, and no it isn't because "SIMD just really doesn't matter".
It is because AMD and Apple have wider architectures with more vector ports: they can execute 3 or 4 of these instructions per cycle while Intel can only execute 2 (or even 1 in some cases with AVX-512).
AMD has already said they will be adding AVX-512 to the next Zen, so they apparently think SIMD matters.
Apple will almost certainly implement SVE, they would be stupid to not do so, and they aren't stupid.
The thing with SVE, though, is that it's one of those CDC 6600-inspired designs, like RVV and Arm Helium, that do a much better job of abstracting the number of hardware vector lanes. That means way better power consumption at the low end, transparently, which is very much on Apple's radar.
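For the unfamiliar, here's a minimal sketch of what that lane-count abstraction looks like with the ACLE SVE intrinsics (names and structure are mine; assumes an SVE-capable compiler, e.g. gcc -O2 -march=armv8-a+sve). The same binary runs on 128-bit or 2048-bit hardware because the predicate covers exactly the in-range lanes:

    #include <stdint.h>
    #include <arm_sve.h>

    void add_arrays(float *dst, const float *a, const float *b, int64_t n) {
        /* svcntw() = number of 32-bit lanes, whatever the hardware width is */
        for (int64_t i = 0; i < n; i += svcntw()) {
            svbool_t pg = svwhilelt_b32_s64(i, n);   /* mask off out-of-range lanes */
            svfloat32_t va = svld1_f32(pg, a + i);
            svfloat32_t vb = svld1_f32(pg, b + i);
            svst1_f32(pg, dst + i, svadd_f32_x(pg, va, vb));
        }
    }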
The initial lack of SVE strikes me as very similar to the first round of Intel Macs launching with 32-bit only processors. It was 5 years before OS X dropped support for those machines, and 13 years before macOS dropped support for 32-bit applications. I wouldn't be surprised to see Apple accelerate those deadlines a bit this time around, and make macOS start requiring SVE 3-4 years after they introduce supporting hardware. That would probably be the point at which it was appropriate for third-party applications to start requiring SVE-capable hardware.
I don't think keeping NEON capability in hardware is going to hold back their chips much, so they probably won't be under any pressure to break compatibility with NEON-using apps anytime soon.
iOS would only be relevant if Apple was enabling you to run Mac apps on your phone, rather than running iPhone/iPad apps on your Mac. The latter capability does not prevent Mac apps from requiring SVE hardware that phones don't have.
Yeah, I really can't see how an SVE port couldn't handle NEON as well with only minor additional effort. If it's a wider-than-128-bit port you'll be leaving capacity on the table, but the NEON code would still run just fine.
Apple pushes their Accelerate framework pretty heavily as "the" way to do vector stuff rather than directly writing to a SIMD extension. But I don't have a sense of how widely it's used in practice.
Question - now that Apple is shipping its own ARM silicon - is Apple beholden to the ARMv9/10/11/etc future? (edit: duh, Apple has been shipping ARM silicon long before M1, I forgot)
Does Apple now have enormous input into the ARM spec process? (Or maybe they did already, because iPhones?)
The conditions in the agreement between arm and Apple aren't public so you can't say for sure. Historically arm hasn't allowed architecture licensees to do their own thing. You're implementing arm standard architecture (and passing the conformance test suite) or nothing. Looks like they've given Apple some special dispensation to do things differently (e.g. the custom AMX instructions that have been uncovered https://gist.github.com/dougallj/7a75a3be1ec69ca550e7c36dc75...).
For ARMv9 this potentially means they can pick and choose what they want to implement. I'm sure Apple has had plenty of input into the specification (along with other partners).
Except we have evidence that they've been implementing a subset of the specification on M1. Like VHE stuck on.
The not-well-kept rumor is that Apple has a much looser license than a standard architectural license, due to their very close relationship with ARM: both generally since the early 90s, and because Apple contributed very heavily to the early AArch64 design; it's arguably theirs as much as it is ARM's.
The fact that the current M1 already ships with an undocumented Matrix Multiplication implementation leads me to believe that Apple was one of the development partners for this new version of the spec.
>All licensees are not equal however, the first few are called lead licensees and companies pay an added fee for this honor. ARM picks 2-3 lead licensees for each market segment and works closely with them.
Apple was one of the co-founders of the Arm holding company all the way back in 1990.
Even though Apple and Arm are possibly arm's-length organizations these days (no one outside of these organizations will really know, except the lawyers, and the contractual obligations between Apple and Arm are likely locked up and highly secretive),
I'd say the other way around: Arm is beholden to where Apple wants to take the Arm ecosystem, if only by way of showing other participants in the ecosystem what is possible. I.e., Amazon is able to use the M1 as a gauge of how far it might be able to take its own Graviton cores.
Yeah, agreed. People assume that the power relationship between Apple and ARM is the same as between Apple and other architectural license holders, but all signs point to their relationship being very different with them being at least full partners and maybe Apple being the one dictating terms.
To be clear, Apple was a co-founder of Advanced RISC Machines Ltd (which became the current Arm Ltd), five years after the ARM (Acorn RISC Machine) architecture.
>"SVE2 was announced back in April 2019, and looked to solve this issue by complementing the new scalable SIMD instruction set with the needed instructions to serve more varied DSP-like workloads that currently still use NEON."
Could someone say what "DSP-like workloads" would be? I understand what a DSP is, but I'm wondering what types of workloads that are not signal processing share similar characteristics.
My guess is anything that produces an ordered sequence of numbers that need to be dealt with. For example a network card or any character-based IO device. Just a couple of examples off the top of my head.
What's really interesting about SVE is that it allows lower-than-128-bit parallelism, e.g. 64. I have seen mentions that some algorithms show their best performance with such widths.
This is true for graphics/video codecs where you want to move around pixels in blocks smaller than 128-bit. The MMX instructions nobody likes in x86 are actually still pretty useful here.
But you can do SIMD-in-GPR tricks, or dedicated hardware, or GPGPU to replace that, so it's not a big problem if it's missing.
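As an illustration of the SIMD-in-GPR ("SWAR") idea (a classic trick, not specific to any of the ISAs above): you can average eight packed pixel bytes in one 64-bit integer register with no vector unit at all:

    #include <stdint.h>

    /* Per-byte floor((a+b)/2): (a & b) keeps the shared bits, (a ^ b) >> 1
       adds half the differing bits; the mask stops a bit shifted out of one
       byte from leaking into its neighbor. */
    static uint64_t avg8_bytes(uint64_t a, uint64_t b) {
        return (a & b) + (((a ^ b) >> 1) & 0x7f7f7f7f7f7f7f7fULL);
    }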
I'm not sure what convinced MS and Sony to both go x64 last-gen, but I wonder if part of it was that it made porting to desktop easier (and vice versa, in theory). I'm not entirely sure the same argument makes sense for Arm making ports to mobile easier, because the controls for a console game aren't as easy to map to a phone, so the games tend to be different.
Frankly, having worked on consoles for 20 years and been through multiple architecture changes, I'd be perfectly happy to have another generation or two on x64 no matter how crusty it is -- devil you know and all.
> I'm not sure what convinced MS and Sony to both go x64 last-gen
Which generation are you referring to with 'last-gen'? If you mean the PS4 / Xbox One, then what else was on the high-performance CPU market other than x86 in 2012-2014? The POWER family had moved deep into HPC specialization. POWER7 was a bit old by 2013 and POWER8 wasn't quite there, but neither would make for a good gaming CPU with their heavy multithreading focus (4-way SMT on POWER7 and 8-way SMT on POWER8), and both would require substantial design changes to be scaled down to what a console would want. The ARM CPU of the time was the Cortex-A15, which was a fine mobile CPU but wasn't pushing any boundaries to make laptop or desktop CPUs nervous (and you'd have to substantially invest in its IO capabilities; there weren't really any ARM SoCs with PCIe x16 + SATA lying around at the time).
If you mean the PS5/XSX generation, then I think it's simply: why not Zen 2, and keep backwards compatibility? It's not like there's anything else you can buy that's a clearly better CPU offering anyway.
PS5 & Xbox Series X are both still largely considered "next generation" and both are x86. Next-next generation, or rather whatever happens in 7 years, is going to be hard to predict.
Basically, no. Amazon has their Graviton processors, but they're not selling them. Nuvia's Phoenix processors might have become that, but they've been bought by Qualcomm and we'll see what happens.
Qualcomm is likely going to want to focus their talent on the mobile market where they've historically been running only slightly customized ARM designs. I think Qualcomm would like to close the performance gap between it and Apple and fend off MediaTek who is taking an increasing amount of marketshare. As the US weans off CDMA, it's likely that Samsung might end up using their own chips more. So Qualcomm might want to focus Nuvia on mobile.
Qualcomm had tried its hand at Intel competitors, both on the consumer and server side. They seem to have given up on that for now.
Ampere is trying to get into the server space, and Anandtech notes that they're competitive with the AMD EPYC Rome series (https://www.anandtech.com/show/16315/the-ampere-altra-review...). Oracle said they'd launch some in 2021, but who knows how limited that will be or whether their plans will change.
Amazon will want to use their own chips rather than pay a third-party. More and more datacenter operations seem concentrated in the big three providers who might not want to let a new third-party get margin there (specifically, Amazon, Google, and Microsoft). I can't imagine Google not going with an in-house design if they wanted to launch an ARM platform.
Consumer devices are difficult. macOS won't work on non-Apple hardware. The Windows ARM experience will be sub-par because there's no dictator to force ARM on everyone (like Apple). Apple can say, "we're moving to ARM" and developers either get on board or are left behind. If Microsoft says, "we're moving to ARM" it's more like, "we're going to add ARM support, but we'll always be a first-class experience on Intel and you can still expect to run Win16 apps from 1990 on your new ARM computer and if developers and consumers don't show interest, we're flexible and we'll pivot away from ARM...so maybe don't buy an ARM machine right now because we haven't been able to convince developers...and since you won't buy the ARM machines, we'll probably just think it's a flop in two years and put fewer resources towards ARM...so there's no real reason for an ARM CPU company to want to make good CPUs...which reinforces why consumers shouldn't buy them..."
We are seeing movement in the space, but it's hard. I don't think we'll see a lot of consumer stuff for Windows and Linux comparable to Intel. I think it's just hard to break into that space. With Linux, the market is small already. With Windows, trying to convince consumers on a less-compatible experience or an experience that Microsoft is less committed to is hard. I think it's easier to compete against AMD and Intel in the server market where so much software is already CPU-independent and doesn't have the same reliance on consumer software and compatibility. I think if you're making consumer processors, you want to target Android and Chromebook where you won't be dealing with convincing consumers to select a lesser-compatible, lesser-supported alternative to Wintel.
> Amazon has their Graviton processors, but they're not selling them.
Graviton2 is basically just an implementation of ARM's N1 design, which is based on the Cortex-A76 CPU core. The single-thread performance is better than, like, a Kirin 990, likely thanks to the 32MB L3 cache and 8-channel memory controller, but it's not going to win any gaming performance crowns either.
There are loads of high performance Arm chips, but they're pretty much all in the server space, ie power hungry. But does any of that matter for a console? The Switch seems to be phenomenally successful and yet is powered by a very modest 64 bit Arm chip (4 x Cortex A57, an 8 year old microarchitecture).
Since the Wii Nintendo has been successful occupying a different niche than Microsoft/Sony. The Xbox and the Playstation both sell based on top-of-the-line (console) performance; the Switch sells for other reasons.
I doubt either Microsoft or Sony are going to change tack and try to fight Nintendo (and probably lose) on Nintendo's home turf.
Still, at least compared to the DS/3DS or Vita, the Switch is very powerful, not to mention using a much more sane architecture (especially compared to the DS devices).
That might be an Nvidia GPU - with DLSS - but who would manufacture the CPU?
Or, could we see Nvidia (since it's in the process of acquiring ARM) jumping into the console space?
A suite of three consoles ranging from mobile/portable, to 1080p/TV, to 4K/Desktop quality could be very appealing, especially if the top-end model could also use a mouse and keyboard and dual-boot into ARM Windows.
Nvidia would just need to design a Linux game OS and storefront. If they offered a 10% developer commission and paid for some big exclusives, they could have a very compelling product.
If you want an Nvidia GPU you might as well let Nvidia design the whole SoC so they're going to use ARM cores that they're familiar with. Xavier and Orin are designed for cars but you can imagine how they could be modified into console SoCs.
Sort of. They appear to be in a bit of a holding pattern, having not released a competitive SoC in that space for some years.
The rumor is that the Switch contract only came about because Nvidia had a firesale on those older chips, which they had expected to make their way into flagship Android devices but which instead sat in inventory for years.
Maybe after the ARM acquisition goes through (if it does), they'll start looking down that line again.
Can you give some more pointers? There are a lot of former Nintendo engineers.
I find it very difficult to believe that contrary to the rumors, Nintendo had been sitting on a SoC for many years without releasing a product or even pushing for a die shrink. Like, the Tegra X1 was announced in 2014, and released into products by 2015, and the Switch didn't come out until 2017.
Turning off an entire core complex also points to it not being designed for them. Nintendo isn't known for paying for gates they aren't using.
Nice.
OpenJDK has managed a breakthrough with their hardware-agnostic Vector API (the first of its kind?), so that every SIMD algorithm coded with it will work equally well on ARM (and can also target the best available vector width at runtime).
TIL about the JDK's Vector API. That's awesome; I mostly work in the C and C++ space and have learned to embrace the gigantic ifdefs for various arch-specific vectorized instructions.
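For contrast, a small sketch of the ifdef-dispatch pattern being described (the intrinsics are the standard AVX2/NEON ones; the function itself is a made-up example):

    #include <stdint.h>
    #if defined(__AVX2__)
      #include <immintrin.h>
    #elif defined(__ARM_NEON)
      #include <arm_neon.h>
    #endif

    void add_floats(float *dst, const float *a, const float *b, int n) {
        int i = 0;
    #if defined(__AVX2__)
        for (; i + 8 <= n; i += 8)   /* 8 floats per 256-bit vector */
            _mm256_storeu_ps(dst + i,
                _mm256_add_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(b + i)));
    #elif defined(__ARM_NEON)
        for (; i + 4 <= n; i += 4)   /* 4 floats per 128-bit vector */
            vst1q_f32(dst + i, vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    #endif
        for (; i < n; i++)           /* scalar tail */
            dst[i] = a[i] + b[i];
    }

The Vector API collapses all of that into one loop against a vector species chosen at runtime.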