The End of x86? (fernstrategy.com)
203 points by mjfern on Oct 21, 2010 | hide | past | favorite | 115 comments


I think ARM is going to continue to bite into x86 market share significantly.

But this article is wrong to write off x86 so easily.

Firstly, power consumption. It's right that ARM has lower power draw than x86. The article is wrong by how much, though. Very low power ARM chips draw much, much less than 2-3 watts. These are mostly for embedded systems, though.

The 2-3 watts vs 5 watts for ARM vs Atom isn't too significant. The big problem with Atom was that the support systems (memory controller etc) draw ~20 watts. That situation is being improved for netbook systems atm.

For sub-netbook systems, Intel is launching its Moorestown architecture. This probably still isn't dropping into the smartphone market this generation (despite Intel's marketing: http://arstechnica.com/open-source/news/2010/01/moblin-linux...), but should be great for tablets: http://arstechnica.com/gadgets/news/2010/05/intel-fires-open...

The article also implies that Intel's foundries are a liability. That would be true if there really was useful "competition in the foundry market". Sure, if you want 45nm+ chips produced, there are a number of foundries that can do it. But once you start looking for 32nm foundries they get a lot rarer, and Intel has just announced it's building its new 22nm foundries. That's a whole generation ahead of anyone else in the industry and is a big competitive advantage (Smaller scale in chip foundries means more performance for the same power, or less power for the same performance.)


The 2-3 watts vs 5 watts for ARM vs Atom isn't too significant. The big problem with Atom was that the support systems (memory controller etc) draw ~20 watts. That situation is being improved for netbook systems atm.

Oh my god yes. If you could actually run an atom server in less than 10 watts for a 4GiB system, even before disk, I'd be renting those out instead of VPSs, and probably killing the competition on it, too.

My (perhaps unreasonably cynical) theory is that Intel nerfed the desktop Atoms because they don't want them to compete with their server chips in my applications. But, on the other hand, that's irrational: when per-core performance doesn't matter (and in virtualization, more, smaller cores are better than fewer, faster cores), AMD already beats Intel by quite a lot. And Atoms certainly don't compete in applications where per-core performance matters, so yeah, that's probably not it.


The tagline from Innovator's Dilemma is that well-managed companies can get into trouble when they are being disrupted. The reason is that it's normally good business to get rid of low-margin products and focus your resources on the ones that make the most money. And then someone takes over the low end and expands into the high end.

But Intel is not falling for that one. They have made the Atom, a slow, cheap, low-power chip that competes directly with ARM. That's likely a wise move, but Intel now has the problem that low-margin chips are still bad business. They have to have their expensive best-in-industry fabs make low-margin Atoms, when they would much, much rather have them make expensive Xeons.

At one point they made a deal to have Atoms manufactured at TSMC, which would have helped a lot with this problem, but apparently that deal didn't work out. Even if it had worked out, the Atom would no longer have the process advantage, and then backwards compatibility would be the only advantage for x86. With Windows becoming less and less relevant, that's a big problem, considering the technical advantages ARM has over x86.

So fundamentally, Intel has a problem that CPUs are becoming commoditized, which means they will either have to take much lower margins or retreat to the high end. Both scenarios are unpleasant for them.


The technical advantages of ARM aren't that great.

Don't forget we are talking about CPUs that are getting close to Pentium 3 class performance. Intel proved back in the Pentium vs PowerPC days that x86 can compete well against superior architectures. In this fight they have a lot more performance to work with.

I do agree with your CPU-becoming-commoditized point, but Intel is very aware of that (cite: how they keep Atom performance just enough higher than ARM, but a lot slower than their more profitable higher end chips). It's a difficult area, but Intel is aware of the balancing act they have to do. I think their strategy is to increase performance of non-CPU components of their chipsets (ie, make sure Atom kills ARM on I/O) in order to keep their lead in the datacenter.


Intel killed off their ARM (XScale) for Atom...

PowerPC owns gaming. MIPS seems to still be common in networking devices. ARM is mobile and the rest is Intel and noise.

Seems to me that the killer feature ARM has isn't technical at all: it's that you can buy a core, graft on a DSP if your core doesn't have one (OMAPs already do) and your secret sauce needs one (half the phone vendors have their own audio-enhancement or echo-cancellation code that they think makes the difference), graft on a mobile chip, and fab your own part. Then you add memory, flash, and a battery, and you've pretty much got a phone.

I'm not aware of any medium-volume products that use custom Intel-based hardware. I have no idea what the costs or terms are, but I'm under the impression that unless you're going to make 5 million chips, it's just not worth it with Intel parts. Now say Intel open-sourced an SSE echo canceller and built some kind of 2-watt Atom-with-GSM-built-in chipset that cost $15; I bet you'd start thinking ARM was in trouble.

Intel can pretty much build the highest-performing chips per dollar, they can build them cheaper than anyone else, and they have much more reliable processes than anyone else; they've shown time and time again that when they put their mind to it they can compete with anybody.

I'd need to see some really compelling evidence that ARM is moving into the server market in any meaningful way. I thought Intel had met their match with PowerPC, thought so again with AMD, thought Linux was going to make alternative platforms viable, and probably thought they were in trouble several other times, but they are fiendishly good.


The technical advantages of ARM aren't _that_ great.

True - see also this:

http://codingrelic.geekhold.com/2010/08/x86-vs-arm-mobile-cp...

However, if x86 has neither a process nor a compatibility advantage, then even a tiny technical disadvantage turns into a tiny extra cost, which can be important for high-volume chips.


However, if x86 has neither a process nor a compatibility advantage, then even a tiny technical disadvantage turns into a tiny extra cost, which can be important for high-volume chips.

Not true.

The marginal cost of pretty much anything on a chip itself is close to zero. For example, most 3 core chips are actually 4 core chips with one disabled. Putting the extra silicon on the chip is effectively free.

The money goes in the investment in the factory and the R&D, NOT the raw materials or production costs.


Putting the extra silicon on the chip is effectively free.

If you need that silicon to actually work, then it's not free. Floor-sweeping is necessary because the more stuff you put on a chip, the more likely it is to have defects. That really is an extra cost. I don't think there is any way around that.
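To make the defect argument concrete, here is a toy Poisson yield model. The defect density and core area below are made-up numbers, purely for illustration; real process figures are proprietary:

```python
import math

# Per-core yield under a simple Poisson defect model:
# P(core has no fatal defect) = exp(-D * A)
D = 0.005   # assumed defects per mm^2 (illustrative, not a real process figure)
A = 25.0    # assumed core area in mm^2 (illustrative)
p = math.exp(-D * A)

# A 4-core die is sellable as a 4-core part only if all cores work,
# but floor-sweeping lets you sell it as a 3-core part if >= 3 of 4 work.
p_all_four = p ** 4
p_three_plus = p ** 4 + 4 * p ** 3 * (1 - p)

print(f"per-core yield:        {p:.1%}")
print(f"sellable as 4-core:    {p_all_four:.1%}")
print(f"sellable as 3-core(+): {p_three_plus:.1%}")
```

With these assumed numbers the 3-core bin recovers most of the dies that would otherwise be scrapped, which is exactly why the "free" extra silicon isn't free: the more of it that has to work, the more dies you throw away.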


Ok. Yes, there is extra silicon. But it's so insignificant that it doesn't matter.

But I think you are overestimating how low-end these chips are. See http://arstechnica.com/gadgets/news/2008/02/small-wonder-ins... for example, which shows that the Silverthorne architecture (1st Gen Atom) has pretty much the same transistor count as a Pentium 4.

No one thought that having to support x86 was a significant factor for the Pentium 4 vs other architectures. For the chips we are talking about, the cache takes more transistors than the legacy support does, so the yield concern is pretty insignificant too.


Yes, they don't disable one core out of spite; it's because of the difficulty and cost of achieving high yields.

The situation is even worse for graphics cards


The comment on the fabs is not exactly true. If Intel has maxed its fab capacity on higher margin chips, then producing lower margin Atoms is undesirable. If there is fab capacity remaining after producing sufficient quantities of the other chips to meet market demand, then it is advantageous to manufacture the lower margin chips. There is nothing lower margin than an idle fab.


Also, Intel is being disrupted from the high end by GPUs, the new ARM processors (due to performance/watt), and manycore processors like Tilera's.


I hadn't heard of Tilera until just now: it does have potential, but disrupting x86? That requires proof.

Also, GPUs have their uses, but general-purpose they are not. GPUs are complements to CPUs (just as coprocessors specialized for floating-point calculations were back in the day).

Most people here probably haven't lived through the days when an 80387 had to be installed alongside an i386 if you wanted decent floating-point performance. It is actually interesting that it is taking so long for GPUs to be merged with x86.

In my opinion ARM is the only credible threat in the data-center, especially for servers that are mostly I/O bound. But on the other hand costs can skyrocket when scaling horizontally and having 500 HTTP servers (like Digg) is not at all fun.

And IMHO, x86 chips have better performance / watt.

For desktop computing, x86 will still dominate, at least for the next 10 years, simply because of Windows, which runs on over 1 billion computers.

Yeah, I know it is fashionable on HN to say Windows is not relevant anymore. Doesn't make it true.


Tilera claims it will release a competitor to x86 for the LAMP stack in Q4 2010, together with Quanta (a big hardware OEM). But yes, this requires proof.

For some tasks (like HPC and multimedia), the GPU can handle most of the computation, and the CPU is somewhat of a sidekick (as in the Nvidia Ion netbooks, for example).

While Windows would still dominate, the importance of a strong x86 CPU would decrease with all the changes in the ecosystem, and the price of x86 processors would decrease with it.


Tilera is a very expensive option, though, and only makes sense for a small subset of the x86 market.

I had a phone conversation with one of their managers about a year ago (I was trying to see whether I could get a sample PCIe board) and they are damned expensive, so it would only make sense if you have something that 1) parallelizes very well, 2) actually needs the performance, 3) runs on Linux* or can be easily ported to their architecture and 4) has high enough volume to make their overheads worthwhile. On the plus side, they seemed very willing to lend their engineering team to help port to their architecture (as part of the devkit cost). Sadly, they don't have evaluation boards: you pretty much have to buy a devkit to evaluate it, and the devkit costs approx. 5 times the price of the lowest-end PCIe board (I'm not sure what hardware this included, and afaik it's a one-off up-front cost).

(* Running on Linux doesn't mean it magically works, but the Tilera PCIe boards can run Linux, so it at least makes porting a bit easier)

But that doesn't change the facts: Intel DOES have competition to their high-end market from GPUs and the likes of Tilera. I can see the newer ARM processors starting to challenge their midrange line and obviously with Atom too.

Just today, I was thinking about getting an ARM Cortex-A8 powered BeagleBoard xM as a desktop Linux box replacement.


They are in the process of releasing a 512-core server together with Quanta, in Q4 2010, targeted at the LAMP stack. They claim great savings in power and space, and strong performance.

It would be interesting to know the lifetime cost (including power, cooling and space) relative to x86.


I'm not going to write off Intel yet, but they're in a dangerous position. The number one reason Microsoft's stock has been moribund for the past ten years is that Linux took over the datacenter. Imagine how much more revenue they'd have if the millions upon millions of x86 servers deployed since 2000 ran Windows. Instead, they're stuck in a saturated, slow-growth monopoly. If non-x86 chips take off in data centers, Intel will be in a similarly bad spot.


... And Windows will be locked out of the datacenter until Microsoft creates ARM-compatible versions of Windows.

Next thing you know Windows runs on both ARM and x86 (with x86 emulation for applications) and pushes x86 out of the commercial / domestic arena.


Microsoft Windows on Itanium was once marketed as the path forward for x86 applications, too.

Intel hasn't had particular success with microprocessor designs outside of its core x86 business. Examples of other designs include iAPX432, i860/i960, StrongARM/XScale and Itanium.

These architecture transitions don't always work out. While Apple has some experience with porting and has made it look (relatively) easy, Windows hasn't had particular success with its ports, whether via Itanium's x86 emulation or via translation tools such as FX!32.


As I wrote in a similar thread, I think Intel and x86 are already dead; the writing is on the wall, Intel just doesn't know it yet. Below is how I believe a key customer has already planned to leave the x86 architecture as nothing but a footnote in its history (alongside the remnants of PowerPC).

---

Currently Apple relies on Intel for a major component in a key product. Strategically, Apple doesn't like to have to rely on a single source or supplier for key products. Apple will do whatever is possible to remove this reliance.

Hence a prediction: within less than 5 years a Mac will be running on an Apple designed ARM processor.

How? By slowly, step by step, providing a way towards this.

Step 1. Migrate your OS to the new architecture (e.g. iOS already, OS X not far behind) - done

Step 2. Migrate your developer base onto developer tools which you control and can easily change the architecture it targets (e.g. Xcode and LLVM) - done

Step 3. Provide a space where problematic applications, which use other VMs or rely on getting too close to the hardware, are not welcome (e.g. a Mac App Store) - announced

Step 4. Change the marketplace behaviour so that you control how the majority of applications are distributed and can quickly provide updates without user intervention, such as an App Store.

Step 5. Release a new MacBook with an ARM processor, absolutely killing on form factor, price and battery life in a way Intel cannot compete with. Encourage your Mac App Store developers to flick a switch in Xcode to recompile and upload new Universal (x86 & ARM) versions of their apps to the Mac App Store.

Result: you now control the processor direction and application distribution mechanism for a key product and no longer rely upon the whims of Intel.

Apple is all about controlling an integrated experience for their customers. Currently Intel is getting in the way of this for the Mac product.


Big problem with Step 5: people couldn't run Windows on their MacBooks except under emulation. That would be a deal-breaker for a lot of potential customers (pretty much all the gamers, for example) and Apple knows it.

Personally, I don't think the transition to ARM on the Mac will happen any time soon. Apple will just try to convince more people to buy iOS devices instead.


I don't think ability to run Windows natively is something that Apple considers crucial. Sure it is nice, but not a dealbreaker.

I believe the main reason Apple migrated away from PowerPC was unreliability (delays, heat, speed, etc.) from the supplier, and progression, not compatibility with x86.


>That would be a deal breaker for a lot of potential customers

Today it would be, but mobile is really taking off. Who knows where we'll be in even 2 years. Maybe by Step 5 the remaining things we have to have Windows for (e.g. Office) simply won't be a requirement anymore.


And to be frank, something like Office should be undemanding enough to be fast enough when emulated. (Of course, software gets slower faster than hardware gets faster...)


Well, it's not just Office, it's a great deal of games and specialized software. But yes, I'm afraid Apple can live with Macs not supporting Windows.


What's next? Will Apple design their own GPUs for every product in the Apple Store instead of using third parties, just to control the customer experience? What kind of business advantage would Apple have compared to Nvidia, which sells its technology? I think Steve Jobs understands these questions very well, and that's partly what separates him from the average tech-loving Joe.


Apple has already licensed Imagination Technologies' PowerVR graphics cores (both this and the next generation) and holds a large portion of their stock (http://www.reuters.com/article/idUSLQ64592720090626).

Moving Macs to ARM means they can use these as the graphics cores, customise them, modify them and not deal with Nvidia if they don't want to.


You're talking about Apple, which has less than 10% of the worldwide computer market. What about Dell and HP? Currently the software commands the hardware, not the other way around.

A lot of businesses depend on MS Windows and lots of other x86 software, too much for a simple switch to ARM. Apple could do it because they have a firm grip on their ecosystem, but the rest of the market isn't in such a position.


I don't think the one supplier thing is the base problem. I think Intel's handling of NVIDIA and Intel's failure to provide a decent replacement with OpenCL compatibility is the major cause of tension. The comments Intel's executives are making about the iPad don't help either.


Yes, I agree to an extent. However, if Intel weren't the only viable supplier, then the handling of Nvidia and the OpenCL compatibility issues wouldn't be as problematic for Apple as they are, since Apple would have the option of going to another supplier.

They currently don't have that option in the x86 space.


AMD could get there. Apple uses ATI parts and has OpenCL optimizations for them. I don't really see it as likely, though.

I do wonder, if AMD got bought, would the buyer still be able to keep the x86 licenses (not sure what the final settlement was).


In addition: Extend Rosetta to translate x86 to ARM.


From my experience running Debian on ARM, I can say that the software difference is really non-existent. All of the normal Debian OSS software works fine. I did not run into any problems with software not working, except for closed-source programs that didn't have ARM packages. I really can't imagine the architecture shift being that big of a problem, as I'm assuming the bulk of the work for ARM on Debian was just recompiling packages. I had a working XFCE desktop with common packages like Iceweasel (Firefox), OpenOffice.org, GIMP, and so on that was just as easy to set up as on x86.


The last time I had an architecture issue was with a SNES emulator that had pieces hard-coded in 32-bit asm.

Pretty rare these days. Maybe things like VirtualBox too.


It used to be the case that Java had serious performance problems on the ARM. I don't know if that's still true.


How about ffmpeg / libdv / mplayer? Last time I looked they had hand-coded asm.


I only tried with VLC player which didn't present any immediate problems, but I don't know if it uses any of these libraries.


Is ARM's power advantage really that significant for devices larger than the smart phone / tablet form factor? If an Atom-based CPU consumes ~2-3 extra watts but offers marginally better performance and (more importantly) compatibility with an enormous base of existing applications, that doesn't seem like a very compelling argument for switching.


Think of a data center situation though where you can get a ~50% power requirement difference for marginal performance decreases. The cost savings would be huge even if you require more servers to make up for the performance difference.

For most home users I think you're right for now but for businesses it could be huge.


> Think of a data center situation though where you can get a ~50% power requirement difference for marginal performance decreases.

50% lower processor power isn't the same as 50% less system power.
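A quick sketch of why, with an assumed (entirely made-up) power budget for a small server:

```python
# Hypothetical 1U server power budget in watts; these are illustrative
# guesses, not measured figures.
cpu, ram, disk, psu_loss, fans = 45.0, 10.0, 8.0, 12.0, 5.0
system = cpu + ram + disk + psu_loss + fans

# Halving only the processor's draw...
system_after = system - 0.5 * cpu

print(f"system: {system:.0f} W -> {system_after:.1f} W")
print(f"system-level saving: {(system - system_after) / system:.0%}")
```

Under this assumed budget, a 50% CPU saving shows up as well under 30% at the wall, before even counting cooling overhead.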


Of course, you've got to remember that which chip has the best performance per watt depends on the job. A sufficiently smart data center could determine this automatically and take that into account in its job scheduling.


Is ARM's power advantage really that significant for devices larger than the smart phone / tablet form factor?

I'll bet $5 that the MacBook Air in the future (18-24 months) will switch to ARM once the >2GHz ARM processors start shipping en masse.

The Asus Eee PCs running Android with Snapdragon processors already embed the Cortex-A9 MPCores.


"4GB RAM ought to be enough for anybody."

ARM's Achilles' heel is that it's only 32-bit, and I don't think anyone wants to go back to a segmented address space any time soon.


That's a great point. Are there any 64-bit ARM chips on the horizon? I don't know much about chips, so maybe it's not even possible given the architecture.


I don't recall hearing any announced, though I wouldn't be surprised at all if it were currently in the works. The A15 mentioned in the article, however, I believe has something in the spirit of x86's PAE that offers an expanded (36 bits or so?) physical address space, for better or for worse (better in that you can address more memory, worse in that it's kind of an ugly hack and not a great long-term solution).
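The arithmetic behind that, assuming a 36-bit physical extension (the exact width on the A15 is stated from memory above, so treat it as an assumption):

```python
GiB = 2 ** 30

# Each process still sees a 32-bit virtual address space...
virtual_per_process = 2 ** 32
# ...but the machine can address more physical RAM behind wider
# physical addresses, PAE-style.
physical_36bit = 2 ** 36

print(f"virtual per process: {virtual_per_process // GiB} GiB")
print(f"36-bit physical:     {physical_36bit // GiB} GiB")
```

So the OS could spread 64 GiB of RAM across processes, but no single process gets past 4 GiB, which is the "ugly hack" part.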


There were a lot of stories in the 2000 time frame, but nothing seems to have been actually done.


It's possible, but just like the x86-64 transition it won't be pretty.


It might go better for ARM, given that most software delivered on ARM is more tightly controlled (app stores or custom embedded systems).


I'm curious; have you faced any problems in that transition?

(seriously, I'm not trying to troll. I've had exactly one - making sure a project of mine would compile as PIC and that was pretty minor after the required reading)


There haven't been any major issues; it's just been glacially slow. Most software running on capable CPUs is still 32-bit, Visual Studio still doesn't have first-class support for 64-bit and my MacBook still runs a 32-bit kernel.

I expect ARM's transition to be quite different though--with the highest ends of their business moving in only a couple product cycles, and the lower ends of their business sticking with 32 bit indefinitely.


One of the article's contentions is that the vast majority of users are satisfied with Windows and Microsoft programs as they are right now. Most of those programs are 32-bit.

By extension, if they are satisfied with 32-bit programs and don't feel a need to switch, then they won't miss 64-bit architectures.


Most Windows copies sold today are 64-bit, and 64-bit Windows lets you run more 32-bit programs without running out of memory.


Most people couldn't care less about more applications...

Virtually all of the computer users that I know limit their use to an office application and a web browser on top of Windows. That's it. The web browser gives them everything they really need. I wonder how many computer owners could even tell you if they have 32-bit or 64-bit or how the two differ. Probably the same percentage that can tell you detailed specs about their car engines. Virtually none.

The only computer specs that matter anymore --exempting programmers-- are weight and battery life. A few hardcore gamers still care, but they're in the minority, even for video gamers. XBox anyone?


You think Apple is going to switch to an architecture that breaks VMWare?


Apple have demonstrated time and again that they are happy to break existing functionality if it suits their aims.


And just as important--Apple's customers have demonstrated time and again that they will follow wherever Apple leads.


What's your best 2 examples?


OS X and Intel processors...


OS X supported OS9 apps until long after people were puking at the mere sight of OS9 apps.

And when they converted to Intel, they also provided Rosetta, a completely ridiculous binary translation system to make PPC apps work.

Those are bad examples.


The most striking thing for me when Jobs announced the move to Intel was the bit where he said they had had OS X builds for Intel since the very first version of OS X. Who knows what they are cooking in their secret kitchens.


NeXTStep was ported to x86 back in 1993, years before it was acquired and rebranded as Mac OS X. I would have been surprised if they had neglected to maintain an asset like that.


Motorola 68k -> Apple/Motorola/IBM PowerPC -> Intel ???


Prediction recorded: http://predictionbook.com/predictions/1861

Hope you're still around in 2 years; it can be interesting to ponder a falsified prediction and wonder, why did I believe that?


Except that the Atoms at same (and even a bit higher) clock don't even offer marginally better performance.

And another thing that I came to think of that weighs heavily in ARM's favor is the multitude of composite, multifunctional chips, SoCs etc. currently available in the ARM flavor. Is there ANYTHING like this for Atom, or any x86 at all?


Yes, AMD's Fusion line (http://en.wikipedia.org/wiki/AMD_Fusion) aims to provide SoC functionality.


Except that the Atoms at same (and even a bit higher) clock don't even offer marginally better performance.

Depends on whether we are talking ARM Cortex-A8 or Cortex-A9. Atom is much more powerful at the same clock speed than the Cortex-A8. The Cortex-A9 is getting more competitive, but doesn't have much of a power advantage over it. (Lies, damned lies, and benchmarks, but the A8 gets ~2000 MIPS, the A9 ~2500 MIPS, and a single-core 1st-gen Atom gets ~3300 MIPS.)

Also, multicore-Atom has been in the market for a while. Multicore ARM is just coming online now.

It's worth noting that Intel can speed up Atom whenever they want. There are plenty of easy wins in there (eg, better execution core, faster memory controller) that they can put in if they need to keep their lead.
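Taking the quoted MIPS figures at face value and pairing them with assumed power draws (the wattages below are guesses for illustration, not vendor TDPs), the perf-per-watt picture looks like:

```python
# MIPS figures quoted above; watts are assumptions for illustration only.
chips = {
    "Cortex-A8":      {"mips": 2000, "watts": 0.6},
    "Cortex-A9":      {"mips": 2500, "watts": 0.8},
    "Atom (1st gen)": {"mips": 3300, "watts": 2.5},
}

for name, c in chips.items():
    print(f"{name:15s} {c['mips'] / c['watts']:7.0f} MIPS/W")
```

Under these assumed wattages, Atom wins on raw throughput while the ARM cores win comfortably on efficiency, which is consistent with the thread's framing.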


(Lies, damned lies, and benchmarks, but the A8 gets ~2000 MIPS, the A9 ~2500 MIPS, and a single-core 1st-gen Atom gets ~3300 MIPS)

Measured how, and at what frequencies? Also ARMs are really starved for bandwidth. Has that improved with the Cortex A9?


I ran some rough numbers once. I couldn't find exact figures, but I estimated that for about $1.5 million worth of ARM-based plug servers, you would have close to enough raw CPU power (in terms of FLOPS) to compete with the 2003 version of Google.
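For what it's worth, a back-of-envelope version of that estimate. The per-plug price and GFLOPS figures are assumptions made up to show the shape of the calculation, not the numbers the parent actually used:

```python
budget = 1.5e6            # $1.5M, from the comment above
price_per_plug = 100.0    # assumed dollars per plug server (hypothetical)
gflops_per_plug = 2.0     # assumed GFLOPS per plug server (hypothetical)

n_plugs = budget / price_per_plug
total_tflops = n_plugs * gflops_per_plug / 1000

# -> 15000 plug servers, ~30 TFLOPS total
print(f"{n_plugs:.0f} plug servers, ~{total_tflops:.0f} TFLOPS total")
```

Whether 30 TFLOPS matches 2003-Google depends entirely on those assumed inputs; the point is only how the estimate is built.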


ARM is new? It's been around longer than most Ruby developers have been alive ;)


I think the largest hurdle for ARM to get over is the preponderance of Windows installations with kernels compiled only for x86. Linux and OS X (Mach) already run on ARM, and I think the NT kernel possibly runs on ARM (Windows Phone 7 is ARM, right?). I have trouble seeing Microsoft port Windows proper over to ARM until there's a really strong market for it. That being said, perhaps low-power ARM devices will provide that market. Perhaps this is another reason that Apple has their own ARM chip: to be at the forefront of the ARM revolution, displacing MSFT?


NT doesn't currently run on ARM (WP7 is still based on CE), though it was originally designed to be CPU independent. Probably the biggest obstacle for MSFT porting Windows to ARM is not the expense of the port itself, but reluctance to put out a version of Windows that's binary incompatible with all the Windows software out there.

Incidentally (and speaking of breaking compatibility), MSFT is working on a brand new kernel and operating environment (Midori) which does have ARM as a target. But this is an incubation project with no guaranteed release, though it's a very serious effort.


Windows NT originally was CPU independent, but only for Little Endian architectures.


This is no problem for ARM processors, since they are bi-endian (as is PowerPC).

Source: http://en.wikipedia.org/wiki/Endianness#Bi-endian_hardware


There is nothing in Windows which requires fixed endianness. While all known versions of the OS have shipped on little-endian machines (except the Xbox 360, albeit running a highly modified version of the NT kernel), there is very little dependency on endianness in any module except format parsers and the network API. Changing that is a relatively simple undertaking, much simpler than building a new version of the kernel for the Xbox, for example.
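The "format parsers and network API" point is the crux: portable code pins down byte order explicitly instead of inheriting the CPU's. A minimal illustration using Python's standard struct module:

```python
import struct

value = 0x0A0B0C0D

# Explicit byte-order prefixes: "<" is little-endian, ">" is big-endian
# (">" is also network byte order).
little = struct.pack("<I", value)
big = struct.pack(">I", value)

print(little.hex())  # 0d0c0b0a
print(big.hex())     # 0a0b0c0d

# Round-trips regardless of the host CPU's native endianness.
assert struct.unpack(">I", big)[0] == value
```

Code written this way (rather than with casts over raw memory) ports across endiannesses with no changes at all.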


That can't be correct, since I remember running Windows NT 4 on a PowerPC system.


PowerPC (except the PPC970, aka G5, as I recall) is bi-endian (configurable endianness); so are Alpha and Itanium, which also had Windows NT ports.


Alpha is most definitely not a bi-endian architecture; it's little-endian only. PowerPC chips can be either, but the vast, vast majority are big endian.


Incorrect. The Cray T3E used Alpha processors in big-endian mode.


I don't think Windows Phone is based on NT.

Apple and Android have shown that you can build a mainstream platform ecosystem from nothing in 2-3 years, so who needs Windows compatibility?


Neither Apple's iOS nor Android was "built from nothing in 2-3 years".

iOS is based on OS X, which is based on NeXT, which is based on BSD (~40 years of work)

Android is based on Linux (so ~20 years of work)


My point is that they went from zero apps and zero users to lots in 2-3 years. IMO the argument that Windows backwards compatibility is the only way to get the long tail of tens of thousands of apps has been refuted.


Oh right. Yes, I agree, then.

All a new platform needs is a great web browser (which these days seems to mean "port WebKit") and it's instantly viable.


Didn't OS X just recently get rid of ARM support? Might be wrong though...


You're confusing ARM with PowerPC, both of which are RISC architectures.

http://en.wikipedia.org/wiki/PowerPC


What portion of x86 transistors are dedicated to supporting the bad instruction set design?


That's an interesting question that I don't think anyone would be able to quantify, just because performance is so dependent on the applications being run. In an interview, Alan Kay remarked that bad processor architectures degrade performance by three orders of magnitude [1]. If he's right, that's a pretty hefty tax. But I don't see anyone being able to prove or disprove his hypothesis. Maybe most of the tax is an inevitable byproduct of increasingly complex chipsets.

[1] http://queue.acm.org/detail.cfm?id=1039523 around 1/3 of the way in


Less than 10% of the transistors dedicated to cache.


I seem to remember from one article that it was the same number of transistors as the older ARM cores (not the A8 or A9). This is a pretty big anchor to overcome.

Previously on HN: http://news.ycombinator.com/item?id=1037051


"Indeed. RISC architecture is gonna change everything."

"Yeah. RISC is good."


"So Intel lobbied heavily to get us to stay with them … [but] we went with IBM and Motorola with the PowerPC. And that was a terrible decision in hindsight. If we could have worked with Intel, we would have gotten onto a more commoditized component platform for Apple, which would have made a huge difference for Apple during the 1990s. So we totally missed the boat." - John Sculley


Great article. As an example, it's fairly obvious Apple is building up its internal CPU engineering abilities to inevitably put an ARM chip in a MacBook. But I don't think it will be in the near future, due to breaking compatibility with current software, especially software that uses SSE instructions. But who knows, maybe they'll create an x86 emulator for the transition, like Rosetta. OS XI, maybe?


I don't think it's clear that ARM would actually be that competitive with x86 at the high end. Many ARM features, like conditional execution, are great for increasing IPC in traditional in-order designs, but make complicated OoO designs more difficult. And as chips get bigger and more featureful, the relative cost of x86's decoding stage becomes less significant.

All of which isn't to say that the article is wrong and that ARM isn't about to take over the mainstream (I could see it happening but wouldn't bet on it).


I do wonder: for some of these higher-performance devices, could Intel / AMD build a version of their chips that is 64-bit only and removes a lot of the legacy instructions / vector attempts? It's not like a recompile / endian shift wouldn't be necessary going from ARM to x86, so it shouldn't add any time to the conversion. You have to account for all the variations of x86 in a compiler anyway.


I'm a web developer (C# full stack) with 2 years of experience, and I've been attracted to ARM development since the moment I studied it in university.

As I'll mostly be in technical roles for some time (3 years?), what should I consider before trying ARM development? What are the pros and cons, career-wise, and the technical challenges, relative to continuing to write boring CRUD C# applications?


The x86 architecture will win out for the same reason it won out decades ago. Most of the world's software was compiled for x86. I have all sorts of software I've been running on my Windows laptop. When I go to buy a new laptop, it'll be another Windows x86 laptop because my software would cost more to replace than the machine upon which it's running.


Virtualization today is vastly better than it was 10 years ago. We've also got a big, growing market that Intel isn't doing well in. I wouldn't count Intel out just yet, but their winning is far from certain.


Does anyone have any data that they can share on the following -- as a company, if I licensed the latest generation ARM processor (e.g., Cortex-A15) and then factored in any additional design and manufacturing costs (e.g., via a foundry), what would be the total cost advantage of using an ARM chip versus a comparable Intel chip (e.g., Atom)? Thanks!


The rest of the article aside, one thing jumped out at me - AMD's P/E ratio. What's the received wisdom for why they are so cheap?


They have tons of debt.


Just wanted to mention...

(e.g., A4 in a MacBook Air).

Incorrect. There's an Intel Core 2 Duo 1.4 GHz processor in the MacBook Air: http://www.ifixit.com/Teardown/MacBook-Air-11-Inch-Model-A13...

I guess the author meant the iPad.


Modern CPUs are pretty low-level RISC machines internally, with the x86 instruction set layered on top, right? Now, it would be interesting to see what would happen if someone started making desktop-class CPUs with the ARM instruction set instead of the legacy x86 instruction set.


The article states that people are holding onto their PCs longer because they're satisfied with the capabilities of their older machines. I wonder if this is really proven, or a misreading caused by the XP -> Vista transition problems.


I tossed Ubuntu (with ext4) on my 2005 laptop, and it's proving to be a formidable word processor and web browsing machine. I think a lot of people are just realizing that they don't need another core or a chip built with a smaller process to play flash games and write e-mail.

Though I'm a little concerned about what the smaller number of hardware sales will do to development in the server market. Can they maintain R&D budgets?


That will surely happen. CPUs have for some time been limited by heat, and the architecture that delivers more performance at the same power consumption will win. If ARM can deliver that, then there's no question about it.


Intel is aware of this. Atom is the shot across the bow. They are working on making smaller and lower-power parts. It's interesting to watch ARM race up as Intel races down.


If we compare the RISC nature of ARM with the CISC-implemented-over-RISC approach of contemporary x86s, it's easy to believe it will prove simpler to upscale ARM than to downscale the Atom.

Transmeta-style emulation could be one possible way out of this death spiral for Intel.


History has shown that Intel is very good at getting out of potential death spirals...


Yes, but back then they had someone who understood disruption much better (Andy Grove was friends with the author of The Innovator's Dilemma), and they never had to change the x86 platform. They only had to create simpler chips on the same platform (Celeron, Atom, the initial laptop chips - forgot their name).

This time, however, they need to change the whole platform, because it carries much bigger inefficiencies than ARM, and they won't be able to push power down that much, while ARM will keep improving at a steady pace.

The only solution is to buy a big ARM maker. To have a chance to dominate, they'd need to buy Qualcomm for Snapdragon, but they've just wasted 7 billion on an anti-virus company...

But even if they do that, I'm unsure of their potential domination, as I believe that in 2011 Nvidia will dominate the mobile market with its Tegra chips. That's because in 2011 the battle will be over who has the best GPU, not the best CPU. The GPU is increasingly more important (accelerating the UI, the browser, Flash, supporting higher resolutions, gaming, etc.)


The GPU is increasingly more important (accelerating the UI, the browser, Flash, supporting higher resolutions, gaming, etc)

Don't forget offloading computation from the CPU, like voice/face/gesture recognition and number crunching (there must be some use for that in a cellphone). And OpenCL already provides a hardware-independent abstraction layer for that.


Indeed - they are not dying anytime soon. See also this:

http://codingrelic.geekhold.com/2010/08/x86-vs-arm-mobile-cp...

for why ARM's technical advantages may be overblown.


Intel's x86 killed the RISC workstation with Windows compatibility and price per MIPS. Intel then matched and surpassed AMD on AMD64, counting on its own size and on the difficulty of building a very fast 64-bit x86.

All factors, Windows compatibility excepted, are on ARM's side this time. Of course Intel has survived bad situations before, but this time I am not betting on them.


Which factors exactly on ARM's side?

I can think of only two:

1) Power usage on low-end chips. ARM will be ahead of x86 on platforms below the smartphone for the foreseeable future, and on smartphones for another generation (2-3 years)

2) Manufacturers can customize ARM to build SoCs. I can't see Intel allowing complete customization like that, but I'd expect them to ship some competitive SoCs themselves.


It's simpler for ARM to adopt performance enhancements that make the core more complex than it is for Intel to adopt simplifications that make its cores more power-efficient without losing performance.


While this might be true for the mobile world, the author doesn't seem to consider that people are doing more of their computing in the cloud, and data centers in turn have to purchase great numbers of classical high-performance chips. While people are happy with the performance they get on their home machines, the demand for computing power in the cloud will keep growing.


It's the return of Advanced RISC Machines.



