Discussion (121 Comments)
RISC-V is addressing this issue quite directly. For things like desktops, laptops, SBCs and servers we have the RVA23 profile which defines quite specifically what features a chip must support to ensure code portability.
On top of this, there are platform specifications. For example, the server spec is about to be finalized next month. It extends RVA23 with things like UEFI, SBI, and ACPI to ensure that you can take something like a Linux distro and easily install it on any RISC-V server, just as you can in the world of x86-64.
> we have phones that are only good for landfill once their support lifetime is up
RISC-V will probably not solve that problem in general.
First, the ISA cannot really demand that your phone avoid a Broadcom wireless chip that requires proprietary firmware for example.
Also, the phone vendor can still lock down the devices to prevent running arbitrary code.
Thankfully, the RISC-V world is developing a culture of openness. If a company wants to create a fully “open” phone, they are quite likely to adopt RISC-V. And, because of RISC-V, even the SoC itself could be fully Open Source.
But your typical Android phone is not going to get more Open just because they contain a RISC-V CPU.
A great case study is the companies that implemented the pre-release vector standard in their chips.
The final version is different in a few key ways. Despite substantial similarities to the ratified version, very few people are coding SIMD for those chips.
If a proprietary extension does something actually useful to everyone, it’ll either be turned into an open standard or a new open standard will be created to replace it. In either case, it isn’t an issue.
The only place I see proprietary extensions surviving is in the embedded space, where they already do this kind of stuff, but even that seems to be the exception with the RISC-V chips I’ve seen. Using standard compilers and tooling instead of a crappy custom toolchain (probably built on an old version of Eclipse) is just nicer (and cheaper for chip makers).
Extensions allow you to address specific customer needs, evolve specific use cases, and experiment. AI is another perfect fit. And the hyperscaler market is another one where the hardware and software may come from the same party and be designed to work together. Compatibility with the standard is great for toolchains and off-the-shelf software but there is no need for a hyperscaler or AI specific extension to be implemented by anybody else. If something more universally useful is discovered by one party, it can be added to a future standard profile.
If you try to make your own extensions, the standard compiler flags won’t be supporting it and it’ll probably be limited to your own software. If it’s actually good, you’ll have to get everyone on board with a shared, open design, then get it added to a future RVA standard.
[0]: https://github.com/riscv-non-isa/riscv-server-platform
Some stuff like BRS (Boot and Runtime Services Specification) and SBI (Supervisor Binary Interface) already exist.
Is this really true? The computer ecosystem is more open now than ever. The original PC BIOS (which PC-compatible manufacturers needed to implement) was never an open, documented standard. It was a proprietary, closed system made by IBM. It's pretty fair to say that IBM didn't anticipate a PC/x86 ecosystem developing around their product. They even sued companies who made their own compatible BIOSes (like Corona). Intel didn't really have much to do with the success of the product at that point in time either, much less Microsoft.
In contrast, every widely-used modern system for hardware abstraction (UEFI/ACPI/DeviceTree/OpenSBI/etc.) is an open, royalty-free standard that anyone can use. Their implementation on ARM is newer, and inconsistent, but that's only because of how hugely diverse the ARM ecosystem is.
I think the issue is that desktop and server computing are “open” in the sense that you have full control over the software you run on them. So people interpret the dominant desktop and server platform architecture (the world of x86-64) as being open.
The embedded world is mostly closed: you are meant to run the software your hardware comes with. The platforms popular there (ARM and RISC-V) are considered less open.
Mobile devices like phones and tablets are historically closed devices, regardless of ISA. They are generally getting more closed in the name of security.
It is not the ISA that is “open” but the industry.
That said, in RISC-V, there is a sub-current of openness. I do not think that will overcome the industry tendencies in general, but there will be a small cadre of folks trying to create an open presence in every niche. The good news is that there is nothing to stop them. They will succeed eventually.
On the other hand, ARM sells the cores to SoC vendors (and doesn't care much what becomes of them), SoC vendors duct-tape the ARM cores to a bunch of Synopsys peripherals and sell the resulting SoCs to smartphone and car makers (and don't care much about the product). System integrators throw Android on top and sell it to the customers. Then Google, who gets all the cream via Play, hides all the mess behind a thousand layers of Java abstractions.
DeviceTree is an offshoot of Sun's OpenFirmware (and it leaves out all the hard stuff: OpenFirmware had Forth, while DeviceTree expects the kernel to support every single brand of fan switch). OpenSBI is a disaster. I'm sorry, but what kind of bright mind came up with the idea of hiding the damn *timer* behind a privilege switch? Timers were enough of a pain point on x86 already, before it settled on the userspace-accessible RDTSC. RISC-V SBI? Reproducing x86 one stupid decision at a time.
One reason UNIX became widely adopted, besides being freely available versus the other OSes, was that it allowed companies to abstract their hardware differences, offering some market differentiation while keeping some common ground.
Those phones' common ground is called Android, with a Java/Kotlin/C/C++ userspace; folks should stop seeing them as GNU/Linux.
Sadly, yes. RISC-V vendors are repeating literally every single mistake that the ARM ecosystem made and then making even dumber ones.
This is true, but only for the bigger players. The nature of hardware still fundamentally favors scale and centralization. Every hyperscaler eventually reaches a size where developing in-house CPU talent is just straight up better (Qualcomm and Ventana + Nuvia, Meta and Rivos, Google's been building their own team, Nvidia and Vera Rubin; God help Microsoft though). This does not bode well for RISC-V companies, who are just being used as a stepping stone. See Anthropic, who currently licenses but is rumored to be developing their own in-house talent [1].
> Extensibility powers technology innovation
>> While this flexibility could cause problems for the software ecosystem...
"While" is doing some incredibly heavy lifting. It is not enough to be able to run Ubuntu, as may be sufficient for embedded applications; it also has to be fast. Thus, there are many hardcoded software optimizations targeting just one specific CPU, let alone ARM or x86 in general. For RISC-V? Good luck coding up every permutation of extensions that exists, and even if it's all lumped under RVA23, good luck parsing through 100 different "performance optimization manuals" from 100 different companies.
> How mature is the software ecosystem?
10 years ago, when RISC-V was invented, the founders said 20 years. 10 years later, I say 30 years.
The nature of hardware is also that the competition (ARM) is not standing still. The reason for ARM's dominance now is the failure of Intel, and the strong-arming of Apple.
I have worked in and on RISC-V chips for a number of years, and while I am still a believer that it is the theoretical end state, my estimates just feel like they're getting longer and longer.
[1]: https://www.reuters.com/business/anthropic-weighs-building-i...
This would be a problem for any ISA with multiple/many vendors.
Imo this is pretty misguided. If you're writing above assembly level, you can read the performance optimization manual for Intel, and that code will also be really fast on AMD (or even apple/graviton). At the assembly level, compilers need to know a little bit more, but mostly those are small details and if they get roughly the right metrics, the code they produce is pretty good.
Unity, Bazaar, Mir, Upstart, Snap, etc.
All of them had existing well established projects they attempted to uproot for no purpose other than Canonical wanted more control but they can't actually operate or maintain that control.
Even to this day there is a complex and archaic process of using Launchpad where git is tacked on because they stuck with Bazaar for so long.
You are mistaken here. Bazaar, Mercurial, and Git appeared at about the same time, and I think Bazaar was released first.
IIRC, Bazaar tried to distinguish itself by handling renames better than other version control systems. In practice, this turned out not to be very important to most people.
(Tangent: It wasn't clear at the time whether Mercurial or Git was the better pick. Their internal design was very similar. Mercurial offered a more pleasant user interface, superior cross-platform support, and a third advantage that I'm forgetting at the moment. Git had unbeatable author recognition. Eventually, Git's improved Windows support and the arrival of GitHub sealed its victory in the popularity contest. But all of that came to pass well after Bazaar was released.)
Named branches vs bookmarks in hg just means bike shedding about branching strategy. Bookmarks ultimately work more like lightweight git style branches, but they came later, and originally couldn't even be shared (literally just local bookmarks). Named branches on the other hand permanently accumulate as part of the repository history.
Git came out with 1 cohesive branch design from day 1.
Bazaar and Git were created around the exact same time.
Unity was abandoned after a failed attempt to circumvent Gnome 3. I was actually involved with the development of Compiz and they hired Sam to work on Unity, as he was one of the masterminds behind Compiz, but again they just didn't have the vision or execution to make it work.
If I ever go back to GNU/Linux full time, GNOME certainly won't be it.
What?
Bzr predates git (by a few days, but still). Launchpad predated GitHub by a lot. Canonical just played those cards horribly and lost.
You seem to be saying it like it's a good thing?
Can't wait for that thing to explode and die.
Lying to users and turning apt install commands into shims for a barely functional replacement was disrespectful. Flatpak was and still is better, but even then, if I say I want a system package, give me a system package. If you have infrastructural reasons why you cannot continue to provide that package, then remove it; Debian-based systems have many ways to provide such things.
Canonical did it because they wanted to boost Snap usage, and it failed while sending a clear message that they don't respect their user base.
Beyond the potential platform fragmentation due to the variability of the ISA (a very unfortunate design choice IMO), mentioned elsewhere in this thread, what I find most frustrating is the boot process / equivalent of BIOS in that world.
My impression: complete lack of standardization, a ton of ad-hoc tools native to each vendor, a complete mess, especially when it comes to getting the board to boot from devices the vendor didn't target (e.g. SSDs).
Until two things happen:
1. a CPU with a somewhat competitive compute power appears (so far, all the SBC's I've tried are way behind ARM and x86)
2. a unified BOOT environment which supports a broad standard of devices to boot from (SSD, network, SD-Card, hard-drives, etc...)
the whole RISC-V thing will remain a tiny niche thing, especially because when a vendor loses interest in the platform, all of the SW that is native to the platform goes to rot immediately (not that it was particularly good quality in the first place).
I think this is going to be embarrassingly wrong.
> all of the SW that is native to the platform
There are several RISC-V Linux distros where essentially all the software available for the x86-64 platform is also available on the RISC-V edition. Let’s use Ubuntu as an example.
> when a vendor loses interest in the platform, all of the SW that is native to the platform goes to rot immediately
Ubuntu will provide updates for 15 years. That does not seem very immediate.
For RVA23 hardware, I expect even new Ubuntu releases to support it up to around 2030 at least. 15 years from then will be 2045. I cannot say that I am picking up what you are laying down here.
I got the same experience tinkering with ARM devices. It soured me so much that I have decided that until ARM offers a unified boot mechanism like x86 PCs do, I will ignore it, no matter the supposed benefits.
The RISC-V server spec mandates UEFI, ACPI, and SBI. Here is a RISC-V “desktop” motherboard that has the same:
https://milkv.io/titan
On the low end where RISC-V currently lives, simplicity is a virtue.
On the high end, RISC isn't inherently bad; it just couldn't keep up with the massive R&D investment on the x86 side. It can go fast if you sink some money into it, like Apple, Qualcomm, etc. have done with ARM.
In 2026, RISC-V is not what I would call “low end”. Look up the P870-D, Ascalon, or the C950.
Do you think Apple spends more money than Intel on chip design?
Absolutely. Apple's R&D budget for 2025 was $34 billion to Intel's ~$18 billion (and the majority of Intel's R&D budget goes to architecture, while for Apple the fab side is all TSMC R&D: Apple pays TSMC another ~$20 billion a year, of which something like $8 billion is probably TSMC R&D that goes into Apple's chips).
Sure, not all of Apple's $34B is CPU R&D, but on a like-for-like basis, Apple probably has at least 50% more chip design budget (and they only make ~10-20 different chips a year compared to Intel, who make ~100-200).
Apple's business is vertical integration; they have zero presence in the chip market.
Pretty much every new ISA introduced since the 80’s has been RISC.
PowerPC was adopted by Apple (RISC), they went back to Intel (CISC), and then they went back to RISC (Apple Silicon).
ARM, which powers pretty much all phones, tablets, and Chromebooks, is RISC.
Windows runs on ARM now as well (Qualcomm X Elite).
The interest around RISC-V is that anybody can use it in their chips without having to ask permission.
To start, modern x86 chips are more hard-wired than you might think; certain very complex operations are microcoded, but the bulk of common instructions aren't (they decode to single micro-ops), including ones that are quite CISC-y.
Micro-ops also aren't really "RISC" instructions that look anything like most typical RISC ISAs. The exact structure of the microcode is secret, but as an example, the Pentium Pro uses 118-bit micro-ops when most contemporary RISCs were fixed at 32 bits. Most microcoded CPUs, anyway, have microcode that is in some sense simpler than the user-facing ISA but also far lower-level and more tied to the microarchitecture.
But I think most importantly, this idea itself - that a microcoded CISC chip isn't truly CISC, but just RISC in disguise - is kind of confused, or even backwards. We've had microcoded CPUs since the 50s; the idea predates RISC. All the classic CISC examples (8086, 68000, VAX-11) are microcoded. The key idea behind RISC, arguably, was just to get rid of the friendly user-facing ISA layer and just expose the microarchitecture, since you didn't need to be friendly if the compiler could deal with ugliness - this then turned out to be a bad idea (e.g. branch delay slots) that was backtracked on, and you could argue instead that RISC chips have thus actually become more CISC-y! A chip with a CISC ISA and a simpler microcode underneath isn't secretly a RISC chip...it's just a CISC chip. The definition of a CISC chip is to have a CISC layer on top, regardless of the implementation underneath; the definition of a RISC chip is to not have a CISC layer on top.
The way I understand it, back in the day when RISC vs CISC battle started, CPUs were being pipelined for performance, but the complexity of the CISC instructions most CPUs had at the time directly impacted how fast that pipeline could be made. The RISC innovation was changing the ISA by breaking complex instructions with sources and destinations in memory to be sequences of simpler loads and stores and adding a lot more registers to hold the temporary values for computation. RISC allowed shorter pipelines (lower cost of branches or other pipeline flushes) that could also run at higher frequencies because of the relative simplicity.
What Intel did went much further than just microcode. They broke up the loads and stores into micro-ops using hidden registers to store the intermediates. This allowed them to profit from the innovations that RISC represented without changing the user facing ISA. But internal load store architecture is what people typically mean by the RISC hiding inside x86 (although I will admit most of them don't understand the nuance). Of course Intel also added Out of Order execution to the mix so the CPU is no longer a fixed length pipeline but more like a series of queues waiting for their inputs to be ready.
These days, high-performance RISC architectures contain all the same architectural elements as x86 CPUs (including micro-ops and extra registers), and the primary difference is the instruction decoding. I believe AMD even designed (but never released) an ARM CPU [1] that put a RISC instruction decoder in front of what I believe was the Zen 1 backend.
[1]: https://en.wikipedia.org/wiki/AMD_K12
Recently I encountered a view that has me thinking. They characterized the PIO "ISA" in the RPi MCU as CISC. I wonder what you think of that.
The instructions are indeed complex, having side effects, implied branches and other features that appear to defy the intent of RISC. And yet they're all single cycle, uniform in size and few in number, likely avoiding any microcode, and certainly any pipelining and other complex evaluation.
If it is CISC, then I believe it is a small triumph of CISC. It's also possible that even characterizing it as an ISA at all is folly, in which case the point is moot.
The toolchains from Espressif seem to work pretty well with these, as well as with their earlier (some sort of RISC) processors. I have had some code, however, that did not produce the desired results until I upgraded the toolchain.
The other issue I've run into is that some cell phone battery packs that work well with Raspberry Pis won't stay on with the RISC-V ESPs because they draw so little power the battery pack doesn't detect the load.
I may be using this one soon:
https://store.deepcomputing.io/products/dc-roma-risc-v-mainb...
Meanwhile, wouldn't China be more heavily invested in Loongson?
LoongArch is 32-bit instructions only. This means no MCUs, due to poor code density. That forces them into RISC-V anyway, at which point you might as well pour all your money and dev time into one ISA instead of two. RISC-V has way more worldwide investment, meaning LoongArch looks like a losing horse in the long term when it comes to software.
https://www.sifive.com/blog/investing-in-our-next-chapter-of...
And here is an example of Alibaba using RISC-V for inference and training in the cloud:
https://www.cnbc.com/amp/2026/04/08/china-alibaba-data-cente...
Those are both up and running today.
And of course there is Tenstorrent:
https://tenstorrent.com/ip/risc-v-cpu
I've been hearing about ARM computers for almost twenty years, and only just recently have general-purpose, decently-priced ARM laptops been released (Qualcomm laptops, the MacBook neo).
And ARM desktops are still not a thing, in practice.
https://chrisacorns.computinghistory.org.uk/Computers/A500.h...
The desktop market is not the only product space anymore.
Apple has had brilliant success with its ARM processors, proving that ARM is more than capable. Before Apple's switch, Chromebooks had been using ARM since 2011.
Android is the dominant operating system in mobile and most Android devices use the ARM platform. Many of these devices have desktop capability -- they are a viable convergence platform.
Side note: It's kinda funny to me that "the keyboard is detachable, the screen is glass and you can touch/write on it" makes it "lesser" than a laptop rather than being an upgrade.
But yeah, definitely happy to see more in this space. Now we just need e-Paper laptops to take off as well :)