Discussion (215 Comments) on Hacker News
A one-bit adder (which is subtraction in reverse) makes signals pass through two gates.
See https://en.wikipedia.org/wiki/Adder_(electronics)
You need the 2 gates for adding/subtracting because you care about carry. So if you're adding/subtracting 8 bits, 16 bits, or more, you're connecting multiples of these together, and that carry has to ripple through all the rest of the gates one-by-one. It can't be parallelized without extra circuitry, which increases your costs in other ways.
Without the AND gate needed for carry, all the XORs can fire off at the same time. If you added the extra circuitry for a parallelizable add/subtract to make it as fast as XOR, your actual parallel XOR would consume less power.
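The gate-level structure being described can be sketched in software. This is a rough Python model (an illustration only, not real hardware): each full adder computes its sum with XOR gates, and its carry-out feeds the next stage, so the carries must ripple serially.

```python
def full_adder(a, b, cin):
    """One-bit full adder: two XOR gates for the sum, AND/OR for the carry."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, bits=8):
    """n-bit ripple-carry add: each stage waits on the previous stage's carry."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result  # modulo 2**bits; the final carry-out is discarded
```

Without the carry terms, the loop would collapse to a single parallel `x ^ y`, which is exactly why a pure XOR needs no ripple.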
Because, as the article notes, on "any modern x86 processor" both xor r, r and sub r, r are handled by the front end and have essentially no cost.
There is a sequential implementation of the ripple-carry adder that uses a clock and a register; it adds 1 bit per cycle, but nobody uses this, for obvious reasons. It's just a toy example for education. A normal ripple-carry adder has some propagation delay before the output is valid, but that is much less than a clock cycle. You can also design customized adder circuits for 4-bit, 8-bit, 16-bit, etc. separately, which greatly reduces the propagation delay to only 2 or 3 levels of gates, instead of n gates as in the ripple-carry adder.
Internally the adder (which is also used as a subtractor by ones complementing one of the inputs and inverting the initial carry in) uses xor, and you can implement the XOR logic op with the same gates.
Also, modern ALUs don't really use ripple carry any more, but instead stuff like a Kogge-Stone adder (or really, typically a hierarchical set of different techniques). https://en.wikipedia.org/wiki/Kogge%E2%80%93Stone_adder
(There was another, now deleted, comment somewhere in this thread that mentioned IBM's preference for SUB. Source of that statement was Claude, but it seems very likely to be correct. The BIOS code I've checked myself, lots of 'SUB AX,AX', no XOR)
In any case, I am describing equipment built mostly in late 60s through the late 70s at IBM Rochester and Poughkeepsie. The IBM PC was developed by an entirely different team at IBM Boca Raton, and IBM didn't design its CPU.
Merely pointing out that where both operations were available, there seems to have been a preference to use SUB instead, with some continuity from early business-oriented mainframes, to the 360, to the PC.
Will remember for the next time I write asm for Itanium!
MIPS - $zero
RISC-V - x0
SPARC - %g0
ARM64 - XZR
Even tiny tiny CPUs can do sub in one cycle, so I doubt that. On super-scalar CPUs xor and sub are normally issued to the same execution units so it wouldn't make a difference there either.
- As transistors got smaller, FP performance increased, memory latency stayed the same (or even increased).
- If you are doing a lot of floating point, you are probably doing array processing, so you might as well go for a GPU (or at least SIMD).
- Low instruction density is bad for I-cache. Yes, RISC fans, density matters! And VLIW is an absolute disaster in that regard. Again, this is less visible in number-crunching loads where the processor executes relatively small loops many times over.
Probably, there are ALU pipeline designs where you don't pay an explicit penalty. But not all, and so XOR is faster.
Surely, someone as awesome as Raymond Chen knows that. The answer is so obvious and basic I must be missing something myself?
Yes, but that need not scale linearly with the number of bits. https://en.wikipedia.org/wiki/Carry-lookahead_adder:
“A carry-lookahead adder (CLA) or fast adder is a type of electronics adder used in digital logic. A carry-lookahead adder […] can be contrasted with the simpler, but usually slower, ripple-carry adder (RCA), for which the carry bit is calculated alongside the sum bit, and each stage must wait until the previous carry bit has been calculated to begin calculating its own sum bit and carry bit. The carry-lookahead adder calculates one or more carry bits before the sum, which reduces the wait time to calculate the result of the larger-value bits of the adder.
[…]
Already in the mid-1800s, Charles Babbage recognized the performance penalty imposed by the ripple-carry used in his difference engine, and subsequently designed mechanisms for anticipating carriage for his never-built analytical engine.[1][2] Konrad Zuse is thought to have implemented the first carry-lookahead adder in his 1930s binary mechanical computer, the Zuse Z1.”
I think most, if not all, current ALUs implement such adders.
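The generate/propagate idea behind carry-lookahead can be sketched briefly. In this Python illustration (a sketch of the textbook 4-bit scheme, not any specific ALU), every carry is expressed directly in terms of the inputs, so no carry waits on the previous sum stage:

```python
def cla_add4(x, y, cin=0):
    """4-bit carry-lookahead add: returns (sum, carry-out)."""
    g = [(x >> i) & (y >> i) & 1 for i in range(4)]    # generate: g_i = a_i AND b_i
    p = [((x >> i) ^ (y >> i)) & 1 for i in range(4)]  # propagate: p_i = a_i XOR b_i
    c = [cin]
    for i in range(4):
        # c_{i+1} = g_i OR (p_i AND c_i); in hardware each of these is
        # expanded/flattened into two-level AND-OR logic over g, p and cin,
        # so all carries settle in a constant number of gate delays
        c.append(g[i] | (p[i] & c[i]))
    s = 0
    for i in range(4):
        s |= (p[i] ^ c[i]) << i
    return s, c[4]
```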
It could also be as a result of most people working in assembly being aware of the properties of logic gates, so they carry the understanding that under the hood it might somehow be better.
I think this might just be due to not realizing just how far back in CPU history this goes.
The former just seems way more practical
For .b and .w, CLR, EOR, and SUB are all identical.
For .l, MOVEQ #0 is the winner.
8080/Z80 is probably where XOR A got a lead over SUB A, but they are also the same number of cycles.
E.g. on Z80 and 6502 both have the same cycle count.
EOR and SBC still have the same cycle counts though.
Not scalar, but still sub vs xor. Though you’d use vmov immediate for zeroing anyway.
A real-world CPU example is the Cray-1, where S-Register Scalar Operations (64-bit) take 3 cycles for ADD/SUB but still only 1 cycle for XOR. [1]
[1] https://ed-thelen.org/comp-hist/CRAY-1-HardRefMan/CRAY-1-HRM...
A tangent, but what is Obvious depends on what you know.
Often experts don't explain the things they think are Obvious, but those things are only Obvious to them, because they are the expert.
We should all be kind, and also explain the Obvious things to those who do not know them.
Edit: Looked at the comments; it seems like x86 and the major 8-bit CPUs had the same speed, so I'm pondering whether this might be a remnant from the 4-bit ALU times.
I think that era of CPUs used a single circuit capable of doing add, sub, xor etc. They'd have 8 of them and the signals propagate through them in a row. I think this page explains the situation on the 6502: https://c74project.com/card-b-alu-cu/
And this one for the ARM 1: https://daveshacks.blogspot.com/2015/12/inside-alu-of-armv1-...
But I'm a software engineer speculating about how hardware works. You might want to ask a hardware engineer instead.
In any ALU the speed is determined by the slowest operation, so XOR is never faster. The width of the ALU does not matter; all that matters is that an ALU does many kinds of operations, including XOR and subtraction, where the operation done by the ALU is selected by some control bits.
I have explained in another comment that the only CPUs where XOR can be faster than subtraction are the so-called superpipelined CPUs. Superpipelined CPUs have been made only after 1990, and there were very few such CPUs. Even if in superpipelined CPUs it is possible for XOR to be faster than subtraction, it is very unlikely that this feature has been implemented in any of the few superpipelined CPU models ever made, because it would not have been worthwhile.
For general-purpose computers, there have never been "4-bit ALU times".
The first monolithic general-purpose processor was Intel 8008 (i.e. the monolithic version of Datapoint 2200), with an 8-bit ISA.
Intel claims that Intel 4004 was the first "microprocessor" (in order to move its priority earlier by one year), but that was not a processor for a general-purpose computer, but a calculator IC. Its only historical relevance for the history of personal computers is that the Intel team which designed 4004 gained a lot of experience with it and they established a logic design methodology with PMOS transistors, which they used for designing the Intel 8008 processor.
Intel 4004, its successors and similar 4-bit processors introduced later by Rockwell, TI and others, were suitable only for calculators or for industrial controllers, never for general-purpose computers.
The first computers with monolithic processors, a.k.a. microcomputers, used 8-bit processors, and then 16-bit processors, and so on.
For cost reduction, it is possible for an 8-bit ISA to use a 4-bit ALU or even just a serial 1-bit ALU, but this is transparent for the programmer and for general-purpose computers there never were 4-bit instruction sets.
When you do XOR together with many other operations in an ALU (arithmetic-logical unit), the speed is determined by the slowest operation, so the speed of any faster operation does not matter.
This means that in almost all CPUs XOR and addition and subtraction have the same speed, despite the fact that XOR could be done faster.
In a modern pipelined CPU, the clock frequency is normally chosen so that a 64-bit addition can be done in 1 clock cycle, when including all the overheads caused by registers, multiplexers and other circuitry outside the ALU stages.
Operations more complex than 64-bit addition/subtraction have a latency greater than 1 clock cycle, even if one such operation can be initiated every clock cycle in one of the execution pipelines.
The operations less complex than 64-bit addition/subtraction, like XOR, are still executed in 1 clock cycle, so they do not have any speed advantage.
There have existed so-called superpipelined CPUs, where the clock frequency is increased, so that even addition/subtraction has a latency of 2 or more clock cycles.
Only in superpipelined CPUs would it be possible to have an XOR instruction that is faster than subtraction, but I do not know if this has ever been implemented in a real superpipelined CPU, because it could complicate the execution pipeline for negligible performance improvements.
Initially superpipelining was promoted by DEC as a supposedly better alternative to the superscalar processors promoted by IBM. However, later superpipelining was abandoned, because the superscalar approach provides better energy efficiency for the same performance. (I.e. even if for a few years it was thought that a Speed Demon beats a Brainiac, eventually it was proven that a Brainiac beats a Speed Demon, like shown in the Apple CPUs)
While mainstream CPUs do not use superpipelining, there have been some relatively recent IBM POWER CPUs that were superpipelined, but for a different reason than originally proposed. Those POWER CPUs were intended for having good performance only in multi-threaded workloads when using SMT, and not in single-thread applications. So by running simultaneous threads on the same ALU the multi-cycle latency of addition/subtraction was masked. This technique allowed IBM a simpler implementation of a CPU intended to run at 5 GHz or more, by degrading only the single-thread performance, without affecting the SMT performance. Because this would not have provided any advantage when using SMT, I assume that in those POWER CPUs XOR was not made faster than subtraction, even if this would have theoretically been possible.
Energy efficiency is usually better. There are countless ways to translate energy efficiency into higher performance.
The predominance of these idioms as a way to zero out a register led Intel to add special xor r, r-detection and sub r, r-detection in the instruction decoding front-end and rename the destination to an internal zero register, bypassing the execution of the instruction entirely.
There are also tree adders which add in O(log(n)) time but use O(n^2) gates if you really need the speed, but AFAIK nobody actually does need to.
[1] https://en.wikipedia.org/wiki/Carry-skip_adder
I have two bit-slice machines from TI based on the 74S481 (4-bit slice x 4).
Just like with the 74181, all ALU operations go through the same path, there are just extra gates that make the difference between logical or arithmetic. For instance, for each bit in the slice, the carry path is masked out if logical, but used if arithmetic.
* The XOR operation (logical) is accomplished with A+B but no bits carry. If carry is not masked, you get arithmetic ADD.
* The ZERO or CLEAR operation is (A+A without carry). With carry, A+A is a shift-left.
* The ONES operation forces all the carry chain to 1 (ignoring operand) (you can do a ONES+1 to get arithmetic 0, but why?)
* In the simpler 74181 (4 years earlier) there are 16 operations with 48 logical/arithmetic outcomes. Pick 12 or so for your instruction set. There are some weirdos.
The crazy thing here is that in the TM990/1481 implementation, the microinstruction clock is 15 MHz, and each microinstruction has a field for the number of micro-wait-states. This is faster than the '481's max!
Theoretically, if 66 ns is sufficient to settle the ALU, a logical operation doesn't need a micro-wait-state, while arithmetic needs one, only because of the carry-look-ahead. If I/O buses are activated, then micro-instructions account for setup/hold times. I could be wrong about the details, but that field is there!
It's the only architecture I know of with short and long microinstructions! (The others are like a fixed 4-stage cycle: input valid, ALU valid, store)
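The shared-path idea above can be illustrated with a toy model (plain Python, not the actual '481 netlist): the same per-bit add path yields XOR when the carry chain is masked, and ADD (or shift-left, for A+A) when it isn't.

```python
def alu_add(a, b, carry_enabled, bits=4):
    """Toy shared ALU path: masking the carry chain turns ADD into XOR."""
    mask = (1 << bits) - 1
    if carry_enabled:
        return (a + b) & mask   # carries propagate: arithmetic ADD
    return (a ^ b) & mask       # carries masked: per-bit sum only, i.e. XOR

a = 0b0110
assert alu_add(a, a, carry_enabled=False) == 0               # A+A, no carry: CLEAR
assert alu_add(a, a, carry_enabled=True) == (a << 1) & 0xF   # A+A with carry: shift left
```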
I've only really looked at a single AM2900 implementation (and it was far from optimal). Guess I need to dig deeper at some point.
> The ONES operation forces all the carry chain to 1 (ignoring operand) (you can do a ONES+1 to get arithmetic 0, but why?)
Forcing all carries to 1 inverts the output.
If I'm understanding the ALU correctly, (the datasheet doesn't show that part) it only implements OR and XOR. When combined with the ability to invert both inputs, AND can be implemented as !(!A OR !B), NAND is (!A OR !B) and so on.
Or maybe the ALU implements NOR and XNOR, and all the carry logic is physically inverted from what the documentation says.
There's also the TEST instruction, which does a logical AND but without storing the result (like CMP does for SUB). This can be used to test specific bits.
Testing a single register for zero can be done in several ways, in addition to CMP with 0:
The 8080/Z80 didn't have TEST, but the other three were all in common use. Particularly INC/DEC, since it worked with all registers instead of just the accumulator.
Also, any arithmetic operation sets those flags, so you may not even need an explicit test. MOV doesn't set flags however, at least on x86 -- it does on some other architectures.
> It encodes to the same number of bytes, executes in the same number of cycles.
I mean, not for zeroing because we know from the TFA that it's special-cased anyway. But maybe if you test on different registers?
That would be quite late then, 1997 Pentium 2 for general population.
https://en.wikipedia.org/wiki/Carry-lookahead_adder
The only minor difference between the two on x86, really, is SUB sets OF and CF according to the result while XOR always clears them.
(But this does not discount the fact that basically all CPUs treat them both as one cycle)
<op> (1 Byte opcode), <Registers> (1 Byte), <immediate value> (1 Byte)
While xor eax, eax only uses 2 bytes. Since there are only 8 registers, meaning they can be encoded with 3 bits, you can pack two values into the <Registers> field (ModR/M).
Making mov eax, 0 only take two bytes would require significant changes of the ISA to allow immediate values in the ModR/M byte (or similar) but there would be little benefit since zeroing can already be done in 2 bytes and I doubt that other cases are even close to frequent enough for this to be any significant benefit overall. An actual improvement would be if there was a dedicated 1 Byte set-rax-to-0 instruction, but obviously that comes at a tradeoff where we have to encode another operation differently (probably with more bytes) again (and you can't zero anything else with it).
https://wiki.osdev.org/X86-64_Instruction_Encoding
https://pyokagan.name/blog/2019-09-20-x86encoding/
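For concreteness, here are the byte sequences in question (32-bit operand size, no prefixes; values from the standard x86 opcode map):

```python
# Encodings of the three zeroing candidates for EAX.
encodings = {
    "mov eax, 0":   bytes([0xB8, 0x00, 0x00, 0x00, 0x00]),  # B8+r with a full imm32
    "xor eax, eax": bytes([0x31, 0xC0]),                    # 31 /r, ModR/M = C0 (eax, eax)
    "sub eax, eax": bytes([0x29, 0xC0]),                    # 29 /r, ModR/M = C0 (eax, eax)
}
for insn, enc in encodings.items():
    print(f"{insn:14} {enc.hex(' ')}  ({len(enc)} bytes)")
```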
It could have been added to x86, even as a group of single-byte opcodes with the register encoded in three bits (as with PUSH, POP, and INC/DEC outside of long mode). But the XOR idiom was already established on the 8080 by that point.
Of course many of the RISC processors also have fixed length instructions, with small literal values being encoded as part of the instruction, so "mov reg, #0" and "mov reg, zero" would both be same length.
One-byte INC/DEC was dropped with x86-64, and PUSH/POP are almost obsolete in APX due to its addition of PUSH2/POP2, leaving only the least useful of the five in the most recent incarnation of the instruction set.
It used to be not only faster but also smaller. And back then this mattered.
Say you had a computer running at 33 MHz: you had 33 million cycles per second to do your stuff. A 60 Hz game? 33 million / 60, and suddenly you only have about 550 000 cycles per frame. 200 scanlines? Suddenly you're left with only about 2 750 cycles per scanline to do your stuff. And 2 750 cycles really isn't that much.
So every cycle counted back then. We'd use the official doc and see how many cycles each instruction would take. And we'd then verify by code that this was correct too. And memory mattered too.
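Spelled out with exact integer division (using the 33 MHz / 60 Hz / 200-scanline figures from the comment):

```python
cpu_hz = 33_000_000   # 33 MHz CPU
fps = 60              # 60 Hz game
scanlines = 200

cycles_per_frame = cpu_hz // fps                      # cycles available per frame
cycles_per_scanline = cycles_per_frame // scanlines   # cycles available per scanline
print(cycles_per_frame, cycles_per_scanline)
```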
XOR was both faster and smaller (fewer bytes) than a MOV ..., 0.
Full stop.
And when those CPUs first began having caches, the caches were really tiny at first: literally caching a ridiculously low number of CPU instructions. We could actually count the size of the cache manually (for example by filling it with a few NOP instructions, then modifying them to, say, add one, and checking which result we got at the end).
XOR, being smaller, also let you put more instructions in the cache.
Now people may lament that it persisted way long after our x86 CPUs weren't even real x86 CPUs anymore and that is another topic.
But there's a reason XOR was used and people should deal with it.
We zero with XOR EAX,EAX and that's it.
Here's some more prior art
I looked it up afterwards, and xor was also a valid instruction in that architecture to zero out a register, and used even fewer cycles than the subtraction method; but it was not listed in the subset of the assembly language instructions we were allowed to use for that unit. I suspect that it was deemed a bit off-topic, since you would need to explain what the mathematical XOR operation was (if you hadn't already learned about it in other units), when the unit was about something else entirely, but everyone knows what subtraction is, and that subtracting a number from itself leaves zero.
[0] Not x86, I do not recall the exact architecture.
PS. What is static vs dynamic count?
Dynamic count - how many times an opcode gets executed.
I.e. an instruction that doesn't appear often in code, but comes up in some hot loops (like encryption), would have a low static and a high dynamic count.
Edit: this is apparently not the case, see @tliltocatl's comment down the thread
8 'sub al, al', 14 'sub ah, ah', 3 'sub ax, ax'
26 'xor al, al', 43 'xor ah, ah', 3 'xor ax, ax'
edit: checked a 2010 bios and not a single 'sub x, x'
It has already been suggested to use these for steganography, i.e. for embedding a hidden message in a binary executable file, by encoding 1 or more bits in the choice of the instruction encoding among alternatives, for every instruction for which alternatives exist.
Absolutely. But I can also imagine that it feels more like something that should be more efficient, because it's "a bit hack" rather than arithmetic. After all, it avoids all the "data dependencies" (carries, never mind the ALU is clocked to allow time for that regardless)!
I imagine that a similar feeling is behind XOR swap.
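For reference, the XOR swap in question exchanges two values without a temporary; sketched in Python:

```python
def xor_swap(a, b):
    """Swap two integers using three XORs and no temporary."""
    a ^= b   # a = a ^ b
    b ^= a   # b = b ^ (a ^ b) = old a
    a ^= b   # a = (a ^ b) ^ (old a) = old b
    return a, b
```

The same "feels clever" caveat applies: the three XORs form a serial dependency chain, so on a modern out-of-order CPU a plain swap through a temporary register is usually at least as fast, and in C the XOR swap silently zeroes the value if both operands alias the same memory location.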
> Once an instruction has an edge, even if only extremely slight, that’s enough to tip the scales and rally everyone to that side.
Network effects are much older than social media, then....
xor was the default zeroing idiom. I only did sub reg,reg when I actually wanted its flags result. Otherwise the main rule is: do not touch either form unless flags liveness makes the rewrite obviously safe. I had about 40 such idioms for the passes.
The chirality argument made is more akin to dynamic systems balance; yes, you can balance a pencil on its point .. but given a bit of random tilt one way or the other it's going to tend to keep going and end near flat on the table.
There's exceptions, but they tend to be colonial animals in the broadest sense e.g. how clownfish males are famously able to become female but each group has one breeding male and one breeding female at any given time*, or bees where the males (drones) are functionally flying sperm and there's only one fertile female in any given colony; or some reptiles which have a temperature-dependent sex determination that may have been 50/50 before we started causing rapid climate change but in many cases isn't now: https://en.wikipedia.org/wiki/Temperature-dependent_sex_dete...
* Wolves, despite being where nomenclature of "alpha" comes from, are not this. The researcher who coined the term realised they made a mistake and what he thought of as the "alpha" pair were simply the parents of the others in that specific situation: https://davemech.org/wolf-news-and-information/
as mentioned by others this is conjectural but it is a popular (if somewhat unfalsifiable) explanation for homochirality
E.g., for one, Isaac Asimov in the 1970s wrote at length on this in his role as a non-fiction science writer with a chemistry PhD.
> male/female ratios somehow tend to balance at 50/50.
This is different to the case of actual right handed dominance in humans and to L- Vs R- dominance in chirality ...
( Men and women aren't actual mirror images of each other ... )
Are, I believe, chosen by intelligent humans who are deliberately trying to keep the lines at gas stations balanced.
If you want to get diarrhea.
I remember the very first ROM instruction was XOR A and this was already a revelation to me as I'd never considered doing anything other than LD A,0 to clear the accumulator.
> In my hypothetical history, xor and sub started out with roughly similar popularity, but xor took a slight lead due to some fluke, perhaps because it felt more “clever”.
SO MUCH ink and "odd" code has been spilled over these 2 sentences over the past few decades...
There are many kinds of SUB instructions in the x86-64 ISA, which do subtraction modulo 2^64, modulo 2^32, modulo 2^16 or modulo 2^8.
To produce a null result, any kind of subtraction can be used, and XOR is just a particular case of subtraction, it is not a different kind of operation.
Unlike for bigger moduli, when operations are done modulo 2 addition and subtraction are the same, so XOR can be used for either addition modulo 2 or subtraction modulo 2.
It's different in that there's no carry propagation.
Whenever you do addition/subtraction modulo some power of two, the carry does not propagate over the boundaries that correspond to the size of the modulus.
For instance, you can make the 128-bit register XMM1 zero in one of the following ways:
PXOR XMM1, XMM1 (subtraction modulo 2^1, per bit)
PSUBB XMM1, XMM1 (subtraction modulo 2^8)
PSUBW XMM1, XMM1 (subtraction modulo 2^16)
PSUBD XMM1, XMM1 (subtraction modulo 2^32)
PSUBQ XMM1, XMM1 (subtraction modulo 2^64)
In all these 5 instructions, the carry propagates inside chunks corresponding to the size of the modulus, and the carry does not propagate between chunks.
For XOR, i.e. subtraction modulo 2^1, the size of a chunk is just 1 bit, so the propagation of the carry inside the chunk happens to do nothing.
There are no special rules for XOR; its behavior is the same as for any other subtraction. Any behavior that seems special is caused by the fact that the numbers 1 (size in bits of the integer residue) and 0 (number of carry propagations inside a number having the size of the residue) are somewhat more special than the other cardinal numbers.
When you do those 5 operations not inside a single ALU but with separate adders, the shorter the number of bits over which the carry must propagate, the faster the logic device. But when a single ALU does all 5, the speed of the ALU is a little slower than the slowest of the 5 (a little slower because there are additional control gates for selecting the desired operation).
The other bitwise operations are also just particular cases of more general vector operations. Each of the 3 most important bitwise operations is the 1-bit limit of 2 operations which are distinct for numbers with sizes greater than 1 bit, but which are equivalent for 1-bit numbers. While XOR is just addition or subtraction of 1-bit numbers, AND is just minimum or multiplication of 1-bit numbers, and OR is just maximum of 1-bit numbers or the 1-bit version of the function that gives the probability for 1 of 2 events to happen (i.e. difference between sum and product).
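These identities are easy to check exhaustively over 1-bit values (a quick Python verification of the claims above):

```python
for a in (0, 1):
    for b in (0, 1):
        assert a ^ b == (a + b) % 2 == (a - b) % 2   # XOR: addition/subtraction mod 2
        assert a & b == a * b == min(a, b)           # AND: product / minimum
        assert a | b == max(a, b) == a + b - a * b   # OR: maximum / "either event"
print("all 1-bit identities hold")
```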
latency (L) and throughput (T) measurements from the InstLatx64 project (https://github.com/InstLatx64/InstLatx64):
I couldn't find any AMD chips where the same is true.
The weird values among those listed by you, i.e. those where the latency is less than 1 clock cycle, are when the operations have not been executed.
There are various special cases that are detected and such operations are not executed in an ALU. For instance, when the operands of XOR/SUB are the same the operation is not done and a null result is produced. On certain CPUs, the cases when one operand is a small constant are also detected and that operation is done by special circuits at the register renamer stage, so such operations do not reach the schedulers for the execution units.
To understand the meaning of the values, we must see the actual loop that has been used for measuring the latency.
In reality, the latency measured between truly dependent instructions cannot be less than 1 clock cycle. If a latency-measuring loop provides a time that when divided by the number of instructions is less than 1, that is because some of those instructions have been skipped. So that XOR-latency measuring loop must have included XORs between identical operands, which were bypassed.
How much of that advice applies to anything these days is questionable. Back then we used to squeeze as much as possible from every clock cycle.
And cache misses weren’t great but the “front side bus” vs CPU clock difference wasn’t so insane either. RAM is “far away” now.
So the stuff you optimize for has changed a bit.
Always measure!
In principle, sub requires 4 steps:
1. Move both operands to the ALU
2. Invert second operand (twos complement convert)
3. Add (which internally is just XOR plus carry propagate)
4. Move result to proper result register.
This is absolutely not how modern processors do it in practice; there are many shortcuts, but at least with pure XOR you don't need twos complement conversion or carry propagation.
Source: Wrote microcode at work a million years ago when designing a GPU.
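Those four steps can be sketched at word level (a Python illustration, not actual microcode): inverting the second operand and adding with an initial carry-in of 1 is exactly two's-complement subtraction.

```python
def alu_sub(a, b, bits=32):
    """a - b via invert-and-add: a + ~b + 1, truncated to the ALU width."""
    mask = (1 << bits) - 1
    b_inv = ~b & mask               # step 2: ones'-complement the second operand
    return (a + b_inv + 1) & mask   # step 3: add, with the initial carry-in of 1
```

Note that alu_sub(x, x) is always 0, which is the zeroing idiom under discussion; XOR reaches the same result without the invert step or the carry chain.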
Floating point is different because what matters is whether the signs are the same or different (for the same sign you cannot have cancellation, and the exponent will always be the same as, or one more than, the largest input's). So the FP mantissa tends to use sign-magnitude representation.
It's basically free today. Of course, mov RAX, 0 is also free and does the same thing. But CPUs have limited decode bandwidth per clock tick, so the more instructions you fit in a given number of bytes, the more a modern CPU can potentially execute in parallel.
So... definitely still use the XOR trick today. But really, let the compiler handle it. It's pretty good at keeping track of these things in practice.
-----------
I'm not sure if "sub" is hard-coded to be recognized in the decoder as a zeroed-out allocation from the register file. Only certain instructions are guaranteed by Intel/AMD to do this.
Here's html version: https://zzqcn.github.io/perf/intel_opt_manual/3.html#clearin...
AMD has similar list in "2.9.2 Idioms for Dependency removal" from "Software Optimization Guide for the AMD Zen5 Microarchitecture" document: https://docs.amd.com/v/u/en-US/58455_1.00
MOV is right out.