Quote Originally Posted by spoonitnow View Post
On the topic of adding and subtracting being different entities, in the mathematical sense they are essentially the same, but afaik they have to be treated differently when you're using logic on that low of a level.
As far as I can tell, the way we're taught in grade school to add/subtract/multiply/divide on paper, by hand, is essentially the same thing a computer does.

This is true for addition, subtraction, and division without much of a stretch.
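
To make the addition case concrete, here's a small C sketch I put together (my own illustration of the idea, not how any particular chip literally wires it up; the name add8 is made up): it adds two 8-bit numbers one column (bit) at a time, carrying into the next column, just like column addition on paper.

Code:
#include <stdint.h>
#include <stdio.h>

/* Add two 8-bit values the way a ripple-carry adder does:
   one column (bit) at a time, carrying into the next column. */
uint8_t add8(uint8_t a, uint8_t b)
{
    uint8_t sum = 0;
    uint8_t carry = 0;

    for (int i = 0; i < 8; i++) {
        uint8_t abit = (a >> i) & 1;          /* digit of a in column i */
        uint8_t bbit = (b >> i) & 1;          /* digit of b in column i */
        uint8_t s    = abit ^ bbit ^ carry;   /* sum bit for this column */
        carry = (abit & bbit) | (carry & (abit ^ bbit)); /* carry out */
        sum |= (uint8_t)(s << i);
    }
    return sum;   /* the carry out of the top bit is simply dropped here */
}

int main(void)
{
    printf("%u\n", add8(113, 4));   /* prints 117 */
    return 0;
}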

With multiplication, it's the same, but a computer can't immediately recognize that 0*x = 0.
When you or I multiply 4 * 113 on paper, we break the problem down to 4*3 + 4*10 + 4*100.

A computer (if it were able to calculate in decimal) would see the problem as 004 * 113.
To break it down, the computer would go: 4*3 + 4*10 + 4*100 + 00*3 + 00*10 + 00*100 + 000*3 + 000*10 + 000*100.

So a computer can't ignore the leading 0s the way you or I can. Other than that, it's the same process as working it out longhand.
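
Here's the same long-multiplication idea in binary, as a C sketch of my own (a real multiplier does this in hardware, and designs like Wallace or Dadda trees generate the partial products in parallel instead of in a loop; mul16 is just a name I picked): the loop visits every bit of the multiplier, zeros included, and adds a shifted copy of the other operand only when the bit is 1.

Code:
#include <stdint.h>
#include <stdio.h>

/* Long multiplication in base 2: for each bit of b (including the 0 bits,
   which contribute nothing but are still examined), add a copy of a
   shifted into that bit's column. */
uint32_t mul16(uint16_t a, uint16_t b)
{
    uint32_t product = 0;

    for (int i = 0; i < 16; i++) {
        if ((b >> i) & 1)                 /* this "digit" of b is 1 */
            product += (uint32_t)a << i;  /* add a, shifted into column i */
        /* if the digit is 0, the partial product is 0 -- the loop still
           visits it, just like the 00*3 + 00*10 + ... terms above */
    }
    return product;
}

int main(void)
{
    printf("%u\n", mul16(4, 113));   /* prints 452 */
    return 0;
}

This loop-reusing-one-adder picture is also one of the two possibilities I'm asking about further down.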

I know a couple of iterative algorithms for approximating the square root of a number, but I'm not sure how a computer handles logarithms, exponentials, or trigonometric functions.
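
For what it's worth, one of the square-root iterations I had in mind is Newton's method (the Babylonian method): guess, then keep replacing the guess with the average of the guess and n/guess. A quick C sketch, just to show the flavor (my_sqrt is a made-up name, and real math libraries use more careful methods than this):

Code:
#include <stdio.h>

/* Newton's (Babylonian) iteration for the square root of n:
   start with a guess and repeatedly replace it with the average
   of the guess and n/guess. It converges very quickly. */
double my_sqrt(double n)
{
    if (n <= 0.0)
        return 0.0;

    double x = n;                       /* initial guess */
    for (int i = 0; i < 20; i++) {      /* 20 iterations is plenty here */
        x = 0.5 * (x + n / x);
    }
    return x;
}

int main(void)
{
    printf("%f\n", my_sqrt(2.0));   /* ~1.414214 */
    return 0;
}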

I want to understand all of this, but for now, I want to focus on the "basics" of simple math.

***
What I'm confused about is whether the computer, e.g. when multiplying, has a single dedicated piece of hardware (like a Dadda tree) sitting behind the multiply operation, or whether the control unit (CU) simply reuses a single adder over and over, adding each partial product into a storage register.

I'm really interested in the exact capabilities of the CPU in terms of:
What machine code instructions does the CPU recognize? (Thanks D0zer, I didn't realize at first that just seeing specific machine code commands would help me understand what a CPU can do.)

What are the actual steps the CPU goes through to carry out each instruction?

What architecture is necessary to support this?
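
To make that middle question concrete for myself, I sketched a toy fetch-decode-execute loop in C. The four opcodes (OP_LOAD, OP_ADD, OP_STORE, OP_HALT) are invented, not any real chip's, but the shape is the point: fetch a byte, decode it into one of a fixed set of operations, perform a short fixed sequence of steps, move on to the next instruction.

Code:
#include <stdint.h>
#include <stdio.h>

/* A made-up 4-instruction machine, just to show the fetch/decode/execute
   cycle. Real instruction sets (like the 6502's) are bigger, but every
   instruction still boils down to a short, fixed sequence of steps. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

uint8_t mem[256] = {
    /* a tiny program: A = mem[16]; A = A + mem[17]; mem[18] = A; halt */
    OP_LOAD, 16,
    OP_ADD,  17,
    OP_STORE, 18,
    OP_HALT,
};

int main(void)
{
    uint8_t a  = 0;   /* accumulator register */
    uint8_t pc = 0;   /* program counter */

    mem[16] = 113;
    mem[17] = 4;

    for (;;) {
        uint8_t opcode = mem[pc++];                 /* fetch */
        if (opcode == OP_HALT)                      /* decode + execute */
            break;
        uint8_t operand = mem[pc++];                /* fetch the operand address */

        switch (opcode) {
        case OP_LOAD:  a = mem[operand];                 break;
        case OP_ADD:   a = (uint8_t)(a + mem[operand]);  break;
        case OP_STORE: mem[operand] = a;                 break;
        }
    }

    printf("mem[18] = %u\n", mem[18]);   /* prints 117 */
    return 0;
}

A real instruction set is basically a much bigger table of cases like these, some handled by dedicated hardware and some by repeated simpler steps.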

I'm getting the feeling that there is a lot of variation among CPUs, and that chip design always involves a choice between fast-but-enormous and slower-but-smaller.
If you build a dedicated architecture for each task, you can optimize each one for its specific purpose. If you build a single architecture to handle several similar tasks, then you can't perform those tasks simultaneously, and it may not be optimal in speed for any of the operations it handles. However, being smaller makes it cheaper to manufacture. This cost-benefit analysis ultimately drives how the chip is designed, and therefore how the computer will operate.

It might make a lot of sense for me to stop being so general and pick a specific CPU to study. Once I understand one, I can study another and compare them. I tried looking up the chip the NES used (a Ricoh 2A03, which is essentially a MOS 6502 core) and found some interesting stuff, but not enough to build one myself.

Is there a good online resource that fully describes something like this? I don't mind if it's a 4-bit CPU instead of the 8-bit 6502.