Raphael S. Carvalho's Programming Blog


"A programmer that cannot debug effectively is blind."

Saturday, September 28, 2013

Does 'processor's size' limit the size of main memory?

The processor's word size doesn't directly limit the size of main memory; it limits the addressable space.

"Does it mean something else also?"
It means a lot of things.
- Register size.
- Addressing.
- Data Alignment.

On x86-32, for example, ESP (the stack pointer) is a 32-bit register that points to the top of the stack, and it's used implicitly by stack-related instructions (PUSH, POP). Registers such as ESP work basically as offsets into the addressable space.

Nowadays, addresses issued by your program go through a translation process in the MMU (on computers equipped with one), but I will not discuss that here, as it has nothing to do with the main purpose of this topic. It's possible to have an address bus that is narrower or wider than the processor's word size (PAE, for example, relies on having additional addressing lines =]).

That said, the address bus must be wide enough to carry a full address. Otherwise, how would we send all bits of the address to the memory controller on load/store operations?

x86 real mode is an interesting example of an addressable space larger than the processor's word size.
Back then, you had segment:offset addresses, where the segment was multiplied by 16 (a left shift by 4) and the result added to the offset. The generated address was then sent to the memory controller through the address bus. Even though real-mode processors had registers of at most 16 bits, the address bus had 20 addressing lines.
Up to 1 megabyte of physical memory could be accessed.

The following sentence will probably help you:
"A microprocessor will typically have a number of addressing lines equal to the base-two logarithm of its physical addressing space" http://en.wikipedia.org/wiki/A20_line

There is an interesting approach used by compilers when certain operations aren't natively supported by the underlying processor. For example, 32-bit processors can't operate on 64-bit values directly, but compilers circumvent that by emulating 64-bit operations (load/store, arithmetic, branch).

Suppose we will run the following snippet of code on a 32-bit processor:
 long long int a, b; // 64-bit values (even on 32-bit processors).  
 a += b; // Add b to a; store the result into a.  
How would that be possible if 32-bit processors cannot operate on data wider than 32 bits?
As I said above, the compiler will emulate such operations. It does that by using multiple instructions (steps).
Yes, it will be slower, but that's the only way of dealing with data wider than what the processor supports.

On a 32-bit processor, adding one 64-bit value to another must therefore be done in parts, since 64-bit operands aren't natively supported.

The assembly code for the snippet above would look something like the following:
 # eax:ebx holds a (eax = high 32 bits, ebx = low 32 bits).  
 # ecx:edx holds b (ecx = high 32 bits, edx = low 32 bits).  
 add ebx, edx ; # add the low halves first (result in ebx); this may set the carry flag.  
 # note that adc is used instead of add for the high halves:  
 # there may be a carry left over by the previous addition,  
 # so the next addition must take it into account.  
 adc eax, ecx ; # add the high halves plus the carry (result in eax)  
Yeah, it's expensive (from both a resource and a performance standpoint: several general-purpose registers are tied up, and multiple steps are required to get the operation done) and boring (personal opinion =P), but nevertheless, how would we do it otherwise?

Hope this helps,
Raphael S. Carvalho.
