The Intel 8088 was a version with an eight-bit external data bus.
The Intel 8086 was based on the design of the
Intel 8080 and Intel 8085 (it was source compatible
with the 8080), with a similar register set expanded
to 16 bits. The Bus Interface Unit fed the instruction
stream to the Execution Unit through a six-byte
prefetch queue, so fetch and execution
were concurrent - a primitive form of
pipelining (8086 instructions varied from 1 to 6 bytes).
It featured four 16-bit general
registers, which could also be accessed as eight 8-bit
registers, and four 16-bit index registers (including the
stack pointer). The data registers were often used
implicitly by instructions, complicating register
allocation for temporary values. It featured 64K 8-bit
I/O ports (or 32K 16-bit ports) and fixed vectored
interrupts. There were also four segment registers that
could be set from the index registers.
The segment registers allowed the CPU to access 1 meg of
memory in an odd way. Rather than just supplying missing
bytes, as most segmented processors do, the 8086 shifted
the segment register left four bits and added it to the
address. As a result, segments overlapped, and it was
possible to have two pointers with the same value point
to two different memory locations, or two pointers with
different values point to the same location. Most people
consider this a brain-damaged design.
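
As a rough sketch (the function name and example values here are
illustrative, not taken from Intel documentation), the address
calculation and both kinds of aliasing look like this in C:

    #include <stdio.h>
    #include <stdint.h>

    /* 8086 physical address: the 16-bit segment is shifted left
       4 bits and added to the 16-bit offset, wrapping within the
       20-bit (1 meg) address space. */
    static uint32_t physical(uint16_t segment, uint16_t offset)
    {
        return (((uint32_t)segment << 4) + offset) & 0xFFFFFul;
    }

    int main(void)
    {
        /* Two different segment:offset pairs, one physical location: */
        printf("%05lX\n", (unsigned long)physical(0x1234, 0x0005)); /* 12345 */
        printf("%05lX\n", (unsigned long)physical(0x1230, 0x0045)); /* 12345 */

        /* The same 16-bit offset against two different segments,
           two different physical locations: */
        printf("%05lX\n", (unsigned long)physical(0x2000, 0x0100)); /* 20100 */
        printf("%05lX\n", (unsigned long)physical(0x3000, 0x0100)); /* 30100 */
        return 0;
    }

Here "two pointers with the same value" are 16-bit offsets interpreted
against different segment registers, and "two pointers with different
values" are segment:offset pairs that collapse to the same physical
address.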
Although this was largely acceptable in assembly language,
where control of the segments was complete (it could even be
useful then), in higher-level languages it caused constant
confusion (e.g. near/far pointers). Even worse, this made
expanding the address space to more than 1 meg difficult. A
later version, the
Intel 80386, expanded the design to 32
bits, and "fixed" the segmentation, but required extra modes
(suppressing the new features) for compatibility, and retains
the awkward architecture. In fact, with the right assembler,
code written for the 8008 can still be run on the most recent
processors in the family.
So why did
IBM choose the 8086 series when most of the
alternatives were so much better? Apparently IBM's own
engineers wanted to use the Motorola 68000, and it was
used later in the forgotten IBM Instruments 9000 Laboratory
Computer, but IBM already had rights to manufacture the 8086,
in exchange for giving Intel the rights to its bubble memory
designs. Apparently IBM was using 8086s in the IBM
Displaywriter word processor.
Other factors were the 8-bit
Intel 8088 version, which could
use existing
Intel 8085-type components, and allowed the
computer to be based on a modified 8085 design. 68000
components were not widely available, though it could use
Motorola 6800 components to an extent.
(Bubble memory itself later faded away as better and cheaper
memory technologies arrived.)
(1994-12-23)