Processors are probably the single most interesting piece of hardware in your computer. They have a rich history, dating all the way back to 1971 and the first commercially available microprocessor, the Intel 4004. As you can imagine, and have no doubt seen yourself, technology has since improved by leaps and bounds.
We’re going to show you a history of the processor, starting with the Intel 8086 family. IBM chose its 8088 sibling for the first PC, and the story only gets more interesting from there.
Editor’s Note: This article was originally published in 2001, but as of December 2016, we’ve updated it to include new advancements in the field since then.
- Intel 8086 
- Intel 486 (1989 – 1994) 
- AM486DX Series (1994 – 1995) 
- AMD AM5x86 (1995) 
- The Pentium (1993) 
- The Pentium Pro (1995-1999) 
- Cyrix 6×86 Series (1995) 
- MediaGX (1996) 
- AMD K5 (1996) 
- Pentium MMX (1997) 
- AMD K6 (1997) 
- Cyrix 6x86MX (1997) 
- Pentium II (1997) 
- Celeron (1998) 
- AMD K6-2 & K6-3 (1998) 
- Pentium III (1999) 
- AMD Athlon (1999) 
- Celeron II (2000) 
- Duron (2000) 
- Pentium IV (2000) 
- Pentium M (2003) 
- Athlon 64, Athlon 64 X2 and Sempron (2003) 
- Pentium 4 Prescott, Celeron D and Pentium D (2005) 
- Intel Core 2 (2006) 
- AMD Phenom & Phenom II (2007) 
- Intel Core i3, Core i5, and Core i7 (2008 – present) 
- New Mobile Technology (Intel, 2008 – present) 
- Intel Atom 
- AMD APUs (2011 – present) 
- AMD FX (2010 – present) 
- Closing 
CPUs have gone through many changes in the years since Intel came out with the first one. IBM chose Intel’s 8088 processor as the brains of the first PC, and that choice is what made Intel the perceived leader of the CPU market, a position it holds to this day. While newer contenders have developed technologies for their own processors, Intel continues to be more than a viable source of new technology in this market, with an ever-growing AMD nipping at its heels.
The first four generations of Intel processors took on the “8” as the series name, which is why the technical types refer to this family of chips as the 8088, 8086, and 80186, right on up to the 80486, or simply the 486. The following chips are considered the dinosaurs of the computer world. PCs based on these processors are the kind that usually sit around in the garage or warehouse collecting dust. They are not of much use anymore, but we geeks don’t like throwing them out because they still work. You know who you are.
- Intel 8086 (1978)
This chip was skipped over for the original PC, but was used in a few later computers that didn’t amount to much. It was a true 16-bit processor and talked with the rest of the system over a 16-bit data bus. The chip contained 29,000 transistors and had 20 address lines that gave it the ability to talk with up to 1 MB of RAM. What is interesting is that the designers of the time never suspected anyone would ever need more than 1 MB of RAM. The chip was available in 5, 6, 8, and 10 MHz versions.
- Intel 8088 (1979)
The 8088 is, for all practical purposes, identical to the 8086. The main difference is its external data bus: the 8088 talks to the outside world 8 bits at a time rather than 16. This chip was the one chosen for the first IBM PC, and like the 8086, it is able to work with the 8087 math coprocessor chip.
- NEC V20 and V30 (1981)
Clones of the 8088 and 8086, respectively. They were reportedly about 30% faster than the Intel originals, though.
- Intel 80186 (1982)
The 186 was a popular chip, and many versions were developed over its lifetime. Buyers could choose from CHMOS or HMOS, and 8-bit or 16-bit versions, depending on what they needed. A CHMOS chip could run at twice the clock speed and one fourth the power of the HMOS chip. In 1990, Intel came out with the Enhanced 186 family. They all shared a common 1-micron core design and ran at about 25 MHz at 3 volts. The 80186 offered a high level of integration, with the system controller, interrupt controller, DMA controller, and timing circuitry right on the CPU. Despite this, the 186 rarely found its way into a personal computer, seeing most of its use in embedded applications instead.
- Intel 80286 (1982)
A 16-bit, 134,000-transistor processor capable of addressing up to 16 MB of RAM. In addition to the increased physical memory support, this chip was able to work with virtual memory, allowing for much more expandability. The 286 was the first “real” processor: it introduced the concept of protected mode, the ability to multitask by having different programs run separately but at the same time. This ability was not taken advantage of by DOS, but future operating systems, such as Windows, could play with this new feature. One of the drawbacks, though, was that while the chip could switch from real mode to protected mode (real mode was intended to keep it backward compatible with the 8088), it could not switch back to real mode without a warm reboot. This chip was used by IBM in its Advanced Technology PC/AT and in a lot of IBM compatibles. It ran at 8, 10, and 12.5 MHz, but later editions of the chip ran as high as 20 MHz. While these chips are considered paperweights today, they were rather revolutionary for the time period.
- Intel 386 (1985 – 1990)
The 386 signified a major increase in technology from Intel. The 386 was a 32-bit processor, meaning its data throughput was immediately twice that of the 286. Containing 275,000 transistors, the 80386DX processor came in 16, 20, 25, and 33 MHz versions. The 32-bit address bus allowed the chip to work with a full 4 GB of RAM and a staggering 64 TB of virtual memory. In addition, the 386 was the first chip to use instruction pipelining, which allows the processor to start working on the next instruction before the previous one is complete. While the chip could run in both real and protected mode (like the 286), it could also run in virtual real mode, allowing several real mode sessions to run at a time. A multitasking operating system such as Windows was necessary to do this, though. In 1988, Intel released the 386SX, which was basically a low-fat version of the 386. It used a 16-bit data bus rather than the 32-bit one and was slower, but it used less power, which enabled Intel to promote the chip into desktops and even portables. In 1990, Intel released the 80386SL, which was basically an 855,000-transistor version of the 386SX processor, with ISA compatibility and power management circuitry.
386 chips were designed to be user friendly. All chips in the family were pin-for-pin compatible, and they were binary compatible with previous x86 chips, meaning that users didn’t have to buy new software. The 386 also offered power-friendly features such as low voltage requirements and System Management Mode (SMM), which could power down various components to save power. Overall, this chip was a big step in chip development. It set the standard that many later chips would follow, and it offered a simple design that developers could easily build on.
Intel 486 (1989 – 1994)
The 80486DX was released in 1989. It was a 32-bit processor containing 1.2 million transistors. It had the same memory capacity as the 386 (both were 32-bit) but offered twice the speed, at 26.9 million instructions per second (MIPS) at 33 MHz. There were some improvements here beyond just speed, though. The 486 was the first to have an integrated floating point unit (FPU) to replace the normally separate math coprocessor (though not all flavors of the 486 had this). It also contained an integrated 8 KB on-die cache, which increases speed by using instruction pipelining to predict upcoming instructions and storing them in the cache. Then, when the processor needs that data, it pulls it out of the cache rather than paying the overhead of accessing external memory. Also, the 486 came in 5-volt and 3-volt versions, allowing flexibility for desktops and laptops.
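The benefit of keeping data on-die can be seen with a toy model. The sketch below is illustrative only: the cache size, latencies, and direct-mapped design are made-up assumptions, not the real 486 figures, but they show why hits beat trips to external memory.

```python
# Toy model of why an on-die cache helps.  Sizes and cycle counts
# are assumptions for illustration, not real 486 parameters.
CACHE_LINES = 8
HIT_CYCLES, MISS_CYCLES = 1, 10   # assumed latencies

def run(addresses):
    cache = {}        # line index -> tag (a tiny direct-mapped cache)
    cycles = 0
    for addr in addresses:
        index, tag = addr % CACHE_LINES, addr // CACHE_LINES
        if cache.get(index) == tag:
            cycles += HIT_CYCLES      # served from the cache
        else:
            cycles += MISS_CYCLES     # fetched from external memory
            cache[index] = tag
    return cycles

# A tight loop re-reading the same four addresses mostly hits:
print(run([0, 1, 2, 3] * 10))   # 4 misses + 36 hits = 76 cycles
```

Without the cache, the same 40 accesses would cost 400 cycles in this model, which is the gap the 486’s on-die cache was closing.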
The 486 chip was the first processor from Intel that was designed to be upgradeable. Previous processors were not designed this way, so when the processor became obsolete, the entire motherboard needed to be replaced. With the 486, the same CPU socket could accommodate several different flavors of the 486. Initial 486 offerings were designed to be able to be upgraded using “OverDrive” technology. This means you can insert a chip with a faster internal clock into the existing system. Not all 486 systems could use OverDrive, since it takes a certain type of motherboard to support it.
The first member of the 486 family was the i486DX, but in 1991 Intel released the 486SX and 486DX/50. Both chips were basically the same, except that the 486SX version had the math coprocessor disabled (yes, it was there, just turned off). The 486SX was, of course, slower than its DX cousin, but the resulting reduced cost and power lent itself to faster sales and movement into the laptop market. The 486DX/50 was simply a 50 MHz version of the original 486. The DX/50 could not support future OverDrives, while the SX processor could.
In 1992, Intel released the next wave of 486’s making use of OverDrive technology. The first models were the i486DX2/50 and i486DX2/66. The extra “2” in the names indicates that the normal clock speed of the processor is effectively doubled using OverDrive, so the 486DX2/50 is a 25 MHz chip doubled to 50 MHz. The slower base speed allowed the chip to work with existing motherboard designs while internally operating at the increased speed, thereby increasing performance.
Also in 1992, Intel put out the 486SL. It was virtually identical to vintage 486 processors, but it contained 1.4 million transistors. The extra innards were used by its internal power management circuitry, optimizing it for mobile use. From there, Intel released various 486 flavors, mixing SLs with SXs and DXs at a variety of clock speeds. By 1994, they were rounding out their continued development of the 486 family with the DX4 OverDrive processors. While you might think these were 4X clock quadruplers, they were actually 3X triplers, allowing a 33 MHz processor to operate internally at 100 MHz.
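The clock-multiplying arithmetic above is simple enough to sketch (the function name is made up for illustration):

```python
# Internal clock = external (bus) clock x multiplier.  DX2 parts
# doubled the bus clock; the "DX4" parts, despite the name, tripled it.
def internal_clock(bus_mhz, multiplier):
    return round(bus_mhz * multiplier)

print(internal_clock(25, 2))     # i486DX2/50: 25 MHz bus doubled to 50
print(internal_clock(33.3, 3))   # DX4 "tripler": ~33 MHz bus up to 100
```

The marketing numbers work out because the bus actually ran at 33.3 MHz, so tripling lands at the advertised 100 MHz.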
AM486DX Series (1994 – 1995)
Intel was not the only manufacturer playing in the sandbox at the time. AMD put out its AM486 series in answer to Intel’s 486. AMD released the chip in AM486DX4/75, AM486DX4/100, and AM486DX4/120 versions. It contained on-board cache, power management features, 3-volt operation, and SMM mode, making the chip a good fit for mobile systems in addition to desktops. The chip found its way into many 486 compatibles.
AMD AM5x86 (1995)
This is the chip that put AMD on the map as official Intel competition. While I am mentioning it here on the 486 page of the history lesson, it was actually AMD’s competitive response to Intel’s Pentium-class processor. Users of the Intel 486 processor, in order to get Pentium-class performance, had to make use of an expensive OverDrive processor or ditch their motherboard in favor of a true Pentium board. AMD saw an opening here, and the AM5x86 was designed to offer Pentium-class performance while operating on a standard 486 motherboard. They did this by designing the 5×86 to run at 133 MHz by clock-quadrupling a 33 MHz chip. That 33 MHz bus allowed it to work on 486 boards, and it also allowed the chip to support the 33 MHz PCI bus. The chip also had 16 KB of on-die cache. Put all of this together, and the 5×86 performed better than a Pentium 75. The chip became the de facto upgrade for users who did not want to ditch their 486-based PCs yet.
The Pentium (1993)
By this time, the Intel 486 was entrenched in the market, and people were used to the traditional 80×86 naming scheme. Intel was busy working on its next generation of processor, but it was not to be called the 80586: there were legal issues surrounding Intel’s ability to trademark the number 80586. So, instead, Intel named the processor the Pentium, a name they could easily trademark. They released the Pentium in 1993. The original Pentium performed at 60 MHz and 100 MIPS. Also called the “P5” or “P54”, the chip contained 3.21 million transistors and worked on a 32-bit address bus (same as the 486). It had a 64-bit external data bus which could operate at roughly twice the speed of the 486’s.
The Pentium family includes the 60/66/75/90/100/120/133/150/166/200 MHz clock speeds. The original 60/66 MHz versions operated on the Socket 4 setup, while all of the remaining versions operated on Socket 7 boards. Some of the chips (75 MHz – 133 MHz) could operate on Socket 5 boards as well. The Pentium is compatible with all of the older operating systems, including DOS, Windows 3.1, Unix, and OS/2. Its superscalar design can execute two instructions per clock cycle. The two separate 8K caches (code cache and data cache) and the pipelined floating point unit increase its performance beyond earlier x86 chips. It had the SL power management features of the i486SL, but the capability was much improved. It has 273 pins that connect it to the motherboard. Internally, though, it’s really two 32-bit chips chained together that split the work. The first Pentium chips operated at 5 volts and thus ran rather hot. Starting with the 100 MHz version, the requirement was reduced to 3.3 volts. Starting with the 75 MHz version, the chip also supported Symmetric Dual Processing, meaning you could use two Pentiums side by side in the same system.
The Pentium stayed around a long time and was released in many different speeds as well as different flavors. In fact, Intel implemented an “s-spec” rating, marked on each Pentium CPU, which tells the owner some key data about the processor in order to make sure the motherboard is set correctly. There were simply so many different Pentiums out there that it became hard to tell them apart. You can look up processor specs using the s-spec at the link below.
Related Link: Intel Processor Spec Finder 
The Pentium Pro (1995-1999)
If the regular Pentium is an ape, this processor is the human it evolved into. The Pentium Pro (also called “P6” or “PPro”) is a RISC chip with a 486 hardware emulator on it, running at 200 MHz or below. Several techniques are used by this chip to produce more performance than its predecessors. Increased speed is achieved by dividing processing into more stages, with more work done within each clock cycle. Three instructions can be decoded in each clock cycle, as opposed to only two for the Pentium. In addition, instruction decoding and execution are decoupled, meaning that instructions can still be executed if one pipeline stops (such as when one instruction is waiting for data from memory; the Pentium would stop all processing at this point). Instructions are sometimes executed out of order; that is, not necessarily as written down in the program, but rather as information becomes available. They won’t be much out of sequence, just enough to make things run smoother. These improvements resulted in a chip optimized for higher-end desktop workstations and network servers.
It has two separate 8K L1 caches (one for data and one for instructions), and up to 1 MB of onboard L2 cache in the same package. The onboard L2 cache increased performance in and of itself, because the chip did not have to make use of an L2 cache on the motherboard. The PPro is optimized for 32-bit code, so it runs 16-bit code no faster than a Pentium, which is a big drawback. It’s still a great processor for servers, since it can be used in multiprocessor systems with up to four processors. Another good thing about the Pentium Pro is that with the use of a Pentium II OverDrive processor, you get all the perks of a normal Pentium II, but with full-speed L2 cache and the multiprocessor support of the original Pentium Pro.
Cyrix 6×86 Series (1995)
Cyrix, by this time, was a major player in the alternative processor market. They had been around since 1992, with their release of the 486SLC. By 1995, they had their own 5×86 processor, considered the only real competition to the AMD counterpart. That same year, they released the 6×86, designed to go head to head with Intel’s Pentium processor. Dubbed “M1”, the chip contained two super-pipelined integer units, an on-die FPU, and 16 KB of write-back cache. It used many of the same techniques internally as the Intel and AMD chips to increase performance. Like AMD beginning with their K5 (see below), Cyrix used the P-rating system. It came in PR-120, 133, 150, 166, and 200 versions. Each rating had a “+” after it, indicating that it performed better than the corresponding Pentium. But did it?
Cyrix had a reputation for lagging in the area of performance, and the M1 was no exception. The chip used a weaker FPU than both AMD and Intel, meaning it could not keep up with the competition in areas such as 3D gaming or other math-intensive software. On top of that, the chip had a reputation for running hot. Users had to get CPU fans that could keep these hot processors cool enough to run stably. Cyrix tried to combat this issue with the 6x86L processor. This “low power” processor made use of a split voltage (3.3 volts for I/O and 2.8 volts internally).
MediaGX (1996)
MediaGX was Cyrix’s answer to low-cost, entry-level PCs. Making use of a standard x86 processor core, the chip lowered the cost of the PCs using it by integrating many of the common PC components into the chip itself. MediaGX had integrated audio and video circuitry, as well as circuitry to handle many of the common tasks normally handled by chips on the motherboard. The CPU spoke directly to a PCI bus and DRAM memory, and the video was rather high-quality SVGA (for the time). It could support up to 128 MB of EDO RAM in 4 separate memory banks, and the video subsystem could support resolutions of up to 1280x1024x8 or 1024x768x16.
The integration of MediaGX actually spanned two chips: the processor itself and the MediaGX Cx5510 companion chip. The chip required a specially designed motherboard and was not Socket 7 compatible. As a result, it is really an outsider in relation to the other processors we have been discussing, but since it sits on the timeline of CPU history, it bears mentioning.
AMD K5 (1996)
While AMD was competing with Intel via their 5×86 processor, that chip was not a true Pentium alternative. In 1996, however, AMD released the K5, a chip designed to go head to head with the Pentium processor. It was designed to fit right into Socket 7 motherboards, allowing users to drop K5s into the motherboards they might have already had, and it was fully compatible with all x86 software. In order to rate the speed of the chips, AMD devised the P-rating system (or PR rating). This number identified the speed as compared to the true Intel Pentium equivalent. K5s ran from 75 MHz to 166 MHz (in P-ratings, that is). They contained 24 KB of L1 cache and 4.3 million transistors. While the K5s were nice little chips for what they were, AMD quickly moved on with the release of the K6.
Pentium MMX (1997)
Intel released many different flavors of the Pentium processor. One of the more improved flavors was the Pentium MMX, released in 1997. It was a move by Intel to improve the original Pentium and make it better serve needs in the multimedia and performance departments. One of the key enhancements, and where the chip gets its name, is the MMX instruction set. The MMX instructions were an extension of the normal instruction set: 57 additional instructions that helped the processor perform certain key tasks in a streamlined fashion, doing with one instruction what would have taken several regular instructions. It paid off, too. The Pentium MMX performed up to 10-20% faster with standard software, and higher with software optimized for the MMX instructions. Many multimedia applications and games that took advantage of MMX performed better, had higher frame rates, etc.
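To see how one instruction can replace several, consider a packed add. The Python sketch below emulates roughly what an MMX-style packed add (the PADDW instruction) does in a single operation: four independent 16-bit values in one 64-bit register, each lane added separately with no carry spilling between lanes. This is an illustration of the idea, not real MMX code.

```python
def paddw(a, b):
    """Emulate an MMX-style packed add: four 16-bit lanes in one
    64-bit word, each added independently with per-lane wraparound."""
    result = 0
    for lane in range(4):
        x = (a >> (lane * 16)) & 0xFFFF
        y = (b >> (lane * 16)) & 0xFFFF
        result |= ((x + y) & 0xFFFF) << (lane * 16)  # no carry between lanes
    return result

# lanes (low to high): 1, 2, 3, 4  plus  10, 20, 30, 40
print(hex(paddw(0x0004000300020001, 0x0028001E0014000A)))  # 0x2c00210016000b
```

Without such an instruction, the processor would need four separate adds (plus shifts and masks to pack and unpack the values), which is exactly the overhead MMX was designed to eliminate for audio, video, and image data.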
MMX was not the only improvement in the Pentium MMX. The dual 8K caches of the Pentium were doubled to 16 KB each. It also had improved dynamic branch prediction, a pipelined FPU, and an additional instruction pipe to allow faster instruction processing. With these and other improvements, the Pentium line of processors was extended even longer. The line lasted up until recently and went up to 233 MHz. While new PCs with this processor are all but non-existent, there are many older PCs still using this processor and going strong.
AMD K6 (1997)
The K6 gave AMD a real leg up in performance, and it virtually closed the perceived performance gap between Intel and AMD. The K6 compared, performance-wise, to the new Intel Pentium IIs, but it was still a Socket 7 chip, meaning it was still a Pentium alternative. The K6 took on the MMX instruction set developed by Intel, allowing it to go head to head with the Pentium MMX. Based on the RISC86 microarchitecture, the K6 contained seven parallel execution engines and two-level branch prediction. It contained 64 KB of L1 cache (32 KB for data and 32 KB for instructions). It made use of SMM power management, leading to mobile versions of this chip hitting the market. During its life span, it was released in 166 MHz to 300 MHz versions. It gave the early Pentium IIs a run for their money, but AMD had to improve on it in order to keep up with Intel for long.
Cyrix 6x86MX (1997)
Well, Intel came up with MMX, and AMD was already using it starting with the K6. So Cyrix had to get in on the game as well. The 6x86MX, also dubbed “M2”, was Cyrix’s answer. This processor took on the MMX instruction set, an increased 64 KB cache, and an increase in speed. The first M2s were 150 MHz chips, with a P-rating of PR166 (yes, M2s also used the P-rating system). The fastest ones operated at 333 MHz, or PR-466.
The M2 was the last processor released by Cyrix as a stand-alone company. In 1999, Via Technologies acquired the Cyrix line from its parent company, National Semiconductor. At the same time, Via also acquired the Centaur processor division from IDT.
Pentium II (1997)
Intel made some major changes to the processor scene with the release of the Pentium II. They had the Pentium MMX and Pentium Pro out in the market in a strong way, and they wanted to bring the best of both into one chip. As a result, the Pentium II is kind of like the child of a Pentium MMX mother and a Pentium Pro father. But as in real life, it doesn’t necessarily combine the best of its parents. The Pentium II is optimized for 32-bit applications. It also contains the MMX instruction set, which was almost a standard by this time. The chip uses the dynamic execution technology of the Pentium Pro, allowing the processor to predict coming instructions and accelerate work flow. It actually analyzes program instructions and re-orders them into the order that can be run the quickest. The Pentium II has 32 KB of L1 cache (16 KB each for data and instructions) and 512 KB of L2 cache in the package. The L2 cache runs at half the speed of the processor, not at full speed. Nonetheless, the fact that the L2 cache is not on the motherboard, but instead in the chip package itself, boosts performance.
One of the most noticeable changes in this processor is the change in package style. Almost all of the Pentium-class processors use the Socket 7 interface to the motherboard, and Pentium Pros use Socket 8. The Pentium II, however, makes use of “Slot 1”. The package type of the P2 is called Single Edge Contact (SEC). The chip and L2 cache actually reside on a card which attaches to the motherboard via a slot, much like an expansion card, and the entire P2 package is surrounded by a plastic cartridge. In addition to Intel’s departure into Slot 1, they also patented the new Slot 1 interface, effectively barring the competition from making chips to fit the new Slot 1 motherboards. This move, no doubt, demonstrates why Intel moved away from Socket 7 to begin with – they couldn’t patent it.
The original Pentium II was code-named “Klamath”. It ran at a paltry 66 MHz bus speed and ranged from 233 MHz to 300 MHz. In 1998, Intel did some slight re-working of the processor and released “Deschutes”. They used a 0.25-micron design for this one and allowed a 100 MHz system bus. The L2 cache was still separate from the actual processor core and still ran at only half speed; they would not rectify this issue until the release of the Celeron A and Pentium III. Deschutes ran from 333 MHz up to 450 MHz.
Celeron (1998)
About the time Intel was releasing the improved P2s (Deschutes), they decided to tackle the entry-level market with a stripped-down version of the Pentium II: the Celeron. In order to decrease costs, Intel removed the L2 cache from the Pentium II. They also removed support for dual processors, an ability the Pentium II had. Additionally, they ditched the plastic cover of the P2, leaving simply the processor on the Slot 1 style card. This, no doubt, reduced the cost of the processor quite a bit, but performance suffered noticeably. Removing the L2 cache from a chip seriously hampers its performance. On top of that, the chip was still limited to the 66 MHz system bus. As a result, competitor chips at the same clock speeds could still outperform the Celeron. What was the point?
Intel corrected their mistake with the next edition of the Celeron, the Celeron 300A. The 300A came with 128 KB of L2 cache on board. The L2 cache was on-die, meaning it ran at full processor speed, not half speed like the Pentium II’s. This was great for Intel users, because Celerons with full-speed cache often performed better than Pentium IIs with 512 KB of cache running at half speed. Combined with how well the chip took to increased bus speeds, this made the 300A well known in overclocking enthusiast circles. It quickly became known as the cheap chip you could buy and crank up to compete with the more expensive stuff.
The Celeron was available in two formats. The original Celerons used the patented Slot 1 interface, but Intel later switched over to a PPGA format, or Plastic Pin Grid Array, also known as Socket 370. This new interface reduced manufacturing costs. It also allowed cheaper conversion from Socket 7 boards to Socket 370: motherboard manufacturers found it easier to swap out a Socket 7 socket for a Socket 370 socket, more or less leaving the rest of the board the same, whereas changing a design over to a slotted board was more involved. Slot 1 Celerons ranged from the original 266 MHz up to 433 MHz, while Celerons of 300 MHz and up were available in Socket 370.
AMD K6-2 & K6-3 (1998)
AMD was a busy little company at the time Intel was playing around with their Pentium IIs and Celerons. In 1998, AMD released the K6-2. The “2” indicates enhancements made to the proven K6 core, with higher clock speeds and higher bus speeds. They probably were also taking a page out of the Pentium “II” book. The most notable new feature of the K6-2 was the addition of 3DNow! technology. Just as Intel created the MMX instruction set to speed up multimedia applications, AMD created 3DNow! as an additional 21 instructions on top of the MMX instruction set. With software designed to use the 3DNow! instructions, multimedia applications get even more of a boost. Using 3DNow!, a larger L1 cache, and Socket 7 usability, the K6-2 gained ranks in the market without too much trouble.
The K6-3 processor was basically a K6-2 with 256 KB of on-die L2 cache. When used with Socket 7 boards that contained L2 cache on the motherboard, the on-die cache effectively turned the motherboard cache into an L3 cache. The chip could compete well with the Pentium II and even the early Pentium IIIs. In order to eke out the full potential of the processor core, though, AMD fine-tuned the limits of the processor, leading the K6-2 and K6-3 to be a bit picky. The split voltage requirements were pretty rigid, and as a result AMD held a list of “approved” boards that could tolerate such fine control over the voltages. Processor cooling was also an important issue with these chips due to the increased heat. In that regard, they were a bit like the Cyrix 6x86MX processors.
Pentium III (1999)
Intel released the Pentium III “Katmai” processor in February of 1999, running at 450 MHz on a 100 MHz bus. Katmai introduced the SSE instruction set, basically an extension of MMX that again improved performance in 3D apps designed to use the new ability. Also dubbed MMX2, SSE contained 70 new instructions, with up to four operations able to be performed simultaneously. This original Pentium III worked off a slightly improved P6 core, so the chip was well suited to multimedia applications. The chip saw controversy, though, when Intel decided to include an integrated “processor serial number” (PSN) on Katmai. The PSN was designed to be readable over a network, even the internet. The idea, as Intel saw it, was to increase the level of security in online transactions. End users saw it differently: they saw it as an invasion of privacy. After taking a hit from the PR perspective and getting some pressure from their customers, Intel eventually allowed the tag to be turned off in the BIOS. Katmai eventually saw 600 MHz, but Intel quickly moved on to Coppermine.
In late 1999, Intel released their Pentium III Coppermine. While Katmai had 512 KB of L2 cache, Coppermine had half that, at only 256 KB. But the cache was located directly on the CPU core rather than on the daughtercard as typified in previous Slot 1 processors. This made the smaller cache a non-issue, because performance actually benefited. Coppermine also took on a 0.18-micron design and the newer Single Edge Contact Cartridge 2 (SECC2) package. With SECC2, the surrounding cartridge only covered one side of the package, as opposed to previous slotted processors. What’s more, Intel again saw the logic they had applied when taking the Celeron over to Socket 370, so they eventually released versions of Coppermine in socket format. Coppermine also supported the 133 MHz front-side bus. Coppermine proved to be a performance chip; it was and still is used by many PCs, and it eventually saw 1+ GHz.
AMD Athlon (1999)
With the release of the Athlon processor in 1999, AMD’s status in the high-performance realm was set in concrete. The Athlon line continues to this day, with the highest clock speeds all operating off of various designs and improvements on the Athlon series. But the whole line started with the original Athlon classic, which came in at 500 MHz. Designed at a 0.25-micron level, the chip boasted a super-pipelined, superscalar microarchitecture. It contained nine execution pipelines, a super-pipelined FPU, and an again-enhanced 3DNow! technology. All of this rolled into one gave the Athlon a real performance reputation. One notable feature of the Athlon is its new slot interface. While Intel could play games by patenting Slot 1, AMD decided to call the bet by developing a slot of their own: Slot A. Slot A looks just like Slot 1, although the two are not electrically compatible. But the closeness of the two interfaces allowed motherboard manufacturers to more easily manufacture mainboard PCBs that could be interchangeable. They would not have to re-design an entire board to accommodate either Intel or AMD; they could do both without too much hassle.
Also notable with the release of the Athlon was an entirely new system bus. AMD licensed the Alpha EV6 technology from Digital Equipment Corporation. This bus operated at 200 MHz, faster than anything Intel was using, with a bandwidth capability of 1.6 GB/s.
Athlon has gone through revisions and improvements and is still being used and marketed. In June of 2000, AMD released the Athlon Thunderbird. This chip came with an improved 0.18 micron design, on-die full-speed L2 cache (new for Athlon), DDR RAM support, etc. It is a real workhorse of a chip and has a reputation for being pushed well beyond the speed rating assigned by AMD. An overclocker's paradise. Thunderbird was also released in Socket A (or Socket 462) format, so AMD was now returning to its socketed roots just as Intel had already done by this time.
In May 2001, AMD released the Athlon “Palomino”, also dubbed the Athlon 4. While the Athlon had now been out for about 2 years, it was being beaten by Intel's Pentium IV. Its direct competition, the Pentium III, was on its way to the museum already, and Athlon needed a boost to keep up with the new contender. The answer was the new Palomino core. The original intention of Palomino was to expand on the Thunderbird chip by reducing heat and power consumption. Palomino was delayed, and the delay ended up being beneficial: the chip was released first in notebook computers. AMD-based notebooks, until this time, were still using K6-2's and K6-3's, and thus AMD's reputation for performance in the mobile market was lacking. So, Athlon 4 brought AMD back to the line in the mobile market. Athlon 4 was later released to the desktop market, workstations, and multiprocessor servers (with its true dual-processor support). Palomino made use of a data pre-fetch cache predictor and a translation look-aside buffer. It also made full use of Intel's SSE instruction set. The chip made use of AMD's PowerNow! technology, which had actually been around since the K6-2 and K6-3 days. It allows the chip to change its voltage requirements and clock speed depending on the usage requirements of the moment. This made the chip well suited for power-sensitive applications such as mobile systems.
When AMD released the Palomino to the desktop market in October of 2001, they renamed the chip to Athlon XP and also took on a slightly different naming scheme. Due to the way Palomino executes instructions, the chip can actually perform more work per clock cycle than the competition, namely the Pentium IV. Therefore, the chips actually operate at a slower clock speed than the model numbers suggest. AMD chose to name the Athlon XP versions based on the speed rating of the processor as determined by AMD and their own benchmarking. So, for example, the Athlon XP 1600+ runs at 1.4 GHz, but the average computer user will think 1.6 GHz, which is what AMD wants. But this is not to say that AMD is tricking anybody. In fact, these chips do perform like a Thunderbird at the rated speed, and perform quite well when stacked against the Pentium IV. In fact, the Athlon XP 1800+ can out-perform the Pentium IV at 2 GHz. Besides the naming, the XP was basically the same as the mobile Palomino released a few months earlier. It did boast a new packaging style that would help AMD's release of 0.13 micron design chips later on. It also operated on the 133MHz front-side bus (266MHz when DDR is taken into account). AMD continued to use the Palomino core until the release of the Athlon XP 2100+, which was the last Palomino.
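The idea behind those ratings is that performance is roughly clock speed times work done per clock cycle (IPC), so a higher-IPC chip can match a faster-clocked rival. A minimal sketch, with IPC values invented purely for illustration (not AMD's actual figures):

```python
# Performance ~ clock * instructions-per-clock (IPC). The IPC
# numbers below are made up purely to illustrate the idea.
def relative_performance(clock_ghz: float, ipc: float) -> float:
    return clock_ghz * ipc

athlon_xp_1600 = relative_performance(1.4, 1.15)  # slower clock, higher IPC
pentium_4      = relative_performance(1.6, 1.0)   # faster clock, lower IPC
print(athlon_xp_1600 >= pentium_4)  # -> True
```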
In June of 2002, AMD announced the 0.13 micron Thoroughbred-based 2200+ processor. The move was more of a financial one, since there are no real performance gains between Palomino and Thoroughbred. Nonetheless, the smaller core means AMD can produce more chips per silicon wafer, and that just makes sense. AMD is really taunting everyone with news of the coming ClawHammer core, which will be AMD's next big move. But with that chip still in the development and testing phase at this point, ClawHammer is not yet ready. Until it is, AMD will keep us mildly entertained with Thoroughbred and keep Intel sweating.
Celeron II (2000)
Just as the Pentium III was a Pentium II with SSE and a few added features, the Celeron II is simply a Celeron with SSE and a few added features. The chip is available from 533 MHz to 1.1 GHz. This chip was basically an enhancement of the original Celeron, and it was released in response to AMD's coming competition in the low-cost market with the Duron. The PSN of the Pentium III was disabled in the Celeron II, with Intel stating that the feature was not necessary in the entry-level consumer market. Due to some inefficiencies in the L2 cache and the continued use of the 66MHz bus (unless you overclock), this chip would not hold up too well against the Duron despite being based on the trusted Coppermine core. Celeron II would not be released with true 100 MHz bus support until the 800MHz edition, which was put out at the beginning of 2001.
Duron (2000)
In April of 2000, AMD released the Duron “Spitfire”. Spitfire came primarily out of the Athlon Thunderbird lineage, but it had a lighter load of cache onboard, ensuring that it was not a contender in the performance realm with its big cousin. The chip had a 128 KB L1 cache, but only 64 KB of on-die L2. Despite the lower L2 cache, internal methods of dealing with that cache, coupled with other improvements, made the Duron a clear winner when compared against the Celeron. The Duron also works with the EV6 bus while the Celeron was still on the 66 MHz bus, and this did not help Celeron at all.
In August of 2001, AMD released the Duron “Morgan”. This chip broke out at 950 MHz but quickly moved past 1 GHz. The Morgan processor core was the key to the improvement of Duron here, and it is comparable to the effect of the Palomino core on the Athlon. In fact, feature-wise, the Morgan core is basically the same as the Palomino core, but with 64 KB of L2 rather than 256 KB.
Pentium IV (2000)
While we have been talking about AMD's high-speed Athlon Thunderbirds and Palominos, Intel actually beat AMD to the punch by releasing the Pentium IV Willamette in November of 2000. Pentium IV was exactly what Intel needed to take the torch back from AMD. Pentium IV is a truly new CPU architecture and serves as the foundation for the new technologies we will see over the next several years. The new NetBurst architecture is designed with future speed increases in mind, meaning P4 is not going to hit a wall quickly the way the Pentium III did near the 1 GHz mark.
According to Intel, NetBurst is made up of four new technologies: Hyper Pipelined Technology, Rapid Execution Engine, Execution Trace Cache and a 400MHz system bus. Let’s look at the first three, since they require some explanation:
- Hyper Pipelined Technology
There are a couple of ways to increase the speed of a processor. One is to decrease the die size. Technology in this regard is developed quickly, but not quickly enough. The P5 core saw its limit quickly and so did the P6 core (which is why the Pentium III was limited at around 1 GHz). The technology to move to a smaller die size was not yet ready at the time of the Willamette release, so Intel moved to plan B. Plan B was to redesign the CPU pipeline so that it has more stages, each doing less work. This is what Intel did. Hyper Pipelined Technology refers to Intel's expansion of the CPU pipeline from the 10 stages of the P6 to 20 stages. This effectively makes the pipeline deeper, and allows each stage to do less per clock cycle than in the P6 core. The fact that each stage does less per clock cycle is what gives this design room for expandability: simpler stages can be clocked faster. It is analogous to an assembly line with more, simpler stations – each station does less work, so the line as a whole can run faster. The tradeoff in expanding the pipeline to this many stages is that it takes the processor longer to recover from mistakes in branch prediction, since it basically has to start over with 20 stages rather than a shorter 10-stage pipeline. The P4, though, has a newly advanced branch predictor to help with this problem.
- Rapid Execution Engine
The Pentium IV contains 2 arithmetic logic units, and they operate at twice the speed of the processor. While this might sound like absolute heaven, keep in mind that Intel had to do it this way due to the pipeline design just to keep integer performance up to that of the Pentium III. So, this is really a necessary design change resulting from the increased pipeline size.
- Execution Trace Cache
Intel also did some re-working of the P4's internal cache in order to nullify the effects of a branch prediction mistake, which can be a real lag with a 20-stage pipeline. First, they increased the branch target buffer to eight times the size of the Pentium III's. This cache is the area from which the branch predictor gets its data. Secondly, Intel reduced the size of the L1 data cache to only 8K in order to reduce its latency. This, no doubt, increases the need for the 256 KB on-die L2 cache, and the latency of that has been improved on the P4 as well. Lastly, Intel added an execution trace cache. This is a new cache that holds instructions that are already decoded and ready for execution. This means that the processor does not have to waste time re-decoding every instruction when a branch prediction error occurs. Instead, it can just go to this 12K cache, retrieve the operation and go.
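The tradeoffs described above can be put into rough numbers. The sketch below models two of them: a deeper pipeline paying a bigger misprediction penalty but earning a higher clock, and a smaller L1 cache trading hit rate for latency. Every number here is an illustrative assumption, not a measured P4 or P6 figure:

```python
# Toy models of two NetBurst tradeoffs. All numbers are invented
# for illustration; they are not measured P4 or P6 figures.

# 1) Deeper pipeline: each branch misprediction flushes the pipe,
#    costing roughly one cycle per stage, but the shorter stages
#    permit a higher clock.
def throughput(clock_ghz, stages, mispredict_rate):
    cycles_per_instruction = 1.0 + mispredict_rate * stages
    return clock_ghz / cycles_per_instruction

p6_like = throughput(1.0, 10, 0.05)   # ~0.67 instructions per ns
p4_like = throughput(1.5, 20, 0.05)   # ~0.75 -- the clock gain wins

# 2) Smaller, faster L1: average memory access time (AMAT) can drop
#    even though the miss rate rises, if L1 latency falls enough.
def amat(l1_latency, l1_hit_rate, l2_latency):
    return l1_hit_rate * l1_latency + (1 - l1_hit_rate) * l2_latency

small_fast_l1 = amat(2, 0.90, 10)   # 2.8 cycles on average
big_slow_l1   = amat(3, 0.95, 10)   # 3.35 cycles on average
```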
The early Pentium 4's made use of the Socket 423 interface. One of the reasons for the new interface was the addition of heatsink retention mechanisms on either side of the socket. This was a move to help owners avoid the dreaded mistake of crushing the CPU core by tightening the heatsink down too tightly. The retention bases hold the heatsink onto the CPU. Socket 423 was short-lived, and the Pentium IV quickly moved to Socket 478 with the release of the 1.9 GHz version. Also, the P4 was, at its launch, associated exclusively with Rambus RDRAM. Intel was stuck in this agreement with Rambus, and this was an obvious hurdle for promotion, since most computer users did not have Rambus and didn't wish to buy any. So, early retail P4's actually came packaged with two 64MB sticks of RDRAM. With chipset support coming later, DDR eventually mated with the Pentium IV.
Pentium IV's, as you might expect, were and still are on the expensive end of things. The new core was quite big compared to other processors, and the cost to produce it was innately higher. In early 2002, Intel announced a new edition of the Pentium IV based on the Northwood core. The big news here is that Intel left the larger 0.18 micron Willamette core in favor of the new 0.13 micron Northwood. This shrunk the core and therefore allowed Intel not only to make Pentium IV's cheaper but also to make more of them. The core is still bigger than that of the Athlon XP, but this is explained by the fact that Intel increased the L2 cache from 256 KB to 512 KB for Northwood. This raises the transistor count from 42 million for Willamette to 55 million for Northwood. Northwood was first released in 2 GHz and 2.2 GHz versions, but the new design gives the P4 room to move up to 3 GHz quite easily. It was recently released at 2.53 GHz using a 533 MHz front side bus. Other than that, Northwood is architecturally the same as Willamette.
Pentium M (2003)
The Pentium M was created for mobile applications, primarily laptops (or notebooks), thus the “M” moniker in the name of the processor. It uses Socket 479, with the most common applications of that socket being the Pentium M and Celeron M mobile processors. Interestingly, the Pentium M was not designed as a lower-power version of the Pentium IV. Instead, it's a heavily modified Pentium III, which in itself was based off of the Pentium II.
Intended for mobile uses, the Pentium M’s focus was on power efficiency in order to significantly improve the battery life of a laptop or notebook. With that in mind, the Pentium M runs at a much lower average power consumption as well as a much lower heat output. It has a maximum Thermal Design Power (TDP) of 5-27W.
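The payoff of a low TDP is easy to see with a rough battery-life estimate: watt-hours of capacity divided by average draw. The battery capacity and draw figures below are assumptions for illustration only, not specs of any particular laptop:

```python
# Battery life ~ capacity (Wh) / average system draw (W).
# The 48 Wh battery and the draw figures are illustrative guesses.
def battery_life_hours(battery_wh: float, avg_draw_w: float) -> float:
    return battery_wh / avg_draw_w

print(battery_life_hours(48, 24))  # desktop-class draw -> 2.0 hours
print(battery_life_hours(48, 8))   # Pentium M-class draw -> 6.0 hours
```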
Despite not being based off of the Pentium IV, it runs at a lower clock speed than the laptop version of the Pentium IV, but has similar performance capabilities. For instance, a typical Pentium M will clock in at 1.6GHz, but is more than capable of attaining or surpassing the performance of a Pentium 4-M that clocks in at 2.4GHz.
Athlon 64, Athlon 64 X2 and Sempron (2003)
AMD’s Athlon 64 is the successor to the Athlon XP and is the second of AMD’s processors to implement its own 64-bit architecture. The first processor to implement that 64-bit technology was the AMD Opteron, but that was targeted at commercial uses, such as servers and workstations. The Athlon 64, however, is the first 64-bit processor aimed at the consumer market. So, in a way, this is AMD’s first venture into 64-bit territory for everyday users.
In the Athlon 64, AMD added a second bus to connect the CPU to the northbridge chipset and device attachment bus. AMD did this through a high-performance technology called HyperTransport (which boasted speeds of 800 MT/s to 1000 MT/s at the time).
It’s worth noting that the Athlon 64 was only a single-core processor. However, AMD eventually launched an improved version, the Athlon 64 X2. This newer version, launched in 2005, was the first dual-core desktop processor designed by AMD. In May of 2006, AMD released Athlon 64 X2 versions with AMD virtualization technology, commonly referred to as AMD-V.
Before that, AMD launched another processor called the Athlon 64 FX, which was aimed at hardware enthusiasts (like gamers). This was for a number of reasons, two of them being that its multipliers were always unlocked and that the FX chips always had the highest clock speeds of all the Athlons at launch. Eventually, AMD launched the Athlon 64 FX-60, which is when the Athlon 64 FX line went dual-core.
At the time of the Athlon 64’s launch, it was only available in Socket 754 and Socket 940. They introduced the Athlon 64 in Socket 754 largely because its onboard memory controller was incapable of running non-registered or unbuffered memory in dual-channel. Eventually, AMD launched the Athlon 64 on another socket — Socket 939. This was intended for the mainstream market with the dual-channel memory interface fix. This essentially replaces Socket 754, so Athlon 64s sold on Socket 754 were essentially moved to a budget line of processors.
AMD actually referred to its budget line of processors as Sempron, which was made up of a variety of different CPUs. The second generation of this line was loosely based on the Athlon 64 architecture, but earlier versions did not include the 64-bit technology and also had a reduced cache size. In the second half of 2005, AMD added AMD64 support to the Sempron line in order to extend the market for 64-bit processors. This is because, at the time, 64-bit processors were a pretty niche market.
Pentium 4 Prescott, Celeron D and Pentium D (2005)
The Pentium 4 Prescott was introduced in 2004 to mixed reactions. It was the first core to use the 90nm semiconductor manufacturing process. Many weren’t happy with it because the Prescott was essentially a restructuring of the Pentium 4’s microarchitecture. While that’d normally be a good thing, there weren’t too many positives. Some programs were enhanced by the doubled cache as well as the SSE3 instruction set. Unfortunately, other programs suffered because of the longer instruction pipeline.
It’s also worth noting that the Pentium 4 Prescott was able to achieve some pretty high clock speeds, but not nearly as high as Intel was hoping. One version of the Prescott was actually able to reach 3.8GHz. Eventually, Intel released a version of the Prescott supporting Intel’s 64-bit architecture, Intel 64. To start out, these were only sold as the F-series to OEMs, but Intel eventually renamed it to the 5×1 series, which was sold to consumers.
Intel introduced another version of the Pentium 4 Prescott, which was the Celeron D. A major difference is that these chips sported double the L1 and L2 cache of the previous Willamette- and Northwood-based desktop Celerons. Not only that, but you got the SSE3 instruction set, and they were manufactured on Socket 478. The Celeron D overall was a major performance improvement over many of the previous NetBurst-based Celerons. While there were major performance improvements across the board, it had a huge problem — excessive heat.
Eventually, Intel would go on to refresh the Celeron D, but this time with 64-bit architecture. Unfortunately, Intel never built these with Socket 478, but with the LGA 775 socket type.
Another one Intel made was the Pentium D. You can look at this processor as the dual-core variant of the Pentium 4 Prescott. You obviously get all the benefits that an extra core brings, and the other notable improvement with the Pentium D was that it could run multi-threaded applications. There were a few different generations of the Pentium D, each featuring small and minor improvements over the last, but the Pentium D series was eventually retired in 2008. The Pentium D had a lot of pitfalls, including high power consumption and the fact that a dual-core Pentium D was built from two separate dies (more energy-efficient, slower dual-core CPUs put both cores on just a single die).
The true and overall better successor was the Intel Core 2 brand, which had a lot of success.
Intel Core 2 (2006)
The Intel Core 2 is a brand that houses a variety of different 64-bit x86-64 CPUs. This includes single-core, dual-core and quad-core processors based on Intel’s Core microarchitecture. The Core 2 brand encompassed a lot of different CPUs, but to give you an idea, you had the Solo (a single-core CPU), the Duo (a dual-core CPU), the Quad (a quad-core CPU) and then, later on, the Extreme (a dual- or quad-core processor aimed at hardware enthusiasts).
The Intel Core 2 line was really where Intel’s multi-core processors took off. This was a necessary route for Intel to go: a true multi-core processor is essentially a single component with two or more independent processing units, often referred to as cores. With multiple cores like this, Intel is able to increase overall speed for programs, thereby opening the path to the more demanding programs we see today. That’s not to say Intel or AMD are responsible for today’s demanding programs, but without their high-end processors and breakthroughs in technology, we really wouldn’t have the hardware that can run those programs.
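How much a second core actually helps depends on how much of a program can run in parallel, a relationship usually described by Amdahl's law. A minimal sketch (the 80% parallel fraction is an assumed example, not a property of any real program):

```python
# Amdahl's law: speedup is limited by the serial fraction of the
# program, no matter how many cores are added.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

print(round(amdahl_speedup(0.8, 2), 2))  # -> 1.67, not a full 2x
print(round(amdahl_speedup(0.8, 4), 2))  # -> 2.5
```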
Core 2 branded processors came with a lot of neat technology. For instance, you had Intel’s own virtualization technology, 64-bit architecture, low power, and SSE4 (Streaming SIMD Extensions 4, a processor instruction set).
AMD Phenom & Phenom II (2007)
AMD began the Phenom family of processors in 2007. It was a 64-bit desktop processor based off of AMD’s K10 microarchitecture. The Phenom family is an interesting one. AMD actually considered the quad-core Phenoms (AMD made dual-core and triple-core versions of the Phenom as well) to be the first processors with a true quad-core design. This is because all of the Phenom’s cores are on the same die. If you look at Intel’s Core 2 Quad processor, it features a multi-chip module design instead.
There were some issues with early Phenom processors where the system would lock up in extremely rare instances. This was because of a flaw discovered in the translation lookaside buffer (TLB). Pretty much all early versions of the Phenom processor were affected, as it wasn’t fixed until version B3 of the Phenom processor in 2008. The processors without the bug also had an “xx50” model number (so, there would be the number “50” at the end of every model number, indicating that this was a processor without the bug).
After these issues, AMD eventually went ahead and launched a successor at the end of 2008, the Phenom II. The Phenom II comes in a lot of versions. AMD made dual-core, triple-core and quad-core variants in early 2009, and an improved quad-core model and a hex-core model came around early to mid 2010. Again, it’s based off of the K10 microarchitecture, but it’s also built on the 45nm semiconductor manufacturing process. The Phenom II initially launched on Socket AM2+, but Socket AM3 versions launched in early 2009 with DDR3 support.
The Phenom II is a really neat processor. Just a year before, the Phenom launched with a meager L3 cache size of 2MB. The Phenom II tripled that, bringing it up to 6MB. It also has the SSE4a instruction set. Black Editions of AMD’s Phenom II CPUs also offered some crazy overclocking potential. At CES 2009 in Las Vegas, in a public demonstration, a Phenom II was able to achieve an overclock of a whopping 6.5GHz. In a separate instance, a group called LimitTeam was able to achieve 7.127GHz.
Intel Core i3, Core i5, and Core i7 (2008 – present)
Truth be told, there’s nothing more confusing than Intel’s naming convention here: Core i3, Core i5 and Core i7. What is that even supposed to mean? It’s confusing — particularly to the lay person — but hopefully I can give you the difference between the three tiers in plain language.
You can look at the Intel Core i3 as Intel’s lowest-tier processor line here. With the Core i3, you’ll get two cores (so, dual-core), hyperthreading technology, a smaller cache and more power efficiency. This makes it cost a whole lot less than a Core i5, but in turn, it also performs worse than a Core i5.
The Core i5 is a tad bit more confusing. In mobile applications, the Core i5 has two cores and hyperthreading. Desktop variants have 4 cores (quad-core), but no hyperthreading. With it, you get improved onboard graphics as well as Turbo Boost, a way to temporarily accelerate processor performance when you need a little more heavy lifting.
And that brings us to the Core i7. All Core i7 processors feature the aforementioned hyperthreading technology missing from the desktop Core i5. But a Core i7 can have anywhere from two cores in a mobile application (i.e. an ultrabook) all the way up to a whopping 8 cores in a workstation. In the real world, you’ll no doubt mostly see quad-core variations. Not only that, but the Core i7 can support as few as 2 memory sticks all the way up to 8.
The dual-core variants can have a TDP of 10W, but the 8-core workstation variants can go all the way up to a TDP of 130W. And, since the Core i7 is Intel’s highest tier processor in this series, you can expect even better onboard graphics, a more efficient and faster Turbo Boost as well as a larger cache. That said, the Core i7 will be the most expensive processor variant.
Nehalem and Westmere
The first generation of Core i5 and i7 processors was known as the Nehalem microarchitecture. As a general overview, it was based on the 45nm process and featured higher clock speeds and improved power efficiency. It does have hyperthreading, but Intel did reduce the L2 cache size. To compensate, the on-die L3 cache size was increased and is shared among all cores.
With the Nehalem architecture, you get onboard Intel HD graphics as well as a native memory controller that is capable of supporting two to three memory channels of DDR3 SDRAM or four FB-DIMM2 channels.
As you might’ve noticed, Nehalem doesn’t encompass the Core i3: the Core i3 wasn’t introduced until 2010, alongside the Westmere microarchitecture. Under Westmere, you could get processors with up to 10 cores (the Westmere-EX), with clock speeds reaching up to 4.4GHz in some cases. New instruction sets would allow for up to 3x the encryption and decryption rate of before. And, of course, you have those integrated graphics and better virtualization latency.
Sandy Bridge and Ivy Bridge
Eventually, the Sandy Bridge and Ivy Bridge microarchitectures would replace Nehalem and Westmere, starting in 2011. These brought notable improvements to the Core i3, i5 and i7 line. Sandy Bridge uses the 32nm manufacturing process while Ivy Bridge uses an even smaller 22nm process. On the Sandy Bridge side of things, some notable improvements include Turbo Boost 2.0 and a shared L3 cache that includes the processor graphics on Socket H2. What might be even more impressive is the integration of the GMCH (the integrated graphics and memory controller) and the processor onto a single die. On the high end, clock speeds could reach 3.5GHz (Turbo up to 4.0GHz), but for mainstream markets, it was more like 3.4GHz.
Ivy Bridge has some significant improvements over Sandy Bridge. This includes support for PCI Express 3.0, 16-bit Floating-point conversion instructions, multiple 4K video playback, and support for up to 3 displays. As far as actual numbers go, there’s about a 6% increase in CPU performance compared to Sandy Bridge. But, you get anywhere between 25% and a 68% increase in GPU performance.
Haswell and Broadwell
Now, the successor to Ivy Bridge was Haswell, which was introduced in 2013. Many of the features in Ivy Bridge carried over to Haswell, but there were plenty of new features, too. Socket-wise, it came in LGA 1150 and LGA 2011. Graphics support for Direct3D 11.1 and OpenGL 4.3 was brought on, as well as support for Thunderbolt technology. There were also four versions of the integrated GPU — the GT1, GT2, GT3 and GT3e. The GT3 had 40 execution units. In contrast, Ivy Bridge’s had just 16 execution units. Haswell also came with a ton of new instruction sets — AVX, AVX2, BMI1, BMI2, FMA3, and AES-NI. With the Haswell microarchitecture, these instruction sets are available to the Core i3, Core i5 and Core i7. Depending on the type of processor you bought, clock speeds could reach all the way up to 4GHz at a normal operating frequency.
Now, the successor to Haswell is Broadwell. There weren’t a whole lot of changes, but there were some notable improvements. That said, it’s not meant as a “true” replacement, since there would be no low-end or budget CPUs based on the Broadwell architecture.
New features are primarily video-related. With Broadwell, you get Intel Quick Sync Video, which adds VP8 hardware encoding and decoding. There’s support for VP9 and HEVC decoding as well. With the changes being pretty video-related, there’s added support for Direct3D 11.2 and OpenGL 4.4, too. As far as clock speed goes, base mainstream processors start out at 3.1GHz and can be Turbo Boosted to 3.6GHz. Performance variants have a base of 3.3GHz, but can be Turbo Boosted to 3.7GHz.
Skylake, Kaby Lake and Cannonlake
Skylake is the next-generation successor to Haswell and Broadwell. It’s one of the most recent variants, having launched in mid-2015. It is based on the 14nm process, the same process as Broadwell. But it does increase CPU and GPU performance across the board while at the same time lowering power consumption. Skylake was a huge undertaking, but Intel was able to overcome the challenge, bringing us a processor fit for everything from mobile applications all the way up to commercial uses (a TDP of 4.5W to 45W for mobile devices and up to 91W for desktops).
As far as actual features go, you get support for Thunderbolt 3.0, SATA Express and an upgrade to Iris Pro graphics. Skylake actually retires VGA support and adds capabilities for up to 5 displays. New instruction set extensions were also added — Intel MPX, Intel SGX and AVX-512. And on the mobile side of things, Skylake CPUs are actually capable of being overclocked.
Kaby Lake is the most recent generation of Intel CPUs, having been announced just a few months ago in August 2016. Built on the same 14nm process, Kaby Lake continues the trend we’ve already been seeing — higher CPU clock speeds and faster clock speed changes. New graphics architecture was also added to Kaby Lake to improve the performance of 3D graphics and 4K video playback. Beyond that, there weren’t any major changes over Skylake, just a lot of little alterations here and there.
Cannonlake is what will replace the Kaby Lake architecture. We don’t know much about it yet, as it’s been announced but won’t release until mid-2017. It will be based on the 10nm process, but will be limited in some sense due to the low yields from that process. That said, Intel plans on releasing Coffee Lake, loosely based on Cannonlake, but with the 14nm process. We don’t know much more beyond that. There are rumors of other replacements coming down the road as well — Icelake, based on the 10nm process, intended to replace Cannonlake in 2018. And finally, Tigerlake, based on the 10nm process, intended to replace Icelake a year later in 2019.
New Mobile Technology (Intel, 2008 – present)
Processors intended for mobile and embedded use are very much needed in our growing mobile-first world. While Intel has met some of that need with variations of Skylake and other processors, the Intel Atom is more of a true mobile processor, as that’s the goal of the Atom — to meet the needs of mobile equipment.
The Intel Atom originally launched in 2008, aimed at providing a solution for netbooks and a variety of embedded applications in different industries, such as health care. It was originally designed on the 45nm process, but in 2012 was brought all the way down to the 22nm process. The first generation of Atom processors was actually based on the Bonnell microarchitecture.
Like we said, the Atom is used in many different embedded applications within a variety of industries. In comparison to the rest of the processors we listed, it’s a pretty unknown processor. But, it does power a large amount of health care equipment as well as equipment for other services we use.
Most variations of the Intel Atom have an on-die GPU. And generally, you’re going to see very low clock speeds with the Intel Atom CPUs. Keep in mind that that’s not a bad thing, though. The major difference between Intel’s Core processors and the Atom is that the Atom was designed for extremely low-power, low-performance applications. Efficiency is key here. That said, even an old Core i3 will knock an Atom out of the park in terms of performance any day. But there’s no real comparison, since the two processors have very different goals.
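One way to frame the Atom's goal is performance per watt rather than raw performance. The benchmark scores and TDP values below are invented purely to illustrate the comparison, not real measurements:

```python
# Efficiency metric: benchmark score per watt of TDP. All numbers
# here are hypothetical, chosen only to show the contrast.
def perf_per_watt(benchmark_score: float, tdp_w: float) -> float:
    return benchmark_score / tdp_w

core_i3 = perf_per_watt(1000, 35)  # faster outright (~28.6 points/W)
atom    = perf_per_watt(300, 4)    # far more efficient (75 points/W)
print(atom > core_i3)  # -> True
```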
At least for those that follow technology blogs, the Intel Atom made more of a name for itself when Intel partnered with Google in 2012 to provide support for Google’s Android mobile operating system on Intel x86 processors. That said, Intel began offering a new system-on-a-chip (SoC) platform with its Atom line of processors. Early on, there were some overheating issues, but Intel eventually worked out the issues.
Unfortunately, the SoC market is already a crowded industry with fierce competition from Samsung, Qualcomm, NVIDIA, Texas Instruments and so many more. That said, Intel has essentially given up on the smartphone and tablet, throwing away billions of dollars the company spent trying to expand into it. Like we said, it’s a market with fierce competition, and Intel didn’t see a place for itself there anymore. The most recent development is that they cancelled two new Atom chips intended for the smartphone market — Sofia and Broxton. We haven’t heard anything since then.
AMD APU’s (2011 – present)
AMD launched a new line of processors called the Accelerated Processing Unit (APU). It is, of course, a line of 64-bit processors, but is innovative because it’s designed to act as a CPU and GPU on a single chip (so, you’d have your regular CPU, but also an on-die GPU). The first generation of APUs, announced in 2011, consisted of Llano and Brazos. The former was designed for high-performance situations while the latter was geared towards low-power devices. Trinity and Brazos-2 were announced in 2012 — Trinity as a high-performance solution and Brazos-2, again, as a low-power offering. Kaveri was the third-generation core, announced in early 2014 for high performance. In the summer of 2013, Kabini and Temash were announced, intended for low-power hardware.
The AMD APU started out as just a project, the AMD Fusion project, which began in 2006 when AMD set out to create a system-on-a-chip (SoC) that combined the CPU with an on-die GPU.
There's a lot of neat technology embedded in it: out-of-order execution, SSE and AVX instruction extensions, and availability on both the FM1 and FM2 sockets. It wouldn't necessarily be surprising if you hadn't heard of the AMD APU before, but despite that, it's likely many tech enthusiasts and average gamers use the chip every day. Both Sony's PlayStation 4 and Microsoft's Xbox One use custom versions of third-generation low-power APUs.
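As an aside, if you're curious which of these instruction-set extensions your own CPU reports, on Linux you can read the "flags" line of /proc/cpuinfo. Here's a minimal sketch of parsing it; the sample flags string below is made up for illustration, and on a real machine you'd read the actual file:

```python
def cpu_flags(cpuinfo_text):
    """Extract the set of feature flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The line looks like "flags : fpu sse sse2 ...";
            # everything after the colon is a space-separated flag list.
            return set(line.split(":", 1)[1].split())
    return set()

# Hypothetical excerpt; on a real Linux box, use open("/proc/cpuinfo").read()
sample = "flags\t\t: fpu sse sse2 avx xop fma4"
flags = cpu_flags(sample)
print("avx" in flags)  # does this CPU report AVX support?
```

On an actual APU-era AMD chip you'd expect to see flags like `avx`, `xop`, and `fma4` in that list.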
AMD FX (2010 – present)
And then you have the AMD FX microprocessor. It's most definitely not a successor to the AMD APUs, but rather something sold alongside them that directly competes with Intel's Sandy Bridge and Ivy Bridge architectures. The AMD FX processors are geared more towards the high-performance market, while the AMD APUs cover a wider range of markets (both low-power and high-performance).
One major difference from Intel's Sandy Bridge and Ivy Bridge is that the AMD FX chips don't have integrated graphics; the integrated GPU is something AMD reserves for its APU line. The AMD FX is built on a 32nm process, and AMD actually calls the FX series the first native 8-core desktop processor. As far as sockets go, AMD, for the most part, uses a single socket for the FX series: the AM3+. Some other things worth mentioning are that the FX series supports the FMA instruction set and OpenCL.
When it initially launched in 2011, the FX series was built on the Bulldozer microarchitecture, which the Piledriver architecture succeeded in 2012. Both of these architectures use a modular design that puts two cores on a single module. Another successor is coming in 2017: the Zen microarchitecture. It will use a 14nm process, feature simultaneous multithreading (SMT, AMD's answer to Intel's Hyper-Threading), and employ the AM4 socket, which provides support for DDR4 RAM.
You can actually get a significant amount of performance out of the AMD FX series. All of the processors in this series are unlocked and overclockable, allowing you to seriously push their clock speeds. For instance, using liquid nitrogen for cooling, the AMD FX-8370 was able to set a world record for clock speed: 8722.78MHz, or a little over 8.7GHz.
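To put that record in perspective, here's the quick arithmetic: converting the record from MHz to GHz and comparing it against a stock base clock. We're assuming a 4.0GHz base clock for the FX-8370 in this sketch:

```python
record_mhz = 8722.78           # reported world-record clock for the FX-8370
record_ghz = record_mhz / 1000.0
base_ghz = 4.0                 # assumed stock base clock of the FX-8370

# The record works out to roughly 8.72 GHz...
print(round(record_ghz, 2))
# ...which is about 2.18x the assumed stock clock.
print(round(record_ghz / base_ghz, 2))
```

In other words, liquid nitrogen let overclockers more than double the chip's stock frequency.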
Since the FX series are high performance processors, they also have a high TDP — up to a whopping 220W.
Intel offers some serious power with their current line of Core i7 processors, but the AMD FX series takes the cake for the highest-performance chips for consumer PCs. The drawback is that there's no onboard GPU, but when you're seeking power like this, you'd probably rather have a dedicated video card anyway. It'll certainly be interesting to see what 2017 and beyond brings with the competition between AMD's upcoming Zen microarchitecture and Intel's Kaby Lake and Cannon Lake architectures.
And that wraps up the timeline of the many different processors out there, at least as of this writing. Processor technology is a fascinating field, and if you read about the different CPUs, you'll notice the trend of them getting smaller yet more powerful. It'll no doubt be interesting to see what we have another 10 or 20 years down the road.
Keep in mind that this is a timeline we plan on keeping updated, so as new CPU generations are released, be sure to check back here for new information!