Talk:Intel 4004

How big is it?

The book "California, a History" by Kevin Starr(p266) compares the 4004 and the ENIAC and says the 4004 was "one sixteenth of an inch long". Is this correct?

A. Reader —Preceding unsigned comment added by 68.105.63.69 (talk) 01:48, 14 September 2008 (UTC)[reply]

The 16-pin DIP package has two rows of pins 0.3 inches (7.62 mm) apart, with a pin pitch of 0.1 inches (2.54 mm). That makes the entire packaged chip about 0.9 inches long and 0.3 inches wide. The left-hand package in the main picture has the cover removed; the chip is larger than the pin spacing, so it is larger than 0.1 inches. I'd say that it is closer to "one eighth of an inch long" than "one sixteenth of an inch long". Ferritecore (talk) 15:04, 14 September 2008 (UTC)[reply]
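
A quick sanity check of those figures, as a Python sketch. The 0.2 in allowance for package body beyond the end pins and the roughly 3 mm x 4 mm die size are my assumptions, not figures from the comments above:

    # Rough check of the package and die sizes discussed above.
    pins_per_row = 16 // 2
    pitch_in = 0.1                      # pin-to-pin spacing along a row
    row_spacing_in = 0.3                # distance between the two rows
    body_length_in = (pins_per_row - 1) * pitch_in + 0.2   # assumed ~0.1 in of body past each end pin
    print(f"package body ~ {body_length_in:.1f} x {row_spacing_in:.1f} in")    # ~0.9 x 0.3 in
    die_mm = (3, 4)                     # commonly quoted 4004 die size (assumption)
    print(f"die ~ {die_mm[0]/25.4:.2f} x {die_mm[1]/25.4:.2f} in")             # ~0.12 x 0.16 in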

4004-driven traffic lights

I'm a little unsure about the following passage, deleted (and slightly modified) from a previous revision of this article:

For calculator and controller use the 4004 was a very effective design. As of 2003, there are reportedly even a few traffic light control systems still in use built with these chips.

If someone cares to do research on this, feel free :-). It would certainly be an interesting anecdote. Wernher 23:35, 13 Nov 2003 (UTC)

Seems like it'd be overkill when you could make a suitable solid-state traffic light switching system using discrete logic, and it'd need to interface to a bunch of heavy-duty relays somehow anyway. But it might have ended up that the cheapest and easiest way to implement the control logic was with a 4004 and a single ROM (and maybe a single RAM if there was a push-to-request pedestrian crossing included, or vehicle sensor loops) instead of soldering a bunch of transistors, resistors and diodes to a PCB (or at least a couple dozen fewer of them than you'd otherwise use...). Heck, you could probably make a system that would read the pattern out of a couple of 4000-series ROMs by single-stepping through addresses (using a fairly simple TTL binary counter and a 555 timer) and use the raw output from the chips to drive the light relays, with e.g. the length of a yellow light being the calibrated single-step part of the whole cycle. I remember playing with something like that on a home electronics experiment board, which had a very simplistic 16-nibble RAM you could program with a quintet of switches (4 to set the pattern, 1 to write it) - which could be jumpered to various external binary (or analogue-through-a-discriminator) inputs - then read out sequentially via a single-step counter with jumperable increment and reset buttons (and a separate 555 + pot set up to provide variable-frequency pulses from about once a minute to maybe 100 Hz), to drive a set of four coloured LEDs, a hex-deciphering 7-segment display, a buzzer that changed its frequency to match the coded value, various motors and bulbs, or some arbitrary external device. No microprocessor in it at all, just the RAM and the counter. One of the projects was coding a traffic light (or light-controlled crosswalk) cycle - for either a single direction with 3- or 4-aspect lights, or two directions with 2-aspect ones... 193.63.174.211 (talk) 09:23, 2 June 2014 (UTC)[reply]
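
For illustration only, here is a minimal sketch (in Python, nothing to do with real 4004 or 4000-series ROM code) of the "pattern ROM stepped by a counter" scheme described above; the lamp patterns and step length are made-up values:

    import time

    # Each 4-bit "ROM" word drives four lamps: bit0 = red, bit1 = yellow, bit2 = green, bit3 = walk
    PATTERN_ROM = [0b0100, 0b0100, 0b0100, 0b0010, 0b0001, 0b1001, 0b1001, 0b0001]

    def run_cycle(step_seconds=1.0):
        for address, word in enumerate(PATTERN_ROM):          # the binary counter stepping the address
            red, yellow, green, walk = ((word >> b) & 1 for b in range(4))
            print(f"step {address}: R={red} Y={yellow} G={green} WALK={walk}")
            time.sleep(step_seconds)                          # the 555 timer setting the step rate

    if __name__ == "__main__":
        run_cycle(step_seconds=0.1)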

Corrected clock speed again

After a careful editor changed the 4004's maximum clock speed to the proper value of 740 kHz, someone changed it back to the widely quoted but completely incorrect value of 108 kHz. I've just fixed it again and added a stern comment explaining why 740 kHz is correct and should not be changed.

Unfortunately, the incorrect value of 108 kHz is all over the place now, including within some very reputable sources. Even Intel's own pages on the 4004 list it. But Intel's original 4004 data sheets all say that its minimum clock period is 1350 nanoseconds, which means the maximum clock speed is 740 kHz. (I have checked data sheets from 1971, 1973, and 1977; they all agree on this.) The only possible explanation for the widely quoted value of 108 kHz is that the first page of these data sheets doesn't list a clock speed, but instead lists a 10.8 microsecond instruction cycle. (An instruction cycle requires 8 clock cycles.) At some point a writer must have somehow misinterpreted this value as a 108 kHz clock speed.
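
The arithmetic behind those figures, as a quick sketch (all the numbers are the ones quoted from the data sheets above):

    min_clock_period_ns = 1350
    max_clock_hz = 1e9 / min_clock_period_ns                        # ~740,740 Hz, i.e. about 740 kHz
    instruction_cycle_us = 10.8
    clocks_per_instruction = 8
    print(max_clock_hz)
    print(instruction_cycle_us * 1000 / clocks_per_instruction)     # 1350 ns per clock, consistent
    # Reading "10.8 µs instruction cycle" as if it were a "108 kHz clock" is the likely origin of the error.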

Hopefully my comment in the page will keep this from happening again here, but correcting the whole world is going to take a while.  :-)

--Colin Douglas Howell 00:13, 9 Oct 2004 (UTC)

Thanks for fixing! I corrected the 4004 clock rate figure in the Hertz article as well. --Wernher 00:29, 9 Oct 2004 (UTC)

I corrected the Russian Wikipedia page on the 4004 after reading Intel's datasheets and laughing ;-) Colin Douglas's note is an acknowledgement of my theory that the mistake originated with Intel's own boosters. Jem222 (talk) 11:08, 12 November 2011 (UTC)[reply]

I believe you are right that 740 kHz makes more sense, but it isn't completely obvious that the instruction cycle rate isn't reasonable. Most processors now have an internal PLL clock multiplier and are commonly described in terms of that rate, which is also often the instruction cycle (one instruction per clock) rate. Many early processors generate a two- or four-phase clock from the external clock input (or with a separate clock generator chip). If the 4004 does that, then the lower rate may be more correct. What clock rate is used for most of the flip-flops inside the 4004? Gah4 (talk) 17:11, 4 February 2013 (UTC)[reply]

Just before anyone gets too hung up on THAT explanation, note that 740 divided by 108 is a decidedly non-integer figure of 6.852..., while 740 / 4 would be 185 kHz. It's rather more likely it used a 1.48, 2.22 or 2.96 MHz master clock crystal, if there was indeed any subdivision of the main clock to drive different phases of each individual processor cycle. Somewhat unusually for chips of this era, the oddball speed rating doesn't seem to be related to the use of a cheap NTSC colourburst crystal (at 3.58 MHz, it too doesn't divide exactly)... 193.63.174.211 (talk) 08:59, 2 June 2014 (UTC)[reply]
Actually, the clock crystals I've found in schematics for 4004-controlled systems are often 5.185 MHz, provided there isn't some other reason to use slower timing. This is a common crystal frequency, apparently used for radio. For a 4004 system, it was divided by 7 in the system's clock generator circuit to get 740 kHz for the two clock phases fed to the 4004. Here's an example Intel datasheet for such a clock crystal offered explicitly for 4004 and 4040 systems.
While later microcomputer systems often had some sort of television display, I can't imagine a 4004 system with one, since they usually acted as embedded controllers of some sort. So there would have been no reason for them to use an NTSC colorburst crystal. --Colin Douglas Howell (talk) 03:27, 14 October 2014 (UTC)[reply]
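
Checking those divisor guesses numerically (frequencies taken from the comments above):

    print(740_000 / 108_000)      # ~6.85  - not an integer, so 108 kHz is not a simple division of 740 kHz
    print(5_185_000 / 7)          # ~740,714 Hz - the 5.185 MHz crystal divided by 7, per the clock circuit above
    print(3_579_545 / 740_000)    # ~4.84  - the NTSC colourburst frequency doesn't divide evenly either
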
"... it isn't completely obvious that the instruction cycle rate isn't reasonable." True, the erroneous "108 kHz" figure, taken as a rate, is not very far off from the 4004's actual instruction cycle rate of 92,500 instructions per second (at its maximum clock speed of 740 kHz). But I think it would be a mistake to start using instruction cycle rate instead of actual clock rate for the 4004, when the 4004 already has a well-defined clock signal whose rate was used in contemporary descriptions of the chip. Changing the definition like this would just lead to confusion. Other microprocessors from this period are described in the same way as the 4004 was: for example, when people talk about a 1 MHz 6502 or a 2 MHz 8080, they mean the input clock rate, not the instruction cycle, which is several times slower. If you're comparing a 4004 with such processors, you really want to compare apples to apples.
Anyway, 740 kHz is the frequency of the two clock phases which are fed to the 4004's clock inputs. It also corresponds to the rate at which the 4004's external bus changes state, which seems to be in sync with internal changes in the processor state. During an instruction cycle, which lasts for 8 clock cycles, the 4004 first transmits its 12-bit instruction address as three 4-bit chunks (3 clock cycles), then receives the 8-bit instruction word as two more 4-bit chunks (2 clock cycles), and finally does execution processing for the last 3 clock cycles. And during those last 3 cycles, when executing the SRC instruction, 2 of those cycles are spent transmitting an 8-bit RAM address over the bus as two 4-bit chunks. --Colin Douglas Howell (talk) 06:21, 14 October 2014 (UTC)[reply]
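
The same timing written out as a small sketch: 740 kHz clock, 8 clock cycles per instruction, split 3 + 2 + 3 as described above:

    clock_hz = 740_000
    phases = {"address out (3 x 4 bits)": 3, "instruction in (2 x 4 bits)": 2, "execute": 3}
    assert sum(phases.values()) == 8
    print(clock_hz / 8, "instruction cycles per second")              # ~92,500
    for name, n_clocks in phases.items():
        print(name, round(n_clocks / clock_hz * 1e6, 2), "microseconds")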

"The Intel Trinity" has both 740kHz and 108kHz in it. There are a lot of details that you don't find in other sources, but I believe also other errors. Gah4 (talk) 07:18, 9 September 2014 (UTC)[reply]

Looking up that book's author, he seems to be a standard tech reporter, not someone with a hard technical background who could judge the correctness of technical details. So I'd treat any claims it makes in that sphere with a lot of caution. --Colin Douglas Howell (talk) 03:10, 14 October 2014 (UTC)[reply]

4004 vs. other "first µP" candidates...

I'm not a big fan of these "who was first" fights; in my view, they often obscure the real significance of the events in question. But I also don't like the current phrasing, which implies that the 4004's status as "first microprocessor" may not be fully deserved and that other similar devices already existed. This seems like an unnecessary distortion of history.

The idea of using large-scale integrated circuits to shrink computer processors had certainly occurred to many people, of course, but the Intel designers do seem to have been the first to make a one-chip processor intended to be generally applicable to a variety of problems and to introduce it to a broad market.

To make this clear, I would describe the 4004 as "the first commercial single-chip microprocessor".

--Colin Douglas Howell 01:40, 9 Oct 2004 (UTC)

Second that. I made the (slightly less ambiguous, at the risk of being corrected...) edit, and split the F14 CADC material off into its own article (where it should be further elaborated upon, interesting as it most certainly is). --Wernher 20:52, 10 Oct 2004 (UTC)
I disagree. The CADC was not multi-chip. The central processor itself had everything (and more) that the 4004 did, EXCEPT for a program counter. The PC on the CADC architecture was placed on the RAS (e.g. RAM) and ROM chips. This was to facilitate multi-processing. To discount the CADC because of this is debatable. I think this problem could be solved by careful use of the word "first". As a computer historian, I learned long ago never to use the word as an adjective when describing an artifact from computing history because you will eventually and invariably be proven wrong.
I think the wording I implemented is proper, and furthermore I think the CADC reference should be completely omitted as this is a page about the 4004. There should be a footnote or a link to the CADC entry. As far as the second point ("generally applicable to a variety of problems") this is true, but these caveats should be explicitly expressed in the description because they matter. I can be reached at sellam@vintagetech.com --Sellam Ismail 07:47, 11 November 2005 (PST)
So the CADC wouldn't be useful for other tasks besides keeping a fighter jet in the air if you stripped it out and reprogrammed it? 193.63.174.211 (talk) 17:22, 10 May 2012 (UTC)[reply]
After having studied the matter more thoroughly, I am now aware of the things you point out here. However, I don't fully understand your having a problem with the mention of the CADC in this article---unless you are simply of the opinion that the CADC information in the intro pgph kind of clutters up the article a bit? If so, moving most of the CADC info into a footnote would be no problem. As for further elaborations of "who was first", FWIW, please see the CADC talk page. --Wernher 06:34, 23 November 2005 (UTC)[reply]

Added a reference that includes interesting material, with multiple viewpoints from people who were there at the time. The link in the main article is to the author's homepage. Alternative link: http://home.tx.rr.com/gep2b/schaller_dissertation_2004.pdf Also compare the discussion to the wording on Intel's 4004 site, which I am inclined to think of as 'simply not truthful', but then, they do it for the marketing. Anyway, enjoy the read. It appears to be very well researched. 85.178.88.27 02:09, 26 September 2007 (UTC)[reply]

The paper can be found on the Computer History Museum web site: http://corphist.computerhistory.org/corphist/documents/doc-487ecec0af0da.pdf — Preceding unsigned comment added by GilCarrick (talkcontribs) 17:09, 8 June 2011 (UTC)[reply]

Adware/spyware warning

I moved the "adware warning" against http://www.intel4004.com/ from the external link description itself into an intra-section footnote: "Site has been reported to contain adware/spyware." If someone cares to investigate this further, please do so. --Wernher 22:52, 16 January 2006 (UTC)[reply]

There seems to be a web bug on the home page of http://www.intel4004.com/, namely http://www.intel4004.com/images/blank_trans.gif, this has privacy implications although whether this web bug can be classified as adware and/or spyware I am unsure. Perhaps this issue has something to do with the bad blood between Faggin and Intel (just my little theory). If someone cares to investigate this even further, please do so. Slark 17:04, 3 April 2006 (UTC)[reply]
I removed the adware/spyware claim pending some actual information. Mirror Vax 17:51, 3 April 2006 (UTC)[reply]
McAfee's SiteAdvisor reports that it links to a couple of shady sites, webstats4u.com and itrack.it. --Traal 23:41, 18 April 2007 (UTC)[reply]

The problem has been solved. We got rid of the offending counter.

Jokes

The 4004 marked the 4004BC of modern computing.

I don't get it!? Pluke 21:47, 15 November 2006 (UTC)[reply]
See Ussher chronology. Ferritecore 11:18, 22 June 2007 (UTC)[reply]

Large image

I removed Image:Intel-4004-schematics.png from the article because it was 1.5MB, which is far too large to appear in an article. Could someone who knows how to do images please replace it with a small thumbnail. 58.179.129.241 00:48, 21 November 2006 (UTC)[reply]

The 4004 was a commercial failure?

In the second paragraph of the Intel 4004 article (as of 12/7/2006 at 10AM EST), it is written:

As for the 4004 itself, it was largely a commercial failure and had very little impact on the electronics industry as a whole.

Based on my extensive research on 4004 history, I find this statement quite surprising and would be interested in seeing evidence to support this claim. Intel's very next microprocessor, the 8008, was indeed a commercial failure. But in the case of the MCS-4 family (4001, 4002, 4003, 4004), you have to consider that commercial failures rarely spawn a family of compatible follow-on products like the 4040 microprocessor, or two generations of interface chips, like the 4008, 4009, and the later 4289 memory interface chips. Nor does one find, on chips that are commercial failures, IC date codes extending 15 years from the first date of manufacture (in this case 1971-1986). National Semiconductor second-sourced the 4004 as the INS4004. I don't have hard production numbers, but my understanding is that over a million Intel 4004s were made.

It is well recognized by now that other companies were working on microprocessor technology at the same time, and that the notion of the microprocessor was "in the air." Intel's 4004 team didn't "pull an Einstein"; they just got to market first with a microprocessor you could buy off the shelf and program yourself. William Aspray's journal article "The Intel 4004 Microprocessor: What Constituted Invention?", published in the IEEE Annals of the History of Computing, Vol. 19, No. 3 (1997), gives a great overview of the historical context and considers whether the birth of the microprocessor was a revolutionary or evolutionary milestone in the history of technology.

Sure, let's give credit where credit is due. Let's list any and all the other companies and projects that were working on microprocessor technology around the same time. A stable Wiki that we can all agree on is the best thing we can offer the world.

Disclosure: I am not now, nor was I ever an employee of Intel. Though I have done exhibit design for the Intel Museum as an independent vendor, I am committed to as accurate a portrayal of history as possible. --Tim McNerney

I too would agree.

The statement that the 4004 was a commercial failure is an unfair opinion.

Busicom paid for its development (largely), and gave Intel an incredible market opportunity. Let's not forget that Busicom, using the NCR brand, produced millions of desktop calculators using their version of the 4004.

--Robert

A curious coincidence

I added "A curious coincidence" because I thought that the story of the naming of the 4004 was interesting and also the coincidence with the date 4004 B.C. surprising. viuz 22:19, 27 June 2007 (UTC)[reply]

Elvia -- Thanks for all of your contributions to this page. I teach Computing at Bennington College and we've discussed the design of the 4004 at length. We designed and built our own CPU in TTL last year, and one of the students dedicated it to Federico. But mostly, I just wanted to say that your contributions to the documentation and history of the 4004 are excellent. Joe 04:09, 29 June 2007 (UTC)[reply]

Sorry I deleted this section. One must admit this is a 'cute' little coincidence, but you should present it in a way which doesn't assume creationism. --hydrox 02:20, 4 July 2007 (UTC)[reply]
I find this "curious coincidence" interesting because a break from an established part numbering system suggests that the chooser-of-numbers thought that this family of chips was a significant break from the established product line. A little documentation and a more encyclopedic presentation and I think the break in numbering conventions fits in as more than just a coincidence. It speaks to the significance of this chip offering, and that the significance was realized at the time. Any speculation as to why the specific number 4004 was chosen remains just speculation absent some supporting evidence. The relationship between the part number of this chip and the Usher chronology is likely just coincidence. It is just as likely that somebody thought a palindrome looked nice. Ferritecore 03:37, 5 July 2007 (UTC)[reply]
I think with some rewording the section should go back in. As it was, it's not a hard stretch to read it without assuming creationism, so the text is nearly fine. Or perhaps it can be prefaced or couched in text that alerts the reader to the more informal nature of the paragraph. Joe 19:50, 6 July 2007 (UTC)[reply]
I agree, and done right it could possibly fit in the history section. Ferritecore 21:50, 6 July 2007 (UTC)[reply]

So, um, what of the 4001, 4002, 4003 chips? Are they supposed to represent each of the three years following Genesis, then? Could just be that they were the first four in a new line of "4000-series", 4-bit devices... Not everything has to have a biblical connection or even be related back to it. I don't go about finding ways to relate the Motorola 68k to events that (supposedly!) happened 70,012 years ago and then add a reference to them on its page... 193.63.174.211 (talk) 18:06, 10 May 2012 (UTC)[reply]

Silly how an article about microprocessors tolerates any sort of numerology. Go home, idiots! — Preceding unsigned comment added by 77.13.42.35 (talk) 00:45, 6 September 2014 (UTC)[reply]

4004 vs. ENIAC

I removed the following:

The Intel 4004 CPU had 17 times the computing power of the 1946 ENIAC vacuum tube supercomputer, which weighed 33 tons and occupied 212 square metres of floor space.

This seems to be comparing add time (10.8 µs on the 4004 vs. 200 µs on ENIAC, although that ratio is closer to 18.5). This is an unfair comparison, as the 4004 is adding only 1 digit in this time while ENIAC is adding 10 digits (single precision) or 20 digits (double precision). Also, ENIAC was parallel in 1946 and could be doing additions in up to 20 accumulators at the same time.

By my calculations, the two range from roughly comparable to ENIAC being somewhere over 10 times faster; being certain would require actual 4004 code for a 10-digit addition, which probably needed several instructions per digit, not just one.

I don't believe any fair comparison of the two machines could be done in one single number like this. -- RTC 00:13, 26 July 2007 (UTC)[reply]

I found the text was originally inserted as the following:

The Intel 4004 CPU had the computing power of the 1946 ENIAC vacuum tube supercomputer, which weighed 30 tons and occupied 167 square metres of floor space.

While this is more realistic, I still don't believe a single number can accurately and fairly compare the two machines. -- RTC 00:27, 26 July 2007 (UTC)[reply]

Not only can't one compare the 4004 to the 1946 ENIAC with a single number, but in fact the people who like to trot out this comparison always seem to assume that the 4004 must have been faster, simply because it was newer, even though the truth was quite the reverse. As you already commented, ENIAC in its original form could be parallelized, and even one of its multiple accumulators could do a 10-digit decimal addition around 5 times faster than the 4004. (At its maximum clock speed, the 4004 would take 850 µs for an 8-digit decimal add of one memory operand to another, and a 10-digit one would probably take around 1080 µs, while a single ENIAC accumulator only needed 200 µs for the same operation.) Plus the ENIAC had special multiply and divide/square-root units, something completely beyond the 4004. It's true that ENIAC was slowed greatly when it was converted to stored-program operation in 1948, and it lost its ability to do operations in parallel. A comparison between this machine and the 4004 would be much fairer. But the comparisons I see, always using the original, 1946 ENIAC, strike me as a sloppy attempt to make a "golly gee whiz look how much things have advanced" contrast without actually checking to see if it's true. *grump* --Colin Douglas Howell (talk) 07:26, 14 October 2014 (UTC)[reply]
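
Putting the figures quoted above side by side, as a sketch (the 4004 timings are the estimates given in this thread, not measured values):

    eniac_10_digit_add_us = 200        # one ENIAC accumulator, per the discussion above
    i4004_instruction_us = 10.8        # one 4004 instruction cycle (a single digit at best)
    i4004_8_digit_add_us = 850         # memory-to-memory decimal add, estimate quoted above
    i4004_10_digit_add_us = 1080       # 10-digit estimate quoted above
    print(i4004_10_digit_add_us / eniac_10_digit_add_us)   # ~5.4: one ENIAC accumulator is ~5x faster
    print(eniac_10_digit_add_us / i4004_instruction_us)    # ~18.5: the misleading instruction-time comparison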

Single-chip microprocessor?

There's the statement in the article that the 4004 was not a "single chip microprocessor", as "contrary to popular belief". I find this somewhat confusing; admittedly, the 4004 was not a "stand-alone" chip of the later sort that included RAM, ROM, and/or I/O multiplexing, but did anyone ever think it was? As I recall the times, the chip was the first to be a microprocessor on a single chip - the fact that the RAM or ROM was separate isn't really relevant. Of course, even the very newest Pentiums still don't contain memory. Anyway, I think this needs clarification, but I wasn't sure how to fix it. Is there a distinct category called "single-chip micro" which specifically refers to having all major functions onboard a single chip? I'm not familiar with the semantics here, but clearly the 4004 was the first processor on a single chip. Eaglizard 21:51, 15 November 2007 (UTC)[reply]

This goes to the heart of the discussion as to whether the Central Air Data Computer was a microprocessor and whether it beat the 4004 by a year. Alatari (talk) 06:08, 13 January 2008 (UTC)[reply]

Some more is discussed in [1]. The address and data are multiplexed on a single 4-bit bus. The 4001 and 4002 demultiplex the address (the high digit is a mask option). Instead, one could externally demultiplex the address and data and interface to more usual ROM and RAM. The voltage levels for clock inputs are different from those for data inputs, requiring special clock drivers and, for optimal clock frequency, complicated timing. Gah4 (talk) 07:38, 9 September 2014 (UTC)[reply]


To belatedly answer the point: it was not considered a single-chip microprocessor because not everything required to make it work was on the chip itself. For example, the processor required an externally generated two-phase clock signal as well as a 4008 and/or 4009 to actually interface with ROM, RAM or I/O. It is true that all the processing elements were on the one chip, but seldom, at that time, the support logic. This state of affairs continued for some time after the 4004, with many well-known microprocessors actually occupying more than one chip. Even many of the big-selling chips still in use by the early 1980s still came as multiple-chip sets. The Intel 8080 was a three-chip set and the Motorola 6800 still required a separate 6870 clock generator. Even the ubiquitous Z80 still required an external clock generator. The Intel 8085 (introduced 1977) still theoretically required an additional chip to separate out the control and multiplexed bus signals, but arguably it qualified as a single-chip processor because by this time Intel had developed memory and input/output chips which could directly cope with these signals. 109.156.49.202 (talk) 16:29, 4 October 2011 (UTC)[reply]

I'm pretty sure that any time I've seen the definition of a (micro)processor, aka CPU, in the past, it's always been the part which performs various operations on data presented at its inputs and then provides the result at its outputs. Where it comes from and where it goes to is immaterial. Memory (RAM, ROM, disk, paper tape, stuff coming out of a user's brain and into a keyboard) is decidedly external - you could possibly even argue that a pure CPU doesn't even require any internal registers beyond the bare minimum to enable a Turing-complete instruction set, let alone cache - and so are clock and sync pulses. Remember how the oldest, somewhat experimental micros could be single-stepped by hitting a switch that sent a single pulse, similar to what could be done with minis and mainframes? The USER became the clock generator. But they still had proper 8080 CPUs inside. The 4004 is a single piece of silicon in a single ceramic- or plastic-plus-connectors package that can (request and) receive data through the connectors, and thrust it out again the same way after doing stuff to it. It counts.

BTW, if it required a 4008 and 4009 to work, however were you supposed to use the original 4001-thru-4004 lineup without them? ;) In any case, their assignments mirrored what we still see today; the 4001 was the BIOS ROM, the 4002 the RAM, the 4003 the keyboard PIC and video card, and the serial/parallel port (or USB) connector tasks were split between all three of the "support" chips. Very, very few microprocessors can be used just as they are without SOME kind of additional hardware... I think the term you're hunting for instead is either "microcontroller" or "system on a chip".

(And the designers weren't going for that - they were instead looking to replace the separate cards of a backplane minicomputer (the CPU card - sometimes itself split into separate parts, with e.g. the ALU taking up a whole slot - the memory card(s), the terminal interface card, etc.) with a single chip apiece. They probably didn't even think of the 4004 itself as the most significant part of the whole setup at the time, but instead the accomplishment of making EACH of those things - which previously required a square foot of PCB with an edge connector and an inch-plus of clearance - fit into a device small enough to swallow. The further concatenation of all those parts into a single unit would take a while longer still.) 193.63.174.211 (talk) 18:20, 10 May 2012 (UTC)[reply]


References

  1. ^ MCS-4 data sheet.

Pioneer 10

I recently viewed a video by the Computer History Museum in which Ted Hoff and Federico Faggin talked about the 4004, and at one point Federico says, "... the most remarkable application, of which I feel very ... proud, is the 4004 is one of the few artifacts that have gone beyond the asteroid belts in Pioneer 10 ... It's a major piece of artwork up in space". This wiki article states that it's just a myth, however. Was Federico kidding around? -- MP64 (talk) 07:20, 16 December 2007 (UTC)[reply]

I doubt he was kidding around; he probably just didn't know any better. It's an attractive story, and Federico was in engineering, not sales, so he'd have no reason to know the story was false. --Colin Douglas Howell (talk) 07:39, 14 October 2014 (UTC)[reply]

I have found a reference regarding the possibility of the 4004 being on Pioneer 10: http://www.opencores.org/forums/cores/2002/05/00059 According to Dr. Larry Lasher of Ames Research Center, the 4004 was not used on the Pioneer 10 spacecraft. http://home.comcast.net/~jsweinrich Jweinrich (talk) 15:26, 2 August 2008 (UTC)[reply]

It is stated here (http://voyager.jpl.nasa.gov/faq.html) that the Pioneer 10 computers had 16-bit and 18-bit word sizes, not 4-bit like the 4004. Avivanov76 (talk) 16:26, 3 November 2012 (UTC)[reply]

looking for old software

The MCS-4 manual states there was some publicly available 4004 software:

  • a cross-assembler and simulator running on the PDP-8 in FORTRAN IV
  • subroutines for AND, OR, and XOR
  • a 16-digit decimal addition routine
  • Chebyshev polynomial approximation routines for addition, subtraction, multiplication, division, exponents, natural logs, sine, cosine, and arctangent

Is any of this still available? A quick search of the usual places didn't turn up anything useful (but intel4004.com again/still has an infection). Dugong.is.good.tucker (talk) 16:19, 28 August 2008 (UTC)[reply]

I believe stan mazor should be mentioned

I am not an expert, but various articles, googling, and a patent listed on the page suggest that Mazor should be mentioned as one of the designers.

He was a co-recipient of the Kyoto Prize for the 4004. AllanGottlieb (talk) 01:07, 6 September 2008 (UTC)[reply]

ancestor of all modern x86 based chips?

Would it be accurate to say the 4004 is the ancestor of all the current x86-based chips produced today? After all, some of the same people worked on the 4004 and the 8008, which led to the 8088, the 8086, 286, 386, 486, Pentium, Itanium, etc., and thus all the AMD chips as well, which were copies of x86, at least in instruction set and addressing, if not more? Decora (talk) 10:39, 15 November 2009 (UTC)[reply]

(Itanium isn't part of the x86 line; it was supposed to be a "clean-sheet" design to replace it.)
People often assume that the entire x86 line is descended from the 4004, but there's no evidence for this.
The x86 line is clearly descended from the 8008: the 8080 and 8085 were derived from the 8008, and the 8086 and 8088 processors were designed so that 8080/8085 code could be automatically translated into 8086/8088 code.
However, the 4004 and 8008 are quite different in their architectures and instruction sets, as you quickly discover if you read about them. The 8008 was not an 8-bit derivative of the 4004, as many people like to assume. The two architectures were developed at the same time as separate projects for different customers. The 4004 was developed to control Busicom calculators, and Busicom's Masatoshi Shima had a heavy influence on its architecture. The 8008, meanwhile, was designed to control the programmable terminals of Computer Terminal Corporation, and that company had already specified the processor's architecture in advance.
Yes, some of the same people did work on the detailed hardware layout of the 4004 and 8008, so you might be able to find some low-level hardware features that the two chips share in common. But that's not what people mean when they talk about the relationship with later chips; they're talking about the high-level architecture visible to the programmer. I'm pretty sure that any possible low-level design features shared by the 4004 and 8008 chips have long since been eradicated in later chips, since there would be no reason to preserve them—such features don't affect architectural compatibility. There's a vast difference between the PMOS semiconductor technology of the early 1970s, with feature sizes of 10,000 nanometers, and the CMOS semiconductor technologies used 40 years later, whose features are several hundred times smaller. --Colin Douglas Howell (talk) 17:48, 14 October 2014 (UTC)[reply]

instructions per second?

Why does today's main page list 60,000 instructions per second when it says 92,000 here?!? CapnZapp (talk) 21:22, 15 November 2009 (UTC)[reply]

60,000 is its average rate (for a mix of 1- and 2-cycle instructions); 92,000 (at 740 kHz) is its maximum. Jem222 (talk) 11:17, 12 November 2011 (UTC)[reply]
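
How the two figures relate, as a sketch; the assumption of a roughly even mix of 1- and 2-cycle instructions is mine, not from the comment above:

    clock_hz = 740_000
    max_rate = clock_hz / 8     # 1-cycle instructions take 8 clocks: ~92,500 per second
    min_rate = clock_hz / 16    # 2-cycle instructions take 16 clocks: ~46,250 per second
    mixed_rate = clock_hz / 12  # assumed even mix of the two: ~61,700 per second, i.e. roughly 60,000
    print(max_rate, min_rate, mixed_rate)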

Possible to overclock?

Is it possible to overclock this CPU to achieve higher speeds? General Heed (talk) 18:10, 28 August 2010 (UTC)[reply]

I believe you'll have to get hold of one, do the experiment, publish the results somewhere on a web page, then have a friend cite it into the article for us to find that out. But as Intel themselves originally wanted it to run at 1 MHz, you'd probably have trouble. The internal circuitry is quite a bit "larger" than in, say, a 6502 (typically stuck down at 2 MHz or less), and uses somewhat less efficient construction. You could probably do it, but only with additional cooling (more to speed up the internal signalling than to protect against melting), custom external support hardware, etc. 193.63.174.211 (talk) 18:39, 10 May 2012 (UTC)[reply]

Not really a chip to overclock. You can find manuals online that also include performance as a function of temperature and voltage. The 4004's maximum temperature is not very high, and the chip is too small to add a heat sink.

Overclocking is a very modern practice which didn't really take hold until the late 1990s, though there are isolated earlier examples. I suspect it would have been anathema to the 4004's designers, who were going for reliability rather than speed. They were selling these chips to electronics designers who would use them as embedded components in larger devices or systems. The goal was something you could plug in and forget about while it quietly performed its job for years without fuss.

A 4004 system usually has multiple 4004 chips, with even more of the matching ROM, RAM and I/O modules on the same board. The 4004 is a multi-processor chip by virtue of being only 4-bit. Synchronization is more important with 4-bit instruction sets, and a higher synchronization frequency will easily randomize some communication processes. — Preceding unsigned comment added by 77.13.42.35 (talk) 00:55, 6 September 2014 (UTC)[reply]

I'm pretty sure the 4004 was almost never used as a multiprocessor in the way you're thinking, though it's true there were some systems which had several 4004s performing separate tasks. Multiprocessors were far from practical in the early 1970s, and Intel's 4004 documentation gives absolutely no information or advice on how to use 4004s in a multiprocessor system. If they intended for multiprocessing to be a major use, you'd expect them to provide guidance on that.
I think you've been confused by Intel's 4004 datasheets, which often describe it as a "4-Bit Parallel CPU". Admittedly, that wording is a bit sloppy to my eyes and these days it might be misinterpreted to mean a multiprocessor, but that's not what was intended. Rather, it's saying that the 4004 is a CPU which works on 4-bit values, with all bits processed in parallel—in other words, a so-called "bit-parallel" architecture. This may seem redundant to our eyes, since almost all processors these days are bit-parallel designs, but there is an alternative: the so-called "bit-serial" architecture, in which the processor's logic elements only process one bit at a time, and multi-bit values are handled by processing all the bits in sequence. Such designs are much slower than bit-parallel ones, but they can be built with fewer logic elements and used to be quite common. They were still fairly widely used in the early 1970s: for example, DEC's PDP-8/S was a 12-bit bit-serial computer, and HP's early handheld calculators were 56-bit bit-serial. Intel was trying to emphasize that the 4004 was not a bit-serial machine, using language which was well understood at the time. --Colin Douglas Howell (talk) 23:03, 14 October 2014 (UTC)[reply]
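
A toy illustration (plain Python, not 4004 code) of the bit-parallel vs. bit-serial distinction described above: both functions compute the same 4-bit sum, but the serial version handles one bit per step, the way a bit-serial machine would:

    def add_parallel(a, b, width=4):
        # all bits processed at once, as in the 4004's 4-bit bit-parallel ALU
        return (a + b) & ((1 << width) - 1)

    def add_serial(a, b, width=4):
        # one bit per step, with the carry rippling along, as in a bit-serial machine
        result, carry = 0, 0
        for i in range(width):
            s = ((a >> i) & 1) + ((b >> i) & 1) + carry
            result |= (s & 1) << i
            carry = s >> 1
        return result

    assert add_parallel(9, 5) == add_serial(9, 5) == 14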

MCS-4 and 8008?

Historical articles on the MCM/70 appear to suggest (perhaps incorrectly) that the 8008 was also supported by the MCS-4. Does anyone know for sure? If this is wrong I'd like to set the record straight. Maury Markowitz (talk) 11:27, 27 June 2011 (UTC)[reply]

The 8008 8-bit CPU architecture and chip set were originally created by Datapoint Corp. for use in their Datapoint 2200 intelligent terminal, and later designed into a chip by Intel. The MCS-4 4-bit architecture and chip set were originally created by Intel for use in Busicom Corp.'s family of calculating machines, and consisted of 4 chips: CPU (4004), ROM with I/O (4001), RAM with I/O (4002), and I/O (4003). The 8008 required standard memory components and there were no dedicated I/O chips. The instruction sets of the two processors were quite different, and the MCS-4 did not support the 8008. The only relationship between the 8008 and the 4004 was their use of the same manufacturing technology and design methodology, originally developed for the 4004. viuz — Preceding unsigned comment added by 75.18.188.109 (talk) 21:39, 12 July 2011 (UTC)[reply]

MCS-4 historical and recent documents

A distinction should be made between the early original MCS-4 documents and later recollections. — Preceding unsigned comment added by 75.18.188.109 (talk) 22:14, 6 July 2011 (UTC)[reply]

Original research?

Please explain where and why. I am removing the banner until this is done. Audriusa (talk) 09:58, 13 February 2012 (UTC)[reply]

maximum memory amount?

OK, could it just have one each of the 4001 and 4002, or more? If so, how many? And was this any different with the 4008/4009 or 4289? (I'm trying to do a thought experiment on whether you could make any kind of "useful" home computer from it, beyond the level of the electric typewriter / very early word processor - with limited (magtape-based) editing capabilities - that I've heard it was otherwise built into. 40/256 bytes is a bit limited... 1K/4K, now we're getting somewhere, up to Atari 2600 or Sinclair ZX81 levels.) 193.63.174.211 (talk) 18:44, 10 May 2012 (UTC)[reply]

There is a 12-bit address bus, I believe addressing 4-bit nybbles, but with separate instruction and data space, though I am not sure where the partition is. It might be the high bit, so 2048 nybbles for instructions and 2048 for data. There is one reference, I believe on a page linked from here, saying that the Busicom has five 4001 ROM chips, including the optional square-root ROM. Gah4 (talk) 17:25, 4 February 2013 (UTC)[reply]

(That was me before, and I've ended up back here again for similar reasons) ... Ah, thanks. I did see the 12-bit address, but the way it seems to deal with addressing those chips is a bit strange. Also I'm sure I saw an actual amount for both discussed somewhere - something like the equivalent of 640 bytes of RAM and 5KB ROM or the like - but I can no longer remember the specifics or where I saw it. I may check out that Busicom page and see if it mentions it. (In fact, now you're jogging my memory, it might even have been the Intel spec sheet itself?)
I also wonder if there might be any mileage in attempting some kind of bank-switching memory extension scheme as used in other architectures which suffered from limited overall address ranges, such as the VCS itself (originally 4KB carts only, but some went up to 32KB or had 16KB plus extra RAM vs its lowly 1 kbit of onboard storage) or some later 8-bits... and indeed even the 8086 which had a 16-bit range at heart but could work with 16 different "segments"... having 4 segs for each may allow 20KB of ROM and 2.5KB of working RAM, which would allow reasonable sophistication, especially in the context of the limited processing power and bus speed. 193.63.174.211 (talk) 09:05, 2 June 2014 (UTC)[reply]
Actually, the above is somewhat confused. The 4004 does not have a dedicated address bus at all. It has a single 4-bit bus which is used for both data and addresses and for both program memory and data memory. So things get kind of complex.
Program memory and data memory are considered logically separate. They have separate address spaces and different word sizes and reside in different kinds of chips. Which memory is being accessed is determined by context and timing. Program memory is accessed by instruction fetches and by a couple of data-fetching instructions: FIM ("fetch immediate") and FIN ("fetch indirect"). In both cases, the access occurs at the start of the instruction cycle. (Executing a data-fetching instruction triggers a second instruction cycle to fetch the requested data from program memory.) Data memory is accessed by all other instructions which either read data from memory or write data to memory. The data memory access, if any, uses the same bus as the program memory access, but the data memory access occurs at the end of the instruction cycle, and different chips respond to it.
Program memory uses 12-bit addresses, transmitted in three 4-bit chunks. These addresses are for memory words 8 bits wide, so they are byte addresses. The 8-bit words in program memory are transferred in two 4-bit chunks. With 12-bit addresses of 8-bit words, the 4004 can address 4 kB of program memory. This normally resides in 4001 ROMs; the maximum program memory requires 16 of these ROMs. With the help of the 4008/4009 (or the later 4289 equivalent), standard 1702-style ROMs (or even standard RAMs) could be used instead of the 4001s, with the 4008/4009 doing the necessary bus translation. Using the 4008/4009 did not change the limit on program memory size.
Program memory was originally supposed to be unalterable. When the 4008/4009 was introduced, allowing standard RAMs to be used for program memory, a previously undocumented 4004 instruction, WPM ("write program memory"), became effective, but this was usually used only on development systems. Normal 4004 systems did not write to program memory.
Data memory is much more complicated. Data memory is arranged in multiple banks, and there's a bank-switching instruction, DCL ("designate command line"), to select the active bank. Within each bank, there are two kinds of data memory, both stored within the same chip. "Main memory" is the normal kind, used by ordinary instructions. It takes an 8-bit address, which selects a 4-bit word. "Status memory" is the other kind of data memory, and it's only used by a special set of status-memory instructions. The word size is still 4 bits, but the address is effectively 6 bits, split into two parts. Yes, this is weird and ugly; it's an artifact of how the 4004 was originally designed to control Busicom's calculators.
Each bank of data memory, composed of four 4002 RAM chips, has 256 4-bit words of "main memory" and 64 4-bit words of "status memory", for a total of 320 4-bit words (160 8-bit bytes). Each 4002 chip has 64 words of "main memory" and 16 words of "status memory", or 80 words total (40 8-bit bytes).
The total amount of data memory depends on how many banks are available. In a normal 4004 system, there can be up to 4 banks, because the 4004 has 4 control lines for selecting a RAM bank. (These are known as "command lines" and designated CM-RAM0 through CM-RAM3.) Without extra hardware, you can have only one RAM bank per command line. With 4 banks, at 320 words per bank, you get 1,280 4-bit words total, or 640 8-bit bytes. Since each bank has four 4002s, you get a total of 16 4002 RAM chips.
It is also possible to build a 4004 system with up to 8 banks of data memory. This limit comes from the 4004's 3-bit internal register set by the bank-switching DCL instruction. The extra banks, 4-7, are indicated by combinations of command lines CM-RAM1 to CM-RAM3. You have to pass these lines through a 3-to-8 decoder circuit to separate the distinct banks 1-7. With this technique, you get 2,560 4-bit words total, or 1,280 8-bit bytes, on 32 4002 RAM chips. This latter technique was used by Intel's own Intellec 4 development system, based on the 4004.
The 4008/4009 chips (and their later 4289 replacement) did not affect either the maximum size of data memory or which chips could be used for it. Only the specialized 4002 RAMs were allowed in any case.
Bank switching was indeed done in 4004 systems. In addition to the bank switching of data memory which is provided by the 4004's DCL instruction, you could do bank switching on program memory as well with appropriate external circuits. The Intellec 4 had switches on its front panel console to provide bank switching of program memory. One bank was for the system's monitor ROM, one for the RAM normally used for user program development, and one for an optional bank of ROM which was separate from the system monitor. The user would boot the system from the monitor ROM bank, perform the monitor operations he wanted (such as loading a user program into RAM), and then switch the system to the desired program memory bank to run that program.
So you could build a fairly large 4004 system, considering the limits of the logic. But the 4004 was always intended to be used as a simple computer with limited memory running a single fixed program, acting as a control system and replacing complex circuits of custom logic. It was not designed to be a full-fledged general-purpose computing platform. --Colin Douglas Howell (talk) 03:31, 15 October 2014 (UTC)[reply]
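
Summing up the limits laid out above as a quick sketch (single 4004, figures from the explanation; the 8-bank case needs the external 3-to-8 decoder mentioned):

    program_bytes = 2 ** 12                      # 12-bit byte addresses: 4,096 bytes of program memory
    rom_chips = program_bytes // 256             # 16 x 4001 ROMs (256 bytes each)
    nibbles_per_bank = 256 + 64                  # "main" + "status" memory in one bank of four 4002s
    for banks in (4, 8):
        data_bytes = banks * nibbles_per_bank // 2
        print(f"{banks} banks: {banks * nibbles_per_bank} nibbles = {data_bytes} bytes of data RAM")
    print(program_bytes, "bytes of program memory in", rom_chips, "x 4001 ROMs")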

A computer with four 4004 chips addresses 5,120 bits of data random-access memory and 32,768 bits of program random-access memory. A 4004 chip addresses 8 memory banks, which are 4002 RAM chips. It has 16 4-bit registers. Each 4004 chip has 12 bits for its stack, 4 bits for its accumulator and 1 bit for its carry. (from a 4004 manual) 77.13.42.35 (talk) 01:17, 6 September 2014 (UTC)[reply]

As I've said above in a previous section, you're confused about the 4004 being a multiprocessor. The memory sizes you give are for a computer with one 4004, and the 5120 bits of data memory assumes only 4 banks are available. Also, the 4004's internal program address stack is actually 4 x 12 bits, since it has the current 12-bit program counter and three levels of 12-bit return addresses. --Colin Douglas Howell (talk) 03:31, 15 October 2014 (UTC)[reply]

Past tense

This article, like many retrocomputing articles, is written in the past tense. It seems to me that 4004 chips still exist, so the past tense isn't necessary when referring to the processor. It might still be for some other statements, such as design decisions, people or places. Is there a Wikipedia rule on tense in articles? Gah4 (talk) 17:30, 4 February 2013 (UTC)[reply]

It's not being actively produced any more, so I believe the convention is to use the past tense - the same as you would for cars, or TV shows, or anything else that was once in series production but hasn't been for quite some time, yet hasn't ceased to exist altogether because of this. A mass produced microprocessor isn't the same as a person, it doesn't "die" completely and permanently once discontinued, but it would still be odd to use the present tense unless you're actually holding one in your hand, or a system that uses it. 193.63.174.211 (talk) 09:11, 2 June 2014 (UTC)[reply]
Agreed. I think that once the major manufacturer(s) of the device stop producing it, that is when the tense changes from present tense to past tense, regardless of whether the devices continue to exist and be used. Consider the context of a typical lede sentence, for example: The Foobar 6900 is|was a widget produced by Foobar Doohickies, Inc. ... — Loadmaster (talk) 18:11, 2 June 2014 (UTC)[reply]
OK, but that one specifically indicates "produced". My (random example) Ford Taurus, sitting on the driveway (is/was) green. Now, if I sell it then "My car was green." past tense as it isn't my car, not because it isn't green. (Though the new owner could repaint it.) But does it really matter if it is in production or not, if it is sitting in my driveway? Gah4 (talk) 00:12, 8 September 2014 (UTC)[reply]
See MOS:PRESENT RastaKins (talk) 15:25, 5 May 2024 (UTC)[reply]
The PDP-10 example came up around 2015, when I was sitting next to a running PDP-10 and reading about it in the past tense. But even when not running, it still existed. Gah4 (talk) 19:27, 5 May 2024 (UTC)[reply]

Predated by TMS 1000

I put a citation needed tag after the claim that the 4004 was predated by the TMS 1000. The citation in the Microprocessor article, from the Smithsonian, says that the TMS 1000 was developed "about the same time that Intel fashioned the first microprocessor", actually giving credit to Intel for the first microprocessor. That citation also has a disclaimer that says it is not guaranteed to be accurate. Rsduhamel (talk) 19:47, 25 May 2013 (UTC)[reply]

I am now reading "The Intel Trinity",[1] which has many insider details on Intel products. I haven't finished it yet, though. Gah4 (talk) 00:14, 8 September 2014 (UTC)[reply]

References

  1. ^ The Intel Trinity

Co-invention

(Sock puppet of banned user trying to imply that because Busicom commissioned a calculator architecture from Intel they were co-inventors of the microprocessor.) It seems to me that this depends somewhat on the actual input from Busicom. As far as I know, they did more than just commission (supply money for) the work; they actually did much of the design. One could see what the patent says, for example. Gah4 (talk) 23:31, 8 May 2017 (UTC)[reply]

  • You mean the patent assigned to three Intel engineers (Hoff, Mazor, and Faggin)? Busicom wanted a desktop calculator that executed functions through a combination of hardware and software so the final design could be applicable to other projects like cash registers. They were looking at a set of roughly a dozen chips to meet this need. Ted Hoff, and as far as I know Ted Hoff alone, came up with the idea of doing it as a single-chip CPU supported by three other chips instead. This is not to imply that other people had not been trying to develop a single-chip Von Neumann architecture for several years at that point, but Hoff is the man who articulated the first practical design with the help of Mazor, who was a programmer. Faggin is the man who actually designed the chip, though Masatoshi Shima from Busicom helped design the logic because the entire project was way behind schedule due to Intel needing most of its engineers to sort out problems in its memory business.
These details are pretty well covered in reliable sources and anyone who wants to add more detail to that narrative is welcome to do so. In the edits I reverted, however, banned user Jagged 85, infamous for his misuse of sources and misattribution of scientific and technological developments to non-Western inventors (see Wikipedia:Requests_for_comment/Jagged_85/Evidence and Wikipedia:Requests for comment/Jagged 85/Computer Games Evidence for examples), spins a completely erroneous tale of a Japanese woman formulating the entire 4000-series chipset back in 1968, which was subsequently presented to Bob Noyce by Sharp engineer Tadashi Sasaki, who convinced Busicom to fund the project. Jagged reaches this conclusion by completely misreading an oral history with Sasaki, in which the engineer merely states that the Japanese woman was the first person he was aware of who articulated the general theory of a single-chip von Neumann architecture and that he subsequently discussed the idea of the microprocessor with Noyce in 1968, while also connecting Busicom and Intel for the calculator project, which was not based on a single-chip CPU at that point. Sasaki does not claim that the design of the 4004 was already articulated at that point or that he sent Busicom to Intel to fund the 4004. This is the kind of nonsense that got Jagged banned. Indrian (talk) 00:13, 9 May 2017 (UTC)[reply]
Yes. Just to be sure, I was not at all trying to argue for Jagged 85, just for the real story. As far as I remember from years ago, Busicom went to Intel to produce what would now be called ASICs - not generally useful for anything other than a specific type of calculator - and it was all Intel's doing that the 4004 was produced instead. But as I understand it now, Busicom was active in the discussion process and may have helped with the logic design. There may have been politics in the patent process, or maybe just money. Exactly why no one from Busicom is on the patent may be lost to history. Oh, and as I understand it, the 4004 is a Harvard architecture, with separate instruction and data address spaces. Gah4 (talk) 00:59, 9 May 2017 (UTC)[reply]
Gotcha. Yeah, Shima was intimately involved in the logic design, which is under reported in most sources. That involvement was not really planned though and came about because what was supposed to be a completed chip when he came to Intel had barely been started. Nothing wrong with playing up that role so long as the article is clear that Intel took the lead in developing the concept. Indrian (talk) 02:25, 9 May 2017 (UTC)[reply]
External links modified

Hello fellow Wikipedians,

I have just modified one external link on Intel 4004. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 14:57, 14 November 2017 (UTC)[reply]

(stated importance of this chip in the first paragraph. The average Wikipedia reader would have no idea this chip is central to the development of the computers in their lives.)

(stated importance of this chip in the first paragraph. The average Wikipedia reader would have no idea this chip is central to the development of the computers in their lives.) As far as I know, as with many inventions, the time was ripe for it; if not Intel, someone else would have done it soon enough. Society tends to miss this, for example in the case of the telephone and the incandescent lamp. Well, without the 4004, we might be using descendants of the 6800 and 68000 instead of ones that can trace back through the 4004, 8008, 8080, 8086. Gah4 (talk) 01:02, 10 April 2018 (UTC)[reply]

Tadashi Sasaki involvement

How true is the Tadashi Sasaki thing anyway? As far as we know, Ted Hoff came up with the idea of the microprocessor, but Sasaki seems to imply in this interview (https://ethw.org/Oral-History:Tadashi_Sasaki) that he transmitted an idea an unnamed woman had (it feels wrong not to credit the woman to begin with if this were true) to Robert Noyce in 1968. What's weirder is that he implies he accepted the opinion of the majority formed at the brainstorming session at the time, which he says was a mistake. So he heard the idea, found it nice, but not nice enough to act on, yet then transmitted it to Intel? It feels like a weird sequence of events, all the more so when the Busicom designs after all that were completely divorced from that woman's ideas. I think either something is badly translated, he is lying or confused, or the original story of the 4004 needs a retelling. Mirad1000 (talk) 21:49, 28 January 2023 (UTC)[reply]

first

In Talk:Microprocessor#Intel_4004/TMS1802NC_Dispute there is discussion about which one was first. The 4004 is better known as being commercially available, though the 8008 was more available. Gah4 (talk) 02:34, 24 June 2023 (UTC)[reply]

Possible incorrect information about the TMS0100?

Correct me if I'm confused, but isn't the TMS0100 a microcontroller and not a microprocessor (CPU)? Whathaveigottenmyselfinto (talk) 19:21, 22 November 2024 (UTC)[reply]