Bob Saunders Interview: 5 April 2003
Technology historian Derek Peschel discusses the TX-0—widely regarded as the first transistorized computer—with Bob Saunders. Bob is the author of the TX-0 MIDAS assembler. I went along to serve as amanuensis.
5 April 2003
- Bob:
- Uh... oh.
- Bear:
- I should've brought the cassette recorder last time.
- Bob:
- Oh, to take notes of the conversation.
- Derek:
- I wrote them down that evening. If you have time, it might be useful for you to look over them.
- (Derek is referring to his written notes from our previous visit with Bob. –ed.)
- Bob:
- I can do that. Did you bring them, or did you want to email them?
- Derek:
- I brought them; they're on paper. So anyway, I was wondering if the [TX-0] instruction sets were compatible with each other, because you went from two opcode bits to six.
- Bob:
- A change was evidently made with respect to compatibility early on. At this point, I don't remember the details, because it's been too long. At one point, the instruction set had a "clear left" and "clear right", which would clear the left and right bits of the accumulator, respectively. That was apparently done away with, because it was chewing up an extra bit they decided they couldn't afford to use for something as mundane as that.
- So, there was a major revamping of the machine to go from the initial two bit opcode up to five. Then, it got changed again—but that was strictly an augmentation to include this index register stuff, when they added the index register to the machine a while later.
- Derek:
- As I understand it, the first jump—when they went from two to five [opcode bits]—they didn't decode all the possibilities.
- Bob:
- That's right, because there was some stuff...
- Derek:
- The paper said they had something like six opcodes, or something.
- Bob:
- Well, it was around six, because what was added was a "load" instruction, a "load-" and "store-live-register" instruction, an absolute transfer... um, so that's one, two, three... four right there. That probably was all the ones they added in the first go-round; of course they added more when they put in the index register.
- Derek:
- Okay, but were the top two bits of those five bit opcodes still the same?
- Bob:
- Oh yes, absolutely, and so if the next three bits of the opcode were zero, then the instruction would have been exactly what it had been before it was modified.
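- (The compatibility scheme Bob describes can be sketched as follows. This is an illustration based on the conversation only; the field positions and widths are assumptions, not verified TX-0 decoding logic. --ed.)

```python
# Illustrative sketch of the opcode-compatibility scheme described above.
# Assumptions: 18-bit word, opcode in the high bits, address in the low bits.

def decode_two_bit(word):
    """Original layout: 2-bit opcode, 16-bit address."""
    return (word >> 16) & 0b11, word & 0xFFFF

def decode_five_bit(word):
    """Expanded layout: 5-bit opcode, 13-bit address."""
    return (word >> 13) & 0b11111, word & 0x1FFF

word = (0b01 << 16) | 0o1234          # an old-style instruction (small address)
old_op, _ = decode_two_bit(word)
new_op, _ = decode_five_bit(word)

# If the three new opcode bits are zero, the top two bits of the five-bit
# opcode match the old two-bit opcode, as Bob describes:
assert new_op >> 3 == old_op
assert new_op & 0b111 == 0
```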
- Derek:
- But with the smaller address space...
- Bob:
- Correct. Which didn't matter, because...
- Derek:
- There wasn't any memory any more to fill the bigger address space.
- Bob:
- Right. Yep.
- Derek:
- Okay, and then they filled up the five bit opcodes too... the top three bits.
- Bob:
- When the index register went in, that filled up the space. Okay, you had my thesis project.
- Derek:
- Yeah.
- Bob:
- At the time that was done, the instruction set had been expanded, but the index register had not yet been added. Of course, there was a lot of stuff in there relating to the magnetic tape drive.
- Derek:
- Well, a lot of that stuff was shoehorned under "operate" anyway.
- Bob:
- Yes. The guts of the magnetic tape stuff was all buried in the "operate" instruction, which is where the rest of the I/O stuff was, as well.
- Derek:
- There's a program called MESS, which is a framework for emulators. It grew out of MAME [Multiple Arcade Machine Emulator] which is a framework for emulators for arcade game hardware. So basically, they have a bunch of CPU emulators, and some other specialized video memory code and things like that. You kind of pick the ones you want, tweak for your particular machine, throw in a ROM image, and then all of a sudden you have... Pac-Man, or whatever. I was talking to the guy who had written the... er, is maintaining the PDP-1 emulator. He actually seemed interested in doing a TX-0 emulator as well.
- Bob:
- If he's got a PDP-1 emulator, doing the TX-0 would be pretty damned simple, because the machines were—architecturally—very similar.
- Derek:
- Yeah, they were.
- Bob:
- Considering who built them, it's not surprising.
- Derek:
- Well, the PDP-1 added a few things. I don't think the TX-0 had sequence break, which is interrupts. They came up with all these funny names...
- Bob:
- Well, no. Interrupts were added later. Of course the technology had been in use before that, because IBM had used it on the 704 and 709.
- Derek:
- Right, but the TX-0 was designed to be a simple computer, so...
- Bob:
- Yeah, that's right. As I mentioned, its intended purpose was twofold. One was to establish the feasibility of a transistorized CPU. Second, to do some experiments in exercising a very large memory.
- Derek:
- How much of the computer was transistorized?
- Bob:
- All of it. Well, I take it back, that's an exaggeration. The processor, consisting of the registers, the logic circuitry, the memory drivers, and all that related hardware was transistorized. The thing was driven by clock pulses from a time pulse generator stack which was not transistorized. Those were vacuum tube delay line units. You'd put in a pulse and the delay line thing would delay the pulse, reshape and amplify it and put out the result... so if you looked at the pictures of the business... I know there's a photograph in there.
- Derek:
- There is, in here. (searching through thesis)
- Bob:
- Okay, what've we got here? Okay, so... now this was the installation at Lincoln Labs. So, this is the transistorized processor stuff here in this bank, and this stuff is probably the big memory—but I don't know because I never saw it—and I don't see the delay line stuff. Now, when it was set up at MIT, the processor stuff was on the left, the delay line rack and the tape drive were on the right, and farther to the right of that, around the corner, was the power supply rack.
- Derek:
- Did they change the console units to match?
- Bob:
- No. Same console.
- Derek:
- But, they didn't even move the parts around?
- Bob:
- No. They didn't do a thing.
- Derek:
- Picked it up, put it back down again.
- Bob:
- Yep.
- Derek:
- You told me... okay, so the timing pulses were tube driven.
- Bob:
- Yeah.
- Derek:
- You told me something else was tube-driven. Was it the amplifiers for the memory, or something? Or was the memory completely transistorized?
- Bob:
- The memory on the TX-0 was completely transistorized. The big memory that the machine had been built to exercise—which was subsequently used on the TX-2—had vacuum tube drivers, because transistors with the necessary power and bandwidth were not yet available.
- Derek:
- Don't delay lines change their rate according to temperature?
- Bob:
- It depends on how they're made. The short answer is, "yes, they do," but the computer room was air conditioned so it wasn't really an issue.
- Derek:
- I heard the TX-0 was a pain to turn on and off.
- Bob:
- Well, not much of a pain. It required a little bit of fiddling. What you had to do was go over to the power panel and press the "start" button. Then the sequencers would bring up the power on the supplies for the transistor racks and everything else. After the vacuum tubes had had a minute to warm up, you went over to the pulse generator panel, pushed the "generate a pulse" button, and checked to make sure you had gotten exactly one. If the oscilloscope said that, yes, there's a pulse and no more, then you were on the air.
- Derek:
- If you'd gotten one, was it then recirculating?
- Bob:
- Yes. The delay lines were connected in a ring so that the timing pulses... there's diagrams in this of what the pulses were, so during each machine cycle... there were about five or six pulses that were used for things, which were spaced somewhat irregularly depending on the hardware requirements. The last pulse generator was connected round-robin back to the first one, so that pulses kept running around as long as the chain was intact.
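- (The round-robin pulse chain Bob describes can be modeled as a ring of delay stages, as in the toy sketch below. The stage count is made up; real spacing was irregular, as he says. --ed.)

```python
# Toy model of the timing-pulse ring: a single pulse injected into a chain
# of delay stages circulates forever because the last stage feeds back into
# the first. Six stages is an arbitrary illustrative number.

from collections import deque

NUM_STAGES = 6
ring = deque([0] * NUM_STAGES)

ring[0] = 1                    # the "generate a pulse" button

for cycle in range(3):         # three full trips around the ring
    for _ in range(NUM_STAGES):
        ring.rotate(1)         # each stage hands its pulse to the next;
                               # the last stage wraps around to the first
    assert sum(ring) == 1      # exactly one pulse, still circulating
```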
- Derek:
- Right. (pages shuffling) Here we go. No, no, this is different. This is for core memory I think.
- Bob:
- Yeah, it'll show you wave shapes for...
- Derek:
- Yeah. (pages shuffling)
- Derek:
- Did you ever get more than one pulse? Or none?
- Bob:
- I never did. The pulse generator was cleverly enough built that it was intended you'd get one pulse when you pushed the button, and as far as I know, it always did... it being not very difficult to design a circuit that will do that.
- Derek:
- That's true. But sometimes... save money and they leave that out.
- Bob:
- In those days it would've taken both sides of a dual triode to do it, so it's not as if we're talking heavy machine costs, here.
- Derek:
- After that, I guess you walked over to the console, and loaded your program.
- Bob:
- Yep.
- Derek:
- And then you were running.
- Bob:
- Mmhm.
- Derek:
- Did you have to reverse the sequence for turning off the machine?
- Bob:
- For turning off the machine, basically you walked over to the power panel, pressed the big red button, and it shut down.
- Derek:
- I see.
- Bear:
- I don't think anybody ever built a circuit to suck a pulse out.
- Derek:
- No, I think you're right. I don't know about suck a pulse out. You could gate it so that it went away, and then put another one in. There are serial computers that work by trains of pulses that just keep getting regenerated, and they get taken out of various rings and put into various other rings. You'd need something like that for them, because you can't do anything with your pulses without some sort of steering for them. I suppose it would be bad to have pulses already in and then generate another one.
- Bob:
- That would sort of screw things up, and make them not work very well. Speaking of pulses reminds me of timing. I was thinking of timing, and that the machine you can buy at Wal*Mart is 40,000 times faster than the TX-0. And then it occurred to me that... did you understand all of what I was saying about the generation of music and the obsolescence of the old issues with respect to doing it?
- Derek:
- (aside) We had a conversation about software-created music and why it's now irrelevant. Not only is there a huge amount of brute force at your disposal, but it also doesn't cost very much money. So someone else can do the design once, and you can go just buy a sound card for a hundred bucks. But I still think that the theory is interesting, because why should we let the sound card designers do the only design work?
- Bob:
- It is interesting. The question then arises, are you interested in generating the tones yourself from your own formulas, or are you more interested in dealing with recreating tones from existing sources? And of course, you can do things with either, and people are doing some of each of those these days. If you're building a keyboard like that one, a standard Radio Shack $100 keyboard, it has 100-odd voices in it, which represent basically a hundred sets of locations in a read-only memory, giving the waveshapes.
- Derek:
- For such a fraction of a second per voice.
- Bob:
- And all you have to do to make the particular pitch is read out the read-only memory at the appropriate time, throw it into the D[igital]-to-A[nalog] converter, and you're done. Bear in mind that all of the initial fiddlings we did with trying to generate sound on the computer were simply to construct very poor-man's D-to-A converters.
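- (The read-out-the-ROM-and-feed-the-D-to-A scheme Bob outlines is essentially wavetable synthesis; the sketch below is a bare illustration. The table size, sample rate, and sine waveshape are arbitrary choices, not details of any actual keyboard. --ed.)

```python
# Minimal wavetable-synthesis sketch: one stored waveshape, read out at
# different rates to produce different pitches. All parameters here are
# illustrative assumptions.

import math

TABLE_SIZE = 256
SAMPLE_RATE = 8000
wavetable = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def render(pitch_hz, n_samples):
    """Read the table at a phase increment proportional to the pitch."""
    phase, out = 0.0, []
    step = pitch_hz * TABLE_SIZE / SAMPLE_RATE
    for _ in range(n_samples):
        out.append(wavetable[int(phase) % TABLE_SIZE])  # -> D-to-A converter
        phase += step
    return out

samples = render(440.0, 100)   # 100 samples of an A-440 tone
```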
- Derek:
- Without being interfered with by your program.
- Bob:
- Yeah.
- Derek:
- I think it's interesting how you can use digital logic to combine the waves before converting them to analog.
- Bob:
- The prospects for doing that sort of thing... you can now do just all sorts of stuff. [Bob's wife] Harriet needed to prepare sound cues for a production she was doing. She wanted to start with existing material, some of which came on a vinyl record, and some of which came on a cassette tape. In both cases it was necessary to clean up some noise. She got some software over the internet to do this. The software will let you work in either the time domain or the frequency domain. If you're working in the time domain and the particular amplitude you want to tweak is a pop because of an irregularity in the vinyl, you can just change the amplitude for that particular time and fix it. If on the other hand you've got hiss from a tape cassette, then what you do is you do a Fourier transform into the frequency domain, subtract off the frequency spectrum of the hiss, and transform it back. It's billions of computations, but with a Pentium processor, who gives a shit?
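- (The transform, subtract, transform-back recipe Bob describes is classic spectral subtraction. The sketch below is a bare-bones illustration, not the software Harriet used; it uses a slow pure-Python DFT on a tiny made-up signal, where real software would use an FFT. --ed.)

```python
# Bare-bones spectral subtraction: Fourier-transform a noisy signal,
# subtract an estimate of the noise spectrum, transform back.

import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

N = 64
tone  = [math.sin(2 * math.pi * 4 * t / N) for t in range(N)]         # the signal
hiss  = [0.3 * math.sin(2 * math.pi * 25 * t / N) for t in range(N)]  # fake "hiss"
noisy = [a + b for a, b in zip(tone, hiss)]

spectrum      = dft(noisy)
hiss_spectrum = dft(hiss)   # in practice, estimated from a silent passage
cleaned = idft([s - h for s, h in zip(spectrum, hiss_spectrum)])
```

Because the transform is linear, subtracting the hiss spectrum here recovers the tone almost exactly; with a real recording the noise spectrum is only an estimate, so the cleanup is approximate.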
- Derek:
- And it works. Did she do the noise removal herself, or did you do it?
- Bob:
- She did it. I don't know how the software works. She bought it, she made the disc. I gave technical advice from time to time on what it was doing, but she was the one who did all the dirty work.
- Derek:
- That's pretty amazing. And then you can do the same thing with images. I don't know what other data people manipulate these days.
- Bob:
- It's getting to be more and more stuff. Another example of doing this sort of thing is Computer Aided Tomography. I saw this in action, one time, when Harriet needed a CT scan to attempt to pursue an ailment she was having. The X-ray tube and the detectors go round the tray you're lying on. They go around, and they go back, they go around, and they go back, as the tray is slowly moved through the machine. The results of all of this from the detectors are digitized, and then number-crunched to make the pictures. The number crunching is basically a Fourier transform sort of thing, and the amount of computations that would have to be done is stupendous. But again, with today's processors, who gives a damn? Because processing time is so fast and so cheap that it's totally irrelevant. The more interesting questions are how you arrange to pump 20,000 watts from a non-moving supply into this rotating assembly to feed this X-ray tube, which runs at something over 100,000 volts, and put all this power in and get all the information out and still have the thing work reliably.
- Derek:
- And not put any power into the patient.
- Bob:
- Actually you are, because from 20,000 watts you get a lot of X-rays. So the X-ray dosage you get from one of these things is a hell of a lot more than what you get from when you go see your local dentist. But still, if you need to do it, you do it.
- So, I could look at your notes if you want.
- Derek:
- Well, they're upstairs.
- Bob:
- I don't think we need anything more from down here.
- (track break)
- Derek:
- Can you read my handwriting?
- Bob:
- Oh, easily. The machine was on the second floor of Building 26, in room 26-248. The PDP-1 was installed in 26-260.
- Derek:
- (aside) Are you still recording?
- Bear:
- Yep.
- Derek:
- Oh good. Do you think we're actually picking anything up?
- Bear:
- Yep.
- Derek:
- Good.
- Bob:
- When did the memory go from 4k [18-bit words] to 8k? It would've been 1959 or 1960, I think.
- Derek:
- Eventually I want to match up all the dates, but that will take a while.
- Bob:
- The documentation will give clues for the dates things came out.
- Derek:
- Right. That's what I had in mind.
- Bear:
- Was the 8k the "large memory" you guys were testing?
- Derek:
- No, they started with 64k.
- Bear:
- That's what I thought.
- Derek:
- Which fills the entire address space of the machine, and is why there was only room for two bits for opcodes. Then they took [the large memory] away, and they didn't have enough memory to run all the interesting programs. They had to create more opcodes so the programs could be shorter—which intruded on the bits for the addresses, but since there wasn't any memory any more and there probably never would be again, it was okay.
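- (The bit arithmetic behind Derek's point: in an 18-bit word, opcode bits and address bits compete directly. The 18-bit word length is the machine's; the rest is just the arithmetic from the conversation. --ed.)

```python
# Opcode/address tradeoff in an 18-bit instruction word, as described above.

WORD_BITS = 18

def address_space(opcode_bits, word_bits=WORD_BITS):
    """Words addressable with the bits left over after the opcode."""
    return 2 ** (word_bits - opcode_bits)

assert address_space(2) == 65536   # 2 opcode bits leave 16 address bits: 64k words
assert address_space(5) == 8192    # 5 opcode bits leave 13 address bits: 8k words
```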
- Bob:
- Actually, I can comment on why it took MIDAS longer to get ported to the PDP-1. In fact, I don't know that it was actually ever done.
- Derek:
- It was. I showed you the manual last time. They said that you were involved in the port, actually.
- Bob:
- I probably was to a modest degree, but I didn't do all of it. The reason for doing the original port of the MACRO assembler was it was smaller, well-understood, and we were reasonably confident that it could—in fact, as it was—be done in a weekend. MIDAS was a bigger thing. MIDAS I think required the use of the index register. I'm not positive on that, I'd have to look at the code.
- (Bob is referring to the so-called "weekend hack", which you can read more about at tixo.org. --ed.)
- Derek:
- I hadn't considered the algorithms, but I guess that would make sense.
- Bob:
- If it was an index register thing, that would've made porting it to the PDP-1 considerably more difficult, because [the PDP-1] did not have one.
- Derek:
- You can fake it, but it's not exactly the same.
- I had this momentary lapse because I was checking the date on the computer and I got confused between the Japanese date format and the American date format and some other late-at-night date format. It's just kind of funny that the computer doesn't print leading zeros in certain cases and I was thinking, "wait a second, here." It's, uh, hard to deal with dates. Or I've seen other programs which don't print leading zeros, and mine does, or something. It was kind of silly.
- Apparently the PDP-1 found a new home at MIT after it was decommissioned by the lab. Jack Dennis took it over and kind of kept it running, and improved the OS and hacked everything up to unbelievable amounts.
- Bob:
- Well, it was the timesharing project which was Dennis' big deal.
- Derek:
- But then I think eventually the computer was surplussed.
- Bob:
- It probably was, but that would've been long after I was gone.
- Derek:
- I guess what I'm saying is it continued to run until at least 1970. In the Computer Museum, they have routines for faking ASCII on the Flexowriter, for reading and printing, which seems kind of a pain, but someone wanted to do it. How much do you think the timesharing project was exploring original ideas, and how much do you think it was trying to put them on a smaller machine?
- Bob:
- In the first instance, the effort was simply to demonstrate that it could be done. We had every reason to suppose that things could be done along those lines. The MIT project had the effect of advancing this demonstration, and eventually led to the implementation of a timesharing system on a very large computer called the AN/FSQ-32, of which four were built. I ended up using the one in Santa Monica, California.
- Derek:
- I know that SAGE became AN/FSQ-7, but... (aside) they had this whole cataloging system for various pieces of electronic equipment...
- Bob:
- The [AN/FS]Q-32 was a Super-SAGE.
- Derek:
- Okay. I was explaining to Ray that the military has this...
- Bear:
- Yes, I'm familiar with that.
- Derek:
- I thought MIT was working on CTSS at the same time.
- Bob:
- I suspect that was a little later.
- Bear:
- It was '62. I've been reading about the history of z/VM. Late '61 or '62 was right around when they were working on it.
- Bob:
- The Q-32 timesharing system in Santa Monica was happening in 1962 and 1963. Actually, I started working on it in 1964.
- Derek:
- Is that when you were at III?
- Bob:
- Yes.
- Derek:
- I'm sort of getting a faint memory here. There's a book that has a copy of Peter Deutsch's LISP interpreter for the PDP-1, but they also mention the Q-32, so I guess there was some...
- Bob:
- You may be referring to a book called The Programming Language LISP—Its Operation and Implementation, which III published.
- Derek:
- Oh, okay, that would explain it. Yes, it was by Edmund Berkeley.
- Bob:
- Berkeley was one of the authors of the thing. I had an article in it, also, in which I attempted to dispel some of Berkeley's misconceptions.
- Derek:
- (laughs) This is sort of a constant throughout the history of LISP, I think.
- Bob:
- There was a listing of much of or perhaps all—I think it was all—the compiler I wrote to generate executable code from LISP S-expressions to run on the Q-32.
- Derek:
- I was thinking of the PDP-1 one. I guess the Q-32 one is in there, too.
- Bob:
- There was not early on a significant effort to do any LISPish stuff on the PDP-1, simply because the machine was too small. LISP takes a lot of room to run, and the Q-32 was big enough, barely, to do some useful things. Of course, nowadays, if one wants to run LISP you know you can get a 256 MB Pentium box with room to burn, so if you want to do LISP, it's no problem. But back in those days, this didn't exist. The Q-32 I think was 65,000 words.
- Derek:
- How long is a word?
- Bob:
- 48 bits, I think.
- Derek:
- So that was more than twice the size of the [IBM] 7090.
- Bob:
- Yeah. It used 7090 technology. IBM built it using that technology.
- Derek:
- I know they built SAGE. But SAGE was a vacuum tube machine.
- Bob:
- Correct. That is the [AN/FS]Q-7.
- Derek:
- It was the last of the vacuum tube machines, practically.
- Bob:
- Yes.
- Derek:
- Were the instruction sets similar?
- Bob:
- Probably not, but I really don't know. I saw the Q-7 physically—you know it occupied an entire room—but I never had any contact with programming the thing or attempting to do so. I do not know anything about the nature of the instruction set or the word size or any of the other technical characteristics of the machine. All I know is that it was great racks of vacuum tube panels and it was huge and it required God-awful amounts of power and air conditioning.
- Derek:
- Well, someone even sent the manual for that to Al [Kossow]'s site. I don't know where people get all this stuff.
- Bob:
- Now, Les Earnest, whom I worked for at Stanford for a couple of years, was looking for information on timesharing systems of that vintage and asked whether I had anything relating to the Q-32. I didn't know whether I had anything dating back from that era, though some months later I eventually found some information. But, it was about the Q-32's instruction set. So I sent him an email saying that I had found this and he could have it if he wanted it. Based upon what he had told me he was looking for, it didn't sound like anything he would be particularly interested in, and he said it wasn't.
- The machine was basically an enlargement of the 7090: the same basic design and organization with more instructions, more registers, longer word length, and more memory. Aside from being increased in all of the ways you could increase a machine in those days, it was basically the same sort of animal using basically the same sort of technology. It wound up having some interesting problems. Now, it turns out that core memories are fairly fussy about their temperature. So in order to control the temperature, the early core memories were in an oil bath in a big tank, through which oil was circulated to maintain the temperature of the things. This technology did not last long because they figured out how to use better cores, and used...
- Derek:
- ...it probably would not have scaled, either...
- Bob:
- ...and used air cooled memories. The [IBM] 7094 had one. But The Q-32 used the oil-tank memory and one day it stopped working. When they inspected the innards to find out why, they discovered that one of the cores was sitting fairly close to the outlet from the oil circulator, and the oil circulator caused it to spin. Eventually it succeeded in sawing its way through the copper wire supporting it, and it had fallen into the bottom of the tank.
- Derek:
- Oh, no.
- Bob:
- So they had to fix it.
- Derek:
- Back to LISP for a second. I know that someone—I forget who—and then Peter Deutsch after him, got it working on the PDP-1. It would run on a 4k machine, although I don't think it would do very much.
- Bob:
- I don't imagine it would do much of anything on a 4k machine.
- Derek:
- It would also run in extend mode, which uses 64k. So you could do more, that way.
- Bob:
- Now the Q-32 LISP system was an interesting expansion of the state of the art, in that it was the first LISP system which did not even have an interpreter. It was strictly compiler based and there were some other [interesting] technical fiddles on it, but that was the most important one.
- Derek:
- What if you wanted to create code at runtime?
- Bob:
- You would just invoke the compiler. That would be part of the game. If you were going to invent and pass a functional argument, that was the way you would have to do it.
- Derek:
- I think MIT got to that point, eventually, but it took them a long time.
- Bob:
- I don't know why that had not been done earlier, because the compiler which I used was based upon one that was running at MIT, so they were able to compile code at the time—and evidently occasionally did. All I really did was to change the MIT compiler to emit Q-32 code, as opposed to 7090 code.
- Derek:
- You said that wasn't even much of a change.
- Bob:
- As the machines were pretty similar, yes. So, to do the Q-32 LISP system, then, aside from tweaking the compiler, there was the necessity to provide the machine language procedures to do the guts of the system such as cons, car, and cdr, and also to write a garbage collector.
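- (A minimal picture of the primitives Bob mentions, cons, car, and cdr, sketched in Python; in the Q-32 system these were machine-language procedures, not Python, and this ignores the garbage collector entirely. --ed.)

```python
# Minimal cons-cell sketch: cons builds a pair, car and cdr take it apart.
# Lists are chains of pairs ending in None (standing in for NIL).

def cons(a, d):
    return (a, d)

def car(pair):
    return pair[0]

def cdr(pair):
    return pair[1]

lst = cons(1, cons(2, cons(3, None)))   # the list (1 2 3)
assert car(lst) == 1
assert car(cdr(lst)) == 2
```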
- Derek:
- The thing about compilers is you have to get them running somehow, so you either write them by hand or you write an intermediate compiler, or you compile from another machine. So, obviously you had to get the whole thing working.
- Bob:
- The compiler was written in LISP. I spent the fall of 1963 at Stanford working on the thing. It was easy to do since the compiler was written in LISP; you simply ran the stuff on the 7090's LISP system and saw what it did.
- Derek:
- Eventually, hopefully, you get the state where you can load a tape in your new machine and start using it.
- Bob:
- And that's eventually what I did. The last thing I did at Stanford was to have the compiler compile itself, put the resulting machine code on a magnetic tape, which I then took down to Santa Monica, loaded on the Q-32 and then ran it. Of course, there were a couple of bugs, which required that the process be iterated once or twice, but they weren't all that serious and it came up really rather quickly.
- Derek:
- Even though you didn't find out anything about the timesharing system, can you remember anything about it? Was it based on any other designs? The one for the Q-32.
- Bob:
- No. So far as I know, it was not. One of the things they decided to do was to put a front-end processor on the machine, for which they used a PDP-1, to handle all of the terminals—which of course were Model 33 Teletypes back in those days: they were cheap, they were reasonably reliable and they did the job. You could buy one from Teletype in Skokie for a little over $500, which, compared to the price of anything else those days, was peanuts. I never actually owned one myself, because computer terminals started coming down the pike not too long after that, and it was evident [the Teletypes] were going to be obsolete fairly soon.
- Derek:
- The interesting thing is they had had research CRT displays before, but they were expensive. Not just the one at MIT, but actual units for connecting to the computer, modularly.
- Bob:
- They were expensive. Part of that was technological issues. The early CRT displays such as the one that was on the TX-0 and subsequently the one that was on the PDP-1... I think the TX-0 one used electrostatic deflection—I'm not sure—but the PDP-1 one used magnetic deflection and there was a large transistorized driver circuit to drive the magnetic fields in the deflection coil. The thing took a lot of power, and was obviously pretty expensive to build because it had a large number of high-power, high-speed transistors, which at that point were not cheap. So it isn't surprising that the list price on the Type 30 display was about $35,000. It was obvious to anybody who would think about it that displays were never going to be really popular until some TV technology could be harnessed to do it. TVs were much cheaper by comparison. [Their] deflection circuits did not require nearly as much power to drive, because you had 60 microseconds to get from one side of the screen to the other as opposed to the few microseconds you would have if you were trying to do it on a point-plotting basis.
- Derek:
- The problem with bitmap displays is they eat up huge amounts of memory if you have only a small amount.
- Bob:
- Of course, back in those days, that was exactly the problem because memory was not cheap. Nowadays, memory is dirt cheap, so having several megabytes on your video controller card is not even a bump in the rug. That was the technological advance that would be needed in order to make it work. [Nolan] Bushnell figured out how to do it well enough on Pong to make a viable game out of it. I believe that all of the computations to drive that display were done on the fly, as opposed to having any sort of memory, but as long as you can drive your Z-axis with sufficient timing precision, that's enough.
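- (The arithmetic behind the memory problem Derek and Bob are discussing: even a modest one-bit-per-pixel framebuffer dwarfed the main memory of these machines. The resolution below is a hypothetical round number, not any actual display. --ed.)

```python
# How much memory a simple bitmap framebuffer eats, measured in the 18-bit
# words these machines used. Resolution and depth are illustrative.

def framebuffer_words(width, height, bits_per_pixel, word_bits=18):
    """Whole 18-bit words consumed by a width x height bitmap."""
    return width * height * bits_per_pixel // word_bits

# A hypothetical 512x512 one-bit display against a 4k-word machine:
words_needed = framebuffer_words(512, 512, 1)
print(words_needed)   # several times the machine's entire 4k memory
```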
- Derek:
- People will keep putting quarters in.
- Bob:
- People will keep putting quarters in until they won't fit any more. That was the story on that. I suspect that the six CRT terminals that III built for Stanford back in 1969 or so were very likely about the last ones that were built using the high-power, non-raster display technology.
- Derek:
- I think you're right. Stanford built their own bitmap display system, probably not long after that. Because they had a larger capacity—it might've been 32 or 64 terminals—that system got much heavier use. Who was it that created, back to the Q-32 timesharing system... who were the people who created it?
- Bob:
- The overall director of the effort was a chap named Jules Schwartz, who was the inventor—among other things—of a language called JOVIAL, which is an acronym for "Jules' Own Version of the International Algorithm Compiler"... or "International Algorithmic Language", rather. The fellow that I was working with, who was sort of SDC's [System Development Corporation's] guy in charge of the particular project, was a chap named Clark Weissman. Of course there were various other hangers-on, and people involved whose names I do not now recall.
- (Actually, JOVIAL stands for "Jules' Own Version of the International Algebraic Language." --ed.)
- About Steve Russell... tell you a bit more about Steve Russell. Steve Russell was a member of the [MIT Tech] Model Railroad Club. He had initially done undergraduate work at Dartmouth, and had wound up at MIT working in McCarthy's LISP effort. It was he who came up with the idea for Spacewar and wound up eventually coding it. Not too long after all of this, McCarthy left MIT to take a position at Stanford, where he remained. I don't know whether he's still doing any teaching or if he's fully retired.
- Derek:
- He's there; he's on campus.
- Bob:
- When he went to Stanford, Steve Russell went along as a sort of a general factotum and shit-kicker. So, they were at Stanford in the fall of 1963 and spring of 1964 when I was working on LISP compilers. In fact, I had a desk in Steve Russell's office. They had brought up Spacewar on their PDP-1; they had one. Steve was there for quite some time and was doing other things. Of course I left Stanford and was at Hewlett-Packard for 20 years, and I had lost track of Steve, until one day in the spring of 1963 when I was walking down University Avenue in downtown Palo Alto, and discovered that I was walking beside him. Literally, I ran into him on the street.
- Derek:
- What was the year again?
- Bob:
- That would be 1993—the spring of that year.
- Derek:
- Okay.
- Bob:
- I discovered that he had been living in Palo Alto in a rented house, for some years. So of course, we got together and talked about what had been happening. He was there for a little while longer and then went to work for somebody in San Jose, and he's working down in San Jose now. I've got an email for him somewhere, which I could dig up if you wanted to talk to him. That is the story on Steve Russell.
- Derek:
- He wrote Spacewar before he left MIT for Stanford.
- Bob:
- Oh, yes, yes, absolutely. The project began in the fall of 1961 and Spacewar was up and running early in 1962. [It] was a featured exhibit at the MIT open house in May of that year.
- Derek:
- I'll say. I bet it made a huge impression.
- Bob:
- It did. We were able to connect a large screen television repeater to the Type 30 [display].
- Derek:
- So that was a raster scan of a vector display?
- Bob:
- Yes, it was.
- Bear:
- Accomplished with a camera on the...
- Bob:
- No, it was all done electronically. I don't know how they did it.
- Derek:
- Did you have a deflector circuit that deflected onto a camera tube, do you think?
- Bob:
- No.
- Derek:
- Hm.
- Bob:
- I don't think so. At this point I simply have no idea how the thing worked. It wasn't real good, but it was better than nothing.
- Derek:
- And the control boxes had been built by that point.
- Bob:
- Oh, yeah. Those came along early in the game. The program originally came up, long about November of '61 or so, and it didn't take long for me to recognize that to try to control the things with the switches on the front panel of the machine was a total crock, and so building the control boxes was an evening's work.
- Derek:
- Not to mention it would save your computer from costly repairs.
- Bob:
- So that happened early on. I probably showed you, when you were last here, a reconstruction which I have of the original control boxes.
- Derek:
- No!
- Bob:
- I didn't show you that?
- Derek:
- No.
- Bob:
- I'll go get one.
- Derek:
- You told me you ported [Spacewar] to the HP terminals.
- Bob:
- I implemented the program on HP, because HP had a point-plotting display at the time. It was the Type 1301. I built the control boxes I am about to show you, at that time. This would have been in 1971 or 1972.
- Derek:
- How come the control boxes were so much cheaper to build than joysticks? Because you guys had the parts?
- Bob:
- You'll see when I show you. This is a joystick. Let me show you a different one which has its cover off.
- Bear:
- That looks like an impressive piece of...
- Bob:
- Yes. That obviously is a fairly expensive piece of hardware.
- Bear:
- Let's see. "812 Gyro-Pilot Flight Control".
- Derek:
- This is a real joystick. It's not just some wimpy gamer's joystick.
- Bob:
- Here's one with the cover off.
- Bear:
- Holy schultz.
- Bob:
- As you can see, there's a lot of shit in there.
- Derek:
- I was thinking of the cheaper ones, the ones you can now buy.
- Bear:
- The ones Derek and I grew up with, on the Atari, were much simpler.
- Bob:
- You could see why these were not widely used. They were simply too damned expensive.
- Derek:
- What is all that stuff?
- Bob:
- There are potentiometers here and here to read out the X and Y positions. I'm not sure what this set of contacts is for.
- Derek:
- The chair, maybe?
- Bob:
- Of course, there are several trigger mechanisms.
- Derek:
- Is this a microphone, or something else?
- Bob:
- No. Push it.
- Derek:
- Oh. Hm.
- Bob:
- I thought it would be fun to connect these up to the computer, but I never did it, simply because at that point the A-to-D converters necessary to read out the pots were not cheaply available. Now of course you can get them; just run it into your sound card and it's no problem. Now, if you compare that technology to this...
- Bear:
- Aha!
- Derek:
- Yes, I can see how there would be a cost savings. I thought the originals were made of wood? (hollow knocking) I guess it is wood.
- Bob:
- It is wood.
- Derek:
- It was the metallic paint, that was on top here...
- Bob:
- On the originals, the tops were metal. This is a pretty close approximation to the original control boxes.
- Derek:
- I thought the original had "hyperspace" also?
- Bob:
- It does.
- Derek:
- But it's not labelled on here. Unless maybe you push "thrust" forward?
- Bob:
- No, no, that's it. This is thrust on, and this is hyperspace.
- Derek:
- Oh, this one locks, and this one doesn't. That's nice.
- Bob:
- And the only significant difference between this box and the original boxes is that it has... this one has two-speed turn switches, while the other ones were simple telephone lever switches like this one...
- Derek:
- Did you have to change the program to read those?
- Bob:
- The original program did not deal with this box, so the version of the thing which I did for the Hewlett-Packard machine required a trivial modification to deal with the...
- Derek:
- I guess what I'm saying is, how are the control boxes hooked up to the PDP-1, and how did the program read them?
- Bob:
- Oh. One of the interesting things about the PDP-1 was it was intended for use in laboratory environments, so it had a fairly extensive input-output system. One entire bay was devoted to input-output gadgetry. You could install flip-flops which could be read, and input registers which you could drive with your digital inputs from whatever your gadgetry was you wanted to look at.
- Derek:
- And there were a few bus pulses for synchronization.
- Bob:
- All one had to do was connect this box to that panel, which took a half-hour's work.
- Derek:
- So the front-panel switches were no longer involved at all.
- Bob:
- A modification to the software was needed to replace the I/O transfer which read the panel switches with an I/O transfer to read the I/O bay, and that's all it took.
- Derek:
- When you put it that way, if you make the bits compatible, it's a piece of cake to modify.
- Bob:
- Yeah. Which we did.
- Derek:
- What does that control box hook up to?
- Bob:
- Which, the one I just showed you?
- Derek:
- The one you just showed me.
- Bob:
- At the moment, nothing. It was intended to [be]—and was in fact—hooked up to the HP 21MX machine which I have in the garage. But that machine is hopelessly obsolete. I have not had it on for years, and the last time that machine had power on it was probably sometime in the early to mid '80s.
- Derek:
- So who knows what shape it's in anymore.
- Bob:
- Well, the processor probably runs, the disk drive probably doesn't. The disk drive uses disks [approximately 11 inches in diameter], which are in a plastic case. Each one of them holds as much information as you can get on a three-inch floppy. The technology is long since history.
- Derek:
- The old machines are good for running software. Theoretically all software should be carried forward, but I guess it doesn't work that way.
- Bob:
- It doesn't work that way, because the applications simply wind up no longer being interesting. You don't want to try to run software that was adequate for doing accounting at the time but that dies when the year 2000 rolls around.
- Derek:
- On the other hand quality should at least not go down.
- Bear:
- Or play "Munching Squares" on the X-box.
- Derek:
- Heh, munching X's. Anyway, quality should not go down, and it does seem to do that.
- Bob:
- Sometimes. It was that machine... okay, you have notes here about running the thing on the 2100, which is a 21MX-compatible machine.
- Derek:
- Did the 2100... that was the original name for the series, right, and they came up with a number of other numbered...
- Bob:
- Actually, it was not really the first. The very first machine of that sort that Hewlett-Packard built was the 2116, which was a 16-bit machine which could come with up to 32k of memory on it. They preserved the architecture, which was not very well done. It was an inferior architectural design, for a number of reasons, but they kept that architecture on two extensions to the line: the 2114 and the 2115, which were cheaper versions; and then the 2100, which was a significant repackaging of the machine [which] used the newer and smaller core memories; and then the 21MX and 21XE and some other machines, which were pretty much the end of the line for that stuff. But all of that hardware was used for laboratory instrumentation of various sorts, not so much for commercial data processing, simply because it wasn't big enough.
- Derek:
- A lot of people also remember the timesharing BASIC that they wrote. Maybe it was for a slightly different series.
- Bob:
- No, that ran on that hardware. That was the old HP 2000 BASIC system, [which] ran on the 2100-type machines.
- Derek:
- It's interesting the architecture wasn't very good, because the machine has a certain cult following. Probably because of the timeshared BASIC.
- Bob:
- One of the dumbest pieces of the design was that they screwed up the accumulator test instruction. You want to be able to test for zero, greater than zero, less than zero, greater than or equal, less than or equal, [and] not equal, if you do it right. They did it with a skip instruction, which is fine; if you do it right, you can do it with a single skip instruction which, if you code the bits to be tested properly, can work dandy. Well, they didn't do that. They screwed that up, so that for some of the conditions you want to test for, it takes not one skip instruction but two.
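- (To illustrate Bob's point about condition encoding: if each of the three possible relations to zero gets its own mask bit, any of the six tests becomes a single skip instruction. The sketch below is an invented model of that scheme, not the actual HP instruction set. --ed.)

```python
# Invented model of a well-designed "skip on condition" instruction.
# Bits: 4 = "less than zero", 2 = "equal to zero", 1 = "greater than
# zero".  The skip fires if the bit for the accumulator's actual
# relation to zero is set in the mask.

LT, EQ, GT = 4, 2, 1  # one mask bit per possible relation

def relation(acc):
    """Which single relation bit describes this accumulator value?"""
    if acc < 0:
        return LT
    if acc == 0:
        return EQ
    return GT

def skip(acc, mask):
    """True if the skip instruction would skip for this mask."""
    return bool(relation(acc) & mask)

# With this encoding, every useful condition is one instruction:
SKIP_IF_ZERO     = EQ
SKIP_IF_NONZERO  = LT | GT
SKIP_IF_NEGATIVE = LT
SKIP_IF_LE       = LT | EQ
SKIP_IF_GE       = GT | EQ
SKIP_IF_POSITIVE = GT
```

Because the mask bits are mutually exclusive relations, no condition ever needs two skips; that is the property Bob says the actual design lost.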
- Derek:
- Composing skip instructions can be a pain.
- Bob:
- So they botched that, and there were some other pieces of the design which they also screwed up, having to do mostly with the addressing.
- Derek:
- I noticed that it's a 16-bit machine; it only has 15 bits of addressing though.
- Bob:
- Yeah, and the reason for that is that the top bit was taken as an indirect bit. It was fairly common, though actually not all that good an idea.
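- (A sketch of the addressing scheme Bob describes: the top bit of a 16-bit word marks an indirect reference, leaving 15 bits of address. The names and memory contents below are invented for illustration. --ed.)

```python
# Invented illustration of top-bit indirect addressing on a 16-bit
# word: bit 15 means "the low 15 bits name a word holding the real
# address (which may itself be indirect)".

INDIRECT = 0o100000   # bit 15: indirect flag
ADDR_MASK = 0o077777  # low 15 bits: the address proper

def effective_address(memory, word):
    """Follow indirect words until a direct address is found."""
    while word & INDIRECT:
        word = memory[word & ADDR_MASK]
    return word & ADDR_MASK

# Location 10 points indirectly at location 20, which holds the
# final address 345 (all octal).
memory = {0o00010: INDIRECT | 0o00020,
          0o00020: 0o00345}
```

A direct reference resolves immediately; an indirect one chases the chain, which is why the scheme costs a full address bit.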
- Derek:
- The thing about HP is, you read through their catalogs and you get an endless stream of part numbers and software packages and subpackages and things.
- Bob:
- I haven't kept any of the old catalogs. In fact, I don't think I ever really had much in the way of old catalogs. They never were all that interesting, and were easy to get.
- Derek:
- Back to the Q-32 again... oh... do you need to finish reading my notes?
- Bob:
- Well, if you want them read, I need to get on with it.
- "Why is there no rail society in Saudi Arabia?" Actually there are some model railroad things, but the reason there is not much interest in railroad per se is because there's only one railroad line in the whole country. It runs from Dammam to Riyadh.
- Derek:
- I must have written that down because of something we were talking about last time.
- Bob:
- What I just told you about Steve Russell being in Palo Alto, I obviously mentioned before, because I see it here.
- Derek:
- I don't know if you'd explained that you'd run into him. I think you just said he was there.
- Bob:
- That's basically my comment on your notes.
- Derek:
- Okay. It looks like I got the [unintelligible] of everything.
- Bear:
- I'm still impressed by how much he remembered. He asked me to look at [his notes] and I thought that would be fine until I saw that he'd written down six pages. I couldn't possibly have had anything to add to that.
- Bob:
- So what else would you like to talk about?
- Derek:
- I don't know. How much time do we have? Is that guy going to call back?
- Bob:
- At some point. I could put him off again. It's not that important.
- Derek:
- I was asking about the Q-32 timesharing system.
- Bob:
- Yeah. As I mentioned, they had a PDP-1 for a front end processor for the thing, and a bunch of [Model] 33 Teletypes for terminals, which turned out to be reasonably up to the task that was demanded of them. But I never used it much myself. My connection to that project basically ended when I got out of the LISP business. After that got done, I messed around with III, doing odds and ends; [then] I worked for Stanford for a couple years, and then went to Hewlett Packard for twenty years.
- Derek:
- Did you keep doing LISP at Stanford, or did you do other things?
- Bob:
- I was very little involved with LISP at Stanford. My principal project there was to incorporate the new disk drive into the operating system, which turned out to be a major chore, because the documentation for addressing disk drives generally ranged from skimpy to non-existent. So I had to invent a whole bunch of stuff. I invented a couple of things that proved to be useful to the project. One was the idea of slipping sectors of the disk.
- Derek:
- You mean, skimming from track to track?
- Bob:
- Yeah. So, I made use of that, and I also wound up making use of co-routines to handle interrupts, which is a technique I had not used before and in fact haven't used since, because it turns out that attempting to do subroutines on co-routines winds up being a mess. But it was a neat solution to the needs of the day. The organization which co-routines give you turns out to be implementable using other techniques, and I have in fact used it since.
- Derek:
- You have used it, or you haven't?
- Bob:
- I have used it. The real-estate software program [...] doesn't use co-routines, but it is organized as if it did. The gimmick being that you call a procedure which presents the information to the user that you want to present, waits for the user to consider it, enters his reply, and then when he presses enter, you receive what comes back, take it apart, and do whatever you want to do with it. So, all of the I/O stuff is buried in a procedure call.
- Derek:
- I'm trying to figure out how that works like a co-routine. I know the calling mechanism isn't the same. Is there some state that's stuck in there somewhere that you don't have to worry about, or what?
- Bob:
- The way that the co-routine business worked in the Stanford disk driver was that an interrupt would happen when the particular sector transfer that you had requested had finished. What this would do, then, would be to return, using a co-routine mechanism, to the procedure which had instigated the transfer, as if the request had been a subroutine call, thus returning to the user. The user code would do its thing and then make a subsequent procedure call back to the I/O processor, which would wind up setting it up and then dismissing the interrupt, thus exiting from the hardware-instigated procedure call which had started all this.
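- (The co-routine organization Bob describes can be modeled with a Python generator. This is only an analogy for the control flow, not the original driver code: the driver resumes the user code as if its transfer request had been an ordinary subroutine call, and the user code "returns" by requesting the next transfer. --ed.)

```python
# Invented analogy: a generator plays the role of the user code
# that requests sector transfers; the driver loop plays the role
# of the interrupt-driven I/O processor.

def user_task():
    """User code: each yield requests a sector and suspends."""
    data = []
    for sector in (3, 1, 2):
        contents = yield sector   # request the transfer, suspend
        data.append(contents)     # resumed when it "completes"
    return data

def run_driver(task, disk):
    """Driver: resume the task each time a transfer completes.
    Resuming the generator is the analogue of the co-routine
    return into the procedure that instigated the transfer."""
    gen = task()
    request = next(gen)            # first transfer request
    while True:
        contents = disk[request]   # the transfer "finishing" is
        try:                       # the interrupt
            request = gen.send(contents)
        except StopIteration as done:
            return done.value

disk = {1: "one", 2: "two", 3: "three"}
```

Note the symmetry Bob points out: neither side is "the" caller; each resumes the other where it left off.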
- Derek:
- Did that mean that interrupts were disabled for a long time?
- Bob:
- No, because the routines that the interrupts instigated were carefully kept to relatively short duration, so interrupts would not be disabled for long. Even while this was going on, not all interrupts were disabled. You disabled interrupts from the disk while you were doing this, but you didn't expect to get any anyway, because you hadn't asked it to do something yet.
- Derek:
- It would still be prudent to disable them anyway.
- Bob:
- Well, they probably were, simply because the way the hardware worked. But again, that was not something that was dealt with deliberately, simply because there was no need.
- Derek:
- I think that's called "callbacks" now. "When this happens, I want my code to be called here," and it calls you. Is that what you're thinking of? Is that how it happened? Or am I thinking of something else?
- Bob:
- It sounds like it might be the same sort of thing.
- Derek:
- It would be like passing a function in LISP, and saying, "call this function when this happens."
- Bob:
- Yeah. Kind of.
- Derek:
- Okay.
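- (A minimal example of the callback idea under discussion: pass a function in, and it is called when the event happens. All names here are invented for illustration. --ed.)

```python
# Invented illustration of "when this happens, call this function".

handlers = []

def when_done(callback):
    # Register a function to be called when the event happens.
    handlers.append(callback)

def event_happened(value):
    # The event occurring: invoke every registered callback.
    for callback in handlers:
        callback(value)

results = []
when_done(results.append)   # like passing a function in LISP
event_happened(42)
```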
- Derek:
- Was that the first disk drive that Stanford had added to the system?
- Bob:
- It was the first piece of hardware they put on the system that had not come from Digital [Equipment Corporation]. I don't think there was a disk drive on the machine... well, I don't remember if there was a disk drive on the machine when they got it or not. I don't think there was. They added this to the machine, but they also added some additional memory to the machine which they bought from—I believe—Ampex.
- Derek:
- I think Martin Frost was telling me about that. They actually exceeded the address space of the machine, didn't they? At some point?
- Bob:
- I don't recall that they did. The addressing space on the machine was... I think it was 256k. I think what they got from DEC when they bought the machine was 64[k], and the Ampex box added another 192[k]. I may be off on that, it may have had 128[k] and they added 128[k], but it was something on that order.
- Derek:
- Back to the Q-32 again, but not the timesharing system. It's interesting it was just a souped up 7090, because I heard that the Federal Systems Division of IBM is one of the more closed-mouthed parts of the company, which is not the most open-mouthed [company] to begin with, necessarily. So, it's kind of ironic that they would take the design and just kind of stretch it out. I'm sure they didn't do that in all cases, though.
- Bob:
- I have no good theories on why they did what they did. It seemed like a reasonable approach at the time. They were asked to build a machine with certain characteristics. They negotiated over what the characteristics ought to be, and they had to decide what technology they were going to use. The 7090 technology was the technology then available for the task, so it's hardly surprising that that's what they used.
- Derek:
- It's reasonable, I guess, from their... you would sort of expect revolutionary things from them, but that's not necessarily true.
- Bear:
- You've got to make money, too.
- Bob:
- One might expect revolutionary things under some circumstances, but to expect revolutionary things being done for a government contract is a bit of a stretch.
- Bear:
- I thought the whole point of the government was to stifle revolution.
- Derek:
- (guffaw)
- Bear:
- Sorry.
- Derek:
- That's horrible. So how did this thing end up in the hands of SDC, then?
- Bob:
- SDC was the contractor to write the code.
- Derek:
- Did the government ask for a LISP system then?
- Bob:
- The history, as near as I understand it, is approximately this: the original SAGE system was developed to protect from Russian bombers.
- Derek:
- Basically, track as much as it could, send data around...
- Bob:
- So they've got a line of radar stations across the northern part of Canada and Alaska, which of course was an interesting thing in itself. Those radar systems were huge. I have a vacuum tube that was used for switching those things to drive the magnetron. The thing is [about 18 inches high] and [about 9 inches in diameter]. It's a hydrogen thyratron. It is capable of switching 2000 amperes at 33,000 volts.
- Derek:
- Oh geez. How fast?
- Bob:
- Microsecond range.
- Derek:
- Your tax dollars at work.
- Bob:
- I have two of these things in the closet in the bedroom. The idea was to have these radars and have computers deal with the radar images and help track them; they present screens so that operators could see targets and dispatch fighters to pursue them, and this sort of thing.
- My understanding is that SDC was the software development agency for this thing. Obviously, you've got these Q-7 computers, and you've got to have software for them. I imagine that JOVIAL was originally designed as the language in which to write this software. I can't swear that this was the case, but it seems like a good bet, all things considered.
- It was clear back in the late '50s that vacuum tubes were on the way out and transistors were on the way in, and it was time to get with the program. So, they proposed to replace the original vacuum tube SAGE computers with these Super-SAGE things, and four of them were built. They were scattered around the country; one was in Massachusetts, one at SDC in Santa Monica, and where the other two were, I don't know. In the fullness of time, the program got cancelled. So they had these reasonably powerful and extremely expensive machines, and then, what the hell are we going to do with these things?
- Enter J. C. R. Licklider, and ARPA. He said, "let's use these things for research. We've got money to do this sort of thing, and maybe we can do something interesting with them." So, amongst the research projects they decided to undertake was to do this timesharing system, and to put a LISP system on. So I got involved in the LISP system. III got the contract to do this thing, and I was the guy who wound up doing it.
- Derek:
- Was MIT giving out the LISP system to anyone who asked for it? Since it was a research project.
- Bob:
- I imagine so. Certainly if one wanted one, there would be no problem getting it. It was not secret in any form. More accurately, the III book got published. Of course, everything was in the public domain.
- Derek:
- I'm not saying it was secret, it's just that it's a lot harder now to dig up the old source code, because people don't tend to keep things like that. So SDC was planning to use this thing for research... hm... would the researchers have been from Stanford, or from SDC, or what? I mean, the users of your LISP system, once you'd built it.
- Bob:
- Well, they probably would've been SDC, because remote usage of computers had not started to take off. The idea was going to be these teleprinters scattered around and people would be able to work on their projects. What eventually came of all of this stuff, I don't know. I did not try to keep track of it after I went to Stanford.
- Derek:
- That is interesting. Do you know what happened to the other three Q-32s?
- Bob:
- No. I'm sure they were eventually scrapped, but what became of the bits and pieces? I'm sure they wound up some place like Eli Heffron's.
- Derek:
- Like whose?
- Bob:
- Eli Heffron, also known as Evil Eli, was a junk dealer. He had a facility in Cambridge, on the border of Somerville, where he parked stuff—most of which he got from the Navy—in the way of surplus electronic and mechanical gear. Of course, if you went to MIT back in those days, that was one of the places you went to get stuff to play with. So, there were a lot of people who built stuff that came from Eli's. It was convenient to... within walking distance from MIT. [It was] about a mile away.
- Derek:
- Do you know anything about optimizing compilers?
- Bob:
- Hum, optimizing compilers. Well, the term has been kicked around for quite some time. There is optimization and optimization.
- Derek:
- But their other compilers didn't try to do anything at all, they just spat out code. At least the simple ones.
- Bob:
- Of course as time has gone on, more effort has been expended on getting compilers to do some semblance of optimization, and the amount which they're doing has increased as compilers have become a more and more important feature of the programming landscape, as larger and faster processors have become available to drive them.
- The processor to translate SPL for the HP3000 does very little in the way of optimization, but it doesn't need to, because the source language for the thing was... actually, the machine architecture was designed to be easily mappable into a source language, so that the translation would be straightforward and no optimization would be needed.
- Derek:
- Right, and if there were any speedups, it would be the hardware taking care of them by caching and things like that.
- Bob:
- One of the more useful upshots of this was that it is not at all difficult, if you have a program on the [HP]3000, to construct the source code for that program.
- Derek:
- I can see how that would be convenient, except to HP.
- Bob:
- The process is tedious, but it's mechanical, and you can do it. Let me just describe how you do it. The first thing you do is to admire the machine code with a disassembler, which gives you the machine language instructions, as in, "load relative address of Q - 5", for example. And so you wind up getting a long list of this stuff. Now, once you're familiar with what the compiler does, writing down the source code which generates a particular sequence of instructions is a fairly easy task. It's tedious, but it's doable. So you do it.
- Derek:
- So you simply have a set of patterns that you match?
- Bob:
- Yeah. So you do that; initially assigning the variable names to be the stack locations where the information is stored in the first place: for example Q - 5 would be QM5. So it would be, for example, "function call of parameter, comma, blah blah blah," the location of the parameter in the original procedure would be at Q - 5, then you would have "load relative address Q - 5." You could wind up writing down "function(qm5, blah blah blah." This goes on and on and on, and after you have all this shit, you simply run it through the SPL [compiler], compare what comes out with what you put in, you fix the bugs, and after about two iterations of this you've got it right. So now you have a source form for the thing. Now you look at the code and see what it does.
- Derek:
- That's the less mechanical part.
- Bob:
- Well, it's not that hard, because you can tell from the nature of the arithmetic that is done, if the parameter were of type logical, or type integer, or type real, or whatever. You know what the program is supposed to do in the first place, or you wouldn't have bothered to take it apart. The procedure names are visible, which helps too, so the next thing you do is you translate all of the references such as QM5 into a more credible variable name. Then you iterate this a couple times as you see more and more what the overall flow of the thing is, and by the time you're through you've got a source file which is pretty damn close except for subtle variations in variable names, to what the original programmer had put down.
- This turned out to be extremely important when I was in [Saudi] Arabia. Some of the stuff I was working on, I had to be able to access the source code because I wanted to be able to modify it—to change the program to do something different. So, you take apart the original, then with the reconstructed original source in hand, make your modifications, and everything is wonderful.
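- (A sketch of the reconstruction workflow Bob describes: first name each stack slot mechanically, so Q - 5 becomes QM5, then rename to credible names once the code's purpose is clear. The instruction format and helper functions here are invented for illustration, not real SPL or HP 3000 tooling. --ed.)

```python
# Invented illustration of the two-pass renaming Bob describes.
import re

def mechanical_name(operand):
    """First pass: 'Q - 5' -> 'QM5', 'Q + 2' -> 'QP2'."""
    match = re.fullmatch(r"Q\s*([+-])\s*(\d+)", operand)
    sign = "M" if match.group(1) == "-" else "P"
    return "Q" + sign + match.group(2)

def rename(source, credible_names):
    """Second pass: replace mechanical names with credible ones
    once the code's meaning has been understood."""
    for old, new in credible_names.items():
        source = source.replace(old, new)
    return source

# Write down a draft source line from a recognized code pattern,
# then iterate toward readable names.
draft = "result := lookup(%s);" % mechanical_name("Q - 5")
final = rename(draft, {"QM5": "customer_id"})
```

As Bob says, the mechanical first pass is what makes the process tedious but doable; the renaming passes are where understanding accumulates.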
- Derek:
- Right, then you simply run it through the compiler, which does all the bookkeeping of moving the blocks around, (unintelligible) them and so on. Who had written this stuff in the first place?
- Bob:
- Everybody and his brother. There were two particular programs I worked on where I needed the source. One was a scheduling program, which used a little database so that particular tasks could get started at particular times, and you could run them at 1:00 A.M. daily; or 1:00 A.M. once a week; or 1:00 A.M. Monday, Wednesday, Friday; or various times of the day, and yadda yadda yadda. The question arose, is this thing going to be Y2K compliant? I really didn't know. Well, I took it apart to look, and discovered that it wasn't. So I had to fix it.
- Derek:
- You couldn't tell from the data file format?
- Bob:
- No.
- Derek:
- That's strange. If the data file didn't contain any errors, it must've been something else in the program.
- Bob:
- The procedure which would calculate the date of the next execution, for example, and compare that with the current date and see whether it was supposed to go. Anyway, that was one thing that needed to get fixed. So I reworked that rather considerably, and the other thing which turned out to require this sort of work in detail, was a program called 'undeadlock'. Now, I imagine you know what a database deadlock is?
- Derek:
- I know what a deadlock is, period. It's two things each waiting for the other to release something.
- Bob:
- If you have a database system such as IMAGE for the [HP]3000, where it is possible in principle for process one to have resource A and want resource B, and process two have resource B and want resource A, and get in a deadlock.
- The particular programming language that was being used for most of the application was a thing which generates COBOL, which would be translated into something executable by the COBOL compiler, and which attempted, with modest success, to deal with the requirements of database locking, isolating this from the user. But of course in any such thing, there are going to be issues; it isn't going to work perfectly, and it didn't.
- One of my principal tasks during the day was to monitor the performance of systems all over [Saudi] Arabia—I think there were eight, plus the central system which made nine—and to notice when a deadlock might have occurred. [If I suspected one, I would] investigate, see whether there was one, and if there was, [...] knock one of the locks off so that things could continue. So Hewlett-Packard provided a library program called "undeadlock" to do this—to facilitate doing this. But it wasn't perfect.
- One of the things which it attempted to do was to identify which processes were involved in deadlock, so it would tell you which one so you could decide which one you wanted to knock the [locks] off. Unfortunately, the procedure was buggy to the point that in the next version of the thing they took it out altogether. They didn't even try to do it. So I decided that since this was one of our most important tools it really ought to work right.
- Bear:
- (snicker)
- Bob:
- I decompiled the program to discover how it worked, how it was supposed to work, and then figured out how to modify it so that in fact the deadlock recognition code worked properly. This was the most difficult programming task I have ever done. It involved dual recursion, working down lists of resources and lists of their owners to determine whether a deadlock [did] in fact exist, but I eventually figured it out and got it to work.
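- (The deadlock-recognition problem Bob describes, walking lists of resources and their owners to decide whether a wait cycle exists, can be sketched as cycle detection in a wait-for graph. The data layout below is invented; the original worked over the HP 3000 lock tables. --ed.)

```python
# Invented illustration: each blocked process waits for one
# resource; each resource has one owner.  A deadlock is a cycle
# of processes reached by alternating waits-for / owned-by hops.

def find_deadlock(waits_for, owner_of):
    """waits_for: process -> resource it is blocked on.
    owner_of: resource -> process currently holding it.
    Returns the set of processes in some wait cycle, else empty."""
    for start in waits_for:
        seen = []
        proc = start
        while proc is not None and proc not in seen:
            seen.append(proc)
            resource = waits_for.get(proc)
            proc = owner_of.get(resource) if resource else None
        if proc is not None:
            # We revisited a process: everything from there on
            # around the chain is the deadlock cycle.
            return set(seen[seen.index(proc):])
    return set()

# Process 1 holds A and wants B; process 2 holds B and wants A.
waits_for = {1: "B", 2: "A"}
owner_of = {"A": 1, "B": 2}
```

The program could then report the cycle members so the operator could choose which process's locks to knock off.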
- Derek:
- It sounds like the sort of thing that wasn't long, it was just very detail oriented.
- Bob:
- Well, it was complicated, it was conceptually difficult to address, because it required addressing two levels of recursion at once.
- Derek:
- And a small mistake would've made it not work at all.
- Bob:
- So decompiling source to get the thing to work with was by far the smallest part of the job. Putting in a new front end so you could talk to it intelligently, and have it talk to you intelligently was a fairly significant job; [...] putting in the revised guts of it so it would do what it was supposed to do was a major task.
- Derek:
- You were working for HP at the time, weren't you?
- Bob:
- At the time I was doing this, I was working in [Saudi] Arabia, [but] not for HP. The actual situation was my paychecks were cut by an outfit called BDM, which was a company headquartered in Virginia—one of what was called "the Beltway Bandits." It was a company which did a lot of business consulting for the U.S. government. BDM had managed to establish a presence in Saudi Arabia, and the Royal Saudi Air Force arranged through the Defense Procurement Agency to obtain support services from the U.S. Air Force, which was in turn subcontracted to BDM.
- In the spring of 1993, I, being not employed at the time, saw an ad in the San Jose Mercury. The ad said, "HP3000 gurus wanted, foreign service assignments available." This impressed me as being of considerable interest, so I sent a resume and a cover letter saying, essentially, "I think you may find from the attached resume that I have sufficient guruness for whatever project you have in mind. If you think so too, give me a call." Well, they did. So in the fullness of time it was off to [Saudi] Arabia for five years.
- Derek:
- I got confused about that; I had thought maybe you were working for HP at the time. So clearly, it probably would not have been politic to send your changes to HP.
- Bob:
- Well, actually, we were on good terms with HP. HP's support efforts were very valuable to us. The local vendor of support services for the computer hardware when I got there in '93 was a company called al-Jazeera, which is not the same as the al-Jazeera television operation. It just has the same name.
- Derek:
- Do you know what that means?
- Bob:
- I could look it up. Let me do that, just for fun.
- Derek:
- (aside) How much does that hold?
- Bear:
- I have it on the longest record mode right now, so it's like four hours or six hours or something.
- Derek:
- I don't think we'll be here that long.
- Bear:
- I hope not.
- Bob:
- [returning] Handy Arabic-English dictionary. So, I had asked the guy at al-Jazeera to get me a copy of the thing called "The Tables Manual," which tells how the operating system's internal data structures are organized. I had had a copy of [it] back in the states, but [which I] hadn't bothered to bring to Arabia. He had fiddled and diddled for some time and hadn't succeeded in landing one, but as it turned out the HP support office at the time was only six blocks from where I was working. So, one day I simply walked down there, walked in, told the guys what I was after, and walked out with it in my hands.
- Derek:
- Well that's pretty easy! It doesn't get much more convenient than that.
- Bob:
- So I stopped past the company office, gave it to the clerk there, said, "copy this stuff," and was able to return it the next day.
- Derek:
- And that's how you know what resources are owned by what processes, and so on.
- Bob:
- Oh, the Tables Manual tells you everything you want to know about the internal data structure of the operating system. There are tables related to the dispatcher, which decide which processes get run next; there are tables related to the filesystem, which keep track of who all has a particular file open and what each of them is supposed to be able to do with it; and on and on and on. It's all this stuff.
- Derek:
- I've seen one for TOPS-10. It's very long. Or if it isn't TOPS-10, it's something else. They're all very long.
- Bob:
- Yeah. The thing runs a couple hundred pages. (muttering) Let's see, "jz"...
- Derek:
- Do you not alphabetize vowels in Arabic?
- Bob:
- Arabic is alphabetized by the word as it is written, which means that the long vowels count and the short ones don't.
- Derek:
- What if two words are written the same way?
- Bob:
- It doesn't happen. (muttering) "jazeeb..." [muttering] Ah, "jazeera" by itself means "island", but in the particular context in which it appears here, it would've meant "Arabian peninsula."
- Derek:
- I see.
- Bob:
- So that's what that's all about.
- Derek:
- Do you know if there's a separate word for "peninsula", instead of "island", or if they just use the word for "island" to refer to the Arabian peninsula?
- Bob:
- I'd have to look at the dictionary in the other direction, which is downstairs, and I don't think I'll make another trip [down there] today.
- Bear:
- That reminds me that the two shortest words in the Danish language are for "river", and for "island". They're each one letter long.
- Derek:
- Guess what Denmark has a lot of?
- Bear:
- Go figure.
- Derek:
- It's interesting, what you were telling me you learned even when you started at HP: they said, "the compiler will cream you in writing machine code." Still, people are trying to write faster and faster optimizers. And then, on the other hand, there are the FORTH people who still don't exactly trust compilers, and think they can beat them. They're pretty conservative, though.
- Bob:
- That's become totally pointless, because even if the compiler turns out relatively grungy code, given the speed of today's processors, nobody gives a rat's ass.
- Derek:
- True. So, it all gets back to what you were saying about brute force—how it's cheap and relatively available. Ray and I were talking about DMA on the way down, and I think that there's still a need for some kind of speedup in certain areas. Especially as processors get faster and faster, you'd want to waste less time on I/O, because it will hurt you more.
- Bob:
- I suppose your typical processor these days actually has some DMA capability built into it, but my take on that is that this is really down in the noise unless you're doing something like a disk transfer, when you really... Actually, processor DMA isn't really applicable to disk transfers anyway, because the disk controller takes care of it. The controller will seize the memory bus and shovel the data when it needs to be shovelled.
- Derek:
- Well, now they have that on PCs, but I think originally when the [IBM] PC was designed, they were still in the era where a cheap system wouldn't have that.
- Bear:
- The original [IBM] PC had DMA.
- Derek:
- That's true, but what I meant by, "in the era," is that other systems didn't.
- So, the processor is fast, the disk controller can take care of itself, but there's still some kind of bottleneck. At least, it seems that way in a lot of situations. I guess it's just bad code.
- Bob:
- The bottlenecks that one runs into these days on computer systems are in my estimated order of importance, firstly in access times over the internet. By the time your request gets from your box to the target box and gets turned around and comes back to you, it's gone through eight or a dozen different links, each one of which had to buffer the thing up, figure out where the hell to send it next, and do so. So that takes time.
- Derek:
- Possibly even running interpreted code to do it.
- Bob:
- No, not really.
- Derek:
- Java, or javascript.
- Bear:
- They don't run that in routers.
- Derek:
- Not routers. Sorry, I thought you were talking about the web.
- Bob:
- If you're working on an application where speed is of the essence, you're going to compile. You don't use interpreters for things like that.
- Derek:
- But the web does its own interpretation of redirection.
- Bear:
- That doesn't come into it, going from your computer to the other one.
- Derek:
- So you're talking about one-directional transfer?
- Bear:
- Yes, and back again.
- Bob:
- And similarly, the other direction coming back. You've got just a lot of I/O going on between a bunch of machines and it takes some time to do it. The speed of light delay is of some importance, and the process of buffering up the packet, figuring out what the hell to do with it, and then doing it adds more, so you've got a significant bottleneck on that, all told.
- The next important bottleneck on hardware is the speed of disks. Until a year or so ago, the answer to the question, "what is the fastest spinning, rotating magnetic memory on a computer, and when was it built?" would have been properly answered... it turned at 12,500 RPM, and it was on the IBM 650, which was being sold in 1958. You know, 45 years ago. Nowadays, you can, in fact, get disk drives which spin at 15,000 RPM. They're relatively new on the market, but still, the de facto standard is 7200 [RPM]. We're talking about magnetic memories which are half as fast as stuff that was in use more than 40 years ago. So that's a bottleneck. If you are running an instruction every half a nanosecond, waiting 16 milliseconds for your data to come up on your disk is a major pain in the ass. So, there's that.
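- (The arithmetic behind Bob's figures, sketched in Python; the 16 ms wait is his representative figure for a full seek-plus-rotation, and "one instruction per 0.5 ns" matches his half-nanosecond example. –ed.)

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    return (60_000 / rpm) / 2

# The IBM 650 drum, today's commodity 7200 RPM drives, and the new 15k drives:
for rpm in (12_500, 7_200, 15_000):
    print(f"{rpm:6d} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms average latency")

# Instructions forgone during a 16 ms disk wait at one instruction per 0.5 ns:
print(round(16e-3 / 0.5e-9))  # 32,000,000 "nothings"
```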
- The next biggest bottleneck that people run into is again a network issue. That is, when you send a request for a page down to somebody who is going to supply it, it is often necessary that computation is required in order to develop that page. For example, if you're trying to book an airplane flight, and you said, "I want to go from here to Tulsa a week from Tuesday," it takes them a while for the software to figure out what sort of flight is going to get you from here to Tulsa a week from Tuesday. It doesn't take too long to send the information back once it's computed, but computing it takes a while.
- Derek:
- Especially if there's six million other people all asking for other flights, and seats being filled...
- Bob:
- Those are the three major bottlenecks that I would be concerned about these days.
- Derek:
- Do you think that clever software design to do other things in the meantime, or clever hardware design to make the DMA go as smoothly as possible, would work? Or do you think you just have to solve the hardware problems by making faster disks or whatever?
- Bob:
- A lot of attention has been addressed to hardware band-aids on this, with larger and larger disk caches [being] the typical example. A typical disk drive these days is going to have a cache of some megabytes on it, in order to reduce this. [...] Hopefully, you will ask for data which is sufficiently close to the last data you addressed that the information may be sitting in the cache, and you won't have to wait for the disk to spin to get it. A lot has been done on those lines, and it definitely helps. It doesn't really solve the problem, because caches are of finite size; there is a significant probability that the information you want happens not to be in the cache. It depends, of course, entirely upon the nature of what you're processing. Some processes have much better locality than others, but trying to predict that sort of thing a priori is not that easy to do. You've got the operating system's filesystem between you and the disk. Who knows what the filesystem is going to do about distributing the information on the disk? It may group things in a way which works well with the particular caching system that particular disk drive is using, or it may not.
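- (Bob's locality point, illustrated with a toy simulation: a simple LRU block cache, with the cache size and access patterns invented for the example. –ed.)

```python
from collections import OrderedDict

def hit_rate(accesses, cache_blocks=8):
    """Fraction of accesses served from a small LRU block cache."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

# Good locality: a small working set, revisited constantly.
local = [i % 4 for i in range(1000)]
# Poor locality: striding through far more blocks than the cache holds.
scattered = [(i * 13) % 100 for i in range(1000)]

print(hit_rate(local))      # 0.996 -- only the first four touches miss
print(hit_rate(scattered))  # 0.0 -- every block is evicted before its reuse
```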
- Derek:
- Basically, every level is making assumptions about... and even if you could tune it the way you want, you'd have to re-tune it if you changed your application. I was also thinking about other sophisticated transfer methods, like DMA that takes up as little of the processor's time as possible, or systems that schedule other things to run in the meantime.
- Bob:
- That's a non-issue these days. Processors are so fast that if you take ten percent of the processor's time doing I/O, nobody gives a shit. Nobody would even notice.
- Derek:
- Do you think that bad code is a problem? How does it rank on your bottleneck list?
- Bob:
- There are various ways in which code can be bad.
- Derek:
- That's true.
- Bob:
- Code can be bad because it's buggy. Bugs are always going to be with us. Bugs can arise from either sloppy design or sloppy implementation—typically, some of each. Code can be bad because the overall architectural design of the code is bad: it attempts to organize its database hierarchies in ways which are cumbersome to process. This situation is particularly likely to obtain if your system has been developed over a period of time and tasks which have been imposed upon it have increased over that period of time. The organization which may have worked just fine with the original layout of the system may turn out to not work worth crap for this new thing you're trying to get it to do. That was a major problem for the Saudi Air Force; the database system had evolved to the point where it was a mess, and it really ought to have been completely redone. But that would've required rewriting three thousand programs, which was not something anybody was really interested in trying to do.
- Derek:
- Or maybe the original assumptions the system was designed for—in other words, what it thought it was optimizing—no longer applied.
- Bob:
- That can be so, but I would say that in the overall scheme of things, that's down in the noise. It is changes in the intended application—in particular, the added new applications—which require fudging of data in ways that were not easily doable in the initial design, because that particular bit of business had not been contemplated then.
- Derek:
- On the other hand, do you think that having an architecture that's too general is also an error in design?
- Bob:
- That's a difficult question to answer intelligently, for a number of reasons. Architecture is a very broad term. We can talk about a hardware architecture, or a software architecture, or a compiler translation architecture, all of which have their fingers in the pie. Too much generality can be bad for you in the following sense; I will give you an example. Consider the programming language COBOL. It's been around for a long time. The original COBOL impressed me as something of a crock. In fact, I considered it toxic waste and I deliberately, successfully, avoided it. I didn't have anything to do with it for about thirty years.
- Derek:
- I think it was an "interim solution," actually.
- Bob:
- Over the years, they made a number of enhancements to it; a number of which were good and desperately needed, a number of which were elaborations which did nothing except make it more complicated to no particularly good purpose. If one avoided the unnecessary enhancements, by the time I was dealing with the Arabian situation in the '90s, when they were using the '85 version of COBOL, it was actually a tolerably decent programming language. It gave you the procedure calls and procedural organization which you needed to do reasonably decent coding. The major deficiency which remained, and continues to remain, is that it does not fit well into a block structure, such that the data which pertains to a particular piece of code is associated only with that code and not visible to outside users. You have to declare in your data declaration all the data that all the procedures use. This is a major impediment, because it forces funny naming: you have to have not just "X", but "procedureCalculateTodaysDateX".
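- (The naming problem, mimicked in Python, which does have block structure; the COBOL-style half below fakes a single flat namespace with module-level globals. All names here are invented for illustration. –ed.)

```python
# COBOL-style: every variable is global, so each name must carry its owner's
# prefix to avoid collisions with every other procedure's scratch variables.
calculate_todays_date_x = 0
print_report_x = 0

def calculate_todays_date():
    global calculate_todays_date_x
    calculate_todays_date_x = 42

# Block-structured: each procedure's "x" is local and invisible outside it,
# so every procedure can simply call its scratch variable "x".
def print_report():
    x = 17
    return x

calculate_todays_date()
print(calculate_todays_date_x, print_report())  # 42 17
```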
- Derek:
- Right, because if you want to have one use of the variable not affect the others, you have to come up with a new name?
- Bob:
- Yes. So, the variable names get to be a mess. Of course, the variables are all sitting at the front of the program, where they are all out-of-sight, out-of-mind. I have some problem of that sort in my real-estate thing, because the first twenty pages of the three-hundred-page listing are variable declarations—but they're all necessarily global, and you've got to have them at the front. They define the file structures of the various files that the program uses, in exactly the same way that the data declarations in a COBOL program define the file structures there, with pictures and the like. From that point of view, doing a translation between my real-estate program in SPL and a COBOL version of the thing would be relatively straightforward. But there are other things that make it more difficult, such as the aforementioned non-locality; I use variables "n" and "i" and "x" for miscellaneous garbage in essentially every procedure that gets written, and trying to isolate these things would considerably increase the complexity of the program and the difficulty of reading it.
- (track break)
- Bob:
- These generalizations in COBOL have been generally successful; on the other hand, there are some other things which impressed me as being pretty much a waste of time. One such is a verb called "evaluate," sort of like a case statement. You say, "evaluate expression; if it's this, you do that, if it's something else, you do something else," yadda yadda yadda. So there isn't anything that you can do with this that you couldn't do equally as well with say, "i gets expression, if i = 1 then such and such, if i = 2 then so and so," and proceed in that way.
- Derek:
- Does SPL have a case statement?
- Bob:
- It does. I seldom use it.
- Derek:
- So you just don't think that case statements in general are very useful.
- Bob:
- Well, they do have their uses in some cases, but the occasions when I have found a case statement a reasonable approach to a programming issue are very very few. Similarly, with the switch statement.
- Derek:
- Which is the same thing.
- Bob:
- It's the modern view of a computed goto. Those particular language provisions really have not turned me on. I seldom have ever used either of them.
- Derek:
- There are a few others I'm sure you absolutely hate.
- Bob:
- Such as what?
- Derek:
- Let's see. COBOL has "alter".
- Bob:
- Oh, "alter" is an abomination, yes.
- Derek:
- Someone explained how to use it in sneaky ways with "move corresponding", but since I don't understand "move corresponding", it's probably better I didn't know what he was getting at.
- Bob:
- I have heard of "move corresponding"; I have never used it. I don't know exactly what it does. If it does what I think it does, there are some things for which it would be of great value. Suppose that you have a data record from file A, and a data record from file B, which contain some fields of similar purport: a customer's name and address, for example. If you were constructing a record in file A from data in file B, it would be convenient to be able to say, "copy the customer's name, address, and phone number from the record in file B to the record in file A, and put them where they belong." If that's what "move corresponding" does, obviously it would be of some use.
- Derek:
- That could be right. I guess it assumes that something named the same, is the same in both records.
- Bob:
- Which is a perfectly reasonable assumption to make.
- Derek:
- If it happens to be true.
- Bob:
- It may, in fact, be less cumbersome than saying, "move custfile/custname to billingfile/custname" and on and on for half a dozen move statements.
- Derek:
- I was asking about hardware bottlenecks because I have UNIX on my system, but the port to the Mac[intosh] needs some polishing, and so I think there are still some bottlenecks there. It uses OpenFirmware, which is the boot environment, to do stuff that it really ought to be doing by talking to the hardware directly, for speed. If speed is important, you want to talk to the hardware directly.
- Bear:
- If generalization is important, you don't.
- Derek:
- That's true.
- Bear:
- So pick one! You seem to go back and forth on this. (laughing)
- Derek:
- I didn't say that every condition should be as generalized as possible.
- Bob:
- It depends on what you want to do. You want to be fast, you want to be broad-minded. There are occasions for each. The biggest hardware bottleneck in today's technology is, as far as I'm concerned, the question of memory bandwidth. If you've got a 70ns memory, which is fairly common these days—I think you can get 60[ns]—well then, let's assume it's 60ns memory! Now. For how many idle spin cycles does that leave your 3 GHz Pentium processor twiddling its thumbs while it waits for the fetch to complete? It's doing almost 200 nothings while waiting for this fetch to go.
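- (Bob's arithmetic, checked; this assumes the fetch misses every cache and pays the full memory latency. –ed.)

```python
def stall_cycles(clock_hz, mem_latency_ns):
    """Clock cycles a processor idles waiting on one uncached memory fetch."""
    return clock_hz * mem_latency_ns / 1e9

print(stall_cycles(3e9, 70))  # 210.0 -- Bob's "almost 200 nothings"
print(stall_cycles(3e9, 60))  # 180.0 with the faster 60 ns parts
```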
- Bear:
- I think it's more than that. They've done some sneaky things to try to correct that, but none of them are, shall we say, well conceived.
- Bob:
- I went to a seminar in 1998 where Hewlett-Packard was talking about the architecture for the Merced processor, which they were then planning to put into the HP3000, as the successor to the HP-PA chips which were being used at the time—and still are. I raised the question about memory speed, and he said, "That's a big problem. It's going to get worse, and it's one of the principal things we've addressed in the new architecture." [Their architectural solution was to enable you to] put out a fetch for something, and go on and do other stuff while waiting for the data to show up. So it's got some parallel execution thread capability.
- Derek:
- That's what I meant when I was talking about hardware to make for better-designed programs. It doesn't make the memory any faster, but it allows you to work around it if the situation is geared that way.
- Bob:
- (distantly) Trying to think about computations in the real world... There are some things which caching does, which will ameliorate the issue. Take the "move corresponding" example: if your cache business is worth the powder to blow it up, your "move corresponding" is going to go reasonably fast, because both the source and the target data are going to wind up in the cache, so you aren't waiting 70ns for the memory bus to [return your data].
- Derek:
- Or 15ms for your disk read.
- (track break)
- Derek:
- One other thing I was alluding to when I was mentioning bad code, was [Microsoft] Windows. I don't always find it easy to measure these bottlenecks, especially as the system gets more and more complicated. So I sit there and I just sort of intuitively wonder what it could possibly be doing—which is not the same thing as actually knowing.
- Bob:
- [Microsoft] Windows is in some respects very good, and in some respects pretty bad. They have done an excellent job of coming up with code that runs on an incredible variety of hardware. There is just all sorts of shit you can put on a Windows box and it will work.
- Derek:
- There certainly is.
- Bob:
- This is achieved at considerable cost. A lot of Microsoft's efforts in India are devoted to testing their software against various flavors of hardware, to see if they can make it go.
- Derek:
- Or they try to tell the hardware makers how to design hardware, which I think is doing things the wrong way about. But it's sort of a solution.
- Bob:
- The design issue needs to be iterative. If some sort of organizational rules are not followed when you're designing a new hardware box to attach to a computer, the program is going to turn out to be a nightmare. You want to have a general approach to what kind of data is being passed around, what are the circumstances under which they are passed, what are the signalling methods to be used to indicate that it should be passed, and these sort of things. The basic issues of hardware compatibility: voltage levels, pulse widths, frequencies, bandwidth, and all the rest, have their counterparts in software. "What do I say to you to get you to send me data, and how shall you know that I've got it?" and that sort of thing.
- Derek:
- That's fine, it's just that it happens to be Microsoft—the OS writer—who is writing these standards for the hardware makers.
- Bob:
- It's sort of come out that way by default, because once Microsoft has got a market penetration—for, say, Windows 3.1—then that pretty well establishes a set of protocols which the hardware manufacturers already on the box need to continue to follow. Hardware manufacturers coming into the arena need to also follow [these protocols], so they can get in the door. If they try to redesign the protocols totally, Microsoft is going to say, "we can't support this." If Microsoft can't support it, then the manufacturer can't sell it.
- Microsoft, in this respect, is the tail that wags the dog. It would be hard for it to be seen as any other way.
- Derek:
- Given the market-share that they have, yeah.
- Bob:
- This may change somewhat as the competitive products such as Linux and FreeBSD get a larger presence in the arena. I would not expect to see it change much or soon. That is going to be an issue. Microsoft does a good job at dealing with a lot of different hardware, and the scalability is pretty good: you can run the old stuff to at least some degree on the new stuff, although there are some unpleasant exceptions...
- Derek:
- Sometimes they're really committed to that, in amazing degrees, and sometimes they're really uncommitted to that, in amazing degrees.
- Bob:
- Now, I've recently upgraded from Windows 95 to Windows 98 on my PC. This turned out to be a nightmare.
- Derek:
- Was it the actual upgrade process? Or was it the programs you were trying to run?
- Bob:
- It was the upgrade process. Actually, the process itself was not all that bad; it was the fact that it didn't completely do the job.
- Bear:
- (snickers)
- Bob:
- I will tell you how this went. After scurrying around, I finally found a copy of the upgrade disk. Actually, Staples had it back-ordered, but they got it to me and it cost me a hundred bucks. One evening, when Harriet was off at the theater and I had nothing else to do, I decided to do the upgrade.
- Derek:
- "I think I'll have some comedy in my house."
- Bear:
- (laughs)
- Derek:
- (sotto voce) More a tragedy, I think.
- Bob:
- So I walked through the upgrade procedure, which involves restarting the machine four or five times, but that's pretty well automated. It did not require a great deal of attention on my part. It sort of happens, and you sort of keep an eye on it, to make sure that things are going.
- When I got it up, some things ran. I was able to run Netscape Navigator on the box, without any problems at all. That ran just fine. I was able to run Microsoft Word, and Excel, without any problems. None of the Microsoft internet stuff worked worth shit. I kept getting complaints [that] a certain DLL could not be loaded.
- Bear:
- It's usually the other way around!
- Bob:
- The internet mail program which I had been using—which all my messages were in, and address book and stuff—wouldn't run at all. The next day, I called Microsoft's support operation. While talking to a guy [whom] I thought was in Redmond but turned out to be New Delhi, we tried this and that for more than three hours without fixing anything. He kicked the thing back to the States and a few days later I talked to a support guy who turned out to be in Orem, Utah. Or maybe it was Provo; anyway, it was in the Salt Lake City area.
- We worked on it for a couple of hours, and I finally had the idea to try to completely delete and re-install the DLL file which it was complaining about loading. He walked me through how to do that, we did it, and it worked. That fixed it. We were then able to move all of the old internet mail stuff into Outlook Express, so that I could [use] that.
- But that didn't fix absolutely everything. It got the thing up to the point where I could then access the Microsoft website, to get thirteen service packs installed, which wound up fixing essentially everything else. The bottom line is it took about five days, during which time my mail capabilities were severely restricted. I could still send mail using my mail account on Yahoo!, accessing it through Netscape.
- Derek:
- Is this why you told me to use a different email address?
- Bob:
- Exactly.
- Derek:
- (knowingly) Oh.
- Bob:
- Exactly is too strong a term. There was another issue...
- Derek:
- It was at the same time...
- Bob:
- Yes. So that was part of it. But of course, I couldn't receive any mail coming into the Verizon account, until I got the thing fixed. There were three dozen messages [waiting] when I did, so that took a while.
- Another issue which arose during all of this was I was getting complaints from a couple of mail recipients—including you—that the return address was unsatisfactory. The mail server didn't like it, which is why I was sending stuff out of the Yahoo! account instead of the Verizon account. It could get out of the Yahoo! account without trouble.
- I wound up discussing the matter with the other addressee who was bouncing my email message, and what we discovered was happening was that it was getting derailed by their spam guard. Email coming out of Verizon can of course [have been] forwarded from somebody else, but this is something the spam guard took a dim view of, because it means the thing can be used as a relay for spam.
- Derek:
- And it assumes the worst, and figures you are some sort of blackguard, who is forwarding this stuff all over the map for nefarious purposes...
- Bob:
- It was at that point that I got into looking to see how bad the problem of spam really is, and it turns out to be horrible. There are all sorts of blacklist keepers who will blacklist IP addresses or URLs or users or whatever from which spam appears to originate, and if you wind up being on one of these lists, you can be perfectly innocent and still wind up getting diddled as indeed I was when I was trying to talk to you!
- Apparently stuff coming out of the Yahoo! mail is not as subject to this particular business... in other words, your eskimo[.com] server apparently is using some blacklist that Verizon has managed to get itself on—or at least the particular Verizon ports out of which mail comes. There turns out to be a bunch of them.
- Bear:
- That's one of the problems with the blacklists: you can end up on them for any number of transgressions—real or imagined—and oftentimes it will be a whole city's worth of DSL IP addresses and you won't have any idea...
- Bob:
- One of these blacklist things will give you a history of transactions it has seen, regarding whether a particular IP address should or should not be blacklisted.
- "March 2: somebody complained, we blacklisted."
- "March 3: somebody complained they weren't getting through, we took them off."
- (all)
- (laughing)
- Bob:
- And on and on and on. So that is another part of that story.
- Derek:
- I worked for a company that did work for Microsoft—it's BSquare, which had been written up a couple times in the paper since [that time]—so I know how much work they put in to... well, in this case it was the compiler. But I'm not sure that they are optimizing what they should be optimizing.
- I guess we'd better go.
- Bear:
- In the case of Windows, I don't know why they optimize the compilers, and not the APIs.
- Derek:
- That's a good question. I was thinking about the humungous amount of disk activity it does when you log in. Which I have no idea what it is or why it's necessary...
- Bear:
- It's copying registry hives.
- Derek:
- So what happened to Merced? Did it get cancelled?
- Bear:
- No.
- Bob:
- No, I believe it's on the market now.
- Bear:
- They've had performance problems.
- Bob:
- It's not proving to be as popular as they thought it would be, but they're shipping it.
- Bear:
- It's slower than their Pentium line of stuff.
- Derek:
- That's kind of embarrassing.
- Bob:
- On the other hand, it has a 64-bit architecture.
- Derek:
- So there's room to grow.
- Bob:
- Yeah, there's a growth path there.
- Derek:
- Did they speed up HP-PA over the years?
- Bear:
- Dramatically.
- Bob:
- Oh, yeah. The newer processors are much faster than the older ones.
- Derek:
- I think I looked at a spec; it might've been some other HP processor. They can make... sort of... baroque designs.
- Bob:
- I have not tried to keep track of the particular chips in the HP-PA lineup. I have a specification, which I got from somebody other than HP, about the relative speeds of their various processors, which vary depending on the number of CPUs in the box, as well as which CPUs they are.
- Derek:
- Do you know if HP has clustering?
- Bear:
- Yeah, they do. It's not... at least, in HP-UX, it's not very sophisticated.
- Bob:
- You have what you need?
- Derek:
- Oh yes, and more. Thanks again for your time.