Monday, October 24, 2016


The Story of Mel

This was posted to Usenet by its author, Ed Nather, on May 21, 1983.

A recent article devoted to the macho side of programming
made the bald and unvarnished statement:

    Real Programmers write in FORTRAN.

Maybe they do now,
in this decadent era of
Lite beer, hand calculators, and "user-friendly" software
but back in the Good Old Days,
when the term "software" sounded funny
and Real Computers were made out of drums and vacuum tubes,
Real Programmers wrote in machine code.
Not FORTRAN.  Not RATFOR.  Not, even, assembly language.
Machine Code.
Raw, unadorned, inscrutable hexadecimal numbers.

Lest a whole new generation of programmers
grow up in ignorance of this glorious past,
I feel duty-bound to describe,
as best I can through the generation gap,
how a Real Programmer wrote code.
I'll call him Mel,
because that was his name.

I first met Mel when I went to work for Royal McBee Computer Corp.,
a now-defunct subsidiary of the typewriter company.
The firm manufactured the LGP-30,
a small, cheap (by the standards of the day)
drum-memory computer,
and had just started to manufacture
the RPC-4000, a much-improved,
bigger, better, faster — drum-memory computer.
Cores cost too much,
and weren't here to stay, anyway.
(That's why you haven't heard of the company,
or the computer.)

I had been hired to write a FORTRAN compiler
for this new marvel and Mel was my guide to its wonders.
Mel didn't approve of compilers.

"If a program can't rewrite its own code",
he asked, "what good is it?"

Mel had written,
in hexadecimal,
the most popular computer program the company owned.
It ran on the LGP-30
and played blackjack with potential customers
at computer shows.
Its effect was always dramatic.
The LGP-30 booth was packed at every show,
and the IBM salesmen stood around
talking to each other.
Whether or not this actually sold computers
was a question we never discussed.

Mel's job was to re-write
the blackjack program for the RPC-4000.
(Port?  What does that mean?)
The new computer had a one-plus-one
addressing scheme,
in which each machine instruction,
in addition to the operation code
and the address of the needed operand,
had a second address that indicated where, on the revolving drum,
the next instruction was located.

In modern parlance,
every single instruction was followed by a GO TO!
Put that in Pascal's pipe and smoke it.
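
A rough sketch, in modern Python, of what "one-plus-one" addressing amounts to: every instruction word carries the drum address of its successor, so control flow is an explicit chain of jumps rather than an incrementing program counter. The opcodes and word layout below are invented for illustration; they are not the real RPC-4000 instruction format.

    # Toy simulation of a one-plus-one addressed drum machine.
    # Each instruction names BOTH an operand address and the address of the
    # NEXT instruction; in effect, every instruction ends in a GO TO.
    # Opcodes and layout are invented for illustration only.

    memory = {}   # the "drum": address -> instruction tuple or data word

    def word(op, operand_addr, next_addr):
        """Pack a toy instruction: (opcode, operand address, next-instruction address)."""
        return (op, operand_addr, next_addr)

    # A tiny program scattered around the drum the way a hand optimizer might
    # place it; each instruction says where execution continues.
    memory[0o100] = word("LOAD",  0o500, 0o117)   # acc = mem[0o500], then go to 0o117
    memory[0o117] = word("ADD",   0o501, 0o135)   # acc += mem[0o501], then go to 0o135
    memory[0o135] = word("STORE", 0o502, 0o000)   # mem[0o502] = acc, then go to 0o000
    memory[0o000] = word("HALT",  0, 0)
    memory[0o500], memory[0o501] = 2, 3           # data

    def run(start):
        acc, pc = 0, start
        while True:
            op, operand, nxt = memory[pc]
            if op == "LOAD":
                acc = memory[operand]
            elif op == "ADD":
                acc += memory[operand]
            elif op == "STORE":
                memory[operand] = acc
            elif op == "HALT":
                return acc
            pc = nxt   # the second address: an unconditional jump after every instruction

    print(run(0o100))  # prints 5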

Mel loved the RPC-4000
because he could optimize his code:
that is, locate instructions on the drum
so that just as one finished its job,
the next would be just arriving at the "read head"
and available for immediate execution.
There was a program to do that job,
an "optimizing assembler",
but Mel refused to use it.

"You never know where it's going to put things",
he explained, "so you'd have to use separate constants".

It was a long time before I understood that remark.
Since Mel knew the numerical value
of every operation code,
and assigned his own drum addresses,
every instruction he wrote could also be considered
a numerical constant.
He could pick up an earlier "add" instruction, say,
and multiply by it,
if it had the right numeric value.
His code was not easy for someone else to modify.

I compared Mel's hand-optimized programs
with the same code massaged by the optimizing assembler program,
and Mel's always ran faster.
That was because the "top-down" method of program design
hadn't been invented yet,
and Mel wouldn't have used it anyway.
He wrote the innermost parts of his program loops first,
so they would get first choice
of the optimum address locations on the drum.
The optimizing assembler wasn't smart enough to do it that way.

Mel never wrote time-delay loops, either,
even when the balky Flexowriter
required a delay between output characters to work right.
He just located instructions on the drum
so each successive one was just past the read head
when it was needed;
the drum had to execute another complete revolution
to find the next instruction.
He coined an unforgettable term for this procedure.
Although "optimum" is an absolute term,
like "unique", it became common verbal practice
to make it relative:
"not quite optimum" or "less optimum"
or "not very optimum".
Mel called the maximum time-delay locations
the "most pessimum".

After he finished the blackjack program
and got it to run
("Even the initializer is optimized",
he said proudly),
he got a Change Request from the sales department.
The program used an elegant (optimized)
random number generator
to shuffle the "cards" and deal from the "deck",
and some of the salesmen felt it was too fair,
since sometimes the customers lost.
They wanted Mel to modify the program
so, at the setting of a sense switch on the console,
they could change the odds and let the customer win.

Mel balked.
He felt this was patently dishonest,
which it was,
and that it impinged on his personal integrity as a programmer,
which it did,
so he refused to do it.
The Head Salesman talked to Mel,
as did the Big Boss and, at the boss's urging,
a few Fellow Programmers.
Mel finally gave in and wrote the code,
but he got the test backwards,
and, when the sense switch was turned on,
the program would cheat, winning every time.
Mel was delighted with this,
claiming his subconscious was uncontrollably ethical,
and adamantly refused to fix it.

After Mel had left the company for greener pa$ture$,
the Big Boss asked me to look at the code
and see if I could find the test and reverse it.
Somewhat reluctantly, I agreed to look.
Tracking Mel's code was a real adventure.

I have often felt that programming is an art form,
whose real value can only be appreciated
by another versed in the same arcane art;
there are lovely gems and brilliant coups
hidden from human view and admiration, sometimes forever,
by the very nature of the process.
You can learn a lot about an individual
just by reading through his code,
even in hexadecimal.
Mel was, I think, an unsung genius.

Perhaps my greatest shock came
when I found an innocent loop that had no test in it.
No test.  None.
Common sense said it had to be a closed loop,
where the program would circle, forever, endlessly.
Program control passed right through it, however,
and safely out the other side.
It took me two weeks to figure it out.

The RPC-4000 computer had a really modern facility
called an index register.
It allowed the programmer to write a program loop
that used an indexed instruction inside;
each time through,
the number in the index register
was added to the address of that instruction,
so it would refer
to the next datum in a series.
He had only to increment the index register
each time through.
Mel never used it.

Instead, he would pull the instruction into a machine register,
add one to its address,
and store it back.
He would then execute the modified instruction
right from the register.
The loop was written so this additional execution time
was taken into account —
just as this instruction finished,
the next one was right under the drum's read head,
ready to go.
But the loop had no test in it.

The vital clue came when I noticed
the index register bit,
the bit that lay between the address
and the operation code in the instruction word,
was turned on —
yet Mel never used the index register,
leaving it zero all the time.
When the light went on it nearly blinded me.

He had located the data he was working on
near the top of memory —
the largest locations the instructions could address —
so, after the last datum was handled,
incrementing the instruction address
would make it overflow.
The carry would add one to the
operation code, changing it to the next one in the instruction set:
a jump instruction.
Sure enough, the next program instruction was
in address location zero,
and the program went happily on its way.
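
A rough illustration, in Python with an invented word layout, of the trick just described: the loop increments the operand address of the instruction held in a register, and when that address field overflows, the carry ripples through the deliberately set index bit into the operation code, turning the instruction into the next opcode in the set, a jump whose wrapped-around address is zero. None of the field widths or opcode values below are the real RPC-4000 encoding.

    # Toy instruction word, high bits to low: [ opcode | index bit | operand address ]
    # Field widths and opcode values are invented for illustration.
    ADDR_BITS  = 12
    ADDR_MASK  = (1 << ADDR_BITS) - 1
    INDEX_BIT  = 1 << ADDR_BITS             # the single bit between address and opcode
    OPCODE_LSB = ADDR_BITS + 1

    LOAD, JUMP = 0o10, 0o11                  # pretend JUMP follows LOAD in the opcode set

    def fields(w):
        """Split a word into (opcode, index bit, operand address)."""
        return w >> OPCODE_LSB, (w >> ADDR_BITS) & 1, w & ADDR_MASK

    # A LOAD of the topmost address, with the index bit deliberately set to 1.
    w = (LOAD << OPCODE_LSB) | INDEX_BIT | ADDR_MASK
    print([oct(f) for f in fields(w)])       # ['0o10', '0o1', '0o7777']

    # The loop "increments the address" by adding 1 to the whole word...
    w += 1

    # ...and the carry ripples out of the address field, through the set index
    # bit, and into the opcode: LOAD becomes JUMP, and the address wraps to 0.
    print([oct(f) for f in fields(w)])       # ['0o11', '0o0', '0o0']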

I haven't kept in touch with Mel,
so I don't know if he ever gave in to the flood of
change that has washed over programming techniques
since those long-gone days.
I like to think he didn't.
In any event,
I was impressed enough that I quit looking for the
offending test,
telling the Big Boss I couldn't find it.
He didn't seem surprised.

When I left the company,
the blackjack program would still cheat
if you turned on the right sense switch,
and I think that's how it should be.
I didn't feel comfortable
hacking up the code of a Real Programmer.

This is one of hackerdom's great heroic epics, free verse or no. In a few spare images it captures more about the esthetics and psychology of hacking than all the scholarly volumes on the subject put together. (But for an opposing point of view, see the entry for Real Programmer.)

[1992 postscript — the author writes: "The original submission to the net was not in free verse, nor any approximation to it — it was straight prose style, in non-justified paragraphs. In bouncing around the net it apparently got modified into the 'free verse' form now popular. In other words, it got hacked on the net. That seems appropriate, somehow." The author adds that he likes the 'free-verse' version better than his prose original...]

[1999 update: Mel's last name is now known. The manual for the LGP-30 refers to "Mel Kaye of Royal McBee who did the bulk of the programming [...] of the ACT 1 system".]

[2001: The Royal McBee LGP-30 turns out to have one other claim to fame. Meteorologist Edward Lorenz was doing weather simulations on an LGP-30 when, in 1961, he discovered the "Butterfly Effect" and computational chaos. This seems, somehow, appropriate.]

[2002: A copy of the programming manual for the LGP-30 lives at]


Monday, May 16, 2016

Google+ is dead. Long Live Google+.

As you've probably read, Google announced Spaces, a tool for small group sharing.
Group sharing isn’t easy. From book clubs to house hunts to weekend trips and more, getting friends into the same app can be challenging. Sharing things typically involves hopping between apps to copy and paste links. Group conversations often don’t stay on topic, and things get lost in endless threads that you can’t easily get back to when you need them.
Anyone think of Google+ Circles when they read that opener above?

Tuesday, February 2, 2016

Information Systems Development — Systems Analysis and Design Methods

Please note that the following publication is available on an "as-is" basis and has not undergone any editorial review. 

Review questions:

3. Table 3-1 in the textbook illustrates the difference in a typical project's duration, person-months, quality, and cost, depending upon whether an organization's system development process is at CMM level 1, 2, or 3. Between which two CMM levels does an organization gain the greatest benefit in terms of percentage of improvement? What do you think is the reason for this? 
~> While this is by no means the only correct answer, I'd argue that the greatest benefit in terms of percentage of improvement comes between levels 1 and 2. The goal of CMM is not to stop at level 2 but to move on to as high a level as is feasible; still, since every organization starts at level 1 and must move up one step at a time, the transition from level 1 to level 2 is one that every organization has made or must make. Reaching level 2 is critical because it is the first level at which experience from earlier projects is used to help with current and future projects. A stable level 2 also makes the transition to the standardized processes of the next level as smooth as possible: a solid foundation at this stage makes standardizing processes for the later levels less troublesome and less costly.

At a cursory glance, the business value (the ratio between the investment in CMM and the dollar value it generates) also appears greatest for the transition from level 1 to level 2. Finally, a level 1 organization is totally immature, so there is almost no basis for judging quality or efficiency at that stage. Jian Wang at the University of Michigan at St Louis wrote, "In an immature organization, there is no objective basis for judging product quality or for solving product or process problems." The percentage improvement in going from nothing to something cannot be beaten at any other level. At the higher levels you are probably trying to go from, say, ~60% (in efficiency or uptime) to ~80%, or at level 5 (optimizing) you are chasing the proverbial six nines, 99.9999%. Going from 60% to 80% does not come close to the effectively unbounded percentage improvement you get in going from zero to any finite number (say, around 20%).
Thus, I believe that the greatest gain in terms of percentage of improvement is between levels 1 and 2.
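
To make that percentage arithmetic concrete, here is a quick Python sketch; the 60%, 80%, and 20% figures are the same hypothetical numbers used above, not data from the textbook's Table 3-1.

    # Percentage improvement = (new - old) / old * 100
    def pct_improvement(old, new):
        return (new - old) / old * 100

    print(pct_improvement(60, 80))     # ~33%: a typical gain between higher levels
    print(pct_improvement(0.01, 20))   # ~199,900%: going from almost nothing to something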

4. Systems development methodology and system life cycle are two terms that are frequently used, and just as frequently misused. What are the differences between these two terms?
A systems development methodology is a formalized approach to the systems development process: a standardized process that includes the activities, methods, best practices, deliverables, and automated tools to be used for information systems development. The system life cycle, by contrast, is the factoring of the lifetime of an information system into two stages: (1) systems development and (2) systems operation and maintenance. First you build it, then you use and maintain it; eventually you cycle back to the redevelopment of a new system.
SDM is planned. SLC just happens.
SDM executes the system development of the SLC. There is more to SLC than just development.
Methodology can be purchased or homegrown. Life cycle decisions must, ultimately, be in-house.

6. A number of underlying principles are common to all systems development methodologies. Identify these underlying principles and explain them.
The principles are as follows:
  1. Get the system users involved: Seek agreement from all stakeholders. Try to minimize miscommunication and misunderstanding between technical staff and the users and management.
  2. Use a problem-solving approach: Understand the problem and follow all steps when using the classical problem-solving approach.
  3. Establish phases and activities: Establish and follow the phases and activities as decided.
  4. Document throughout development: Documentation is critical. Document as you go and try to avoid post-documentation.
  5. Establish standards: Standards are important in order to allow disparate objects to work together in sync. Embrace standards, not proprietary technology, wherever possible.
  6. Manage the process and projects: Be consistent. Use your system development process/methodology in all projects.
  7. Justify information systems as capital investments: Make sure the cost-benefit analyses are properly conducted and the estimated costs truly reflect reality. Pay attention to unexpected costs and overhead, as well as expanding scope.
  8. Don't be afraid to cancel or revise scope: Creeping commitment can cause entire projects to never deliver. Do not try to accomplish too much, and do not be afraid to draw lines in the sand. Use sound economics and common sense, understand the concept of sunk costs, and, if necessary, do not be afraid to call the whole project off.
  9. Divide and conquer: Use factoring (dividing a system into subsystems and components in order to more easily conquer the problem and build the larger system).
  10. Design systems for growth and change: Plan ahead.

8. Each phase of the project includes specific deliverables that must be produced and delivered to the next phase. Using the textbook's hypothetical FAST methodology, what are the deliverables for the requirements analysis, logical design, and physical design/integration phases? 
Requirements analysis: business requirements statement
Logical design: logical system models and specifications
Physical design/integration: some combination of physical design models and specifications, design prototypes, and redesigned business processes

Research project #2. You are a new project manager and have been assigned responsibility for an enterprise information systems project that touches every division in your organization. The chief executive officer stated at project initiation that successfully implementing this project was the number one priority of your organization. The project is in the midst of the requirements analysis phase. While it is on schedule, you notice that attendance by the system users and owners at requirements meetings has been dropping. A more experienced project manager has told you not to worry, that this is normal. Should you be concerned?
~> Yes. Although the project is on schedule and likely to ship on time, if the final product is not what the stakeholders need, the project will require costly modifications. Fixing a system before release is much cheaper than fixing one that has already shipped. Moreover, it makes the most sense to truly capture and understand what the stakeholders need and to make that an integral part of the project from the start. So yes, there is reason to be concerned about the dropping attendance. If attendance cannot be improved, an alternative way to encourage stakeholder participation has to be found.

Saturday, February 18, 2012

Repetition is clever and it works

Family Guy and teachers know that repetition works. Observe how many times they repeat the words "Wheat Thins." Observe and learn.