
Software Engineering in 1980
by Graeme Bentley, July 2022


If there’s one ‘takeaway’ from my 50-year IT career, it is to always use the right tool for each scenario

Photo by Frank Okay on Unsplash

My father was a carpenter and builder, and Australians have a rather old derogatory expression for a hammer: the ‘yankee screwdriver’, where even screws are treated like nails.

It is the classic case of using a single tool for every job even if there are better tools. (The ‘Yankee’ brand tool goes back to WW1 days — typically a screwdriver-type tool with interchangeable bits, usually with a push-pull ratchet action.)

Image, royalty free from dreamstime.com

Having first learned FORTRAN II programming at a school science students’ camp in 1966, I progressed to a general science degree that included programming, numerical methods and data analysis in year 1. My 3rd-year major subject was Computer Science (1969), covering 5 or 6 different languages, from assembler to functional to object-oriented (SIMULA67 — a simulation extension pre-processor to ALGOL60). Assignments/projects ranged from the mundane, to 3D modeling with perspective plotting, to digital circuit simulation. Sub-major subjects were Physics and Pure Mathematics.

I then took a gap year working on a biochemistry laboratory automation system.

I returned to university for post-graduate research in Database Management on a mini-computer, then used that for research into deductive logic processing.

Although my computing orientation had been technical, I could see that my best future was in the commercial world, so I took a summer-school course in Introductory Accounting (which I augmented by becoming treasurer of a charity organization I was involved with, then later my church’s treasurer).

The project I want to focus on was a city-wide sewage works redevelopment in 1980. A Prime Contractor was handling the civil engineering, and an electronics sub-contractor was building the control systems — although they had micro-processor experience, they sub-contracted the computer management system to my employer.

When I arrived, I found a couple of engineers fiddling around the edges of the project. But I could immediately see that the project design needed a Chronological Functional Decomposition (the right tool for the job at that stage). Yes, there was real-time data acquisition and process control, and a real-time graphic status display. But past that, real-time was not a factor — there was an alarm log, a shift (8-hourly) report, and daily and monthly reports incorporating manual data entry (typically chemical analysis results). So I came up with the following Data Flow Diagram.

Data Flow Diagram — rolling summarized data forward from Real Time, through Hourly, Daily and Monthly summaries. Augmented with Manual Data Entry, these feed the Real Time Graphic display, reports and Process Control of Outputs.

The plant system was to have 700 binary alarm signals and 150 digital value inputs (analog-to-digital converted). None of these were identified in advance; all were to be configurable through the electronics switch panel. Manual data entry was required, but again no specifics were defined. Similarly, the timing of reports was specified, but not how many reports there would be or what data was to be reported. Report contents and layout had to be ‘end-user configurable’.

My company had just recently undertaken training in the PRIDE data dictionary methodology (long gone, and I can find no trace of it on the internet). So I decided to build a data management sub-system around a data dictionary. Data elements were defined with their source (an input type and number, or manual entry), labels for screens and reports (short and long), storage data type and output format. File records were defined as a set of data elements, and reports were similarly defined as a set of elements along with an index to one of the predefined data manipulation algorithms (average, mean, etc.).
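As a rough illustration (in modern Python rather than the project’s Pascal, and with field names that are my own rather than PRIDE’s), the dictionary records might be sketched like this:

    from dataclasses import dataclass
    from typing import List

    # One entry per data element: where it comes from, how it is labelled,
    # how it is stored and how it is formatted for output.
    @dataclass
    class DataElement:
        name: str            # dictionary key, e.g. "DIGESTER_1_TEMP" (invented)
        source: str          # e.g. "ANALOG:17", "BINARY:203" or "MANUAL"
        short_label: str     # used on screens
        long_label: str      # used on report headings
        storage_type: str    # e.g. "INTEGER", "REAL", "BOOLEAN"
        output_format: str   # e.g. "%6.1f"

    # A file record is simply an ordered set of data elements.
    @dataclass
    class RecordDefinition:
        name: str
        elements: List[str]        # names of DataElement entries

    # A report column pairs an element with one of the predefined
    # data manipulation algorithms (average, mean, ...).
    @dataclass
    class ReportColumn:
        element: str
        algorithm: str             # name of a predefined algorithm

    @dataclass
    class ReportDefinition:
        name: str
        columns: List[ReportColumn]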

A generic table-driven data entry program was developed which mapped a file/record definition to the screen. This provided the functionality for manual data entry of test-result data and of process control parameters. Similarly, a generic report generator retrieved data items from its base table, applied any summarization algorithm, and built each report line. An obvious(?) step was to define a table of metadata that described the application data dictionary records themselves, so the system tables and report configurations were built up ‘from the bootstraps’.
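Continuing that sketch, a report generator driven purely by such definitions could look something like the following; the function name, algorithm names and data shapes are illustrative assumptions, not the original code:

    from statistics import mean

    # Predefined data manipulation algorithms, selected by name from the dictionary.
    ALGORITHMS = {
        "AVERAGE": mean,
        "MINIMUM": min,
        "MAXIMUM": max,
        "LAST": lambda values: values[-1],
    }

    def generate_report(report_def, dictionary, base_table):
        """Build report lines purely from the table definitions sketched above.

        dictionary maps element name -> DataElement;
        base_table maps element name -> list of logged values for the period.
        """
        lines = []
        for column in report_def.columns:
            element = dictionary[column.element]
            values = base_table[column.element]
            summarized = ALGORITHMS[column.algorithm](values)
            lines.append(f"{element.long_label:<30}"
                         f"{element.output_format % summarized}")
        return lines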

Similarly, the process controls were table-driven. The 32 output signals were derived from input signals as coded in their process control records, along with manual parameter control values entered and maintained through the generic data maintenance program. A generic, parameter-driven process control feedback algorithm was used — there are 4 process control outputs for each of the 8 sludge digesters. For example, the temperature in a sludge digester must be kept within a narrow range to keep the anaerobic bacteria alive, so the heat-exchange temperature is controlled. Similarly, the recycling pump flow adjusts according to the level of sludge in the digester tank.
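A minimal sketch of that kind of parameter-driven feedback step, again in Python, with assumed parameter names (setpoint, deadband, gain) rather than the plant’s actual control record fields:

    def control_step(measured, control):
        """One pass of a simple parameter-driven feedback algorithm (a sketch).

        `control` stands in for a process control record, e.g.
        {"setpoint": 36.0, "deadband": 0.5, "gain": 10.0,
         "min_out": 0.0, "max_out": 100.0}
        Returns a new output value (e.g. a heat-exchange valve position),
        or None to leave the output unchanged.
        """
        error = control["setpoint"] - measured
        if abs(error) <= control["deadband"]:
            return None                      # within the band: no change
        output = control["gain"] * error
        return max(control["min_out"], min(control["max_out"], output))

    # Example: keep a sludge digester near 36 degrees C (an assumed figure)
    # by driving its heat exchanger.
    digester_control = {"setpoint": 36.0, "deadband": 0.5, "gain": 10.0,
                        "min_out": 0.0, "max_out": 100.0}
    new_output = control_step(measured=34.2, control=digester_control)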

There were three real-time graphic displays requested in some detail, but the requirement also called for a ‘user-configurable display design for future use’. The engineering company had already selected a RAMTEK graphic display terminal that used a ‘mark-up’ language to draw the graphic design (this was 13 years prior to HTML). The mark-up was sent to the terminal as an ASCII data stream. For example:-

[COLOR RED][RECT <x-value>,<y-value>,<width value>,<height value>]

All <values> were in pixels. (Forgive me, RAMTEK, if I haven’t remembered your syntax correctly.) So in fact it was quite reasonable to draw up the required layout on graph paper and code the corresponding drawing sequence. But the graphics we required had to display dynamic data in “real-time”. There would be meter readings, red/yellow/green signals, switches/stop-valves that indicated “open/closed”, arrows with changeable direction, etc. — think “Dashboard”.

My solution was to extend the RAMTEK language with three simple constructs.

  • Firstly, values could be replaced by named variables that referred to values in the in-memory data streams. There were two types of variable, booleans (ON/OFF) and integers, represented as “Bnnn” and “Innn”.
  • Secondly, labels could be assigned to any statement — e.g. Lnnn:.
  • Thirdly, a simple variable test construct was defined with simple boolean and logical operators, which, if it tested “true”, directed “execution” to a specified label (e.g. [IF B123 ON L456]).

In today’s parlance, this is known as “templating” (e.g. PHP, c.1995). The implementation was our own interpreter program that processed a template file, substituting variables with their real-time values, performing the tests and branching, and continuously outputting a stream of drawing commands to the terminal. The end of a template file simply looped back to the start, so the “real-time” screen refresh rate was determined by how quickly this interpretation loop executed.
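A minimal sketch of such an interpreter pass in modern Python; the statement syntax is my approximation of the extended mark-up, and get_bool, get_int and emit stand in for the real data-area reads and the terminal output stream:

    import re

    def run_template(template_lines, get_bool, get_int, emit):
        """Interpret one pass of a display template (a sketch, not the original).

        template_lines: the mark-up, one statement per line, e.g.
            "L010: [COLOR RED]"
            "[IF B123 ON L040]"
            "[RECT 10,20,I045,30]"
        get_bool(n) / get_int(n): read live values from the in-memory data area.
        emit(text): send one drawing command to the terminal.
        """
        # First pass: remember where each "Lnnn:" label points.
        labels = {}
        for i, line in enumerate(template_lines):
            m = re.match(r"\s*(L\d+):", line)
            if m:
                labels[m.group(1)] = i

        pc = 0
        while pc < len(template_lines):
            line = re.sub(r"^\s*L\d+:\s*", "", template_lines[pc]).strip()
            m = re.match(r"\[IF\s+(B\d+)\s+(ON|OFF)\s+(L\d+)\]", line)
            if m:
                var, state, target = m.groups()
                if get_bool(int(var[1:])) == (state == "ON"):
                    pc = labels[target]          # branch when the test is true
                    continue
            else:
                # Substitute integer variables "Innn" with their current values.
                line = re.sub(r"I(\d+)",
                              lambda v: str(get_int(int(v.group(1)))), line)
                emit(line)
            pc += 1

One call of run_template is one screen refresh; the real system simply looped from the end of the template file back to its start, as fast as the interpretation would allow.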

The three displays were:-

  1. Hydraulic flow — flows, tank levels, pump statuses;
  2. Sludge digesters monitoring — flows, temperature, recycle pump operation, and tank level;
  3. Electrical generation monitoring (powered by methane from the sludge digesters) feeding back into the city’s electricity grid — circuit breaker statuses, voltages, power usage within the plant, switch conditions.

(Apologies for the hand-drawn diagrams — that’s all I have left from the project)

I view all of these as ‘tools’ for various stages in the SDLC. Many of these can now be found in structured management methodologies like PRINCE2. Others more toward the technical end are now available within IDEs and custom software architectures.

But a key skill of the best Software Engineers is the creation of application-specific tools and languages with their own grammar. The data dictionary, the generic data entry and reporting tools, and the mark-up language templating in this project exemplify the tool development that was necessary in the early years, before such tools became commercially available.

This system was developed on a PDP-11 under the RSX11M real-time operating system. The RSX11 shared memory partition was used as the core receptacle for the plant input data, which was averaged and written at intervals to the hourly logs.
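A hedged sketch of that interval averaging, in Python rather than the original RSX/Pascal arrangement; read_inputs and write_hourly_record are assumed stand-ins for reading the shared partition and appending to the hourly log:

    import time

    def hourly_logger(read_inputs, write_hourly_record, interval_seconds=3600):
        """Accumulate plant inputs and write an averaged record each interval.

        read_inputs(): a snapshot of the shared in-memory data area,
                       as a dict of element name -> current value.
        write_hourly_record(averages): append one record to the hourly log.
        """
        sums, counts = {}, {}
        deadline = time.monotonic() + interval_seconds
        while True:
            for name, value in read_inputs().items():
                sums[name] = sums.get(name, 0.0) + value
                counts[name] = counts.get(name, 0) + 1
            if time.monotonic() >= deadline:
                write_hourly_record({n: sums[n] / counts[n] for n in sums})
                sums, counts = {}, {}
                deadline += interval_seconds
            time.sleep(1.0)      # the sampling rate here is an assumption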

The Pascal programming language with some system programming extensions was used. Only serial and direct access files were supported.

The project being in 1980, before personal computers and word processors, documentation was all long-hand, with hand-drawn diagrams.

I left six months after completion of the software development. The design of the system required extensive configuration by the Control System electronics sub-contractor, hand in hand with plant personnel. I believe that the Prime Contractor ran into financial difficulties, so the final implementation was long delayed.


