Retrocomputing
Asked by supercat on December 8, 2020
The Apollo Guidance Computer had its code stored in six modules that held 6 kwords of storage each, and the design of each module was such that changing even a single bit after construction would have been very difficult. On the other hand, the computer was constructed in such a way that swapping one module out for a different one would have been fairly easy.
Was any effort made to design the code in such a way that if a defect was discovered, it would be possible to fix the code by rebuilding one or two modules, using the other four or five without modification? If so, what techniques were used for that purpose?
If the code were simply reassembled from scratch each time it was changed, many address references scattered throughout would be likely to change whenever instructions were added or removed, requiring that all memory modules be rebuilt. On the other hand, there are a number of approaches that could minimize such issues.
One approach would be to subdivide the software into six separately built pieces of firmware, each beginning with a jump table listing all of its externally callable routines. If every inter-module call is dispatched through that table, the code within a module could grow, shrink, or move without affecting callers in other modules.
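As a rough illustration of that first approach (a sketch only, not anything the AGC actually used), consider a per-module table of entry points at a fixed, published location. Modeled here in C with function pointers; the module number and routine names are invented:

    #include <stdio.h>

    /* Hypothetical externally callable routines inside "module 3";
       their in-module addresses may change from build to build.   */
    static void update_attitude(void) { puts("update_attitude"); }
    static void fire_thruster(void)   { puts("fire_thruster"); }

    typedef void (*entry_fn)(void);

    /* The jump table sits at the fixed, published start of the module.
       Callers in other modules use an index into this table instead of a
       routine's address, so routines can move freely within the module
       without invalidating any external reference.                     */
    static const entry_fn module3_entries[] = {
        update_attitude,   /* entry 0 */
        fire_thruster,     /* entry 1 */
    };

    int main(void) {
        /* A caller in another module dispatches through the table. */
        module3_entries[1]();   /* calls fire_thruster via its fixed index */
        return 0;
    }

The price is one extra level of indirection on every inter-module call, plus one table word per exported routine.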
An alternative approach would be to consolidate the jump tables for all modules, along with most of the unused space, into a single module. Fixing any routine would then require rebuilding the module containing the master jump table, but provided the corrected routines aren't too big, a fixed copy could be placed within that master jump module, no matter which module originally held them, without having to modify any other modules.
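Likewise, a hypothetical sketch of the consolidated-table approach, again in C with invented names: the master module holds every entry point plus the spare space, so a defective routine can be superseded by repointing its entry at a corrected copy placed in that spare space:

    #include <stdio.h>

    typedef void (*entry_fn)(void);

    /* Routines as originally built into the six code modules (two shown). */
    static void align_platform(void) { puts("align_platform (module 2, original)"); }
    static void read_radar(void)     { puts("read_radar (module 5)"); }

    /* Corrected copy of the defective routine, placed in the spare space of
       the module that holds the master jump table.                        */
    static void align_platform_fixed(void) { puts("align_platform (fixed copy)"); }

    /* Master jump table, consolidated in one module.  Repairing the defect
       means rebuilding only this module: the entry is repointed at the fixed
       copy, while module 2 and every caller stay exactly as built.         */
    static const entry_fn master_table[] = {
        align_platform_fixed,   /* was align_platform */
        read_radar,
    };

    int main(void) {
        (void)align_platform;   /* original still sits in module 2, now unreachable */
        master_table[0]();      /* callers keep using the same fixed index */
        master_table[1]();
        return 0;
    }

The original copy in its home module simply goes unused; only the module holding the master table and the spare space needs to be rebuilt.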
Did the AGC software, or other early programs, make use of such techniques to ensure that changes to ROM could be confined to as few modules/chips as possible?
The AGC, at least, didn't employ any such indirection. Calls to subroutines in other banks were performed via the BANKCALL routine: a TC BANKCALL instruction followed by a CADR pseudo-instruction containing the target label. The CADR encoded the destination bank and address directly, so if a routine moved because preceding content in the same bank grew or shrank, every caller would need to be updated. Calls to subroutines in the same bank, or in the first two "fixed-fixed" banks (which were always addressable regardless of the bank-switch register), were done by a direct TC jump, again encoding the address directly (in fact, since TC is opcode 0, the instruction word is the address).
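For illustration only, here is a small C sketch of that "bank number and offset baked directly into the caller's word" scheme; the 10-bit offset (1 kword per bank) and the bit layout are assumptions for the example, not the AGC's actual CADR format:

    #include <stdint.h>
    #include <stdio.h>

    /* A "complete address" packed into a single ROM word: bank number in
       the high bits, in-bank offset in the low bits.  The layout here is
       assumed for illustration only.                                    */
    #define OFFSET_BITS 10u
    #define MAKE_ADDR(bank, offset) ((uint16_t)(((bank) << OFFSET_BITS) | ((offset) & 0x3FFu)))

    int main(void) {
        /* Every caller of the routine carries a constant like this one. */
        uint16_t caller_word = MAKE_ADDR(7u, 0123u);   /* bank 7, offset 0123 (octal) */
        printf("caller's word as built:      %06o\n", (unsigned)caller_word);

        /* If earlier code in bank 7 grows and the routine slides to offset
           0131, the word above is stale, and so is the copy in every other
           caller: each module containing one must be rebuilt.            */
        printf("word needed after the move:  %06o\n", (unsigned)MAKE_ADDR(7u, 0131u));
        return 0;
    }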
Answered by hobbs on December 8, 2020
Any time you have a machine with multiple microcontrollers, you will have multiple, independent ROMs. Factory automation systems, for example, contain dozens of microcontrollers linked together on a CANopen network, with predetermined Node IDs on the bus and Object IDs (addresses) on each node. If a node needs to communicate directly with another node, it needs to know the second node's Node ID and the destination Object ID for the message, similar to a jump table in a single-CPU solution.
In this way, each ROM is independent of the others and can be updated with new firmware without requiring the addresses (node and object IDs) to change. It's basically a decentralized solution to the problem of address references changing when a ROM is changed.
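As a loose sketch of the idea in C (the struct, the function, and the IDs below are invented for illustration and do not correspond to any particular CANopen stack's API):

    #include <stdint.h>
    #include <stdio.h>

    /* Addressing a value on another controller by (node ID, object index,
       sub-index), in the spirit of a CANopen SDO read.                  */
    typedef struct {
        uint8_t  node_id;    /* which controller on the bus           */
        uint16_t index;      /* object dictionary index on that node  */
        uint8_t  subindex;   /* entry within that object              */
    } remote_object;

    static void request_read(remote_object obj) {
        /* A real implementation would build and transmit a CAN frame here. */
        printf("SDO read: node %u, object 0x%04X sub %u\n",
               (unsigned)obj.node_id, (unsigned)obj.index, (unsigned)obj.subindex);
    }

    int main(void) {
        /* The caller only needs the agreed node and object IDs; it never
           sees where in the remote node's ROM the handler lives, so that
           firmware can be rebuilt without touching any other node.      */
        remote_object conveyor_speed = { .node_id = 0x12, .index = 0x6042, .subindex = 0 };
        request_read(conveyor_speed);
        return 0;
    }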
Answered by snips-n-snails on December 8, 2020
Isn't that the basic use of any modularization? As soon as there are multiple storage units, they can be exchanged for updates. For the Apple II, for example, it happened twice: replacing the Monitor ROM with the Autostart ROM, and/or the Integer BASIC ROM(s) with the Applesoft ROMs.
Beside that, most (large/early) computers didn't have massive ROMs, only microcode. The AGC is an outlier here, as it is more of an embedded system. Microcode was more often than not patched at a level smaller than a module, such as exchanging single cards or rewiring some bits.
Also, a closed single-application system like the AGC doesn't need to waste much space on abstraction layers, since any replacement module is best made to fit the external references (entry points) exactly as its predecessor did.
Answered by Raffzahn on December 8, 2020