Retrocomputing Asked on January 29, 2021
CGA on the original IBM PC produced sixteen colors: one bit each for red, green, and blue, plus an overall intensity bit.
The preferred output device was the later-arriving 5153 color monitor, which accepted the RGBI signal in digital form, four bits each on a separate wire at TTL voltage levels, and performed its own digital-to-analog conversion.
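(A minimal sketch of how those four bits enumerate the sixteen colors; the index order and names below follow the commonly documented CGA palette, so treat it as an illustration rather than a hardware description.)

    # Sketch: the sixteen CGA colors as combinations of the four RGBI bits
    # (index = I*8 + R*4 + G*2 + B, the usual CGA attribute order).
    CGA_NAMES = [
        "black", "blue", "green", "cyan", "red", "magenta", "brown", "light gray",
        "dark gray", "light blue", "light green", "light cyan",
        "light red", "light magenta", "yellow", "white",
    ]

    for index, name in enumerate(CGA_NAMES):
        i, r, g, b = (index >> 3) & 1, (index >> 2) & 1, (index >> 1) & 1, index & 1
        print(f"{index:2d}  I={i} R={r} G={g} B={b}  {name}")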
This contrasts with the later VGA, where the DAC was on the video card and the monitor accepted analog signals. Of course, VGA had many more bits per pixel, so it's easy to see why VGA did the conversion on the card.
Why did CGA leave the DAC to the monitor? One possible answer is that with only four bits per pixel there is no particular disadvantage, so basically: why not.
But the other CGA output option was NTSC, and that involved doing DAC on the card after all. Admittedly not to the same analog format as the monitor ends up using, but it still seems intuitively likely that some of the circuitry could’ve been shared. Which in turn suggests there should be some offsetting positive advantage to leaving DAC in the monitor.
So why did CGA leave DAC to the monitor instead of putting it in the video card?
"But the other CGA output option was NTSC, and that involved doing DAC on the card after all."
I think this is where the basic logic error of your question hides. Colour in NTSC is neither an analogue level nor tied to intensity. Intensity is the base black-and-white part of a colour TV signal and is formed independently as a level (I wouldn't dare call it a DAC, it just emits two levels), while the colour signal is added on top and encoded as timing-based information (the phase of the colour subcarrier).
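A toy sketch of that distinction, assuming the standard NTSC subcarrier: brightness is a plain level, while the colour is carried as the phase of a ~3.58 MHz subcarrier relative to the colour burst. The function and values below are purely illustrative, not a real encoder:

    import math

    # Brightness is a level (luma); colour is the *phase* of the subcarrier -
    # timing information, not another analogue level.
    F_SC = 3.579545e6  # NTSC colour subcarrier frequency in Hz

    def composite_sample(luma, saturation, hue_degrees, t):
        """luma: 0..1 level, saturation: chroma amplitude, hue: phase angle, t: time in s."""
        chroma = saturation * math.sin(2 * math.pi * F_SC * t + math.radians(hue_degrees))
        return luma + chroma

    # Two colours with the same brightness differ only in subcarrier phase:
    print(composite_sample(0.5, 0.2, 0.0, 1e-7))    # one hue
    print(composite_sample(0.5, 0.2, 120.0, 1e-7))  # another hue, same luma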
"Admittedly not to the same analog format as the monitor ends up using,"
Neither one is what the monitor ends up using:
With NTSC input, the intensity and the colour shift of the signal have to be translated back into three colour signals, with the intensity folded into each of the three - which means an even more complex mixing scheme has to be used than the simple circuitry aboard the CGA.
With RGBI input, the three colour signals are already decoded and ready to use once the intensity is added (quite a simple circuit). Not much to do here.
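A minimal sketch of that intensity-add step, assuming the commonly cited 5153 levels (roughly 2/3 of full scale from the colour bit plus 1/3 from the intensity bit, with dark yellow pulled down to brown); the exact weights are an assumption for illustration:

    # Each gun's analog level is its colour bit plus the shared intensity bit,
    # here with assumed weights of 2/3 and 1/3 of full scale.
    def rgbi_decode(r, g, b, i):
        """r, g, b, i are the four TTL inputs (0 or 1); returns analog levels 0..1."""
        red, green, blue = (2/3 * c + 1/3 * i for c in (r, g, b))
        # The 5153 additionally special-cases dark yellow (R=1, G=1, B=0, I=0),
        # halving the green level so it shows as brown.
        if (r, g, b, i) == (1, 1, 0, 0):
            green = 1/3
        return red, green, blue

    print(rgbi_decode(1, 1, 0, 0))  # brown
    print(rgbi_decode(1, 1, 0, 1))  # yellow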
"but it still seems intuitively likely that some of the circuitry could've been shared."
It's less about shared circuitry than about leaving out circuitry entirely. The RGBI signal is delivered directly from the digital video logic and handed to the NTSC encoder; the digital outputs are nothing more than a tap on these signals before they reach the NTSC encoder.
So if anything, the question is rather why IBM went to the length of adding an internal NTSC encoder - having it external would have been more appropriate (*1).
"Which in turn suggests there should be some offsetting positive advantage to leaving DAC in the monitor."
Since there is none, at least none worth the name, there is nothing really being left to the CRT electronics :))
*1 - In fact, Apple went exactly that way with the Apple /// XRGB video. Its output is four TTL-level signals, which can easily be used as RGBI. Instead of adding a colour encoder to the video card, they left it to external devices (like the monitor) to decode them into colour or gray levels (an approach Macs also continued to use for quite some time). Thanks to the low number of signals, any conversion into RGB is just a bunch of resistors; converting it into NTSC is far more work.
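To illustrate the "bunch of resistors" remark: a summing network where each TTL output drives a common node through its own resistor yields a monotone set of gray levels. The resistor values below are made up for the example, not taken from any Apple schematic:

    # Illustrative resistor-summing DAC: four TTL outputs (0 V or 5 V) drive a
    # common node through individual resistors, with a load resistor to ground.
    V_HIGH = 5.0
    R_BITS = [1000.0, 2000.0, 4000.0, 8000.0]   # one resistor per input bit (MSB first)
    R_LOAD = 1000.0                             # load to ground at the output node

    def summing_dac(bits):
        """bits: four 0/1 TTL levels, MSB first; returns the node voltage (Millman's theorem)."""
        currents = sum((V_HIGH * bit) / r for bit, r in zip(bits, R_BITS))
        conductance = sum(1 / r for r in R_BITS) + 1 / R_LOAD
        return currents / conductance

    # Sixteen monotonically increasing gray levels from the four TTL signals:
    for n in range(16):
        bits = [(n >> k) & 1 for k in (3, 2, 1, 0)]
        print(n, round(summing_dac(bits), 3))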
Answered by Raffzahn on January 29, 2021