2013: Advent Computing: Bits

December 2, 2013

If you watched yesterday’s video, you’ll have spotted that the symbols being written to that “infinitely long” tape were “1” and “0”. There’s a reason binary – counting with only ones and zeros – is used so extensively in computing, and it’s essentially that it’s much easier for electronics to measure “on” or “off” than it is to measure a scale.
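If you’ve never seen binary counting before, here’s a quick illustrative sketch in Python that prints the first few numbers in both decimal and binary, just to show how the pattern of ones and zeros grows:

```python
# Print the numbers 0 to 7 in decimal and in binary.
for n in range(8):
    print(n, format(n, "b"))
# 0 0
# 1 1
# 2 10
# 3 11
# 4 100  ...and so on.
```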

A bit on its own is rarely useful, however, so we combine them. The standard unit in computing is the byte, which is eight bits. One byte, on its own, is enough to store any number between 0 and 255. The number 177, for example, is 10110001. A byte hasn’t always been eight bits, however: some very old computers considered a byte to be four bits, or in one case seven. We’ve pretty much standardized on eight now, though.
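You can check that arithmetic for yourself. Here’s a small Python sketch (just an illustration, not anything from the original post) converting between decimal and binary:

```python
# 177 written out as an eight-bit binary number.
print(format(177, "08b"))   # -> 10110001

# And reading the same bits back the other way round.
print(int("10110001", 2))   # -> 177

# Eight bits give 2**8 = 256 possible patterns, i.e. the numbers 0 to 255.
print(2 ** 8)               # -> 256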

One of the reasons for that eight-bit standard is probably to do with a byte previously being called a “character”. One byte, traditionally, was enough to represent a single character, such as “a” or “8” or “%”. There were lots of different ways of deciding which number meant which character, but the most common one was “ASCII”, which defines a character for every number between 0 and 127: all the letters (upper and lower case), the digits, a whole bunch of common symbols, and a few special things like a marker for the end of a line and one indicating that the previous character should be deleted.
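You can poke at the ASCII table from Python, too. This is just a sketch using the built-in ord() and chr() functions, which map characters to their numbers and back:

```python
# ord() gives the number ASCII assigns to a character,
# and chr() goes back the other way.
print(ord("a"))   # -> 97
print(ord("8"))   # -> 56
print(ord("%"))   # -> 37
print(chr(65))    # -> 'A'

# The end-of-line marker mentioned above is the "newline" character, number 10.
print(ord("\n"))  # -> 10
```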

Of course, that leaves us with 128 to 255 spare, and what we do with those numbers is on the list for tomorrow.
