Table of Contents
- Binary Code
- History of Binary Codes
- Binary Language
- Uses of Binary Language
- Binary Representation
- Lesson Summary
Binary code is a system by which numbers, letters, and other information are represented using only two symbols, or binary digits. To a computer, binary is a code of 1s and 0s arranged in ways that the machine can read, understand, and act upon.
As part of its hardware, a computer has a central processing unit, or CPU, consisting of microelectronics, or transistors, that executes instructions. A modern computer's CPU has billions of transistors, or switches, that control the flow of electricity; they can turn an electrical current on or off. When electricity is applied, the switch is on, and the current flows from one electrode through the transistor to the other electrode. If the switch is off, no electricity flows.
The binary number system comes into play because on and off can be represented numerically by 1 and 0. Thus, the off and on positions of an electrical switch inside a computer are physical representations of 1s and 0s. If a switch is on, it is represented by 1, and electricity flows through the transistor. If a switch is off, it is represented by 0, and no electrical current passes through.
When a letter is typed on a keyboard, a signal is sent and converted to binary language, which the computer can store and process. This binary code can be thought of as machine language that makes sense to the computer. A single binary digit is called a bit, and a bit can represent either 0 or 1. To represent other values, bits must be combined. Eight bits make up a byte, the smallest addressable unit in most computer systems. Bytes are generally written as two groups of four bits. The smallest byte is a series of zeros, 0000 0000, which is the decimal equivalent of zero, and the largest byte is a series of ones, 1111 1111, which is the decimal equivalent of {eq}2^8 - 1 {/eq}, or 255. It is common to see data quantities referred to in kilobytes, megabytes, gigabytes, and terabytes.
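The bit and byte relationships above can be checked in a few lines of Python; this is a minimal sketch using Python's built-in `int(s, 2)`, which parses a string of binary digits in base 2:

```python
# A byte is 8 bits; n bits can represent 2**n distinct values.
bits_per_byte = 8
values_per_byte = 2 ** bits_per_byte  # 256 distinct values: 0 through 255

# The smallest and largest bytes, parsed from binary strings (base 2).
smallest = int("00000000", 2)  # 0
largest = int("11111111", 2)   # 255, i.e. 2**8 - 1

print(values_per_byte, smallest, largest)  # 256 0 255
```

Note that a byte can take 256 different values, but because counting starts at zero, the largest value itself is 255.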
Examples of binary codes include alphanumeric codes such as the American Standard Code for Information Interchange (ASCII) and Binary Coded Decimal (BCD), in which each decimal digit is represented by a 4-bit binary number.
When was Binary Invented?
As currently used in computers and devices around the world, binary code was invented when the German polymath Gottfried Wilhelm Leibniz (1646-1716) introduced a system that uses only the binary digits 0 and 1 to perform arithmetic operations.
Numerical binary schemes existed prior to Leibniz, but not as comprehensive number systems. Examples include the I Ching, the ancient Chinese divination text, which used the duality of yin and yang for religious and philosophical purposes. The writings of the Indian poet and mathematician Pingala illustrate the use of a binary system in antiquity. Ancient Egyptians employed a binary method for multiplication and for computation of volume.
Binary codes can also be non-numerical. Francis Bacon (1561-1626) devised a system nearly a century before Leibniz to represent the letters of the alphabet using five-letter combinations of only two letters. Today, this would be called a five-bit binary code, and it can be thought of more as a secret or encrypted code than as a system of mathematics.
Another example of non-numerical binary code is Braille, a writing system of patterns of raised or un-raised dots embossed on paper that the blind can read using their fingers.
Communicating in binary language requires manipulation of the binary code. Reading binary numbers differs in meaning depending on whether the reader is a computer or a person. Computers read, understand, and act upon binary information directly. When people refer to reading binary, they generally mean converting the binary, or base 2, numbers into more familiar decimal numbers. When reading binary, it is helpful to understand that the binary system, like the decimal system, is a positional number system, where the value of each digit depends on the position of that digit. Moving from right to left along a decimal number, the value of each place increases by a power of 10, the base; moving from left to right, each place decreases by a power of 10. Writing a decimal number in expanded form illustrates this principle: $$2,307 = 2 \times 10^3 + 3 \times 10^2 + 0 \times 10^1 + 7 \times 10^0 $$ Binary numbers work the same way, with the value of each place changing by a power of 2, the base. Accordingly, 11001 written in the binary system equals $$1 \times 2^4 + 1 \times 2^3 + 0 \times 2^2 + 0 \times 2^1 + 1 \times 2^0, $$ or 25 in the decimal system (16 + 8 + 1).
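The expanded notation above can be carried out mechanically. The following sketch (the function name `binary_to_decimal` is chosen here for illustration) multiplies each binary digit by its power of 2 and sums the results:

```python
# Expand a binary string digit by digit, mirroring the expanded notation:
# each place, counted from the right starting at 0, contributes digit * 2**place.
def binary_to_decimal(bits: str) -> int:
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("11001"))  # 25, matching 16 + 8 + 1
```

Python's built-in `int("11001", 2)` performs the same conversion; the explicit loop is shown only to make each positional term visible.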
The following chart illustrates how each additional power of 2 creates the subsequent positional column in binary notation.
Decimal | Binary |
---|---|
1 | 1 |
2 | 10 |
4 | 100 |
8 | 1000 |
16 | 10000 |
All binary addition is based on four equations: 0 + 0 = 0, 1 + 0 = 1, 0 + 1 = 1, and 1 + 1 = 10. To add 101 + 110 in binary, the operation begins with the ones column and moves left, just as in decimal addition, resulting in 1011. This can be double-checked using expanded notation.
$$101 = 1 \times 2^2 + 0 \times 2^1 + 1 \times 2^0 = 4 + 0 + 1 = 5 $$$$110 = 1 \times 2^2 + 1 \times 2^1 + 0 \times 2^0 = 4 + 2 + 0 = 6 $$
$$1011 = 1 \times 2^3 + 0 \times 2^2 + 1 \times 2^1 + 1 \times 2^0 = 8 + 0 + 2 + 1 = 11 $$
Since 5 + 6 = 11, the binary addition is correct.
To carry a digit in binary addition, it is helpful to understand that in binary, 1 + 1 = 10 and that 10 + 1 = 11, so 1 + 1 + 1 = 11. The binary equation 1101 + 1110 = 11011 illustrates this point.
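The column-by-column procedure with carrying can be sketched directly in Python (the name `add_binary` is illustrative, not a standard library function):

```python
# Add two binary strings column by column, carrying exactly as described above.
def add_binary(a: str, b: str) -> str:
    result, carry = [], 0
    # Pad to equal length and work from the ones column leftward.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry  # 0, 1, 2, or 3
        result.append(str(total % 2))    # digit written in this column
        carry = total // 2               # digit carried to the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("101", "110"))    # 1011  (5 + 6 = 11)
print(add_binary("1101", "1110"))  # 11011 (13 + 14 = 27)
```

Both examples from the text check out: the first has a single carry in the fours column, and the second carries twice in a row.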
Any kind of data can be represented in binary. In the 1960s, as digital communications began to rival telecommunications, a need arose for standardization so that incompatible character codes would not impede commerce. In 1968, President Lyndon B. Johnson mandated the American Standard Code for Information Interchange (ASCII) as the common code for converting characters into binary digits in federal computer systems.
Decimal | Binary | ASCII |
---|---|---|
65 | 0100 0001 | A |
66 | 0100 0010 | B |
67 | 0100 0011 | C |
68 | 0100 0100 | D |
69 | 0100 0101 | E |
70 | 0100 0110 | F |
71 | 0100 0111 | G |
72 | 0100 1000 | H |
73 | 0100 1001 | I |
74 | 0100 1010 | J |
75 | 0100 1011 | K |
76 | 0100 1100 | L |
77 | 0100 1101 | M |
78 | 0100 1110 | N |
79 | 0100 1111 | O |
80 | 0101 0000 | P |
81 | 0101 0001 | Q |
82 | 0101 0010 | R |
83 | 0101 0011 | S |
84 | 0101 0100 | T |
85 | 0101 0101 | U |
86 | 0101 0110 | V |
87 | 0101 0111 | W |
88 | 0101 1000 | X |
89 | 0101 1001 | Y |
90 | 0101 1010 | Z |
The word 'BINARY' in binary language is written as 01000010 01001001 01001110 01000001 01010010 01011001. Using the ASCII chart, each letter is converted to a binary number.
B - 0100 0010
I - 0100 1001
N - 0100 1110
A - 0100 0001
R - 0101 0010
Y - 0101 1001
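The letter-by-letter conversion above can be reproduced with a short Python sketch, using the built-in `ord` (which returns a character's ASCII/Unicode code point) and `format` with the `08b` specifier (8-digit binary with leading zeros); the function name `to_ascii_binary` is chosen here for illustration:

```python
# Convert each letter of a string to its 8-bit ASCII code, as in the chart above.
def to_ascii_binary(text: str) -> str:
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_ascii_binary("BINARY"))
# 01000010 01001001 01001110 01000001 01010010 01011001
```

The output matches the six bytes listed above, one 8-bit group per letter.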
The limitations of ASCII were soon exposed, especially its lack of capacity to represent characters from other languages. By the late 1980s, a more universal coding system, Unicode, was in development. Rather than ASCII's 8-bit model, Unicode initially used 16 bits, which enabled it to accommodate multiple alphabets and character sets; it has since grown well beyond that. Standardizing the representation of emojis is an example of how Unicode continues to evolve and address the growing and changing communication needs of the computing world.
The process of converting a computer programmer's instructions into machine language that a computer can understand and act upon is called compiling. Computers understand machine language, but writing strings upon strings of 1s and 0s is tedious and time-consuming. Programmers instead write instructions in high-level languages such as Python, FORTRAN, and Java. These instructions are user-friendly but cannot be executed directly by computers; they must be sent to a compiler, which translates them into binary machine code. Examples of compilers include GCC and AOCC. Once compiled, the instructions are in effect translated from human-readable language into binary code that can be understood and acted upon by a computer.
This table shows the binary equivalent of the decimal numbers 0 through 10.
Decimal | Binary |
---|---|
0 | 0 |
1 | 1 |
2 | 10 |
3 | 11 |
4 | 100 |
5 | 101 |
6 | 110 |
7 | 111 |
8 | 1000 |
9 | 1001 |
10 | 1010 |
Without a table, a decimal number can be converted to binary by repeated division by 2, keeping track of each quotient and remainder until the quotient equals 0. The remainder of the first division by 2 is referred to as the least significant bit (LSB), and the remainder of the final division by 2 is referred to as the most significant bit (MSB). Writing the remainders from the MSB to the LSB gives the binary equivalent of the number.
For example, convert the decimal 14 into binary digits.
14 ÷ 2 = 7, remainder 0 (LSB)
7 ÷ 2 = 3, remainder 1
3 ÷ 2 = 1, remainder 1
1 ÷ 2 = 0, remainder 1 (MSB)
Writing the remainders from MSB to LSB results in 1110. Thus, the decimal number 14 is represented by 1110 in binary. This can be double-checked using expanded notation.
$$14 = 1 \times 2^3 + 1 \times 2^2 + 1 \times 2^1 + 0 \times 2^0 $$
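The repeated-division procedure translates directly into a short loop; this is a sketch, with the function name `decimal_to_binary` chosen for illustration:

```python
# Repeated division by 2, collecting remainders from the LSB up to the MSB.
def decimal_to_binary(n: int) -> str:
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # remainder of this division
        n //= 2                        # quotient carried into the next step
    # Remainders were collected LSB-first, so reverse to read MSB to LSB.
    return "".join(reversed(remainders))

print(decimal_to_binary(14))  # 1110
```

Python's built-in `bin(14)` returns the same digits with a `0b` prefix; the loop is shown to mirror the division steps above.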
Similarly, binary representation of a decimal can be ascertained without a table by reconfiguring the number as the sum of powers of two. For instance, to represent 136 in binary, 136 could be rewritten as the sum of powers of two as follows: $$136 = 128 + 8 = 2^7 + 2^3 $$
Expanding this expression to include a coefficient of zero or one for each of the powers of two from 7 through 0 will result in the following expression: $$136 = 1 \times 2^7 + 0 \times 2^6 + 0 \times 2^5 + 0 \times 2^4 + 1 \times 2^3 + 0 \times 2^2 + 0 \times 2^1 + 0 \times 2^0 $$ The coefficients, 1000 1000, from this expression comprise the binary representation of the decimal 136.
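The sum-of-powers decomposition can also be automated. This sketch (the name `powers_of_two` is illustrative) greedily subtracts the largest power of two that fits, exactly as done by hand for 136:

```python
# Decompose a positive integer into distinct powers of two, largest first.
def powers_of_two(n: int) -> list[int]:
    powers = []
    power = 1
    while power <= n:
        power *= 2
    power //= 2          # largest power of two not exceeding n
    while n > 0:
        if power <= n:
            powers.append(power)
            n -= power   # subtract this power and continue with the remainder
        power //= 2
    return powers

print(powers_of_two(136))  # [128, 8], i.e. 2**7 + 2**3
```

The powers that appear in the list correspond to the positions holding a 1 in the binary representation; every other position holds a 0.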
The 4-bit unit is a building block of binary notation, and it can represent 16 (or {eq}2^4 {/eq}) different values, ranging from 0000 to 1111, or decimal 0 through 15. The number of values that can be represented by any given number ('n') of binary digits is {eq}2^n. {/eq}
Binary code is a system that represents numbers, text, and other information in binary digits. Computers use binary code to process and store information, since the on and off states of a computer's transistors lend themselves to a two-symbol method of notation. A single binary digit is referred to as a bit and is represented by a zero or a one; eight bits make up a byte. The number of values that can be represented by any given number ('n') of binary digits is {eq}2^n. {/eq} Although some examples pre-date him, Leibniz is credited with inventing the modern binary system in the 17th century. Binary numbers can be converted to decimals by reconfiguring them as sums of powers of 2. Encodings like ASCII and Unicode are used to convert between binary and text. Unlike ASCII, which is primarily used for English, Unicode is designed to work with most of the world's languages. Computer programmers write instructions in high-level languages, which compilers convert into machine language, or binary code, that computers can understand and act upon.
In binary, 10101 converts to 21 in decimal. This can be written in expanded notation as the following equation: 1 x 2^4 + 0 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0 = 16 + 0 + 4 + 0 + 1 = 21.
From the ASCII conversion chart, the letters "HELLO" can be represented by binary numbers as follows:
H - 01001000
E - 01000101
L - 01001100
L - 01001100
O - 01001111
Therefore, HELLO in binary is written as 01001000 01000101 01001100 01001100 01001111.
All digital data stored and used by computers consists of strings of such binary digits, or bits, which can be thought of as machine language. When a letter is typed on a keyboard, a signal is sent to the computer and converted to strings of zeros and ones (binary digits) that the computer can store and process. These zeros and ones can be referred to as machine language, which makes sense to the computer.