Binary Language of Computers: Definition & Overview


Lesson Transcript
Instructor: Paul Zandbergen

Paul has a PhD from the University of British Columbia and has taught Geographic Information Systems, statistics and computer programming for 15 years.

All digital data used in computer systems is represented using 0s and 1s. Binary coding systems have been developed to represent text, numbers, and other types of data.

Definition

All data in a computer system consists of binary information. 'Binary' means there are only two possible values: 0 and 1. Computer software translates between this binary information and the information you actually work with on a computer, such as decimal numbers, text, photos, sound, and video. Binary information is sometimes also referred to as machine language, since it represents the most fundamental level of information stored in a computer system.
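To make this concrete, here is a minimal sketch in Python (added for illustration, not part of the original lesson) showing that the same 8-bit pattern can be translated either into a decimal number or into a text character, depending on how the software chooses to interpret it:

    # One 8-bit pattern, two interpretations: software decides how to translate it.
    bits = "01000001"

    as_number = int(bits, 2)                      # read the bits as a decimal number
    as_text = bytes([as_number]).decode("ascii")  # read the same bits as a character

    print(as_number)  # 65
    print(as_text)    # A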

At a physical level, the 0s and 1s are stored in the central processing unit of a computer system using transistors. Transistors are microscopic switches that control the flow of electricity. If a current passes through the transistor (the switch is closed), this represents a 1. If a current doesn't pass through (the switch is open), this represents a 0.

Binary information is also stored using magnetic properties: the two opposite polarities are used to represent zeros and ones. An optical disc, such as a CD-ROM or DVD, also stores binary information in the form of pits and lands (the flat areas between the pits).

No matter where your data is stored, all digital data at the most fundamental level consists of zeros and ones. In order to make sense of this binary information, a binary notation method is needed, also referred to as a binary code.

Binary Notation

Each binary digit is known for short as a bit. One bit can only be used to represent 2 different values: 0 and 1. To represent more than two values, we need to use multiple bits. Two bits combined can be used to represent 4 different values: 00, 01, 10, and 11. Three bits can be used to represent 8 different values: 000, 001, 010, 011, 100, 101, 110, and 111. In general, 'n' bits can be used to represent 2^n different values.
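A short Python sketch (added here as an illustration, not from the original lesson) that enumerates every pattern n bits can form, confirming the 2^n count:

    from itertools import product

    # List every pattern that n bits can form; there are 2**n of them.
    for n in (1, 2, 3):
        patterns = ["".join(bits) for bits in product("01", repeat=n)]
        print(f"{n} bit(s): {len(patterns)} values ->", patterns)

    # 1 bit(s): 2 values -> ['0', '1']
    # 2 bit(s): 4 values -> ['00', '01', '10', '11']
    # 3 bit(s): 8 values -> ['000', '001', '010', '011', '100', '101', '110', '111']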

Consider the example of representing the decimal numbers 0 through 10. That is 11 unique values, more than the 8 that 3 bits can represent, so a total of 4 bits is required. The table below shows the binary equivalent for the numbers 0 through 10. This is an example of standard binary notation, or binary code.

Decimal   Binary
0         0000
1         0001
2         0010
3         0011
4         0100
5         0101
6         0110
7         0111
8         1000
9         1001
10        1010
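One way to generate the table above programmatically; this Python snippet is an added illustration rather than part of the original lesson:

    # Print the 4-bit binary equivalent of each decimal number from 0 to 10.
    for n in range(11):
        print(f"{n:>2}  ->  {n:04b}")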

To represent larger numbers, you need more bits. Modern computers use a 32-bit or 64-bit architecture. This represents the maximum number of binary digits that can be used to represent a single value. A total of 32 bits can be used to represent 2^32, or 4,294,967,296, different values. Counting from 0, the largest whole number that fits in 32 bits is therefore 4,294,967,295; anything larger cannot be stored in a single 32-bit value without losing information.
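A quick Python check (added for illustration) of how the number of distinct values, and the largest unsigned value, grows with the number of bits:

    # Distinct values n bits can hold, and the largest unsigned value (counting from 0).
    for n in (8, 16, 32, 64):
        print(f"{n:>2} bits: {2**n:,} values, largest = {2**n - 1:,}")

    #  8 bits: 256 values, largest = 255
    # 16 bits: 65,536 values, largest = 65,535
    # 32 bits: 4,294,967,296 values, largest = 4,294,967,295
    # 64 bits: 18,446,744,073,709,551,616 values, largest = 18,446,744,073,709,551,615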

Want to see what this really means? Use a computer application that works with numbers, such as Excel, and type a really large number. After about 15 significant digits, the remaining digits are rounded to 0, because the application stores each number in a fixed number of bits. There are ways around this, but the number of bits provides a hard limit on how many unique digits can be stored in a single value.
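The same limit can be demonstrated in Python (an added sketch, assuming the application stores numbers as 64-bit floating-point values, as spreadsheets typically do): forcing a 20-digit integer into a 64-bit float changes its trailing digits.

    # Python integers are exact, but a 64-bit float keeps roughly 15-16
    # significant decimal digits; digits beyond that are lost.
    exact = 12345678901234567890       # a 20-digit integer, stored exactly
    as_float = float(exact)            # forced into a 64-bit floating-point value

    print(exact)          # 12345678901234567890
    print(int(as_float))  # 12345678901234567168  <- trailing digits have changed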

Binary Coding

The same logic used to represent numbers can be used to represent text. What we need is a coding scheme, similar to the binary notation example for the numbers 0 through 10. How many characters do we need to represent text? The English language includes 26 letters. Upper and lower case have to be treated separately, so that makes 52 unique characters. We also need characters to represent punctuation, numeric digits, and special characters.

A basic set may only need about 100 characters or so, very much like the keys on a keyboard, but what about different languages that use a different script? All the characters we want to represent are known as a character set. Several standard character sets have been developed over the years, including ASCII and Unicode.

The American Standard Code for Information Interchange (ASCII) was developed from telegraphic codes and was adapted to represent text in binary code in the 1960s and 1970s. The original version of ASCII used 7 bits to represent 128 different characters (2^7).

Character sets developed later typically incorporate the same 128 characters but add more characters by using 8-bit, 16-bit, or 32-bit encoding. The table below shows a small sample of the complete set of the 128 ASCII characters.

Character   Decimal   Binary (7-bit)
A           65        1000001
B           66        1000010
a           97        1100001
b           98        1100010
0           48        0110000
space       32        0100000
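For reference, here is a small Python snippet (added for illustration) that looks up a character's code and converts a code back to a character, using the built-in ord() and chr() functions:

    # Look up a character's ASCII code, and turn a code back into a character.
    print(ord("A"), format(ord("A"), "07b"))   # 65 1000001
    print(chr(0b1101010))                      # j  (decimal 106)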

While ASCII is still in use today, the current standard for encoding text is Unicode. The basic principle underlying Unicode is very much like ASCII, but Unicode contains over 110,000 characters, covering most of the world's printed languages. Its most common encoding, UTF-8, uses a single byte for the first 128 characters and is identical to ASCII for those characters, while other characters take two to four bytes; the 16- and 32-bit encodings (referred to as UTF-16 and UTF-32) represent the same character set using 16-bit and 32-bit units. Any of them lets you use just about any character in any printed language.
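A short Python sketch (added here; the example text is an arbitrary choice) that encodes the same string with the three Unicode encodings mentioned above and compares the number of bytes each one needs:

    # Encode the same text with UTF-8, UTF-16, and UTF-32 and compare sizes.
    # (Byte counts assume UTF-16/UTF-32 without a byte-order mark.)
    text = "Hola, \u4e16\u754c"   # "Hola, " plus two CJK characters

    for encoding in ("utf-8", "utf-16-be", "utf-32-be"):
        data = text.encode(encoding)
        print(f"{encoding:>9}: {len(data):>2} bytes")

    #     utf-8: 12 bytes  (1 byte per ASCII character, 3 bytes per CJK character)
    # utf-16-be: 16 bytes  (2 bytes per character here)
    # utf-32-be: 32 bytes  (always 4 bytes per character)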
