Units of Measurement: Megapixels, Kilobytes & Gigahertz

Lesson Transcript
Instructor: Paul Zandbergen

Paul has a PhD from the University of British Columbia and has taught Geographic Information Systems, statistics and computer programming for 15 years.

Computer systems use many different units of measurement. This lesson will review some of the most widely used units, including megapixels for digital photographs, kilobytes for storage and gigahertz for frequencies.

Units of Measurement

When you walk into a computer store to shop for a new computer, you'd better come prepared. The specifications of equipment often include very technical details. For example, that new computer you are looking at has a 3.4 gigahertz processor and 4 gigabytes of memory. It's time to look at what these units mean.

Bits and Bytes

You have probably heard the terms 'bits' and 'bytes.' Both are used to express the amount of information stored by a computer system. These terms are often confused, so let's look at each of them in detail.

The term bit is a contraction of the words 'binary' and 'digit.' Binary means there are only two possible values. One bit, or binary digit, is used to represent either a 0 or a 1. All data in a computer system consists of binary information. Computer software translates between binary information and the information you actually want to work with on a computer, such as decimal numbers, text, photos, sound and video.

To actually store a bit as information in a computer system, you need to have a technique that can represent two values. One way to do this is by using transistors. A transistor is like a microscopic switch that controls the flow of electricity based on whether the switch is open or closed. These states represent a 0 and 1, respectively. So, to store one bit, you need one switch, or one transistor. There are other ways to store binary information, but thinking of bits as switches that are open or closed is a useful way to visualize how information can be stored.

To represent more than two values in the binary system, you need to use multiple bits in sequence. Two bits combined can be used to represent four different values: 0 0, 0 1, 1 0 and 1 1. You can visualize this as a sequence of two switches, and there are four possible combinations: open-open, open-closed, closed-open and closed-closed. A sequence of three bits can be used to represent eight different values: 0 0 0, 0 0 1, 0 1 0, 0 1 1, 1 0 0, 1 0 1, 1 1 0 and 1 1 1.

To represent more unique values, you need more bits. In general, n bits can be used to represent 2^n different values. For example, 8 bits can be used to represent 2^8, or 256, unique values. Historically, computer systems used 8 bits to encode characters. A total of 256 unique values is enough to represent the alphabet in lowercase and uppercase, numbers and special characters.
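If you are comfortable with a little code, this relationship is easy to check for yourself. The short Python sketch below counts and lists the values a given number of bits can represent (the unique_values helper is just an illustrative name, not something from the lesson):

```
# How many unique values can n bits represent?
# The rule is 2**n values for n bits.

def unique_values(n_bits: int) -> int:
    """Return the number of unique values n bits can represent."""
    return 2 ** n_bits

# Enumerate every 3-bit combination, matching the example above.
for value in range(unique_values(3)):
    print(format(value, "03b"))  # 000, 001, 010, ..., 111

print(unique_values(8))  # 256 -- enough for an 8-bit character set
```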

Each unique character consists of a unique combination of 8 bits. For example, in the widely used UTF-8 character encoding system, the lowercase letter 'a' is represented by 01100001 in binary code. So, it takes 8 switches to represent a single character.
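You can verify this bit pattern yourself using Python's built-in ord() and format() functions; the snippet below is only a quick illustration of the same encoding:

```
# The bit pattern for the lowercase letter 'a'.
# ord() gives the character's code point (97 for 'a'), and format()
# renders it as an 8-bit binary string.

letter = "a"
code_point = ord(letter)          # 97
bits = format(code_point, "08b")  # '01100001'
print(code_point, bits)
```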

Now let's look at the term 'byte.' A byte consists of 8 binary digits, or 8 bits. This was done in part because computer systems historically used 8 bits to encode characters. The size of 8 bits became the unit for storing data, and it was named the byte - one byte stored one character. The term was a deliberate misspelling of the word 'bite' to avoid accidentally shortening it to 'bit.'

While bytes have their origin in 8-bit computer architecture, bytes are now mostly used to describe the size of computer components, such as hard disk drives and memory. You can visualize a single byte as a sequence of 8 switches.

The lowercase letter 'b' is widely used as the symbol for bits, but it is better to simply spell out 'bit.' For example, one thousand bits equal one kilobit, or 1 kbit, and one million bits equal one megabit, or 1 Mbit. You may see kb and Mb used instead, but this is not recommended because it can be confused with bytes.

The standard and widely accepted unit symbol for byte is the uppercase letter 'B.' Below are some of the widely used unit multiples.

one thousand bytes = 1 kilobyte, or 1 kB

one million bytes = 1 megabyte, or 1 MB

one billion bytes = 1 gigabyte, or 1 GB

one trillion bytes = 1 terabyte, or 1 TB

Let's return to the new computer you had your eye on with a memory of 4 gigabytes. That means the memory is four billion bytes, and this represents the amount of information the computer system can hold in memory during processing.
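If you want to play with these conversions, here is a small Python sketch. The to_unit helper and the UNITS table are only illustrative names, and note that it uses the decimal multiples from the list above (powers of 1,000), whereas memory sizes are sometimes quoted in binary multiples (powers of 1,024) instead:

```
# Decimal unit multiples for bytes, as listed above.

UNITS = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

def to_unit(num_bytes: int, unit: str) -> float:
    """Convert a byte count to the given decimal unit."""
    return num_bytes / UNITS[unit]

memory_bytes = 4_000_000_000        # the 4 GB of memory from the example
print(to_unit(memory_bytes, "GB"))  # 4.0
print(to_unit(memory_bytes, "MB"))  # 4000.0
```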

Bits and bytes are also used in transmission systems to express the amount of data transmitted per unit of time. The use of bits is much more common, so you will typically see 'bits per second,' abbreviated as 'b/s' or 'bps.' For example, a relatively fast Internet connection supports data transfer speeds on the order of 1 Mbps, or one megabit per second.
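Because transfer speeds are quoted in bits while file sizes are quoted in bytes, it is easy to mix the two up. The sketch below (the download_seconds function is just an illustration, not a standard routine) estimates how long a download would take on such a connection:

```
# A 1 Mbps link moves one million *bits* per second, which is only
# 125,000 bytes per second (8 bits per byte).

def download_seconds(file_size_bytes: int, speed_mbps: float) -> float:
    """Estimate download time for a file over a link of speed_mbps megabits/s."""
    bits_to_transfer = file_size_bytes * 8
    bits_per_second = speed_mbps * 1_000_000
    return bits_to_transfer / bits_per_second

# A 5 MB file over the 1 Mbps connection from the example:
print(download_seconds(5_000_000, 1.0))  # 40.0 seconds
```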

Pixels and Megapixels

Now, let's say you are also looking at digital cameras in the computer store. You look at one of the shiny new cameras, and it says '8 megapixels.' That is more than your old camera had - it only had 3.1 megapixels. But, what does this really mean?

Digital photographs are a type of raster graphics. Raster graphics consist of pixels. In technical terms, raster graphics use a rectangular grid of cells of equal size, and each cell has its own color. These cells are also called 'pixels.' The combination of pixels of different colors creates the photograph. Raster graphics are also called 'bitmaps.'

A defining characteristic of a raster graphic is that when you zoom in very closely, you start to see the actual pixels. An important property of a digital photograph is its resolution. Resolution indicates the amount of detail, so a higher resolution means more detail.

You can achieve a higher resolution by using more pixels, which is why digital cameras with more pixels result in sharper photographs. Because the number of pixels for a digital photograph quickly gets very large, the most commonly used unit is a megapixel. A megapixel consists of one million pixels in a raster graphic. The commonly used symbol for megapixels is MP.
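To see how a megapixel count relates to an image's dimensions in pixels, here is a short Python sketch. The example dimensions are typical values for roughly 8 MP and 3.1 MP cameras and are assumptions for illustration only:

```
# Megapixels are simply the total pixel count divided by one million.

def megapixels(width: int, height: int) -> float:
    """Number of megapixels in a raster image of the given dimensions."""
    return (width * height) / 1_000_000

print(megapixels(3264, 2448))  # ~7.99 MP -- a typical 8 MP camera
print(megapixels(2048, 1536))  # ~3.15 MP -- roughly the old 3.1 MP camera
```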
