Business 109: Intro to Computing
10 chapters | 84 lessons | 9 flashcard sets
Paul has a PhD from the University of British Columbia and has taught Geographic Information Systems, statistics and computer programming for 15 years.
When you walk into a computer store to shop for a new computer, you better come prepared. The specifications of equipment often include very technical details. For example, that new computer you are looking at has a 3.4 gigahertz processor and 4 gigabytes of memory. It's time to look at what these units mean.
You have probably heard the terms 'bits' and 'bytes.' Both are used to express the amount of information stored by a computer system. These terms are often confused, so let's look at each of them in detail.
The term bit is a contraction of the words 'binary' and 'digit.' Binary means there are only two possible values. One bit, or binary digit, is used to represent either a 0 or a 1. All data in a computer system consists of binary information. Computer software translates between binary information and the information you actually want to work with on a computer, such as decimal numbers, text, photos, sound and video.
To actually store a bit as information in a computer system, you need to have a technique that can represent two values. One way to do this is by using transistors. A transistor is like a microscopic switch that controls the flow of electricity based on whether the switch is open or closed. These states represent a 0 and 1, respectively. So, to store one bit, you need one switch, or one transistor. There are other ways to store binary information, but thinking of bits as switches that are open or closed is a useful way to visualize how information can be stored.
To represent more than two values in the binary system, you need to use multiple bits in sequence. Two bits combined can represent four different values: 00, 01, 10 and 11. You can visualize this as a sequence of two switches, with four possible combinations: open-open, open-closed, closed-open and closed-closed. A sequence of three bits can represent eight different values: 000, 001, 010, 011, 100, 101, 110 and 111.
To represent more unique values, you need more bits. In general, a sequence of n bits can represent 2^n different values. For example, 8 bits can be used to represent 2^8, or 256, unique values. Historically, computer systems used 8 bits to encode characters. A total of 256 unique values is enough to represent the alphabet in lowercase and uppercase, the digits and special characters.
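The counting argument above is easy to verify in a few lines of Python, enumerating every pattern of n bits and checking that there are 2^n of them:

```python
from itertools import product

def bit_patterns(n):
    """All possible combinations of n bits, as tuples of 0s and 1s."""
    return list(product([0, 1], repeat=n))

print(bit_patterns(2))       # [(0, 0), (0, 1), (1, 0), (1, 1)] - four values
print(len(bit_patterns(3)))  # 8
print(2 ** 8)                # 256 unique values for 8 bits
```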
Each unique character consists of a unique combination of 8 bits. For example, in the ASCII character encoding system (which UTF-8 extends), the lowercase letter 'a' is represented by 01100001 in binary code. So, it takes 8 switches to represent a single character.
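You can see this encoding for yourself in Python, which can convert a character to its numeric code and format that code as an 8-bit binary string:

```python
# Character -> 8-bit binary pattern (same for 'a' in ASCII and UTF-8)
bits = format(ord('a'), '08b')
print(bits)  # 01100001

# And back: 8-bit binary pattern -> character
print(chr(int('01100001', 2)))  # a
```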
Now let's look at the term byte. A byte consists of 8 binary digits, or 8 bits, in part because computer systems historically used 8 bits to encode characters. This 8-bit size became the standard unit for storing data and was named the byte: one byte stored one character. The term is a deliberate misspelling of 'bite' to avoid its accidental shortening to 'bit.'
While bytes have their origin in 8-bit computer architecture, bytes are now mostly used to describe the size of computer components, such as hard disk drives and memory. You can visualize a single byte as a sequence of 8 switches.
The lowercase letter 'b' is widely used as the symbol for bit, but it is clearer to simply write 'bit.' For example, one thousand bits equal one kilobit, or 1 kbit, and one million bits equal one megabit, or 1 Mbit. You may see kb and Mb used instead, but this is not recommended, since these symbols are easily confused with bytes.
The standard and widely accepted unit symbol for byte is the uppercase letter 'B.' Below are some of the widely used unit multiples.
one thousand bytes = 1 kilobyte, or 1 kB
one million bytes = 1 megabyte, or 1 MB
one billion bytes = 1 gigabyte, or 1 GB
one trillion bytes = 1 terabyte, or 1 TB
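These decimal multiples are simple powers of ten, so converting between them is a single division. A minimal sketch (the function name is just for illustration):

```python
# Decimal (SI) multiples, as used on storage and memory labels
UNITS = {'kB': 10**3, 'MB': 10**6, 'GB': 10**9, 'TB': 10**12}

def to_unit(num_bytes, unit):
    """Convert a raw byte count to the given unit."""
    return num_bytes / UNITS[unit]

print(to_unit(4_000_000_000, 'GB'))  # 4.0
print(to_unit(2_500_000, 'MB'))      # 2.5
```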
Let's return to the new computer you had your eye on with a memory of 4 gigabytes. That means the memory is four billion bytes, and this represents the amount of information the computer system can hold in memory during processing.
Bits and bytes are also used in transmission systems to express the amount of data transmitted per unit of time. The use of bits is much more common, so you will typically see 'bits per second,' or 'b/s,' or bps. For example, a relatively fast Internet connection supports data transfer speed in the order of 1 Mbps, or megabits per second.
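Because link speeds are quoted in bits per second while file sizes are quoted in bytes, estimating a transfer time requires a factor of 8. A rough sketch, ignoring real-world overhead such as protocol headers:

```python
def transfer_seconds(file_bytes, speed_bits_per_sec):
    """Idealized time to transfer a file: bytes -> bits, then divide by speed."""
    return (file_bytes * 8) / speed_bits_per_sec

# A 5-megabyte file over a 1 Mbps connection
print(transfer_seconds(5_000_000, 1_000_000))  # 40.0 seconds
```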
Now, let's say you are also looking at digital cameras in the computer store. You look at one of the shiny new cameras, and it says '8 megapixels.' That is more than your old camera had - it only had 3.1 megapixels. But, what does this really mean?
Digital photographs are a type of raster graphics. Raster graphics consist of pixels. In technical terms, raster graphics use a rectangular grid of cells of equal size, and each cell has its own color. These cells are also called 'pixels.' The combination of pixels of different colors creates the photograph. Raster graphics are also called 'bitmaps.'
A defining characteristic of a raster graphic is that when you zoom in very closely, you start to see the actual pixels. An important property of a digital photograph is its resolution. Resolution indicates the amount of detail, so a higher resolution means more detail.
You can achieve a higher resolution by using more pixels, which is why digital cameras with more pixels result in sharper photographs. Because the number of pixels for a digital photograph quickly gets very large, the most commonly used unit is a megapixel. A megapixel consists of one million pixels in a raster graphic. The commonly used symbol for megapixels is MP.
To count the number of pixels in a photograph, you multiply the number of horizontal pixels by the number of vertical pixels. This is just like calculating an area by using length and width. For example, a digital photograph of 2,048 (horizontal) by 1,536 (vertical) pixels uses 3,145,728 pixels. This number gets rounded to 3.1 megapixels. You don't see 3,145,728 pixels written on the box of your camera.
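The worked example above is just a multiplication followed by rounding, which a couple of lines of Python can confirm:

```python
def megapixels(width, height):
    """Total pixel count expressed in millions of pixels."""
    return (width * height) / 1_000_000

print(2048 * 1536)                        # 3145728 pixels
print(round(megapixels(2048, 1536), 1))   # 3.1 megapixels
```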
More pixels provide more detail to represent the object being photographed. So, your new 8 megapixel camera results in more detailed photographs compared to a 3.1 megapixel camera. However, more pixels also mean that more data storage is needed, and high resolution digital photography can take up a lot of storage space.
The frequency of something is defined as the number of cycles per second. The unit of frequency is hertz, and the symbol is Hz.
Frequencies are used in a number of different ways to describe aspects of computer systems. Probably the most important use is when describing the speed of a processor.
The central processing unit, or CPU, of a computer is labeled in terms of its clock rate. The clock rate of a CPU is one measure of its performance. Simply put, a computer carries out one instruction per cycle. Fortunately, the clock rate of a typical CPU is extremely fast.
A typical CPU of a present-day computer has a clock rate of at least 1 gigahertz, or GHz. A gigahertz represents one billion cycles per second. That new computer you had your eye on with a 3.4 gigahertz processor can theoretically process 3.4 billion instructions per second.
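Under the simplifying one-instruction-per-cycle assumption stated above, converting a clock rate to instructions per second is a single multiplication:

```python
def instructions_per_second(clock_ghz):
    """Theoretical instructions per second, assuming one instruction per cycle."""
    return clock_ghz * 1_000_000_000

# A 3.4 GHz processor: roughly 3.4 billion instructions per second
print(instructions_per_second(3.4))
```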
When you buy a new computer, the clock rate, or speed of the processor in GHz, becomes one of the deciding factors. More is better (and costs more), although many other factors determine the actual performance of a computer.
Another very common use of frequencies is when describing wireless signals. Electromagnetic radiation is often described by its frequency, or the number of oscillations of the electric and magnetic fields per second. Radio signals are usually measured in kilohertz (kHz), megahertz (MHz) or gigahertz (GHz).
Light is electromagnetic radiation with frequencies in the range of tens (infrared) to thousands (ultraviolet) of terahertz. However, the characteristics of light are more commonly specified in terms of their wavelengths, so you don't see those frequencies used very often.
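Wavelength and frequency are linked by the speed of light (f = c / λ), so converting between the two descriptions is straightforward. A small sketch, using green light as an example:

```python
C = 299_792_458  # speed of light in meters per second

def frequency_thz(wavelength_nm):
    """Frequency in terahertz for a wavelength given in nanometers (f = c / wavelength)."""
    return C / (wavelength_nm * 1e-9) / 1e12

# Green light at 540 nm corresponds to roughly 555 THz
print(round(frequency_thz(540)))  # 555
```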
WiFi is an example of a radio signal. WiFi operates at frequencies of 2.4 GHz and 5 GHz. Higher frequencies allow a signal to carry more data. The frequencies used by WiFi are generally higher than those used by cellular networks, which is one reason you can often download files faster on your smartphone over a WiFi connection to the Internet than over a regular cellular network connection.
Computer systems use many different units. Bits and bytes are used to express the amount of information stored by a computer system. A bit is a contraction of the words 'binary' and 'digit' and is used to represent either a 0 or a 1. All data in a computer system consists of binary information.
A byte consists of 8 bits. Historically, computer systems used 8 bits to encode characters. Bytes are used to describe the size of computer components, such as hard disk drives and memory. One gigabyte, or GB, consists of one billion bytes.
A megapixel indicates one million pixels in a raster graphic. A digital photograph is one example of a raster graphic. Pixels of different colors form digital photographs. More pixels in a digital photograph result in more detail.
The frequency of something is defined as the number of cycles per second. The unit of frequency is hertz, and the symbol is Hz. This is used to express the clock rate of a computer's processor and the oscillation frequency of wireless signals.