8, 16, 32, 64 & 128-bit Integers

Lesson Transcript
Instructor: Thomas Wall

Thomas is a professional software developer, online instructor, and consultant, and has a Master's degree.

In this lesson, you'll learn how computer programming languages represent integer values and the storage requirements of each data type, as well as appropriate usage and efficiency considerations.

Integer Data Storage Types

Computers use binary digits (bits) to represent integer numeric values. The number of bits determines the allowed range of values that can be stored. Here's the long and short of integer storage types:


Size      Minimum Value                    Maximum Value
8-bit     -(2^7) = -128                    2^7 - 1 = 127
16-bit    -(2^15) = -32,768                2^15 - 1 = 32,767
32-bit    -(2^31) = -2,147,483,648         2^31 - 1 = 2,147,483,647
64-bit    -(2^63) ~= -9.2 x 10^18          2^63 - 1 ~= 9.2 x 10^18
128-bit   -(2^127) ~= -1.7 x 10^38         2^127 - 1 ~= 1.7 x 10^38


As the table shows, if a storage type is n-bits wide, the minimum value that can be correctly stored is -(2^(n-1)) and the maximum value is 2^(n-1) - 1.
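These ranges can be confirmed directly in Java, whose signed primitive types match the first four rows of the table (Java has no 128-bit primitive; java.math.BigInteger handles arbitrarily large values). A minimal sketch:

```java
public class IntegerRanges {
    public static void main(String[] args) {
        // Each n-bit signed type spans -(2^(n-1)) .. 2^(n-1) - 1
        System.out.println("byte  (8-bit):  " + Byte.MIN_VALUE + " .. " + Byte.MAX_VALUE);
        System.out.println("short (16-bit): " + Short.MIN_VALUE + " .. " + Short.MAX_VALUE);
        System.out.println("int   (32-bit): " + Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE);
        System.out.println("long  (64-bit): " + Long.MIN_VALUE + " .. " + Long.MAX_VALUE);
        // No 128-bit primitive in Java; use java.math.BigInteger if needed
    }
}
```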

Why are there multiple storage types? Why not just store all integer values in a 128-bit data type? It's so the programmer can choose a storage type that minimizes wasted space while still giving good execution speed. For example, if the programmer knows a specific integer value will only represent days of the week (0=Sunday, 1=Monday, ..., 5=Friday, 6=Saturday), a 128-bit data type wastes a lot of storage and takes extra machine cycles to fetch/store each data value operated on. A smaller data type, such as 8-bit, would be more appropriate in this case.
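The day-of-the-week example might look like this in Java, where an 8-bit byte comfortably holds the values 0 through 6 (the array of names here is just for illustration):

```java
public class DayOfWeek {
    public static void main(String[] args) {
        // Values 0..6 fit easily in an 8-bit byte (-128..127);
        // a 128-bit value would waste 15 of every 16 bytes stored.
        byte day = 6; // 0=Sunday, 1=Monday, ..., 6=Saturday
        String[] names = { "Sunday", "Monday", "Tuesday", "Wednesday",
                           "Thursday", "Friday", "Saturday" };
        System.out.println(names[day]); // prints Saturday
    }
}
```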

If an integer value is to represent the population of a country, it must be able to hold a value of at least a billion, so at least a 32-bit data type is needed. However, if one is calculating the total population of the world, the sum of the population of all countries will exceed the maximum value that can be stored in a 32-bit data type, so that value should be stored in at least a 64-bit data type.
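A quick sketch of the population case in Java, using a 64-bit long for the running total (the population figures below are rough, illustrative numbers, not exact data):

```java
public class PopulationTotals {
    public static void main(String[] args) {
        // A single country's population fits in a 32-bit int (max ~2.1 billion),
        // but the world total (~8 billion) does not, so the sum must be a long.
        long[] countryPopulations = { 1_425_000_000L, 1_408_000_000L, 339_000_000L };
        long total = 0;
        for (long p : countryPopulations) {
            total += p;
        }
        // Even these three countries alone already exceed Integer.MAX_VALUE
        System.out.println("Total: " + total);
    }
}
```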

Overflow/Underflow

Okay, let's talk about overflow and underflow now. Overflow occurs when a positive value exceeds the maximum that can be stored in a data type, while underflow occurs when a negative value is less than the minimum that can be stored in a data type. Choosing a data type wide enough to avoid computational overflow or underflow is of great importance. Overflow/underflow can cause a program to crash and/or behave in an incorrect manner that's difficult to diagnose.

Consider an example where a variable named NumberOfChairs is declared as a 16-bit integer data type. Our computer program initially sets this variable to zero and every time another chair is encountered in our inventory, the value of NumberOfChairs is incremented by one.

When our chair manufacturing business was small, there was no way our warehouse could hold more than 32,767 chairs. Over time, however, our business grew, as did our warehousing capabilities, and one day, out of the blue, we had an inventory of 32,768 chairs. Depending on the programming language and/or the computer processor, incrementing a 16-bit integer past 32,767 will either abort the program entirely or (worse yet) silently wrap the value around to -32,768 in two's-complement representation, and the program will continue running with a nonsensical inventory count.
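Java takes the silent route: overflow on a 16-bit short wraps around with no error. A minimal demonstration of the chair-counting scenario:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        short numberOfChairs = Short.MAX_VALUE; // 32,767 chairs counted so far
        numberOfChairs++;                       // one more chair arrives...
        // Java silently wraps two's-complement values:
        // 32,767 + 1 becomes -32,768, with no exception raised
        System.out.println(numberOfChairs);     // prints -32768
    }
}
```

(For 32-bit and 64-bit arithmetic, Java's Math.addExact and Math.multiplyExact throw an ArithmeticException on overflow instead of wrapping, which makes such bugs far easier to catch.)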

Basically everything is going along fine and, for no apparent reason, our inventory processing program either crashes or silently produces incorrect results. Diagnosing and fixing this will be both painful and costly.

A practical programming rule of thumb: estimate the largest value you think could realistically ever occur, then choose the next larger data type anyway. The infamous Millennium (a.k.a. Y2K) bug cost billions of dollars to fix, and it was the result of thousands of older computer programs allocating only a two-digit storage value (i.e., 00 through 99 decimal) to represent the year, instead of a full four digits.
