Sorting Algorithm Comparison: Strengths & Weaknesses

Instructor: Lyna Griffin

Lyna has tutored undergraduate courses in Information Management Systems and Database Development. She has a Bachelor's degree in Electrical Engineering and a Master's degree in Information Technology.

In this lesson, we will examine different types of sorting algorithms and learn how they operate. We will compare their performance, strengths, and weaknesses using Big-O notation.

What are Sorting Algorithms?

A sorting algorithm is used to reorder a group of items into a specified order, such as alphabetical order or increasing or decreasing numerical order. There are different types of sorting algorithms, each with its own strengths and weaknesses.

The Big-O Notation

The Big-O notation is a standard by which the performance of algorithmic functions is measured. Four common time complexities that describe the performance of a function are listed below; each is illustrated with a short code sketch after the list.

  • O(1) - Constant Time Complexity
  • O(n) - Linear Time Complexity
  • O(log n) - Logarithmic Time Complexity
  • O(n^2) - Quadratic Time Complexity
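
To make these complexities concrete, here is a minimal Python sketch (our own illustration, not part of the lesson; the function names are hypothetical) pairing each one with a simple list operation:

  from bisect import bisect_left

  def constant_time(data):
      # O(1): indexing a list takes the same time regardless of n
      return data[0]

  def linear_time(data, target):
      # O(n): in the worst case, every element is examined once
      return target in data

  def logarithmic_time(sorted_data, target):
      # O(log n): binary search halves the remaining range at each step
      i = bisect_left(sorted_data, target)
      return i < len(sorted_data) and sorted_data[i] == target

  def quadratic_time(data):
      # O(n^2): two nested loops visit every pair of elements
      count = 0
      for _ in data:
          for _ in data:
              count += 1
      return count

  print(constant_time([2, 3, 6, 7, 9]))        # 2
  print(linear_time([2, 3, 6, 7, 9], 6))       # True
  print(logarithmic_time([2, 3, 6, 7, 9], 6))  # True
  print(quadratic_time([2, 3, 6, 7, 9]))       # 25 pairs visited for n = 5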

Figure 1 shows a graph of the different time complexities and how they grow as n increases, where n is the number of elements in the array.


Figure 1: Big-O time complexities as n increases


Bubble Sort Algorithm

The Bubble Sort algorithm sorts by comparison. Adjacent pairs of values are compared and swapped as needed according to the required order. On each pass, the larger element is repeatedly compared and bubbled toward its correct position at the right end of the array.

Bubble Sort: Method

We want to sort the following sequence of numbers, (7 2 6 3 9), in ascending order, so the largest element ends up at the far right.

First Round

Notation: a > b means a is greater than b; a < b means a is less than b.

(7 2 6 3 9) -> (2 7 6 3 9): [7>2 swap]

(2 7 6 3 9) -> (2 6 7 3 9): [7>6 swap]

(2 6 7 3 9) -> (2 6 3 7 9): [7>3 swap]

(2 6 3 7 9): [7<9 NO swap]

Second Round

(2 6 3 7 9) -> (2 6 3 7 9): [2<6 NO swap]

(2 6 3 7 9) -> (2 3 6 7 9): [6>3 swap]

(2 3 6 7 9) -> (2 3 6 7 9): [6<7 NO swap]

Third Round

(2 3 6 7 9) -> (2 3 6 7 9): [2<3 NO swap]

The sorting is complete. Sorted array: (2 3 6 7 9)
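
Below is a minimal Python sketch of the pass-by-pass procedure just shown (the name bubble_sort and the early-exit flag are our additions, not code from the lesson). The flag lets an already-sorted input finish after a single pass, which is the O(n) best case described in the next section.

  def bubble_sort(values):
      # Repeatedly bubble the largest remaining element to the right end.
      items = list(values)                  # work on a copy
      n = len(items)
      for round_end in range(n - 1, 0, -1):
          swapped = False
          for i in range(round_end):
              if items[i] > items[i + 1]:   # compare the adjacent pair
                  items[i], items[i + 1] = items[i + 1], items[i]  # swap
                  swapped = True
          if not swapped:                   # a full pass with no swaps: already sorted
              break
      return items

  print(bubble_sort([7, 2, 6, 3, 9]))       # [2, 3, 6, 7, 9]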

Bubble Sort: Performance

The best-case scenario, an array that is already sorted, is depicted by O(n): the algorithm executes in a time directly proportional to the size of the array. The worst-case scenario occurs when the array needs to be 'reverse sorted' and is depicted by O(n^2), where the time grows quadratically as the number of elements increases.

Strengths

  • Easy to understand
  • Easy to implement
  • Does not demand large amounts of memory
  • Once sorted, the data is immediately available for processing

Weaknesses

  • Sorting takes a long time as the array grows, due to the O(n^2) running time

Selection Sort Algorithm

In the Selection Sort algorithm, the array is divided into two sections: a sorted portion and an unsorted portion. The operation involves repeatedly identifying the minimum element in the unsorted portion and moving it to the end of the sorted portion. This process is repeated until the array is completely sorted.

Selection Sort Method

Let us say we have the array (9, 6, 2, 5, 4), as shown in Figure 2.


Figure 2: Selection Sort on the array (9, 6, 2, 5, 4)
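
The sketch below mirrors that process in Python (the name selection_sort is our own; this is an illustration of the lesson's description, not its code). The boundary index separates the sorted portion on the left from the unsorted portion on the right.

  def selection_sort(values):
      # Repeatedly select the minimum of the unsorted portion and
      # swap it into place at the end of the sorted portion.
      items = list(values)                    # work on a copy
      n = len(items)
      for boundary in range(n - 1):
          min_index = boundary
          for i in range(boundary + 1, n):    # scan the unsorted portion
              if items[i] < items[min_index]:
                  min_index = i
          items[boundary], items[min_index] = items[min_index], items[boundary]
      return items

  print(selection_sort([9, 6, 2, 5, 4]))      # [2, 4, 5, 6, 9]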


Selection Sort: Performance

The Selection Sort best-case and worst-case scenarios both have the time complexity O(n^2), because the sorting operation involves two nested loops. The size of the array is again the dominant factor in its performance.

Strengths

  • The arrangement of elements does not affect its performance
  • Performs few swaps (at most one per pass), so it is more economical where data movement is costly

Weaknesses

  • Comparing elements in the unsorted portion requires O(n^2) time, so it is only practical when n is small

Merge Sort Algorithm

With the Merge Sort algorithm, the array is divided into multiple sub-arrays. Each sub-array is sorted, and the sub-arrays are then merged back together into a single sorted array. This method is often referred to as the divide-and-conquer paradigm.

Merge Sort Method

Figure 3 shows the Merge Sort algorithm in progress on the following array:

(15, 32, 28, 9, 40, 18, 38, 50)


Figure 3: Merge Sort, dividing the array (steps 1-4)


Steps 1-4 in Figure 3 show the array being repeatedly divided into halves and sub-halves until each element is in an atomic state and can no longer be divided.


Figure 4: Merge Sort, merging the sub-arrays (steps 5-7)


Steps 5-7 in Figure 4 show the atomic elements being merged back together in the reverse order in which they were divided. At each stage, two sorted sub-arrays are combined into a single sorted sub-array, and the process repeats until step 7 produces the fully sorted array.
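
A compact Python sketch of this divide-and-conquer process appears below (the names merge_sort and merge are ours; it is an illustration of the description above, not the lesson's own code).

  def merge_sort(values):
      # Divide the list into halves, sort each half recursively,
      # then merge the two sorted halves back together.
      if len(values) <= 1:                    # atomic: can no longer be divided
          return list(values)
      mid = len(values) // 2
      left = merge_sort(values[:mid])
      right = merge_sort(values[mid:])
      return merge(left, right)

  def merge(left, right):
      # Combine two sorted lists into one sorted list.
      merged = []
      i = j = 0
      while i < len(left) and j < len(right):
          if left[i] <= right[j]:             # <= keeps equal elements in order (stable)
              merged.append(left[i])
              i += 1
          else:
              merged.append(right[j])
              j += 1
      merged.extend(left[i:])                 # one side may still hold leftover elements
      merged.extend(right[j:])
      return merged

  print(merge_sort([15, 32, 28, 9, 40, 18, 38, 50]))
  # [9, 15, 18, 28, 32, 38, 40, 50]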

Merge Sort: Performance

The best-case and worst-case scenarios are both denoted by O(n log n).

Strengths

  • Well suited to data stored on media with slow access rates, such as tape drives and hard disks
  • The size of the file does not adversely affect the performance
  • It is good for sorting through data sets that are accessed in sequence
  • Implements a stable sort
