Problem 1 (5 points)
Give a real-world example that requires sorting.
A social media platform frequently has to sort posts, for example by most recent or most liked. When viewing a creator's channel on YouTube, their videos are sorted by most recent by default, but users can also sort them by most popular.
Problem 2 (10 points)
Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size n, insertion sort runs in 4n^2 steps, while merge sort runs in 32n lg n steps. For which values of n does insertion sort beat merge sort?
Setting 4n^2 = 32n lg n and dividing both sides by 4n gives n = 8 lg n (taking lg as log base 2, the standard in running-time analysis). Plotting both sides in Desmos, the curves cross when n ≈ 1.1 and n ≈ 43.56. When speaking strictly of runtimes: insertion sort runs faster on inputs where 1.1 < n < 43.56.
- Fractional inputs don't make sense for array sorting, so practically: insertion sort is faster on inputs of size 2 ≤ n ≤ 43,
and merge sort is faster for all other inputs (n ≥ 44; an input of size n = 1 needs no sorting).
This passes the sanity check, as insertion sort is known to be faster on small inputs even though merge sort has the better asymptotic growth rate.
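The crossover can be double-checked numerically with a short sketch (assuming lg means log base 2, as above):

```python
import math

def insertion_steps(n):
    # Insertion sort cost model from the problem statement: 4n^2 steps.
    return 4 * n * n

def merge_steps(n):
    # Merge sort cost model from the problem statement: 32 n lg n steps.
    return 32 * n * math.log2(n)

# Collect every integer n >= 2 where insertion sort is strictly faster.
faster = [n for n in range(2, 100) if insertion_steps(n) < merge_steps(n)]
print(faster[0], faster[-1])  # → 2 43
```

This confirms that insertion sort wins exactly on the integer sizes 2 through 43.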
Problem 3 (25 points)
Write an INSERTION-SORT algorithm that sorts into decreasing order (example - [4,1,6,5,3,4] → [6,5,4,4,3,1]). Show the time complexity of the code on each step along the way, then write out the final time complexity of your algorithm.
Pseudocode:
def insertion_sort(A):
    for i in range(1, len(A)): # outer loop: O(n) iterations
        key = A[i] # O(1)
        j = i - 1 # O(1)
        while j >= 0 and key > A[j]: # inner loop: O(i) -> O(n) in the worst case
            A[j + 1] = A[j] # O(1)
            j -= 1 # O(1)
        A[j + 1] = key # O(1)
    return A
Since we have nested loops, in the worst case (an ascending-sorted array, since we sort into decreasing order) the inner loop executes i times for each outer iteration, giving O(n^2) overall.
For different inputs the algorithm has the following time complexity:
- Sorted input (A = [1, 2, 3, 4, 5]): the worst case. The algorithm is forced to backtrack past every number already placed, and runs in O(n^2) time.
- Reverse input (A = [5, 4, 3, 2, 1]): the best case. The algorithm runs in O(n) time, iterating over the array but never having to backtrack.
- Unsorted input: also O(n^2). On an average input, such as a half-sorted array, each insertion backtracks past about half of the already-sorted prefix, so the running time trends toward the worst case.
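The three cases can be observed concretely by counting how many shifts the decreasing-order insertion sort performs on each kind of input (a small illustrative sketch):

```python
def count_shifts(A):
    """Run decreasing-order insertion sort on a copy of A; return the shift count."""
    A = list(A)
    shifts = 0
    for i in range(1, len(A)):
        key = A[i]
        j = i - 1
        while j >= 0 and key > A[j]:  # backtrack while key belongs further left
            A[j + 1] = A[j]
            shifts += 1
            j -= 1
        A[j + 1] = key
    return shifts

n = 8
print(count_shifts(range(1, n + 1)))   # ascending input, worst case: n(n-1)/2 = 28 shifts
print(count_shifts(range(n, 0, -1)))   # descending input, best case: 0 shifts
```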
Problem 4 (20 points)
Solve the following recurrence relations and give a Θ bound for each of them.
- (a) T(n) = 8T(n/2) + n^3
- (b) T(n) = 49T(n/25) + n^(3/2) log n
- (c) T(n) = T(n-1) + 2
- (d) T(n) = T(n-1) + c^n, where c > 1 is some constant
Solving (a)-(b) using master theorem:
a) a=8, b=2, c=3.
log_2 8 = 3 = c, therefore case 2 applies: T(n) = Θ(n^3 log n).
b) a=49, b=25, c=3/2.
log_25 49 ≈ 1.209 < 3/2, so f(n) = n^(3/2) log n is polynomially larger than n^(log_25 49) (case 3; the regularity condition holds since 49(n/25)^(3/2) log(n/25) ≤ (49/125) n^(3/2) log n), therefore: T(n) = Θ(n^(3/2) log n).
c) Solving by unrolling: T(n) = T(n-1) + 2 = T(n-2) + 4 = ... = T(0) + 2n, therefore T(n) = Θ(n).
d) Unrolling gives T(n) = T(0) + (c + c^2 + ... + c^n) = T(0) + c(c^n - 1)/(c - 1). The geometric sum is dominated by its last term, therefore T(n) = Θ(c^n).
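The closed forms for (c) and (d) can be sanity-checked with a short sketch (T(0) = 1 is an arbitrary base case chosen for illustration):

```python
def T_c(n, base=1):
    # (c) T(n) = T(n-1) + 2, computed by direct iteration.
    t = base
    for _ in range(n):
        t += 2
    return t

def T_d(n, c, base=1):
    # (d) T(n) = T(n-1) + c^n, computed by direct iteration.
    t = base
    for k in range(1, n + 1):
        t += c ** k
    return t

n = 20
assert T_c(n) == 1 + 2 * n                       # matches T(0) + 2n  -> Theta(n)
assert T_d(n, 3) == 1 + (3 ** (n + 1) - 3) // 2  # geometric sum     -> Theta(c^n)
```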
Problem 5 (15 points)
Convert a decimal number to binary, show the recurrence relation, and show how the time function (time complexity) for this divide and conquer problem is calculated.
def recurse(n, A):
    if n == 0:
        return
    recurse(n // 2, A)    # solve the half-size subproblem first
    A.append(str(n % 2))  # then append this level's remainder

def decToBinary(n):
    if n == 0:
        return "0"
    A = []
    recurse(n, A)
    return "".join(A)
The algorithm recurses floor(lg n) + 1 times, halving n with each call, giving the recurrence T(n) = T(n/2) + O(1).
Solving using master theorem:
- a=1, b=2, c=0
log_2 1 = 0 = c, therefore case 2 applies: T(n) = Θ(log n).
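The conversion can be verified with a self-contained sketch checked against Python's built-in bin (the function name to_binary is illustrative):

```python
def to_binary(n):
    # Divide and conquer: T(n) = T(n // 2) + O(1)  ->  Theta(log n).
    if n == 0:
        return "0"
    bits = []
    def recurse(m):
        if m == 0:
            return
        recurse(m // 2)          # solve the half-size subproblem first
        bits.append(str(m % 2))  # then emit this level's remainder
    recurse(n)
    return "".join(bits)

# Check against the built-in conversion for a range of inputs.
for n in range(64):
    assert to_binary(n) == bin(n)[2:]
```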
Problem 6 (15 points)
Suppose you are choosing between the following two algorithms:
- Algorithm A solves problems by dividing them into seven subproblems of a quarter the size, recursively solving each subproblem, and then combining the solutions in linear time.
- Algorithm B solves problems of size n by dividing them into nine subproblems of size n/3, recursively solving each subproblem, and then combining the solutions in O(n^2) time.
What are the running times of each of these algorithms (in big-O notation), and which would you choose?
Alg A: T(n) = 7T(n/4) + O(n)
Alg B: T(n) = 9T(n/3) + O(n^2)
Solving each runtime using master theorem:
- Algorithm A: a=7, b=4, c=1
log_4 7 ≈ 1.404 > 1, therefore case 1 applies: T(n) = Θ(n^(log_4 7)) ≈ O(n^1.41).
- Algorithm B: a=9, b=3, c=2
log_3 9 = 2 = c, therefore case 2 applies: T(n) = Θ(n^2 log n).
Since Algorithm A's running time Θ(n^(log_4 7)) ≈ Θ(n^1.404) grows more slowly than Algorithm B's Θ(n^2 log n), I would choose Algorithm A.
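The exponent comparison can be checked numerically in one line:

```python
import math

# Algorithm A: Theta(n^(log_4 7)); Algorithm B: Theta(n^2 log n).
exp_a = math.log(7, 4)      # A's exponent
print(round(exp_a, 4))      # → 1.4037, strictly below B's exponent of 2
assert exp_a < 2            # so A's polynomial grows more slowly than B's
```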
Problem 7 (10 points)
Verify the following by contradiction:
- (a) 7n^2 + 3n ≠ O(n)
- (b) 3 log n ≠ Ω(n log n)
Part a: Suppose toward a contradiction that 7n^2 + 3n = O(n). Then there exist constants c > 0 and n_0 such that 7n^2 + 3n ≤ cn for all n ≥ n_0. Dividing both sides by n gives 7n + 3 ≤ c for all n ≥ n_0. But 7n + 3 grows without bound, so the inequality fails for any n > (c - 3)/7, a contradiction. Therefore 7n^2 + 3n ≠ O(n).
Part b: Suppose toward a contradiction that 3 log n = Ω(n log n). Then there exist constants c > 0 and n_0 such that 3 log n ≥ c·n log n for all n ≥ n_0. For n ≥ 2 we have log n > 0, so dividing both sides by log n gives 3 ≥ cn, i.e., n ≤ 3/c for all n ≥ n_0. But n grows without bound, a contradiction. Therefore 3 log n ≠ Ω(n log n).
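Both contradictions can also be observed numerically: the ratio in (a) grows without bound (so no constant c can bound it above), while the ratio in (b) tends to 0 (so no positive constant c can bound it below):

```python
import math

for n in (10, 1000, 100000):
    ratio_a = (7 * n * n + 3 * n) / n                   # = 7n + 3, increases without bound
    ratio_b = (3 * math.log2(n)) / (n * math.log2(n))   # = 3/n, tends to 0
    print(n, ratio_a, ratio_b)
```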