How to measure algorithm speed
So, I’ve finished writing the system design interview series. But that’s not the end. It’s time for data structures and algorithms now!
In software development, whenever we create a new feature, it’s important to choose the right data structure. For example, it’d be very difficult to build autocomplete without a trie.
Now, I’m not gonna put links here yet. We’ll investigate individual structures later. But to give an idea of efficiency, consider an array compared with a set: it may look like we always want an array, right? Inserting into a set is slower (it has to check for duplicates first), deleting is the same, searching is the same. But the real question is: is it more important for our code to have no duplicates, or to have faster insertion?
So before I go forward, let me put an example of the importance:
Consider searching in an array:
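The snippet that belonged here seems to have been lost, so here is a minimal reconstruction (my own sketch, not the original code) of a linear search over an array - the source of the O(n) mentioned just below:

```javascript
// Linear search: check every element until we find a match.
// Worst case we look at all n elements, so this is O(n).
function linearSearch(arr, what) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === what) return i; // found: return its index
  }
  return -1; // not found
}

console.log(linearSearch([5, 3, 8, 1], 8)); // 2
console.log(linearSearch([5, 3, 8, 1], 7)); // -1
```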
Now, I’ve used O(n) notation above. And that’s the talk of this part. So, what is it?
So, complexity! Ever so weird a word! A complexity chart ranks the common classes; from best to worst, they are: O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), and O(n!).
Well, let’s put some more examples. Accessing an array element by index, like [1,2,3][0] or [1,2,3][2], always takes a single step no matter which index we pick, so it’s O(1).
So, we’ve learned a little about Big Oh. The gist of it is, it’s a mathematical way of saying “hey, this takes some time”.
Now, why do we care about something taking time? Well, imagine that you’re developing Facebook. You have A LOT of data. If you store it in a bad way, your product will be so slow it becomes unusable. Would you still use Facebook if it took 10 minutes to send a message? Probably not.
So, what can we do with it? Well, we use it to speed up code.
As I mentioned before, sorting algorithms are a prime example, and we’ll get to them shortly.
So, all in all, it’s just a short way of saying how efficient an algorithm is. And if we deem it too slow, then we can improve.
So, how can we use Big Oh to speed up our code?
Well, we can’t use Big Oh itself. What we use it for is choosing a faster algorithm or a better data structure.
So, let’s consider we’re searching in a sorted array:
const arr = Array.from({length: 100}, (v, k) => k);
Let’s consider we’re searching for number 60.
So, let’s use binary search!
Array.prototype.binarySearch = function (what) {
let lowerBound = 0;
let upperBound = this.length - 1; // inclusive upper bound
while (upperBound >= lowerBound) {
// middle of the current [lowerBound, upperBound] range
const middle = Math.floor((lowerBound + upperBound) / 2);
if (this[middle] === what) return what;
// middle has already been checked, so exclude it from the next range;
// without the +1/-1 the loop can get stuck forever on a missing value
if (this[middle] < what) lowerBound = middle + 1;
else upperBound = middle - 1;
}
return undefined;
};
Now, with this, we keep halving the range, only ever looking at the middle element. The most it can take is log2(100) steps, which is 7 (rounded up). Can we use this always? Of course not. We can do it only on sorted arrays! But it’s a great example of usefulness!
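To see that step count concretely, here’s a standalone sketch (my own instrumented version, not the method above) that counts how many elements a binary search inspects in a sorted 100-element array:

```javascript
// Count how many elements binary search inspects before finishing.
// For 100 elements, log2(100) ≈ 6.6, so it never needs more than 7 looks.
function binarySearchSteps(arr, what) {
  let lo = 0;
  let hi = arr.length - 1;
  let steps = 0;
  while (lo <= hi) {
    steps++; // one element inspected per loop iteration
    const mid = Math.floor((lo + hi) / 2);
    if (arr[mid] === what) return { found: true, steps };
    if (arr[mid] < what) lo = mid + 1;
    else hi = mid - 1;
  }
  return { found: false, steps };
}

const sortedArr = Array.from({ length: 100 }, (v, k) => k);
console.log(binarySearchSteps(sortedArr, 60)); // { found: true, steps: 7 }
console.log(binarySearchSteps(sortedArr, -5).steps <= 7); // true
```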
One thing to note here is the clever way with indices. Imagine the following code to do the search:
Array.prototype.binarySearch = function (what) {
let _this = this;
while (_this.length > 1) {
const middle = Math.floor(_this.length / 2);
if (_this[middle] === what) return what;
// the middle element was already checked, so it can be dropped from the slice
if (_this[middle] < what) _this = _this.slice(middle + 1);
else _this = _this.slice(0, middle);
}
return _this[0] === what ? what : undefined;
};
While it seemingly does the same thing, notice the slice method. slice copies the selected elements into a brand-new array, which itself takes time proportional to how many elements get copied. So, to actually be efficient, we have to perform the clever index play instead.
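A quick sketch of why slice is not free - it allocates a fresh array on every call:

```javascript
// slice() copies: it allocates a brand-new array each time it is called,
// costing time proportional to the number of elements copied.
const original = [1, 2, 3, 4, 5];
const half = original.slice(2); // copies elements at indices 2..4
console.log(half);              // [3, 4, 5]
console.log(half === original); // false: it is a separate array
original[2] = 99;
console.log(half[0]);           // still 3: the copy is independent
```

Doing this on every loop iteration, as the second binarySearch does, quietly adds linear work to what should be a logarithmic algorithm.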
How would we perform the sort? Well, I’ve mentioned bubble sort and quick sort. Let me quickly create bubble sort, because it’s very easy, but also very slow:
Array.prototype.bubbleSort = function () {
let numberOfSorted = 0; // how many items are already locked in place at the end
let sorted = false;
while (!sorted) {
const unsortedUntil = this.length - numberOfSorted;
sorted = true; // assume sorted until a swap proves otherwise
// stop at unsortedUntil - 1 so this[i + 1] never runs past the end
for (let i = 0; i < unsortedUntil - 1; i++) {
const previous = this[i];
const next = this[i + 1];
if (previous > next) {
// swap the out-of-order neighbours
this[i] = next;
this[i + 1] = previous;
sorted = false;
}
}
numberOfSorted++;
}
};
arr.bubbleSort();
In this case, we start with a reversed array (the arr from before with .reverse() applied). Then we compare each pair of adjacent items and, if they’re out of order, we swap them.
And that’s it. Once we apply this bubble sort ONCE, all subsequent searches will be fast. And that’s the importance.
Now, in the previous example, we’ve sped up the search by applying it on a sorted array. We’ve paid the sorting cost once, and made every search after that cheap.
Without sorting, all searches would be O(n). With it, all searches are O(log n).
Now, it may very well be the case that two algorithms of the same “Oh Rank” don’t have the same speed.
Consider the following example:
// first
for (let i = 0; i < 1_000_000; i++) {
const j = i;
const c = i;
console.log(i);
}
// second
for (let i = 0; i < 1_000_000; i++) {
console.log(i);
}
In the example, we have two for loops. The second one is roughly three times as fast, because it performs one operation per iteration instead of three. However, both would be considered O(n). That’s because constants are ignored.
If something takes O(3n) - such as the first example - it’d still be considered O(n).
That’s also the reason why O(1) doesn’t mean it always takes 1 step. It means that no matter the circumstances, it’ll always be equally fast (or slow). For an example of this, you can think of objects:
const a = { a: 1, b: 2, c: 3 };
const b = { c: 3 }
const res1 = a['c'];
const res2 = b['c'];
In the above example, the res1 and res2 retrievals take the same time.
With that in mind, I’ve previously done bubble sort. Let me quickly show selectionSort:
const arr = Array.from({ length: 100 }, (v, k) => k).reverse();
Array.prototype.selectionSort = function () {
for (let i = 0; i < this.length; i++) {
// find the index of the smallest remaining element
let lowest = i;
for (let j = i + 1; j < this.length; j++) {
if (this[j] < this[lowest]) lowest = j;
}
// one swap per pass, placing that element at position i
if (lowest !== i) {
const temp = this[i];
this[i] = this[lowest];
this[lowest] = temp;
}
}
};
arr.selectionSort();
If we compare the two, we basically have: bubble sort bails out as soon as a full pass makes no swaps, while selection sort always scans the entire unsorted remainder. In other words, when an array is already sorted ascending, bubble sort does a single pass and stops! However, selectionSort will still perform ~5k comparisons (100 × 99 / 2 = 4950 for our 100 elements).
So, even though both are O(n^2), you can choose one for better performance, depending on its usage.
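To make that concrete, here’s a standalone sketch (my own counting versions, not the implementations above) that counts comparisons each approach makes on an already-sorted 100-element array:

```javascript
// Count comparisons bubble sort makes; it stops after a pass with no swaps.
function bubbleSortOps(arr) {
  const a = arr.slice(); // work on a copy
  let ops = 0;
  let pass = 0;
  let sorted = false;
  while (!sorted) {
    sorted = true;
    for (let i = 0; i < a.length - 1 - pass; i++) {
      ops++; // one comparison
      if (a[i] > a[i + 1]) {
        [a[i], a[i + 1]] = [a[i + 1], a[i]];
        sorted = false;
      }
    }
    pass++;
  }
  return ops;
}

// Count comparisons selection sort makes; it always scans the remainder.
function selectionSortOps(arr) {
  const a = arr.slice();
  let ops = 0;
  for (let i = 0; i < a.length; i++) {
    let min = i;
    for (let j = i + 1; j < a.length; j++) {
      ops++; // one comparison
      if (a[j] < a[min]) min = j;
    }
    if (min !== i) [a[i], a[min]] = [a[min], a[i]];
  }
  return ops;
}

const sortedInput = Array.from({ length: 100 }, (v, k) => k);
console.log(bubbleSortOps(sortedInput));    // 99: one pass, then it stops
console.log(selectionSortOps(sortedInput)); // 4950: always n * (n - 1) / 2
```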
I want to make one thing absolutely clear - none of these sorts is used in practice nowadays, because they’re slow. However, they’re good examples of how something can be done in different ways with the same “Oh Rank”, yet have different speeds depending on the usage.
So, we’re at the end of Big Oh! The key takeaway is: Big Oh describes how an algorithm’s running time grows as its input grows - constants are ignored, and two algorithms of the same rank can still differ in practice.
To get an idea of common use cases and the algorithms used, we’ll go through the individual data structures in the upcoming parts.
Furthermore, we basically have 3 ways to optimize code: pick a better data structure, pick a better algorithm, or reduce the work done in each step.