It's always good to review the worst-case time complexities of common data structure operations.
Arrays are one of the most basic and important data structures to learn. They take constant time to read an element and to insert at the end, and linear time for the remaining operations.
Stack takes constant time for Push, Pop & Peek operations.
A queue takes constant time for Enqueue, Dequeue & Peek operations.
Here we assume that all singly linked lists keep a tail pointer (some implementations might not have one).
Linked lists come with many operational scenarios: we have to consider head and tail usage in every operation, and both the logic and the complexity change at the head, tail, and middle. Typically, insertion at the head or tail takes constant time, while insertion in the middle takes linear time. Search takes linear time. Deletion at the head takes constant time; the remaining cases can take linear time.
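As a sketch of why the tail pointer matters, here is a minimal singly linked list. The class and method names (`SinglyLinkedList`, `addFirst`, `addLast`, `indexOf`) are illustrative, not from the original notes:

```typescript
class ListNode<T> {
  next: ListNode<T> | null = null;
  constructor(public value: T) {}
}

class SinglyLinkedList<T> {
  private head: ListNode<T> | null = null;
  private tail: ListNode<T> | null = null;

  // O(1): we already hold a reference to the head
  addFirst(value: T): void {
    const node = new ListNode(value);
    node.next = this.head;
    this.head = node;
    if (!this.tail) this.tail = node;
  }

  // O(1) only because we keep a tail pointer; without it, this would be O(n)
  addLast(value: T): void {
    const node = new ListNode(value);
    if (this.tail) {
      this.tail.next = node;
    } else {
      this.head = node;
    }
    this.tail = node;
  }

  // O(n): we may have to walk the whole list
  indexOf(value: T): number {
    let i = 0;
    for (let node = this.head; node; node = node.next, i++) {
      if (node.value === value) return i;
    }
    return -1;
  }
}
```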
Trees: basic concepts
A tree is a data structure where a node can have zero or more children. Each node contains a value. As in graphs, the connections between nodes are called edges. A tree is a type of graph, but not all graphs are trees (more on that later).
These data structures are called "trees" because they resemble a tree 🌳. They start with a root node and branch off with descendants, and finally, there are leaves.
Here are some properties of trees:
The top-most node is called the root.
A node without children is called a leaf node or terminal node.
The height (h) of the tree is the distance (edge count) between the farthest leaf and the root.
A has a height of 3
I has a height of 0
Depth or level of a node is the distance between the root and the node in question.
H has a depth of 2
B has a depth of 1
Implementing a simple tree data structure
As we saw earlier, a tree node is just a data structure that has a value and links to its descendants.
Here's an example of a tree node:
We can create a tree with 3 descendants as follows:
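A minimal sketch of this, based on the construction snippet from these notes (the `TreeNode` class itself isn't shown in the original, so its shape here is an assumption consistent with the `descendents` array the snippet uses):

```typescript
class TreeNode {
  descendents: TreeNode[] = [];
  constructor(public value: string) {}
}

const abe = new TreeNode('Abe');
const homer = new TreeNode('Homer');
const bart = new TreeNode('Bart');
const lisa = new TreeNode('Lisa');
const maggie = new TreeNode('Maggie');

// abe is the root; bart, lisa, and maggie are leaves
abe.descendents.push(homer);
homer.descendents.push(bart, lisa, maggie);
```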
That's all; we have a tree data structure!
The node abe is the root, and bart, lisa, and maggie are the leaf nodes of the tree. Notice that a tree's nodes can have different numbers of descendants: 0, 1, 3, or any other number.
Tree data structures have many applications such as:
Querying an LDAP (Lightweight Directory Access Protocol)
Representing the Document Object Model (DOM) for HTML on Websites.
Binary Trees
Tree nodes can have zero or more children. However, when every node has at most two children, the tree is called a binary tree.
Full, Complete and Perfect binary trees
Depending on how nodes are arranged in a binary tree, it can be full, complete and perfect:
Full binary tree: each node has exactly 0 or 2 children (but never 1).
Complete binary tree: when all levels except the last one are full of nodes.
Perfect binary tree: when all the levels (including the last one) are full of nodes.
Look at these examples:
These properties are not always mutually exclusive. You can have more than one:
A perfect tree is always complete and full.
Perfect binary trees have precisely 2^k − 1 nodes, where k is the last level of the tree (starting with 1).
A complete tree is not always full.
Like our "complete" example: it has a parent with only one child. If we remove the rightmost gray node, we would have a complete and full tree, but not a perfect one.
A full tree is not always complete and perfect.
Binary Search Tree (BST)
Binary Search Trees, or BSTs for short, are a particular application of binary trees. Each node has at most two children (like all binary trees). However, the values are arranged so that the left child's value must be less than the parent's, and the right child's value must be higher.
Duplicates: some BSTs don't allow duplicates, while others add the same value as a right child. Other implementations keep a count for each duplicated value (we are going to do that one later).
Let's implement a Binary Search Tree!
BST Implementation
BSTs are very similar to our previous implementation of a tree. However, there are some differences:
Nodes can have at most two children: left and right.
Node values have to be ordered as left < parent < right.
Here's the tree node. It's very similar to what we did before, but with some handy getters and setters for the left and right children. Notice that it also keeps a reference to the parent, and we update it every time we add children.
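A sketch of such a node, with the getter/setter and `parent` bookkeeping described above (the exact class and member names are assumptions):

```typescript
class BSTNode {
  private _left: BSTNode | null = null;
  private _right: BSTNode | null = null;
  public parent: BSTNode | null = null;

  constructor(public value: number) {}

  get left(): BSTNode | null {
    return this._left;
  }

  // every time we attach a left child, also update its parent reference
  set left(node: BSTNode | null) {
    this._left = node;
    if (node) node.parent = this;
  }

  get right(): BSTNode | null {
    return this._right;
  }

  // same bookkeeping for the right child
  set right(node: BSTNode | null) {
    this._right = node;
    if (node) node.parent = this;
  }
}
```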
Ok, so far we can add a left and right child. Now, let's do the BST class that enforces the left < parent < right rule.
BinarySearchTree.js
Let's implement insertion.
BST Node Insertion
To insert a node in a binary tree, we do the following:
If the tree is empty, the first node becomes the root and you are done.
Compare with the root/parent's value: if it's higher, go right; if it's lower, go left. If it's the same, the value already exists, so you increase the duplicate count (multiplicity).
Repeat #2 until we find an empty slot to insert the new node.
Let's illustrate how to insert 30, 40, 10, 15, 12, 50:
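The steps above can be sketched as a recursive insert (a hypothetical helper, not the article's exact implementation; node shape and names are assumptions):

```typescript
interface BstNode {
  value: number;
  multiplicity: number;
  left: BstNode | null;
  right: BstNode | null;
}

function insert(root: BstNode | null, value: number): BstNode {
  if (!root) {
    // 1. empty slot (or empty tree): the new node goes here
    return { value, multiplicity: 1, left: null, right: null };
  }
  if (value === root.value) {
    // duplicate: just bump the multiplicity counter
    root.multiplicity++;
  } else if (value < root.value) {
    root.left = insert(root.left, value);   // 2. lower goes left
  } else {
    root.right = insert(root.right, value); // 2. higher goes right
  }
  return root; // 3. the recursion repeats step 2 until an empty slot is found
}
```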
We are using a helper function called findNodeAndParent. If the node already exists in the tree, we increase the multiplicity counter. Let's see how this function is implemented:
findNodeAndParent goes through the tree searching for the value. It starts at the root and then goes left or right based on the value. If the value already exists, it returns both the node found and its parent. If the node doesn't exist, we still return the parent.
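A sketch of how `findNodeAndParent` might look (the node shape and the exact return signature are assumptions):

```typescript
type TNode = { value: number; left: TNode | null; right: TNode | null };

// walk down from the root; return the found node (or undefined) and its parent
function findNodeAndParent(
  root: TNode | null,
  value: number
): { found: TNode | undefined; parent: TNode | undefined } {
  let node = root;
  let parent: TNode | undefined = undefined;
  while (node) {
    if (node.value === value) {
      return { found: node, parent }; // node exists: return it and its parent
    }
    parent = node;
    node = value < node.value ? node.left : node.right; // go left or right
  }
  return { found: undefined, parent }; // not found: still return the last parent
}
```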
BST Node Deletion
We know how to insert and search for a value. Now we are going to implement the delete operation. It's a little trickier than adding, so let's explain it with the following cases:
Deleting a leaf node (0 children)
We just set the parent's reference to the deleted node (15) to null.
Deleting a node with one child.
In this case, we go to the parent (30) and replace its child (10) with the child's own child (15).
Deleting a node with two children
We are removing node 40, which has two children (35 and 50). We replace the parent's (30) child (40) with the child's right child (50). Then we keep the left child (35) in the same place it was before, so we make it the left child of 50.
Another way to remove node 40 is to move the left child (35) up and keep the right child (50) where it was.
Either way is ok as long as you keep the binary search tree property: left < parent < right.
Deleting the root.
Deleting the root is very similar to removing nodes with 0, 1, or 2 children that we discussed earlier. The only difference is that afterward, we need to update the reference of the root of the tree.
Here's an animation of what we discussed.
In the animation, it moves up the left child/subtree and keeps the right child/subtree in place.
Now that we have a good idea of how it should work, let's implement it:
For instance, let's say we have the following tree and we are about to delete node 30. We want to combine 30's left subtree into the right one. The result is this:
Now, if we make the combined subtree the new root, then node 30 is no more!
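Putting the cases together, here is a sketch of `remove` using the "combine the left subtree into the right one" idea described above (the helper name and node shape are assumptions):

```typescript
type RNode = { value: number; left: RNode | null; right: RNode | null };

// attach the left subtree under the leftmost node of the right subtree;
// this preserves left < parent < right because every left value is smaller
// than every right value
function combineLeftIntoRight(left: RNode | null, right: RNode | null): RNode | null {
  if (!right) return left;
  let leftmost = right;
  while (leftmost.left) leftmost = leftmost.left;
  leftmost.left = left;
  return right;
}

function remove(root: RNode | null, value: number): RNode | null {
  if (!root) return null; // value not in the tree
  if (value < root.value) {
    root.left = remove(root.left, value);
  } else if (value > root.value) {
    root.right = remove(root.right, value);
  } else {
    // found it: replace this node with its children combined.
    // this one line covers all three cases (0, 1, or 2 children)
    return combineLeftIntoRight(root.left, root.right);
  }
  return root;
}
```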
Binary Tree Traversal
There are different ways of traversing a binary tree depending on the order in which the nodes are visited: in-order, pre-order, and post-order. We can also use the DFS and BFS approaches that we learned from the graph post. Let's go through each one.
In-Order Traversal
In-order traversal visits nodes in this order: left, parent, right.
In-order traversal would print out the following values: 3, 4, 5, 10, 15, 30, 40. If the tree is a BST, the nodes are visited in ascending order, as in our example.
Post-Order Traversal
Post-order traversal visits nodes in this order: left, right, parent.
Post-order traversal would print out the following values: 3, 4, 5, 15, 40, 30, 10.
Pre-Order Traversal
Pre-order traversal visits nodes in this order: parent, left, right. It would print out the following values: 10, 5, 4, 3, 30, 15, 40. This is the same order we would get by running a Depth-First Search (DFS).
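The three orders can be sketched with small recursive functions that collect values into an array (a compact approach; names and node shape are assumptions):

```typescript
type TreeN = { value: number; left: TreeN | null; right: TreeN | null };

function inOrder(node: TreeN | null, out: number[] = []): number[] {
  if (node) {
    inOrder(node.left, out);   // left
    out.push(node.value);      // parent
    inOrder(node.right, out);  // right
  }
  return out;
}

function preOrder(node: TreeN | null, out: number[] = []): number[] {
  if (node) {
    out.push(node.value);      // parent
    preOrder(node.left, out);  // left
    preOrder(node.right, out); // right
  }
  return out;
}

function postOrder(node: TreeN | null, out: number[] = []): number[] {
  if (node) {
    postOrder(node.left, out);  // left
    postOrder(node.right, out); // right
    out.push(node.value);       // parent
  }
  return out;
}
```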
So far, we have discussed how to add, remove and find elements. However, we haven't talked about runtimes. Let's think about the worst-case scenarios.
Let's say that we want to add numbers in ascending order.
We will end up with all the nodes on the left side! This unbalanced tree is no better than a LinkedList, so finding an element takes O(n). 😱
Looking for something in an unbalanced tree is like looking for a word in the dictionary page by page. When the tree is balanced, you can open the dictionary in the middle and from there you know if you have to go left or right depending on the alphabet and the word you are looking for.
We need to find a way to balance the tree!
If the tree were balanced, we could find elements in O(log n) instead of going through each node. Let's talk about what a balanced tree means.
If we search for 7 in the non-balanced tree, we have to go from 1 to 7. In the balanced tree, however, we visit only 4, 6, and 7. It gets even worse with larger trees: with one million nodes, searching for a non-existing element might require visiting all million, while a balanced tree requires only about 20 visits (log2 of 1,000,000 is roughly 20). That's a huge difference!
We are going to solve this issue in the next post using self-balanced trees (AVL trees).
Big O Notation
Time complexity
Big O notation allows us to talk formally about how the runtime of an algorithm grows as the input grows.
f(n) describes the number of operations the computer has to do. It can be:
f(n) = n
f(n) = n^2
f(n) = 1
f(n) = something entirely different!
O(n):
```typescript
function addUpToSimple(n: number) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += i;
  }
  return total;
}
```
Array: push() and pop() are always faster than unshift() and shift(), because inserting or removing an element at the beginning of an array requires re-indexing all elements.
Common Patterns
```typescript
function binarySearch(sortedArr: number[], value: number): number {
  let min = 0;
  let max = sortedArr.length - 1;
  while (min <= max) {
    let middle = Math.floor((min + max) / 2);
    if (sortedArr[middle] < value) {
      min = middle + 1;
    } else if (sortedArr[middle] > value) {
      max = middle - 1;
    } else {
      return middle;
    }
  }
  return -1;
}
```
indexOf(), includes(), find(), and findIndex() all do a linear search behind the scenes.
O(n)
```typescript
function linearSearch(arr: number[], value: number): number {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === value) {
      return i;
    }
  }
  return -1;
}
```
binary search
O(log n)
```typescript
function binarySearch(sortedArr: number[], value: number): number {
  let left = 0;
  let right = sortedArr.length - 1;
  while (left <= right) {
    const middle = Math.round((right + left) / 2);
    if (sortedArr[middle] > value) {
      right = middle - 1;
    } else if (sortedArr[middle] < value) {
      left = middle + 1;
    } else {
      return middle;
    }
  }
  return -1;
}
```
Sorting Algorithms
array.sort()
array.sort() without a callback converts all values to strings and sorts them by their Unicode code point values.
```typescript
["a", "c", "b", "f", "d"].sort(); // ["a", "b", "c", "d", "f"]
[1, 10, 6, 8, 2, 3, 5].sort(); // [1, 10, 2, 3, 5, 6, 8]

/* sort() also accepts a callback with two arguments:
   a: the first element for comparison
   b: the second element for comparison */
// if the callback returns a NEGATIVE number, a is placed before b
[1, 10, 6, 8, 2, 3, 5].sort((a, b) => a - b); // [1, 2, 3, 5, 6, 8, 10]
// if the callback returns a POSITIVE number, a is placed after b
[1, 10, 6, 8, 2, 3, 5].sort((a, b) => b - a); // [10, 8, 6, 5, 3, 2, 1]
// if the callback returns ZERO, a and b keep their relative order
```
merge sort

```typescript
// merge two sorted arrays
function merge(arr1: number[], arr2: number[]): number[] {
  let result: number[] = [];
  let i = 0;
  let j = 0;
  while (i < arr1.length && j < arr2.length) {
    if (arr1[i] < arr2[j]) {
      result.push(arr1[i]);
      i++;
    } else {
      result.push(arr2[j]);
      j++;
    }
  }
  while (i < arr1.length) {
    result.push(arr1[i]);
    i++;
  }
  while (j < arr2.length) {
    result.push(arr2[j]);
    j++;
  }
  return result;
}

function mergeSort(arr: number[]): number[] {
  if (arr.length <= 1) return arr;
  const middle = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, middle));
  const right = mergeSort(arr.slice(middle));
  return merge(left, right);
}
```
quick sort
In the following implementation, we always use the first item as the pivot.
average: O(n log n); already-sorted input: O(n^2) (the worst case when always picking the first item as the pivot)
```typescript
// place the pivot at its correct index and return that index
function pivot(arr: number[], start = 0, end = arr.length - 1): number {
  const pivotValue = arr[start];
  let pivotIndex = start;
  for (let i = start + 1; i <= end; i++) {
    if (arr[i] < pivotValue) {
      pivotIndex++;
      [arr[pivotIndex], arr[i]] = [arr[i], arr[pivotIndex]];
    }
  }
  [arr[start], arr[pivotIndex]] = [arr[pivotIndex], arr[start]];
  return pivotIndex;
}

function quickSort(arr: number[], start = 0, end = arr.length - 1): number[] {
  if (start < end) {
    const pivotIndex = pivot(arr, start, end);
    // left side
    quickSort(arr, start, pivotIndex - 1);
    // right side
    quickSort(arr, pivotIndex + 1, end);
  }
  return arr;
}
```
radix sort
O(nk), where n is the number of items we are sorting and k is the average number of digits.
```typescript
// get the digit of num at the given place value (i = 0 is the ones place)
function getDigit(num: number, i: number): number {
  return Math.floor(Math.abs(num) / Math.pow(10, i)) % 10;
}

// count the digits of a number
function digitCount(num: number): number {
  if (num === 0) return 1;
  return Math.floor(Math.log10(Math.abs(num))) + 1;
}

// return the digit count of the longest number
function mostDigits(arr: number[]): number {
  let maxDigits = 0;
  for (let i = 0; i < arr.length; i++) {
    maxDigits = Math.max(maxDigits, digitCount(arr[i]));
  }
  return maxDigits;
}

function radixSort(arr: number[]): number[] {
  const maxDigitCount = mostDigits(arr);
  for (let k = 0; k < maxDigitCount; k++) {
    const digitBuckets: number[][] = Array.from({ length: 10 }, () => []);
    for (let j = 0; j < arr.length; j++) {
      digitBuckets[getDigit(arr[j], k)].push(arr[j]);
    }
    arr = ([] as number[]).concat(...digitBuckets);
  }
  return arr;
}
```
```typescript
// implement a stack using an array
const stack = [1, 2, 3];
stack.push(4); // [1, 2, 3, 4]
stack.pop(); // [1, 2, 3]
// a stack only needs push and pop (adding/removing at the end);
// unshift and shift would also work, but they are slower
stack.unshift(0); // [0, 1, 2, 3]
stack.shift(); // [1, 2, 3]
```
```typescript
// implement a queue using an array
const q: number[] = [];
q.push(1);
q.push(2);
q.shift(); // first in, first out
// or enqueue at the front and dequeue from the end:
q.unshift(1);
q.unshift(2);
q.pop(); // first in, first out
```
There are two main strategies to traverse a tree: breadth-first search (BFS) and depth-first search (DFS).
```typescript
class _Node {
  public left: _Node | null = null;
  public right: _Node | null = null;
  constructor(public value: number) {}
}

class BinarySearchTree {
  public root: _Node | null = null;

  public insert(value: number): BinarySearchTree | null {
    const node = new _Node(value);
    if (!this.root) {
      this.root = node;
    } else {
      let currentNode: _Node = this.root;
      do {
        if (value === currentNode.value) return null; // no duplicates allowed
        if (value < currentNode.value) {
          if (currentNode.left) {
            currentNode = currentNode.left;
          } else {
            currentNode.left = node;
            break;
          }
        } else {
          if (currentNode.right) {
            currentNode = currentNode.right;
          } else {
            currentNode.right = node;
            break;
          }
        }
      } while (currentNode);
    }
    return this;
  }

  public have(value: number): boolean {
    let currentNode = this.root;
    while (currentNode) {
      if (value === currentNode.value) return true;
      // go left for smaller values, right for bigger ones
      currentNode = value < currentNode.value ? currentNode.left : currentNode.right;
    }
    return false;
  }

  /* breadth-first search (bfs): traverse the tree horizontally, level by level */
  public bfs(): _Node[] {
    const visited: _Node[] = [];
    if (this.root) {
      const q: _Node[] = [this.root];
      while (q.length) {
        const node = q.shift() as _Node;
        if (node.left) q.push(node.left);
        if (node.right) q.push(node.right);
        visited.push(node);
      }
    }
    return visited;
  }

  /* depth-first search (dfs): traverse the tree vertically.
     The three dfs variants differ only in when the node itself is visited:
     1. preOrder: add node => traverse left => traverse right
     2. postOrder: traverse left => traverse right => add node
     3. inOrder: traverse left => add node => traverse right */
  public dfsPreOrder(): _Node[] {
    const visited: _Node[] = [];
    if (this.root) {
      (function traverse(node: _Node): void {
        visited.push(node);
        if (node.left) traverse(node.left);
        if (node.right) traverse(node.right);
      })(this.root);
    }
    return visited;
  }

  public dfsPostOrder(): _Node[] {
    const visited: _Node[] = [];
    if (this.root) {
      (function traverse(node: _Node): void {
        if (node.left) traverse(node.left);
        if (node.right) traverse(node.right);
        visited.push(node);
      })(this.root);
    }
    return visited;
  }

  public dfsInOrder(): _Node[] {
    const visited: _Node[] = [];
    if (this.root) {
      (function traverse(node: _Node): void {
        if (node.left) traverse(node.left);
        visited.push(node);
        if (node.right) traverse(node.right);
      })(this.root);
    }
    return visited;
  }
}
```
traversal comparison
Depth-first vs breadth-first: their time complexity is the same, but their space complexity differs. If we have a wide tree like this:
breadth-first takes up more space, because we add more elements to the queue.
If we have a deep, narrow tree like this:
depth-first takes up more space, because the call stack grows with the depth.
Potential use cases for the DFS variants (preOrder, postOrder, inOrder): preOrder is useful when we want to clone a tree; inOrder is useful when we want the data in the order it is stored in the tree (sorted, for a BST).
Binary heaps
terminology
A binary heap is as compact as possible: all the children of each node are as full as they can be, and left children are filled first.
each parent has at most two children
Max Binary Heap:
parent nodes are always greater than child nodes, but there are no guarantees between siblings
Min Binary Heap:
child nodes are always greater than parent nodes, but there are no guarantees between siblings
A priority queue is a data structure in which every element has a priority. Elements with higher priorities are served before elements with lower priorities.
In the following example, we implement a priority queue using a min binary heap, but keep in mind that binary heaps and priority queues are two different concepts: the heap is just one way to implement the abstraction.
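A sketch of a min-binary-heap-backed priority queue (array-based, with children at indices 2i+1 and 2i+2; the class and entry shape are assumptions):

```typescript
type Entry = { value: string; priority: number };

class PriorityQueue {
  private heap: Entry[] = [];

  enqueue(value: string, priority: number): void {
    this.heap.push({ value, priority });
    // bubble the new entry up while it beats its parent
    let i = this.heap.length - 1;
    while (i > 0) {
      const parent = Math.floor((i - 1) / 2);
      if (this.heap[i].priority >= this.heap[parent].priority) break;
      [this.heap[i], this.heap[parent]] = [this.heap[parent], this.heap[i]];
      i = parent;
    }
  }

  dequeue(): Entry | undefined {
    if (this.heap.length === 0) return undefined;
    const top = this.heap[0];
    const last = this.heap.pop() as Entry;
    if (this.heap.length > 0) {
      this.heap[0] = last;
      // sink the new root down to restore the heap property
      let i = 0;
      while (true) {
        const left = 2 * i + 1;
        const right = 2 * i + 2;
        let smallest = i;
        if (left < this.heap.length && this.heap[left].priority < this.heap[smallest].priority) smallest = left;
        if (right < this.heap.length && this.heap[right].priority < this.heap[smallest].priority) smallest = right;
        if (smallest === i) break;
        [this.heap[i], this.heap[smallest]] = [this.heap[smallest], this.heap[i]];
        i = smallest;
      }
    }
    return top;
  }
}
```

Both enqueue and dequeue are O(log n), since they only walk one root-to-leaf path.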
There are several ways to handle collisions in hash tables:
Separate chaining (e.g., using nested arrays of key/value pairs, as implemented in the following hash table)
Linear probing (if the index is filled, place the {key, value} pair in the next open position)
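The separate-chaining variant is the one implemented in the hash table below; the linear-probing strategy can be sketched like this (class name and hash function are illustrative, and we assume the table never fills up completely):

```typescript
// linear probing: on collision, scan forward for the next free slot
class ProbingHashTable {
  private slots: ([string, any] | undefined)[];

  constructor(private size: number = 53) {
    this.slots = new Array(size);
  }

  private hash(key: string): number {
    let total = 0;
    for (let i = 0; i < key.length; i++) {
      total = (total + key.charCodeAt(i) * 31) % this.size;
    }
    return total;
  }

  set(key: string, value: any): void {
    let index = this.hash(key);
    // walk forward (wrapping around) until we find an empty slot or the same key
    while (this.slots[index] !== undefined && this.slots[index]![0] !== key) {
      index = (index + 1) % this.size;
    }
    this.slots[index] = [key, value];
  }

  get(key: string): any {
    let index = this.hash(key);
    // probe the same way: stop at the first empty slot
    while (this.slots[index] !== undefined) {
      if (this.slots[index]![0] === key) return this.slots[index]![1];
      index = (index + 1) % this.size;
    }
    return undefined;
  }
}
```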
```typescript
type El = [string, any];

class HashTable {
  private keyMap: El[][];

  constructor(size: number = 53) {
    this.keyMap = new Array(size);
  }

  public _hash(key: string): number {
    let total = 0;
    const WEIRD_PRIME = 31;
    for (let i = 0; i < key.length; i++) {
      const characterCode = key.charCodeAt(i) - 96;
      total = (total + characterCode * WEIRD_PRIME) % this.keyMap.length;
    }
    return total;
  }

  set(key: string, value: any): El[][] {
    const index = this._hash(key);
    if (!this.keyMap[index]) {
      this.keyMap[index] = [];
    }
    this.keyMap[index].push([key, value]);
    return this.keyMap;
  }

  get(key: string): any {
    const index = this._hash(key);
    const elements = this.keyMap[index];
    if (elements) {
      for (let element of elements) {
        if (element[0] === key) return element[1];
      }
    }
    return undefined;
  }

  get keys(): string[] {
    const keys: string[] = [];
    for (let bucket of this.keyMap) {
      if (bucket) {
        for (let entry of bucket) {
          keys.push(entry[0]);
        }
      }
    }
    return keys;
  }

  get values(): any[] {
    const values = new Set<any>();
    for (let bucket of this.keyMap) {
      if (bucket) {
        for (let entry of bucket) {
          values.add(entry[1]);
        }
      }
    }
    return Array.from(values);
  }
}
```
Graphs
A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points), together with a set of unordered pairs of these vertices for an undirected graph, or a set of ordered pairs for a directed graph.
terminology
vertex: a node
edge: a connection between nodes
directed / undirected graph: in a directed graph, edges have a direction; in an undirected graph, they don't.
weighted / unweighted graph: in a weighted graph, each edge has an associated weight; in an unweighted graph, edges have no weights.
adjacency matrix
adjacency list
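A minimal adjacency-list graph sketch (undirected; the class and method names are illustrative):

```typescript
class Graph {
  // each vertex maps to the list of its neighbors
  private adjacencyList: Map<string, string[]> = new Map();

  addVertex(vertex: string): void {
    if (!this.adjacencyList.has(vertex)) this.adjacencyList.set(vertex, []);
  }

  // undirected: record the edge on both endpoints
  addEdge(v1: string, v2: string): void {
    this.adjacencyList.get(v1)?.push(v2);
    this.adjacencyList.get(v2)?.push(v1);
  }

  neighbors(vertex: string): string[] {
    return this.adjacencyList.get(vertex) ?? [];
  }
}
```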
adjacency list vs adjacency matrix
| Operation | Adjacency List | Adjacency Matrix |
| --- | --- | --- |
| Add vertex | O(1) | O(V^2) |
| Add edge | O(1) | O(1) |
| Remove vertex | O(V+E) | O(V^2) |
| Remove edge | O(E) | O(1) |
| Query | O(V+E) | O(1) |
| Storage | O(V+E) | O(V^2) |
|V|: number of vertices
|E|: number of edges
Adjacency lists take less space for sparse graphs (when we have few edges).
Adjacency lists are faster for iterating over all edges.
Adjacency matrices are faster for looking up a specific edge.
Dynamic Programming
It's a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each subproblem just once, and storing its solution. Technically, it uses knowledge from previously solved subproblems to solve the next one, via memoization.
Example: the Fibonacci sequence.
Let's implement it without dynamic programming first:
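A sketch of both versions: the naive recursion recomputes the same subproblems over and over, while the memoized variant (a preview of the dynamic programming approach, with assumed function names) stores each answer once:

```typescript
// naive: recomputes fib(n-1) and fib(n-2) repeatedly -> exponential time
function fibNaive(n: number): number {
  if (n <= 2) return 1;
  return fibNaive(n - 1) + fibNaive(n - 2);
}

// dynamic programming: memoize each subproblem -> O(n)
function fibMemo(n: number, memo: Record<number, number> = {}): number {
  if (n <= 2) return 1;
  if (memo[n] !== undefined) return memo[n];
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
}
```

The memoized version can comfortably compute fib(50), which the naive version cannot do in reasonable time.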