This section tells you a few things you need to know before you get started, such as what you’ll need for hardware and software, where to find the project files for this book, and more.
The chapters in this short but important section explain what’s built into the Kotlin Standard Library and how you use it in building your apps. You’ll learn why one algorithm may be better suited than another. You’ll also learn what the Big-O notation is and how you can continue to answer the question: “Can we do better?”
The Kotlin Standard Library contains essential functionality for the Kotlin language. Inside the Kotlin Standard Library, you’ll find a variety of tools and types to help build your Kotlin apps, including data structures.
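For instance, a small sketch of a few collection types that ship with the standard library (the values here are just placeholders):

```kotlin
fun main() {
    // A read-only list and a mutable map from the Kotlin Standard Library.
    val scores = listOf(82, 97, 64)
    val scoresByName = mutableMapOf("amy" to 82, "raj" to 97)
    scoresByName["li"] = 64

    // ArrayDeque can act as both a stack and a queue.
    val deque = ArrayDeque<Int>()
    deque.addLast(1)
    deque.addLast(2)

    println(deque.removeFirst())   // 1
    println(scores.maxOrNull())    // 97
}
```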
Answering the question, “Does it scale?” is all about understanding the complexity of an algorithm. The Big-O notation is the primary tool that you’ll use to think about algorithmic performance in the abstract, independent of hardware or language. This chapter will prepare you to think in these terms.
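As a rough illustration of what that abstraction means, compare two operations on a list of n elements; the function names below are purely illustrative:

```kotlin
// O(1): reading the first element doesn't depend on the list's size.
fun firstElement(list: List<Int>): Int? = list.firstOrNull()

// O(n): in the worst case, every one of the n elements must be inspected.
fun containsValue(list: List<Int>, value: Int): Boolean {
    for (element in list) {          // runs up to n times
        if (element == value) return true
    }
    return false
}
```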
This section looks at a few important data structures that form the basis of more advanced algorithms covered in future sections.
A linked list is a collection of values arranged in a linear, unidirectional sequence.
A linked list has several theoretical advantages over contiguous storage options such as the array, including constant time insertion and removal from the front of the list, and other reliable performance characteristics.
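A minimal sketch of how such a node-based list might look, assuming a simple node class and a head-insertion `push`; this is illustrative rather than the book's implementation:

```kotlin
class Node<T>(var value: T, var next: Node<T>? = null)

class LinkedList<T> {
    private var head: Node<T>? = null

    // O(1): the new node simply points at the old head.
    fun push(value: T) {
        head = Node(value, next = head)
    }

    override fun toString(): String = buildString {
        var current = head
        while (current != null) {
            append(current.value).append(" -> ")
            current = current.next
        }
        append("null")
    }
}
```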
The stack data structure is identical in concept to a physical stack of objects.
When you add an item to a stack, you place it on top of the stack.
When you remove an item from a stack, you always remove the topmost item.
Stacks are useful, and also exceedingly simple. The main goal of building a stack is to enforce how you access your data.
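A minimal sketch of that access pattern, assuming a stack backed by a `MutableList`; the names are placeholders:

```kotlin
class Stack<T> {
    private val storage = mutableListOf<T>()

    fun push(element: T) {            // place the element on top
        storage.add(element)
    }

    fun pop(): T? = storage.removeLastOrNull()   // remove the topmost item
    fun peek(): T? = storage.lastOrNull()        // look at the top without removing it
    fun isEmpty(): Boolean = storage.isEmpty()
}
```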
Lines are everywhere, whether you’re lining up to buy tickets to your favorite movie or waiting for a printer to finish printing your documents.
These real-life scenarios mimic the queue data structure.
Queues use first in, first out ordering.
In other words, the first element that was enqueued will be the first to get dequeued.
Queues are handy when you need to maintain the order of your elements to process later.
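A small sketch of that FIFO behavior, here using the standard library's `ArrayDeque` as a stand-in for the queue you'll build:

```kotlin
fun main() {
    val line = ArrayDeque<String>()
    line.addLast("Alice")   // enqueue
    line.addLast("Bob")
    line.addLast("Carol")

    println(line.removeFirst()) // dequeue -> Alice, the first element enqueued
    println(line.first())       // Bob is now at the front
}
```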
Trees are another way to organize information, introducing the concept of children and parents. You’ll look at the most common tree types and see how they can be used to solve specific computational problems.
Trees are a useful way to organize information when performance is critical. Adding them to your toolbelt will undoubtedly prove to be useful throughout your career.
The tree is a data structure of profound importance.
It’s used to tackle many recurring challenges in software development such as representing hierarchical relationships, managing sorted data and facilitating fast lookup operations.
There are many types of trees, and they come in various shapes and sizes.
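A minimal sketch of a general tree node, where each node may hold any number of children; the class and function names are illustrative only:

```kotlin
class TreeNode<T>(val value: T) {
    private val children = mutableListOf<TreeNode<T>>()

    fun add(child: TreeNode<T>) {
        children.add(child)
    }

    // Visit this node, then each of its subtrees, depth first.
    fun forEachDepthFirst(visit: (TreeNode<T>) -> Unit) {
        visit(this)
        children.forEach { it.forEachDepthFirst(visit) }
    }
}
```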
In the previous chapter, you looked at a basic tree where each node can have many children.
A binary tree is a tree where each node has at most two children, often referred to as the left and right children.
Binary trees serve as the basis for many tree structures and algorithms.
In this chapter, you’ll build a binary tree and learn about the three most important tree traversal algorithms.
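As a taste of what's ahead, here is a sketch of a binary node with an in-order traversal, one of the three classic traversals (in-order, pre-order and post-order); the names are placeholders, not the book's implementation:

```kotlin
class BinaryNode<T>(
    var value: T,
    var left: BinaryNode<T>? = null,
    var right: BinaryNode<T>? = null
) {
    // In-order: left subtree, then this node, then right subtree.
    fun traverseInOrder(visit: (T) -> Unit) {
        left?.traverseInOrder(visit)
        visit(value)
        right?.traverseInOrder(visit)
    }
}
```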
A binary search tree facilitates fast lookup, addition, and removal operations.
Each operation has an average time complexity of O(log n), which is considerably faster than linear data structures such as arrays and linked lists.
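A rough sketch of insertion and lookup, assuming a simple node class and no balancing; the names are illustrative:

```kotlin
class BstNode(var value: Int, var left: BstNode? = null, var right: BstNode? = null)

fun insert(node: BstNode?, value: Int): BstNode {
    if (node == null) return BstNode(value)                 // empty spot found
    if (value < node.value) node.left = insert(node.left, value)
    else node.right = insert(node.right, value)
    return node
}

fun contains(node: BstNode?, value: Int): Boolean {
    var current = node
    while (current != null) {
        if (value == current.value) return true
        current = if (value < current.value) current.left else current.right
    }
    return false
}
```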
In the previous chapter, you learned about the O(log n) performance characteristics of the binary search tree. However, you also learned that unbalanced trees can deteriorate the performance of the tree, all the way down to O(n). In 1962, Georgy Adelson-Velsky and Evgenii Landis came up with the first self-balancing binary search tree: the AVL Tree.
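As a hint of how balancing works, here is a hedged sketch of a left rotation, the basic move applied when a right subtree grows too tall; the node type and height bookkeeping are illustrative, not the book's implementation:

```kotlin
class AvlNode(var value: Int) {
    var left: AvlNode? = null
    var right: AvlNode? = null
    var height = 0
}

fun height(node: AvlNode?): Int = node?.height ?: -1

// Rotate the subtree rooted at `node` to the left; the caller guarantees
// that a right child exists. Returns the subtree's new root.
fun leftRotate(node: AvlNode): AvlNode {
    val pivot = node.right!!
    node.right = pivot.left        // the pivot's left subtree moves across
    pivot.left = node              // the old root becomes the pivot's left child
    node.height = maxOf(height(node.left), height(node.right)) + 1
    pivot.height = maxOf(height(pivot.left), height(pivot.right)) + 1
    return pivot
}
```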
The trie (pronounced as “try”) is a tree that specializes in storing data that can be represented as a collection, such as English words.
The benefits of a trie are best illustrated by looking at it in the context of prefix matching, which is what you’ll do in this chapter.
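A minimal trie sketch supporting insertion and a prefix check; the class and property names are placeholders:

```kotlin
class TrieNode {
    val children = mutableMapOf<Char, TrieNode>()
    var isTerminating = false    // marks the end of a complete word
}

class Trie {
    private val root = TrieNode()

    fun insert(word: String) {
        var current = root
        for (char in word) {
            current = current.children.getOrPut(char) { TrieNode() }
        }
        current.isTerminating = true
    }

    fun startsWith(prefix: String): Boolean {
        var current = root
        for (char in prefix) {
            current = current.children[char] ?: return false
        }
        return true
    }
}
```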
Binary search is one of the most efficient searching algorithms, with a time complexity of O(log n).
This is comparable with searching for an element inside a balanced binary search tree.
To perform a binary search, the collection must be able to perform index manipulation in constant time and must be sorted.
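A rough sketch of the algorithm over a sorted `List<Int>`; the function name is illustrative:

```kotlin
fun binarySearch(list: List<Int>, value: Int): Int? {
    var low = 0
    var high = list.size - 1
    while (low <= high) {
        val middle = low + (high - low) / 2   // avoids overflow for large indices
        when {
            list[middle] == value -> return middle
            list[middle] < value -> low = middle + 1
            else -> high = middle - 1
        }
    }
    return null // not found
}
```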
A heap is a complete binary tree, also known as a binary heap, that can be constructed using an array. Heaps come in two flavors: max-heaps and min-heaps. Have you ever seen the movie Toy Story, with the claw machine and the squeaky little green aliens? Imagine that the claw machine is operating on your heap structure and will always pick the minimum or maximum value depending on the flavor of heap.
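Because a binary heap is complete, it maps cleanly onto an array, and parent/child relationships become simple index arithmetic; a small sketch of that mapping:

```kotlin
fun leftChildIndex(index: Int) = 2 * index + 1
fun rightChildIndex(index: Int) = 2 * index + 2
fun parentIndex(index: Int) = (index - 1) / 2

// In a max-heap every element is >= its children, so index 0 holds the maximum.
// In a min-heap the comparison flips, and index 0 holds the minimum.
```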
Queues are lists that maintain the order of elements using first in, first out (FIFO) ordering.
A priority queue is another version of a queue that, instead of using FIFO ordering, dequeues elements in priority order. A priority queue is especially useful when you need to identify the maximum or minimum value given a list of elements.
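On Kotlin/JVM you can already observe this behavior with `java.util.PriorityQueue`, which by default dequeues the smallest element first; a brief sketch:

```kotlin
import java.util.PriorityQueue

fun main() {
    // A min-priority queue: poll() returns the smallest remaining element.
    val queue = PriorityQueue<Int>()
    queue.addAll(listOf(5, 1, 9, 3))
    println(queue.poll()) // 1
    println(queue.poll()) // 3

    // A reversed comparator turns it into a max-priority queue.
    val maxQueue = PriorityQueue<Int>(compareByDescending<Int> { it })
    maxQueue.addAll(listOf(5, 1, 9, 3))
    println(maxQueue.poll()) // 9
}
```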
Putting lists in order is a classical computational problem. Sorting has been studied since the days of vacuum tubes and perhaps even before that. Although you may never need to write your own sorting algorithm — thanks to the highly optimized standard library — studying sorting has many benefits. You’ll be introduced, for example, to the all-important technique of divide-and-conquer, stability, and best- and worst-case timing.
Studying sorting may seem a bit academic and disconnected from the real world of app development, but understanding the tradeoffs in these simple cases will give you a better understanding of how to analyze any algorithm.
Algorithms with O(n²) time complexity don’t offer great performance, but the sorting algorithms in this category are easy to understand and useful in some scenarios.
These algorithms are also space-efficient, requiring only constant O(1) additional memory.
In this chapter, you’ll look at the bubble sort, selection sort and insertion sort algorithms.
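As a preview, here is a sketch of bubble sort, the first of the three; the function name is a placeholder:

```kotlin
// Repeatedly swap adjacent out-of-order elements until a full pass makes no swaps.
fun bubbleSort(list: MutableList<Int>) {
    if (list.size < 2) return
    for (end in list.lastIndex downTo 1) {
        var swapped = false
        for (current in 0 until end) {
            if (list[current] > list[current + 1]) {
                // Swap the adjacent pair.
                val temp = list[current]
                list[current] = list[current + 1]
                list[current + 1] = temp
                swapped = true
            }
        }
        if (!swapped) return   // already sorted; stop early
    }
}
```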
In this chapter, you’ll study one of the most important sorting algorithms, based on the divide-and-conquer principle.
You’ll learn how to split a list, sort the halves recursively and then merge them back together.
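That split-sort-merge approach is merge sort; a rough sketch over a `List<Int>`, with placeholder helper names:

```kotlin
fun mergeSort(list: List<Int>): List<Int> {
    if (list.size < 2) return list
    val middle = list.size / 2
    val left = mergeSort(list.subList(0, middle))
    val right = mergeSort(list.subList(middle, list.size))
    return merge(left, right)
}

private fun merge(left: List<Int>, right: List<Int>): List<Int> {
    var leftIndex = 0
    var rightIndex = 0
    val result = mutableListOf<Int>()
    // Take the smaller head element from either half until one runs out.
    while (leftIndex < left.size && rightIndex < right.size) {
        if (left[leftIndex] <= right[rightIndex]) {
            result.add(left[leftIndex])
            leftIndex++
        } else {
            result.add(right[rightIndex])
            rightIndex++
        }
    }
    // Append whatever remains in the non-empty half.
    result.addAll(left.subList(leftIndex, left.size))
    result.addAll(right.subList(rightIndex, right.size))
    return result
}
```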
So far, you’ve been relying on comparisons to determine the sorting order.
In this chapter, you’ll look at a completely different model of sorting.
Radix sort is a non-comparative algorithm for sorting integers in linear time.
There are multiple implementations of radix sort that focus on different problems.
To keep things simple, you’ll focus on sorting base 10 integers while investigating the least significant digit (LSD) variant of radix sort.
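A rough sketch of the LSD variant for non-negative base-10 integers; the names are placeholders:

```kotlin
// Bucket the numbers by each digit, from the ones place upward, until no
// number has any higher digits left.
fun radixSort(list: MutableList<Int>) {
    val base = 10
    var done = false
    var digitPlace = 1
    while (!done) {
        done = true
        val buckets = List(base) { mutableListOf<Int>() }
        for (number in list) {
            val remaining = number / digitPlace
            val digit = remaining % base
            buckets[digit].add(number)
            if (remaining > 0) done = false   // a higher digit still exists
        }
        digitPlace *= base
        list.clear()
        list.addAll(buckets.flatten())
    }
}
```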
Heap sort is another comparison-based algorithm that sorts an array in ascending order using a heap. This chapter builds on the heap concepts presented in Chapter 12, “The Heap Data Structure.” Heap sort takes advantage of a heap being, by definition, a partially sorted binary tree.
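A rough sketch of the idea on an `IntArray`: build a max-heap in place, then repeatedly move the maximum into its final position; the helper names are illustrative:

```kotlin
fun heapSort(array: IntArray) {
    // Build the max-heap by sifting down every non-leaf node.
    for (index in array.size / 2 - 1 downTo 0) {
        siftDown(array, index, array.size)
    }
    // Swap the max (index 0) into its final slot, then restore the heap.
    for (end in array.lastIndex downTo 1) {
        swap(array, 0, end)
        siftDown(array, 0, end)
    }
}

private fun siftDown(array: IntArray, start: Int, size: Int) {
    var parent = start
    while (true) {
        val left = 2 * parent + 1
        val right = 2 * parent + 2
        var candidate = parent
        if (left < size && array[left] > array[candidate]) candidate = left
        if (right < size && array[right] > array[candidate]) candidate = right
        if (candidate == parent) return      // heap property restored
        swap(array, parent, candidate)
        parent = candidate
    }
}

private fun swap(array: IntArray, i: Int, j: Int) {
    val temp = array[i]
    array[i] = array[j]
    array[j] = temp
}
```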
Quicksort is another divide-and-conquer technique that introduces the concept of partitions and a pivot to implement high-performance sorting. You’ll see that while it’s extremely fast for some datasets, for others, it can be a bit slow.
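A rough sketch using the Lomuto partition scheme, one common way to partition around a pivot; the names are placeholders:

```kotlin
fun quicksort(array: IntArray, low: Int = 0, high: Int = array.lastIndex) {
    if (low >= high) return
    val pivotIndex = partition(array, low, high)
    quicksort(array, low, pivotIndex - 1)
    quicksort(array, pivotIndex + 1, high)
}

private fun partition(array: IntArray, low: Int, high: Int): Int {
    val pivot = array[high]       // last element serves as the pivot
    var boundary = low            // first index of the "greater than pivot" region
    for (current in low until high) {
        if (array[current] <= pivot) {
            array.swapAt(current, boundary)
            boundary++
        }
    }
    array.swapAt(boundary, high)  // move the pivot between the two regions
    return boundary
}

private fun IntArray.swapAt(i: Int, j: Int) {
    val temp = this[i]
    this[i] = this[j]
    this[j] = temp
}
```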
Graphs are an extremely useful data structure that can be used to model a wide range of things: webpages on the internet, the migration patterns of birds, protons in the nucleus of an atom. This section gets you thinking deeply (and broadly) about how to use graphs and graph algorithms to solve real-world problems. The chapters that follow will give the foundation you need to understand graph data structures. Like previous sections, every other chapter will serve as a Challenge chapter so you can practice what you’ve learned.
What do social networks have in common with booking cheap flights around the world? You can represent both of these real-world models as graphs.
A graph is a data structure that captures relationships between objects.
It’s made up of vertices connected by edges.
In a weighted graph, every edge has a weight associated with it that represents the cost of using this edge.
This lets you choose the cheapest or shortest path between two vertices.
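A rough sketch of a directed, weighted graph stored as an adjacency list; the types and names are illustrative rather than the book's API:

```kotlin
data class Edge(val destination: String, val weight: Double)

class AdjacencyList {
    private val adjacencies = mutableMapOf<String, MutableList<Edge>>()

    fun addVertex(name: String) {
        adjacencies.getOrPut(name) { mutableListOf() }
    }

    fun addEdge(source: String, destination: String, weight: Double) {
        addVertex(source)
        addVertex(destination)
        adjacencies.getValue(source).add(Edge(destination, weight))
    }

    fun edges(source: String): List<Edge> = adjacencies[source].orEmpty()
}
```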
In the previous chapter, you explored how graphs can be used to capture relationships between objects. Several algorithms exist to traverse or search through a graph’s vertices.
One such algorithm is the breadth-first search algorithm, which you can use to solve a wide variety of problems, including generating a minimum spanning tree, finding potential paths between vertices and finding the shortest path between two vertices.
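A rough sketch of the traversal over a plain adjacency map (vertex to neighbors); the function name is a placeholder:

```kotlin
fun breadthFirstSearch(graph: Map<String, List<String>>, source: String): List<String> {
    val visited = mutableListOf<String>()
    val enqueued = mutableSetOf(source)
    val queue = ArrayDeque(listOf(source))

    while (queue.isNotEmpty()) {
        val vertex = queue.removeFirst()
        visited.add(vertex)
        for (neighbor in graph[vertex].orEmpty()) {
            if (enqueued.add(neighbor)) {    // add() returns false if already seen
                queue.addLast(neighbor)
            }
        }
    }
    return visited   // vertices in the order they were visited, level by level
}
```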
In the previous chapter, you looked at breadth-first search, where you had to explore every neighbor of a vertex before going to the next level. In this chapter, you’ll look at depth-first search, which has applications for topological sorting, detecting cycles, pathfinding in maze puzzles and finding connected components in a sparse graph.
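A rough recursive sketch over the same adjacency-map shape, exploring as far as possible along each branch before backtracking; the names are illustrative:

```kotlin
fun depthFirstSearch(graph: Map<String, List<String>>, source: String): List<String> {
    val visited = mutableListOf<String>()
    val seen = mutableSetOf<String>()

    fun visit(vertex: String) {
        if (!seen.add(vertex)) return       // already explored
        visited.add(vertex)
        graph[vertex].orEmpty().forEach { visit(it) }
    }

    visit(source)
    return visited
}
```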
Have you ever used the Google or Apple Maps app to find the shortest or fastest route from one place to another? Dijkstra’s algorithm is particularly useful in GPS networks to help find the shortest path between two places. Dijkstra’s algorithm is a greedy algorithm: it constructs a solution step by step, picking the locally optimal path at every step.
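A rough sketch of the idea over a weighted adjacency map, using `java.util.PriorityQueue` to always settle the closest unvisited vertex next; the names are placeholders:

```kotlin
import java.util.PriorityQueue

// Returns the shortest known distance from `start` to every reachable vertex.
fun dijkstra(
    graph: Map<String, List<Pair<String, Int>>>,
    start: String
): Map<String, Int> {
    val distances = mutableMapOf(start to 0)
    val queue = PriorityQueue<Pair<String, Int>>(compareBy<Pair<String, Int>> { it.second })
    queue.add(start to 0)

    while (queue.isNotEmpty()) {
        val (vertex, distance) = queue.poll()
        if (distance > distances.getValue(vertex)) continue   // stale queue entry
        for ((neighbor, weight) in graph[vertex].orEmpty()) {
            val candidate = distance + weight
            if (candidate < (distances[neighbor] ?: Int.MAX_VALUE)) {
                distances[neighbor] = candidate
                queue.add(neighbor to candidate)
            }
        }
    }
    return distances
}
```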
In previous chapters, you looked at depth-first and breadth-first search algorithms.
These algorithms form spanning trees. In this chapter, you’ll look at Prim’s algorithm, a greedy algorithm used to construct a minimum spanning tree.
A minimum spanning tree is a spanning tree with weighted edges where the total weight of all edges is minimized. You’ll learn how to implement a greedy algorithm that constructs a solution step by step, picking the locally optimal edge at every step.
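A rough sketch of that greedy edge selection over an undirected weighted adjacency map, assuming a connected graph; the names are placeholders, not the book's implementation:

```kotlin
import java.util.PriorityQueue

// Returns the chosen MST edges as (from, to, weight) triples.
fun primsMst(
    graph: Map<String, List<Pair<String, Int>>>,
    start: String
): List<Triple<String, String, Int>> {
    val visited = mutableSetOf(start)
    val mstEdges = mutableListOf<Triple<String, String, Int>>()
    val queue = PriorityQueue(compareBy<Triple<String, String, Int>> { it.third })
    graph[start].orEmpty().forEach { (to, weight) -> queue.add(Triple(start, to, weight)) }

    while (queue.isNotEmpty()) {
        val (from, to, weight) = queue.poll()
        if (to in visited) continue                 // would create a cycle
        visited.add(to)
        mstEdges.add(Triple(from, to, weight))
        graph[to].orEmpty().forEach { (next, w) ->
            if (next !in visited) queue.add(Triple(to, next, w))
        }
    }
    return mstEdges
}
```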