15 Common Mistakes When Studying Algorithms (And How to Fix Them) | LearnByTeaching.ai
Algorithms is one of the most intellectually demanding CS courses because it requires both mathematical rigor and creative problem-solving. Students often struggle because they try to pattern-match solutions rather than building the reasoning skills to derive them from first principles.
Memorizing solutions instead of understanding techniques
Students study specific problem solutions rather than internalizing the underlying algorithmic technique. When an exam presents a novel problem that uses the same technique, they can't adapt.
A student memorizes the exact DP table for the 0/1 knapsack problem but can't solve a variant where items have dependencies, because they never understood how to define DP states and transitions from scratch.
How to fix it
For each algorithm or technique, practice defining the subproblem, recurrence, and base case for at least five different problems. Focus on the process of deriving the solution, not the final answer. If you can explain why the recurrence is correct, you understand the technique.
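The three-part derivation can be made concrete with the 0/1 knapsack mentioned above. This is a minimal sketch (the function name and the space-optimized one-row table are illustrative choices, not the only correct formulation):

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack via the standard DP derivation.

    Subproblem: best value using the first i items with capacity c.
    Recurrence: either skip item i, or take it (if it fits) and add its value.
    Base case: with no items, the best value is 0.
    """
    # dp[c] = best value achievable with capacity c (one row, reused per item).
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacity downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

Notice that the code is a direct transcription of the subproblem, recurrence, and base case; if you can state those three parts, the implementation follows.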
Incorrect Big-O analysis by confusing worst, average, and best case
Students conflate Big-O (an upper bound) with average-case or expected complexity. They also forget that, by convention, an unqualified complexity claim refers to the worst case, which leads to incorrect performance statements.
A student claims quicksort is O(n log n) without qualification, ignoring that its worst case is O(n^2). On an exam asking for worst-case complexity, they lose points for not specifying the quadratic bound.
How to fix it
Always specify which case you're analyzing: worst, average, or best. Know the distinction between O (upper bound), Omega (lower bound), and Theta (tight bound). When a problem asks for Big-O, it's asking for the tightest upper bound you can prove.
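The quicksort example above can be checked directly. This sketch (names are illustrative) uses a naive first-element pivot and counts comparisons, exposing the quadratic worst case on already-sorted input:

```python
def quicksort_comparisons(a):
    """Naive quicksort with first-element pivot; returns the comparison count."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    less = [x for x in a[1:] if x < pivot]
    geq = [x for x in a[1:] if x >= pivot]
    # n-1 comparisons to partition, plus the two recursive calls.
    return (len(a) - 1) + quicksort_comparisons(less) + quicksort_comparisons(geq)

# On sorted input of size n, the partition is maximally unbalanced, so the
# total is (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons: the O(n^2) case.
```

On random input the same code does roughly n log n comparisons, which is why the unqualified "quicksort is O(n log n)" claim loses points.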
Failing to prove greedy algorithm correctness
Students assume that if a greedy approach produces correct output on a few test cases, it must be correct. Greedy algorithms require a formal proof (exchange argument or greedy stays ahead) to ensure correctness.
A student proposes a greedy algorithm for a scheduling problem that works on the provided examples but fails on edge cases. Without an exchange argument proving optimality, the algorithm is unreliable.
How to fix it
For every greedy algorithm, write a formal correctness proof. The exchange argument shows that replacing an element of an optimal solution with the corresponding greedy choice never worsens the result. If you can't prove correctness, the greedy approach may be wrong — consider DP instead.
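As a concrete instance, here is a sketch of the classic activity-selection greedy (function name is illustrative), whose correctness rests on exactly this exchange argument:

```python
def activity_selection(intervals):
    """Select a maximum set of non-overlapping (start, finish) intervals.

    Greedy choice: always take the compatible activity that finishes earliest.
    Exchange argument: in any optimal solution, the first activity can be
    swapped for the earliest-finishing one without reducing the count, so
    the greedy choice is always safe.
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

The proof, not the test cases, is what justifies the sort-by-finish-time choice; sorting by start time or by duration fails on adversarial inputs.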
Misapplying the Master Theorem
Students try to apply the Master Theorem to recurrences that don't fit its form, or they misidentify the values of a, b, and f(n). The theorem only applies to recurrences of the form T(n) = aT(n/b) + f(n).
A student applies the Master Theorem to T(n) = T(n-1) + n, which is not a divide-and-conquer recurrence (it's T(n-1), not T(n/b)). The Master Theorem does not apply here; this requires the recursion tree or substitution method.
How to fix it
Before using the Master Theorem, verify the recurrence is in the correct form: T(n) = aT(n/b) + f(n) where a >= 1 and b > 1. If it's not, use the recursion tree method or substitution. Keep the three cases written on a reference card until you've internalized them.
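Assuming a polynomial driving function f(n) = n^c (a common special case; the regularity condition and log factors in f are omitted for simplicity), the three-case check can be sketched as:

```python
import math

def master_theorem(a, b, c):
    """Classify T(n) = a*T(n/b) + n^c, assuming a >= 1, b > 1, polynomial f.

    Compares c against the critical exponent log_b(a) and returns the
    asymptotic solution as a string.
    """
    crit = math.log(a, b)  # critical exponent log_b(a)
    if c < crit:
        return f"Theta(n^{crit:g})"     # case 1: the recursion dominates
    if c == crit:
        return f"Theta(n^{c:g} log n)"  # case 2: balanced
    return f"Theta(n^{c:g})"            # case 3: the driving function dominates
```

Merge sort (a=2, b=2, c=1) lands in case 2; T(n) = T(n-1) + n has no valid a, b at all, which is the point: check the form before reaching for the theorem.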
Not recognizing overlapping subproblems in DP
Students can't distinguish problems that require dynamic programming from those solvable by divide-and-conquer or greedy. The key indicator for DP is overlapping subproblems combined with optimal substructure.
A student tries to solve the longest common subsequence problem with a greedy approach, choosing the earliest matching character at each step. This fails because greedy doesn't account for downstream consequences of early character choices.
How to fix it
Ask two questions: (1) Can the optimal solution be built from optimal solutions to smaller subproblems? (optimal substructure) (2) Are the same subproblems solved multiple times? (overlapping subproblems) If both are yes, use DP. Draw the recursion tree — if you see repeated nodes, that's your signal.
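Drawing the recursion tree can even be automated. This sketch (names are illustrative) counts how often naive Fibonacci recursion solves each subproblem; the repeated counts are exactly the overlap that memoization removes:

```python
from collections import Counter

def count_subproblem_calls(n, counter=None):
    """Run the naive Fibonacci recursion, tallying calls per subproblem."""
    if counter is None:
        counter = Counter()
    counter[n] += 1
    if n >= 2:
        count_subproblem_calls(n - 1, counter)
        count_subproblem_calls(n - 2, counter)
    return counter

calls = count_subproblem_calls(10)
# Small subproblems are recomputed dozens of times: repeated nodes in the
# recursion tree, i.e. overlapping subproblems -- the signal to use DP.
```

If every node in the recursion tree were distinct, plain divide-and-conquer would suffice; it is the repetition that makes caching pay off.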
Choosing BFS vs DFS vs Dijkstra incorrectly for graph problems
Students don't have a clear decision framework for when to use which graph traversal algorithm. They apply BFS to weighted shortest paths or Dijkstra to unweighted graphs, wasting time or getting wrong results.
A student uses BFS to find the shortest path in a weighted graph, treating all edge weights as 1. The path BFS finds has the fewest edges but not the minimum total weight.
How to fix it
Use this decision tree: unweighted shortest path → BFS; non-negative weighted shortest path → Dijkstra; negative weights possible → Bellman-Ford; all-pairs shortest path → Floyd-Warshall. For exploring or detecting cycles, either BFS or DFS works, but DFS is usually simpler.
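A small sketch makes the BFS-vs-Dijkstra line of the decision tree concrete (the graph and function names are illustrative). On the weighted graph below, BFS finds the fewest-edge path while Dijkstra finds the minimum-weight one, and they disagree:

```python
import heapq
from collections import deque

def bfs_dist(graph, src):
    """Fewest-edge distances; a shortest-path algorithm only for unweighted graphs."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v, _w in graph[u]:  # BFS ignores weights entirely
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def dijkstra_dist(graph, src):
    """Minimum-weight distances; requires non-negative edge weights."""
    dist = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in dist:
            continue  # already finalized with a smaller distance
        dist[u] = d
        for v, w in graph[u]:
            if v not in dist:
                heapq.heappush(pq, (d + w, v))
    return dist

# Direct edge A->B costs 10, but the two-edge route A->C->B costs only 2.
graph = {"A": [("B", 10), ("C", 1)], "B": [], "C": [("B", 1)]}
```

BFS reports B at distance 1 (one edge), while Dijkstra reports weight 2 via C; picking the wrong tool silently gives the wrong answer.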
Writing pseudocode that is too vague to analyze
Students describe algorithms in English prose rather than structured pseudocode. This makes it impossible to analyze correctness or runtime precisely.
A student writes 'sort the elements and pick the best ones' instead of specifying the sorting algorithm, the selection criterion, and the iteration structure, making complexity analysis impossible.
How to fix it
Write pseudocode with clear loop structures, base cases, and return values. Every operation should have an identifiable cost. Use a consistent format: 'for each element x in sorted(A)' is analyzable; 'process the elements' is not.
Ignoring edge cases in algorithm design
Students design algorithms that work for typical inputs but fail on empty arrays, single-element inputs, duplicate elements, or disconnected graphs.
A student's binary search implementation returns an incorrect index when the target is not in the array, because they never handled the case where low > high.
How to fix it
Always test your algorithm on these inputs: empty input, single element, all identical elements, sorted/reverse-sorted input, and the maximum size input. For graph algorithms, test on disconnected graphs, self-loops, and graphs with a single node.
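As a sketch, a binary search that explicitly handles the low > high exit passes every case on that checklist (the function name and -1 sentinel are illustrative conventions):

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # loop exited with low > high: target is not present
```

The empty-array and missing-target cases both reach the final return; an implementation without it returns garbage on exactly those inputs.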
Confusing memoization with tabulation
Students treat top-down memoization and bottom-up tabulation as identical. While they solve the same problems, they differ in space usage, call stack depth, and which subproblems they compute.
A student implements a recursive Fibonacci with memoization for a problem with n = 10^6, causing a stack overflow. Bottom-up tabulation would solve this iteratively without stack depth issues.
How to fix it
Understand the tradeoffs: memoization is easier to write and only computes needed subproblems, but uses call stack space. Tabulation avoids stack overflow and may enable space optimization (keeping only the last row). Choose based on the problem constraints.
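Fibonacci shows the tradeoff in miniature (a sketch; names are illustrative): the memoized version recurses to depth n and risks stack overflow for large n, while the tabulated version runs iteratively and, keeping only the last two values, uses O(1) space:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down: easy to write, but recursion depth grows with n."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    """Bottom-up: iterative, no stack risk, and only the last row is kept."""
    prev, cur = 0, 1
    for _ in range(n):
        prev, cur = cur, prev + cur
    return prev
```

For the n = 10^6 scenario above, fib_memo hits the recursion limit while fib_tab runs fine: same subproblems, different execution strategy.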
Not practicing under time pressure
Students solve algorithm problems in unlimited time but never practice timed problem-solving. Exams and interviews impose strict time limits that require rapid technique identification.
A student can solve medium-difficulty LeetCode problems in 45 minutes but is given 20 minutes per problem on their exam. They know the technique but can't identify and execute it fast enough.
How to fix it
Practice with a timer. Start with generous limits and reduce them as you improve. For interview prep, aim to solve medium problems in 20-25 minutes. For exam prep, do past exams under actual time constraints. Speed comes from pattern recognition, which comes from volume.
Skipping proof of correctness for loop invariants
Students skip proving that their iterative algorithms are correct because loop invariants feel tedious. But this is exactly what exams test and what separates working code from provably correct algorithms.
A student writes an insertion sort implementation and claims it's correct because 'it works on examples' but can't state the loop invariant: at the start of each iteration i, the subarray A[0..i-1] is sorted.
How to fix it
Practice stating loop invariants in three parts: initialization (true before the first iteration), maintenance (if true before an iteration, it's true after), and termination (when the loop ends, the invariant plus the termination condition imply correctness). This structure makes proofs systematic.
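The three-part structure can even be checked mechanically. This insertion-sort sketch asserts the invariant at the top of every iteration (the assertion is an illustrative check, not part of the algorithm):

```python
def insertion_sort(a):
    """Insertion sort with its loop invariant asserted each iteration."""
    a = list(a)
    for i in range(1, len(a)):
        # Invariant: a[0..i-1] is sorted at the start of each iteration.
        # Initialization: for i = 1, a[0..0] is trivially sorted.
        assert all(a[j] <= a[j + 1] for j in range(i - 1))
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements right
            j -= 1
        a[j + 1] = key  # maintenance: a[0..i] is now sorted
    # Termination: i has run through len(a)-1, so the whole array is sorted.
    return a
```

Writing the invariant as an executable assertion is good practice for exams: if you can state it precisely enough to assert, the proof is nearly written.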
Studying algorithms without implementing them
Reading about algorithms creates an illusion of understanding. Implementation reveals hidden complexity: off-by-one errors, pointer management, and edge cases that pseudocode glosses over.
A student understands merge sort conceptually but can't implement the merge step correctly, consistently producing arrays with duplicated or missing elements due to incorrect index management.
How to fix it
Implement every algorithm you study in your language of choice. Start from pseudocode, not from someone else's code. Debug until it passes all edge cases. The gap between understanding and implementing is where real learning happens.
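For reference, here is one careful way to write the merge step that the example above trips on (a sketch, not the only correct structure):

```python
def merge_sort(a):
    """Merge sort; the merge step is where index bugs usually hide."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps equal elements stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One side is exhausted; append the other's remainder so no element
    # is dropped or duplicated.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The duplicated-or-missing-elements bug almost always lives in the leftover handling after the main loop, which pseudocode often waves away as "merge the rest".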
Forgetting amortized analysis for data structure operations
Students analyze each operation individually and conclude that a data structure is slow, when amortized analysis shows the average cost per operation is much lower.
A student says dynamic array insertion is O(n) because resizing copies all elements, ignoring that a doubling array resizes only when capacity is exhausted, so each expensive copy is preceded by many cheap insertions. The amortized cost is O(1) per insertion.
How to fix it
Learn the three amortized analysis methods: aggregate (total cost divided by operations), accounting (assign charges that prepay for expensive operations), and potential (track a potential function). Apply amortized analysis whenever a costly operation occurs infrequently.
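The dynamic-array claim can be verified by simulation, which is effectively the aggregate method (a sketch; the capacity-doubling policy is assumed):

```python
def simulate_appends(n):
    """Append n items into a doubling array; return total element copies."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size   # resize: copy every existing element
            capacity *= 2
        size += 1
    return copies

# Total copies for n appends is 1 + 2 + 4 + ... < 2n, so the amortized
# cost per append is O(1) even though individual resizes cost O(n).
```

Counting total work across a sequence of operations, rather than the worst single operation, is the whole idea of amortized analysis.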
Failing to reduce problems to known problems
When encountering a new problem, students try to solve it from scratch rather than recognizing it as a transformation of a problem they already know how to solve.
A student is asked to find the longest increasing subsequence and attempts to develop a novel algorithm from scratch, not recognizing it as a classic DP problem with a well-known O(n log n) solution.
How to fix it
Build a mental catalog of canonical problems: shortest paths, minimum spanning tree, maximum flow, LCS, LIS, knapsack, edit distance. When facing a new problem, ask 'what known problem does this reduce to?' This reduction skill is what algorithm courses are really teaching.
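As an example of the payoff, the LIS problem from the previous section reduces to maintaining a sorted list of subsequence "tails" with binary search, which is the well-known O(n log n) solution (a sketch; names are illustrative):

```python
import bisect

def lis_length(a):
    """Length of the longest strictly increasing subsequence in O(n log n).

    tails[k] holds the smallest possible tail of an increasing subsequence
    of length k+1. Each element either extends the longest subsequence or
    tightens an existing tail; bisect_left enforces strict increase.
    """
    tails = []
    for x in a:
        pos = bisect.bisect_left(tails, x)
        if pos == len(tails):
            tails.append(x)   # x extends the longest subsequence found so far
        else:
            tails[pos] = x    # x is a better (smaller) tail for length pos+1
    return len(tails)
```

Recognizing that "keep the best tail per length" reduces to sorted-list search is the reduction skill in action: the hard problem becomes a binary-search invariant.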
Cramming problems without reviewing mistakes
Students solve many problems but never go back to analyze why they got stuck or made errors. Without deliberate review, they repeat the same mistakes and plateau in ability.
A student solves 200 LeetCode problems but keeps failing on graph traversal problems because they never identified that their recurring mistake is forgetting to mark nodes as visited.
How to fix it
Keep an error journal. After each problem, write: what technique was needed, where you got stuck, and what you'd do differently. Review this journal weekly. Deliberate review of 50 problems is worth more than mindlessly grinding 200.
Quick Self-Check
- Can you identify whether a problem needs greedy, DP, or divide-and-conquer within two minutes of reading it?
- Can you write a formal correctness proof for a simple greedy algorithm like activity selection?
- Do you know when the Master Theorem does and does not apply to a recurrence?
- Can you implement BFS, DFS, and Dijkstra from scratch without referencing code?
- Do you review your mistakes after solving algorithm problems, or just move to the next one?
Pro Tips
- ✓ When stuck on a DP problem, start by writing the brute-force recursive solution, then identify overlapping subproblems, and add memoization; this is easier than jumping straight to the bottom-up table.
- ✓ For graph problems, always start by clarifying: directed or undirected, weighted or unweighted, cyclic or acyclic? These properties determine which algorithms apply.
- ✓ Build a 'technique trigger' list: seeing 'shortest path' should trigger Dijkstra/BFS, 'optimal substructure' should trigger DP, 'making locally best choices' should trigger greedy with proof obligation.
- ✓ Practice explaining your algorithm's correctness before writing code; if you can't convince yourself it's right in words, the code won't be right either.
- ✓ Time yourself solving problems and track your speed by category; this reveals which technique families need the most practice.