There are roughly three thousand problems on LeetCode and you do not have time to grind them. The engineers who pass FAANG loops are not faster typists or sharper memorizers. They have learned to recognize a small set of recurring problem shapes — about a dozen — and to map any new question onto one of those shapes within the first two minutes. That recognition is the entire interview. Once you know the pattern, the code is muscle memory. This guide walks through the twelve patterns that cover almost every coding question on Blind 75, NeetCode 150, and the modern technical screen.
Why pattern recognition beats problem volume
The grind-thousands-of-problems strategy treats every interview question as a unique puzzle. That premise is wrong. Interview problems are reskins of a small number of underlying algorithmic shapes, dressed in different domain language — pharmacies instead of arrays, weather forecasts instead of integers, ride-share matching instead of bipartite graphs. The shape underneath rarely changes. If you have practiced the shape, the dressing does not matter.
The math works out brutally. If twelve patterns cover ninety percent of interview questions, ten focused hours per pattern beats five hundred hours of random grinding. You finish faster, you retain more, and you walk into the room with a recognition reflex instead of a memorized solution that breaks the moment the interviewer twists one constraint.
This is also how senior engineers actually work in production. We do not invent novel algorithms for every ticket. We recognize 'this is a fan-out problem' or 'this is an interval merge' and we reach for the right shape. The interview is not testing whether you can solve a hard problem cold. It is testing whether you can pattern-match under time pressure.
Linear-scan patterns: sliding window, two pointers, prefix sums
The first family of patterns turns brute-force quadratic scans into single-pass linear ones. They share a structural trait: you walk an array once, maintaining a small bookkeeping invariant that lets you avoid re-examining elements you have already seen.
- Sliding Window — keep a window of contiguous elements; expand right, contract left when an invariant breaks. Variable windows handle 'longest substring with K distinct,' fixed windows handle 'max subarray of size K.'
- Two Pointers — converge or chase a pair of indices. Sort first, then scan inward (3Sum, container with most water). Or fast/slow pointers for cycle detection in linked lists and arrays.
- Prefix and Suffix Sums — precompute once, query in O(1). The duct tape of subarray-sum, range-XOR, and 'product of array except self' problems.
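The prefix-sum member of this family is the quickest to sketch. Here is a minimal version in TypeScript (the function names `buildPrefix` and `rangeSum` are illustrative, not from any library):

```typescript
// Prefix sums: O(n) preprocessing, then any range-sum query in O(1).
function buildPrefix(nums: number[]): number[] {
  const prefix = new Array<number>(nums.length + 1).fill(0);
  for (let i = 0; i < nums.length; i++) prefix[i + 1] = prefix[i] + nums[i];
  return prefix;
}

// Sum of nums[i..j], inclusive, via two prefix lookups.
function rangeSum(prefix: number[], i: number, j: number): number {
  return prefix[j + 1] - prefix[i];
}
```

The same subtract-two-precomputed-values trick works for XOR and (with care around zeros) products.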
A reliable tell: if the brute force is two nested loops over the same array and you only care about a contiguous range or a sorted-pair condition, one of these three patterns collapses it to O(n). The hard part is spotting the invariant, not coding the loop.
// Variable-size sliding window: longest substring with at most K distinct chars.
function longestWithKDistinct(s: string, k: number): number {
  const counts = new Map<string, number>();
  let left = 0, best = 0;
  for (let right = 0; right < s.length; right++) {
    counts.set(s[right], (counts.get(s[right]) ?? 0) + 1);
    while (counts.size > k) {
      const c = s[left++];
      const n = (counts.get(c) ?? 0) - 1;
      if (n === 0) counts.delete(c); else counts.set(c, n);
    }
    best = Math.max(best, right - left + 1);
  }
  return best;
}
Stack and heap patterns: monotonic invariants and top-K
When the question asks 'next greater element,' 'days until warmer,' 'largest rectangle in histogram,' or anything that hinges on the next or previous extremum in a sequence, you are looking at a monotonic stack. The stack maintains a decreasing or increasing invariant; the moment a new element breaks it, you pop and resolve the boundary.
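The 'next greater element' version of that template looks like this in TypeScript (a sketch; `nextGreater` is an illustrative name):

```typescript
// Monotonic stack: for each element, find the next strictly greater element
// to its right, or -1 if none exists. The stack holds indices whose values
// form a decreasing run; a larger incoming value resolves them.
function nextGreater(nums: number[]): number[] {
  const result = new Array<number>(nums.length).fill(-1);
  const stack: number[] = []; // indices, values decreasing top-down
  for (let i = 0; i < nums.length; i++) {
    while (stack.length > 0 && nums[stack[stack.length - 1]] < nums[i]) {
      result[stack.pop()!] = nums[i]; // nums[i] broke the invariant: resolve
    }
    stack.push(i);
  }
  return result;
}
```

Every index is pushed once and popped at most once, which is where the amortized O(n) comes from.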
Heaps own the top-K and merge-K family. Top-K largest? Min-heap of size K. Top-K frequent? Counter plus min-heap of size K. Merge K sorted lists? Min-heap keyed on each list's head. Median of a stream? Two heaps — a max-heap for the lower half, a min-heap for the upper, balanced by size.
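JavaScript has no built-in heap, so interviews in TypeScript usually mean writing a small one. A sketch of the min-heap-of-size-K approach to top-K largest (class and function names are mine):

```typescript
// Minimal binary min-heap, just enough for the top-K pattern.
class MinHeap {
  private a: number[] = [];
  get size(): number { return this.a.length; }
  push(x: number): void {
    this.a.push(x);
    let i = this.a.length - 1;
    while (i > 0) { // sift up until the parent is no larger
      const p = (i - 1) >> 1;
      if (this.a[p] <= this.a[i]) break;
      [this.a[p], this.a[i]] = [this.a[i], this.a[p]];
      i = p;
    }
  }
  pop(): number {
    const top = this.a[0];
    const last = this.a.pop()!;
    if (this.a.length > 0) {
      this.a[0] = last;
      let i = 0;
      while (true) { // sift down toward the smaller child
        const l = 2 * i + 1, r = 2 * i + 2;
        let m = i;
        if (l < this.a.length && this.a[l] < this.a[m]) m = l;
        if (r < this.a.length && this.a[r] < this.a[m]) m = r;
        if (m === i) break;
        [this.a[i], this.a[m]] = [this.a[m], this.a[i]];
        i = m;
      }
    }
    return top;
  }
}

// Top-K largest in O(n log k): the heap root is the smallest of the current
// top k, so any larger value evicts it.
function topKLargest(nums: number[], k: number): number[] {
  const heap = new MinHeap();
  for (const x of nums) {
    heap.push(x);
    if (heap.size > k) heap.pop();
  }
  const out: number[] = [];
  while (heap.size > 0) out.push(heap.pop());
  return out; // ascending order
}
```

The same `MinHeap` skeleton, re-keyed on frequency or on list heads, covers top-K frequent and merge-K-lists.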
Both patterns share a deceptively simple complexity story: amortized O(n) for the stack family, O(n log k) for the heap family. Get the data structure right and the rest is bookkeeping.
Graph and tree patterns: BFS, DFS, union-find, topological sort
Half of the medium-tier problem set is a graph in disguise. Word ladders are BFS on a string graph. Course schedules are topological sort. Number of islands is BFS or DFS on a grid. Friend circles are union-find. The grid problems hide the graph; the prerequisite problems hide the dependency edges. Once you see the graph, you pick the traversal.
| Problem shape | Pattern | Complexity |
|---|---|---|
| Shortest hops in unweighted graph | BFS | O(V + E) |
| All paths / connected components | DFS or Union-Find | O(V + E) / O(E · α(V)) |
| Order with dependencies | Topological sort (Kahn's) | O(V + E) |
| Cycle detection in undirected graph | Union-Find | O(E · α(V)) |
| Shortest path with weights | Dijkstra (heap-based) | O((V + E) log V) |
| Weighted with negatives | Bellman-Ford | O(V · E) |
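The grid-as-graph move from the table above can be sketched with BFS on 'number of islands' (a sketch; `numIslands` is an illustrative name):

```typescript
// Number of islands: each cell is a node, 4-directional adjacency is the
// edge set. BFS floods one connected component of "1" cells per island.
function numIslands(grid: string[][]): number {
  const rows = grid.length, cols = grid[0]?.length ?? 0;
  const seen = Array.from({ length: rows }, () => new Array<boolean>(cols).fill(false));
  let islands = 0;
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (grid[r][c] !== "1" || seen[r][c]) continue;
      islands++;
      const queue: [number, number][] = [[r, c]];
      seen[r][c] = true;
      while (queue.length > 0) {
        const [cr, cc] = queue.shift()!;
        for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
          const nr = cr + dr, nc = cc + dc;
          if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
              grid[nr][nc] === "1" && !seen[nr][nc]) {
            seen[nr][nc] = true;
            queue.push([nr, nc]);
          }
        }
      }
    }
  }
  return islands;
}
```

Swap the queue for a recursive call and the same skeleton becomes the DFS version.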
Tree problems are graph problems with a single root and no cycles, which means almost all of them are DFS. Pre-order, in-order, post-order, level-order — the variants are about which moment in the recursion you record state. Recursion plus a return value carrying a partial answer is the workhorse.
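The recursion-with-a-return-value shape, at its smallest, is maximum depth (the `TreeNode` interface here is the usual interview shape, not a library type):

```typescript
// Post-order DFS carrying a partial answer: each call returns the depth of
// its subtree, and the parent combines the two child answers.
interface TreeNode { val: number; left: TreeNode | null; right: TreeNode | null; }

function maxDepth(root: TreeNode | null): number {
  if (root === null) return 0; // base case: empty subtree
  return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));
}
```

Harder tree problems (diameter, max path sum) keep this exact skeleton and just make the returned partial answer richer.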
Search-the-answer-space and dynamic programming
Binary search on the answer is the pattern that catches new candidates flat-footed. The array is not sorted in the input — but the predicate is monotonic. 'Smallest capacity to ship within D days,' 'minimum eating speed,' 'split array largest sum' all work the same way: define a feasibility check, then binary search the integer range of possible answers. If the answer space is large but the check is cheap, this is your pattern.
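'Minimum eating speed' (LeetCode's Koko problem) shows the whole recipe in a dozen lines (a sketch; the helper `hoursNeeded` is my own name):

```typescript
// Binary search on the answer: find the smallest speed k such that all
// piles can be eaten within h hours. Feasibility is monotonic in k, so we
// search the integer range [1, max(piles)] instead of the (unsorted) input.
function minEatingSpeed(piles: number[], h: number): number {
  const hoursNeeded = (k: number): number =>
    piles.reduce((sum, p) => sum + Math.ceil(p / k), 0);
  let lo = 1, hi = Math.max(...piles);
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (hoursNeeded(mid) <= h) hi = mid; // feasible: try a slower speed
    else lo = mid + 1;                   // infeasible: must eat faster
  }
  return lo;
}
```

'Ship within D days' and 'split array largest sum' differ only in the feasibility check.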
Dynamic programming is the largest pattern by surface area but the smallest by recipe. Define a state, write a recurrence, prove the subproblems overlap, then either memoize the recursion or fill a table. The hard work is choosing the state — once you have it, the code writes itself.
1. Decide what changes from one subproblem to the next — that becomes the state index.
2. Write the recurrence as 'best (or count) over choices' from this state.
3. Identify base cases and the answer cell.
4. Memoize first; convert to a table only if you need iteration order or space optimization.
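The four steps above, applied to coin change, look like this (a sketch; `solve` is an illustrative helper name):

```typescript
// Memoized recursion for coin change: fewest coins summing to amount, or -1.
// State: the remaining amount. Recurrence: 1 + min over coin choices.
// Base cases: 0 remaining costs 0 coins; negative remaining is infeasible.
function coinChange(coins: number[], amount: number): number {
  const memo = new Map<number, number>();
  const solve = (rest: number): number => {
    if (rest === 0) return 0;
    if (rest < 0) return Infinity;
    if (memo.has(rest)) return memo.get(rest)!;
    let best = Infinity;
    for (const c of coins) best = Math.min(best, 1 + solve(rest - c));
    memo.set(rest, best);
    return best;
  };
  const result = solve(amount);
  return result === Infinity ? -1 : result; // answer cell: solve(amount)
}
```

Converting this to a bottom-up table is step four, and only worth doing if the interviewer asks for it or the recursion depth is a concern.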
Backtracking and interval sweeps: the last two everyone forgets
Backtracking is structured search with pruning. Subsets, permutations, combinations, N-queens, Sudoku — the template is identical. Make a choice, recurse, undo the choice. The art is in the pruning conditions. A backtracking solution that does not prune is just an exponential brute force in a trench coat. The pruning is the algorithm.
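The choose/recurse/undo template, instantiated for subsets (a sketch; `subsets` and `backtrack` are illustrative names):

```typescript
// Backtracking template: make a choice, recurse, undo the choice.
// Generates all 2^n subsets of nums.
function subsets(nums: number[]): number[][] {
  const result: number[][] = [];
  const path: number[] = [];
  const backtrack = (start: number): void => {
    result.push([...path]); // record the current choice set
    for (let i = start; i < nums.length; i++) {
      path.push(nums[i]);   // choose
      backtrack(i + 1);     // recurse
      path.pop();           // undo
    }
  };
  backtrack(0);
  return result;
}
```

Subsets needs no pruning because every branch is a valid answer; N-queens and Sudoku use the identical skeleton with a validity check guarding the recursive call.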
Interval sweeps come up in every calendar, scheduling, and log-processing problem. Sort by start, then sweep. Merge overlapping intervals, count meeting rooms (a heap of end times), insert into a sorted set of intervals — they all share the sort-then-walk skeleton. If the problem mentions 'overlap' or 'merge' or 'minimum number of rooms,' you are sweeping intervals.
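The sort-then-walk skeleton, shown on merge-overlapping-intervals (a sketch; `mergeIntervals` is an illustrative name):

```typescript
// Sort-then-sweep: sort by start, then walk once, extending the current
// interval on overlap and starting a new one on a gap.
function mergeIntervals(intervals: number[][]): number[][] {
  const sorted = [...intervals].sort((a, b) => a[0] - b[0]);
  const merged: number[][] = [];
  for (const [start, end] of sorted) {
    const last = merged[merged.length - 1];
    if (last && start <= last[1]) {
      last[1] = Math.max(last[1], end); // overlap: extend
    } else {
      merged.push([start, end]);        // gap: start a new interval
    }
  }
  return merged;
}
```

Meeting rooms keeps the same sort but swaps the `merged` array for a min-heap of end times.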
“I have interviewed five hundred candidates. The ones who get offers do not solve harder problems faster. They recognize the pattern in ninety seconds and spend the next eighteen minutes coding cleanly.”
That is the whole game. Twelve patterns, recognized fast, executed cleanly. Drilling fifty problems per pattern with deliberate practice — naming the pattern out loud before coding, timing yourself to twenty minutes, reviewing the recurrence after — beats grinding three thousand problems by an order of magnitude.
How to drill these patterns without burning out
Pick one pattern. Solve five medium problems back to back, all from that pattern, with the timer running. After each problem, write the recurrence or invariant in plain English. After all five, write a one-paragraph 'pattern card' describing the recognition cue, the template, and the off-by-one to watch for. Keep the cards in a doc you can re-read in five minutes the night before an onsite.
Then mix. After two weeks, do a session of ten random problems and force yourself to name the pattern before writing any code. The naming is the muscle. If you reach for the keyboard before naming, stop and start over. The interview will reward the same instinct.
Stop grinding. Start patterning.
Alpha Code is a patterns-first interview prep platform — coding, system design, behavioral, mocks, and ML/AI engineering all under one $19/mo subscription.