Published on Oct 03 2025
Last updated on Oct 03 2025
Most people stop at "it runs." That's as true in coding interviews as it is in production. The test passes, the deploy works, you move on. But the difference between a junior and a senior isn't whether the code works; it's how you think about the solution.
When you look at a piece of code and ask:
Is this the best possible complexity?
What are the trade-offs in memory, readability, or maintainability?
Should I optimize further—or stop here?
That’s when you’re starting to think like an engineer, not just a problem solver.
Let’s use a few LeetCode-style problems to illustrate the point.
Take the classic Group Anagrams problem. A common solution is to sort each string and use the sorted string as a hash map key. It works fine and is simple, but sorting costs O(m log m) per string, where m is the length of the string.
solution.js
```js
var groupAnagrams = function(strs) {
  let resMap = new Map();

  for (let str of strs) {
    // Sort the string; anagrams share the same sorted form
    let key = str.split('').sort().join('');

    if (resMap.has(key)) {
      resMap.get(key).push(str);
    } else {
      resMap.set(key, [str]);
    }
  }
  return [...resMap.values()];
};
```
Now consider this alternative approach:
solution.js
```js
/**
 * @param {string[]} strs
 * @return {string[][]}
 */
var groupAnagrams = function(strs) {
  let alphabetsArr = new Array(26).fill(0);
  let bucket = new Map();

  for (let str of strs) {
    // Count letter frequencies ('a' has char code 97, so 'a' maps to index 0)
    for (let i = 0; i < str.length; i++) {
      let index = str.charCodeAt(i) - 97;
      alphabetsArr[index]++;
    }

    // Anagrams produce identical frequency arrays, hence identical keys
    let key = alphabetsArr.toString();
    if (bucket.has(key)) {
      bucket.get(key).push(str);
    } else {
      bucket.set(key, [str]);
    }

    // Reset the counts for the next string
    alphabetsArr = new Array(26).fill(0);
  }

  return [...bucket.values()];
};
```
Here we build a 26-slot frequency array and serialize it into the hash key. Complexity drops to O(m) per string, because counting characters is linear.
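With either version, `groupAnagrams(["eat", "tea", "tan", "ate", "nat", "bat"])` returns the groups `[["eat", "tea", "ate"], ["tan", "nat"], ["bat"]]` (group order follows insertion order).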
Sorting-based key:
✅ Easy to write and understand.
✅ Memory-friendly (no auxiliary count arrays to manage).
❌ Slower for long strings (O(m log m)).
Frequency-array key:
✅ Faster (O(m)).
✅ Cache-friendly: fixed-size array of 26 ints.
❌ More memory allocations.
❌ Harder to extend if input includes Unicode or case sensitivity.
This is exactly the kind of decision engineers make in production systems:
Cache-friendliness: fixed arrays often beat dynamic strings in memory-heavy environments.
Maintainability: a sorted key may be easier to debug.
Constraints: if the input is English words, arrays make sense; if the input is Unicode, sorting might be simpler.
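To make that last trade-off concrete: one way to extend the frequency idea beyond a–z is to swap the fixed 26-slot array for a Map keyed by character. A minimal sketch (the `anagramKey` helper is hypothetical, not part of the original solutions), which handles Unicode and case sensitivity at the cost of a less compact, less cache-friendly key:

```js
// Hypothetical helper (not from the solutions above): a Map-based key
// that works for any characters, including Unicode and mixed case.
function anagramKey(str) {
  const counts = new Map();
  for (const ch of str) { // for...of iterates code points, not code units
    counts.set(ch, (counts.get(ch) || 0) + 1);
  }
  // Sort by character so identical multisets always serialize identically
  return [...counts.entries()]
    .sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
    .map(([ch, n]) => `${ch}:${n}`)
    .join(',');
}
```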
Correctness is not the end. The trade-off analysis is the real exercise.
This shift applies everywhere. Consider a few other examples from practice:
Majority Element: a map-based counter works, but adding an early exit (return as soon as a count exceeds n/2) changes runtime characteristics in real workloads. Worst case is still O(n), but the average case gets faster. That's the kind of micro-optimization that matters when n is in the millions.
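A minimal sketch of that early exit (assuming, as the classic problem statement guarantees, that a majority element exists):

```js
// Map-based counter with an early exit once any count passes n/2.
function majorityElement(nums) {
  const counts = new Map();
  const threshold = Math.floor(nums.length / 2);

  for (const num of nums) {
    const count = (counts.get(num) || 0) + 1;
    // Early exit: once a count exceeds n/2, this element is the majority,
    // so the rest of the scan is unnecessary work.
    if (count > threshold) return num;
    counts.set(num, count);
  }
}
```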
A junior dev might think “O(n) is O(n), so why bother.” A senior dev knows average-case performance and real-world workloads matter. Early exits are everywhere in real systems:
Database engines stop scanning once enough rows are found.
Compilers bail out when a branch can’t possibly optimize further.
Distributed systems often short-circuit when a quorum is reached.
You’re not just counting elements; you’re rehearsing the principle of “don’t do unnecessary work.”
Longest Common Prefix: scanning the shortest string first seems trivial, but it’s an important heuristic. The shortest string is a tight upper bound on the answer, and tight bounds prevent wasted work. The insight: find the tightest bound and start there. It’s not just a string trick; it’s the same principle query optimizers use when picking the smallest index first, or compilers use when checking the narrowest type constraints.
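A minimal sketch of one way to apply that bound: a plain vertical scan, capped by the shortest string’s length so no comparison can run past the tightest possible answer.

```js
// Vertical scan bounded by the shortest string.
function longestCommonPrefix(strs) {
  if (strs.length === 0) return '';

  // Tightest bound: the prefix can never be longer than the shortest string
  const bound = Math.min(...strs.map(s => s.length));

  let end = 0;
  while (end < bound) {
    const ch = strs[0][end];
    // Stop at the first column where any string disagrees
    if (strs.some(s => s[end] !== ch)) break;
    end++;
  }
  return strs[0].slice(0, end);
}
```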
Remove Element: there’s a temptation to overcomplicate. But the two-pointer overwrite solution is already O(n) and O(1). Since O(n) is the lower bound—you must touch each element—there’s nothing left to optimize. The experienced move isn’t to add more tricks. It’s to know when to stop. This is also part of engineering judgment: sometimes the simplest code is the optimal code. Knowing when to stop is just as important as knowing how to optimize.
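For reference, a minimal sketch of that two-pointer overwrite (the standard read/write pointer pattern):

```js
// Two-pointer overwrite: one pass, in place. O(n) time, O(1) space.
function removeElement(nums, val) {
  let write = 0; // next slot for an element we keep
  for (let read = 0; read < nums.length; read++) {
    if (nums[read] !== val) {
      nums[write] = nums[read];
      write++;
    }
  }
  return write; // the new logical length of the array
}
```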
At scale, engineering is not about getting the answer. It’s about choosing which answer is worth pursuing.
A production system isn’t a neat coding problem. It has constraints: memory budgets, latency targets, developer experience. The skill that translates from LeetCode to real work isn’t “solving problems fast.” It’s learning to think about:
Time vs. space: can complexity be improved, and at what cost?
Average case vs. worst case: real workloads rarely hit the theoretical worst case.
Simplicity vs. performance.
Implementation cost vs. long-term maintainability.
When you train yourself to look past correctness, you’re rehearsing those trade-offs.
In production systems, the same principles decide whether a feature is maintainable, whether an API is usable, whether a query scales to millions of rows.
LeetCode is just the sandbox. The real skill is in the analysis.
Solving problems makes you a coder. Analyzing solutions makes you an engineer.
LeetCode won’t make you a great engineer by itself. But how you approach it can. If you treat it as practice in analyzing trade-offs—hash map vs array, early exit vs full scan, simplicity vs over-engineering—you’re not just passing tests. You’re building the habit of thinking like an experienced engineer, someone who is passionate not just about finding a solution, but the right solution.
Correctness is the floor. Judgment is the ceiling.
Written by Alissa Nguyen
Alissa Nguyen is a software engineer whose main focus is building better software with modern technologies and frameworks such as Remix, React, and TailwindCSS. She is currently working on some side projects, exploring her hobbies, and living with her two kitties.