JavaScript Performance Gotchas: Avoid scaling issues through under-the-hood knowledge
Originally published April 2023. Modern V8 / SpiderMonkey / JavaScriptCore have made most of this advice less load-bearing than it was — the entries below are updated for what still holds up in 2026. The headline rule: measure with performance.now(), --prof, or a profiler before you optimize. Outside a tight, hot loop, almost everything else is noise.
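A minimal measurement sketch of that headline rule, using performance.now() (a global in modern Node and browsers); timeIt is a hypothetical helper name, not a standard API:

```javascript
// Time a function over many iterations and report the total.
// timeIt is an illustrative helper, invented for this sketch.
function timeIt(label, fn, iterations = 100000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn(i);
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(2)} ms for ${iterations} iterations`);
  return ms;
}

// Measure a baseline first, then the candidate "optimization" -
// if the two numbers are indistinguishable, the rewrite is noise.
timeIt('noop baseline', () => {});
```

For anything beyond quick checks, node --prof or a browser profiler gives attribution, not just totals.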
array.push vs. the spread operator
Both work for adding items, but the cost shapes differ:
- array.push(item): O(1) amortized, mutates in place.
- [...array, item]: allocates a new array of length n+1 and copies every element. O(n) per call, and accidentally O(n²) inside a loop.
Verdict: the gotcha isn't "spread is slower" — it's quadratic blowup when spread runs inside a loop. Use push, or build the array once at the end.
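A sketch of the quadratic trap the verdict describes (function names are invented for illustration). Both builders produce the same array; only the cost shape differs:

```javascript
// O(n^2) total: every spread copies the whole accumulator so far.
function buildWithSpread(n) {
  let out = [];
  for (let i = 0; i < n; i++) {
    out = [...out, i]; // allocates a fresh array of length i+1 each pass
  }
  return out;
}

// O(n) total: push appends in place, amortized O(1) per call.
function buildWithPush(n) {
  const out = [];
  for (let i = 0; i < n; i++) {
    out.push(i);
  }
  return out;
}
```

At n = 100,000 the spread version performs billions of element copies while the push version performs 100,000 appends, which is why the gap only shows up at scale.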
for...in vs. for vs. forEach
- for...in is for object properties. On arrays it returns string keys ('0', '1', ...), picks up enumerable properties from the prototype chain, and includes any non-index properties attached to the array. That makes it almost always a correctness bug, not just a perf one. for and for...of give predictable index iteration.
- forEach looked slower in 2018-era V8 microbenchmarks. Modern engines (Maglev / Turbofan) inline monomorphic callbacks; the gap is unmeasurable in practice.
Verdict: avoid for...in on arrays for correctness reasons. for vs forEach is a style choice today.
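A small demonstration of the correctness hazard (the extra property is contrived for the example): for...in yields string keys plus any own enumerable non-index property, while for...of yields only the element values:

```javascript
const arr = [10, 20, 30];
arr.extra = 'surprise'; // own enumerable property, not an array index

const forInKeys = [];
for (const k in arr) forInKeys.push(k);
// forInKeys is ['0', '1', '2', 'extra'] - string keys, plus the stray property

const forOfValues = [];
for (const v of arr) forOfValues.push(v);
// forOfValues is [10, 20, 30] - element values in index order
```

Note that forInKeys[0] === '0' (a string), so arithmetic like key + 1 silently concatenates instead of adding.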
String concatenation with +
The classic "use Array.prototype.join for performance" advice predates the rope / cons-string optimizations that engines have shipped for years; V8, for instance, represents the result of a concatenation as a cons string rather than eagerly copying both halves. += in a loop is now competitive with or faster than join for typical sizes.
Verdict: += is fine. Use join for readability when you actually have an array of strings, not as a performance trick.
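The two idioms the verdict compares, side by side (helper names are invented; both produce identical output, so the choice is about readability):

```javascript
// Loop with +=: modern engines build a rope/cons string internally,
// so this is no longer a per-step full-copy operation.
function concatWithPlus(parts) {
  let s = '';
  for (const p of parts) s += p;
  return s;
}

// join: clearer when you already hold an array of strings.
function concatWithJoin(parts) {
  return parts.join('');
}
```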
Global vs. local variables
The scope-chain cost is real but tiny, and inline caches absorb most of it after the first call. With ES modules and "use strict", true globals are rare anyway.
Verdict: prefer locals for code clarity and module isolation, not for measurable performance.
Repeatedly accessing deep properties
Hidden classes plus inline caches make a.b.c.d essentially free in monomorphic hot paths after warmup. The cost shows up when:
- the path goes megamorphic (different shapes hit the same call site),
- an intermediate node is a Proxy, getter, or Object.defineProperty-installed accessor,
- or you re-resolve via dynamic keys (obj[name]).
Verdict: cache only inside hot loops where you've measured a polymorphism penalty — blanket caching is just visual noise.
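A sketch of the one caching pattern the verdict endorses: hoisting a deep path out of a measured hot loop. The config shape and function names here are invented for illustration:

```javascript
// Re-walks cfg.physics.gravity.scale on every iteration. With stable shapes
// and inline caches this is usually near-free; it only hurts once the call
// site goes megamorphic or an accessor sits on the path.
function sumUncached(points, cfg) {
  let total = 0;
  for (const p of points) total += p.x * cfg.physics.gravity.scale;
  return total;
}

// Hoists the lookup once before the loop - the targeted fix, applied only
// after a profiler shows the property walk is actually the bottleneck.
function sumCached(points, cfg) {
  const scale = cfg.physics.gravity.scale;
  let total = 0;
  for (const p of points) total += p.x * scale;
  return total;
}

const cfg = { physics: { gravity: { scale: 2 } } }; // hypothetical config
const pts = [{ x: 1 }, { x: 2 }, { x: 3 }];
// sumUncached(pts, cfg) and sumCached(pts, cfg) both return 12
```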