Finding the Balance: When to Use Object Destructuring and When to Avoid It

Object destructuring of function parameters is the right default for any function with three or more parameters, any optional ones, or a signature you expect to grow. Positional parameters are still right for one or two required arguments and for callbacks or math utilities where the argument order is intuitive. The places where conventional advice trips up are which changes actually break callers, where the type system does the work, and the performance lore.

Where destructuring earns its keep

interface CacheOptions {
  ttlSeconds: number
  staleWhileRevalidate?: number
  tags?: string[]
}

function cacheRemember<T>(
  key: string,
  fn: () => Promise<T>,
  opts: CacheOptions,
): Promise<T> { /* ... */ }
  • Call sites are self-documenting: cacheRemember('feed', loadFeed, { ttlSeconds: 60, tags: ['feed'] }) — no mental mapping of position to meaning.
  • Optional props with ? and defaults inside the destructure: function f({ a, b = 'default' }: Opts) {}. Cleaner than if (b === undefined) b = ....
  • Adding an optional property is non-breaking. Adding a required one still breaks every caller; that's true whether the parameters are positional or destructured.
  • Editor completions key off the type, so the options object is fully assisted at the call site.
  • TypeScript 4.9+ satisfies lets callers type a literal options object precisely without losing inference: const opts = { ttlSeconds: 60 } satisfies CacheOptions. A combined sketch of this and the defaults point follows this list.
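
A minimal sketch tying the defaults and satisfies points together. CacheOptions is repeated from above so the snippet stands alone; describeCache and its fallback values are hypothetical, not part of the API discussed here.

interface CacheOptions {
  ttlSeconds: number
  staleWhileRevalidate?: number
  tags?: string[]
}

// Defaults live in the parameter list, not in the function body
function describeCache({ ttlSeconds, staleWhileRevalidate = 0, tags = [] }: CacheOptions): string {
  return `ttl=${ttlSeconds}s swr=${staleWhileRevalidate}s tags=${tags.join(',')}`
}

// satisfies checks the literal against CacheOptions while preserving the
// narrow inferred type { ttlSeconds: number } (TypeScript 4.9+)
const opts = { ttlSeconds: 60 } satisfies CacheOptions

describeCache(opts)                               // optional props fall back to their defaults
describeCache({ ttlSeconds: 60, tags: ['feed'] }) // or are spelled out at the call site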

Where positional is fine

  • One or two required parameters where order is unambiguous: clamp(value, min, max), add(a, b) (sketched after this list).
  • Callbacks with established signatures: arr.map((item, index) => ...), el.addEventListener('click', (event) => ...). Forcing destructuring fights the runtime contract.
  • Math and utility functions with conventional argument orders.
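
A quick sketch of the positional case, using the clamp example from the first bullet; the body is the standard min/max composition.

function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max)
}

clamp(15, 0, 10) // 10: the value, min, max order reads at a glance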

Performance, in practice

The performance lore around destructuring overstates the cost. Destructuring an existing object, which is all the function body does, doesn't allocate; it's just property reads, which V8 typically compiles down to direct offset loads through hidden classes and inline caches. The real cost is at the call site, where f({ a, b }) allocates the object literal. Inside a tight loop that allocation can show up in a flame graph; everywhere else it's invisible next to I/O, JSON parsing, and GC pressure from response payloads. Profile before rewriting to positional: the bottleneck is almost certainly somewhere else.
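
If a profile does point at call-site allocation, the usual fix is to hoist the options literal out of the loop rather than abandon destructuring. A minimal sketch; transform and TransformOpts are hypothetical names for illustration.

interface TransformOpts {
  scale: number
  offset: number
}

function transform({ scale, offset }: TransformOpts, x: number): number {
  // The destructure is plain property reads on an already-allocated object
  return x * scale + offset
}

// Allocated once, outside the hot loop, instead of per call
const opts: TransformOpts = { scale: 2, offset: 1 }

let total = 0
for (let i = 0; i < 1_000_000; i++) {
  total += transform(opts, i) // no object literal allocated per iteration
}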