After re-reading YDKJS: Async and Performance, I had the realization that I've been over-optimizing without taking JS engine optimizations into consideration.
Iterators
Like always using iterators directly:
const foo = new Map(/* ... */)
const results = []
for (let [key, value] of foo) {
  results.push(value + "baz")
}
Instead of transforming it into an array first to take advantage of the convenient array methods:
const foo = new Map(/* ... */)
const results = [...foo].map(([key, value]) => {
  return value + "baz"
})
I thought that this would cause two loops: one over the Map to transform it into an array, then a second over the array to perform the operation on each value.
Compare them: https://jsperf.com/map-to-array-forach-vs-for-of
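Since jsperf snapshots come and go, a quick local check works too. Below is a minimal sketch using console.time; the Map size, values, and timer labels are arbitrary choices of mine, and micro-benchmarks like this are noisy (JIT warm-up, GC), so treat the numbers as ballpark only.

// Build a throwaway Map; the size (10000) and values are made up for illustration.
const foo = new Map(Array.from({ length: 10000 }, (_, i) => [i, "bar" + i]))

// Time the direct iteration.
console.time("for-of")
const a = []
for (let [key, value] of foo) {
  a.push(value + "baz")
}
console.timeEnd("for-of")

// Time the spread-then-map version.
console.time("spread + map")
const b = [...foo].map(([key, value]) => value + "baz")
console.timeEnd("spread + map")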
Array.reduce
Also, opting for reduce:
const foo = [
  /* ... */
]
const result = foo.reduce((acc, num) => {
  if (num > 3) acc.push(num * 22)
  return acc
}, [])
Instead of a more readable filter and map:
const foo = [
  /* ... */
]
const result = foo.filter(num => num > 3).map(num => num * 22)
I thought that reduce would also reduce the number of loops.
Compare them here: https://jsperf.com/reduce-vs-filter-map-22
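The same kind of rough local check applies here. In this sketch the input array is random filler of my own choosing, with the same micro-benchmark caveats as above.

// Random filler data; 10000 elements in 0..9 is an arbitrary choice.
const nums = Array.from({ length: 10000 }, () => Math.floor(Math.random() * 10))

// Single pass with reduce.
console.time("reduce")
const viaReduce = nums.reduce((acc, num) => {
  if (num > 3) acc.push(num * 22)
  return acc
}, [])
console.timeEnd("reduce")

// Two passes with filter and map.
console.time("filter + map")
const viaFilterMap = nums.filter(num => num > 3).map(num => num * 22)
console.timeEnd("filter + map")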
In some cases the over-optimizations I was making do tend to be faster, but barely. Are they really worth the compromise in code readability?
I'm now thinking of JS as more declarative (what I want the code to do) rather than as the actual sequence of events that happens. This is not to say throw performance optimizations out the window, but rather to test the assumptions I make before they end up everywhere in my code.