There are quite a few reasons to dislike JavaScript, but its arithmetic is particularly painful. In a recent post on X, someone was pretty cross about some equality comparisons that weren't behaving the way they expected. It looked something like this:
018 == '018'
true
017 == '017'
false
This seemed to confuse a lot of folks, so I figured I would try to explain it a bit here.
You see, JavaScript wants to be incredibly helpful to developers. It tries to anticipate what a developer means and silently corrects for it. For example, if you start a number with a leading zero, it assumes you are writing that number in octal. Under the hood it performs an implicit conversion and carries on. This may not seem terrible at first glance, but it becomes inconsistent quickly.
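For instance, in a non-strict script (or a browser console), a numeric literal that starts with 0 and uses only the digits 0 through 7 is silently read as octal; strict mode and ES modules reject these legacy literals outright:

010
8
017
15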
In our example above, 018 is not a valid octal number. Octal only uses eight digits (0 through 7). But (unlike some other languages) JavaScript won't throw an error here. It assumes the developer had a specific purpose for writing this and tries to guess what that might be. Since 018 is not valid octal, JavaScript falls back and treats it as a decimal number instead, representing it internally as 18. When a string is compared to a number with ==, the string is converted to a number too, and that conversion always reads the digits as decimal, so '018' becomes 18 and our expression is true.
But in the second example, 017 is valid octal. In this case, JavaScript internally represents our number as 15. Its string ('017') is still converted as decimal, leaving us with 17. And now we have a problem… 15 is not equal to 17, so the second expression is false.
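You can check both conversions directly in a non-strict console:

018 === 18
true
017 === 15
true
Number('018')
18
Number('017')
17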
In its overly zealous attempt to be user-friendly, JavaScript silently coerces whatever you toss at it into whatever shape it can, without ever giving the developer a chance to notice the inconsistency.
But wait! There's more
Let's examine another place where the math gets weird. For context, JavaScript's ordinary numbers all share a single data type: the 64-bit double precision float (IEEE 754). Setting the newer BigInt type aside, any number you throw at it will be represented this way internally.
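A quick way to see what that single representation costs you: integers only stay exact up to 2^53 − 1 (Number.MAX_SAFE_INTEGER), after which distinct values collapse into the same double:

Number.MAX_SAFE_INTEGER
9007199254740991
9007199254740992 === 9007199254740993
true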
Now, take the following expression:
0.1 + 0.7 == 0.8
false
There is a really good article on why this happens, so I will just link it here: JavaScript Corner: Math and the Pitfalls of Floating Point Numbers
The short version is that 0.1 and 0.7 can't be represented exactly in binary floating point, so the internal representation of the sum picks up a tiny rounding error; 0.1 + 0.7 actually evaluates to 0.7999999999999999, not 0.8.
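A common workaround is to compare against a small tolerance rather than testing for exact equality. Here approxEqual and the tolerance value are just illustrative choices, not a standard API, and the right tolerance depends on the magnitude of the numbers involved:

// illustrative helper, not built in; pick a tolerance that fits your data
const approxEqual = (a, b, eps = 1e-9) => Math.abs(a - b) < eps;
approxEqual(0.1 + 0.7, 0.8)
true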
And it doesn't stop there! Type coercion gets even stranger when you start changing operators. Take, for example:
'6' + 4
"64"
'6' - 4
2
In our first case, the + operator is treated as string concatenation, giving us the string "64". But there is no - operator for strings, so JavaScript assumes you want to perform arithmetic and converts your string to a number.
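If you actually want arithmetic (or actually want concatenation), converting explicitly removes the guesswork:

Number('6') + 4
10
'6' + String(4)
"64"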
You can see this same behavior with empty arrays as well.
[] + 1
"1"
[] - 1
-1
In our first example, the empty array [] is converted to an empty string and concatenated, giving "1". In the second example, that empty string is then converted to a number (zero), so subtracting 1 leaves -1.
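You can verify the intermediate steps yourself: the array becomes an empty string, and the empty string becomes zero:

String([])
""
Number([])
0
Number('')
0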
Python isn't a perfect language either; it suffers from the same floating point rounding issues, for example. But it refuses to guess about types: mixing a string with an integer raises an error, encouraging the developer to be explicit instead.
>>> '6' + 4
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can only concatenate str (not "int") to str
It would have been great if JavaScript encouraged this kind of explicitness, but here we are. I guess this is why it's so hard to have nice things. 🙂