There's more to it. Computers have made math processing so cheap that we tend to just throw the numbers in without thinking about what's going to happen. TFA mentions that in the last paragraph, but unfortunately most of it is misleading sensationalism.
When you perform subtraction, you generally lose precision. 1234568 - 1234567 = 1. You start with 7 digits of precision and end up with 1. Look at it this way: if the first number was off by one, an error of about one part in a million, your answer would become 0 or 2 instead of 1 -- an error of one part in one.
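The same thing happens, less visibly, in floating point. A quick illustration in plain Python (any language with IEEE-754 doubles behaves the same way):

    # Integers up to 2**53 are exactly representable, so this one is fine:
    a = 1234568.0
    b = 1234567.0
    print(a - b)            # 1.0

    # Past the ~15-16 significant digits a double carries, the subtraction
    # destroys the answer entirely -- the 1 is rounded away before we subtract:
    big = 1e16
    print((big + 1) - big)  # 0.0

    # With decimals that aren't exactly representable, the tiny rounding
    # error in the inputs becomes a much larger relative error in the result:
    x = 1.234568
    y = 1.234567
    print(x - y)            # close to, but not exactly, 1e-06

The inputs x and y are each correct to about 16 digits; their difference is correct to only about 10, because the leading digits that cancelled carried no information.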
When you are designing a control system, you have to understand what precision you need in your final answer, and you have to know whether each step in your algorithm maintains the necessary precision. If you don't know how to do that, you're not qualified to design the algorithm. Unfortunately, it's quite possible that neither you nor your bosses know that, so you'll design it anyway.
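To make "maintains the necessary precision" concrete, here's a classic textbook case (a sketch in Python, not something from TFA): the quadratic x^2 - 1e8*x + 1 = 0 has roots near 1e8 and 1e-8, and the standard quadratic formula loses the small one to exactly the subtraction problem above.

    import math

    # x^2 - 1e8*x + 1 = 0; the true roots are ~1e8 and ~1e-8.
    b, c = -1e8, 1.0
    sq = math.sqrt(b * b - 4 * c)

    # Textbook formula: -b and sq agree to ~16 digits, so subtracting
    # them leaves almost no correct digits.
    naive_small = (-b - sq) / 2
    print(naive_small)        # 7.450580596923828e-09 -- wrong in the first digit

    # Algebraically identical, but rearranged so the nearly-equal numbers
    # are added instead of subtracted (the two roots multiply to c):
    large = (-b + sq) / 2
    stable_small = c / large
    print(stable_small)       # 1e-08 -- correct

Both versions are "the quadratic formula"; knowing which one to use at each step is the precision analysis.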
Handing people a copy of Excel and saying, "Here, this thing will do math for you -- it multiplies and divides and does a whole lot of statistical functions" is like putting them in an airplane and saying, "Here, this thing has controls to make it go up and down, left and right -- go fly it."
It is true that computer software tends to hide the internals of how arithmetic gets done, and as a result it's particularly easy to get into trouble. But the problems themselves have been understood since before computers were invented. What's changed is that you used to have to study in order to learn how to do mathematical computation at all, and in the process you might learn enough to avoid the problems. Now the software tends to be distributed without even a small-print warning that the problems exist and that you can get into serious trouble if you don't understand how things work.