Sadly, these popular math 'geniuses' and child 'geniuses' never seem to do a damn thing that's truly notable.
Perhaps except Terence Tao, a famous math prodigy who also became an incredibly successful mathematician: "Such is Tao's reputation that mathematicians now compete to interest him in their problems, and he is becoming a kind of Mr Fix-it for frustrated researchers. 'If you're stuck on a problem, then one way out is to interest Terence Tao,' says Charles Fefferman [professor of mathematics at Princeton University]." Also Erik Demaine, who finished his PhD and became a professor at MIT at 20; he has a less impressive history than Tao, but still a fruitful career.
With cash, many transactions can happen between one checkpoint (a bank) and the next, and some bills simply don't pass through banks very often. Sometimes a bill just used for payment is handed out again as change. Sometimes a transaction happens in private, such as in internet auctions. Sometimes there's a vendor who doesn't go through banks much. Finally, shops simply don't record your identity, so even when the police trace the money back to a shop owner, the shop owner won't be able to tell which bad guy gave him a $20 bill with a certain serial number on what day.
In Bitcoin, every single transaction is traceable; it's just not as easy to identify the endpoints. Up till now there have been many ways to avoid being identified, but as more regulation and infrastructure are brought in, it's eventually going to be less anonymous than cash.
Unless you're being sarcastic, no. Hell no. When any power plant is first built there's absolutely no revenue and absolutely no kWh generated, so by that logic it's a loss and we'd never build any power plants. This is classic beancounter-level thinking, and I hope you are nowhere near running any business or voting on budget matters.
The point of having a power plant is to provide electricity, which, even if all you care about is money, supports economic output. The overall cost is the cost of the electricity it produces, and that is what justifies building this plant over another, weighed against other pros and cons.
If it works out that we're getting cheap (and clean / safe etc.) electricity after all, it was good to have built this power plant. Perhaps we can discuss how they should have priced the electricity to cover these costs, or where the original profits should have gone; building another power plant isn't a bad way to go, nor is funding the decommissioning of another plant. Either way we can't let this kind of stupid short-term accounting get in the way of providing essential long-term services.
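The "cost of the electricity it produces" point can be made concrete with a rough levelized-cost calculation. All the numbers below are made up purely for illustration, not real plant data:

```python
# Rough levelized-cost sketch: the plant's worth is judged per kWh
# delivered over its lifetime, not by the ledger on day one.
# Every figure here is invented for illustration.

build_cost = 2_000_000_000         # upfront construction cost, $
annual_running_cost = 50_000_000   # fuel, staff, maintenance per year, $
annual_output_kwh = 8_000_000_000  # electricity generated per year, kWh
lifetime_years = 40

total_cost = build_cost + annual_running_cost * lifetime_years
total_output_kwh = annual_output_kwh * lifetime_years
cost_per_kwh = total_cost / total_output_kwh

print(f"cost per kWh: ${cost_per_kwh:.4f}")  # $0.0125
# In year 0 the naive ledger shows a $2B loss and zero kWh;
# spread over the plant's lifetime, the same spending buys
# four decades of cheap electricity.
```

The year-0 "loss" and the lifetime cost per kWh describe the same spending; only the second number says anything about whether the plant was worth building.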
It's a lot of rather common techniques put together, but it's a meaningful result. The summary, however, is rubbish: on the same benchmark where this algorithm scored 98.52%, an older algorithm scored 96.33%, and there's a graph showing steady incremental improvements up to that point; saying that older results "never came close" is a blatant lie.
The methodology seems legit, in the sense that it's consistent with the methodology behind previously accepted results. On the other hand, I still have the same old concern about all machine-learning-style research. I believe the model is useful, but getting experimental results like this is largely about tweaking details to fit the same test data, and as soon as the accuracy hits a new high you can publish. There are certainly ceilings a model can't break, and the results do show that their model has a higher ceiling on this specific dataset than its predecessors, but that doesn't prove the algorithm is universally better than previous ones, let alone better than humans.
Time complexity aside, insertion sort makes sense to laymen; bubble sort, not so much. At least it isn't that much easier than mergesort, quicksort, and heapsort. There are ways to organize code that make those slightly more advanced algorithms easier to read.
Bubble sort has the problem that it is not at all obvious you'll end up with a sorted list; a layman only sees a lot of swaps, and it takes a bit of non-layman thinking to believe it works.
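To illustrate why insertion sort makes sense to laymen, here's a minimal Python sketch (the names and details are my own): it reads almost exactly like sorting a hand of cards, so the result being sorted is obvious at every step.

```python
def insertion_sort(items):
    # Reads like the layman's own method: take each new item
    # and slide it left until it sits where it belongs.
    result = []
    for item in items:
        pos = len(result)
        while pos > 0 and result[pos - 1] > item:
            pos -= 1
        result.insert(pos, item)
    return result

print(insertion_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

The invariant (`result` is always sorted) needs no proof to believe; with bubble sort the equivalent invariant is much less visible in the code.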
With heapsort you first use function names like InsertValue and RetrieveMinimumValue to encapsulate the voodoo; if the reader believes in these functions, he'll understand the purpose. The voodoo part is less straightforward, but it can still be organized in ways that are easier to understand. You can name operations at the proper level of abstraction and granularity: RemoveMinimum, FindNewMinimum, SwapParentChild, etc. A layman would most likely get stuck only at the lowest level, that is, understanding how FindNewMinimum works, and I don't think that part is harder than bubble sort.
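A sketch of what that layering might look like (Python; the snake_case names mirror the InsertValue / RetrieveMinimumValue / FindNewMinimum / SwapParentChild operations above, and the details are one possible arrangement, not the only one):

```python
def swap_parent_child(heap, parent, child):
    heap[parent], heap[child] = heap[child], heap[parent]

def find_new_minimum(heap):
    # The one genuinely "voodoo" step: sift the root down
    # until the min-heap property holds again.
    i, n = 0, len(heap)
    while True:
        smallest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < n and heap[child] < heap[smallest]:
                smallest = child
        if smallest == i:
            return
        swap_parent_child(heap, i, smallest)
        i = smallest

def insert_value(heap, value):
    # Append at the bottom, then sift up to the right spot.
    heap.append(value)
    i = len(heap) - 1
    while i > 0 and heap[(i - 1) // 2] > heap[i]:
        swap_parent_child(heap, (i - 1) // 2, i)
        i = (i - 1) // 2

def retrieve_minimum_value(heap):
    minimum = heap[0]
    heap[0] = heap[-1]
    heap.pop()
    if heap:
        find_new_minimum(heap)
    return minimum

def heapsort(items):
    # The top level a layman can take on faith: put everything
    # in, take the minimum out repeatedly.
    heap = []
    for item in items:
        insert_value(heap, item)
    return [retrieve_minimum_value(heap) for _ in range(len(items))]

print(heapsort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

Reading top-down, only `find_new_minimum` requires any real thought; everything above it is believable from the names alone.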
Mergesort and quicksort are more easily understood by laymen iteratively, going bottom-up, than top-down by recursion. With abstraction similar to the above you can reduce both of them to a structure that makes sense to a layman except possibly at the bottom level; I personally think that for mergesort even the bottom level is quite straightforward (merging sorted lists), while with quicksort the proof lies slightly more outside the scope of the code.
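For instance, a bottom-up mergesort can be sketched like this (Python; one illustrative way to write it, not the only one):

```python
def merge(left, right):
    # The only "bottom level" step: interleave two sorted lists.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

def mergesort_bottom_up(items):
    # Start from one-element runs (trivially sorted), then
    # repeatedly merge neighbouring runs until one remains.
    runs = [[item] for item in items]
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0] if runs else []

print(mergesort_bottom_up([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

No recursion needed: the loop visibly halves the number of runs each pass, and each run is sorted by construction, which is the whole argument a layman has to accept.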
These, of course, are not the most desirable implementations for professional programmers.
"Ninety percent of baseball is half mental." -- Yogi Berra