I guess we'll just punish your brain, then.
They must have used a Neuralink then. Interesting feature.
There's actually a proposal to use Starships as space stations themselves. The fuel tanks could be turned into living quarters, and multiple Starships could be connected together to make a space station. No need to carry up modules; the Starships are the modules.
What would have been the alternative? If antimatter were repelled by the Earth, it would also be repelled by the Sun, by the center of our galaxy, and so on. So which way would it end up going? The very idea of relativity would be instantly refuted if you could reverse every gravitational effect in the universe: motion would end up being absolute after all. That would be a pretty unlikely outcome.
I wonder when someone is finally going to produce actual hardware neurons. Right now we're just simulating neural nets on classical computers. No matter how much they distribute the load over many cores, it's still just processors doing calculations. If we could have the silicon equivalent of actual neurons, that would be a major breakthrough.
Interesting how the article mentions the previous low volume in October 2020... that was right before Bitcoin climbed from $12,000 to $60,000.
And it could have worked just as well with just the top part handling both AC and DC. Tesla proposed this and implemented it in the Model S; it worked great and everybody could have adopted it, but nooooo... not invented here, too convenient. So Tesla had to outfit the Model 3 with those horrendous double plugs.
The European plug would not be so bad if they just used the AC pins for DC as well (like the old European Model S plugs did). But for some reason they decided to go with this ridiculously large double plug with separate parts for AC and DC; it's a total abomination.
I bet that DMCA complaint did wonders for the SDAROT website visitor count, though.
I always enjoy negative comments about new Apple products. Especially when I read them again a few years later. They age so well...
I've looked at optimized code, and while some optimizations do occur as you pointed out, there's still an awful lot of low-hanging fruit left. I once stepped through a fused multiply-add library call, which should have been a single instruction, but instead it was a function calling a function calling a function looking up the address of a function and then calling that, each of them saving a few registers on the stack. It was something like 50 instructions instead of one. (On macOS.)
And I've seen plenty of other instances where I could rewrite a loop to be 50% faster just by avoiding some simple recalculations and the like.
They shouldn't call it "new algorithms" though. It's micro-optimization, which is definitely a good thing but doesn't actually change how the algorithm works (if I understood correctly).
I do believe this has a lot of potential: high level languages are designed with human programmers in mind, preventing them from making mistakes but also adding way too much code in the process. Manually optimizing that bloated code for a whole app, one instruction at a time, is basically impossible for humans (without making mistakes) but would be easy for AI. It can turn nicely structured but overly complicated functions back into good old spaghetti code that runs faster while producing the same results.
The point is that they are optimizing the output from a C compiler. Contrary to the article, they are not developing new "algorithms" but optimizing low level code.
The optimum committee has no members. -- Norman Augustine