The only reason anyone is talking about this is that we’ve made it too hard to build new power-generating capacity on Earth.
When something goes badly wrong with a nuclear power plant, the entire human population sees an uptick in cancer rates and a chunk of the planet gets declared uninhabitable for 10,000 years.
That's true only of fission reactors, but TFA is talking about fusion reactors. Aside from the radiation being much, much less, it has a much, much shorter half-life. Additionally, the chances of something going horribly wrong are much, much lower, since fusion reactors can't have a runaway chain reaction.
Half the purpose of the entire practice of engineering is exactly that: making the reliable thing that you need from the unreliable things that you have.
TCP/IP being the obvious example.
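The idea can be sketched in a few lines. This is a toy simulation, not TCP itself: `unreliable_send` and `reliable_send` are invented names, and the loss rate is arbitrary. The point is only that retransmit-until-acknowledged turns a lossy link into a dependable one.

```python
import random

def unreliable_send(packet, loss_rate=0.4):
    """A lossy channel: ~40% of sends silently vanish, the rest are acked."""
    return None if random.random() < loss_rate else f"ack:{packet}"

def reliable_send(packet, max_retries=50):
    """Retransmit until acknowledged -- reliability built on an unreliable link."""
    for attempt in range(1, max_retries + 1):
        if unreliable_send(packet) == f"ack:{packet}":
            return attempt  # how many tries delivery took
    raise TimeoutError("link appears to be down")

# Every message eventually arrives, even though individual sends often fail.
attempts = [reliable_send(f"seq{i}") for i in range(100)]
print(sum(attempts) / len(attempts))  # average tries per delivered packet
```

Real TCP adds sequence numbers, timeouts, and congestion control on top, but the core trick is the same loop.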
The booster has worked properly through stage separation on all but the first launch, and has had 8 landing attempts that would have been successful apart from things like GSE problems or extreme descent profiles meant to push its limits. The final three Block 1 Starships made it to near-orbit successfully and survived reentry to splashdown. The initial Block 2 Starships had some trouble, but the final three all made it to near-orbit, and the last two both survived reentry and splashed down successfully.
You have to keep in mind that this is a development project and they are improving the design with each test flight; they're not just failing over and over again, or having a small number of successful flights at random. Even the Block 3 ships are not the final planned iteration. SpaceX intends to mass produce these and fly them at an incredible rate. If you think about other things that are mass produced, like cars, manufacturers make tons of prototypes and release candidates before they settle on the final version and tool up for mass production. What SpaceX is doing with Starship is no different. They really want it to be as inexpensive and reliable as possible. It's nothing like any space development project that's come before.
The Apple Calculator leaked 32GB of RAM. Not used. Not allocated. Leaked.
First, AFAIK, leaking memory means you allocate it but don't deallocate it. So how can he say "not allocated"?
Second, leaked how? If it's leaking 32GB of RAM on, say, every keystroke, that would be serious; but if it allocates 32GB of RAM once at start-up and simply forgets to deallocate it before termination, it doesn't matter, since the OS reclaims the entire process's RAM on exit.
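The distinction is easy to show with a toy sketch (this is invented Python, not the Calculator's actual code; the class and method names are hypothetical). Memory retained per event grows without bound as the app is used; a one-time startup allocation stays constant and is reclaimed when the process exits.

```python
class App:
    """Hypothetical app contrasting the two memory patterns discussed above."""
    def __init__(self):
        self.scratch = bytearray(1024 * 1024)  # one-time 1 MiB allocation at startup
        self.history = []                      # retained per event: the dangerous pattern

    def on_keystroke(self, key):
        # 1 KiB retained on every keystroke and never released -- a true leak
        # in the "grows with use" sense.
        self.history.append(bytearray(1024))

app = App()
for _ in range(1000):
    app.on_keystroke("7")

print(len(app.scratch))          # constant, regardless of activity
print(len(app.history) * 1024)   # grows with every keystroke
```

Tools like `tracemalloc` (Python) or leak checkers in other languages distinguish exactly these cases: "still reachable at exit" is usually harmless, while unbounded growth during use is the real bug.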
Today's real chain: React > Electron > Chromium > Docker > Kubernetes > VM > managed DB > API gateways.
OK, those are a lot of layers of abstraction, and each uses memory, perhaps a lot of it. He has a point that modern software tends to use too many layers, but that doesn't mean any of that memory is leaked: it's just used.
Based on that part of his rant, is he really complaining about the 32GB size of the (alleged) leak in the Calculator app, i.e., why should a calculator need 32GB? Complaining that a calculator uses 32GB is valid, but that's not a leak; it's just inefficiency or laziness on the programmer's part.
They are initially streamed on Twitter, but they still post them to YouTube afterward.
They haven’t completed an orbit because they want to be absolutely certain they can deorbit it reliably, as it is not demisable. They have clearly demonstrated that it can reach orbit and survive reentry consistently.
All those goals are reasonable when you consider the assembly lines they are building and their success with recovering the first and second stages. Consider the launch rate of Falcon 9 and then consider the fact that they are building twice as many launchpads while designing the boosters to be immediately reflown.
The only real question is whether they will have the same initial teething problems with their third generation of the rocket that they did with the first two, but I doubt they will.
Single tasking: Just Say No.