I (had (no (problem)) reading (this (summary)))
A railgun LITERALLY cannot put something into orbit. It can do suborbital trajectories, and it can theoretically do escape trajectories, but orbits are impossible. You end up with an apogee of however high, and a perigee at SEA LEVEL. At bare minimum, you need a rocket to raise your perigee above the atmosphere. And since the ram pressure during launch would be so tremendous, in practice I doubt a railgun can be built that breaches the Kármán line with a worthwhile payload.
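To make the perigee point concrete, here's a rough two-body sketch (point-mass Earth, no atmosphere, purely illustrative): any coasting trajectory that starts at the surface passes through the surface, so its perigee can never be above the launch altitude, no matter how fast or how steep the shot.

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def perigee_of_surface_launch(speed, flight_path_angle_deg):
    """Perigee radius of a coasting trajectory starting at the surface.

    Elliptical (sub-escape) case only; point-mass Earth, no atmosphere.
    """
    gamma = math.radians(flight_path_angle_deg)
    r, v = R_EARTH, speed
    energy = v * v / 2 - MU / r            # specific orbital energy (negative here)
    a = -MU / (2 * energy)                 # semi-major axis
    h = r * v * math.cos(gamma)            # specific angular momentum
    e = math.sqrt(1 + 2 * energy * h * h / MU**2)
    return a * (1 - e)                     # perigee radius

# Fast, steep railgun shots: apogee can be high, perigee stays at/below the surface.
for v in (6000.0, 7500.0):
    for gamma in (0.0, 30.0, 60.0):
        rp = perigee_of_surface_launch(v, gamma)
        print(f"v={v:.0f} m/s, gamma={gamma:.0f} deg: perigee altitude {rp - R_EARTH:.0f} m")
```

Every printed perigee altitude comes out at or below zero, which is the whole problem: without a circularization burn above the atmosphere, the "orbit" intersects the ground.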
Skylon is a very, very high-risk proposal. An air-breathing rocket needs to maximize the time it spends in the atmosphere, so it follows a much more depressed trajectory and reaches far, far higher speeds while still in the air. The end result is that the thermal load placed on the craft is tremendous. And because Skylon burns LH2, it's an extremely bulky vessel as well, which exacerbates the problem.
SpaceX is following a calculated path. The first-gen rocket was tiny, and completely conventional. The second-gen rocket used a conventional layout, conventional fuel, and a conventional combustion cycle, but it did everything extremely well. Its engine has the highest thrust-to-weight ratio of any rocket engine, and the highest specific impulse of any gas-generator LOX/RP-1 engine. They've been slowly pushing towards first-stage reusability, which gets you 90% of the benefits of SSTO with none of the drawbacks. Their planned third-gen rocket is absolutely massive, uses a cutting-edge combustion cycle, and burns an innovative fuel. They aren't instantly revolutionizing the field, but they're on track to do it.
What alternatives are there?
Railguns (or any other sort of ground-based propulsion) physically cannot put something into orbit, and practically speaking, they can only provide a moderate boost. You'd still need at least a full rocket stage, possibly two, to actually get to orbit.
Ground-based laser propulsion might work, but it's never been tested at scale. And if there's a catastrophic failure, it will be in the ground-based components, not the vessel. In other words, the big explosion will be on the ground instead of in the air.
Air-breathing rockets are even more dangerous than traditional ones, since they spend more time in the atmosphere at extreme speeds and temperatures.
Electric rockets (ion engines of various types) don't have the thrust to break out of Earth's gravity.
Nuclear thermal rockets might work, but they won't be much safer IMO.
The problem ultimately boils down to "it takes too much energy to get to space". You're going from, at best, the 400-something m/s of Earth's rotation to Mach 20+. That takes a lot of energy, and so we have to use very energetic means to do so.
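As a back-of-envelope check (assumed round numbers: ~465 m/s equatorial rotation as the free head start, ~7.8 km/s circular LEO speed, 200 km altitude, ideal case with no gravity or drag losses):

```python
# Ideal energy budget per kilogram delivered to low orbit.
G0 = 9.81          # m/s^2, surface gravity
V_ROT = 465.0      # m/s, equatorial rotation speed (the "400-something" head start)
V_ORBIT = 7800.0   # m/s, roughly circular LEO velocity
ALT = 200e3        # m, a low orbital altitude

kinetic = (V_ORBIT**2 - V_ROT**2) / 2     # J/kg of kinetic energy to add
potential = G0 * ALT                       # J/kg, constant-g approximation
total_mj = (kinetic + potential) / 1e6
print(f"~{total_mj:.0f} MJ per kg, before gravity and drag losses")
```

That works out to roughly 32 MJ per kilogram of payload even in the ideal case, which is why every practical launch system ends up being some flavor of "controlled bomb".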
Oh hey, thanks for updating the one I posted in a past article. I was wondering why it seemed so familiar.
IIRC, Stage III failures are responsible for a very high percentage of launch failures.
Well, let's see what the statistics have to say.
I ignored failures of the payload, non-propulsion systems, or any failures where I could not identify which stage failed. Failures of staging mechanisms were rounded up to the higher stage. Flights with missing stages were still counted. All variants of each rocket family were included - Soyuz includes R7 launches, Thor/Delta includes Japanese licensed derivatives.
Soyuz: 25 first-stage failures, 18 second-stage failures, 29 third-stage failures
Proton: 9 first-stage failures, 11 second-stage failures, 17 third-stage failures
Ariane: 4 first-stage failures, 0 second-stage failures, 4 third-stage failures
Thor/Delta: 5 booster failures, 22 first-stage failures, 16 second-stage failures, 11 third-stage failures
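Tallying those up (a quick sketch; the counts are just the ones transcribed above, with Thor/Delta's strap-on boosters kept as their own bucket):

```python
# Per-stage failure counts from the tally above.
failures = {
    "Soyuz":      {"stage1": 25, "stage2": 18, "stage3": 29},
    "Proton":     {"stage1": 9,  "stage2": 11, "stage3": 17},
    "Ariane":     {"stage1": 4,  "stage2": 0,  "stage3": 4},
    "Thor/Delta": {"booster": 5, "stage1": 22, "stage2": 16, "stage3": 11},
}

for family, counts in failures.items():
    total = sum(counts.values())
    share3 = counts["stage3"] / total
    print(f"{family}: {total} failures total, {share3:.0%} in the third stage")
```

The third-stage share lands around 40-50% for Soyuz, Proton, and Ariane, with Thor/Delta the outlier at about 20%.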
Soyuz and Thor racked up a lot of early failures of the first stage, which seems attributable to simple inexperience (each had one failure from failing to fill the fuel tanks before launch). Even compensating for that, it seems the first stage is a dangerous stage, prone to failure.
However, your statement does seem to be borne out - the third stage does look disproportionately dangerous. Oddly, it is often control of the third stage that fails, not always the rocket itself.
The Proton rocket has gone through a number of redesigns over its long life. The latest version, the Proton-M, first flew in 2001, and they kept flying the Proton-K for many years afterward (for reasons I actually don't know). They've only done 90 flights of the Proton-M, and half of them were in that post-2010 period of "repeated failures" (although they had about as many failures throughout the 2000s as well).
I would highly expect the faulty pump to have been redesigned with the Proton-M modifications, based simply on that analysis.
Okay, I know it's probably the least important thing about the craft, but still...
Why are they using such an ancient, decrepit-ass rocket motor? The AR2-3 is incredibly old - it dates back to a Gemini-era trainer, basically a modded F-104 that NASA used for early tests and training for spaceflight. It was made back when rocket chemistry was still in the "even the experts don't know much" stage, so it burned jet fuel and high-test peroxide (90%+ H2O2 in H2O).
Jet fuel is not good for rockets - basically, the restrictions on what compounds can be present are fine for jet engines, but lead to horrible problems with rockets. There's a specific petroleum-product blend designed for rockets, called RP-1, which clamps down on things like sulfur compounds and alkenes that love to gum up the works. This engine was originally used on a jet fighter and shared fuel with it, so that was understandable... but the USAF recertified the engine for modern JP-8 instead of the old JP-4. So they already went through the effort of making it work with a new but similar fuel. Unless the X-37 hides a jet engine somewhere, I don't see why they couldn't have used RP-1 instead.
Further, rocket science moved away from peroxide for a reason - it's dangerous. It will explode for basically any reason - peroxide decomposes exothermically, so once it starts reverting to H2O + O2, it's nearly impossible to stop. And it reacts with tankage surprisingly often. Oh, and it does horrible things to your specific impulse, which really hurts you on a last-stage engine like this one.
Honestly, using the engine at all is a weird choice. Sure, maybe they had some lying around... from the sixties... but that's like putting an F-104 engine in a prototype aircraft - it just doesn't seem right when other off-the-shelf systems work better. An AJ-10 would have worked beautifully. An RL10 might not have fit the aero package (hydrogen is a bulky fuel), but would have given them some impressive dV if they wanted it. Aestus would be a perfect match as well, if they didn't mind outsourcing to the Euros. Even Kestrel would work (although it first flew around the same time as this, so understandable not to use it). Point is, they had options, and being the Air Force, they could easily have had an engine custom-made if they so wanted.
So what are the implications? All I can think of is a) they don't care how badly the rocket performs, b) they probably aren't going to keep that engine in whatever "production" version they build, or c) they have some other reason to use peroxide or JP8. Maybe peroxide is also their monoprop for RCS? That isn't really worth it though, particularly when UDMH works better as RCS and in the main motor.
Do you understand the difference between "has to be" and "can be"?
Steam is in charge of downloading and installing the game. After that, you can launch it directly.
If, and only if, the game was coded to additionally use Steam's DRM features, it will then check that Steam is running and attempt to authenticate (which can be as simple as the local Steam instance having a cached authentication).
If it doesn't use Steam's DRM, it will just run as a regular old executable. Steam does not mandate ANY DRM, it only mandates that if you use non-Steam DRM, you have to make a note of it on your store page.
I have dozens of games bought and downloaded on Steam that do not touch Steam's DRM. I've actually copied some of them over to other computers and had them still work without Steam even being installed.
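The launch flow described above boils down to something like this (purely illustrative sketch; the function and field names are made up for the example, not Steam's actual API):

```python
from dataclasses import dataclass

@dataclass
class Game:
    path: str
    uses_steam_drm: bool  # whether the developer opted into Steamworks DRM

def steam_running() -> bool:
    # Stub: in reality this would check for a running local Steam client.
    return True

def authenticate(game: Game) -> None:
    # Stub: can be as simple as a cached local credential check.
    pass

def run_executable(path: str) -> str:
    # Stub standing in for actually spawning the process.
    return f"running {path}"

def launch_game(game: Game) -> str:
    """Only games that opted into Steam's DRM ever touch Steam at launch."""
    if game.uses_steam_drm:
        if not steam_running():
            raise RuntimeError("Steam must be running for this title")
        authenticate(game)
    return run_executable(game.path)  # DRM-free titles reach here with no checks

print(launch_game(Game("game.exe", uses_steam_drm=False)))
```

The point is the first branch: for a DRM-free title there is no Steam check at all, which is why copying the files to another machine just works.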
Rust is not yet production-usable. It has enough known bugs in the tracker that I can't even contemplate using it for a personal project, let alone for real.
And yet they're already pushing the marketing, proclaiming it a guaranteed C-killer. I'm sorry, but they've said that about every compiled language since C, and it hasn't been true for a single one of them. And you're pushing it this hard while you're still this early in development?
Nobody uses C or C++ because they love the language. They use it because it has all the tools they need to debug, all the libraries they need to run, and all the performance they need for the task. Rust maybe has the last one, but it only gets the second by being C-compatible (defeating the purpose of using a new language, particularly when you have to write that much wrapper code around it), and it has none of the infrastructure needed for large modern projects.
Dear Rust devs: stop writing articles about how great Rust will be, and start writing stuff to make your language actually usable. Maybe then people with their heads outside their asses will listen to you.
Steam itself doesn't universally apply DRM - a large number of games on Steam don't have DRM at all, you can just copy the files to wherever you want and run them.
They do offer their own DRM, which is about as non-intrusive as you can get while still being DRM, and they allow publishers to include their own DRM as long as it is noted on the store page. You can be mad about games using DRM, but Valve isn't the one to be angry at.
PS: Valve's talked about issuing a patch to disable the DRM if they ever go out of business. Realistically that probably won't happen (too many licensing problems), but the DRM is trivially bypassed as long as you have the game downloaded already, so they really can't stop it.
Sadly, their brag of "only five years behind" is an underestimate. It's a 65nm chip - that node's heyday was 2006-2007, on tail-end Pentium 4s, early Core 2s, and Phenoms. 45nm hit in 2008, followed by 32nm in 2010. In 2012 Intel hit 22nm, while most others were on a 28nm half-node. Currently, 14nm is shipping from some vendors, and the rest are gearing up for it.
Account for the fact that these chips most likely won't actually be delivered until 2016, and you'll see they're really 10 years behind, not 5. That will probably still be fine for desktops or industrial use, but mobile is out, and servers will be very inefficient compared to modern ones.
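The arithmetic, using the node-debut years cited above (the 14nm and delivery years are my assumptions for "currently" and "most likely 2016"):

```python
# Rough first-volume years for each logic node, per the timeline above.
node_debut = {65: 2006, 45: 2008, 32: 2010, 28: 2012, 22: 2012, 14: 2014}

delivery_year = 2016          # assumed actual delivery of these 65nm chips
lag = delivery_year - node_debut[65]
print(f"{lag} years behind the 65nm debut")
```

Ten years, not five, measured from when 65nm was actually state of the art.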
SpaceX is getting some of the benefits of skipping the LAS, by using the same system for at least two tasks.
The primary use is as a propulsive landing system, and that's how they'll be used on most flights. There's a backup parachute system, but they want powered landings to be the norm.
The secondary use is as an abort engine. That mode will probably rarely be used, and I think an abort burns through all the fuel, so an aborted capsule would have to come down on parachutes - rougher landings, but still plenty survivable. This way, they won't be carrying fuel that isn't used in some way during the flight.
A third possible use is as an in-flight maneuvering system. This is mostly done using the smaller Draco engines, not the big SuperDracos, but they run off the same fuel supply and are mounted in the same pod. But if they ever need to do significant orbital maneuvers, I expect they'll light up the SuperDracos.
Because they're trying something new with it. They're using the same set of engines for emergency escape as they are for propulsive landing of the capsule. That's fairly innovative in and of itself, and the changes required for that (side rockets instead of a top-mounted tower) let it also be used for a longer period of the flight.
As always, the answer is "it depends on a lot of things":
1. Is the language little-used because it's a special-purpose language? UnrealScript probably doesn't crack the top twenty as far as general programming languages go, but in the game dev field it's probably one of the biggest. Using a specialized language for a specialized task is fine - usually even a good thing.
2. Is the language little-used, but library-compatible with a more common language? Clojure is a rare language, but it can call Java libraries and code, which is a massive boon. Programming languages themselves don't matter so much as the libraries they let you use, and if you can piggyback on a bigger ecosystem of libraries, you can go far with a small, obscure language. This isn't sufficient to make the language OK to use, because:
3. Is the project going to be worked on by more than one person? Personal projects, sure, use whatever language you feel like. Small groups can decide what to use. But if it's a big project that's likely to cycle through developers, think about the impact using an uncommon language will have.
4. Is there something about your problem that makes common languages inefficient or ineffective? Is the uncommon language objectively better at the exact task you're solving? Or is it just "the syntax is slightly cleaner"? This isn't a full deciding factor, but unless the language shows promise as being useful in the future, I wouldn't use it on a personal project.