DfwRevolution wrote:
estorilm wrote:
I'm well aware of STS-51-F and the well-known "limits to inhibit" radio call. There are a million parts in that engine, and I have no clue how you're questioning the success of the design based on a bad batch of sensors from a random contractor. This is precisely why the engine is human-rated and has redundant backups, internal health monitoring, etc., including the ability of the flight controllers to inhibit automatic shutdowns on the remaining engines. Obviously it's a lot easier today with high-bandwidth data links, more elaborate plausibility protocols, engine management software, and so on.
400+ "engine-missions" and a single in-flight failure on one of the most complex liquid-fueled rocket engines (and first reusable) ever made? A design out of the 70s? It's a spectacular piece of engineering, and your "with a different roll of the dice" comment is ridiculous. You're flying 1/4 million-pound aircraft into space - everything is a "calculated" roll of the dice.
That the engine achieved a 99.7% demonstrated reliability and a 100% mission-success rate (most abort modes require multiple engine failures anyway) speaks for itself.
These paragraphs can be summarized with three words: normalization of deviance.
You're taking a small sample of data and reducing the outcome to "it didn't go boom." You're not considering how close to "boom" the SSME came on other missions. I used the phrase "roll of the dice" for a reason. The right SSME on STS-93 could have suffered a catastrophic failure, and loss of crew, had the liberated LOX post pin taken a different trajectory after it broke off.
I never said the SSME wasn't an amazingly high-efficiency machine. I never compared the SSME to anything SpaceX has developed. It doesn't particularly matter when they were developed. I am taking issue with the claim that they are "bulletproof." They aren't.
How is the shuttle program a small sample size? The Merlin comes to mind because there are nine of them on every Falcon 9 first stage, but still - it has far fewer parts and less thrust, and it first flew decades later. Even with the simpler design and lower thrust, the Merlin has had multiple in-flight failures, while the RS-25 has never had a catastrophic one. The RD-107 is indeed an incredible engine, but at roughly half the thrust its design would have been incompatible with the Shuttle and SLS, its much simpler design has a bearing on reliability, and it's expendable on top of that.
Normalization of deviance doesn't apply when you're talking about a mind-boggling number of parts that all have to work perfectly, or you get a "failure" and the statistical hit to match - and yet, after over 400 engine-missions, there was a single "failure." Those numbers speak for themselves. I don't know why you keep saying "well, it almost blew up" - given the context of our discussion, that comment seems out of place. Normalization of deviance would imply that every mission was a near-failure and that, after 400+ engine-flights, they just happened to stay lucky? Negative. You don't really get "lucky" with space programs - just ask the Russians about the N1.
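For what it's worth, here's the back-of-the-envelope math behind those numbers as a quick Python sketch - the 135 missions x 3 engines count and the rule-of-three bound are my own assumptions and approximations, not official NASA figures:

[code]
# Rough reliability arithmetic for the numbers above.
# Assumptions (mine): 135 Shuttle flights x 3 SSMEs = 405 engine-flights,
# 1 in-flight shutdown (STS-51-F), 0 catastrophic failures.

engine_flights = 135 * 3          # ~405 engine-flights over the program
in_flight_shutdowns = 1           # STS-51-F premature shutdown
catastrophic_failures = 0

demonstrated_reliability = 1 - in_flight_shutdowns / engine_flights
print(f"Demonstrated reliability: {demonstrated_reliability:.2%}")   # ~99.75%

# "Rule of three": with 0 catastrophic failures in n trials, an approximate
# one-sided 95% upper bound on the true catastrophic-failure probability is 3/n.
upper_bound_catastrophic = 3 / engine_flights
print(f"95% upper bound on catastrophic-failure rate: {upper_bound_catastrophic:.2%}")  # ~0.74%
[/code]

Point being: even a deliberately pessimistic read of a zero-catastrophic-failure record over ~405 engine-flights bounds the risk at well under 1% per engine-flight.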
No one is mentioning that this is a staged-combustion (closed-cycle) design, in addition to being a twin-shaft design with independent control of each turbopump. There really isn't anything else like it - the RD-180 is similar, though single-shaft, and it first flew two decades after the RS-25 and is neither reusable nor human-rated.
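And to make the "redundant sensors / limits to inhibit" point from the quote above concrete, here's a toy sketch of the concept. To be clear, this is not the actual SSME controller logic - the redline value, channel count, and voting rule are all placeholders:

[code]
# Toy illustration of redline monitoring with redundant sensor channels and a
# "limits inhibited" override. NOT the real SSME controller; values are made up.

HPFT_DISCHARGE_TEMP_REDLINE_K = 1060   # placeholder, not the real redline value

def redline_exceeded(readings_k, limit_k=HPFT_DISCHARGE_TEMP_REDLINE_K):
    # Majority vote across redundant channels, so a single failed-high sensor
    # (like the bad sensor batch on STS-51-F) can't trigger a shutdown by itself.
    votes = sum(1 for r in readings_k if r > limit_k)
    return votes > len(readings_k) // 2

def shutdown_command(readings_k, limits_inhibited):
    # With limits inhibited, the controller keeps the engine running and the
    # call is left to the crew and Mission Control.
    return redline_exceeded(readings_k) and not limits_inhibited

print(shutdown_command([900, 1500, 910], limits_inhibited=False))   # False: one bad channel is out-voted
print(shutdown_command([1200, 1500, 910], limits_inhibited=True))   # False: limits inhibited
[/code]

The real controller was obviously far more involved than this, but that's the basic idea behind why a bad sensor batch doesn't indict the engine itself.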
And yes, though they required significant overhauls and inspections between flights, there were 405 engine-flights in total, with only 46 RS-25 flight engines ever built.
In summary: it's not a Honda Civic - you're going to have sensor and component faults. Given its unprecedented design (the first staged-combustion hydrogen engine to fly), its human-rating, its efficiency, its reusability, and the era in which it was designed and developed, along with the fact that it achieved a 100% mission-success rate despite all that complexity and all those variables... personally, I believe this engine defines bulletproof. In a spaceflight context that's a relative term, not a blanket statement.