There was nothing telling the pilots what it was, as the light wasn't there. By the time they flicked through 200-odd pages to find the answer to something that even ET's MAX simulator (which Boeing has now admitted didn't behave like a MAX in flight) couldn't teach them, they were dead.
To me the question is very simple (Boeing made it simple by insisting that there were only superficial differences between the 737NG and 737MAX, such as the position of some switches, etc.).
So the question is: had the Lion Air and ET crews been sitting in a 737NG rather than a 737MAX, with everything else the same - the AoA failure, the flight parameters, and their responses - WOULD THEY HAVE CRASHED?
I think the answer to this is pretty simple: they would not. So they were trained for the safe operation of the NG, but not for the safe operation of the MAX. The key Boeing argument of "no additional training" goes down the toilet. The longer they insist on it, the worse they look.
The question of pilot training is immaterial in this case if it is established that the same actions in the NG would not have resulted in an accident, while in the MAX they resulted in loss of life.
However, in the hypothetical situation where they had a runaway stabilizer on the NG along with the same other parameters, I believe they still would have crashed. I don't think you can say they were well trained for the safe operation of the NG. I think it is the incidence of runaway stabilizer being a few orders of magnitude higher on the MAX, due to MCAS, that makes your statement appear true.
I wouldn't trivialise a 'few orders of magnitude' increase in the incidence of runaway trim. Functional safety analysis is all probabilistic - we rarely talk in absolutes.
Let's say you're doing a Layers Of Protection Analysis (LOPA) or a bow-tie analysis. You have an initiating event frequency, which is where the MAX appears to differ significantly from the NG. You then have risk controls, each with an associated Probability of Failure on Demand (PFD, for a low-demand application) - these reduce the risk level. The risk controls can be split into two types: those that prevent a loss of control (the left-hand side of the bow tie) and those that mitigate or prevent a loss of control turning into a hazard (the right-hand side). You want sufficient independent layers of risk controls that a) ultimately get the residual risk down to an acceptable level and b) are demonstrably As Low As Reasonably Practicable (ALARP).
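The arithmetic behind this is just multiplication, which can be sketched as follows. All numbers here are illustrative placeholders of my own, not figures from any real aircraft safety case:

```python
# Minimal sketch of the LOPA arithmetic: residual (mitigated) frequency is
# the initiating event frequency multiplied by the PFD of each independent
# protection layer. All figures are invented for illustration.

def mitigated_frequency(initiating_event_freq, pfds):
    """Multiply the initiating event frequency by the Probability of
    Failure on Demand (PFD) of each independent protection layer."""
    freq = initiating_event_freq
    for pfd in pfds:
        freq *= pfd
    return freq

# Hypothetical: an initiating event at 1e-5 per flight hour, reduced by an
# engineered control (PFD 0.01) and a crew action credited at PFD 0.1.
residual = mitigated_frequency(1e-5, [0.01, 0.1])
print(f"{residual:.0e}")  # 1e-08 per flight hour
```

Each layer only earns its PFD if it is genuinely independent of the others - a common-cause failure (such as a single AoA sensor feeding several systems) collapses the multiplication.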
Looking at this from my armchair, there are big issues here:
1. An initiating event frequency going up by several orders of magnitude is a real problem. It will likely drag the residual risk out of the acceptable or tolerable zone, and if that's an increase compared with a previous model, you'll have a hard time proving ALARP. You need to get the risk back down: either by reducing the initiating event frequency, by beefing up existing risk controls to make them more effective (lower PFD), or by introducing more independent risk controls.
2. Risk controls are never assumed to be perfect, and in the industry I work in we would rarely consider people (no matter how skilled and educated) as the sole safety-critical risk control. A person-based control must be part of a series of layers that together bring the residual risk down to an acceptable level. That means we typically allow, at best, a PFD of 0.1 for a human-based risk control - in other words, we expect people, at best, to get a risk control wrong once in every ten attempts when required to perform that task in an emergency to prevent a problem turning into something worse. The human-based risk control should be just one of many layers, so that its failure doesn't by itself result in a significant consequence.
We would have a hard time justifying a human-based control with a PFD of less than 0.1. If the safety case has a low initiating event frequency and multiple layers of protection, and you end up with an acceptable residual risk, then that's fine - in other words, in the NG case, people can be a long way from perfect in conducting the manual trim checklist and there is still an acceptable safety argument.
However, if your initiating event frequency goes up by an order of magnitude or more, we'd have an extremely hard time making the argument that a human-based control could be beefed up sufficiently for the system to still be considered safe - legislation, best practice and precedent would likely prevent that argument being accepted.
If you've got a situation where an initiating event frequency has gone up by several orders of magnitude, your main options for making a successful safety argument, from my perspective, are to get the initiating frequency back down or to introduce additional safety-rated engineered risk controls. Expecting people to achieve a PFD of 0.01 or 0.001 when conducting a safety-critical risk control in an emergency is just not done - I've never seen it accepted.
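The effect can be shown numerically. Again, every figure below is invented for illustration - the tolerable target, the initiating frequencies, and the layer PFDs are my own assumptions, not certification data:

```python
# Illustrative only: invented frequencies and an assumed tolerable target.

def mitigated_frequency(freq, pfds):
    # Residual frequency = initiating frequency x product of layer PFDs.
    for pfd in pfds:
        freq *= pfd
    return freq

TARGET = 1e-7  # assumed tolerable residual frequency, per hour

layers = [0.01, 0.1]  # an engineered control, plus crew action at PFD 0.1

baseline = mitigated_frequency(1e-6, layers)  # rare initiating event
print(baseline < TARGET)  # True: residual ~1e-9 meets the target

degraded = mitigated_frequency(1e-3, layers)  # three orders of magnitude up
print(degraded < TARGET)  # False: residual ~1e-6 busts the target

# With the engineered layer unchanged, the crew PFD needed to recover
# the target would be 0.01 - ten times better than the ~0.1 typically
# credited to a human action in an emergency.
required_crew_pfd = TARGET / (1e-3 * 0.01)
print(f"{required_crew_pfd:.0e}")  # 1e-02
```

The same layers that comfortably satisfy the target at a low initiating frequency fail it once the frequency rises a few orders of magnitude, and the human PFD needed to recover is exactly the kind of claim that assessors reject.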