Moderators: richierich, ua900, PanAm_DC10, hOMSaR
LabQuest wrote:Bonin was a bonehead from the get-go.
Starlionblue wrote:LabQuest wrote:Bonin was a bonehead from the get-go.
While certainly crew decisions were less than optimal, eventually leading to the loss of the aircraft, I try not to judge them too harshly.
It was the middle of the night over the ocean. They were fatigued. They had not trained for anything like this. Looking at the CVR transcript, it seems clear that disorientation set in rather quickly.
Did they make very poor decisions? Certainly. It is easy to judge them sitting here at zero airspeed and zero altitude. Tired and bored in the cockpit, back of the clock, with not much going on, is a different situation. The brain does funny things to you.
Starlionblue wrote:We need to be good at decision making first, and many of our decisions are concerned with automation use.
N1120A wrote:Starlionblue wrote:We need to be good at decision making first, and many of our decisions are concerned with automation use.
And this shows that a well-meaning but poorly applied mentality still exists. If you lose everything, you hand fly a known pitch and power setting and let the other pilots fix everything else. Expecting George to do it all is just a recipe for disaster, and for flying a perfectly flyable airplane into a stall from above 30,000', like this one. Sriwijaya 182 involved a similar mentality - and in a much older airplane with a much less robust autopilot. The first instinct should be to revert to the most difficult case: hand flying the airplane with nothing but a compass and maybe an AI.
Etheereal wrote:Starlionblue wrote:LabQuest wrote:Bonin was a bonehead from the get-go.
While certainly crew decisions were less than optimal, eventually leading to the loss of the aircraft, I try not to judge them too harshly.
It was the middle of the night over the ocean. They were fatigued. They had not trained for anything like this. Looking at the CVR transcript, it seems clear that disorientation set in rather quickly.
Did they make very poor decisions? Certainly. It is easy to judge them sitting here at zero airspeed and zero altitude. Tired and bored in the cockpit, back of the clock, with not much going on, is a different situation. The brain does funny things to you.
And that was precisely the problem: nobody believed an Airbus could stall like that. This accident, and later AirAsia 8501, showed that an Airbus can definitely stall in ALT LAW.
ThirtyWest wrote:Tomorrow marks the 13th anniversary of the crash of AFR447. The tragedy was one of those watershed moments in aviation safety and not only revealed hidden shortcomings in human factors understanding and training, but also stirred a fascinating and at times somewhat forensic debate around the role of automation: whether, on balance, the widespread, full use of autoflight systems around the world has saved more lives than would otherwise have been lost in accidents due to pilots' flightpath mismanagement during manual flight (e.g., Asiana 214).
As regards automation use, most airline pilots around the world (AFAIK) are trained to use the minimum level of automation necessary to unload themselves for other flight management tasks. That of course varies across individuals and situations. Most seem satisfied with the amount of discretion given to pilots under these policies.
As a member of the US aviation safety community, my impression is that US attitudes toward automation usage on the whole are somewhat insular and, to use one example, do not account for the fact that, worldwide, airline pilots aren't routinely and frequently hand-flying visual approaches using visual separation into LHR, ICN, AMS, DXB, etc. And then there's the fact that many RNAV/RNP SIDs, STARs, and even the newest approach procedures, now and in the future, are expected to be flown with the higher levels of automation because the airplane is better than the pilots at maintaining a precise lateral track at the required navigation performance.
I've covered a lot of ground here.* But ultimately, here are two things for discussion:
(a) I'd love to hear from the tech community here on your views as to how the industry has implemented the various human factors and other lessons learned from the AFR447 tragedy specifically. Will we ever completely close the book on AFR447? Or have we already?
(b) What are your thoughts on any divergence between US and global perspectives on automation use and human factors, among both airlines and regulators?
Thanks in advance!
*I think it's probably best not to bring the 737 MAX into this discussion, as that situation involved much, much more than what I've outlined above, although it too produced extremely valuable human factors lessons regarding the design of flight decks and airplane systems, highlighted in the NTSB's safety recommendations report on the accidents.
rigo wrote:
I'm by no means an expert, but my understanding is that on AF447 the FBW system switched off due to frozen pitot tubes and left two inexperienced pilots to fly manually, without them realising what was actually happening. Had the FBW been functional, it would have prevented the stall and thus saved the aircraft and everyone on board.
So from this point of view, the accident was due to a lack of working automation, not excessive automation.
hitower3 wrote:rigo wrote:
I'm by no means an expert, but my understanding is that on AF447 the FBW system switched off due to frozen pitot tubes and left two inexperienced pilots to fly manually, without them realising what was actually happening. Had the FBW been functional, it would have prevented the stall and thus saved the aircraft and everyone on board.
So from this point of view, the accident was due to a lack of working automation, not excessive automation.
Dear Rigo,
Just a small correction: the A330 operating AF447 never "switched off FBW". The only change vs. normal operation of the flight controls was the degradation from Normal Law to Alternate Law, reducing the level of envelope protection. The aircraft remained controlled "by wire" even in this situation.
BR Hendric
rigo wrote:That's what I tried to say: the envelope protection system that would have saved the aircraft was not working (or not fully). It was just my 2 cents on the debate about whether there should be more or less automation.
gloom wrote:rigo wrote:That's what I tried to say: the envelope protection system that would have saved the aircraft was not working (or not fully). It was just my 2 cents on the debate about whether there should be more or less automation.
I guess your answer highlights two approaches to automation. Not only FBW in aircraft, but as a wider question, perhaps for the automotive industry and others too.
Do we want computers to offload work and allow humans to concentrate on non-standard situations, or should the computer assist always be there?
So far both experience and reliability point to work offload. We are not yet ready to go further. I guess AF447's fate, and subsequent upsets with the MAX for example, made us learn that.
That's also why everyone above underlines the importance of high-altitude training.
The computers are here to help pilots through the boring phases of flight, not to help them in the difficult cases. Computers will be ready for that in a few years - maybe 10, maybe 40; I guess we're not quite sure when. But a computer will definitely not help you now, in every possible situation.
This is the real lesson we all learned, I guess. And are still learning.
Cheers,
Adam
phugoid1982 wrote:Poor training at AF, which seemed to be a systemic problem, and French arrogance w.r.t. the investigation played a role, just like with the AF 4590 Concorde crash, whose 22nd anniversary is coming up in less than two months. Yeah, nothing wrong with taking off into a tailwind, with less than a fully usable runway, a missing spacer, and oh... overweight!
kalvado wrote:gloom wrote:rigo wrote:That's what I tried to say: the envelope protection system that would have saved the aircraft was not working (or not fully). It was just my 2 cents on the debate about whether there should be more or less automation.
I guess your answer highlights two approaches to automation. Not only FBW in aircraft, but as a wider question, perhaps for the automotive industry and others too.
Do we want computers to offload work and allow humans to concentrate on non-standard situations, or should the computer assist always be there?
So far both experience and reliability point to work offload. We are not yet ready to go further. I guess AF447's fate, and subsequent upsets with the MAX for example, made us learn that.
That's also why everyone above underlines the importance of high-altitude training.
The computers are here to help pilots through the boring phases of flight, not to help them in the difficult cases. Computers will be ready for that in a few years - maybe 10, maybe 40; I guess we're not quite sure when. But a computer will definitely not help you now, in every possible situation.
This is the real lesson we all learned, I guess. And are still learning.
Cheers,
Adam
I would say a hard no.
Do we want the crew to be able to fully take over control in the most challenging situation? OK, then we need a standby navigator, flight engineer and radio operator to share the workload with the pilots. Otherwise, automation must do the best it can to keep a limited crew within workload limits. Increased productivity is one of the big points of automation.
I would describe the AF447 situation as the computer becoming disoriented. That can totally happen with a human pilot as well. What happened next is that the disoriented computer passed control to an equally - if not more - disoriented human. The computer probably still had an idea that the pitch was going too high.
Could the computer have saved AF447? Yes, if there were no general reliance on humans as being superior, and the computer had kept control in the emergency, being "aware" of its degree of disorientation. Can a computer fly pitch and thrust in such a situation? Sure, it can be programmed to, and it would "know" whether the pitch data was good. But we still believe humans are better in emergencies. There is a lot to be said about that...
Automation is a bit of a chicken-and-egg situation. Once certain things become automated, humans inevitably start losing those skills. Automation is generally designed for the existing set of skills, maybe with some discount for the loss. It looks like the loss often goes deeper than expected. Training can partially mitigate that - but only so much.
It goes way beyond airplane handling. Try handing $11.25 to a store clerk for a $6.21 charge. The younger the clerk, the higher the chance of a blank stare. Yes, they passed those math classes in school - and yet...
FGITD wrote:phugoid1982 wrote:Poor training at AF, which seemed to be a systemic problem, and French arrogance w.r.t. the investigation played a role, just like with the AF 4590 Concorde crash, whose 22nd anniversary is coming up in less than two months. Yeah, nothing wrong with taking off into a tailwind, with less than a fully usable runway, a missing spacer, and oh... overweight!
…none of which would have brought that plane down, without the introduction of the metal strip.
Starlionblue wrote:kalvado wrote:gloom wrote:
I guess your answer highlights two approaches to automation. Not only FBW in aircraft, but as a wider question, perhaps for the automotive industry and others too.
Do we want computers to offload work and allow humans to concentrate on non-standard situations, or should the computer assist always be there?
So far both experience and reliability point to work offload. We are not yet ready to go further. I guess AF447's fate, and subsequent upsets with the MAX for example, made us learn that.
That's also why everyone above underlines the importance of high-altitude training.
The computers are here to help pilots through the boring phases of flight, not to help them in the difficult cases. Computers will be ready for that in a few years - maybe 10, maybe 40; I guess we're not quite sure when. But a computer will definitely not help you now, in every possible situation.
This is the real lesson we all learned, I guess. And are still learning.
Cheers,
Adam
I would say a hard no.
Do we want the crew to be able to fully take over control in the most challenging situation? OK, then we need a standby navigator, flight engineer and radio operator to share the workload with the pilots. Otherwise, automation must do the best it can to keep a limited crew within workload limits. Increased productivity is one of the big points of automation.
I would describe the AF447 situation as the computer becoming disoriented. That can totally happen with a human pilot as well. What happened next is that the disoriented computer passed control to an equally - if not more - disoriented human. The computer probably still had an idea that the pitch was going too high.
Could the computer have saved AF447? Yes, if there were no general reliance on humans as being superior, and the computer had kept control in the emergency, being "aware" of its degree of disorientation. Can a computer fly pitch and thrust in such a situation? Sure, it can be programmed to, and it would "know" whether the pitch data was good. But we still believe humans are better in emergencies. There is a lot to be said about that...
Automation is a bit of a chicken-and-egg situation. Once certain things become automated, humans inevitably start losing those skills. Automation is generally designed for the existing set of skills, maybe with some discount for the loss. It looks like the loss often goes deeper than expected. Training can partially mitigate that - but only so much.
It goes way beyond airplane handling. Try handing $11.25 to a store clerk for a $6.21 charge. The younger the clerk, the higher the chance of a blank stare. Yes, they passed those math classes in school - and yet...
The flight control system became "disoriented" insofar as it lost a critical input, speed, and thus could no longer provide reliable guidance. The logical automated action then is to disengage the autopilot, since the autopilot cannot give reliable guidance without speed data.
The three ADIRUs would have known very well what the pitch angle was. Could it be programmed to fly pitch and power? Sure. But there are a lot of variables there. Since speed is already lost, can the flight control system be certain that other inputs are good? It's that old computer thing: Garbage In Garbage Out.
Since we can't be sure of data, we can't be sure that the protections are functioning properly either. And so we end up in Alternate Law. It is important to emphasize that in Alternate Law, we're not in some sort of life-threatening situation per se. It's just a normal plane with the loss of envelope protection.
Starlionblue wrote:Tired and bored in the cockpit, back of the clock, with not much going on, is a different situation. The brain does funny things to you.
Starlionblue wrote:You don't need to be Chuck Yeager to deal with unreliable airspeed, especially if, as in this case, you know it is unreliable. You do need training.
The pilots would also not have speed data. But they did have a methodology to deal with the issue that the flight control and autoflight systems lacked.
Training was "reactive" in this case, because the need hadn't been seen before. We do plenty of "proactive" training as well. The objective of an accident investigation is to prevent future accidents. In some cases this means changing training methods.
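The voting-and-GIGO logic described above can be sketched in a few lines. To be clear, this is a hedged illustration of the general idea, not the actual Airbus ADIRU implementation; the threshold, function names, and law labels are all invented for the example.

```python
# Illustrative sketch of triple-redundant airspeed voting and the
# "garbage in -> disengage" logic discussed above. NOT the real Airbus
# implementation; the threshold and structure are hypothetical.

DISAGREE_KT = 16.0  # hypothetical max disagreement between two good sources


def vote_airspeed(speeds):
    """Majority-vote three CAS sources; returns (value, valid)."""
    pairs = [(0, 1), (0, 2), (1, 2)]
    agreeing = [p for p in pairs
                if abs(speeds[p[0]] - speeds[p[1]]) <= DISAGREE_KT]
    if len(agreeing) == 3:                       # all three consistent
        return sum(speeds) / 3.0, True
    if len(agreeing) == 1:                       # one outlier: vote it out
        i, j = agreeing[0]
        return (speeds[i] + speeds[j]) / 2.0, True
    return None, False                           # no trustworthy majority


def handle_air_data(speeds):
    """If speed is invalid, drop the autopilot and degrade protections,
    handing a still-flyable aircraft to the crew."""
    speed, valid = vote_airspeed(speeds)
    if not valid:
        return {"autopilot": False, "law": "ALTERNATE", "cas": None}
    return {"autopilot": True, "law": "NORMAL", "cas": speed}
```

With one iced-over probe, the majority wins and nothing dramatic happens. With two or three probes briefly giving inconsistent values, there is no majority to trust, and the conservative move is what the A330 did: autopilot off, Alternate Law.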
kalvado wrote:I suspect the "human is superior" assumption is the biggest design flaw here
gloom wrote:kalvado wrote:I suspect the "human is superior" assumption is the biggest design flaw here
I myself don't feel "superior", and I think that would be the case for most pilots as well.
But we, as humans, have an ability to assess large amounts of data and predict an outcome - a thing I refer to as intuition (of any kind).
Computers are faster than us, and make fewer errors. And they will ultimately be superior. But with today's technology they run linear logic and are much less capable in critical situations.
Airlines (and pilots) were smart enough to understand this and use it to the benefit of all of us.
Cheers,
Adam
GalaxyFlyer wrote:Give computers more authority and the risks grow rapidly. AF447 was in trim and at a usable thrust setting for the conditions. If the pilots had done nothing, they’d have landed at CDG. The “computers” were fine until the pilots added inputs the system was not designed for - a large pitch-up beyond the performance capability of the plane. Physics got ‘em.
GalaxyFlyer wrote:The “computers” had all the authority they needed to do their task—straight and level, in trim. When they lost a vital piece of information, they said “we can’t do our task any longer; you try it, human”. The humans only had to do nothing: leave the thrust at the cruise setting, maintain attitude, and wait until the air data returned, as it would have. But no - humans untrained in hand-flying the plane at altitude started a rapid climb, which was exactly the wrong thing. Automation is there to assist pilots, not to get pilots out of a jam they got themselves into.
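The "do nothing, hold pitch and thrust" fallback described above is essentially a tiny lookup table in the unreliable-airspeed procedure. The numbers below are purely illustrative; the real memory items and tables are type-specific (QRH) and vary with weight, altitude and configuration:

```python
# Illustrative unreliable-airspeed pitch/thrust targets. The values are
# invented for this sketch; real ones come from the type's QRH tables.
UAS_TARGETS = [
    # (altitude floor ft, pitch deg nose-up, thrust setting)
    (0,      15.0, "TOGA"),  # low level, e.g. shortly after takeoff
    (10_000, 10.0, "CLB"),
    (30_000,  5.0, "CLB"),   # high-altitude cruise: shallow pitch suffices
]


def uas_target(altitude_ft):
    """Pick the highest table entry at or below the current altitude."""
    pitch, thrust = UAS_TARGETS[0][1], UAS_TARGETS[0][2]
    for floor, p, t in UAS_TARGETS:
        if altitude_ft >= floor:
            pitch, thrust = p, t
    return pitch, thrust
```

At cruise level such a table returns roughly "hold a few degrees of pitch, leave climb/cruise-level thrust set", which is close to the point made above: the aircraft was already trimmed for exactly that.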
GalaxyFlyer wrote:Then we have a disagreement—pilots should be perfectly capable of flying the plane without the autopilot system keeping them out of trouble. An old school position, possibly.
kalvado wrote:Starlionblue wrote:You don't need to be Chuck Yeager to deal with unreliable airspeed, especially if, as in this case, you know it is unreliable. You do need training.
The pilots would also not have speed data. But they did have a methodology to deal with the issue that the flight control and autoflight systems lacked.
Training was "reactive" in this case, because the need hadn't been seen before. We do plenty of "proactive" training as well. The objective of an accident investigation is to prevent future accidents. In some cases this means changing training methods.
A bigger objective of an investigation should be a contribution to the overall safety system - influencing training, operations, design, and whatnot. Part of the AF447 response seems to be exactly that.
A proactive approach would be understanding why this training deficiency occurred and patching all the similar ones. People like to speculate that lack of manual flying is the root cause - more stick time would make pilots more aware of aircraft behavior and handling, so specific training would not be needed. Probably there is at least some truth in this, and the implications are pretty complex.
Another thing to think about is the human-machine interface and its core assumption that the human is the best possible solution. Which may or may not be the case. Bored, sleepy, with a sudden rush of adrenaline when the alarm sounds... It's well known that people fail under stress. This is not to blame any pilot in particular; it is to say they are still mammals.
A common approach to emergency management is to place the system into a safe state and deal with everything from there. That is a bit difficult for an aircraft in the air; what I hear so far is that pitch and thrust may be the closest thing to a safe state in an aerodynamically intact aircraft with engines running. Or can you think of anything else for unreliable airspeed? Pitch and thrust is certainly programmable. Yes, all the computer's inputs may be corrupted - but they would be equally corrupted for the pilots...
I suspect the "human is superior" assumption is the biggest design flaw here, the root cause which has to be dealt with - and training cannot fix that.
kalvado wrote:GalaxyFlyer wrote:Then we have a disagreement—pilots should be perfectly capable of flying the plane without the autopilot system keeping them out of trouble. An old school position, possibly.
I totally hear you. However, everything and everyone fails once in a while (says a guy who accidentally tore a 5/8" hardened steel bolt in half). And people may not appreciate what those failure rates are for a regular person under stress - especially compared to the crash rate of commercial flights. What is it, 1 in 10 million currently?
So a general question - absolutely not unique to aviation - is how to maximize the reliability of a system built of unreliable components. Humans being one of those unreliable components, of course.
Old school thinking is perfectly understandable, and was probably unavoidable in the days of early electronics and active technology development.
There is a pretty interesting book, "The Right Stuff" by Fred Wolffe - a lot of things clicked into place for me thanks to it. Not that I agree, but I do understand (I think).
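The question above - maximizing the reliability of a system built from unreliable components - has a standard first-order answer: independent redundancy multiplies failure probabilities together, until common-mode failures (e.g. the same icing conditions hitting all three pitot probes) put a floor under the benefit. A quick sketch, with all numbers invented for illustration:

```python
def system_failure_prob(p_component, n_redundant, p_common_mode=0.0):
    """P(system fails) with n independent redundant components, plus a
    common-mode term that redundancy cannot remove."""
    independent = p_component ** n_redundant
    return p_common_mode + (1.0 - p_common_mode) * independent


# Hypothetical numbers: one air-data channel failing on 1 in 10,000
# flights, triplicated:
triplex = system_failure_prob(1e-4, 3)         # ~1e-12: effectively never
# ...but a 1-in-10-million common-mode hazard caps the improvement:
capped = system_failure_prob(1e-4, 3, 1e-7)    # ~1e-7: common mode dominates
```

By this logic, the human crew is a valuable redundant channel only to the extent that their failure modes are independent of the machine's; a stressed, disoriented crew fed the same bad air data is partly a common-mode failure too.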
Starlionblue wrote:kalvado wrote:Starlionblue wrote:You don't need to be Chuck Yeager to deal with unreliable airspeed, especially if, as in this case, you know it is unreliable. You do need training.
The pilots would also not have speed data. But they did have a methodology to deal with the issue that the flight control and autoflight systems lacked.
Training was "reactive" in this case, because the need hadn't been seen before. We do plenty of "proactive" training as well. The objective of an accident investigation is to prevent future accidents. In some cases this means changing training methods.
A bigger objective of an investigation should be a contribution to the overall safety system - influencing training, operations, design, and whatnot. Part of the AF447 response seems to be exactly that.
A proactive approach would be understanding why this training deficiency occurred and patching all the similar ones. People like to speculate that lack of manual flying is the root cause - more stick time would make pilots more aware of aircraft behavior and handling, so specific training would not be needed. Probably there is at least some truth in this, and the implications are pretty complex.
Another thing to think about is the human-machine interface and its core assumption that the human is the best possible solution. Which may or may not be the case. Bored, sleepy, with a sudden rush of adrenaline when the alarm sounds... It's well known that people fail under stress. This is not to blame any pilot in particular; it is to say they are still mammals.
A common approach to emergency management is to place the system into a safe state and deal with everything from there. That is a bit difficult for an aircraft in the air; what I hear so far is that pitch and thrust may be the closest thing to a safe state in an aerodynamically intact aircraft with engines running. Or can you think of anything else for unreliable airspeed? Pitch and thrust is certainly programmable. Yes, all the computer's inputs may be corrupted - but they would be equally corrupted for the pilots...
I suspect the "human is superior" assumption is the biggest design flaw here, the root cause which has to be dealt with - and training cannot fix that.
There is no assumption that humans are superior. The autopilot can fly the plane more accurately than we can. However, the autopilot can only do so with valid inputs. If the inputs are not valid, keeping the autopilot engaged is very dangerous. As my first flight instructor said, "Remember, the autopilot will kill you quickly".
You can certainly add more and more logic to the autoflight system for edge cases, but that means more cost and more potential for misprogramming and other edge cases. Note that we're talking early 90s computer tech for the A330. A generation later, the A350 is more robust when it comes to autoflight edge cases. For example, if the aircraft slows to alpha prot on the A350, the autopilot will not disengage. It will go into "AP IN PROT" mode.
The assumption is that humans can keep the aircraft stable, then troubleshoot the issue. This should not be too much to ask.
GalaxyFlyer wrote:Then we have a disagreement—pilots should be perfectly capable of flying the plane without the autopilot system keeping them out of trouble. An old school position, possibly.
Spot on. And we train for this. The aircraft also needs to be certified to permit "average pilots" to control it without the autoflight system. What's the point of alternate law, direct law, and backup if the aircraft is not controllable in those laws?
kalvado wrote:GalaxyFlyer wrote:Then we have a disagreement—pilots should be perfectly capable of flying the plane without the autopilot system keeping them out of trouble. An old school position, possibly.
I totally hear you. However, everything and everyone fails once in a while (says a guy who accidentally tore a 5/8" hardened steel bolt in half). And people may not appreciate what those failure rates are for a regular person under stress - especially compared to the crash rate of commercial flights. What is it, 1 in 10 million currently?
So a general question - absolutely not unique to aviation - is how to maximize the reliability of a system built of unreliable components. Humans being one of those unreliable components, of course.
Old school thinking is perfectly understandable, and was probably unavoidable in the days of early electronics and active technology development.
There is a pretty interesting book, "The Right Stuff" by Fred Wolffe - a lot of things clicked into place for me thanks to it. Not that I agree, but I do understand (I think).
I think you mean Tom Wolfe. First of all, "The Right Stuff" is a fictional retelling of real events. Secondly, it does not accurately represent the mindset of modern pilots. That gung ho, Skygod attitude is long gone. We are taught to be methodical, follow checklists, slow down. SOP will save your life.
There is a checklist for the AF447 situation. The difference today is that we train for high altitude upsets, discuss in class and so forth.
Even for something as simple as turning off an engine for single-engine taxi, we are required to pull out the checklist and follow it.
kalvado wrote:Maybe accepting that 1 in a few million flights crashes is the ultimate answer after all?
kalvado wrote:Starlionblue wrote:kalvado wrote:A bigger objective of an investigation should be a contribution to the overall safety system - influencing training, operations, design, and whatnot. Part of the AF447 response seems to be exactly that.
A proactive approach would be understanding why this training deficiency occurred and patching all the similar ones. People like to speculate that lack of manual flying is the root cause - more stick time would make pilots more aware of aircraft behavior and handling, so specific training would not be needed. Probably there is at least some truth in this, and the implications are pretty complex.
Another thing to think about is the human-machine interface and its core assumption that the human is the best possible solution. Which may or may not be the case. Bored, sleepy, with a sudden rush of adrenaline when the alarm sounds... It's well known that people fail under stress. This is not to blame any pilot in particular; it is to say they are still mammals.
A common approach to emergency management is to place the system into a safe state and deal with everything from there. That is a bit difficult for an aircraft in the air; what I hear so far is that pitch and thrust may be the closest thing to a safe state in an aerodynamically intact aircraft with engines running. Or can you think of anything else for unreliable airspeed? Pitch and thrust is certainly programmable. Yes, all the computer's inputs may be corrupted - but they would be equally corrupted for the pilots...
I suspect the "human is superior" assumption is the biggest design flaw here, the root cause which has to be dealt with - and training cannot fix that.
There is no assumption that humans are superior. The autopilot can fly the plane more accurately than we can. However, the autopilot can only do so with valid inputs. If the inputs are not valid, keeping the autopilot engaged is very dangerous. As my first flight instructor said, "Remember, the autopilot will kill you quickly".
You can certainly add more and more logic to the autoflight system for edge cases, but that means more cost and more potential for misprogramming and other edge cases. Note that we're talking early 90s computer tech for the A330. A generation later, the A350 is more robust when it comes to autoflight edge cases. For example, if the aircraft slows to alpha prot on the A350, the autopilot will not disengage. It will go into "AP IN PROT" mode.
The assumption is that humans can keep the aircraft stable, then troubleshoot the issue. This should not be too much to ask.
GalaxyFlyer wrote:Then we have a disagreement—pilots should be perfectly capable of flying the plane without the autopilot system keeping them out of trouble. An old school position, possibly.
Spot on. And we train for this. The aircraft also needs to be certified to permit "average pilots" to control it without the autoflight system. What's the point of alternate law, direct law, and backup if the aircraft is not controllable in those laws?
kalvado wrote:I totally hear you. However, everything and everyone fails once in a while (says a guy who accidentally tore a 5/8" hardened steel bolt in half). And people may not appreciate what those failure rates are for a regular person under stress - especially compared to the crash rate of commercial flights. What is it, 1 in 10 million currently?
So a general question - absolutely not unique to aviation - is how to maximize the reliability of a system built of unreliable components. Humans being one of those unreliable components, of course.
Old school thinking is perfectly understandable, and was probably unavoidable in the days of early electronics and active technology development.
There is a pretty interesting book, "The Right Stuff" by Fred Wolffe - a lot of things clicked into place for me thanks to it. Not that I agree, but I do understand (I think).
I think you mean Tom Wolfe. First of all, "The Right Stuff" is a fictional retelling of real events. Secondly, it does not accurately represent the mindset of modern pilots. That gung ho, Skygod attitude is long gone. We are taught to be methodical, follow checklists, slow down. SOP will save your life.
There is a checklist for the AF447 situation. The difference today is that we train for high altitude upsets, discuss in class and so forth.
Even for something as simple as turning off an engine for single-engine taxi, we are required to pull out the checklist and follow it.
Skygod attitude maybe became a bit milder, but is still doing very well. In fact "not too much to ask" is exactly that type of thing. Especially if we're talking about really tiny number of accidents occurring today. That is especially bad approach as it is not actionable. Better training, extra tasks in the sim... Is there more total sim time after all that is added? Increase of drop off rates for unfit newcomers?
A better approach is asking "it's not too much to ask - but if they screw up, then what?" And looks like industry's answer is SOPs, checklists, memory items - which is all great for everyday tasks and "routine" emergencies. But what about unknown unknowns? Disoriented, lost, stressed pilot mistakes? It is a really difficult question as there is often only that much time until things go irreversable... Maybe accepting that 1 in few million flights crashes is the ultimate answer after all?
gloom wrote:kalvado wrote:Maybe accepting that one in a few million flights crashes is the ultimate answer after all?
No - it never is. Most of our progress comes from not accepting that. Back in the 50s, no one dreamed of 1 in a million. We're now well past it, and we still have ideas about where and how to improve.
Starlionblue said every generation of AP goes further. I've said we may eventually see the AP taking full responsibility, whenever that may be. But it's not now.
Past experience shows the best reliability comes from an advanced AP handling most of the flight, pilots taking over for specific flight phases and situations, and pilots trained in (among other things) possible mismatches between automation and pilots. This combination works best.
If you say perfecting automation is worth another crash (even one), you are as wrong as can be. Do you really think we can learn more from a crashed plane than from one that landed safely with its pilots? Or perhaps you think that, with pilots taking over mostly when something is essentially wrong with the AP, they will crash the plane where the AP would not (or would do so much more often)? Sorry to say, but - I don't think so.
I strongly advise reading a short story by Stanislaw Lem called "Ananke". It's SF - good SF - in which an Earth-Mars cargo spaceliner crashes. A sort of sci-fi detective story. An eye-opener, and many parallels can be drawn. Strongly recommended.
Cheers,
Adam
kalvado wrote:The Skygod attitude may have become a bit milder, but it is still alive and well. In fact, "not too much to ask" is exactly that type of thing - especially when we're talking about the really tiny number of accidents occurring today. It is an especially bad approach because it is not actionable. Better training, extra tasks in the sim... Is there more total sim time after all that is added? An increase in drop-off rates for unfit newcomers?
A better approach is asking "it's not too much to ask - but if they screw up, then what?" And it looks like the industry's answer is SOPs, checklists, memory items - which is all great for everyday tasks and "routine" emergencies. But what about unknown unknowns? Disoriented, lost, stressed pilot mistakes? It is a really difficult question, as there is often only so much time until things become irreversible... Maybe accepting that one in a few million flights crashes is the ultimate answer after all?
Starlionblue wrote:kalvado wrote:Starlionblue wrote:
There is no assumption that humans are superior. The autopilot can fly the plane more accurately than we can. However, the autopilot can only do so with valid inputs. If the inputs are not valid, keeping the autopilot is very dangerous. As my first flight instructor said, "Remember, the autopilot will kill you quickly".
You can certainly add more and more logic to the autoflight system for edge cases, but that means more cost and more potential for misprogramming and other edge cases. Note that we're talking early 90s computer tech for the A330. A generation later, the A350 is more robust when it comes to autoflight edge cases. For example, if the aircraft slows to alpha prot on the A350, the autopilot will not disengage. It will go into "AP IN PROT" mode.
The assumption is that humans can keep the aircraft stable, then troubleshoot the issue. This should not be too much to ask.
Spot on. And we train for this. The aircraft also needs to be certified to permit "average pilots" to control it without the autoflight system. What's the point of alternate law, direct law, and backup if the aircraft is not controllable in those laws?
I think you mean Tom Wolfe. First of all, "The Right Stuff" is a fictional retelling of real events. Secondly, it does not accurately represent the mindset of modern pilots. That gung ho, Skygod attitude is long gone. We are taught to be methodical, follow checklists, slow down. SOP will save your life.
There is a checklist for the AF447 situation. The difference today is that we train for high altitude upsets, discuss in class and so forth.
Even for something as simple as turning off an engine for single-engine taxi, we are required to pull out the checklist and follow it.
The Skygod attitude may have become a bit milder, but it is still alive and well. In fact, "not too much to ask" is exactly that type of thing - especially when we're talking about the really tiny number of accidents occurring today. It is an especially bad approach because it is not actionable. Better training, extra tasks in the sim... Is there more total sim time after all that is added? An increase in drop-off rates for unfit newcomers?
A better approach is asking "it's not too much to ask - but if they screw up, then what?" And it looks like the industry's answer is SOPs, checklists, memory items - which is all great for everyday tasks and "routine" emergencies. But what about unknown unknowns? Disoriented, lost, stressed pilot mistakes? It is a really difficult question, as there is often only so much time until things become irreversible... Maybe accepting that one in a few million flights crashes is the ultimate answer after all?
Never accept. Always strive for excellence.
Memory items are few and far between nowadays. SOPs and checklists are not really increasing in number; they're just changing. Knowing there is an SOP or checklist for a given situation is important. As CaptainJoe says, "a checklist also gives us hope". We know this is solvable. Knowing that there is a procedure means that even tired and stressed we can fall back on it and work through it.
Sim time isn't really changing in magnitude. It just changes in nature. As things come into focus in the industry, you may find them in your next recurrent training sim.
There are vanishingly few "unknown unknowns". The industry plans for even the most gnarly "non-routine emergencies", and we have procedures for those. Examples would be unreliable airspeed and cargo fire.
We don't go flying every day risking some "out there" scenario that we have neither trained for nor foreseen. The 737 MAX crashes were really an outlier there. And even then, you could argue that they would have been prevented with a proper design.
gloom wrote:kalvado wrote:Maybe accepting that one in a few million flights crashes is the ultimate answer after all?
No - it never is. Most of our progress comes from not accepting that. Back in the 50s, no one dreamed of 1 in a million. We're now well past it, and we still have ideas about where and how to improve.
Starlionblue said every generation of AP goes further. I've said we may eventually see the AP taking full responsibility, whenever that may be. But it's not now.
Past experience shows the best reliability comes from an advanced AP handling most of the flight, pilots taking over for specific flight phases and situations, and pilots trained in (among other things) possible mismatches between automation and pilots. This combination works best.
If you say perfecting automation is worth another crash (even one), you are as wrong as can be. Do you really think we can learn more from a crashed plane than from one that landed safely with its pilots? Or perhaps you think that, with pilots taking over mostly when something is essentially wrong with the AP, they will crash the plane where the AP would not (or would do so much more often)? Sorry to say, but - I don't think so.
I strongly advise reading a short story by Stanislaw Lem called "Ananke". It's SF - good SF - in which an Earth-Mars cargo spaceliner crashes. A sort of sci-fi detective story. An eye-opener, and many parallels can be drawn. Strongly recommended.
Cheers,
Adam
Your point about learning from planes that landed safely is very valid, and it is consistently applied in the industry.
A few years ago my company introduced no-jeopardy reviews of events where things didn't go as planned, in order to learn from them and investigate whether changes are needed.
Also, management looked at statistical information regarding events such as higher than normal sink rate on approach. Why were these occurring much more frequently at certain ports? Instead of punishing the pilots for not being consistently excellent, a revised approach policy was implemented to assist our decision making.
Starlionblue wrote:Did you regularly do upset recovery prior to AFR447, or only once the recommendations were made after the accident? What is "still happening despite training programs" is that many regulators and operators don't seem to train in a very rigorous fashion, and adopt punitive approaches to non-normal events.
Right now, that chosen balance is to give the pilot authority over everything - but at the same time restrain the pilot with SOPs as much as possible.
The SOPs don't restrain us. They are our tools to do things correctly.
My impression is that the balance between computer authority and human authority is the biggest knob to tweak right now. And it doesn't have to be more power to one or the other; it has to be about using the best sides of both - which also means understanding the weak sides of both.
Hence why mode awareness is a key skill for a modern pilot.
bluecrew wrote:Starlionblue wrote:Did you regularly do upset recovery prior to AFR447, or only once the recommendations were made after the accident? What is "still happening despite training programs" is that many regulators and operators don't seem to train in a very rigorous fashion, and adopt punitive approaches to non-normal events.
Right now, that chosen balance is to give the pilot authority over everything - but at the same time restrain the pilot with SOPs as much as possible.
The SOPs don't restrain us. They are our tools to do things correctly.
My impression is that the balance between computer authority and human authority is the biggest knob to tweak right now. And it doesn't have to be more power to one or the other; it has to be about using the best sides of both - which also means understanding the weak sides of both.
Hence why mode awareness is a key skill for a modern pilot.
In the US it's been pretty common for 20+ years (might not be 100% sure - I'm not that old yet). The basic premise has always been to kick the airplane down to the lowest possible level of automation, or all automation off, and focus on flying the airplane and recovering. I would say that, when well taught, it helps you better understand the basic physics of how an airplane flies, and makes you a better pilot.
Of course, you still see people fiddling with the autopilot VS when they get a TCAS RA, so I don't think the philosophy has truly been absorbed by the community.
Something sudden and unexpected like AFR447 - the airplane just shouldn't do that, so you're not going to see it coming - is entirely the reason we have maneuvers we commit to memory and focus on flying the airplane. I'm not going to Monday morning quarterback the dead, and it was obviously confusing to both pilots, but they had all the automation kick off on them, ended up banking all over the place, and somehow ended up at 40 degrees AOA. It really doesn't take a rocket scientist to fly for pitch and figure out the clearly inaccurate airspeed issue. This was just really bad airmanship - the automation didn't get in front of them (it turned off), their control inputs were backwards (climbing into a stall), and they showed a total inability to recover from the mistake. The accident report reads like a Lion Air crash in French. There were plenty of accidents in far less capable aircraft - almost certainly with a much higher total body count - in the US in the 1980s and 1990s, which is why we have upset recovery programs.
I have no idea if that philosophy extends outside the US - in my limited experience abroad, it definitely did not. (Also, if anyone references the A300 that went down near JFK and would like to indict the AAMP for that, it's a non sequitur - they never told pilots to go out there and kick the rudder in wake turbulence; guidance has always been smooth, gradual rudder inputs in the right direction until the airplane starts to respond.)
I've always viewed this accident as an unfortunate series of events whose threats would all have been mitigated, maybe not neutralized, by a competent, well-trained crew; they couldn't fly the airplane when the automation was off.
zeke wrote:Can we stop with the Skygod references? It seems like an underhanded insult against people who have acquired different skills.
bluecrew wrote:Starlionblue wrote:Did you regularly do upset recovery prior to AFR447, or only once the recommendations were made after the accident? What is "still happening despite training programs" is that many regulators and operators don't seem to train in a very rigorous fashion, and adopt punitive approaches to non-normal events.
Right now, that chosen balance is to give the pilot authority over everything - but at the same time restrain the pilot with SOPs as much as possible.
The SOPs don't restrain us. They are our tools to do things correctly.
My impression is that the balance between computer authority and human authority is the biggest knob to tweak right now. And it doesn't have to be more power to one or the other; it has to be about using the best sides of both - which also means understanding the weak sides of both.
Hence why mode awareness is a key skill for a modern pilot.
In the US it's been pretty common for 20+ years (might not be 100% sure - I'm not that old yet). The basic premise has always been to kick the airplane down to the lowest possible level of automation, or all automation off, and focus on flying the airplane and recovering. I would say that, when well taught, it helps you better understand the basic physics of how an airplane flies, and makes you a better pilot.
Of course, you still see people fiddling with the autopilot VS when they get a TCAS RA, so I don't think the philosophy has truly been absorbed by the community.
Something sudden and unexpected like AFR447 - the airplane just shouldn't do that, so you're not going to see it coming - is entirely the reason we have maneuvers we commit to memory and focus on flying the airplane. I'm not going to Monday morning quarterback the dead, and it was obviously confusing to both pilots, but they had all the automation kick off on them, ended up banking all over the place, and somehow ended up at 40 degrees AOA. It really doesn't take a rocket scientist to fly for pitch and figure out the clearly inaccurate airspeed issue. This was just really bad airmanship - the automation didn't get in front of them (it turned off), their control inputs were backwards (climbing into a stall), and they showed a total inability to recover from the mistake. The accident report reads like a Lion Air crash in French. There were plenty of accidents in far less capable aircraft - almost certainly with a much higher total body count - in the US in the 1980s and 1990s, which is why we have upset recovery programs.
I have no idea if that philosophy extends outside the US - in my limited experience abroad, it definitely did not. (Also, if anyone references the A300 that went down near JFK and would like to indict the AAMP for that, it's a non sequitur - they never told pilots to go out there and kick the rudder in wake turbulence; guidance has always been smooth, gradual rudder inputs in the right direction until the airplane starts to respond.)
I've always viewed this accident as an unfortunate series of events whose threats would all have been mitigated, maybe not neutralized, by a competent, well-trained crew; they couldn't fly the airplane when the automation was off.
Starlionblue wrote:bluecrew wrote:Starlionblue wrote:Did you regularly do upset recovery prior to AFR447, or only once the recommendations were made after the accident? What is "still happening despite training programs" is that many regulators and operators don't seem to train in a very rigorous fashion, and adopt punitive approaches to non-normal events.
Right now, that chosen balance is to give the pilot authority over everything - but at the same time restrain the pilot with SOPs as much as possible.
The SOPs don't restrain us. They are our tools to do things correctly.
My impression is that the balance between computer authority and human authority is the biggest knob to tweak right now. And it doesn't have to be more power to one or the other; it has to be about using the best sides of both - which also means understanding the weak sides of both.
Hence why mode awareness is a key skill for a modern pilot.
In the US it's been pretty common for 20+ years (might not be 100% sure - I'm not that old yet). The basic premise has always been to kick the airplane down to the lowest possible level of automation, or all automation off, and focus on flying the airplane and recovering. I would say that, when well taught, it helps you better understand the basic physics of how an airplane flies, and makes you a better pilot.
Of course, you still see people fiddling with the autopilot VS when they get a TCAS RA, so I don't think the philosophy has truly been absorbed by the community.
Something sudden and unexpected like AFR447 - the airplane just shouldn't do that, so you're not going to see it coming - is entirely the reason we have maneuvers we commit to memory and focus on flying the airplane. I'm not going to Monday morning quarterback the dead, and it was obviously confusing to both pilots, but they had all the automation kick off on them, ended up banking all over the place, and somehow ended up at 40 degrees AOA. It really doesn't take a rocket scientist to fly for pitch and figure out the clearly inaccurate airspeed issue. This was just really bad airmanship - the automation didn't get in front of them (it turned off), their control inputs were backwards (climbing into a stall), and they showed a total inability to recover from the mistake. The accident report reads like a Lion Air crash in French. There were plenty of accidents in far less capable aircraft - almost certainly with a much higher total body count - in the US in the 1980s and 1990s, which is why we have upset recovery programs.
I have no idea if that philosophy extends outside the US - in my limited experience abroad, it definitely did not. (Also, if anyone references the A300 that went down near JFK and would like to indict the AAMP for that, it's a non sequitur - they never told pilots to go out there and kick the rudder in wake turbulence; guidance has always been smooth, gradual rudder inputs in the right direction until the airplane starts to respond.)
I've always viewed this accident as an unfortunate series of events whose threats would all have been mitigated, maybe not neutralized, by a competent, well-trained crew; they couldn't fly the airplane when the automation was off.
Caveat: I wasn't flying airliners yet in 2009, so I can only speak to what I've heard the training was like.
AFAIK upset and recovery training definitely happened before 2009. However, high altitude upset and recovery training was not performed or emphasized. Please correct me if I'm wrong about that.
Modern airliners have quite a narrow speed range at high altitude. The performance envelope in general is narrow. The aircraft is finicky to handfly. Therefore awareness and practice are required.
kalvado wrote:Starlionblue wrote:bluecrew wrote:Did you regularly do upset recovery prior to AFR447, or only once the recommendations were made after the accident?
In the US it's been pretty common for 20+ years (might not be 100% sure - I'm not that old yet). The basic premise has always been to kick the airplane down to the lowest possible level of automation, or all automation off, and focus on flying the airplane and recovering. I would say that, when well taught, it helps you better understand the basic physics of how an airplane flies, and makes you a better pilot.
Of course, you still see people fiddling with the autopilot VS when they get a TCAS RA, so I don't think the philosophy has truly been absorbed by the community.
Something sudden and unexpected like AFR447 - the airplane just shouldn't do that, so you're not going to see it coming - is entirely the reason we have maneuvers we commit to memory and focus on flying the airplane. I'm not going to Monday morning quarterback the dead, and it was obviously confusing to both pilots, but they had all the automation kick off on them, ended up banking all over the place, and somehow ended up at 40 degrees AOA. It really doesn't take a rocket scientist to fly for pitch and figure out the clearly inaccurate airspeed issue. This was just really bad airmanship - the automation didn't get in front of them (it turned off), their control inputs were backwards (climbing into a stall), and they showed a total inability to recover from the mistake. The accident report reads like a Lion Air crash in French. There were plenty of accidents in far less capable aircraft - almost certainly with a much higher total body count - in the US in the 1980s and 1990s, which is why we have upset recovery programs.
I have no idea if that philosophy extends outside the US - in my limited experience abroad, it definitely did not. (Also, if anyone references the A300 that went down near JFK and would like to indict the AAMP for that, it's a non sequitur - they never told pilots to go out there and kick the rudder in wake turbulence; guidance has always been smooth, gradual rudder inputs in the right direction until the airplane starts to respond.)
I've always viewed this accident as an unfortunate series of events whose threats would all have been mitigated, maybe not neutralized, by a competent, well-trained crew; they couldn't fly the airplane when the automation was off.
Caveat: I wasn't flying airliners yet in 2009, so I can only speak to what I've heard the training was like.
AFAIK upset and recovery training definitely happened before 2009. However, high altitude upset and recovery training was not performed or emphasized. Please correct me if I'm wrong about that.
Modern airliners have quite a narrow speed range at high altitude. The performance envelope in general is narrow. The aircraft is finicky to handfly. Therefore awareness and practice are required.
Isn't that narrow usable speed range at high altitude what they call the "coffin corner"?
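Pretty much. A rough back-of-the-envelope sketch of why the band narrows (using the standard ISA atmosphere; the sea-level stall speed and MMO below are invented round numbers, not any specific type, and real buffet margins are far tighter than this): stall TAS grows as air density falls, while the Mach-limit TAS shrinks with the falling speed of sound.

```python
import math

# Simplified ISA atmosphere (troposphere + lower stratosphere, up to ~20 km).
T0, RHO0, LAPSE, G, R = 288.15, 1.225, 0.0065, 9.80665, 287.053

def isa_temperature(alt_m):
    """ISA temperature in kelvin: 6.5 K/km lapse to 11 km, then constant."""
    return T0 - LAPSE * min(alt_m, 11000.0)

def isa_density(alt_m):
    """ISA air density in kg/m^3."""
    if alt_m <= 11000.0:
        return RHO0 * (isa_temperature(alt_m) / T0) ** (G / (LAPSE * R) - 1.0)
    rho11 = RHO0 * (isa_temperature(11000.0) / T0) ** (G / (LAPSE * R) - 1.0)
    return rho11 * math.exp(-G * (alt_m - 11000.0) / (R * isa_temperature(alt_m)))

# Invented example figures (not any real type):
V_STALL_SL_KT = 120.0  # clean stall speed at sea level, knots TAS, at some weight
MMO = 0.86             # maximum operating Mach number
MS_TO_KT = 1.94384

for alt_ft in (0, 20000, 30000, 40000):
    alt_m = alt_ft * 0.3048
    # Stall TAS rises as density drops (same lift needs the same rho * V^2):
    v_stall = V_STALL_SL_KT * math.sqrt(RHO0 / isa_density(alt_m))
    # Mach-limit TAS falls as the speed of sound falls with temperature:
    v_mmo = MMO * math.sqrt(1.4 * R * isa_temperature(alt_m)) * MS_TO_KT
    print(f"{alt_ft:>6} ft: stall {v_stall:5.0f} kt, MMO {v_mmo:5.0f} kt, "
          f"margin {v_mmo - v_stall:5.0f} kt")
```

Even in this idealized sketch the margin roughly halves between sea level and FL400; in practice load-factor and Mach buffet bite well before these raw limits, so the corner closes much sooner.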