Been lurking on this forum for a while now, finally thought I’d chip in. To start, I have no doubt that increasing automation, if done correctly, can significantly improve safety in the industry. And I do believe that there is an economic gain to be had too (beyond the examples already shared): in certain phases of flight (e.g. cruise, take-off, and landing), automation may allow for higher efficiency by selecting the optimum or near-optimum configuration for the task at hand. Computers are good at handling large amounts of information (lots of parameters at once), so monitoring complex systems is very much up their alley, particularly within the “normal” operating window.
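To make that monitoring point concrete, here’s a minimal sketch of the kind of envelope check a computer can run continuously across many parameters every cycle. The parameter names and limits below are made up for illustration, not real aircraft values:

```python
# Illustrative normal-operating envelopes; values are NOT real aircraft limits.
NORMAL_ENVELOPE = {
    "egt_c":         (300.0, 900.0),    # exhaust gas temperature, deg C
    "oil_press_psi": (25.0, 95.0),      # oil pressure, psi
    "n1_pct":        (20.0, 101.0),     # fan speed, percent
    "cabin_alt_ft":  (-1000.0, 8000.0), # cabin altitude, ft
}

def out_of_envelope(sample: dict) -> list:
    """Return the parameters in this sample that sit outside normal limits."""
    alerts = []
    for name, value in sample.items():
        lo, hi = NORMAL_ENVELOPE[name]
        if not (lo <= value <= hi):
            alerts.append((name, value, (lo, hi)))
    return alerts

# A computer can run this check over thousands of parameters, every cycle,
# without fatigue -- the "normal operating window" monitoring described above.
sample = {"egt_c": 655.0, "oil_press_psi": 18.0, "n1_pct": 92.5, "cabin_alt_ft": 6500.0}
print(out_of_envelope(sample))  # -> [('oil_press_psi', 18.0, (25.0, 95.0))]
```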
However, I have my concerns about the challenges of full automation (i.e. completely pilotless aircraft). Let's start with the technical problems.
Firstly, the need for communication. As some have already pointed out, fully autonomous aircraft would require a complete digitalization of the existing ATC infrastructure, and there are significant regulatory, economic, and technical challenges associated with that. Since such a system would be able to direct an aircraft’s flight path, it must be secure against potential nefarious actors.
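To illustrate just one piece of that security puzzle, here’s a minimal sketch of authenticating a datalink instruction with a message authentication code. It assumes a pre-shared key purely for illustration; a real system would presumably rest on certificates and a proper key-management infrastructure:

```python
import hmac
import hashlib
import json

SHARED_KEY = b"ground-air-demo-key"  # hypothetical key, for illustration only

def sign(message: dict) -> bytes:
    """Compute an HMAC-SHA256 tag over a canonical encoding of the message."""
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(message: dict, tag: bytes) -> bool:
    # compare_digest avoids timing side channels when checking the tag.
    return hmac.compare_digest(sign(message), tag)

clearance = {
    "callsign": "ABC123",
    "instruction": "descend",
    "fl": 240,
    "seq": 7,  # sequence number, to resist replaying old messages
}
tag = sign(clearance)
assert verify(clearance, tag)

# A tampered instruction fails verification and would be rejected on board.
tampered = dict(clearance, fl=100)
assert not verify(tampered, tag)
```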
In the case of emergencies (perhaps medical), such a communication system needs to be flexible enough for on-board crew to communicate with ATC, e.g. to seek advice from or inform emergency services. Again, such a contextually-aware communication system has to be robustly implemented and tested for emergency use, which will be… non-trivial. A fallback to a voice-recognition-based system is possible, but also problematic. For one, there’s the issue of reliability and overall contextual awareness. Can this be done? Maybe, but there are computational limits and difficulties associated with the type of deep-learning networks often employed for such tasks.
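One way such a fallback might be made safer is to refuse to act on low-confidence transcriptions and revert to a structured datalink exchange instead. A hedged sketch of that gating logic, where the transcription format and threshold are assumptions rather than any real speech API:

```python
CONFIDENCE_FLOOR = 0.95  # illustrative threshold for safety-critical use

def handle_crew_audio(recognized_text: str, confidence: float):
    """Act on a recognized voice command only if the model is confident."""
    if confidence >= CONFIDENCE_FLOOR:
        return ("accept", recognized_text)
    # Low confidence: don't act on a possibly-misheard instruction; ask the
    # crew to confirm via a structured (menu/text) datalink interface instead.
    return ("fallback_to_datalink", None)

print(handle_crew_audio("request medical diversion", 0.97))
print(handle_crew_audio("request medical diversion", 0.62))
```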
There is also the issue of dealing with “unknown-unknowns” in the case of emergencies. This has traditionally been an area that AIs are not great at. Autonomous systems may be unable to improvise during emergencies, which may lower the chances of survival. For example, consider the Air Astana incident with cross-rigged controls. The pilots were able to regain some control by learning, quite literally, on the fly. Many machine learning systems require hours of supervised training to generate a model capable of responding in an acceptable manner, so dealing with reduced or altered functionality in a dynamic environment would again be difficult. (Of course, for argument’s sake: yes, an autonomous system might have been able to detect the maintenance fault before the pilots ever did, hence my preamble. But this example serves to highlight a specific challenge that engineers and computer scientists will have to grapple with: developing a system that can deal with situations radically different from what might be expected.)
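To make that detection point concrete, here’s a hypothetical plausibility check of the kind that might catch a cross-rigged installation before it becomes an airborne emergency: command a known input and confirm the measured response has the expected sign. All values are illustrative, and this is a sketch, not how any real aircraft does it:

```python
def control_response_ok(commanded_roll_rate: float,
                        measured_roll_rate: float,
                        dead_band: float = 0.5) -> bool:
    """Flag a rigging fault if the aircraft rolls opposite to the command."""
    if abs(commanded_roll_rate) < dead_band:
        return True  # no meaningful command; nothing to check
    # Same sign => surfaces respond in the commanded direction.
    return (commanded_roll_rate > 0) == (measured_roll_rate > 0)

print(control_response_ok(+5.0, +4.2))  # True: normal rigging
print(control_response_ok(+5.0, -4.8))  # False: reversed response, as in a
                                        # cross-rigged control installation
```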
Then comes the problem of trust, especially when dealing with complex deep-learning algorithms. When making certain decisions, pilots adopt a rule-based framework. Such a framework is rooted in a logical understanding of cause and effect, and in an intuitive understanding of the principles of flight and the physical processes that power an aircraft’s systems. Deep-learning models, however, do not rely on such rule-based frameworks. Instead, they rely on brute-force association over large amounts of sample data, so there is often an issue of generalizability and reliability in fringe scenarios. Quite a number of my university’s physics profs have questioned the “validity” of using deep-learning models to understand physical phenomena: we often don’t “know” what the machines are “thinking”. For aviation, where not only safety but also the public perception of risk must be managed, this poses a non-trivial psychological hurdle.
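As a toy illustration of the contrast: a rule-based decision can be read and audited line by line, which is precisely what a learned model’s raw output lacks. The rules and thresholds here are made up for the example, not any airline’s actual criteria:

```python
def go_around_required(altitude_ft: float, sink_rate_fpm: float,
                       on_glideslope: bool, stabilized: bool) -> bool:
    """Each rule traces to an explicit, explainable criterion."""
    if altitude_ft < 1000 and not stabilized:
        return True   # unstabilized approach below the gate
    if sink_rate_fpm > 1000:
        return True   # excessive descent rate
    if altitude_ft < 500 and not on_glideslope:
        return True   # large deviation close to the ground
    return False

print(go_around_required(800, 600, True, False))   # True: unstabilized
print(go_around_required(1500, 700, True, True))   # False: continue

# A deep network making the same call would emit a number with no such
# traceable chain of reasons -- the certification/trust problem above.
```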
Now we get to the ethical problems… many of which have already been raised in the case of autonomous cars (e.g. what to do when forced to decide between conflicting goals, especially when collateral damage is expected). Can an AI quickly assess the risk to, and the material and non-material value of, civilian life and property when determining the optimal location for a crash landing? And if forced to make a “choice”, who is ultimately responsible? The issue with aviation, as opposed to autonomous cars, is that even though the industry is far more regulated and the chances of an accident are far lower, the damage done on a per-accident basis is far greater, as is the social and psychological impact.
The way I see it, safety-critical decisions can often be broken into two categories: predictive actions and reactive actions. Automation can help greatly with predictive actions, especially when systems are complex and need to be checked frequently.
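As a sketch of what I mean by a predictive action, here’s a minimal trend check that extrapolates recent samples and warns before a limit is reached. The parameter, limit, and sampling rate are all hypothetical:

```python
def predicted_exceedance(samples, limit, horizon_s, dt_s=1.0):
    """Extrapolate the recent trend; return seconds until 'limit' is crossed,
    or None if no crossing is predicted within 'horizon_s'."""
    n = len(samples)
    if n < 2:
        return None
    t = [i * dt_s for i in range(n)]
    t_mean = sum(t) / n
    x_mean = sum(samples) / n
    # Least-squares slope over the recent window (units per second).
    num = sum((ti - t_mean) * (xi - x_mean) for ti, xi in zip(t, samples))
    den = sum((ti - t_mean) ** 2 for ti in t)
    slope = num / den
    if slope <= 0:
        return None  # not trending toward the limit
    eta_s = (limit - samples[-1]) / slope
    return eta_s if 0 <= eta_s <= horizon_s else None

oil_temp_c = [102, 104, 105, 107, 109, 110]  # hypothetical readings at 1 Hz
print(predicted_exceedance(oil_temp_c, limit=120, horizon_s=60))  # ~6.1 s out
```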
However, humans are better at reactive actions: diagnosing problems and adapting to issues that have no precedent. Essentially, humans are (as it stands) better at dealing with ambiguity and making educated guesses through a logical process. It’s no wonder that pilot candidate interviews include questions that test a candidate’s reasoning process and ability to think on his/her feet.
At the end of the day, aviation is a human enterprise that has its roots in hospitality (although, sad to say, this may be changing). A pilot’s ability to empathize with and understand the concerns of stakeholders, and the immediacy with which he/she makes decisions, are ultimately comforting.
I’m not saying that autonomous airliners will never happen. I just think that it is unwise to underestimate the scale of the challenge.