The argument really comes down to this: can a flight control computer ever be good enough to handle every possible situation as well as or better than a human could? Look, I'm not saying it can; I'm just saying Airbus believes so, and so do others in the industry. They must, if they believe in a future with autonomous aircraft. The A350 already has a vastly superior AP system to the 320/330 generation, and I'm sure that to achieve certification Airbus will need to improve it much further still. The Dassault 10X, meanwhile, is a test case: Dassault is already boasting that the aircraft will ship with single-pilot operability in cruise, and has added an 'upset recovery button' which it claims can right the aircraft from an unusual attitude, rather than giving up and leaving it to the pilot.
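Just as a thought experiment on what such a button might do under the hood (this is my own rough sketch, not Dassault's actual logic; the thresholds, names, and gains are all invented for illustration), the core idea of automated upset recovery is a mode that detects an out-of-envelope attitude and commands a conventional recovery sequence:

```python
# Hypothetical sketch of automated upset-recovery logic.
# NOT Dassault's implementation; all thresholds and gains are
# invented for illustration only.

from dataclasses import dataclass

@dataclass
class AircraftState:
    pitch_deg: float   # positive = nose up
    bank_deg: float    # positive = right wing down
    airspeed_kt: float

# Rough "unusual attitude" envelope, loosely modelled on common
# upset-training criteria (assumed figures, not certified ones).
PITCH_UP_LIMIT = 25.0
PITCH_DOWN_LIMIT = -10.0
BANK_LIMIT = 45.0

def is_upset(state: AircraftState) -> bool:
    """Detect whether the aircraft is outside the normal envelope."""
    return (state.pitch_deg > PITCH_UP_LIMIT
            or state.pitch_deg < PITCH_DOWN_LIMIT
            or abs(state.bank_deg) > BANK_LIMIT)

def recovery_commands(state: AircraftState) -> dict:
    """Command a conventional recovery: roll wings level first,
    then pitch gently back toward the horizon (avoids a rolling pull)."""
    if abs(state.bank_deg) > 5.0:
        return {"roll_rate_dps": -state.bank_deg * 0.5,
                "pitch_rate_dps": 0.0}
    return {"roll_rate_dps": 0.0,
            "pitch_rate_dps": -state.pitch_deg * 0.2}

# Example: nose-low, steeply banked upset -> roll level first.
state = AircraftState(pitch_deg=-15.0, bank_deg=60.0, airspeed_kt=280.0)
if is_upset(state):
    print(recovery_commands(state))
```

The hard part, of course, is everything this sketch glosses over: sensor validation, energy management, and structural limits. But the basic shape of "detect, then fly a known procedure" is the point.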
It remains to be seen how far the flight control computers can advance, and whether tools like AI and machine learning can get them to a point where they're as good as or better than a human. I know many people will scoff at this statement, but believe it or not there are a lot of industries, enterprises and endeavors that are gambling (i.e. planning) that the progress in power and ability that computers have made since their inception will continue.
In some cases computers have already surpassed humans, e.g. in the speed and power to do arithmetic, sort data, and so on. It's in the more complex cognitive tasks where humans still have the edge, and that's where the leaders of technology and science are focusing their attention right now. Governments, big business, institutions: they are all investing heavily in machine learning, AI, etc. It's also the case that the more data you feed into these things, the smarter they get. It's really the question of our age: whether we can develop something that eventually makes us obsolete.
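To make that "more data, smarter" claim concrete, here's a toy illustration (my own example, nothing to do with any avionics system): fit a simple model to growing amounts of noisy data and watch the held-out error shrink toward the noise floor.

```python
# Toy illustration of "more data makes the model better":
# fit a line to noisy samples of y = 2x + 1 and measure
# held-out error as the training set grows. Illustrative only.

import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(0, 10, n)
    y = 2 * x + 1 + rng.normal(0, 2, n)   # noisy ground truth
    return x, y

x_test, y_test = make_data(1000)           # fixed held-out set

for n in [5, 50, 500, 5000]:
    x, y = make_data(n)
    slope, intercept = np.polyfit(x, y, 1)  # least-squares fit
    err = np.mean((slope * x_test + intercept - y_test) ** 2)
    print(f"n={n:5d}  test MSE={err:.3f}")

# Test error falls toward the irreducible noise floor (~4 here)
# as n grows; past that point, more data alone stops helping.
```

Worth noting the caveat in that last comment: more data helps up to the limits of the model and the noise, which is exactly why "just feed it more data" isn't a complete answer for safety-critical systems.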
This thread highlights that the issues are mainly human ones, around things such as risk acceptance, industrial practice, regulatory prerogatives, cost effectiveness, etc.
It's a truly multi-faceted problem.
Here's how I'd frame the core issue: given that we know neither humans nor computers are perfect, can we settle on a way to decide whether the risk levels are equivalent? Or will we keep focusing on the one-off cases where human determination and creativity saved the day, while ignoring both the situations where automation has improved overall safety and the one-off cases where human weaknesses doomed flights?
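One hedged way to put the equivalence question on a statistical footing (again, my own sketch, and every number below is invented): treat accidents as rare events, put confidence intervals on the rates per flight hour, and ask whether the automated system's upper bound sits below the human baseline.

```python
# Sketch of one way to compare risk levels statistically:
# treat accidents as Poisson events and compare rate estimates.
# All figures below are made up for illustration.

from math import sqrt

def rate_ci(events, exposure_hours, z=1.96):
    """Approximate 95% CI for an accident rate per flight hour,
    using a normal approximation to the Poisson count."""
    rate = events / exposure_hours
    half = z * sqrt(events) / exposure_hours
    return max(rate - half, 0.0), rate + half

# Hypothetical data: human-piloted baseline vs automated system.
human_lo, human_hi = rate_ci(events=20, exposure_hours=1e8)
auto_lo, auto_hi = rate_ci(events=3, exposure_hours=5e7)

print(f"human baseline: [{human_lo:.2e}, {human_hi:.2e}] per hour")
print(f"automation:     [{auto_lo:.2e}, {auto_hi:.2e}] per hour")

# One possible decision rule: call the automation "no worse" only
# if its upper bound falls below the human point estimate.
if auto_hi < (human_lo + human_hi) / 2:
    print("automation risk bounded below human point estimate")
else:
    print("not enough exposure data to claim equivalence yet")
```

The real difficulty is hiding in the inputs: with accidents this rare, it takes enormous exposure before the intervals get tight enough to say anything, which is partly why the argument keeps falling back to anecdotes on both sides.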
I'm not seeing a strong correlation between "we are investing in something that does X" and "we're sure that we can do X". I think a lot of the investment is "we'd better get in and give it a try, just in case someone figures it out". Companies clearly invest in areas where the solution is unknown. If they don't figure it out, at least they are in a position to evaluate other solutions, and in many cases use their resources to buy into one. This is, for instance, why companies give engineers incentives to file patents: building the company's patent portfolio gives it a bargaining chip. If the company doesn't hold the solution itself, it can trade the patents for access to the solution or use them to undermine exclusivity claims.
As I've suggested before, I'm a bit surprised EASA, Ky, Airbus, CX et al. are pushing this forward with as much gusto as we see. At some level it pays to prepare the ground for such a game-changing thing, but on the other hand the risk is that you get too far ahead of events, fail, and set the entire concept back by many, many years.
I think the comparison with Tesla Autopilot is apt. If you over-hype something, each failure gets picked apart, and the end result can very well be disappointment. On the other hand, Tesla advocates will say we don't count all the cases where Autopilot makes better decisions than humans, and that it is thus raising overall safety.