I think the issue with some modern aeroplanes, which links in well with this discussion on semi-autonomous vehicles, is that although the planes are chock-full of useful/helpful features and protections, which have contributed to reducing the overall accident/fatality rate, when the systems eventually give up it can come as a big surprise.
Going suddenly from "everything OK" to "here, you have a go" can be very messy and has led to some fairly spectacular crashes. The human element of the operation has not been involved in the decision loop, or had access to certain inputs, until the software decides that its goose is cooked, gives up the ghost and leaves the resulting mess to a startled operator to make some sense of. It takes alertness, skill, deep technical knowledge and a large dose of luck to recover from a situation where the automatics have gone ???, as it is normally one where multiple failures have defeated the sensor logic.
There is also the issue of de-skilling, which inevitably happens when an automated system produces better overall results than a human-driven one. You let the automatics do the job, and either do not get enough practice to stay competent or never reach that level of competence in the first place. How can a neural network (you) get good at something without sufficient training?