It's easy to design something that people can do. It's tough to design a system that people can't fail at. And that's where there's a big, soft, squishy line: the one that divides what people can generally keep up with from the things they have to work at to get wrong.
As a software engineer, I'm required to deliver the first, and I aim for the second. It's tough.
My uncle was a BART engineer. He operated BART ([San Francisco] Bay Area Rapid Transit) trains in the SF Bay Area for a living. The trains had doors on both sides, and stations opened on one side or the other.
BART trains are frequently "up in the air" as much as 50 feet, where the expectation is that you climb a flight or two of stairs to the BART station and board the train. And, for passengers, the doors automatically opened on the correct side so that nobody got hurt.
For passengers. But the engineers were expected to manually open the doors on the appropriate side at each station. Now, it's not particularly difficult to look outside and see which side the station is on, and for passengers the doors opened on the correct side automatically.
This is where that big, squishy line starts to rear its ugly head. Because while passengers weren't expected to remember which side to get off on, engineers were. And my poor uncle made a mistake one day and opened the wrong side. It was a fatal mistake.
Answer me this: Why would we expect that passengers would never get it right, but engineers would never get it wrong?
Designing systems that account for and prevent common human mistakes is a design goal. It's tough to do because you have to predict what the end user will likely get wrong and account for that. Nonetheless, it's a hallmark of engineering advancement that we've designed something as safe and resistant to human error as a car that casually travels 100 MPH, with as low a death toll as we see today.