... despite of me being an engineer, and a computer scientist, ...
Okay, so you claim to be an engineer AND a computer scientist. That means a LOT of math classes for you.
A driverless car cannot stop within abrupt short time.
Yes it can. That's basic math. Stopping distance is determined by three things:
1. reaction time (computers are quicker than humans)
2. vehicle speed (the same either way)
3. surface conditions (the same either way)
So the autonomous car should stop in a shorter distance than a human would.
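The list above can be put in numbers. A minimal sketch of the standard stopping-distance formula (reaction distance plus braking distance); the specific speed, reaction times, and friction coefficient are illustrative assumptions, not figures from any real vehicle:

```python
# Rough stopping-distance comparison: human vs. computer reaction time.
# ASSUMED numbers for illustration only: 25 m/s (~90 km/h) travel speed,
# 1.5 s human reaction time, 0.1 s computer reaction time,
# mu = 0.7 (roughly dry asphalt), g = 9.81 m/s^2.
G = 9.81   # gravitational acceleration, m/s^2
MU = 0.7   # assumed tire-road friction coefficient

def stopping_distance(speed_mps, reaction_s, mu=MU):
    """Reaction distance plus braking distance: v*t + v^2 / (2*mu*g)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * mu * G)

human = stopping_distance(25.0, 1.5)
computer = stopping_distance(25.0, 0.1)
print(f"human:    {human:.1f} m")
print(f"computer: {computer:.1f} m")
```

The braking distance is identical for both drivers; the whole difference comes from the reaction term, which is why the computer stops in a shorter total distance.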
Just one, one only, example: If presented by either hitting a 4-year-old child or an octogenarian; ...
Someone with a degree in computer science should know that computers only run programs. Therefore, SOMEONE would have to have made the decision to program the autonomous car to categorize certain objects as "4-year-old child" and other objects as "octogenarian".
Furthermore, someone with a degree in computer science would know how extremely difficult such a task would be.
Recognizing "obstacle", on the other hand, is much easier to program. So the same action would be taken no matter what the obstacle was. And that action should be to stop.
If the passenger wants to take over control of the vehicle at that point, that is an option. But the autonomous car should simply stop. And it would do that faster than a human could.
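The decision logic just described is deliberately simple, and that simplicity is the point. A minimal sketch (hypothetical function and label names, not any real autopilot API):

```python
# Sketch of the "everything is just an obstacle" policy described above:
# no classification of WHAT the obstacle is, one uniform response (brake),
# plus an optional manual override by the passenger.

def plan_action(obstacle_detected: bool, passenger_override: bool) -> str:
    if passenger_override:
        return "manual_control"   # passenger has taken over
    if obstacle_detected:
        return "emergency_brake"  # same action for ANY obstacle
    return "continue"

print(plan_action(obstacle_detected=True, passenger_override=False))
```

Because the program never distinguishes "child" from "octogenarian" from "bus", there is no trolley-problem branch anywhere in it to argue about.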
A bus with 12 passengers comes up frontally (driven by an imperfect human driver, I guess).
Again, someone with a degree in computer science can tell you how difficult it would be to write a program that could correctly determine how many passengers there were in a vehicle.
So, when presented with an obstacle, the autonomous vehicle should stop. And do so faster than a human could.
Now, from a BUSINESS viewpoint, the company would be liable for damages if it shipped a car that identified an obstacle as anything other than an obstacle ("a 4-year-old child", "an octogenarian", "a bus with 12 passengers") and that identification resulted in injury or death to the occupants of the autonomous vehicle. Therefore, no company would write such a program.
Whichever the decision, the perfect driverless car becomes a pragmatic killing machine.
You have confused "artificial intelligence" with "autonomous car".
An autonomous car is not the same as an artificial intelligence. Nor would an autonomous car be programmed with the sub-routines that you are postulating.