Your argument is badly broken by neural nets and machine learning paradigms. They are very much NOT "same inputs, same output." The output depends on what the training set was, and on the order items were added to it. Take the same algorithm and train it on two different data sets, or even the same set in a different randomized order, and you can get two different results. The worst part is that we would never be able to tell someone why: neural nets don't use logic or understanding, so there can be no rational explanation for some of their decisions other than a chance correlation.
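You can see the order-dependence with a toy example (my own sketch, not any particular framework): train the same one-feature logistic model with plain SGD on the same four points, in two different orders, and the weights come out different.

```python
import math

# Toy sketch: one-feature logistic regression trained with per-example SGD.
# The numbers and learning rate are made up for illustration.
def train(data, lr=0.5):
    w, b = 0.0, 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid prediction
        w -= lr * (p - y) * x                 # gradient step per example
        b -= lr * (p - y)
    return w, b

data = [(0.0, 0), (1.0, 1), (2.0, 1), (-1.0, 0)]

# Same points, different order, different final weights:
print(train(data) == train(list(reversed(data))))  # False
```

Each update changes the weights used for the next gradient, so the steps don't commute. Real training adds random initialization and batching on top of this, which only makes it worse.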
That can have extremely negative effects in the real world. An easy example is recommendation engines. Due to how they work, it's easy to get pigeonholed into certain types of content. It's a major cause of the political radicalization and filter bubbles we see today.
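The pigeonholing is a feedback loop, and you can caricature it in a few lines (hypothetical numbers, not any real engine): recommend whatever the user has clicked most, count the resulting click, repeat. One stray early click hardens into a bubble.

```python
from collections import Counter

# Made-up starting point: one click in each category...
clicks = Counter({"politics": 1, "cooking": 1, "sports": 1})
clicks["politics"] += 1  # ...plus one stray extra click

for _ in range(20):
    top = clicks.most_common(1)[0][0]  # engine recommends the current favorite
    clicks[top] += 1                   # user clicks what's put in front of them

print(clicks.most_common(1)[0])  # ('politics', 22) -- everything else starved
```

Real engines are far more sophisticated, but the basic dynamic of optimizing on engagement the engine itself generated is the same.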
Another is inherent bias in systems. An AI is only as good as its input. Crap input, crap output. Which is why, in multiple studies of systems trained to identify criminals, the models disproportionately flagged black people. Why? Because they were overrepresented in the training set. The opposite problem is why facial recognition has trouble with them: underrepresented in that sample set.
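Here's the mechanism in miniature (made-up numbers, purely illustrative): suppose both groups offend at the same true rate, but the labels the model trains on are concentrated on group A. A model that just learns rates from its labels reproduces the skew in the data rather than discovering anything about reality.

```python
# Hypothetical training rows: (group, label). Group A is overrepresented
# among the positive labels, even though the true underlying rates are equal.
training = [("A", 1)] * 30 + [("A", 0)] * 70 + [("B", 1)] * 10 + [("B", 0)] * 90

def learned_risk(group):
    # What a naive model learns: the positive-label rate per group.
    labels = [label for g, label in training if g == group]
    return sum(labels) / len(labels)

print(learned_risk("A"), learned_risk("B"))  # 0.3 vs 0.1: the bias is learned, not discovered
```

The model isn't malicious; it's faithfully summarizing a biased sample, which is exactly the problem.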
Which all doesn't mean that we shouldn't use AI. It means we should use it carefully: keep an eye out for unexpected negative effects, regularly improve and adjust training sets, and accept that there are some places where humans should stay in the loop and an AI should be advisory at most. If an AI had been reading the sensor signals in 1983 instead of Stanislav Petrov, we'd be a radioactive crater right now.