Absolutely right, though there are still useful things that could be explained: the types of inputs the algorithm accepts, the range of outputs it can potentially give, the model used, etc. In the '90s researchers experimented with building more scrutable models (like decision trees) using a neural net as the training source, with encouraging results (Craven and Shavlik, 1997), but I think the work languished when neural nets went out of fashion.
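A minimal sketch of that idea, using scikit-learn purely for illustration: train an opaque model (here a small MLP), then fit a shallow decision tree to mimic its predictions, giving a surrogate whose decision process can actually be inspected. The dataset, model sizes, and tree depth below are arbitrary choices, not anything from the original work.

```python
# Sketch: extract a scrutable decision tree from an opaque neural net
# by using the net's own predictions as the tree's training labels.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The opaque "teacher" model.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X, y)

# Fit a shallow tree to the net's predictions, not the true labels:
# the net is the "training source" for the interpretable model.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X, net.predict(X))

# Fidelity: how often the tree agrees with the net it is mimicking.
fidelity = accuracy_score(net.predict(X), tree.predict(X))
print(f"tree/net agreement: {fidelity:.2f}")
```

The interesting metric here is fidelity to the net, not accuracy on the true labels; the tree is only useful as an explanation to the extent that it faithfully reproduces the opaque model's behavior.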
More importantly, I think this showcases how opaque learning systems (while potentially powerful) may not be appropriate in circumstances where people need to know *why* the system reached a particular conclusion. Predictive accuracy should not be the only metric of concern when developing a machine learning model; the comprehensibility of its decision process also needs to be taken into account.
Craven, M. W. and Shavlik, J. W. (1997). Using neural networks for data mining. Future Generation Computer Systems, 13:211–229.