You use the words "algorithm based on big data" as if they had intrinsic value. They don't, period. The data can be a great foundation, but the algorithm is human-made.
Am I afraid of machines? Not really. We're making no real progress toward machines having any sort of free will. What I'm afraid of is the generation gap.
At the moment we have a certain view of the world. It shapes our goals and interpretations, and those shape the algorithms we create: goal functions, criteria, queries.
Now, bugs aside, those algorithms will do exactly what they were designed to do. The point is, we're nowhere near infallible. The goals we set now are our best current guesses about what matters. If we're satisfied with the short-term results of handing responsibility for something to computers, we'll just let them do it and never look back.
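A minimal sketch of the point about goal functions (all names and weights here are hypothetical, invented purely for illustration): the ranking below is not "wrong" in any machine sense. It faithfully optimizes exactly the criteria its designers chose, and every value judgment lives in those human-picked weights, not in the data.

```python
# Hypothetical example: a candidate-ranking "algorithm" whose goal
# function hardcodes today's best guess about what matters.
def score_candidate(candidate):
    # Human-chosen goal function: weights reflect a value judgment,
    # not an objective truth. The machine never questions them.
    return 2.0 * candidate["years_experience"] + 1.0 * candidate["referrals"]

def rank(candidates):
    # Does exactly what it was designed to do -- no more, no less.
    return sorted(candidates, key=score_candidate, reverse=True)

applicants = [
    {"name": "A", "years_experience": 10, "referrals": 0},   # score 20.0
    {"name": "B", "years_experience": 2, "referrals": 5},    # score 9.0
]
print([c["name"] for c in rank(applicants)])  # → ['A', 'B']
```

If the weights turn out to encode a mistaken view of what matters, the algorithm will keep applying that view flawlessly until a human changes it.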
Now, the world of the next generation (not literally a generation; it could be five years later) includes the goals of the previous one implemented as core services, as ground truth. Any mistakes of the previous generation become hard to fix: you'd have to deactivate something that is by now a crucial service and rebuild it from scratch with new goals. Not likely, when there are layers upon layers of useful utilities built on top of it, revenue streams, and so on.
In short: passing decisions to algorithms working on big data restricts our future flexibility. Algorithms belong in decision support, and that's where they should stay. Do not automate strategic decision making. Humans can realize they are wrong; algorithms can't, because, strictly speaking, they aren't wrong: they do what they were designed to do, period. Given our tendency to build new technology, processes, and so on atop existing solutions that seem to work well, this creates future dependencies that make error correction very difficult and costly.
The science-fiction scenarios about humans as slaves to machines are likely pure fantasy. Slaves to ancient ideas of how things should be, enforced by machines... now that's much more realistic.