I have tried AI several times, with different models and different versions. When I started testing them, hallucinations were a major problem. It was basically a waste of time to ask anything, because the answers were so often incorrect. Asking them to write code was also a waste of time, because the code was so buggy.
But I noticed a big leap from Gemini 2 to Gemini 2.5. It is still unable to answer "difficult" questions, but it is remarkably good as a search engine. It is also good at reading hundreds of pages of PDF and answering simple questions about them. Another thing it is good at is writing simple scripts like "retrieve data from this JSON file and put it into this database". The quality is not something I would ship as a professional, but for hobby projects it works really well (not perfect, but I still get results faster than by doing it manually). Currently, instead of wasting my time, it sometimes saves me a few hours of work or searching.
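To give an idea of the kind of "simple script" I mean, here is a minimal sketch in Python using only the standard library. The JSON shape, table name, and columns are made up for illustration; a real script would read from an actual file and database instead of in-memory stand-ins:

```python
import json
import sqlite3

# Stand-in for the contents of a JSON file (a real script would use
# json.load(open("data.json"))). The record shape is hypothetical.
data = json.loads('[{"name": "alice", "score": 3}, {"name": "bob", "score": 7}]')

# In-memory database for the sketch; a real script would connect to a file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (name TEXT, score INTEGER)")

# Insert every JSON record, binding fields by name.
conn.executemany(
    "INSERT INTO results (name, score) VALUES (:name, :score)", data
)
conn.commit()

rows = conn.execute("SELECT name, score FROM results ORDER BY name").fetchall()
print(rows)  # [('alice', 3), ('bob', 7)]
```

A current model will usually produce something of roughly this shape on the first try, which is why this class of task now saves time instead of wasting it.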
I have also tried to use AI to do actual thinking, but currently it still fails miserably; in other words, it cannot produce results comparable to my own thinking. I would not trust anything that comes out of an AI's reasoning process. I think this is why people think AI is useless.