Re:And so it begins (Score 4, Insightful)
Programmer friends at Google, Meta, and Amazon have certainly convinced me that AI is successfully assisting with code. However, the author's extrapolation to other fields and situations destroys any credibility he had.
For example, the author - Matt Shumer, an AI company founder, booster, and frequent submitter to other AI-hype websites, but apparently not legally trained - spends many paragraphs and anecdotes on how a partner at a law firm now has to use AI because he "knows what's at stake" and how AI can do legal work better than his associates.
Nope, that partner is doing it because he's scared of being left behind, which is the entire motive behind hype pieces like this. I'd wager that hypothetical partner is not the one who beats out all his colleagues and becomes "the most valuable person in the firm" but rather the one who gets sanctioned for submitting briefs with hallucinated cases (which is still happening in the wild regularly). As a lawyer, I can say that even current flagship AI models cannot get around our bar-mandated ethical duties: we have to effectively re-do the work the AI does so we can attest it is correct, which takes more time than doing the work ourselves the first time.
Shumer similarly gives an "oh god, it's getting so good so fast!" timeline that includes AI passing the bar. That 2023 claim was debunked in 2024, and somehow this guy is unaware of it. Why in the world would someone so unable to identify reliable information be trusted on AI reliability?
There may be some functional AI work - like coding within specific environments and circumstances - but there is a huge AI bubble built on this silly "it will do everything better" hype.