Comment Re:aka (Score 1) 128

...and the Cybertruck range and launch date, and the Model 2, and the 4680 battery process, and...

Based on Musk's track record, you can pretty much count on this being a lot less than promised, and a lot later.

I also just don't see the opportunity. I wouldn't call myself all that knowledgeable about WeChat and its ilk, but I think these "super apps" emerged as China's mobile revolution was taking off, meaning that people started out doing banking, ride sharing, etc. within these apps. In the US, all those services came out separately.

Admittedly the app landscape is fairly cluttered, but I just can't see the path to US consumers suddenly wanting to hail a ride inside of Xwitter. It's not how most Americans learned to hail rides, and the consumer value in having it all inside of Xwitter seems pretty minimal.

Comment Re:"enable anyone to build products"? No. Not at a (Score 1) 24

I'm not sure I fully agree. If you know what you're doing, LLM-based coding tools can be quite helpful. I just built a Python scraper using Antigravity in a few hours; doing it by hand would've taken me many days of work, including a lot of effort to learn async function syntax.

It's not a big codebase, so there's nothing architecturally complex about it. Even then, if I hadn't had a decent understanding of how Playwright works, I would've had a much harder time debugging things and fixing some of the dumb decisions the LLM made.
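For flavor, the skeleton of that kind of scraper looks roughly like this - a minimal sketch rather than my actual code, with a placeholder URL and selector:

```python
import asyncio
from playwright.async_api import async_playwright

async def scrape(urls):
    # Launch a headless browser and visit each URL in turn.
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=True)
        page = await browser.new_page()
        results = []
        for url in urls:
            await page.goto(url, wait_until="networkidle")
            # Placeholder selector: grab the text of every <h2> on the page.
            titles = await page.locator("h2").all_inner_texts()
            results.append((url, titles))
        await browser.close()
        return results

if __name__ == "__main__":
    print(asyncio.run(scrape(["https://example.com"])))
```

The async/await plumbing is exactly the part I would've spent days learning; the LLM produced it in seconds, but I still had to know enough Playwright to see when it chose badly.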

The "anyone can build anything" hype is clearly bogus, but "people who can clearly specify things" can build a lot more than otherwise.

Comment Re:Dead company walking (Score 2) 24

That's actually quite tricky for Google. LLM-based searches unquestionably cannibalize traffic to the web properties that generate the lion's share of their revenue.
However, if they surrender their position directing traffic to websites to competitors like OpenAI or Anthropic, then they are far less able to make ad revenue, period.

I think their hope is probably to outlast some of the hype cycle and then come in with decent products that leverage their current dominance in ad sales.

Comment Re:Yet they want cheaper healthcare... (Score 1) 34

The argument that AI will reduce costs is deeply flawed, because it assumes that the interactions between insurers and providers will remain unchanged other than AI "doing the paperwork". That is very unlikely to be true, because both providers and insurers are likely to deploy AI to manage the new flows of information.

You're already seeing it in billing. On the provider side, AI goes through and harvests diagnoses that can justify higher billing ("increased coding intensity") while insurers are deploying AI that downcodes. Both have to pay the AI vendors, and now there's just more "paperwork" to keep track of.

It's the same way that adding lanes to a highway doesn't keep travel times down over the medium and long term - the lanes just fill up.

Comment Re:Good (Score 1) 199

I agree with the point you're making - there are vast variations by geography, such that this range can represent either a very financially constrained lifestyle or a pretty comfortable one.

I'd also add one other big confounder, and that's overall household wealth (including one's family). If you are 24 years old and living on a relatively low salary, there is a huge difference between knowing you can always pull the plug and live with your parents or go to grad school on their dime, versus having nothing to fall back on. Same income, totally different life.

Or you can be making over $300K and supporting both your own kids AND your parents (plus possibly helping fill gaps for extended family) and still be pretty uncertain about your finances. That's justifiable, too - the monthly cost for a single person in skilled nursing is easily over $12K for many facilities. That ends when your parent/loved one dies, or they run out of money and go on Medicaid.

All that said, I believe it's true that the upper middle class has expanded. Part of why Trump's form of class warfare works is that the educated upper middle classes (of which I am a member) have overall seen growth in income and wealth, but they don't see themselves as "rich" or "taking all the money". Trump has succeeded in pointing a lot of the lower 60%'s rightful economic frustrations at these folks, because they're visibly well off but want to believe they are "middle class".

While I think the really rich need to pay in a hell of a lot more to the system, I also think the "true" upper middle classes (the subset of AEI's cohort that has a comfortable set of resources) also need to pay in more.

Comment Re: uhh (Score 2) 77

I have been an on-and-off user of OpenOffice/LibreOffice for many years, and I have to say I've always found it extremely clunky. By all accounts the OOO codebase is pretty convoluted.
Given how many variations of browser-based office suites now exist, I just don't see the point in starting from the OOO design or code.

Comment Also not in Anthropic's report - education (Score 2) 153

AI has had a profoundly negative effect on education, which naturally none of the AI vendors will take any responsibility for.

It turns out that "retrieving answers to exams" is something LLMs excel at. Since most early education is about learning stuff that's already well known to older or more educated people, it is nearly impossible for teachers to devise assignments that are appropriate to students' learning level yet cannot be easily answered by LLMs.

My teenage daughter reports that many of her classmates basically cannot do any work without an LLM. Her lacrosse coach recently assigned an exercise based on watching a video, and most of the team just put it into an AI to get the answers. That may be shortsighted and self-destructive behavior, but these are minors, whom we don't expect to understand or deal with long-term consequences. That's why we don't let them vote, drink, drive cars, gamble, or do other things that are destructive to self and others.

Yet nobody at OpenAI or Anthropic seems to give two shits about destroying the education of millions of young people, and saddling teachers and schools - who have salaries/budgets many orders of magnitude smaller than these speculative cash receptacles - with the fallout of a perfect assignment-faking machine.

We're still stuck hearing the platitudes about how "AI can help them learn in new ways", which is a radioactive pile of nuclear bullshit, while Anthropic's "research" says nothing about the impacts on education right now.

Submission + - AI doctor ready to triple your opiates and help you make meth (mindgard.ai) 1

electroniceric writes: Doctronic, the AI medical chatbot that convinced Utah to create a "regulatory sandbox" permitting it to operate before undergoing full regulatory approval, has been pwned. Red teamers from Mindgard got it to spill its system prompts, then poisoned it with fake updates from reputable-seeming organizations. They were then able to get it to repeat COVID vax conspiracies, recommend unsafe doses of opioids, and give detailed instructions on how to make methamphetamine.

Comment Re:readin and ritin get recked (Score 2) 109

Exactly.

Most tech in the classroom misses the key part of what makes people learn: another person motivating and helping them to do it.

Technology is mostly peripheral to that, and as you aptly note, its modest pros are outweighed by huge cons.

I think AI is one of the biggest environmental contaminants humanity has ever seen. It has poisoned learning across the globe, and the US has foolishly drunk it up. My daughter reports that so many of her classmates now use AI for assignments that they don't even know enough about the assignment to figure out whether they can use what the AI wrote.

LLM-based AI has a lot of problems, but one thing it is extremely good at is replicating academic assignments. Its introduction has been like giving toddlers machine guns to play with on the schoolyard and asking them not to hurt one another.

Comment Re:We've seen technological revolutions before.... (Score 1) 75

I look at this through the lens of automation, regardless of the technology.
Automation tends to succeed when the routine parts of a process can be handled by a machine without the effort of finding and handling exceptions swamping the productivity gains of the mainline automation. That's actually quite hard in practice: you have to identify and cordon off the parts of a process that are repeatable and where failures can be readily detected, and then create handoffs to non-automated processes efficient enough to preserve the gains of the automated work. In environments with physical automation, that's slow, incremental work involving process controls and well-rehearsed online/offline handoffs. It isn't just a question of building a machine, but of learning and rehearsing the process between automation and people.
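In code terms, the pattern is something like this toy sketch (made-up names, not a real system):

```python
from queue import Queue

# Stand-in for a ticketing system or human work queue.
human_review = Queue()

def automated_handler(item):
    # The "mainline" automation: only claims items it can fully validate.
    if item.get("type") == "routine" and "amount" in item:
        return {"status": "processed", "amount": item["amount"]}
    raise ValueError("doesn't fit the automated path")

def process(item):
    try:
        return automated_handler(item)
    except ValueError:
        # The handoff: exceptions must reach a person cheaply enough that
        # they don't swamp the gains from the automated path.
        human_review.put(item)
        return {"status": "escalated"}

print(process({"type": "routine", "amount": 42}))  # handled automatically
print(process({"type": "weird"}))                  # escalated to a human
```

The hard part is never the try/except; it's deciding what counts as "routine" and making the escalation path cheap.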

The theory the current AI companies are advancing is that their LLM-based technologies can somehow magically eliminate that incremental road to automation and replace the varied tasks that people do. I think this is likely to be untrue on a lot of levels.

First, as we know from software development, a lot of the hard work of building a system is deciding what it should do and how it should work. Those kinds of decisions are not particularly amenable to LLMs because they are usually about generating consensus and shared knowledge among people, as well as making intuitive predictions about what will be needed and useful in the future.

Second, the history of automating routine office work is already littered with various forms of automation. Countless platforms and languages were supposed to automatically filter our email, generate replies, track tasks, etc. A lot of the use cases where that is straightforward are already automated by more conventional tools (ticketing systems, chatbots, phone trees, web forms, etc.). Sure, there are some use cases where more automation can be done, like creating skeletal or prototype code. But specifying how things work remains a central job no matter how the code is built.

Third, in human groups and organizations, many decisions are not really "computable". That is, they aren't just some form of inferential or statistical logic mapping from priors to an output decision. Rather, they involve people forming perceptions, views, and feelings, and from those defining an acceptable decision. Human decision-making involves the nervous system and the amygdala, not just cognition. That's not a bug, that's a feature - it keeps us in sync with our agency as living, sentient beings.

The fast and easy road to automation is what AI companies are banking on to increase productivity by the amounts that justify the stratospheric valuations. I'd be pretty surprised if the LLMs actually enable automation in this way rather than the "slow boring" way.

Comment Re:"Profit" on one side of the scale... (Score 1) 64

Is there a word that combines laughable and infuriating?

Because this is both. The fact that MetaZuck has the gall to say openly that they can be trusted to find a "balance" around privacy, when their business model is surveillance, makes me want to scream. And then, when I'm done screaming, to laugh.

The only glimmer of hope is that the downsides will probably show up so quickly, and the response will be the usual mealy-mouthed platitudes and lies, that it may finally force some sort of real oversight of their data collection and sales practices. Things I expect we will see soon include:

  • Crimes and murders recorded both intentionally and inadvertently
  • Corporate, government, and military espionage
  • Non-consensual sexual recordings
  • Illegal behavior with, by, and involving minors

Yes, much of this could have been recorded with a phone, but with these glasses, determining whether someone is recording will be effectively impossible. And because of Facebook's business model, they will always want to record as much as possible.

Buckle up.

Comment The right term is "free-riding" (Score 2) 21

Crypto is free-riding on the banking system.

Banks do an awful lot more than just put money in accounts and move it around. They are responsible for finding fraud, mediating disputes over claimed funds, and collaborating with law enforcement to prevent illegal activities, as well as engaging in well-understood and sound lending and risk management practices. They also provide separation between financial activities like storing money, lending, and investing. These are all critical functions that ensure regular people can count on stable and safe vehicles for storing and using their money. The banks' failure in these functions in the 1920s was a major cause of the Great Depression, and the reason we brought the industry into the bargain of government guarantees in exchange for regulations ensuring stability and soundness.

Do they succeed at all of these all of the time, or even as much as they should? Doubtless they do not.

But crypto provides none of these functions; in fact it leans on them. When people want real money from the real financial system, they cash out of crypto and take advantage of everything the banking system provides. Crypto wants to free-ride on the whole system, offering the investment and profit upsides without the financial-system responsibilities. Pure leechery.

"Just be a bank" is 100% correct.

Comment Re:Bull (Score 1) 48

Super interesting explanation, thanks.

I have heard of Jevons' paradox in cases like the introduction of LED lights, which did indeed create an explosion of lighting everywhere. I guess I've always thought of this in terms of latent demand (people actually wanted to light up more things than they could afford to). The way you're describing it sounds more like demand creation (people didn't even think about all the things they could do with lights until LEDs made them incredibly affordable).
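Whether total spending rises or falls comes down to elasticity. A back-of-envelope version, with made-up numbers rather than actual LED market data:

```python
# Constant-elasticity sketch: unit cost falls by a factor c, quantity
# demanded rises by c**e, so total spending scales by c**(e - 1).
c = 10    # hypothetical: cost per lumen falls 10x
e = 1.2   # hypothetical demand elasticity > 1 (the Jevons case)

quantity_factor = c ** e       # ~15.8x more lighting used
spend_factor = c ** (e - 1)    # ~1.6x more spent overall, despite cheaper units
print(f"usage x{quantity_factor:.1f}, total spend x{spend_factor:.2f}")
```

Latent demand versus demand creation is then a question of where that elasticity comes from.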

I think in this case both are operative. I certainly agree that AI has not caused, is not causing, and IMHO will not cause software development costs to fall the way LEDs made lighting costs fall. I'm also not even convinced that if it could, the latent demand would be there.

That's because the primary use case the AI bulls envision is still a variation of process automation. The success of process automation comes down to the balance between reducing costs on highly automatable steps and the costs of identifying exceptions, handing them off to non-automated processes, and then re-integrating all that into the mainline workflow. A lot of knowledge-worker work looks automatable at first blush, but when one starts examining the exceptions and handoffs it gets a lot messier and more expensive.
The technology to do email filtering and automated replies has been around for at least 20 years. The problem is that most of the time the discussion in an email requires thought, action, or accountability that is not obvious. I struggle to see how LLMs can overcome that.
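The simple cases really are a few lines of rules, in the spirit of Sieve, procmail, or Outlook rules (the senders and keywords here are made up):

```python
def route(message):
    # Rule-based mail routing of the kind that's been possible for decades.
    subject = message["subject"].lower()
    sender = message["from"].lower()
    if sender.endswith("@tickets.example.com"):
        return "tickets"
    if "invoice" in subject:
        return "finance"
    if "unsubscribe" in message.get("body", "").lower():
        return "bulk"
    # Anything requiring real thought, action, or accountability
    # still lands in the inbox for a human.
    return "inbox"

print(route({"subject": "Invoice #1234",
             "from": "billing@vendor.example",
             "body": ""}))
```

The rules handle the mechanical mail; that final return is where all the hard email still goes.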

Comment Who is going to listen to all those podcasts? (Score 2) 20

I won't repeat the excellent "who asked for this?" comments, but one corollary that is worth discussing is the supply and demand aspects of "summarize this into a blog post or podcast".
If Adobe (and Microsoft and everyone else, AFAICT) makes it easy to turn any document into a podcast, then the natural result is that there will be a ton more podcast audio and video posted online. Which means that getting someone to actually care about your podcast will get even harder.
We're already seeing AI slop take over the YouTubes and Facebooks. While this isn't quite the same as Shrimp Jesus, it's still low-effort stuff that adds to the noise and makes the signal harder to find.
I imagine Microsoft and Adobe would shrug and say "not my problem", but it sure seems to me like it makes features like this a lot less useful.
