
Comment Re: uhh (Score 1) 35

I have been an on-and-off user of OpenOffice/LibreOffice for many years, and I have to say I've always found it extremely clunky. By all accounts the OOo codebase is pretty convoluted.
Given how many variations of browser-based office suites now exist, I just don't see the point in starting from the OOo design or code.

Comment Also not in Anthropic's report - education (Score 2) 153

AI has had a profoundly negative effect in education, which naturally none of the AI vendors will take any responsibility for.

It turns out that "retrieving answers to exams" is something that LLMs excel at. Since most early education is about learning material that's already well known to older or more educated people, it is nearly impossible for teachers to devise assignments that are appropriate to students' learning level yet cannot be easily answered by an LLM.

My teenage daughter reports that many of her classmates basically cannot do any work without an LLM. Her lacrosse coach recently assigned an exercise of watching a video, and most of the members of the team put it into an AI to get the answers. That may be shortsighted and self-destructive behavior, but these are minors, whom we don't expect to understand or deal with long-term consequences. That's why we don't let them vote, drink, drive cars, gamble, or do other things that are destructive to self and others.

Yet nobody at OpenAI or Anthropic seems to give two shits about destroying the education of millions of young people, and saddling teachers and schools - who have salaries/budgets many orders of magnitude smaller than these speculative cash receptacles - with the fallout of a perfect assignment-faking machine.

We're still stuck hearing the platitudes about how "AI can help them learn in new ways", which is a radioactive pile of nuclear bullshit, while Anthropic's "research" says nothing about the impacts on education right now.

Comment Re:Working with other people's code (Score 0) 150

Yes. So far, the LLM tools seem to be much more useful for general research purposes, analysing existing code, or producing example/prototype code to illustrate a specific point. I haven't found them very useful for much of my serious work writing production code yet. At best, they are hit and miss with the easy stuff, and by the time you've reviewed everything with sufficient care to have confidence in it, the potential productivity benefits have been reduced considerably. Meanwhile, even the current state-of-the-art models are worse than useless for the more research-level stuff we do. We try them out fairly regularly, but they make many bad assumptions and then completely fail to generate acceptable-quality code when told no, those assumptions are not acceptable, and they really do need to produce a complete and robust solution to the original problem that is suitable for professional use.

Comment Re: sure (Score 2) 150

But one of the common distinctions between senior and junior developers -- almost a litmus test by now -- is their attitude to new, shiny tools. The juniors are all over them. The seniors tend to value demonstrable results and as such they tend to prefer tried and tested workhorses to new shiny things with unproven potential.

That means if and when the AI code generators actually start producing professional-standard code reliably, I expect most senior developers will be on board. But except for relatively simple and common scenarios ("Build the scaffolding for a user interface and database for this trivial CRUD application that's been done 74,000 times before!") we don't seem to be anywhere near that level of competence yet. It's not irrational for seniors to be risk averse when someone claims to have a silver bullet, and both the seniors' own experience and a growing body of more formal study suggest that Brooks remains undefeated.

Submission + - AI doctor ready to triple your opiates and help you make meth (mindgard.ai) 1

electroniceric writes: Doctronic, the AI medical chatbot that convinced Utah to create a "regulatory sandbox" permitting it to operate before undergoing full regulatory approval, has been pwned. Red-teamers from Mindgard got it to spill its system prompts, then poisoned it with fake updates from reputable-seeming organizations. They were then able to get it to repeat COVID vax conspiracies, recommend unsafe doses of opioids, and give detailed instructions on how to make methamphetamine.

Comment Re:What actually happened? (Score 1) 52

I'm not sure what it could be -- every testing/checking tool I can find online passes it (and I learned a lot from that, including removing old ciphers), and the banners/HELO etc. are largely anonymized, yet by and large Google says "yeah nah" to the first few emails sent to a new gmail address.

It'd be fantastic if they had a test page where you could send them an email or click a "start test" button and have it check everything that *THEY* look for, but it feels like they have no interest in providing that -- they want you to just use their service, and I refuse.

Comment Re:What actually happened? (Score 2) 52

Google is like this -- their anti-spam tools are only available if you *are* sending UCE. Small private domains sending a few hundred to a few thousand emails annually to gmail addresses cannot get access to them.

I have all the things set right: DKIM, DMARC, SPF; my IP is in a "good neighbourhood" and all the blackhole lists show it as clear. Yet "hey, nice meeting you today, here's my email, looking forward to speaking with you again" type emails sent to a new gmail address almost always end up in their junk folder. And there's nobody at Google to contact about it -- it's a completely automated system.
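For what it's worth, the syntax of those records can at least be sanity-checked locally before blaming the receiving side. Here's a minimal sketch in Python of the kind of checks the online testing tools run; the record strings and helper names are hypothetical, and this is no substitute for the full SPF (RFC 7208) and DMARC (RFC 7489) grammars:

```python
# Toy syntax checks for SPF and DMARC TXT records. The example records
# below are hypothetical, not real DNS data.

def looks_like_spf(record: str) -> bool:
    """Minimal SPF check: starts with v=spf1 and ends with an 'all' default."""
    parts = record.split()
    if not parts or parts[0] != "v=spf1":
        return False
    # The final mechanism should state a default policy, e.g. -all or ~all.
    return parts[-1].lstrip("+-~?") == "all"

def looks_like_dmarc(record: str) -> bool:
    """Minimal DMARC check: v=DMARC1 first, with a recognised policy tag."""
    tags = dict(t.strip().split("=", 1) for t in record.split(";") if "=" in t)
    return record.strip().startswith("v=DMARC1") and \
        tags.get("p") in {"none", "quarantine", "reject"}

print(looks_like_spf("v=spf1 mx a:mail.example.org -all"))   # True
print(looks_like_spf("v=spf1 include:example.net"))          # False: no 'all' default
print(looks_like_dmarc("v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"))  # True
```

Of course, records can be syntactically perfect and Google can still junk the mail, which is rather the point of the complaint.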

Microsoft, by contrast, has a junk mail reporting programme, and registering with it (not an easy thing to find until you know what it is) solved all my outlook.com issues.

Comment Re:Please don't use Paramount+ Platform (Score 3, Interesting) 55

(+1, Truth)

Of all the major streaming platforms, Paramount+ stands alone in how often it just doesn't work. It doesn't work reliably on state-of-the-art streaming boxes. It doesn't work reliably on desktop PCs. In fact, of all the devices we have in our household, it works reliably on a total of zero of them.

We have several of the other commercial streaming platforms plus the apps or online services for several of our main national TV channels as well and almost all of them work almost all of the time. It's bizarre how bad Paramount+ manages to be compared to literally everyone else. It must be hurting their bottom line to some degree or surely will do soon if they don't get a handle on it, because why pay for something you literally can't watch?

Comment Re:readin and ritin get recked (Score 2) 109

Exactly.

Most tech in the classroom misses the key part of what makes people learn: another person motivating and helping them to do it.

Technology is mostly peripheral to that, and as you aptly note, its modest pros are outweighed by huge cons.

I think AI is one of the biggest environmental contaminants humans have ever seen. It has poisoned learning across the globe, and the US has foolishly drunk it up. My daughter reports that so many of her classmates now use AI for assignments that they don't even know enough about the assignment to judge whether they can use what the AI wrote.

LLM-based AI has a lot of problems, but one thing it is extremely good at is replicating academic assignments. Its introduction has been like giving toddlers machine guns to play with on the schoolyard and asking them not to hurt one another.

Comment Re: Interesting Summary (Score 1) 58

There's a difference between not using AI tools at all and not using code generated by AIs.

The latter involves a lot of risks that aren't well understood yet -- some technical, some legal, some ethical -- and it's entirely possible that some of those risks are going to blow up in the face of the gung-ho adopters with existential consequences for their businesses.

I mostly work with clients in industries where quality matters. Think engineering applications where equipment going wrong destroys things or kills people and where security vulnerabilities are a proxy for equipment going wrong.

I know plenty of smart, capable people working in this part of the industry who are totally fine with blanket banning the use of AI-generated code on these jobs. A lot of that code simply isn't up to the required standards anyway, but even if it does produce something you could actually use, there are still all the same costs for review and certification that any other code incurs. That includes the need for at least one human reviewer to work out why the AI wrote what it did, which may or may not have any better answer than "statistically, it seemed like a good idea at the time".

Comment Re:Interesting Summary (Score 2) 58

The claims also seem a bit sus. "Eighty percent of new developers on GitHub use Copilot within their first week." Is this the same statistic someone was debunking recently where anyone who had done something really basic (it might have been using the search facility?) was counted as "using Copilot"? A lot of organisations seem to be cautious about using code generated by AIs, or even imposing a blanket ban, so things must be very different in other parts of the industry if that 80% is also representative of professional developers using Copilot significantly for real work.

Comment Re:We've seen technological revolutions before.... (Score 1) 75

I look at this through the lens of automation, regardless of the technology.
Automation tends to be successful when routine aspects of a process can be handled by a machine in such a way that the effort of finding and handling exceptions doesn't swamp the productivity gains of the mainline automation. That's actually quite hard in practice, because it means one has to identify and cordon off the parts of a process that are repeatable and where failures can be readily detected, and then create ways of switching to non-automated processes that are efficient enough not to erode those gains. In environments with physical automation, that's slow, incremental work involving developing process controls and efficient online-offline handoffs. It isn't just a question of building a machine, but of learning and rehearsing the handoff between automation and people.
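As a toy illustration of that trade-off (all costs, rates, and names here are hypothetical, not a model of any real process):

```python
# Toy model of a mixed automated/manual pipeline: automation only pays off
# if the cost of detecting and hand-handling exceptions doesn't swamp the
# per-item savings of the automated path.

def total_cost(items, automated_cost, manual_cost, handoff_cost, is_exception):
    """Total cost of processing all items through the mixed pipeline."""
    cost = 0.0
    for item in items:
        if is_exception(item):
            # Detected failure: pay the handoff overhead plus full manual handling.
            cost += handoff_cost + manual_cost
        else:
            cost += automated_cost
    return cost

items = list(range(100))
is_exception = lambda i: i % 10 == 0   # 10% exception rate (hypothetical)

mixed = total_cost(items, automated_cost=1.0, manual_cost=8.0,
                   handoff_cost=4.0, is_exception=is_exception)
all_manual = len(items) * 8.0

print(mixed, all_manual)  # automation wins here; raise the exception rate
                          # or the handoff cost enough and it stops winning
```

The interesting knob is the exception rate: with cheap automation but expensive handoffs, a modest rise in undetected-failure or exception frequency erases the gains, which is why cordoning off the truly routine work matters so much.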

The theory the current AI companies are advancing is that their LLM-based technologies can somehow magically eliminate that incremental road to automation and replace the varied tasks that people do. I think this is likely to be untrue on a lot of levels.

First, as we know from software development, a lot of the hard work of building a system is deciding what it should do and how it should work. Those kinds of decisions are not particularly amenable to LLMs because they are usually about generating consensus and shared knowledge among people, as well as making intuitive predictions about what will be needed and useful in the future.

Second, the history of routine office work is already littered with various forms of automation. Countless platforms and languages were supposed to automatically filter our email, generate replies, track tasks, etc. A lot of the use cases where that is straightforward are already handled by more conventional tools (ticketing systems, chatbots, phone trees, web forms, etc.). Sure, there are some use cases where more automation can be done, like creating skeletal or prototype code. But specifying how things work remains a central job no matter how the code is built.

Third, in human groups and organizations, many decisions are not really "computable". That is, they aren't just some form of inferential or statistical logic mapping from priors to an output decision. Rather, they involve people forming perceptions, views, and feelings, and from those defining an acceptable decision. Human decision-making involves the nervous system and the amygdala, not just cognition. That's not a bug, that's a feature -- it keeps us in sync with our agency as living, sentient beings.

This fast and easy road to automation is what the AI companies are banking on to increase productivity by the amounts that would justify their stratospheric valuations. I'd be pretty surprised if LLMs actually enable automation that way rather than the "slow boring" way.

Comment Re:"Profit" on one side of the scale... (Score 1) 64

Is there a word that combines laughable and infuriating?

Because this is both. The fact that MetaZuck has the gall to say openly that they can be trusted to find a "balance" around privacy, when their business model is surveillance, makes me want to scream. And then, when I'm done screaming, to laugh.

The only glimmer of hope is that the downsides of this will probably show up so quickly, and the company will respond with the usual mealy-mouthed platitudes and lies, that it may finally force some sort of real oversight of their data collection and sales practices. Things I expect we will see soon include:

  • Crimes and murders recorded both intentionally and inadvertently
  • Corporate, government, and military espionage
  • Non-consensual sexual recordings
  • Illegal behavior with, by, and involving minors

Yes, much of this could have been recorded with a phone, but with these glasses, determining whether someone is recording will be effectively impossible. And because of Facebook's business model, they will always want to record as much as possible.

Buckle up.

Comment The right term is "free-riding" (Score 2) 21

Crypto is free-riding on the banking system.

Banks do an awful lot more than just hold money in accounts and move it around. They are responsible for detecting fraud, mediating disputes over claimed funds, collaborating with law enforcement to prevent illegal activities, and engaging in well-understood, sound lending and risk-management practices. They also provide separation between financial activities like storing money, lending, and investing. These are all critical functions for ensuring that regular people can count on stable and safe vehicles for storing and using their money. The banks' failure in these functions in the 1920s was a major cause of the Great Depression, and the reason we struck the industry's current bargain: government guarantees in exchange for regulations ensuring stability and soundness.

Do they succeed at all of these all of the time, or even as much as they should? Doubtless they do not.

But crypto provides none of these functions, and in fact depends on them. When people want real money from the real financial system, they cash out of crypto and lean on everything the banking system provides. Crypto wants to free-ride on the whole system and enjoy the investment and profit upsides without the financial-system responsibilities. Pure leechery.

"Just be a bank" is 100% correct.
