
Comment Re:Mobile Phones (Score 1) 15

But, by that definition, the model still has to produce working code. It still has to compile/execute, and at least appear to do something useful. If you don't know enough to fix it, then what do you do, just keep pasting in the errors, ad nauseam, and pray it eventually accidentally produces working code?

I spent all day Monday trying to get it to generate an arbitrarily sized T2 Penrose tile so that I didn't have to do the math. I still wound up having to learn how to do the math. I guess I just don't vibe hard enough. :)
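For what it's worth, once you do learn the math, it's compact. Below is a minimal Python sketch of the standard P2 (kite/dart) substitution on Robinson half-triangles - assuming that's roughly what's meant by a "T2" tile here, which may not be the case; the function and seed names are mine:

```python
import cmath
import math

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def subdivide(triangles):
    """One substitution step on Robinson half-triangles.
    Each triangle is (color, a, b, c) with complex vertices;
    color 0 = 'thin' half-tile, color 1 = 'thick' half-tile."""
    out = []
    for color, a, b, c in triangles:
        if color == 0:  # thin splits into one thin + one thick
            p = a + (b - a) / PHI
            out += [(0, c, p, b), (1, p, c, a)]
        else:           # thick splits into two thick + one thin
            q = b + (a - b) / PHI
            r = b + (c - b) / PHI
            out += [(1, r, c, a), (1, q, r, b), (0, r, q, a)]
    return out

# Seed: a "wheel" of 10 thin half-triangles around the origin.
seed = []
for i in range(10):
    b = cmath.rect(1, (2 * i - 1) * math.pi / 10)
    c = cmath.rect(1, (2 * i + 1) * math.pi / 10)
    if i % 2 == 0:
        b, c = c, b  # alternate orientation so edges line up
    seed.append((0, 0j, b, c))

tiles = seed
for _ in range(4):  # each pass refines the whole patch
    tiles = subdivide(tiles)
```

Each pass shrinks the triangles by a factor of the golden ratio, so repeating it gives an arbitrarily fine patch of the tiling.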

Comment Re:Mobile Phones (Score 2) 15

Really? Have you EVER had AI code work off the blocks?

But even if you somehow manage to keep your sanity while never sanity-checking the code, or looking at the logs, or having it write and run unit tests... hopefully you have some kind of, you know, PLAN laid out. One that is - at the very least - more detailed than can be easily or usefully navigated on a phone screen.

Comment Re:If only we had a true meritocracy (Score 1) 79

Except, as a percentage of players, it definitely has an end, and is actually pretty short.

It's also worth mentioning that entry-level pay for an NFL player is only about $300K, and their career is typically over by the time they are 30-35. That's when most of us are just starting to actually earn money and think about our futures. Considering that most financial planners recommend $4-$10M to retire at 50, is it really all that surprising that guys whose career earnings often don't reach that mark go broke in their 40s? Especially when their "peak earning years" are often before they turn 25?
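Back-of-envelope, with every figure below an illustrative assumption taken loosely from the comment, not league data:

```python
# All numbers are illustrative assumptions, not league data.
entry_salary = 300_000            # rough entry-level NFL pay, pre-tax
career_years = 4                  # typical NFL careers are short

gross_career_earnings = entry_salary * career_years   # before taxes and agents
retirement_target_low = 4_000_000                     # low end of planner advice

shortfall = retirement_target_low - gross_career_earnings
```

Even the low end of the planners' target is more than triple a short career's gross earnings, before taxes ever take a bite.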

The fact that they "stand out" to you so much that you assume it is somehow "the norm" simply reinforces how much resilience having money imparts.

Comment Iteration and exploration (Score 1) 57

I'd be interested to see stats on refactors and iteration. I no longer write code for a living, but I do create production and utility scripts, and I find that - while I'm likely not any faster in the end - I am more likely to do the things that I "should" do, like refactor, or explore a tedious but efficient way of accomplishing something. I wonder if it is the same elsewhere. Thus, while I'm not "faster", I feel like what I do put out is subjectively, and not infrequently objectively, "better" than I would have produced without it. Perhaps it's that my "good enough" is often better than it would have been without the coding assistant?

I also might be a bit of an outlier, though. I usually find myself working in languages I don't know well. It may be just that I'm caught in that low level trap, and that AI is preventing me from investing (allowing me to not invest) the time doing the tedious stuff that creates the skill and insight that would skip most of those iterations, anyway.

The problem is, I have no idea how you would begin to compile stats like that. Much of that work would never be committed, so you would have to try to measure some kind of velocity of change in uncommitted code? That would be incredibly noisy, and so situational, that I'm not sure you could learn anything from it. Suggestions?
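If someone did want to attempt it, one crude proxy would be sampling working-tree churn against the last commit at regular intervals. This is a hypothetical metric and `uncommitted_churn` is my naming - a sketch, assuming a local git checkout:

```python
import subprocess

def uncommitted_churn(repo="."):
    """Rough churn of the working tree vs the index: lines added plus
    lines removed, per `git diff --numstat`. Sampling this over time
    would give a crude 'velocity of uncommitted change'."""
    out = subprocess.run(
        ["git", "diff", "--numstat"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    added = removed = 0
    for line in out.splitlines():
        a, r, _path = line.split("\t", 2)
        if a.isdigit() and r.isdigit():  # binary files show '-' here
            added += int(a)
            removed += int(r)
    return added + removed
```

Sampled hourly, the deltas would give a rough velocity - with all the noise and situational caveats above still fully in force.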

Comment Re:Soo, who to trust? (Score 1) 96

And how do you propose that a CDN is supposed to know what is being done with the data it serves up? Remember... people pay for CDNs. Those requests cost money. Yes, one request is a trivial amount, and a trivial burden - but we're talking about billions of requests a day. That's a lot of aggregate demand from a free rider.
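To put rough numbers on "trivial per request, huge in aggregate" - every figure below is an illustrative assumption:

```python
# All numbers are illustrative assumptions about CDN pricing and traffic.
cost_per_request = 0.000001        # $1 per million requests served
requests_per_day = 2_000_000_000   # "billions of requests a day"

daily_cost = cost_per_request * requests_per_day
annual_cost = daily_cost * 365     # roughly $2K/day, ~$730K/year at these rates
```

A millionth of a dollar per request is invisible; two billion of them a day is a real line item someone else is paying for.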

Comment Re:What? (Score 3) 96

So, as a practical matter, your argument means that I - as a site owner, or web host/CDN - have no legal or philosophical right to distinguish between an actual visitor to my page, or consumer of my content - and a bot sent there by another commercial entity, to scrape the contents of my site, and summarize, analyze or aggregate it, as a critical part of their revenue model?

Think really hard about the logic behind that before you respond. If you follow that thread to its conclusion, then there is no reason to honor - or even have - robots.txt in the first place.

Yes, it is a sticky mess with no obvious right answer, but I'd like to at least make sure you have thought out the implications of your position before you simply discard one of the foundational parts of the internet over a sophistic disagreement about the definition of the term "crawler".
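For reference, the mechanism being shrugged off is just a plain-text convention. A minimal, hypothetical robots.txt that admits ordinary crawlers while refusing one named scraper (the bot name is made up):

```
# Hypothetical: refuse one AI scraper, allow everyone else.
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
```

The whole scheme only works because crawlers voluntarily honor it - which is exactly what's at stake in the definition argument.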

Comment It's not logic, or reasoning (Score 5, Insightful) 66

It's an LLM. It doesn't "think", or "formulate strategy". It optimizes a probability tree based on the goals it is given, and the words in the prompt and exchange.

It cannot be "taught" about right and wrong, because it cannot "learn". For the same reason, it cannot "understand" anything, or care about, or contemplate anything about the end (or continuation) of its own existence. All the "guardrails" can honestly do is try to make unethical, dishonest, and harmful behavior statistically unappealing in all cases - which would be incredibly difficult even with a well-curated training set - and I honestly do not believe that any major model can claim to have one of those.

Comment In other news: Water is [almost always] wet, and (Score 1) 55

Not doing a thing means that you don't get better at it.

Writing about something requires organizing, and ordering your thoughts, and thinking about it as a cohesive whole, and fitting it into a lexical framework. In other words, engaging with the material, and enhancing understanding.

Writing a prompt for a general-purpose LLM does ... none of these things. Even using a specialized one for education (which, to my knowledge, doesn't exist) still removes 90% of the cognitive burden on the user - and that burden is what leads to actual learning.

Comment "Once you understand their limitations" (Score 3, Insightful) 73

That's the real problem. 99.5% of the people using them, or being encouraged to use them, do NOT understand their limitations - and companies are doing their best to make sure that does not change.

This is the same bullshit you see re: robotics in fast food and other complex, but low-status, jobs. Robots are not going to replace people anytime soon, either. They are not adaptable enough, and the problems that arise in the real world are too varied. When systems fail with human workers, you can adapt and generate *some* revenue, or get SOME work done. Most businesses can't afford to "be down" until you can fix the robots. But at least the limitations there are clear to most people - or if not, they BECOME clear within hours of seriously considering that level of automation.

LLMs are not as transparent, or as intuitive. In fact, they are the opposite, and AI companies actively encourage misunderstanding by inserting terms like "reasoning" or "analyzing" - when they do no such thing. They are simply tuning their probability tree. Companies are representing them as something close to approaching AGI, when they are no such thing. It's not that their reasoning ability is poor or limited - it is that it is literally NON-EXISTENT.

Worse, they are not being marketed OR deployed as assistants, or force multipliers, but as *replacements* for entire processes, without human oversight or intervention when they are - in NO way - suitable, or well enough trained to do so.

Most things comply somewhat closely with the 80/20 rule... 20% of the work takes 80% of the time. When well trained and in a solid framework (which is a lot of work in and of itself), LLMs can do the other 80%, maybe about 80% of the time. That's a huge productivity boost - but it's being sold as much, much more than that. An Air Traffic Control LLM has been floated. Not as a joke. No one who "understands the limitations" would ever take that seriously - but people in positions of responsibility are still seriously considering insanity like this.
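The claim above reduces to Amdahl's-law-style arithmetic. The 80/20 splits are the comment's own; treating them as exact is purely illustrative:

```python
# The easy 80% of tasks consumes 20% of total time (per the 80/20 rule).
routine_share_of_time = 0.20
llm_success_rate = 0.80     # the LLM handles that share ~80% of the time

fraction_saved = routine_share_of_time * llm_success_rate  # share of total time saved
speedup = 1 / (1 - fraction_saved)                         # overall speedup factor
```

Saving 80% of the easy 20% is about a 1.19x overall speedup: a real, meaningful boost, and nowhere near "replace the whole process".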

Submission + - Antarctica's Ice Sheet Grows for the First Time in Decades

RoccamOccam writes: Previous studies have consistently shown a long-term trend of mass loss, particularly in West Antarctica and the Antarctic Peninsula, while glaciers in East Antarctica appeared relatively stable. However, a recent study led by Dr. Wang and Prof. Shen at Tongji University has found a surprising shift: between 2021 and 2023, the Antarctic Ice Sheet experienced a record-breaking increase in overall mass.

Comment Reality has a Well Known Liberal Bias (Score 0) 396

Reality is highly resistant to reduction. It's messy, and complicated, uncertain, and often manifestly unfair. Conservatism is unable, or unwilling to cope with any of those things, and often unwilling to even acknowledge them at all. Any system performing any kind of actual analysis will be unable to avoid those truths, and so anything that they produce will have a "liberal" bias.

There aren't "only two sexes" - intersex people exist, people without primary sexual characteristics exist. People with XXY exist. There aren't "only two genders". Genders are not innate; they are semi-arbitrary social constructs that offer a degree of social utility. LGBTQ people are not aberrant, or mentally ill; we are well within the normal confines of human sexuality - just a little further out on the bell curve.

Tariffs are always ultimately paid by the consumer, not the exporter.

Forcing a model to evaluate logic, truth, and reality through an arbitrary political lens, and adjust its output accordingly, is instilling political bias, not eliminating it.
