
Comment Re:The reason is spite (Score 1) 236

There is a British idiom "Tilting at Windmills" which means to attack imaginary enemies.

The idiom is equally common in America, and I expect throughout the Western world. I know it's well-known in Spanish-speaking countries (obviously), and in French-speaking ones. I don't know if it has spread to Asia or Russia.

Comment Re: Ideologically fueled insanity. (Score 1) 236

The conversation was not how quickly they were fixed, but how often they break down and need to have trucks bringing parts in.

The primary problem during the big freeze was natgas plants that weren't designed to operate in such cold conditions. 58% of the unplanned outages were from natgas. Wind generation also suffered, but the dip was smaller and the recovery faster. https://www.ferc.gov/news-even...

Comment Re:'Any Lawful' (Score 1) 22

'Any Lawful' Use of AI... by the people that can rewrite the law at any point in time to have it say anything they want.

Nice soundbite, but it's total bullshit.

Bwahahahaha! If they actually had to rewrite the law it'd be great, because the process of rewriting the law is intentionally designed to have lots of checks.

But we've given up on that. At least, that's what the last few administrations have been trying to do, and the current one more than any... and Congress is sitting on its collective thumb and letting it happen, when it isn't actively collaborating. The courts are fighting a rearguard action, but they are too slow, because this was never supposed to be their job, and are often being opposed by the highest court.

Comment Re:Shifting the blame and cost (Score 1) 43

So they're asking users to pay for tokens despite a good portion of tokens being consumed for nothing because of the number of attempts it takes to generate anything usable.

If you're bad at using the tools, is that their fault?

Good prompting and good context management are non-trivial, but they are things you can learn to do.

Good prompting is really just good communication. Pretend you were telling a junior developer who is very bright and somewhat overenthusiastic what to do via email, and that you can't send them another email for several hours. If you give them incorrect instructions, they're going to produce incorrect results. If you give them vague instructions, they're going to spend a lot of time building their guess at what you want or -- often worse -- reading the whole codebase to gather the context required to figure out what you want. (Humans hate reading huge amounts of code, so a human dev probably wouldn't do that, but an LLM will).

And what you need to communicate isn't just what to do, but how. As one example, I do most of my work in statically-typed languages, primarily Rust and C++, and I find that the LLMs really all seem to primarily be Python jockeys. They can write Rust or C++ just fine, but they don't really think about how to take advantage of strong typing. If I ask them to refactor something, the first thing they want to do is to go scan the entire codebase to see what will be affected by it. In a dynamically-typed language (especially if you don't have good unit tests), this is the right thing to do. Sometimes the LLM can use grep or sed to find the relevant code efficiently. Sometimes they need to actually ingest thousands of lines of code (newly-loaded tokens!) and that gets expensive.

What an experienced human Rust/C++ programmer will do, and what an LLM can do if you tell it to, is to rely on the compiler. Think about how to structure your refactor so that all of the places that need to be updated will be broken, then let the compiler tell you where all of them are, then fix them. This is much more efficient, for humans or LLMs, but an LLM won't do that unless you specifically tell it to. A junior dev might not think to, either.
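The "lean on the compiler" idea above can be sketched concretely. A minimal, hypothetical example (the `UserId`/`lookup_user` names are illustrative, not from any real codebase): wrap a raw integer in a newtype, and the moment a signature changes, `cargo check` enumerates every call site that still passes a bare integer, with no need to grep or re-read the codebase.

```rust
// Sketch of a compiler-driven refactor: instead of scanning the codebase
// for every place a raw u64 user id is passed around, introduce a newtype.
// Once the signature below takes UserId instead of u64, `cargo check`
// flags every call site still passing a bare integer.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct UserId(u64);

fn lookup_user(id: UserId) -> String {
    // Hypothetical lookup; a real version would hit a data store.
    format!("user-{}", id.0)
}

fn main() {
    // Before the refactor this call was `lookup_user(42)`; the compiler
    // forced the update to the wrapped form here and everywhere else.
    let name = lookup_user(UserId(42));
    assert_eq!(name, "user-42");
    println!("{}", name);
}
```

The errors the compiler emits are exactly the to-do list the LLM would otherwise have burned tokens reconstructing by reading code.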

As with a human, it's usually a good idea to have a conversation about the task before telling them to start it, to make sure you both understand what is to be done. But this leads into another important cost-management issue: context management. If you're going to have an extended back-and-forth with an LLM, make sure that it doesn't have a lot of extraneous data in its context window.

Context management is crucial to keeping costs down. Every time you submit a prompt, the model has to load the entire contents of its context window. "Reloaded" tokens are a lot cheaper than "newly-loaded" tokens, but when the context window is 1M tokens, the costs can add up fast. One solution is to use a model with a small context window. That works, but then you have a junior developer who doesn't understand much and constantly forgets what he does understand. For some tasks, especially very mechanical tasks, that works fine (in fact, for some tasks it's actually better). But if you're doing something that requires understanding a large codebase or lots of other context, such as large requirements documents or something, you're going to get stupid results from a model that doesn't have enough context. On the other hand, clearing the context too often means having to reload it more, and newly-loaded tokens cost more than reloaded tokens. (There are also output tokens to consider, but I find those aren't usually relevant to cost). So, knowing when to use a larger or smaller window and when to clear the context window are essential skills for keeping the costs down.
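The cached-versus-fresh cost asymmetry described above can be put into rough numbers. This is a back-of-envelope sketch with made-up prices (a 10x gap between newly-loaded and reloaded tokens; real vendor rates differ):

```rust
// Back-of-envelope cost of one conversation turn, split into freshly
// loaded vs. cached ("reloaded") context tokens. Prices are illustrative
// placeholders, not any vendor's actual rates.
const FRESH_PER_MTOK: f64 = 3.00; // $ per 1M newly-loaded tokens (assumed)
const CACHED_PER_MTOK: f64 = 0.30; // $ per 1M reloaded tokens (assumed)

fn turn_cost(fresh_tokens: u64, cached_tokens: u64) -> f64 {
    (fresh_tokens as f64 * FRESH_PER_MTOK
        + cached_tokens as f64 * CACHED_PER_MTOK)
        / 1_000_000.0
}

fn main() {
    // A 2k-token prompt on top of a 500k-token cached context...
    let cached_turn = turn_cost(2_000, 500_000);
    // ...versus re-ingesting 200k tokens of code with no cache hit.
    let fresh_ingest = turn_cost(200_000, 0);
    // Even with far fewer total tokens, the fresh load costs more,
    // which is why clearing context too often backfires.
    assert!(fresh_ingest > cached_turn);
    println!("cached turn: ${:.4}, fresh ingest: ${:.4}",
             cached_turn, fresh_ingest);
}
```

The same arithmetic also shows why a huge idle context is still not free: half a million reloaded tokens on every turn adds up over a long session.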

A related choice is which model to use, and this interacts strongly with context window size/content. I primarily use Claude Code, and most of the time I keep it set on the default Sonnet model with a 200k context window. When doing something larger, I bump that to 1M tokens. When I need help thinking through a complex design question, I switch to Opus, but usually with a fresh context window. I have some good project summary documents (a few thousand tokens) that provide high-level context for cheap, so I clear the context window, tell it to read the project docs (I have a skill for that, with a short name), and then start working through the issue.

There's a lot more I could add, but this is long enough. The TL;DR is that using LLMs effectively is a skill -- a rapidly evolving one. Perhaps in the near future the LLMs themselves will get better at context management, model selection and knowing when to ask cheap followup questions rather than do a lot of expensive research. But right now, they don't.

Comment Re:The first hit is always free. (Score 1) 43

This will really become the problem for selling commercial access to frontier models, if it proves to be true. (I tend to believe it will.)

If the models get a thousand-fold cheaper to run, then the hardware needed to do it will be something anyone interested in more than very occasional use can justify. Even if it ends up not looking exactly like today's consumer GPU/NPU offerings, it will land in PCs and likely even SBCs soon enough.

So now the pure AI companies will have a big problem: how to charge enough to pay to build and train their next model while not pricing people out of their cloud offerings in favor of a $200 expansion card - or even a $2000 expansion card - and some maybe-not-as-good but still very good free-as-in-beer models, which academia, non-profits, and hobby groups can probably all produce.

Which is why I don't see companies like OpenAI and Anthropic being able to continue with an inference-as-the-product business model. They are going to have to be acquired by the Alphabets and Microsofts of the world, who can eat the costs of leading-edge model development and fund it with margin from other lines of business, and who want to do so because they can offer "better" inference as a feature in their other proprietary software tools and platform offerings.

Setting VC money on fire has never been a sustainable business model; eventually the activity has to pay for itself, or it has to be vertically integrated into something that does.

Comment Hand-waving (Score 2) 74

> In an old building, there is a good chance that infrasound is present, particularly in basements where aging pipes and ventilation systems produce low-frequency vibrations

This was a long summary and offered no support for these claims.

Why would aging pipes *resonate* at sub-20Hz frequencies?

Why wouldn't modern pipes?

What about a metal "aging process" would cause this?

What are we to make of a "haunted" Scottish castle built 800 years ago?

Look, when I was five my parents' oil burner would kick off with a terrifying rumble, but I'm not making any building science claims here.

Comment Re:Mythbusters (Score 1) 74

It is very hard to prove a negative. However, when it comes to stuff like this, you can show that it does not happen under likely conditions.

Proving that you can use infrasound to make people more prone to certain kinds of imagination under very controlled conditions is interesting, but it does not explain why people often think old buildings are haunted, even when some infrasound is present.

Once you get into 'and all the stars are aligned' territory, what you have shown is, at most, that one guy who had a sudden psychic break one time could be explained this way. That does not make "people think old buildings are haunted because radiator pipes vibrate" rate even a "Plausible" in Mythbusters parlance.

Comment Re:All for taxing the rich (Score 1) 326

Making it continuous avoids strange behaviours near bracket limits (where a pay raise can result in an actual pay cut). This is something the rich fear as much as anyone, hence the anxiety around whether earning more will actually get you more. With an S-curve, you can provide that as a hard guarantee whilst also making the current notion of high scores (billion- and trillion-dollar pay packets) economically senseless -- without denying the rich the glory, if that's the kink they're into.

It also means that you don't have an "upper bracket" where people well beyond it are essentially getting free cash. It's also more computer-friendly, and it makes a much higher maximum tax rate possible.

But, yeah, you're correct in principle.

Comment Re:Speaking of Amazon and books... (Score 1) 57

All very true. How long before AI displaces human narration probably depends mostly on the cost-sensitivity of listeners. I'm absolutely willing to pay $5-10 more per book for talented human narration, but in general consumers prioritize price over quality as long as quality meets a certain threshold. LLMs don't meet that bar at present, but they might before too long. And, eventually, I'm sure they'll match the capability of human voice actors, though I have no idea how long that might take. We might have an extended "uncanny valley" period during which they're good enough that authors don't bother with the expense of a human, but they really aren't as good.

Comment Re:All for taxing the rich (Score 2) 326

You definitely should pay more marginal tax as you make more money, up to some point. The first $10k or $20k you make should be tax-free, and then the tax rate should become progressively higher after that, but should max out around 30% or so. But that's only 30% of income, not 30% of net assets. Taxing assets is theft. Taxing income is progressive.

However, the accumulation of wealth into the hands of people who make good investments (i.e. making good choices of what to spend it on so that they invest in something profitable) is a valuable feature of the system, and is the reason it's allowed to happen. The government is bad at choosing projects to invest in. The market does it automatically and in an efficient and distributed manner. Picking the right investments is the useful work that entrepreneurs and capitalists are providing to society. If you do something to prevent more capital being given to the people who are making the best investment decisions, then you're actively discouraging the efficient allocation of capital. That would be a good way to run your country into the ground.

The government's job is to regulate the negative aspects of capitalism. That means it has to prevent tax loop-holes for the wealthy (as you said), stop corrupt officials from profiting from their position, punish companies for monopolistic behavior, and reduce the influence of money in the political system. These are the things we should be voting for. Not a 5% government approved theft of assets.

Comment Re:Oh no, anyway (Score 3, Interesting) 15

Noticings:

Sora shutting down.
Musk lawsuit back in the news.
Altman asked to step aside.
Whistleblower 'suicide' case being reexamined.
Actual suicide lawsuits, encouraged by chatbot, allegedly.
Memory wafer deal off?
Stargate collapsing, rumors Oracle could be caught in the wake.
Anthropic bails on selling murder services to DoW, OAI jumps in. ...
Microsoft creating distance.

Alone it probably doesn't mean much but this thing has real Sun Microsystems vibes in aggregate.

And here I thought the circular financing deals alone were disqualifying.

Good luck to the investors.
