
Comment sanctions (Score 1) 190

ensuring they can't be traced or confiscated due to sanctions

This got me interested. What exactly is he saying there? Does it mean what I think it means - that they immediately shift that money around, possibly through some mixers, to muddle the origin? And, of course, make it better suited to pay their proxies now that Qatar isn't sending suitcases of cash to them anymore?

Comment Re:Pyrrhic Victory (Score 1) 190

It's designed to keep people off balance, uncertain, distracted and misinformed

Thank you for writing that. I was starting to think I'm going crazy and I can't possibly be the only one who sees through that.

If you ignore the messaging, and pay attention to what's actually happening

And if you realize that Trump is just the clown at the helm. There's literally an entire bureaucracy underneath him doing most of the planning, deciding and executing.

Douglas Adams was right. The role of the president is not to exert power, but to distract from it. President of the Galaxy, president of the USA, no difference.

Comment Re:on the one hand (Score 2) 76

This.

You don't need billions to be care-free. Even double-digit millions in some nice safe assets already give you enough fuck-you money to be good. And while everyone watches the super-super-rich, who appear on various public lists and are tracked by far more than just the tax authorities, barely anyone knows the multi-millionaires. I know three or so that I'm sure nobody here has ever heard of. They stay quiet, comfortable, private.

Comment YIKES! API Price (Score 4, Interesting) 61

Just saw the reported API pricing for those who are allowed access: $25/$125 per 1M tokens. To put that into perspective, Opus 4.6 is $5/$25 per 1M tokens. Even Opus 4 was "only" $15/$75 per 1M. No way this one is coming to any plans; it will be enterprise-only when they do open it up more.

Still cheaper than GPT Pro, though ($30/$180).

Comment Re:I use gemini (Score 1) 104

You can't code rules into the models themselves. The best you can do is train for the behavior you want, but that's never going to be 100% reliable. You can also watch the logits from the inference engine and try to redirect the model back on track or force a hard stop; some are doing this today. The problem is that low next-token probabilities are not always the source of this problem. You also run into high-probability wrong results, so it's a bit more complicated. The other issue is that not all of the APIs expose logprobs, or don't by default (OpenAI lets you turn them on). So if you don't own the inference engine and your LLM provider doesn't support it, it's not even possible to do this yourself.
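The logit-watching idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual guardrail: it assumes you already have per-token logprobs (e.g. from an API that exposes them) as a list of (token, logprob) pairs, and simply flags tokens whose probability falls below a threshold. As noted, this misses confidently wrong tokens entirely.

```python
import math

def flag_low_confidence(token_logprobs, threshold=0.5):
    """Return (index, token, probability) for tokens below the threshold.

    token_logprobs: list of (token, logprob) pairs, as exposed by
    inference APIs that return per-token log probabilities.
    """
    flagged = []
    for i, (tok, lp) in enumerate(token_logprobs):
        p = math.exp(lp)  # convert log probability back to probability
        if p < threshold:
            flagged.append((i, tok, p))
    return flagged

# A confident token vs. a shaky one (illustrative logprobs)
sample = [("Paris", -0.02), ("1889", -1.6)]
print(flag_low_confidence(sample))  # only "1889" is flagged (p ~ 0.20)
```

A real system would then re-sample, inject a correction, or hard-stop when too many tokens get flagged, rather than just reporting them.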

And it actually is very much in their best interest. Hallucinations are a huge issue and kill many enterprise projects at the planning or demo stage. Solving them, even if that just means returning "I don't know" or a signal in the response, would drive more business for them, not less.

Comment Re:Local LMs worth it? (Score 1) 46

That Mac Studio with a 2 TB SSD is $7,900, not $10K. The old 512 GB configuration was a little over $10K, but they dropped that option. As for price, as far as I know it hasn't gone up: the new M5 Max 128 didn't get a price increase over the M4 Max (with the same SSD size configured), so hopefully the next Studio will follow the same pattern.

But yeah, if you want to run large models for a reasonable price, it's the only game in town right now.

Comment Re: I already cancelled my subscription (Score 1) 46

If you want to use it async, that's fine, as long as async means tens of minutes or more between turns for development work. And it's not just tokens per second: prefill is compute-bound and is going to be very slow even compared to a low-end GPU. Larger contexts also pressure the KV-cache reads, which further hurts tokens per second, and coding generally uses lots of context each turn. It all adds up.
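A back-of-envelope calculation shows why prefill dominates here. The throughput figures below are made-up illustrative numbers, not benchmarks of any specific hardware; the point is just that time-to-first-token scales linearly with context size divided by prefill throughput.

```python
def prefill_seconds(prompt_tokens, prefill_tps):
    """Time spent processing the prompt before the first output token."""
    return prompt_tokens / prefill_tps

# Hypothetical: a 60k-token coding context at 500 tok/s prefill
# (slow unified-memory machine) vs 5,000 tok/s (faster GPU).
print(prefill_seconds(60_000, 500))    # 120.0 seconds per turn
print(prefill_seconds(60_000, 5_000))  # 12.0 seconds per turn
```

Two minutes of dead time before generation even starts, every turn, is exactly the "tens of minutes between turns" workflow described above once you add decode time on top.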

Comment Re:AI is not there yet (Score 2) 51

Funny enough, this is one place where the fix is easy but probably not cheap. They just need to build guardrails that automatically check any case-law references against something like LexisNexis and feed back to the AI when it makes something up. Case law is extremely well documented and fairly structured in how it's indexed; you wouldn't even need AI for the lookup, a competent traditional search algorithm would work. Of course, that's going to be expensive, since it requires electronic access to the case-law data, and SOMEONE is going to make bank on that. It's not something you'll get with a $20 ChatGPT subscription.
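The guardrail described above can be sketched without any AI in the loop. This is a toy illustration: the in-memory set of verified citations and the draft text are hypothetical stand-ins for what would really be a query against a licensed case-law database, and the regex only covers U.S. Reports-style citations.

```python
import re

# Hypothetical index of verified citations; in practice this would be a
# lookup against a service like LexisNexis, not an in-memory set.
KNOWN_CITATIONS = {
    "410 U.S. 113",   # Roe v. Wade
    "347 U.S. 483",   # Brown v. Board of Education
}

# Matches U.S. Reports citations of the form "<volume> U.S. <page>"
CITATION_RE = re.compile(r"\b\d{1,4}\s+U\.S\.\s+\d{1,4}\b")

def unverified_citations(text):
    """Extract citations from text and return any not found in the index."""
    found = CITATION_RE.findall(text)
    return [c for c in found if c not in KNOWN_CITATIONS]

draft = "See Brown v. Board, 347 U.S. 483, and the invented 999 U.S. 999."
print(unverified_citations(draft))  # only the fabricated cite remains
```

Anything this returns gets fed back to the model as "citation not found, remove or correct it", which is exactly the feedback loop the comment proposes.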
