Comment Thank goodness! (Score 1) 45

Give me a native Kindle client! The app store version, with the overhead of WSA, is terrible from a performance and resource perspective. And the Amazon app store is like a closed-down graveyard (it should just present instructions for setting up the Google Play Store).

WSL is another story altogether though. Fantastic stuff there (fully integrated python debugging in Linux? Yep!).

Comment My guess, accidentally released with higher Temp (Score 2) 100

Temperature is a parameter that controls the randomness of responses from GPT-4 and other LLMs. It usually defaults to 0.7 (with a "standard" range of 0 to 1).

Some models, GPT-4 variants included, allow this value to go up to 2 (via the API). Values above 1 can produce gibberish.
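To see why high values go off the rails, here is a minimal sketch of how temperature reshapes a sampling distribution (this is the standard softmax-temperature mechanism, not OpenAI's actual implementation; the function name and logits are made up for illustration):

```python
import math

def sample_distribution(logits, temperature):
    """Divide logits by temperature, then softmax.
    Low temperature sharpens the distribution toward the top token;
    high temperature flattens it, so unlikely tokens get picked more often."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                 # toy scores for three candidate tokens
low = sample_distribution(logits, 0.7)   # near the default: fairly peaked
high = sample_distribution(logits, 2.0)  # the API maximum: much flatter
```

With temperature at 2, the tail tokens gain enough probability mass that strings of low-likelihood picks (i.e., gibberish) become common.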

I bet a dev version was released for a bit, resulting in the "insane" results.

Comment Ferris Bueller "Chic-Chic-a-chic-kaw" (Score 1) 47

I use the "Chic-Chic-a-chic-kaw" bit from the song Oh Yeah by Yello as my message notification tone.

* Easy to hear at low volume from a distance (through tinnitus). Surprisingly long distance (high tones with a smidge of beat)
* Even if I don't hear it, others will and mention it.
* Funny accidental conversation starter (if one is into that sort of thing)

I tried one with the "Bueller" by Ben Stein (8 seconds long with the word repeated a second time), but that got annoying.

Comment They've known why for a while now. (Score 1) 110

They've known for a while now, and been talking about it for well over a year.

On Jan 1, 2020, a new IMO (International Maritime Organization) regulation went into effect. The shipping industry drastically lowered the sulfur content of its fuels, and the SOx content of ship exhaust plumes dropped by about 77%. (Other aspects of the fuel change reduced some particulate pollution, too.)

The COVID lockdowns also reduced shipping (and the cloud-seeding exhaust from it), along with aircraft contrails, upper-atmosphere dust, and dust-generating industrial and transportation activity. Like volcanic dust, these all reflect sunlight over the ocean and lower temperatures.

I've seen claims that the reduction in ship exhaust plumes alone is enough to account for ALL the sea temperature rise since 2020, and that with low-sulfur fuel in continued use, the bulk of that excess heating will persist even as activity ramps up post-COVID.

Comment Regarding the hockey stick graph. (Score 1) 272

Regarding the "hockey stick" graph. (Taking absolutely no position on whether Mann was honest or not, competent or not, etc.)

I was under the impression that the hockey stick graph had been shown to be defective as an indicator of warming, primarily because it used tree ring data as one of its temperature proxies, but increases in carbon dioxide concentration alone have been shown to substantially promote tree growth even in the absence of temperature increases. So how much of the sudden rise in the graph comes from temperature increase (if any), and how much just from increased CO2 levels, is unknown.

But I don't have any links to reliable scholarly articles examining this issue. Do any of you?

Comment Re:java (Score 1) 56

That's the moment that got me! The whole thing went from preposterous to magical.

I wonder about two meetings. The first, committing to the idea of an online toothbrush (WTF). The second, the decision to use Java to power the idea (WTF^2).

And then the events. First, people bought the thing, millions of people (WTF). The second, someone thought to target the toothbrushes (not surprising really).

It's a fantastic story about an idea where crazy people (everyone involved) realized their whims, creating and exercising a most unlikely attack vector.

Comment Re:It's not about performance, it's about training (Score 2) 18

This.

Even if you aren't training, running open source LLMs at speed requires non-consumer hardware, either purchased or rented.

At that point the paid offerings by OpenAI and MS Azure OpenAI Services can look reasonable (or the entire concept of setting up the open source LLM AND the expenses look unreasonable).

Weaker hardware can provide a proof of concept, but it will be slow (although, compared to a human, 2-3 tokens/second works out faster than you can write over the long run...).
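For scale, the arithmetic behind that claim, using the common rule of thumb of roughly 0.75 English words per token (an assumption, not a measurement):

```python
# Rough conversion from generation speed to words per minute,
# assuming ~0.75 English words per token on average.
WORDS_PER_TOKEN = 0.75

def tokens_per_sec_to_wpm(tps):
    """Tokens/second -> approximate words per minute."""
    return tps * WORDS_PER_TOKEN * 60

slow = tokens_per_sec_to_wpm(2)  # 90.0 wpm
fast = tokens_per_sec_to_wpm(3)  # 135.0 wpm
```

90-135 words per minute of sustained prose is faster than most people type, let alone compose.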

And that 128K token "limit" for GPT-4 is rather fantastic.

Training? Yeah, you will be renting a ton of GPU time for considerable $. With proper data prep, a RAG solution in front of the LLM is a) faster, b) cheaper, c) far easier to maintain/alter, and d) (potentially; it's about data prep, chunking strategy, metadata, etc.) very competitive with fine-tuning for results.
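The RAG pattern, reduced to its skeleton: retrieve the most relevant chunks, then pack them into the prompt. This is a toy sketch with a keyword-overlap scorer and made-up function names; a real pipeline would use an embedding model and a vector store, but the shape is the same:

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by keyword overlap with the query.
    Stand-in for embedding similarity search over a vector store."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents, k=2):
    """Assemble retrieved context plus the question into one LLM prompt."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

No weights change anywhere, which is why swapping or correcting the knowledge base is just a data edit rather than another training run.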
