Comment Re:Thank goodness! (Score 1) 45
The emulator ecosystem is already there but not as full-featured/integrated (but certainly more mature).
Give me a native Kindle client! The app store version with the overhead of WSA is terrible from a performance and resource perspective. And the Amazon app store is like a closed down graveyard (it should just present the instructions for setting up Google Play Store).
WSL is another story altogether though. Fantastic stuff there (fully integrated Python debugging on Linux? Yep!).
Hopefully it results in people eating less fast food.
Yeah, they will probably go next door where this isn't happening, but one can hope (even if one knows it is futile...).
That's the best damn list ever. Putting "other" anywhere other than the end is genius.
Oh the humanities!
Temperature is a parameter that controls the randomness of GPT-4 and other LLM responses. It usually defaults to 1.
Some models, GPT-4 variants included, allow this value to go up to 2 (via the API). Values above 1 can result in gibberish.
I bet a dev version was released for a bit, resulting in the "insane" results.
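The effect of temperature is easiest to see in the sampling step itself. This is a toy sketch of temperature-scaled softmax sampling (illustrative only, not any vendor's actual implementation): dividing the logits by the temperature sharpens the distribution when temperature is below 1 and flattens it when above 1, which is why high values drift toward gibberish.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample a token index from raw logits after temperature scaling.

    temperature < 1 sharpens the distribution (more deterministic),
    temperature > 1 flattens it (more random output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

At temperature 0.01 the sampler almost always picks the highest logit; at 2.0 the lower-probability tokens start showing up regularly.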
This sounds like the cryoprison from Demolition Man combined with the precognition of Minority Report, where the crime is not yet having been born...
I use the "Chic-Chic-a-chic-kaw" bit from the song Oh Yeah by Yello as my message notification tone.
* Easy to hear at low volume from a distance (through tinnitus). Surprisingly long distance (high tones with a smidge of beat)
* Even if I don't hear it, others will and mention it.
* Funny accidental conversation starter (if one is into that sort of thing)
I tried one with the "Bueller" by Ben Stein (8 seconds long with the word repeated a second time), but that got annoying.
Here's that first sentence with them...
They've known for a while now, and been talking about it for well over a year.
On Jan 1 2020 a new IMO (International Maritime Organization) regulation went into effect. The shipping industry drastically lowered the sulfur content of its fuels, and the SOx content of ship exhaust plumes dropped by about 77%. (Other aspects of the fuel change reduced some particulate pollution, too.)
The COVID sequestration also reduced shipping (and the cloud-seeding exhaust from it), along with aircraft contrails, upper-atmosphere dust, and dust-generating industrial processes and transportation activity, all of which (like volcanic dust) reflect sunlight over the ocean and lower temperatures.
I've seen claims that the reduction in ship exhaust plumes alone is enough to account for ALL the sea temperature rise since 2020, and that with low-sulfur fuel in continued use, the bulk of that excess heating will persist even as activity ramps up post-COVID.
https://www.quantamagazine.org...
FTA (from the article):
"They proved that origami is 'Turing complete'"
Regarding the "hockey stick" graph. (Taking absolutely no position on whether Mann was honest or not, competent or not, etc.)
I was under the impression that the Hockey Stick graph had been shown to be defective as an indicator of warming, primarily because it took tree ring data as one of its proxies for temperature, but carbon dioxide concentration increases alone have been shown to substantially promote tree growth even in the absence of temperature increases. So how much of the sudden rise in the graph is from temperature increase (if any), and how much is just from increased CO2 levels, is unknown.
But I don't have any links to reliable scholarly articles examining this issue. Do any of you?
That's the moment that got me! The whole thing went from preposterous to magical.
I wonder about two meetings. The first, committing to the idea of an online toothbrush (WTF). The second, deciding to use Java to power it (WTF^2).
And then the events. First, people actually bought the thing, millions of them (WTF). Second, someone thought to target the toothbrushes (not surprising, really).
It's a fantastic story about an idea where crazy people (everyone involved) realized their whims, creating and exercising a most unlikely attack vector.
The show would be called "True Brains", about zombies finally being accepted into society thanks to a technological breakthrough for feeding them...
The Police wrote about this back in 1983.
There's a little black spot on the sun today
It's the same old thing as yesterday
There's a black hat caught in a high tree top
There's a flag pole rag and the wind won't stop
This.
Even if you aren't training, running open source LLMs at speed requires non-consumer hardware, either purchased or rented.
At that point the paid offerings from OpenAI and MS Azure OpenAI Services can look reasonable (or make the entire project of self-hosting an open source LLM, expenses and all, look unreasonable).
Weaker hardware can provide a proof of concept, but it will be slow (although, compared to a human writing word by word, 2-3 tokens/second is faster than you over the long run...).
And that 128K token "limit" for GPT4 is rather fantastic.
Training? Yeah, you will be renting a ton of GPU time for considerable $. With proper data prep, a RAG (retrieval-augmented generation) solution in front of the LLM is a) faster, b) cheaper, c) far easier to maintain/alter, and d) (potentially, it's about data prep, chunking strategy, metadata, etc.) very competitive against fine-tuning for results.
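The retrieve-then-generate idea behind RAG fits in a few lines. This is a deliberately toy sketch: the documents, query, and word-overlap scoring are all illustrative stand-ins (a real system would use embeddings and a vector store), but the shape is the same: score, retrieve top-k, stuff the hits into the prompt.

```python
def score(query, doc):
    """Toy relevance score: count of shared words between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=2):
    """Return the top-k documents by the toy relevance score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs, k=2):
    """Assemble the context-plus-question prompt that would go to the LLM."""
    context = "\n".join(retrieve(query, docs, k))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Nearly all of the quality levers the comment mentions (data prep, chunking strategy, metadata) live in how `docs` is built and how `score` ranks them; the generation step never changes.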
Without life, biology itself would be impossible.