Comment Re:Jumping the shark. (Score 1) 33

Sounds like Jack went full Kanye, and got booted out of Bluesky.

He didn't get "booted", but it was a rather amusing situation, where he dropped a bunch of seed money on Jay's project to make Bluesky... only to find out that the vast majority of the people who flocked there don't actually like him, and weren't afraid to let him know ;)

Comment Re:Jumping the shark. (Score 1) 33

1) You don't open to more people than you have the capacity to serve.

2) They do not use the same backend. Bluesky's backend is specifically designed to fix Mastodon's design flaws that make it so annoying.

3) Bluesky is growing far faster than Mastodon.

Comment Re:Sure, self driving cars. (Score 1) 151

"Seems like we (as a society) are more ready to accept some deaths and accidents than to foot the bill to keep them from happening."

Really? Last year 43,000 people died in automobile accidents in the US. Averaged out, that's 117 people each and every day.

If an airliner fell out of the sky every three days or so, no one would fly. At all. Ever.
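Spelling out the arithmetic behind that comparison (the ~350-seat figure is just my own ballpark for a large wide-body jet):

43,000 deaths / 365 days ≈ 117.8 per day
117.8 per day × 3 days ≈ 353 deaths, roughly one fully loaded wide-body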

But driving?

Just the price we have to pay for our collective automotive "freedom".

Comment Re:Slashdot (Score 2) 79

For me it's the Final Fantasy II trap.

As a kid, on my first run through Final Fantasy II, I had gotten about halfway through when I hit a fairly difficult area. I was getting tired of the fights, so rather than spending time leveling up and whatnot before going in, I increasingly made a habit of running away from enemies. And it worked great: I got further and further, really quickly. But my level correspondingly fell further and further behind what it should have been for each area, to the point where ultimately I could no longer beat the bosses and advance.

Comment Re:No thanks (Score 2) 51

Was thinking the same thing.

Don't force me to do 2FA. I have it on the accounts that matter, but half the shit on my multiple Gmail and other accounts is stuff I don't care that much about; those accounts are often just used to sign up for things to keep from giving out my 'real' info.

Comment Re:That's just RAG. (Score 1) 71

From the standpoint of wanting something reasonably factual, I am pretty sure it would be dumb to rely on any AI that's using Tweets as the main part of its training data set.

Training an AI on X content is the ultimate goal to which the computer science phrase "Garbage in, garbage out" has been moving for almost 70 years.

We can finally close out that phrase and move on to the next frontier in CS.

Comment Re:Sidewalk (Score 0) 173

Funny thing is, Rambo is a film with strong themes of PTSD and mistreatment by the police. These days it would be decried as "woke".

Not really...

Rambo wasn't trans or gay, didn't feel he had to promote any of that to children, and didn't seem big on pronouns.

Rambo also never seemed to play the race card, and he certainly wasn't claiming to be a victim; he never had blue hair or whined incessantly on screen about everything.

And in spite of having some rough times in life, Rambo genuinely loved the United States of America and was proud to have served.

Comment Re:Shamefully misleading use of term (Score 1) 71

Good to see we're abandoning the premise that the logic behind LLMs is "simple".

LLMs, these immensely complex models, function basically as the most insane flow chart you could imagine. Billions of nodes and interconnections between them. Nodes not receiving just yes-or-no inputs, but any degree of nuance. Outputs likewise not being yes-or-no, but any degree of nuance as well. And many questions superimposed atop each node simultaneously, with the differences between different questions teased out at later nodes. All self-assembled to contain a model of how the universe and the things within it interact.
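To make that concrete, here's a rough toy sketch of a single FFN block in Python; the sizes, weights, and names are made up for illustration, and real models use thousands of dimensions and billions of such weights:

import numpy as np

# Toy sketch of one transformer-style FFN block, to illustrate the
# "flow chart with graded inputs and outputs" idea above.
d_model, d_hidden = 8, 32                  # made-up sizes; real models are vastly larger
rng = np.random.default_rng(0)
W1 = rng.standard_normal((d_model, d_hidden)) * 0.1
W2 = rng.standard_normal((d_hidden, d_model)) * 0.1

def ffn(x):
    # Each "node" takes a weighted, continuous blend of its inputs,
    # applies a nonlinearity, and emits a continuous output - not a yes/no.
    h = np.maximum(0.0, x @ W1)            # ReLU
    return h @ W2

token_state = rng.standard_normal(d_model)
print(ffn(token_state))                    # the token's transformed representation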

At least, that's just the FFNs - the attention blocks add yet another level of complexity, allowing the model to query a latent-space memory, whose contents each FFN block then transforms and outputs for the next layer. That latent-space memory covers... all concepts that exist, plus any that could theoretically exist in between any number of existing ones. These live in an N-dimensional space, where N is in the hundreds to thousands, and the degree of relationship between concepts can be measured by their cosine similarity. So for *each token* at *each layer*, a conceptual representation of somewhere in that space of everything-that-does-or-could-exist is taken and, based on all the other things-that-do-or-could-exist and their relations to one another, is transformed by the above insane-flow-chart FFN into the next positional state.
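Again, just to make it concrete, a toy sketch of those two ideas (cosine similarity between concept vectors, and attention as a soft lookup over other tokens' states); every number, dimension, and name here is invented for illustration:

import numpy as np

rng = np.random.default_rng(1)
d = 16                                              # embedding dimension; real models use hundreds to thousands

def cosine(a, b):
    # Degree of relationship between two concept vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cat, dog, carburetor = (rng.standard_normal(d) for _ in range(3))
print(cosine(cat, dog), cosine(cat, carburetor))    # in a trained model, related concepts score closer to 1.0

def attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product attention: each token reads a
    # weighted blend of every token's value vector.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ V

X = rng.standard_normal((5, d))                     # 5 tokens in a sequence
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)               # (5, 16): one "memory read" per token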

Words don't exist in a vacuum. Words are a reflection of the universe that led to their creation. To get good at predicting words, you have to have a good model of the underlying world and all the complexity of the interactions therein. It took the Transformer architecture, with its combination of FFNs and an attention mechanism, along with mind-bogglingly huge scales of interaction (the exponential interplay of billions of parameters), to do this - to develop this compressed representation of "how everything in the known universe interacts".
