Comment Re:Marketing (Score 1) 19

It was the first wearable AI pin, sort of like a Star Trek communicator. And it was terrible.

But every time someone writes a 500-word article about those two facts, it gets a million clicks and tons of ad revenue, so they're gonna continue to do it until people stop clicking on the headline.

Comment Re:Slashdot (Score 1) 49

For me it's the Final Fantasy II trap.

As a kid, on my first run through Final Fantasy II, I had gotten about halfway through when I hit a fairly difficult area. I was getting tired of the fights, so rather than spending time leveling up before pressing on, I increasingly made a habit of running away from enemies. It worked great: I got further and further, really quickly. But my level fell correspondingly further behind what it should have been for each area, to the point where I ultimately could no longer beat the bosses and advance.

Comment Re: Rules for thee but not for me (Score 1, Insightful) 36

"This is the taboo question that no none is allowed to ask, because everyone already knows the answer, and the answer is not the evil racist white man."

In fact, that often IS the answer. Nations were destroyed by colonialism, and racism was literally invented to excuse it. Many have also been deliberately suppressed since, through various foul means including sanctions, backed coups, and outright assassination. That answer is the real taboo, especially if you ask the governments responsible.

Comment Re:The solution no one will implement (Score 0) 39

Here's the obvious solution that none of these companies will implement: don't create an AI that purports to know anything. They don't. Instead, make one that can explain its answers or reasoning and doesn't pretend to understand anything.

Nobody knows how to do that, at least not for a model of useful size. A model would have to actually reason in order to explain its reasoning, and these models aren't doing that.

Comment Re:Shamefully misleading use of term (Score 1) 68

Good to see we're abandoning the premise that the logic behind LLMs is "simple".

LLMs, these immensely complex models, function basically as the most insane flow chart you could imagine: billions of nodes and interconnections between them. Nodes receive not just yes-or-no inputs but any degree of nuance, and their outputs likewise aren't yes-or-no but carry any degree of nuance as well. Many questions are superimposed atop each node simultaneously, with the differences between them teased out at later nodes. All of it self-assembled to contain a model of how the universe and the things within it interact.
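To make that "flow chart with nuance" idea concrete, here's a minimal sketch of one FFN block in NumPy. The dimensions and weights here are made up for illustration; a real model's are learned and far larger.

```python
import numpy as np

d_model, d_ff = 512, 2048          # illustrative sizes, not any particular model's
rng = np.random.default_rng(0)
W1 = rng.standard_normal((d_model, d_ff)) * 0.02   # stand-ins for learned weights
W2 = rng.standard_normal((d_ff, d_model)) * 0.02

def gelu(x):
    # Smooth activation: each hidden "node" passes on any degree of nuance, not 0/1.
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def ffn(x):
    # One token's representation in, one transformed representation out.
    return gelu(x @ W1) @ W2

token_vec = rng.standard_normal(d_model)
print(ffn(token_vec).shape)   # (512,)
```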

At least, that's the FFNs; the attention blocks add yet another level of complexity, allowing the model to query a latent-space memory, whose result each FFN block then transforms for the next layer. That latent-space memory covers, in effect, every concept that exists, plus any concept that could theoretically exist between any number of existing ones. These live in an N-dimensional space, where N is in the hundreds to thousands, and the degree of relationship between concepts can be measured by their cosine similarity. So for *each token* at *each layer*, a representation of somewhere in that space of everything-that-does-or-could-exist is taken and, based on all the other such representations and their relations to one another, transformed by the above insane-flow-chart FFN into the next positional state.
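As a toy illustration of the cosine-similarity point (random made-up vectors here, not actual model embeddings):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
dim = 768                                        # "N is hundreds to thousands"
cat    = rng.standard_normal(dim)
kitten = cat + 0.3 * rng.standard_normal(dim)    # a nearby, related concept
truck  = rng.standard_normal(dim)                # an unrelated one

print(cosine_similarity(cat, kitten))   # close to 1.0
print(cosine_similarity(cat, truck))    # near 0.0
```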

Words don't exist in a vacuum. Words are a reflection of the universe that led to their creation, so to get good at predicting words, you have to have a good model of the underlying world and all the complexity of the interactions therein. It took the Transformer architecture, combining FFNs with an attention mechanism, along with mind-bogglingly huge scales of interaction (billions of parameters acting in combination), to pull this off - to develop this compressed representation of "how everything in the known universe interacts".
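And for the attention step itself, a minimal single-head scaled dot-product sketch (again NumPy with illustrative shapes, not any specific model):

```python
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # how strongly each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over tokens
    return weights @ V                                  # weighted mix of value vectors

rng = np.random.default_rng(2)
seq_len, d_k = 4, 64
Q = rng.standard_normal((seq_len, d_k))
K = rng.standard_normal((seq_len, d_k))
V = rng.standard_normal((seq_len, d_k))
print(attention(Q, K, V).shape)   # (4, 64): one new vector per token, handed on to the FFN
```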

Comment Re:I prefer to be in charge of my vehicle's brakin (Score 1) 281

The speed sensitive cruise control systems should not permit you to choose a following distance which is so excessively close.

They don't. That is one of the chief complaints about adaptive cruise control systems by people

The systems do in fact allow you to choose less than 3 seconds' following distance. People are literally complaining that the system won't let them drive unsafely.

The more space you have between cars, the faster you can safely move on the road in question, which also means the higher the road capacity.

The faster you go, the more space you need between cars to maintain a safe following distance. And if I leave a safe following distance between me and the car ahead and someone merges into that space, I no longer have a safe following distance, so now we need even more room. At commute times there simply is not enough road available for every car to have a safe following distance at speed; that is what happens on any overutilized road. If you wait for that much distance to appear before entering, traffic backs up at the point of entry.
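Rough back-of-the-envelope numbers, assuming a fixed 3-second headway and an average vehicle length of about 4.5 m (both just illustrative assumptions):

```python
headway_s = 3.0       # assumed "3 seconds' following distance"
car_length_m = 4.5    # assumed average vehicle length

for speed_kmh in (30, 60, 100, 130):
    speed_ms = speed_kmh / 3.6
    gap_m = speed_ms * headway_s                            # space you must leave to the car ahead
    veh_per_hour = 3600 * speed_ms / (gap_m + car_length_m)
    print(f"{speed_kmh:>3} km/h: gap {gap_m:5.1f} m, ~{veh_per_hour:4.0f} vehicles/hour/lane")
```

The required gap grows linearly with speed, while per-lane throughput stays pinned near 3600 / headway vehicles per hour, so an overloaded road can't make up for too many cars just by moving faster.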
