Comment Err, we're already there (Score 1) 84

> Kumar: Very soon climate scientists are just going to ditch their graphs and point out the window with an expression that says, "I fucking told you!"

Don't need a graph to tell all the closed ski resorts that they have no snow, or to tell a polar bear that it's swimming rather than walking on ice.

This joke might have made sense 10 years ago, but sadly we're way past that point now.

I live in the Northeast USA where traditionally we'd get at least a couple of major snowstorms per year as well as more minor snow. It's been 7 or 8 years since my daughter has been able to go sledding, since there is just no snow ...

Comment About as good as USA could do (Score 1) 54

Let's note that all state-of-the-art (3nm) "American" chips, such as iPhone processors, Nvidia GPUs and AI chips, and AMD processors, are in fact manufactured by a Taiwanese company, TSMC, which relies on a Dutch company, ASML, for the EUV equipment to make them.

The only major US company to make its own processors is Intel, who embarrassingly are still stuck at the same 7nm node size that Huawei are reporting here.

So basically, when hit with sanctions, the most we can do is retard China into making American-quality chips. No wonder they are trying to deny it or put a spin on it.

Comment Re:Sometimes weather is weird (Score 2) 124

Sure, but increasingly violent weather is dangerous (as well as being the canary in the coal mine for anyone unconvinced by their ski resort having no snow, etc).

Ocean surface temperatures are much higher than normal this year - a huge amount of energy in the system - which will manifest as more violent hurricanes etc. as the year progresses.

Comment More realistically ... (Score 1) 190

machines could suddenly surpass human-level intelligence and decide to destroy mankind

They don't need to decide to destroy mankind ... that risk is from a distant future where AGI is autonomous, with its own goals, making its own decisions, and with the agency to execute on them. Even then, it assumes that we've either given it control over sufficiently dangerous aspects of our infrastructure, or that it can gain access via hacking. None of these are impossible, but this is all distant future and detracts from the more immediately realistic threats.

The more realistic short-term threat is not autonomous AI gone wild, but simply bad actors (individuals or states) leveraging AI as a tool for disruptive purposes such as hacking and disinformation. We don't need to consider sci-fi scenarios such as using AI to design killer viruses, since a state-level enterprise could use humans to do that if it really wanted to, but it'd be tantamount to declaring war on the world as well as on oneself. Better to think of AI as a tool giving people/states leverage to do what they are already inclined to do.

Comment Re:Twitter has improved more in one year than.. (Score 0) 160

better verification

No. Now, a blue check mark only means you've either got a phone number and $8, or you're some celebrity Musk wants people to believe paid $8.

better moderation

No. Not unless you think leaving violent attacks, child porn, and animal torture videos unmoderated is a good thing.

less caving to federal agencies

No. There have been more federal requests complied with under Musk than there were before him. Google it - you can find the numbers.

Comment ChatGPT is not a search engine (Score 1) 176

It seems there's a lot of misunderstanding about what ChatGPT (and similar LLMs) are. Having the tech packaged/presented as a chatbot has been great at popularizing it, but the fact that you can now ask it questions and get replies seems to have made a lot of people think that it's at heart some type of search engine attempting to factually answer questions, when really nothing could be further from the truth!

This LLM/transformer tech is built to generate language that is statistically similar to the stuff it was trained on. In order to do a good job of this it has necessarily learnt quite a lot about the world described by the training set, but nonetheless it has no notion of facts or sources ... it's just a giant meat-grinder of text that generates extremely plausible new text ...

Now, if you ask it about something that it was trained on then there's a good chance that its response will draw from that training material and be "factual", but if you ask it about something where it has less relevant material to draw upon, then it will just as happily generate a bunch of BS out of thin air, and essentially has no way itself to know when it's doing this. It doesn't deal in facts - it deals in language.

If you want to get something more "factual" (i.e. constrained by the training data) out of an LLM, then it needs to have been extensively trained on that type of material - e.g. training on programming examples is what makes Codex so good.

In general, the best use of LLMs is not as an all-knowing oracle or search engine replacement, but to play to their strength and core capability as language processors, and use them for summaries, translations, text generation around a prompted theme, etc. Obviously you can ask one questions too, but the answers generally need to be treated more as brainstorming suggestions, or as things that you need to check for truthfulness if used in a context where you care.

When integrated with a search engine (vs. using ChatGPT directly), as Bing has done, you are more likely to get a factual response, since it's using the search engine to retrieve business-as-usual search results and mostly relying on the LLM to understand your "query" and present the response, although you can of course still use it in more unconstrained ways too. A couple of days ago I was talking to Bing, getting it to draw SVG platypuses for me. Odd world that we are entering...

Comment Re:Not A Ton Yet, But That's Changing (Score 1) 65

ChatGPT is based on a large neural network model, and when you train neural nets the goal is not for them to memorize the training material (this is known as overfitting - bad) but rather to generalize over it.

The goal of the model when trained was just to try to predict the next word in any given training sentence, so when you run the model what you are getting is a slightly randomized sequence of words that is the statistically most probable continuation of what you fed into it, with those statistics being a mashup/generalization over all the training material.

In other words, the model has no idea whether its output is "true" (i.e. roughly similar to something in its training data) or not, and there simply is no concept of a source - it's just generating word by word, with each word choice having been influenced by billions of other words/contexts.
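
That word-by-word generation loop can be sketched in a few lines. This is a toy illustration only - the vocabulary and scores below are made up, and a real model's vocabulary has tens of thousands of tokens - but the mechanism (sample the next word from a probability distribution over candidates, with "temperature" controlling the randomness) is the same:

```python
import numpy as np

def sample_next_word(vocab, logits, temperature=0.8, rng=None):
    """Pick the next word from a model's raw scores: mostly the likeliest
    words win, with a little randomness controlled by temperature."""
    rng = rng or np.random.default_rng()
    z = np.array(logits, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    probs = np.exp(z)
    probs /= probs.sum()              # softmax: scores -> probabilities
    return rng.choice(vocab, p=probs)

# Made-up scores a model might assign after "the cat sat on the"
vocab  = ["mat", "roof", "keyboard", "moon"]
logits = [4.0, 2.5, 1.5, 0.1]
print(sample_next_word(vocab, logits))  # usually "mat", occasionally not
```

Note that nothing in this loop checks the chosen word against reality - "plausible" and "true" are simply not distinguished.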

As someone else has pointed out, the same GPT-4 model that powers ChatGPT has also been integrated into Microsoft Bing (mobile app), and the way they've integrated it is for GPT-4 to perform searches based on your input, then respond based on those search results. In this case the model is aware of the URLs that have been accessed, and will typically cite them correctly (although all bets are off given the nature of these models, as discussed above).
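
The shape of that search-then-respond integration looks roughly like the sketch below. The `web_search` and `llm_complete` callables are hypothetical stand-ins, not real Bing or OpenAI APIs - the point is only that the model answers from retrieved snippets whose URLs it has actually seen:

```python
def answer_with_sources(question, web_search, llm_complete, k=3):
    """Retrieve the top-k search hits for the question, then ask the
    model to answer using only those hits, citing the URLs it was shown."""
    hits = web_search(question)[:k]            # [(url, snippet), ...]
    context = "\n".join(f"[{i + 1}] {url}: {snippet}"
                        for i, (url, snippet) in enumerate(hits))
    prompt = ("Answer using only the numbered sources below, "
              "and cite them by number.\n"
              f"{context}\n\nQuestion: {question}\nAnswer:")
    return llm_complete(prompt), [url for url, _ in hits]
```

Because the citations come from the retrieval step rather than the model's memory, they at least point at real pages - though, as noted above, the summary woven around them can still go off the rails.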

Comment Re:Registrations up at my college (Score 1) 222

> Most students are taking the foundation courses here at a lower cost and transferring to four-year schools later.

Exactly. That's why kids/families are giving up on college: the cost has become outrageous and financially crippling.

It was one thing paying for a 4-year life experience when the cost was more reasonable, but it's very hard to justify now unless you are taking a degree that will lead to a job where you can pay off that $200K of debt.

It's to be expected that community colleges may be seeing admissions go up while the four-year schools see them going down - it's all part of the cost-driven shift out of the higher education system.

Comment NOT modelled after any brain (Score 5, Insightful) 46

> A ChatGPT-style search engine would involve firing up a huge neural network modeled on the human brain

Err, no.

In reality, ChatGPT is based on a neural network architecture called a "transformer" that has zero biological inspiration behind it. This type of architecture was first proposed in a paper titled "Attention Is All You Need" that came out of Google.

https://arxiv.org/abs/1706.037...

This type of model grew out of wanting to process sequences of data more efficiently than can be done with an LSTM (another type of neural net architecture, again with zero brain inspiration).
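
To see how un-brain-like it is: the core operation from that paper, scaled dot-product attention, is just a few lines of linear algebra. A minimal NumPy sketch (real models stack many of these with learned projection matrices; the 3-token, 4-dimensional inputs here are arbitrary):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V -
    each output row is a similarity-weighted mix of the value rows."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V

# Three "tokens", each a 4-dimensional embedding
rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one mixed vector per input token
```

Matrix multiplies and a softmax - nothing like neurons firing, which is exactly the commenter's point.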

Comment Re:charging 2 to 3 times going rater/Kwh (Score 1) 155

Subway sandwiches aren't that bad. Not great, but certainly not disgusting, unlike the McDonald's burger I foolishly chose to have for lunch yesterday.

Certainly an interesting pivot or new direction though. From sandwiches to EV charging!!

It does make a little sense to the extent that an EV charge takes about as long as it takes to scarf down a Subway sandwich, but you've got to wonder how many Subway locations have room for this, or could even justify the cost of a single EV charge point in the parking lot (if they have one). They tend to be located in high-street type locations where there's no space.
