Comment Re: Modern "news" is nothing but opinion pieces. (Score 3, Insightful) 108

conservative media is not engaged in a discussion of subtleties. It's knowingly and deliberately engaged in a plan of mass deception.

While all 24/7 news sources are bad, I agree that conservative media is worse, since it tends to argue in bad faith. Historically this goes back to Roger Ailes' famous memo to Nixon about creating a propaganda network for conservatives. He was too late for Nixon, but it has worked well in recent times. Perhaps too well...

Comment Re:Yeah no shit (Score 2) 199

Private equity exists to make profits. Hospitals exist to help sick people. Combine the two and you get "how can we profit from sick people"

Agreed. In the context of this study, medical errors generate more billable procedures, so hospitals are incentivized to cut corners, especially when doing so increases the chance of billable problems. How many professions are there where you make more money when you screw up?

Comment Re:No suprise (Score 1) 176

Weird that we cannot create such sets when it comes to scientific papers and doctorate theses...

The strong LLMs come from companies, and I'm sure they save their training sets and could retrain with data that existed before any poisoning of the well. (Assuming courts don't eventually force them to delete this data.) Presumably, they will weigh the trade-offs of newer data against older data, along with architectural and training-technique advancements, to get better LLMs.

So no, the LLMs have no reason to get worse.

Also, we now have improved techniques for determining whether an LLM generated some text. Roughly, you give the LLM some of the initial text and see how it completes it. By testing against all the popular LLM models, you can determine with high accuracy whether an LLM from your test set generated the text. https://arxiv.org/abs/2305.173...
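For intuition, here's a minimal sketch of that completion test using Hugging Face transformers (the model name, prefix split, and match metric are my illustrative choices, not the paper's method):

# Completion-based detection sketch: feed a candidate model the opening of
# the text and measure how much of the real continuation it reproduces.
# Model, prefix fraction, and scoring are illustrative, not from the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

def completion_overlap(model_name, text, prefix_frac=0.5, max_new_tokens=64):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    ids = tok(text, return_tensors="pt").input_ids[0]
    split = int(len(ids) * prefix_frac)
    actual = ids[split:split + max_new_tokens]        # the real continuation
    out = model.generate(ids[:split].unsqueeze(0),
                         max_new_tokens=len(actual), do_sample=False)
    completion = out[0][split:split + len(actual)]    # the model's continuation
    n = min(len(completion), len(actual))
    return (completion[:n] == actual[:n]).sum().item() / len(actual)

# Text a model has likely memorized should score high for that model.
suspect = ("It was a bright cold day in April, and the clocks were striking "
           "thirteen. Winston Smith, his chin nuzzled into his breast in an "
           "effort to escape the vile wind, slipped quickly through the doors.")
print(completion_overlap("gpt2", suspect))

Run it against every model in your test set; the one with an unusually high overlap is the likely author.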

Comment Re:Victim of Own Success (Score 1) 259

With Netflix unwilling to pay them enough to make up for declining broadcast/theater earnings, the studios had to start their own streaming services.

Corporations want to maximize profit, not just keep things the same. The studios started their own streaming services because they thought they could make more money than by licensing to Netflix. They set their licensing fees at ridiculously high levels to match the ridiculously high returns they thought they could make off their own streaming services.

Comment Re:This guy is measuring citation patterns (Score 1) 114

THIS got into Nature? Were the peer reviewers sleeping?

No offense, but I'll take those blind reviewers over your speculation based on a blurb about the paper and an Internet search.

The abstract says "We find that the observed declines are unlikely to be driven by changes in the quality of published science, citation practices or field-specific factors", so it looks like they attempted to control for such factors.

Comment Re: An average physician is wrong 90% of the time (Score 1) 70

Do you have an even longer list where the doctor got it right?

Not that I don't agree with you. Doctors work well for statistically common problems, but are often frustratingly bad for rarer issues. I always found it validating how House got it wrong three times before he got it right. Normally when a doctor gets it wrong, you are off to the next doctor, and they never know (or care) that they got it wrong.

Also, even when you get a correct diagnosis, most doctors are not up on the latest research. If you really research the problem yourself, you might not have the full picture a doctor has from training and experience, but you will probably have better knowledge of some details and of the current research. However, beware of single studies showing a result, since we now know that issues like publication bias and p-hacking make far more than 5% of them invalid. It's probably closer to 50%.
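A back-of-envelope Bayes calculation shows how that can happen (the prior and power figures below are illustrative assumptions in the spirit of Ioannidis's "Why Most Published Research Findings Are False", not measured values):

# Suppose only 10% of tested hypotheses are actually true, with the usual
# alpha = 0.05 and power = 0.8. All numbers are illustrative assumptions.
prior_true = 0.10
alpha, power = 0.05, 0.80

false_pos = alpha * (1 - prior_true)   # true nulls that still hit p < .05
true_pos = power * prior_true          # real effects that get detected
fdr = false_pos / (false_pos + true_pos)
print(f"'Significant' findings that are false: {fdr:.0%}")  # -> 36%

Publication bias and p-hacking only push that 36% higher.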

Comment Re:For example (Score 1) 113

I worked in that field (and atmospheric science / climatology) for 15 years. It used to be you needed to study thermodynamics, atmospheric chemistry and a whole range of related fields to be able to write a halfway decent prediction model.

Sounds like you are out of the field. Just because people are doing it differently than you did doesn't make them idiots. People now have to study ML, math, and physics to understand how to improve ML generalization in the face of chaotic behavior. I'm sure many are trying to devise hybrid approaches that build invariants from physics into the ML.
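As a hedged sketch of what such a hybrid might look like, here's a toy physics-informed loss in PyTorch; the "conservation" penalty is a stand-in invariant I made up for illustration, not anything from a real forecast model:

# Toy physics-informed training step: the data loss is augmented with a
# penalty for violating a known invariant. Everything here is illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def physics_residual(state, next_state):
    # Pretend the sum of the state vector (total mass/energy, say) is
    # conserved between timesteps; penalize any deviation.
    return (next_state.sum(dim=1) - state.sum(dim=1)).pow(2).mean()

def train_step(state, target, lam=0.1):
    pred = model(state)
    loss = nn.functional.mse_loss(pred, target) + lam * physics_residual(state, pred)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batch: 32 samples of an 8-dimensional "atmospheric state".
state, target = torch.randn(32, 8), torch.randn(32, 8)
print(train_step(state, target))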

Now, I suppose it's true that someone can just cobble some ML together that gets surprisingly good performance, and that the marginal gains beyond that, given chaos, are somewhat small. However, even in your day, someone could have used an existing model, as long as it wasn't a proprietary one that needed a supercomputer for timely execution.

Comment Re:Pointless report (Score 1) 65

You need to take your meds; a literal socialist ran for president in 2016.

I assume you are talking about Bernie. First, he lost; second, he is a self-described democratic socialist, not a socialist. https://en.wikipedia.org/wiki/...

Bernie is more in line with Nordic social democracy than with some abstract evil notion of socialism. The Democrats who are consistently in power are more in line with the moderate Republicans of the past.

In my opinion, the problem on the right is a political media machine built to pander at best and manipulate at worst. It purposely generates bad-faith arguments meant to win talking points. Maybe this is not surprising from politicians, but the right needs a more honest media that attempts to hold people accountable and get closer to the truth.

While the 24-hour cable networks on the left are mostly fluff, I feel the left's media, in its many forms, has more people who actually care about making good-faith arguments. These types of people on the right seem to have disappeared...

Comment Re:Hmmmm... I guess that settles it. (Score 2) 151

Your implication here is that an imperfect or lossy copy isn't a copy. I'd have to disagree with that.

I wasn't implying that. I was just saying that, based on information theory, these LLMs can't store a perfect copy of all the training data. That's it. The process can't be reversed.

A hard disk may only theoretically store X TB of information and X*Y TB with compression, but in reality, using side-channel methods, we can extrapolate double or even triple that by looking at and contrasting the analog values and finding interference patterns left behind by previously stored data.

I don't see the point of this. Technically there's close to infinite "information" on the drive just in the arrangement of its atoms; it's just that we didn't write it, so we don't know how to access it, or what to make of it if we could.

But aside from that we have lossy copies all over the place in life and computing.

So you are saying the LLM has a lossy copy of the data, where lossy just means that some of the data can be recovered. Well, I guess I can't disagree. The point of the research was to get the LLM to regurgitate some of its training data; therefore some of the data is effectively stored. That's not surprising.

You are giving a circular definition by defining an average as giving an average. An average is meant to record the significant data in the value. It is an abbreviated summary of that information, in other words, a lossy memorization.

No, I'm using the word average in two different ways. You could call it equivocation, but it's not really meant to confuse the issue, and I don't think it confused you. I guess you're upset that I hijacked your example, but it was kind of silly. You playing semantic games is also a bit silly.

Not really, they can only generate recombinations of their training data.

I guess you could say that, but it's pretty meaningless. Their training data contains almost every word in existence, and they are returning a combination of words...

Honestly, it sounds like you have never used one of the sophisticated LLMs in a significant way. Ask one to do something silly. I just asked GPT-3.5 to pretend it was from Shakespeare's time and tell me how to clean my tankless water heater. That's probably the first time that output text has existed.

Comment Re:Hmmmm... I guess that settles it. (Score 2) 151

Actually they do, and they store that information both in tokens and relative weights. There is no information in these systems except information which came from training data, and each aspect is stored in many redundant weights.

There is lots of "information" that comes from the random processes used to initialize and train the model. While you might dispute whether this is useful, evolution would disagree with you.

we know it is algebraically reversible even if we can't practically reverse the network by hand.

The current belief is that it's impossible to reverse the process and get all the data back. The size of these models is too small to encode all the data used for training. For example, GPT-4 reportedly used 13 trillion tokens and has 1.76 trillion parameters. Assuming those are 16-bit parameters (they are probably 8-bit), the model could only store about 28 terabits (roughly 3.5 TB) of data. Assuming English text, and ignoring spaces and punctuation, storing those 13 trillion tokens verbatim would require a compression ratio of around 0.4583 bits per character, which is about twice as good as the current state of the art, even using LLM-based techniques.
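Spelling out that arithmetic (the parameter and token counts are the rumored figures above; 4.7 characters per token is my rough assumption for English text):

# Rumored GPT-4 figures; chars-per-token is a rough assumption for English.
params = 1.76e12
bits_per_param = 16                 # fp16; likely 8-bit in practice
tokens = 13e12
chars_per_token = 4.7

capacity_bits = params * bits_per_param      # ~2.8e13 bits (~3.5 TB)
training_chars = tokens * chars_per_token    # ~6.1e13 characters
print(f"{capacity_bits / training_chars:.4f} bits per character")  # ~0.46

So a verbatim copy would need roughly 0.46 bits per character, about twice as good as the best text compression anyone has demonstrated.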

That is like claiming a system for calculating bowling average isn't intended to memorize your bowling score.

It isn't meant to memorize your scores. It's meant to report your average performance. Summing and averaging throw away lots of information: not only do you lose the order and number of games played, there's also the finite precision of a real machine producing a single output number.
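A trivial illustration: many different score histories collapse to the same average, so the average alone can never be "decompressed" back into the games.

# Two different bowling histories, one identical mean: the average keeps
# the summary and throws away the order and number of games.
history_a = [150, 150, 150, 150]
history_b = [100, 200]
avg = lambda xs: sum(xs) / len(xs)
print(avg(history_a), avg(history_b))  # 150.0 150.0 -- indistinguishable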

the neural net data is ultimately nothing but a lossy and obfuscated copy of the training input.

These LLMs can generate much more than their training data. Doesn't that ultimately make them more than just a copy of that data?

Comment Re:Or ever... (Score 1) 114

Q* is the typical name given to the optimal Q-function in RL, which defines the optimal policy. In general, many algorithms add a * suffix to denote the optimal version. In a similar way, the A* algorithm is the A algorithm that is guaranteed optimal because of an admissibility assumption on its heuristic.

In other words, the name Q* does not imply any connection to best-first search.
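For reference, Q* is the fixed point of the standard Bellman optimality equation, and the optimal policy simply acts greedily with respect to it:

Q^*(s,a) = \mathbb{E}\!\left[ r + \gamma \max_{a'} Q^*(s',a') \,\middle|\, s,a \right],
\qquad
\pi^*(s) = \operatorname{arg\,max}_a Q^*(s,a)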

Comment Re:The elephant in the room is China, maybe India (Score 1) 218

It's physics. It's not ethics, history, or literary criticism. It's not about fairness or population size. It is entirely about how many tons a country is emitting. That's the physics of the matter.

What does the size of a country have to do with physics? Is Vatican City the greenest country in the world? Maybe we should lump all the Western countries together and compare them to China. NATO has a population of around 1 billion, so it's at least close to China's.

Now, I'll agree China is a useful focal point, since it has a lot of people and its government can unilaterally change policy, but it doesn't make sense to put the burden or blame on it. If you really want to assign blame, then you need to look at the total history of CO2 emissions. Who do you think put the majority of that extra carbon into the atmosphere?

Comment Re:Creativity (Score 1) 179

If creativity is just previous ideas mixed up then where did the first ideas come from?

Probably borrowed from the evolved behavior of another animal. In fact, even many of our current ideas are borrowed from nature.

AC will not be read or replied to. 99% of you are trolls.

Strange, you just trolled an AC. Maybe you need to reevaluate.
