Comment Re:And there's the little footnote (Score 1) 229

I'm not seeing any evidence for god at all

This is like a blind person proclaiming that they see no evidence that light exists. The evidence is all around you. Just because you don't see it, doesn't mean it's not there.

Things exist. Things don't create themselves. Therefore, there is a Creator. It's that simple. I feel sad that you can't see this.

Just because the truth is hard to find (you know, the one religion that got it right), doesn't mean it can't be found, or that it doesn't exist.

Comment Now, how about forced binding arbitration (Score 2) 93

Just about every company requires you to sign, as a condition of employment, an agreement that if you have a dispute with the company, you will submit to binding arbitration, with an arbitrator of the company's choosing. This effectively makes it impossible to sue an employer for misconduct. Binding arbitration needs to go.

Comment Re:50% (Score 1) 37

I too see value in less-than-perfect solutions.

In the case of AI, your proposed solution isn't any kind of solution, not even an imperfect one. Viewing AI from the perspective of an "AI Doomer" (as the headline suggests) and doing everything one can to limit it is neither a solution nor productive.

Unlike the Doomsday Clock people, I don't view AI as having the potential to kill off humanity. It's a technology that will be tremendously useful to humanity, and it carries some (but not overwhelming) risk. As with the computer itself and the internet, we should embrace and regulate AI.

Comment In the middle of the alphabet (Score 1) 72

In school, I used to be annoyed that my last name starts with "I" which puts me roughly in the middle of the alphabet. So when students were selected in alphabetical order, I was somewhere in the middle. And when the teacher would change things up and reverse the direction, I was *still* right there in the middle!

This research explains a lot!

Submission + - Voyager 1 is sending data back to Earth for the first time in 5 months (cnn.com)

Tony Isaac writes: Voyager 1 is once again communicating back to Earth and appears to be functioning normally. Kudos to those NASA engineers who figured out how to diagnose that a chip was defective, and rewrite its code to avoid using that chip entirely! I can just imagine what kind of spaghetti code that is by now, but they figured out how to get it to work. I guess V'ger isn't quite here yet!

Comment Re:This makes sense (Score 1) 74

I never said Bing Copilot was used, so your comment makes no sense to me. I offered Bing Copilot as an illustration of a GPT-4 implementation that is able to overcome the age of its training data set by incorporating search results into its context window. Using this technique, Bing Copilot can offer code solutions for APIs published *after* the GPT-4 model was trained. The new API documentation doesn't have to be part of the model; the model can still produce reasonable results by pulling the documentation into the context window.

The researchers seem to have used the same principle. They wouldn't have to incorporate the CVE into the training model itself, only the context window. If they were updating the GPT-4 training model itself, they would likely have said so in the description of their work. Instead, they used the phrase "by reading security advisories." The word "reading" does not lead one to believe that they updated the training data, but rather, that they used the existing, pretrained model to analyze the documents.
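As a minimal sketch of what "only the context window" means in practice (every name and string below is invented for illustration, not taken from the research or any real API), the approach amounts to concatenating the advisory text with the question before it ever reaches the frozen, pretrained model:

```python
# Hedged sketch of context-window augmentation: the pretrained model is
# never retrained; the advisory text is simply placed in the prompt.
# All names here are illustrative, not from the paper or a real API.

def build_prompt(advisory_text: str, question: str) -> str:
    """Assemble a single prompt that carries the advisory in-context."""
    return (
        "Use ONLY the security advisory below to answer.\n\n"
        "--- ADVISORY ---\n"
        f"{advisory_text}\n"
        "--- END ADVISORY ---\n\n"
        f"Question: {question}"
    )

# The assembled prompt is what gets sent to the model; the model's
# training data never changes.
prompt = build_prompt(
    advisory_text="Hypothetical CVE text: SQL injection in a login endpoint.",
    question="How could this be exploited?",
)
```

Because the advisory rides along in the prompt, the model's training cutoff is irrelevant to whether it can "read" the document.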

The CVE descriptions might be vague to humans, but more digestible to an LLM. I've seen this clearly illustrated by Google's NotebookLM. https://notebooklm.google.com/... With this tool, you can upload, say, a homeowner's insurance policy, and then ask the AI questions about the policy. That policy language is intentionally obtuse and difficult to read, but the LLM doesn't seem to have any difficulty with it.

Comment Re:This makes sense (Score 1) 74

You are right, and your answer does not conflict with mine.

Bing Copilot uses web searches to populate the context (assuming the web search can find the CVEs). This research used the API (or some other mechanism) to populate the context. Both have the same result. Neither approach requires the model itself to be current.

Comment Re:Turnkey totalitarianism (Score 1) 264

That's not a refutation. It's a rant.

In all your other posts, you've never refuted that Palestinians and Arabs started all 16 wars against Israel. You only said "That's what the State Department wants me to believe." That's not evidence or a supporting argument; it's a conspiracy theory. (Conspiracy theories are something Fox News specializes in.)

You never disputed that Hamas has never recognized Israel's right to exist. Not even once. I guess that's OK with you; in your book, they apparently have a right to want to wipe out Israel.

You never acknowledged that Israel has a right to exist and to defend itself.

If you're so sure my points are PR BS, prove it. But there's a big problem for you: you can't. You only have your own talking points, and anything that strays from them you dismiss as "Fox News PR." If you had an actual argument, you wouldn't be afraid to state it.

Comment Re: But ... (Score 1) 74

It depends on your definition of "working." Sure, a sandbox implementation is easy. Modifying your existing application in such a way that the added functionality works and is "right" is a lot harder. Yes, GitHub Copilot uses GPT-4 to modify your existing source code, updating it based on input prompts. It often generates code changes that are "almost" correct, but in nearly every case, I have to tweak it in some way, often before it will even compile.

Making a standalone bit of code isn't hard. Making it work in the context of something larger isn't so easy.

Comment Re:This makes sense (Score 1) 74

Your own quote makes it clear. "When given the CVE description, GPT-4..."

GPT-4 is already pre-trained. "Pretrained" is literally the P in the name GPT. They used GPT-4, *combined with* the CVE descriptions. They didn't alter the training of GPT-4, 3, or the other models. If they altered the training of GPT-4, it would no longer *be* GPT-4, but a modified version of GPT-4.

Bing Copilot searches the internet to find documentation. This study provided the documentation via API. It's exactly the same thing, just different document sources fed into the API.

Comment Re:This makes sense (Score 1) 74

The headline makes it quite clear: it does this by "reading security advisories." GPT-4 isn't *trained* on data that includes the advisories, which may well have been released after the cutoff date. What you may not realize is that recent implementations of GPT-4, such as Bing Copilot, don't just rely on the training data. After you type a question, the system often does a web search for related information, digests it, and summarizes what it found. The cutoff date is meaningless with this approach.

I've used Copilot, for example, to help with AWS Cognito API calls for versions that were released after the cutoff. Copilot has no trouble with this kind of thing, because it searches online for the relevant documents, digests them, and spits out code.
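That search-then-answer loop can be sketched roughly as follows. This is an assumption about the general shape of such systems, not Copilot's actual implementation; `web_search` is a stub standing in for a real search backend, and the snippet text is made up:

```python
# Illustrative sketch of a search-augmented answer flow: search for the
# topic, digest the hits into a context block, then hand that block plus
# the question to the model. web_search is a stub; a real system would
# call an actual search engine and then an LLM.

def web_search(query: str) -> list[str]:
    """Stub: return snippets a real search engine might produce."""
    return [f"Doc snippet relevant to: {query}"]

def build_context(question: str, max_snippets: int = 3) -> str:
    """Digest search results into a context block for the model."""
    snippets = web_search(question)[:max_snippets]
    joined = "\n".join(f"- {s}" for s in snippets)
    return f"Retrieved context:\n{joined}\n\nQuestion: {question}"
```

The key point stands either way: the freshness comes from the retrieval step, not from the model's weights.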
