
Comment This arms race eventually ends in human extinction (Score 2) 56

On one hand, this is impressive, and probably useful. If someone made a tool like this in almost any other domain, I'd have nothing but praise. But unfortunately, I think this release, and OpenAI's overall trajectory, is net bad for the world.

Right now there are two concurrent arms races happening. The first is between AI labs, trying to build the smartest systems they can as fast as they can. The second is the race between advancing AI capability and AI alignment, that is, our ability to understand and control these systems. Right now, OpenAI is the main force driving the arms race in capabilities: not so much because they're far ahead in the capabilities themselves, but because they're slightly ahead and are pushing the hardest for productization.

Unfortunately, at the current pace of advancement in AI capability, I think a future system will reach the level of a recursively self-improving superintelligence before we're ready for it. GPT-4 is not that system, but I don't think there's all that much time left. And OpenAI has put us in a situation where humanity is not, collectively, able to stop at the brink; there are too many companies racing too closely, and they have every incentive to deny the dangers until it's too late.

Five years ago, AI alignment research was going very slowly, and people were saying that a major reason for this was that we needed some AI systems to experiment with. Starting around GPT-3, we've had those systems, and alignment research has been undergoing a renaissance. If we could _stop there_ for a few years, scale no further, invent no more tricks for squeezing more performance out of the same amount of compute, I think we'd be on track to create AIs that create a good future for everyone. As it is, I think humanity probably isn't going to make it.

In https://openai.com/blog/planni... Sam Altman wrote:

> At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.

I think we've passed that point already, but if GPT-4 is the slowdown point, it'll at least be a lot better than if they continue at this rate going forward. I'd like to see this be more than lip service.

Survey data on what ML researchers expect: https://aiimpacts.org/how-bad-...
An example concrete scenario of how a chatbot turns into a misaligned superintelligence: https://www.lesswrong.com/post...
Extra-pessimistic predictions by Eliezer Yudkowsky: https://www.lesswrong.com/post...

Comment Capabilities are outpacing alignment (Score 1, Interesting) 61

This is undeniably cool and impressive, but I think proceeding down this research path, at this pace, is quite irresponsible.

The primary effect of OpenAI's work has been to set off an arms race, and the effect of *that* is that humanity no longer has the ability to make decisions about how fast and how far to go with AGI development.

Obviously this isn't a system that's going to recursively self-improve and wipe out humanity. But if you extrapolate the current crazy-fast rate of advancement a bit into the future, it's clearly heading towards a point where this gets extremely dangerous.

It's good that they're paying lip service to safety/alignment, but what actually matters, from a safety perspective, is the relative rate of progress between how well we can understand and control these language models and how capable we make them. There *is* good research happening in language-model understanding and control, but it's happening slowly compared to the rate of capability advances, and that's a problem.

Comment I find it easier to mitigate the damage. (Score 1) 385

I've watched my friends get hacked countless times. In the end everything gets taken care of, but for those few days while everything is cancelled or locked down, they're broke, which makes it hard to buy diapers. Fortunately they've got family in town. (I keep lecturing them about using cards at gas stations...)

I've been the victim of credit card fraud once, but I've had cards preemptively cancelled multiple times because they were used at companies that got hacked (Target, Home Depot, etc.). I've also had cards cancelled because the issuer (USAA) was switching from Mastercard to Visa. Sometimes you get notice; sometimes you don't.

So my solution is to keep multiple credit cards and multiple ATM cards. Two of them normally stay at home; if I'm travelling, the backup cards go deep inside my backpack. If I get hacked or lose my wallet, I still have options to pay for things.

Comment Re:Exponential does not just mean "a lot" ... (Score 2) 148

Hard to judge. They might be using it incorrectly, or they might be using it in the context of what this device is. The overall gear ratio is achieved by feeding one gear stage into the next, with each additional stage providing a further multiplication: an N-stage train with ratio r per stage yields r^N overall. That sounds like exponential growth (in the number of stages) to me.
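
A quick sketch of that compounding, assuming a made-up 10:1 reduction per stage (the per-stage ratio is my assumption, not a figure from the article):

    #include <stdio.h>

    int main(void)
    {
        const double stage_ratio = 10.0;   /* assumed per-stage reduction */
        double overall = 1.0;

        /* Each stage feeds the next, so per-stage ratios multiply:
           N stages give stage_ratio^N overall. */
        for (int stages = 1; stages <= 6; stages++) {
            overall *= stage_ratio;
            printf("%d stage(s): %g:1 overall\n", stages, overall);
        }
        return 0;
    }

Six stages at 10:1 already give a million-to-one reduction, which is why stacking stages is such a cheap way to reach absurd ratios.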

Comment Re:Call me crazy (Score 1) 115

Sorry for the bad post. Yes, the first link does not work, but it is what hpet.c documents as the reference. A sentence went missing somewhere saying that I couldn't find it. The second link, which does work, is what I've found so far. I have yet to find anything newer that documents the claimed latching behavior.

Sorry again for the bad post.
-Nyall

Comment Re:Call me crazy (Score 1) 115

OK then. Where in this return statement are the lower 32 bits read first? I don't believe the bitwise OR operator is a sequence point (the logical one is):

    return readl(addr) | (((unsigned long long)readl(addr + 4)) << 32);

hpet.c documents the following link as the reference, but I couldn't find the spec there:

http://www.intel.com/hardwared...

But I did find the following, which documents the race condition I explained above:

http://www.intel.com/content/d...

I will search for documentation newer than the 1.0a spec.

Comment Call me crazy (Score 4, Interesting) 115

Sorry if I've found the wrong stuff; I'm working from a quick googling...

Is this really the code for reading and writing the HPET?

http://www.cs.fsu.edu/~baker/d...

I've been a PowerPC programmer in aviation for a while. If you need to read the time base register (also a 64-bit up counter), you have to be aware that your read might coincide with the lower 32 bits incrementing and carrying into the upper 32 bits. So you read the upper 32 bits, read the lower 32 bits, then re-read the upper bits and make sure they didn't change. If they did, repeat the process; if they're the same, combine the 32-bit halves into a 64-bit time and call it good.
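
Here's a minimal C sketch of that retry loop, assuming a memory-mapped 64-bit up counter exposed as two 32-bit halves (the pointer value is a placeholder, not a real mapping):

    #include <stdint.h>

    /* Placeholder MMIO mapping: counter[0] is the low half, counter[1] the high. */
    static volatile uint32_t *counter = (volatile uint32_t *)0xFED000F0;

    static uint64_t read_counter64(void)
    {
        uint32_t hi, lo, hi2;

        do {
            hi  = counter[1];   /* read upper 32 bits */
            lo  = counter[0];   /* read lower 32 bits */
            hi2 = counter[1];   /* re-read upper 32 bits */
        } while (hi != hi2);    /* upper half changed: a carry raced us, retry */

        return ((uint64_t)hi << 32) | lo;
    }

The re-read guarantees both halves came from the same 64-bit value, at the cost of one extra register read on the rare retry.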

Comment Re:It's a question that WAS relevant (Score 3, Interesting) 161

I think a large part of the confusion is that CISC often means accumulator architectures (x86, Z80, etc.) while RISC means general-purpose-register architectures (PPC, SPARC, ARM, etc.). In between you have variable-width RISC like Thumb-2.

As an occasional assembly programmer (PowerPC currently), I far prefer the RISC instructions. With x86 (12+ years ago) I would spend far more instructions juggling values into the appropriate registers, then doing the math, then juggling the results out so that more math could be done. With RISC, especially with 32 GPRs, that juggling is nearly eliminated outside the prologue/epilogue. I hear x86 kept taking on more instructions, and that AMD64 made it a more GPR-like environment.

-Samuel
