Comment This arms race eventually ends in human extinction (Score 2) 56

On one hand, this is impressive, and probably useful. If someone made a tool like this in almost any other domain, I'd have nothing but praise. But unfortunately, I think this release, and OpenAI's overall trajectory, is net bad for the world.

Right now there are two concurrent arms races happening. The first is between AI labs, each trying to build the smartest systems they can, as fast as they can. The second is between advancing AI capability and AI alignment, that is, our ability to understand and control these systems. Right now, OpenAI is the main force driving the arms race in capabilities: not so much because they're far ahead in capabilities themselves, but because they're slightly ahead and are pushing the hardest for productization.

Unfortunately at the current pace of advancement in AI capability, I think a future system will reach the level of being a recursively self-improving superintelligence before we're ready for it. GPT-4 is not that system, but I don't think there's all that much time left. And OpenAI has put us in a situation where humanity is not, collectively, able to stop at the brink; there are too many companies racing too closely, and they have every incentive to deny the dangers until it's too late.

Five years ago, AI alignment research was going very slowly, and people were saying that a major reason for this was that we needed some AI systems to experiment with. Starting around GPT-3, we've had those systems, and alignment research has been undergoing a renaissance. If we could _stop there_ for a few years, scale no further, invent no more tricks for squeezing more performance out of the same amount of compute, I think we'd be on track to create AIs that create a good future for everyone. As it is, I think humanity probably isn't going to make it.

In https://openai.com/blog/planni... Sam Altman wrote:

> At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.

I think we've passed that point already, but if GPT-4 is the slowdown point, it'll at least be a lot better than if they continue at this rate going forward. I'd like to see this be more than lip service.

Survey data on what ML researchers expect: https://aiimpacts.org/how-bad-...
A concrete example scenario of how a chatbot turns into a misaligned superintelligence: https://www.lesswrong.com/post...
Extra-pessimistic predictions from Eliezer Yudkowsky: https://www.lesswrong.com/post...

Comment Capabilities are outpacing alignment (Score 1, Interesting) 61

This is undeniably cool and impressive, but, I think proceeding down this research path, at this pace, is quite irresponsible.

The primary effect of OpenAI's work has been to set off an arms race, and the effect of *that* is that humanity no longer has the ability to make decisions about how fast and how far to go with AGI development.

Obviously this isn't a system that's going to recursively self-improve and wipe out humanity. But if you extrapolate the current crazy-fast rate of advancement a bit into the future, it's clearly heading towards a point where this gets extremely dangerous.

It's good that they at least acknowledge safety/alignment, but what actually matters, from a safety perspective, is the relative rate of progress between how well we can understand and control these language models and how capable we make them. There *is* good research happening in language-model understanding and control, but it's happening slowly compared to the rate of capability advances, and that's a problem.

Comment This is an attack, not a leech (Score 5, Informative) 884

First of all, just to be clear: this isn't leeching, this is someone doing something nefarious. If they just wanted free bandwidth, they would never have set up an evil-twin network. Most of the replies in this thread are bad advice that assumes a leech. The person responsible might be nearby, but probably isn't; if you track down the computer that's responsible, you're likely to find that its owner doesn't know what's going on and that it's been taken over by an anonymous attacker over the Internet. Or you'll find a PwnPlug.

The first thing you need to do is notify the police that you're being targeted by hacking. This is important; if your computer/network is taken over and used for something illegal, which is likely to happen, this will protect you. Second: you need to notify your employer, as well as anyone whose confidential data you're in possession of. And third: you need to harden your computer security, and figure out why you might have been targeted.

Comment They said no such thing! (Score 1) 105

> This lipid could serve as a way to diagnose people who are at risk of developing neurological disorders after a blast, the scientists say.

No, the paper doesn't say that. I checked. It's also not true; this can't be used for diagnosis (except maybe post-mortem), because it's on the wrong side of the skull.

Comment Based on misunderstanding how transactions work (Score 1) 438

This paper is based on a misunderstanding of how Bitcoin transactions work. If I receive 10 BTC and then send 7 BTC to someone using the usual software, 7 BTC goes to them and the other 3 BTC is sent as "change" to a newly created Bitcoin address that's added to my wallet. It's also common practice for websites that accept Bitcoin as deposits or payment to generate a new address for every customer, so that when a customer sends coins, the site can tell who sent them from the destination address alone. The authors of the study don't seem to know this, so they completely misinterpret the patterns they're finding in the blockchain. If everyone followed the suggested practice of generating a new address for every incoming transaction, then every address would either be empty or have never had an outgoing transaction.
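The change mechanism described above can be sketched in a few lines of Python. This is a deliberately simplified model (real transactions reference prior outputs and subtract fees), and all names here are illustrative, not a real library's API:

```python
def make_transaction(utxo_value, pay_amount, dest_addr, change_addr):
    """Spend one unspent output in full: pay_amount goes to the recipient,
    and the remainder goes to a freshly generated change address that
    belongs to the sender's own wallet."""
    if pay_amount > utxo_value:
        raise ValueError("insufficient funds")
    outputs = {dest_addr: pay_amount}
    change = utxo_value - pay_amount
    if change > 0:
        # Looks like an ordinary payment on the blockchain, but actually
        # returns the leftover coins to the sender.
        outputs[change_addr] = change
    return outputs

# Receive 10 BTC, then pay 7 BTC: 3 BTC of "change" lands at a brand-new address.
print(make_transaction(10, 7, "recipient", "fresh_change_addr"))
# {'recipient': 7, 'fresh_change_addr': 3}
```

An outside observer sees two outputs and can't tell which one is the payment and which is the change, which is exactly the pattern the study misread.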

And speaking of websites that accept Bitcoins as deposits, the recommended security practice is to divide coins into a "hot wallet", kept on the server and used for day-to-day transactions, and a "cold wallet" that's kept off-line for security. A cold wallet should almost never be involved in transactions - but it backs peoples' deposits which are used in transactions, so it's not like it's out of circulation.
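A sketch of that hot/cold split, with an assumed 5% hot-wallet target (the actual fraction is a business decision, and the function name is hypothetical):

```python
HOT_FRACTION = 0.05  # assumed: keep ~5% of customer deposits online

def rebalance_amount(hot_balance, cold_balance):
    """How much to move from the cold wallet to the hot wallet; a negative
    result means sweeping excess hot funds back into cold storage. The
    cold-wallet side of any transfer is signed offline, by hand."""
    total = hot_balance + cold_balance
    return total * HOT_FRACTION - hot_balance

# 1000 BTC of deposits, only 10 BTC currently hot: top the hot wallet up by 40.
print(rebalance_amount(10, 990))  # 40.0
```

The point is that the other 950 BTC still back customer balances; they're reserved, not withdrawn from circulation.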

Comment Not actually approved (Score 5, Insightful) 386

From the article:

"Ngoc Sinh has been certified as safe by Geneva-based food auditor SGS SA, says Nguyen Trung Thanh, the company’s general director."
"SGS spokeswoman Jennifer Buckley says her company has no record of auditing Ngoc Sinh."

In other words, the article claims that Ngoc Sinh Seafoods Trading & Processing Export Enterprise is using repulsive and unsafe practices, and lying about having been inspected. Bloomberg is accusing them of a crime. The Slashdot headline, on the other hand, turned this into "Approved for Consumers" - accusing a different group, the regulators, who appear to be innocent.

Comment Because ordinary errors don't lead to retractions (Score 4, Informative) 123

You might be tempted to think that this means ordinary errors aren't as common as we thought. Lots of papers - actually most papers, at least in medicine - are wrong for reasons like the author being confused, doing the statistics wrong, or using a type of experiment that can't support the conclusions drawn. But merely publishing a paper that's bullshit? That usually isn't enough to trigger a retraction, because retracting papers looks bad for the journals. Only an accusation of Serious Willful Misconduct can reliably force a retraction.

Comment Center For Applied Rationality (Score 3, Interesting) 263

Consider giving it to the Center For Applied Rationality. Their goal is to make people more rational by teaching them about cognitive biases and scientific decision-making, and by studying how to do so effectively. They're doing great things with relatively few resources; your marginal dollars would go a long way.


Submission + - Textcelerator: A speed reader for the Web 2

Jimmy_B writes: "Textcelerator extracts the text from web pages, and displays it in a cool way that makes you read it faster. Because it's written in Javascript, Textcelerator can either be embedded in a site, as a "Speed Read This" button, or installed as a browser plugin, which makes it work everywhere. Could this be the solution to information overload?"

Comment It's a sunk cost (Score 5, Insightful) 119

If Google had won a wireless spectrum auction (they didn't), then Google Voice could've been the core of Google's competition with the telco network. Pieces of it are probably still useful for Android, and it could give them negotiating leverage with carriers. So it could've been really important, but didn't turn out that way. The thing with software products, though, is that almost all of the cost is in the initial creation; once created, they cost very little to keep around. So Google keeps Voice running, because it costs them little and turning it off would be very disruptive.

Comment Re:Solution to wrong problem (Score 1) 58

> The problem has never been knowing whether a worker is tired, or to what degree. Workers are well aware of how tired they are. The problem is jobs that pretty much require them to keep working anyway.

Workers may know that they're tired, but they can't easily prove it, and they can hide it if they don't want to lose pay. If someone goes to their boss and says they're too tired to work safely, they're likely to be ignored, and told to keep working. But if there's an impartially generated number that says they're too tired to work safely, that can't be ignored - because if a supervisor ignored that, and there was an accident, it would be easy to prove they were at fault.

Comment Re:More tolerent of human error (Score 1) 510

> Also, who is liable in a fatal accident caused by a machine?

The insurance company that owns the policy for the vehicle, same as if it were being driven by a human. And while the general public may have a hard time reconciling statistics that say driverless cars are safer with a few stories about them getting into fatal accidents, insurance companies do not have that problem and will support whichever costs them less money in claims.

Comment This is to prevent selling on multiple stores (Score 4, Informative) 294

Lots of comments here are completely missing the point. This is to prevent you from selling at multiple stores at once. You see, in addition to setting whatever price they want, Amazon also has a rule saying that you're not allowed to set a "list price" higher than what you sell the app for on other app stores. This means that if you put the same app in both the Google Market and Amazon's store, Amazon's store will always be cheaper - and you can't raise the price to counteract Amazon's discounting without ruining your sales on the Google Market.
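Roughly, the rule works like this. A hypothetical sketch - the function name and the 20% discount are made up for illustration; only the list-price cap is the actual policy being described:

```python
def amazon_price(requested_list_price, lowest_price_elsewhere, discount=0.20):
    """Amazon's rule, as described above: your Amazon list price may not
    exceed the lowest price you charge on any other store, and Amazon may
    then discount below that capped list price on its own."""
    capped_list = min(requested_list_price, lowest_price_elsewhere)
    return round(capped_list * (1 - discount), 2)

google_market_price = 2.99
# Whatever list price you ask for, Amazon ends up cheaper than Google Market:
print(amazon_price(4.99, google_market_price))  # 2.39 -- undercuts the 2.99
```

Since the cap tracks your lowest price elsewhere, there is no list price you can choose that stops Amazon from undercutting your other store.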

This is just one of several showstopping issues that ensure that I, as an app developer, will not put anything in Amazon's app store.

Comment I'm a developer, and I won't support this (Score 4, Informative) 222

I'm an Android app developer, and under the terms Amazon's currently offering, there's no way in hell I'll put my app there. There are three very serious problems with the terms. First, Amazon controls the pricing, not the developer - they can use your app as a loss leader. Second, they require that you give them your app, and each update, 14 days before you publish it anywhere else (such as on the Android Market), for their review process. That means no emergency fixes and delayed releases, even if you're mainly publishing on the Android Market and only want to list it on Amazon too. And third, Amazon's store is competing with the Android Market, which comes preinstalled everywhere, while Amazon's store starts with no users. It would be one thing if they offered developers more than the Android Market's 70% revenue share, but there are simply no advantages whatsoever.

Maybe they'll change their terms, and I'll reconsider. But the terms they're offering now are simply a bad deal for developers, and I doubt many will bite.

Comment Re:Nice, but Android? (Score 2, Informative) 109

You aren't forced to write in Java; you're forced to target Java bytecode (which Android translates into code for its own VM). There are other languages that compile to Java bytecode, including versions of Ruby, Python, and Lisp, and my personal favorite, Scala. Using a bytecode VM means that Android isn't locked into any one particular CPU instruction set (which is what destroyed the original PalmOS), and that all Android programs and libraries are API-compatible with each other without the need for special bindings.
