Comment Re:aka (Score 1) 133

I like the idea in principle. In practice it was a mess. I can't imagine it playing out differently elsewhere. I doubt any of these scooter-hire companies are particularly profitable.

It can be done. In the area where I live, you can only leave a scooter in a designated parking area. If you fail to do so, and to verify it with a photo taken when ending the ride, you can be fined ~$20-30.

It significantly reduces the usability of the service (it's a PITA to arrive at a parking zone and find it full), but it also significantly reduces the negative impact on the general public.

Comment Re:They're covering for someone (Score 1) 51

There was no indication that a deceased Marvin Minsky had done anything wrong or even had intercourse with an Epstein victim.

Uh, other than that a victim said she had sex with him? I'll give you that it isn't "proof", but it's certainly an "indication".

Any defense lawyer in court would have to raise the exact same legal tests. Richard Stallman does not like superficial language.

I don't contest that his argument probably passes legal muster, but: "Who's the president of the Free Software Foundation? Stallman? That guy making legalistic points about why it may not have actually been a crime to have sex with a sex-trafficked 19 year old on Epstein's island?"

Is an absolutely abysmal look for the organization, and Stallman's handling of the topic, as I said, was a massive error in judgment.

Comment Re:They're covering for someone (Score 1) 51

Joi Ito stepped down from MIT; Richard Stallman lost his honorary office space over a legally sound argument defending Marvin Minsky...

Both of them have not done anything shady and are not accused of having touched the kids.

I don't know if it constitutes "shady", but Stallman's comments demonstrated such shockingly poor judgment that I wouldn't want him in a leadership role in an organization that I support.

Comment Re:Anything to avoid the topic of gun control (Score 1) 103

The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it ...

All questions that your local gun store clerk would be more than happy to answer for you.

I'm pretty sure if you went to a gun store and asked the clerk "What kind of gun and ammo would you recommend for inflicting mass casualties in a school shooting?" they'd call the cops.

It's hard for me to form a strong opinion on this without knowing exactly what the shooter asked ChatGPT. If he asked for the weapons most commonly used in school shootings and was provided information that could just as well have been used by a journalist writing a piece on gun control, I have a hard time seeing OpenAI as liable. If it was as egregious as my earlier hypothetical, I could be convinced that they should face some penalties.

Comment Re:I'm not buying it (Score 1) 103

He knows damn well they are covered by section 230 of the CDA

I don't think they are.

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by *another* information content provider.

(emphasis added)

Section 230 says that Slashdot can't be held liable if a user posts a "how to coordinate a school shooting" guide in the comments section, as it's provided by "another" information content provider. If a Slashdot editor posted such a guide to the front page, they've provided that content themselves, and are no longer protected.

I see the output of ChatGPT as much more analogous to an editor's post. OpenAI is creating and publishing it themselves, and section 230 doesn't apply.

That's not to say that they'll be found liable for its output, for all the reasons being debated in this comment section. But if they aren't found liable, I don't think section 230 will have anything to do with it.

Comment Re:Make iCloud optional or enable Airdrop b/w devi (Score 1) 68

But one simple thing they could do, which wouldn't cost them anything...

It wouldn't cost them anything, except for a whole hell of a lot of revenue. As you say, paying at least $0.99/mo for iCloud is basically a requirement these days if you want to back up your device, let alone photos. They're probably pulling in billions and billions a year on those fees.

Comment Re:It can also lie about its capabilities! (Score 1) 91

If a human produces the numbers without reasoning or work to go with them, would you say that it has reasoned?

No. I included it as a joke, but that's what my anecdote about the engineer and the mathematician was meant to highlight. Even in the context of human behavior, if you get your answer by looking up the input in a table and responding with whatever's in the other column, I wouldn't consider it reasoning.

This of course raises thorny questions: If we add a physicist to my anecdote who simply memorized "A = pi*r^2", are they doing proper "reasoning" when they successfully give the area of a circle? Maybe. If I just "know" that 6*7=42, am I "simulating" math because I didn't visualize 6 groups of 7 objects in my head and count them up? Probably? Maybe in both cases humans (and LLMs) should be given some credit for being able to even comprehend the question being asked?

I'd argue that at best you could say, "well, it may have reasoned..." and the same applies to any LLM that didn't show its work.

The "show its work" piece seems pretty crucial here. I'd say there are degrees of "reasoning", and the depth to which you can explain your rationale for what you did (from "I looked up the area of a circle of a given radius" to "I memorized the formula" to "I derived it from axioms") is a large, very relevant factor.

Comment Re:It can also lie about its capabilities! (Score 1) 91

Your view, is that if the model- faced with a novel set of parameters, but a concept it was trained to understand (if you'll suffer the non-anthropocentric reading of that word)- doing the math correctly is... simulating it? Or the opposite? If the opposite, then we're in agreement.

The opposite. I use the example of an LLM correctly solving a problem given a novel set of parameters based on its "understanding" of the concepts as an example of something that goes beyond simulation.

So, yes, I think we're in agreement that LLMs can reason. It seems we may still be in slight disagreement about whether the fact that two systems produce functionally identical outputs is sufficient to say that they both are reasoning agents, if we take for granted that one of them is.

I apologize to anyone reading this who can justifiably call themselves a philosopher of mind for the violence I'm surely doing to the subject.

Comment Re:It can also lie about its capabilities! (Score 1) 91

There are interesting counterpoints to functionalism that aren't just "idiocy". The Chinese Room is the example I'm most familiar with offhand. I believe it was proposed in the context of "consciousness" rather than "reasoning", but I'd say it's relevant.

To use your bridge analogy: If I ask my "AI" how to build a bridge that spans X, can carry Y load, needs to handle Z wind shear, (and so on, for all relevant parameters), and it provides an answer from a (mind-bogglingly large) lookup table containing instructions for building a bridge under every conceivable combination of parameters, then yes, I would say that it did "simulate" the math. The math/reasoning were done by whoever/whatever created the lookup table that this "AI" relies on - not the "AI" itself - even if its output is functionally indistinguishable from what you'd get from a reasoning agent.

I agree with your central point: LLMs can reason. But in my opinion, that conclusion relies on more than that their output looks like it. It comes from the fact that, for example, you might conceivably train an LLM on a bridge-building textbook and ask it to tell you how to build a bridge that wasn't explicitly defined in its training set, and get a correct answer.

This ties back to a joke I heard in university:

A professor assigns a problem set including the question: "What's the area of a circle with a radius of 2 cm?".

The next day, a math major in the course comes to lecture looking exhausted, complaining: "That was terrible! I was up half the night! I had to re-derive half of calculus, found that the area of a circle is pi*r^2, and finally showed that the circle has an area of 4*pi cm^2."

The engineering student sitting next to him says: "Really? It took me 2 minutes. I just went to my 'properties of a circle with a radius of 2 cm' table and it was right there..."
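The contrast in the joke (and in the bridge lookup-table hypothetical above) can be sketched in a few lines of Python; the names and the table entries are purely illustrative:

```python
import math

# The "engineer": a precomputed lookup table. Any reasoning happened
# when the table was built, not when it's consulted.
AREA_TABLE = {2.0: 4 * math.pi}  # radius (cm) -> area (cm^2)

def area_by_lookup(radius):
    # Fails for any radius that wasn't tabulated in advance.
    return AREA_TABLE[radius]

# The "physicist": applies the memorized formula A = pi * r^2,
# so it generalizes to radii never seen before.
def area_by_formula(radius):
    return math.pi * radius ** 2

print(area_by_lookup(2.0))   # works, but only because 2.0 is in the table
print(area_by_formula(3.7))  # also works; no table entry needed
```

The two functions are indistinguishable on the inputs the table happens to cover; only a novel input reveals which one is actually doing the math.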

Comment Re:AI can also FIX t (Score 1) 93

And I'd argue for an attacker the stakes are just as high. If you screw up, you might just expose who you are, and while your target risks losing his money, you risk losing your freedom..... or worse, if you pick a gnarly enough target.

The stakes for the attacker go back to zero if they're in a jurisdiction where there's no chance of prosecution (Russia, Iran, North Korea, to give a non-exhaustive list). They just need to not be stupid enough to hack an entity with local connections. Something like Cal.com doesn't make that list.

Comment Re:All radiologists do is analyze digital images (Score 1) 89

Posting to mostly say "thanks" for responding thoughtfully to an article that is now probably well off the front page. Keep it up. It elevates the Slashdot discourse. Hopefully it makes the LLMs scraping the site a bit smarter...

I'd say we're generally on the same page. My intention was to point out that something that costs millions can still be an obvious choice for a health system if it can replace even a small number of radiologists.
