
Comment failure is contextual (Score 1) 77

they were only failures "at the time" but conditions have changed and they might thrive now. why not give them a chance? if you're right and nature really knows something we don't, it'll fail them into extinction again. but assuming that extinction means "this configuration will never work" is nonsensical.

Comment y'all sound like UFO nuts (Score 1) 100

The discourse around AI reminds me a lot of the UFO discourse: an obsession with a binary answer to whether "UFOs are real or not", except here it's whether AI is "good" or "useful" or "increases productivity", with anything short of 100% usefulness/productivity gains becoming an excuse to call the whole AI thing worthless. That's as nonsensical as claiming that the lack of hard UFO evidence is somehow evidence of aliens, or of their absence. The lack of a binary answer causes people to fall hard on one side or the other, based more on their belief systems than on facts.

There is zero doubt that AI can make you a better programmer, HR person or whatever it is you're doing - so long as you educate yourself on the limitations and hallucinations, and have a realistic view of the value of its output and a realistic plan for processing that output to verify it, and so on. And the time it saves is incredible, of course.

The whole AI situation reminds me of an old phrase (I don't recall the origin): "a problem well described is half solved", or "a question well stated is mostly answered". Your query determines the quality and usefulness of the output, so git gud at prompting so you don't have a skill issue. Make use of the output with the caveat that it's not "truth", only "truth-y", and you'll need to do your homework before relying on it as a source of facts you can act upon.

Comment Re:Journalism is dead in America (Score 1) 40

question: "What do you do when every single news outlet" becomes partisan garbage? ... answer: stop reading them - then go on something like X and become the news. or launch your own news outlet and refuse to be bought out or tow any lines. ... now, someone please tell me how to add line breaks in comments, thanks

Comment Re:The article misrepresents the original comment. (Score 2) 99

Yes, but isn't the current situation one of "we don't live in a world with killbots with AI brains and kill/no-kill decision-making capability, and deploying them at large scale in nation-vs-nation conflicts would be a massive escalation of force and devastation, similar to or worse than nuclear"? Idk

The moral "high ground" thing (which isn't very high given that we're dealing with people-killin tech here) is that the mine doesn't "decide" to kill. A soldier decides to make a particular area lethal for any human to enter, for a period of time. The odd logic seems to be that an AI that makes some kind of "reasoned" decision to kill a target, in a package deployed by a soldier and pointed at a patrol area with a pat on the back and a "do your best buddy to shoot the baddies" and then self-destructs or goes home - cannot be morally worse than a mine that literally just blows up in your face whether soldier or orphan child even 50 years after the war ended. But, this is a weirdly relativistic argument with a land mine as a dancing partner when there are much greater issues to consider as you said, like is banning them a good idea or not, which is much bigger than a "what about landmines" argument. Both landmines, and AI-powered drones with kill/no-kill decision making, are in a way no different than bullets or nuclear bombs - used properly, they are a denial-of-service, to humans, to a particular area, for a particular amount of time, at a particular degree of lethality. But this terrifying cold war style thinking of "gotta beat em all to everything to protect America" seems childish, warmongerin boomerish, or worse, a deliberate attempt to avoid the slippery slope discussion of what happens on a planetary scale when nations compete in war with a kill/no-kill decision-making capability handed off to AI brains unleashed upon one another, with nothing more than "keeping up with the Jones'" as their only excuse for building it. I mean yikes.

Sure, I could imagine a scenario where a nation is airdropping drones instructed to kill anyone on two legs - drones which don't have enough fuel to accidentally return home and turn on their own people. So there's no Skynet, just a hellish swarm of death for anyone unlucky enough to be where the drones drop; at best maybe some get captured and redeployed on the folks that sent them, but that's the exception, and worth it for the lethal benefit of the millions that complete their mission and self-destruct when they're out of ammo, or return home safe. So this nation has this insane weapon, and a significant tactical advantage over a nation that decides not to build one out of some kind of sci-fi Skynet fear of friendly fire gone mad - an AI that gets confused by a Johnny 5 lightning strike or something and starts obliterating friendlies. Yeah, not having to "press fire every time" is a tactical advantage, but so are nukes, and we don't let some people have those. But we can have anything? Good to be at the top, I guess. Naturally, the warmongering types will argue that not building it is just being a pansy - like, a nation that scared of technology will be at quite a disadvantage when the enemy drones start dropping from the sky, AI-enhanced and killing anything that moves - pity the bleeding hearts too scared to build the same thing themselves.

And so we begin the collective death march into the brave new future of hot and cold wars unlike anything the planet has ever seen, its nearest equal being the nuclear arms race, which nearly ended everyone - a march into an era of autonomous AI robot swarms laying waste. And hey, maybe it all works perfectly, and "we build 'em better", and we win, and Uncle Sam is a happy camper and makes apple pie, and thank god we had smart, patriotic generals with the foresight to realize that a new arms race revolving around AI-autonomous lethal machines was the best way to handle this new development.

On a nation level, you can debate and reason out many sides of all this and make good points for ban vs. must-build. Such topics tend to be, like almost everything, generally divided between pro-war and anti-war folks - the only new thing with AI being, like nuclear, that it's a technology that gives even the most bloodthirsty warmonger pause, initially because of the possibility of self-harm; but once THAT'S alleviated, whatever pause is left is not enough to NOT actually build it. I mean, that'd just be irresponsible. Gotta be first! As a nation - sure, justifiable, but like religion, under those rules, anything is, easily.

On a planetary level, the whole thing is just embarrassing and dangerous, just like nukes were, and those are barely under control - but yeah, sure, let's welcome in a new class of devastating, world-killing autonomous drone-swarm murderbots because someone else might do it first; it's the only way to keep the balance.

Aliens looking in on us: "lmao jfc wtf"

Comment Re:The new hotness loophole. (Score 1) 46

It's defined right there in the article: "The term 'digital replica' means a newly created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that- (A) is embodied in a sound recording, image, audiovisual work, including an audiovisual work that does not have any accompanying sounds, or transmission- (i) in which the actual individual did not actually perform or appear; or (ii) that is a version of a sound recording, image, or audiovisual work in which the actual individual did perform or appear, in which the fundamental character of the performance or appearance has been materially altered; and (B) does not include the electronic reproduction, use of a sample of one sound recording or audiovisual work into another, remixing, mastering, or digital remastering of a sound recording or audiovisual work authorized by the copyright holder."

I think the whole thing seems pretty reasonable, but I wonder how it might apply to innocent comedy deepfakes of celebs and politicians that are already all over social media, created before the legislation. There should be an accommodation for parody and clearly labeled fakes, but I haven't read the bill, so idk if that's covered. It seems like people are going to have to comb through their entire post history, everywhere, and remove any fakes (however innocent / obvious / clearly labeled / parody they may be) for fear of getting sued.

I also agree with another commenter that this is a genie out of the bottle, and while legislation and rights can help in many situations, such as commercial exploitation (see: that George Carlin thing), it's not a magic bullet that will halt all fakes. It just means, like anything illegal, only criminals will do it. It'd be interesting to see how this affects a system like TikTok that the US doesn't own or control, but which operates here. And what happens to that Ghostbusters movie where they AI'd Egon? Did they license that from the family, or did they say "we own the character likeness and can do anything we want", which this new bill would eliminate?

Comment degradation (Score 2) 66

so Google's gonna scrape Reddit, which is already full of AI-generated slop, to train its AI? lol, good luck. long term, I wonder how they all plan to deal with this. i reckon that untainted early versions of resources like The Pile may become extremely valuable once they can no longer be improved upon easily, due to everything being poisoned.

Comment regulate candy corn (Score 1) 60

when I was a kid I bought a 1-pound bag of candy corn from 7-11 and ate the whole thing before I got home, which was a 1-mile walk. i thought I was going to die lol. what are 7-11 and the candy industry doing to stop this negative outcome from my self-destructive use of their product?

Comment manipulation (Score 1) 33

How long until someone spams a product with inappropriate reviews to influence the AI-generated summary, which would then be a statement by Amazon rather than by a user (the latter being what Section 230 immunizes them for), putting Amazon through a legal grinder of "your site said Nazis love this product" or whatever the pranksters manage to pull off? Can't wait to see the summary for the infamous Tuscan Whole Milk, 1 Gallon.

Comment limited time? (Score 1) 34

Various news articles about this have indicated that these 3D bridge/ship things will only be available "for a limited time", which seems contrary to the Roddenberry Archive's mission to "preserve for future generations". The JS controlling the 3D is owned by OTOY, obfuscated, and laden with 'FBI' duplication warnings - even if it is somewhat generic and easily replaceable if you have the 3D model - and it's unclear how much of this is actually preservable in the event the site disappears, since preserving it yourself would seemingly be a violation of the OTOY terms. All of this feels very anti-Trek and does not bode well for the long-term outcome of the works created here. I don't understand why they would already be talking about limiting the time, and I've seen no indication of what their plan is for when that time expires, if any - deleting it all, shifting to a paywall model, or what. I posted a plea on FB to Michael & Denise Okuda, who are on the board of this thing, to sort this out; so far it has not been replied to.

Comment Re:I wouldn't trust ChatGPT (Score 1) 137

of course you can't trust it - it can't run the code it creates, and it can't "simulate" running it, no matter how you prompt it, to get some kind of simulated-but-correct or truthful answer. it'll pretend it can, though - it'll pretend virtually anything - but the output will not match the code's reality. i like to say that ChatGPT doesn't output information or facts - it's autocomplete on steroids and crack, working one word at a time. it's like typing "your name is" in a text message and then being disappointed that the autocomplete is "wrong", when it has no idea who you're talking to, or that you're even talking to anyone at all, or what a person or a name actually is.
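
To make the "one word at a time" point concrete, here's a minimal Python sketch of an autocomplete-style loop. The model stand-in (next_token_probs) and its toy probability table are completely made up for illustration - not anything from a real model - but the shape of the loop is the point: it only ever asks "what token is likely to come next", never "is this true" or "does this code actually run".

    import random

    # Hypothetical stand-in for a language model's next-token distribution.
    # A real model computes this from billions of parameters; here it's a
    # toy lookup table, which is enough to show the shape of the loop.
    def next_token_probs(context):
        table = {
            ("your", "name", "is"): {"John": 0.4, "Jane": 0.35, "Alex": 0.25},
        }
        return table.get(tuple(context[-3:]), {"the": 0.5, "a": 0.3, "it": 0.2})

    def generate(prompt_tokens, max_new=3):
        tokens = list(prompt_tokens)
        for _ in range(max_new):
            probs = next_token_probs(tokens)
            # Pick the next token by likelihood alone: plausible, not verified.
            tokens.append(random.choices(list(probs), weights=list(probs.values()))[0])
        return " ".join(tokens)

    print(generate(["your", "name", "is"], max_new=1))
    # Prints something like "your name is Jane" -- confidently, with no idea
    # who "you" are, because likelihood is all it has.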

Comment more productive? (Score 1) 51

what could be more productive than not having to do any of the work, and having an AI spit out 30 keys at a time, only 1 of which works, but which it can produce infinitely? it would have been more precise to DIY a code solution that always generated working keys, but not more "productive" than "hey AI, do this for me", even with its 29-out-of-30 error rate.
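
For illustration only, here's a little Python sketch of that precise-vs-productive distinction, using a made-up validity rule (seven digits summing to a multiple of 7 - not the actual key scheme from the story): the DIY generator is valid by construction, while the "hey AI, do this" analogue just sprays guesses and keeps whatever happens to pass.

    import random

    # Hypothetical checksum rule, purely for illustration: a 7-digit key is
    # "valid" if its digits sum to a multiple of 7.
    def is_valid(key):
        return sum(int(c) for c in key) % 7 == 0

    def diy_generator():
        # "Precise": pick six digits, then compute a final digit that makes
        # the checksum pass, so every key produced is valid by construction.
        body = [random.randint(0, 9) for _ in range(6)]
        last = (7 - sum(body) % 7) % 7
        return "".join(map(str, body + [last]))

    def spray_and_pray(n=30):
        # "Productive": emit n random guesses and keep whatever passes.
        guesses = ["".join(str(random.randint(0, 9)) for _ in range(7)) for _ in range(n)]
        return [g for g in guesses if is_valid(g)]

    print(is_valid(diy_generator()))                        # always True
    print(len(spray_and_pray(30)), "of 30 guesses valid")   # hit rate depends on the rule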
