Comment A Stanislaw Lem story (Score 1) 18

This reminds me of a Stanislaw Lem SF story (I think published in the "Fables for Robots" series): The Trap of Gargancjan.

Two countries start an arms race by moving their whole militaries to AI, and then set their armies against each other. But when all the robots link up to form two AIs of cosmic scale, they don't fight: they greet each other, take each other's hands, and walk off through the flowers, because Space at its essence is peaceful, and war is not a cosmic concept.

Comment Good luck with that (Score 3, Interesting) 41

Fun anecdote: I visited the Philippines in 2022. I flew Cebu Pacific Air for a few domestic flights, and they had just set up an abundance of these self-check-in kiosks at their airport check-ins. While prior visits to this particular terminal would see six to eight staff working the check-in counters, this visit had only two: one assisting with the kiosks, and one checking baggage. Wait times were long, kiosks were confusing, and people were agitated, but we all got through.

I just returned from another trip in 2025, and flew Cebu Pacific Air again for my domestic flights. This time the terminal had only three self-check-in kiosks, shoved up against a wall off to the side of the check-in counters, and nobody was using them. Everyone was waiting in line to deal with a human. (To be fair to both sides of this human-vs-machine argument, perhaps the reason kiosks didn't succeed in the Philippines is that human labor there is very cheap.)

Regardless, the moral of the story is that air travel is agitating. Companies that try to nickel-and-dime passengers (even budget airlines like Ryanair) by replacing mature, reliable, human/paper/analog parts of that experience with new, untested, anxiety-inducing digital counterparts may discover that the total cost is not worth the savings.

Comment Re:Let them have them (Score 1) 63

Clearly you haven't been paying attention.

Not only are people being disappeared to unknown locations, they are also being prevented access to lawyers.

One guy who had a green card (a legal resident, in the US legally) got picked up by ICE on a trumped-up charge over a minor drug offense from when he was a kid, one that had already been resolved. He was then put into a facility that was an outright violation of human rights (unhygienic, repressive, terrible food, etc.). When he eventually managed to get help from a lawyer, they said, "Yeah, you can fight it, but in the meantime you're going to be locked up in that place" (a place that is more Guantanamo than US prison).

So he chose to be deported to a country he hadn't seen since he was 8 and whose language he barely speaks.

So yeah, you're not paying attention to the human rights abuses of the US.

But that's nothing new: internment camps for the Japanese, residential schools for Native Americans, Jim Crow, lynchings, mass shootings, etc., etc.
Americans never see the abuses that happen right under their noses.

Not even when masked people are violating the 4th Amendment and unreasonably searching and seizing people on the streets because of their race.

American Exceptionalism = American Blindness. Americans are exceptionally blind.

Comment Re:Let them have them (Score 1) 63

Beyond that, the US has a tendency to let some industries just scam their customers and get away with it. Some examples:

- Lawyers: ridiculous hourly fees
- Insurance: costs a lot, then doesn't pay up
- Hospitals: openly scam patients
- Pharma
- Universities: ridiculous tuition rates

The undergrad tuition that universities want to charge cannot be justified. You're paying for everything except your own education. There's no way they need that much money to teach you in a class of 100 people; you could hire a tutor with a PhD and get one-on-one instruction for less.

Comment Re:AI isn't Stealing (Score 1) 38

Reading isn't stealing.
Memorizing probably isn't either.

But to reproduce on demand, that could be copyright infringement.

Imagine if you asked the AI to generate a song like "Born in the USA" by Bruce Springsteen, and it did so verbatim or very close to the original, maybe even with a video.

If it's too close to the original it's no longer fair use.

Comment Re:Use the existing rules. (Score 1) 38

Producing plagiarism is a risk with AI tools.

For example, you could ask it to generate some code, and it could spit out copyrighted Windows source code that leaked on the internet at some point, possibly including a patented algorithm.

If you incorporated that into your own code, you could be found liable for copyright infringement, and I'm not sure AI companies like OpenAI would indemnify you for that.

So yeah, this is an unsolved problem.

Note that this contradicts an earlier US ruling that held the output of an LLM to be "highly transformative".

AI models need a way to know whether they are plagiarizing, so they can either give proper attribution or suppress the infringing output.
The issue here is that songs are short, and therefore easy to memorize.
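
FWIW, a crude way to catch verbatim reproduction (a sketch of one possible check, not something the AI vendors actually ship) is to compare the model's output against a corpus of known copyrighted text and flag long word-for-word overlaps. The 8-word window, the function names, and the placeholder "lyrics" below are all my own assumptions, purely for illustration:

    # Sketch: flag model output that shares long word n-grams with a
    # reference corpus of known copyrighted text. The 8-word window and
    # the toy "corpus" are arbitrary choices for illustration only.
    import re

    def word_ngrams(text, n=8):
        # Lowercase and strip punctuation so trivial formatting differences don't hide a match.
        words = re.findall(r"[a-z']+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def verbatim_overlap(output, corpus_texts, n=8):
        # Return every n-word window of the output that appears verbatim in any corpus text.
        out_grams = word_ngrams(output, n)
        hits = set()
        for ref in corpus_texts:
            hits |= out_grams & word_ngrams(ref, n)
        return hits

    if __name__ == "__main__":
        corpus = ["pretend these fourteen words are the protected lyrics we are not allowed to reproduce"]
        generated = "Pretend these fourteen words are the protected lyrics we are not allowed to reproduce."
        matches = verbatim_overlap(generated, corpus)
        print(len(matches), "overlapping 8-word windows ->", "flag for review" if matches else "looks clean")

Something like that only catches verbatim copying; near-verbatim paraphrase would need fuzzier matching (fingerprinting, embeddings, etc.), which is where it gets hard.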

Comment Re:Isn't this the idea? (Score 1, Insightful) 104

If ffmpeg lets known, published vulnerabilities languish, the risk is that organizations that use its code will simply stop using it and look for other solutions.

Orgs basically have a choice:

1. Suck it up and deal with the whims of people you are not paying a penny to
2. Cough up some cash and contribute
3. Develop their own completely in-house, or pay for a third-party one

Option 2 is almost always way cheaper than option 3. "Option 4", whining incessantly that people you aren't paying aren't working for you fast enough, really needs to stop. I suspect a lot of companies would rather do 3 than 2, because they are not rational.
