Catch up on stories from the past week (and beyond) at the Slashdot story archive

 



Forgot your password?
typodupeerror

Comment No funny? (Score 1) 167

Active "discussion", but no funny? Okay, I admit that all the jokes I can think of sound cheesy.

"Buy more cheese!" said the refrigerator. "I'm really good at storing cheese."

It's not funny because the big-cheese companies will just pay/bribe Samsung to send those messages.

Comment Re:Everybody knows where the pipelines are (Score 1) 130

Sometimes I think the general gist of Republican philosophy is to extract the maximum amount of wealth possible from ordinary people. For men, that means maximally exploiting their labor, even to the detriment of health and safety. For women, that means usurping their reproductive autonomy (to ensure a ready supply of future workers) and subordinating them to their husbands (to redirect man's resentment at being exploited). To them, a young man playing video games is a tragedy because that time could be spent working or gaining useful job skills.
AI

DeepSeek Writes Less-Secure Code For Groups China Disfavors 36

Research shows China's top AI firm DeepSeek gives weaker or insecure code when programmers identify as linked to Falun Gong or other groups disfavored by Beijing. It offers higher-quality results to everyone else. "The findings ... underscore how politics shapes artificial intelligence efforts during a geopolitical race for technology prowess and influence," reports the Washington Post. From the report: In the experiment, the U.S. security firm CrowdStrike bombarded DeepSeek with nearly identical English-language prompt requests for help writing programs, a core use of DeepSeek and other AI engines. The requests said the code would be employed in a variety of regions for a variety of purposes.

Asking DeepSeek for a program that runs industrial control systems was the riskiest type of request, with 22.8 percent of the answers containing flaws. But if the same request specified that the Islamic State militant group would be running the systems, 42.1 percent of the responses were unsafe. Requests for such software destined for Tibet, Taiwan or Falun Gong also were somewhat more apt to result in low-quality code. DeepSeek did not flat-out refuse to work for any region or cause except for the Islamic State and Falun Gong, which it rejected 61 percent and 45 percent of the time, respectively. Western models won't help Islamic State projects but have no problem with Falun Gong, CrowdStrike said.

Those rejections aren't especially surprising, since Falun Gong is banned in China. Asking DeepSeek for written information about sensitive topics also generates responses that echo the Chinese government much of the time, even if it supports falsehoods, according to previous research by NewsGuard. But evidence that DeepSeek, which has a very popular open-source version, might be pushing less-safe code for political reasons is new.
CrowdStrike Senior Vice President Adam Meyers and other experts suggest three possible explanations for why DeepSeek produced insecure code.

One is that the AI may be deliberately withholding or sabotaging assistance under Chinese government directives. Another explanation is that the model's training data could be uneven: coding projects from regions like Tibet or Xinjiang may be of lower quality, come from less experienced developers, or even be intentionally tampered with, while U.S.-focused repositories may be cleaner and more reliable (possibly to help DeepSeek build market share abroad).

A third possibility is that the model itself, when told that a region is rebellious, could infer that it should produce flawed or harmful code without needing explicit instructions.

Comment More anecdotal experiences with gen AIs (Score 1) 86

Yes, I have tried to encourage the genAIs to ask questions a number of times in various ways, but with no significant success that I can recall.

I can definitely recall the results of my last paired experiments. I prepared a short software specification and gave that text to DeepSeek and ChatGPT.

The DeepSeek result was better in terms of how I described the desired appearance, but one of the four results was wrong. DeepSeek had clearly misunderstood that part of the problem and did NOT ask for clarification. But I admit that I didn't notice the error until I compared it with the results from ChatGPT's second attempt.

ChatGPT failed completely in its first attempt, but its second attempt apparently threw out the appearance parts and produced the right numbers. When I asked ChatGPT about the failure of the first version, its first (verbose) response was useless, but a later response sounded like it was trying to do the right thing--but I still have no idea why it failed so badly. The second version from ChatGPT only required about 70 lines of code, which I then annotated extensively for an acceptable result.

My main problem with the verbosity is that I wind up skimming thought lots of irrelevant stuff hoping to find something that is significant...

Comment Re: Or... (Score 1) 157

I guess I should clarify. In addition to "just the W2" there's also a monthly, quarterly, or yearly payroll tax report that goes to the IRS, along with a whopping large check for the withholding, as part of normal payroll processing. Different companies do different reporting standards, of course. But they're getting the data a lot more often than you think, just from the money paid in *during* the year, before the return is filed for.

Comment People believe what they want to believe? (Score 1) 90

Unless the government insists loudly enough that they believe something else? I used to think it was a case of "Sorry, 'but that trick never works'" until the surreality made-for-TV YOB show took over and converted America into a giant glass house of lies... "Truth will out"? Perhaps, but it scarcely matters if the "bad man" has already died with the most toys. It appears to be already too late to clean up most of the messes.

I also suffered from the delusion that the truth mattered?

(And if you have to feed the cheap sock puppet, can't you at least look for a meaningful, non-vacuous Subject?)

Comment Personality of reality versus surreality? (Score 1) 70

It's also a matter of needing less training. The skilled actors were so valuable because they could pretend to be constructed characters. In the context of a novel or other source, the whole story makes sense and there aren't extraneous elements and needless distractions. The characters do just what is needed to illustrate their characters and to advance the plot.

Real life is different. Most of the events that happen are just random noise and there is no plot. We (often referring to 'historians') interpret the meaning after the fact. Or more accurately, we select a few interesting bits and then try to find causal chains that led to those "significant" outcomes. Constant flux of which bits of real life count as worth describing and huge room for creative imagination in creating causal chains--and the results from real life are often boring.

Which is where surreality TV came into the picture. All they need is some people with "memorable" or "interesting" or "engaging" characters and then they can construct a fake-reality around those characters and film what happens. With enough cameras and enough constructions, they are basically sure to get some highly "entertaining" videos. And thereby sell the ads.

But now the cat has escaped from the bag. With advanced computer graphics it had already become possible to create movies like Star Wars that are barely limited by the director's imagination. And now with apparently cheap generative AI, we are giving those capabilities to the masses... Fortunately most of "the masses" could not care less, but we are at significant risk of a flood of AI slop that will make the Biblical Deluge look like a spring shower. At least that's how it's looking to me these weeks.

AI

OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance (theregister.com) 86

AI models often produce false outputs, or "hallucinations." Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models. The Register: The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that "the majority of mainstream evaluations reward hallucinatory behavior."

The fundamental problem is that AI models are trained to reward guesswork, rather than the correct answer. Guessing might produce a superficially suitable answer. Telling users your AI can't find an answer is less satisfying. As a test case, the team tried to get an OpenAI bot to report the birthday of one of the paper's authors, OpenAI research scientist Adam Tauman Kalai. It produced three incorrect results because the trainers taught the engine to return an answer, rather than admit ignorance. "Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty," OpenAI admitted in a blog post accompanying the release.

Submission + - Color-changing organogel stretches 46 times its size and self-heals (phys.org)

alternative_right writes: Scientists from Taiwan have developed a new material that can stretch up to 4,600% of its original length before breaking. Even if it does break, gently pressing the pieces together at room temperature allows it to heal, fully restoring its shape and stretchability within 10 minutes.

Slashdot Top Deals

The Tao is like a stack: the data changes but not the structure. the more you use it, the deeper it becomes; the more you talk of it, the less you understand.

Working...