Comment May have been oversold... (Score 1) 41

Allegedly this was a permitted practice; but the speed with which they said they would abandon it once it became public knowledge, and the number of federal IT people ProPublica was able to find who had never heard of it, suggest that either the proposal that was approved was not entirely candid about what the plan actually was, or the approver was too low-level or obscure to have the authority to approve it.

This certainly wouldn't be the first time that something perfectly on the up and up was abandoned for PR reasons; but MS would probably be loath to give up the ability to whitewash whoever they like into sensitive projects by having an $18/hr copy-paste pal in the loop; so they must see the exposure as potentially serious.

Comment Re: You keep using that word. I don't think it mea (Score 5, Informative) 75

"Penultimate" isn't a synonym for "ultimate"—it means the thing before the ultimate. Likewise we have penumbra for the blurry edge of a shadow (umbra). This results in some truly special words like "antepenult," meaning "the thing before the thing before the final thing," commonly used when discussing where the stress/accent falls in a Greek or Latin word.

"Invaluable" does indeed mean "not able to be valued" when analyzed morphologically, but the standard usage of it is indicating something is beyond value, i.e. infinitely or inestimably valuable. A value of zero is still a value, after all.

"Inflammable" however actually means "able to be inflamed," as in "put in flame" or "set on fire." The confusion comes from assimilation of the Latin preposition "in" (which we have as "in" or "on") instead of the more typical prefix "in-" (which demarcates negation.) You don't have to look very far for other words where "in" doesn't mean "not": indicate, inherit, imply, investigate, indict, involve...

Comment Re:less of a barrier than their terrible UI (Score 2) 78

I've been using LO pretty much constantly for the last two years (I even wrote a novel in it). Like any interface, it just takes time to become familiar. In fact, I like the way Writer organizes styles and style configuration far better than Word does, and often, even for DOCX files, I do the initial style setup and layout in Writer and then move to Word if I have to (which is seldom enough).

LO is a damned good office suite. Its default UI is older, but since I used MS-Edit and Word pretty extensively back in the 1990s, it feels familiar to me. There is a ribbon interface, but I've only tried it a few times before remembering why I actually don't like the Word ribbon.

Comment Individuals or orgs? (Score 1) 54

It would be interesting, and possibly useful, to know how these reports break down in terms of affiliation and motivation.

It's obviously a problem regardless; but, in terms of behavioral change, the different actors likely face different incentives. The well-meaning but confused respond to different pressures than someone exploiting how quickly bad bug reports can be automated to spam everyone with a bounty program in the hopes of getting lucky. Someone in over their head and attempting to farm cred as a 'security researcher' would be a sort of hybrid of the previous two. And someone using OSS projects as guinea pigs for some 'AI securifies your code!' startup's training/hype process would be basically the worst-case scenario, since they don't need to be motivated by the idea that they are being helpful, and their financial incentives are separate from any bounty program.

Comment Re:Uh... I have a bad feeling about this. (Score 2) 29

F = G * (m1 * m2) / r^2

Or, as we call it, Newton's inverse square law: the force of gravity between any two objects is inversely proportional to the square of the distance between them. Space is really, really, really, really big (the observable universe has a diameter of about 93 billion light-years), so it is effectively impossible for any combination of mergers to have an effect beyond an infinitesimal region of the universe. Even a galactic merger which caused two supermassive black holes to merge would have little or no measurable effect on a neighbouring galaxy as far away as Andromeda is from us (about 2.54 million light-years).

In fact, it wasn't until LIGO that we were even able to detect the mergers of superdense, supermassive objects like neutron stars and black holes; just to give you an idea of how the inverse square law limits the influence of gravity over very large distances.
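To put rough numbers on it, here's a back-of-the-envelope sketch in Python (the 1e8-solar-mass black holes and the Andromeda distance are just illustrative values I've picked, not anything from TFA):

    G = 6.674e-11          # gravitational constant, m^3 / (kg s^2)
    M_SUN = 1.989e30       # solar mass, kg
    LIGHT_YEAR = 9.461e15  # metres per light-year

    # Illustrative scenario: two 1e8-solar-mass black holes merge,
    # observed from Andromeda's distance (~2.54 million light-years).
    merged_mass = 2 * 1e8 * M_SUN
    distance = 2.54e6 * LIGHT_YEAR

    # F = G * m1 * m2 / r^2, taken per kilogram of the distant object,
    # i.e. the gravitational acceleration it feels.
    accel = G * merged_mass / distance**2
    print(f"{accel:.2e} m/s^2")  # ~4.6e-17 m/s^2: utterly negligible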

Comment The patron saint of lazy 'digital transformation' (Score 1) 73

So, TFA says "Today, CAISO engineers scan outage reports for keywords about maintenance that's planned or in the works, read through the notes, and then load each item into the grid software system to run calculations on how a downed line or transformer might affect power supply."

Sounds like there is already an actual piece of simulation software in place, but the people feeding it scenarios need to manually assemble them from outage reports and maintenance notes; because apparently you can perform, even schedule, maintenance without generating a structured record of which asset you'll be working on. And so someone proposes that the obvious answer is to band-aid that with a summarizer, with its notorious penchant for accuracy?

Does that strike anyone else as slightly insane? We're already recording outages and maintenance events, and those records refer to various pieces of infrastructure, just not in a format that can be programmatically extracted from the report and loaded into the analysis system. So rather than doing the boring thing and slapping serial numbers on transformers, or adding a "things we turned off for maintenance and when" field to the maintenance report, we'll just keep it all as a big free-text blob and hope that an LLM will save us?
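The boring fix is a trivially small amount of structure; something like this sketch (the field names are my invention for illustration, not anything CAISO actually uses):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class MaintenanceRecord:
        """One structured entry per affected asset: no keyword scanning,
        no summarizer; the grid software can ingest this directly."""
        asset_id: str          # serial number on the transformer/line
        asset_type: str        # e.g. "transformer", "line"
        out_of_service: datetime
        back_in_service: datetime
        notes: str = ""        # free text stays, but only as a supplement

    record = MaintenanceRecord(
        asset_id="XFMR-004217",
        asset_type="transformer",
        out_of_service=datetime(2025, 7, 1, 6, 0),
        back_in_service=datetime(2025, 7, 1, 18, 0),
        notes="Planned bushing replacement.",
    )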

Seems sort of like deciding that postal addresses and mailing labels would be a huge pain, so let's find a chatbot that can, hopefully, follow a rambling account of how mail for the Johnson place is "third right past the old rail bridge; but if you see the silo you've gone too far."

Comment Re:Apple has to do it (Score 1) 68

Even if we assume, for the sake of argument, that trends are favorable and someone can actually figure out how to cover their costs, why do you say that "the train is leaving the station on this"?

So far the LLM guys have appeared to be relatively low-moat, with significantly cheaper and only modestly worse competition not far behind the big names, and ongoing contention between the ones you've heard of.

If Google, whose main obstacle to dominance of the search market is its own self-destructive tendencies (with Microsoft a distant second), is paying Apple $20 billion a year to stay the default, and everyone who isn't Google is basically begging you to switch, why would we expect the LLM guys to be in a position to squeeze Apple on terms?

Where is the point where Apple can no longer board the metaphorical train? There may come a point where they can no longer simply ignore the matter, if Siri's ongoing mediocrity becomes an actual issue or the like; but I'm not seeing why they'd be in a notably worse position tomorrow than today to either get satisfactory terms from one or more of the largely interchangeable and margin-challenged competitors, or just snap up one of the more competent fast followers (or an open-weights model and some appropriate hires) and deal with it when it comes up.

Where's the urgency? What marks the end of the station platform in this increasingly tenuous metaphor?

Comment What's the case for acquisition? (Score 1, Interesting) 68

Even if we accept, for the sake of argument, the theory that Apple products are suffering for want of 'AI', what's the case for Apple to pay a fairly stiff premium to acquire, versus just taking advantage of the current state of the market, where multiple vendors will fight for the opportunity to lose money on every sale and attempt to make it up in volume?

Absent a fairly specific argument for why Apple needs to own one, rather than just snapping up some of the useful people, or wielding the influence that being a customer with actual money tends to provide over suppliers that are in the process of bleeding out, the idea that Apple needs to buy into 'AI' seems sort of like the idea that Apple needs to buy into DRAM. Except that it's vastly more evident that Apple's products actually need DRAM; and Apple still doesn't go there, because why bother with a capital-intensive business whose margins are constantly buffeted by spot prices, and on the thin side, when you are buying enough that you don't need to worry about your place in line?

Comment Perhaps a little tweak? (Score 1) 57

This seems more like a "Zuckerberg threatens hundreds of billions for AI datacenters" situation.

In all seriousness, even if you are an 'AI' optimist (perhaps especially so, since you presumably think this isn't just Zuck pissing away more money after his metaverse successes), would you want Facebook to have a commanding position in the area? It's not literally the worst possible company to have to deal with; but it tries.

Comment "Planned to" seems dubious (Score 1) 28

Given the level of commitment it implies (basically the most lightweight of expendable pilot programs, even if you're saying you 'plan to' in a legally binding context), it seems at best exceptionally dubious to treat the answers to "do you plan to adopt generative AI?" as straightforwardly meaningful.

The differences mean something; it's just not obvious to what degree they reflect actual company strategy, vs. personal fascination with the new shiny thing, vs. people saying what they think the audience wishes to hear.

Comment Re:Meanwhile... (Score 1) 35

It's sort of an interesting mix of goofy hype and actual (but relatively boring) things worth looking into.

Not so much because of 'quantum' necessarily: it's entirely possible that someone will work out an at least somewhat worrisome classical efficiency improvement before the quantum computing types reach anything of useful size, and it's probably worth betting money that particular cryptographic implementations will turn out to be flawed. Rather, because it takes a fair amount of awareness to even have a complete idea of what you are running, and more than that to know the implications of needing to swap it out in some or all locations.

The people selling 'quantum' and 'post-quantum security' are mostly in the business of "forget your boring arduous problems by focusing on our exciting ones!" (good business; bad way to do security). But it's a pretty solid idea to be aware of the boring, arduous problem of exactly which ciphers you use, and which implementations, and whether there are any places where you've inadvertently left a compatibility toggle that allows something to be downgraded to some 90s 'export-grade' cipher; and to have an idea of how hard it would be to change ciphers or update implementations if you needed to for one reason or another.
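That kind of inventory doesn't require anything exotic. A minimal sketch using Python's standard ssl module, recording what a given endpoint actually negotiates (a starting point for the boring audit, not a full one; the host is just an example):

    import socket
    import ssl

    def negotiated_params(host: str, port: int = 443):
        """Connect with default (modern) settings and report what the
        server actually negotiates: protocol version and cipher suite."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                name, protocol, bits = tls.cipher()
                return tls.version(), name, bits

    if __name__ == "__main__":
        version, cipher, bits = negotiated_params("example.com")
        print(f"{version}: {cipher} ({bits}-bit)")

Run that across everything you own and diff the output over time; anything still willing to negotiate something ancient is exactly the compatibility toggle described above.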

Shockingly enough, the people with the biggest marketing blitzes and the best 'executive whitepapers' (with stock photos of shadowed hoodie hackers and Chinese quantum AI owning your cyber) are not the ones advising that you mostly do some really boring systems administration and SBOM work while waiting for mature, industry-standard implementations to become available; so the people selling immature proprietary implementations and dubious silver bullets tend to out-shout the more sensible ones.
