
Comment Re:AI good for known tasks (Score 1) 85

It seems like an open question whether being repetitive and rule based is actually a virtue as an AI use case or not.

'AI' is an easy sell for people who want to do some 'digital transformation' they can thought-leader about on LinkedIn without actually doing the ditch-digging involved in solving the problem conventionally: "Hey, just throw some unstructured inputs at the problem and the magic of Agentic will make the answer come out!" But that's not really a good argument in favor of doing it that way. Dealing with such a cryptic, unpredictable, and expensive tool is at its most compelling when you have a problem that isn't readily amenable to conventional solutions; it looks a lot like sheer laziness when you take a problem that basically just requires some form validation logic and a decision tree and throw an LLM at it because you can't be bothered to construct the decision tree.

There are definitely problems, some of them even useful, that are absolutely not amenable to conventional approaches; those at least have the argument that perhaps unpredictable results are better than no results or manual results. But if you've got some desperately conventional business-logic case that someone is turning into an 'AI' project, either because they are a trend chaser or because they think that programming is an obscurantist conspiracy by fiddly syntax nerds against the natural-language Idea Guys, that's not a good sign.
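For what "just requires some form validation logic and a decision tree" means in practice, here is a minimal sketch; the field names, thresholds, and routing outcomes are all hypothetical, purely to illustrate the kind of deterministic logic being contrasted with an LLM:

```python
# Hypothetical form validation plus a decision tree, standing in for
# conventional business logic. Same input, same answer, every time.

def validate(form: dict) -> list[str]:
    """Return a list of validation errors; empty means the form is valid."""
    errors = []
    if not form.get("customer_id"):
        errors.append("customer_id is required")
    if form.get("amount", 0) <= 0:
        errors.append("amount must be positive")
    return errors

def route(form: dict) -> str:
    """Deterministic decision tree over a validated form."""
    if form["amount"] > 10_000:
        return "manual_review"
    if form.get("customer_tier") == "gold":
        return "auto_approve"
    return "standard_queue"

form = {"customer_id": "C-42", "amount": 250, "customer_tier": "gold"}
assert validate(form) == []
print(route(form))  # auto_approve
```

A few dozen lines like this are auditable, testable, and free, which is the point of the comparison.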

Comment Sounds like a disaster. (Score 2) 85

As a direct test of the tool that sounds pretty underwhelming (and it's not a cheap upsell); but what seems really concerning is the second-order effects. Your average office environment doesn't exactly lack for emails or bad PowerPoint decks; and both get chiseled right out of the productivity of the people expected to read or sit through them. The more cynical sales types just go directly to selling you the inhuman-centipede solution, where everyone else also needs a Copilot license so they can summarize the increased volume of Copilot-authored material; but that only band-aids the "if it's not worth writing, why are you trying to write more of it?" problem.

Comment Re:Investing in what? (Score 4, Insightful) 134

It's also not clear why we'd need investors if AI good enough to eat all the jobs exists. Even without 'AI' a fairly massive amount of investment is handled by the relatively simple 'just dump it in an index fund and don't touch it, idiot' algorithm; and even allegedly sophisticated professionals have a fairly tepid track record when it comes to actually realizing market-beating returns.

Comment Incredibly stupid. (Score 4, Insightful) 134

Obviously it's this guy's job to promote retail investing as a cure-all; because that's what he sells; but this seems transparently stupid.

If 'AI' has eaten all the jobs, why exactly would we have humans 'investing' for a living? Surely an AI good enough to eat all the jobs could also match or exceed the performance of the average trader?

This proposal basically seems like UBI, but capitalism-washed with a pointless (and likely dangerous; given that retail noise trading is basically gambling for people who think they are too smart for gambling) financial services layer tacked on to avoid admitting that it's UBI by pretending that everyone is an investor instead.

Comment Re:One can only hope... (Score 4, Insightful) 46

We may not have had the safety culture to the same degree; but, given the number of insecticides that are acetylcholinesterase inhibitors not miles off in efficacy from their more alarmingly named colleagues among the G-series and V-series nerve agents, it seems pretty likely that 1950s chemists knew full well that they were poking at some very, very troublesome compounds.

They were probably not in a position to tease out some of the subtler neuroanatomical changes at low prenatal doses or the like, given the medical imaging of the time; but with a bunch of these we are talking about either compounds we worried about IG Farben tinkering with during the war or close analogs thereof.

Comment Re:I mean ... (Score 1) 127

I'd be curious if there is some asymmetry in their systems because of the enthusiasm of retail type outfits for trying to keep potential damage from basically untrusted employees to a minimum.

You see it a lot in grocery stores and big-box/department store setups, where there are certain POS operations that lock up and require manager approval (seems most common if they need to void a mis-scan over a certain value, multiple mis-scans, or customer-decides-they-don't-want-it changes of order; or if something is being returned). In fast food setups, where there are displays over the various prep stations telling people what needs to be made for a specific order number, there often either aren't controls or the controls are not intended to be interacted with (which is sensible design if you've got french-fry grease and food-safety concerns in the mix; but it likely means that the guy at the soda fountain being able to void a screen full of orders is either unsupported or intended to be a very infrequent case).

I could see that going poorly if you just grafted the bot on in place of either the human operator (who will just not take your 18,000 water cup order, so it will never exist as far as the system and its constraints are concerned) or the app (which has no common sense, but is both tied to someone's account information and vastly simpler to constrain with boring, ancient form validation logic) and immediately started dumping its interpretations of orders into the system as valid.
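The "boring, ancient form validation logic" an app sits behind can be sketched in a few lines; the menu, SKUs, and quantity cap here are hypothetical, but the point is that a sanity layer like this rejects the 18,000-water-cup order before it ever becomes valid in the system:

```python
# Hypothetical order-validation layer between any front end (human, app,
# or LLM) and the POS. Values and SKUs are invented for illustration.

MAX_QTY_PER_ITEM = 20
MENU = {"water_cup": 0.00, "fries_small": 2.49, "burger": 5.99}

def validate_order(items: list[dict]) -> list[str]:
    """Return validation errors; an order only enters the POS if this is empty."""
    errors = []
    if not items:
        errors.append("order is empty")
    for it in items:
        if it.get("sku") not in MENU:
            errors.append(f"unknown item: {it.get('sku')}")
        elif not 1 <= it.get("qty", 0) <= MAX_QTY_PER_ITEM:
            errors.append(f"bad quantity for {it['sku']}: {it.get('qty')}")
    return errors

bot_output = [{"sku": "water_cup", "qty": 18_000}]
print(validate_order(bot_output))  # ['bad quantity for water_cup: 18000']
```

Nothing about this requires the front end to have common sense; the constraints live downstream of it, which is where an app already keeps them.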

Probably not flood-the-store material; but plausibly quite disruptive if it's intended to be fairly uncommon for orders to need to just be disappeared once they are in.

Comment Re:Sometimes it surprises him? (Score 4, Insightful) 127

What seems frankly depressing is that a C-level would think (and quite possibly have reason to think) that that sort of aw-shucks-lessons-are-being-learned-about-things-nobody-could-have-predicted tone is exonerating outside of a fairly tiny, low-stakes test program somewhere.

It's not like having somebody take a poke at connecting a system that is supposed to be pretty OK-ish at natural language processing and text-to-speech to an ordering system is particularly unreasonable; at the scale they are operating, it would probably be more unreasonable not to. But "well, it's live in 500 locations and we've learned that a technology synonymous with prompt injections, and with a lack of common sense so profound it's almost a category error to suggest it could have any, isn't super robust..." makes you sound unbelievably dumb and risk-insensitive.

Comment Re:Interesting (Score 2) 49

The specific regulatory formulation probably wouldn't fly in the US; but a municipal regulation that has no enforcement, no penalties and "is merely a guideline... to encourage citizens" is basically just a public service announcement; which is something that's reasonably common and not especially controversial or legally fraught.

PSAs do tend to be treated as a bit of a punchline; but they are common enough, both the outright state-sponsored ones and the nominally charitable private-sector initiatives that make unsold ad impressions look like community service.

Comment Re:Entitled much? (Score 1) 58

I think the very fact that you can (and probably should, at least to some degree) do more or less exactly that is what makes this report seem so hysterical.

It's not like it's false that some Yandex software dude will probably cooperate if the FSB taps him on the shoulder and suggests that it's exciting and mandatory, while John Smith, corn-fed American patriot, is at least going to require some sweet-talking; but if you are just blindly grabbing 'package that some dude put on NPM' your problems are far deeper, and much less exciting, than nation-state sabotage. Even when doing their absolute best, programmers make mistakes all the time; so if the project is basically one dude who maybe debugs his own code if it's too broken, you have basically no reason to suspect that innocent vulnerabilities are getting caught. That's on top of the risks posed by the relatively frequent compromises of dev credentials on the various repositories, and the risk that you'll be left unsupported if the random guy gets hit by a bus or finds a new hobby and just walks away.

It's fun to pretend that tedious, labor-intensive problems don't exist by focusing on sexy threats instead; so I'm not surprised that a 'security' vendor would be working this angle. But, fundamentally, if you are just grabbing random garbage off a repository every time one of your junior devs even thinks too hard about Docker, you are doing it wrong.

It also seems a bit silly because, if your real problem is nation-state adversaries rather than nobody actually looking because it seems like it works and why try harder, it would likely be relatively trivial for the trojan-horse project to add 'legitimacy'. You want multiple maintainers because we can't trust Sinister Yuri to police himself? OK, it doesn't take a terribly impressive intelligence agency to conjure up a few additional contributors who make changes to the project from North American or western European IPs and time zones, and who have a thin but plausible trail of assorted tidbits suggesting that they are consultants or employees of random little companies in friendly nations. You call that a security check?
