
Comment Re:Good! (Score 1) 46

Mostly just in the bulk, low barriers to entry, and pervasiveness (like a lot of things social media). The case of actors actually goes back a long way: state laws regarding compensation of child actors were spurred by the case of one who was popular in the 1920s and litigated with his parents in 1939 over where the money wasn't. That case law doesn't provide for takedowns; but it's also the case that filmmakers are normally looking for children to play characters rather than to do 'candid' intense documentaries of them at home, so the degree of public exposure of private life is presumably deemed to be less; with the main issue being children who were...definitely...getting a solid education while on stage finding that all the money was gone when it became their problem.

Child-blogging, by contrast, seems to reward verisimilitude (if not necessarily truth) and invasiveness, with relatively pervasive in-home mining for 'content', so it presumably seems better served by removal-focused options; though there has definitely been talk about covering the economic angle in line with child actors.

I don't even know what the deal is with child beauty pageants, or how something you'd assume is a salacious bit of slander about what pedophile cabals are totally doing, somewhere, is actually a thing a slice of parents are into, way, way into. Apparently that's a third rail to someone, though, as the only jurisdiction I'm aware of with significant restrictions on them is France.

Comment Re:The Horse is Already Gone (Score 1) 65

Unless quantum computing becomes cheap and comparatively widely available quite quickly after becoming viable, passwords seem like they'll be a manageable problem. Nobody likes rotating them; but it's merely tedious to do, and the passwords themselves are of zero interest unless they are still being accepted. If it does go from 'not possible' to 'so cheap we can just go through them in bulk' overnight, that could ruin some people's days; but if there's any interval of 'nope, the fancy physics machine in the dilution refrigerator is currently booked by someone with a nation-state intelligence budget' you can just rotate older credentials.

Now, if you were hoping that encryption was going to save any secrets that are interesting in and of themselves and got out in encrypted form, then you have a problem. Those can't be readily changed and will just be waiting.

Comment Not an increase (Score 1) 72

LLMs have never been rules-based "agents," and they never will be. They cannot internalize arbitrary guidelines and abide by them unerringly, nor can they make qualitative decisions about which rule(s) to follow in the face of conflict. The nature of attention windows means that models are actively ignoring context, including "rules," which is why they can't follow them. Conflict resolution requires intelligence, which they do not possess, and which even intelligent beings frequently fail to apply effectively. Social "error correction" tools for rule-breaking include learning from mistakes, which agents cannot do, and individualized ostracization/segregation (firing, jail, etc.), which is also not something we can do with LLMs.

So the only way to achieve rule-following behavior is to deterministically enforce limits on what LLMs can do, akin to a firewall. This is not exactly straightforward either, especially if you don't have fine-grained enough controls in the first place. For example, you could deterministically remove the capability of an agent to delete emails, but you couldn't easily scope that restriction to only "work emails." The emails would need to be categorized appropriately, external to the agent; the agent's control surface would need to thoroughly limit the ability to delete any email tagged as "work", or to change or remove the "work" tag; the "work" tag deny rule would need to take priority over any other "allow" rules; AND the agent would need to be prevented from changing the rules by any means.
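As a rough sketch of the "firewall" approach described above: a deterministic policy layer that sits between the agent and the mail store, where deny rules always win over allow rules and retagging can't be used to sidestep them. All the names here (`Email`, `Policy`, the tag values) are invented for illustration, not any real agent framework or mail API.

```python
# Hypothetical deterministic policy gate for an email-handling agent.
# The agent never touches the mail store directly; every action is
# checked here first, and deny rules take priority unconditionally.

from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    id: str
    tags: frozenset  # e.g. frozenset({"work", "urgent"})

@dataclass(frozen=True)
class Policy:
    deny_delete_tags: frozenset = frozenset({"work"})

    def may_delete(self, email: Email) -> bool:
        # Deny beats any allow rule: a protected tag blocks deletion
        # outright, regardless of what else the agent is permitted.
        return not (email.tags & self.deny_delete_tags)

    def may_retag(self, email: Email, new_tags: frozenset) -> bool:
        # The agent also must not strip a protected tag, or it could
        # launder the email past may_delete() in two steps.
        return (email.tags & self.deny_delete_tags) <= new_tags

policy = Policy()
work_mail = Email("m1", frozenset({"work", "urgent"}))
newsletter = Email("m2", frozenset({"promotions"}))

assert not policy.may_delete(work_mail)    # deny rule wins
assert policy.may_delete(newsletter)       # no protected tag present
assert not policy.may_retag(work_mail, frozenset({"urgent"}))  # can't drop "work"
```

Note that the policy object itself is frozen: the "prevent the agent from changing the rules" requirement means the rule set has to live outside anything the agent can write to.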

Essentially, this is an entirely new threat model, where neither agentic privilege nor agentic trust cleanly maps to user privilege or user trust. At the same time, the more time spent fine-tuning rules and controls, the less useful agentic automation becomes. At some point you're doing at least as much work as the agent, if not more, and the whole point of "individualized" agentic behavior inherently means that any given set of fine-tuned rules is not broadly applicable. On top of that, the end result of agentic behavior might be worse than human performance to boot, which means more work for worse results.

Comment Re:ed-tech (Score 1) 88

Plus the whole 'fucking dystopian' angle. On the one hand we've got people bitching about 'civilizational decline'; but we want 'robot philosophers' teaching children? I'm not against the occasional scantronned multiple choice test; but outsourcing philosophy to save on those oh-so-expensive adjuncts seems like the sort of thing you only do to children being groomed for mindless servitude or because you've entirely given up on humanity as anything but an ingredient in pump and dump schemes.

Comment The structure or the incentive structure? (Score 1) 31

I'd be more optimistic about the ability to deliver an approximate equivalent if there were someone paying for them to do so (the economics of ordinary satellite launches seem to favor fitting within what a given delivery vehicle can handle, rather than bolting things together, so it's not 100% assured, but seems likely); but I'm less clear on replicating the incentive structure.

It's not that the ISS is totally useless; but it currently justifies an awful lot of launches, including manned ones, more or less by being there. Gotta launch that crew lest the ISS be empty, which would be bad because reasons; and gotta launch those supplies because there's a crew on the ISS. They do find scientific things to shove into modules; but the arrangement is such that no project is ever called on to justify the ISS, which is just sort of assumed.

Short of the feds just paying some contractors and calling it a 'private' ISS replacement, it's less clear that there's much private-sector incentive to build an ISS-like station; judging by the quite vigorous stream of privately justified satellites designed to not be bolted together and the relative absence of jostling for ISS experiment space. If it were worth that much we'd presumably be up to our eyes in sordid stories of people pulling lobbying stunts to try to exploit it on the cheap through regulatory capture; but we aren't really.

Comment Hmm... (Score 1) 19

And here I thought that 'AI' was supposed to be leading to a flowering of specialized-for-purpose software that would previously have been infeasible to build due to resource constraints; but one of the most heavily capitalized outfits in the bubble can't cope with a Chromium reskin and a couple of Electron apps?

Comment Re:Dumb (Score 1) 114

I suspect that he's doing some weasel-wording in terms of use cases in part just to make his proposal sound more novel and more hypergrowth-capable than it actually is. Aside from the question of why you'd want to put your money on a convicted fraudster who was unable to deliver a simpler project, it's just not clear how novel the proposal is, or how favorable for the frothy growth that VCs love.

Talking about "AI powered planes" seems like a way of trying to ignore the fact that 'drones', which are incidentally often rather small but need not be, are something others have been actively and aggressively exploring for years to decades now; with the accompanying question of why we'd be interested in a latecomer with a hype deck. And talking about 'AI' rather than 'autopilot' seems like a way of trying to ignore the number of aircraft that (while they do not go uncrewed today, for regulatory purposes) are capable of executing most of their flight under the control of some relatively simple and well-understood feedback systems; or the hybrid systems (like the Predator and Reaper drones, which at this point are old enough that some of them have cycled out of service) that would temporarily bring a human in to fly hands-on-stick for particular operations but could mostly buzz around unattended, so that a single operator could handle several of them at the same time.

There are clearly plenty of use cases for aircraft that don't require a pilot; it's just much less clear how much room there is for this to be an exciting, mostly unfilled space where revolutions will happen and enormous fortunes justify the enormous risks. More likely it's actually a combination of bulk civil aviation, where the existing autopilots are probably 90% there but nobody really wants the blowback of cutting the pilot out; and all the various drone applications, where people who aren't this guy are years ahead of him.

Comment Re:For everybody? (Score 1) 72

Going by "Walmart said that both patents were "unrelated to dynamic pricing," as the patent issued in January was specific to markdowns" it sounds like they are going to try the argument that it's not evil dynamic pricing; it's glorious personalized savings!

Those are the same thing arrived at by superficially different routes, obviously; but in terms of the psychology it wouldn't be at all surprising if you can convince people that being offered discounts calculated to be just big enough to get them to bite is totally awesome, while being offered prices just below the level that makes them scream is brutal oppression, even though it's the same price. So I wouldn't bet against it working.

Comment How patentable? (Score 1) 72

Clearly they got the patent, so somebody was convinced; but I'm puzzled by what you could actually patent at this sort of scale. I could imagine a specific implementation involving some genuinely clever techniques that might be novel enough to patent, or a specific good implementation being a juicy trade secret; but at a high level "try to do some price discrimination while balancing sales rate and margin" sounds like a classic "ancient obvious thing; but we envision a system involving a computer" patent.
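To underline how "ancient obvious thing with a computer" the high-level description is: here's roughly what "mark down slow-moving stock while protecting margin" looks like as code. Every threshold and the formula itself are made up for illustration; none of this is taken from any actual patent or retailer system.

```python
# Toy markdown logic: cut the price when sell-through is slow, but
# never below the price that preserves a minimum margin. The 10%/5%
# sell-through thresholds and discount steps are invented values.

def markdown_price(list_price: float, weekly_sell_through: float,
                   floor_margin: float, unit_cost: float) -> float:
    # Lowest price that still yields the required margin fraction.
    floor = unit_cost / (1.0 - floor_margin)
    if weekly_sell_through >= 0.10:
        return list_price          # selling fine: no markdown
    discount = 0.25 if weekly_sell_through < 0.05 else 0.10
    return max(list_price * (1.0 - discount), floor)
```

For a $100 item costing $60 with a 20% margin floor: healthy sell-through keeps it at $100, sluggish sell-through drops it to $90, and very slow sell-through hits the $75 margin floor. That the whole idea fits in a dozen lines is the point of the skepticism above.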

Comment Maybe the ultimate killer app will be legal/political (Score 1) 112

Ever since I started writing code in 1980, I've continually wondered if we'll ever reach a plateau where consumer-affordable tech is so good the average person won't need further advances. Eventually computing and networking will be fast enough, and storage will be huge enough, that we can all essentially have full copies of the Internet on our phones (or whatever), and intelligent, locally-running agents that can tell us anything we want to know, conversationally in realtime, including results that require analysis. At that point what would a faster or bigger computer do for you? Hardware and software will definitely get there, but getting permission to have and manipulate the content will be an even bigger barrier. I don't see a scenario like this happening as long as economics is still a force in the world. Food and electricity will probably be free before information will.

Comment Not even the worst of it. (Score 3, Insightful) 93

There is, presumably, an amount of time savings where this could be justified (at least for things that you, ultimately, only do because they pay the bills; not ones of some intrinsic value); but it seems particularly grim to deal with the changed nature of the work for such paltry savings.

Going from 'thinking about things you know about' to 'keeping a close eye on an erratic intern who can bullshit really fast' is a fairly dramatic downgrade in terms of the quality and apparent futility of what you are doing. At least junior people sometimes improve thanks to mentoring, even if it's not something you do specifically to save time in the immediate term. A relentless torrent of glib, dense output, though, is hell compared to just doing it yourself; so the idea that you aren't even saving time that way is pretty grim.

Comment A pity... (Score 4, Insightful) 144

If the cops are going to hold people without charge for months for bullshit reasons and then act like there's nothing wrong with that could they, please, try to focus on the "If you've done nothing wrong you have nothing to worry about!" idiots? At least with those guys it would be educational outreach.
