
Comment Re:Kryste! (Score 1) 60

While you have a point, also consider how we would treat someone who, after being robbed, decides to go and steal from others.

If anything, we should be harder on those who victimize others despite having been victims themselves. They should know better. If they don't, they need to find out.

It's an interesting question. To my knowledge, no such correlation exists, so this is purely hypothetical. And there's a part of me that agrees with you. There's also a part of me that recognizes that being a victim of abuse, crime, etc. changes people. I remember how angry I became after somebody stole my camera bag out of my car while I was parked at church. If I had found out who it was, I would probably be in jail right now. If something that minor can cause that much of a psychological change in someone for O(months), I'd hate to imagine how being the victim of sexual abuse could change someone. It probably wouldn't involve increasing their empathy, but rather completely obliterating it for O(decades).

Comment Re:Kryste! (Score 1) 60

But isn't it fun to decide that a person is a pedo because they were abused.

I mean, it's also possible that men who have some other characteristic that makes them ripe for abuse also makes them become abusers. Correlation is not causation, and all that. But the pattern is surprisingly strong.

And then there are women. I often have to check the predators list for one of my organizations. There are women on it too, despite the common narrative.

What's interesting is that the whole correlation where abuse victims are more likely to become abusers exists *only* for men. Female abuse victims do not show a higher propensity for becoming abusers. I'm not sure why that is, but I have a feeling it may have something to do with biology and beta males having a drive to become alpha males at a primitive level that's hard to repress without ongoing psychological counseling. But this is just a hypothesis, and I'm not sure how you would test it (ethically, anyway).

Outside of my SO, who is younger than me - after I got out of high school, I was mainly attracted to older women. Well, I suppose I was then too, but I was kinda underage, y'know? Wonder what the British Journal of Psychiatry has to say about those predilections?

Don't know. But young people having crushes on older people is pretty common, so I assume they would consider that normal. For that matter, older people being (non-exclusively) attracted to younger people (young as in teen, not child) is also considered normal, psychologically speaking. What makes it problematic is acting on those feelings. And I guess that last part is at least arguably true for attraction to kids as well, though with a much bigger ick factor, and without the "considered psychologically normal" part.

That's what makes the CSAM dragnets a bit disconcerting from a psychology perspective. With the exception of the thirty-odd people who were known to be actively abusing children and probably some percentage of the others who uploaded new CSAM not previously known to law enforcement, there's not necessarily any real reason to assume that any of those 1.8 million are any more likely to abuse children than any other random person who has those same feelings (which, with 8 billion people on the planet is probably in the high tens of millions to low hundreds of millions, or if you count attraction to teens, is several billion).

Comment Re:The numbers don't add up (Score 1) 60

My guess, then, assuming that most consumers are not producers, would be that 90,000 videos is roughly the total number of CSAM videos that are in wide circulation, or at least that it is on the order of maybe a few hundred thousand. And from there, it seems likely that the 1400 people arrested so far are likely people who uploaded new content not previously known to law enforcement.

Rereading the summary, that doesn't square with the claim that only 39 child victims were protected. So it doesn't explain why that number is so low.

Maybe the 1400 people arrested were people whose identities were more easily unmasked for some reason? People who uploaded content in general? People who were stupid enough to download and run a .exe file? People who were running a vulnerable browser?

Or maybe they aren't done. I'm kind of expecting that to be the correct answer, but guessing here, since there's no detail about how they breached the network (which I suppose is entirely reasonable, so long as it doesn't involve fruit of the poisonous tree, though with my computer security nerd hat on, I admit to being a little curious).

Comment Re:Kryste! (Score 1) 60

I mean, what would cause an adult to be interested in kids that way?

Statistically? There's a strong tendency for men who were sexually abused as children to go on to sexually abuse people later in life.

So the whole "feed them to a wood chipper" answer might make folks feel good about themselves, but it kind of misses the mark when about a third of them were victims themselves.

Comment Re:The numbers don't add up (Score 1) 60

The article claims that there were 1.8 million users, 1,400 suspects and 91,000 videos. My guess is that the 1.8 million number is made up. So either they managed to identify ~0.1% of the users, and the average user uploaded 0.05 videos, or much more likely the site had less than 10,000 users.

At first glance, maybe, but when I thought more about it, I concluded that one upload per 20 users is not entirely unrealistic. Bear in mind, though, that I'm making a *lot* of assumptions that may or may not be valid, so take this with a grain of salt.

My first assumption is that most people who watch porn in a fetish category likely don't actually engage in that type of sex in real life, and I'd expect that to be doubly true for child porn. If that assumption holds, we can safely say that the number of people who could upload videos of their own creation would probably be pretty low (thankfully) as a percentage of such a group.

My second assumption is that there isn't a huge body of CSAM that is broadly available. To be clear, I suspect that more than 90,000 pieces of distinct content exist *somewhere* (though I really don't even have a way to estimate the scope here, so that's entirely a wild guess), but I would assume that the amount of CSAM that is broadly distributed, i.e. the amount of content that a sufficiently large number of interested persons would be likely to possess, is probably a small percentage of the CSAM that exists out there.

In other words, I'm assuming that like baseball cards, collectible toys, or pretty much any other collectible item, you're likely to have a long tail in the distribution, i.e. some small amount of CSAM content would be possessed by a large percentage of interested people, with the vast majority of CSAM content possessed by only the original creators, their friends, friends of friends, etc. out to some small number of hops.

If that assumption is true, then that would mean that for people with extensive personal collections, those collections would be likely to have a high degree of overlap. So as the number of people in a group increases, the probability of any given video in someone's collection not already being in the shared collection would approach 0 asymptotically.
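If you want to see how fast that probability falls off, the long-tail assumption is easy to simulate. Here's a toy Monte Carlo sketch, where every number is invented for illustration (a steep 1/rank² popularity curve over 100,000 hypothetical items, 50-item collections): it measures what fraction of newcomers would add anything a growing shared pool doesn't already have.

```python
import random
from itertools import accumulate

random.seed(42)

NUM_ITEMS = 100_000        # hypothetical count of distinct items in circulation
COLLECTION_SIZE = 50       # hypothetical size of one person's collection

# Steep long tail: item k circulates with probability ~ 1/(k+1)^2,
# so a handful of head items account for most of what people hold.
cum_weights = list(accumulate(1.0 / (k + 1) ** 2 for k in range(NUM_ITEMS)))

def random_collection():
    """One collector's holdings, drawn from the long-tailed distribution."""
    return set(random.choices(range(NUM_ITEMS), cum_weights=cum_weights,
                              k=COLLECTION_SIZE))

fractions = []
for n in (10, 100, 1000):
    pool = set()                       # everything the group already shares
    for _ in range(n):
        pool |= random_collection()
    # How often does one more collector bring anything the pool lacks?
    trials = 200
    novel = sum(1 for _ in range(trials) if random_collection() - pool)
    fractions.append(novel / trials)
    print(f"{n:>5} collectors: {novel / trials:.0%} of newcomers add something new")
```

With a steep enough head, the novelty fraction drops sharply as the group grows, which is exactly the asymptotic behavior described above.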

My guess, then, assuming that most consumers are not producers, would be that 90,000 videos is roughly the total number of CSAM videos that are in wide circulation, or at least that it is on the order of maybe a few hundred thousand. And from there, it seems likely that the 1400 people arrested so far are likely people who uploaded new content not previously known to law enforcement.

But this is purely an educated guess based on a lot of assumptions and a general understanding of statistics. Does anybody have any actual numbers to corroborate or contradict that theory?

Comment Re:It's not the year of robotic AI. (Score 1) 71

I doubt that our mutual experience levels are going to allow us to agree on these points.

Consider the architecture of an RTOS vs, say Linux (which has been re-architected in an RTOS release). Let's map the RTOS to a standard kernel for sake of this example.

An RTOS is entirely reactive. Inputs control it at all times. This reactance is like the self-driving application that must make snap judgments of numerous conditions and rapid input changes to alter the path of what it's controlling. It needs excruciating calculative strength to make snap judgments that direct projected control.

Oh, no question that it's a hard problem. The difference is that self-driving can be broken down easily into a bunch of smaller hard problems, and each problem has, to some extent, a right answer, or at least relatively straightforward ways to objectively verify that an answer is not wrong. For example, you can fairly objectively define what constitutes a reasonable driving path, and provide guard rails in running your tests against newly trained models that fail if it goes outside of those parameters. So this is more in the NP-complete level of complexity, where computing it is hard, but verifying the solution is of polynomial complexity.

What constitutes a reasonable architecture for a piece of software is either entirely subjective or an intractably large set of objective constraints or some combination thereof, because maintainability considerations play a role, along with data backwards/forwards compatibility, etc. I'm not even sure where you would begin trying to define adequate guard rails. This is more like the broader NP-hard level of complexity, where computing it is hard, and verifying it may well be even harder.

Its state-machine logic has to be incredibly unerring, all whilst moving down the road at speed with humans as cargo.

Without that onus, a kernel tries to systematically deliver interactive response so quickly that users see no pause. There is no human payload, only the satisfaction that screen and device updates are acceptable, perhaps a few pauses now and then as one rotates a vector 3D model through onscreen space.

The coding model for the RTOS behind realtime transport navigation is a different one than say, CRM, web, or pub/sub models with messaging reactance.

No disagreement on any of those points. The tolerances for self-driving car tech are indeed higher than the tolerances for tools that write software. But part of the reason for that is that there's a human in the loop in the latter situation. You aren't trying to write software that can design a word processor from scratch. You're writing software that can design a single function or maybe a small class that performs a specific behavior from scratch, and all of the hard work happened before you even asked — specifically, coming up with the specifications.

I know that's true for self-driving as well, but the difference is that the specifications for working self-driving behavior are largely consistent across platforms, with the exception of some specific rules of the road being different in different countries, whereas the specifications for a word processor are entirely unrelated to the specifications for an image editor or a web server.

So at that general level, being able to drive a car is at best like AI being able to write a word processor, and AI being able to write any arbitrary piece of software is by definition a much broader problem.

In your robotics example, it's connected to a pub/sub network to deliver it largely realtime information about the qualities of characteristics that it navigates, sucking your floor. The pub/sub model micro-rewards various participants in its revenue model, while cleaning your litter and dead skin.

Probably not. The robotic vacs can be used entirely offline; you just lose the ability to control them remotely if you do. And if you believe their privacy policy, no data is stored remotely except for aggregated data.

Tesla and others' nav firmware isn't finished. It's not provable, only a sum of projections;

I mean, that's the very definition of AI.

the realtime driverless taxis clog the veins in SF, as an example, where people uniformly vilify their stupidity.

That would be Cruise, I suspect. The general perception of Waymo in SF seems to be pretty positive.

Clearly, they're not ready. It doesn't matter if you badge them with a Jaguar leaper on their hood or not-- they're working only under very highly confined circumstances.

I wouldn't call it "highly confined". The biggest constraint was lack of support for driving on the freeway. They just started testing that in early 2024, and got regulatory approval in California in mid-2024. Without freeway driving, self-driving cars wouldn't be feasible in a lot of cities. Now that they've started doing freeway driving, I suspect you'll find that the number of situations and environments that they can't handle is remarkably small, bounded largely by the need to do high-definition mapping drives first.

I agree that it is imperfect, particularly when construction is involved, but the difference between the modern Waymo cars and the hesitant Waymo cars from a decade ago is night and day, speaking as someone who periodically ends up driving near one.

No, it's not ready for prime time, and the Chinese robotics meme isn't so much a sham, as it's wishful thinking, a hope and prayer.

I disagree on the first part, but I agree with the second. I have little faith in any mass-manufactured humanoid robots being usable right now. But if they bring the cost of the hardware down enough through mass manufacturing, then as long as they run some sufficiently open operating system, other folks will find interesting ways to use them, and will figure out how to make the software work.

For example, electronics PCB manufacturing is already highly automated, with human workers loading tape reels of components into pick-and-place machines, and the machines doing all the rest. It would be hard to replace all of that hardware with new hardware designed for any sort of automated loading, but it seems obvious that a humanoid robot with the right programming could fully automate the loading of components. That's well within the realm of what robotics can do today.

Similarly, Amazon warehouses have non-humanoid robots that can already pick a lot of things off of warehouse shelves. A humanoid robot could probably do a better job, and they actually have the software engineering resources to make that happen. Whether any of that software would ever become available outside of Amazon is, of course, a different question.

Comment Re:It's not the year of robotic AI. (Score 1) 71

The transient nature of navigating transportation obstacles requires knowing many concepts, and avoiding the ones that lead to bad outcomes. Driving automation and coding intersect at many junctures.

Code is not static, and neither is driving. On a good day, easily summoned choices can be made, and on a bad day, dependencies require astute and rapid choices to be made productively.

Making a choice at its simplest is an if statement. It's boolean logic. Making lots of rapid choices that take into account the data coming in can be measured objectively. Creative efforts can only be measured subjectively. That by itself makes the two fundamentally different in terms of designing training systems.

The timing of transportation doesn't wait; conclusions of many inputs have to render the right choice in an action. Deftly done, all is good, rider arrives at a destination, money earned, no harm no foul.

Let's be realistic here. On 99% of drives, nothing interesting happens. You just have to pick the correct lane, stop for stop signs and traffic lights, and obey the speed limit, and you get there safely. On 1% of drives, somebody cuts in front of you or tries to sideswipe you or steps out in front of you. And in 95% of those cases, you just have to recognize that this is about to happen by computing the speed and direction of each vehicle, pedestrian, animal, or other large object, and hit the brakes soon enough. By the time you get into situations where you have to steer to avoid something, you're at more like 0.0001% of drives.
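That "compute speed and direction, then brake soon enough" decision boils down to a required-deceleration check against the car's braking limit. A minimal sketch, with invented thresholds (the reaction delay and maximum deceleration are illustrative numbers, not anything a real system uses):

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float      # range to object along our path, meters
    closing_mps: float     # closing speed, m/s (positive = getting closer)

def brake_command(obj: TrackedObject, reaction_s: float = 0.5,
                  max_decel: float = 8.0) -> float:
    """Return a brake fraction 0.0-1.0 for one tracked object.

    Compares the deceleration needed to stop before reaching the object
    (after a reaction delay) against the car's maximum braking ability.
    All thresholds here are invented for illustration.
    """
    if obj.closing_mps <= 0:
        return 0.0                      # not converging; nothing to do
    gap = obj.distance_m - obj.closing_mps * reaction_s
    if gap <= 0:
        return 1.0                      # already too late: full brake
    # v^2 = 2 a d  =>  deceleration required to stop within the gap
    needed_decel = obj.closing_mps ** 2 / (2 * gap)
    return min(1.0, needed_decel / max_decel)

# Pedestrian 40 m ahead, closing at 15 m/s (~54 km/h)
print(brake_command(TrackedObject(40.0, 15.0)))
```

For that example, the math asks for roughly 43% of maximum braking; anything the car can't physically achieve saturates at full brake.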

This, of course, ignores the path planning headaches of some drives involving driving through safety cones for temporary lane shifts, or on rare occasions, having to deal with a human directing traffic, but again, these are relatively rare edge cases.

A similar sequence of events occurs in programming. The only item ostensibly removed is a split-second life/death choice.

Which means it's not a similar sequence at all from a complexity perspective.

In driving, once you have passed a particular spot in the road, what happened back there no longer matters. You can forget about it. And anything that isn't about to happen within the next double-digit seconds also doesn't matter. You don't need to think about it yet. There's a very narrow window of temporal data that matters.

In programming, every single one of those decisions has to take into account at some level every future decision that will lead to successfully building the app, not only immediately, but also making it maintainable for features that you might want to add later. You can't look at one part of the project in isolation, do your job, and walk away unless you are a very junior programmer working on a very well-defined task. And if you are a very junior programmer, for you to even be able to get that well-defined task in the first place, someone else had to think about all of those things; it just wasn't you.

So no, these are not similar. Not at all. The decisions made by a self-driving car are expressible as a simple state machine, with simple outputs (steer X% left or right, brake X%). All of the complexity is in the data gathering (get the location and motion vector for all interesting objects) and path planning. The decisions made by a programmer are not. The inputs are vague English descriptions of what you want to build. The outputs are large, complex software systems.
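To make the "simple state machine, simple outputs" claim concrete, here's a deliberately tiny sketch. The states, transition thresholds, and output values are all invented for illustration:

```python
from enum import Enum, auto

class Mode(Enum):
    CRUISE = auto()
    FOLLOW = auto()
    EMERGENCY_BRAKE = auto()

def next_mode(mode: Mode, lead_gap_s: float, hazard: bool) -> Mode:
    """One tick of a toy driving state machine.

    lead_gap_s: time gap to the lead vehicle, in seconds;
    hazard: True if perception flagged an imminent collision.
    All thresholds are invented for illustration.
    """
    if hazard:
        return Mode.EMERGENCY_BRAKE
    if mode is Mode.EMERGENCY_BRAKE:
        return Mode.FOLLOW                  # hazard cleared: recover cautiously
    if lead_gap_s < 2.0:
        return Mode.FOLLOW                  # too close: match the lead's speed
    return Mode.CRUISE

OUTPUT = {  # each state maps to the small, fixed output vocabulary
    Mode.CRUISE:          {"throttle": 0.3, "brake": 0.0},
    Mode.FOLLOW:          {"throttle": 0.1, "brake": 0.1},
    Mode.EMERGENCY_BRAKE: {"throttle": 0.0, "brake": 1.0},
}

history = []
mode = Mode.CRUISE
for gap, hazard in [(10.0, False), (1.5, False), (1.0, True), (5.0, False)]:
    mode = next_mode(mode, gap, hazard)
    history.append(mode.name)
    print(mode.name, OUTPUT[mode])
```

A real stack has far more states and continuous controllers underneath, but the point stands: the output vocabulary is small and fixed, unlike a programmer's.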

That under highly confined circumstances, driving after millions of miles of training in limited geography, AI can drive some cars is just a toe-dip in the real world. Phoenix, SF LA-- they get little snow. They have minimal random objects invading spaces.

I'll grant you that snow is a problem. It's also a problem for human drivers, though. Random objects? They're about as likely in the Bay Area as in any other city.

The template you cite is highly-confined, somewhat to maximally arid circumstance and environment. The real world is but a fraction of that.

Actually, quite the opposite. Driving usefully in a dense city with pedestrians and cyclists is worst-case for self-driving tech. Driving out in the middle of nowhere means you're orders of magnitude less likely to encounter other cars, pedestrians, and cyclists, so the decisions tend to be way less critical. Obviously, the lack of updated maps can be a problem, but that's more of a process issue, not a technology issue. The only other real difference is the potential for higher road speeds, but a car's ability to drive at higher speeds is mostly bounded by CPU/GPU performance and camera resolution for seeing things at longer distances, i.e. it is the sort of complexity difference that will go away on its own with a few years of hardware improvements.

Your new robot vacuum doesn't make any money, it just phones home and rats out your living quarters geometry for profits. Look it up. And you know how your Tesla knows your every move. There is no privacy in a Tesla. You're part of the product. You charge at Tesla chargers, use the screen for nav and looking up restaurants. You're part of the product. You're no longer autonomous as a driver, and not really in control. Hope that works for you.

Yes, it does. Thanks for asking. I've read Tesla's privacy policy, and it seems entirely reasonable. Same with Roborock. If that changes, I'll let you know.

Comment One word: embedded (Score 1) 194

If you think companies are going to keep using Windows for their embedded devices when the company has to create an online account before they ship the box to customers, you have another thing coming. Letting the manufacturer of a product have permanent control over that hardware and permanent ability to take control over someone else's account on a device that they own would be a showstopper for literally every company that preinstalls Windows for embedded use.

Comment Re:Oye Vey (Score 1) 27

So what fresh new hell have they cooked up for my domain certs now?

... that they will no longer issue certificates for your domain to someone else? Sounds like a good thing, on the whole!

The thing is, they could shut down and guarantee that with 100% certainty. Making it harder for someone else to get certs for your domain is nice until it prevents you from getting certs for your own domain. Also, it does nothing for the gaping hole that lookalike domains represent.

As far as I am concerned, TLS certificates stopped being beneficial when they stopped costing thousands of dollars. Expanding TLS to every random person who wants to run a web server destroyed their value in recognizing a site's legitimacy. For everybody else, we should have used a separate system, like public keys in DNS records with key pinning a la SSH. Instead, they basically broke TLS completely, so the security icon in Safari for Amazon.com is indistinguishable from the security icon in Safari for my personal website with a free certificate from LetsEncrypt.

From here on, there's nothing that they can realistically do to make TLS trustworthy, because it just isn't. They could make it impossible for other people to get certs for your sites, and that still won't make it more trustworthy. For 99.9999% of use cases, we'd be better off if they stopped trying, let anybody get a cert who wants one, and used key pinning to guarantee that fake certs are useless except at sites you haven't visited. If you have pinning, the only remaining benefit for actually validating the ownership of the domain is for extended validation certs, and again, if you can't tell which ones are which, then the industry has decided that they don't matter, so why bother with that?
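The SSH-style pinning idea in the last two paragraphs is mechanically simple: on first contact, record a fingerprint of the server's certificate; on every later connection, refuse to proceed if it changed. A stdlib-only sketch of the trust-on-first-use check (the file name, storage format, and function names are mine, not any real tool's):

```python
import hashlib
import json
from pathlib import Path

PIN_FILE = Path("known_hosts.json")   # hypothetical local pin store
PIN_FILE.unlink(missing_ok=True)      # start fresh for this demo

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def check_pin(host: str, cert_der: bytes) -> bool:
    """Trust-on-first-use: pin on first sight, verify thereafter.

    Returns True if the certificate matches the stored pin (or is new);
    False means the key changed and the connection should be refused.
    """
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    fp = fingerprint(cert_der)
    if host not in pins:
        pins[host] = fp                 # first contact: record the pin
        PIN_FILE.write_text(json.dumps(pins, indent=2))
        return True
    return pins[host] == fp            # later contact: must match exactly

# First sight pins; a changed cert is then rejected.
print(check_pin("example.com", b"cert-A"))   # True (pinned)
print(check_pin("example.com", b"cert-A"))   # True (matches)
print(check_pin("example.com", b"cert-B"))   # False (key changed!)
```

A real deployment would pin the public-key (SPKI) hash rather than the whole certificate, so routine renewals with the same key don't trip the check, but the trust model is the same.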

In other words, the whole industry is basically a useless tax on the web at this point, and should go away.

Comment Re:It's not the year of robotic AI. (Score 2) 71

No. The same myriad inputs needed for safety in navigating an auto for a passenger is quite similar to the variety of skills needed to be a good coder.

No, not really. Being a good programmer is a creative process involving design aspects, low-level coding aspects, naming things (one of the two^H^H^Hthree hard problems in computer science), and generally figuring out how to express a vague general concept as a series of strict rules.

Driving safely is just combining GPS routing (which is a long-solved problem) with obeying a bunch of fairly well-defined road rules and recognizing hazards, then stopping or steering when you see one: identification, path planning, etc.

There are several orders of magnitude difference in the difficulty of those two tasks. That's why we have generative AI art with six fingers and two thumbs, while self-driving cars are operating commercially in multiple cities.

AI isn't going to replace either, it's a labor-replacement wet dream.

AI is already replacing drivers. Right now, you can take a ride in a self-driving car from Waymo in Phoenix, much of the San Francisco Bay Area, Los Angeles, and Austin, with 10 more cities rolling out this year. Their at-fault accident rate blows human drivers out of the water. I mean sure, you still have the occasional bizarre story about someone circling a parking lot for twenty minutes or the cars honking at each other all night in a parking lot, but arguing that self-driving car tech isn't mostly a solved problem is disingenuous at best. Tesla mostly has it working on highways. Waymo mostly has it working in cities. Put the two together, and you could at least ostensibly have end-to-end autonomous driving from coast to coast.

Doing autonomous driving well enough and cheaply enough to have it in a vehicle that individuals own is, of course, still not fully solved, and doing it generally in every environment without having to block off certain areas because of construction, etc. is also still not entirely solved, but that's like arguing that robots can't replace vacuum cleaners because they can't get into corners. For the 95% case, it is solved, and that's actually pretty amazing, IMO.

Posting this as I eagerly await the HW4 upgrade for my Tesla and the arrival of my new robotic vacuum cleaner in May.

Comment Re:It's not the year of robotic AI. (Score 1) 71

It's the drooling wet dream of capitalists to cut out labor. That's you and I. First, AI will replace all coders. Yeah, sure, go ahead with that and reap the rewards. It'll take 10x the costs to unravel those bugs.

Put into self-driving vehicles? Wasn't that supposed to happen a few years ago? How many deaths will it take until the lessons are learned? How much money will get burned on the attempts? How many will die in bad crashes in the meantime, boiled in burning lithium battery fires?

I think there's a big difference between self-driving cars and using AI to replace programmers. There's no feasible way to have enough cab drivers and Uber drivers for everyone to stop driving themselves, nor will public transit ever get good enough to be a good alternative to a car outside of large cities. So self-driving car tech is doing way more than just replacing the small number of people who drive for a living. It is also giving mobility to the elderly, giving several hours per week of commute time back to the sorts of workers who can work remotely during the commute, driving down the cost of package delivery and reducing delivery times, and massively transforming probably a lot of other markets. And that is likely to result in more jobs, rather than fewer overall, albeit fewer in a few narrow areas.

The same can't be said for using AI to replace workers in most other areas.

Comment Re:Any who cares (Score 1) 81

Or find someone with the same model and borrow a laptop.

If it turns out to be a bug that's reproducible on the same model of laptop with that monitor, consider getting a free Apple developer account and filing a bug with Feedback Assistant, which will capture logs and submit them to Apple.

Comment Re:Long road (Score 1) 81

As a longtime Mac user, I've been feeling and talking about exactly this with colleagues for some time now. Since "El Capitan", the OS started to suffer greatly from visible lack of direction and feature creeping.

You misspelled Snow Leopard, or maybe Mountain Lion. :-D

But seriously, I mostly blame the decision to merge Mac and iOS development. It took three years for the iOS side of the house to drag the Mac side down enough for you to start noticing, but having them under the same team resulted in constant attempts to merge these very different platforms, always to disastrous effect. The downhill slide for macOS began very clearly when Forstall left Apple and iOS got merged under Craig. Nothing against Craig here — I think he does a great job, or at least he did when I worked under him — but the teams should never have been combined.

Mavericks had some nice changes at a low level; the UI changes were questionable, but not objectionable.

Yosemite went too far, IMO, though users seemed to like it, I guess?

By the time you got to El Capitan, we were calling it "El Crapitan".

I haven't really seen anything of value added since then other than Apple Silicon support. And that's okay. It's an OS. It doesn't have to constantly change and add new features. At some point, all you can really do is make it worse, and I think they passed that point a while back for the most part. :-)

Comment Re:Any who cares (Score 1) 81

My biggest gripe with Mac OS is that the desktop glitches (with garbage drawn over many of the "tiles" that I guess different GPU cores update), and fullscreen video playback will crash the whole OS within 5 to 15 minutes, on my main monitor.

This is not normal. You almost certainly have either defective VRAM or a defective GPU. For the M*-series, those are the same thing.

Chances are, an address line is marginal, so sometimes data is getting written to the wrong part of VRAM.

I mean, there's a very, very tiny chance that it could be a software bug where VRAM is getting simultaneously allocated to multiple processes at the same time, and given that you're right on the threshold of running out, that's slightly more likely than it otherwise would be, but my money is on hardware.

What I would do is bring your computer and monitor to an Apple store and show them what's happening and then try it with the same monitor on a different Mac and see if the problem reproduces. When it doesn't, tell them you need a new logic board.

Comment Re: We are running out of work (Score 1) 56

And AI is going to effectively destroy

Yeah, like everything else so far did.

If you're being sarcastic, know that this very much has happened.

  • It used to be the case that you could make high salaries doing manufacturing in the U.S.; now, most manufacturing is highly automated and has fewer workers (and is overseas).
  • Doctors have always had some of the best-paying jobs out there, but insurance companies (both health and malpractice) have been squeezing them from both sides, while technology has pushed up from the bottom, making it easier for nurse practitioners with less training to do more of what doctors do, under ever-lighter supervision. The result is that most new doctors are imported from other countries, partly because not enough people are going to medical school and partly because those who do increasingly choose research over actually practicing medicine. Being a doctor still pays well, but the relative pay has dropped, from 3.47x an average college professor's salary in 1980 to just 2.81x today.
  • Speaking of medicine, the entire field of medical transcription has basically been replaced by tech.
  • We're about to see truck driving go the same way as self-driving tech replaces that previously moderately high-paying career.
  • We're seeing hints that generative AI may start to impact tech as well.

What makes this different right now is that the rate of job destruction seems to be accelerating, probably at a faster rate than the market can adapt to.
