Comment Re:I know a persian (Score 1) 34

Iranians refer to themselves as Persians, and prefer to be called that. They are rightfully proud of their heritage, the great Persian empire.

The only Iranian I've ever met (that I'm aware of) said that when he met people, he usually called himself Persian to avoid the stigma that comes from saying you're from Iran, presumably out of fear that Americans would assume that he was a fundamentalist just waiting for a chance to shout "Death to America" and blow himself up or whatever. He didn't put it exactly that way, but that was the gist.

Comment Re:Spreading misinformation (Score 1) 114

The former seems way more acceptable to me

This is only because you haven't thought this through. "Detrimental to public health" is not nearly as objective as we need it to be. Instead, it is often a substitute for "advantageous to the financial interests of a pharmaceutical company". For example, the opioid epidemic and the false claims that oxy is not addictive.

Who made claims that oxycontin isn't addictive? The government? No. The manufacturer. The government merely allowed them to do it until their claims were shown to be false.

Spreading claims that would encourage a pandemic to get massively worse by discouraging vaccination falls squarely under "detrimental to public health". At no point were *legitimate* studies that showed safety concerns in any way squashed to favor any company's interest. That's why we know that vector-based vaccines were responsible for a statistically significant number of strokes and heart attacks in otherwise healthy people.

The studies that were squashed were a bunch of very weak, mathematically garbage studies that contained errors so obvious that even I, a non-medical person, could shoot dozens of holes in their methodology. A small number of individuals were behind publishing fraudulent study after fraudulent study, and they kept doing this despite broad consensus that their methodology and their conclusions were pure, unadulterated bulls**t. They did this by publishing in journals significantly outside the areas that were appropriate for the papers, relying on the journals' lack of people with adequate understanding of the subject to shoot it full of holes and recommend not publishing it.

And these folks had a tendency to go on YouTube and spread their bulls**t, using their publication in a "journal" (of physics, social sciences, psychiatry, chiropractic medicine, etc.) to support their absolutely fraudulent claims. YouTube quite literally became a dumping ground for trash science that made the National Enquirer look like respectable journalism by comparison.

It got to the point where my canned response was, "If you are showing me something in a YouTube video instead of a peer-reviewed journal, I automatically assume that what you are saying is pure, unadulterated bulls**t, because out of the roughly one hundred times I have not made that assumption, I have found it to be true every single time. If you want me to read it, write it down, so that at least I can skim it in three minutes and point out why you are wrong without wasting an hour of my time watching your stupid video."

IMO, YouTube was right to crack down on that. When people without medical degrees are basically giving medical advice that contradicts broad medical consensus, this is almost guaranteed to be harming society. And nothing good can come of that. Children dying of measles, smallpox, polio, and other vaccine-preventable conditions is not something we should aspire to. Regardless of whether they have freedom of speech, that doesn't mean companies should be required to be their megaphone. And regardless of whether the government was the group who pointed out how potentially harmful the things they were saying are, the stuff they were saying was still harmful.

Comment Re:Spreading misinformation (Score 1) 114

Removing misinformation is not illegal either. It's common sense.

Who decides it's misinformation?

Quite a few times, things which were deemed misinformation back during the COVID times turned out to be different from what official sources said (at first or later).

The closest thing I can think of would be the "There are no studies showing that masks are effective when worn by the general public" statements early on when they needed all the N95 masks for medical personnel. But even that wasn't really disinformation; it was just stating the absence of supporting evidence, and later, when supporting evidence appeared, there was no longer a lack of supporting evidence.

There's a difference between being wrong and spreading disinformation. The latter requires either knowing that you're wrong or having a mountain of evidence saying that you're wrong, but still saying it anyway. There are definitely some grey areas, particularly in areas related to myocarditis/pericarditis, but there were also a lot of folks spewing stuff way, way on the other side of that grey area. :-)

When such heavy hands occur, especially when the government is pushing it, it makes the act seem extra suspicious, or so I've heard for the last week alongside cries of fascism.

There's definitely a big difference in my mind between the government pushing industry to not spread claims that it considers to be detrimental to public health and the government pushing industry to not spread claims that it sees as being mean to our current leaders. The former seems way more acceptable to me, in much the same way that regulating commercial speech and licensing doctors are both way less objectionable than regulating political speech.

Comment Re:Maybe everyone under 35 (Score 1) 26

Should stop drinking the AI Kool-Aid. AI is not a complete solution for job replacement. Yes, there will be a lot of jobs replaced. If you are working at a call center or pushing paper, maybe even some aspects of accounting and coding, you can be replaced. But AI is not going to bake your cake and eat it too. It's going to get most of the ingredients together for you, and then you get to mix it.

Along with toothpaste and glue.

The biggest difference seems to be that the young folks are impressed with AI because it can do a lot of things some of the time, just like an inexperienced person. They put up with mistakes from AI because they're used to a certain level of errors in their work.

The older folks are unimpressed with AI because, unlike their juniors, whom they put up with because they know that they are teachable, AI isn't teachable, so they have no real use for it. And they aren't too thrilled about their juniors using AI, either, because that means the quality of their work probably won't improve over time, which means more work for them fixing the mess, without the promise that things will eventually get better.

Comment Re: Cry me a river. (Score 1) 100

Best guess is that in five years, self-driving hardware will add about $15k to the price of the vehicle if they use LiDAR, or $6k if they don't.

Best guess is that in five years we still won't have level 5 autonomy you can trust. I don't mind being wrong, but I don't think I will be. I certainly don't think it's viable for that kind of money while also achieving the kind of safety I think we should be demanding: not just "better than human" but essentially infallible. Since the car can have sensors we don't have, it should be able to be a lot better.

To be clear, I meant the sensor suite and steering rack and support parts, not necessarily that there would be a working brain available to the general public by then. Leaning towards yes, but no guarantees.

There's no good reason you'd replace a working tractor unit when you can just swap out the steering rack, bolt on cameras, and add some electronics

I think 20k is an optimistic price point, especially if you're hoping that it's going to deflect liability.

I'll grant you that the liability issue is a giant question mark.

Comment Re: Cry me a river. (Score 2) 100

They won't be able to afford to replace themselves and will be outcompeted by a company that can afford a fleet.

Why would you think that? Cameras are cheap, and LiDAR prices are coming down, too. As companies build them in larger and larger quantities, economies of scale and competition will drive the price down rather quickly. Best guess is that in five years, self-driving hardware will add about $15k to the price of the vehicle if they use LiDAR, or $6k if they don't. And that's including the cost of stuff that a lot of cars come with already, like the electric steering rack. I'd be shocked if it were significantly more than $20k.

So as drivers replace their cabs or semi tractors, they'll spend the extra $20k or whatever to buy versions that are self-driving. For that matter, once the tech is reliable enough, you'll likely see retrofit kits come on the market. There's no good reason you'd replace a working tractor unit when you can just swap out the steering rack, bolt on cameras, and add some electronics, and that's true whether you're an owner-operator or the manager of FedEx's fleet.

Comment Re: Cry me a river. (Score 2) 100

Long haul, local delivery, taxi, bus, you name a driving job and the ruling class will want to automate it.

Oh, absolutely. Most local delivery uses people who already work at the business, and delivery is just a small part of that person's job. So that impact is likely to be close to zero. But that still leaves probably around 5 to 10 million taxi drivers and probably three or four million truck drivers.

But taxi and truck drivers won't be replaced overnight. Most taxi drivers and many truck drivers own their own rigs, and although they may eventually replace themselves with robot rigs, they would continue to earn the revenue after doing so. They certainly have no incentive to fire themselves.

Ultimately, somebody has to own the rigs. There's nothing that necessarily requires that robotaxis be fleet vehicles owned by some big company like Uber, no matter how much companies like Uber might prefer it to be that way. Replacing all of those taxis with robot cars costs money, and Uber isn't capitalized that well. Uber's cash on hand wouldn't even be enough to replace all of the taxis in the United States. So while this may shift things around some, I wouldn't expect a taxipocalypse.

Comment Re:Cry me a river. (Score 1) 100

You are 100% wrong. The Uber business plan has always been to shift to self-driving vehicles ASAP, and to use humans only until that is feasible. He is planning to cause a problem, not to have a problem.

I'm not sure why he thinks it will be a problem for drivers. A study a few years ago showed that something like 96% of all Uber drivers quit within the first year. So worldwide, we're talking about only O(350,000) people who will have to find something else to do. The world economy can easily absorb such a tiny number.

Submission + - Tiny new lenses, smaller than a hair, could transform phone and drone cameras (sciencedaily.com) 1

alternative_right writes: Scientists have developed a new multi-layered metalens design that could revolutionize portable optics in devices like phones, drones, and satellites. By stacking metamaterial layers instead of relying on a single one, the team overcame fundamental limits in focusing multiple wavelengths of light. Their algorithm-driven approach produced intricate nanostructures shaped like clovers, propellers, and squares, enabling improved performance, scalability, and polarization independence.

Submission + - Rare-earth tritellurides reveal a hidden ferroaxial order of electronic origin (phys.org)

alternative_right writes: The discovery of "hidden orders," organization patterns in materials that cannot be detected using conventional measurement tools, can yield valuable insight, which can in turn support the design of new materials with advantageous properties and characteristics. The hidden orders that condensed matter physicists hope to uncover lie within so-called charge density waves (CDWs).

Comment Re:Overwrought (Score 2) 63

This does not appear to be holding up in practice, at least not reliably.

It holds up in some cases, not in others, and calculating an average muddles that.

Personally, I use AI coding assistants quite successfully for two purposes: a) more intelligent auto-complete, and b) writing a piece of code using a common, well-understood algorithm (i.e. lots of sources the AI could learn from) in the specific programming language or setup that I need.

It turns out that it is much faster, and almost as reliable, to have the AI do that than to find a few examples on GitHub and Stack Overflow, check which ones are actually decent, and translate them myself.

Anything more complex than that and it starts being a coin toss. Sometimes it works, sometimes it's a waste of time. So I've stopped doing that, because coding it myself is faster and the result is better than babysitting an AI.

And when you need to optimize for a specific parameter - speed, memory, etc. - you can just about forget AI.

Comment smoke and mirrors (Score 4, Interesting) 63

Hey, industry, I've got an idea: If you need specific, recent, skills (especially in the framework-of-the-month class), how about you train people in them?

That used to be the norm. Companies would hire apprentices, train them in the exact skills needed, then at the end hire them as proper employees. These days, though, the training part is outsourced to the education system. And that's just dumb in so many ways.

Universities should not train the flavour of the moment. Because by the time people graduate, that may have already shifted elsewhere. Universities train the basics and the thinking needed to grow into nearby fields. Yes, thinking is a skill that can be trained.

Case in point: When I was in university, there was one short course on cybersecurity. And yet that's been my profession for over two decades now. There were zero courses on AI. And yet there are whitepapers on AI with me as a co-author. And of the seven programming languages I learnt in university, I have never used a single one professionally and only one privately (C, of course. You can never go wrong learning C. If you have a university diploma in computer science and they didn't teach you C, demand your money back). OK, if you count SQL as a programming language, it's eight, and I did use that professionally a few times. But I consider none of them a waste of time. OK, Haskell maybe. The actual skill acquired was "programming", not any particular language.

Should universities teach about AI? Yes, I think so. Should they teach how to prompt engineer for ChatGPT 4? Totally not. That'll be obsolete before they even graduate.

So if your company needs people who have a specific AI-related skill (like prompt engineering) and know a specific AI tool or model - find them or train them. Don't demand that other people train them for you.

FFS, we complain about freeloaders everywhere, but the industry has become a cesspool of freeloaders these days.
