Comment Re:The Wrong Ban. (Score 1) 81

Honestly, the “illegal ignoring of criteria” probably went something like this:

1. Jim: Wow, these regulations are thousands of pages, this is too complicated. Hey Bob, what do I do for this case?
2. Bob: Oh, I think Janice had a case like this, ask her.

And thus Janice’s preferences were enshrined into the system. The obvious solution, of course, is more regulation!

Comment Re:That's NOT AI (Score 1) 81

Sure, the AI is pattern matching based on a million examples of people following the flowchart. But tell me, when you have an insurance claim denied, is there a person hopping up and down waiting to explain to you in great detail why they denied it and what you can do to fix it? No? Then why does the AI need to go above and beyond what a person would do?

Comment Re:The Wrong Ban. (Score 1) 81

Sure, but would you call that a fairer healthcare system, one where someone can occasionally get something approved they otherwise wouldn’t because an employee risks their job? Heck, we could get the same result by just adding an RNG to the AI that gives it some small probability of switching the decision it would otherwise make.
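That RNG version is trivially easy to sketch. A hypothetical Python toy, where `deterministic_decision` is a made-up stand-in for whatever flowchart logic an insurer actually uses:

```python
import random

def deterministic_decision(claim):
    # Stand-in for the real flowchart logic: approve only claims
    # under some threshold (a purely hypothetical rule).
    return claim.get("amount", 0) < 10_000

def decide_with_rng(claim, flip_probability=0.01, seed=None):
    """Make the deterministic flowchart decision, then flip it with a
    small probability -- mimicking the occasional rogue approval (or
    denial) a human reviewer might produce."""
    rng = random.Random(seed)
    decision = deterministic_decision(claim)
    if rng.random() < flip_probability:
        decision = not decision
    return decision
```

Nobody would call that extra coin flip "fairness," yet it reproduces exactly the variance the human system is being praised for.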

The AI regulators believe that the people sitting in insurance offices are twirling their mustaches, waiting for a minority of the Wrong Kind to ask for something so they can feel a rush of dopamine in rejecting the claim, and they also believe those same people will make an AI algorithm in their image. Believing this ridiculous notion, they decide to make the AI illegal, because against a person they can at least bring some legal claim. Of course, none of that is true in the first place; white-collar companies aggressively filter that cartoon racism out of their hiring decisions.

Comment Re:The Wrong Ban. (Score 2, Interesting) 81

This is exactly it. They can design an algorithm to consistently make the same decision a human would make, and they did, because hiring tons of people whose only job is to follow a flowchart is expensive, and nobody would enjoy that job anyway. Remember how miserable that job was portrayed as in The Incredibles? But this is what AI regulation does. It doesn’t actually mean a fairer healthcare system; it means that some techies found a way to automate the most miserable, time-consuming paperwork, and the bureaucrats said no!

Comment Engineering Academic Equity is Hard (Score 3, Insightful) 197

One of the best writers on this topic is Freddie de Boer, who has written incisive critiques of educational policy from the left. For the SAT, empirical results have repeatedly shown that it is a strong predictor of academic success: https://freddiedeboer.substack... . Colleges made the SAT optional in the hope of increasing diverse representation. But the unfortunate fact is that tinkering with educational policies has had basically no success in creating a diverse educated elite.

Basically, what do you want out of an education? If it’s *just* a matter of teaching skills, then most education anywhere has succeeded on that metric. But the other thing education happens to do is reward winners and punish losers, by offering prizes (selective schools, high-paying white-collar jobs) to people who can perform the best *relative* to their peers. Harvard is more prestigious than a state school because one has to outcompete more people to get into it, based on some idea of merit (grades, connections, wealth, some demonstrable proof of genius). If the goal is to produce a meritocratic ranking of people, then inventing new educational methods that make everyone smarter has no effect on this game.

Unfortunately, the only way found so far to change *relative* outcomes is basically to put one’s thumb on the scale. The field of education research is littered with failed attempts to improve equity; basically every easy policy lever has been tried and found wanting. Not only can one predict college success from SAT scores, one can predict it fairly well from kindergarten assessments: https://freddiedeboer.substack... . We can discuss ways for everyone to have a dignified life even if they do not win these educational contests, but we cannot engineer a particular demographic representation among the winners without basically handing out the prizes purely for demographic reasons.

Comment Re:Libertarian vs Not Libertarian: A False dichoto (Score 1) 267

I think the interesting (and possibly uncomfortable) truth is that modern AI would not exist without aggressive data gathering. What made Google/Facebook/Microsoft powerful in the first place was *not* that they had the best machine learning engineers, but that they offered a value proposition in their services that convinced many people to give away their data at scale. At that point the machine learning is easy, because no matter how clever you are, more high-quality data beats more clever brainpower. Indeed, there are still tech startups whose ML operation is built around a product that encourages users to input useful training data (like Glass Health, which sold a note-taking app to doctors and then used it to train an AI model for medical decision making, based on input from actual doctors! Note: I am not affiliated with Glass Health).

However, I would disagree that small businesses and hobbyists are negatively impacted by big tech data harvesting in this sense. Facebook, Microsoft, and Google have been very generous in open sourcing their high-quality pretrained models for free, which lets a hobbyist build a product on top of those models, or fine-tune them on smaller datasets, and get much of the advantage of "big tech." For example, https://dingboard.com/ (I am also not affiliated with this, except as a user) uses off-the-shelf open source models from big tech firms to offer fast, sophisticated image editing in a webapp using deep neural networks. In my own line of work, too, I use pretrained models fine-tuned on our proprietary data, which beats training from scratch.
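The cheapest version of that workflow is a "linear probe": freeze the pretrained backbone and train only a small head on your own data. Here is a minimal sketch, with the caveat that the "backbone" is just a fixed random projection (a stand-in, since a real open-source model won't fit in a forum comment) and the "proprietary" dataset is synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen pretrained backbone. In real life this would be
# an open-source model released by a big tech firm; here it is just a
# fixed random projection so the sketch stays self-contained.
W_backbone = rng.normal(size=(8, 4))

def features(x):
    # "Pretrained" feature extractor: frozen, never updated.
    return np.tanh(x @ W_backbone)

# Tiny synthetic stand-in for a company's proprietary dataset.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fine-tune only a small logistic-regression head on the frozen features.
F = features(X)
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(F @ w + b)))  # predicted probabilities
    grad = p - y                        # gradient of the logistic loss
    w -= 0.1 * F.T @ grad / len(X)
    b -= 0.1 * grad.mean()

accuracy = ((F @ w + b > 0) == (y == 1)).mean()
```

The backbone's weights never move; only the tiny head (`w`, `b`) is trained, which is why this works even on small proprietary datasets and modest hardware.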

I have benefited personally and professionally from open source models from big tech monopolies, models I did not train but use for free (or at most for the cost of cloud hosting). So I am mostly against AI regulation, because I see AI as cut from the same cloth as hobbyist computing hardware, amateur radio, and hobby application development, where hobbyists benefited from much larger businesses that had the foresight to allow a few positive externalities for small players in order to grow the industry (which benefited them as well).

Comment Libertarian vs Not Libertarian: A False dichotomy (Score 4, Interesting) 267

Whenever stories like this pop up, there are inevitably a bunch of comments along the lines of “Sure, regulation may be tough, but I’d rather have fair rules in place than a Mad Max style anarcho-capitalist hellscape!” But that is never the actual debate; nobody is advocating for a Mad Max style anarcho-capitalist hellscape. The argument is about whether *this* regulation helps more than it harms, or whether it accidentally quashes an entire industry out of an abundance of caution. Or, to add another dimension of nuance: is a regulation that is appropriate for a nation at one time still appropriate at a different time, when circumstances change?

For example, many regulations require more and more electronics and software in cars in order to achieve compliance (e.g. rear-view cameras, software control systems for increasing efficiency). Perhaps while we have easy access to electronics this is a reasonable cost to impose on auto manufacturers, because the safety gained is worth the tradeoff of increased costs and less competition. But suppose our supply chain weakened, and it was no longer reasonable to expect that we could outfit every car with these electronics. How do we walk that regulation back? Or do we say that we’re willing to dramatically reduce the number of cars on the road, even without concomitant improvements in urban planning and non-auto transportation, and forget everybody who can’t afford one?

Same with healthcare. We put so many documentation and IT requirements on doctors that we got a dramatic reduction in individual health providers and an increase in giant hospital systems, because who can juggle all those requirements with a staff of one doctor, one nurse, and one secretary? The giant hospital systems still suffer personnel shortages, because merely meeting these regulatory requirements causes burnout and high costs (see the explosive rise in hospital administrators): https://www.athenahealth.com/k... . We *could* walk back some of those requirements, because they cause explosive costs without concomitant improvement in healthcare outcomes, but that would mean recognizing that a regulation made with good intentions does not necessarily have good outcomes, even by its own standards.

With AI it’s even more frustrating, because many of the harms the regulations are intended to curtail are abuses by monopolies, yet in reality many of the survivors of the regulatory environment will be monopolies, and the people who lose out will be small businesses and hobbyists. It is as if, the moment the personal computer, amateur radio, and the independent video game were starting to be built up by hobbyists, regulators had come in so hard and fast, with restrictions so onerous that nothing bad could ever happen (e.g. a hobbyist shocking themselves while wiring up their radio), that IBM, Motorola, and EA would have locked up their respective industries before any competitor could even gestate. Would the Apple II or DOOM ever have been created if regulators had had the same attitude toward those industries that they have today toward AI?

The EU absolutely aborted its AI industry in utero. OpenAI/Google/Microsoft won’t notice, but every enterprising AI hobbyist knows to stay far, far away from the EU.

Comment Re:What does this have to do with Linux? (Score 1) 21

It’s not about Linux, it’s about the “Foundation.” They are a non-profit, and like any non-profit they have to find a way to keep the lights on, which mainly means chasing government grants. So they hunt for funding opportunities that tangentially relate to their area of expertise and try to write proposals that are attractive to sponsors. If there’s government money out there for solving energy problems, the Linux Foundation will find a way to tell a story in which investment in Linux and open source technology somehow helps with energy problems. It clearly works; they got the funding.

But what does that have to do with making *Linux* better? Nothing! Nonprofits in general will self-perpetuate however they need to, even if it means they stray from their intended mission.

Comment Re:Of course it's not "AI" (Score 5, Informative) 158

There is a lot of AI that is not LLMs. Before ChatGPT took the world by storm, a lot of AI hype was around image models that could comfortably run on a consumer gaming GPU, or with some compromises even on a mobile device.

I worked with an ex-Kodak engineer who said that even pre-“AI” as we know it, many consumer digital camera images were modified between sensor capture and the user seeing them, based on user expectations. He would say that when you think of “green grass,” “blue sky,” and “yellow sun,” you have a cartoonish, high-saturation image in your head of what that looks like. When the photo you took has comparatively dull grass and sky, you are disappointed. So consumer digital cameras would “fix” those colors with transforms.

These “non-AI” transforms (convolutional blur/sharpen filters, nonlinear lookup tables) often inspired and informed AI imaging models down the road, including ones that do sophisticated upscaling/inpainting/deblurring of consumer camera images. So I can see why someone with a lot of experience in the digital imaging industry would say there’s “no such thing as a real picture.” They’ve been modifying raw sensor measurements for as long as there have been digital cameras.
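For the flavor of it, here is a toy version of such a transform in Python: a gamma-lift lookup table followed by a crude saturation boost. This is purely illustrative, not any particular camera maker's actual pipeline:

```python
import numpy as np

# A 256-entry nonlinear lookup table (a simple gamma lift) -- the kind
# of per-pixel transform consumer cameras bake in before you ever see
# the image.
lut = (255 * (np.arange(256) / 255.0) ** 0.8).astype(np.uint8)

def punch_up(rgb):
    """'Fix' a dull photo: brighten via the LUT, then push each pixel
    away from its gray value to boost saturation."""
    bright = lut[rgb]                           # LUT applied per channel
    gray = bright.mean(axis=-1, keepdims=True)  # per-pixel luminance proxy
    boosted = gray + 1.4 * (bright - gray)      # scale chroma by 1.4
    return np.clip(boosted, 0, 255).astype(np.uint8)

# A dull, desaturated "grass" patch becomes brighter and greener.
dull_grass = np.full((2, 2, 3), [90, 110, 70], dtype=np.uint8)
vivid = punch_up(dull_grass)
```

The raw sensor values never reach the user; what you see is already the camera's opinion of what grass *should* look like.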

Comment Re:Typical Google (Score 1) 26

Ah, but unlike the case of paintbrushes and pencils and Photoshop and every other artistic tool, there are masses of regulators jumping up and down to impose crushing regulatory burdens on “AI” because “AI” is the hot topic that will make their (otherwise unremarkable) careers. What if you get the AI to draw a racism or sexism or a pornography or a misinformation? That’s just what they need to justify their careers!

Google, OpenAI, and the other major AI players are well aware that (1) this threat is very real, and (2) they have the size to weather it and their smaller competitors don’t, *if* they get out in front of it and put a lot of effort into keeping the AI from generating anything with even a whiff of offensiveness.

Comment Re:Amazing (Score 1) 38

I know the companies have the reports. I don’t think the government is entitled to them, especially absent any reasoning. It’s insane to me that “a new industry needs to be regulated” is such a null hypothesis that no one even needs to articulate a reason for the regulation, or even express fear, uncertainty, and doubt about the industry. Demanding these reports in order to *find* a reason to regulate these companies is really the icing on the cake. Every other recent industry that got regulators hot and bothered (AI, cryptocurrency, social media, self-driving cars) could at least point to risks, real or imagined. If regulators can’t even be bothered to do the rock-bottom effort of *imagining* a problem that needs to be solved, why should we pay them any mind?

Comment Amazing (Score 1) 38

“...through the new department, micromobility app companies may be required to share their GPS delivery data with the city. That data might reveal more about how long delivery riders are working, or how heavy cargo bikes' loads are, which could lead to new regulations.”

So regulators cannot justify regulations, and therefore want to demand that data, for free, from companies, so they can look through it and find a reason to regulate them further? They can’t even make an argument from data the government already has, like the number of traffic accidents or the amount of congestion? Sounds like a fishing expedition. “Hey, can I put microphones and cameras in your house? I just want to see if there are any crimes you’re committing. I don’t have any evidence or reason to believe you’re committing crimes; that’s why I need the microphones and cameras!”
