Comment Re:This is so not going to scale (Score 3, Informative) 23

Making some process using a polymer membrane work for industrial volumes currently handled by fractionating columns is almost certainly going to be impractical. The membrane won't hold up long enough, and to get the separation to happen fast enough you're going to have to heat it anyway.

Somewhat addressed in TFA:

Previous attempts to adapt this process for crude oil have had the hurdle of the membrane swelling and interfering with successful separation. By replacing the amide bond with an imide bond instead, the team made the film more hydrophobic, allowing hydrocarbons to pass through the membrane without this issue arising.

Comment Re:I Disagree (Score 2) 48

Well, yes -- the lies and the exaggerations are a problem. But even if you *discount* the lies and exaggerations, they're not *all of the problem*.

I have no reason to believe this particular individual is a liar, so I'm inclined to entertain his argument as being offered in good faith. That doesn't mean I necessarily have to buy into it. I'm also allowed to have *degrees* of belief; while the gentleman has *a* point, that doesn't mean there aren't other points to make.

That's where I am on his point. I think he's absolutely right, that LLMs don't have to be a stepping stone to AGI to be useful. Nor do I doubt they *are* useful. But I don't think we fully understand the consequences of embracing them and replacing so many people with them. The dangers of thoughtless AI adoption arise in that very gap between what LLMs do and what a sound step toward AGI ought to do.

LLMs, as I understand them, generate plausible-sounding responses to prompts; in fact, with the enormous datasets they have been trained on, they sound plausible to a *superhuman* degree. The gap between "accurately reasoned" and "looks really plausible" is a big, serious gap. To be fair, *humans* do this too -- satisfy their bosses with plausible-sounding but not reasoned responses -- but the fact that these systems are better at bullshitting than humans isn't a good thing.

On top of this, the organizations developing these things aren't in the business of making the world a better place -- or if they are in that business, they'd rather not be. They're making a product, and to make that product attractive their models *clearly* strive to give the user an answer that he will find acceptable, which is also dangerous in a system that generates plausible but not-properly-reasoned responses. Most of them rather transparently flatter their users, which sets my teeth on edge, precisely because it is designed to manipulate my faith in responses which aren't necessarily defensible.

In the hands of people increasingly working in isolation from other humans with differing points of view, systems which don't actually reason but are superhumanly believable are extremely dangerous, in my opinion. LLMs may be the most potent agent of confirmation bias ever devised. Now, I do think these dangers can be addressed and mitigated to some degree, but the question is: will they be, in a race to capture a new and incalculably valuable market where decision-makers, both vendors and consumers, aren't necessarily focused on the welfare of humanity?

Comment Re:No, that's what it is NOW. (Score 2) 44

Isn't that like saying that Apple stopped selling the "toaster" Macs so that Apple could sell both a computer and a display?

No.

Longer answer: you could buy lots of third-party displays that worked perfectly well with Macintosh computers (and you still can), so by that logic Apple can also fail to sell you a display by not integrating one.

The iPad is something like the "personal digital assistants" from ancient times.

The iPad is absolutely capable of running Mac OS, but it's artificially restricted from doing so, in an effort to make you buy Mac OS. And there are Macintoshes which could easily run iOS, but they don't let you do that.

This distinction was created artificially and intentionally, both to enforce a certain style of use and to sell more devices. The first is a marketing decision; that's understandable and even reasonable. The second is also a marketing decision, also understandable, but repugnant.

There's no reason why Apple could not have simply let you run in both modes on both kinds of hardware, allowing you to choose, and to provide user interface standards for both types of interface — and allow apps to implement one thing or both. And there's no reason why they can't switch to doing that.

The question of whether they should be forced to do so is a lot more complicated, and even I'm not sure they should. But it's telling that Android is embracing Linux as the devices continue to get closer together, while Apple is still trying to distance its platforms from one another; Apple is ultimately doing its customers a deliberate disservice. As Linux continues to improve, perhaps more slowly than it "should" but improving nonetheless, there is less and less reason to stick with Apple's artificially limited forced duality.

Submission + - MIT chemical engineers develop new way to separate crude oil (thecooldown.com)

fahrbot-bot writes: The Cool Down is reporting that a team of chemical engineers at the Massachusetts Institute of Technology has invented a new process to separate crude oil components, potentially offering a replacement that can cut the process's harmful carbon pollution by 90%.

The original technique, which uses heat to separate crude oil into gasoline, diesel, and heating oil, accounts for roughly 1% of all global energy consumption and 6% of dirty energy pollution from the carbon dioxide it releases.
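Putting the quoted figures together (my own back-of-envelope arithmetic, not a calculation from the article): if conventional heat-based distillation accounts for about 6% of dirty-energy CO2 pollution, and the membrane process cuts that pollution by 90%, the net reduction works out to roughly 5.4 percentage points of that total:

```python
# Back-of-envelope combination of the two figures quoted above
# (assumed interpretation; not the article's own math).
share_of_co2 = 0.06   # distillation's share of dirty-energy CO2 pollution
claimed_cut = 0.90    # fraction of that pollution the membrane process eliminates

net_reduction = share_of_co2 * claimed_cut
print(f"Net reduction: {net_reduction:.1%} of dirty-energy CO2 pollution")
# → Net reduction: 5.4% of dirty-energy CO2 pollution
```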

"Instead of boiling mixtures to purify them, why not separate components based on shape and size?" said Zachary P. Smith, associate professor of chemical engineering at MIT and senior author of the study, as previously reported in Interesting Engineering.

The team invented a polymer membrane that divides crude oil into its various components like a sieve. The new process follows a strategy similar to the one the water industry uses for desalination, which relies on reverse osmosis membranes and has been around since the 1970s.

The membrane excelled in lab tests. It increased the toluene concentration by 20 times in a mixture with triisopropylbenzene. It also effectively separated real industrial oil samples containing naphtha, kerosene, and diesel.

Comment Re:It almost writes itself. (Score 2) 49

I don't think there's anything wrong with those sorts of general observations (I mean, who remembers dozens of phone numbers anymore now that we all have smartphones?), but that said, this non-peer-reviewed study has an awful lot of problems.

We can focus on the silly, embarrassing mistakes, like how their methodology to suppress AI answers on Google was to append "-ai" to the search string, or how the author insisted to the press that AI summaries mentioning the model used were a hallucination, when the paper itself says what model was used. Or the style things: how deeply unprofessional the paper is (such as the "how to read this paper" section), how hyped up the language is, or the (nonfunctional) ploy to try to trick LLMs summarizing the paper.

Or we can focus on the more serious stuff: the sample size of the critical Section 4 was a mere 9 people, all self-selected, so basically zero statistical significance; there's so much EEG data that false positives are basically guaranteed, and they say almost nothing about the FDR correction used to control for them; essay writers were given far too little time for the task and put under time pressure, thus ensuring that LLM users would basically be doing copy-paste rather than engaging with the material; they misunderstand the implications of dDTF; and there was a significant blinding failure, with the teachers rating the essays able to tell which essays were AI-generated (combined with the known bias where content believed to be created by AI gets rated lower), with no normalization for what they believed to be AI. And so on.
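On the multiple-comparisons point: with hundreds of EEG channel/band comparisons, some uncorrected p-values below 0.05 are expected by chance alone, which is exactly why an FDR correction matters. As a generic illustration (this is the standard Benjamini-Hochberg step-up procedure, not code from the paper), a minimal sketch:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean list marking which p-values survive FDR control.

    Standard BH step-up: sort p-values, find the largest rank k with
    p_(k) <= (k/m) * alpha, and reject the k smallest hypotheses.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * alpha:
            cutoff = rank
    keep = [False] * m
    for rank, idx in enumerate(order, start=1):
        keep[idx] = rank <= cutoff
    return keep

# With many tests, uncorrected thresholding at 0.05 inflates false positives;
# BH only rejects where the sorted p-values dip under the rising threshold line.
print(benjamini_hochberg([0.01, 0.02, 0.03, 0.9]))
# → [True, True, True, False]
```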

But honestly, I'd say my biggest issue is with the general concept. They frame everything as "cognitive debt", that is, any decline in brain activity is treated as adverse. The alternative viewpoint - that this represents an increase in *cognitive efficiency* by removing extraneous load and allowing the brain to focus on core analysis - is not once considered.

To be fair, I've briefly talked with the lead author, and she took the critiques very well and was already familiar with some of them (for example, she knew her sample size was far too small), and was frustrated with some of the press coverage hyping it up like "LLMs cause brain damage!!!", which wasn't at all what she was trying to convey. Let's remember that preprints like this haven't yet gone through peer review, and - in this case - I'm sure she'll improve the work with time.

Comment Re: Gee, who didn't see this coming (Score 1) 140

Your argument doesn't represent "effort" anyway. We have protestors against Israel because we are funding Israel. We aren't funding Putin yet. When we are, we can have protests about that, too.

Your whataboutism is whataboutism because you're ignoring obvious facts in order to support your argument. What about this? The answer is obvious. But you're sure that there's some other answer.

Comment Re:Give me a break (Score 1) 53

Face it, we're well past the moment where we need to worry about whether or not government and military data is in the hands of big tech.

It's really whether or not government is in control of corporations, and of course the answer is yes. And that's where the "both sides" argument becomes non-fallacious: both Democrats and Republicans are united on giving them that control, and capital doesn't maintain the commons. It only wants to exploit it.

Granted, there were always ties between government and tech, we're just busting down the myth that those ties didn't exist, and giving much fuller integration rights to the tech elite.

Indeed: The military-industrial complex has been the home of technology since time was time. Many technological developments have come out of military research. The space program is also essentially military, so its developments can be counted here as well.

Comment Re:Valve needs to mandate Linux support next (Score 0) 30

You're asking Valve to cut itself off from sales, and make itself unfriendly. What makes Valve appealing to gamers and license holders alike is that games can be on it almost no matter what (they even allow adult games) and there are only some labeling requirements.

What would be more beneficial to me than banning games is to provide in-app compatibility information, so I don't have to go to protondb.

Comment Re:Vulkan windows, Linux, Macos, Android, iOS, swi (Score 3, Interesting) 30

why cant we have a consistent base API rather than compatibility layers...

https://xkcd.com/927/

We got here from somewhere else. But for the record, I blame Microsoft, and I blame 3dfx for enabling them. If 3dfx had done MiniGL from the start instead of GLIDE, then we would probably have never had Direct3D. Microsoft had a basic, software-only OpenGL renderer which was famously used for screensavers like "pipes" and would have likely gone with OpenGL if it was already dominant.

But in the early days of PC video accelerators, everyone had to have their own API, and there were a ton of competing GPUs. There were around half a dozen versions of MechWarrior 2 which supported different video cards — I had at least two of them, as I bought a whole bunch of those different cards to try them out. Besides the Voodoo 1 and 2 in their times, and then eventually a TNT, TNT2, and a GeForce2 MX, which are all kind of after the period in which this story occurs, I had a Mystique (ugly), a PowerVR (slow), and a Permedia 2, which was actually the best of all of them at the time but just a little slower than 3dfx. I know I'm forgetting another one that I had as well, and I didn't even have all of them! Now we have all of three GPU makers, and Intel is looking shaky again...

I'm super thankful that we have Vulkan now and didn't start going back to vendor-specific standards. I think you can chalk this up to complexity. Nobody wants to have to support such things when it takes so much work to switch APIs.
