
Comment Re:I'm not buying it (Score 1) 96

That argument isn't logic, though, is it?

You say that before AI, people still shot people. And after AI, people still shot people.

So it's not AI that's shooting people.

But then you jump to the McDonald's analogy, which implies that the guns (which were around before AI, and still are) aren't to blame either.

So there's no logic in lumping those two opposite arguments together.

Now... you can say that PEOPLE are to blame, and that's fine. And people existed before AI and after AI.

But if the person who does it is to blame, and LEGALLY a human advising how to do that would ALSO be to blame (e.g. someone goading a mentally-incapable person to commit an atrocity on their behalf, which happens more than you think! Think child-soldiers, suicide bombers, etc.)... then there are PEOPLE to blame, not just the person.

In this case, those people are doing so via the use of a tool, the same as the gunman. Whatsapp isn't to blame if you want to plan an atrocity via Whatsapp, so the AI isn't the problem there. It's the people BEHIND the AI services. Because, to my knowledge, the Whatsapp software has never SUGGESTED to people that they should commit atrocities.

Either AI is a tool - and the creators and users are responsible for that tool. Or it's not a tool but a "person", and that way madness lies.

But if I wrote a bit of software that, say, taught you how to commit atrocities... even if I wasn't there when you ran it and learned how to do so... I'm pretty sure that I'd be in BIG TROUBLE. Especially if, for example, I was charging money for that software.

Comment Re: Framework (Score 1) 36

"Lug around"?

It goes into a rucksack.

Like I say, I used to carry a 19". That also went into the same rucksack.

And I don't mean a huge "hiking" thing, I mean... literally just a small bag that every commuter is carrying, the kind of thing you have your kids put their books into to take to school.

I've used it on planes, I've taken it abroad, I've taken it to people's houses... it's not big at all. This is precisely my point. 13" is the kind of thing that, working IT in a school, I give to the kids to take home. They all carry them home in their little bags, as little kids (e.g. ages 10/11), back and forth every day. Hell, I rejected 10" / 11" Chromebooks/laptops for them BECAUSE of that... the difference in size/weight is minimal but it means they aren't all squinting at the screens all day, and they are also often elbow-to-elbow in the classrooms.

Hell, mine has the RTX 5070 module, so it's more "sticky-outy" than the normal 16, but only by an inch at the back. It just slips into a "school" laptop bag I got from a vendor... 15 years ago? Maybe even 20? I'd have to do the maths.

The Thinkpad T43, released in 2005, had a 14.1" screen, ffs.

Comment Framework (Score 1) 36

As an owner of a Framework 16, I honestly don't know how people are using a 13" screen in this day and age.

I'm the last to care about 4K resolutions, etc. and in fact am always moaning about such things because I'm an old fogey apparently, because I can't SEE that damn resolution.

But 13" is pathetic. I can cope with 16". My last laptop before this was 19".

I just hope this doesn't mean that we're going to end up with so many "variations" of the laptop that it becomes a pain to find replacement parts for.

Comment Re:Chatbot Lies (Score 1) 96

The engineer had agency. The AI (or a Google search, or a stack of textbooks) does not.

Of course, if the mad bomber instead posed as a student and found some non-evil reason for wanting the exits to collapse first (even a thin one like directing the dust upwards), the engineer is less culpable or not culpable at all.

But we need to be very careful about imagining an AI has agency. There are many legal and philosophical implications behind that.

Comment Re:But what do they do? (Score 1) 3

Ok, to clarify a few things:

Current designs I've put up:

1. A modernised version of the de Havilland DH98 and Merlin engine, where I basically fed ChatGPT and Claude with all of the known historic faults and some potential solutions to various problems, then let them run wild, feeding off each other to fix, refine, and clarify the design. The premise here is that we're using known designs with known properties, changing only materials, but doing so carefully so as to ensure that the balance is unchanged from the historic design. The aircraft is probably the least interesting part, as it would be very hard to make that safe, but a fully modernised Merlin that starts where Rolls-Royce left off is something that could be built with minimal risk and could be quite interesting in its own right.

2. A High Dynamic Range microphone. This basically riffs off assorted physics technologies for measurement and the basic idea in many HDR schemes that you can split an input into the fine detail (essentially an equivalent of a mantissa) and a magnitude (essentially an exponent), producing a design that ought to permit (if it works) the same microphone with no adjustments handling everything from a nearby whisper to the roar of a jet engine -- but with all of the fine detail still captured from that engine.

3. An electric guitar that operates not by magnetic pickups but by accurate mapping of string behaviour in two dimensions via lasers, where this is then turned into an accurate representation of the sound in an external device. So it's not a synth guitar in the classic sense, it's actually modelling the waveform for each string in two dimensions precisely. The reason for doing 2D modelling is that this has the potential for novel behaviours but without an obligation for it to do so.

4. A synthesiser/wave processor that looks at everything that they knew how to do, and allows you to link it together arbitrarily. It is designed in two forms. The first is engineered to match the components, materials, and knowledge available in 1964, so it is something they could have built if sufficiently insane. The second is a modernised extrapolation of that, using modern digital electronics, where I can show that the modern version is a strict superset of any existing DAW, simply because I started with none of the assumptions and metaphors around which DAWs were subsequently designed.

5. Multiband camera. An attempt to build a digital camera that is far smaller and more compact than a 3CCD camera, but (like the 3CCD design) produces a far better picture than a conventional digital camera, where I don't stop at three frequencies but support many, albeit with the limitation that the time required for a photograph is abysmal.
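The mantissa/exponent split in design 2 is essentially the same trick floating-point numbers use: one path captures coarse magnitude, the other captures normalised fine detail, and recombining the two is lossless. As a minimal illustrative sketch (my own hypothetical Python, not taken from the actual design documents), the standard library's `frexp`/`ldexp` pair shows the separation:

```python
import math

def encode_sample(x):
    """Split a sample into fine detail (mantissa) and magnitude (exponent).

    This mimics the HDR-microphone idea above: the mantissa carries the
    detail at a fixed precision regardless of how loud the signal is,
    while the exponent carries the overall level.
    """
    if x == 0.0:
        return 0.0, 0
    m, e = math.frexp(x)  # x == m * 2**e, with 0.5 <= |m| < 1
    return m, e

def decode_sample(m, e):
    """Recombine detail and magnitude back into the original sample."""
    return math.ldexp(m, e)
```

A whisper-level sample and a jet-engine-level sample then get mantissas of identical precision; only the exponent differs, which is why the scheme can cover a huge dynamic range without re-ranging the microphone.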

Each design I've put up has a detailed hardware specification (including wiring where appropriate), validation/verification documents, and testing procedures. Software is defined by means of formal software contracts and occasionally Z-like forms. The designs are extremely detailed, although not quite at the level you could build them right there and then. However, the synthesiser is described right down to the level of individual transistors, diodes, and connectors, and the Merlin engine specifies precise materials, expected temperature ranges, material interactions (and how they're mitigated), and other such information.

Again, it's precise but not quite at the point where an engineer would feel comfortable feeding the specifications into an AI, having it order the bits online, and being sure of building something that works, but it's intended to be close enough that (provided the AIs actually did what they were supposed to) an engineer would feel very comfortable taking the design and polishing it to working level.

If, however, an engineer looking at these designs comes to the conclusion that the AIs were utterly deluded, then obviously they can't handle something as simple as selecting candidate items from ranged data.

Submission + - Mozilla Firefox uses AI to hunt bugs and suddenly zero days do not feel so untou (nerds.xyz)

BrianFagioli writes: Mozilla says it used an AI model from Anthropic to comb through Firefox's code, and the results were hard to ignore. In Firefox 150, the team fixed 271 vulnerabilities identified during this effort, a number that would have been unthinkable not long ago. Instead of relying only on fuzzing or human review, the AI was able to reason through code and surface issues that typically require highly specialized expertise.

The bigger implication is less about one release and more about where this is heading. Security has long favored attackers, since they only need to find a single flaw while defenders have to protect everything. If AI can scale vulnerability discovery for defenders, that dynamic could start to shift. It does not mean zero days disappear overnight, but it suggests a future where bugs are found and fixed faster than attackers can weaponize them.

User Journal

Journal Journal: Inventions to stress-test AI 3

I have been using AI to see if I could invent non-trivial stuff through recycling existing ideas (because AI is bad at actually creating new things). I've been reluctant to post this in my journal, as I dislike self-promotion, but there's so much discussion on AI and whether it is useful that this isn't really a matter of self-promotion, but rather evidence in the debate over whether you can actually do anything useful with it.

https://gitlab.com/wanderingnerd50

Comment Re:So, that's a lie. (Score 1) 134

One that enslaves women and snips their clits is bad. If you can't say that, you need to do some soul searching.

I think maybe you need to address the plank in your eye, before you talk about the motes in others.

We don't treat women that much better in the US. Australia has nearly eliminated cervical cancer, because of the HPV vaccine (https://www.hpvworld.com/articles/australia-on-track-to-be-the-first-country-to-achieve-cervical-cancer-elimination/), but we can't require it here, because it might encourage women to have sex outside marriage.

Likewise, the US doesn't think cancers that affect women are as important as the ones that affect men; when my wife got ovarian cancer, the doctors basically threw everything at it, hoping something would work, because they knew almost nothing about it. Breast cancer, on the other hand, is pretty treatable, because it affects men.

If you can't say those things are bad, you're the one who needs to do some soul-searching.

Comment Re:A serious question (Score 1) 41

It's a good question, and one I'm working on getting an answer to: by giving AI hard, complex engineering problems and then getting engineers to look at the output to determine whether that output is meaningful or just expensive gibberish.

By doing this, I'm trying to feel around the edges of what AI could reasonably be used for. The trivial engineering problems usually given to it are problems that can usually be solved by people in a similar length of time. I believe the typical savings from AI use are on the order of 15% or less, which is great if you're a gecko involved in car insurance, but not so good if you're a business.

If the really hard problems aren't solvable by AI at all (it's all just gibberish) then you can never improve on that figure. It's as good as it is going to get.

I've open sourced what AIs have come up with so far, if you want to take a look. Because that is what is going to tell you if good can come out of AI or not.

Comment Re:Employee conversation in work environment (Score 1, Interesting) 41

The conversations are not private, but PII laws nonetheless still apply. Anything in the messages that violates PII privacy laws is forbidden regardless of company policy. Policy cannot overrule the law.

Now, in the US, where privacy is a fiction and where double-dealing is not only perfectly acceptable but a part of workplace culture, that isn't too much of an issue. The laws exist on paper but have no real existence in practice.

However, business these days is international and American corps tend to forget that. Any conversation involving European computers (even if all employers and employees are in the US) falls under the GDPR and is under the auspices of the European courts and the ECHR, not the US legal system. And cloud servers are often in Ireland. Guess what. That means any conversation that takes place physically on those computers in Ireland plays by European rules, even if the virtual conversation was in the US.

This was settled by the courts a LONG time ago. If you carry out unlawful activities on a computer in a foreign country, you are subject to the laws of that country.
