
Comment Re:Building blocks origins (Score 2) 18

Well, first of all, hydrogen is the most common element in the universe, and carbon makes up something like 0.5% of the total observed mass of the universe (it's the fourth most common element), so along with other trace elements like sodium, phosphorus and the like, we're simply looking for places with sufficient energy to drive the reactions that produce organic compounds. There is no shortage of energetic sources, stellar system formation in particular. Indeed, many comets and asteroids host a lot of precursors, indicating that some fairly sophisticated organic chemistry was going on early in the solar system's development.

Comment Re:life came from organic compounds (Score 3, Interesting) 18

Panspermia would require that life itself was raining down on the terrestrial planets. Precursors would simply indicate that a lot of strange and complex organic compounds were falling onto the surfaces of planets like Earth, Mars and Venus, and were also likely constituents of bodies like Europa and Titan (well, we know Titan is covered in a literal hydrocarbon stew). What this discovery indicates, at the very least, is that there were indeed a lot of organic compounds in the early solar system and that these compounds, at least on Earth, led to abiogenesis. Panspermia would instead hold that abiogenesis happened at some undetermined point further back.

If we find other life in the solar system, such as in Europa's or Ganymede's oceans, and it has DNA or some very close relative, with similar translation and transcription systems to those we find in archaea and bacteria on Earth, then that would be a very strong argument that life in the solar system had a common origin. If, however, there is no clear relationship between the two populations; say, they use something similar to DNA, but the genetic codes are different (all extant life on Earth uses the same canonical genetic code mapping codons to amino acids, strongly suggesting the canonical code evolved prior to the Last Universal Common Ancestor), then we're very likely looking at an example of convergent evolution, and not in fact at two related populations.

Comment Re:But what do they do? (Score 1) 3

Ok, to clarify a few things:

Current designs I've put up:

1. A modernised version of the de Havilland DH.98 Mosquito and Merlin engine, where I basically fed ChatGPT and Claude all of the known historic faults and some potential solutions to various problems, then let them run wild, feeding off each other to fix, refine, and clarify the designs. The premise here is that we're using known designs with known properties, changing only materials, but doing so carefully so as to ensure that the balance is unchanged from the historic design. The aircraft is probably the least interesting part, as it would be very hard to make safe, but a fully modernised Merlin that starts where Rolls-Royce left off is something that could be built with minimal risk and could be quite interesting in its own right.

2. A High Dynamic Range microphone. This basically riffs off assorted physics measurement technologies and the basic idea in many HDR schemes that you can split an input into fine detail (essentially the equivalent of a mantissa) and a magnitude (essentially an exponent), producing a design that ought to permit (if it works) the same microphone, with no adjustments, handling everything from a nearby whisper to the roar of a jet engine -- but with all of the fine detail still captured from that engine.

3. An electric guitar that operates not by magnetic pickups but by accurate mapping of string behaviour in two dimensions via lasers, where this is then turned into an accurate representation of the sound in an external device. So it's not a synth guitar in the classic sense, it's actually modelling the waveform for each string in two dimensions precisely. The reason for doing 2D modelling is that this has the potential for novel behaviours but without an obligation for it to do so.

4. A synthesiser/wave processor that takes everything synthesiser designers of the era knew how to do and allows you to link it together arbitrarily. It is designed in two forms. The first is engineered to match the components, materials, and knowledge available in 1964, so it is something they could have built if sufficiently insane. The second is a modernised extrapolation of that, using modern digital electronics, where I can show that the modern version is a strict superset of any existing DAW, simply because I started with none of the assumptions and metaphors around which DAWs were subsequently designed.

5. Multiband camera. An attempt to build a digital camera that is far smaller and more compact than a 3CCD camera, but (like the 3CCD design) produces a far better picture than a conventional digital camera, where I don't stop at three frequencies but support many, albeit with the limitation that the time required for a photograph is abysmal.
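To make design 2 a little more concrete: the HDR split it relies on is the same decomposition floating-point numbers use, a normalised mantissa carrying the fine detail and an exponent carrying the magnitude. Here's a toy numerical sketch of that idea (my own illustration using Python's `math.frexp`, not anything taken from the actual specification):

```python
import math

def split_sample(x):
    """Split a signal sample into fine detail (mantissa) and magnitude (exponent)."""
    if x == 0.0:
        return 0.0, 0
    # math.frexp gives x == m * 2**e with 0.5 <= |m| < 1, so m carries
    # the waveform detail and e carries the loudness scale.
    return math.frexp(x)

def recombine(m, e):
    """Reassemble the original sample: m * 2**e."""
    return math.ldexp(m, e)

# A whisper-level and a jet-engine-level sample both survive the round trip
# with fine detail intact, despite spanning many orders of magnitude.
for x in (0.00000012, 0.9, 31000.0):
    m, e = split_sample(x)
    assert math.isclose(recombine(m, e), x)
```

In the microphone itself the split would presumably happen in analogue hardware before digitisation; the point of the sketch is only that detail and magnitude can be captured on two separate, individually well-scaled channels.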

Each design I've put up has a detailed hardware specification (including wiring where appropriate), validation/verification documents, and testing procedures. Software is defined by means of formal software contracts and occasionally Z-like forms. The designs are extremely detailed, although not quite at the level you could build them right there and then. However, the synthesiser is described right down to the level of individual transistors, diodes, and connectors, and the Merlin engine specifies precise materials, expected temperature ranges, material interactions (and how they're mitigated), and other such information.

Again, it's precise but not quite at the point where an engineer would feel comfortable feeding the specifications into an AI, having it order the bits online, and being sure of building something that works, but it's intended to be close enough that (provided the AIs actually did what they were supposed to) an engineer would feel very comfortable taking the design and polishing it to working level.

If, however, an engineer looking at these designs comes to the conclusion that the AIs were utterly deluded, then obviously they can't handle something as simple as selecting candidate items from ranged data.

Submission + - Mozilla Firefox uses AI to hunt bugs and suddenly zero days do not feel so untou (nerds.xyz)

BrianFagioli writes: Mozilla says it used an AI model from Anthropic to comb through Firefox's code, and the results were hard to ignore. In Firefox 150, the team fixed 271 vulnerabilities identified during this effort, a number that would have been unthinkable not long ago. Instead of relying only on fuzzing or human review, the AI was able to reason through code and surface issues that typically require highly specialized expertise.

The bigger implication is less about one release and more about where this is heading. Security has long favored attackers, since they only need to find a single flaw while defenders have to protect everything. If AI can scale vulnerability discovery for defenders, that dynamic could start to shift. It does not mean zero days disappear overnight, but it suggests a future where bugs are found and fixed faster than attackers can weaponize them.

User Journal

Journal Journal: Inventions to stress-test AI 3

I have been using AI to see if I could invent non-trivial things by recycling existing ideas (because AI is bad at actually creating new things). I've been reluctant to post this in my journal, as I dislike self-promotion, but there is so much discussion of AI and whether it is useful that this isn't really self-promotion so much as evidence in the debate over whether you can actually do anything useful with it.

https://gitlab.com/wanderingnerd50

Comment Re:The purpose of art (Score 3, Interesting) 88

It makes more sense as a dialogue if we think of it not so much as a one-to-one conversation, but more like an ongoing, global discourse. After all, movies are not made in a vacuum, and they are--generally speaking--not made for a single specific individual to watch. The artist is informed and shaped by their experiences.

I frame it this way because I want to move away from the "maker"/"viewer" framework--this dichotomy of the creator of an experience versus those who experience the creation. There is a kind of feedback at play that is intrinsic to the ability to create art and to enjoy it. We even see this in cinema--the works of actors (which roles they choose, how they play those roles) are invariably influenced by the culture and sentiments that surround them.

In a strict sense, you are right--it's not as if the artist is directly engaging in a back-and-forth literal conversation. But I think that a more encompassing point of view is useful for contextualizing why generative AI being propped up as "art" is so offensive to some. It doesn't feel "real" to us, and it isn't because the tool is "artificial"--we have computer animated films, for instance. It's because it feels disengaged from that feeling of human connection.

Comment The purpose of art (Score 5, Insightful) 88

is not, as many would have you believe, to be found solely in its consumption or appreciation.

Art is a dialogue. It is a conversation between humans--those who feel joy and pain, sorrow and hope; and it is the embodiment of creative expression in which the artist, for all their imperfections and struggle, brings into being something that marks existence--as if to say, "I was once here, in this space that you now observe."

And that is not necessarily pretentiousness or egocentrism. Art is born from a desire to connect with others, across space and time.

The intrinsic problem with "generative AI" as it is presently utilized as a vehicle of artistic expression is that, overwhelmingly, it fails to create a true dialogue, in much the same way that using a chatbot amounts to speaking with nobody but yourself. There may be a director and other humans who are prompting the AI and exerting control over the output, but the lack of human actors and cinematographers means that the result can only ever be a simulation of art, not art itself. Not until we can create artificial consciousness--machines that experience human emotions and possess a concept of self--could their status transcend that of mere tools and their product become art. To be clear, I am not suggesting we should attempt to do so. But what we have today is very, very far away from this.

Maybe a simulation is enough for most people, who think of popular media as nothing more than transitory stories to consume, discard, and forget. That the audience may not have the capacity to respect art as a process, by failing to distinguish what it is and is not, does not invalidate the artist, no more than someone who doesn't understand mathematics or computer programming can decide that it is not worth learning or doing.

The reason why there is a lot of pushback against AI has to do with the preposterous notion that it can (and therefore, should) serve as a substitute for human creativity. Of all of the things that such sophisticated computational models could be used for, the last thing that I would want it to do for me is my thinking and feeling. We should be using technology to make our lives easier and give us more freedom to express ourselves creatively, not less. People who are using it to simulate art have entirely missed the point of why we make art in the first place. Creative expression is not a chore like washing my dishes and scrubbing my toilet bowl. Yes, making art is sometimes painful and difficult and challenging. But that struggle is not something to be eliminated. It is meant to be overcome.

AI apologists--at least, nearly all of those I have met--are, in my view, nearly entirely lacking in understanding of what makes living worthwhile; and those who do understand are intentionally and cynically promoting AI because they stand to gain financially from this position.

Comment Re:A serious question (Score 1) 41

It's a good question and one I'm working on trying to get an answer to. By giving AI hard, complex engineering problems, and then getting engineers to look at the output to determine if that output is meaningful or just expensive gibberish.

By doing this, I'm trying to feel out the edges of what AI could reasonably be used for. The trivial engineering problems usually given to it are problems that people can usually solve in a similar length of time. I believe the typical savings from AI use are on the order of 15% or less, which is great if you're a gecko involved in car insurance, but not so good if you're a business.

If the really hard problems aren't solvable by AI at all (it's all just gibberish) then you can never improve on that figure. It's as good as it is going to get.

I've open sourced what AIs have come up with so far, if you want to take a look. Because that is what is going to tell you if good can come out of AI or not.

Comment Re:Employee conversation in work environment (Score 1, Interesting) 41

The conversations are not private, but PII laws nonetheless still apply. Anything in the messages that violates PII privacy laws is forbidden regardless of company policy. Policy cannot overrule the law.

Now, in the US, where privacy is a fiction and where double-dealing is not only perfectly acceptable but a part of workplace culture, that isn't too much of an issue. The laws exist on paper but have no real existence in practice.

However, business these days is international, and American corps tend to forget that. Any conversation involving European computers (even if all employers and employees are in the US) falls under the GDPR and is under the auspices of the European courts and the ECHR, not the US legal system. And cloud servers are often in Ireland. Guess what. That means any conversation that takes place physically on those computers in Ireland plays by European rules, even if the virtual conversation was in the US.

This was settled by the courts a LONG time ago. If you carry out unlawful activities on a computer in a foreign country, you are subject to the laws of that country.

Comment Not interesting yet. (Score 4, Informative) 49

It's possible that cetaceans have a true language. They certainly have something that seems to function like a "hello, I am (name)", where the name part differs from individual to individual but the surrounding clicks are identical. The response clicks also include that same phrase, which researchers think serves the purpose of a name.

But we've done structural analysis to death and, yes, all the results are interesting (it seems to have high information content, in the Shannon sense, seems to have some sort of structure, and seems to have intriguing early-language features), but so does the Voynich Manuscript and there's a 99.9% chance that the Voynich Manuscript is a fraud with absolutely no meaning whatsoever. Structure only tells you if something is worth a closer look and we have known for a long time that cetacean clicks were worth a closer look. Further structural work won't tell us anything we don't already know.
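For reference, "information content in the Shannon sense" here just means entropy per symbol. A minimal sketch of that measure (a generic baseline of my own, not the method any particular cetacean study used):

```python
from collections import Counter
import math

def shannon_entropy(symbols):
    """Entropy in bits per symbol. High entropy means the sequence is
    information-rich, but says nothing about whether it is *meaningful* --
    which is exactly the Voynich problem: structure alone can't rule out
    an elaborate fake."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A repetitive sequence carries less information per symbol than a varied one.
assert shannon_entropy("aaaaaaaa") == 0.0   # one symbol: zero bits
assert shannon_entropy("abababab") == 1.0   # two equiprobable symbols: one bit
assert shannon_entropy("abcdabcd") == 2.0   # four equiprobable symbols: two bits
```

Real analyses of click sequences work on tokenised click/whistle units rather than characters, but the limitation is the same: entropy and structure flag "worth a closer look", nothing more.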

What we need is to have a long-term recording of activities and clicks/whistles, where the sounds are recorded from many different directions (because they can be highly directional) and where the recording positively identifies the source of each sound, what that source was doing at the time (plus what they'd been doing immediately prior and what they do next), along with what they're focused on and where the sounds were directed (if they were). This sort of analysis is where any new information can be found.

But we also need to look at the lessons learned in primate research, linguistics, sociology and anthropology, to understand what ISN'T going to work, in terms of approaches. In every one of those fields, we've learned that you learn best immersively, not from a distance. If an approach has failed in EVERY OTHER SOCIAL SCIENCE, then assuming it is going to work in cetacean research is stupid. It might be the correct way to go, but assuming so is the stupid part. If things fail repeatedly, regardless of where they are applied, then it's worth asking whether the approach that keeps failing is itself defective.

Comment Re:Disinfo (Score 1) 114

Only idiots spy in person. They either pay for an insider or do all monitoring remotely. When was the last time an actual foreign agent was caught in a base? Now look at the number of times they've used USB keys to import malware, used cash to pay off insiders, or used remote sensing technology like microphones capable of analysing vibrations in windows, or other tracking devices.

I'm looking at where spies are caught. And they are never caught trying to be janitors on bases. If they're caught at all, then it's because the people they bribed to do all the inside work were themselves caught.

You have to go by the evidence and the evidence doesn't suggest infiltration.
