Comment Re:Automated knowledge has never existed before now (Score 1) 113

But the part I just quoted reminds me a lot of Just In Time supply chain management, and the trouble that has caused us on numerous occasions. One of those implications you mention may be that we abuse tech's advantages until they become active disadvantages. Heck, that tendency of ours could well lead to the fall of civilization.

Surely we wouldn't be the first civilization to fall into oblivion after dominating the known world.

However, I'd say the cause of such downfalls doesn't seem to be over-reliance on any single technology, but rather the inability to adapt to new ones when the old ones no longer work. This happens when the system becomes so complex or fragile that the outlying regions and organisations no longer obey orders from the effective control center, and the center is no longer capable of forcing them into compliance. Society then decomposes into separate, independent parts, as happened to the Roman empire, for example.

I see two possible outcomes of computing with AI: either the increased processing power allows a leading center to hyper-control society and pull our civilization even more tightly together, or it allows opposing centers to rise, challenge the current leader, and fight for dominance. The recent geopolitical trend seems to be towards the latter, but the effects of recent developments in AI remain to be seen.

Comment Re:Unfortunately people are not fungible (Score 3, Interesting) 113

People with an affinity for STEM will be fine; even if AIs become very good at making new discoveries, the most important part of any new field of knowledge is realizing which areas are worth exploring, and AI will need to be led to those areas for quite a long time. Boffins will continue to thrive in such an environment.

What worries me is what will happen to people who are not able to do science. Hospitality and healthcare may indeed thrive, but not everybody is capable of those; and artistic, creative types will suffer when their work can be nearly equaled at 100x the rate of production. What will happen to people without any remarkable skill, when their work can be done cheaper and faster by a skilled worker controlling an AI?

Comment Automated knowledge has never existed before now (Score 1) 113

Economics in general has the problem that you can't store work, e.g. pre-work and keep it for future use, as everyone who has ever cleaned a room can tell you. Most work has to be done in the very moment the result is needed. You can't even refill your car before the gas tank is emptied enough to accept new gas.

But STEM jobs are a way to actually pre-work. Once you automate something, you will not have to do exactly that work again.

You hit the nail on the head. Throughout human history, every day we have needed to provide for food and shelter. Our brains evolved to make us better at anticipating where we would find them and at creating ways to secure those needs.

The thing is, we now have a new technology - computing - that allows us to delay all kinds of work, and we still don't know what all the implications for human society will be.

Before the printing press we already had ways to store language and ideas in physical form, but printing made the process efficient, accelerating our capacity to do science and improve our knowledge. That led to new discoveries in physics that brought us better ways to manage energy, which in turn created automated machines for physical work. Before the steam engine there were machines that used energy to automate work, but they were limited to specific locations and tasks, such as mills, ploughs and cranes.

Now we have a technology that automates the application of knowledge to virtually any kind of work, physical or mental. Whether AIs have thoughts of their own or not, the fact remains that software is capable of condensing large areas of human knowledge (including STEM knowledge) and applying it repeatedly without direct human intervention. And as you said, we will not have to redo the work of understanding those tasks in order to get their benefits. This will again accelerate how parts of our society function, and society will again adapt around the new processes that grow out of that increased efficiency.

Comment Re:Guest ssid (Score 1) 100

If you leave your car unlocked and with the key in the ignition, ready to be taken and used for nefarious means, yes, you will get into trouble. If you let your gun lay around and someone takes it to shoot someone, you will most certainly get into trouble.

You are required to keep your belongings safe and secure. Failure to do so will result in you getting into quite uncomfortably hot water.

Perhaps -- but you would not face charges for the specific crime of "aiding and abetting".

For the example of leaving your car unlocked, you'd only get into legal trouble if you proceeded to file a fraudulent insurance claim, and in many states you would not incur any tort liability either.

Comment Re:Why admit it? (Score 1) 29

I assume the intent is to have a dumber system simply enumerate all possible ideas en masse and have them patented automatically. That way, if someone proves one approach to be actually useful, you can point to your prior registration.

Thus explaining why "UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines."

Comment Re:Guest ssid (Score 1) 100

Second, hope that you're not up against a lawyer who equates your open access point to leaving a car unlocked and with the key in the ignition that is then used by a criminal for his crime, because that is aiding and abetting, which would carry a quite similar sentence as the original crime here.

[Citation Needed]

The crime of "aiding and abetting", in any sane nation (and in the USA), has a mandatory component of mens rea.

Comment Re: Guest ssid (Score 1) 100

If you left your front door open and someone wandered into your house and stole a gun or a knife and then committed a crime using said object, you would be liable if you didn't immediately report the theft though.

The mandatory reporting requirement in federal law applies only to FFLs (licensed commercial dealers).

Individuals are not held criminally liable for the actions of a thief, regardless of what was stolen -- a car, a knife, a gun or a porn stash. With firearms specifically, federal law protects owners in the event of a stolen weapon.

California and the handful of other states with a "gun theft reporting law" generally do not explicitly impose civil liability for the actions of the thief, though California is moving in that direction. There are proposals, like LA's SB216, to make owners liable even if they do report in a timely manner, but these are long shots at best and unlikely to survive a court challenge.

Comment I thought shame was dead? (Score 3) 100

I see in the article a mention of "sending a letter implicitly threatening the subscriber with public exposure as a pornography viewer", but unlike with Prenda, which made a point of going after consumers of the more outré genres, the article doesn't really say what kind of porn the BitTorrent users are being sued for seeding.

Since there's little shame in vanilla porn, my assumption is that if Strike 3 is in it for the settlements, they would be going after less mainstream content, and not bothering with people who download stuff akin to "Devil In Miss Jones" (the 2005 remake, not the public domain original).

Comment Re: shame (Score 1) 69

I mean, seriously, that data has proven crucial in countless federal, state, and local investigations, it would be a shame to lose it all because a private entity chooses not to save it, right?

Don't worry about them; the phone tower companies have it covered.

Clever criminals leave their phones at home anyway.

Comment Re:LLMs are nothing but a good search engine (Score 1) 46

I can ask any old search engine to give me a recipe for chocolate chip cookies. I can't ask a search engine to then double the ingredients after it provides me with the recipe. That's not merely "combine the multiple found snippets".

We have different definitions of learning from the training set. The LLM is able to double numbers because it was trained on examples of doubling. If you trained it only on those examples and it learned that x2 means doubling, it would not be able to infer that x3 means "add it three times" and x4 means "add it four times" unless you also included examples of tripling and quadrupling. The "learning" it can exhibit is limited to patterns found in the corpus of input data.

LLMs don't have the capability to deduce new facts from the facts they know, like a symbolic search engine would; they can only probabilistically generate strings or images that simulate such deduction. The process is completely different. And it's limited to probabilities over facts in the training data; the prompt is used only to compute posteriors on those probabilities, not to generate new ones.
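
To make that concrete, here's a toy sketch in plain Python (purely illustrative, nothing like a real transformer): the sampler can only ever emit word transitions it saw during training, and the prompt merely selects which of those learned transitions get activated.

```python
import random
from collections import defaultdict, Counter

# Toy "language model": bigram counts learned from a tiny training corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1          # P(next | prev) comes only from the data

def generate(prompt_word, length=6):
    """Sample a continuation; the prompt only picks the starting condition."""
    word, out = prompt_word, [prompt_word]
    for _ in range(length):
        counts = transitions.get(word)
        if not counts:                    # nothing was learned for this word
            break
        words, weights = zip(*counts.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))    # recombines seen patterns, e.g. "the cat sat on the rug ."
print(generate("bird"))   # "bird" never appeared in training, so nothing follows
```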

I've seen such a capability demonstrated with my own eyes. I pasted a portion of the manual for a proprietary DSL that isn't on the Internet into my context and asked deepseek-coder-33b-instruct to write a program in a language it knew nothing about.

However, as soon as you clear the context, the model will again return to knowing nothing about the language. That's an instance of generating the content most likely to match the multiple levels of activated patterns that I talked about. It can create results more elaborate than a search engine (which merely returns the original document unchanged), but it's based on the same principle. It's using the content you provided as input, but not learning it. It won't remember a thing about your DSL or the program it wrote, much less generalize from them for other new prompts.

It has not learned in any meaningful sense of the word, which necessarily implies the information becoming part of the permanent knowledge of the model. Even if you never cleared the context and kept the DSL spec and the conversation in it forever, that would be akin to memorising it rather than learning it, a weaker form. You need something like a LoRA for the model to acquire new knowledge, as only then does it become part of its training and can be used as new knowledge.
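
As a rough, hypothetical sketch of the difference, using Hugging Face transformers and peft (gpt2 and the hyperparameters are just placeholders, not a recommendation): the prompt path leaves the weights untouched, while the LoRA path is what actually writes the new knowledge into the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"   # placeholder; stand-in for whatever causal LM you actually run
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# In-context only: the DSL manual lives in the prompt. The weights never change,
# so nothing is retained once the context is cleared.
dsl_manual = "<paste the proprietary DSL manual here>"   # hypothetical content
prompt = dsl_manual + "\n\nWrite a program in this DSL that sums a list.\n"
inputs = tok(prompt, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=100)
print(tok.decode(generated[0]))

# LoRA: small adapter matrices are attached to the model and trained on the DSL
# corpus; only after that training step does the DSL become part of the model
# itself rather than part of a throwaway prompt.
lora_cfg = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
# ... run an ordinary fine-tuning loop over the DSL documents here ...
```

Clearing the context after the first call erases any trace of the manual; the LoRA training step is what makes it stick.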

Comment Re:LLMs are nothing but a good search engine (Score 1) 46

This perspective is a fundamental misunderstanding of what LLMs are.

On the contrary, it's the result of careful consideration of how LLMs operate and reflection on the observed results.

The point of the technology is generalization, the ability to apply learned concepts. It isn't about cutting and pasting snippets of text from a dataset.

I didn't say that it's merely cutting and pasting snippets. As I mentioned, the model has the capability to use learned language to combine the multiple found snippets into a single coherent discourse. But that discourse *is* essentially a regurgitation of the many items of content retrieved in response to the prompt; some as text snippets, others as more complex patterns learned directly from the training corpus.

But if you think that the model creates knowledge beyond what's provided in the training data, you're the one with a fundamental misunderstanding. What you call "generalization" is a codified compression of the training corpus; that compression happens to capture patterns in the input documents at multiple levels - some at the level of surface syntax, others connected to more abstract concepts that humans used to create and classify the content (such as style, emotion, and the meaning of the topics themselves).

When you apply those compressed patterns to new content, such as an input prompt, the prompt activates the most relevant of those patterns, and the model generates the content most likely to match the multiple levels of activated patterns at the current generation point. But the models in their current form have no capability at all to create new patterns at runtime based on how they are used, i.e. no memory and no method to reason about what they see.

So you'd be mistaken if you think they have any capability to learn from content that was not part of their training data; they need to be retrained with the new data in order to acquire that content. Maybe in the near future there'll be a way to build models with actual online learning that acquire new knowledge directly from their own interactions, like they do now offline with RLHF.
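
A minimal way to check this yourself (sketched with Hugging Face transformers, using gpt2 as a stand-in for any causal LM): generation reads the weights but never writes to them, so nothing from the prompt is retained.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # stand-in for any causal LM
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Snapshot every parameter before "using" the model.
before = {n: p.detach().clone() for n, p in model.named_parameters()}

# The prompt activates learned patterns and conditions the output distribution,
# but nothing is written back into the weights.
inputs = tok("This sentence describes patterns the model has never seen.", return_tensors="pt")
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=20)

unchanged = all(torch.equal(before[n], p) for n, p in model.named_parameters())
print(unchanged)   # True: no memory, no runtime learning without retraining
```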

Comment LLMs are nothing but a good search engine (Score 1) 46

The best way to understand what LLMs are doing is to treat them as a search engine that actually works as intended, retrieving multiple results from its corpus of training documents. Thanks to modern language processing techniques, they are capable of combining several results into a single, narratively coherent reply. Just like with a search engine, though, the quality of the results is limited by the quality of the documents provided.

LLMs have the advantage of not (yet) being tainted by SEO techniques and in-place advertising, so we get to experience something similar to how Google worked when it was first released: an unbiased index of all the knowledge humans have shared on digital media.

For the second time we'll have a small window of opportunity to see what it's like to have all that knowledge available, until it's again made unusable by the same market forces that poison the pool of content for small personal gains. No need to blame regulation for that; done well, regulation could actually reduce that degradation.

Comment Re:What was wrong with these? (Score 4, Informative) 46

What was wrong with these? https://en.wikipedia.org/wiki/...
The best known set of laws are Isaac Asimov's "Three Laws of Robotics".

Are you joking? The *whole point* of Asimov's Three Laws of Robotics was to demonstrate, through his robot stories, that a small set of simple rules could never work to control artificial intelligence in a complex world full of ambiguities.

Comment Three predictions (Score 2) 78

First, new AI-friendly programming languages will be created that are simpler for LLMs to learn. Once developers have a model's assistance to create and understand code, easy-to-read, concise languages won't be as essential, but unambiguous precision will be. Code snippets will become more write-only.

Second, business-level programming will become even more dependent on composing pre-built, well-tested library components. Programs will become less like a logically coherent cathedral with solid pillars that tries to solve all the needs of an organisation, and more like a camp of communicating, loosely connected tools, each serving a single concern and actually solving the needs of a worker or small team.

Third, thanks to the previous two, most programs won't be built and run through the current enterprise lifecycle of writing code to specs, debugging in a development environment, then releasing; development will instead be integrated with the AI platform, running in the background without the code ever leaving the environment where it was written. Multimodal AIs like Gemini are already capable of writing ad-hoc user interfaces, built on the spot to explore and consume data according to the user's current business needs. Many of these tools will be transient, single-use widgets that solve the simple task at hand following the specifications provided by the end user, just as information workers today create spreadsheets to define ad-hoc processes for keeping and modelling the data they need for their work. In this scenario, exposing the code won't usually be necessary, only to the extent needed to check that the operations on the data are using the right logic.

Will traditional programming disappear? Of course not; just as systems programming is still needed, someone will have to write the well-tested components that the AI combines, and someone will need to keep the architecture of the more complex, long-lived combinations of tools in check. But for most small tasks, users will finally be able to create their own tools for their simple and moderately complex information-processing needs.
