Comment Re:There's no AI "thinking" (Score 1) 110

All too often, humans fall into the "thinking without thinking" trap, regurgitating their inputs rather than actually understanding them. Understanding the failings of AI, and how it doesn't think but rather pattern-matches like a super search engine, will probably shed light on many problems relating to humans and genuine understanding versus regurgitated input.

Comment Re:Nonsense (Score 0) 110

While delusional thinking is common to religions and especially cults, provided you do the thing religious conservatives hate and actually think through your religious faith, those problems can be avoided. But crafting a sensible, rational, and informed religious faith is a much harder task than a mindlessly conservative one, so ease, convenience, and human laziness lead to the latter propagating. Those dynamics are a consequence of human nature: the problems of religions happen because religions are made out of people. The problems are as inevitable as weeds growing in a garden, and the lazy gardener will simply declare the weeds God's chosen flowers that need not be weeded out.

And not having a religious faith is not necessarily a good solution for those whose faith provides central structural components of their mental makeup. It is easy to be simplistic about religion and faith: some declare it absolutely true beyond question (especially religious conservatives), while others declare it unquestioningly delusional. Both sides associate mainly with those who agree with them, and so anti-religion quasi-cults form, built on the same human dynamics that power religious cults and echo chambers. Humans are complex, complexity is a b****, and we are easily seduced by things purported to take that complexity away.

Comment Like sycophants (Score 3, Insightful) 110

It's like the way being surrounded by sycophants fuels a dictator's delusions. The first golden rule of using AI is that you must, must, must verify what it says, and you must therefore have a means to verify what it says. If not, the unit comprising you and the AI turns into a system feeding on its own output, and model collapse occurs (or at least something like model collapse). On the human side this manifests as delusional thinking, as the garbage output of the collapsed loop gets burned into your brain.

Comment It is tempting. (Score 1) 226

There's a lot I do that doesn't require much CPU power, but for which battery life is important. Stuff that runs happily on a Lenovo T420 under Linux (Chrome with unbloated web pages, terminal, vim). The battery life and power consumption would be insanely good compared to a PC laptop, which can be massive for work on the go. (My current solution is a T450 with a pile of spare batteries.) I would love one of these running Debian Linux with a KDE frontend, but I could cope with macOS for the sake of reliability and battery life.

Comment Safely Using AI (Score 1) 131

AI isn't going away anytime soon; we have to deal with the fact that it is here. Cars cause accidents and deaths every day, but cars aren't going away anytime soon either. In the case of cars, we teach people how to drive them safely. We could do a lot better, but at least we teach and test people before letting them roam free in their cars. We need something similar with chatbots: people need to be taught how to use them safely, what problems can arise, how to recognise and avoid those problems, and to have some idea of what an LLM actually is, how it works, what it isn't, and what it can and cannot do. Without this, many people will fall into the trap of seeing chatbots as 'magic faeries' or such. We need to drive AI safely the way we drive our cars safely.

Comment The Problem With Inherent Human Laziness (Score 2) 109

Training one's mind and brain is hard, just as exercise and physical labour are hard.
If we have the option of some kind of labour-saving device, our instinct is to take it.
It takes an uncommon degree of discipline, especially amongst children, not to yield to this desire for ease.

Comment Re:Lost Media (Score 1) 75

I'm glad I bought it back in 2009 when I did. The DVD I have is the 4:3 version. I'm not sure I'll find the time to binge-watch it all again, but it does make a great binge watch with its massive multi-series arc. Then you start to notice details in earlier episodes, innocuous at the time, which take on pivotal significance later.

Comment Assumptions of uniformity etc. (Score 2) 48

I fear with mental health we are falling into the trap of assuming people are more uniform than they are. For any aspect of health other than the brain, we can for the most part assume that two different people's biceps work in roughly the same way, two people's kidneys work in roughly the same way, and two people's blood likewise (though one must take care with e.g. blood type). This holds even up to the level of individual neurons, which basically sum their inputs and fire if a threshold is exceeded. But when things depend upon how those 80bn neurons are configured, this changes. We hit a combinatorial explosion of possibilities, far too numerous to contemplate having any kind of representative sample of them. So scientific approaches that assume a representative sample will fail when one cannot ignore the complexity of the brain and the degree to which two different brains, even if trained on identical training sets (and with each person's life taking a different course, we don't even have that), can be wired very differently.

Just my thought, at least. So when it comes to the efficacy of microdosing, it's a case of either doing the experiment or not. It may not be possible to predict the effect of Alice microdosing given only observations of what happened when Bob and Charlie did. It may be, but one should be careful not to assume a uniformity that isn't there.

Comment AI Prompts are kind of another source code (Score 1) 54

I only use free AI like Gemini and ChatGPT, and only for small programs (generally Python or bash, at most a few hundred lines long). It's great as a timesaver, and as a search and learning tool.

The thing is, you have to learn how to tell the AI precisely what you want if you want to get what you actually want. Then, if you want to modify the result, you basically have to prompt the AI to make the changes. With e.g. Gemini, once your session is done, you more or less have to start from scratch with the prompts. Hence I record the prompts I used at the start of every script.
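As a sketch of that record-the-prompts habit: keep the generating prompts in header comments, so a fresh AI session can be re-seeded from them. The `# PROMPT:` convention and the helper below are hypothetical illustrations, not from any real tool:

```python
import re

def extract_prompts(script_text):
    """Return prompts recorded in a script's header comments.

    Assumes the (hypothetical) convention that each prompt is stored
    on a comment line beginning with "# PROMPT:".
    """
    prompts = []
    for line in script_text.splitlines():
        m = re.match(r'#\s*PROMPT:\s*(.*)', line)
        if m:
            prompts.append(m.group(1))
        elif line and not line.startswith('#'):
            break  # first non-comment line: the header is over
    return prompts

script = '''#!/usr/bin/env python3
# PROMPT: Write a script that counts words in a file.
# PROMPT: Add a flag to ignore case.
import sys
'''
print(extract_prompts(script))
# -> ['Write a script that counts words in a file.', 'Add a flag to ignore case.']
```

The point of the convention is that the prompts, like source code, travel with the script rather than dying with the chat session.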

Modifying AI output is akin to modifying the output of a transpiler. Say your transpiler takes some high-level language and outputs C, and you modify the C by hand. If you later want to go back to the high-level language, you have to make your C tweaks all over again, which is time-consuming.

I guess Linus is using this project as a way to play with vibe coding on a project where it doesn't matter in a critical way whether it works or not. But I think it's worth looking at AI prompting as a bunch of new programming languages. You still have to think carefully about what you want the computer to do; it's just that the AI saves you a lot of time googling around for packages and looking things up in the docs. Provided you can read and check, or somehow test, what an AI gives you, there's not much of a problem. It's when you are really vibe coding, and can neither read the AI's output nor test it for correctness, that the problems seep in.

Comment Re:Markdown vs HTML (Score 1) 60

Standard Markdown, I'm not sure. But many implementations have extra features hacked in. My wiki uses Parsedown as the renderer, which passes raw HTML through. That's wrapped in a preprocessor for things like turning WikiWords into links in the old-school wiki way, plus a growing list of other tasks.
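A minimal sketch of that kind of preprocessing pass, assuming the classic CamelCase rule for what counts as a WikiWord (this is illustrative Python, not Parsedown or my wiki's actual code; the `/wiki/` URL scheme is also just an assumption):

```python
import re

# A WikiWord: two or more capitalised runs glued together, e.g. FrontPage.
WIKIWORD = re.compile(r'\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b')

def link_wikiwords(text):
    """Replace each WikiWord with an HTML link to a page of the same name."""
    return WIKIWORD.sub(r'<a href="/wiki/\1">\1</a>', text)

print(link_wikiwords("See FrontPage for details."))
# -> See <a href="/wiki/FrontPage">FrontPage</a> for details.
```

A real preprocessor would run passes like this before handing the text to the Markdown renderer, taking care not to rewrite WikiWords inside code spans or existing links.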
