
Comment Calibration matters? (Score 1) 425

Well, let's say there's some degree of error between the calorie in general and the calorie for you. I would think it's some multiplier, and that you should be able to adjust for it by monitoring your diet and eating consistently. If you gain 1 lb a week while eating 10,000 calories over that week, then regardless of how the measure is calibrated, you need to adjust your intake down, increase your burn rate, or both. I hate to be barbaric about it, but you never see fat people in gulags and concentration camps. Sooner or later, calories DO matter.
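The adjustment loop described above can be sketched numerically (a toy model of my own; the common ~3,500 kcal-per-pound rule of thumb and the function names are illustrative assumptions, not anything from the post):

```python
# Toy feedback loop: infer your real maintenance rate, in "label calories",
# from observed weight change - any systematic labelling error cancels out.
# Assumes the rough 3,500 kcal per pound of body fat rule of thumb.
KCAL_PER_LB = 3500

def corrected_burn_rate(weekly_intake_kcal, weekly_weight_change_lb):
    """Weekly maintenance calories implied by intake and weight change."""
    return weekly_intake_kcal - weekly_weight_change_lb * KCAL_PER_LB

# Ate 10,000 labelled kcal in a week and gained 1 lb: whatever the labels
# really measure, maintenance is 6,500 label-kcal/week, so eat below that
# (or burn more) to lose weight.
print(corrected_burn_rate(10000, 1))  # -> 6500
```

The point is that the multiplier between "label calories" and "your calories" never needs to be known: the weight trend is the only calibration signal required.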

Comment Re:Good luck ... (Score 1) 256

I'll mostly leave my wifi off

Good practice, since (for example) a given grocery store can start correlating your MAC (media access control) address with your presence, even if they don't (initially) know your identity. Ditto anyone scanning for wifi probe requests on the highway.

So here's an elaboration on keeping wifi mostly off: I have an event managing app (in my case, Llama, there are others) that I've configured to shut off wifi every time I disconnect from any network. I manually re-enable whenever I get to my destination (e.g. home); for whatever reasons, it's easier for me to remember to re-enable as I start using some service at home than it is for me to remember to disable as I leave a given location.

I could tell my automator (Llama) to re-enable wifi when my location gets close to work or home, but locating precisely enough to not turn on wifi at the supermarket near my house requires more battery than I like.

Comment Seems obvious to me. (Score 1) 311

Simply make any use of the well-known logical fallacy types a crime against sanity, then see which methods are used in court. I think exile to Mexico would be a good punishment for those found guilty. Appeal to CowboyNeal would be categorized as "heresy and witchcraft".

Comment Re:A difficult subject (Score 1) 308

Absolutely, on all points. I'd perhaps add one other - we've got potentially good diagnostics, but they're not used for this, they're rare and they're horribly expensive. (Problem is, it's longer and less clear than yours.)

An example: hospital MRI scanners run at around 1.5 to 3 T, which gives sufficient resolution to see severe injuries and malformations but not much more. Clinical scanners can go up to about 7 T, and research scanners in active use reach roughly 9.4 T. At this upper end, blurred sections of the brain become almost crystal clear. You can see not quite to the neuron level, but fairly close, and subtle issues can be detected. It's more than good enough to find out if there's a problem with mirror neurons, bandwidth issues (too much or too little), and similar fine-scale deformities. The best scanner that could be built to take a human head is around 13 T; it's unclear what that would show, as I've not been able to find any information on it.

I wouldn't ask psychiatrists and neurosurgeons to keep an underground bunker with dozens of such devices staffed by top technicians at the ready, although if one of them happens to be the sole winner of the US Powerball at its current 1.2 billion dollar level, it would be nice if some of it were spent on such things. However, MRI as a diagnostic tool is apparently strongly discouraged, which seems to defeat its value as a means of rapidly identifying and classifying evidence you can't otherwise get at.

I counted the total number of scanning technologies (excluding minor variants) and came up with 33 different diagnostic tools that could be used at the level of the brain. Of those, I have only known two to be used in practice (EEG and MRI), and never even remotely close to the levels of sensitivity needed to analyze the problem unless, as I said, there's a problem at the grossest of levels. EEG, for example, is performed with as few leads as possible, and the digital outputs I've seen look like the ADC is cheap and low resolution. Nor have I ever been impressed by the shielding used in the rooms (the brain is not a strong source, so external signals matter a lot). I've read papers where MEG is used, but it seems to be almost exclusively a research tool, with very, very few hospitals actually using it.

This doesn't contradict your statement that there are no good diagnostic tools, partly because nobody has the faintest idea whether these tools would be any good in diagnostics (since using them that way is forbidden by the great overlords), and partly because nobody knows how you'd read the data: if a tool isn't actually used, nobody can understand its output at all, and if it's used but never for diagnosing mental illness, there's no way of understanding what the output means in this context.

That's just the bog-standard medical gear, though. Whilst it should be useful (your experience shapes your brain, your brain shapes your experience, and this recursion should mean you can identify traits of one from the other), there will be other tests. In fact, there are. There are hundreds of questions in the official test for autism spectrum disorders, but I've only heard of (and then second-hand) one doctor actually running through them. Most glance at the DSM (which is worse than useless; its criteria are largely rejected both by the full checklist and by those definitely in this category) and that's it. The checklist itself is probably not optimal and is probably wrong much of the time: autism has a very wide range of causes (both known and suspected), and congealed categories of unrelated conditions won't work with any single checklist. Researchers hotly dispute even when it can be diagnosed and at what age it first appears. That's clearly not very helpful.

But that's positively enlightened compared to something like "Borderline Personality Disorder" (a label given to anyone who doesn't fit any billable category, and generally considered not worth wasting time on by the medical and psychiatric professions). Here there really isn't a diagnosis as such, just an identification that there's a problem and that it's not something insurance will pay for.

We need a good, solid ontology of the mechanisms of mental illness, one that forgets tradition and costing and ignores whether there's any lawful or technological way to detect them, because mechanisms can be measured (even if only in principle) consistently and reliably. It's not enough, by a long way, but at least there would then be a clear indication of what the gaps are: precisely what we lack diagnostic tools (and perhaps even theory) for.

This would also be the starting point from which medicines or therapies can be developed. You're right about side-effects. About half my current medications are there simply to counteract the side-effects of other medications. One I was put on, temporarily, shut down my colour vision. The problem? Doctors had to experiment on me. They had no idea what would happen until they tried me on something. I really do not like being used as a lab rat when the long-term effects of even short-term exposure are unknown, but where it's known that the short-term effects include death even at the lowest end of the therapeutic range, with no understanding of why or to whom this will happen. It strikes me as... all a bit vague.

Comment Re:If humans have free will (Score 1) 207

It can't be philosophy, as it is currently being experimentally tested. And, apparently, it has been tested in the past.

Also, there are two branches of philosophy. The only branch of any consequence gave rise to formal logic, the systematic proof of a good chunk of mathematics, constructivism, Bayesian statistics, and so on. In other words, it's more rigorous than hard science, not less. In this branch, any statement determined to be true must be true under any circumstances: even if fundamental constants turn out to be neither fundamental nor constant, even if other universes exist with other physics within them. Doesn't matter. Science can discover what it likes; the statements must still hold, not differing by one iota.

The other branch is never used, even by philosophers. They'll publish stuff under it from time to time, but that's about it.

If you can't tell 1 from 0, then it's no wonder you have trouble with this stuff.

Comment A difficult subject (Score 2) 308

Partly because so little is known about the brain/mind. With something like a heart attack or a murder, there's a fairly clear sequence of cause-effect relationships that starts with a known and ends with a known. With mental illness, the genetics are obscure and too complex to fathom by any conventional method. Genetics aren't, however, the only contributing factor. Epigenetics, chemical signals, environment (including stimuli) right the way through life - it's a nightmare.

There are already 1,100 genes - not SNPs, genes - linked to the brain, and 23andMe typically links about 50 SNPs of interest to each gene. That's 55,000 possible variant sites, which gives you 2^55000 (about 10^16557) different combinations. In comparison, there are only 7x10^9 people alive on the planet (which means you can't get good resolution on how the variables interact, even if you studied everyone alive today) and about 10^80 atoms in the observable universe (which means you'd have nowhere to store sufficient data even if you could obtain it). That's just the genetic contribution, nothing else. What everything else is, and how it relates, is known only in vague detail. That's why news stories on yet another breakthrough are commonplace.
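The arithmetic above is easy to check (my own back-of-envelope sketch, taking the figures in this post at face value and assuming each site is an independent binary choice):

```python
import math

genes = 1100          # genes linked to the brain (figure from this post)
snps_per_gene = 50    # SNPs of interest per gene, 23andMe-style

sites = genes * snps_per_gene     # 55,000 binary variant sites
digits = sites * math.log10(2)    # decimal digits in 2^55000

print(sites)           # -> 55000
print(round(digits))   # -> 16557, i.e. about 10^16557 combinations
```

Even if every atom in the observable universe stored one combination, you'd cover a vanishing fraction of the space - which is the whole point.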

To make things worse, culture hasn't yet caught up to the idea that there even is a theory of mind. It's still in some sort of die-hard Neolithic stage. Medicine isn't much better: the DSM has absolutely bugger all to do with what conditions and illnesses exist; it's about what tag the insurance should be billed under. The American Psychiatric Association is too busy digging its way out of threats of criminal charges over direct assistance and fraudulent financial dealings to worry about anyone who is actually sick. The NHS can't afford anything more complex than a door-stop right now, so don't expect Britain to haul anyone out of this mess. (Britain actually has a fairly good reputation on theoretical and practical psychiatric and neurological treatment, or at least it used to. Now it's on about equal footing with Zimbabwe.) Australia has a Centre for the Mind, but it looks like it's a long way from getting anywhere - if it gets anywhere at all. Some of its research seems iffy.

So there's no useful categorization, no meaningful theory, no known mechanics, superficial treatments for only certain diagnoses (with rather suspect evidence to back them), and no systematic approach to system analysis, triage, or debugging. Not even a definition of what a bug is.

The information in this post, plus the fact that I've been here a long time, ought to allow anyone here to identify (in very superficial terms) one of the eight diagnoses I endure. Won't help you, won't help me. Those diagnoses aren't useful if you do want to help anyone, because each is subject to an overlapping combinatorial explosion. No, if you want to be helpful, there are citizen-science projects for exploring the brain that will benefit the experts, and there are probably insights deep enthusiasts can contribute by exploring databases and literature from perspectives that aren't obvious to researchers.

When it comes to interacting: understand, respect, and listen. Oh, and don't fetishize any principle other than first doing no harm. Every other ethic, philosophy, or cultural belief should be expendable if it contradicts that. Consider it a mandatory access control.

Comment Re: If humans have free will (Score 1) 207

See the Free Will Theorem and its proof, then find the error in that proof. Talk won't cut it: either your claim is correct and the proof is flawed, or the proof is correct and your argument is flawed.

I am a mathematical realist, not a physicalist, but I accept that physical reality is all that exists at the classical and quantum levels. There isn't any need for anything else; there is nothing else that needs to be described. But let's say you reject that line. Makes no difference: the brain is Turing complete, and there is nothing in consciousness that cannot be explained within Turing logic.

You might not accept that either. Again, it makes no odds. Any change to the brain changes the personality; any change to personality changes the brain. They are tightly interdependent. The only external inputs are hormones and control signals sent by the gut microflora. The brain itself is governed by two sets of genes, one containing about a thousand genes, the other about a hundred. Genes are moderated by epigenetic proteins that provide control signals and interpretation. This provides something on the order of 2^11000 different neurological setups (genes have many nucleotides), and there are likely unknown genes that push the number much higher.

I see no cause for this idea of external stuff. Until you can show a convincing reason to require it, it is not religion but a refusal to multiply entities unnecessarily that makes me say that if it's not needed, it's because it's not there.

Comment If humans have free will (Score 3, Interesting) 207

Then so do subatomic particles. You don't need AI if that's all you want. If subatomic particles do not have free will, then neither do humans. This second option allows physics to be Turing Complete and is much more agreeable.

If computers develop sufficient power for intelligence to be an emergent phenomenon, they are sufficiently powerful to be linked by brain interface for the combination to also have intelligence as an emergent phenomenon. The old you would cease to exist, but that's just as true every time a neuron is generated or dies. "You" are a highly transient virtual phenomenon. A sense of continuity exists only because you have memories and yet no frame of reference outside your current self.

(It's why countries with inadequate mental health care have suspiciously low rates of diagnosis. Self-assessment is impossible as you, relative to you, will always fit your concept of normal.)

I'm much less concerned by strong AI than by weak AI. This is the sort used to gamble on the stock markets, analyse signals intelligence, etc. In other words, this is the sort that frequently gets things wrong and adjusts itself to make things worse. Weak AI is cheap, easy, incapable of sanity checking, incapable of detecting fallacies, and incapable of distinguishing correlation from causation.

Weather forecasts are not particularly precise or accurate, but their success rate far outstrips that of weak AI. This is because forecasting involves running hundreds of millions of scenarios that fit the known data, across vast numbers of differing models, then looking for outcomes that are highly resistant to change - things that will probably happen no matter what - and what on average happens alongside them. These are then filtered further by human meteorologists (some solutions just aren't going to happen). It's an intensely processed, analytical approach. The correctness is adequate, but nobody would bet the bank on high precision.
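That consensus-across-models idea can be sketched in a few lines (a toy illustration with made-up models and outcomes; real ensemble forecasting is vastly more involved):

```python
def ensemble_consensus(models, scenarios, threshold=0.8):
    """Run every model on every scenario and keep only the predictions
    that appear in at least `threshold` of all runs - i.e. the outcomes
    highly resistant to changes in model and initial conditions."""
    votes, runs = {}, 0
    for model in models:
        for scenario in scenarios:
            runs += 1
            for prediction in model(scenario):
                votes[prediction] = votes.get(prediction, 0) + 1
    return {p for p, v in votes.items() if v / runs >= threshold}

# Toy "models": each maps an initial condition to a set of outcomes.
m1 = lambda s: {"rain"} if s > 0.3 else {"rain", "fog"}
m2 = lambda s: {"rain", "wind"} if s > 0.6 else {"rain"}
scenarios = [0.1, 0.4, 0.7, 0.9]

print(ensemble_consensus([m1, m2], scenarios))  # -> {'rain'}
```

A single-model trader is the degenerate case: one model, one scenario, every prediction "robust" by construction.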

The automated trading computers have a single model, a single set of data, no human filtering, and no scrutiny. Because of the way derivatives trading works, they can gamble far more money than they actually have. In 2007, such computers were gambling an estimated ten times the net worth of the planet, borrowing against the predicted future earnings of other bets, many of which were themselves paid for by borrowing against still other predicted future earnings.

These are the machines that effectively run the globe and their typical accuracy level is around 30%. Better than many politicians, agreed, but not really adequate if you want a robust, fault-tolerant society. These machines have nearly obliterated global society on at least two occasions and, if given enough attempts, will eventually succeed.

These you should worry about.

The whole-brain simulator? Not so much. Humans have advantages over computers, just as computers have advantages over humans. You'll see hybridization and/or format conversion, but you won't see the sci-fi horror of computers seeing people as pets (I think that was an Asimov short story), as threats counter to their programming (Colossus, 2010's interpretation of 2001, or similar), or as vermin to be exterminated (The Matrix's Agent Smith).

The modern human brain has less capacity than the Neanderthal brain, overall and in many of the senses in particular. You can physically enlarge parts of your brain by up to about 20% through highly intensive learning, but there's only so much space and only so much inter-regional bandwidth. This means that no human can ever achieve their full potential, only a small portion of it - even with smart drugs. There are senses that have atrophied to the point that they can never be trained or developed beyond an incredibly primitive level, and even if that could be fixed with genetic engineering, there's still neither the space nor the bandwidth to support it.

Comment Wrong approach (Score 1) 115

You always start with the end you want to achieve. You can't get somewhere without knowing where it is, and you can't even heuristically approach a goal without some measure of deviation from it.

The FAA is notoriously bad at this, and always has been. The NTSB has lambasted it multiple times for failures in devising and enforcing regulations. The FAA was also solely responsible for air traffic controllers having no choice but to sleep on duty (I'm not sure that issue was ever fixed).

I'm not impressed with the NTSB either, but at least they make some sort of effort.

The whole aviation safety and regulatory system needs to be replaced - not just to get drone regulations up to speed, but to eliminate corruption and replace it with sound judgement.

Comment Zork (Score 1) 60

It resulted in lawsuits, such as the DR-DOS case, being dragged out over decades, and many potentially exciting businesses being driven into bankruptcy.

To this day, it results in WINE incompatibilities where none should exist. This is a genuine problem.

As far as Windows 3.11 is concerned, lots of systems you really don't want failing (such as control systems for hydroelectric dams and nuclear reactors) use ancient operating systems (NT 3.x, for example) because it's too dangerous to reimplement the control software. The consequences of an error are too great, and modern operating systems are too complex to be made reliable enough.

These systems rely on legacy hardware, much of which is no longer made. They rely on no novel fault conditions arising. Because they're increasingly on the public internet, this cannot possibly be guaranteed. Without maintenance, without the prospect of anyone even knowing how to handle error conditions, these are ticking time bombs.

So, yes, the world is less safe and less satisfactory because of abandoned lines for which no source exists and for which workarounds are more dangerous than just allowing a catastrophic failure to arise.
