Comment Seems obvious to me. (Score 1) 311

Simply make any use of the well-known logical fallacy types a crime against sanity, then see which methods are used in court. I think exile to Mexico would be a good punishment for those found guilty. Appeal to CowboyNeal would be categorized as "heresy and witchcraft".

Comment Re:A difficult subject (Score 1) 308

Absolutely, on all points. I'd perhaps add one more: we've got potentially good diagnostics, but they're not used for this, they're rare, and they're horribly expensive. (The problem is that this point is longer and less clear-cut than yours.)

An example. Hospital MRI scanners are around 1.5 to 3 T, which gives sufficient resolution to see severe injuries and malformations but not much more. Medical scanners can go up to around 7 T, and research scanners in active use go up to 9.4 T. At this upper end, regions of the brain that are blurred at clinical field strengths become almost crystal clear. You can see not quite to the neuron level, but fairly close, and subtle issues can be detected. It's more than good enough to find out if there's a problem with mirror neurons, bandwidth issues (too much or too little) and similar fine-scale deformities. The best scanner that can be built that can still take a human head is around 13 T. It's unclear what that would show; I've not been able to find any information on it.

I wouldn't ask psychiatrists and neurosurgeons to keep an underground bunker with dozens of such devices and top technicians at the ready, although if one of them were the sole winner of the US Powerball at its current 1.2-billion-dollar level, it would be nice if some of it were spent on such things. However, MRI as a diagnostic tool is apparently strongly discouraged, which seems to defeat its value as a means of rapidly identifying and classifying evidence you can't otherwise get at.

I counted the total number of scanning technologies (excluding minor variants) and came up with 33 different diagnostic tools that could be used at the level of the brain. Of those, I have only ever known two to be used in practice (EEG and MRI), and never even remotely close to the levels of sensitivity needed to analyze the problem unless, as I said, there's a problem at the grossest of levels. EEG, for example, is performed with as few leads as possible, and the digital outputs I've seen look like the ADC is cheap and low-resolution. Nor have I ever been impressed by the shielding used in the rooms (the brain is not a strong source, so external signals matter a lot). I've read papers where MEG is used, but it seems to be almost exclusively a research tool, with very, very few hospitals actually using it.

This doesn't contradict your statement that there are no good diagnostic tools. Nobody has the faintest idea whether these tools would be any good for diagnosis (since using them that way is effectively forbidden by the great overlords), nor how you'd read the data: if a tool isn't actually used, nobody can understand its output at all, and if it's used but never for diagnosing mental illness, there's no means of understanding what its output means in this context.

That's just the bog-standard medical gear, though. Whilst it should be useful (your experience shapes your brain, your brain shapes your experience, and this recursion should mean you can identify traits of one from the other), there should be other tests too. In fact, there are. Hundreds of questions make up the official screening checklist for autism spectrum disorders, but I've only heard of one doctor (and then second-hand) actually running through them. Most glance at the DSM (which is worse than useless; its criteria are largely contradicted by the checklist and rejected by those who are definitely in this category) and that's it. The checklist itself is probably not optimal and probably incorrect much of the time: autism has a very wide range of causes (both known and suspected), and a congealed category of unrelated conditions won't fit any single checklist. Researchers hotly dispute even when it can be diagnosed and at what age it first appears. That's clearly not very helpful.

But that's positively enlightened compared to something like "Borderline Personality Disorder" (a label given to anyone who doesn't fit any billable category, and generally considered not worth wasting time on by the medical and psychiatric professions). Here there really isn't an actual diagnosis as such, just an identification that there's a problem and that it's not something insurance will pay for.

We need a good, solid ontology of the mechanisms of mental illness, one that forgets tradition and billing and doesn't care whether there's currently any lawful or technological way to detect them, because mechanisms can be measured (even if only in principle) consistently and reliably. It's not enough, by a long way, but at least there would then be a clear indication of what the gaps are and precisely what we lack diagnostic tools (and perhaps even theory) for.

This would also be the starting point from which medicines or therapies could be developed. You're right about side-effects. About half my current medications are there simply to counteract the side-effects of other medications. One I was put on, temporarily, shut down my colour vision. The problem? Doctors had to experiment on me. They had no idea what would happen until they tried me on something. I really do not like being used as a lab rat when the long-term effects of even short-term exposure are unknown, but where it's known that the short-term effects include death even at the lowest end of the therapeutic range, with no understanding of why or to whom this will happen. It strikes me as... all a bit vague.

Comment Re:If humans have free will (Score 1) 207

It can't be philosophy, as it is currently being experimentally tested. And, apparently, has been tested in the past.

Also, there are two branches of philosophy. The only branch of any consequence gave rise to formal logic, the systematic proof of a good chunk of mathematics, constructivism, Bayesian statistics, and so on. In other words, it's more rigorous than hard science, not less. In this branch, any statement determined to be true must be true under any circumstances, even if fundamental constants turn out to be neither fundamental nor constant, even if other universes exist with other physics within them. Doesn't matter. Science can discover what it likes; the statements must still hold, without differing by one iota.

The other branch is never used, even by philosophers. They'll publish stuff under it from time to time, but that's about it.

If you can't tell 1 from 0, then it's no wonder you have trouble with this stuff.

Comment A difficult subject (Score 2) 308

Partly because so little is known about the brain/mind. With something like a heart attack or a murder, there's a fairly clear sequence of cause-effect relationships that starts with a known and ends with a known. With mental illness, the genetics are obscure and too complex to fathom out by any conventional methods. Genetics aren't, however, the only contributing factor. Epigenetics, chemical signals, environment (including stimuli) right the way through life: it's a nightmare.

There are already 1,100 genes (not SNPs, genes) linked to the brain, and 23andMe typically links about 50 SNPs of interest to each gene. That's 55,000 possible mutations, which gives you 2^55,000 (roughly 10^16,556) different combinations. In comparison, there are only 7x10^9 people alive on the planet (which means you can't get good resolution on how the variables interact, even if you studied everyone alive today) and about 10^80 atoms in the observable universe (which means you'd have nowhere to store sufficient data even if you could obtain it). That's just the genetic contribution, nothing else. What everything else is, and how it relates, is known only in vague detail. That's why news stories on yet another breakthrough are commonplace.
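For scale, a quick back-of-the-envelope check (the gene and SNP counts are just the rough figures above):

    # Rough scale check for the combinatorics above.
    from math import log10

    genes = 1100                     # genes linked to the brain
    snps_per_gene = 50               # rough 23andMe figure
    sites = genes * snps_per_gene    # 55,000 variant sites
    print(sites)                     # 55000
    print(round(sites * log10(2)))   # 16556 -> 2^55000 ~ 10^16556
    # Compare: ~7e9 people alive, ~1e80 atoms in the observable
    # universe. The combination space dwarfs both by thousands of
    # orders of magnitude.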

To make things worse, culture hasn't yet caught up to the idea that there even is a theory of mind. It's still in some sort of Die Hard-meets-Neolithic stage. Medicine isn't much better: the DSM has absolutely bugger all to do with what conditions and illnesses exist; it's about what tag the insurance should be billed under. The American Psychiatric Association is too busy digging its way out of the threat of criminal charges over direct assistance and fraudulent financial dealings to worry about anyone who is actually sick. The NHS can't afford anything more complex than a door-stop right now, so don't expect Britain to haul anyone out of this mess. (Britain actually has a fairly good reputation on theoretical and practical psychiatric and neurological treatment, or at least it used to. Now it's on roughly equal footing with Zimbabwe.) Australia has a Centre for the Mind, but it looks like it's a long way from getting anywhere, if it ever does. Some of its research seems iffy.

So there's no useful categorization, no meaningful theory, no known mechanics, superficial treatments for only certain diagnoses (with rather suspect evidence to back them), and no systematic approach to system analysis, triage or debugging. Not even a definition of what a bug is.

The information in this post, plus the fact that I've been here a long time, ought to allow anyone here to identify (in very superficial terms) one of the eight diagnoses I endure. It won't help you, and it won't help me. Those diagnoses aren't useful if you do want to help anyone, because each is subject to an overlapping combinatorial explosion. No, if you want to be helpful, there are citizen-science projects for exploring the brain that will benefit the experts, and there are probably insights the deep enthusiasts can contribute by exploring databases and the literature from perspectives that aren't obvious to researchers.

When it comes to interacting - understand, respect and listen. Oh, and don't fetishize any principle other than first doing no harm. Every other ethic, philosophy or cultural belief should be expendable if it contradicts that. Consider it a mandatory access control.

Comment Re: If humans have free will (Score 1) 207

See the Free Will Theorem and its proof, then find the error in that proof. Talk won't cut it: either your claim is correct and the proof is flawed, or the proof is correct and your argument is flawed.

I am a mathematical realist, not a physicalist, but I accept that physical reality is all that exists at the classical and quantum levels. There isn't any need for anything else; there is nothing else that needs to be described. But let's say you reject that line. It makes no difference: the brain is Turing-complete, and there is nothing in consciousness that cannot be explained within Turing logic.

You might not accept that either. Again, it makes no odds. Any change to the brain changes the personality; any change to personality changes the brain. They are tightly interdependent. The only externals are hormones and the control signals sent by the gut microflora. The brain itself is governed by two sets of genes, one set containing about a thousand genes, the other about a hundred. Genes are moderated by epigenetic proteins that provide control signals and interpretation. Since each gene has many variable nucleotides (call it ten per gene), that provides something on the order of 2^11,000 different neurological setups, although there are likely unknown genes that push the number much higher.

I see no cause for this idea of external stuff. Until you can show a convincing reason to require it, it is not religion but a refusal to multiply entities unnecessarily that makes me say: if it's not needed, it's not there.

Comment If humans have free will (Score 3, Interesting) 207

Then so do subatomic particles. You don't need AI if that's all you want. If subatomic particles do not have free will, then neither do humans. This second option allows physics to be Turing Complete and is much more agreeable.

If computers develop sufficient power for intelligence to be an emergent phenomenon, they are sufficiently powerful to be linked by brain interface for the combination to also have intelligence as an emergent phenomenon. The old you would cease to exist, but that's just as true every time a neuron is generated or dies. "You" are a highly transient virtual phenomenon. A sense of continuity exists only because you have memories and yet no frame of reference outside your current self.

(It's why countries with inadequate mental health care have suspiciously low rates of diagnosis. Self-assessment is impossible as you, relative to you, will always fit your concept of normal.)

I'm much less concerned by strong AI than by weak AI. Weak AI is the sort used to gamble on the stock markets, analyse signals intelligence, etc. In other words, it's the sort that frequently gets things wrong and adjusts itself to make things worse. Weak AI is cheap, easy, incapable of sanity checking, incapable of detecting fallacies and incapable of distinguishing correlation from causation.

Weather forecasts are not particularly precise or accurate, but their success rate far outstrips that of weak AI. This is because forecasting involves running hundreds of millions of scenarios that fit the known data across vast numbers of differing models, then looking for outcomes that are highly resistant to change (things that will probably happen no matter what) and for what, on average, happens alongside them. These are then filtered further by human meteorologists, since some solutions just aren't going to happen. It's an incredibly processed, analytical approach. The correctness is adequate, but nobody would bet the bank on high precision.
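As a toy sketch of that ensemble approach (the "models" and all the numbers here are invented purely for illustration):

    # Toy ensemble forecast: run many perturbed scenarios through
    # several crude models, then keep only the outcomes that are
    # robust across runs -- the filtering step described above.
    import random

    def model_a(t): return t + random.gauss(0.0, 1.0)
    def model_b(t): return 0.9 * t + random.gauss(0.0, 1.5)
    def model_c(t): return t + random.gauss(0.2, 0.8)

    runs = [m(20.0 + random.gauss(0.0, 0.5))      # perturbed start
            for m in (model_a, model_b, model_c)
            for _ in range(10000)]

    # "Highly resistant to change": how many runs agree on a
    # coarse outcome, e.g. tomorrow being warmer than 18 degrees.
    agree = sum(t > 18.0 for t in runs) / len(runs)
    print(f"chance of >18C: {agree:.0%}")  # only trust robust outcomes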

The automated trading computers have a single model, a single set of data, no human filtering and no scrutiny. Because of the way derivatives trading works, they can gamble far more money than they actually have. In 2007, such computers were gambling an estimated ten times the net worth of the planet by borrowing against predicted future earnings of other bets, many of which themselves were paid for by borrowing against other predicted future earnings.

These are the machines that effectively run the globe and their typical accuracy level is around 30%. Better than many politicians, agreed, but not really adequate if you want a robust, fault-tolerant society. These machines have nearly obliterated global society on at least two occasions and, if given enough attempts, will eventually succeed.

These you should worry about.

The whole-brain simulator? Not so much. Humans have advantages over computers, just as computers have advantages over humans. You'll see hybridization and/or format conversion, but you won't see the sci-fi horror of computers keeping people as pets (I think that was an Asimov short story), treating them as threats counter to programming (Colossus, 2010's interpretation of 2001, or similar) or as vermin to be exterminated (The Matrix's Agent Smith).

The modern human brain has less capacity than the Neanderthal brain had, overall and in many of the senses in particular. You can physically enlarge parts of your brain by up to about 20% through highly intensive learning, but there's only so much space and only so much inter-regional bandwidth. This means no human can ever achieve their full potential, only a small portion of it, even with smart drugs. There are senses that have atrophied to the point where they can never be trained or developed beyond an incredibly primitive level. Even if that could be fixed with genetic engineering, there's still neither the space nor the bandwidth to support it.

Comment Wrong approach (Score 1) 115

You always start with the end you want to achieve. You can't get somewhere without knowing where it is; you can't even heuristically reach a goal without some measure of deviance from it.
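In code terms (a minimal sketch; the goal and cost function are arbitrary stand-ins):

    # A heuristic search only works if you can measure deviance
    # from the goal. Delete cost() and the loop has nothing to
    # steer by -- which is the point being made above.
    import random

    goal = 42.0
    def cost(x):                       # the measure of deviance
        return abs(x - goal)

    x = 0.0
    for _ in range(10000):
        candidate = x + random.uniform(-1.0, 1.0)
        if cost(candidate) < cost(x):  # keep moves that reduce deviance
            x = candidate
    print(round(x, 1))                 # ends up near 42.0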

The FAA is notoriously bad at this, and always has been. The NTSB has lambasted it multiple times for failures in devising and enforcing regulations. The FAA was also solely responsible for air traffic controllers having no choice but to sleep on duty (I'm not sure that issue was ever fixed).

I'm not impressed with the NTSB either, but at least they make some sort of effort.

The whole aviation safety and regulatory system needs to be replaced - not just to get drone regulations up to speed, but to eliminate corruption and replace it with sound judgement.

Comment Zork (Score 1) 60

It resulted in lawsuits, such as the DR-DOS case, being dragged out over decades, and many potentially exciting businesses being driven into bankruptcy.

To this day, it results in WINE incompatibilities where none should exist. This is a genuine problem.

As far as Windows 3.11 is concerned: lots of systems you really don't want failing (such as control systems for hydroelectric dams and nuclear reactors) run ancient versions of operating systems (NT 3.x, for example) because it's too dangerous to reimplement the control software. The consequences of an error are too great, and modern operating systems are too complex to be made reliable enough.

These systems rely on legacy hardware, much of which is no longer made, and on no novel fault conditions ever arising. Because they're increasingly on the public internet, that cannot possibly be guaranteed. Without maintenance, and without the prospect of anyone even knowing how to handle the error conditions, these are ticking time bombs.

So, yes, the world is less safe and less satisfactory because of abandoned lines for which no source exists and for which workarounds are more dangerous than just allowing a catastrophic failure to arise.

Comment Wrong problem, wrong solution (Score 1) 393

People shouldn't need to be tied to any physical address; a virtual address should function perfectly well. This can be in the sense of a nomadic tribe or of the homeless/dispossessed (of whom there are far too many right now), but it can also be in the sense of the Donald Coxeters of the world, people who simply don't have conventional lifestyles.

(For those unfamiliar with Donald Coxeter, I strongly recommend learning some maths. Any maths will do.)

So what you need is a virtual address that can map EITHER to a physical location, OR to a logical location (such as a tribe), OR to a transient address (see the 1996 specification for IPv6), OR to an Internet address. Since you want to leave room for expansion, I recommend using at least three bits to specify the address scheme.

IPv6 addresses aren't long enough for this, although the concept is correct: a prefix that tells you what you're doing, a routing segment that tells you where you're going, and a suffix that is absolutely guaranteed unique and allows you to move absolutely anywhere, in any form, without losing anything along the way.

You can't route parcels over the Internet, and you can't route multicast packets by mail, so clearly you need a protocol type in there as well. There are something like eight packet-based protocols. If we leave room for expansion, you need four bits to identify the type of packet, two to identify the mode (unicast, multicast, anycast, plus one spare) and four bits to identify layer-1 constraints (what you can't send it over).

That's 13 bits to define the characteristics of an address. Add three bits reserved for future use and it rounds up to 16 bits, or two bytes.
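Packed into two bytes, that might look like the following (only the field widths come from the text above; the field order is my own choice for illustration):

    # 3 (scheme) + 4 (protocol) + 2 (mode) + 4 (layer-1) + 3 (reserved)
    # = 16 bits, i.e. the two-byte descriptor described above.
    def pack(scheme, proto, mode, l1, reserved=0):
        assert scheme < 8 and proto < 16 and mode < 4 and l1 < 16
        return (scheme << 13) | (proto << 9) | (mode << 7) | (l1 << 3) | reserved

    def unpack(word):
        return ((word >> 13) & 0x7,    # address scheme
                (word >> 9) & 0xF,     # protocol type
                (word >> 7) & 0x3,     # unicast/multicast/anycast/spare
                (word >> 3) & 0xF,     # layer-1 constraints
                word & 0x7)            # reserved for future use

    d = pack(scheme=1, proto=4, mode=0, l1=2)
    assert unpack(d) == (1, 4, 0, 2, 0)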

Because this scheme is independent of the user and is just as valid for probes in the Kuiper Belt as for people on Earth, we're going to need a more sophisticated prefix. It's hierarchical, so all routing is as local as possible. Which is great if you can be certain of never having more than 256 downstream next hops and one upstream hop. It's not really viable if part of the intermediate system (people on aircraft, trains or other planets) is ad hoc, because you simply don't know the topology. (Yes, I'm assuming here that Joe Bloggs' laptop on a 767 can become a relay point for any packet from any source to any destination, if that offers the best routing metric for that packet.)

You need a routing strategy that guarantees two unique endpoints can communicate over any and all multipath lines of communication by the best method possible, per packet. Here, IPv6's hierarchy is not so good: it assumes one path from start to end, even though the path can change without notice and packets midstream are supposed to be redirected.

For computers, that's tolerable. For postal mail, not so much. For postal mail to a mobile endpoint, it's too expensive and risks routing loops. For anything else, it's a disaster.

The good news is that people have dealt with weird network topologies in computing and graph theory for a long time now. The bad news is that the computer geeks doing this aren't interested in ad hoc networks (there's not much call for them in supercomputing, or anywhere else butterfly networks and hypercubes are used), and the mathematicians aren't any further along than static coloured Petri nets. Dynamic networks are still beyond the bleeding edge of technology.

Not to worry: if we layer an ad hoc routing strategy below the main routing strategy, we can simulate a fixed network even though the layer underneath isn't fixed and the nodes don't correspond 1:1.

However, this means we need to specify virtual waypoints on our virtually fixed network, where the waypoints are connected via the IPv6-like scheme but labelled by means of a unique, fixed designation the ad-hoc layer can use to find where to send stuff.

This assumes that you want your next hop to have a particular property, namely being able to send on to another stage that has the next designated property, and that exactly where it goes is unimportant. So it's now more of a fuzzy hierarchy. One or two bytes won't do for this; I'd suggest a routing label of two bytes per hop, describing the virtual properties the physical next hop must possess, as sketched below.
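Here's what I mean, as a sketch (the nodes, properties and topology are all invented for the example):

    # Forward by per-hop property labels rather than fixed addresses.
    # Any neighbour holding the next required property will do;
    # exactly which one is unimportant, per the fuzzy hierarchy above.
    graph = {                          # node -> (properties, neighbours)
        "laptop": ({0x0001}, ["plane"]),
        "plane":  ({0x0002}, ["laptop", "ground"]),
        "ground": ({0x0003}, ["plane", "core"]),
        "core":   ({0x0003, 0x0004}, ["ground"]),
    }

    def forward(start, labels):
        path, node = [start], start
        for needed in labels:          # one 2-byte label per hop
            node = next(n for n in graph[node][1]
                        if needed in graph[n][0])
            path.append(node)
        return path

    print(forward("laptop", [0x0002, 0x0003, 0x0004]))
    # ['laptop', 'plane', 'ground', 'core']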

There are 8 bytes of routing information in an IPv6 address, which we're now doubling to 16. I'm not convinced that's really good enough; remember, this has to be universal to be a serious candidate. We need more levels per planet and more levels to get between planets, asteroids, satellites, probes, etc. We don't, however, need the same level of specificity, as we're layering this as a software-defined network, an X-Bone, on top of other networks that are transparent to the visible levels.

For this, I'd adopt the TUBA approach of variable length, albeit not in the same way. The suffix is fixed length, the prefix is fixed length, and the middle is really only there to simplify re-routing and to sanity-check decisions. If you had zero latency and an infinite routing table, it wouldn't be needed at all.

I'd therefore have routers collapse and expand bits of this routing section, so that the next router is always looking at a fixed-size window at a fixed location in the header (the main problem TUBA had), and so that the uncertainty about where to send things is kept at or below what it was at the last hop, regardless of who is moving relative to whom.

The last part, the unique ID, is just that: a unique ID. If carried by a person, it could be used as the user ID for signing onto a satellite network or a hotel network, buying a house, etc. It represents the ordered pair of person and place, such that the place can change at any time and the new ordered pair is still identified by the same ID. I'd use a 512-byte UUID.

This triple of (prefix, route, uuid) can then be used universally. For phone systems, mail, email, it simply wouldn't matter.

You'd want a human-readable scheme on top of that, sure, but the computers would simply do a directory lookup for the corresponding 512-byte ID. This means mail sent to an old address would be forwarded to you without you having to sign forwarding forms, and if an email service closed, emails would still reach you.

The address something is sent to wouldn't matter, regardless of medium; it would only be used to find out which uuid was implied, and everything else would be taken care of.
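As a sketch of that lookup (all the names and identifiers below are invented):

    # Any address, in any medium, resolves to a uuid; the uuid
    # resolves to the endpoint's *current* (prefix, route). Old
    # addresses keep working because they still name the same uuid.
    directory = {                          # address -> uuid
        "12 Old Street, London": "uuid-1234",
        "jbloggs@example.org":   "uuid-1234",   # defunct email, same person
    }
    locator = {                            # uuid -> current locator
        "uuid-1234": ("earth/uk", [0x0002, 0x0003]),
    }

    def resolve(address):
        uid = directory[address]
        return uid, locator[uid]           # deliver via the live route

    print(resolve("12 Old Street, London"))
    print(resolve("jbloggs@example.org"))  # same endpoint either way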

Comment Interesting (Score 5, Interesting) 72

Kernel bypass plus zero copy are, of course, old hat. I worked on such stuff at Lightfleet, back when it did this stuff called work. InfiniBand and the RDMA Consortium had been working on it for longer still.

What sort of performance increase can you achieve?

Well, Ethernet latencies tend to run into milliseconds for just the stack, and tens if not hundreds of milliseconds for anything real. InfiniBand can achieve eight-microsecond latencies. SPI can get down to two microseconds.

So you can certainly achieve the sorts of latency improvements quoted. It's hard work, especially when operating purely in software, but it can actually be done. It's about bloody time, too. This stuff should have been standard in 2005, not 2015! Bloody slowpokes. Back in my day, we had to shovel our own packets! In the snow! Uphill! Both ways!
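The bypass stacks themselves are hardware-specific, but the zero-copy half of the idea can be shown with nothing more than shared memory (a toy handoff, not a network stack):

    # Producer and consumer share one buffer, so the payload never
    # gets copied through a socket or the kernel network stack.
    # Illustrative only; real kernel bypass talks to the NIC directly.
    from multiprocessing import shared_memory

    shm = shared_memory.SharedMemory(create=True, size=4096)
    try:
        shm.buf[:5] = b"hello"             # producer writes in place

        # The consumer (normally another process) attaches by name
        # and reads the same physical pages -- no copy in transit.
        view = shared_memory.SharedMemory(name=shm.name)
        print(bytes(view.buf[:5]))         # b'hello'
        view.close()
    finally:
        shm.close()
        shm.unlink()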
