
Comment Seen this before... (Score 1) 544

Am I the only one who remembers the last time NASA did this? Back in 2010 they put out a lot of marketing fluff about the discovery of a new life form etc. etc. http://science.slashdot.org/story/10/11/30/1846255/curious-nasa-pre-announcement Many assumed it would have to be about NASA finding indications of life on some other planet (even if it was only indirect evidence of some chemical sort, it would be huge). Well, it turned out the news was that they had found a microbe (on Earth) that had apparently taken up some arsenic when deprived of phosphorus. This was interpreted as the organism using arsenic instead of phosphorus in its DNA (though this had not been verified at all) and as having huge implications for the search for life in space. If that wasn't disappointing enough, the discovery subsequently came under fire, and later research found no evidence of any arsenic uptake, so it was all just a bunch of crap to begin with. In other words, just because NASA makes huge announcements and calls enormous press conferences days in advance, don't hold your breath. It could be nothing more than... well, nothing.

Comment Re:Heh (Score 1) 241

I have to agree with the OP. There's something really strange about sentience. I don't see how just adding more complexity would create this "internal awareness". Intuitively it seems that such awareness is of an entirely different quality than what could be created by any amount of computational complexity. What is it that breathes life and sentience into these computations (more on that later)?

Of course, my intuition could be wrong. Maybe sentience only appears to have a different quality because it is so infinitely complex that our intuition has no grasp of such complexity - and if it did, we would be able to see how sentience can develop just from the standard properties of physical matter. I can't exclude that this is the case; I don't see how anyone could.

By the way, I'm curious about you saying that such computers would have to be made from something other than silicon and would have to be ultra-parallel, plastic, etc.
I don't see why, if you have already accepted that sentience follows from mechanical processes. As long as the materials obey the usual physics, the properties of the material could easily be simulated on a computer. You could simulate ultra-flexible/plastic neurons on any computer. Likewise, a single thread can simulate any number of parallel threads with only a linear slow-down. So it seems to me that if you go down the mechanistic route, you have to accept that sentience could occur even on the simplest of CPUs (or any kind of Turing machine, in fact) given the right program. Of course the program might execute at a slow speed, but it should be totally equivalent. And this is why I find the premise of "sentience is just complex computation" unappealing.
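The linear slow-down claim can be made concrete with a toy sketch: a single thread of control interleaving many logical "threads" (here Python generators; the names and step counts are just for illustration):

```python
# Toy round-robin scheduler: one thread of control interleaves many
# logical "threads" (generators), advancing each one step per round.
def worker(name, steps):
    """A logical thread that yields after each unit of work."""
    for i in range(steps):
        yield f"{name}:{i}"

def run_round_robin(workers):
    """Interleave all workers on a single thread. Total work is the
    sum of all steps, i.e. only a linear slow-down per worker."""
    trace = []
    while workers:
        still_running = []
        for w in workers:
            try:
                trace.append(next(w))
                still_running.append(w)
            except StopIteration:
                pass
        workers = still_running
    return trace

trace = run_round_robin([worker("A", 2), worker("B", 3)])
# Interleaved execution: A:0, B:0, A:1, B:1, B:2
```

A real scheduler would also save and restore machine state, but the principle is the same: nothing about parallelism requires more than one physical processor.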

Another problem is that I find the whole concept of computation somewhat subjective. While computation is well-defined at the "input/output" level, the actual computation process is not. It's usually just abstracted away. And sentience must be a property of the computation process itself and not a property of the actual I/O (after all, I still feel sentient when no one is looking).

If I look at a Turing machine made of organ pipes, it's essentially just my interpretation that these organ pipes are operating as a Turing machine executing on some piece of information. You have to look at the system at just the right level and type of abstraction to see that it's a Turing machine. To someone not "in" on it, it would just seem like a weird spectacle. The individual physical components are operating exactly as they would if they were not part of a Turing machine. That a computation is taking place is clearly not a physical matter; it just happens that humans have labeled this type of interaction between organ pipes "computation". So couldn't there be some subjective view of the organ pipes implementing a different type of machine running a different program that did not involve sentience? Likewise, couldn't I interpret leaves flying across the street as some sort of calculation? Given the huge number of physical processes going on in a glass of water, and the almost infinite ways of interpreting what those processes are doing, it seems likely you could find some way of interpreting one of them as a sentient process. That's why I think computation is in some way subjective.

So if computation is so subjective, what is it that breathes sentience into the computations making up our brain and thoughts?

By the way I have been debating this with people for 10-15 years. Some people understand immediately what I mean. Others seem to never see the problem. Is that because the latter are more clever and can better see how sentience follows from physics? Or is it because they are not sentient themselves? ;) In my view, this question and the question of "why anything exists" are the only two real mysteries remaining for science. Everything else seems to be just tiny details that could eventually be worked out.

Comment The "anti-indiviudal abilities agenda" (Score 3, Insightful) 73

I think the article was in some ways flawed. It gave a good description of how the error occurred. Then it moved on to a huge tirade against the focus on "individual abilities" which it blames for the whole error. Firstly, even taking the description of how the error occurred at face value, it is not at all clear that the error had anything to do with a focus on "individual abilities". On the contrary, it seems this was just an instance of really poor management that - due to cost overruns - pushed their employees to work harder, to the point that they lost their focus on quality and maybe even started cutting corners in the fabrication process. This has absolutely nothing to do with a focus on "individual abilities". However, let me address the "anti-individual abilities agenda" anyway.

The anti-individual abilities agenda is routinely promoted by managers, project managers and other people engaged in the management layers (management consultants, business schools etc.). The motive is pretty clear: Many bosses don't like admitting that the success of their project comes down to individual abilities of a few core members on the project. After all, what is the value of management then, they ask? It's like the tail wagging the dog.

However, this is just denying reality. I can firmly say that on every project of major size I have worked on, there was a core 5-10% of people on the project running the show. This in itself is not very surprising; what is surprising is that these 5-10% were not concentrated at the top of the pyramid. Rather, they were spread evenly over all 'layers', from 'highest to lowest'. These people (by virtue of their skills and dedication to the project, something that is often lacking in the project management itself!) automatically assume the role of authorities whether management likes it or not. It's simply the only way to get things done. Let's face it, on any project there are going to be a lot of 9-to-5'ers who don't really care. They are never the ones driving the car, nor should they be. It's the 5-10% who have both the ability and the interest to get the job done that count. Those who dream about the project at night and who feel their personal honour is at stake in making it succeed. Also, as Fred Brooks noted in 'The Mythical Man-Month', some (sub)projects are like surgery: you need one highly skilled person to be in charge and carry out the job, and the rest of the team members are really just accessories to that person. Their contribution can be important, of course, but at the end of the day, all choices and responsibility reside with the 'surgeon'.

I think the lesson to be learned from these observations is that management needs to accept that this is the structure projects will generally fall into, no matter what they do. The job of management is to get the best result out of it. On projects with poor management that creates obstacles to progress and makes lots of bad choices (this often happens on politically infested projects, as well as on projects where management doesn't have a clue about the technical aspects), the project often finds a way to bypass management completely. Decisions by management may be outright ignored, or important decisions are never brought up to that level but are just made behind the scenes. This is a very dangerous situation, since important decisions may not be properly reviewed and may not even be known to all stakeholders. While most of the decisions taken may have been correct, it takes just one bad decision to jeopardize the project, and problems related to these kinds of "skunkworks" decisions tend to surface very late, when they can cause huge problems, sometimes disasters.

The job of management is to embrace the individual abilities, and to listen carefully (but of course not uncritically) to arguments brought forward, no matter if it is from a project manager or a "lowly" techie. They need to make a decent effort to try to understand what they are talking about, even if the explanations are not always clear and even if it can sometimes be highly technical.

Comment Re:Ignore it (Score 1) 412

Depends on what you mean by easily doable.
It's true that if you draw cards many times from a deck, it is not unlikely you will at some point draw two black aces in a row. However, if you (and only you) carried out this experiment just once in your lifetime, it would indeed be quite unlikely -- but of course, not impossible.
So then we're back to what probabilities really mean to us humans. The odds of this asteroid hitting may only be 1 in 625. Intuitively, that's a low risk, but probably somewhat higher than what our "comfort zone" dictates. But does it really make sense to use probabilities? After all, it either hits or it doesn't. If it does, we have no way to recoup the loss, since we will all be gone (I'm assuming a worst-case scenario here). So it's not the same as gambling or making bets on the financial markets, since there you do have the possibility of recouping losses on future bets.
It's the same dilemma faced by people with a serious illness who may be given a prognosis, say 50%. What can they use it for? To them, all that matters is whether they end up in the right 50%.
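For concreteness, the numbers in this thread can be checked with a few lines (a quick sketch; 1 in 625 is the impact estimate quoted above, not a figure I'm deriving):

```python
# Probability of drawing two black aces in a row from a full 52-card
# deck: 2 black aces for the first draw, 1 of 51 cards for the second.
p_two_black_aces = (2 / 52) * (1 / 51)   # = 1/1326, about 0.00075

# The quoted asteroid impact odds, for comparison. Low for a single
# trial -- but this "trial" is one-shot, with no recouping the loss.
p_hit = 1 / 625                          # = 0.0016

print(f"two black aces: 1 in {1 / p_two_black_aces:.0f}")
print(f"impact odds:    {p_hit:.4f}")
```

The single-draw odds are tiny, which is the point: repeated trials make rare events likely, but a one-shot event gives probability a very different practical meaning.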

Comment Re:Kudos (Score 2) 292

I agree, but remember that most engineers are working on company time. For most companies it wouldn't be rational to have an engineer spend months isolating/reproducing this CPU bug. After all, this work wouldn't particularly benefit this company over all the other companies, and at any rate it would be much cheaper to just do the workaround (which might be necessary anyway). However, a good engineer probably couldn't resist looking into this in his free time (and maybe on company time with nobody looking!), at least to prove that he was right. Those engineers are usually so much more valuable than the average engineer that even if they sometimes spend their time on things that are not rational for the company, it is still worth it to have them on the payroll :)

Comment Re:It is in fact virtually impossible (Score 1) 312

Forgot to say that I can't exclude the possibility that there might be some interesting things in the execution (interesting algorithms for looking up the 9-char bits, etc.). I haven't read the details, so I'm not able to judge. In my critical post above I was discussing the approach in terms of the classic idea about the ability of monkeys to generate the works of Shakespeare.

Comment Re:It is in fact virtually impossible (Score 1) 312

I absolutely agree. From the point of view of the classic "one million monkeys with typewriters", the "result" described here is completely and utterly uninteresting. If he had had each node randomly generate data until one of them had emitted the full work in _one sequence_, he would have a story. There's just the problem that this is highly unlikely to happen even if you throw all the world's computing grids at it, given that there are ~26^n random texts of length n. Therefore, while reading the abstract I knew it had to be fake, but I hoped to be proved wrong - unfortunately I was right :(
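The ~26^n point can be made concrete (a rough sketch that ignores spaces, punctuation and capitalisation):

```python
# Chance that uniformly random letters reproduce an n-character text
# in one sequence: each position has 26 choices, so a single attempt
# succeeds with probability 26**-n.
def expected_attempts(n):
    """Expected number of random n-letter strings before a hit."""
    return 26 ** n

# Even a short phrase is hopeless. "to be or not to be" stripped of
# spaces is 13 letters:
n = len("tobeornottobe")
print(f"~{expected_attempts(n):.2e} attempts expected")
```

At 13 letters the expected count is already on the order of 10^18; a full play runs to ~10^5 characters, putting the exponent hopelessly beyond any computing grid.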

Comment Re:Here are some reasons why (Score 1) 729

Firstly, I don't see how this example of stroke victims with their personality/consciousness intact should somehow make a quantum theory less likely. One could argue the opposite: the fact that personality and consciousness endure after severe brain damage, even when the person is clearly suffering neurological deficits such as severe paralysis, indicates that the brain is not the ultimate site of personality and consciousness.

The fact that no one has found neurons to be dependent upon quantum effects doesn't prove anything. Observing such an effect would probably be extremely hard. Conversely, AFAIK the behaviour of even individual neurons is quite complex and not always predictable. It's not just a "transistor". You could also argue that the consciousness-quantum effect would not necessarily be present when looking at one neuron in isolation - it could be that the effect only "cares" to be there when there's a fully functioning neurological system.

Personally, I believe that the processes of the human brain, as understood by classical physics, do produce part of human intelligence and behaviour (i.e. I don't believe the brain is just a mediator/amplifier of some quantum action). Also, there are known processes that are not under conscious control at all, and others that are only partly so (e.g. breathing). So it might be some very complex interaction between the "thing" that provides us with the "inner experience of consciousness" and the physical/biological layer of the brain that is required to fully explain all of human experience and behaviour.

Comment The inner experience (Score 1) 729

To me there's two reasons for invoking the quantum layer:

1) Firstly, it's hard to see how biology and classical physics can explain consciousness. Note that I'm not talking about human intelligence etc., because there are at least plausible ways to imagine how that could be "implemented" via classical physics. What I'm talking about is "the inner experience", i.e. the experience of existing, the subjective. Isn't it weird that we have such an experience? What would be the substrate of such an experience? Within classical physics, I could perhaps accept a world full of zombies running around, seemingly intelligent but without an inner experience. It's not that I don't accept emergent phenomena in general. I accept that intelligence can result from very simple building blocks. But I don't see how this holds for the subjective experience of existing.

Now this is Slashdot, so a coding analogy would be in order: Understanding consciousness within classical physics is like trying to play a sound on a computer without a sound card - it can't be done no matter what clever programming you use, since the basic building block or "API" isn't there!

Now, the problem with this idea is that it is very hard to measure this "inner experience" for anyone other than the person experiencing it. This is what it means for it to be subjective, and this is what is "magic" about it. But for the individual, the experience is valid and real. And at least to me, there seems to be no way of understanding it within classical physics.

2) There's some experimental evidence. For instance, the element xenon is almost chemically inert. Still, it is a powerful anaesthetic. However, note that as an anaesthetic it doesn't just shut off all cells. It doesn't even shut off all of the brain or anything of that sort. Rather, it selectively shuts off consciousness! A person sedated with xenon can still breathe, the heart is still beating, etc. However, the experience of existing is (temporarily) gone. Now, how can such a primitive one-atom entity as xenon have such a selective effect on consciousness? If consciousness were some complex emergent phenomenon, wouldn't it take a complicated molecule to go into the brain and find exactly which neurons to affect, so as to shut off consciousness while leaving the vital functions intact? Xenon doesn't appear to be capable of this!

No one really knows how xenon does it - but since it is chemically inert, it must act at least at the "van der Waals" level. Some experiments indicate it might affect special pockets in certain proteins via an at least semi-quantum effect. Given this evidence, it doesn't seem like much of a jump to consider these pockets essential to consciousness - perhaps mediating it?

Comment Re:My wife is a doctor... (Score 1) 566

Clearly some people go to their doctor or the ER even when there's obviously no need to... but in many cases, I can understand people being persistent about having their problems taken seriously.

Everyone knows stories of someone who had relatively minor symptoms for a prolonged period of time... that ultimately turned out to be caused by cancer, which was by then at a late stage. We are also reminded every day of how important it is to go to the doctor with symptoms early, while there's still a chance for a cure, etc. etc. I think most people know that in all likelihood, what they have is nothing. They know the odds that it's cancer this particular time around may only be 1 in 10,000. But they also know that 30-40% of the population will be diagnosed with cancer at some point, and they have all heard of some case where it hit early and the first symptoms were vague. What if THEIR case is the unlucky one? They have only this one life, so if it turns out to be serious, the fact that the risk was low will be little comfort - they will have the disease in full force, perhaps detected at a late stage.

Given all this, isn't it understandable that people might want tests, second opinions, etc.? Isn't that what they are asked to do by all the cancer campaigns? The problem is that we know too much, and the whole concept of "risk" has become embedded in everything we do. This means we focus on disease even when we are relatively healthy. But it also enables us to sometimes detect diseases early, and to cure otherwise fatal illnesses (some of those cures being heavily dependent on early detection).

I think that short of outlawing screening tests, or massive programs to change people's attitudes towards life and death (i.e. changing the perception that "a long life is good, a short life is bad" and the almost-entitlement to a long life), nothing is going to stop patients from wanting tests, and nothing is going to stop doctors from providing them and profiting from them.

Comment Re:data recorder (Score 1) 347

By the way, the concern is not limited to sector 0; that's just the boot sector. The FAT tables and root directory structure occupy many sectors: for FAT32, one 4-byte entry is required per cluster claimed by the device just for the FAT table, and there's a spare copy of it as I recall.
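As a back-of-the-envelope check (a sketch only; the 32 KB cluster size is an assumed typical FAT32 default at this volume size, not something from the article):

```python
# Rough size of the FAT tables a fake 500 GB FAT32 volume would need.
claimed_bytes = 500 * 10**9
cluster_bytes = 32 * 1024     # assumed FAT32 cluster size
entry_bytes   = 4             # one 4-byte FAT32 entry per cluster
fat_copies    = 2             # primary FAT plus the spare copy

clusters  = claimed_bytes // cluster_bytes
fat_bytes = clusters * entry_bytes * fat_copies
print(f"{clusters:,} clusters, ~{fat_bytes / 2**20:.0f} MiB of FAT tables")
```

So the file system metadata alone spans thousands of sectors spread across the claimed address space, which is why a fake drive cannot get away with faithfully storing only sector 0.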

Comment Re:data recorder (Score 1) 347

There are several ways to approach the problem. I have three suggestions of which I think the last is the one that is most probably employed by this device.

1) One possibility is that the USB device 'cheated' and installed a custom device driver when plugged in. Such a device driver could intercept file system calls (sitting as a file system filter on Windows) and could pull off the feat. One problem is that a unique device driver would be required for each platform, at least if every platform is to display the behavior described when writing the file.

2) A fuller solution would instead involve the device interpreting the file system structures written by the operating system. It would tell the operating system it was a 500 GB device, and the OS would put a 500 GB file system on it. The device would then interpret the file system structures so it could understand which files were stored where, and hence detect the writing of big files and - when it wanted to cycle back to the beginning of a file - start redirecting write requests for the trailing sectors to where the beginning of the file is stored. Such a solution is definitely possible and doesn't require a device driver. But it is file system dependent and probably quite complex to implement. For FAT32 it is probably doable; for NTFS it is probably impractical. Given the large (claimed) size of the device, the user would likely format it with NTFS. For this reason, I think this is unlikely to be the method employed.

3) A much simpler and probably more likely solution is the following: when the device detects a series of sequential writes that goes beyond the actual capacity of the drive, it simply starts redirecting those writes to the sectors written at (or near) the beginning of the current sequential write series. This solution is not specific to a given OS or file system, but it does depend on the OS copying large files through sequential writes - or almost sequential, depending on how much logic is put into the device's detection routine (for instance, it could tolerate intermittent writes to the file system structures as long as it saw continued sequentiality, etc.). The hacker could have analyzed the write patterns of common operating systems and file systems to come up with a simple algorithm that would work almost all the time.

However, this solution might run into trouble on fragmented drives; but given that the purpose is just to convince a potential customer, and that such a demonstration would likely take place on a freshly formatted drive, this shortcoming is probably irrelevant.
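Suggestion 3 can be sketched at the block level (a toy model of my own; the class name, sector counts and wrap logic are hypothetical, not reverse-engineered from the actual device):

```python
# Toy model of a fake flash drive: it advertises more sectors than it
# has, and once a sequential write run exceeds the real capacity it
# wraps subsequent writes back onto the start of that run.
class FakeDrive:
    def __init__(self, real_sectors, claimed_sectors):
        self.real = real_sectors
        self.claimed = claimed_sectors
        self.storage = {}           # physical sector -> data (toy store)
        self.run_start = None       # first sector of current write run
        self.next_expected = None

    def write(self, lba, data):
        # Detect sequential runs: a non-consecutive LBA starts a new run.
        if lba != self.next_expected:
            self.run_start = lba
        self.next_expected = lba + 1
        # Wrap within the run once it outgrows the real capacity.
        offset = (lba - self.run_start) % self.real
        self.storage[(self.run_start + offset) % self.claimed] = data

    def read(self, lba):
        return self.storage.get(lba % self.claimed)

drive = FakeDrive(real_sectors=4, claimed_sectors=10)
for i in range(6):                  # a 6-sector sequential write
    drive.write(i, f"block{i}")
# blocks 4 and 5 wrapped around and overwrote blocks 0 and 1:
assert drive.read(0) == "block4" and drive.read(1) == "block5"
```

A real implementation would run in firmware and need a tolerance window for the OS's intermittent metadata writes, but the core trick - track the run, wrap the offset - is this small.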

Comment Ideas how this was implemented (Score 1) 347

First, the thing about playing the file with the header missing: it is perfectly possible that the device wouldn't cycle back to the very beginning of the file, but instead to some point a bit into it, allowing the header to survive. Most movie players would probably happily play the file if the header was intact, even if there was a jump in the frames. It's interesting to ponder how the author went about implementing this. A USB disk is a block device, so it isn't aware of the concept of files - all it receives are requests for reading and writing individual sectors.

Comment Re:Thoughts. (Score 1) 527

I have to agree - I can definitely see how one would try to preserve as much as possible. But at the end of the day, what you are going to capture on videos will only be a very small percentage of her personality. The only thing that will feel like a "high fidelity" representation of what she was like will be in your head: The memory of those special moments that are quintessential to that person and where you never happen to have a video camera running when it happens.

Also, be careful not to overdo the video recording. As other posters suggested, I would try to make the most of the time you have left together in terms of actually being together. Travelling is an obvious thing if you like it and can afford it, but it can also be the smaller things that matter to you. Then occasionally you can take a picture or make a recording of that, just like you did before this happened to you. Don't take this the wrong way - I think you should take some photos and make some recordings; it's just that you seem so focused on it that you will always think you didn't do enough. So set your expectations to something realistic. And ask yourself whether you will (or should) watch hours and hours of footage of her after she is gone, and whether that is something you would wish your loved ones to do if you passed away.

I can't help wondering whether all this recording will disturb the natural grieving processes of the brain. Maybe it is better to remember things the way the brain wants to remember them. Some memories will fade away, and there will be things where you ask yourself "Why can't I remember this?" or "Why didn't I ask that?". But this is just part of the healing process. Initially after a traumatic experience, you are thinking about it all the time, say at least every second. The healing process consists of a continuous extension of the interval between being reminded of the trauma. So after a week, maybe you can go a minute without being reminded of it. After a month, maybe you can go half an hour. Part of this process, I suspect, involves the deletion/blurring of memories - pushing them farther and farther back in the 'database' and untying them from their relationship to everyday objects and experiences, so that you are not reminded of it all the time. As harsh as it may sound, you will have to move on and focus on the people around you, both for your own and other people's sake. I can't help but think that constant digital reminders might interfere with this process.

Comment Re:Mathematicians are gathering to vet this paper (Score 1) 147

Computer science is not a subset of mathematics; rather, mathematics is a subset of computer science. Any question in mathematics can be restated as a question about Turing machines. The question "Can statement x be proven in theory T?" can be restated as: does the Turing machine PM(x,T) halt, where PM is a rather simple Turing machine that tries out all potential proofs of x using the axioms of theory T (usually there are infinitely many candidate proofs to try) and halts if it finds a valid one. You could say that complexity theory contains the answer to all mathematical problems, which is exactly why (in a very tangible sense, for those familiar with the attempts at the problem, including approaches based on circuit complexity) the problem P ?= NP is so hard to solve.
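The reduction can be illustrated with a toy formal system standing in for theory T (here Hofstadter's MIU system; a sketch, not a general proof checker): PM enumerates derivations breadth-first and halts exactly when it finds one ending in the target statement.

```python
from collections import deque

# Toy version of PM(x, T): enumerate all derivations of the MIU system
# in breadth-first order and halt when a derivation of the target is
# found. (For underivable targets, an unbounded search never halts --
# exactly the behaviour PM exhibits for unprovable statements.)
def miu_successors(s):
    """All strings derivable from s in one MIU inference step."""
    out = []
    if s.endswith("I"):
        out.append(s + "U")                     # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.append(s + s[1:])                   # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.append(s[:i] + "U" + s[i + 3:]) # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.append(s[:i] + s[i + 2:])       # rule 4: UU  -> (drop)
    return out

def pm(target, axiom="MI", limit=10000):
    """Return a derivation of target if one is found within limit
    explored strings, else None (the search is capped for safety)."""
    queue, seen = deque([[axiom]]), {axiom}
    while queue and len(seen) < limit:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in miu_successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(pm("MIIU"))   # ['MI', 'MII', 'MIIU']
```

Swap in a real proof checker for a real theory T and the same enumerate-and-check loop is the PM described above: provability becomes a halting question.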
