
Comment Re:It is in fact virtually impossible (Score 1) 312

Forgot to say that I can't exclude the possibility that there are some interesting things in the execution (interesting algorithms for looking up the 9-character bits etc.). I haven't read the details, so I'm not able to judge. In my critical post above I was discussing the approach in terms of the classic idea of a monkey's ability to generate the works of Shakespeare.

Comment Re:It is in fact virtually impossible (Score 1) 312

I absolutely agree. From the point of view of the classic "one million monkeys with typewriters", the "result" described here is completely and utterly uninteresting. If he had had each node randomly generate data until one of them had emitted the full work in _one sequence_, he would have a story. The problem is that this is highly unlikely to happen even if you throw all the world's computing grids at it, given that there are ~26^n possible random texts of length n. So while reading the abstract I knew it had to be fake, but I was hoping to be proved wrong - unfortunately I was right :(
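Just to put a rough number on how hopeless the "one sequence" version is, here's a back-of-the-envelope sketch. The ~5 million character length of the complete works is an assumption; the 26-letter alphabet matches the ~26^n estimate above:

```python
from math import log10

alphabet_size = 26        # matches the ~26^n estimate above
n = 5_000_000             # assumed length of the complete works, in characters

# Probability that one random n-character string equals the target text:
# p = 1 / 26^n, so log10(p) = -n * log10(26)
log10_p = -n * log10(alphabet_size)
print(f"log10(probability per attempt): {log10_p:,.0f}")   # roughly -7,000,000

# Even an absurd 10^40 attempts would only shift those odds by 40 orders
# of magnitude - nowhere near the 7 million needed.
```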

Comment Re:Here are some reasons why (Score 1) 729

Firstly, I don't see how this example of stroke victims with their personality/consciousness intact should somehow make a quantum theory less likely. One could argue the opposite: the fact that personality and consciousness endure after severe brain damage, even when the person is clearly suffering a neurological deficit such as severe paralysis, indicates that the brain is not the ultimate site of personality and consciousness.

The fact that no one has found neurons to be dependent upon quantum effects doesn't prove anything. Observing such an effect would probably be extremely hard. Moreover, AFAIK the behaviour of even individual neurons is quite complex and not always predictable - a neuron is not just a "transistor". You could also argue that the consciousness-quantum effect would not necessarily be present when looking at one neuron in isolation; it could be that the effect only "cares" to be there when there's a fully functioning neurological system.

Personally, I believe that the processes of the human brain, as understood by classical physics, do produce part of human intelligence and behaviour (i.e. I don't believe the brain is just a mediator/amplifier of some quantum action). Also, there are known processes that are not under conscious control at all, and others that are only partly so (e.g. breathing). So it might be some very complex interaction between the "thing" that provides us with the "inner experience of consciousness" and the physical/biological layer of the brain that is required to fully explain all of human experience and behaviour.

Comment The inner experience (Score 1) 729

To me there are two reasons for invoking the quantum layer:

1) Firstly, it's hard to see how biology and classical physics can explain consciousness. Note that I'm not talking about human intelligence etc., because there are at least plausible ways to imagine how that could be "implemented" via classical physics. What I'm talking about is "the inner experience", i.e. the experience of existing, the subjective. Isn't it weird that we have such an experience? What would be the substrate of such an experience? Within classical physics, I could perhaps accept a world full of zombies running around, seemingly intelligent but without any inner experience. It's not that I don't accept emergent phenomena in general. I accept that intelligence can result from very simple building blocks. But I don't see how this is true for the subjective experience of existing.

Now this is Slashdot, so a coding analogy would be in order: Understanding consciousness within classical physics is like trying to play a sound on a computer without a sound card - it can't be done no matter what clever programming you use, since the basic building block or "API" isn't there!

Now, the problem with this idea is that it is very hard to measure this "inner experience" for anyone other than the person experiencing it. This is what it means for it to be subjective, and this is what is "magic" about it. But for the individual, the experience is valid and real. And at least to me, there seems to be no way of understanding it within classical physics.

2) There's some experimental evidence. For instance, the element xenon is almost chemically inert. Still, it is a powerful anaesthetic. Note, however, that as an anaesthetic it doesn't just shut off all cells. It doesn't even shut off all of the brain or anything of that sort. Rather, it selectively shuts off consciousness! A person sedated with xenon can still breathe, the heart is still beating, etc. However, the experience of existing is (temporarily) gone. Now, how can such a primitive one-atom entity as xenon have such selective effects on consciousness? If consciousness were some complex emergent phenomenon, wouldn't it take a complicated molecule to go into the brain and find exactly the right neurons to affect, so as to leave the vital functions intact while removing consciousness? Xenon doesn't appear to be capable of this!

No one really knows how xenon does it - but since it is chemically inert, it must act at something like the van der Waals level. Some experiments indicate it might affect special pockets in certain proteins via an at least semi-quantum effect. Given this evidence, it doesn't seem like much of a jump to consider these pockets essential to consciousness - perhaps mediating it?

Comment Re:My wife is a doctor... (Score 1) 566

Sure, some people go to their doctor or the ER even when there's clearly no need to... but in many cases I can understand people being persistent about having their problems taken seriously.

Everyone knows stories of someone who had relatively minor symptoms for a prolonged period of time... that ultimately turned out to be caused by cancer, which by then was in a late stage. We are also reminded every day of how important it is to go to the doctor with symptoms early, while there's still a chance for a cure, etc. I think most people know that in all likelihood what they have is nothing. They know the odds that it is cancer this particular time around may be only 1 in 10,000. But they also know that 30-40% of the population will be diagnosed with cancer at some point, and they have all heard of some case where it hit early and where the first symptoms were vague. What if THEIR case is the unlucky one? They have only this one life, so if it turns out to be serious, the fact that the risk was low will be little comfort - they will have the disease in full force, perhaps detected at a late stage.

Given all this, isn't it understandable that people might want tests, second opinions, etc.? Isn't that exactly what all the cancer campaigns ask them to do? The problem is that we know too much, and the whole concept of "risk" has become embedded in everything we do. This means we focus on disease even when we are relatively healthy. But it also enables us to sometimes detect diseases early, and to cure otherwise fatal illnesses (some of those cures being heavily dependent on early detection).

I think that short of outlawing screening tests or massive programs to change people's attitude towards life and death (i.e. changing the perception that "a long life is good, a short life is bad" and the almost-sense of entitlement to a long life), nothing is going to stop patients from wanting tests, and nothing is going to stop doctors from providing them and profiting from them.

Comment Re:data recorder (Score 1) 347

By the way, the concern is not limited to sector 0 - that's just the boot sector. The FAT and root directory structures occupy many sectors (for FAT32, one 4-byte entry is required per claimed cluster on the device just for the FAT, and there's a spare copy of the FAT as I recall).
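For a sense of scale, here's a rough calculation of the FAT size for a (claimed) 500 GB FAT32 volume. The cluster size is an assumption, since it depends on how the drive was formatted:

```python
claimed_bytes = 500 * 10**9
cluster_size  = 32 * 1024   # assumed; a huge FAT32 volume would typically use large clusters
entry_size    = 4           # one 32-bit FAT entry per cluster
fat_copies    = 2           # FAT32 normally keeps a backup copy of the FAT
sector_size   = 512

clusters  = claimed_bytes // cluster_size
fat_bytes = clusters * entry_size * fat_copies

print(f"{clusters:,} clusters")                                  # ~15 million
print(f"{fat_bytes // sector_size:,} sectors of FAT metadata")   # ~240,000 sectors (~120 MB)
```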

Comment Re:data recorder (Score 1) 347

There are several ways to approach the problem. I have three suggestions, of which I think the last is the one most probably employed by this device.

1) One possibility is that the USB device 'cheated' and installed a custom device driver when plugged in. Such a device driver could intercept file system calls (sitting as a file system filter driver on Windows) and could pull off the feat. One problem is that a unique device driver would be required for each platform, at least if the platform is to display the behavior described when writing the file.

2) A fuller solution would instead involve the device interpreting the file system structures written by the operating system. It would tell the operating system it was a 500 GB device, and the o/s would put a 500 GB file system on it. The device would then interpret the file system structures so that it could understand which files were stored where, and hence be able to detect the writing of big files and - when it wanted to cycle back to the beginning of the file - start redirecting write requests for the trailing sectors to where the beginning of the file is stored. Such a solution is definitely possible and doesn't require a device driver. But it is file system dependent and probably quite complex to implement. For FAT32 it is probably doable; for NTFS it is probably impractical. Given the large (claimed) size of the device, the user would likely format it with NTFS. For this reason, I think this is unlikely to be the method employed.

3) A much simpler and probably more likely solution is the following: when the device detects a series of sequential writes that goes beyond the actual capacity of the drive, it simply starts redirecting those writes to the sectors written at (or near) the beginning of the current sequential write series. This solution would not be specific to a given o/s or file system, but it does depend on the o/s copying large files through sequential writes - or almost sequential ones, depending on how much logic is put into the device's detection routine (for instance, it could tolerate intermittent writes to the file system structures as long as it saw continued sequentiality). The hacker could have analyzed the write patterns of common operating systems and file systems to come up with a simple algorithm that would work almost all the time.

This solution might run into trouble on fragmented drives, but given that the purpose is just to convince a potential customer, and that such a demonstration would likely take place on a freshly formatted drive, this shortcoming is probably irrelevant.
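Here's a minimal sketch of idea 3, just to make the logic concrete. It's hypothetical and heavily simplified (real firmware would have to tolerate interleaved metadata writes, handle reads consistently, etc.); all names and numbers are illustrative:

```python
SECTOR_SIZE     = 512
REAL_SECTORS    = 4 * 1024 * 1024       # what the flash can actually hold (~2 GB)
CLAIMED_SECTORS = 1024 * 1024 * 1024    # what the device reports to the host (~500 GB)

class FakeDrive:
    def __init__(self):
        self.flash = {}              # physical sector -> data (stand-in for the real NAND)
        self.run_start = 0           # LBA where the current sequential run began
        self.next_expected = None    # LBA that would continue the current run

    def write(self, lba, data):
        # Detect sequential runs: a write that doesn't continue the previous
        # one starts a new run.
        if lba != self.next_expected:
            self.run_start = lba
        self.next_expected = lba + 1

        # Once the run outgrows the real capacity, wrap it back onto the
        # sectors near its own beginning - the tail of a huge file quietly
        # overwrites the file's start.
        offset = (lba - self.run_start) % REAL_SECTORS
        physical = (self.run_start + offset) % REAL_SECTORS
        self.flash[physical] = data

    def read(self, lba):
        # Reads use the same trivial mapping; good enough for a demo where
        # the drive was freshly formatted and the file was written sequentially.
        return self.flash.get(lba % REAL_SECTORS, b"\x00" * SECTOR_SIZE)
```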

Comment Ideas how this was implemented (Score 1) 347

First, the thing about playing the file with the header missing: it is perfectly possible that the device wouldn't cycle back to the very beginning of the file, but instead to some point a bit into it, allowing the header to survive. Most movie players would probably happily play the file if the header was intact, even if there was a jump in the frames.

It's interesting to ponder how the author went about implementing this. A USB disk is a block device, so it isn't aware of the concept of files - all it receives are requests for reading and writing individual sectors. There are several ways to approach the problem. I have three suggestions, of which I think the last is the one most probably employed by this device.

1) One possibility is that the USB device 'cheated' and installed a custom device driver when plugged in. Such a device driver could intercept file system calls (sitting as a file system filter driver on Windows) and could pull off the feat. One problem is that a unique device driver would be required for each platform, at least if the platform is to display the behavior described when writing the file.

2) A fuller solution would instead involve the device interpreting the file system structures written by the operating system. It would tell the operating system it was a 500 GB device, and the o/s would put a 500 GB file system on it. The device would then interpret the file system structures so that it could understand which files were stored where, and hence be able to detect the writing of big files and - when it wanted to cycle back to the beginning of the file - start redirecting write requests for the trailing sectors to where the beginning of the file is stored. Such a solution is definitely possible and doesn't require a device driver. But it is file system dependent and probably quite complex to implement. For FAT32 it is probably doable; for NTFS it is probably impractical. Given the large (claimed) size of the device, the user would likely format it with NTFS. For this reason, I think this is unlikely to be the method employed.

3) A much simpler and probably more likely solution is the following: when the device detects a series of sequential writes that goes beyond the actual capacity of the drive, it simply starts redirecting those writes to the sectors written at (or near) the beginning of the current sequential write series. This solution would not be specific to a given o/s or file system, but it does depend on the o/s copying large files through sequential writes - or almost sequential ones, depending on how much logic is put into the device's detection routine (for instance, it could tolerate intermittent writes to the file system structures as long as it saw continued sequentiality). The hacker could have analyzed the write patterns of common operating systems and file systems to come up with a simple algorithm that would work almost all the time.

This solution might run into trouble on fragmented drives, but given that the purpose is just to convince a potential customer, and that such a demonstration would likely take place on a freshly formatted drive, this shortcoming is probably irrelevant.

Comment Re:Thoughts. (Score 1) 527

I have to agree - I can definitely see how one would try to preserve as much as possible. But at the end of the day, what you are going to capture on video will only be a very small percentage of her personality. The only thing that will feel like a "high fidelity" representation of what she was like will be in your head: the memory of those special moments that are quintessential to a person - moments where you never happen to have a video camera running.

Also, be careful not to overdo the video recording. As other posters suggested, I would try to make the most of the time you have left together in terms of simply being together. Travelling is an obvious thing if you like it and can afford it, but it can also be smaller things that matter to you. Then occasionally you can take a picture or a recording of that, just like you did before this happened to you. Don't take this the wrong way - I think you should take some photos and make some recordings; it's just that you seem so focused on it that you will always think you didn't do enough. So set your expectations to something realistic. And ask yourself whether you will (or should) watch hours and hours of footage of her after she is gone, and whether that is something you would wish your loved ones to do if you passed away.

I can't help wondering whether all this recording will disturb the natural grieving processes of the brain. Maybe it is better to remember things the way the brain wants to remember them. Some memories will fade away, and there will be things where you ask yourself "Why can't I remember this?" or "Why didn't I ask that?". But this is just part of the healing process. Initially after a traumatic experience you are thinking about it all the time, practically every second. The healing process consists of a continuous lengthening of the interval between being reminded of the trauma. So after a week maybe you can go a minute without being reminded of it. After a month, maybe you can go half an hour. Part of this process, I suspect, involves deletion/blurring of memories - pushing them farther and farther back in the 'database' and untying them from their relationship to everyday objects and experiences, so that you are not reminded of them all the time. As harsh as it may sound, you will have to move on and focus on the people around you, both for your own and other people's sake. I can't help wondering whether constant digital reminders can interfere with this process.

Comment Re:Mathematicians are gathering to vet this paper (Score 1) 147

Computer science is not a subset of mathematics - rather, mathematics is a subset of computer science. Any question in mathematics can be restated as a question about Turing machines. The question "Can the statement x be proven in theory T?" can be restated as "Does the Turing machine PM(x,T) halt?", where PM is a rather simple Turing machine that tries out all potential proofs of x from the axioms of theory T (usually there are infinitely many candidate proofs to try) and halts if it finds a valid one. You could say that complexity theory contains the answer to all mathematical problems, which is exactly why (in a very tangible sense for those familiar with the attempts at the problem, including approaches based on circuit complexity) the question P ?= NP is so hard to settle.
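A sketch of PM in ordinary code, just to show how simple the machine itself is. The hard part - the mechanical proof checker of the formal system - is passed in as a hypothetical function and not implemented here:

```python
from itertools import count, product

def PM(x, T, proof_alphabet, is_valid_proof):
    """Enumerate every finite string over the proof alphabet, shortest first,
    and halt as soon as one is a valid proof of statement x from the axioms
    of theory T. `is_valid_proof` stands in for the (decidable) proof checker
    of the formal system - hypothetical here, but mechanical in principle."""
    for length in count(1):
        for symbols in product(proof_alphabet, repeat=length):
            if is_valid_proof("".join(symbols), x, T):
                return True          # a proof was found: PM halts
    # If no proof exists, the loops above never terminate - PM does not halt,
    # which is exactly the correspondence described above.
```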

Comment Re:Rebuilds usually fail (Score 1) 289

You are absolutely right. Also, if you rewrite, you constantly have to make the choice "Do I keep this thing working the same way as in the old app?". This applies on multiple levels, e.g. how you present something in the GUI, how something is represented internally in the code, how it is stored in the database, etc.

More often than not, there will be a lot of pressure to keep at least some things the same as before. And often keeping those things the same will force you into many of the old app's design choices. You might find yourself just cloning/hand-converting the old app.

On the other hand, every time you change something compared to the old app, there will be complaints from customers, since there will likely be some scenario that isn't as easy as before. Furthermore, even if every change is for the better, just the fact that it works differently may upset customers, as they have to retrain their employees. Even if everything works the same but just looks slightly different because of a new GUI, that can be a huge issue as well, since all training material with screenshots has to be updated!

As described in my previous post, I went through this whole process, and that was with an automated conversion process which by its nature preserved the functional aspects. Even so, the slight variations in how the GUI looked were a huge issue, though those we managed to get the customer to swallow. Other aspects I would also consider minor and generally equivalent (e.g. the exact tab order of elements, the exact keyboard behaviour in list boxes) turned out to be huge issues whenever there was a difference - so I went to great lengths to preserve as much as I could. Consider how hard, and not least how tedious, it would be if you are not just auto-converting the code but reimplementing it from scratch!

Lesson: only convert or rewrite when you really have to. Make sure the cost of the change to your existing customers is factored in as well - they will surely factor it in when considering your product.

Comment Re:Joel contradicts the IEEE (Score 1) 289

I think the IEEE view is totally backwards. My advice would be to never start out by thinking of secondary factors like how old the code base is, how messy it is, what programming language it is written in, etc. Instead, look at the bottom line: what does it cost the business right now that the code is bad? Is there a new feature we can't build? Can we see that it is taking us too much time to make changes? Do we have quality problems with those changes? It is important to at least try to quantify each of these points and how much they cost.

Then quantify the cost of a rewrite (including the risk of cost overruns, setback in the market, lower quality in the beginning, etc.). Then discount the cost of the rewrite with the rate of interest and compare the long-term costs of either approach. Only then can you make a qualified decision. Often this isn't a hard exercise, because very often one will quickly see that the rewrite isn't worth it - the potential productivity gains cannot catch up with the cost of the rewrite even over longer periods of time. The rewrite should probably only be undertaken if it can pay for itself over a relatively short timespan, since there's a major risk the product won't even exist or be relevant after that. In most cases (when we are talking about an app that has some maturity) only a major refactoring could make sense. Only very rarely does it make sense to throw everything out.

One of the hard situations is this: what do I do with an app written in some strange unsupported language XYZ, functioning fairly well, maintainable with some effort, but looking totally outdated (say, a GUI that looks like a Win 3.1 program) and unable to inter-op with other languages? If the provider of XYZ doesn't provide a path to move the code to another platform, and one isn't available in the market, you could try to build it yourself, which has risks as well and might not deliver all the benefits (see my other post about such a project). Otherwise you are either stuck with an app that appears outdated (and may be accumulating problems due to a buggy and unmaintained runtime) OR you have to pay the full cost of a reimplementation. That said, the conversion project might not be that bad after all, and may not be as hard as you think.
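A toy version of that discounting exercise - all numbers are made-up placeholders, purely to show the mechanics of the comparison:

```python
def npv(yearly_costs, rate=0.10):
    """Net present value of a stream of yearly costs, discounted at `rate`."""
    return sum(cost / (1 + rate) ** year for year, cost in enumerate(yearly_costs))

horizon = 5  # years we dare to plan for

keep_old = [300_000] * horizon                       # ongoing drag of the messy code base
rewrite  = [900_000] + [120_000] * (horizon - 1)     # big upfront cost, cheaper upkeep after

print("keep the old code:", round(npv(keep_old)))    # ~1.25M with these placeholder numbers
print("rewrite:          ", round(npv(rewrite)))     # ~1.28M - doesn't pay off within 5 years
```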

Comment I went through the following conversion effort (Score 1) 289

I went through porting a ~400K application to Java. It was written in an unsupported, obsolete, early-90s "drag and drop" RAD system with a relatively simple language core. The program was highly mature and very functional even compared to competing programs. To port it, I wrote a conversion program that converted it function by function to Java. Additionally, I reimplemented the runtime library myself.

The system had a couple of interesting specialities that made it fun to convert, because they were challenging to emulate in Java. Some of it called out to native C libraries (via a JNI mechanism that I also had to implement), and there was a strange type of global variable that was shared among multiple processes. Additionally, it was possible to link fields in the GUI directly to storage locations (e.g. a row in a table could be linked to the text fields of a dialog) so that the two would update each other. There were various types of reflection possible, and a simple message system for the GUI.

It took around 6 months of work. The project had to be done very fast for strategic reasons: a multi-man effort to rewrite the application had already spent 2 years porting just parts of it. Of course, not all the benefits were derived from my automated conversion. The original code was not that readable by today's standards, and it became slightly worse through conversion. But not all the hand-converted code was that readable either, and some of it has already gone through heavy refactoring. No one, myself included, got much understanding out of the code, but at least modern tools like Eclipse can be used to inspect, debug and extend it.

Now, 4 years after the completion of the project, the code is still in production. New functionality has been written from scratch, but the old code is still a significant part of the full application (decreasing, but probably still around 50%) and has been maintained via bug fixes. There's no plan to reimplement it just for reimplementation's sake, but parts might get reimplemented when they have to be significantly extended.

It was also a fun project for me. However, the code for the runtime library I wrote, which bridges the gap between the conventions of the original runtime system and Java's, is a bit horrifying to say the least. Fortunately, once it was done it worked quite well and has required no fixing for years. Actually, I now have to look into it again to make it work on another platform. It feels like reopening the Chernobyl concrete sarcophagus ;-) Fun and challenging, but I only change something on the days where I feel at the top of my game :P
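For readers who haven't seen that style of RAD system, here is a tiny, purely illustrative sketch (not the actual runtime, and far simpler) of the kind of "linked field" behaviour the converted code relied on - a GUI field bound directly to a storage location so that the two update each other:

```python
class BoundField:
    """A text field whose value lives directly in a backing record."""
    def __init__(self, record, column):
        self.record = record        # the storage location, e.g. a row in a table
        self.column = column

    def set_from_gui(self, text):
        self.record[self.column] = text       # typing in the field updates the row

    def render(self):
        return self.record[self.column]       # repainting the field reads the row

row = {"name": "old value"}                   # stand-in for a table row
field = BoundField(row, "name")

field.set_from_gui("new value")
assert row["name"] == "new value"             # GUI edit visible in storage

row["name"] = "changed elsewhere"
assert field.render() == "changed elsewhere"  # storage change visible in the GUI
```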

Comment Re:Whatcouldpossiblygowrong (Score 1) 251

I don't think the part about the NAND chips is true. The bad block management takes place internally in the controller, using a spare pool, and is not under user control by any means. Hence, when you format a flash drive you won't actually see any bad blocks, since they will have been remapped internally by the controller. I haven't seen a bad block in 13-14 years. It is possible that if a sector becomes bad at some later point, the controller will report it as unreadable until it has been rewritten (after being remapped to the spare pool), but that is just to prevent the o/s from relying on invalid data.
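A minimal sketch of what that controller-side remapping might look like - hypothetical and simplified, since real flash translation layers (wear levelling, bad block handling) are far more involved:

```python
class FlashController:
    """Hides bad blocks from the host by remapping them to a spare pool."""
    def __init__(self, visible_blocks, spare_blocks):
        self.remap = {}          # logical block -> spare physical block
        self.spares = list(range(visible_blocks, visible_blocks + spare_blocks))

    def mark_bad(self, logical_block):
        # Called internally when a program/erase failure is detected;
        # the host never takes part in this.
        if not self.spares:
            raise RuntimeError("spare pool exhausted - the drive is wearing out")
        self.remap[logical_block] = self.spares.pop()

    def physical(self, logical_block):
        # Every host read/write is translated here, so a format or surface
        # scan simply never sees the bad blocks.
        return self.remap.get(logical_block, logical_block)
```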
