
Comment Re:No P&S camera (Score 2, Insightful) 778

Well, there's that, but also bear in mind that cameras can afford to put a bit more power into the electronics, so that JPEG compression can be of higher quality.

Doubling the number of pixels on the CCD but more than halving the amount of retrievable data stored will give you a net loss of quality. High-res CCDs are relatively cheap and since the phones don't advertise the resolution of the image as stored, it's a great marketing ploy.
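As a back-of-envelope illustration (the numbers are invented for the example): double the pixel count, halve the stored file, and each pixel keeps only a quarter of the bits it had.

    # Hypothetical baseline: 2 MP sensor, 800 KB stored JPEG.
    pixels, stored_kb = 2_000_000, 800
    bpp_before = stored_kb * 1024 * 8 / pixels
    # "Upgrade": 4 MP sensor, but only 400 KB stored.
    bpp_after = (stored_kb / 2) * 1024 * 8 / (pixels * 2)
    print(bpp_before / bpp_after)   # -> 4.0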

Comment Re:No P&S camera (Score 5, Insightful) 778

It's next to impossible. Phones need to be very small, lightweight and damage-resistant; the electronics need to be exceedingly low-power; and the camera electronics and the radio transceiver electronics can't interfere with each other.

That last requirement means that if you use digital devices that produce analogue signals, the resolution on the ADC has to be so crappy that the RFI from the radio doesn't screw up the picture AND so that the voltage swings when a call comes in or an alarm goes off or what have you can't throw off the ADC.

The low-power requirement means no fancy, power-hungry logic: the software zoom and other floating-point logic won't be terribly high-precision, and the image compression algorithm will have to skimp on quality.

The size and damage-resistance requirements affect what sort of lens you can use, how rigid the structure has to be, and how much abuse the user can inflict before the image quality drops. Even for a disposable standalone camera, it's practical to put in some quite acceptable optics; in a phone, there simply isn't the room.

Even when such devices are of a size comparable to that of the phone, you've got to remember that the camera is sans radio (or radios, for phones that have Bluetooth and/or WiFi and/or AM/FM tuners as well as the standard phone radio), sans keyboard, sans quite a bit of the space-hungry stuff that phones either need or have as "features".

Comment Re:I have to say, I am depressed... (Score 1) 208

Inherent in the idea that Firefox prohibit malicious code from breaking into an extension and causing it to reformat /dev/hda1 is the idea that the extension itself cannot reformat /dev/hda1, as it is impossible for Firefox to know what will or will not cause an extension to do something malicious.

Indeed, if an extension can do anything, it is possible to write an extension that allows an external program to control your computer with the privileges of Firefox AND (this is the important part) it is ALSO possible for a buggy extension to give such control of your computer with those same privileges.

If an extension can do absolutely anything, without any restriction, so can malicious code. The compromise suggestion was to have privileged operations in a distinct process. An extension overall can then still do anything, but a given extension component cannot. (You didn't comment on that - did you not see it, or did you prefer to find something bad to say?)
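To make that compromise concrete, here is a minimal sketch of the broker pattern (Python, purely illustrative - real Firefox extensions are not structured this way, and the operation names are invented): the extension process can only ask; the privileged process answers only for whitelisted operations.

    import json

    ALLOWED_OPS = {"read_bookmarks", "open_tab"}   # hypothetical whitelist

    def broker_handle(request_line):
        # Runs in the privileged process. The unprivileged extension
        # process sends one JSON request per line; anything not on the
        # whitelist is refused, so even a fully compromised extension
        # cannot ask for "reformat /dev/hda1".
        try:
            request = json.loads(request_line)
        except ValueError:
            return json.dumps({"error": "malformed request"})
        if request.get("op") not in ALLOWED_OPS:
            return json.dumps({"error": "operation not permitted"})
        return json.dumps({"ok": True, "op": request["op"]})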

Compartmentalizing extensions would, yes, make life much harder for extension writers. It would force them to be disciplined, code properly and not incorporate stupid security flaws. This would eliminate a lot of stupid coders, but I'm having a hard time seeing that as a bad thing.

If that were unacceptable, if you had to have the bad coders, then is sandboxing really so terrible? I honestly can't think of too many things Java application writers can't do, and Java is sandboxed. Forth programmers don't seem to have too many problems either - it's a very popular language for developing very low-level code like BIOSes - and yet Forth also runs entirely in a compartmentalized virtual machine.

Hell, one could argue that any Trusted OS (ie: an OS that runs on top of a very thin security OS that provides all the operations that have security implications) is essentially an OS running in a sandbox. It might be harder to run the latest *ix games or applications under Trusted Solaris than it is under Linux, but unless you can name something you CANNOT run AT ALL, I'm inclined to believe that the limitation you speak of simply does not exist.

So if Firefox sandboxed extensions then it might need to provide some extra functionality via extensions to the existing API. Doesn't sound too horrible and it's certainly not fatal to developers.

So we definitely have multiple ways of improving security without preventing extension writers from doing what they want -- the only thing improving security would impose is HOW extension writers did things. Again, is that a bad thing?

I would rather see the death of bad code than see the death of Firefox because it got a reputation for being worse than IE on security. Particularly if the reputation was not due to Firefox per se but because extension writers were drunk or lobotomized at the time.

Comment Re:I have to say, I am depressed... (Score 1) 208

There's really no excuse for Firefox to allow at least some of the more common security flaws - or, at the very least, to allow those flaws to cause problems.

First, sandboxing of extensions should limit what problems can be caused.

Second, a lot of errors are caused by overflowing buffers - a problem that could be limited by the use of stretchy buffers or bounds-checking malloc implementations, or by not allowing direct access to the heap.

Third, Firefox (and indeed all programs) should run on the principle of least privilege. Where some specific subset of program functionality requires significantly greater privilege than the rest, run that subset as a different thread or process at a different level of privilege. By extension (bad pun, I know), extensions could also be run as a different thread or process with even fewer rights - see the sketch below. (OSes that don't allow programs to shed rights might be a problem, though.)
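A minimal sketch of that last point (POSIX-only, and assuming the parent process starts with enough privilege to shed some; 65534 is the traditional "nobody" id, and a real system would allocate an id per extension):

    import os

    def run_with_fewer_rights(untrusted_main):
        # Fork, shed privileges in the child, then run the untrusted
        # code; the parent keeps its rights and just collects status.
        pid = os.fork()
        if pid == 0:
            os.setgid(65534)   # drop group first -
            os.setuid(65534)   # after setuid we'd lack the right to setgid
            untrusted_main()
            os._exit(0)
        _, status = os.waitpid(pid, 0)
        return status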

Comment Re:Meh. (Score 1) 221

Oh, it's practically a given that the non-linearity won't be the same at all scales. But complex behaviour can be produced by very simple non-linear systems - the Mandelbrot Set being the best-known example - so the presence of non-linearity merely creates a problem of practical computability rather than a problem of mathematical computability.

(Remember, to be computable in the mathematical sense, the algorithm has to complete in finite time. Which can mean anywhere from a few picoseconds to an hour after the heat-death of the Universe. To be practical, though, the model must produce results within the time the results are useful. To be commercially practical, it also has to produce results faster than other methods of getting those results.)
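For a sense of how little machinery "complex behaviour from a simple non-linear system" actually requires, the entire Mandelbrot iteration fits in a few lines (Python, just for illustration):

    def mandelbrot_escape(c, max_iter=100):
        # Iterate z -> z*z + c; the set of starting points c that never
        # escape has the famously intricate boundary.
        z = 0j
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:
                return n        # escaped: c is outside the set
        return max_iter         # never escaped: c is (probably) inside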

So we're looking at nested non-linear systems, no matter what starting point we're using.

Let's start with a bottom-up approach.

In the biological world, each cell has multiple mechanisms running in parallel where each mechanism is non-linear. The cell itself is a non-linear construct of these. There are different types of interconnect and these are also non-linear, so any network of cells is a non-linear construction of non-linear components. The brain has topological constraints, but unless there's grounds for believing those constraints to fundamentally alter the maths, the maths should be independent of implementation details.

This says we're looking at a nesting 3 deep. So we're looking at a chaotic system in which potentially all of the parameters are themselves chaotic systems in which potentially all of the parameters of that are also chaotic systems.
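A toy version of that 3-deep nesting, with logistic maps standing in for the real mechanisms (which they certainly are not - this is purely to show the shape of the construction): each level's control parameter is driven by the output of the level below it.

    def logistic(x, r):
        return r * x * (1.0 - x)    # the textbook chaotic map

    def nested_step(x1, x2, x3):
        # Three chaotic systems, each one's parameter modulated by
        # the system "inside" it - loosely, mechanism -> cell -> network.
        x3 = logistic(x3, 3.99)
        x2 = logistic(x2, 3.5 + 0.49 * x3)
        x1 = logistic(x1, 3.5 + 0.49 * x2)
        return x1, x2, x3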

What else do we know? We know that the lowest-level systems are fundamentally unchanged from how they were 3.5 billion years ago when cellular life first arose. They may be chaotic but the building-blocks are all very simple. The only real internal changes have been in the organization of the building-blocks. All other changes within cells deal with interactions and mathematically interactions are on a different level.

Most of those lowest-level systems are common to heart cells, skin cells and brain cells. Now, this will include communication mechanisms, and those we DO have to consider. Basic housekeeping that is a product only of the cell being biological can be ignored. Systems specifically activated in neurons and NOT common across all cells also have to be considered, even if they are housekeeping, as state is persisted in neurons by means of such housekeeping.

Now, the mechanics of these functions aren't what's important. What's important is what they do to the logic of a neuron to make it capable of data processing.

The cell itself is a network of these. In standard computer network terms, you're looking at the equivalent of a multicast-capable, routing-capable ad-hoc network of moderate size. This is just for a single neuron; we're not even up to networking these things. Actually, strictly speaking, it's multiple such networks: in biological cells, you've got independent chemical and electrical paths, with different latencies and different bandwidths.

Unless there is firm evidence that this is an implementation detail that does not alter the specification, I believe that it is wisest to assume it DOES alter the specification, that signal delays and other signal characteristics are important. Some variables from iteration X of the system are fed into iteration X+1, but others are fed into iteration X+N (where N can't be guaranteed to be a constant). This is what makes it a chaotic system of chaotic systems rather than merely a bigger chaotic system.
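A sketch of what that delayed feedback looks like in a toy map, with a deque as the delay line (the real delays are of course neither constant nor this tidy):

    from collections import deque

    def delayed_map(steps, delay=7):
        # x[t+1] depends on x[t] AND on x[t-delay]: part of the state
        # re-enters the system N iterations later.
        history = deque([0.1] * delay, maxlen=delay)
        x, out = 0.2, []
        for _ in range(steps):
            lagged = history[0]                        # the X+N feedback
            x = 3.9 * x * (1.0 - x) * (0.5 + 0.5 * lagged)
            history.append(x)
            out.append(x)
        return out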

Now, the network of cells is basically more multi-path networking where again different types of interconnect have different properties. Further, not only are the nodes in the network effectively mobile and multicast, but the number of nodes is variable.

(We can ignore the number of connections a given neuron has by looking at the superset of functions exhibited by all types of cell in the brain, whether neuron, glial cell, or whatever, and by treating data as multicast to interested parties in a group rather than as point-to-point.)

Since a network of networks is essentially just a network, there may be optimizations you can make. It may also turn out that some of the functions that are unique to "brain cells" really are just implementation details, that the same result can be produced with a simpler model.

Ok, that's starting from the lowest level and working up. What happens if we start at the highest level and work down (the classic comp sci approach)?

Well, the brain consumes and generates far more data than the senses can produce or the muscles can use. Therefore, really, the I/O has much more to do with synchronizing with an external reality than with what the brain is doing internally.

As James Burke pointed out in his original Connections series, what the brain perceives itself as having perceived and what the senses are actually recording can be very different.

Ergo, at the highest level, it is reasonable to regard the brain as a virtual world simulator in which the brain has a point-of-presence in that simulation on which it bases its actions. This fits with what is known of mental disorders.

Disorders that reduce the connectivity of the brain (such as the two hemispheres being isolated from each other) produce multiple points of presence (and therefore multiple viewpoints).

Synaesthesia doesn't just produce an appearance of misdirected data (which would be the case if it were merely a switching error to the wrong processing unit in the brain); it produces something the person cannot distinguish from reality. The tidiest way to represent this is the person being aware of a virtual reality that has been so altered, rather than the person being directly aware of anything external.

But what is this virtual reality made of? It's made of smaller units in the brain processing I/O where the bandwidth between units is improbably low. Thus, each smaller unit is the same as the whole. (Self-similarity in action.) Each component is a VR, where the larger VR is built from the interactions of those VRs. However, the total bandwidth between components is greater than the total bandwidth between the larger VR and the outside world. (Thus, it exhibits a property of fractals in which reducing the scale increases the complexity.)

So we're looking at a system that exhibits self-similarity and some interesting fractal properties. So it's safe to say it's a chaotic system. But because the non-linearity varies between layers, it's a chaotic system of chaotic systems. And since the properties are also true of the regions of the brain and cells, the nesting is again 3 deep with the possibility of optimizing to 2 deep.

(Since very simple unicellular creatures react to a stimulus that is indirectly processed internally, cells are themselves VR systems.)

So we get the same result with both approaches, which is good. We can also show that the simplest units are relatively simple systems that produce very complex results. This is also good: it means that whilst the brain might require a nested system of time-delayed equations, where each level has a few hundred billion terms, to represent it mathematically even in this nested form, the terms are all very simple and many are very similar.

Mathematically, that's not going to be hard to formalize. Computationally, it would likely be easy enough to code. Practically, although this model requires nothing a Turing Machine cannot do in finite time, this model is useless unless you don't care that one second of brain function will take years (more likely decades) to compute on anything in the Top500 list today.
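For what it's worth, the nested form can be written down compactly (my own notation, nothing standard):

    x_i(t + \Delta t) = f_i\left( \{\, x_j(t - \tau_{ij}) : j \in N(i) \,\};\ \theta_i(t) \right)

where N(i) is the set of inputs to unit i, the delays \tau_{ij} need not be constant, and the parameters \theta_i(t) themselves evolve under a system of exactly the same shape one level down (and their parameters under another, for the third level). Each f_i is simple; there are just absurdly many of them.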

Comment Re:Meh. (Score 1) 221

People doing useful and interesting research frequently post on Slashdot, so I don't see what your problem is. It doesn't take a genius to mathematically model a brain; it's just not something people have bothered much with doing.

Some things people have tried to do are build models of isolated compartments of the brain (bad idea), simulations of some poorly-specified upper-level functions of the brain (even worse idea), and discrete/binary simulations of individual neurons assumed to be stateless and/or wired in a rigid topology (talk about dense).

The first is like trying to build a model of one part of a Mandelbrot set. A complete waste of time, since the maths doesn't work that way. The second is stupid because without a good specification, there's nothing meaningful to simulate. And since neurons are neither discrete, stateless nor in a fixed network (even adult brains have a surprisingly dynamic topology), all you get is a simulation of something that never existed instead of a simulation of the thing you want.

Why do people do these things? Because they're very doable. Neural networks are a doddle to code up, logic chains and decision trees are trivial on a computer, and since more people are interested in medical applications than AI, understanding compartments is far more practical than understanding the brain itself.

In short, people want to be paid far more than they want to discover, especially since discovering the mathematics of the brain won't do you any good as it'll be well outside the capacity of any machine out there (including the 100 million core one) to do anything sensible with such a model. Nobody likes inventing things that can't be used for another 50-100 years.

However, the fact that nobody WANTS a real mathematical model of the brain doesn't change the fact that the brain is an extremely simple device (mathematically-speaking). The unwritten part of the challenge is that they want a mathematical representation they can use and it is that which does not exist and will not exist for at least the next 50 years, simply because of the technology. The maths is a non-issue.

As far as Navier-Stokes is concerned, there are no reasonable assumptions. Particles do not move with a uniform speed; molecular speeds follow a distribution (Maxwell-Boltzmann, for an ideal gas). Well, almost, as that distribution has an infinitely long tail, but in physics you're bounded: particle speeds are strictly between 0 and c and cannot take on either value or anything outside that range.

In practice, you don't see too many Bose-Einstein Condensates or even hypersonic particles when boiling water for an egg. However, even in a pan of cold water there'll be water molecules moving fast enough to leave the liquid, and even when the water is boiling there'll be water molecules with no more kinetic energy than a slug. Not many, but there will be some.

That's your first problem, because the first simplification is to decide what sort of range of speeds particles are likely to move at. The reality is "all of them, at some point or another".
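You can see that tail directly by sampling; the standard trick is that each velocity component is Gaussian, so the speed is the magnitude of three Gaussian draws (3.0e-26 kg is roughly one water molecule - the numbers are back-of-envelope):

    import math, random

    def sample_speed(mass_kg, temp_k):
        # Draw one molecular speed from the Maxwell-Boltzmann
        # distribution via three Gaussian velocity components.
        k_b = 1.380649e-23                        # Boltzmann constant, J/K
        sigma = math.sqrt(k_b * temp_k / mass_kg)
        vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
        return math.sqrt(vx * vx + vy * vy + vz * vz)

    # Cold water, ~280 K: mean speed ~570 m/s, but draw enough samples
    # and a few molecules come out at well over twice that.
    speeds = [sample_speed(3.0e-26, 280.0) for _ in range(100_000)]
    print(max(speeds))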

The second problem is the differentiation between compressible and incompressible fluids. In the same way that speed is non-uniform, density is also non-uniform, which means all fluids will have a mix of the two characteristics.

The third problem, as I've already pointed out, is that the system is chaotic. This means you need an infinitely fine grid and an infinitesimal time interval between iterations, neither of which is possible. However, chaotic systems don't necessarily improve as you improve resolution, which is why CFD is often far more coarse-grained than you might expect. It has nothing to do with context, or even compute power; it has to do with experimentally finding a resolution where the results are similar (through the property of self-similarity) to what you might get if you could work at infinite resolution.

Self-similarity is NOT the same as identical, though, which is why most competent hardware engineers treat CFD as being a first approximation at best. The real thing will always behave differently, sometimes very differently.

Finally, there are no "reasonable" assumptions. If you need to simulate a system computationally (a 1000 MPH car, the movement of gases in a fusion reactor, etc.), it's because it's damn-near impossible to build the thing without a first approximation and you simply have no other way of getting one.

If you really could make "reasonable" assumptions about a system, you probably wouldn't be using CFD. Aircraft designers use CFD very minimally because it gives such cruddy results. The moment they get the chance, they're into the wind tunnel, where they'll analyze air movement using smoke, thin strips of foil, sound, and just about anything else they can think of.

Aircraft are an area where you CAN make "reasonable" assumptions. And what happens? The assumptions won't work on the computer models so they build real models instead.

Comment Re:Math cannot exist before wind. (Score 1) 221

Ideal circles do not exist, that is true. So what? The idea that you need an ideal form is Platonic (it comes from Plato's cave analogy). Does there need to be some ideal, in order for an approximation to exist? (Well, C++ and Smalltalk programmers can skip that question.)

Let's try a different example. Let's go for the Second Law of Thermodynamics. Statistically speaking, it's universally true. There are no exceptions on the macro scale of space/time. If you were to examine a small patch of quantum foam over a few picoseconds, it would be lousy even as an approximation.

Does this mean the Second Law is wrong? No, not really. Does this mean the Second Law is artificial, as it's only an approximation? No, I think you'll find the early universe obeyed it long before there were any observers.

So what does it mean? It really doesn't mean very much at all. It means you're asking the wrong questions and not getting useful answers.

As for the quotes: Albert Einstein was, by his own account, no great mathematician, and I think Benoit Mandelbrot (amongst a few thousand other chaos and fractal specialists) would beg to differ on the ineffectiveness of mathematics in biology. It's hardly mathematics' fault that biologists are lousy mathematicians.

Comment Re:Math cannot exist before wind. (Score 1) 221

My argument is that it is quite immaterial as to whether Pi is universal or not. In any given specific space, there will be -a- constant that denotes the ratio between the circumference and the diameter. The fact that there exists a constant for a given space (whether or not there exists the same constant for all spaces) means that the property of the ratio is fundamental.

Iff* the same constant holds for all spaces, then Pi as we know it is -also- fundamental, but I am unsure this has been proven. My statement that we can split off what is artificial from what is fundamental is unaffected.

*Maths notation: If and only if

The only way it can be proven that mathematics is wholly artificial is to prove that the set of all mathematical "things" that are fundamental is equal to the empty set - ie: that there is nothing, not a single property, not a single result, that is true everywhere, including Goedel's Theorem. If even something as simple as Goedel's Theorem is universal, then there exists at least one part of mathematics that is not invented but is wholly natural.

Now, here we run into a problem. If Goedel's Theorem is not a universal result but an artifice, then it is also false, because it would then have to be possible to construct a counter-example, and the theorem states that no counter-example of this kind can exist.

Surely that seals the argument right there and then. Those who argue mathematics is wholly artificial must be arguing that Goedel's Theorem is false. All other cases do not prohibit the theorem from being true. Thus, if there is sound reason for believing the theorem true, there is sound reason for excluding the notion that mathematics is an artifice.

Comment Re:Math cannot exist before wind. (Score 2, Insightful) 221

No, the bicycle is equivalent to a number base or a mathematical system. It is an implementation OF an underlying system (in this case, Newton's Laws), but Newton's Laws would still remain exactly the same whether Newton - or indeed bicycles - had ever existed.

The definition is also immaterial, as that too is an implementation detail. The underlying principle would remain unaltered whether the definitions of circumference, diameter or pi had ever been developed.

You are confusing the overlaid system with what it overlays. I'm saying you don't need to. Your argument is that the overlaid system is artificial, an invented product. I'm saying you're entirely correct on that. But what I am also saying is that what the product overlays, what is beneath the terms, the dynamics and the fancy Greek lettering is not artificial but exists whether it is known to exist or not.

The problem with assuming the two layers are the same is that you run into the Anthropic Principle - the universe is the way it is because it produced people capable of seeing it. Let us, for a moment, assume the Many Worlds interpretation of Quantum Mechanics is correct. Then there are universes OTHER than the one we see, and the argument falls down. The same would be true if the model of a multiverse as a foam (where each universe is a bubble in that foam) is correct.

But if you're on this site, you should be familiar with layering anyway. Maths - the fundamental, overarching thing that is shown in all mathematical systems that exist, will exist or have ever existed - is a Layer 1 concept in the OSI model. Concepts like numbers and other fundamental but artificial building blocks are Layer 2, which makes Group Theory a Layer 2 switch. Anything and everything that MUST be true because of something in Layer 2 is arguably also Layer 2, which would include Goedel's Theorem. Anything that is true only in a specific implementation of mathematics is Layer 3 or above.

Does using an OSI representation make it easier to see how not all maths is the same?

Comment Meh. (Score 3, Insightful) 221

Mathematically modelling the brain would seem to be a very trivial problem. The problem is that there's a lot of brain to model. I've posted (admittedly non-rigorous) mathematical models of the brain on Slashdot before, but nary a grant check from it. Bah.

Computational fluid dynamics for foams, liquid crystals, et al, isn't any harder than for anything else. The equations are chaotic by nature, but chaotic systems can be well-behaved on aggregate under certain conditions. CFD as generally done relies on some specifically hand-picked special case or cases being universally true. They never are, which is why most CFD differs from how systems actually behave in practice.

If you were to treat CFD as a problem in chaos theory, rather than as isolated collections of imperfect examples of special cases, there would be no problem. It is always when engineers try to take shortcuts and oversimplify the maths to make it easy on themselves that they run into problems. They should be locked up for their own safety. If you want to really annoy them, lock them up with some aerogel foam.

The problem with chaotic systems is that they are sensitive to initial conditions, which means the only way to get "correct" results is to use infinite precision and zero step sizes. That isn't useful, but it is a good way to annoy experts in CFD.
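The sensitivity is easy to demonstrate with even the simplest chaotic map: start two runs a trillionth apart and they bear no resemblance to each other within about fifty iterations.

    # Two logistic-map trajectories whose starting points differ
    # by one part in 10^12 diverge to completely different values.
    x, y = 0.4, 0.4 + 1e-12
    for n in range(200):
        x = 3.9 * x * (1.0 - x)
        y = 3.9 * y * (1.0 - y)
        if abs(x - y) > 0.1:
            print("diverged at iteration", n)
            break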

This leaves two options - use very very big, very very fast computers (the option used by F1 teams), or find an equivalent problem you CAN solve (the idea behind transforms).

Ok, does chaos look like a good place to use transforms? If you could identify and classify the Strange Attractors in the system, could you do anything useful? Probably not, at least not in the "solving the problem" sense. Chaos is fully deterministic, but it is utterly unpredictable. The only solution is the whole solution.

What knowing the Strange Attractors might tell you is how to vary the precision and step-size to get the best results for a given level of compute power. But it's going to be all raw horsepower from thereon out.

The best way to invest money on such work is to design a co-processor that performs a handful of fairly high-level maths functions directly (optimized purely for speed, not physical or logical space) so that you can do Navier-Stokes almost at the level of raw hardware rather than through clunky software. Then cluster the living daylights out of the co-processor.
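For a sense of what the hard-wired operation would look like, here's the shape of such an inner kernel - a bare 2D diffusion step as a hypothetical stand-in (real Navier-Stokes adds advection and a pressure solve):

    def diffuse_step(u, nu=0.1):
        # One explicit time-step of 2D diffusion: a fixed, tight stencil
        # applied at every grid point - exactly the kind of operation a
        # single-purpose core could evaluate in one pass through memory.
        n = len(u)
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                lap = (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]
                       - 4.0 * u[i][j])
                new[i][j] = u[i][j] + nu * lap
        return new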

It's necessary to optimize commodity hardware for space, because chip real-estate is expensive. However, if you're building what is basically a SOP (single-operation processor) for a dedicated market that can afford things like Earth Simulator, the only time you care about space is when it impacts speed.

Ideally, if the speed of light wasn't an issue, you'd want each bit in the output to be produced by wholly independent logic, duplicating the input bits as necessary to accomplish this. In practice, you'd probably want to start with that conceptually but in reality have something that was somewhere between that and a highly compressed form. Too parallel and the delays in communication exceed the benefits from the parallelization.

But this is all obvious. Anyone here who has done multi-threading or any other form of parallelization knows about synchronization issues and communication overheads. It's even one of the biggest chunks of any course on the subject of parallel design. There's nothing new there, certainly nothing "unsolved".

But, yeah, a well-designed Navier-Stokes co-processor would likely give you greater accuracy and greater performance than the modern pure software solutions. Especially those using ugly protocols to do the communications.

If Intel can conceptualize putting 80 cores on a single chip, it would seem reasonable enough to imagine modern fabrication methods putting at least a couple of hundred dedicated Navier-Stokes processors into the same space. Since the input for an iteration would be based on output from that and other processors, there's no need for cache - just on-board generic high-speed memory and some communication lines (a la the Transputer). With the savings there, you might even squeeze in a few more cores.

If people are willing to spend gigantic sums of money on 100,000,000 core computers to do CFD work, I can see no serious problem with them spending what is surely only a few percent more at that scale on building a dedicated SOP cluster that's tens of thousands of times faster and infinitely easier to extend.

Comment Re:Math cannot exist before wind. (Score 5, Interesting) 221

I would claim that the ratio of a circle's circumference to its diameter is independent of being observed, or indeed of there being an observer at all. I would also claim that the laws of geometry, probability and topology are universal and also do not depend on the existence of observers, let alone their ability to perform maths.

Radioactive decay follows an exponential decay curve. It will have done so long before anyone could add, let alone handle irrational numbers like e.
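The curve in question, for the record:

    N(t) = N_0 e^{-\lambda t}

- and there e sits, whether or not anyone has invented a notation for it.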

This puts me firmly in the category of maths being discovered, not invented. Mathematical tools, however, are invented and not discovered. I consider these to be quite different things. If you were to imagine an alien lifeform on some distant world, they'll have identical maths, but their experience of it, the way they treat it, the systems they use - those will all be unique to them, because those are inventions and not anything fundamental to maths itself.

In a simpler example of the same concept, we can use ancient Greek maths today even though they didn't have a concept of zero and had (to modern eyes) very alien views on the way maths worked. We can use ancient Greek maths because the results don't depend on any of that.

We can use Roman results, too, despite the fact that their numbering system doesn't really follow a number base in any way we'd understand. It doesn't matter, though, because the important stuff all takes place below such superficial details. Even more remarkable, we can read many of the numbers written in Linear A, even though we can't read the language itself and know very little about the culture or people.

None of this would be possible if what lay under maths was invented. It's very hard to rediscover lost inventions, as there's many ways of producing similar results. But when you can rediscover lost number systems with comparative ease - well, doesn't that tell you there has to be something a bit more universal to it?

(I won't get into parrots being able to discover the notion of zero, but it's again pertinent as it's an example of a universality that transcends the invented language it's described in.)
