
Comment Crime is falling with lead levels from gasoline (Score 1) 118

"So this is the choice before us: We can either attack crime at its root by getting rid of the remaining lead in our environment, or we can continue our current policy of waiting 20 years and then locking up all the lead-poisoned kids who have turned into criminals."

Comment Google's mistake: ignores pyramid of users (Score 1) 262

"Odds are, people who use advanced features are more likely to turn data harvesting off. Thus making those metrics questionable. Then again, anyone who is opposed to being monitored is not part of the Google's target audience."

Sounds likely, AC. But here is Google's mistake. There is a sort of hierarchy or pyramid of users for many applications. In rough percentages:
* 1% of users might become superusers making plugins and doing all sorts of fancy things with an application.
* 10% of users might become knowledgeable about what you can do with an app and provide support and encouragement for their friends (and also rely on the 1% for support and new features like plugins).
* 89%, all the rest, just use the app and ask the other 11% for help.

If you decide to design your platform for the 89%, you alienate all the people up the pyramid who provide free support and evangelism for the product and who guide the product in new directions. Eric von Hippel at MIT has done studies showing that most (around 80%) of innovations are customer suggestions, so you also cut yourself off from customer-led innovation.

I'm really going to miss "close tabs to the right" which I use frequently (and yes, I have telemetry turned off too). If a plugin can't restore that feature, removing it is definitely going to reduce my liking of Chrome (which I use on a Chromebook). Now, maybe by itself that one change won't make me abandon Chrome (not that there are many great alternatives, given the Firefox/Mozilla fiascos) -- but, add up enough of these misguided decisions, and the odds will continue to change.

Comment Re:In Other Words (Score 1) 412

You did not read what I said, and are inverting the logic. Yes, the Universe manifestly DOES have a few "simple" rules a.k.a. the laws of physics, and HAS produced rocks. But that is literally irrelevant to the point that there is nothing about rocks -- or, if you prefer, the laws of physics and the medium in which they operate -- that appears "designed". The laws are regular mathematical laws and we have no evidence for some sort of highly imaginative "field" of possible mathematical law sets and possible Universal media obeying them that a designer can select from to create the design, let alone evidence for the insane recursion relation in complexity and design implicit in the existence of such a designer.

Any sentient "designer" of a Universe plus their Super-Universe within which it builds the Universe has more complexity (and greater information content) than the Universe that they designed and built. If complexity implies design, then every designer and their Universe must have a still more complex designer in a still more complex Universe. If you wish to assert that this recursion terminates anywhere, so that you can call the designer at that level "God" or "The Master Simulation Programmer", then you no longer assert that complexity necessarily implies a designer, in which case there is no good reason to apply the rule at all even in the first instance without evidence!

Quite aside from this, rocks specifically do not exhibit any of the characteristics we generally associate with designed things, and we have quite detailed mathematical models for the probable history of rocks that do not require or benefit from (in the specific sense of being improved by) any assumption of active design. Neither, frankly, do the laws of physics.

As I pointed out in another thread, the following is a classroom example of incorrect logic:

All men are mortal.
My dog is mortal.
Therefore, my dog is a man.

All computational simulations are discretized.
The Universe is discretized (or not, see other replies).
Therefore, the Universe is a simulation.
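The invalidity of this argument form (the undistributed middle) can even be checked mechanically: exhibit one model where both premises are true and the conclusion is false. A minimal sketch in Python, with the example sets chosen purely for illustration:

```python
# One counterexample model refutes the form:
# "All S are D; U is D; therefore U is S."
simulations = {"game_of_life"}               # things that are simulations
discretized = {"game_of_life", "universe"}   # things that are discretized

premise_1 = simulations <= discretized       # all simulations are discretized
premise_2 = "universe" in discretized        # the Universe is discretized
conclusion = "universe" in simulations       # ...therefore it is a simulation?

print(premise_1, premise_2, conclusion)      # True True False
```

Both premises hold in this toy model while the conclusion fails, so the form proves nothing regardless of what the symbols mean.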

Your argument is even worse:

Rocks, that do not appear to be designed, can be designed anyway.
Therefore, we can never say that rocks do not appear to be designed.

Say what?

My dog, that does not appear to be immortal, might be immortal anyway.
Therefore we can never say that dogs are mortal.

Sure we can. What you might get away with is the assertion that there is a very small chance that some living dog (including my currently living dog, that isn't dead yet!) might turn out to be immortal. However, every single dog since wolves came out of the cold that was born more than thirty years ago is to the very best of our observational knowledge and theoretical knowledge of dog biology dead as a doorknob and every living dog that any of us have ever seen appears to be aging and we all understand how aging and disease and accidents all limit life. To assert immortal dogs you have to just make stuff up -- invent things like "dog heaven" where all dead dogs run free and have an unlimited supply of bones, or imagine that somewhere there might be a very lucky ex-wolf that failed to inherit an aging gene and that has never had a fatal disease or a fatal accident and that somehow has eluded our observational detection -- so far -- and (ignoring the second law of thermodynamics and the probable future evolution of the Universe based on the laws of physics) assume that that dog will somehow survive longer than the Universe itself probably will. Both of which are pretty absurd.

So I repeat, there is absolutely nothing about rocks that makes us think that they are designed. That does not imply that they might not be designed after all, it is not a logical statement that rocks could not have been designed, it is an empirical statement that, just as dogs appear to be mortal (and not humans, however easy it is for dogs to make the mistake, especially around dinner time:-), rocks appear not to be designed. When I find a rock on the ground as I walk along, I do not quickly look around trying to figure out who designed the rock because it looks so very much like a made thing. Quite the opposite. And, I can almost guarantee, so do you!

Comment License management tools: good, bad, or ugly? (Score 1) 240

From me in 2001 posted to gnu.misc.discuss: https://groups.google.com/d/ms...

I definitely do not want to see a future world of only proprietary
intellectual property where basically everything I want to do requires
agreeing to endless licenses and royalty payments, such as described in
"right-to-read". My wife and I released a six person-year effort under
the GPL (a garden simulator application) around 1997 ...
so I am obviously sympathetic to encouraging free sharing of some
information and allowing derived works of some things.

However, on a practical basis, living in our society as it is right now,
any software developer is going to handle lots of packets of information
from emails to applications to program modules under a variety of
explicit or implied licenses. If a developer is going to do this in a
way that makes his or her work most useful to the community (under the
terms he or she so chooses), proper attention must be given to the
licensing status of all works received and distributed, especially those
that form the basis for new derived works to be distributed. Note that
even in the case of purely GPL'd works, one still needs to know that a
user contributing an extension to a GPL'd work was the original author
and/or he or she has permission to distribute the patch (if say an
employer owns all the contributor's work).

My question is: should software tools, protocols, and standards play a
role in easing this required "due diligence" ...
license management work (at least as far as copyright alone is
concerned)? ... Usually license management tools (e.g. for music or DVDs) are thought of
as keeping the end user from doing something they might wish to with
content they have paid for. Does it make sense as well to look at
license management tools from the perspective of allowing
(non-technical, non-lawyer) casual users to do things they otherwise
might not be legally sure they can do? Similarly, would such tools help
someone filter out proprietary content with licenses he or she does not
approve of (and would this provide incentives for artists to release
free versions if they want to reach people through those filters)? And
most of all, would such tools allow creative people to be more certain
that they could legally use certain freely licensed materials found on
the internet in making derived works? Would this provide a legitimate
defense of due diligence to minimize copyright infringement suit costs
(or reduce related liability insurance costs)?

For example, when you get an email it could come with a machine-readable
license (e.g. "redistribution OK in entirety", "for your eyes only",
"open content", "GPL"). Likewise, what if every file or zip archive came
with a specific machine-readable license? In effect, this would make the
license a fundamental part of the work.
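A minimal sketch of what such a machine-readable tag might look like with today's tools, using Python's standard email library; the header name "X-License" and the SPDX-style value are illustrative assumptions, not an established standard:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "author@example.org"
msg["To"] = "maintainer@example.org"
msg["Subject"] = "Patch for the garden simulator"
# Hypothetical machine-readable license header attached to the work itself.
msg["X-License"] = "GPL-2.0-only"
msg.set_content("Here is my contribution, released under the GPL.")

print(msg["X-License"])  # GPL-2.0-only
```

A receiving mail client (or build tool) could then read the license off the message without any human interpretation.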

In part, you may think, perhaps correctly, that this is the "right-to-read"
nightmare. Such information could be used to prevent you from making
copies of things you might want to copy (legally or not) under some
notion of "fair use" ...
if the system enforced the license by preventing say you forwarding or
quoting an email that comes in with a license of "for your eyes only" or
with no explicit license at all. Perhaps the feeling that copy
protection systems will prevent fair use underlies much of the
resistance to such automation. It is not my point in this note to
advocate either for or against the enforcement of licenses by the end
user's system. Obviously though, enforcement would certainly be made
easier by machine-readable licenses, and this is a problematical issue
as far as "fair use" is concerned.

On the other hand, license management tools might force everyone to be
explicit about licenses for things they redistribute. Some authors would
explicitly choose free or open licenses. That might mean that when you
get free software (or open source software or anything else) you would
know what you at a minimum can and can't do with it. That clarity and
sense of peace of mind might help promote use and more derived works.

For example, even if MIT puts its course material on-line, that does not
necessarily mean you can make derived works from them or even share them
with a friend (other than by telling them to look at the MIT site). Yet,
without a good free license management system, that fact might not be
obvious to users and a truly free course library may never arise. (Note:
I don't know whether the MIT courses will permit derived works, so MIT
may surprise me.)


Being explicit about licensing (especially in a machine-readable way)
may have great benefits. For one thing, you might decide to set your
email receiver to reject email from most people unless it came with an
acceptable (to you) license. There might be a "license negotiation"
protocol at the start of all transmissions of all works.

For example:
Sender: PERMISSION TO SEND "Windows NT Source" BY "misguided kiddy";
Receiver: WHAT LICENSE?;
Receiver: REJECT;

or perhaps instead:
Sender: PERMISSION TO SEND "GNU/Linux kernel mods" BY "Linus Torvalds";
Receiver: WHAT LICENSE?;
Sender: LICENSE: GPL-2;
Receiver: ACCEPT;
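The two exchanges above amount to a simple receiver-side policy check. A sketch of that logic, where the reply names and the allow-list are illustrative assumptions rather than any existing protocol:

```python
# Hypothetical receiver-side policy for the license negotiation above.
ACCEPTABLE_LICENSES = {"GPL-2", "GPL-3", "MIT", "CC-BY-SA"}

def negotiate(title, author, declared_license):
    """Return the receiver's reply to PERMISSION TO SEND."""
    # Receiver: WHAT LICENSE? -- an undeclared license means REJECT.
    if declared_license is None:
        return "REJECT"
    # Accept only if the declared license is on the receiver's allow-list.
    return "ACCEPT" if declared_license in ACCEPTABLE_LICENSES else "REJECT"

print(negotiate("Windows NT Source", "misguided kiddy", None))        # REJECT
print(negotiate("GNU/Linux kernel mods", "Linus Torvalds", "GPL-2"))  # ACCEPT
```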

If you ran a peer-to-peer file server, such a protocol might help ensure
only legally redistributable works were redistributed on it (making it
legally safer to run one). Obviously, people could lie about the license
status of works when they inject them into the system -- but the point
is, it forces such people to explicitly lie, as opposed to just being
careless or neglectful. (Obviously, carelessness and neglect could
affect the system as well if the person injecting the information is
just confused, hopefully other factors like community awareness could
minimize this.) Nonetheless, it might give users a legal defense from
extreme copyright infringement awards if they screen incoming data. This
in turn might make insurance for such situations affordable. Defenders
of such a file sharing system (in court) could then admit to there being
a few "bad apples" and take efforts to route out such illegally
contributed material in the same way people now use virus scanners or
other filters. This might make it more likely such systems would
prosper, with other attendant benefits for democracy or an open society.

To be clear: I personally am not for supporting sharing of material that
for legal or copyright reasons can't be shared (it's the law; change the
law peacefully if so desired). I instead want to make sure that it is
easy to share material that it is legal to share, and likewise I want to
ensure it is easy to make derived works with clear legal titles from
material it is legal to make derived works from.

In the case of software, with such a system, when you build free
software packages (or "open source" ones), you could ensure that all
contributions were under an acceptable license, because that licensing
information would be already there in a machine-readable form (perhaps
including information pointing to works and their licenses from which
you made derived works). Presumably, if someone emailed you a
contribution using such a system, you could see at a glance from the
email record what license it (or the code part) was under. In addition,
information could also come along that was the equivalent of a statement
of either originality for the work or a statement the author had
permission to incorporate other works they used into the new work under
the license chosen. Such information might include an audit trail of all
works and licenses used by various authors in making the final product." ...
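The "audit trail of all works and licenses" mentioned at the end could be as simple as a provenance record attached to each work. A sketch, with all field names invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class WorkRecord:
    """Hypothetical machine-readable provenance for a contributed work."""
    title: str
    author: str
    license: str                                        # e.g. SPDX-style tag
    derived_from: list = field(default_factory=list)    # parent WorkRecords

patch = WorkRecord("close-tabs patch", "contributor@example.org",
                   "GPL-2.0-only")
release = WorkRecord("garden simulator 2.0", "maintainer@example.org",
                     "GPL-2.0-only", derived_from=[patch])

def licenses_used(work):
    """Walk the audit trail: every license involved in the final product."""
    found = {work.license}
    for parent in work.derived_from:
        found |= licenses_used(parent)
    return found

print(licenses_used(release))
```

A package build could then verify "at a glance" that every license in the trail is acceptable before distributing a derived work.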

Comment Why this is immoral and should be illegal (Score 1) 38

"Foundations, other grantmaking agencies handling public tax-exempt dollars, and charitable donors need to consider the implications for their grantmaking or donation policies if they use a now obsolete charitable model of subsidizing proprietary publishing and proprietary research. In order to improve the effectiveness and collaborativeness of the non-profit sector overall, it is suggested these grantmaking organizations and donors move to requiring grantees to make any resulting copyrighted digital materials freely available on the internet, including free licenses granting the right for others to make and redistribute new derivative works without further permission. It is also suggested patents resulting from charitably subsidized research research also be made freely available for general use. The alternative of allowing charitable dollars to result in proprietary copyrights and proprietary patents is corrupting the non-profit sector as it results in a conflict of interest between a non-profit's primary mission of helping humanity through freely sharing knowledge (made possible at little cost by the internet) and a desire to maximize short term revenues through charging licensing fees for access to patents and copyrights. In essence, with the change of publishing and communication economics made possible by the wide spread use of the internet, tax-exempt non-profits have become, perhaps unwittingly, caught up in a new form of "self-dealing", and it is up to donors and grantmakers (and eventually lawmakers) to prevent this by requiring free licensing of results as a condition of their grants and donations."

Longer version: http://pdfernhout.net/on-fundi...

Comment Yeah, I remember. So 15 yrs ago I wrote this: (Score 1) 11

"Consider again the self-driving cars mentioned earlier which now cruise some streets in small numbers. The software "intelligence" doing the driving was primarily developed by public money given to universities, which generally own the copyrights and patents as the contractors. Obviously there are related scientific publications, but in practice these fail to do justice to the complexity of such systems. The truest physical representation of the knowledge learned by such work is the codebase plus email discussions of it (plus what developers carry in their heads).
    We are about to see the emergence of companies licensing that publicly funded software and selling modified versions of such software as proprietary products. There will eventually be hundreds or thousands of paid automotive software engineers working on such software no matter how it is funded, because there will be great value in having such self-driving vehicles given the result of America's horrendous urban planning policies leaving the car as generally the most efficient means of transport in the suburb. The question is, will the results of the work be open for inspection and contribution by the public? Essentially, will those engineers and their employers be "owners" of the software, or will they instead be "stewards" of a larger free and open community development process?"

And also, earlier, this to Ray Kurzweil in 2000:
"... It will be difficult for you to change your opinion on this because you have been heavily rewarded for riding the digital wave. You were making money building reading machines before I bought my first computer -- a Kim-I. But, I think someday the contradiction may become apparent of thinking the road to spiritual enlightenment can come from material competition (a point in your book which deserves much further elaboration). To the extent material competition drives the development of the digital realm the survival of humanity is in doubt.
    Still, you are a bright guy. If you study ecology and evolution in more detail, I think you may change your conclusion, or at least admit the significant probability of a bad outcome, and that we should plan ...
    If you do change your opinion in the future, and wish to fund work related to helping ensure humanity survives the birth of the digital realm, please remember me.
    MOSH to the end I guess!"

The Bayh-Dole Act is a big part of that disaster (letting universities privatize gains and tightly control use of what they make with public funds rather than insisting that publicly funded research go into the public domain).

Anyway, I'm still trying to limp along making glacially slow progress doing free stuff (Twirlip/Pointrel/etc.) on GitHub in increasingly vanishing spare time... My latest small increment:
"High Performance Organizations Reading List"

Comment The politics of science funding (Score 1) 249

Hi meta-monkey! I'm making a "meta" comment on the social-financial framework around battery (or any) science. :-)

Just look at the whole "cold fusion" or now "LENR / solid state fusion" controversy and fight over funding and recognition. The idea that a solid-state metal lattice can induce hydrogen atoms (on its surface, in a micro-crevice, or otherwise absorbed somehow) to behave differently than when hydrogen is in a gas is still heresy requiring immediate excommunication after vilification by a mob of virtue-signalling "disciplined minds" whose social standing and, worse, grant funding is threatened by the idea.
"In retrospect, I have concluded that much of the blame for the "cold fusion war" -- and it certainly has been just that -- stems from a vituperative campaign against the field with deep roots at MIT, specifically at the MIT Plasma Fusion Center. Not exclusively in that lab, however."

Ironically, about thirty years later:
"The Cold Fusion 101: Introduction to Excess Power in Fleischmann-Pons Experiments course will run again on the campus of Massachusetts Institute of Technology (MIT) over the IAP winter break Tuesday through Friday Jan. 20-23, 2015."

Fusion via cavitation also falls into that category of heresy (but may be emerging), as does power via hydrinos (which may also just be LENR in disguise).

So, that's a third option beyond "it works" and "it does not work" -- whether it works or not, your science career gets trashed because you even talked about such an idea, let alone seriously tried to do an experiment about it. And your career gets trashed because of the *politics* of science funding. Science is a human enterprise after all, and humans being humans...

Comment Implication: no next-door relatives or neighbors? (Score 1) 137

Kudos to the kid for saving his mom, but it is also kind of sad how isolated and dependent on institutions and technology so many of us have become... So much so, we just take it for granted that a four year old would have no neighbor or relative nearby to turn to.

Perhaps I was just lucky to grow up (lower-ish) middle class in a suburb in the 1960s with siblings, many stay-at-home moms as friendly neighbors all around, as well as lots of kids playing in the street. That seems to be a world that hardly exists anymore in the USA for any child... Other countries may be more likely to still have that kind of circumstance...

And more wealth seems to only make it worse -- see for example:
"The Problem With Rich Kids"
"In a surprising switch, the offspring of the affluent today are more distressed than other youth. They show disturbingly high rates of substance use, depression, anxiety, eating disorders, cheating, and stealing. It gives a whole new meaning to having it all."

"The Culture of Affluence: Psychological Costs of Material Wealth"
"Evolutionary psychologists have suggested, furthermore, that wealthy communities can, paradoxically, be among those most likely to engender feelings of friendlessness and isolation in their inhabitants. As Tooby and Cosmides (1996) argued, the most reliable evidence of genuine friendship is that of help offered during times of dire need: People tend never to forget the sacrifices of those who provide help during their darkest hours. Modern living conditions, however, present relatively few threats to physical well-being. Medical science has reduced several sources of disease, many hostile forces of nature have been controlled, and laws and police forces deter assault and murder. Ironically, therefore, the greater the availability of amenities of modern living in a community, the fewer are the occurrences of critical events that indicate to people which of their friends are truly engaged in their welfare and which are only fair-weather companions. This lack of critical assessment events, in turn, engenders lingering mistrustfulness despite the presence of apparently warm interactions (Tooby & Cosmides, 1996). ...
      Physical characteristics of wealthy suburban communities may also contribute to feelings of isolation. Houses in these communities are often set far apart with privacy of all ensured by long driveways, high hedges, and sprawling lawns (Weitzman, 2000; Wilson-Doenges, 2000). Neighbors are unlikely to casually bump into each other as they come and go in their communities, and children are unlikely to play on street corners. Paradoxically, once again, it is possible that the wealthiest neighborhoods are among the most vulnerable to low levels of cohesiveness and efficacy (Sampson, Raudenbush, & Earls, 1997). When encountering an errant, disruptive child of the millionaire acquaintance next door, neighbors tend to be reluctant to intervene not only because of respect for others' privacy but also, more pragmatically, because of fears of litigation (e.g., Warner, 1991)."

It used to be we lived in tribes and then still close-knit communities...

Daniel Quinn proposes we try to go back to that way of life:
"New tribalists believe that the tribal model, though not absolutely "perfect," has obviously stood the test of time as the most successful social organization for humans, in alignment with natural selection (just as well as the hive model for bees, the pod model for whales, and the pack model for wolves). According to new tribalists, the tribe fulfills both an emotionally and organizationally stabilizing role in human life, and the dissolution of tribalism with the spread of globalized civilization has come to threaten the very survival of the human species. New tribalists do not necessarily seek to mimic indigenous peoples, but merely to admit the success of indigenous living, and to use some of the basic underlying tenets of that lifestyle for organizing modern tribes, with fundamental principles gleaned from ethnology and anthropological fieldwork.
      Quinn argues that modern civilization is not working and will ultimately self-destruct, as evidenced by escalating worldwide trends such as environmental collapse, social unrest caused by hierarchal social structures, discrepancy between the rich and poor, development of ever-greater weapons of mass destruction, unsustainable human population growth, unsustainable agricultural practices, and unsustainable resource exploitation of all kinds. He claims that if we are to find a way of life that does work, we should draw our basic principles from human societies that are working or have worked in the past. ..."

But maybe smartphones used by kids are just something new and better than the tribe or friendly neighborhood? Gotta wonder...

Comment Re:In Other Words (Score 1) 412

No arguments. Simulations similarly are generally not "deterministically" scripted. They are constantly rolling (metaphorically) pseudorandom numbers to generate non-repetitive game play. But rocks or gameplay that is "generated" are still generated according to an algorithm that was designed, and I was using the term in this broader sense.
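A toy illustration of that broader sense, assuming nothing about any real game engine: every (metaphorical) die roll yields a different rock, yet the generator itself is entirely designed, and the same seed always reproduces the same rock.

```python
import random

def generate_rock(seed):
    """Procedurally 'roll' a rock: varied output, designed algorithm."""
    rng = random.Random(seed)   # the metaphorical dice, seeded
    return {
        "size_cm": round(rng.uniform(1, 100), 1),
        "color": rng.choice(["grey", "brown", "red", "black"]),
        "shape": rng.choice(["rounded", "angular", "flat"]),
    }

# Different seeds give non-repetitive rocks...
print(generate_rock(1))
print(generate_rock(2))
# ...but the design shows through: the same seed gives the same rock.
assert generate_rock(42) == generate_rock(42)
```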

Comment Re:In Other Words (Score 1) 412

I was pointing out (possibly badly) that his argument was a formal fallacy of the general sort: "All men are mortal, my dog is mortal, therefore my dog is a man". "All simulations are discretized. The world we observe is discretized. Therefore, the world we observe is a simulation." Same argument, substitute men/dog/mortal and simulations/world/discretized (or whatever). This is simply an incorrect argument in symbolic logic completely independent of the meanings of the symbols per se, unless I am misremembering my formal symbolic logic.

ELSEWHERE I pointed out that we do not, in fact, know if the world is discretized and that even if it is as far as spacetime is concerned, that doesn't mean that it is discretized in amplitude/phase space. And I am teaching quantum mechanics every Monday, Wednesday and Friday at the moment, so I'm not exactly ignorant about this.

Comment Re:In Other Words (Score 1) 412

Shall I show you my dry stack walls and my mortared fieldstone walls? Besides, this doesn't really impact the argument. The argument is: "Things exist that appear to have functions in a system of interlocked causality. If I were going to simulate this particular system of apparent interlocked causality, I would do so by using things that have these functions so that the result looks like this system of interlocked causality. Therefore, this apparent system of interlocked causality is a simulation because it works the way a simulation of it that I built would work!"

This is an utterly absurd argument. Begging the question doesn't begin to describe it. This is just the argument for God by design dressed up in computer clothing with a side order of Solipsism, and leaves all of the same questions begged and not even acknowledged as "problems". OK, so we are a simulation. Even discretized, the Universe has the information content of at least 10^256! (that's factorial, not exclamation point, all the permutations of all the ways "stuff" can be entered into the apparent cells). Or, of course, as I argued, it could have far, far less information content because all it really has to do is provide a few gigapixels of my apparent visual field, a handful of less dense informational channels for sound, taste, smell, and touch -- certainly less than a terabyte of information -- and update it according to a set of classical physics rules plus an interactive script. It doesn't even have to do more than one, because if the Universe is a simulation, you could be and probably are an NPC being presented to just me in my VR bodyset -- assuming that in some more fundamental reality I have an actual body and am not MYSELF a self-aware NPC in a simulation being run for things that look like giant amoebic blobs swimming in liquid helium near the cores of gas giants (or in some more bizarre environment as we have no possible way of even speculating about the physics of the world in which the host computer supposedly lies).

We could come damn near building this now -- it's an easy extrapolation of our first rudimentary VR sets. We likely couldn't make it high enough resolution yet, but that's just a matter of scaling of work underway and doesn't require anything like Planck length discretization.

Then there is that computer that we are all running on. One way or another, its information content has to be at least as large as the information content of the Universe being simulated, or Shannon has lived in vain. Furthermore, it has to have an extremely high degree of organization. Indeed, the information content in the physical hardware of any computer ever built -- all the way down to your hypothetical Planck scale -- is almost infinitely larger than the content of its "computational" working memory and processors. Indeed, if one accepts the assertion that real quantum phases etc are real numbers, and meditate on the continuum hypothesis and aleph null and aleph prime, it is infinitely larger. It takes billions to trillions of atoms to represent a single switch, and many switches and other adjuncts to perform even a simple, crudely discretized computation simulating real number arithmetic.

So if you REALLY take the simulation theory seriously, you have to have a Universe somewhere -- somewhere, somewhen, somehow, there has to be a physical basis for the computation, energy and entropy with a set of rules that encodes this massive program -- that has a much, much, much, much.... larger information content than the Universe being simulated. My laptop (plus a remote supercomputer plus a network) can play World of Warcraft and provide me with a very nice simulation shared with a few hundred others (more like a few tens in any given perceptual field representation) based on coarse-grained objects and carefully built SURFACE representations, because the giant snapping turtles are only a shell thick and have no actual internal guts. Even this crude a simulation, skin deep and lacking real depth and transmitting only a shared visual space with added sound effects that don't even try to "share" a sound space, requires ever so much more physical information to represent it.

Now if I were designing a Universal simulation, I would make it self-representing. That is, I would make it its own computer. This makes it information-theoretically compact. The program being run is "the Laws of Physics", and the data being manipulated represents nothing but itself; it isn't stored on something else. Now the simulator for the Universe is the exact size and exact structure of the Universe being simulated, Shannon is very happy, and hey, even the Planck length -- if real -- is now relevant. The only real problem left is that now it isn't a simulation, it is reality itself. And a minor secondary problem -- even if this is how I would design it (if only because I can look around me and see that it works), that doesn't mean that it was designed. One cannot look at something with a given degree of complexity and say "Wow, that's complex! It must have been designed in order to be that complex" without contradicting your own argument with the implicit assumption that there is an even MORE complex layer of reality supporting the designer and the medium in which the design is realized. The only empirical conclusion that is justified and consistent is that what we see is what we get. Reasoning by analogy isn't reasoning at all, either logically or empirically.


Comment Re:In Other Words (Score 5, Insightful) 412

1. Due to limited computational resources, the simulated universe would be granular or "quantum".
2. To limit computation, reality would be held in a fuzzy probabilistic "superposition" state until it is actually observed, similar to how a GPU running OpenGL will skip the generation of hidden polygons.
3. The maximum speed of information transfer would be finite, to limit the propagation of changes through the universe.

All of these are actually true in our universe, ergo, we are very likely a simulation.

And this, sir, is why you really need to consider taking a course in formal logic and maybe learn about logical fallacies.

None of these assertions, even if they were true in some useful way, constitute a statistical or logical argument for the conclusion. This is true at an openly embarrassing level. Suppose one were designing a rock because you wanted to build a rock wall and for some reason didn't want to use actual rocks. Due to the cost of raw materials, rocks would be finite in size. Because you don't want the wall to be boring, rocks would come in many different colors, sizes, and shapes. Because you don't want the fake rock wall to fall down, rocks would be solid, as opposed to liquid, glass, plasma, or gas.

All real rocks are actually finite in size, come in many different sizes, shapes, and colors, and tend to be solid to the point where "rock solid" is a standard metaphor in human speech. Ergo, all rocks are obviously designed.


Teleological arguments are pure bullshit, which is what the physicist in question is happy to point out, as am I (also a physicist).

When one actually looks at rocks or Universes, there is an utter lack of either evidence or a plausible, consistent, evidence-linked chain of reasoning that would raise the probability that the notion/hypothesis "Rocks are designed" or "We are living in a computer simulation" is/are true from its rightful place (so far) of 0.0000.....(0 until you get bored with writing 0's)...001 to something with a tiny smidgen of actual measure.

These are not independent assertions, by the way. If you take the assertion that the Universe is a simulation seriously, then rocks ARE designed objects, even though there is absolutely nothing about rocks to suggest that they actually are designed.

One could then deconstruct the truth of each of your statements individually. For example, there is nothing in quantum theory that limits computational requirements -- quite the opposite. Indeed, quantum theory is built on top of complex, non-discrete numbers in every quantum textbook ever written -- C-numbers. That is, quantum objects are described in general by (at least) TWO real numbers, not just one. If one attempts to represent the quantum state of a very simple -- the simplest -- two-level quantum system such as |\psi> = A|-> + B|+>, one discovers that it requires two continuous degrees of freedom and that the states of the system map nicely onto points on the surface of a sphere (the Bloch sphere). If you try to describe the most general quantum state of N such two-level objects, it requires 2^N or so continuous degrees of freedom. Consequently, we are limited in our solutions or simulational studies of fully correlated quantum systems to a tiny, tiny handful of e.g. "two-level atoms" -- perhaps 20 to 30 of them -- because one very quickly runs out of computational resources to perform even very small general computations.
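For concreteness, here is a small sketch (my own, in Python; the function name is just illustrative) of how fast a dense classical representation of N fully correlated two-level systems blows up -- 2^N complex amplitudes, two double-precision reals each:

```python
# Sketch: bytes needed to store the full 2**N complex state vector of
# N two-level quantum systems. Each amplitude is one complex number
# (two 8-byte doubles = 16 bytes), and there are 2**N amplitudes.

def state_vector_bytes(n_qubits: int, bytes_per_complex: int = 16) -> int:
    """Memory for a dense complex state vector of n_qubits systems."""
    return (2 ** n_qubits) * bytes_per_complex

for n in (10, 20, 30, 40, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n:2d} two-level systems -> {gib:,.3f} GiB")
```

At 30 systems you already need 16 GiB just to hold the state, and every additional system doubles it -- which is why "perhaps 20 to 30 of them" is where general simulations stop.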

Second, you are building a whole mountain of assumptions into what appears to be a misinterpretation of the Planck length. To quote Wikipedia's page on this topic:

There is currently no proven physical significance of the Planck length...

so you are quoting something for which there is no direct evidence as evidence in a bad teleological argument for something for which there is no evidence at all.

You also don't address the actual numbers associated with the Planck length/time. If the Planck length \ell_p is of order 10^{-35} meters, the visible Universe (alone) is ~10^{11} light years across, and a light year is ~10^{16} meters, then there are 10^{(11+16+35)*3} = 10^{186} cubic Planck lengths in the visible Universe. Making a Planck time out of \ell_p/c, we pick up another factor of roughly 10^{62} time steps, so 10^{62} x 10^{186} ~ 10^{248} discrete space-time points. That's a hell of a lot of data, and one has to compute all of this for all of these time slices.
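A quick back-of-envelope check of these figures (my own sketch, not part of the original argument), using the same round order-of-magnitude numbers:

```python
import math

# Order-of-magnitude values only, matching the round numbers in the text:
PLANCK_LENGTH = 1e-35                 # meters
LIGHT_YEAR    = 1e16                  # meters
UNIVERSE_DIAM = 1e11 * LIGHT_YEAR     # ~10^11 light years across
PLANCK_TIME   = PLANCK_LENGTH / 3e8   # ell_p / c, in seconds
UNIVERSE_AGE  = 1e11 * 3.15e7         # ~10^11 years, in seconds

planck_volumes = (UNIVERSE_DIAM / PLANCK_LENGTH) ** 3
time_steps     = UNIVERSE_AGE / PLANCK_TIME

print(f"cubic Planck lengths: ~10^{math.log10(planck_volumes):.0f}")
print(f"Planck time steps:    ~10^{math.log10(time_steps):.0f}")
print(f"space-time points:    ~10^{math.log10(planck_volumes * time_steps):.0f}")
```

Roughly 10^186 cubic Planck lengths times 10^62 Planck-time steps: a simulator would have to represent something like 10^248 discrete space-time points.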

Now speaking only for myself, if I were building a simulation of the Universe, it would NOT look like this microscopically. That's because when one plays a game with a physics simulation, all one has to do is present a perspective view into a purely classical representation of various surfaces, plus some sounds, plus some sundry nervous sensations. Humans can't see microscopic things anyway; even with a microscope we don't see microscopic things, we see images that our brains plus some cognitive work identify as microscopic things. I don't have to make a virtual world that has actual simulations of individual viruses to simulate the nervous sensations of "feeling viremic". Reality need never be more than skin deep, perception deep. I'll point out that empirically (there's that word once again) ALL actual reality simulations present precisely this sort of a Universe BECAUSE it doesn't require an enormous representation. When a Dark Iron dwarf in WoW throws a bomb at you, the simulator doesn't compute the quantum chemistry of a gunpowder explosion all the way down to the Planck scale, it just manipulates a few pixels and sprites according to a very simple model of what an explosion LOOKS LIKE.

Similarly, it is really irrelevant as to what the "speed of propagation of causality" is in a simulation. It doesn't even matter how fast your computer is, since you are just stacking up large arrays of numbers with some index you are identifying with some sort of discretized timestep. And don't get me started about relativity and simultaneity and the ordering of events separated by spacelike intervals and COMPUTATIONS of all of these things -- suffice it to say that your argument itself is in fact naive and incorrect per point as well as collectively.

Could the world of our experience be a simulation? Sure. Of course it could. And pink unicorns COULD fart rainbow colors. There is nothing fundamentally contradictory about either one, especially when you get to make up the terms that aren't being contradicted.

It's just that we haven't a shred of actual evidence that either assertion is true. Or that the Universe is a made/designed thing. Or that we could somehow DISTINGUISH a designed "real physical" Universe from a designed "simulation, unreal" Universe from the real, undesigned, physical Universe we appear to live in. Teleological arguments are just as dumb in religion as they are in the assertion that we are all living inside "the Matrix" in reality. How could you even know?


Comment IBM could still be saved -- see my reading list (Score 1) 301


The most important item for a company trying to re-invent itself is the first one, and it relates to "shoplifting all of the spare hours":

"Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency (by Tom DeMarco)"

He says there is a tradeoff between efficiency (meeting old needs quickly) and effectiveness (meeting new needs with flexibility & responsiveness).

DeMarco points out that it is precisely the middle management layer that needs some slack time the most to be able to innovate in ways that lead to organizational learning. But everyone needs slack time to take part in that too. IBM is likely going in the completely wrong direction if it is reeling people in to presumably over-schedule them even more.

I last worked for IBM in Research about sixteen years ago myself... The project I worked the most on was the IBM Personal Speech Assistant (a forerunner to Siri and such). The team was very proud that Lou asked for one for his office.

But -- I had enough "slack" then (after a year of hard work) that when my then supervisor (his site above) went on a two-week vacation, I built a speech-activated display wall out of used ThinkPads which looked a lot like a Jeopardy board. (A coworker said it was a good thing I was not in the lab when my supervisor first walked in after his vacation. :-) I always wonder, though, if years later that spark led to the idea of Watson being on Jeopardy?

I still think a conversational display wall is a good idea to pursue further. And I still want to make a programming language tailored to being edited easily via voice recognition. Of course IBM has long since sold off ViaVoice... And while there was some slack in Research then around 2000, I was told it was nothing like what was there in the 1970s and 1980s, when a lot more creativity was possible. So, even then, these ideas were unlikely to be pursuable.

And also around 2000, on teamwork at Research, one thing I heard at lunch was someone saying something like "We hire the top people from the most competitive schools and then wonder why they have trouble getting along..." There is a certain lack of diversity as well from such hiring practices.

Comment Mainframes have been surprisingly resilient (Score 1) 301

I'm all for distributed systems, but for many big companies, mainframes still make a lot of economic sense:
"While some believe that smaller distributed servers provide the agility needed in today's fast-moving cognitive era, the IBM mainframe is the preferred solution for many of the world's most competitive businesses, including:
92 of the top 100 banks worldwide
70%+ of the world's largest retailers
23 of the world's 25 largest airlines"

And see also, on a smaller scale:
"IBM designed IBM i as a "turnkey" operating system, requiring little or no on-site attention from IT staff during normal operation. For example, IBM i has a built-in DB2 database which does not require separate installation. Disks are multiply redundant, and can be replaced on line without interrupting work. Hardware and software maintenance tasks are integrated. System administration has been wizard-driven for years, even before that term was defined. This automatic self-care policy goes so far as to automatically schedule all common system maintenance, detect many failures and even order spare parts and service automatically. Organizations using i sometimes have sticker shock when confronting the cost of system maintenance on other systems.[1]"

In general:
"Why on Earth Is IBM Still Making Mainframes?"
"Business is more mobile than ever. Yet however lightweight those mobile devices feel in your pocket, they can still make good use of a big, powerful machine chugging away in a back room, not going anywhere."

Mainframes are also more than just hardware. Mainframes are in a sense a culture of 100% uptime and reliability.

That said, distributed computing continues to improve... And distributed computing culture continues to improve...

As to the original article, IBM is still shooting itself in the foot with this move away from supporting remote work... What IBM needs to be creative is not colocation but "slack" in the Tom DeMarco sense:
"Why is it that today's superefficient organizations are ailing? Tom DeMarco, a leading management consultant to both Fortune 500 and up-and-coming companies, reveals a counterintuitive principle that explains why efficiency efforts can slow a company down. That principle is the value of slack, the degree of freedom in a company that allows it to change. Implementing slack could be as simple as adding an assistant to a department and letting high-priced talent spend less time at the photocopier and more time making key decisions, or it could mean designing workloads that allow people room to think, innovate, and reinvent themselves. It means embracing risk, eliminating fear, and knowing when to go slow. Slack allows for change, fosters creativity, promotes quality, and, above all, produces growth."

That was the great thing about IBM Research when I worked there around 2000 -- a bit of slack to be creative and good work/life balance. But, IBMers even then said the rest of IBM was not like Research...

Slashdot Top Deals

"The number of Unix installations has grown to 10, with more expected." -- The Unix Programmer's Manual, 2nd Edition, June, 1972