Technology

Autonomic Computing 152

pvcpie writes: "The New York Times has a story today about Autonomic Computing, described as "a biological metaphor suggesting a systemic approach to attaining a higher level of automation in computing," and they have published a paper (pdf) on the topic. Apparently some universities have already signed up for Autonomic Computing projects; more info is available on the website and in the NYT article. The story also appeared on CNET."
This discussion has been archived. No new comments can be posted.

  • Karma Whoring (Score:2, Informative)

    by Skynet ( 37427 )
    IBM has done quite a bit of research on autonomic computing. IBM just keeps getting cooler and cooler, IMO. Although I have to say that this quote frightens me:

    "Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead"

    Anyways, here is the link to their Autonomic Computing R&D site:

    http://www.research.ibm.com/autonomic/ [ibm.com]
  • Consider the autonomic nervous system: It tells your heart how many times to beat, checks your blood's sugar and oxygen levels. It monitors your temperature and adjusts your blood flow and skin functions to keep it at 98.6 degrees. But most significantly, it does all this without any conscious recognition or effort.

    Can you say "SysAd"?

  • This is a fascinating topic for research, speculation, and sci-fi, but, like AI, it is probably a search for the holy grail. (Of course, Indy (courtesy of Mr. Spielberg) showed us that it really does exist!!)
    • Maybe. But why should people in offices have to worry about how their computers work? Companies are losing huge amounts of time to employees trying to get their computers to work right. Most of them don't realize that when software crashes, it's NOT their fault.

      The nuts and bolts of computing should be out of sight of these users. MS came up from computer hobbyists, geeks who like to play with computers. IBM has always been about business.

      In the beginning, users had to learn software, as in how to respond to dumb prompts to make it work correctly. We still follow that model, even if the choices are more varied and powerful. It's past time that the software should be able to adapt itself to the user, not the other way around. Just having to know how to type is more than computers should require.

      It may not be possible, but it's certainly worth researching.
  • The future (Score:5, Interesting)

    by micromoog ( 206608 ) on Tuesday October 16, 2001 @09:54AM (#2435805)
    This is absolutely the future of commercial computing. Each new release of each major vendor's flagship database product is increasingly self-tuning, and these systems generally run better than their manually-tuned predecessors (due in part to the shortage of skilled workers).

    The more that can be done automatically, the more of the IT staff's precious time can be dedicated to more complex tuning tasks, and/or new development. This will make IT more effective, not obsolete.
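
    As a rough illustration, "self-tuning" mostly boils down to a feedback loop: measure a metric, compare it to a target, nudge a knob. A minimal sketch in Python, assuming hypothetical hooks (get_cache_hit_rate, set_cache_size_mb) rather than any vendor's real tuning API:

    import time

    TARGET_HIT_RATE = 0.95      # desired fraction of reads served from cache
    STEP_MB = 64                # adjustment per pass
    MIN_MB, MAX_MB = 256, 8192  # sanity bounds so the tuner can't run away

    def tune_forever(get_cache_hit_rate, get_cache_size_mb, set_cache_size_mb):
        """Grow the buffer cache when the hit rate sags, shrink it when there
        is slack; the DBA sets the goal, the loop does the fiddling."""
        while True:
            hit_rate = get_cache_hit_rate()
            size = get_cache_size_mb()
            if hit_rate < TARGET_HIT_RATE and size < MAX_MB:
                set_cache_size_mb(min(size + STEP_MB, MAX_MB))
            elif hit_rate > TARGET_HIT_RATE + 0.02 and size > MIN_MB:
                set_cache_size_mb(max(size - STEP_MB, MIN_MB))
            time.sleep(60)      # re-evaluate once a minute

    The real products are, of course, a much larger collection of loops like this plus the statistics to feed them, but the principle is the same.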

  • by gmkeegan ( 160779 ) <(gmkeegan) (at) (yahoo.com)> on Tuesday October 16, 2001 @09:54AM (#2435808)
    Using the human body as an analogous self-maintaining system, would there be a corresponding lifespan to such a system? A point at which the body can no longer repair and regulate itself and simply quits.

    For that matter would there be analogous doctors, hospitals and life support systems? How about gymnasiums for keeping in shape? (and I ask that last one only half-jokingly...)

    Gordon
    What do you think you are doing, Dave?
    • by Alien54 ( 180860 ) on Tuesday October 16, 2001 @10:05AM (#2435863) Journal
      You need to remember that even a single cell has the equivalent of autonomic functions that keep it going. It probably does not do the equivalent of deciding "shall I digest food now?"

      This means that we would need modular units in a network, say, that would be autonomic. The desktop PCs would have to be autonomic before you could get the network to truly be so.

      It would be a whole new way of designing computer software, and I doubt that some of the OSs out there have code bases that would be viable in this regard.

      Note that you can do this sort of thing as an optical illusion. You can pretend that everything is doing all right, when in fact it is going to hell in a handbasket. The vaporware diagnostic that merely pretends everything is all right, or the repair that causes more damage than was present in the first place.

      But I think we have had enough of that over the past decade or two to know to avoid it. And, of course, the guilty have not been named because everyone knows who they are already.

      • "You can pretend that everything is doing all right, when in fact it is going to hell in a hand basket. The vaporware diagnostic that merely pretends everything is all right, or the repair that cause more damage than was present in the first place" Are you describing computing or the current state of affairs?
    • The human body has a life expectancy because the processes (chemical reactions, physical stress on tissues) it undergoes during its "maintenance" damage the "hardware".

      The computer's life expectancy doesn't change much due to the self-tuning properties, but of course these self-tuning properties will put more stress on the machine (more CPU, disk, and I/O usage in general), and hardware fails after some time. It may take long, but it fails.

      Now consider a self tuning database system which includes a shelf of backup tapes and a robot arm to switch tapes (or CDs) as part of its maintenance. Moving parts add to the stress, which reduces "life expectancy".

      Just to add to the mess, human life expectancy is also related to environment conditions. Being hit by a meteor or burned in a fire is just as bad for a computer as it is for a human.

      • The question of why we age is still open, but the accumulation-of-damage theory doesn't move me. It doesn't explain why mice live a few years, dogs 10, cats 20, humans 80, and Galapagos tortoises hundreds. (all numbers approximate)

        There is also a phenomenon called apoptosis, which is the spontaneous death of seemingly healthy cells. It is part of the body's self-regulation -- cancer seems to be, in some sense, a failure of the apoptosis mechanism.

        So we may have software vendors building, instead of planned obsolescence, apoptosis into products. They could even make it a feature -- if nothing ever dies, evolution stops.
        • The most likely cause of aging and death is that evolution never bred for longevity. By the time things start winding down -- late 30's to early 40's -- a dozen children have been squeezed out, thus establishing "evolution" as highly successful for that individual -- and with it only the genes that guaranteed "youthfulness" into the late 30's to early 40's.

          Yes, continued breeding into the 40's, 50's, 120's, etc. would indeed add to the lifespan via genetics, but the effect is relatively minor and must overcome the energetic, female-seeking, up-and-coming next generation to actually get the females.

          Furthermore, we do probably gain the slow advantage of increased age from evolution. Do any other mammals live as long as humans, even given ideal nutrition? Close, yes, but we should be dying in our 40's or 50's of old age after a healthy life based on our size, not 70's and 80's. For thousands of years, some people, mainly kings and the wealthy, have lived to old age even by modern standards, and they have continued to breed up into that great age, passing along their genes. I will bet this is the source of what appears to be the relatively unnaturally long healthy lifespan vis-a-vis other mammal species.

          We'll probably get the first big forays into extended life (well, second after good nutrition) via replacement parts. More $$$ for acephalous cloning experiments now! After that, chemistry (or other stem cell research) into preventing/reversing brain breakdown. More $$$ for cloning research now!

    • Lifespan? Hmmm. Where biological life is concerned, we're so far stuck with what we have. (If you think that two decades of backwards compatibility in the 80x86 has resulted in a barnacle-encrusted crufty system, how about two or three billion years of backwards compatibility for life on earth? :-) I don't see any a priori constraints that would make us design in things like those little cellular fuses, telomeres.

      As for gymnasiums, I can sympathize with future autonomous systems..."Aw, do I have to go to the health club and defragment?"

    • Remember HAL's last comment before being destroyed in 2010: "Will I dream?" Earlier in the movie, the arrogant Dr. Chandra told HAL's counterpart SAL9000: "Of course you will. All sentient beings dream. Perhaps you will dream of HAL, as I often do."

      Seriously, back in the early 70's I was working for some Ph.D. types whose goal was to mimic the human brain in a computer. I almost got fired for asking if that meant the computer would have to be offline 8 hours a night to dream, so it could reorganize its thoughts the way our brains apparently do during sleep.
  • by Anonymous Coward on Tuesday October 16, 2001 @09:55AM (#2435815)
    It reboots itself.
  • If, as IBM purports, "this new paradigm shifts the fundamental definition of the technology age from one of computing, to one defined by data..." then maybe they should drop the "computing" part of the name. Perhaps "autonomic data access" or "autonomic data systems" or even something wackier. Feel free to make suggestions... I'm sure the slashdot crowd can come up with some winners! ;-)
  • by sbeitzel ( 33479 ) on Tuesday October 16, 2001 @09:59AM (#2435836) Homepage Journal
    Look out, guys, they're performing a paradigm shift!

    While I think that designing system components with feedback capabilities is a neat idea, remember that doing this in a safe way requires actual computer science. Or at least extensive modeling. It's cool, sure, but it's going to be a while.
    • . . . remember that doing this in a safe way requires actual computer science.

      That's why it's encouraging that IBM is doing it, not some BS startup.

    • This is no paradigm shift. This is just more of what we've been working on for forty years. Think about dynamic page swapping. Think about routing protocols. Most of the work in the design of systems and networks has always been to get these things to automatically manage themselves, in real time.

      The focus on interactive computing in the last fifteen years has just kept attention away from the ongoing work in this area.

      But then, maybe people don't really want automation, maybe they'd really rather participate more, not less.
  • "IBM's proposed solution looks at the problem from the most important perspective: the end user's. How do I/T customers want computing systems to function? They want to interact with them intuitively, and they want to have to be far less involved in running them. Ideally, they'd like computing systems to pretty much take care of the mundane elements of management by themselves."
    So... they want to cut I/T staff by letting the computers manage themselves? This seems a very bad idea to me.
    I currently work as a sysadmin and I also have a lot of contact with end-users. Users like it when they have a problem, someone comes over, talks a little, and fixes the problem.
    They want to be helped by a person who acts nice to them; that's why I think you'll probably always need I/T staff.
    And from a more technical point of view: what if the computer fucks up? All these systems will probably "think alike", and if one computer decides that, for the sake of managing the PC, it's a good idea to wipe the hard drive... more will probably follow...

    must...go....home....to...much...work... ;-)
    • I think you read into that one a little wrong.

      They're saying that they want the computers to handle the mundane tasks so that the IT people can have more time for the more important tasks, like the person-to-person tasks. Do you know any IT worker who isn't overworked? This is designed to help alleviate their stress and give them more time for the more important things in life.

      At least in theory =)
    • Re:Hmmm... (Score:5, Informative)

      by saridder ( 103936 ) on Tuesday October 16, 2001 @10:13AM (#2435895) Homepage
      A program/database is only as good as the knowledge of the person who wrote it and the data that was put in. If a PC thinks that wiping its hard drive is the best way to fix itself, I'd blame the author who wrote the maintenance program. No expert would wipe a hard drive as a general fix for a PC, and would not write those instructions into the PC.

      In my opinion, part of an autonomous PC is being self-sufficient, not acting like a lemming and following other PCs just to follow.

      Plus, just as humans have a basic survival instinct, I think you'd write this instinct into the PC as well and not have it destroy itself (unless it was doing major harm to its master, etc. Remember Asimov's rules for robots).

      Finally, I agree humans will never be replaced as the final decision maker in fixing and running PCs, servers, networks, etc., but when I was a sysadmin, I'd have killed for PCs to be smart enough to do some of the basic, mundane, man-hour-eating, laborious tasks -- upgrading Service Packs when I told them all to do it, installing programs, etc. Then I could have done more fun stuff. Plus, when I had to fix a problem, people weren't glad to see me, because I was only there when something went wrong. Granted, they were happy someone was there to fix the problem, but all would have preferred that there was no problem in the first place (the PC fixed itself).
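
      A rough sketch of that "tell them all to do it" wish. The names here (installed_sp, install_sp, the host list) are hypothetical stand-ins for whatever agent would actually run on each PC, not any real management API:

      REQUIRED_SP = 3
      HOSTS = ["pc-accounting-01", "pc-accounting-02", "pc-reception-01"]

      def upgrade_all(hosts, installed_sp, install_sp):
          """Bring every PC up to REQUIRED_SP, skipping the ones already there."""
          for host in hosts:
              current = installed_sp(host)
              if current >= REQUIRED_SP:
                  print(f"{host}: already at SP{current}, nothing to do")
                  continue
              ok = install_sp(host, REQUIRED_SP)   # the agent on the PC does the work
              print(f"{host}: upgrade to SP{REQUIRED_SP}", "succeeded" if ok else "FAILED")
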
      • > No expert would wipe a hard drive as a general fix
        > for a PC, and would not write these instructions
        > into the PC.

        Hence any number of OS's where the equivalent of "rm *.* -R" never even so much as warned you before forging ahead. I concur. They were not experts -- they couldn't be. They were forging new ground. They were those who thought performing the first trapeze triple was a fancy thing for the future.

  • by bill_mcgonigle ( 4333 ) on Tuesday October 16, 2001 @10:01AM (#2435840) Homepage Journal
    My high school biology teacher must've said a thousand times, "Evolution proceeds towards what works, not towards what is best."

    The body works so well because it's highly highly highly redundant at the cellular level, not because there is a brilliant master control program controlling the most efficient implementation. You can't even imagine a number as big as the number of hormone receptors in your body.

    That kind of duplication in a computer system costs real money, and while a noble goal, people don't spend money on reliable systems, they buy Windows.

    This is a terribly useful approach on the battlefield, and the right thing to do once bandwidth and computational power are practically unlimited, but we're still in the stage of computing where people just want more features, reliability be damned. After all, Nimda follows an autonomic behavior.
    • After all, Nimda follows an autonomic behavior.

      There's a lot to that, actually. In all of computing so far, virii are the only programs that effectively self-maintain. This is of course due to their unique environment - not just indifferent but positively hostile humans and sentries to evade and destroy.

      People make a lot of fuss about things like this, and doubtless IBM will make real advances here due to just their huge resources, but most of these concepts are not new. Ditto these new automated soldier-things the Army are developing. Yes, doing it in a more complex and mission-critical environment is far more prone to error (whereas when a virus fails to replicate, it's not the end of the world for it - there are another few million still going), but we are not looking at a paradigm shift here.

      Virus writers have a lot to teach us about self-maintaining and -tuning programs - while despising the destruction they cause, I can't help but admire their design prowess.
      • > Virus writer have a lot to teach us about self-maintaining and
        > -tuning programs - while despising the destruction they cause, I can't help but
        > admire their design prowess.

        It seems you never looked at the code of virus programs. They are not self-maintaining or self-tuning. Most of the time, they are even written very badly and tend to crash in unknown environments. But for a virus, that doesn't matter, as long as it performs well enough in a common environment. The author usually doesn't care if it crashes the current host, ruins it, or just doesn't work if the virus gets confused. If a virus happens to find enough friendly computing environments that offer the exact conditions it needs, we'll hear about it on CNN. If not, it just won't spread.

        All in all, calling a virus self-tuning or self-maintaining is utter crap, usually the kind found in many articles about artificial life by Katz.
        • by Snootch ( 453246 ) on Tuesday October 16, 2001 @11:09AM (#2436139)
          It seems you never looked at the code of virus programs. They are not self-maintaining or self-tuning. Most of the time, they are even written very badly and tend to crash in unknown environments.

          Actually, I have, and I know that many are very amateurish, but you come across the occasional gem - I once found a very cunning polymorphic macro virus lurking around. Funnily enough, those are the ones that tend to do the least damage - correlation?
    • The body works so well because it's highly highly highly redundant at the cellular level, not because there is a brilliant master control program controlling the most efficient implementation.

      Well, yes and no. There is lots of redundancy in the human body (even perhaps too much for typical conditions nowadays in the case of the liver and kidneys) but there is also the autonomic nervous system, which is an active controller, and it's at that level that this is being proposed.

      To get serious reliability you do need redundancy (RAID, failover, etc.) but you also need load balancing and monitoring software which works at a higher level. If your disk system fails you're screwed, but if you get hit with a Slashdot effect you can also be screwed unless you can dynamically shift resources around.

      Of course, that kind of stuff is hard. Look at all the problems Linux itself has with out-of-memory conditions and all the different approaches that have been proposed for dealing with it. None is obviously the best way, and each has advantages and disadvantages. Of course, in all of them there's at least some overhead, so your point about capacity isn't completely irrelevant even at this level.
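
      A bare-bones sketch of that higher-level monitoring, assuming made-up hooks (probe, set_weight) into whatever load balancer sits in front of the boxes:

      def monitor_pass(nodes, probe, set_weight):
          """One pass of the watchdog: pull dead nodes out of rotation and
          split their share of the traffic among the survivors."""
          alive = [n for n in nodes if probe(n)]   # e.g. an HTTP health check
          if not alive:
              raise RuntimeError("every node is down; nothing left to shift to")
          share = 1.0 / len(alive)
          for n in nodes:
              set_weight(n, share if n in alive else 0.0)

      Run something like this every few seconds and the front end degrades gracefully instead of falling over; it does nothing, of course, about the disk-failure case, which is what the redundancy is for.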

    • My high school biology teacher must've said a thousand times, "Evolution proceeds towards what works, not towards what is best."

      Erm, sort of true. Evolution actually works to do what is 'best' in terms of the fitness function... i.e. seeks to maximise or minimise the result of some metric. If you pick your fitness function correctly, you can make the system optimise towards any required goal.

      Just make sure you don't have any bugs - because GAs and GPs will find and exploit bugs that give higher fitness metrics faster than the programmer can.
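
      A toy GA, just to make the fitness-function point concrete; the fitness here (count the 1-bits) is deliberately trivial, and whatever metric you swap in is what the population will chase, bugs and all:

      import random

      def fitness(bits):
          return sum(bits)                     # toy goal: all ones

      def evolve(pop_size=50, length=32, generations=200, mutation=0.01):
          pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              parents = pop[: pop_size // 2]   # keep the fitter half
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, length)
                  child = a[:cut] + b[cut:]    # one-point crossover
                  child = [1 - g if random.random() < mutation else g for g in child]
                  children.append(child)
              pop = parents + children
          return max(pop, key=fitness)

      evolve() converges on (or near) a string of all ones, because that is exactly what the fitness function rewards -- no more and no less.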

  • Control (Score:2, Interesting)

    by LowneWulf ( 210110 )
    I would worry about these systems if I had one. I for one enjoy the fine-grained control I have over systems I manage.

    I would hate to see my web server decide to bump up the number of allowed simultaneous connections in response to a denial of service attack, or decide that the ogg encoder in the background is indeed more important than domain control services.

    ... and of course the mandatory gripe - that my system decides that it doesn't like my pirated MP3s and deletes them automatically.

    If computers become smarter than the people who design their software, how are they any use as a tool anymore?
  • by nyjx ( 523123 ) on Tuesday October 16, 2001 @10:04AM (#2435858) Homepage
    Just to be cynical for once - this sounds like a "user centric" repackaging of a whole bunch of hard AI research: learning, reactive planning, goal driven behaviour and autonomous agent work in particular.

    In the end it turns out that the most complex problems arise in trying to coordinate a collection of "autonomic" (?) components. Distributed systems with unruly objects... This is what the autonomous agent community is mainly concerned with (see the UMBC [umbc.edu] agents page or this very useful overview paper [liv.ac.uk] for example).

    Of course, IBM pushing this might mean a kick up the rear for the academics to actually get some of this potentially cool stuff working. Chances are you never want the end user to know how it works anyway.

  • "Open the pod door, Hal" ...

  • by callmegracie ( 521494 ) <i...am...grace@@@angelfire...com> on Tuesday October 16, 2001 @10:15AM (#2435902) Homepage Journal
    ...where the Enterprise came upon the race that had built very complex, self-sustaining technology and were able to devote themselves to "higher pursuits", like art and music, but who were unable to fix the problems that eventually developed because there was no one left who remembered how everything worked?

    sorry for the run-on, but that was my immediate thought when i read Paul Horn's statement that the creation of "computer systems and software that can respond to changes in the digital environment, so the systems can adapt, heal themselves and protect themselves" is the only thing which will reduce the need for "constant human maintenance, fixing and debugging of computer systems." freeing humans for higher pursuits sounds good, but is probably only likely in a utopia. Horn goes on to say "The only way to get efficiency gains in information technology is to take some of the people out." This trend sounds like the steel industry - we'll have more cost efficient processes in providing IT services, but all those educated in that field will end up working at mc donald's.

    so what happens when we all forget exactly how this "autonomic software" regulates itself? i guess this is the final word in proving the importance of documentation! : ) ** begging for a flamebait mod** or we could skip the documentation and just kidnap the children of visiting alien starships when we eventually start dying of radiation poisoning from our super-self-configuring systems.

    the infamous penn state stalker server! [psu.edu]

    • ...where the Enterprise came upon the race that had built very complex, self-sustaining technology and were able to devote themselves to "higher pursuits", like art and music, but who were unable to fix the problems that eventually developed because there was no one left who remembered how everything worked?

      We're not talking about software that's self-modifying, though that may be the next step. Even self-modifying software will only be partially self-modifying anyway. The code will stay the same, the program will just make more decisions for itself.

      • what i meant by self-sustaining was that the system that ran the planet was basically hands-off. did you watch that episode? what happened was eventually there was a side-effect from the systems' operation that it produced a huge amount of radiation. so everyone got sick, and no one had children, and the race was almost extinct when the enterprise came. but i'm sure the system didn't change itself. if it had, it would've corrected the radiation problem on its own.

        this episode was rated 102 out of 178 on a best to worst list [geocities.com]. the episode was aired on February 15, 1988. it was episode #17 of the first season. you can read about it in more detail here [geocities.com].

        • Hmm, I might have seen it. I'm just saying that something like that can never happen without self-modifying code in that there will always be programmers, even if all they're doing is writing code for fun; Likewise mechanical engineers, et cetera.
            • Exactly. Even in today's day and age you have "mountain men" who would be perfectly capable of living if the economic and technological infrastructure collapsed. How about the Amish? They too would not be too affected if the rest of the world lost the "life support" of our autonomous systems.

            I can certainly see your point [slashdot.org] (callmegracie [slashdot.org]), but I think that as long as there is some human diversity, we are likely going to survive just about any apocalyptic event.
    • Define higher pursuits. If I had all the requirements for life, free, I'd still want to learn, to work with math/computing/the sciences.

      Your "utopia" would be hell to me.
    • freeing humans for higher pursuits sounds good, but is probably only likely in a utopia.

      Welcome to utopia.

      Recall that once upon a time, electronic computers were unwieldy contraptions that required the user to reprogram them in hardware for every new application. von Neumann overcame that obstacle, just as later scientists overcame additional barriers to complexity by building operating systems, high-level language compilers, text terminals, GUIs, and so on.

      The point is that the entire venture of computing has been one of bootstrapping additional levels of complexity, since its very inception. As Dijkstra put it, there is only one fundamental problem in computer science, and that is that computers are too hard to use. We have slowly eroded that barrier over time, but a lot more needs to be done to allow humans to think at even higher levels, similar to how they would work with an intelligent colleague rather than an idiot savant. This notion of "autonomous" computing is simply one more step in that direction.

      Bob

  • Of when Crab and Tortoise were involved in a phonograph duel, where Crab was on a quest to find a phonograph upon which any sounds could be played, and Tortoise kept making "This cannot be played on Phonograph X" records.
  • by MarkusH ( 198450 ) on Tuesday October 16, 2001 @10:24AM (#2435933)

    "The only way to get efficiency gains in information technology is to take some of the people out."


    They're called managers.

  • this says one thing to me

    ka-ching!![$$]

    lots of overtime
  • by BMazurek ( 137285 ) on Tuesday October 16, 2001 @10:34AM (#2435978)
    Apparently there are already some universities signed up on Autonomic Computing projects

    Yeah...but does the University know about it? :-)

  • This is not to be confused with anatomic computing [fu-fme.com], which is an entirely different concept.
  • Hmmm... a computer that's supposed to run perfectly all the time and doesn't let the user get in and mess with anything. It sounds vaguely familiar, like a fleet of old Macs booting up at the same time. One thing I know, don't make the power switch software-dependent, or we'll never be able to turn the bastards off without unplugging them... or throwing them out of the window.
  • by drnomad ( 99183 ) on Tuesday October 16, 2001 @10:43AM (#2436020)
    Reading books about Fuzzy Logic, Artificial Life and so on, I played a mind game, designing this "software" a few years ago (and many others have too, I'm sure!).


    The problems I discovered were:

    * The building blocks of the software itself are human-optimized algorithms and data structures;

    * In order to improve human-optimized algorithms (meta-optimization?), one could develop some form of trial-and-error optimization algorithm, but this would complicate things even more (it's hard to determine whether the search direction makes any sense); designing such an algorithm is very hard, because how long do we search before we give up? This is like a chess game: a certain move may look silly at first, but it could be a very good move in the end...

    * If the program is to optimize smartly, it will need to use *known* optimizations, and so be unable to improve human-optimized algorithms... Introducing the factor of meta-optimization gives the problem of CPU-time distribution: how much CPU time may content optimization take, and how much time may meta-optimization take??

    * If only known algorithms are used, the program is bound to a limited level of complexity. Meaning that: lots of human comprehension has high complexity, which is not yet very well understood by science; the "Perfect Human Interface" is likely to fail in this area - it's the area where the user (again) needs to adapt to the machine.


    But if these guys actually succeed in their quest... brilliant!!
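
    The CPU-time question from the list above can at least be stated crudely in code. This sketch assumes hypothetical propose_variant() and benchmark() functions and simply fixes the split by fiat:

    import time

    META_SHARE = 0.10    # fraction of each cycle given to meta-optimization

    def run_cycle(current_algo, propose_variant, benchmark, workload, cycle_secs=60):
        """Spend most of the cycle doing real work with the current algorithm,
        then a small fixed slice trying one variant; adopt it only if it wins."""
        work_deadline = time.time() + cycle_secs * (1 - META_SHARE)
        while time.time() < work_deadline:
            current_algo(workload.next_item())       # the actual job

        candidate = propose_variant(current_algo)    # trial-and-error step
        if benchmark(candidate) > benchmark(current_algo):
            return candidate                         # a measurable improvement
        return current_algo                          # give up for now

    It dodges rather than answers the hard part -- how the 10% figure should itself be tuned, and when the search should simply stop.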

    • Although we would certainly seed the "meta-optimizer" with the best we can come up with, the optimizer would necessarily be free to combine our raw material in ways we did not anticipate.

      Further, these "mutations", provided they score well, would later be combined with each other as well as the best performers from the original seed material.

      The end result would be something we could never have predicted from the outset (or else why bother), and in the worst case might be so complex that we don't even understand HOW it works even though we may be able to satisfy ourselves that it DOES work. That will be the beginning of just trusting that the machine will work it out, and our role will gradually be reduced to just "checking that the answer sounds reasonable".

      Of course we will have to be careful then, but the potential benefits are so great that we won't stop pushing things forward.
      • Interesting, but one should define the tasks of a meta-optimizer first.


        When I wrote a B-Tree, as an improvement on an insertion sort, I wondered whether I could think of an algorithm, fed with meta-semantics on the insertion sort, that would design the B-Tree as an evolutionary successor. It took me quite some time to understand the algorithm myself, and implementing it was even harder - semantic comprehension of the building blocks "in English" resulted in many more lines of C++ code. Because of the recursion, the B-Tree code needed, besides the actual B-Tree code, the following elements:


        * execution-environment code - organisation and anticipation of the recursive characteristics of the algorithm; for example a messaging system to tell the calling function what we'd expect from it...

        * data-structure environment code - understanding the benefits of using multiple data structures to be able to perform the algorithm; this relates even more to the B+ tree; using 1 data structure (B-Tree) was harder than using 2 data structures (B+Tree)...

        * state-determination code - needing to know where we stand and what to do when a (recursive) function call returns; for the trees: do we need to rebalance?

        * borderline code - dealing with those borderline cases; what if I'm in a left node, having no left sibling, while the right sibling is not an option?


        Stepping from the B-Tree to the B+Tree, the data structures are quite similar to those used in the B-Tree; slight differences make the B+Tree algorithm easier to implement and better in throughput for table indexing.


        Of course, this is just a simple example where we could try to think of a meta-optimizer. One can extrapolate this to R-Trees and X-Trees; still, B, B+, R and X trees won't have to be the panacea for organizing complex information - perhaps a computed "tree" or indexing structure would perform even better in that case. Oracle's database (and others) are optimized to detect their environment and to act as effectively as possible, although their algorithms are basically human-optimized. Even using "Fuzzy Logic" to determine the best balance between the tasks of caching, defragmenting, and reorganizing (i.e. hashtables and 'clusters'), the main way to store the data is predetermined. In pure (non-ORDBMS) SQL-based systems, the IT specialist has a hard time designing flexible data structures with hierarchy; one meets that point-of-no-return very quickly. So in a system which has the elements of organizing hierarchy and repetition, the perfect storage algorithm should be computed, perhaps going beyond human comprehension (I agree with you).
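
        For readers who haven't fought with these structures, a bare-bones sketch of the difference being described -- a B-Tree keeps values in every node, a B+Tree keeps them only in chained leaves, which is what makes it nicer for table indexing and range scans (field names are illustrative only):

        from dataclasses import dataclass, field
        from typing import Any, List, Optional

        @dataclass
        class BTreeNode:                # keys and their values live in every node
            keys: List[Any] = field(default_factory=list)
            values: List[Any] = field(default_factory=list)
            children: List["BTreeNode"] = field(default_factory=list)

        @dataclass
        class BPlusInternal:            # interior nodes hold only routing keys
            keys: List[Any] = field(default_factory=list)
            children: List[Any] = field(default_factory=list)

        @dataclass
        class BPlusLeaf:                # all values sit in the leaves...
            keys: List[Any] = field(default_factory=list)
            values: List[Any] = field(default_factory=list)
            next_leaf: Optional["BPlusLeaf"] = None   # ...chained for cheap range scans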


        Another problem with meta-optimization is that the system must try to model the way the user thinks - which is the only way for a system to act as though it understands the user. If you've ever studied the discipline of "Neuro-Linguistic Programming" (I believe they call these guys 'motivation gurus' in English), then you understand that everyone has personal subconscious truths. A schema of reasoning is very personal, because every individual has learnt different things. The point is that the system would be able to adapt to one person only - would that be user-friendly? The system wouldn't be very suitable for public use.


        Still, as I said, I'm playing the mind game, and if someone is to overcome these problems... brilliant!

  • by scruffy ( 29773 ) on Tuesday October 16, 2001 @10:45AM (#2436026)
    I think a lot of the complexity we have is superfluous. Do we really need 1 GB of MS Super-Duper-Word to write a few lines of text? We bring too much complexity upon ourselves by our demand for more features and prettier interfaces.

    Anyway, the idea of Autonomic Computing is hardly new (consider plug-and-play and autoinstallers). The really, really hard part of it is to impose autonomic computing on a system that was not designed for it. It is very difficult to make a complex system "simple" without redesigning the complex system.

    • I agree completely with you. I don't want the computer to have to go through more than 50 lines of machine code just to put 'a' on the screen. OTOH I do appreciate it when Office tells me I don't know jack about grammar when I'm writing up some sort of letter or report.
      Personally, I use a text editor for almost everything that nobody else needs to read. And a lot of what they do need to read. I won't say which; I may get flamed. I don't care if they gripe about the fact that it's only a text file and they don't like it. If it needs to look pretty, then I'm not the man to talk to anyway.
    • I agree that simplicity is usually the best option, but I don't think that all applications can benefit from the extremely low levels of complexity that you describe in your "MS Word vs. the Text Editor" example.

      Surely you are correct that complexity is often superfluous, but the problem that these folks are trying to solve is inherently complex enough to call for a complex solution. It's similar to the networking in an OS. Networking makes up an astoundingly large and complicated part of any modern operating system, but it's only needed if you require a network connection.

      Bold text that looks "pretty" is awfully hard in a text file...
  • What's next? (Score:2, Informative)

    by clone304 ( 522767 )

    I think most of you, so far, are missing the idea here. I also think the good Dr. from IBM is too, but that is beside the point. The point here in redesigning the way systems work from the ground up is to make them more capable of doing what YOU as users/admins actually want them to do. The idea being that YOU set the policy and the computer learns how best to implement it.

    I, personally, don't like this very much. It sounds like the next step in closing off the workings of the "operating system" from the user. What happens to Linux and open source when Windows starts to dynamically rearrange its code to optimize for your preferences and specific uses? It gets left behind is what.

    I've been thinking about where operating systems are headed and what I want in an operating system, lately. I had pretty much defined what I wanted, when I started to run across projects like this: TUNES [tunes.org], and ideas like this: Flow-Based Programming [http]. I then realized that I wasn't entirely original. People have been thinking about the same things and trying to work them out for some time. But there has been little mainstream work done to get things to happen.

    In my opinion, the design of TUNES and the ideas expressed about Flow-Based programming are a perfect fit for open source programming. And, there's no reason that autonomic computing couldn't fit right into the mix as well, as long as it's an open-source feature rather than a built in proprietary unified piece of the system.

    The new system I'd like to see would be completely dynamically restructurable, and reprogrammable from the ground up. I think this would be a prerequisite for full-blown autonomic computing, but I have a feeling that the corporates are going to slip it into Windows in such a way that Windows stays the same on the surface, but just tells you less and makes more decisions for you than it already does. Problem is, that's what most users think they want. What I suggest is doing it in such a way that each user has total choice about how his system is designed and operated. Of course there would be predefined templates for certain types of systems (web servers, web/e-mail clients, gaming system, desktop publishing workstation, etc). So a user could pick one or more open source templates on which to base his system and then modify it to his needs as he goes. These templates would define what optimum scheduling and resource allocation should be done for specific tasks and merge this at the lower level with the needs of other tasks and the priorities set by the user or learned dynamically by the system.

    I think we'll see some very interesting advances in the next 10-15 years. Let's hope the open-source community doesn't miss the boat. Microsoft sure as hell won't.

    • I mostly agree with what you are saying, particularly about the focus on the user's intentions in relation to the computer's actions.

      However, you refer to operating systems as the basic component of the "Autonomic System" the article proposes. This, I think, is a little off the point. Individual operating systems should be nearly invisible in such a system. Instead, more abstract entities, running on heterogeneous operating systems, will be the basic components. The templates that you refer to will be things like a particular data source, or an email service, or a voice communication entry point. These components will be distributed and independently maintained. They will, however, be locatable via a single search mechanism.

      That probably didn't make sense. I'm late for a meeting and I'm typing faster than I'm thinking ;-)

      I also think that Sun's JINI is worth mentioning. I think that JINI's creators intended it to be the communication backbone for systems exactly like this one.

      AND for the only thing that I can really contribute: I think that such systems will need to be very dependent on a consistent error system, where components can reliably propagate errors in such a way that other components can "understand" and act accordingly. Also, it seems that formal specifications (in languages such as Z) could be used to specify the interfaces between shared resources.

      Blah! on to my meeting.
      • Well, I had to leave a lot out and look for the most convenient words to avoid writing a PhD dissertation about it. If you define an operating system as the basic software that comes on the system, or that makes the system capable of doing anything at all, then it fits with my view of it. This is not necessarily the best/proper definition of the phrase "operating system", but it's how I choose to look at it. In other words, anything that can boot and run, at minimum, an infinite loop qualifies as an operating system. Code that booted and then left the processor to execute whatever exists as garbage in uninitialized RAM would not.

        However, what I am proposing is a basic framework that does as little as possible of what a traditional kernel does, providing only a component object framework and access to devices. Most normal kernel activities like scheduling and resource management would be handled by plugin component objects. And these would be replaced dynamically as the need arises, or possibly several might run concurrently, being managed by a master scheduler module. Also, it should not HAVE to run off of distributable components, though it should definitely be at home with that. Like you said, components should be searchable via a single search mechanism, though possibly searching among several configurable code repositories. Packaged components should come with source by default, unless the component is under a proprietary license. And applications don't ever have to be "compiled" into one huge bloated binary image, though I imagine there might be reasons to allow it.

        The idea being that since all of the open source components needed to build just about any application are available, why reinvent the wheel? Applications could be defined visually and specified in a design language. So users could customize their applications at run time to streamline and adapt the apps to their needs. They could then share this redesigned template with others who have similar needs. So, in this framework, if it was built of fine-grained components, I could rip apart MS Office and end up with my own app that consisted only of Clippy and a plain text editor with spell/grammar checking. And if I needed more functionality or pretty fonts and formatting for a particular job, I could plug those features in and go. Anyway, whatever, I could yap about this forever.
    • Re:What's next? (Score:1, Interesting)

      by Anonymous Coward
      No, it is you who is missing the idea and replacing it with your own - which is how you can imagine that the good Dr doesn't "get" his own concept.

      Stafford Beer has been writing about this topic for three decades. There was the Chilean Experiment in 1973, which was an experiment in autonomic computer control systems. This is nothing new, it's just something that almost everyone is ignorant of. Perhaps because the US government staged a coup in Chile in order to stop the experiment. In all my years on Slashdot I have yet to mention Stafford Beer and have anyone say, "Yeah, I've heard of him".

      No, it's really nothing new, and it's not as complex as the IBMers would have you believe [and it has nothing to do with XML!]. It's not some new way of writing software - Beer's system was implemented I believe in straight COBOL. It's a new way of designing software, and it is indeed a paradigm shift in the true sense of the word - which is how the research has gone unnoticed for 30 years.

  • Favorite quote from the manifesto page: "The information technology boom can only explode for so long before it collapses on itself in a jumble of wires, buttons and knobs." Watch out for flying information technology debris.

    There have been various proposals over the years to emulate biological systems. Cybernetics in fact was all about self-regulating systems -- at a lower level than is proposed here.

    I'm so glad they've got such a handle on things. I'll take shipment of the new autonomic computers in January.
  • by ldopa1 ( 465624 )
    This is some very cool stuff. Of course, I am reminded of a joke related to this:

    Joe, the world's leading cyberneticist, boards a plane bound for Athens, Greece. This is American Airlines' first totally automated flight.

    As he walks to his seat, he is greeted by a slick looking robotic flight attendant of his design. After sitting down, another attendant of the same design brings him a scotch and water (just the way he likes it) and says in a tinny voice "Good Morning Dr. Davidson, I hope you enjoy the flight."

    Settling back in his seat with his drink in hand, he thinks about the many thousands of hours he has put into the autonomic systems that entirely control this plane.

    As he goes to give his empty glass back to the robotic attendant, the plane pushes back from the gate. After a short while, he hears a much smoother robotic voice come over the intercom; "Good Morning, ladies and gentlemen and welcome to American Airlines Flight 1644 from Los Angeles to Boston. This will be a 2 1/2 hour flight. We at American Airlines would like to take a moment and point out that this is the first trip made completely under the control of the latest IBM 36000 Autonomic Robotic Piloting Computer. Every aspect of this flight, from the attendants serving you drinks to myself, the pilot, have been developed with safety in mind."

    As the flight trundles down the runway, picking up speed, the voice continues on; "So you can sit back, relax and enjoy the flight, secure in the knowledge that absolutely nothing can go wrong, go wrong, go wrong, go wrong......"

    Seriously, we're not very far from this. Flights routinely take off and land with only the most minor human intervention, and cars are being developed which use visual cues to pilot themselves down the road (a company in Australia has converted a Humvee for a test bed).

    • Seriously, we're not very far from this. Flights routinely take off and land with only the most minor human intervention, and cars are being developed which use visual cues to pilot themselves down the road (a company in Australia has converted a Humvee for a test bed).


      With a Humvee, they could get away with putting a Club on the wheel and a brick on the gas pedal. If they really believed that their software could do the job, they'd be installing it in a Geo Metro.
    • Um, couldn't help but notice... The flight's to Athens (beginning of joke) yet the announcer said they were going to Boston (about half way through). I suppose humans can make errors too ;-)

      But seriously, I don't have any fears about fully automated flights. Concerns, yes. I.e., I'm fine as long as they code it right, as opposed to some ppl who'd freak out at the thought of not having a human pilot.

      If anyone hasn't noticed, humans aren't designed to fly. A computer system specially designed to do this would eventually be more skillful.

      I'm especially interested in having cars that drive themselves. This could add a real safety factor, seeing as 95% of autos are driven by poorly skilled drivers. It seems western society has forgotten that we place ourselves in mortal danger every time we hit the road...
      • > This could add a real safty factor, seeing as 95%
        > of autos are driven by poorly skilled drivers.

        More than a few Sci-Fi stories have proposed as part of the background that human-driven vehicles will be outlawed on the general roads simply because they are the only ones causing any accidents anymore.

  • Open the door Hal. (Score:3, Insightful)

    by AbandonAllHope ( 211475 ) on Tuesday October 16, 2001 @11:12AM (#2436146)
    In reading this article I'm beginning to wonder if AI is going to arrive not in the form of a software implementation, but in the form of a computer that's "autonomic". Think about it: the article discusses a computer that is "self healing" with no need for human maintenance. I would imagine such a system would also need limited regenerative capabilities in case hardware was damaged on a physical level; this might fall under the bracket of self-replication. Think about the amount of information these machines would be handling and "autonomically" distributing themselves.
    Even limited self-replication coupled with the ability to process information so rapidly and powerfully seems like borderline sentience to me. What happens when you attempt to replace an autonomic router and the computer as a whole decides that's really something you shouldn't do, because the router is so useful? Can this be coded around or avoided altogether? The people who develop this technology are going to have to be wary of creating something that cares more about its own processes than the user trying to make use of them.
    • OK, this is going to be kind of "trippy", but bear with me.

      I don't know about you, but I'm not afraid of this. This is the next step to transcendence for the human race. The goal of (wo)man is, or should be, to increase in power over his domain, both to preserve it and to reconstruct it.

      Technology serves to save the environment and destroy it, and it's all part of the balance in which we live. Primitive peoples live in harmony with what they're given; technologically advanced people make their own terms of harmony and strive to live in those terms.

      For an example of this, look around you. Chances are the environment you live in is not natural. Streets do not grow from seeds, buildings were not created by geological activities. Construction technology serves to create a new environment for you. We model the Earth after our desires, and thus, we have dominion over it.

      If we create artificial life, then we have achieved a sort of godhood, and as long as we keep /our/ higher powers in mind, then our creations are not abominations. This won't be HAL or SHODAN, folks. Morality flows down from higher planes in subsets. Our morality is merely what we need from the collective morality of all humanity, and the morality of the AIs will be that which we provide it.

      Heavy questions for a tuesday, but AI needs to be thought of spiritually now, not just in terms of the technology, if we want to really advance it. IMO, of course.

      - Josh


    • If there's no maintenance, what happens to us? It's our job to maintain computers.
  • Sounds like someone at IBM just read a load of Mark Burgess papers [iu.hio.no] and found their next marketing angle. Interesting that one of their ideas for "autonomic" optimisation is for the system to clone and run several OS images...just like recent AS/400 models can do! These and other great ideas coming soon from an (IBM) mainframe near you! Presumably they'll be open sourcing all their mainframe technology so that the effort isn't impeded by "proprietary standards"?

    Cynicism apart, it's a laudable initiative if it results in a large kick to existing research in these areas. SAGE [sage.org] are also turning their attention to the process of automating and scaling system administration tasks (see recent discussion on sysadmin "research" on sage-members list).

    OTOH, I can think of a large part of the IT industry - those vendors with profitable integration services business units - who possibly won't be throwing their lot in with IBM on this one.

    Ade_
    /
  • Fred Brooks dealt with the issue of complexity in his classic The Mythical Man Month. Of the quest for a technique to fundamentally simplify software development, he says:
    Not only are there no silver bullets now in view, the very nature of software makes it unlikely that there will be any--no inventions that will do for software productivity, reliability, and simplicity what electronics, transistors and large-scale integration did for computer hardware. We cannot expect ever to see two-fold gains every two years.

    It is curious that the paper mentioned in the article does not deal explicitly with Brooks' objections since they are the best known statements of the problem of complexity in software.

    Among Brooks' solutions for reducing complexity is the use of great design: Whereas the difference between poor conceptual designs and good ones may lie in the soundness of design method, the difference between good designs and great ones surely does not. Great designs come from great designers. Software construction is a creative process. Sound methodology can empower and liberate the creative mind; it cannot enflame or inspire the drudge.

    I would also add that simplicity engenders complexity. Simplified systems become subsystems for more complex designs. When the complexity of a system becomes a barrier to its further enhancement, simplifying it only allows its complexity to continue to increase.

    It is the objectives of the system that create complexity, not the development techniques. The only way ultimately to reduce complexity is to artificially constrain the requirements.

    • Only 20 amino acids form the basis for tens of thousands of proteins and billions of combinations of genes. Those amino acids combine with one another in a handful of ways (covalent bonds, hydrogen bonds, van der Waals forces) to form those proteins. Maybe this is an example of what you mean by "Simplicity engenders complexity."

      Of course, there are other important chemicals in the body (both organic and inorganic) -- lipids, carbohydrates, salts, etc. But the "simplicity" of the protein "alphabet" is a starting point.
      • Maybe this is an example of what you mean by "Simplicity engenders complexity."

        It's one example. Carrying it forward, a living organism with autonomic systems will form complex societies of individual organisms. As soon as these societies evolve autonomic functions, they form complex alliances or symbiotic relationships. And so forth.

        I'm also thinking of the rules of chess that allow for a vast number of games, simple mathematical formulas that generate fractals, rules of grammar that produce language.

        There seems to be some kind of recursive process at work that tends toward greater complexity. As a result, the autonomic theorists may not be tackling complexity but simply moving it to a higher level.

        Unfortunately, I'm out of my depth here.

  • Autonomic Virus? (Score:2, Interesting)

    by zoward ( 188110 )
    Two possibilities rear their ugly heads here:

    1) An autonomic virus, written with the capability to "heal itself" once installed. Does this make sense? It seems to me that some existing virii already have some self-healing properties, such as those that hide a copy of themselves on a user's HD and insert a key in the Windows registry to have themselves restored at reboot time. Thoughts?

    2) A virus designed to insert itself into an autonomic system would conceivably be able to use the system's "self-healing" properties to protect itself. (A funny memory springs to mind: I went to remove Outlook Express from my Win2K box at work, and discovered that Win2K does not have an option to uninstall Outlook Express. Undaunted, I went into the folder the executable was in and deleted it. Within five seconds, the system detected my "user error" in deleting a system file and restored it. It took me a while to figure out how to prevent this, but it really threw me for a loop when I first saw it happen.)
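
    The visible behaviour is easy enough to sketch -- a watchdog with a protected copy of a file that it puts back within seconds. This is not how Windows File Protection is actually implemented, just an illustration of what the poster saw; the paths and names below are made up:

    import hashlib, os, shutil, time

    WATCHED = "C:/Windows/System/protected.dll"     # the file the system guards
    BACKUP  = "C:/Windows/ProtectedCache/protected.dll"

    def digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def watchdog(poll_secs=5):
        """Restore the watched file whenever it goes missing or is altered."""
        good = digest(BACKUP)
        while True:
            if not os.path.exists(WATCHED) or digest(WATCHED) != good:
                shutil.copy2(BACKUP, WATCHED)   # silently undo the "user error"
            time.sleep(poll_secs)

    The poster's second scenario is simply this mechanism pointed at the wrong file: if a virus can get itself onto the protected list, the healer becomes its bodyguard.
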
  • Barnum science (Score:3, Insightful)

    by Alomex ( 148003 ) on Tuesday October 16, 2001 @12:08PM (#2436427) Homepage
    Every so often, what is a reasonable, modestly interesting idea gets hijacked by the snake oil peddlers, in whose hands it becomes the solution to the world's problems.

    The popular press quickly grasps on to it, as these magic bullets increase the circulation of OMNI and Scientific American. Eventually the politicians hear about them and allot untold amounts of money to these efforts.

    After 5-10 years nothing much comes out of this, and the snake oil peddlers move on to another area.

    Among the thusly overinflated areas we have:

    - AI
    - neural networks
    - expert systems
    - nanotechnology
    - chaos theory
    - e-commerce
    - parallel computing
    - distributed computing
    - complexity (a la Santa Fe Institute) theory
    - logic programming

    the latest two additions are

    - the semantic web
    - autonomic systems

    /.ers are well advised to apply a healthy dose of skepticism to any such magic bullet claim.


  • What will we do once the world is run by self-programming, self-healing, self-repairing and self-maintaining machines?

    What then?
  • Any biochemists out there care to enlighten the rest of us on the viability of the idea?


    While the idea has the power of lucidity, it's not clear whether it can be implemented effectively. Autonomic responses in complex animals developed over millions of years, using sexual reproduction as a means of pruning less viable branches while introducing sufficient variation to ensure species-level survival.


    At the organism level, autonomy is accomplished through a combination of neural, hormonal, and physiological responses to external stimuli and internal state changes. How many proteins, hormones, and other signals are required for this? How many combinations of signals are there? Clearly the number is large enough to be considered "countless." That an organism as complex as a human being works at all is an impressive feat of biochemical integration and regulation.


    "...we don't really need... sentient machines and androids programmed to love and laugh -- to overcome the largest obstacle standing in our way."


    That's an assumption which remains to be validated! How does IBM know that it's not the other way around? Perhaps love and laughter (i.e. higher emotions) are a natural and inevitable byproduct of the ultimate expression of the ideas of autonomy. Put another way, it may be possible to provide some sort of low-level homeostasis without emotions, but the maximum expression of those concepts might lead to a deeper philosophical awareness. At that point, look for IBM's business systems to call in sick with "mental health days" once in a while.


    Richard Powers' novel Galatea 2.2 [barnesandnoble.com] was an interesting examination of the relationship between self-awareness, emotion, and intelligence.

  • by Animats ( 122034 ) on Tuesday October 16, 2001 @12:36PM (#2436585) Homepage
    But the Slashdot crowd won't like it, because the UNIX/Linux crowd is clueless in this area.

    • You should never have to tell the computer something it already knows.
      That was in the original Apple "Inside Macintosh".
    • Everything must be plug and play.
      The hardware side is reasonably close on this. All the newer interfaces (USB, IEEE-1394, PCI, PCMCIA) have identity info on all devices. And it's been that way for a few years now. It's time to pull the plug on the old stuff and insist that everything autoconfigure.
    • All derived state must be checkable and regenerable.
      This is key. And again, Apple almost had it right, once. The original Apple model was that the system had two main repositories of system state: the Desktop file and application preferences. The Desktop file could be regenerated if needed (and had to be, due to lousy database design), and application preferences were cosmetic only - you could delete them at any time and the system just went back to the defaults.
      Apple never faced up to checkability, though. And it hurt them, because they were running an unprotected OS with a tendency to trash its internal data structures. (A minimal sketch of checkable, regenerable derived state appears after this list.)
    • Software installation needs to go away. Read Cooper's "The Inmates are Running the Asylum" [amazon.com] on this. I can't improve on what he writes. Installing an application should be viewed more like docking a peripheral and less like blithering all over configuration files.

      • Broken things must not contaminate other things.
        It's unacceptable to ever get bad data from a disk. Reported errors, yes; undetected errors, no. Everything must have error checking. Memory parity must always be on. (And ECC ought to be standard.)
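
    As a minimal sketch of the "checkable and regenerable" rule (the record/index layout, names, and checksum here are my own illustration, not anything from Apple or IBM): derived state carries a checksum of the source it was built from, and on mismatch it is regenerated rather than trusted.

    /* Minimal sketch: derived state (a tiny index) that is checkable via a
       checksum of its source and regenerable from that source on mismatch.
       All names and layouts here are illustrative, not from any real system. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct record { char name[32]; };                       /* the source of truth */
    struct index  { uint32_t source_sum; size_t count; };   /* derived state */

    /* FNV-1a style checksum over the raw source records. */
    static uint32_t checksum(const struct record *recs, size_t n)
    {
        uint32_t sum = 2166136261u;
        const unsigned char *p = (const unsigned char *)recs;
        for (size_t i = 0; i < n * sizeof *recs; i++)
            sum = (sum ^ p[i]) * 16777619u;
        return sum;
    }

    /* Regenerate the derived state from the source. */
    static void rebuild_index(struct index *idx, const struct record *recs, size_t n)
    {
        idx->count = n;
        idx->source_sum = checksum(recs, n);
    }

    /* Check the derived state against the source; regenerate if it no longer matches. */
    static void validate_index(struct index *idx, const struct record *recs, size_t n)
    {
        if (idx->source_sum != checksum(recs, n)) {
            fprintf(stderr, "index stale or corrupt; regenerating\n");
            rebuild_index(idx, recs, n);
        }
    }

    int main(void)
    {
        struct record recs[2] = { {"alpha"}, {"beta"} };
        struct index idx;
        rebuild_index(&idx, recs, 2);     /* build derived state */
        strcpy(recs[1].name, "gamma");    /* source changes behind the index's back */
        validate_index(&idx, recs, 2);    /* mismatch detected, index regenerated */
        printf("index covers %zu records\n", idx.count);
        return 0;
    }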
    • But the Slashdot crowd won't like it, because the UNIX/Linux crowd is clueless in this area.

      That's one way to look at it, but what is really happening is that the Unix crowd makes sure features are built on top of rock-solid foundations, rather than adding features that don't always work and then going back and making lots of fixes.

      BTW, I don't tell the OS the configuration of the hard drive every time I open a file.

    • > All the newer interfaces (USB, IEEE-1394, PCI, PCMCIA)

      PCMCIA is still a sack of shit. I have any number of laptops, PCMCIA cards, tower PCMCIA drive-bay units, whatever, and I always have to go find drivers for those devices, then for the PCMCIA cards, and plug them in; many combos don't work, lock up the PC, prevent it from booting, don't get recognized even with drivers installed, or do get recognized, driver loaded, but just don't "take" as far as the OS is concerned.

      Nah, it has a long way to go. Another "Mac" idea was requiring hardware to configure itself, or at least prompt the user to push in the install floppy, which executes and configures automatically. I'll never forget the idiocy of using a PC at work for the first time (after college) and wondering: what the hell is with all this IRQ crap and I/O port address stuff, just to get my modem working? How in God's name do I find this out (no Internet back then)? Why doesn't the manual say? Where is my computer's manual? What's all this IRQ crap in Duke Nukem just to get the SoundBlaster to work? WTH?

      All you PC programmers suck! It was the only logical, rational conclusion.

    • Thanks for the many excellent points in your post, but I'd like to offer these additional suggestions. I'm sure there are mathematical terms (closure? reflexivity?) that these are based on, but it's been way too long since I took that college class.

      • If you can set a parameter, you should be able to query it.
      • If you can query a parameter, you should be able to set it.
      • When querying a parameter, the syntax of the output should be the same as the syntax used to set the parameter.


      Hypothetical example:
      SET foo="bar" /* Set a parameter */
      QUERY foo /* Query a parameter */
      SET foo="bar" /* Output of the query */
      QUERY foo > param.txt /* Same, but saves settings in a file which could be later used to restore the settings */

      An advantage is that one can capture ALL the settings that configure an application. AND, one can use those captured settings to EXACTLY restore that configuration.

      An added advantage is that one can programmatically generate permutations of parameters and their values for AUTOMATED TESTING of the possible configurations. You don't even need to know beforehand what the result is supposed to be -- just permute them, use them, capture what happens, and then sift through to see if everything makes sense. Save valid output as a baseline; fix any bugs that are discovered; repeat until all permutations are covered.
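
      Here is a minimal sketch of that round-trip property (the in-memory parameter table and helper names are my own invention; only the SET/QUERY syntax comes from the hypothetical example above): querying a parameter prints a line that is itself valid SET input, so the captured output can be replayed to restore the configuration.

      /* Minimal sketch of set/query symmetry: QUERY prints a line that is itself
         valid SET syntax, so captured output restores the configuration exactly.
         The parameter table and helper names are illustrative only. */
      #include <stdio.h>
      #include <string.h>

      #define MAX_PARAMS 16

      struct param { char name[32]; char value[64]; };
      static struct param params[MAX_PARAMS];
      static int nparams;

      /* SET name="value" */
      static void set_param(const char *name, const char *value)
      {
          for (int i = 0; i < nparams; i++)
              if (strcmp(params[i].name, name) == 0) {
                  snprintf(params[i].value, sizeof params[i].value, "%s", value);
                  return;
              }
          if (nparams < MAX_PARAMS) {
              snprintf(params[nparams].name, sizeof params[nparams].name, "%s", name);
              snprintf(params[nparams].value, sizeof params[nparams].value, "%s", value);
              nparams++;
          }
      }

      /* QUERY name: the output is valid SET syntax, i.e. input and output match. */
      static void query_param(FILE *out, const char *name)
      {
          for (int i = 0; i < nparams; i++)
              if (strcmp(params[i].name, name) == 0)
                  fprintf(out, "SET %s=\"%s\"\n", params[i].name, params[i].value);
      }

      int main(void)
      {
          set_param("foo", "bar");
          query_param(stdout, "foo");   /* prints: SET foo="bar" */
          /* Redirect query_param's output to a file and you have a script of SET
             lines that restores (or seeds permutations of) this configuration. */
          return 0;
      }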

  • The first is, will IT workers embrace this? I know quite a few Oracle DBAs who are none too happy about turning over the optimization of their database to the computer. Maybe these people are just afraid of change and of having to learn a higher art than the one they've got, but if they tell their manager that they can do much better than the computer can, who will the manager believe? For this to progress, you must find a way to not scare the geeks.

    Secondly, how do we accomplish this without advancing machine technology too far? If a machine becomes self-aware and protective of itself, what happens when we want to shut it down? What are you doing, Dave? I know there are ways of preventing this, but will they work, and will we be able to find out whether they work before it is too late, so to speak? I'm not trying to be paranoid, but this is a real concern.

    Another piece of this that someone else mentioned: if the computer is maintaining the basic stuff, what happens when the computer dies and no one knows exactly how it did what it did? A very real example is the ubiquity of calculators. How many of you can still do long division in your head? There was some story I read in high school where a guy who could do simple math without a computer was such an oddity that he became a king or something like that.

    Keep doing those math problems.
  • by KidSock ( 150684 ) on Tuesday October 16, 2001 @02:51PM (#2437342)

    It tells your heart how fast to beat, checks your blood sugar and oxygen levels, and controls your pupils so the right amount of light reaches your ...

    There's an OO principle called the Law of Demeter, which advocates as few dependencies as possible between objects. This sounds like a lot of hooks all over the place, which is not a model of simplicity. It would be better for "it" to step out of the way and let each object adjust itself based on its surroundings, just as in natural systems. Nature has a tremendous advantage over computers: it is far more efficient because everything is happening literally in parallel. Computers can really only do a very limited number of things at a time, although the user sometimes perceives concurrency due to very rapid time-slicing.

    As a result, programmers are forced to make tremendous compromises given the comparatively limited medium with which they have to work. It will take well-established techniques and objective analysis to determine the best way to utilize bits on silicon.

    Over the years I have recognised one principle that transcends this issue -- the issue of dealing with complexity. Oversimplifying, it is Recursive Composition. This "pattern," or OO construct as it is sometimes called, does not prescribe a particular Class or set of relationships between objects; it's completely arbitrary. The idea is to recursively delegate responsibility for one part of the system to yet another module. At the leaves of this tree you have the primitive operations, and at the root you have one simple instruction for triggering a potentially very complex cascade of instructions. Thus you have reduced the complexity of the overall system. The key difference between this and just another group of functions calling one another (which does little to reduce the complexity of programs or real-life systems) is parameterization.

    As a simple example, imagine trying to encode or decode a database file. The database file has a header, a record list, and data chunks. Like this one on PalmOS PDB files [palmos.com]. If one were to apply the principle of Recursive Composition the API for this PDB codec would be, at the top level, PDB_decode(char *src). At the next level down you have operations like Hdr_decode(char *src) and Record_decode(char *src). At the leaves you have dec_uint32be(char *src) to decode an unsigned 32 bit integer in Big Endian byte order.

    If you can parameterize cleanly exactly what is required to perform a task and delegate it to another module, you have broken the problem into at least two smaller problems, which reduces the order of complexity. Simple! ;-P
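
    A rough sketch of that decomposition in C, reusing the function names from the post (the header layout below is simplified and assumed, not the real PalmOS PDB format): the top-level decode delegates to a header decoder, which delegates to a big-endian integer leaf.

    /* Sketch of Recursive Composition for a decoder, following the API names in
       the post. The field layout is simplified/assumed, NOT the real PDB format. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct pdb_header { char name[32]; uint32_t create_time; uint32_t num_records; };
    struct pdb        { struct pdb_header hdr; };

    /* Leaf operation: decode an unsigned 32-bit big-endian integer. */
    static uint32_t dec_uint32be(const char *src)
    {
        const unsigned char *p = (const unsigned char *)src;
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    /* Mid level: decode the header by delegating to the leaves. Returns bytes consumed. */
    static size_t Hdr_decode(struct pdb_header *hdr, const char *src)
    {
        memcpy(hdr->name, src, 32);
        hdr->create_time = dec_uint32be(src + 32);
        hdr->num_records = dec_uint32be(src + 36);
        return 40;
    }

    /* Top level: one call triggers the whole cascade of smaller decodes. */
    static size_t PDB_decode(struct pdb *db, const char *src)
    {
        size_t used = Hdr_decode(&db->hdr, src);
        /* Record_decode(...) and data-chunk decoding would be delegated here. */
        return used;
    }

    int main(void)
    {
        char buf[40] = "example.pdb";                       /* name field, zero padded */
        buf[36] = 0; buf[37] = 0; buf[38] = 0; buf[39] = 2; /* num_records = 2, big endian */
        struct pdb db;
        PDB_decode(&db, buf);
        printf("%s has %u records\n", db.hdr.name, (unsigned)db.hdr.num_records);
        return 0;
    }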
  • by mj6798 ( 514047 ) on Tuesday October 16, 2001 @03:59PM (#2437632)
    Biological metaphors like that were the inspiration and motivation behind Smalltalk and similar systems (read the original papers). It didn't work out that way.

    Homeostasis and self-regulation are not properties that you implement once in some abstract data type and that henceforth work for everything, nor do they require breakthrough new technology; they are design goals that you need to take into account when you design each and every part of a system. Biological organisms have been forced from day one to deal with these issues. The reason real software systems don't do this is not that people don't know how; it's that software developers don't bother and aren't trained to do it, and that they can get away with it because there are always smart humans around to help the software along.

    So, next time you write a new piece of software, think about how you can make it more self-adapting and less reliant on numerous environment variables and other arguments supplied by the user. The pathsearch library is a simple example of this.
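
    For instance (a minimal sketch; the file names, search order, and APP_CONFIG variable are assumptions of mine, not a description of any particular pathsearch library), code can probe a few conventional locations and fall back to built-in defaults instead of demanding that the user configure it:

    /* Sketch of self-adaptation instead of mandatory user configuration: look in
       a few conventional places for a config file, honour an environment variable
       only as an optional override, and fall back to defaults. Paths are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Return the first config path that exists, or NULL meaning "use built-in defaults". */
    static const char *find_config(void)
    {
        static const char *candidates[] = {
            "./app.conf",                 /* current directory */
            "/usr/local/etc/app.conf",    /* site-wide install */
            "/etc/app.conf",              /* system default */
        };
        const char *override = getenv("APP_CONFIG");   /* optional, never required */
        if (override) return override;

        for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++) {
            FILE *f = fopen(candidates[i], "r");
            if (f) { fclose(f); return candidates[i]; }
        }
        return NULL;
    }

    int main(void)
    {
        const char *cfg = find_config();
        printf("using %s\n", cfg ? cfg : "built-in defaults (no config file found)");
        return 0;
    }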

  • Security? (Score:3, Insightful)

    by Michael Woodhams ( 112247 ) on Tuesday October 16, 2001 @04:12PM (#2437704) Journal
    More networking + more devices + more subtle, automatic behaviour of devices - human oversight = many more security holes.
  • The article lists 8 characteristics of autonomic systems:

    1. To be autonomic, a computing system needs to "know itself " and comprise components that also possess a system identity.
    2. An autonomic computing system must configure and reconfigure itself under varying and unpredictable conditions.
    3. An autonomic computing system never settles for the status quo -- it always looks for ways to optimize its workings.
    4. An autonomic computing system must perform something akin to healing -- it must be able to recover from routine and extraordinary events that might cause some of its parts to malfunction.
    5. A virtual world is no less dangerous than the physical one, so an autonomic computing system must be an expert in self-protection.
    6. An autonomic computing system knows its environment and the context surrounding its activity, and acts accordingly.
    7. An autonomic computing system cannot exist in an hermetic environment.
    8. Perhaps, most critical for the user, an autonomic computing system will anticipate the optimized resources needed while keeping its complexity hidden.
    The features indicate a system several orders of magnitude more complex than the one it is intended to correct. Given that these autonomic systems are supposed to address the shortage of IT professionals, where are the IT professionals supposed to come from to implement them? Are they solving the problem of complexity, or just creating more complex systems to maintain?

    When they develop an autonomic programming language, it will be time to give it some serious consideration.

    • The features indicate a system several orders of magnitude more complex than the one it is intended to correct.
      These objections were once applied to "automatic code generators": who on earth needs a globally optimizing compiler to print paychecks, and what chance is there of writing compilers that can handle weather-prediction algorithms? They'd be more complex than the programs they're being asked to compile!
  • Gene Roddenberry came up with fully integrated autonomic computing with hardware (i.e., the Enterprise) ages ago, best expressed for the first time in The Next Generation. Plainspeak computing, graphical/physical interfaces, monitoring system parameters, generating data and executing instructions at voice command, the whole shebang, all on the bridge of the Enterprise. Of course this is where computing is going. The fine writers at IBM should watch more TV. I am not sure that this is where computing SHOULD go, as the paper contends; it puts that much more power in the hands of people wishing to commandeer a business or endeavour by way of its computer control. Doubtless security measures will improve, but there is ALWAYS a way to adapt to an obstacle; we see this in nature as we do in computing. Overcoming obstacles is why computers exist in the first place.

    wakka smakka
  • Wouldn't this involve software that adapts itself? Perhaps even changing its own code? What would that do to the concept of source code or copyrights?
  • Can someone get me a copy of that paper by some IBM employee? Seriously.

    Auto-coding - Virtual Interaction Configuration:

    Knowledge Navigational Mapping - Virtual Interaction Configuration [mindspring.com]
    The Matrix Metaphores [mindspring.com]
    VIC legal, equations, definitions and concepts [mindspring.com]
    Command specs [mindspring.com]
    Knowledge Calculator [mindspring.com]

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...