Lightweight Languages

Denise writes: "'What happens if you get a bunch of academic computer scientists and implementors of languages such as Perl, Python, Smalltalk and Curl, and lock them into a room for a day? Bringing together the academic and commercial sides of language design and implementation was the interesting premise behind last weekend's Lightweight Languages Workshop, LL1, at the AI Lab at MIT.' Simon Cozens' report on perl.com says it wasn't the flame fest you might have imagined."
    • Yes, apparently at least Dave (co-author of the Pickaxe book) was invited, but unable to attend. Bummer.
      • Re:Ruby (Score:2, Insightful)

        by thing12 ( 45050 )
        Big Bummer. This is a language that needs more exposure. I'd love to use it in place of Perl if there were more toolbox modules out there for it. It needs to reach a critical mass before the really complicated modules, like Template Toolkit, get written - and until they get written, people can't write the quick and dirty programs (the kind languages like Perl are famous for) that would give it that critical mass... a vicious circle.
    • Re:Ruby (Score:2, Informative)

      by Phantasiere ( 209248 )
      Both matz and Dave Thomas were invited to attend the conference, but unfortunately they both had prior commitments at the time.
    • I wrote a web site in it (about 3500 lines of Ruby, not entirely polished) and it was super quick to write. It includes an output class for generating content in various formats (HTML, plain text, whatever else you want to implement), a system for processing form data (not great, but reusable), and a template-ish system (using the decorator pattern) for attaching arbitrary content to a page. The only problem is that it is far too slow to put into production, so I'm porting it to Perl. I really hope the execution speed of Ruby programs improves; Ruby is pure joy to program in.
  • Lua (Score:2, Interesting)

    by Hougaard ( 163563 )
    All my votes go to Lua (www.lua.org). A fantastic language, small and very fast!
  • XML and Lisp. (Score:3, Interesting)

    by DGolden ( 17848 ) on Wednesday November 28, 2001 @09:15AM (#2624156) Homepage Journal
    XML is, as is touched upon briefly in the article, just Lisp s-expressions, but with phenomenally annoying syntax.

    If you have to work with XML, and you know some Scheme, I recommend translating it into Scheme form, via ssax [sourceforge.net]. It makes XML not quite such a pain in the arse.
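    A minimal sketch of that conversion, assuming SSAX's ssax:xml->sxml entry point (it takes an input port and a list of namespace-prefix bindings; untested here, so treat it as a shape rather than gospel):

    (define (xml-file->sxml filename)
      ; '() = no namespace-prefix bindings; the result is an SXML tree
      (call-with-input-file filename
        (lambda (port)
          (ssax:xml->sxml port '()))))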
    • And what does that buy you? XML is a meta-syntax for markup languages; Scheme is a programming language. You don't "code" in XML any more than you code in HTML.

      This is not a troll: what's the equivalent in Lisp to XML namespaces, or attributes, or DTDs?

      • Note: all examples taken from http://okmij.org/ftp/Scheme/xml.html

        Here's an xml file:

        <Forecasts TStamp='958082142'>
          <TAF TStamp='958066200' LatLon='36.583, -121.850'
               BId='724915' SName='KMRY, MONTEREY PENINSULA'>
            <VALID TRange='958068000, 958154400'>111730Z 111818</VALID>
            <PERIOD TRange='958068000, 958078800'>
              <PREVAILING>31010KT P6SM FEW030</PREVAILING>
            </PERIOD>
            <PERIOD TRange='958078800, 958104000' Title='FM2100'>
              <PREVAILING>29016KT P6SM FEW040</PREVAILING>
            </PERIOD>
            <PERIOD TRange='958104000, 958154400' Title='FM0400'>
              <PREVAILING>29010KT P6SM SCT200</PREVAILING>
              <VAR Title='BECMG 0708' TRange='958114800, 958118400'>VRB05KT</VAR>
            </PERIOD>
          </TAF>
        </Forecasts>

        and here's the equivalent in sxml

        (Forecasts (@ (TStamp "958082142"))
          (TAF (@ (SName "KMRY, MONTEREY PENINSULA") (BId "724915")
                  (LatLon "36.583, -121.850") (TStamp "958066200"))
            (VALID (@ (TRange "958068000, 958154400"))
              "111730Z 111818")
            (PERIOD (@ (TRange "958068000, 958078800"))
              (PREVAILING "31010KT P6SM FEW030"))
            (PERIOD (@ (Title "FM2100") (TRange "958078800, 958104000"))
              (PREVAILING "29016KT P6SM FEW040"))
            (PERIOD (@ (Title "FM0400") (TRange "958104000, 958154400"))
              (PREVAILING "29010KT P6SM SCT200")
              (VAR (@ (TRange "958114800, 958118400") (Title "BECMG 0708"))
                "VRB05KT"))))

        Namespaces are dealt with as in
        (c:part (*NAMESPACES* (c "http://www.cars.com/xml")))
        • Nice, but there are tools that understand XML but don't understand Scheme, and some of them do useful things. Scheme is inherently more meaning-filled, but that also means it may contain expressions that require a Scheme interpreter to understand, which means you can't use your tools.

          OTOH, XML is being extended in all sorts of incompatible ways, so it may soon lose the advantages that it has held.

          Is there a Docbook parser for a Scheme representation? Can it be used to, e.g., generate pdf, rtf, dvi, tex, etc. representations of the code? If so, then my objections are probably wrong. Otherwise, perhaps not. Perhaps if I wrote the Scheme representation, I would need to translate it into XML for usability. (Though in that case, I might prefer the older Docbook format.)

          This is a serious question, as right now I'm having difficulty in getting Docbook working on a Win95 system, but I can easily get, e.g., MS Scheme (and others) working.
          • Re:XML and Lisp. (Score:5, Interesting)

            by jacobm ( 68967 ) on Wednesday November 28, 2001 @06:36PM (#2627385) Homepage
            You are confused. There are such things as s-expressions, which are, loosely, any sort of balanced nested parenthesis thingy:

            ((() ((hi) (there mr)) slashdot) guy () () ())

            is one, for example. Lisp (and Scheme, a dialect of Lisp) a) represents programs as a particular subset of s-expressions, which we'll call the set of s-programs (which makes sense, right? Every program is a piece of data, but not every piece of data is a program), and b) has facilities for very easily manipulating s-expressions -- they are Lisp's favorite datatype (the name LISP comes from "LISt Processor," in fact, and s-expressions are really just a way of writing down lists). The appeal of using Lisp dialects to process XML is based not on the fact that those programs are S-expressions, but on the fact that they process S-expressions easily -- and, as it turns out, XML expressions can be trivially converted to S-expressions. Let's say that there's a subset of s-expressions called X-expressions, and there's a trivial bidirectional mapping between X-expressions and XML expressions.

            So, let's say you want to write a program that reads and/or writes XML to communicate with the world. You can just write your program as a normal old S-expression-manipulating program, like Lispers have been doing since 1958, and then right where you read in the data, you call some function that reads an XML datum instead, and right where you write it out, you call some function that writes an XML expression. Now you can still use all the XML-processing gizmos you already have, but you can also write your own XML-processing gizmos really easily. In fact, I've been involved for some time in a Web programming language project, and that's how we manipulate XHTML: we read in XHTML expressions, manipulate them with Scheme code that's easy to write because the XHTML-to-S-expression mapping is so thin, and then write out XHTML expressions to the web browser. None of the other XML-based tools in the chain (the web browser, XML editors you use to generate the web page templates, et cetera) need to know or care about the fact that my implementation is in Scheme.
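            To make that "thin mapping" concrete, here's how little code an XML-ish query needs once the document is an X-expression -- plain list recursion, no XML machinery (standard Scheme; the PERIOD tag and the => result refer to the weather-forecast SXML quoted earlier in this thread):

            ; Count every occurrence of a given element tag in an X-expression.
            (define (count-tag tag doc)
              (if (not (pair? doc))
                  0                               ; strings etc. are not elements
                  (+ (if (eq? (car doc) tag) 1 0) ; this node itself
                     (apply + (map (lambda (kid) (count-tag tag kid))
                                   (cdr doc))))))

            ; (count-tag 'PERIOD forecast) => 3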

            The only smugness you hear from the Lisp people (and this is where the faux comparisons between Lisp and XML come in) stems from the fact that Lispers have been storing their data the way XML does, only more cleanly and with less typing, for years. Now XML comes along and everybody thinks it's going to usher in world peace and change the way we put our pants on. Well, dammit, Lispers were already putting their pants on a different way, thank you very much!

            • I'm not confused, particularly. But I don't believe that the tools I am currently thinking of handle Scheme-formatted s-expressions (presuming that you call XML s-expressions). I was particularly thinking of Open Jade.

              In particular, I am contemplating ways of getting Docbook, etc. to process correctly as a component of other programs. Easy on Linux, on Win95 ... I haven't figured it out yet (well, the Open Jade site was down when I went there, which may have some effect on this....)

              And it was in this context that I ran across an argument claiming that Scheme expressions were better than XML.

              I suppose the short form of my question would have been "Better for what?", but that seemed discourteous. And besides, if there were a way to do this in Scheme, I wanted to hear about it. So far, no answers.
              • "Better for what?"

                Scheme is better for manipulation of XML.

                Translating XML -> scheme is easy.

                Manipulating s-expressions in scheme is powerful.

                Translating s-expressions back to XML is easy.

                It won't help your problem, though.

                jeff
      • I don't think you correctly understand what the OP was saying. What he said, translated into CS-ese, was "The set of XML expressions is isomorphic to the set of Lisp s-expressions," a true statement. What you heard was, "The set of XML expressions is isomorphic to the set of Scheme programs," which is clearly not at all true -- as you point out, XML (and s-expressions) can be used to describe any sort of data, while valid Scheme programs are much more constrained and can only describe a particular set of computations (though the cool thing about Lisp is that programs are themselves represented as a subset of the very same sort of data that Lisp programs are best at manipulating, which is why Lisp macros are so incredibly more powerful than C macros). However, there's a trivial map from XML expressions to S-expressions, and Lisp loves S-expressions more than any other kind of data, which means that Lisp loves XML expressions almost as much.

    • Which is cool, I love Scheme. But can I inline Scheme and make my ten-year-old, SGML-based pagination engine understand it? Let go of your hype, Luke. Use the code.

      Or better:

      Dammit, Jim, I'm a publisher, not a data-modeller.
      • > But can I inline Scheme and make my ten-year-old, SGML-based pagination engine understand it?

        Probably. Good SGML processing tools understand DSSSL, which is, of all things, Scheme. SGML used to just break down entirely to sexps, but that approach wasn't fast enough, given the state of the art of Scheme at the time.
        • Not the one I'm using. Show me, master, I am eager for enlightenment. I know about DSSSL and its grandchild XSL, which has inherited its Schemy goodness. Which SGML processor are you thinking of? Clark's implementation did not, unless I wasn't paying close enough attention.

          This is something I want. Not some crusty "Bah, it's just sexps," but some working code.
  • by CProgrammer98 ( 240351 ) on Wednesday November 28, 2001 @09:19AM (#2624162) Homepage
    Interesting weekend! Here's the summary in case you can't get on (or if you're lazy!)

    As I've indicated, the interest of the workshop was as much what was going on outside the talks as well; Dan and I got to meet a load of interesting and clever people, and it was challenging for us to discuss our ideas with them - especially since we didn't always see eye to eye with our academic counterparts. Sadly, few people seemed to have heard much about Ruby, something they will probably come to regret in time. Dan seemed to have picked up a few more interesting technical tips, such as a way to collect reference count loops without walking all of the objects in a heap. Oh, and we found that you should pour liquid nitrogen into containers first rather than trying to make ice cream by directly pouring it into a mix of milk and butter. And that the ice-cream so produced is exceptionally tasty.

    But seriously, what did we learn? I think we learned that many problems that we're facing in terms of Perl implementation right now have already been thoroughly researched and dealt with as many as 30 years ago; but we also learned that if we want to get at this research, then we need to do a lot of digging. The academic community is good at solving tricky problems like threading, continuations, despatch and the like, but not very interested in working out all the implications. To bring an academic success to commercial fruition requires one, as Olin Shivers puts it, "to become Larry Wall for a year" - to take care of all the gritty implementation details, and that's not the sort of thing that gets a PhD.

    So the impetus is on us as serious language implementors to take the time to look into and understand the current state of the art in VM research to avoid re-inventing the wheel. Conferences such as LL1, and the mailing list that has been established as a result of it, are a useful way for us to find out what's going on and exchange experience with the academic community, and I look forward intently to the next one!
    • To bring an academic success to commercial fruition requires one, as Olin Shivers puts it, "to become Larry Wall for a year" - to take care of all the gritty implementation details...

      First John Malkovich, then Andrew Plotkin [wurb.com], now this. Aren't things getting a little out of hand?

      -- MarkusQ

  • So, did they get around to figuring out which language is best suited to control a robotic arm in such a way to dump milk and butter into liquid nitrogen?

    "My language tastes better!"
    • Butter? Don't you mean cream? I wonder what exactly the definition of "lightweight languages" is here. Perl is not my idea of lightweight; lightweight *programs* are pretty easy to write, but the language and its environment are *huge*. Python isn't that small either.

      Pascal is a lightweight language, even with extensions. Take a look at pax [rave.org] -- that's very possibly the smallest non-obfuscated functional language out there. Take it to another level: Befunge. False. Brainf*ck (the smallest Turing machine implementation I've ever seen). OISC, living proof that Subtract and Branch If Negative is all you need.
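      To see just how little OISC is, a toy version fits in a dozen lines of Scheme. A sketch, untested; real OISC variants differ on whether the branch fires on negative or on less-than-or-equal-to-zero, and on how I/O is bolted on:

      ; One-instruction computer: each instruction is three cells a, b, c,
      ; meaning  mem[b] := mem[b] - mem[a]; if the result is negative, jump to c.
      (define (oisc-run mem)
        (let loop ((pc 0))
          (if (or (< pc 0) (> (+ pc 2) (- (vector-length mem) 1)))
              mem                                  ; halt: pc out of range
              (let* ((a (vector-ref mem pc))
                     (b (vector-ref mem (+ pc 1)))
                     (c (vector-ref mem (+ pc 2)))
                     (r (- (vector-ref mem b) (vector-ref mem a))))
                (vector-set! mem b r)
                (loop (if (< r 0) c (+ pc 3)))))))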

      Though to be honest with you, when I think lightweight... HTML is precisely one example. The Unix Shell(s), with the possible exceptions of bash and ksh93. JavaScript, if not in execution, is lightweight in concept; Java is not, though it was intended that way. Scheme is lightweight; Common Lisp is not. Snobol probably was. *roff is, PostScript is not. Forth is; var'aq [geocities.com] is not. Lightweight implies two things to me: small overhead and specialized (though not necessarily limited) functionality.

      So yeah, I don't much like the name of this conference.

      /Brian

  • by Anonymous Coward on Wednesday November 28, 2001 @09:22AM (#2624167)
    How "lightweight" a language can you produce, yet keep it usuaable?

    To me, it would seem that the lightest I can come up with is:

    • The ability to create variables, and assign an absolute value to them
    • Comparison: equals, is-less-than, and is-greater-than
    • Branches to an absolute position in the code
    • Math operators + and - only
    • Output to screen
    • Input from keyboard

    So would that be usable? A simple program such as:
    VAR A
    VAR B
    INPUT A
    INPUT B
    C=A+B
    PRINT C
    GOTO 3


    Can we get even more lightweight? :)
    • How "lightweight" a language can you produce, yet keep it usuaable?
      To me, it would seem that the lighest I can come up with is:

      <snip>
      You, sir, are mistaken. The only ability a programming language really needs is to output "Hello world" to the screen. Here's an interpreter (in Perl); let's see if you can guess the syntax:
      print "sansChoice interpreter\n> ";chop($code = <STDIN>);print "\nHello world!\n";exit;

      This is a powerful, intuitive, interpreted, simple, no-point-oriented (NPO) helloworlding-language.


      • > You sir, are mistaken. The only ability a programming language really needs is to output "Hello world" to the screen. Here's an interpreter (in Perl)...

        That's nothing. I'm putting the finishing touches on my new processor design, and it includes a native PHW opcode, no arguments.
      • by Anonymous Coward
        General use. For example, could you write a simple game (say, Pong) with it? How about the perennial favourite "What is your name?" Could you do more complex things with it, such as mangle a simple email address into an RFC 2822 compliant address? Sort a list of values?

        Think simple tasks that you would do with, e.g., a shell, or Perl, or even small C programs. How far can you strip a language down, yet still be able to accomplish those tasks?
        • If you want to be able to use it directly for real general-purpose programming, in the existing environment, it has to interface to any C library. That's a long way above and beyond Turing completeness.

          This changes it from a "closed" language, with fully-specified behavior, complete within itself, to an open language which may be extended to arbitrary behavior with external modules, so it's not really a small language at all, just a small interface to a huge language. It either needs to be a compiled language, or to have compiled modules.

          On the other hand, if you are willing to allow an environment designed as needed, to access all needed functionality in the form of one input and one output line, it returns to the sort of task a Turing machine can do.
    • by Tom7 ( 102298 ) on Wednesday November 28, 2001 @09:38AM (#2624211) Homepage Journal
      Well, we can see that you learned to program in BASIC. ;) Not all languages need assignment as their primary form of computation ...

      I'd say that the lambda calculus is more lightweight, and also easier to program in than your example:

      exp ::= x (variable)
      | exp1 exp2 (application)
      | (fn x => exp) (function)

      Basically the key is that you have higher-order functions (you can pass them around), and that's it. With this, it's relatively easy to code up booleans, integers, lists, recursive functions, trees, or practically anything. (If you wanted to do IO, you'd need some kind of primitive functions for interfacing with the machine.) Since everything is higher-order, it's easy to code these once and then pass them around. It's not as nice as a modern language, but it's nicer than a Turing machine...
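      To see what "coding up booleans" looks like, here are Church booleans and pairs as nothing but functions -- the standard textbook encoding, sketched in Scheme (note that both branches of if* get evaluated, which is fine for values but not for effects):

      (define true  (lambda (t) (lambda (f) t)))  ; true picks its first argument
      (define false (lambda (t) (lambda (f) f)))  ; false picks its second
      (define (if* b then else) ((b then) else))  ; "if" is just application

      (define (cons* a d) (lambda (pick) ((pick a) d)))
      (define (car* p) (p true))                  ; ask the pair for its first slot
      (define (cdr* p) (p false))

      ; (if* true 'yes 'no) => yes
      ; (car* (cons* 1 2))  => 1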

      Actually, there is a simpler language that uses only two functions (!) -- the S and K combinators -- but this one is pretty hard to program in directly.
    • VAR A
      VAR B
      INPUT A
      INPUT B
      C=A+B
      PRINT C
      GOTO 3
      Can we get even more lightweight? :)

      Sure. Why do you have to specifically declare variables? And why have a special syntax for "input a"; why not just have input return its own value?

      (print (+ (read) (read)))
      Heyyy, that looks familiar....:)

    • by StrawberryFrog ( 67065 ) on Wednesday November 28, 2001 @09:55AM (#2624249) Homepage Journal
      The lightest computer mathematically proven to be equivalent to any other language is the Turing machine [google.com].


      If you want to experience the Turing tarpit (where anything is possible, but nothing is easy enough to actually do) firsthand, try the Brainfuck [muppetlabs.com] language, based closely on the Turing machine. The language has 8 instructions, and none of them takes any argument beyond an implicit current location. The compiler is 240 bytes!

      • try the Brainfuck language

        Bf is a lot of fun, but not light in the sense of Perl or Scheme. Since the article didn't define "light language," I'll give it a shot. Looking at their choice of languages, light appears to mean: easy to program, easy to understand, but powerful, interpreted languages. Bf is none of the above. About all you can say for it is that it's Turing-complete!
          Bf is a lot of fun, but not light in the sense of Perl or Scheme. Since the article didn't define "light language," I'll give it a shot. Looking at their choice of languages, light appears to mean: easy to program, easy to understand, but powerful, interpreted languages. Bf is none of the above. About all you can say for it is that it's Turing-complete!

          No, it also has the advantage that, if some poor sap goes to all the trouble to write a daemon in it, you get to smile and say "bfd!"

          -- MarkusQ

          P.S. It's also occasionally useful to drive a spike in "you can't do Y in language X" debates that have gotten out of hand.

        • Looking at their choice of languages, light appears to mean: easy to program, easy to understand, but powerful, interpreted languages

          Yup, BF, Turing machines & lambda calculus are "light" in the sense that they are tiny but theoretically complete & of not much practical programming use. For instance, you can't do TCP sockets in Brainfuck no matter how hard you code, 'cos there's no way to get to the OS socket API.

          The PHP that I use at work (and the Perl, Python, Ruby etc. that other people use for similar tasks) is "light" in the sense of being flexible and quick to knock something together; it integrates well & comes with a great big heavy library full of useful stuff.

        If you want to experience the Turing tarpit (where anything is possible, but nothing is easy enough to actually do) firsthand, try the Brainfuck [muppetlabs.com] language, based closely on the Turing machine. The language has 8 instructions, and none of them takes any argument beyond an implicit current location. The compiler is 240 bytes!

        There's an x86 compiler for bf at ~170 bytes, but isn't the smallest bf compiler written in bf well over a gigabyte?

        -- MarkusQ

    • Lightweight? You mean Visual Basic?
      Yes, I know it's very useful for some purposes, but ultimately it's a lightweight language.
    • Sounded like fun, so here's a reverse-polish interpreter with a whopping 10 instructions (counting subroutine definition). It's not totally minimal, but it is usable, and Turing-complete.

      Odd numbers are true, evens are false, and control flow is through conditional return and conditional tail recursion. Comparisons and other arithmetic operations can easily be built up from addition and negation. Named variables can be created by making subroutines that return an address.

      I call it, simply, rpol.

      RPOL
      #!/usr/bin/perl
      # rpol: nine builtins, ":" word definitions, and bare integer literals.
      %commands  = ();   # user-defined words
      %data      = ();   # "memory" for set/get
      @mainstack = ();
      %builtins  = (
          '+'    => sub { push @mainstack, (pop(@mainstack) + pop(@mainstack)) },
          'neg'  => sub { $mainstack[-1] = -$mainstack[-1] },
          'set'  => sub { $temp = pop @mainstack; $data{$temp} = pop @mainstack },
          'get'  => sub { push @mainstack, $data{pop @mainstack} },
          'in'   => sub { read STDIN, $temp, 1; push @mainstack, unpack('c', $temp) },
          'out'  => sub { print STDOUT (pack 'c', (pop @mainstack)) },
          'eof'  => sub { push(@mainstack, (eof STDIN) ? 1 : 0) },
          '<?'   => sub { (pop(@mainstack) & 1) ? 'repeat' : 0 },   # conditional tail recursion
          'ret?' => sub { (pop(@mainstack) & 1) ? 'return' : 0 },   # conditional return
      );

      open(PROGFILE, "<$ARGV[0]") or die "Couldn't open program file.";
      while (<PROGFILE>) {
          chomp;
          if (/^#/) {
              # ignore comment
          } elsif (/^:\s*(\S+)\s+(.+?)\s*$/) {    # ":name body" defines a word
              $command = $1;
              $commands{$command} = [split /\s+/, $2];
          } elsif (/^\s*(\S.*?)\s*$/) {           # anything else runs immediately
              rpol_exec(split(/\s+/, $1));
          }
      }

      sub rpol_exec {
          REPEAT: ;
          for (@_) {
              if (exists $commands{$_}) {
                  rpol_exec(@{$commands{$_}});
              } elsif (exists $builtins{$_}) {
                  $temp = $builtins{$_}->();
                  if ($temp eq 'repeat') { goto REPEAT }
                  if ($temp eq 'return') { return }
              } elsif (/^(\d+)$/) {
                  push @mainstack, $1;            # numeric literal
              } else {
                  print STDERR "Unknown token: '$_'\n";
                  exit(1);
              }
          }
      }
      #END FILE

      For a sample, I wrote a program that swaps every two bytes of input, then writes them to output.

      Sample Code (test.rpol):
      #unused cat utility
      :cat in out eof not <?
      #unused newline function
      :nl 10 out
      #main program
      :not 1 +
      :swap 1 set 2 set 2 get 1 get
      :cleanup eof not ret? out
      :swapcat in cleanup eof ret? in swap out out eof not <?
      swapcat
      #END FILE
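      (Tracing the sample by hand: swapcat reads two bytes, stashes them with "1 set" / "2 set", fetches them back in the other order, and prints; so feeding it "abcd" should emit "badc", with any odd trailing byte passed through unswapped by cleanup.)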
    • Can we get even more lightweight? :)

      Oh my, yes. All you need to compute is three operations (and another couple to do i/o). Check out unlambda [eleves.ens.fr]. Lighter than Brainfuck, probably even more maddening, since it doesn't have state like a Turing machine does.

      Change the i/o ops to read and write arbitrary memory locations, and you could write an operating system in unlambda (the same goes for any of these other toy languages).
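      For reference, unlambda's three operations are presumably the S and K combinators plus application (which unlambda spells with a backquote). In Scheme they're one line each, and everything else, including the identity function, falls out of them -- a curried sketch, not unlambda syntax:

      (define (K x) (lambda (y) x))                          ; K x y = x
      (define (S f) (lambda (g) (lambda (x) ((f x) (g x))))) ; S f g x = (f x)(g x)

      ; I = S K K: applying ((S K) K) to x gives ((K x) (K x)) = x
      (define I ((S K) K))
      ; (I 42) => 42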
  • I think we learned that many problems that we're facing in terms of Perl implementation right now have already been thoroughly researched and dealt with as many as 30 years ago; but we also learned that if we want to get at this research, then we need to do a lot of digging. The academic community is good at solving tricky problems like threading, continuations, despatch and the like, but not very interested in working out all the implications.... So the impetus is on us as serious language implementors to take the time to look into and understand the current state of the art in VM research to avoid re-inventing the wheel.

    I think that's the coolest part of probably the whole conference. If Perl/Parrot/Python can manage to take the best of both the academic and the practical worlds, they'll be unstoppable. Heck, it might even be a first! The two seem allergic to talking to each other, as if they'll become contaminated, rather than treating each other as a chance to learn, grow, and test.
  • And what about this [userfriendly.org]? At least to represent the commercial side of language design...
  • This is just the sort of discussion I like to see happening. Not a "my language is better than yours" bout, but a frank examination of what makes a language good, and what makes it better.

    I get very tired of the "X is better than Y" fights. They're pointless, and if this collection of language pros can avoid it, so can we. The better language is the one that gets the job done best for you, period.

    Rather than clinging to our cliques, getting together with users and creators of other languages is beneficial to everyone. Hybrid vigour, if you like.

    It's this sort of cooperation the open source movement in particular should embrace, not petty squabbles over syntax preferences. In the end, everyone should win.
    • A language is just a way to manipulate abstract concepts... The more concepts you have, the harder it is to make a language lightweight. It makes me think of those synthesizers with 3 buttons that offered all their functionality through combinations of those 3 buttons. To me, that is just as hard as having more buttons. The essential thing is to understand the concepts you're trying to manipulate. Then anybody can learn any language...
    • Although I do agree that the petty squabbles are not very productive, they do create diversity between language syntaxes, which I think is a good thing. There is an amazing amount of overlap in functionality, and sometimes syntax is the only thing that really sets the languages apart. If I am more comfortable with Algol-descendant languages, why should I have to learn Scheme syntax when there are tools that will work for me in a style that I am comfortable with? I know that it doesn't take too much time to acquire a cosmetic knowledge of a language (knowing what C functions approximately map to what Lisp functions, and such), but being able to use a tool that I am already comfortable with would mean NO learning curve. Collaboration is good, but not to the point of having only one or two languages left to use.
  • From the article (seriously):

    Oh, and we found that you should pour liquid nitrogen into containers first rather than trying to make ice cream by directly pouring it into a mix of milk and butter. And that the ice-cream so produced is exceptionally tasty.

    Years ago I read an article about a guy from Jackson Hole, Wyoming who made gourmet ice cream. He had determined that the two things that separated good tasting ice creams from the rest were:

    1. Fat. Ice cream needs lots of fat.

    2. Size of the ice crystals. The water in ice cream can be frozen in big crystals or little ones. If you freeze it slowly, you get big crystals. Freezing quickly leads to small crystals. Small crystals == better ice cream.

    So this guy found that he could make the smallest crystals by pouring everything into a big bowl with some liquid nitrogen and stirring it really quickly. This was after trying several different methods of freezing the ice cream, none of which were fast enough for him.

    He said that a good test of ice cream was whether it floated in water. Good ice cream should be dense enough to sink. I guess this is due to the high fat content. Of course once you put it in water, it is no longer good ice cream, right?

    • ???

      Fat is *less* dense than water (it floats).

      Maybe the right kind of cold fat sinks (as opposed to the water which expands upon freezing)?

      Anyway, sinking ice cream is strange.
      • Good ice cream sinks, but it's not because of the fat content. It's due to the air content.

        In some places there's a limit on how much air can be in ice cream: 50%. There's no lower limit, but at 0% you've just got a block of ice. So there's a de facto lower limit.

        Something like Ben and Jerry's has much less air. That's why it's denser, and that's why sinking ice cream can be a measure of quality.
        • Could someone explain to me how this comment is currently rated a '4' but mine, which is the grandparent of this comment, got modded up to '5' then down to '-1' and now is resting at '0'?

          Hooray for lemming moderation!

          PS, this guy is right about the air.

      • Try freezing a stick of butter and dropping it into a bowl of water: it sinks right to the bottom. Water expands when it's in the temperature range where it starts to freeze, while everything else (afaik) contracts; water is unique that way. It's really cool if you think about it, too, because if water didn't expand, we wouldn't have ice on the tops of lakes, and there would be no life on earth.

        But anyway if you have lots of fat in the mix, it should sink when it gets cold because as a whole it will be more dense than the water surrounding it.
    • In the UK they have a very strange TV presenter called Jeremy Clarkson, who came to fame as an over-the-top presenter on Top Gear [bbc.co.uk]. For a short while he had a "chat show" where he would take a small part of each show to do something silly and extreme, including making ice cream with liquid nitrogen! It was hilarious, simple, effective and brilliant. Now if only I still worked at tarrc [tarrc.co.uk], where there was a plentiful supply of liquefied chemicals :-)
    • Of course once you put it in water, it is no longer good ice cream, right?
      You could wrap it in clingfilm and fish it out again.
    • He said that a good test of ice cream was whether it floated in water. Good ice cream should be dense enough to sink. I guess this is due to the high fat content. Of course once you put it in water, it is no longer good ice cream, right?

      Once upon a time, my gf and I didn't know each other very well; basically, we were dating. To make a long story short, we once bought a packet of cheapo ice-cream (1kg, banana-chocolate). That night we couldn't eat it to the end, so we decided to dump it. Into the toilet.
      Guess what? It didn't go down. That brick'o'**hit floated there. I read a short prayer upon its soul, and we went to bed. In the morning it was all melted and went down beautifully.

      And the moral of the story?

      you tell me some.


    • We sometimes do this as a party trick at Midwest SF cons. Take your basic ice cream recipe in a big bowl, then have one person slowly pour LN2 from the dewar while the other one stirs madly. I'm not sure why the article recommends pouring the nitrogen into containers first.

      Liquid oxygen works wonderfully as well. Last summer in Michigan we made LOX ice cream with freshly-picked thimbleberries. (And no, it doesn't burn! Not even when you put a blowtorch to it...) In a pinch you can even use dry ice. Have someone rub a block of dry ice on a cheese grater over the bowl. This method tends to leave some residual carbonation in the ice cream. Bring along root beer extract for flavor!

      Other fun cryogenic tricks -- Everclear (190 proof grain alcohol) will freeze at liquid nitrogen temperatures. Small pieces chipped off evaporate marvelously on the tongue. An inverted scotch-on-the-rocks can be made by freezing scotch in an ice-cube tray and adding the cubes to a little water or soda. Winecicles are interesting, too, but beware the tongue-and-flagpole effect when you lick them!

    • you should pour liquid nitrogen into containers first rather than trying to make ice cream by directly pouring it into a mix of milk and butter.

      You can check out my video [thesync.com] about making ice cream with liquid nitrogen. I'm a bit wary of the butter part; generally LN2 ice cream is made with milk and heavy cream, plus sugar and vanilla. I'll have to try pouring the mix into LN2 rather than pouring LN2 into the mix.
  • by mfarah ( 231411 ) <{miguel} {at} {farah.cl}> on Wednesday November 28, 2001 @09:41AM (#2624218) Homepage
    I can't believe it. They talked about Perl, Scheme, Python, etcetera. Yet they didn't invite ESR to talk about the unique problems (and solutions) that implementing INTERCAL poses.



    I sure hope next year's LL2 addresses this issue.

    • ESR can be very controversial. Maybe they were trying to stay especially on topic. Hopefully next year they can invite someone involved with INTERCAL to discuss the topic.

      As has been said elsewhere, these kind of conferences are really focused on trying to find common ground so knowledge can be spread between specialists.

      ESR has an unfortunate reputation for stridency; this could have been a reason for his absence.

      INSERT SIG HERE
    • You gonna pass that crack pipe around, or are you just gonna keep it to yourself?

      =]
    • by Elian ( 10270 )
      He was invited. He just couldn't make it that weekend. Which was a pity, because I wanted to discuss getting an INTERCAL parser for Parrot... :)
    • I'm the co-organizer of the LL1 workshop, at
      ll1.mit.edu [mit.edu]

      Actually, we did invite ESR, and we even scheduled the workshop around his constraints. However, there were a few misunderstandings, and four days before the workshop we got mail to the effect that he wasn't coming.

      Maybe next time.
  • by TechnoVooDooDaddy ( 470187 ) on Wednesday November 28, 2001 @09:43AM (#2624222) Homepage
    Any similar conference I've ever been to (including some W3C working sessions) has been extremely professional, even when working on standards. IMO the only time you get flamefests is on the internet, on boards/newsgroups populated by wannabes who don't fully understand what they're flaming about; the flames are pretty much just a front to cover their lack of knowledge/experience. On the other hand, stick a bunch of knowledgeable people in the same room, and considerable respect for each other is shown.
  • Well, I don't know Curl, but I sure do know Flash. And from what I get from the Curl site, it doesn't seem to have anything to do with Flash or its "niche".
    I wonder what this guy is talking about.
  • by srichman ( 231122 ) on Wednesday November 28, 2001 @10:19AM (#2624313)
    I was at the workshop, and while it was mostly congenial, there was definitely a bit of tension between the academics and the "industry" folks (if you could call them that...). Basically, the dichotomy was between PL researchers, who espouse the virtues of Scheme dialects [plt-scheme.org] and other well designed but not widely used languages, and the applied folks, namely Simon Cozens (Perl), Dan Sugalski (Perl), and Jeremy Hylton (Python), who implement widely used lightweight languages that aren't as "respectable."

    There was a bit of a superior attitude from some of the academics, who feel that languages like Perl and Python reinvent the wheel and neglect the body of academic research by coming up with suboptimal solutions to PL problems that have long since been "solved" in the PL literature. Maybe "frustrated" is a better word than "superior." While I can totally appreciate their point of view, I found myself cringing in embarrassment once or twice when a harangue by one of the academics went a little overboard. There has already been one post on the LL1 mailing list that I feel crossed the line.

    The discussion came to a bit of a head during the (very interesting) "Worse Is Better" panel (based loosely on the writings of Richard Gabriel [dreamsongs.com]), which centered on the question of why the most popular languages aren't the "best" ones.

    Like I said, though, it was mostly very congenial. Ultimately, I think each camp took something away from the encounter: both new-found implementation techniques, and a greater knowledge of and interest in the other community. There are some practical issues that the Perl/Python guys have to deal with (e.g., interfacing with legacy languages like C) that aren't really addressed by academics, and I think it was great that these issues were brought to light.

    The LL1 website, if anyone is interested, is ll1.mit.edu [mit.edu].

  • Academia to Hackers (Score:5, Interesting)

    by Tom7 ( 102298 ) on Wednesday November 28, 2001 @10:25AM (#2624335) Homepage Journal

    I think we learned that many problems that we're facing in terms of Perl implementation right now have already been thoroughly researched and dealt with as many as 30 years ago; but we also learned that if we want to get at this research, then we need to do a lot of digging. The academic community is good at solving tricky problems ... but not very interested in working out all the implications.

    This is the best paragraph in the article. Here's what makes me sad:

    Slashdot-type hackers have an amazing ability to get things done. They can really come up with a working product faster than anyone.

    BUT, slashdot-type hackers have a tendency to implement old ideas, and also frequently to make well-understood mistakes. It is true that we are on the cutting edge of implementing internet protocols and maybe window managers, but in other areas we are still implementing 30-year-old ideas. (OS design and programming languages come to mind especially.)

    WHO, if not the hackers, will embrace this stuff? They are the only ones who are supposed to look beyond the hype and marketing and status quo to evaluate things based on technical merits, and to create implementations of new ideas.

    I know only the OS design that I learned in my undergraduate course. But that is enough to know that the design of the kernel is very conservative! Where are capabilities? Where is fine-grained access control? Does anybody *really* think that their internet daemons should run as *root* just so they can open up a port with a low number? (I know there are plenty of workarounds...) I am sure that there are dozens of great ideas in OS design from the last 20 years that would be totally appropriate for a hacker's kernel.

    I know a bit more about PL design. Being in academia pollutes the mind, I know, but I am sure that almost all I see in the slashdot PL community is reworking of old, mediocre ideas. Who in the world will use and develop new programming languages if not hackers?

    (So, the PL fanatic in me wants to point out caml [inria.fr], which, even though it is not my personal favorite, I think could become really popular with slashdot-style hackers. It is really fast -- probably the fastest, it is hacker buzzword-compliant (it has "objects"), and yet it has taken many great ideas from academia and put them in a really usable, accessible form. Try it if you are in for a taste of something different!)

    Anyway, just trying to say that if you are tempted to go hack up your own programming language, please at least don't assume that Perl is the state of the art because it is the most popular scripting language or something. Take a class, read a book, and check out some of the weirder languages coming out of academia first. Hackers are how the revolution happens...

    • I know a bit more about PL design. Being in academia pollutes the mind, I know, but I am sure that almost all I see in the slashdot PL community is reworking of old, mediocre ideas. Who in the world will use and develop new programming languages if not hackers?

      (So, the PL fanatic in me wants to point out caml [inria.fr], which, even though it is not my personal favorite, I think could become really popular with slashdot-style hackers.


      Of course ML languages are 20 years old, and Caml was developed *before* Perl and Python. So it isn't necessarily that the newer ideas are better, just that lots of good ideas tend to get lost to history for various reasons.
    • One of the big advantages of using objects with languages like Python, Ruby, Smalltalk, and sometimes Lisp is that one doesn't need to know the type of object that one is dealing with when one writes the code. One can ask it what it is, and proceed from there.
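      In Scheme terms, the object itself answers the question at run time; a trivial illustration:

      ; Dispatch on what the value turns out to be when we meet it.
      (define (describe x)
        (cond ((number? x) (list 'number x))
              ((string? x) (list 'string (string-length x)))
              ((pair? x)   (list 'list (length x)))
              (else        (list 'other x))))

      ; (describe 42)      => (number 42)
      ; (describe "hello") => (string 5)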

      This is much more difficult with languages where the type of an object is expected to be known when you write the program. Caml seems to be of this latter class. (So are Java, Eiffel, Ada and C++.)

      Notice that the second group of languages tends to be faster, but less flexible. This appears to be an inherent trade-off (though Java paid extra by having an interpreter for security and portability reasons).
      • by Tom7 ( 102298 ) on Wednesday November 28, 2001 @02:25PM (#2625598) Homepage Journal

        Caml does full type inference for you, so that you have to write fewer types than you would in C or Java.

        In fact, in Caml you really only have to write types when you write down an interface to a module -- and this is exactly what languages without sophisticated type systems lack. It is very difficult to write precisely what your interface is without writing down types, and if the type language is poor (i.e., Java, or worse, Perl) then writing interfaces becomes more an exercise in documentation and finger-crossing.

        (Personally, I also find that automatic type checking is very conducive to writing maintainable programs. It keeps me from making the gross hacks that are so tempting in perl. Typically it doesn't make my programs any longer or more difficult to write, since ML-family languages have lots of features to capture the common idioms that require this "flexibility" in perl et al.)

        Careful not to make too many generalizations. I think Caml is much nicer than other typed languages you mention.
        • It may be very nice in many ways. When I've looked at it, I've been interested in reading in self-identifying objects from files. I'm no expert in Caml, by any means, so I may well have overlooked options, but it didn't look feasible to me. The only way that looked at all reasonable involved having a master type that could deduce which sub-type was appropriate by examining a string header. This is sort of the opposite of what an object system should do, and felt rather like retreating to UCSD Pascal. (Mind you, I didn't study this issue closely, so I could easily be wrong about this. But even the bare references to random access file IO were ... either I gave up, or I was so totally unimpressed that I didn't bother to remember it. [Well, I missed this in Common Lisp the first time I checked the documentation too. For some reason this info seems to tend to be hidden.])
  • by Junks Jerzey ( 54586 ) on Wednesday November 28, 2001 @10:37AM (#2624376)
    it wasn't the flame fest you might have imagined

    Not surprising. The only people who get into flame-fests about programming language choice are insecure newbies. It comes down to the same reason kids argue about whose game system is better: they got one for Christmas and feel compelled to defend their choice, because they can't afford another. Once you know a sizable number of computer languages--especially different styles of language--then you no longer feel a need to be so petty. Different languages have different strengths.
  • Since programming language vocabulary and syntax are the human side of a human -> machine translation process (a process of translation, through an interpreter, compiler, or whatever, down to 0's and 1's), usually requiring human "logical" thinking, isn't the real objective here one of identifying and defining abstraction manipulation functionality, the logic of translation mechanics?

    Certainly, if the target is to be an optimized sequence of 0's and 1's, are not the translation mechanics responsible for getting it there, from whatever vocabularies and syntaxes are used?

    This is where I believe genuine computer science and software development research got seriously distracted by the carrot of money. And as it was mentioned in the article regarding not doing it right in a tradeoff of getting it out the door, getting back to genuine computer science may be difficult to do! But it also seems to be an ongoing and growing problem in genuine Software Engineering. The latest version of a need to solve the software crisis? [ibm.com]

    Note that IBM presents a white-collar, high-dollar I/T solution direction intent, but without any identification of the base functionality mechanics of translation. Read Written Comment #4 [uspto.gov] after reading the "Manifesto" at the above IBM link.

    With all this in mind, what are all these "Lightweight Languages", but examples of how many ways you can create a custom vocabulary, syntax and translator that outputs 0's and 1's not always in the optimum sequence?
    • Thinking in terms of "1's and 0's" is seriously wrong. Binary data is simply data; it does not represent computation itself. You can encode a UTM in base 2, 3, 10, or 287 if you like. There is trinary computation, as covered on Slashdot only a few days ago, and there is quantum computation, which really flies in the face of a lot of classical computing rules.

      The fact that every language is an application of basic computing concepts is no more help than telling a research pathologist that all viruses are made of matter. It's simply not science to keep pointing out things we already know.
      • When we actually have trinary-state hardware, then it will be a matter of translation processes that boil down to trinary states for that type of hardware. For quantum states, translation processes that boil down to quantum states, for quantum-state hardware. For analog computing, the translation process would .......

        Encoding to a selected base is also a translation process, but not one that is directly compatible with the hardware without further translation (that you don't see), unless it is to the base compatible with the hardware.

        Hardware is made of matter, and apparently a refresher course wouldn't hurt you. Back to basics is always a good thing when you have forgotten them or have gone astray to the point of failing to solve problems like the software crisis.

    • With all this in mind, what are all these "Lightweight Languages", but examples of how many ways you can create a custom vocabulary, syntax and translator that outputs 0's and 1's not always in the optimum sequence?

      You completely miss the point. If you want to address your so-called software crisis - which is only a crisis when you have unrealistic expectations, based on ignorance or denial of the issues being faced - then you need to provide humans with languages that allow them to express programs in powerful ways, that make programming easier and more reliable. Focusing on the 1's and 0's completely misses the fact that the challenges lie at the level at which the humans controlling the machines operate.

      Academia has produced many innovations in these areas. All modern mainstream languages can benefit from these "new" language technologies (some of which are actually decades old). The LL1 workshop was about communicating between those who have developed sophisticated and powerful ways of dealing with language problems, and those who have a record of having implemented languages that are popular with humans - languages that are used not because of mandate from on high, but because they're perceived as easy to use and also powerful, and thus desirable to use.

      An additional interesting element here is that the mainstream "Lightweight Languages", like Perl and Python, have a better track record than the big commercial languages of incorporating these ideas - witness the fact that Perl and Python support advanced capabilities such as closures and continuations, whereas other recent languages, like Java, have limited to nonexistent support for such things.
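      For readers who haven't met the two features being credited there, this is what they look like in Scheme, where both are native (Perl's spellings differ, but the behaviour is the same idea):

      ; A closure: the returned procedure keeps its own private n alive.
      (define (make-counter)
        (let ((n 0))
          (lambda () (set! n (+ n 1)) n)))

      ; A continuation used for early exit: return the first negative
      ; element of a list without scanning the rest.
      (define (first-negative lst)
        (call-with-current-continuation
          (lambda (return)
            (for-each (lambda (x) (if (< x 0) (return x))) lst)
            #f)))

      ; (first-negative '(3 1 -4 1 -5)) => -4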

      A collaboration between the authors of mainstream lightweight languages, and academic language researchers, opens the possibility to accelerate language development in a sorely needed way - instead of innovations taking literally decades to make their way from academia to the mainstream (e.g. the way object-orientation did), this lead time could be reduced to mere years.

      In addition, via lightweight languages, these features would be delivered in a form more palatable to the audience consuming them. Lightweight languages tend to recognize the pragmatic needs of their users, as opposed to imposing restrictions based on aesthetic constraints such as "elegance".

      In summary, the plethora of lightweight languages is a simple reflection of a dynamic and fast-evolving ecosystem, an absolute requirement for further progress in the extremely complex endeavour of humans programming machines.

      • I didn't create or define the term "software crisis". The computer industry did, in the mid-1960s. And the computer industry continues to use the term as an identification of a class, and sum total effect, of problems.

        Your failure to know this seems to be consistent with failing to understand that it's not "language" and "syntax" that are the real issue, but rather getting the translation mechanics figured out.

        This way it really won't matter what vocabulary (language) set and syntax you use; rather, it opens the door for combining languages as well as extending and creating them, allowing you to use the better vocabulary and syntax for what you are expressing.

        Translation simply takes whatever you have written and converts it into the optimum bit sequence for the machine to deal with.

        Or, for that matter, translation from whatever form to whatever target form is defined, like human-to-human translations (e.g. English voice input to German spoken output).

        The translation mechanics are going to be the same.

        There is nothing wrong with defining new concepts, such as languages do. But having the science of translation mechanics figured out will enable new concepts to be applied a lot sooner, and probably a lot more easily too.

        • I didn't create or define the term "software crisis"

          No, but you used it as though you believed it. In my experience, now that it's no longer the '60s or '70s, the phrase gets thrown around by people who know little about software development, usually to sell a product or idea.

          Translation simply takes whatever you have written and converts it into the optimum bit sequence for the machine to deal with.

          Or for that matter, Translation from whatever form to whatever target form that is defined. Like Human to human Translations (i.e. English voice input to german audio/spoken output.)

          The translation mechanics are going to be the same.

          You might want to read Joel Spolsky's piece on "Architecture Astronauts [joelonsoftware.com]". Suggesting that all translation from any language to any other language involves the same translation mechanics, to me simply indicates that you've never actually done any work or studying in this field, and are indulging in armchair speculation. Any abstraction at that level will be essentially useless to the task of actually translating the material in question.

          Take a look at what's involved in rewriting code in a functional language to a compiled form - an area that's enjoyed a lot of academic attention - and compare that to tools which translate human languages. The commonality there is minimal, at the level of stratospheric overviews like "get input; translate input; produce output". Architecture astronauting indeed! Dare I point out that all computation follows this pattern - "translating" a problem into a solution - so once your universal translation mechanics have been developed, we'll never have to write another line of code? "O great translation mechanics, what is the answer to the question of life, the universe, and everything?" Sounds good, let me know when you're done!

  • Obviously... (Score:4, Interesting)

    by mirko ( 198274 ) on Wednesday November 28, 2001 @10:54AM (#2624442) Journal
    If we don't take the learning curve into account, you might end up with ColorForth [colorforth.com]
    (or any other Forth derivative, such as BigForth [jwdt.com] - for Linux and Windows - which includes a breathtaking GUI RAD: Minos)...

    Here's a small ColorForth program: an IDE disk driver [colorforth.com].
  • The best development for little language writers could be Perl 6's Parrot "assembly" language. By focusing on the details of executing a very low level language very quickly on a huge number of platforms, the Perl team will be providing an excellent VM model that should minimize the need for language designers to worry about issues like GC and other system headaches.

    I doubt this is the first effort to create a popular open VM, but it seems to be one of the most heavily promoted. Hopefully we will see Parrot-based languages springing up everywhere, and perhaps even ports of existing languages.

  • What exactly constitutes a lightweight language? Is it a scripting (interpreted) language? Or one that serves as a building block for other languages (a micro-language, so to speak)? Or is it merely any language that can be used for Rapid Application Development? I thought I caught a reference to Java in that article (used as a comparison), which I don't consider to be "lightweight" by any means, and I've heard people swear by Python as a full-blown development platform (I don't know anything about Python, so forgive me if I sound ignorant). It seems as if a lightweight language is basically one that is an open source work-in-progress.
    • Definition (Score:3, Informative)

      by srichman ( 231122 )
      From the call for participation [mit.edu]:
      We use the term "lightweight languages" to describe some of the common features of these new languages. The term "lightweight" refers not to actual functionality, but to the idea that these languages are easy to acquire, learn, and use. Examples that would fall into this category include Perl, Python, Ruby, Scheme (and scsh), and Curl.
      They are also often (but not necessarily) dynamic, interpreted, and/or loosely typed.
  • "Sadly, few people seemed to have heard much about Ruby, something they
    will probably come to regret in time."
  • by Snard ( 61584 ) <mike.shawaluk@ g m a i l .com> on Wednesday November 28, 2001 @12:11PM (#2624812) Homepage
    From the review:

    Paul Graham rounded off the talks by talking about his new dialect of Lisp, which he called Arc. Arc is designed to be a language for "good programmers" only, and gets away without taking the shortcuts and safeguards that you need if you're trying to be accessible.

    I predict that someone will later come out with a new and improved version of this language which is backward compatible, and runs 10 times faster. That language will, of course, be called Zip.
    • Anyone who's a Lisp fan (and those who'd like to be converted) should really read Graham's writeup of his LL1 presentation at http://www.paulgraham.com/arc.html [paulgraham.com]

      I'm really excited about this language. They're going to give an honest shot at making a Lisp that will have more general appeal (read the part about onions: they're taking the "onions" out of Common Lisp), yet still maintain the raw power of macros. It should be very exciting.

  • Since functions are degenerate relations, and people need relations (remember the relational databases need stored procedures etc., guys), my favorite foundational prototype for "light-weight programming languages" is Libra -- a programming language based on binary relations [adelaide.edu.au]. It's not what I would have done as a prototype, but hey, the guy [adelaide.edu.au] DID IT and here I am talkin' about it! :-)
  • Yes, great idea, but one could argue that both academics and the worthy hackers are somewhat removed from typical user contexts. There are plenty of requirements in the 'enterprise applications' space (not to mention the scientific modelling space etc.) that are virtually never addressed in basic programming language, but only in endless 'frameworks', 'wizards' and other add-ons that are really band-aids for concepts not considered at language design-time. For example:

    Commercial information tends to be persistent, not transitory. A good language should work directly with stored data.

    Processes in organizations are long-lived and distributed, whereas typical programming languages just deal with transient threads etc. (outside workflow systems such as WebLogic Integration).

    Programs represent rules, algorithms and other forms of knowledge that end-users will want to add to (e.g. a discount formula). Not only should the environment allow run-time modification and extension, it should also support representations and syntaxes accessible by non-programmers.

    Every action has a principal actor associated with it, and typical commercial environments need to record who it was for auditing and access control purposes. If a programming language has no concept of Principal, one has to be stuck awkwardly on the side (e.g. Java EJB isCallerInRole).

    Transactions are a very common programming model. At the very least, there should be support for creating and propagating transaction IDs, restarting procedures etc.

    What else? Run-time metrics, versioning, SQL-style set predicates... well, you get the idea. People have to wake up to the fact that there is still a huge disconnect here.

    (Amazing to think that Java gave Microsoft some ideas and a wide-open goal, and they came up with C#).

  • rebol (Score:2, Interesting)

    I am a bit of a language freak and have a long-time habit of hearing about a new language, reading a brief feature list, getting really excited, reading the language and library docs, discovering something I don't like (e.g. in C# the way methods aren't virtual by default), going off the language intensely, then adding it to my cv anyway.

    But when I checked out rebol, which was mentioned in the article, I found it was in fact as good as it first seemed, maybe better.

    Within an hour of first hearing about rebol I had written a GUI program that displayed the live picture of the Tokyo Tower on the net and updated it every 60s.

    When I first wrote this program, it was as a learning experience for C#, and it took a hell of a lot longer to write, and the code is much longer.

    So maybe for me rebol is the ultimate lightweight language!
  • Little languages (Score:2, Informative)

    Some folks have asked me about the talk I gave at the workshop, which Simon described so kindly in his review on perl.com.

    I wrote a paper about it. Although it's true I am a pointy-headed academic, I do occasionally hack a few lines of code, and when I've solved a problem over in the research world whose solution would be useful to hackers, I try very hard to write papers that are readable by your generic hacker.

    If you go here http://www.cc.gatech.edu/~shivers/citations.html [gatech.edu] you'll see a list of papers I've written. These are the ones that people in the perl/scripting/lightweight-languages community might find interesting:

    1. A universal scripting framework [gatech.edu]
    2. The SRE regular-expression notation [gatech.edu]
    3. Atomic heap transactions and fine-grain interrupts [gatech.edu]
    4. Automatic management of operating-system resources [gatech.edu]
    5. Continuations and threads: Expressing machine concurrency directly in advanced languages [gatech.edu]
    6. Supporting dynamic languages on the Java virtual machine [gatech.edu]
    7. A Scheme shell [gatech.edu]
    8. Scsh reference manual [gatech.edu]
    #1 is the lightweight-languages paper on which my talk last week was based. By the way, the expect/chat & make replacements I mention in the future work section of that paper are basically done. I've three students at Georgia Tech who are wrapping up the implementation of a nice recompilation system called sake (pronounced "sah-kay," like the fish), which I like very much. A student who worked for me at MIT two years ago did an expect & chat replacement. (Lots of the scripts in my /etc directory are now written in scheme rather than sh, and I wanted something for my ppp subsystem.)

    #2 has an opening flame about a problem in the open-source community I call "the 80% solution" problem. The regex notation it describes is now standard with scsh.

    #4 & #6 will be of interest to VM designers.

    #8 is, ahh, somewhat more well known for its non-technical content. But I'm on a new set of meds now, and doing a lot better, really.

    -Olin
