
Open Source Programming Language Design

descubes writes: "It's been a long time since Java, the last major change in programming languages. Could the next one be designed "the Open Source Way"? For a few years, I have been working on a programming language called LX, which is part of a larger system called Mozart. I need some feedback. Could Slashdot readers comment on which programming language features they would like?"
  • The whole thing is brain-dead. And my biggest peeve? Indentation-sensitive syntax.

    Do any of these idiot-child language "inventors" ever think about how indentation-sensitive syntax impacts a good cut and paste job?

    And no, you idiots out there who are cranking up the whine-o-grams: "but George, shouldn't you be writing modular & re-usable code instead of cutting and pasting?"

    No! Goddamnit, I'm talking about when real work gets done. When you are writing real code while learning about a complex problem, and you have to re-design 2-6 times a day. That is when real men & womyn cut and paste.

    Then again, the only languages with indentation-sensitive syntax are prissy little scripting languages, so I guess real programmers need not apply.

    Uh, then again, there could be "real" languages out there with indentation-sensitive syntax, Lisp? Never used it myself. Some functional languages used in CS classes of yore? Don't recall. I didn't claim to know everything *and* be perfect.

  • Compilers produce crappy assembly language

    Yes. HOWEVER, assembler code is not 'magically' faster than C code. A bad assembler programmer's code will run slower than a good C programmer's code, and vice versa. However, a good assembler programmer's code will run faster than a good C programmer's code, most of the time.

    However, this point will be moot when Sun begins the trend of making up for a slow and inefficient language by producing hardware specifically designed for the language: the Java processor board. It plugs in like a standard IDE/PCI/whatever board, and it runs Java bytecode at blazing speeds.
  • by Anonymous Coward on Friday April 27, 2001 @09:35PM (#260310)
    A very long time ago, IBM tried to combine Fortran, COBOL, and a mess of other languages into an uber-language. They tried to put every feature in, and it got so big and cumbersome that it never became widely adopted. Putting a "request for features" call out to the world will surely never give you the language you're striving for. If Python, Java, Rebol, Perl, C/C++, etc. don't solve your problems, figure out what you're missing and take it from there.
  • by Anonymous Coward on Friday April 27, 2001 @05:59PM (#260311)
    Things I'd like to see:
    • Compiled and Interpreted (interpreted for development, but a compiler for what needs speed, just as some people use C interpreters to develop C apps)
    • A good standard library (like Java/Python/Perl)
    • Perhaps Truly Object Oriented (like Smalltalk)
    • Support flexibility. For instance, I'm annoyed that Java won't let you create references to methods... they'd be very useful in a great many projects I do.

    The key is to be unique. You'll notice Python only recently rose to fame because it contained ideas not seen in the mainstream lately, such as indentation as the delimiter of scope. Try to come up with something radical that most people can't even think of now, but keep it simple so people can use it.

  • Except for FORTRAN, which still kicks C's ass on numerical applications because of the "pointer problem", and yes C++ can produce code as fast as C, but it's much more difficult due to the complexity of the language.

    Standard C++ fixes some of the performance problems with earlier implementations. For an eye-opener, check out Blitz++, a numerical library written in C++. It performs on par with FORTRAN, sometimes even exceeding FORTRAN's vaunted numerical speed.

    Standard C++ can also be much, much faster than C. The standard sorting algorithm is a typical example. std::sort is 250% to 1000% faster than qsort according to one benchmark. It is 20% to 50% faster than a hand-coded C quicksort for a particular data type. I have seen such results elsewhere -- this is just the first page Google turned up.

    Yes, std::sort is using inlining to good advantage. That's not "cheating" as some may argue. C++ (and the standard library) provide the efficiency of inlining while maintaining genericity and separation. That's what templates do. It's an intrinsic part of the language. C++ and the standard library help you reduce programmer time (less code to write) and execution time in many important cases.

    C++'s combination of static typing, polymorphism and generic programming while maintaining the ability to do "traditional" C-style structured programming is really, really nice. I have my choice of options for coding particular modules and I don't need to learn three different languages to do so. One could even argue that C++ supports a fourth model: with template metaprogramming, one can write C++ code in a style that almost looks like functional programming in the sense that recursion is used exclusively and the code implements functions that do not modify any values. Granted, this form of coding is limited to compile-time values, but it can be used in lots of surprising ways to do things like generate entire class hierarchies automatically.


  • Which brings me to another point: there's a lot of legacy code in other languages, so it would be very nice to be able to copy and paste it into a hybrid program.

    Why in heaven's name would you want to do it that way? A much better approach is to compile those legacy codes into separate modules and provide the language with a way to import those modules with the ABI of the particular legacy system used. C++ can do this to a limited extent with extern "C" and there's no reason someone couldn't implement extern "FORTRAN" or some other such thing (I'm not sure if standard C++ allows extern to specify a calling convention in addition to a linkage, though).

    If we really take this to extremes, we might want such a feature to handle interpreted languages as well. It gets a bit tricky around that spot, though. :)


  • by The Man ( 684 ) on Friday April 27, 2001 @08:11PM (#260315) Homepage
    ...You need to have COME FROM, gotos are for wussies who need their hands held...

    Oh, you mean exceptions. After all,

    try {
        ...
    } catch (SomeExceptionType baz) {
        ...
    }

    is really no different from

    if (some_condition)
        COME FROM foo;

    At least the Intercal folks realize their language is a joke; I don't think Sun have caught on yet.
  • by pohl ( 872 ) on Friday April 27, 2001 @06:01PM (#260316) Homepage
    There should be a "Misinformative" moderation label. Jikes is not an open source variant of Java. It is a compiler for the Java language, implemented with an open source license. It is merely an alternative to the javac compiler that comes with the JSDK. It was even a faster alternative the last time that I tried it, but it is not a language that was designed in an open community, which is what the question is about. One could use "offtopic", I guess, but then would likely be screwed in metamod. Not that I care about karma.
  • by Ian Bicking ( 980 ) <> on Saturday April 28, 2001 @12:27AM (#260318) Homepage
    Named/out of order arguments- will cause confusion and bugs
    You are so very, very wrong with this. Named arguments are, IMHO, entirely and completely positive, with no negative effects whatsoever.

    Any function/method with more than, oh, two arguments causes confusion and bugs. Most of the time there is no real natural order to the arguments -- perhaps some conventions, but that's about it. Does the file come first in fputs, or is that fprintf...? Does either of those make more sense than the other?

    Argument ordering is usually arbitrary, but named arguments are never arbitrary. For large function calls (which would include object instantiation) keywords (named arguments) are very good.

  • And even if a human can write assembly better than a compiler, is it worth the cost? For the majority of us, the answer is clearly no.

    Agreed! By the time the loss of portability and maintainability plus the development cost are considered, it will rarely be worth it. See: The Story of Mel. I really doubt we want code like that these days.

  • Greetings!

    In your comments you wrote:

    What about: any large-scale application where performance and stability matter?

    That is exactly my point. "Any" is not meaningful. "Any" sounds like snake oil to the person who is first exposed to your technology (blame the hordes of marketeers that preceded you). Finding a specific problem where you can empirically demonstrate that LX and Mozart outperform other means (in uptime, cost, stability, development time, etc.) will focus your efforts and your PR. That will bring people to use your technology, and they, in turn, can discover that the technology is excellent for other applications.

    Mirror the success of others. Java started in the applet space. It's now used for developing everything from web to embedded to hard real-time applications. It took some time, but people eventually came around to realize the many uses of the technology.

    Good luck,

  • by ciurana ( 2603 ) on Friday April 27, 2001 @05:51PM (#260323) Homepage Journal

    Congratulations on your development of LX. It seems like you've made excellent progress so far, and the language definition and examples are useful for understanding the language itself. It looks cool.

    Rather than commenting on the language and its features, I'd suggest that you identify a problem domain where your language (and Mozart) are a better solution than any other options out there. This will allow users to identify your language more quickly and bring more users to it. When people think of Java they think "the language of the web." PERL? "The duct tape of the Internet." I'm sure you get the idea.


  • Wow, the coolest hacking link I've seen on Slashdot in a long time!

    I always knew that C++ offers more ways for obfuscated programming than C - I can't wait to show these snippets to my colleagues. :)

    And funny to realize that I have just bought the book on Generative Programming that has been mentioned on that template meta programming page that features the Meta Lisp for C++ interpreter.

    One more reason to read the book soon.

    Mod this up!

  • but you're ok with having to download a compiler? or do you also demand that it compile with gcc automatically?
  • From my undergraduate-level compiler design course (which every CS person should take, IMHO), there are many problems in optimizing compilers that are NP-complete. For instance, the favoured method of allocating variables to registers is NP-hard for exact answers, but a heuristic is used that works very well, provided you have more than about 16 registers (one of the main reasons why modern architectures all have 32 or more registers).

    Anyway, there is no such thing as the "perfect optimizing compiler". To be verifiably optimal, as well as knowing everything there is to know about the machine's internal architecture, it would have to have complete knowledge of the dataset that the compiled program will be run on. If that is not available and there is a tradeoff to be made, the compiler has to make a choice that may be suboptimal.

    To take a simple example, the compiler might choose functions to be placed in a certain order in the object file so that functions called repeatedly in sequence can all fit in the cache at once. Running the program with a different dataset could produce different call patterns, and thus the optimal layout of functions in the object file might be different.

    So, your program, and any non-trivial program, could only ever be truly "optimized" for one input dataset. Anything else is a compromise.

    Go you big red fire engine!

  • > There are plenty of indentation mode packages arround for emacs already.

    And I only need them for languages which lack any capacity to pretty-print, and thus force me to do it manually.

  • An inner class might constitute a closure, but to use that to program in functional style is a PITA - I know, I'm currently busy converting a C++ program that relies heavily on functors to parameterize its behavior, into Haskell. I wrote the C++ code some years back, and while it was fast and did the job, maintaining and extending it was a pain.

    In Java or C++, classes (not just inner classes) can be used to emulate the behavior of both closures and first-class procedures in order to code in a somewhat functional style. But that doesn't qualify a language as having functional features. My point was that in Smalltalk, closures and first-class procedures were already watered down, and Java all but eliminated them. Perl and Javascript both have real first-class procedures and real closures, and they are both stronger languages for that.

    I agree that a hybrid language isn't ideal, but I was predicting what I think will happen, not what I'd like to see happen. I don't yet see a functional language that's truly ready to completely replace imperative languages for average programmers. So for the foreseeable future, the mainstream languages will simply adopt functional features.

  • by alienmole ( 15522 ) on Friday April 27, 2001 @07:36PM (#260336)
    Functional languages have already had quite a strong impact on mainstream languages, but only indirectly. Your professor is absolutely right about how long it takes for programming ideas to hit the mainstream. Java is the first example of a mainstream language which allows some of what Smalltalk enabled back around 1972.

    However, Java focuses almost exclusively on strongly-typed object-orientation as its primary concept. It completely ignores two related features which make Smalltalk powerful: code blocks and closures. These Smalltalk features were actually derived from LISP, which at the time (1972) could only be called a proto-functional language. The first truly functional language was probably Scheme, in 1975.

    Because the functional ideas inherent in LISP were not fully developed at the time Smalltalk was created, the conceptual emphasis in Smalltalk was on object-orientation, derived from Simula. If Smalltalk had been able to draw from Scheme instead of LISP, there's a strong chance that it would have had a more functional bent, which might have affected the languages which were influenced by Smalltalk.

    Instead, Scheme came along just a little too late to directly influence the mainstream. Only recently have we started to see functional features appearing in mainstream languages. PERL and Javascript both support lambda-calculus-compliant closures, and first-class procedures, which are fully realized incarnations of the original concepts on which Smalltalk's somewhat limited code blocks and closures were based. Python has also recently moved in this direction.

    I predict that functional features will slowly be adopted by most mainstream languages over the next decade or two. Java will be the last new mainstream language that's completely non-functional (pun intended). The power of these functional capabilities is too great for language designers to ignore.

    Note that I'm not saying that current functional languages will become mainstream languages. Rather, just as mainstream languages have absorbed object-oriented concepts, they will also absorb functional concepts.

    Anyone writing a language today who isn't familiar with Scheme, Haskell, and ML may as well throw in the towel right now. Unless they plan to invent the next great paradigm, they will not succeed. I think it's impossible, in 2001, to write a language without taking functional concepts into account. (Of course I'm reminded of Tanenbaum telling Torvalds that writing a monolithic OS kernel in 1991 was a fundamentally bad idea...)

  • It seems virtually every language falls on its face when one tries to put abstraction on top of it. And no matter how abstract some language is, someone else comes along and wants it to be even more abstract. Just admit it: your goal is to be able to assert "all problems are solved" and expect it to just be.

    If you think C collapsed because of heavy macro hacks to implement more complex systems, then I say that C has not collapsed, and is running just fine for those of us who don't try to push it beyond what it was designed for. Of course C can't be everything. But it is for me an excellent tool for most of the things I need to do. Of course some things could use something better, and I do look to greater languages for that. But that does not mean C collapsed into failure. That's only for someone who expected something out of it that it just isn't for.

    I didn't see any comment from you about Forth. What say ye of Forth?

  • But some process like Python's PEP could be a good idea... since every programmer should listen to its users, and the users of computer languages are the other programmers.

    Of course I overstated my case a bit... I was having fun (something in short supply here at the end of the semester). It's worth pointing out that the people the developers listen to tend to be people who have actually worked with the language and understand its gestalt... this new language may not even have a gestalt, and it certainly doesn't have people who have worked with it for hundreds of hours. I think until you get to that point, such things are pretty much a waste of time.

  • by Jerf ( 17166 ) on Friday April 27, 2001 @08:14PM (#260339) Journal
    I want it to be object oriented!... except for the useless parts. Oh, and combine the best of imperative and functional, the best of perl and python, the best of C++ and smalltalk, the best of capabilities and UNIX, the best of BeOS and OSX, the best of nethack and Angband, and the kitchen sink.

    It should work on Palm Pilots, and Beowulf clusters. It should be easy to extend, easy to parallelize, and easy to optimize for every major processor currently in use. It should be easy to read, have a powerful and compact syntax, and be familiar to people who understand Pascal, Ada, LISP, Prolog, or SQL. It should be comprehensible to an advanced two-year-old, usable for teaching computer science concepts in college, and usable in a professional environment. It should be loved by both the Slashdot community and Microsoft, and it should be immune to embrace-and-extend.

    It needs to perfectly fit my needs but also perfectly fit the needs of my grandmother. It should have a dancing baby as an atomic object. By the way, whatever your language is currently doing is totally wrong, and you should totally change it around. Also, you need to satisfy every last comment posted in response to this article, plus the ones people only thought, but didn't take the time to post.

    Your language should replace OpenGL as the dominant graphics platform. Your language should have an order-N searching algorithm built in. Your language should easily extend into the quantum domain when such computers become available. There should be a command-line option that will read in all of my old QuickBasic programs from when I used DOS.

    Your language should be interpreted, compiled to byte-code, compiled to Java(tm) byte-code, or compiled to native code, depending on context. Your language should make sure that all programs written with it should be optimally thread-safe. Your program should be able to detect whether a given program will go into an infinite loop. Your language should have no patent issues. Your language should have all the whiz-bang features other languages have.

    I want to be able to apply an XSL stylesheet to my source code and get the equivalent program in Turing Machine code, but I don't want to learn XML... that's too much to ask. Your language should automatically internationalize all programs written in it.

    Your language should be elegant. I want to be able to implement the Linux kernel in three lines of code.

    I would take any suggestions on Slashdot with a Detroit-salt-mine-sized grain of salt. Consult a real language expert... because most of all, your language should not be designed by a committee of random computer users.

  • by JohnZed ( 20191 ) on Friday April 27, 2001 @06:01PM (#260341)

    When developers (Pike + friends) needed an efficient, processor-independent language for systems programming, they created C. Later, when the systems got so huge that they needed a new layer of abstraction, they (Stroustrup et al) looked at the problem and came up with C++.

    Guido wanted a language with the readability of ABC, but with exceptions, OOP, and extensibility, while Larry Wall obviously needed a Postmodern Extension and Reporting Language. Java's history is similarly tied in with very specific problems (smart devices, then applets).

    A programming language is an answer. If you propose to design one without first asking a concrete question (no, it doesn't count if your question is "what would be a really cool language?"), I suggest that you name it "42" for obvious reasons.


  • Well, I don't know if it's good compared to others, but EiC looks really interesting.

    There you can find some links to other interpreters too.

  • by gunter ( 32474 ) on Friday April 27, 2001 @07:05PM (#260344)
    "I like Java because no matter what I do I can't do anything dangerous. Err wait, I hate that about it :)."

    Our rescue here is JNI. With it you can even segfault Java :)

  • Expression reduction. This seems like it would be hard to implement and very confusing.[...] I think it creates too much confusion unless you can demonstrate that this would be a huge speed boost.

    I did actually implement it :-) Check out the compiler from CVS... The precise rules are written in a separate document (which I need to put on the web someday), but basically amount to "the largest that matches." People who have worked, for instance, on large matrices or vectors that thrash your TLB know that there is a significant speedup in combining operations. Well, even multimedia encoding/decoding would benefit: this is the right way to define, at a high level, the equivalent of low-level instructions like MMX, AltiVec, etc. But the most important readability gain, in my opinion, is for types (think of "array [A..B] of C" being a type expression).

    The first example you showed of these was basically using them as a replacement for unions. I hate unions.

    The fact that you hate them doesn't make them useless. Consider a device control register where flipping a bit in one register changes the meaning of another register. The alternative is ugly pointers.

    Basically, you created much simpler syntax for dynamic memory management.

    No, I tried to create a way to represent data types that you can't represent easily in C. Simple application example: Run Length Encoding (RLE) for your good old BMP files.

    I can't help but see so many different types of pointers as needlessly complicated and confusing. What will you really be needing pointers for?

    Two answers there. First, the existence of multiple pointer types is a consequence of having them defined "in the library" rather than "in the compiler." Any program can define a "pointer type." And all the pointer types I describe are in the library.

    Second, I essentially try to fix a problem in C, C++, etc., where a pointer to allocated memory can automatically 'alias' an object on the stack. That's very bad for optimizations. This doesn't happen in LX, because these would be pointers of different types.

    Last, to answer your suggestion of "leaving the others out": since the language allows you to create them, if there is no library-defined pointer, alternatives will pop up (just like the many string classes at the beginning of C++).

    I think you are misinterpreting where the (fragile base class) problem lies

    The fragile base class problem is that you cannot extend the class. I am suggesting that the set of polymorphic operations on a type is not closed by a single definition (for C++, the class definition.) Hence, any derived class can first add the functionality to its base class if needed. A bit as if in C++ you could say:

    class Foo extension {
        virtual void MyNewMember();
    };

    class Bar : Foo {
        // Can now use MyNewMember.
    };

    void Foo::MyNewMember() {
        // Extends class Foo, only for class Bar.
    }

    Thanks for the comments. They help.

  • LISP is cool, no doubt. And yes, it is reflective, user-built, user-extensible. On the other hand, it was never a language for the rest of us. One of the reasons is: Lots of Insipid and Stupid Parentheses. LISP is a bit like the Reverse Polish Notation in HP calculators. If you know how to use it, it's really great. But most people can't get used to it.

    And yes, I sometimes use LISP or other functional languages. Heck - early versions of LX even had support for Prolog-style backtracking! But no, Common Lisp does not have 33 out of the 38 features, and no, 38 - 33 is not 4 :-)
  • Mea culpa. It used to be called "Xroma", and I preferred that name. Then the person who had coined the name left the company I work for, and wanted to keep the Xroma name. I had to recode the whole thing (full of puns like Xromasomes, Xarbon, Xode, Xid, etc.) in a hurry. I realized the mistake with Mozart/Oz after having coded 15,000 lines that used the Mozart terminology. I am really sorry about that, but I'm tired of recoding...
  • Hear hear!

    There's been many times when I pasted a snippet of perl from somewhere into my program and ran it...only later did I fix the indentation.

    Not to mention, not everyone writes software the same way, so what I think is properly indented may not match what others think.
  • by addaon ( 41825 ) <> on Friday April 27, 2001 @05:52PM (#260373)
    From the webpage:

    integer Large := 16#FFFF_FFFF
    integer Twenty_Seven := 3#1000
    real MAX_FLT is 2#1.1111_1111_1111_1111_1111_1111#E127

    If I can choose a base arbitrarily, why the assumption that I want to choose my base in base 10? Why can't I choose my base in base 16, as such...

    integer thirtytwo = 16#20#10

    But then I still need to choose the base I want to choose my base in in base ten... why not

    integer thirtytwo = 2#10000#20#10

    But then... agh! We're stuck in a loop.

    To be slightly less snide for a moment, what I'm trying to point out is that, while this is a good idea, it is slightly silly and slightly unnecessary. There are certain bases that are used... there are others that, in general, are not. Don't support every base through a clumsy yet silly syntax. I'd rather be able to do the following, and only the following:

    integer ten = 10
    integer ten = 10d
    integer ten = 12o
    integer ten = 0Ah
    integer ten = 1010b

    where the default encoding is decimal, but I can use one of d, o, h, or b to switch to another common base.

  • Your wish is my command [].
  • by p3d0 ( 42270 ) on Saturday April 28, 2001 @06:57AM (#260376)
    It looks like you have a lot of influence from Ada. Ada adds a new language feature for everything a programmer could want. I always found that this made it tough to learn and understand Ada because there are so many built-in language features. (The difference between Ada and C++, of course, is that in Ada it works.)

    I'm glad to see the influence of Eiffel in there. Eiffel is a language which achieves its generality not by adding features to the language, but by removing them. It doesn't try to cover everything you could want to do with a special case; it has a few well-chosen abstractions that cover a lot of ground.

    Anyway, when your language description looks like a big bag of toys a programmer could use, then perhaps you should take a step back here and ask yourself "what am I contributing?" What is new about your language? If it's just syntactic sugar, then don't bother.

    (I'm not saying LX is just syntactic sugar; I'm saying that your website hasn't convinced me that it isn't. The presentation of language features as a shopping list doesn't convey the real contribution of your new language.)

    You seem to have a grasp of languages that are general because of their complexity (Ada, C++), but make sure you have experience with languages that are general because of their simplicity (Eiffel, Scheme, Smalltalk).

    A few other things:

    • I never really liked Eiffel exceptions much. I think Java does it better, but YMMV.
    • Don't make polymorphism explicit (with the "any" keyword). It should be the default. Non-polymorphic references should be the special case.
    • Have you really studied genericity? You seem to have bounded (ala Eiffel) and unbounded (ala C++) genericity, which is good, but have you looked at different schemes, like virtual types, or even Structural Virtual Types?

  • C is still the high-level language that produces the fastest code.*

    Which is an interesting definition of high level language. It's also contentious; good LISP compilers beat good C compilers on a wide range of problems, and no matter how difficult you think it, LISP is a high level language.

  • Just for the record, C was created by Dennis Ritchie and Ken Thompson. Primarily Ritchie, I believe

    Which is to say, they took Martin Richards' BCPL [], stripped out the virtual machine, added a pre-processor, and called it a new language. Stripping out the virtual machine was an advance that set computing back about thirty years.

  • Yes, I was joking. I am a computational physicist; I use C and Fortran extensively. The gimpy-leg thing was to give a reason for wanting to shoot it. If a horse breaks its leg, you used to have to shoot it to put it out of its misery. In this case, it would be MY misery I'm putting C out of.

    I admit it's still the quickest; that's why I said you WANT to shoot it but can't, because the old sonofabitch can still gallop when it wants to.
  • by MustardMan ( 52102 ) on Friday April 27, 2001 @07:50PM (#260385)
    C? The retarded horse with a gimpy leg. You wanna shoot the damn thing so bad, but I'll be damned if the sonofabitch can't still get up and run at a decent gallop now and then, when you take a good cattle prod to its ass.
  • I sort-of half-agree with you. People will always produce equal or better assembly language code than compilers, because people can *use* compilers, look at the results, and optimize in whatever ways they want.

    To put it another way, compilers will never be better than people (in best-case scenario, where the people know what they're doing), because people can use the compiler and improve on the compiler's output.

  • You can find it at

    From their main page:

    TOM is an object-oriented programming language that advocates unplanned reuse of code. To this effect, TOM enables unplanned reuse through the following features:

    A class is defined by its main definition and any extensions.
    An extension can add methods, variables, and superclasses to a class.
    The source of the original class is not relevant while it is extended: it is not needed and does not need recompilation; nor is recompilation required for any client code or subclasses. Extensions can even be loaded at run time.

    Unplanned reuse removes the privilege of class modification from the class designer and hands it over as a right to the user. Every user has other uses for a class: the class does not need to suit them all if they can make it suit themselves.

    What does this offer to the writer of classes? Using a class is no longer a binary choice: the user can be almost satisfied by it and adjust it to his needs. And you can severely update the classes in your library, e.g., add instance variables or replace methods, without requiring recompilation of any program using it, or requiring a non-backward-compatible version change of your shared library.

  • TOM is GPL'd.

  • Can you (or someone) post a list of interesting and/or good C interpreters?

  • Perl is better in its niche than any other alternative, had free implementations from the start, and isn't a growth on the pascal-like language tree.

    When programs were small, and machines were slow, Eiffel was great for fast machines and big programs. Now that programs are big, and machines are fast, it has a niche. But the best implementation is proprietary, and it's always going to be a growth on the pascal-like language tree.

    Your organic versus designed argument holds no water. What about the popularity of Java? What about the non-popularity of Forth? It's not how it came about, it's how it does the job.
  • No source code to a language design? What about BNF, or doesn't that count? That definitely seems like language design to me. What about the actual yacc/lex code used to write the official compiler? There is no reason that you couldn't GPL your BNF grammar and your yacc/lex code.

    Justin Dubs
  • The programming landscape is littered with good languages that died because of an 'unfamiliar' syntax.

    Smalltalk, Lisp.

    True, neither of these languages is ideal for all tasks, but they're both pretty impressive.

    LISP runs as fast as C++, and has extensive support for programmed transformation of code. (Thus you can, as an app programmer, create new syntax and even new semantics. The OO system CLOS is an application, not built into the compiler.)

    Smalltalk is interpreted, but is still very usable on a modern machine. Primitives can be compiled to C under Squeak. It also has beautiful introspection and semantics. It is the result of over 10 years of development in the 70's, and is still a sweet and clean language.

    Both languages are still amazing and just as powerful as C... And interactive programming to boot... But why aren't they used? Is it because programmers are so afraid of new things that they won't spend even an hour learning a different (and potentially better) syntax?

    Both languages have a syntax that's far, far simpler than C++ or even C. It's also much more elegant. Yet that superficial difference seems to have sentenced them to death.

    Syntax, unless egregiously broken[1], is superficial. Semantics is everything.

    [1] Overcomplicated (APL: >100 operators; C++) or verbose (Java/Pascal).
  • Nope... It's got an unfamiliar syntax, as someone pointed out above.

    Programmers do not like a syntax that is not close to the syntax of their first language (usually an algolic variant).

    It's no harder to get used to than C, or any other real language. Try using it with an indenting, syntax-highlighting editor for a week. Now, did you spend more than a week when you learned the syntax of your first algolic language?

    Anyone notice how all the languages that seem to be coming out are always algolic? And that they suck... They don't even have closures!

    Syntax is superficial. Semantics is everything.
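
Since closures keep coming up with no example in sight, here is a minimal sketch of one in Python (a language that did adopt them), for readers who have only seen algolic languages:

```python
def make_counter():
    # 'count' outlives make_counter's activation: the inner
    # function captures it, which is what makes this a closure.
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

c = make_counter()
print(c(), c(), c())
```

Each call to make_counter yields an independent counter; no class or global state is involved.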

  • by Fjord ( 99230 ) on Friday April 27, 2001 @08:43PM (#260407) Homepage Journal
    I'm annoyed by Java when you can't create references to methods

    Object o = new Object();
    java.lang.reflect.Method hashcode = o.getClass().getDeclaredMethod("hashCode", new Class[0]);
    Integer hc = (Integer) hashcode.invoke(o, new Object[0]);
    System.out.println(hc);

    Now you try.

  • Hear, hear!

    For a language to be useful for many tasks, it really needs good regular expression support.

    Not just for the RE stuff mind you, but I'd look to PERL as an example of a number of good programming language design practices in action. I especially like perl's ease of extensibilty. There are so many modules out there it boggles the mind :)
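
To make the point concrete, here is a sketch of that kind of pattern matching using Python's re module (the log line is invented):

```python
import re

line = "2001-04-27 19:50:12 ERROR disk full"
# One pattern pulls apart date, time, level and message at once.
m = re.match(r"(\d{4}-\d{2}-\d{2}) (\d{2}:\d{2}:\d{2}) (\w+) (.+)", line)
if m:
    date, time, level, message = m.groups()
    print(level, message)
```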

  • by BlackStar ( 106064 ) on Friday April 27, 2001 @08:57PM (#260411) Homepage
    There goes a few moderator points down the toilet. I was hoping SOMEONE who might have wandered through MIT would have put up a contrarian point of view to the whole "best thing" idea in languages. If one did, I can't find it.

    WHY does everyone have a favourite language, and assume it's the cat's PJs for every problem under the sun?

    I use, primarily, Java. Why? Because for what I do, I need the portability, and it's more than fast enough, and yes, the recent versions are portable properly. They're also likely a lot faster than you think they are.

    But my real point is that it's not the only language, and the reference to MIT is found early in the book Structure and Interpretation of Computer Programs by Abelson, Sussman and Sussman. I myself did not go to MIT, so if I mess up, don't blame them, but the point made in the book is this:

    "... First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. ... Second, we believe that the essential material to be addressed by a subject at this level is not the syntax of particular programming-language constructs ... but rather the techniques used to control the intellectual complexity of large software systems."
    Different languages evolve to solve different problems. People don't build things like Simula for writing a network driver or the next 1st person shooter. They build it to solve complex, physical simulations. The expressive power is greatly increased, but the problem domain is restricted.

    A general language is a nice idea, but we're starting to need something beyond that. The whole idea of the project is actually meta-programming. Not writing your next device driver. Write it in C. Maybe C++ if you must. But the OOP languages have given us a large number of very reusable, efficient components, regardless of what detractors of the OOP approach itself may claim. Knitting those components together right now is tedious in C++, Java, VB or even many of the visual designers. We're still bolting rods to wheels, instead of expressing the transfer of linear force to rotation at a higher level.

    I would humbly suggest that all the crock about the syntax and such be backed off. Especially type constructs, object aspects, and the other things addressed quite well in many different ways by other 3GLs.

    Start with interfaces. Describe the interfaces, and describe the use of an object that adheres to those interfaces. From there, find ways of describing systems of those interactions.

    Contrived example (required): database connector, table model, grid component, graph component, statistical analysis tool. Each has certain interfaces that can connect to each other singly or in groups, and can control things singly or in groups. The old MVC writ large. Find a way to describe and program the MVC system at the MVC level.

    I wish I had any idea how to do this, but I don't. I write in C, C++, Java, Perl, and have dabbled in everything from assembly up to Scheme. It's all so similar in far too many ways. It's not ideas and systems, it's still bits and branches.

    Many posters argue the efficiency. Your points are valid, but large, complex systems spend tens or hundreds of millions of dollars in programmer time for software running on "mere" millions of dollars of hardware. Doubling the amount of hardware and halving the amount of programming is a WIN for all involved.

    Odds are that the approach of the project is already correct. It uses a 3GL as the "worker" bits (Java, in this case, fight about it somewhere else), and tries to put a true meta-layer on top of that.

    The concept is as foreign to working programmers today as it could be. It's like knowing Classical Greek physics only, and getting hit by Relativity and being asked to design the analysis tools for it. You don't even know Newtonian theory yet. There is that large a jump in there.

    So skip the bits about the next cool language. It's a valid discussion, but unless I miss my mark, this project is grasping at something far, far beyond that. And being able to code to a pointer doesn't really matter at that level.
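
One hypothetical way the interface-first idea above might be sketched today is with structural typing in Python; every class and method name below is invented for illustration:

```python
from typing import Protocol

# The interfaces: what a component must look like, nothing more.
class Model(Protocol):
    def rows(self) -> list: ...

class View(Protocol):
    def render(self, model: "Model") -> str: ...

# Concrete components merely match the interface shape; they
# never subclass or import each other.
class ListModel:
    def __init__(self, data):
        self.data = data
    def rows(self):
        return self.data

class TextView:
    def render(self, model):
        return "\n".join(str(r) for r in model.rows())

# The "MVC-level" program: wiring components purely by interface.
def show(model: Model, view: View) -> str:
    return view.render(model)

print(show(ListModel([1, 2, 3]), TextView()))
```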

  • If you design your language by committee, you'll end up with something like Ada. (OK, I've never even seen Ada code, but I've never heard anyone say anything good about it.) If you just let everyone glob in features from their favourite languages, you'll end up with a monstrosity that nobody ever uses, like Perl. (Um, hang on ...)
  • I did actually implement it

    Well, I can't argue with that. I didn't realize there was a working compiler for this, I thought this was still in the design stage. Guess I should have looked around a bit more. I still think this could be confusing, but if the speed boost is there, maybe it's worth it.

    The fact that you hate them doesn't make them useless. Consider a device control register where flipping a bit in one register changes the meaning of another register. The alternative is ugly pointers.

    I did give reasons for hating them. To restate, I'd like to actually use the encapsulation for security (you can't mess with private stuff) instead of just type safety (it's easier not to mess with private stuff), like Java supports. This is impossible with unions, since members can overwrite each other's data. Not only does this feature not help me, it's impossible to do what I want because of this feature. It is really a matter of what you want the language to be used for.

    This is an all-or-nothing thing with the addresses pointer type. If you can do pointer arithmetic and not unions (or vice versa), there's no gain for me.

    The fragile base class problem is that you cannot extend the class.

    Agreed. But what is causing the problem is not the way C++ conceptualizes polymorphism, that you add virtual operations to the base class. The real problem is an implementation detail. Java has the same concept but not the fragile base class problem, simply because of its implementation.

    I'm not a low-level expert, but I believe it works something like this: given a base class with n virtual functions, an instance has a virttable[n]. Derived classes then have that same virttable[n+m], where m is the number of virtual functions they add. If the base class is then recompiled with virttable[n+1], code compiled against the new version expects the derived classes to have virttable[n+1+m], with the new entry at index n... this isn't true, and stuff breaks.

    I really don't like the idea of an extension to class Foo. The point of the polymorphism is that you have a bunch of operations which can be performed on any Foo, but are implemented differently in subclasses. Conceptually, each subclass does the same thing, even though it has a different implementation. I can't reconcile this with your idea of extending class Foo only for class Bar. You seem to basically be creating multiple versions of Foo. I hope I can't make a pointer to a Foo_e (extended version of foo) point to a Foo, because then I'd get a "pure virtual function called" or something strange when I try to call a non-pure function.

    Thanks for the comments. They help.

    I'm glad to hear it.

  • by slamb ( 119285 ) on Friday April 27, 2001 @09:14PM (#260418) Homepage

    Here's some things I do like:

    • Indentation-sensitive syntax. I've seen Python code and this seems to really improve readability. It would be especially great for a beginner's language, since it's important to stress proper indentation at the beginning.

    • Named and out-of-order parameters. Your example very clearly demonstrated the advantage of named parameters for complex function calls. This is a feature I don't think I've seen in any language but VHDL. (Though some Perl modules sort of cheat to get this functionality.)

    • Constrained generic types. I think this nicely solves the problem in C++ of getting confusing compile errors when you try to sort objects you've forgotten to define operator&lt; for. This is especially important if you plug in the types at runtime instead of compile time, which it sounds like you intend to do.
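
For what it's worth, named, out-of-order parameters did later become mainstream; a small sketch in Python, whose keyword arguments work as described above (the function itself is invented):

```python
def connect(host, port=80, timeout=30, retries=3):
    return (host, port, timeout, retries)

# Arguments can be named and supplied in any order;
# unnamed ones keep their defaults.
print(connect("example.com", retries=5, timeout=10))
```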

    Here's some things I don't like:

    • Expression reduction. This seems like it would be hard to implement and very confusing. Specifically, think about expressions like that within larger expressions. If I define "A*B", "A+B", "A/B", "A*B+C" and "(A+B)/C", what happens when I use "(A*B+C)/D"? What gets called? You could do "(A*B)" then "(A+B)/C", or "A*B+C" then "A/B", or just use the binary operators. There would be a bunch of different cases, and it's unclear what code would actually be executed. I think it creates too much confusion unless you can demonstrate that this would be a huge speed boost.

    • Variant records.

      The first example you showed of these was basically using them as a replacement for unions. I hate unions. I like the way you can, in Java, set a security manager that controls what various bits of code can and cannot do. But this depends on those bits of code only accessing other objects through their public interfaces. So unions in this case would be very bad for security, and they are already very bad for type safety. I really think we're past the point where we need to save a few bytes.

      Second, you used a variable to size an array. Basically, you created much simpler syntax for dynamic memory management. I think this masks a lot of problems. What happens when you run out of memory trying to resize the array? You never explicitly resize the array, so I don't even know where you'd go about inserting code to deal with that failure. Also, what happens when that variable changes in some non-obvious way, i.e., through a pointer to it or through the union thing you described above? The array isn't resized, and nothing good can come of that.

    • Multiple kinds of pointers.

      I can't help but see so many different types of pointers as needlessly complicated and confusing. What will you really be needing pointers for? You've defined "in", "out", and "inout" parameters, so pointers are no longer needed to pass by reference. They are needed for dynamically allocated memory. And they are needed for really low-level stuff that needs to address specific bits of memory.

      I suggest instead doing this: having the simple pointer type you've defined which does not allow pointer arithmetic or pointers to arbitrary addresses. Having the address type you defined available if the security manager allows it (again, the idea of not only type safety but security from not allowing access to arbitrary regions of memory). autoptr really isn't that different from ptr...especially since in C++ it can be implemented without any language support at all. Leave the others out.

    • Function-based dynamic dispatch (polymorphism). You talk about how C++ has the fragile base class problem, that new virtual functions can't easily be added to the base class. But I think you are misinterpreting where the problem lies. The problem is not that you have to add the virtual functions to the base class. That's just the way it has to be; otherwise, very weird stuff would happen when you try myBaseClass->onlyDefinedInInheritedClass() (remember, you don't know if a shape object is a rectangle or a triangle or whatever, that's the point of polymorphism). The real problem is the way C++ represents virtual function tables in compiled code. Java, for example, does not have this problem.

    Really, quite a few of these features I don't like. It seems like they just add complication to the language spec without solving any huge problem. This will both make it harder for you to create a compiler and harder for people to learn/use the language.

  • by pi_rules ( 123171 ) on Friday April 27, 2001 @05:47PM (#260421)
    I like C ... because I can manipulate the memory byte by byte in an uncontrolled manner.

    I like C++ because it makes me really say what I want to do if I try and do crazy shit to memory.

    I like Java because no matter what I do I can't do anything dangerous. Err wait, I hate that about it :).

    C++ has a nice way of protecting people... you need to explicitly say "Yeah, I'm totally sure I need to take this hunk of memory, cast it into a void* and do some arithmetic on it." It's too long though... the lines of source, that is.

    Allow a Java-like straitjacket... using something like #pragma preprocessor defs or something.

    #babysit_me_i_am_dumb ... this would act like Java :)
    #make_sure_i_am_sure ... this would act like C++
    #back_the_fuck_up ... like C -- you do what you want.

  • Not in Smalltalk. It's not some added-on reflection library, but a core part of the system. #perform: is a primitive, not just some shorthand for that mess in the Java example.
  • The ugliness of Java never ceases to amaze me.

    In Smalltalk:

    Transcript show: (Object new perform: #hash).
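
For comparison, the equivalent dynamic send in Python is similarly terse: the method is looked up by name with getattr and then called like any other function:

```python
o = object()
# Runtime lookup by name -- the moral equivalent of Smalltalk's
# perform: or Java's Method.invoke.
h = getattr(o, "__hash__")()
print(h)
```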

  • by Gorobei ( 127755 ) on Friday April 27, 2001 @07:05PM (#260424)
    Sigh, I really don't want to add to the language flames, but here goes anyway:

    Those that do not use LISP are condemned to reinvent it. Badly.

    Every good computer programmer I know has designed several "mini-languages." They all improve expressiveness in some minor way (because they scratch an itch or are optimized for a specific domain). But 99% of them never catch on, because they are not extensible in a "deep" fashion, or, if extensible, the meta-language of extensibility is painful for real-world problems.

    Languages become nasty because programmers try to write "nice" APIs for their users. E.g. the systems guys provide "clean" APIs for the library writers. The library guys provide "clean" APIs for the application writers. The application writers provide "clean" APIs for the users. Each layer uses a lot of crufty stuff in an attempt to make life easy for the users of the layer. Eventually, the entire system is cruft, and hard for everyone.

    C was clean, but began to collapse when programmers were forced into heavy macro hacks to implement more complex systems.

    C++ started nicely, but soon was burdened by the ancient linker. Templates have become the new evil that replaces the old evil of macros.

    Fortran avoided the whole issue by making abstraction beyond the subroutine level impossible.

    Common Lisp, ML, Prolog, Scheme, Smalltalk, etc., all try to be "honest" languages: the writer of a piece of code trusts his users, and the users can inspect the system they are using. Everyone is assumed to be intelligent, and "information hiding" is looked upon with a degree of suspicion. The more "features" a language has, the more it worries me: these are decisions made by the designer that I cannot change. This is why I like LISP: your program must conform to certain basic rules (it is a list), but all other design decisions are visible to me, and probably changeable by me.

    Of the list of 38 unique characteristics of LX, Common Lisp already has 33 of them. Indentation-sensitive syntax is similar to paren-balancing syntax. The other 4 are either artifacts of non-sexpr languages, or trivially implementable in a few lines of LISP.

    LISP was the original user-built, customizable language.

  • I am still waiting for TI or some other company that makes calculators to come out with a Haskell calculator. It would be nice to have a hand-held Functional Programming Language Calculator. I hate programming my current TI calculator in its nasty procedural scripting language or whatever you call the crap it comes with.
  • If you're going to have recursion, just make sure there is some way to trap stack faults and/or prevent hard drive thrashing... "ERROR--program foo caused a THRASH EXCEPTION in module bar. Please choose one of the following: [abort] [thrash for 30 more seconds] [keep thrashing until the process is killed or terminates on its own]"

  • I originally wanted to call Magenta Linda (as in Lovelace) because it sucked so bad

    That name's already taken. It's the name of a distributed programming language. It was also named after Linda Lovelace. Search for it or check out this link:

  • by MongooseCN ( 139203 ) on Friday April 27, 2001 @06:02PM (#260431) Homepage
    For example, in C++ you have objects which are supposed to be separate and thought of only through interfaces. Well, it just never really works like that. You have public members which other modules can read and write; you've got the "friend" keyword hack which lets other special objects access members of a certain object. All these things break the rules of OO programming and modular programming in general.

    If object A is dependent on a certain public member always being available from object B, and suddenly the variable is assigned different types of values or used in another way, object A will have to be changed to accept the changes in B. Well, this synchronization never usually happens unless there is a lot of documentation written (we all know coders love to write documentation!) or it finally produces a bug and you whip out the debugger and start tracing...

    This is just one of many examples. I would like to see a language where objects are forced to be separate and truly defined only by their interfaces. C++ almost had it until it introduced all the hack keywords which broke everything.

    A truly modular language would be great for an Open Source language because people could work on different objects without having to worry about the internal details being compatible with another coder's object. This would allow parallel coding to work more efficiently. Also, when people join Open Source projects, they don't have the time to go through all the code in the project to see how things work; they only have the time to look at a few sections, understand those, and start coding. With enforced modular code, a new coder will only have to look at interfaces to understand how a program works.

    Modularity is the key to making Open Source work.
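
Later languages did move in this direction; as a sketch, Python's abc module can refuse an object that fails to implement its declared interface (the Store interface here is invented):

```python
from abc import ABC, abstractmethod

class Store(ABC):
    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def put(self, key, value): ...

class BrokenStore(Store):
    # Implements get() but forgets put().
    def get(self, key):
        return None

try:
    BrokenStore()  # rejected at instantiation time
except TypeError as err:
    print("rejected:", err)
```
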
  • Objective Caml has all this.

    Besides: it has a wonderful object system, but is great for procedural programming too. It is an excellent functional language, but is great for imperative programming. It is strongly typed, but you can deactivate the typing features if you want to.

    Last but not least, the performance of the compiled code is excellent, better than C++ and close to C.

    To put it another way: it has what it takes to please irreconcilable communities such as the C++ people, the Lisp people and the Java people. And much more.

    Also, it is, as this story suggests, designed "the open source way". That is, it is open source of course, and its design is the result of constant discussions of excellent technical level on the caml mailing-lists.

    Heaven on earth, isn't it?
  • I disagree with your claim that C and C++ have identical performance, but this is a debate which has been going on for ages... In my specific field, that is, scientific computing (i.e. number-crunching), the gap between C/Fortran and C++ is both obvious and important. And OCaml is in between.

    Anyway, you know what people say... there are 3 sorts of lies: lies, damn lies, and benchmarks. So I don't think it's worth arguing the issue. The bottom line is that OCaml has the advantage, especially compared to Lisp and Java, of providing performance on the order of systems languages such as C and C++, with much better abstraction (higher-level semantics, garbage collection, command-line interpreter, etc.).
  • by mliggett ( 144093 ) on Friday April 27, 2001 @08:06PM (#260436) Homepage
    High-level languages don't always result in slow code. Probably the strongest counterexample is Objective Caml []. Functional (1st class functions; lexical closures), OO, exceptions, strictly typed, type inferencing, parameterized modules (and classes), a macro system that lets you extend and modify syntax (in camlp4) and more. The language is probably 10x as expressive as C (e.g. it takes, on average, 1/10 the space to say the same thing in OCaml as C), but it compiles to near-C speeds and sizes (sometimes beats it). This is a 15-year old project with a liberal license (LGPL) that works on UNIX, Windows and Mac OS! An older version (Caml Light) works on Palm OS. More people should be considering languages like this for complicated problems where performance is an issue!
  • Uhhh... Squeak is Smalltalk. Maybe a bit more added since the ST80 definition, but it's nothing *new*.

  • I actually have a bit more of a PostScript fetish myself, but I think that's just a personal quirk. I don't claim that functional languages are the be-all, end-all of language design, but there's a certain appealing rhythm to a language that seems to go "do this to this to this to this to this to..."

    But it's all in what you're trying to do. I wouldn't write an operating system in PostScript, but that doesn't mean I don't think it's a good language for what it does (and a few other purposes besides).

  • by connorbd ( 151811 ) on Saturday April 28, 2001 @09:58AM (#260440) Homepage

    About six years ago I went on alt.folklore.computers on Usenet to create a language spec by this very process, except I did it as a joke. You don't want to write a spec by Bazaar methods -- it's a sure guarantee of an unnecessarily baroque design that will be a bitch to implement from the ground up. If you're doing it seriously, you'll start unnecessary language wars as people pull out their MFTLs for design inspiration. You will get a reference manual two inches thick like Ada or C++. You will get bitched and whined at because your objects aren't as pure as Smalltalk, because your functions aren't as functional as ML, because you're more baroque than Perl.

    Too tough even to change, now that I think of it -- I went back to try and rework Magenta into something coherent and I couldn't cram enough of the design back into my head to make any sense of it a year later. Implementation by committee can be a thing of beauty; that's how the Internet was built. But design by committee... let's just say I originally wanted to call Magenta Linda (as in Lovelace) because it sucked so bad.

  • by chipuni ( 156625 ) on Friday April 27, 2001 @06:08PM (#260441) Homepage
    It's easy enough to design a language. But your real question should be...

    What is so insanely great about this language that would convince a programmer to use it?

    From my brief reading of the webpage, the language seems to be a mish-mash between Pascal, Perl, C, and Python. Those are all good languages... but I didn't see any reason why Pascal, Perl, C, or Python users should switch to your language.

    Remember that getting a new language established is hard. Right now, many programming tools already support the major languages. Unless you have a large corporation behind your language, it's hard to get enough mindshare to get all the tools that programmers want. Are you really willing to do the compiler, debugger, profiler, editor, and all the rest yourself? Across all platforms?

    Before you create a new language, I recommend that you do two things:

    1. Find out why Perl, a language that mostly accumulated its present form rather than being designed, has become so popular.
    2. Find out why Eiffel, an incredibly well-designed language that has multi-platform support, excellent tools, and a company behind it, has remained only a niche language.
  • Read a book which covers use of friend, and encapsulation at the component level, instead of accepting naively that the "OO way" is the one true solution. OO does not offer the best encapsulation, only the best object-level encapsulation. It's possible to do better. C++ ain't perfect, but assuming that friend harms encapsulation just because too many people misuse it is ignorance.
  • Lisp is nice from the naive point of view, but try implementing a large piece of real code in it. You find out very quickly that LISP is really nothing more than a syntax - there is nothing there to build on - you get to rebuild every library you might have ever wanted all over again.

    This is but one reason Lisp is just an academic curiosity these days.

  • Like this?:

    @P=split//,".URRUU\c8R";@d=split//,"\nrekcah xinU / lreP rehtona tsuJ";sub p{ @p{"r$p","u$p"}=(P,P);pipe"r$p","u$p";++$p;($q*=2) +=$f=!fork;map{$P=$P[$f^ord ($p{$_})&6];$p{$_}=/ ^$P/ix?$P:close$_}keys%p}p;p;p;p;p;map{$p{$_}=~/^[P.]/&& close$_}%p;wait until$?;map{/^r/&&<$_>}%p;$_=$d[$q];sleep rand(2)if/\S/;print

    Larry Wall won the obfuscated C contest twice, you know?

    C'mon, flame me!

  • by tshak ( 173364 ) on Friday April 27, 2001 @09:17PM (#260448) Homepage
    You are forgetting the core purpose of a computer - it's supposed to do the work for US, not US the work for it. This is a concept I think many of us "tech geeks" and engineers forget. Although I don't agree with the "just throw hardware at it" attitude, abstraction exists so that we can create more, "quicker and easier". You make some good points - especially applicable when it comes to small real-time OSes - but even Java is running great on cell phones.

    No offense at all, but unless you're coding an OS, you need to let go of your outdated concepts of "low-level code running super efficiently" and recognize that abstraction and OOP are here to help the HUMANS - the HUMANS are not created to help the machine! Just imagine Linux being ALL ASM! Unmanageable.
  • The retarded horse with a gimpy leg.

    You're joking, right? C is the old, mean, sunuvabitch granddad horse that might not be as flashy as these younguns running around, but can still kick their ass when it's time to get some work done.

    C is still the high-level language that produces the fastest code.*

    *Except for FORTRAN, which still kicks C's ass on numerical applications because of the "pointer problem", and yes C++ can produce code as fast as C, but it's much more difficult due to the complexity of the language. Of course, compilers still don't produce code as good as hand-coded assembly language and please don't quote me the "myth of the magic compiler" that is supposed to produce code better than humans because you can always code whatever tricks humans would do into the compiler, blah, blah. That's total crap. Compilers produce crappy assembly language. The problem is that no one cares anymore. I've never seen a proof, but I suspect the perfect optimizing compiler is a travelling salesman-class problem. Does anyone have any proofs of my suspicions? Oh well, enough of this digression. :)


  • When developers (Pike + friends) needed an efficient, processor-independent language for systems programming, they created C.

    Just for the record, C was created by Dennis Ritchie and Ken Thompson. Primarily Ritchie, I believe.


  • by AuMatar ( 183847 ) on Friday April 27, 2001 @06:55PM (#260460)
    Since no one seems to be hitting the question much, I'll be the first.

    Digit grouping- Makes programs far more readable.
    Base selection- Will make low-level programming easier by making the bases explicit. No more problems due to forgetting an h or b.
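
Both of those features did eventually land in mainstream languages; for instance, Python accepts underscore digit grouping and explicit base prefixes:

```python
million = 1_000_000   # digit grouping
mask = 0b1010_1100    # base made explicit: binary
addr = 0xDEAD_BEEF    # base made explicit: hexadecimal
print(million, mask, addr)
```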

    In/out specifying in param lists- will increase efficiency and help tell when functions will actually change a parameter.

    Tabbing as program structure- It's going to cause problems. People will think that lines will be executed conditionally when they won't be, due to forgetting a tab (and vice versa). The reason for a grouping char was to force them to think about what is/isn't in a statement
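    A minimal Python sketch of that failure mode (names invented for illustration):

    ```python
    items = [1, 2, -3]

    total = 0
    for x in items:
        if x < 0:
            print("negative:", x)
    total += x   # meant to be inside the loop; one lost indent silently
                 # moves it outside, so it runs once with the last value
                 # of x (-3) instead of accumulating 1 + 2 + (-3) = 0
    ```

    The program is still syntactically valid, so no compiler diagnostic flags the mistake.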

    Underscore ignoring- Too confusing. People will try to read each others code and be unable to figure out what open_file is. Either make _ illegal or let it be a full character.

    Too many keywords- will make language hard to learn. Plus too much typing.

    Preconditions- If one compiler uses it for optimization and another throws an error at a broken one, your code will do two VERY different things on the compilers. Choose one, then it may be cool.

    Named/out of order arguments- will cause confusion and bugs
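    A short Python sketch of both the benefit and the confusion risk of named, out-of-order arguments (the function is invented):

    ```python
    def move(x=0, y=0, speed=1.0):
        return (x * speed, y * speed)

    # Named arguments document intent at the call site...
    a = move(y=2, x=3)   # (3.0, 2.0) - order-independent
    # ...but readers who assume positional order get surprised:
    b = move(2, 3)       # (2.0, 3.0) - same numbers, different meaning
    ```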

    Lack of types- will cause inefficiencies of too much/little space being allocated. Also, having compiler-defined types like integer will make programs too compiler-dependent.

    Customizable for/cases/etc- It will cause confusion when programmers start taking standard language ideas and twist them. Trust me on this.

    No opinion
    Case insensitive- doesn't make a huge difference, but I do see it causing confusion as the _ ignoring will.

    I'm curious- what is the use of this language? I see a basic language, a lot of syntactic sugar with no real use, and a lot of features that will make compiler writing EXTREMELY difficult. Not to mention that most of the language features will be rarely used- You really are trying to add too much to one language. This language does everything, but won't be able to do all of it well. Jack of all trades, master of none.

    And you have one major question left to answer- So why should I use it? What feature(s) does it have that the existing languages don't? I don't see any- in fact you define its features in terms of other languages that have them. You need something that is yours and yours alone if you want this to succeed.
  • by groomed ( 202061 ) on Saturday April 28, 2001 @07:22AM (#260464)
    I've said this before ... I'll say it again:

    Indentation should help the programmer to understand the code. It should not be an additional source of worry.

    As such, indentation should represent the program structure. It should not embody it.

    First, because this approach makes it possible to compute the indentation from the program structure, and this helps to flag many errors without the need for compilation.

    But perhaps more importantly, because the indentation is computable, it is discardable.

    And this makes the language easy to transcribe: you can easily copy and paste snippets of C or Perl code to and from weblogs and email messages and get them to compile, usually without worrying about spaces or tabs or newlines or any such transformations that may have occurred in the process.

    Yes, properly indented code looks gorgeous, and this counts. But in any non-trivial program, semantical ugliness quickly dwarfs syntactical ugliness.

    Mandatory comments would be a better idea.

  • Bravo. This is the right set of questions to ask anybody embarking on a language design project. I couldn't resist making an earlier comment about the things I wish had been present in the last few broad-applicability languages I was using, but you're right, even though there is plenty wrong with today's languages the solution is (probably) not Yet Another Language Standard.

    Aside: Here's my favorite example of an ideal match between a language construct and a semantic behavior: typing a command line with a bunch of pipes and forks. "ls | grep | foo | more" was such a leap forward from manually naming and manipulating temporary files. It lets us control a complex suite of functions through a very simple interface model. Interface specifications are our great contribution to the advancement of knowledge. They let us take a complex system, and simplify its behavior and definition by organizing it in terms of independent components. And yet interface definition, management, publication, and revision is one of the things we've done worst through the years.

    JMHO - Trevor
  • by Spinality ( 214521 ) on Friday April 27, 2001 @08:10PM (#260474) Homepage
    Here are a few random comments, based on a history of having designed many languages through the years (none of which you've ever heard about). I hope this isn't considered too bloated a comment; sorry if it pisses you off, but I find the topic so interesting.

    Comments. Make it really easy to provide strong in-line documentation. Don't, for example, emulate whatever brain-dead folks at M$ are responsible for VB still not permitting comments after the "_" used for line continuations, making it impossible to have function parameters documented one-per-line. Consider multiple documentation-related conventions, e.g. "//" for end-of-line comments, perhaps support for standardized structured comment blocks in preambles, ways to comment-out and uncomment blocks of code, ways to group sets of procedures, ways to draw separator lines etc. within listings (printed listings are still useful even in this day and age), ways to generate indices/crossrefs etc. Think of it as an algorithm publication problem.

    PL/1 disease. You've proposed lots of good features. However, as others have pointed out, it will be easy to overload the language with a bazillion keywords and features to satisfy every participant's biases. At the end, there will be n separate incarnations of the language, consisting of each user's set of preferred constructs. Nobody will really grasp the whole thing. Instead, strive for the kind of expressive purity of C (better: LISP), where a small number of primitive syntactic components can yield a rich semantics. It's a hard problem, and frankly a collaborative effort rarely can yield the conceptual purity that a single author can impart. But don't give up.

    Extensibility. The most useful languages (IMO) can be extended to suit new situations. In many complex environments, it's better to adapt the language of the solution to match the language of the problem. This is why meaningful names, operator overloading, shared libraries, named constants, preprocessor definitions etc. are so helpful -- you can extend a language's working suite of concepts by documenting the specific interfaces used to interact with a particular execution environment or problem domain. So, for example, resist including I/O primitives and instead make it easy to create extensions from within the language. (Why consider them extensions rather than just traditional function calls? So that we can specify compile-time properties and behavior for their interfaces, such as argument marshalling, synchronization, error handling, exception conditions, etc., without bloating the runtime environment.)

    Encapsulation. We should always be able to replace a named entity that appears to be an atomic value (integer, string, etc.) with a named property having a complex programmatic implementation, with either compile-time or run-time behavior. Such a replacement should be transparent to clients of the associated interface or name. Preferably, this should be possible within a narrow lexical scope, not just between packages or interfaces.

    Integrated data definition/metadata. Try to bridge the gulf between the language's internal namespace and the external database environment. It should be anathema to reference entities using names stored or specified as character strings ("MyDatabase.OpenTable('Employees').Fields('SSN')" as we must do in so many languages when referencing external data). Instead, somehow let us bind the external data definitions into the compiler's namespace, so that referring to "Employees.FirstNaame" generates a compile-time error. (Obviously, we need to control the binding semantics so that in some cases this occurs at run-time, in some cases at link/load time, in some cases at compile-time, in some cases at preprocessing time.) Ideally, the same mechanism should support run-time interfaces to external resources such as CORBA components, including run-time retrieval of interface metadata.
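    A rough Python sketch of the binding idea, echoing the hypothetical Employee example above; a real implementation would bind the schema at compile time rather than run time:

    ```python
    from collections import namedtuple

    # Imagine this schema is read from the external database's metadata
    schema = ["FirstName", "LastName", "SSN"]
    Employee = namedtuple("Employee", schema)

    e = Employee("Ada", "Lovelace", "000-00-0000")
    ok = e.FirstName              # checked against the bound schema
    # e.FirstNaame                # AttributeError: the typo is caught at
                                  # the reference site, not deep inside a
                                  # string-based query at run time
    ```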

    Incremental compilation. Design the interactive programming environment, the language, and the compiler together, a la Smalltalk, rather than using the raw source code paradigm. Assume that we'll want the ability to dereference variables, determine variable scope, jump to definitions, view cross-references, etc., all while editing the source. This isn't really a language design issue, other than putting an emphasis on components that support separate compilation. But if you build your first compiler this way, e.g. via a p-code implementation, all your successor environments will preserve the interactive development model rather than the batch compilation model.

    Explicit declaration and lexical scope. Help us find those dangling/incorrect references.

    Constants and other compile-time tools. Give us strong support for creating highly-readable code that nevertheless can compile efficiently into in-line code and integer arithmetic. Combining the ability to drill-down to the physical machine with high-level encapsulation and incremental compilation would give us a wonderful span of control. (This is what we love about Smalltalk, being able to hack the running device drivers from inside the source code editor.)

    Asynchrony, continuation-passing, message-passing, interrupts, critical sections, real-time constraints, shared memory, etc. Give serious thought to how deep issues of OS programming would be handled. If possible, address such issues through highly-visible fundamental mechanisms rather than hacks. For example, if you want to provide a way for two execution threads to share write access to a variable, implement some kind of encapsulation that provides well-defined interfaces and behaviors, rather than permitting indeterminate and possibly system-crashing results by making it look like a normal variable.

    Well, that's enough blather for the moment. I hope these comments are useful. A few inspirations I'd love for you to consider in language design: C and BLISS, for their conceptual simplicity; Cedar, for its richness and language/environment integration; Smalltalk, for its extensibility and structural encouragement of small code fragments rather than monolithic procedures; CLU (and Simula and Smalltalk and SQL-embedded languages) for integration of external and internal data.

    Good luck. -- Trevor

  • You should check out Jiazzi, a project I'm working on, which provides a component/module construct for Java, enforcing access for classes inside as opposed to outside of components, and has an explicit notion of import and multiple instantiation of the same component in different contexts (think separately-compiled templates).
  • I've noticed that these days the two biggest trends in high-level language design are the ability to 'round-trip' and a "separation of concerns".

    What I mean by 'round-trip' is that it should be possible to parse the language, make some complex transformation, and spit code back out without losing a lot of information. This in general is impossible in C and C++ because macros operate at a higher level than the core language, and because a single line of source code in a header file can mean different things depending on the context in which it's included.

    The second big advancement is an extension of the OO model called Aspect Oriented programming. There have been a number of studies in the area, and many of them show impressive gains in performance, or reduction in code size. The goal of AO programming is separating loosely connected tasks that normally have to be interleaved by the programmer, and automating the process, which is called weaving. This is normally done by allowing "after-the-fact" additions to code using glob style operators. For example, it's possible to add interfaces to a class without modifying its source code, or by 'wrapping' functions or groups of functions in before-after style blocks. Take a look at AspectJ, which is a backwards compatible extension to Java.
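    The before/after wrapping idea can be approximated with a Python decorator; the `around` helper below is invented for illustration (a real AspectJ-style weaver works at compile time, across many join points at once):

    ```python
    import functools

    def around(before=None, after=None):
        """Wrap a function with before/after advice - the 'weaving' step."""
        def weave(fn):
            @functools.wraps(fn)
            def woven(*args, **kwargs):
                if before:
                    before(fn.__name__, args)
                result = fn(*args, **kwargs)
                if after:
                    after(fn.__name__, result)
                return result
            return woven
        return weave

    log = []

    @around(before=lambda name, args: log.append(f"enter {name}{args}"),
            after=lambda name, result: log.append(f"exit {name} -> {result}"))
    def add(a, b):
        return a + b

    add(2, 3)
    # log now holds the interleaved logging concern, without add()
    # knowing anything about it
    ```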

  • by blamario ( 227479 ) on Friday April 27, 2001 @06:53PM (#260480)
    When I first saw the story, I thought of another (and IMO more innovative) programming language/system called Mozart. Would it really hurt to at least check Google for "Mozart programming language" before you name your new project? Not that the original is very original either... If you decide to change the project name, may I suggest not to use Beethoven?
  • You wouldn't believe how many times I've tracked bugs down to semicolon-terminated if statements. IMHO, bugs resulting from different tab stops are easier to find than inherent logic problems like this. Besides, most editors have the option to visually differentiate tabs from spaces. Also, the idea is to let other people see the structure of your code faster. That means no more debates with regards to indentation styles. A programmer friend of mine used to have a very different way of putting braces around code blocks, and we eventually found it so difficult to read after a while that we had to make a custom pretty-printer to visualize her code. She has since changed her evil ways. <grin> This is a huge step towards standardizing open source code. Since everyone will be forced to use the same indentation, all code would theoretically look the same from this point on. Other steps toward this standardization include Hungarian notation (probably one of the few features I actually like about Microsoft code) and enforcing modularization (like the Java rule of one file per public class). I would like to see more of this in future programming languages.
  • by Kletus Cassidy ( 230405 ) on Friday April 27, 2001 @08:22PM (#260482)
    I'll split my post up into my ideas on the features you have now and my suggestions for features I'd like in a programming language. Good luck. :)

    Current Features

    The current features I didn't mention are the ones I thought were well thought out or didn't really have any issues with.
    • style insensitive names: This sounds like it will cause more confusion and more problems than it will solve (that said, is there actually a problem that it solves, or was this just a cool feature you thought of hacking in?).

    • using keyword: Be careful about Koenig lookup if your language isn't going to dynamically load classes like Java does. Some people think the "using" or "import" style keywords should behave like #include but they usually are more subtle than that.
    Features To Consider

    • Threading library: Multithreaded programming is more efficient than using multiple processes and has grown increasingly popular. The fact that languages like C++ have no standard threading library, unlike Java, is a serious weakness.

    • Virtual functions: Be consistent with how virtual functions are used. One of the many failings of C++ is that the behavior of virtual functions is completely unintuitive; virtual functions can't be called in a constructor or destructor, lookup for overloaded functions stops at the current class instead of going all the way up the inheritance hierarchy, etc. Keep inheritance simple; C++'s private vs. public vs. protected inheritance is a mess.

    • Platform independent numeric types: Like byte, int32, int8, int64.

    • Code based documentation: Something similar to javadoc or perlpod. It is great to be able to get an overview of a whole project simply by reading documentation generated by the code.

    • Resumable exceptions: The idea that blocks of code in exceptions can be retried is nice, but even cooler would be to take a leaf from the Smalltalk book and mark exceptions as resumable or not.
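    Python has no native resumption, but the retry half of the idea can be approximated with an explicit loop; a sketch with invented names:

    ```python
    class Resumable(Exception):
        """An exception the handler is allowed to repair and retry."""

    def with_retry(block, fix, attempts=3):
        for _ in range(attempts):
            try:
                return block()
            except Resumable:
                fix()            # repair the condition, then retry the block
        raise RuntimeError("gave up")

    state = {"ready": False}

    def block():
        if not state["ready"]:
            raise Resumable()
        return "done"

    def fix():
        state["ready"] = True

    result = with_retry(block, fix)
    ```

    True Smalltalk-style resumption continues from the point of the raise rather than re-running the whole block, which is the harder feature to design into a language.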
  • Java is ugly, sometimes, if you're doing things the language wasn't meant to do. Reflection is primarily a means to access compiled classes at runtime, and while it can be used to invoke methods dynamically, it's certainly not meant to. If you want it to look pretty, make a library function. That's the only reason it looks pretty in other languages.

  • by squiggleslash ( 241428 ) on Saturday April 28, 2001 @07:13AM (#260484) Homepage Journal

    My first thought on "open source programming language development" was along the lines of: "You mean... like this?"

    Suppose someone, call him "Dennis", produces their own computer programming language. For the sake of argument, let's call our example language 'C', because it programs Computers.

    Dennis releases the language C to the public domain, by releasing sources (specs).

    Then suppose someone, call him Richard, comes along and takes C and adds a few bits and pieces he wants in it plus some ideas some other people have had. Something he can only do because the language is, er, "open source".

    Then, say, Bjarne and Steve come along and decide to add some features to C and Richard's C (who has named his after his favourite animal) to make it support object orientation. Bjarne calls his version "C++", and Steve calls his "Objective C".

    Then, someone else, ooh, I dunno, Bill J, and maybe another person called Bill G, decide to take C++, and clean up the language and implement a virtual computer architecture in it. Bill J names his after his favourite brand of coffee, and Bill G names his after a musical note.

    And all of them are able to do this because:

    Dennis made his language "open source" by releasing the specs, so Richard and Bjarne were able to add features.

    Richard made his extensions to the language "open source" so Steve was able to add features.

    Bjarne made his extensions "open source", by documenting them again, so Bill G and Bill J were able to add features.

    Would that be what "open source" development of a programming language would be like?

  • by pkesel ( 246048 ) <[pkesel] [at] []> on Friday April 27, 2001 @06:49PM (#260486) Journal
    I think asking "What features do you want?" is short sighted. You should be asking, "What do you want to do with this language?"

    If you're wanting to write server-side web apps you're not going to focus on a GUI framework. If you're wanting to write a MOM, you're going to think data structures and serialization, as well as sockets. When you know what you're doing with it then you can say, "What features?"

    Server side apps would be well-served by a VM with dynamic class loading. MOM would do well to have some well-constructed socket features and easy threading, plus some standard serialization support. GUI is going to need a good event managing system.

    I think writing a language because you're looking for the next greatest thing since Java is wasteful, disruptive, and serving no relevant purpose.
  • by localroger ( 258128 ) on Friday April 27, 2001 @06:48PM (#260487) Homepage
    Truth to the hardware. Really.

    I have been involved in something close to a war with the very bright, inspired, but slightly misguided developers of firmware for a device I use every day for about 6 years. The firmware was written in C++. The day I saw it (back in '95) I said "This is the best product in our industry in more than a decade. If only it was faster." It implements a smart, well thought-out language for newbies to use to program this embedded peripheral in the environments where it will be most used. (Think truck drivers providing input.) Everything rocks except the fact that the smart, well thought-out synthetic development language can only execute about 200 instructions a second.

    OK, this is on a 20MHz 80186, but I learned on a 4MHz 8080A. I know there's a 6-level interrupt system but this is still really bad.

    Fast-forward. I have sold a *lot* of these beasties. But the competition have moved forward, and now while they still aren't as smart or well thought-out, sheer processor improvements have made one or two of them very fast. Lately I had an application that had to be very fast. I made a comment about going to the competition. This melted enough ice to cause the sea level to change. After some bantering, I suggested that I might benefit if I could download x86 machine language code as part of my pidgin programmin' language file. To my everlasting amazement the factory guys agreed and worked out a primitive API with me.

    11,000 lines of code later I have an application running on this cool little box even its makers never realized was possible. Why? It's amazing what you can do when every low-level counter isn't coded in double-precision floating point and system stuff doesn't go through two layers of indirection (C++, dontcha know).

    Abstraction can be very useful but it can also be a very short dead end. I told this company in 1995 that it was cheaper to write assembly language software than it was to replace an 80186 with an 80386; today they have done the 80386 thing and still don't have the speed I've achieved in their older instrument with better code. Sure, lots of code can be done at high abstraction with little or no loss -- but when you do all the code that way you lose a sense of what the machine is actually doing. I really don't like languages that obfuscate what is really going on to create some "virtual environment" where efficiencies are not apparent. It is very easy to get yourself into situations that look reasonable but where exponential resource requirements develop.

    The best computer languages IMNSHO are line based, not character based. They have line counts somewhat related to object code byte counts. They have a human parsing-time somewhat relatable to processor parsing-time. Sometimes this makes them harder to use than groovier languages that exploit human perception windows, but when you do learn to use them you will be able to write code that works and doesn't bog down a server farm.

    C was good at the fast thing but poor at the lines vs. characters thing. (Really, I think Dante's Devil has an extra fork sharpened up for whoever came up with the whole dumbass stream-of-characters idea.) C++ isn't good at anything except confusing people. The many other languages invented since 1985 or so are all trying very hard to be both C/C++ and the things C/C++ isn't.

    Someone mentioned what a tragedy it is that the objects in C++ have been hacked. Well, what else did you expect? OO itself is a totally cocked idea, and welding OO onto something once thought of as "portable assembly language" (ralph, ralph) is kind of like mating a monkey with an iguana. I'll admit it's a long time since I was in college but I was flabbergasted to find that an engineer who graduated in 1995 was unaware that 0.1 is an infinitely repeating "decimal" in binary floating point notation, so that 1/10*10=0.9999999 unless you round it off; and furthermore that the single-precision libraries account for this but that the double-precision libraries don't, because the doubles figure you can work out for yourself when you want to round off. It's all there in Knuth but who reads him any more when there's all this cool Java* and derivative crap running around?
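    The 0.1 claim is easy to check; a short Python demonstration (the exact digits are a property of IEEE doubles, and a single multiplication may round back to 1.0, but repeated addition accumulates the representation error):

    ```python
    tenth = 0.1                 # stored as the nearest binary double,
                                # not exactly 1/10
    total = sum(tenth for _ in range(10))

    exactly_one = (total == 1.0)             # False: total is
                                             # 0.9999999999999999
    close_enough = abs(total - 1.0) < 1e-9   # True once you round it off
    ```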

    Oh well my rant for now. Karma down 3 from what, eh? It's late and I'm still dealing with this fscking stomach flu. Bah. If it can't be done in 1802 assembly is it really worth doing, I say.

  • Bollocks.

    You don't need "friend" for this -- "friend" is merely a bit of syntactic sugar that people who don't know how to use the Visitor Pattern or double dispatch resort to in order to get bidirectional relationships.

    In your particular example, you'd want to use a separate object to represent the borrower-book relationship (likely a State pattern object), have a method in the Book class which registers itself with a Borrower argument (or a BookVisitor argument, more generally), establishing the relationship thusly.

    Private members are private for a reason.
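    A minimal sketch of the double-dispatch arrangement described above, reusing the hypothetical Book/Borrower names from the example; the class shapes are invented for illustration:

    ```python
    class Loan:
        """The borrower-book relationship as its own object."""
        def __init__(self, book, borrower):
            self.book = book
            self.borrower = borrower

    class Book:
        def __init__(self, title):
            self.title = title
            self._loans = []

        def accept(self, visitor):
            # First dispatch: on the visitor, keeping Book's internals private.
            return visitor.visit_book(self)

        def register_loan(self, loan):
            self._loans.append(loan)

    class Borrower:
        def __init__(self, name):
            self.name = name

        def visit_book(self, book):
            # Second dispatch: the relationship is built through public
            # methods on both sides, so no "friend" access is needed.
            loan = Loan(book, self)
            book.register_loan(loan)
            return loan

    loan = Book("SICP").accept(Borrower("Alice"))
    ```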

  • Just get a decent editor that can hack indentation syntax. We had that with OCCAM twenty years ago.

    There are plenty of indentation mode packages around for emacs already.

  • First off most language design has always been open source. Microsoft Visual Basic is a rare example of a closed development process over an extended time. Java being the only other that comes to mind (there is a 'standards body' but Sun retains a veto).

    I don't see any attempt at a formal definition of the language. Before designing a programming language you should know about operational semantics, denotational semantics and all that good stuff.

    There are severe problems with the syntax. The reason the whitespace-flexible syntax of FORTRAN and COBOL was abandoned was that it introduced bugs. A spacecraft was famously lost off course because a FORTRAN loop header along the lines of DO 10 I = 1,10 was mistyped with a period and silently interpreted as the assignment DO10I = 1.10.

    Program modularity hacks tend to be just that. Multiple inheritance was wildly popular in academia and an utter failure in commercial programming. MIT undergrads could hack it on a student project but when applied on an industrial scale the result was utter confusion. The more someone likes multiple inheritance the more difficult it is to decipher their code.

    I don't think we need a new declarative programming language. What would be nice is a good revision of C. C++ was a disaster, give me C any day. Objective C was promising but lost. Java is unfortunately tied to Sun's strategic objective of dislodging Intel and Microsoft (why not have a go at Cisco while they are at it).

    If Microsoft was to get the idea of doing a gcc front end for C# they might just get traction. It has all the programming advantages of Java without the fourfold loss of performance caused by compiling to a virtual machine based on an obsolete SPARC chip. [yes I know the best java coders can write code that is faster than the worst C coders, the best C coders are a very different story].

    What we are really lacking in the Internet age is a language with good handling of communications and parallelism - particularly loosely coupled multiprocessors. Now that Tony Hoare is at Microsoft I guess we can hope that some CSP/OCCAMish features might make it into C#

  • The design of Fortran, C, C++, Pascal, Eiffel and LISP were done behind closed doors

    They were all developed the same way Linux was, someone put out a basic version, the community commented back and the lead developer may or may not accept the patch.

    What you are talking about appears to be not 'open source' but 'design by committee'. That only works in one circumstance in my experience, when a single individual does most of the design work.

    You are right. The good news is: the formal definition exists; it is about 90 pages long

    Not a good sign, get it down to 5 pages and the language might have a chance of being implementable.

    The bad news is: it's boring, it's incomplete, and it's based on a pre indentation-sensitive version

    The indentation syntax has nothing to do with the semantics.

  • When developers (Pike + friends) needed an efficient, processor-independent language for systems programming, they created C

    Actually they took an existing language called BCPL and wrote a subset called B, then they added a few features back to create C. BCPL was in turn a subset of CPL, the all-singing, all-dancing successor to ALGOL 60, which was a little too good at being all singing and dancing.

    In the case of C they wrote the language around the compiler and vice versa.

  • Has anyone else gotten pissed that something like Visual Studio doesn't work as a studio? What would be really cool is to have the ability to write in multiple languages in one program. Obviously not everything is going to work and it would be tough, but just think of being able to write code that calls some function in math.cpp or showme.vbasic...
  • by frob2600 ( 309047 ) on Friday April 27, 2001 @07:23PM (#260497)
    LISP, what can I say? This is one of the coolest languages you will ever suddenly understand while writing code in it. I don't know everything in the language yet. But I have to agree that this language is very good.

    I would advise any person who wants to develop their own language to take a good look at LISP first. You may not need a new language, but even if you do -- you will have a bunch of great ideas to start with.

    "Do not meddle in the affairs of sysadmins,

  • by frob2600 ( 309047 ) on Friday April 27, 2001 @07:04PM (#260498)
    You need to have COME FROM, gotos are for wussies who need their hands held. If you are a real programmer you know where you should be looking for the next instruction. And this should be the only control construct. I prefer a language where I don't have to spend too much time thinking about which way I want to implement a loop. Having only one way to get something done is better. We don't want another mess like perl becoming popular.

    Data type: we don't need no stinking data types. ['nuff said]

    Comments are for simps, don't allow them. I prefer a language like BRAINF*** where anything that is not a valid command is ignored by the compiler -- but it would be even better if anything that was not a valid command crashed the compiler with obscure messages.

    And whatever you do, make sure all the commands cannot be understood by any person with less than 7 doctorates. We don't want Visual Basic programmers to think about using our new language of choice. Make sure the language is also impossible to read even if you understand it. [For an example look at INTERCAL].

    As a few final pointers on your way to language success. Make sure your documentation consists only of the following line:
    Just do what you are supposed to do and you will be fine.
    Then release the source to the compiler -- of course it must be written in the language and no others [to prove it is not a toy language. Toy languages cannot be used to write their own compilers.] And after that all you need to do is refuse to fix any bugs that might show up. It will be as popular as Java within the week.

    I would also recommend that you find a way to make large random prime numbers an integral part of the binary -- but I have not worked that part out yet.

    "Do not meddle in the affairs of sysadmins,

  • Often iteration is implemented using recursion.

    This depends on the language. Common Lisp, for example, has plenty of iteration constructs, including the (IMHO) fabulous loop macro (and Lisp programmers certainly don't shy away from using them). Others generally have fewer iteration constructs, but often they are still there. To what extent one uses iteration vs. recursion is, I'd wager, largely a function of culture and taste for various languages.

    Syntax is often minimalistic.

    Ever tried to get ML's syntax into your head? :-) I like Lisp myself (which has a fairly simple syntax, although there is some there - you still do need to remember how special forms and macros work individually) because the syntax is minimalistic. Some languages are hairier, though.

    Tend to be interpreted, so have the advantages of being interpreted (altering source at run-time etc)

    This is just false. Lisp, for example, is typically compiled, just differently from the way to which people may be accustomed (some modern Lisp systems, e.g. Corman Lisp, don't even have interpreters). Many of these languages, however, are interactive which is a different thing. In the aforementioned Corman Lisp, for example, everything that you type into the read-eval-print loop gets compiled right away on the spot, so you don't notice it the same way you would compiling C, for example. ML, OCaml, Haskell, Clean, etc. all have compilers too, and many implementations are capable of making native executables.

    Different philosophy regarding variables, sometimes completely untyped, sometimes no variables at all.

    Again, this depends on the individual language, and possibly what one means by variables.

  • by hding ( 309275 ) on Saturday April 28, 2001 @05:25AM (#260503)

    Recently I came across the following story, which tells of Paul Graham's use of Lisp in his company to create a web-based store program, which was to become Yahoo! Store.

    Beating the Averages []

    Needless to say, Graham attributes to Lisp itself a large part of the credit in being able to accomplish what he did.

  • by melquiades ( 314628 ) on Friday April 27, 2001 @06:28PM (#260506) Homepage
    I'm sure there are going to be a lot of posts saying things like, "What's the point of this? Why are you bothering? Who needs another programming language? Everything you are doing has already been done better, and it was all useless anyway." I got a lot of that crap when Eidola was Slashdotted recently.

    Keep an open mind. If you're not the sort of person who can enjoy new ideas that are cool but may turn out to be useless, just go read another thread.

    There's an interesting study [] about the much-abused field of visual programming languages. The researchers polled programmers who worked with visual and non-visual languages to see which they liked best, and which was most effective. Their main result? Programmers' opinions of a language correlated strongly with how long they'd been using it. In other words, programmers have an overwhelmingly strong bias for the familiar; they are so strongly biased towards what they are used to that they can't really make objective judgements about unfamiliar ideas. Not surprising, but easy to forget!

    This bias is a tremendous barrier to new technology. If everyone with an interesting but questionable idea gets shouted down, a lot of useful ideas are lost. Think of the brave souls who installed Linux before it was really usable while everyone was saying "OSes have been done before. Why are you bothering?"...and thank them now.

    Then give Mozart and LX a fair hearing. There are some good ideas there; let's help them mature.
  • by Bi()hazard ( 323405 ) on Friday April 27, 2001 @10:27PM (#260508) Homepage Journal
    only? *smack* You're saying you want to remove functionality so it can look pretty? Fool, if you make a language like that, anyone who wants to use an exotic base (which can be useful to efficiently and clearly express a variety of problems) will use another language. A tip for those of you who want to make new languages: we don't need niche languages. We need something that does everything, and usually does it well -- a standard language we can master, rather than trying to be proficient in C, C++, Java, Perl, etc.

    For example, in the number base problem you could use d,o,h, and b for defaults, but if you include a line like "define #c base 37" now you can use 1000c to represent 1000 base 37. Or, you could say "include numberbases.lib" and get a whole bunch of definitions and functions right away. Or, if you were insane, you could say "language assembly { ...assembly code implementing base 37...}"
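    A rough sketch of that idea in Python (all names here are invented for illustration): number literals carry a one-letter base suffix, and a table maps suffixes to bases, with 'c' standing in for the hypothetical "define #c base 37" declaration.

    ```python
    import string

    # Digit symbols 0-9, a-z cover bases up to 36; a real language would
    # have to define symbols beyond 'z' for larger bases. The base-37
    # literal "1000c" below only uses digits 0 and 1, so this suffices.
    DIGITS = string.digits + string.ascii_lowercase

    def to_int(digits, base):
        """Convert a digit string to an integer in an arbitrary base."""
        value = 0
        for ch in digits.lower():
            d = DIGITS.index(ch)
            if d >= base:
                raise ValueError(f"digit {ch!r} out of range for base {base}")
            value = value * base + d
        return value

    def parse_literal(literal, bases):
        """Split a literal like '1000c' into digits + base suffix, then convert."""
        digits, suffix = literal[:-1], literal[-1]
        return to_int(digits, bases[suffix])

    # The defaults, plus 'c' as added by a hypothetical "define #c base 37".
    bases = {'d': 10, 'o': 8, 'h': 16, 'b': 2, 'c': 37}
    ```

    With that table, parse_literal("1000c", bases) evaluates to 37**3, and the default suffixes keep working (parse_literal("ffh", bases) is 255).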

    Which brings me to another point: there's a lot of legacy code in other languages, so it would be very nice to be able to copy and paste it into a hybrid program. While that may encourage bad programming practice, we want people to use the language, not run away when they realize they'll have two years of rewriting the same old stuff before they can do anything interesting. It also gives you a quick and easy way to smack down anyone who claims your language isn't as efficient as some other one for whatever specific problem.
  • by oodl ( 398345 ) on Friday April 27, 2001 @09:04PM (#260517)
    Dylan has all the features you are interested in:

    Compiled and Interpreted YES
    A good standard library YES
    Truly Object Oriented (like Smalltalk) YES (In Dylan even methods are objects)
    Support flexibility YES (Dylan has a hygienic macro system)
  • by oodl ( 398345 ) on Friday April 27, 2001 @09:21PM (#260518)
    > C is still the high-level language that produces the fastest code.*

    CMU Common Lisp performs better for some number crunching. Dylan is fast also. Also I think some of the functional languages such as ML perform quite well.

    And all of these languages are much higher-level and more productive than C.
  • by undecidable ( 410548 ) on Saturday April 28, 2001 @05:17AM (#260521)

    there is no such thing as the "perfect optimizing compiler". To be verifiably optimal, as well as knowing everything there is to know about the machine's internal architecture, it would have to have complete knowledge about the dataset that the program to be compiled is to be run on

    And this is one of the reasons why the Java VM technologies being developed along with HotSpot are so interesting: they perform profiling on the fly and restructure code for better performance. Of course you pay for this profiling and restructuring during execution, but apparently Sun is betting that it's an overall performance win.

  • by Flying Headless Goku ( 411378 ) on Friday April 27, 2001 @06:11PM (#260522) Homepage
    Could the next one be designed "the Open Source Way"?


    There is no source code to a language design, so the concept of open source simply does not apply. Can we please try to preserve some distinct denotative meaning in our words, and not just throw them in wherever we want to exploit their connotations?

    Yet another top-level post accepted for being buzzword compliant.
  • by melatonin ( 443194 ) on Friday April 27, 2001 @10:30PM (#260524)
    If object A is dependent on a certain public member always being available from object B, and suddenly that variable is assigned different types of values or used in another way, object A will have to be changed to accept the changes in B.

    One of my biggest gripes of C++ is that when you call a method, you're actually executing a function. OK, in any language this is true, but a C++ method isn't any more flexible than a function.

    In Objective-C, which is a dynamic OO language (not strongly, statically typed like C++), objects respond to messages (implemented by methods). Methods have unique signatures. For example,

    - (void) setSize:(NSSize)size;

    This is a message that any object can respond to if it supports "setting a size." NSSize is a two-dimensional size (width, height). If you had an object that represented a file, it might respond to

    - (void) setFileSize:(unsigned long) inBytes;

    But it should not respond to:

    - (void) setSize:(unsigned long) size;

    The compiler will give you a warning if you do, saying that the receiving object may only implement setSize:(NSSize). But the code will execute fine, as long as you know that the types are fine. The compiler will also check this for you if you give it the types of the objects (which you pretty much always do).

    Objects are designed to talk to other objects. If you stick to a set of rules (the OpenStep/GNUstep frameworks define a great set of ground rules), you can have objects communicate with other objects easily, and not have your code bug-ridden.

    Better yet, in Objective-C, you can define optional behaviour.

    if ([monitoringObject respondsToSelector:@selector(sizeChanged:ofObject:)])
        [monitoringObject sizeChanged:foobar ofObject:sizeableObject];
    // else: monitoringObject doesn't need this info.

    (a note to the confused; when you send a message, the runtime "selects" a method to respond to it. That's why you see "selector" up there.)
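    For comparison, a rough analog of that respondsToSelector check in Python, another dynamically typed language: ask the object at run time whether it implements the optional callback before calling it. All names here (SizeMonitor, notify, size_changed) are invented for the sketch.

    ```python
    class SizeMonitor:
        """A monitor that happens to implement the optional callback."""
        def size_changed(self, new_size, obj):
            return f"{obj} is now {new_size}"

    def notify(monitor, new_size, obj):
        # Analog of: if ([monitor respondsToSelector:@selector(sizeChanged:ofObject:)])
        #                [monitor sizeChanged:newSize ofObject:obj];
        callback = getattr(monitor, "size_changed", None)
        if callable(callback):
            return callback(new_size, obj)
        return None  # the monitor doesn't need this info
    ```

    notify(SizeMonitor(), 42, "foo") runs the callback, while notify(object(), 42, "foo") quietly does nothing -- the caller never has to know the monitor's static type.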

    Not the greatest example of why you would want it, but the important distinction between Obj-C and C++/Java is that you are thinking about how objects communicate, as opposed to how object types work with other types.

    A truly modular language would be great for an Open Source language

    dude.... []. We'd love your help.

    ok, we're not building a language, but if modularity is what you want, help us complete this stuff :)

God made the integers; all else is the work of Man. -- Kronecker