Eidola - Programming Without Representation 258

Lightborn writes: "From the Web site: 'Eidola is a representation-independent, object-oriented, visual programming language. Eidola is an experiment which takes a wild new approach to the structure and representation of programming languages. Traditional programming languages are heavily tied to their representation as textual source code, which is unfortunate -- text is a very poor notation system for the concepts of a high-level language. An Eidola program, however, exists independent of any representation; its "fundamental" form is as a set of carefully defined mathematical abstractions.'" We're confused, yet intrigued.
  • by Anonymous Coward
    I suspect the problem which will be illustrated by Eidola is that most efforts at abstraction lead to oversimplification. I can give examples using European languages: German is capable of a precision of expression that is simply not possible in English, while in Italian (I am told; I don't speak it) there are types of poetry which have no analogue in English at all. You cannot discuss Italian poetry in German other than to catalogue it by type, pretty much like the rows of butterflies pinned to a board in the museum of natural history.
    My point is that metalanguages are constructed in such a way that they provide a gross oversimplification of the processes going on in real language; they are very hard to enlarge (for exactly the same reason it wasn't until 1917 that mathematicians accepted their first inductive proof: concern that the sky will fall if you tweak the rules even slightly); and they tend to be Boolean in orientation, etc.
    Having said that, there is a place for languages which may be limited in their richness of possibilities, but which may be adaptable to deterministic, predictable construction of code with determinable performance parameters, etc. These folks seem to be wandering around down by the creek, exploring like eight-year-olds, and may bring home something we'll all find interesting, or maybe they'll only come home wet and dirty. Wish them luck.
  • by Anonymous Coward
    It's nice to see that people are rediscovering all the concepts that are in the ANSI Common Lisp language.

    In true Lisps (like Common Lisp, not Scheme) only one function, READ, is concerned with the textual representation of programs. You can change the read syntax any way you like. There actually exist reader macros that read FORTRAN :).

    In the language standard, only the chapters about the function READ discuss the textual representation of forms.

    When READ has done its job, the program is represented as a bunch of lists inside lists, vectors, structures and classes.
    READ syntax is just like XML, but better for programmers. (XML is just another concept that has been familiar to Lispers for 40 years now.)

    After READ has done its job, MACROS are executed. Unlike the C preprocessor, Lisp macros are like normal Lisp functions. The only differences are when they run (after read, before compiler macros) and that macros don't evaluate their arguments.

    After macro expansion it's time for compiler macros. Compiler macros are nice for many things I'm not going to tell you.

    After that it's time to compile to machine code. Most CL implementations compile to machine code.

    In short: CL programming is like programming a compiler. READ is like the lexer (you can change the syntax of the program any way you like). Then we have an intermediate representation using internal data structures. Macros are like what a compiler does to intermediate structures in conventional programs. Compiler macros are like local optimizations. After that comes compiling to machine code, but that is the job of the CL implementor.
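The READ-then-expand pipeline described above can be sketched in a few lines. Here is a toy s-expression reader in Python (an illustration of the idea only, not Common Lisp's actual READ, which also handles reader macros, symbols, strings, and much more):

```python
# Toy s-expression reader: turns text into nested lists, after which
# "the program" is just data structures, as with Common Lisp's READ.

def tokenize(src):
    # Split the source on parentheses and whitespace.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    # Recursively build nested lists from the token stream.
    token = tokens.pop(0)
    if token == "(":
        form = []
        while tokens[0] != ")":
            form.append(read(tokens))
        tokens.pop(0)  # drop the closing ")"
        return form
    return int(token) if token.lstrip("-").isdigit() else token

program = read(tokenize("(+ 1 (* 2 3))"))
print(program)  # → ['+', 1, ['*', 2, 3]]
```

Once `read` has run, the program is plain nested lists, which macros could then rewrite like any other data.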
  • I think we need a new moderation category: -1 technobabble.

    Down that path lies madness. On the other hand, the road to hell is paved with melting snowballs.
  • You're completely misunderstanding the point of this language. "Representation-independent" semantics does NOT mean that you'll code in happy little Egyptian symbols rather than text. It means that code can be presented to the programmer in whatever form best suits the task at hand, whether that be text, a call graph, a class hierarchy, etc.
  • We in the US haven't had to deal with this since we gained our independence from Britain in the 1700s. I say we take a stand, and show these totalitarian swine what we're made of!

    Who's up for the Boston CD Party?
  • *any* knowledge of Wittgenstein is liable to influence just about everything. :-) The *Tractatus* is very proto-OO in places, for example.
  • If an Eidola program crashes, does it bleed out too?

    --K
  • Programming language theory has always dealt with "semantics": mathematical objects that correspond to a program, about which you can reason mathematically in order to prove desirable properties (security of protocols, correctness of a compiler, etc.).

    So all computer languages are already, in a sense, "independent of the text of their source code".

    Most of the usual languages such as C, C++ or even Java have rather non-rigorous semantics, due to their being too close to the complexity of the computers they run on, which makes it difficult to have a mathematical model of execution, and also due to making too many compromises toward such things as compatibility and readability.

    But there exist languages that have always been developed with very rigorous semantics, such as the Haskell and OCaml functional languages.

  • I suspect it will end up being something more along the lines of providing example results, and having the computer then try to mimic them.

    For more on this idea--sometimes called "Programming by Example"--see Henry Lieberman's PBE home page [mit.edu], or the fascinating book (now available online) Watch What I Do [acypher.com].
  • Do they have a problem that doesn't fit the constructs of the languages available?
    If so, define the problem so I can see the usefulness of a new programming paradigm.
    If not, a new language is not needed.

    Often it seems (to me) that crazy languages are created by crazy people who don't know how to program.
  • I've written one in Prograph [pictorius.com], although it has been several years since I last looked at the language. Basically, you define blocks with input and output data streams. At the top of the sort block, you make a sort of gateway block that splits into two streams, each of which sends data to the block itself. It is actually pretty easy.
  • I disagree totally with the statement that text is not a good way to represent program structure. Humans are very good at representing complex ideas with language, and this is represented as text in the written form of almost every language.
    Why move back to the equivalent of cave paintings to write programs? Just make text even better!
  • Look at that! No textual representation, object oriented, what an idea!! Geez, somebody should have thought of this 30 years ago!! Oh, wait, they did... it's called Smalltalk!!!
  • Hmmm... Another means of representing the same old programs. I suppose its target was to make the programmer's life easier, and the code easier to understand. Both of which are great targets.

    On the other hand, I suspect that what we really NEED is a language that can be REALLY well optimized. In the "good old days," when PCs had 1 MHz processors, 10 MB HDDs and 64 KB of RAM, programmers had to struggle to make their code fast, tight and clean. These days, with 1 GHz CPUs, 30 GB HDDs and 1 GB of RAM, that ain't so. Programmers can just add a "-O6" flag and hope for the best.

    Can you imagine a programming paradigm that meant code could be written as readily as it is today, but which compiled to that old fast 'n' tight code?

    We wouldn't need to upgrade our hardware every 5 minutes just to cater for the latest greatest office suite, desktop, or version of 'X', which now takes 4 full CD-ROMs to install because the code is so lazy, and re-use is still just a pipe dream!
    --
  • Alright, I'm a fool. Can't even post to the right thread. Sorry.
  • This one's just a lame joke.
    ---
  • Clearly the internals are the same as any other language, so what makes Eidola revolutionary is its representation (or independence thereof).

    And they have ZERO examples of any good Eidola code! Maybe some white knight will step in and write some good representation engines for them?

    Have they thought this through even once??? Uh, Eidola guys, you might want to spend a few more days thinking before calling for help. On damn near every page on your web site...
  • "text is a very poor notation system for the concepts of a high-level language"

    I agree; text is a horrible mechanism for representing semantic concepts. Look at English, for example, I can't remember the last time I saw anything that conveyed high-level ideas in English. oh wait... I forgot about that whole 'literature' thing....

    --Jered

  • Just objects with plug-compatible connectors, kind of like ICs.

    Er, you'd think that the hardware engineers would have stumbled across this idea, considering it's their job to put little ICs together. Oh, wait, I bet they did....

    The primitive objects should just be sensors and effectors which can be connected together into higher level objects.

    How does this work, exactly? How do I set up some sensor so that it knows what to look for if you won't allow me any sort of programming language? You can't expect people to build a mountain of objects -- without being able to configure any of them -- just so they can tell when someone has finally given up on the system starting and hit ^C.

    Just drop them in and they find the right connections, automatically and reliably.

    Of course, this sort of automatic programming has never worked. And even if it did work you'd never be able to prove it, because all of the vast resources computer science has devoted to proving program correctness assume that you have some idea of what the program is actually doing, and that you can translate it into a logical (that is, linguistic) framework.

    Programming for the masses!!

    Oh, great. I hope that isn't supposed to win over any converts. The "masses" have never been terribly enthusiastic about programming, and most of those who are, can't.

  • There are other non textual programming abstractions out there. The Mozart project [sourceforge.net] is one, but there are others such as Microsoft's own Intentional Programming. [microsoft.com] You can find other related links from the Mozart page.
  • Proposed definition of infinity:
    I call "infinite" the solution of: x * 0 = 1.
    This definition takes my brain about half a second to process.
    Historically, much of mathematics has been built by defining something as the solution of an equation with no solution. For instance, x*x = -1 yields the complex numbers, x*x = 2 yields the irrational numbers, etc.
  • 1/ Words such as "absurd", "existence" or "doesn't work" do not have much place in my own understanding of mathematics.

    2/ The proof you gave, at best, proves that the 'x' solution of the above is not a natural number. The exact same reasoning you gave proves that there is no solution to x+1 = 0. Do you deny the "existence" (whatever meaning you give to that) of the number -1?

    3/ You use a recursive proof which you claim is invalid in some other posts. This is an internal inconsistency. While this has no mathematical value whatsoever, it puts in my "brain" a serious doubt regarding the rest of your "reasoning."

    So you say that infinity has no place in your own mathematics. Fine. Then, you are using the word "mathematics" in a sense which is not the sense I use myself.

    To me, mathematics is a methodology for reasoning derived originally from idealized physical entities (such as square, lines, etc), but which has since then been formalized using logic, axioms and formal derivation. This formalism is useful because it has been shown by historical experience that it helps us "discover" and "model" physical properties.

    Complex numbers are a very good example. The original "existence" of complex numbers is the observation that you can build a set of consistent rules with numbers built using the "imaginary" solution of x * x + 1 = 0. Even better, the resulting rules are quite close to those that we have previously defined for "natural" numbers.

    But the most interesting thing about complex numbers is that they turn out to be a very good way to describe, for instance, electromagnetic fields or wave equations. And the circle is complete: we are back to "real" things. The same thing happens with infinity. A physicist will use an infinite line as a model because it works well, regardless of the existence of any such line in "the real world." Reminds me of the old joke about the physicist asked to model a horse race, who starts with "let's assume a spherical horse." That's really all there is.

    So mathematics, in my opinion, does not work because of its abstract beauty or whatever. It works because it allows us to make predictions about what we call the "real" world.

    As a last comment, the use you make of "absurd" or "beauty" makes *you* behave like a religious zealot who owns The Only Real Truth of Saint Intuitionism. And, frankly, I don't see why the fact that Cantor was Jewish has anything to do with the quality of his maths. Oh, or maybe it is because 42 is the answer?
  • At the very least it seems pretty early on. It seems to be almost entirely theory. Without code examples (how would they be represented?) it's very difficult to fully comprehend. Or maybe I'm just too used to coding in the traditional way. This could be the biggest leap since Java (or assembler, or C, etc.) or nothing but theory.
  • "To solve this we need to focus on just what those tasks really are, and how people think about them. Most people think about the results they want; few can think about the steps needed to achieve those results. What will work better for the age of information is tools that work better with the concepts that average people (as opposed to academics) really think about, and use that to produce what they want."

    The trouble with this whole concept is really simple and has nothing to do with computers, but with problem solving in general: people DON'T know what they want! You're asking for a DWIM machine. You may as well forget about it cause it ain't gonna happen.
  • This is a result of something deeper, in part. There's no canonical solution to even a rigorously defined problem. Solutions vary and so will their bytecode or other mathematical representation. Your mileage will vary.
  • There's not a lot new here, and an awful lot of rubbish. Let's look at the differences page...

    "Traditional programming languages are heavily tied to their representation as textual source code" -- well, no, they are heavily tied to their syntax representation as a context-free language, usually one that has to be LALR(1). The syntax of the language is ultimately represented by a tree. If you don't like the usual textual representation of that tree, why not make a utility that manipulates the structure output by the compiler's parser directly? (Because the tree is fiddly to build, which is why we don't do that.) LISP and Scheme have a syntax that shows the tree structure of the language very explicitly.

    A text file can represent anything that can be stored in a computer (a sequence of numbers separated by spaces is all that's required). Admittedly that's not terribly easy to manipulate; if it were, we'd all be very happy writing machine code with no assembler. So we try to invent computer languages that better represent how we use natural language, because we are very good at using natural languages (what we got was things like COBOL). Mathematicians are very good at using mathematical languages, so we also got languages like ML. Engineers gave us FORTRAN (probably because they hate us). Computer scientists noticed early on that higher-level languages were well represented by trees and lists, and gave us LISP and Scheme (and lots of parentheses).

    They say that the other two of the general differences exist in one of C++ or Java. Um, so they're nothing new, then.

    "Explicit member inheritance". Hmm, would it really be helpful to be able to inherit a function from a superclass but not the member variables it uses? I can't find any reference to what they mean by a signature, though context would indicate that they mean a declaration here, though that still makes no sense... In the languages I have used that have 'signatures', they are pure type definitions. Usually these languages also have type inference, so the compiler can spot an error in your code by ensuring that it matches the type, but this definition seems to be too strict for their usage of the word. Last time I wrote Java, it did not have these, and it doesn't do type inference so it doesn't need them anyhow. (C's extern declaration does have some of these properties, though.)

    I can't see through the bollockon field on the Multiple Inheritance thing. Are they talking about aggregation here?

    "Virtual members". Hmm, that's just overloading the new operator. Operator overloading is all yuck anyway from a type analysis point of view, which is why they have to generate type errors at /runtime/, which indicates a duff type system.

    "Signature specialization". This is a problem that polymorphism addresses much more neatly, with compile-time type checking too. C++'s templates do a sort-of impression of this, but a language like ML does it better.
    - fun append a b = b::a;
    > val 'a append = fn : 'a list -> 'a -> 'a list
    - append [ "Hmm" ];
    > val it = fn : string -> string list
    - val newappend = it;
    > val newappend = fn : string -> string list
    - newappend "Blah";
    > val it = ["Blah", "Hmm"] : string list
    - newappend 1;
    ! Toplevel input:
    ! newappend 1;
    ! ^
    ! Type clash: expression of type
    ! int
    ! cannot have type
    ! string

    "Multiple function outputs". Hmm...
    - fun whoo (x, y) = (x+1, y*3);
    > val whoo = fn : int * int -> int * int
    - whoo (1, 2);
    > val it = (2, 6) : int * int
    They're talking about tuples here. C people call these mysterious things structs.

    "Parameter names in signatures". Uhm... They didn't explain themselves very well here. Why? Parameter names only matter inside a function definition; outside, the types are all that matter, as you have to be able to pass the parameters.

    Andrew.
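The point that the syntax is ultimately a tree can be demonstrated with any compiler that exposes its parse output. A sketch using Python's standard `ast` module (`ast.unparse` requires Python 3.9+):

```python
import ast

# The textual source is parsed once...
tree = ast.parse("result = (a + b) * 2")

# ...and from then on the "program" is a tree we can inspect and
# manipulate directly, with no text involved.
expr = tree.body[0].value
print(type(expr).__name__)     # BinOp
print(type(expr.op).__name__)  # Mult
print(ast.dump(expr.left))     # the (a + b) subtree

# Rendering the tree back into text is just one possible "view" of it.
print(ast.unparse(expr))       # (a + b) * 2
```

This is exactly the "utility that manipulates the structure output by the compiler's parser" option: the tree was there all along.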
  • The APL design goals were the same as those listed for Eidola.

    I wrote an APL system to keep track of my LP collection back in the 1980s -- one line was all it took.

    But APL code tends to be "write-only" code - if Eidola is easier to read, maybe it will become something big.
  • > It's all been tried before and it never works because programming is not maths.

    Actually all computer science (with the possible subset of computer architecture, which is engineering) is applied mathematics, including the subset of programming.

    Wasn't it Don Knuth who compared writing a computer program in a specific language to proving a mathematical theorem using a specific set of axioms?

    Eventually programming comes down to boolean operations and base two arithmetic. Sounds like math to me.

  • What do you consider "visual"?

    One of the Java demos has an interactive sorter that performs Quicksort.

    CLR or Knuth Vol. 3 tells you on paper what a quicksort algorithm is. I have to read the paper to "visualize" that algorithm.

    How about a flowchart?
  • I don't think an obfuscated Eidola contest would be challenging enough. How about an UNobfuscated APL contest? Now that would be something!
  • Yeah, but that would make those inductive proofs really hard ;-)

    But seriously: there is no way to design a language for a Turing machine (or any real machine of your choice) that would allow the set of programs to have a cardinality other than \Aleph_0. Think about it, it's quite easy. And besides, I don't see what you would gain with that.
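The cardinality claim deserves one spelled-out step (a standard counting argument, nothing Eidola-specific): in any representation, a program is ultimately a finite object over some finite alphabet $\Sigma$, and there are only countably many such objects:

```latex
% Finite strings over a finite alphabet form a countable set:
|\Sigma^*| \;=\; \Bigl|\,\bigcup_{n=0}^{\infty} \Sigma^n \Bigr|
          \;\le\; \sum_{n=0}^{\infty} |\Sigma|^n \;=\; \aleph_0
```

So no choice of notation, textual or visual, yields more than $\aleph_0$ distinct programs.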
  • I think it could be graphically oriented, and at the same time it doesn't have to be. I think that is what they mean when they talk about filters. You can view it how you want to view it. Say you want to view it in a UML type of way; then you use a UML filter, and if you want to view it as text, you'd use a text filter.

    There is something I would like to point out. IT DOESN'T COMPILE. From what I have read, it is an interpreted language. You need what they are calling the kernel to run it. I am assuming that this is their version of a virtual machine.

    This leads me to another feature. They are writing it in JAVA! I understand the "coolness" and "cross platform" abilities you get by writing something in Java, but do we really want to write an interpreted language using an interpreted language? I don't think so. I would think it would be better to write it in C, C++, Pascal, Fortran, et al. This would be the equivalent of writing a C/C++ interpreter in the old interpreted BASIC, in which you had to be in BASIC to run the code. Does this make sense to anyone?

    Don't get me wrong. I like the idea; I disagree with how they are doing it. I would write it in another language, taking time to design it so that the platform-dependent sections are together to make porting easy. I would also write it so that you could either run the code in the kernel or compile down to a native binary and run the code independent of the kernel/virtual machine. I would probably have done a proof of concept in Python, or another scripting language, to whet people's appetite for the project and allow people to start creating code.

    I think this has potential, but so far I don't like where this is going. I am sure somebody will come up with a similar methodology with the benefits I want; the question is when.
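The "filter" idea described above is easy to mock up: keep the program as pure structure and render it through interchangeable views. A hypothetical sketch in Python (the tuple encoding and view functions here are invented for illustration; Eidola's actual model differs):

```python
# A sketch of the "filter" idea: the program exists as pure structure,
# and each view function is just one way of rendering that structure.

call = ("call", "greet", [("str", "world")])  # the program, as structure

def text_view(node):
    # Render the structure as conventional source text.
    if node[0] == "call":
        _, name, args = node
        return f"{name}({', '.join(text_view(a) for a in args)})"
    return repr(node[1])

def outline_view(node, depth=0):
    # Render the same structure as an indented outline instead.
    pad = "  " * depth
    if node[0] == "call":
        lines = [pad + "call " + node[1]]
        for a in node[2]:
            lines += outline_view(a, depth + 1)
        return lines
    return [pad + node[0] + " " + repr(node[1])]

print(text_view(call))               # greet('world')
print("\n".join(outline_view(call)))
```

Neither view is primary; each is derived on demand from the same structure, which is the whole point.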
  • I would have to disagree to a point. The idea is OK, but the implementation is bad. This is just another interpreted language; instead of a virtual machine (like Java uses) you have a kernel.

    I would also miss the use of pointers and references. They can save on memory and space by not requiring copies to be made. Although they could implement it so that this is not a problem.

    I like the idea. It would open up new avenues in development. I don't like the idea of forcing OOP as much as they do. I think a good tool set would allow for adaptations of better designs as they come along (like C++ does). Java is pretty much stuck in the OOP model because of its rigid focus on OOP. It appears this is where they want this to go.

    I also don't like the idea that the code doesn't appear to compile down to a binary. I would like an environment like this for development, but I would want the final product to have the option of creating a binary that would run faster than when you use the kernel to run it.

    Scripting languages are great, as we all should know, but do we want to go back to an interpreted-language format? The kernel idea reminds me of the old BASIC days when BASIC was only interpreted.

    Overall, I think this is a good start, but I think they need to reconsider the design a little. When you remove "dangerous" stuff like pointers, there is a big potential for more programmers to not bother learning what is going on or thinking things through. This again reminds me of my childhood, and programming in BASIC.
  • Well, like many others, I pored over this site for some information on how this whizz-bang magic of freeing ourselves from text files is supposed to work. The excessive flurry of barely-defined buzzwords sorta makes me want to shy away from it, too. After reading everything, I can come to no conclusion about anything, save that if this ever gets finished, it would probably not be useful for much more than rapid prototyping (did I read that right, a *Java* compiler?!) and the like.

    "In short, Eidola is a programming language which separates a program's structure from how that structure is presented."

    Maybe I'm too locked into the square, obsolete, "old-school" world of text files, but I can't for the life of me figure out what the hell that's supposed to mean.

    Maybe I'm just old-fashioned, but this really seems like a drastic new approach for the sake of a drastic new approach. Maybe it'll pan out and be of value to people and projects, but I'm not holding my breath. In any case, it can't hurt to try.
  • by Anonymous Coward
    Most programmers are very wired into text as their default representation - we are, after all, very textual creatures (could we have this discussion in diagrams? Conceivably, but it would be an inefficient representation of the semantics). Some specialized systems like visualization networks work very nicely as a visual interface (AVS) because they have this nicely visual metaphor: networks of sources and filters. But as for Hello World and its useful cousins, I dunno. steve d.
  • From a brief look at the web page, this sounds awfully similar to Microsoft's Intentional Programming [microsoft.com] project, only less general, and less sophisticated (IP is designed to be able to represent many existing languages, with their differing semantics). They don't make any mention of IP in their FAQ though. Could someone who knows more about Eidola expound on the similarities and contrasts of these two projects?
  • Thanks to a couple of people for pointing out to me that similar formal specifications exist for OO languages. For instance, looking at the type systems for the simple L1 and L2 languages described here [ic.ac.uk], with their accompanying soundness proofs, gives me all kinds of warm fuzzies.
  • run aleph[1] programs.

    I'm not sure what kind of keyboard you would need to write regular aleph[1] programs.

    Some weird quantum analog hybrid might more
    truly be considered aleph[1].

    Is the notion profound, stupid, or profoundly stupid? I really can't tell.
  • I'm looking forward to the day that source code is stored in XML.
    Traditional programming languages are heavily tied to their representation as textual source code, which is unfortunate -- text is a very poor notation system for the concepts of a high-level language
    I agree -- there's little point in a human attempting to encode concepts in plain text, then expecting the compiler to deduce that context from the code. One simple way of working would be for the IDE to store the syntax highlighting in the code, so the programmer can instantly see any mistakes on the screen and alter the automatically-generated markup.
  • Both have block structure and static, lexical scope. These were new features when Algol *68* came out.
  • I'm having difficulty seeing the novelty of Eidola. It's certainly not the first language to have a basis in a formal semantic calculus -- the lambda calculus for functional languages goes back to the 1930s. Eidola is also not the first to aim at a rich yet provably correct type system.

    I agree. I don't see anything on the Eidola site about the author's background or credentials, so it's hard to tell where he's coming from; but he either seems to be unaware of a lot of work in this exact field, or is ignoring it for reasons he doesn't explain. The fact that he provides no bibliographies (that I saw) or other references to prior work by others doesn't help.

    Besides, the notion of different representations of the same program is an almost trivial one, and doesn't really need a special language to support it.

  • Calvinbola got me grinning, too. You have clearly been blessed by Jeebus. Since I don't have mod status today, you'll have to settle for my lame adulation, sorry!
  • In the real world, the luxuries we think programming needs to have just don't exist.

    we train ourselves to perform the steps required to accomplish specific tasks

    We who? Programmers? In the real world, business needs more programmers because there are more tasks. And those tasks are results-driven, leaving it up to the programmer to translate result requirements into programming models. It is this step that needs to be implemented. Of course, that can mean the end of programmers as we know them today.

    What's really needed ... is time

    Joe Normal will never program as we know it. But I do believe the eventual end result is that Joe Normal will be able to get a computer to get things done that would today require a programmer, and require time.

  • One of the problems that seems to be missed by the academic research community is that the more abstract something is made, the fewer the people who can work with it. While such things as the C language and assembly language are not something the average person can take on, abstractions which work in mathematical constructs are no better.

    One thing that will be needed more and more in time is the ability to make more people capable of accomplishing the tasks we today call programming. To solve this we need to focus on just what those tasks really are, and how people think about them. Most people think about the results they want; few can think about the steps needed to achieve those results. What will work better for the age of information is tools that work better with the concepts that average people (as opposed to academics) really think about, and use that to produce what they want.

    I suspect it will end up being something more along the lines of providing example results, and having the computer then try to mimic them. The cycle would be repeated with corrections and adjustments until the results are as expected, or close enough to be satisfactory. These tools still have to be programmed in a more conventional way, though I suppose Eidola is perhaps one approach to accomplishing this.

  • You suffer from the fallacy that someone who does not agree with you must not agree because they do not understand. I understand. I still don't agree.

    I am personally undisturbed for other philosophical reasons (which are not worth going into) that mathematics is not self-proving, and am perfectly happy with an "infinity" that may not be definable in the conventional sense but still has definable behaviors. "The square root of negative one" isn't real either, but still has definable behaviors, and is thus as real as necessary.

    (Still, kudos for the approach; it takes guts to shake the foundations of reality and see what comes out, even if I don't agree with your assessments. This is not sarcasm, it only sounds that way on the Internet.)

  • You don't get it: these properties of text-based storage of source code put an upper limit on a system's size.

    Now C++ and C are typical examples where the fixed semantics of the language are not enough. Hence the use of preprocessors. If you were right, preprocessors would be redundant.

    Obviously you haven't read Fowler's book on refactoring, otherwise you wouldn't claim such nonsense; I strongly recommend you do.

    ", but they are different conceptual entities, and should be stored and represented differently."

    Just one question: why?

    Answer: design uses different representations because source code does not provide you with the constructs to model the design (i.e. the expressiveness of source code is very limited). Abandon the notion of text-based storage and fixed semantics and that problem goes away. No more outdated designs! The system description contains all the information you need. You just have to query it in a smart way to get rid of the info you don't need.

    "programmers have to do it, and managers have to build time and motivation into the the process to ensure that it happens."

    Walk into your average large software company responsible for, let's say, around 5 million lines of code. Ask for the design documents. Presuming they don't throw you out, you'll find that in most cases either the design consists of some very outdated documents or the source code is the documentation. In real life people don't spend much time recovering designs unless they absolutely have to. Try convincing managers you want to spend the next three months just recovering the design of the product you're working on.
  • Text is unsuitable for representing structured data. That's why databases exist. In fact the only reason we store programs in ASCII files is that programmers are reluctant to give up their vi/emacs editors.

    Because of this:
    1 - Languages have fixed semantics. The text files follow a syntax which cannot be changed. Consequently the semantics of the syntactical constructs are also fixed. Ever wanted to add multiple inheritance to Java? Well, you can't because the language spec forbids it.
    2 - Automatic refactoring of code is difficult; regular expressions just are not powerful enough to do the job. You need more structure to do it properly.
    3 - You don't have first class representations of design concepts. Because of point 1 you can't add them to the language either. But worse, the most abstract details about your system are lost when you store them as text. You won't be able to recover them with a parser, simply because the information is no longer there. Ever tried to recover a high-level design from just the source code? I remember seeing something about a dependency graph of the Linux kernel just a few days ago. That's the best we can do with source code.

    Now does this mean we have to give up on our editors? The answer is no. All that needs to change is the primary representation of the program. Once you have a representation, you can define a view on it to be able to manipulate it. ASCII is probably the simplest form of such a view. However, it is also a rather limited view, so we'll need more advanced tools as well.

    Finally, I should add that the Eidola folks should visit Microsoft's site: http://www.research.microsoft.com/ip/ to learn about their intentional programming project. Charles Simonyi had a similar vision years ago, and apparently the project has advanced significantly since then. The information on the page is a bit limited, unfortunately, but there are some very interesting papers.
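    The "primary representation plus views" idea is easy to sketch. Here's a toy version in Python; every name in it (ClassDef, java_view, and so on) is my own invention for illustration, not anything from Eidola:

```python
# A toy "primary representation": a program stored as plain data
# structures rather than text. A textual "view" is then just a
# function over those structures. All names here are hypothetical.

class Member:
    def __init__(self, name, type_name, visibility="public"):
        self.name, self.type_name, self.visibility = name, type_name, visibility

class ClassDef:
    def __init__(self, name, parents=(), members=()):
        self.name = name
        self.parents = list(parents)
        self.members = list(members)

def java_view(cls):
    """One possible ASCII 'view' of the structure, in Java-ish syntax."""
    extends = " extends " + ", ".join(cls.parents) if cls.parents else ""
    body = "".join(f"    {m.visibility} {m.type_name} {m.name};\n"
                   for m in cls.members)
    return f"class {cls.name}{extends} {{\n{body}}}"

c = ClassDef("Point", members=[Member("x", "int"), Member("y", "int")])
print(java_view(c))
```

    A second view function (Pascal-ish, graphical, whatever) could render the same ClassDef without the stored program changing at all, which is the whole point.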
  • All programs start as vaporware. This is not a product announcement, just a few people communicating some good ideas about the way we currently build software and a plan to improve it.

    Since the very basis of their idea is not to have a text representation, you won't find code examples. I think the best way to think about working with these kinds of systems is as manipulating parse trees (without actually having to parse!). For instance, variables will not be represented as identifiers. Rather, you'll just point at the spot in the parse tree that the variable refers to (i.e. some subtree). The pointer can point to any subtree, so if you've built some very complex algorithm, all you have to do to reuse it is point at it in the parse tree.
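    The subtree-pointing idea can be mocked up with plain object references. A toy sketch (Python, with hypothetical node classes; Eidola's real structures are surely different):

```python
# Toy illustration: instead of referring to a value by an identifier
# (a string that must be parsed and resolved later), a "use" holds a
# direct reference to the subtree that defines it.

class Num:
    def __init__(self, value): self.value = value
    def eval(self): return self.value

class Add:
    def __init__(self, left, right): self.left, self.right = left, right
    def eval(self): return self.left.eval() + self.right.eval()

# Build a "complex algorithm" once...
shared = Add(Num(2), Num(3))

# ...and reuse it by pointing at it: no identifier, no re-parsing.
program = Add(shared, shared)
print(program.eval())  # both operands are literally the same subtree
```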
  • Note - important responses to many posted questions here!
  • So far I've only tried it under Linux, but it worked fine using the 1.3 JDK for me.

    It should probably read "Java 2" instead of 1.2.

  • I remember seeing an article about nano-motors that used vaporised water to move a piston that made a shaft rotate. A friend pointed out it was a steam engine. Just very small.
    Now people are talking about fibre optic delay lines as storage devices. Some of the earliest computers stored data as sound waves in mercury [cam.ac.uk] and
    nickel wires [science.uva.nl]. A speaker injected sound in one end; it was picked up by a microphone at the other, re-shaped and squirted back in.

    Same idea, different medium.
  • This isn't a programming language, it's a model of computation, and a poor one at that.

    Whoop de doo. Another extra-complicated Turing machine substitute.

    To quote from the semantics manual: "implementations represent the semantics instead of the reverse -- What a wild thought!"

    What a ridiculous thought! What a wrong thought! The land doesn't represent the map. This is just an example of poor semantics (take that either way, both work).

    It might be useful if they defined a representation, but that would cost them their gimmick, and make Eidola just another language to be judged on its true merits, wouldn't it? The sensible thing to do to avoid getting hung up on the representation would be to define a simple and consistent back-end representation meant for computer processing, not human reading or writing. Instead, he promotes his poor, overcomplicated, model of computing and discourages standardizing the representation as "limiting" (how much better we would all be if we ignored other such limiting standard representations as ASCII and two's complement!).

    On the bright side, he really seems to be having fun with TeX. It all looks much more respectable with Greek letters and logic symbols, doesn't it?
    ---
  • To ensure that the program is completely independent from the representation, you can't use the same notation twice.
    ---
  • Aleph[0] is what you become when you die.

    Aleph[1] is what you become when you die as an Aleph[0].

    Jeebus is Aleph[CXD] (where CXD is my pathetic imitation of an infinity sign; it is also the ASCII representation of the Jeebus-fish, which eats its own tail while turning the water it swims in into wine).

    Jeebus told me so in a vision.
    ---
  • I fail to see the true distinction between the mental construct of natural numbers and other mental constructs. They are all mental constructs that may or may not reflect the true nature of the physical world.

    While it is true that our proof of the existence of infinity is no better than our proof of God's existence, it is also the case that we have absolutely no proof of our own existence, or of anything else in the physical world for that matter. We can prove nothing about the physical world. The best we have is "I think therefore I am" - and that is still somewhat faith based. That our perception of the passage of time reflects the true nature of the physical world is also a faith-based belief. Time may very well be an illusion for all we know. We would be foolish to limit our models of the real world to only those things that are intuitive. It just might be the case that there are counterintuitive things out there. And then once you recognize these things, your "intuition" can be retrained so that what was once paradoxical is now intuitive. I don't believe that our intuitions are fixed in stone, nor do I believe that everyone has the same intuition. Intuition is plastic, like all of our other mental processes, and can be altered to suit external conditions. The "undeniable" messages we receive from our intuition are often completely wrong! The only thing my intuition tells me that I must always believe is "I think therefore I am".

    "Infinity", while defined better than, say, a "tooth fairy", is still more fantasy than mathematics.
    But if it has definable characteristics within a mathematical system, then it is part of mathematics! Besides, all of mathematics could be called a fantasy anyway. There is no such thing as mathematics which is "true" - only various mental constructs with varying degrees of usefulness and aesthetic appeal.
  • If your mathematics is so pure, so great, then why does it contain paradoxes, contradictions, undecidability and incompleteness?

    Sheesh! You can't have everything! As demonstrated by Gödel, any sufficiently powerful formal system will necessarily be incomplete in some way. This is not a bad thing - it's just a property of formal systems. Formal systems are not like a religion. They are simply a game with rules that we invent and follow. Nothing at all to do with the "real" world. Math is just something in your head. It has nothing to do with what is "just" or "good", and nothing to do with religion. BTW, Cantor's infinities have well defined properties and are not inconceivable at all. It doesn't matter if infinity exists in nature - we can imagine it. We can also imagine tooth fairies or anything else we want to think about.

    Now if we build mathematical models of reality (eg, Quantum Chromodynamics, Superstring Theory, General Relativity), then there can be "religious" or "faith based" aspects, namely that you have faith that your model conforms to reality. Yet we have no assurances that "reality" can be mapped into a formal system.

    Heisenberg showed us that, to the best of our knowledge, we can't simultaneously know both position and momentum to arbitrary accuracy. Are you going to complain about that too? How about Schrödinger's cat, which is both alive and dead? And those photons, which are both waves and particles? If nature were so great, would it have all that indeterminate stuff built in? These things are just a property of nature, to the best of our ability to measure.
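    To back up the Cantor point above: that these infinities have well defined properties is ordinary textbook mathematics, not faith. Cantor's theorem pins down a whole hierarchy of them:

```latex
% Cantor's theorem: |S| < |P(S)| for every set S, so there is
% no largest infinity. In particular, for the natural numbers:
\[
|\mathbb{N}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathcal{P}(\mathbb{N})| = |\mathbb{R}|
\]
% The strict inequality follows from the diagonal argument and
% needs no assumption about whether infinity "exists in nature".
```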
  • Text is unsuitable for representing structured data.

    Text is, however, excellent for describing the structure of data. Which is what a program needs to do.

    1 - Languages have fixed semantics. The text files follow a syntax which cannot be changed. Consequently the semantics of the syntactical constructs are also fixed.

    That's not a bug, that's a feature. Fixed semantics enhance communication. Anyone who's worked on C++ code where someone has abused operator overloading knows the danger of redefinable semantics.

    2 - Automatic refactoring of code is difficult; regular expressions just are not powerful enough to do the job. You need more structure to do it properly.

    Refactoring shouldn't be automatic. Large changes to code should be the result of careful thought.

    3 - You don't have first class representations of design concepts...Ever tried to recover a high-level design from just the source code?

    I wouldn't expect to be able to do so any more than I'd be able to recover the source code from the object code. You can get some idea, certainly, but they are different conceptual entities, and should be stored and represented differently.

    The difficulty is in keeping the different representations in synch. If they are actually useful representations, automated tools won't do the job; programmers have to do it, and managers have to build time and motivation into the process to ensure that it happens.

    Tom Swiss | the infamous tms | http://www.infamous.net/

  • this is the bridge pattern applied at the language level. the pattern, as defined by the gang of four:

    Decouple an abstraction from its implementation so that the two can vary independently.

    from the Eidola front page:

    In short, Eidola is a programming language which separates a program's structure from how that structure is presented.

    i know i might be stating the obvious to a lot of OOP-savvy /.ers, but i had trouble grasping the bridge pattern when i first read it, and wanted to point out this great example.

    if you do OOP for a living -- hell, if you're a programmer of any stripe -- i encourage you to read the book [fatbrain.com].
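    For the code-inclined, here is the bridge pattern boiled down to a toy sketch (a generic example of my own, not from the book and not Eidola's design):

```python
# Bridge pattern: the abstraction (Circle) holds a reference to an
# implementor (Renderer), so the two hierarchies can vary independently,
# just as an Eidola program would vary independently of its views.

class Renderer:                        # implementor interface
    def circle(self, radius):
        raise NotImplementedError

class TextRenderer(Renderer):
    def circle(self, radius):
        return f"circle r={radius} as text"

class VectorRenderer(Renderer):
    def circle(self, radius):
        return f"circle r={radius} as vectors"

class Circle:                          # abstraction
    def __init__(self, radius, renderer):
        self.radius, self.renderer = radius, renderer
    def draw(self):
        return self.renderer.circle(self.radius)

# Same abstraction, two representations; swap without touching Circle.
print(Circle(2, TextRenderer()).draw())
print(Circle(2, VectorRenderer()).draw())
```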
  • As somebody working in information visualization, I would love to see a useful visual representation of programs. The idea is great, but they had better start working on the visual parts real soon, because this is going to decide between success and failure. Their underlying system might be a huge leap (which I doubt, but there you go), but if they have no good visual representation and no textual one, who is to use it? But if they do, this might become a great thing.
    I'm really looking forward to more news from them.
  • by babbage ( 61057 )
    text is a very poor notation system for the concepts of a high-level language.

    Like, say, English? If the first principle these people are starting from is that text is a bad way to express ideas, then I have a very hard time imagining what it is that could be better. Pantomime? Grunts & gestures? Pictograms? From their page [eidola.org]:

    Clean good (C, Java, Scheme); monstrous and messy bad (C++, Perl, Ada).

    Hey, different strokes for different folks and all that rubbish, but the thing I like most about Perl (for one) is that it is *intentionally* messy, just like human languages. You can tell that a linguist came up with it. Complex problems simply don't always map well against simplistic solutions. A complex language, and the rich expressiveness that it allows, is often just the thing that is needed. English & Perl both work so well because they are flexible, adaptable, messy, and dynamic. "Orthogonal" languages are so hell-bent on rigorous mathematical structure that they often get in the way of more natural (to humans) ways of conceptualizing the problems at hand.

    It also bugs me that I can't find any samples of the source code (or whatever -- graphical flow charts, who knows) of this language. The best I've found so far is a high level description [eidola.org] of their design criteria, which is all well and good, but if they can't even formulate their concepts well enough to express them in something looking like code, then I can't imagine how they expect anyone to want to use it. "I know, they'll just, like, think really hard about stuff, and it'll work better." "Radical man, really radical. Hey, pass me that willya, I need another hit..."

    Riiiight.....



  • Dude, read the FAQ on their site. Eidola is an experiment right now, not a working model. Even they admit it may not pan out how they've planned it.

    If you have to ask all these questions, then forget about it. It's not what you're looking for.
    ------------------------------
  • www.eidola.org is running out of my closet. Since its posting on /. last night I've come to be very thankful for that fact.

    Incidentally, the reason for this is that Eidola is in its infancy, and rather than make snappy screenshots, Paul has instead decided to spend his time focusing on the language itself. It's not necessarily inherently visual, but given that it's NOT necessarily text-based either, the idea of having lots of different and useful graphical views of your program seems an obvious one.

  • Scheme: Functional language, very stack based, makes heavy use of LISP-style lists, funky weak-ass typing scheme, extremely recursive syntax. . . essentially lambda calculus on steroids.

    Pascal: Imperative language. Annoying typing scheme that the programmer is always tripping over, standard infix imperative language type syntax. . . Basically, C without all the things that make C C.

    I'm failing to see the family resemblance here. . could someone help me out?
  • > Personally I'd like to see an example program

    This follows on from what I was saying - you cannot see an example program without using a representation. Heck, you can't even do abstract maths on paper (or blackboard, web page etc) without a representation. You can't save to disk without a binary representation.
  • That is a very misleading headline!

    You cannot program without a representation of your code, be it lines & boxes, 10pt Courier or whatever.

    What this seems to say is that the language is independent of the representation. For example, C's { and Pascal's begin are just representations of the same thing. So are indentation styles. IMHO representation-independence is a good thing, as it makes some silly flame wars about whose style is better irrelevant.

    In order to really be representation-independent they would have to have at least 2 working representations for the coders to use, which is the opposite of the title. Sheesh.

  • I was looking for screenshots on the website. Since it's so visual I assumed there must be graphical examples etc. But I could not find anything. The whole website only contains text and formulas.

    In fact, I've seldom seen a website since 1994 that contains so few images as this one (not a single one). The author must be a very non-visual, text-oriented person.

  • Yeah, this looks a lot like Forth too.
  • Now that could be a cute contest...
  • ...the chaos and confusion if the CDC adopted Eidola:

    Researcher 1: I tell ya, Eidola is spreading across the campus like wildfire!

    Researcher 2: Shouldn't we be quarantined or something?

  • Java still doesn't have a canonical binary form at the class file level. It is conceivable (and in fact happens) that two Java compilers produce different output for the same piece of code.
  • Simonyi, Charles (father of Hungarian notation), has been working on this for at least the last 4 years; there is a paper on it here [microsoft.com].

    I got to meet Simonyi a few years ago. The group I was working with got interested in IP (Intentional Programming) and sent me to go check it out. It seemed really cool; they have so far used it a little in Outlook, but we didn't have any direct need for it.

    -Jon

    Streamripper [sourceforge.net]

  • Computer programming should involve no syntax, grammar or any such thing. It should all be objects, connectors, sensors, effectors and signals.

    Nah. The hardware people used to do things that way, but went to textual representations like VHDL for complex systems. Logic diagrams for big systems are painful to work on.

  • Yes, that's true. OCaml adds object support to Caml, an implementation of ML.
  • This is old, and it's been done. Check out Smalltalk (see the sig) and some LISP implementations. They've been doing it for 20-30 years. Every Smalltalk implementation (AFAIK) stores source, binaries and live objects in a memory image. It's the rest of the world who seems to be stuck in the 60s and 70s with files.

    As far as XML source - why? If you really want, there's a unified file-out format for Smalltalk which uses XML.


  • This reminds me of an idea I had a long time ago to make C++ more like English. I created one text document to describe the basics, and then moved on to more practical things. Here is that text document:

    E++ -- what would it look like? Could it be parsed into C++? Here is what I think it might look like:

    the header file:

    define "Krunk" as something that has these members:
    an integer called m_Int,
    a long called m_Long,
    and a character called m_Char.
    Krunk also has these functions:
    a protected function called Yell that takes 3 arguments:
    a character, a long, and a float.
    Yell returns nothing.
    a public function called Shout that takes 1 argument:
    a float.
    Shout returns a long.

    the source file:

    The Krunk interface is in krunk.h

    When you tell a Krunk to Yell, it does nothing.
    When you tell a Krunk to Shout it says 2.

    Create a Krunk called Blah.
    Tell Blah to Yell.
    Tell the Screen to Print what Blah Shouts when you give it 3.

    if this is the end, then do this:
    on sunny days:
    take a drive.
    on rainy days:
    take the bus.
    otherwise, do nothing.
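    Just for fun, the "could it be parsed into C++" question can be answered for one sentence form with a toy translator. This handles only a single made-up E++ construction (the grammar rule is my own guess, not part of any spec):

```python
import re

# Toy translator for ONE E++ sentence form:
#   "Create a Krunk called Blah."  ->  "Krunk Blah;"
# A real E++ front end would need a full grammar; this is just
# a proof of the gist.

def translate(line):
    m = re.match(r"Create an? (\w+) called (\w+)\.", line.strip())
    if m:
        type_name, var_name = m.groups()
        return f"{type_name} {var_name};"
    raise ValueError("unsupported E++ sentence: " + line)

print(translate("Create a Krunk called Blah."))  # Krunk Blah;
```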

  • I'm quite familiar with automata and Turing machines (I'm a senior in computer science, perhaps all too familiar on Friday nights when I should be out).

    That's the way to write out programs that have any graphics. You could abstract away with this. Perhaps you could make a great language with it. I'm still not convinced that it's practical though. Especially when he mentions Verification and Validation testing. This is often done with a text based script. Now I suppose that you could have a graphic that says "Pump every number, including the ones that aren't supposed to go in here, in, and tell me what it looks like," but traditionally that's done with test scripts and other such text realm goodness. Also, I can look at a whole page of text at one time; it's hard for my field of vision to take in a gigantic picture of a program. I suppose a graphical language could be OK if you're talking about making flow charts and such, but still, is it more practical than text?

  • It's important that this really be a big leap forward, and that this be made clear to developers, before they'll even think about using this kind of code.

    And that's where Aspect Oriented [xerox.com] programming such as AspectJ [aspectj.org] will come in. Aspect-oriented programming most likely is a big enough leap forward for developers to accept it whenever it goes mainstream. There are some definite clear-cut advantages that aspect-oriented programming has over OO. But it's late and I don't feel like listing any of them. Visit the site if you haven't already.


    --------------------------------------

  • But it starts to think of programs in a level even higher than text, if that makes sense. I could almost think it's trying to treat the source code as annotation and description of a program, and not the actual implementation...It's different than 'visual' thinking, in which you have functional blocks with busses, data directions, transformations, blocks and checks, etc.
    In other words, one could compare it to VHLLs like Haskell (functional programming) and Prolog (logical programming). The former describes the relationship between the input and output as a series of equations, and the latter uses predicate calculus (sort of) to describe a world in which the program is true. Both of these are text-based languages. The real difference is that in traditional languages, there is no real distinction between representation and substance: what you see is what the compiler sees. In this new language, this distinction exists, and basically allows you to represent a program as whatever you want. The best analogy I can think of is the document-view pattern/architecture frequently used in OO programming, if that makes sense.

    There's no reason why you can't use the kind of diagrams you describe, provided they encapsulate the underlying semantics of the language. I think that's actually a pretty cool idea; it certainly makes conceptualizing a program's behaviour and function much easier.

    It's probably a good start towards mission critical code style, in which there needs to be correctness, validation and verification built into the language in the first place.
    Haskell certainly allowed for this kind of formal analysis (for instance, one could prove equivalence between two programs, although I sucked at it and can't say how applicable it is for correctness proofs and other verification strategies).
  • Nice buzzword, but think of it. Human programmers still need to look at something and tell the difference between two programs. Maybe it'll be "represented" in a different way. Ultimately, there will have to be something to describe what a program does - that something could be symbols, images, lines, whatever. It's still a representation.

    They could call it a "different representation" language, but that would ruin the image. :)

    In any case, if there's a page describing the different ways Java, C++ and Eidola handle classes and methods, it can't be all that revolutionary.

    w/m
  • Visual stuff, GUIs, languages etc, are more *intuitive* in the same way that picture books are more intuitive to babies. It doesn't mean they are superior. This is why the best interfaces are a combination of GUI and language.

    No, the best interfaces are invisible.

    "Computer. Earl Grey. Hot."

    I'm only half kidding.
  • As per vorbis, I've switched.

    Sure, all the mp3s I get from friends are, well, mp3s. But everything I encode is vorbis because, well, the differences are small enough that it costs me *nothing* to use them. It's open source, it's free, it's better quality, etc. So all the music from all my CDs are Vorbis.

    So to answer your question, this does have some benefits.

    This is *not* a fundamentally new type of language. I think if you've seen predicate calculus, cellular automata, and lexical parsing in the same class, you'll find a lot of this familiar.

    What this is doing is mapping predicate logic/calculus (I think, I sucked at predicate calculus) with structured programming. It's also very different too.

    But it starts to think of programs in a level even higher than text, if that makes sense. I could almost think it's trying to treat the source code as annotation and description of a program, and not the actual implementation.

    Hmm, analogy...

    It's different than 'visual' thinking, in which you have functional blocks with busses, data directions, transformations, blocks and checks, etc.

    Eidola almost certainly forces a different way of thinking, but I don't think it's terribly foreign. In this case, the 'language' cannot exist outside of its runtime, or context. In this way, it's similar to Lisp, or Scheme (I think). Conceivably one could write an Eidola program in a regular text file and 'open'/'run'/'compile' it with the kernel, and be able to validate it in some sense.

    Argh, I feel tongue-tied. It should allow for very high level structured programming in terms of thinking at an abstract object level. Things are defined in a very predicate calculus way, with the textual representation almost becoming documentation for the structure defined by the calculus. This may have more functional similarities to Lisp than I thought, but it's been a while since I've played with Lisp.

    It's probably a good start towards mission critical code style, in which there needs to be correctness, validation and verification built into the language in the first place.

    Anyone want to help me out here?

    Geek dating! [bunnyhop.com]
  • Graph oriented? I dunno.

    It doesn't seem to me a new language, as perhaps a new flavor of an older idea, like Lisp?

    Use some predicate calculus notation to start describing and defining functionality, as well as for allowing one to show correctness and validation.

    Map it closer to today's object paradigms without the constraints of matching 'objects' as defined as nouns with verb-methods, instead using a more abstract concept of packages, super-elements, and sub-elements, and algorithms. Given that they haven't finished documenting algorithms, I'm out on a limb here.

    The goal would seem to have something closer to predicate calculus, and thus something easier to hold provably correct and functionally correct.

    Instead of the many to one mapping of source to algorithm, it should be closer to one to one, I guess.

    Of course, I was never very good at predicate calculus, and maybe everything I've said is obvious, and I'm being stupid.

    Geek dating! [bunnyhop.com]
  • Finding an example, any example, of some sort of code took forever. But here is something! [eidola.org]

    It seems to be an implementation of this Eidola language in Java and gives a very brief example of what appears to be Eidola programming or whatever passes for it.

    Here is a snippet of the example from the page:

    Here's a simple example to get you started:

    new Class c
    new Variable v
    v setType c
    v setSuperElement c
    c addPublicMember c.v

    new Class d
    d addParent c
    d addPublicMember c.v

  • I suppose it does have some flavor of the encapsulated aspect of XML.

    XML is like:

    <mydoc>
    <toc>
    </toc>
    <instructions>
    </instructions>
    <index>
    </index>
    </mydoc>

    which allows you to define a container "mydoc" that contains further more specific containers like "toc","instructions" and "index". The Eidola code looks like you can dynamically extend a variable to create an arbitrarily complex data structure. So like with XML, you can keep adding more specialized containers to the root.

    It could be a cool concept, but the web site makes it extremely difficult to figure out what the subject matter is all about.

    No code == No understanding.

    "sweet dreams are made of thisss..."

  • That would be a major pain in the butt. Personally, I like the exactness in programming and how it forces you to express your creativity in a completely logical way. Can you imagine a computer that would always try to do what it thought you wanted instead of what you said?

    Ick. Not nearly enough control or exactness in that.

    Nobody talks or thinks like this...

    Actually, I think like that, and so do plenty of other people. I've not only found it very convenient to do so while programming, but also in solving everyday problems. Every field has its own unique language and ways of thinking. Why should the field in which we command machines be any different?
  • Eidola almost certainly forces a different way of thinking, but I don't think it's terribly foreign. In this case, the 'language' cannot exist outside of its runtime, or context. In this way, it's similar to Lisp, or Scheme (I think). Conceivably one could write an Eidola program in a regular text file and 'open'/'run'/'compile' it with the kernel, and be able to validate it in some sense.

    That was my thought on playing with the test interface. People will make comparisons to visual basic or code generators, but I think tcl or scheme is a more apt comparison.

    IMHO the "visual" part of this project is intended to make the point that "code" is not the "executable", and that the way we create and build (and sometimes run) programs need not be based on text. If visual modeling tools were not already so popular they might have advocated sound instead.

    As an aside, their site is the first time I've run across javadoc. Very, very, very cool. Might be the thing that finally convinces me to learn java.

    --

  • Sounds like that time I tried to learn Perl. "Ok, with this language, it does what it thinks you meant to say."
  • I'd like to see this one happen. It looks like they've got a kernel in development, but the actual symbolic creation of a program is in the future. It also appears to be one more step towards representing things closer to the way we do in real life. If you could actually pick out a common everyday "object" and just use it, that would be very cool. Especially if that object has already been proven.
    Voila, instant cash register :)
  • by Anonymous Coward on Friday February 09, 2001 @06:24AM (#444639)
    Those who don't know Lisp are doomed to reinvent it, badly.

    Take a look at the Common Lisp Hyperspec [xanalys.com], in particular, the descriptions of what the Lisp Reader does - It parses the textual input stream into Lisp objects [xanalys.com].

    Since the Reader is modifiable (i.e., you can remodel it to take in program representation in any way you want and convert it into Lisp objects), and there is no constraint on how Lisp objects are represented internally in a conforming implementation, it follows that Common Lisp is already a "representation independent, object oriented programming language." We get the "object oriented" part because the Common Lisp Object System (CLOS) is part of the standard, and part of all current implementations.

    Seems to me these people would have spent their time better by writing some reader-macros for Common Lisp than reinventing the wheel.
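    To make the Reader point concrete: the whole trick is that text is parsed into objects once, and everything downstream works on the objects, not the characters. Here's a minimal s-expression reader sketched in Python (Common Lisp's actual Reader is vastly richer, and readtable-driven):

```python
# Minimal s-expression reader: turns text into nested lists/atoms,
# after which "the program" is data, not characters. This is the
# property that makes Lisp largely representation-independent.

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(read(tokens))
        tokens.pop(0)  # drop the closing ")"
        return form
    return int(tok) if tok.lstrip("-").isdigit() else tok

print(read(tokenize("(+ 1 (* 2 3))")))  # ['+', 1, ['*', 2, 3]]
```

    Swap out tokenize/read and the surface syntax changes completely, while the object representation (and everything built on it) stays the same.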

  • by jbuhler ( 489 ) on Friday February 09, 2001 @12:01AM (#444640) Homepage
    I'm having difficulty seeing the novelty of Eidola. It's certainly not the first language to have a basis in a formal semantic calculus -- the lambda calculus for functional languages goes back to the 1930's. Eidola is also not the first to aim at a rich yet provably correct type system.

    Standard ML is a language whose type system is formally justified. IIRC, the type system was designed to permit proofs that every program which type-checks successfully satisfies certain correctness and safety properties, e.g. it never accesses a value of one type as if it had another, incompatible type (*). ML has a very competent, freely available implementation (Standard ML of New Jersey) and has been used to write, among other things, a TCP/IP stack, a web server, and its own highly optimizing compiler.

    I don't know if anyone has yet tried to reproduce this level of formal justification for the type system of an OO language. ML has polymorphism and type signatures, but I don't think it has any notion of an inheritance hierarchy. If Eidola is the first language to bring such formalism to OO, that would certainly be a nice contribution.

    (*) Note that the proof doesn't go in the other direction, i.e., "every correct program type-checks," because useful notions of correctness are in general undecidable. The ML type-checking rules are provably sound but not complete.
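
    That soundness/completeness asymmetry is easy to see with a toy checker. A hypothetical sketch in Python: the checker rejects any conditional whose branches have different types, even when the condition is the literal True and the program could never misbehave at runtime -- sound, but not complete:

```python
# Toy type checker that is sound but not complete: it rejects some
# programs that would never hit a type error at runtime.
# Expressions are tuples: ('lit', value) or ('if', cond, then, else).

def typecheck(expr):
    kind = expr[0]
    if kind == "lit":
        return type(expr[1]).__name__
    if kind == "if":
        _, cond, then, other = expr
        if typecheck(cond) != "bool":
            raise TypeError("condition must be bool")
        t1, t2 = typecheck(then), typecheck(other)
        if t1 != t2:
            # Rejected even if cond is literally True and the other
            # branch is dead code -- incompleteness in action.
            raise TypeError(f"branch types differ: {t1} vs {t2}")
        return t1
    raise ValueError(f"unknown form: {kind}")

# This program always evaluates to 1, yet the checker rejects it:
prog = ("if", ("lit", True), ("lit", 1), ("lit", "oops"))
try:
    typecheck(prog)
except TypeError as e:
    print("rejected:", e)
```

    Real ML checkers are vastly more sophisticated, but they make the same trade: reject some safe programs in exchange for a guarantee that accepted programs never misuse a value's type.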
  • by crovira ( 10242 ) on Friday February 09, 2001 @05:36AM (#444641) Homepage
    If properly modeled, any NP-complete computing problem can be reduced to a topology network (even behavior can be seen as transitions in the temporal dimension).

    There are two ways of handling n-dimensional presentation:

    1) Untangle and handle one dimension at a time. While this allows for textual presentation, this approach suffers from the difficulty of "getting the 'big' picture." This leads to reams of source code of dubious quality.

    2) Handle the dimensions three or four at a time and construct geodesics or geodesic transformation "movies" to present the problem space. The geodesics put objects at the terminal points of relationships.

    The second approach can be entirely dynamic and generated from provably correct definitions of the objects and of the relationships in the problem space.

    The object definitions can go as far as referring to specific instances of the objects, but disambiguation becomes yet another dimension which might be more effectively handled by refactoring and subclassing.

    There is still the problem of selection: some dimensions will reveal very little of the problem space, being limited, disconnected, and/or monotonic.

    The creation of an interactive "3D tank" presentation using data gloves to select and manipulate objects and relationships and the selection of dimensions themselves would be great for presenting problems for discussion and solutions for correctness of fit.

    Still, this would make for a promising area of research.
  • by Jerf ( 17166 ) on Friday February 09, 2001 @09:40AM (#444642) Journal
    It is impossible for you to conjure up "infinity" in your mind, because that would take forever.

    Really? I can. Didn't take me forever, either.

    I suppose you'll next claim that "big numbers like 'trillion' don't exist" because we can't conjure those up in our minds either? (Might not take you forever if you insist on counting each number but it will exceed your lifespan.)

    The concept of doing anything which is infinite is completely absurd!

    Not gotten very far in Physics, have you? Particle physics without the mathematics of infinity is, to borrow your word, "absurd".

    Nice troll!

  • by Animats ( 122034 ) on Friday February 09, 2001 @10:41AM (#444643) Homepage
    The Eidola papers are rather arrogant for the amount of work he's actually done. It looks like he's reinventing Smalltalk, but it's too early to tell.

    Non-text programming has been around for a while, Visual Basic and Prograph being the best-known examples. There's a lot to be said for doing the graphical parts graphically, instead of writing text to describe window and widget layouts. But graphical descriptions of control flow and data are just too bulky.

    One real problem we have today is that the mainstream languages are too hard to parse. C++ cannot be parsed without reading the include files first, and Perl doesn't even have a clear syntax definition. This retards the creation of tools that process program text. ("Indent" for example, has never been upgraded to handle C++. Nor is there a program that turns Perl programs into canonical Perl, with no elision.)

    LISP was at the other extreme, being very easy to parse, and thus there were lots of useful tools for doing things to LISP programs. One editor had the ability to select a block of code and have it extracted as a function, with all the variable references and arguments adjusted so that this worked every time. Programmers today can only dream about tools like that.

    Bad syntax has nothing to do with language power. It's a cruft problem, coming from adding stuff to a language. Remember, C started out without real types; adding user-defined typenames (typedef) to the syntax broke the ability to parse modules without seeing all the type definitions. C++ made it even worse. The Pascal/Modula/Ada family of languages, on the other hand, doesn't have this problem. var foo: typebar; is parseable without knowing anything about typebar, while the C/C++ equivalent, typebar foo; is not.
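
    The classic demonstration of this is the statement "a * b;", which a C parser cannot classify without knowing whether "a" has been typedef'd: it is either a multiplication expression or a declaration of b as pointer-to-a. A toy Python sketch of the symbol-table lookup (the so-called "lexer hack") that a C parser is forced to do:

```python
# Why a C parser needs the type definitions first: "a * b;" is a
# multiplication expression unless 'a' names a type, in which case
# it declares b as a pointer to a. Toy sketch of the "lexer hack".

def classify(stmt, typedefs):
    """Decide how a C parser must read a statement of form 'X * Y;'."""
    first = stmt.split()[0]
    if first in typedefs:
        return "declaration"   # e.g. typebar * foo;
    return "expression"        # e.g. a * b  (multiplication)

typedefs = set()
print(classify("a * b;", typedefs))   # expression
typedefs.add("a")                     # after: typedef int a;
print(classify("a * b;", typedefs))   # declaration
```

    In Pascal-family syntax the keyword (var) and the colon disambiguate up front, so no such feedback from the symbol table into the parser is needed.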

    On the other hand, language syntax doesn't really matter much any more. Library semantics consume a much larger fraction of programmer time.

  • by aliebrah ( 135162 ) on Thursday February 08, 2001 @09:50PM (#444644) Homepage
    While this is a great idea, I wonder how many people will be willing to make the change. When moving to a fundamentally new type of language there must be some real big advantages over the current status quo. For example, Vorbis is better than MP3 ... no question. But it's not *much* better, so no one switches.

    It's important that this really is a big leap forward, and that this is made clear to developers before they'll even think about using this kind of code.

    It'll be hard for this to go mainstream.
  • by mpak ( 247326 ) on Thursday February 08, 2001 @10:14PM (#444645)
    "Eidola is a representation-independent, object-oriented, visual programming language."

    written in Java... :-)
  • by woggo ( 11781 ) on Thursday February 08, 2001 @10:38PM (#444646) Journal
    I've yet to read the semantics document, so I won't comment on the language itself. However, the idea of multiple representations for a single program is a good idea both from a philosophical standpoint (a la Wittgenstein -- "to imagine a language is to imagine a form of life" -- so allow different forms of expressing a program), and from an engineering standpoint. In the latter arena, Eidola mimics the venerable ancestor of Scheme and Pascal, Algol 68, which allowed users to construct their own grammar for the language's constructs (an early attempt at i18n!); also, it could allow Literate Programming with great facility.
  • by joss ( 1346 ) on Friday February 09, 2001 @03:15AM (#444647) Homepage
    > Maybe I'm just old fashioned . . .
    not at all.

    They are the old-fashioned ones. Hey, language is inefficient... let's go back to scratching pictures in the dirt or hieroglyphics on the wall in order to communicate.

    This visual programming crap crops up from time to time because so many people are brainwashed by that crap about a picture being worth a 1000 words. Draw me a picture of "misguided".

    Programming is done with languages because programming is communication. It's communication between programmer and computer.

    This is also the reason why the GUI monkeys can never understand the power of a command line. The command line is a language. They are stuck on the "pictures are better than words" meme. Yeah, true, they are... until you learn to read, that is.

    Visual stuff, GUIs, languages etc, are more *intuitive* in the same way that picture books are more intuitive to babies. It doesn't mean they are superior. This is why the best interfaces are a combination of GUI and language. It's just like the way you give children picture books while they are learning to read.
  • by KevinMS ( 209602 ) on Thursday February 08, 2001 @09:48PM (#444648)

    I thought it was discovered that that person in Canada didn't have Eidola?
