True Visual Programming (121 comments)

eberta writes "We are still stuck with text programming for the most part. I can think of only a few truly visual programming environments like LabView, and none are really mainstream for application development. Being recently unemployed and having ample spare time, I have started a pet project to work on my own version, GIPSpin (Graphical Interface Programming System). With multithreading becoming an increasing issue in software development, I'm wondering why there hasn't been more focus on visual programming. I see so much possibility of making coding easier and handling threading issues semi-automatically by allowing the user to graphically guide the auto-threading AI. Right now the industry seems focused on figuring out how to get just small chunks of code auto-threaded, either through hardware or compiler technology, while longer-term solutions like OpenMP are still text-based environments."
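For readers who haven't seen it, here is a minimal sketch of the kind of text-based auto-threading the submission refers to: in OpenMP the parallelism is expressed as annotations on ordinary code rather than graphically. (Illustrative only; the array and variable names are made up. Compile with e.g. gcc -fopenmp -std=c99.)

    #include <stdio.h>

    int main(void)
    {
        enum { N = 1000000 };
        static double data[N];
        double sum = 0.0;

        /* The compiler splits the iterations across threads; the reduction
         * clause tells it how to combine the per-thread partial sums.
         * Without -fopenmp the pragma is ignored and the loop runs serially. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            data[i] = (double)i * 0.5;
            sum += data[i];
        }

        printf("sum = %f\n", sum);
        return 0;
    }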
  • by bitsformoney ( 514101 ) on Monday March 21, 2005 @09:46AM (#11998073)
    ... check what is/has been out there before. I.e. something called ProGraph has been mentioned on /. [google.com] a couple of times before.
  • It's inefficient (Score:4, Insightful)

    by 0x461FAB0BD7D2 ( 812236 ) on Monday March 21, 2005 @09:51AM (#11998106) Journal
    There are ways of visual programming right now, such as WYSIWYG HTML editors (Dreamweaver, Nvu, Frontpage) and Visual Basic.

    However, I can't see this happening for Perl, PHP, C, Java, etc. Everyone has their own style of coding with their own ideas, many of which are abstract and cannot be effectively visualized. Making an IDE that effectively deals with all the quirks of programming would be quite a feat, and it would end up so bulky that text-based programming would remain more efficient.

    There might be a place for visual programming in rapid application development or some simple programs/scripts like HTML pages and the like. Beyond that, I'd doubt it.
    • Re:It's inefficient (Score:3, Informative)

      by drxenos ( 573895 )
      I don't think you understand what "visual programming" is. None of the examples you cite are real visual programming. Visual Basic, et al., are not true visual programming. They are IDEs to help in building software. Visual programming requires no (or almost no) coding. It is all graphical. One example (I forget the system) involved using black boxes. The boxes had particular functions (e.g., a signal processor). The boxes had connections for their inputs and outputs, and properties to modify their b
      • Comment removed based on user account deletion
        • I wasn't defending it, just trying to explain it to the OP.
        • by Anonymous Coward
          Speaking as a hardware guy: I used to do schematic design (essentially, visual hardware programming) but switched to VHDL and Verilog as soon as they became practical. Schematics only work well for small projects. Above 10K gates, the number of interconnects between the boxes in your design gets cumbersome.

          An example where I've seen it work is in setting up video editing. I don't know the name of the system, but it had boxes with parameters that could be adjusted using a GUI. It seemed very intuitive.
        • Not that I am a fan of Microsoft, but BizTalk 2004 is a visual programming language/environment. Yes, it compiles down to IL via C# generated internally, but you don't have to write code for many basic functions. It even supports threading, and will generate code for transforming XML from one format to another. It is limited in what it will deal with and definitely isn't a general-purpose language, but you can accomplish real work with it if you are trying to do integration work.

          In the case of
      • ... so you do the analysis by talking to people and writing about it. You specify the requirements and the logic flow in writing. Then you want to translate these linear constructs into a visual medium? It seems woefully inefficient to me and prone to error. It also is not the way people think. I consider myself a very visual person, but any visual reasoning I perform is not based on logic but on a higher heuristic level, or simply on plain simulation.

    • ...such as WYSIWYG HTML editors

      Ummm, writing HTML is **NOT** programming. There's a reason it's called "HTML" instead of "HTPL". Sheesh.
      • Ummm, writing HTML is **NOT** programming. There's a reason it's called "HTML" instead of "HTPL". Sheesh



        Ummm, laying out a window in Visual Studio or Eclipse isn't programming either, but you're not going to get a hell of a lot accomplished if you're writing an end-user application but you're too "elite" to trifle with a UI. Sheesh.
    • For basic languages like C, I feel that the visual way to do things cannot be complete unless you create some sort of code generator and trust the tool to create all the code for you. There are some tools for Windows that do just that. All you have to worry about is defining the relationships of the data, designing reports and so on. You don't have to touch code at all. However, I leave here a question: what if the tool has a bug? As a programmer from the last decade, I really don't trust such tools at all. I don
  • by p3d0 ( 42270 ) on Monday March 21, 2005 @09:51AM (#11998107)
    Here's why... Why don't you reply to this note and show me a sample code snippet from your visual language?

    Check out the Aardappel [fov120.com] language. I think it's a very interesting, powerful language, with one major flaw: it's a visual language. This makes it so awkward to deal with that even Aardappel's own inventor broke down and created a textual equivalent language for the sample code in his PhD thesis.

    Text is better than pictures for describing anything complex. We have thousands of years of experience to back this up.

    • I agree with parent; I learned Japanese by spending two years in the Tokyo area in addition to doing a minor in it in college. I feel that the pictographic nature of the kanji is one of the factors that makes Japanese one of the most difficult languages to learn. You have to spend far more time mastering the symbols of the language than you do in, say, English. Granted, English spelling is convoluted, but there are only 26 letters, whereas in Japanese and Chinese there are literally thousands of differen
    • by Anonymous Coward
      "Text is better than pictures for describing anything complex."

      I would say that is false. On the contrary, once a good visual representation is found, it is far more productive. "One picture is worth 1000 words" is absolutely true, just look at the percentage of people using pure text OSes, or apps.

      IMHO, until now, no one has found a way to translate programming structures into sensible graphics. Certainly a one-to-one translation of "ifs" into If blocklets is senseless; in old HTML apps like HomeSite
      • "One picture is worth 1000 words" is absolutely true...
        My point exactly. What happens when you need to communicate 10,000,000 words?
      • On the contrary, once a good visual representation is found, it is far more productive. "One picture is worth 1000 words" is absolutely true

        Then draw me a picture of the Gettysburg Address. Or, heck, Basho's famous frog haiku; that should only take about 1/100th of a picture, right?

        There's a reason why people gave up on flow charts long ago (and why I look forward to UML ending up in the same bin soon). While program structure can sometimes be usefully shown in diagrams ("this module interacts wit

    • "Text is better than pictures for describing anything complex. We have thousands of years of experience to back this up."

      You mean things like aircraft, spacecraft, buildings, or any number of things that you would use something like a blueprint or map to describe?
      I do agree that I have yet to see a good visual programming language, but to say that text is better than pictures for describing anything complex is incorrect. It really depends on what you are describing. A set of blueprints for a home beats writ
      • Excellent points. Blueprints and maps may have a lot to teach the visual programming folks. They seem to want to base their languages on only slightly modified flow charts (in which group I would include pretty much all UML diagrams).
      • But those are all extremely limited domains which define either static relationships or relatively simple relationships. How many of your aircraft, spacecraft, and buildings contain a truckload of electronics which then contain firmware or software?

        What you describe is equivalent to something like UML in the software world -- close but no cigar.
    • Surely Befunge [quadium.net] proves that visual languages can be a usable alternative to textual ones :)
      vv < <
      2
      ^ v<
      v1<?>3v4
      ^ ^
      > >?> ?>5^
      v v
      v9<?>7v6
      v v<
      8
      . > > ^
      ^<
    • Text is better than pictures for describing anything complex. We have thousands of years of experience to back this up.

      Wow. That's quite a generalization.

      Most engineers born during the last few thousand years would disagree with you. Most (all?) structures, machines, circuits and other engineering constructs are described via pictures. Implementation follows directly from these with no intermediate "text". Recently various tools have had to "serialize" this data for the benefit of machines, but the r
      • Most (all?) structures, machines, circuits and other engineering constructs are described via pictures. Implementation follows directly from these with no intermediate "text". Recently various tools have had to "serialize" this data for the benefit of machines, but the reference document is the image, not the intermediate.

        Are you sure? I'm not in the hardware field, but I seem to recall that very large-scale circuit design is being done increasingly using textual languages like VHDL.

        GIPSpin, Aard

    • Here's why... Why don't you reply to this note and show me a sample code snippet from your visual language?

      The representation approach does not necessarily have to be mutually-exclusive. If we can find a way to divorce presentation from meaning, then the same thing could have both a visual and a textual representation.

      This would have the added benefit of allowing different people to have custom representations. This way we can see it in a format most comfortable for us individually.

      For example, I once
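      As a rough sketch of the "divorce presentation from meaning" idea above: keep one intermediate representation and render it either as text or as boxes. A minimal example in C; all type and function names here are invented for illustration.

      #include <stdio.h>

      /* Hypothetical shared representation: one expression tree, two renderers. */
      typedef enum { NODE_CONST, NODE_ADD, NODE_MUL } NodeKind;

      typedef struct Node {
          NodeKind kind;
          double value;               /* used when kind == NODE_CONST */
          struct Node *left, *right;  /* used by the operator nodes   */
      } Node;

      /* Textual view: print the tree as an ordinary infix expression. */
      static void render_text(const Node *n)
      {
          if (n->kind == NODE_CONST) {
              printf("%g", n->value);
              return;
          }
          printf("(");
          render_text(n->left);
          printf(n->kind == NODE_ADD ? " + " : " * ");
          render_text(n->right);
          printf(")");
      }

      /* "Visual" view: the same tree drawn as an indented box diagram. */
      static void render_boxes(const Node *n, int depth)
      {
          for (int i = 0; i < depth; i++)
              printf("  ");
          if (n->kind == NODE_CONST) {
              printf("[const %g]\n", n->value);
              return;
          }
          printf("[%s]\n", n->kind == NODE_ADD ? "add" : "mul");
          render_boxes(n->left, depth + 1);
          render_boxes(n->right, depth + 1);
      }

      int main(void)
      {
          Node two = { NODE_CONST, 2.0, NULL, NULL };
          Node three = { NODE_CONST, 3.0, NULL, NULL };
          Node four = { NODE_CONST, 4.0, NULL, NULL };
          Node sum = { NODE_ADD, 0.0, &two, &three };
          Node prod = { NODE_MUL, 0.0, &sum, &four };

          render_text(&prod);     /* prints ((2 + 3) * 4) */
          printf("\n");
          render_boxes(&prod, 0); /* same tree, drawn as nested boxes */
          return 0;
      }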
  • by saintlupus ( 227599 ) on Monday March 21, 2005 @09:52AM (#11998124)
    Here's a visual programming environment called SIVIL that one of the professors here at Canisius has been working on for a while.

    http://www-cs.canisius.edu/~meyer/VP/home.html [canisius.edu]

    --saint
  • Because... (Score:4, Insightful)

    by Anonymous Coward on Monday March 21, 2005 @09:55AM (#11998144)
    I'm wondering why there hasn't been more focus on visual programming

    Because in the 1980s people realized that visual programming was a dead-end idea. Trust me, one of the products I work with lets you program "workflows" using a Visio-like toolkit. It's the most unproductive thing I've ever had to use.

    Seriously, researchers have beaten this one to death already. Even as far back as The Mythical Man Month it was already recognized that graphical/visual programming would not give us any improvements on efficiency.

    I guess the old adage of "reinventing Lisp, badly" still holds true. "Computer Science" sure seems to ignore a lot of its past research.

  • by drspliff ( 652992 ) on Monday March 21, 2005 @09:58AM (#11998167)

    Visual Programming has always been a sort of industry goal; every product you see out there is trying to make it easier on the developer, be it UML or other modelling software, iMatix's Libero or another code generation framework.

    One of the reasons I think Visual Programming won't catch on for a long time, or will require a serious innovative leap, is that with existing solutions the developer loses too much control over the path of execution, optimizations, memory management and all the other lower-level stuff we developers have to tinker with.

    There have been numerous frameworks which have tried to bridge the gap, one that sticks in my mind is SilverStream's/Novell's exteNd Composer/Director. They follow the basic roles of a point-and-click programming environment, flow layout, assisted statement creation etc.

    But there is only so much you can do before you end up just writing solid code again. I don't want to sound like an elitist, but personally I think all these high-level visual programming environments will lead to 'Joe Bloggs' developing your [name critical financial or business application here].

    And not to mention the thousands of zealots out there who you'll have to bring kicking and screaming into the new 'visual' era.

    Rant or rave, new developments in this area can be a great aid to experienced developers, but in the wrong hands they can cause more harm than good (Visual Basic anybody?)

    Moderate: -5 Zealot bait

    • Back in '93 or so I was the UI designer on a visual programming version of VB that Lotus developed (and later sold to Revelation software). We had "links" between UI elements and code snippets so that you could build basic VB apps without actually breaking out the text editor. It worked pretty well as an accelerator for basic application development, but eventually you ended up writing text-based code. It just saved you time getting started.

      I liked the idea so much I went out and built a game around it --

    • Rant or rave, new developments in this area can be a great aid to experienced developers, but in the wrong hands they can cause more harm than good (Visual Basic anybody?)

      You're barking (trolling?) up the wrong tree. The "Visual" in VB's name purely referred to the ability to lay out your application's Windows user interface visually. That was a fairly new thing back in the VB 1.0 days and was a fantastic time-saver. Other products had rudimentary equivalents but VB's was pretty stunningly intuitive at th
  • We think in Language (Score:3, Interesting)

    by sepluv ( 641107 ) <blakesley&gmail,com> on Monday March 21, 2005 @10:00AM (#11998179)
    Humans think internally about the world outside using language (hence multi-lingual people being faster thinkers, as they have more paths to pursue at once). Ask a psychologist.

    We do not think using a poor graphical representation of the Real World. Given this, text is the best way of representing stuff on a computer (except where graphics are explicitly necessary).

    Now, when we get realistic VR systems that actually feel like RL, this may be a different matter (although source code would have to be represented as text at some level as both computers and humans think one-dimensionally with strings of text or numbers).

    At the moment all we have is slow 2D graphics on flickery, bright, flat screens. We have far to go.

    • by TuringTest ( 533084 ) on Monday March 21, 2005 @10:23AM (#11998398) Journal
      Humans think of the world using language, but we also think of the world using visual, spatial, temporal, sensorial... reasoning.

      Do ask a real psychologist; she will say that there are different kinds of thinking. Text is best suited for abstract, logical reasoning. But associative thinking is often better done visually. In the Programmer's Guide to the Mind [209.87.142.42] you have an interesting classification of all these.

      A programming environment should take care of all these kinds of thought, not just support the logic abstractions as they do now. A promising field of research is Programming By Example [mit.edu]. This programming style tries to build the final program by using concrete reasoning over samples of data [mit.edu], instead of forcing you to think of the general, abstract procedure.
    • I agree for the most part, but I would revise it to recognize that we do think about a lot of stuff visually (being able to do so is absolutely necessary for tool use, and I know I understand things better when I have pictures to work with.)

      However, the language is essential. It has been our primary mode of communication since time began, and trying to get too far away from it in any situation where you have to explain or describe complex ideas seems like folly to me. I think there is probably a reason w
      • Which has nothing to do with the fact that you were raised speaking a phonetic language? I mean, I totally agree with you that phonetic languages own. But then again I was raised on pseudo-English.
    • Humans think internally about the world outside using language

      I'd prove you wrong, but my ASCII diagram was blocked.

    • I suppose that's why all electrical engineers design incredibly complex circuitry completely in text?

      Sure they can and do these days with VHDL. But I'm really surprised that there are so many people chiming in claiming that you cannot show/design anything complex with a diagram.

      It's done all the time on the hardware side. Just because no one has done a great job at designing such a thing for software that has caught on, doesn't mean it cannot be done.

      Note that on the hardware side they also have the reus
  • by uradu ( 10768 ) on Monday March 21, 2005 @10:13AM (#11998295)
    Because, while on the surface it looks like a really neat thing, in the real world productivity and usability quickly take a nose dive. Visual tools are good for smaller graphs where you can keep most of the view onscreen at once (e.g. filters or rules like in email apps, or workflow graphs), but once the graph grows beyond a certain size it quickly becomes unmanageable. Besides, what's quicker, typing "for (int i = 0; i < max; i++)" or dragging an "if" element from a toolbar and dropping it on an empty area of the form, activating and filling out its fields, and connecting it to the rest of the program flow?
    • In the late 80s I used an experimental Pascal system called Genie Pascal (from CMU). It had a structured editor that eliminated syntax errors because you could only input syntactically correct programs. Woo hoo. Of course, syntax is one of the least of our worries in software development. Similarly, trying to represent program structure as pictures at the level of syntax is a mistake - it's optimizing the wrong problem. We need to think bigger than syntax if we're going to find a useful visual programming m
    • Besides, what's quicker, typing "for (int i = 0; i < max; i++)" or dragging an "if" element from a toolbar and dropping it on an empty area of the form, activating and filling out its fields, and connecting it to the rest of the program flow?

      Visual programming, or using models, or whatever name, will eventually come, but only when we are ready to jump to the next level - when we don't have to use ifs explicitly. Yes, modelling ifs and fors and whiles would be extremely unwieldy. But eventually we will

      • I doubt visual programming will ever significantly replace textual representation, at least in our lifetimes. As an adjunct--just another tool in the box--it can be very useful, but not more than that. Some aspects of programming lend themselves more naturally to textual representation, others to visual representation, and forcing each into the other paradigm only makes things awkward. Take the GUI and command line: some aspects of file manipulation are more quickly and naturally done using the mouse, other
  • I used to write IVR applications, interactive voice response. They're the automated phone systems that everyone loves to hate.

    While I had been doing it for ten years, it never failed that when there was a regime change within the company, some new high-level manager would read an ad in some random telephony journal and think he could magically reduce our IVR development time ten times over.

    Yep, every time it was one of those graphical languages. Look! Just connect the voice prompts to touch tone input no
    • I concur with the Parent post. It looks great and seems like a great idea until you actually try to use it.

      Parity VOS's Graphical Call Flow Charter [intel.com] (which was bought by Intel and ignored a while ago) is an example of this. Anything other than a very simple call flow ends up being many, many square meters of virtual screen real estate.

      I will be very very happy once we finally get away from it, and I'm looking forward to a simple perl scripting based replacement.

  • I like it! (Score:2, Informative)

    by WetCat ( 558132 )
    Also, for process control applications, an interesting thing is
    http://babelfish.altavista.com/babelfish/trurl_pagecontent?lp=ru_en&url=http%3A%2F%2Fwww.ipu.rssi.ru%2FLABS%2FLAB49%2Flab49rad.html [altavista.com]
    made by the Russian Institute of Control Sciences.
  • INPUT DEVICES (Score:4, Insightful)

    by ericandrade ( 686380 ) on Monday March 21, 2005 @10:22AM (#11998378)
    Computer interfaces suck.

    Programming languages are visual and depend on the computer screen or whatever the computer outputs. Changing the way it is arranged on the screen is just trying to make it cute.

    The major restraint right now on programming is INPUT.

    The mouse dates back 50 years,
    the keyboard's even older than that, and it's designed to slow down users! (both cause RSI and other crap, and are very slow and inaccurate)

    For example, where are dual cursors?
    There need to be more OS implementations and design to have superinterfaces. Do the long, tedious, well-planned-out programming that will accelerate future programming. And think out of the box: future programming will not be done with a keyboard and a mouse.

    http://sloan.stanford.edu/MouseSite/1968Demo.html

    Why can Stephen Hawking write speeches, scientific texts and do tons of complex things with a single thumb click?

    Where are the standards for new interface development?
    The OS developments to support the hardware?

    Screens are getting bigger, so why do I have to rely on a menu in the top left of the screen if I'm working in the bottom right of a 2nd screen?

    Design and manufacture of new technology is slowed down by the limited ways we have to transfer, collect, manage and create the complex data we have to deal with. F*ck Moore's law! With the keyboard and mouse, the bottleneck is between us and it. The idle time proves it.

    Computers are instruments. Instruments are always hard to master. They evolve. They're not supposed to only get cuter.

    Imagine a computer using 50% of its processing power to know what you want /to say/it to do... /rant
    • Re:INPUT DEVICES (Score:5, Insightful)

      by pavon ( 30274 ) on Monday March 21, 2005 @11:32AM (#11999174)
      The mouse dates back 50 years
      The steering wheel dates back 1000s of years, yet we still use it because it is an effective interface.

      the keyboard's even older than that, and it's designed to slow down users!
      No it wasn't - that's a myth. Besides, the best text entry devices, which are designed to be as efficient as possible, such as Dvorak (I use it myself) and chording, are only incrementally better than QWERTY - not the revolutionary jump you are looking for.

      For example, where are dual cursors?
      Dead on the research table where they belong. They never provided any significant improvement in performance, and even slowed users down due to having to stop and think about what they were doing - even after spending a significant time learning the system.

      Why can Stephen Hawking write speeches, scientific texts and do tons of complex things with a single thumb click?
      You can too - on a cell phone. They are smart ways to make the most of limited input capabilities; however, they are incredibly slow compared to the keyboard and mouse, and no one would choose to use one unless it was the only option available.

      With the keyboard and mouse, the bottleneck is between us and it. The idle time proves it.
      Yet much of the idle time is due to user thinking, not slow user input - you don't notice yourself being slow, because you are preoccupied, but you do notice when the computer slows you down even slightly. Honestly, ever since I learned to touch-type, I have been able to type as fast as I can think, and there isn't really any reason for me to want to input text faster. Once I get a Wacom tablet, I'll likely be able to say the same about the 2d/3d graphics work (err, play) that I do.

      Most of the instances where I notice the computer slowing me down - making me do more work than I ought to - are due to poor design and integration of the software, not the keyboard and mouse. For example, I had to start up an entirely separate program just to spell-check this post.

      Where are the standards for new interface development?
      I don't know - why haven't you come up with one?
      • For example, where are dual cursors?

        Dead on the research table where they belong. They never provided any significant improvement in performance, and even slowed users down due to having to stop and think about what they were doing - even after spending a significant time learning the system.

        I guess you've never played a video game where one thumb did the looking and the other did the moving. I can think of a ton of cool uses for dual cursors: place them on opposite sides of an app window and then "tu

        • Re:INPUT DEVICES (Score:3, Interesting)

          by RevAaron ( 125240 )
          Squeak Smalltalk [squeak.org] can do multiple cursors. Someone would have to do a bit of simple hackery for Windows/Linux to take two PS/2/USB mice and feed them to Squeak, or there are some easier (but more expensive) ways to do it with two Squeak processes. The usual use of two (or more) cursors in Squeak is for the remote access/collaboration software built in - you can have 5 people connected remotely, each with their own cursor with its own focus, as well as the person who is sat down in front of the computer. No
      • The steering wheel dates back 1000s of years, yet we still use it because it is an effective interface.

        IIRC, even some early steam-powered carriages used horses for steering before the steering wheel was invented. Are you sure you're not thinking of "reins"?
    • Check out GOMS and KLM modeling [umich.edu]. Look at the execution times for the primitive operators. As another poster said, the majority of the delay in a user interface is from thinking activities, such as visual search and method recall. And before you bash this stuff as only theoretical, try actually modeling a task you've already done, and then timing it while a user performs the task. You'll find that the numbers come pretty close.

      There's a good reason there haven't been any real changes in user input devices
    • For example, where are dual cursors?
      Instead of two cursors I'd rather have two hi-def cameras on the monitor, zoomed in on my eyes, constantly tracking exactly which screen pixel I am looking at from moment to moment. That would work essentially like an infinite number of instantly accessible cursors.
      • Not necessarily...

        Think of a pianist. He/she doesn't look at every key played. Using the eyes would slow things down.
        • Using the eyes would slow things down only if you create unnecessary loops (see the key, move the hand, find the key, confirm that it's the correct one, press it, look at the screen, check that the letter typed is correct, and so on).

          Using your eyes as a cursor would be nearly instant, because moving your eyes precisely is much faster and more accurate than moving a mouse pointer, and clicking can be as fast or a bit faster.

          Of course, a direct link to the brain would be even better...
  • I like it! (Score:3, Insightful)

    by ka9dgx ( 72702 ) on Monday March 21, 2005 @10:42AM (#11998571) Homepage Journal
    It fits in with some of my own views of the world [basicsoftware.com], with the concept of Rich Source, etc. As long as this is a two-way tool, and can be used to offer another view of the source, as opposed to being only graphical, you've got a great tool idea going here.

    Keep going, and don't let the nattering neybobs of negativism here at /. get you down.

    --Mike--

    • don't let the nattering neybobs of negativism here at /. get you down

      "nattering nabobs of negativism" [answers.com] -- Spiro Agnew

      Or did you mean "Non illegitimi /. carborundum?" (No, I don't know any Latin. Someone please correct.)
    • Re:I like it! (Score:5, Insightful)

      by Jerf ( 17166 ) on Monday March 21, 2005 @01:23PM (#12000853) Journal
      Keep going, and don't let the nattering neybobs of negativism here at /. get you down.

      You know, I understand what you're getting at here, but you need to understand where the "nattering naybobs" are coming from. We're not saying "We've never seen this before and we're sure it won't work, don't try, fingers stuck in my ear, blah blah blah!" We're saying "We've seen several implementations of this idea, they all suck in the same fundamental way, and we've never seen anybody with a functional solution to the suckage problem."

      You want to try your hand at it, be my guest. If you have taken the time to understand history, what has failed in the past, the problems encountered by others, and you still want to try, well, go for it. It's your time.

      But if you think, and I get this vibe from the original poster, "Hey, there's nobody who's tried this great idea, I think I'll start a project on it!"... you're fucked, plain and simple, and throwing a bit of water on that enthusiasm is a favor, right up there with "Your first programming project should not be a real-time strategy RPG." Encouragement is great and all, but you should not encourage someone who's just been mountaineering for a week to take on Everest. Discouragement can be a positive thing.

      At the very least, educate yourself before starting. There's a lot of failure in this field.

      Two of my "general principles" come into play here:
      1. "When a lot of other very smart people have tried to solve the problem, but you think you have the answer, you almost certainly don't, and you certainly don't if it seems 'simple' or 'obvious' to you." This fits that in spades; a lot of very smart people have tried this and failed.
      2. "There exists a lot of solutions that only work in demos, or worse yet, when you just 'imagine' it in your head, but crash and burn when faced with the real world. Anything works in a demo, or in your head, even the absolutely impossible. If you've never seen it outside of a demo, there's a reason." And that fits this whole "graphical programming" to a T; it's trivial to create an "ooo, ahh" demo with the execution flying around the screen, two or three 'if's and a "for". But there aren't very many functions, let alone programs, that work with two 'if's and a for loop. (Another thing that fits here is many interesting AI solutions in academia that are claimed to be generally powerful, with varying justification, but in five years of working with the solution they've only managed one very particular demo.)
      Go for the gusto, but if you just close your eyes and hum when people try to explain why it's a hard problem, you're dooming yourself from the get-go and wasting your time.

  • I remember reading PC Magazine years ago, 1996 maybe, and there was an advertisement for a visual programming tool.

    They had a screenshot and then the tagline "NOT ONE MORE DAMN LINE OF CODE, EVER!"

    Anyone know what it was?
  • in fact there was a visual programming product in the late '80s on the Mac that seemed pretty well implemented. I experimented with it, but found it really didn't do anything for productivity.

    The problem with the idea of visual programming is that it solves the wrong problem, and it solves that incompletely. It's just too cumbersome to represent everything as a picture; instead these approaches tend to try to solve the problem of medium scale organization. Stitching together bits of code into an algorit
    • In the ECE department here, we have a course for doing "computer control" using LabView. The idea is to take a desktop computer and use it to "interface to the outside world", to input from sensors and control a robot or a miniature pumping station or whatever. There is also an embedded systems course, but this is using a PC as an embedded system. If you are building a million of something, you want to go with some kind of embedded system, but if you are building a one-off (classical thing is instrumentin
      • LabView is great! Really, it is the best approach for abstracting hardware control, providing an easy GUI, and handling parallel loops.

        But it has a learning curve like any programming language. If you are paging through huge screens of code, you are doing something wrong -- most of that complexity should be moved into objects or function blocks.

        If you have some code (eg. number crunching) that is better written in text, then code it in C and link in the DLL.
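        For the curious, the "code it in C and link in the DLL" route is roughly this: export a plain C function from a shared library and call it from the diagram (in LabView, via the Call Library Function node). A minimal sketch; the build commands, export macro and function name below are illustrative, not an official template.

        /* crunch.c - build as a shared library, e.g.
         *   cl /LD crunch.c                               (Windows DLL)
         *   gcc -shared -fPIC -o libcrunch.so crunch.c    (Linux .so)
         */
        #ifdef _WIN32
        #define EXPORT __declspec(dllexport)
        #else
        #define EXPORT
        #endif

        /* Number crunching kept in text: return the mean of an array
         * whose pointer and length are wired in from the diagram. */
        EXPORT double mean_of(const double *samples, int count)
        {
            double total = 0.0;
            if (count <= 0)
                return 0.0;
            for (int i = 0; i < count; i++)
                total += samples[i];
            return total / count;
        }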
  • Check out Subtext [subtextual.org]. At the moment, it seems to still be in the conceptual stages. Be sure to watch the interesting Flash demonstration. (However, the fact that it needs a movie to properly demonstrate the concept is itself an indicator of how cumbersome it is to deal with visual languages.)
  • by ActionAL ( 260721 ) on Monday March 21, 2005 @11:50AM (#11999405)
    Believe me, we use an old Honeywell system at work called MeasureX which uses graphical programming. EVERYTHING IS BLOCKS AND LINES CONNECTING EACH OTHER. It is the most inefficient way to read code possible. The human brain can read text code insanely faster than it can decipher a huge picture with tons of blocks and lines strewn everywhere, with complex connections and pipes from page to page. It is the most horrible thing I have ever seen. And if you have one missing connection or pipe, goodness, good luck trying to debug it.
  • by Junks Jerzey ( 54586 ) on Monday March 21, 2005 @12:03PM (#11999583)
    The text-based UnrealScript has gone away, to be replaced by a fully-visual language, Kismet.
    • whats Kismet mean? you found this info while wardriving?
    • While UnrealEd3 does visualize much of the content creation process, including materials/shaders, physics constraints, particle systems, AI waypoints/hints, etc., it does not visualize the underlying logic. As a developer working on two Unreal projects, I'm assuming I would have heard this from Epic.

      For more on UnrealEd3 (and the engine itself) look at:
      http://www.unrealtechnology.com/html/technology/ue30.shtml

      Anm
  • by fenris_23 ( 634852 ) on Monday March 21, 2005 @12:20PM (#11999803)
    A programming language is a language used to communicate both to a machine and to other humans. Language features help us encapsulate, hide, and organize complexities so that communicating very complex ideas to a human or machine is more efficient and maintainable.

    Non-written languages do not provide the same depth and strength. For example, a CD of James Joyce's Ulysses is not as accessible and understandable as the same book in written form.
    Furthermore, how would you express those concepts visually? In my opinion, we developed our forms of written communication over the years because it is the most efficient and expressive.

    Take hieroglyphics for example. Everybody knows that the Egyptians used a written language of symbols referring to entire concepts rather than words. However, many people do not know that in every day practice, the Egyptians developed a linear form of the same language. Similarly, Asian cultures have adapted their languages to a linear form to use with computers because it is easier than adapting a computer to work with more complex symbols.

    Also consider the amount of complexity that can be expressed in a written (text) programming language. When you begin thinking about designing a visual language, you begin thinking about logic flow and control structures. However, you should begin at the most basic level. A programming language's lexicon has both closed and open classes. The keywords are closed, but the open class of identifiers is infinite. Furthermore, the idioms used to express these identifiers in various statements are also practically infinite with respect to designing a visual language. Statements can be combined into idioms that vary between languages, programmers, development teams, and application domains.

    Worse is the problem of side effects. Many programmers using languages such as C and C++ use side effects all of the time. How do you adequately express that in a visual language?

    UML is a visual language that has seen a massive amount of research and development. Much progress has been made but even the most die-hard UML designer still has to go down to the code-level to fix the various idioms they wish to express in the programming language that the UML cannot express.

    Ultimately, I think the biggest problem is the lexical and syntactic constraints. A textual programming language allows one to easily expand its lexicon as well as its syntactic forms. To do this visually, you would have to create a symbol set to handle each form. If you tried to implement this dynamically, like how a written language works, then you are really just developing a written language that uses pictures for words. In that case, you are wasting your time and may as well stick with text.
    • Worse is the problem of side effects. Many programmers using languages such as C and C++ use side effects all of the time. How do you adequately express that in a visual language?

      Actually, it'd be nice if the "language" eliminated the use of side effects. Find an explicit way of doing the same thing, and the people who have to look at your code later will thank you for it.
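      To make that concrete, a small contrived C sketch: the first version hides its effect in global state (hard to show as a box with wires), the second makes the effect part of the interface. All names here are invented for illustration.

      #include <stdio.h>

      /* Implicit: nothing at the call site tells you that global state
       * changes, which is exactly what is hard to draw in a diagram. */
      static int error_count = 0;

      static int parse_digit_implicit(const char *text)
      {
          if (text == NULL || text[0] == '\0') {
              error_count++;          /* hidden side effect */
              return 0;
          }
          return text[0] - '0';
      }

      /* Explicit: the effect is part of the signature, so both a textual
       * and a visual representation can show the extra output. */
      static int parse_digit_explicit(const char *text, int *errors)
      {
          if (text == NULL || text[0] == '\0') {
              (*errors)++;
              return 0;
          }
          return text[0] - '0';
      }

      int main(void)
      {
          int errors = 0;
          parse_digit_implicit("");           /* bumps error_count silently */
          parse_digit_explicit("", &errors);  /* effect visible at the call */
          printf("hidden=%d explicit=%d\n", error_count, errors);
          return 0;
      }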

    • I think the parent poster has a lot of important points. But one thing I haven't seen mentioned in this entire thread is that each type of programming/thinking/communicating has its place. While visual layout of complex program logic is not a good way to program, a written procedural language is probably not the best way to create a GUI. We should be using descriptive languages (whether visual or text) for GUIs and such.

      For example, a typical how-to book is mostly text, but it also includes diagrams of how to put things together, and pictures of the completed project. All of these are essential to the book. Another good example is teaching. Teachers employ multiple methods of teaching, as different people learn differently, and different topics lend themselves to different styles. Some people learn best by doing; others by watching; and others by reading.

      I think the right thing to do regarding visual programming is to relegate it to where it works best. I suspect that this would be mostly related to things that are already visual -- i.e. graphical (GUI) layouts, etc. I think it's also useful to be able to visually represent program code "in the large" in a graphical manner, to see the inter-relations, but I don't think manipulating that type of graph is useful.

      I remember the old NeXT Interface Builder. You would take components, and connect them together. You'd have to write code if you were doing anything complex, but you could set up the GUI, including callback functions, visually. I think it's still around in Mac OS X. Things like GLADE work similarly. I'd like to see these types of tools used more.
      • I completely agree about a hybrid approach. The best example is in GUI builders. Take Java for instance. While writing a Swing interface out by hand is probably the clearest and most robust method, and there are plenty of good Java programmers who prefer writing GUIs out by hand, there obviously was great demand for Java GUI builders that caused the development of so many GUI solutions. I think the key point though is that despite GUI tools making things easier, many really good Java programmers I know of s
    • Everybody knows that the Egyptians used a written language of symbols referring to entire concepts rather than words. However, many people do not know that in every day practice, the Egyptians developed a linear form of the same language.

      Not quite right. The alphabetical form was not just used in "every day" practice, but also in official documents, inscriptions, etc. The alphabetical form developed out of a very early pictogram system, and in fact still used a small number of pictograms. To my knowledg

    • I'd argue that C and C++ both have a problem when trying to do anything that requires a notion of time. Using side effects is the worst way to represent time because you are using an implicit method (an artifact of the implementation) rather than an explicit method ("wait until this happens and do this") to express the logic of your system. There have been numerous research papers (see Ed Lee and others) written about how programming language research focuses too much on solving problems that only deal wi
  • Toontalk http://www.toontalk.com/ [toontalk.com] is still one of the most interesting visual languages out there.

    You program by controlling a character who can move around a landscape with his or her tool set. Using the tools, you teach robots (an analog of a subroutine) how to do something.

    It's couched as a game for kids, but in fact it's a complete language with strong semantics. (If you have kids you should try it out on them. If you don't, you should try it out on yourself.)

    Ken Kahn really deserves huge appreciatio
  • Was one of the first Visual Programming IDEs I'd used. I found it incredibly powerful and liberating. You could build complex beans in a simple, easy to use (to me anyway) Visual Composition Editor. Pity that it fell a victim to internal politics...
  • You know the ones. Where you build queries by dragging from one box to another to create a SQL join etc.

    You could call that visual programming (though of course you can't write your whole program with it)

    And the "visual" representation updates itself when you modify the code (which the procedural/OO visual programming systems seem to have trouble with).
  • Back in the late '80s I used Matrix Layout, which became http://developers.sun.com/prodtech/javatools/jsstandard/reference/techart/inteRAD.html [sun.com]. It was OK, but as a COBOL and RPG developer, I never really invested the time to learn it properly.
  • by c0d3h4x0r ( 604141 ) on Monday March 21, 2005 @03:47PM (#12002929) Homepage Journal

    Visual representations work better than textual representations for most technical things, but only if you choose the right visual representation.

    Example of a good visual representation: music composition software with virtual modules/machines/synthesizers that you graphically plug together into a virtual "rack" of equipment for your song. Software like BuzzTracker or Psycle, which takes this visual approach, is far more efficient to work in than the old textual interfaces provided by programs like ScreamTracker3 or Impulse Tracker.

    Example of a pretty good, but still imperfect, visual representation: the desktop GUI. From within the GUI, you can accomplish just about everything you could possibly ever need to do with your computer. It's almost never necessary to pull up a command prompt to get something done, because the GUI provides an equivalent, more-understandable, typically more-efficient way to do it. Of course, there are still cases where the GUI provides a shitty, inefficient visual representation (or worse yet, no visual representation at all) and you do still have to resort to the command line to get something done, or to do it more efficiently. This just illustrates how choosing the right/optimal visual representation is the real challenge, and it also illustrates how it's an ongoing work-in-progress to pick the "optimal" visual representation.

    Example of a bad visual representation: most visual programming models developed thus far. As others have pointed out, most visual programming models put together so far are too high-level to be realistic programming environments for real-world purposes. This doesn't mean that visual programming will never work. It just means that no one has offered up a decent enough visual representation of programming yet.

    Another thing worth noting is that when you try to develop a high-level "wrapper" layer which rides on top of a lower-level "intermediate" layer, which in turn rides on top of a lowest-level "base" layer, the layering prevents the top layer from being as elegant and usable as it otherwise could/should be.

    Classic examples of this phenomenon:

    • GNOME/KDE/front-ends (wrapper layer) on top of Linux CLI tools/textual config files/X11/subsystems (intermediate layer) on top of the OS kernel (base layer), instead of integrating the desktop GUI model in a standardized way throughout the entire system
    • Visual programming model which attempts to translate pictures (wrapper layer) into equivalent common C/C++/Java/C# programming ideas (intermediate layer) and then into machine code (base layer)

    In other words, layering forces higher layers to be designed to accommodate the design of the layers underneath them, which goes directly against the idea of designing the user-facing (top-most) layer for optimal usability and human understanding.

    I think one of the biggest reasons visual programming has not really succeeded so far is that all the approaches to it have been attempts to "visualize" existing programming models as set forth by C/C++/Java/C#/Basic-type languages. That won't work, because those programming models were never designed to be visual in the first place. This approach forces the top-most layer (the visual stuff) to be designed in a way that accommodates the intermediate layer, rather than permitting it to be designed in the most human-intuitive way.

    Instead of trying to create a visual representation of those existing programming models, the right approach (whatever it is) will ultimately prove to involve an entirely new programming model constructed specifically for it, rather than reusing all the same constructs and ideas established by existing textual languages.

  • the biggest problem with so-called "visual" languages is that more often than not they are simply ways of expressing the same code in new ways. merely being visual does not make coding easier. we need new metaphors, new tools, not the same tools in new form, to actually gain benefits.

    this is highly analogous to the advent of the motion picture & television. originally television was little more than radio plays with people. it wasn't until multiple cameras began to be used that people realized the new
  • And didn't a large number of the replies decry the naive longing on the part of people who don't know any better? Folks, reducing complex logic to pretty pictures is not likely to happen...ever. Unless it's the programming language for Lego Mindstorms robots. But serious Lego'ers use one of the libraries available in several languages to get real...uh...Lego'ing done.

    'nuff said?

  • One vaguely similar case is the use of Gantt charts in project management, which are used to show concurrent activities and the dependencies between them. MS Project is the commonest example of such a "development" environment, and has higher-level features such as hierarchical grouping of activities. I have to handle Gantt charts frequently, and they are an absolute pig to debug, and understanding someone else's chart takes an hour or so of talking through it to work out whether it is consistent with the o
  • Probably the crux of the biscuit is Declarative languages and platforms vs. Procedural platforms. The visual tools work out pretty well for declarative tasks, where the developer specifies 'WHAT' the result is, but not very well for procedural tasks, where the developer specifies 'HOW' to do it (a small sketch of the distinction follows below). On the declarative side are data mapping tools such as:
    • SQL Server's DTS
    • WYSIWYG SQL (ranging from bad to good)
    • WYSIWYG XML/XSL (XMLSpy, etc.)
    • Data Mapping (EDI Complete, BizTalk, etc.)

    Many report designers could be
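    A small, contrived C illustration of the WHAT-versus-HOW split mentioned above: the declarative form states the desired result and leaves the steps to a tool or engine, while the procedural form spells the steps out. (The table and field names are invented.)

    #include <stdio.h>
    #include <string.h>

    /* WHAT: a declarative description of the result, as you'd hand to a
     * data-mapping tool or a SQL engine. */
    static const char *declarative =
        "SELECT SUM(amount) FROM orders WHERE region = 'EU'";

    /* HOW: the procedural version spells out the iteration and the test. */
    struct order { const char *region; double amount; };

    static double sum_eu_orders(const struct order *orders, int count)
    {
        double total = 0.0;
        for (int i = 0; i < count; i++)
            if (strcmp(orders[i].region, "EU") == 0)
                total += orders[i].amount;
        return total;
    }

    int main(void)
    {
        struct order sample[] = { { "EU", 10.0 }, { "US", 99.0 }, { "EU", 2.5 } };
        printf("%s\n-> %.2f\n", declarative, sum_eu_orders(sample, 3));
        return 0;
    }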

  • by ctj ( 210968 )
    LabView, developed by NI, is a flexible and easy-to-use visual programming language. It is used in both lab and industrial environments for the control of plant and the collection of results. It uses the concept of front and back panels, with the front panel displaying what a user would see and the back showing the code. It uses a dataflow programming style which works well for many event- and data-driven applications.

    I have used it, and it can become difficult to use if you don't segment your program into subVIs (fun
  • While trying to build a visual programming language I think it would be beneficial to look at systems that have already succeeded at the task. For instance it is possible to design a circuit, draw the schematic of that circuit and have another engineer understand the function. In that sense the schematic is a program, written in a visual language. However, this is only an example, and while it could function as the UI for a visual programming language it would be cumbersome. Flowcharts address the issu
    • (sorry, an OT rant)
      I've worked at companies that used version control (they liked it so much they used two different systems on the two projects I worked on) on software, but not schematics. Perhaps the worst thing is you could change a component value without having to (or even being able to) write a line or paragraph of text explaining why you did it. Try that with changing just one character in a program source file.
    With multithreading becoming an increasing issue in software development, I'm wondering why there hasn't been more focus on visual programming.

    Where do I start? The very fact you think that visual programming somehow helps or has anything to do with multithreading shows you've got some massive misconceptions about the whole issue.

    To put my spin on the issue, I personally don't think that pure text-editing in terms of VI is the future of programming. However I also get maddened by the people trying to tel

  • If you are in a country where you don't speak the language, you have to point at the things you want (*). It's the same with visual programming. It makes it easy for newbies, because they can simply point at the things they want. But you might be slightly more productive when you can simply say what you mean. When things become more complex, pointing at things reaches its limits. *) Just like me. I've learnt English at school, but my grammar is sometimes quite strange.
  • Programming should be fun! Of the visual programming languages I have seen, only LabView is easy enough to use that one can enjoy using it. And even it isn't what a visual programming environment should look like.

    With a text interface there isn't much choice, so it is quite trivial to make a programming language interface: you type text as if you were using a text editor.

    With visual interface there is much more you must take into account: visual appeal, drawing components, how relations are presented, creating fun
