Microsoft

Charles Simonyi leaves Microsoft (592 comments)

tibbetts writes "The New York Times reports (printable version) (Free blah di blah) that Charles Simonyi, the former chief architect at Microsoft and creator of Bravo, a text-editing program that later became Microsoft Word, has left the company to form his own startup. The focus of his new company is to "simplify programming by representing programs in ways other than in the text syntax of conventional programming languages," which is highly ironic in light of his infamous Hungarian Notation style of naming variables. Perhaps more amazingly, 'Mr. Simonyi has left Microsoft with the right to use the intellectual property he developed and patented while working there.'"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Tuesday September 17, 2002 @01:41PM (#4275030)
    I always preferred Reverse Polish myself.
  • by Washizu ( 220337 ) <bengarvey@[ ]cast.net ['com' in gap]> on Tuesday September 17, 2002 @01:41PM (#4275036) Homepage
    Let's hope he isn't allowed to take Clippy the animated paper clip with him. die Clippy die.

  • Grab him! (Score:2, Funny)

    by n2dasun ( 467303 )
    Quick, get him to work on OpenOffice!
  • Fleeing the ship (Score:4, Interesting)

    by doublem ( 118724 ) on Tuesday September 17, 2002 @01:42PM (#4275050) Homepage Journal
    And the Smart Rats are fleeing the ship. I wonder what he knows that we don't know.
    • Oh give me a break.
      1) The guy had been working for Microsoft for 21 years (how long have you worked anywhere?)
      2) He's a billionaire (how much money do you have?)

      These are perfectly good reasons to move on and try something different. In fact, most people in the industry switch companies much, much more often, so this is actually a rather positive sign for Microsoft (that people are staying for so long).
  • by ToasterTester ( 95180 ) on Tuesday September 17, 2002 @01:43PM (#4275055)
    IBM has done a lot of experimentation on development systems along these lines. They never caught on. I remember seeing IBM demos of development systems meant to let anyone drag and drop their own programs together.

    • Drag 'n Drop one's own programs together? Been there, done that, in 1994 no less. NeXTStep Developer.
    • I remember seeing IBM demos of development systems meant to let anyone drag and drop their own programs together.

      I agree with you that they indeed *tried* to do that, but they ended up with a system (VisualAge) that was in fact a lot harder to use than a traditional "notepad-style" environment.

      Not everybody can do programming; it requires a special kind of imperative, cause-effect type of thinking. Such visual tools will (maybe) manage to make one aspect of programming easier, but I don't think they'll ever manage to make the whole thing enough easier that Aunt Tillie could build, let's say, her own custom expense manager. And the reason lies in the fact that most software projects are unique in at least one aspect. This uniqueness in most cases requires a Turing-complete programmer.

      VisualAge was eventually replaced by Eclipse, which is, in terms of the programming interface, as standard as you can get.

      The Raven

  • WTF? (Score:2, Funny)

    by Picass0 ( 147474 )
    "Mr. Simonyi has left Microsoft with the right to use the intellectual property he developed and patented while working there."

    ????!!!!!Errrrr??????

    (conspiracy) Something seems to be going on here.(/conspiracy)

  • Hungarian Notation (Score:2, Insightful)

    by Art_XIV ( 249990 )

    Even Hungarian Notation is a big improvement over having no naming conventions at all.

    • Well, it all depends on how you see it. Ever had to change an int to a long in a very, very huge program? That's quite a big search 'n' replace. Besides, I really think it makes code unreadable... If I don't know what type a variable is, I prefer to look up the declaration. But then I probably am just a bad programmer.
      Just name the var for what it is supposed to represent. If it represents an age, call it "age" and not "iAge". Just my opinion.
      • iAge tells you how that age is represented. If I saw a variable called 'age' I wouldn't have a clue what the type was. It could be a string for all I know. Not everything is as obvious to everyone else as it is to you. You should code so that you and a person who just picked up the code off the street (and knows how to code in that language) are both able to understand it without difficulty. Easier said than done.

    • Even Hungarian Notation is a big improvement over having no naming conventions at all.

      To all the die-hard C programmers who refuse to make the Linux kernel C++ compatible because they are using variable names such as "new", let me point out that this wouldn't be a problem if you had called the variable nNew, gNew, new_p, or any kind of mangled name at all.

      Sometimes the key is just to have a structure, and it doesn't matter what the structure is.

      -a
  • by Spy Hunter ( 317220 ) on Tuesday September 17, 2002 @01:44PM (#4275075) Journal
    Registration-free link [nytimes.com] courtesy of asahi.com/english/nyt
  • by Anonymous Coward on Tuesday September 17, 2002 @01:44PM (#4275081)
    There's a programming language called LabVIEW (http://www.labview.com). Programs in this language aren't textual but rather like graphical machines through which you can easily visualize the data flow. This doesn't necessarily make programming easier, though... scientists without CS degrees who still want to program their scientific instruments often just have an easier time visualizing LabVIEW programs, that's all.
    • In grad school I used LabVIEW to program a lot of data acquisition and even some control (it was kind of scary using a Mac Quadra to digitally control a $50,000 hydraulic press). This was obviously some time ago. I think the two advantages of LabVIEW were the visualization (as you stated) and obviating the need to remember arcane syntax (I was programming Fortran prior to that... shiver). Today toys like Visual Studio catch most of my syntax errors, leaving me free to make others. I still think some programming experience is required to get the most out of LabVIEW - you still need to know programming structures (comparisons, loops, etc.). It's just a shorter trip from flowchart to program.
    • Yeah, I've heard of LabView. As a programmer, I hated it (caveat: this was 1995).

      The thing that hooks you onto LabView is you've got a bunch of test equipment that you want to automate. National Instruments has a HUGE list of "virtual" instruments that match the ones on your bench. Great, you say: these modules will be just the thing, and I'll be done in no time at all, because they've done all the work. WRONG.

      The main feature of the NI VIs was that they could reproduce, on your computer screen, a GUI version of the front panel of the test equipment. (The other trend was to sell you a piece of test equipment that plugged into an expansion slot of your PC or an external chassis, and had a GUI instead of a front panel, but that is a separate topic.)

      Well, big f**king deal. If I wanted to click a button-shaped icon on a GUI all day, I would have stuck with pushing the real button on the front panel. I want to write a PROGRAM, i.e., something more abstract than pressing the button.

      The only real abstraction that LabView provided was a block which could have dataflow "wires" connected to the terminals. Once there were more than four terminals (think, function parameters), it became impossible to keep the wires neat, or keep straight which terminal was which.

      Plus, the blocks were either ridiculously low-level (a GPIB command or two) or ridiculously baroque (a series of GPIB commands, with input wires for every possible setting of the instrument). I often had to resort to looking at the source, reading the GPIB sequences, then reading the instrument manual to translate into English.

      Any kind of structured programming, other than blocks (functions) required some hokey GUI expression, often involving multiple-page (like tabs in a modern dialog box) displays. By design, you couldn't see the multiple branches of a case statement at the same time. Plus, the need to keep sane wiring meant that these pages kept growing to hold the most complex case, so programs of any sophistication became huge.

      Forget it. I ended up writing my data collection code in a bastardized Pascal-like language supported by my data analysis program (Igor Pro). That was gross, but at least I could write a for loop without going insane, and I got a decent graphing environment.
  • Does this mean that we're going to be seeing more programs with annoying paper clips?
  • by DaytonCIM ( 100144 ) on Tuesday September 17, 2002 @01:46PM (#4275098) Homepage Journal
    Mr. Simonyi has left Microsoft with the right to use the
    intellectual property he developed and patented while working there.


    That's only because Bill Gates owns his soul.
  • by Sanity ( 1431 ) on Tuesday September 17, 2002 @01:47PM (#4275104) Homepage Journal
    In fact, it has a long history [berkeley.edu].

    I personally don't think a purely visual approach is necessarily better. Anyone looking into this should probably build it from the ground up by looking closely at how actual programmers write code, and treat it as a usability problem. Try to reduce key-stroke redundancy, and figure out ways to reduce errors. A friend of mine and I once considered writing a language editor which guaranteed that at any time, the program displayed in the editor window was syntactically correct. This would mean autogeneration of text (auto-completion of variables and syntax), and restrictions to prevent the developer from entering impossible code.

    I think the mistake people have made is often to start out with unfounded assumptions about how it should be done - such as assuming that a "drag and drop elements, then connect them up with lines" approach is the right direction (I don't think it is - or we would all be programming with JavaBeans right now).

    • (* Try to reduce key-stroke redundancy *)

      IMO, redundancy of a code or code text pattern is often a sign that either you are doing it wrong, or that the language is insufficient.

      For example, if the code has something like:

      foo.bar.yukims.glock(a, 1)
      foo.bar.yukims.glock(a, 2)
      foo.bar.yukims.glock(a, 8)
      foo.bar.yukims.glock(a, 13)
      foo.bar.yukims.glock(a, 19)

      There should be a way to do something like:

      x = "foo.bar.yukims.glock(a,"
      x& 1)
      x& 2)
      x& 8)
      x& 13)
      x& 19)

      Not exactly like this, but something roughly similar, with better names of course.

      IOW, there are two approaches to dealing with such repetition: 1. Automate the reproduction, copy-and-paste style, or 2. Use the language itself to eliminate the redundancy. The first approach makes programs harder to change, IMO, because you then have to change every copy if you change the parts that are the same.
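A minimal sketch of option 2 in Python (foo/glock and the numbers are the made-up names from the example above; any real language with loops or helper functions works the same way):

```python
# Hypothetical stand-in for the repeated call foo.bar.yukims.glock(a, n):
# factor the constant part out and iterate over the varying argument.
def glock(a, n):
    return a * n

a = 3
results = [glock(a, n) for n in (1, 2, 8, 13, 19)]
```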
    • I think the mistake people have made is often to start out with unfounded assumptions about how it should be done - such as assuming that a "drag and drop elements, then connect them up with lines" approach is the right direction...

      I agree. I would add that there are many visual techniques already present in most every programming language. In perl, a hash can be formed and referenced in a way that is (to me) visual:

      my $hashref = {
      'ref1'=>{'color'=>'blue'},
      'ref2'=>{'color'=>'red'},
      };

      Compare this to forming the same sort of data structure in Java using Hashtable. In Java, you might approach this by creating Hashtable instances and then adding keys and values individually, one at a time.

      Hashtable hashRef = new Hashtable();
      Hashtable ref1Hash = new Hashtable();
      Hashtable ref2Hash = new Hashtable();
      ref1Hash.put("color","blue");
      ref2Hash.put("color","red");
      hashRef.put("ref1",ref1Hash);
      hashRef.put("ref2",ref2Hash);

      The Perl example is much more self-documenting and "visual" than the Java example. Perhaps more can be said for visual techniques with ASCII code?

      If you ask me, let's use Unicode to create more wacky characters for Perl to take advantage of! :)
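For comparison, a dict literal in Python is similarly self-documenting (a sketch of the same structure):

```python
# The nested structure is visible right in the source, as in the Perl example.
hash_ref = {
    "ref1": {"color": "blue"},
    "ref2": {"color": "red"},
}
```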

  • Simonyi. (Score:5, Informative)

    by PrimeNumber ( 136578 ) <PrimeNumber@excite.YEATScom minus poet> on Tuesday September 17, 2002 @01:47PM (#4275105) Homepage
    Unlike most of the management at Microsoft (Ballmer), Charles Simonyi is definitely technical.

    Not mentioned in this article: he developed the Multiplan interface, which a gazillion CP/M-based boxes used, and the first version of Access, and had peripheral involvement in the development of the first Mac GUIs.

    This guy started writing programs on a Soviet vacuum-tube computer (a Ural II). He snuck out of Eastern Europe and from there moved to the US.

    If I had any cash I would invest in his company. :).
  • Not ironic (Score:5, Informative)

    by Junks Jerzey ( 54586 ) on Tuesday September 17, 2002 @01:49PM (#4275137)
    which is highly ironic in light of his infamous Hungarian Notation style of naming variables.

    It was a technique for making types easy to identify in a language (C) that doesn't have any native way of indicating type. In BASIC, you know that A$ is a string. In Perl, you know that @names is a list. In C you don't know what "last_position" is. A pointer? An index? A floating point vector? It's not as if Hungarian Notation was designed to be the ultimate language-independent programming tool.
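As a rough sketch of the convention, in Python for brevity (the variable names are invented; the prefix tags follow Simonyi's original scheme):

```python
# Classic Hungarian-style names: the prefix encodes the representation,
# which is useful precisely where the language itself gives no hint.
szTitle = "Bravo"          # sz: zero-terminated string
iLastPosition = 42         # i: integer index
rgScores = [90, 85, 77]    # rg: "range" (array)
fDone = False              # f: flag (boolean)
```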
    • Re:Not ironic (Score:4, Insightful)

      by furiousgeorge ( 30912 ) on Tuesday September 17, 2002 @01:57PM (#4275227)
      Thank you. A voice of reason. Hungarian (while not perfect, and not that pretty) is DAMN useful.

      I was someone who was introduced to it kicking and screaming, but eventually I came around. As soon as you have to work on a LARGE software project, it's a godsend. It makes reading someone else's code, or your own code 2 years later, MUCH easier. When I can look at a variable in a strange piece of code and tell its type and scope just from its name, that saves a ton of time.

      Most geeks don't like it because it's extra typing.
      • But then there's the dark side to that, specifically what happens when the type of the variable gets changed (say from int to class InterlockedULCounter for a counter). This being C++, the programmer has defined the right methods to make ++ and the other operators work, except that the number is now an unsigned long instead of an int, so only a few places where it was output needed to be touched to keep everything compiling cleanly and working properly. Nobody wants to go to the trouble of tracking down every initialization or increment of the variable across the entire program just to change the type prefix, and now you're back to a situation where you can't tell the type of the variable from the prefix. Except that you assume you can, and are in for a nasty surprise in the near future.

      • Maybe it does have a place, but I've seen it used where it should never be. Such as databases. char_Firstname and tbl_Employee is cool, until you want to change things, so the table is now a view and the char is now a varchar, and you have the wonderment of trying to decipher code that has just been obfuscated for you. Joy.
      • by smagoun ( 546733 ) on Tuesday September 17, 2002 @02:42PM (#4275714) Homepage
        Hungarian notation is the tactical nuclear weapon of source code obfuscation [mindprod.com]. Use it!

        (scroll down to #29 in the list, it's worth it)

      • Re:Not ironic (Score:3, Insightful)

        > A voice of reason. Hungarian (while not perfect, and not that pretty) is DAMN useful.

        I agree - except I use a practical Hungarian Notation, not an overly idealistic one. I posted a comment a while ago about this.

        http://slashdot.org/comments.pl?sid=32873&cid=3582560 [slashdot.org]

        The problem occurs when you take Hungarian notation to its logical conclusion: you get lost amongst the alphabet soup of glyphs. Variable names provide an abstraction over memory addresses, but overzealous use obfuscates the name.

        Cheers
      • Hungarian Notation saves our ass. My group maintains several million lines of code, and we change variable types all the time. By changing both the type of the variable and the prefix on its name, we effectively cause all code that referenced that variable to fail to compile. This is the desired result.

        The task of propagating a change of variable type includes visiting the affected code and verifying that the change will not have unwanted consequences. It almost always does. Hungarian notation allows you to do this quickly, effectively, and in a single pass. Waiting for the regression test to come back negative is reckless and unprofessional.

        We don't allow code to be checked in if it is not in HN. If it can't be visually audited for type correctness by an independent team, without the use of an IDE or some type of code browser, it's a liability and therefore has no business in our code base.


        -Hope
    • It was a technique for making types easy to identify in a language

      Yes, but the value of having the type information accessible in the variable name has to be weighed against the confusion and clutter that adding that information causes.

      The names it creates are hard to read and remember, impossible to pronounce. It doesn't scale very well beyond a few native types, as in BASIC and Perl -- how many meaningful prefixes is a programmer supposed to remember? How many characters of a variable name are you willing to devote to type information? And finally, the type of a variable is usually obvious from its context, and it can be commented where it isn't.

      I've never met anyone who has asserted that Hungarian notation is worth using. It is ugly and confusing, plain and simple.
    • I once saw a spec (back in my VB programming days) that had something like four or five prefix components in the Hungarian notation before you even got to the variable name. Something like

      intLocalFnnameModulenameX

      THAT gets a little absurd, IMHO. Not arguing that intX, or even intLocalX isn't useful, but you can twist yourself around an axle pretty damn quickly with this stuff.
    • by wowbagger ( 69688 ) on Tuesday September 17, 2002 @02:21PM (#4275467) Homepage Journal
      Hungarian notation is EVIL, and here's why.

      Consider a large program, in which we manipulate lots of ints. We have lots of pointers to ints, so our code looks like:

      ....
      int *piFoo = &bar;
      *piFoo += 1;
      *--piFoo = 5;


      and so on.

      Now, we discover that ints aren't big enough - we need to use longs.

      ....
      long *piFoo = &bar;
      *piFoo += 1;
      *--piFoo = 5;

      ...


      OK, now we have two equally bad choices:
      1) We leave the variable names alone. But now they are lying, and are therefore introducing more errors.
      2) We change the variables. Now what SHOULD have been a simple change is rippling all over the code.

      Even if you do as you should, and use a typedef, things are still bad:

      ....
      typedef int Thingy;

      Thingy *pThingy_mythingy = 0; /* ????? */

      ....


      How do you create the "warts" for typedefs without creating ambiguity?

      It gets even worse if you have structures:

      ...
      struct Narf
      {
      int *pi_Poit;
      };

      ....
      *narf.pi_Poit = 5;

      ....


      Now, you have to rev all the items that reference that structure, all documentation that refers to that structure, etc.

      I can somewhat understand the use of a leading "p" to indicate "pointer to ...." but otherwise the notation creates more problems than it is worth.

      The proper place to trace variable types is not in the variable's name! It should ideally be traced by your editing environment, along with the location of the variable's definition, the location of its instantiation, the location of its initialization, and any comments that you want to assign to the variable.
    • context (Score:5, Informative)

      by Phil Wilkins ( 5921 ) on Tuesday September 17, 2002 @03:47PM (#4276272)
      Hyslop and Sutter on Hungarian [cuj.com]

      (In summary, don't.)

  • by Anonymous Coward on Tuesday September 17, 2002 @01:50PM (#4275138)
    This guy invented Hungarian notation, yet his name is not an anagram of Satan, Beelzebub, or Lucifer. Or have I missed something? Or is it in the name of his new start-up?
  • obvious? (Score:5, Insightful)

    by oyenstikker ( 536040 ) <slashdotNO@SPAMsbyrne.org> on Tuesday September 17, 2002 @01:52PM (#4275169) Homepage Journal
    Could it be that maybe this man just wants a change of pace? Maybe he wants to move geographically? Maybe he wants more freedom to spend time with people important to him? Maybe he just decided to do it on a whim? Can we consider that maybe, just maybe, this has nothing to do with Evil Empire Microsoft (TM), politics, Open Source, or geekiness?
  • by Tablizer ( 95088 ) on Tuesday September 17, 2002 @01:53PM (#4275174) Journal
    This topic raged recently on comp.object.

    There are basically two common candidates: drag-and-drop "box-and-line" diagrams, and tables (my favorite).

    I argued that OOP puts too much of the "noun modeling" into code. The more that is put into tables (relational databases), the easier it is for me to search, sort, filter, navigate, etc. the information (assuming decent relational tools).

    The alleged downside is that algorithms are decoupled from data, which is "bad" in most OO philosophy. However, I don't see any huge penalty of this, and the benefits of being able to apply relational algebra and relational modeling outweigh any small drawbacks IMO. Besides, I have put code into tables on occasion.

    I personally find code more rigid than a (good) relational system. In procedural/relational programming, mostly only "tasks" end up dictating code structure, and not the noun models, noun taxonomies, and noun relationships; which are all subject to too much change and relativism to use code to manage IMO. OOP is too code-centric WRT noun modeling.

    It is probably subjective, so I hope that whatever he comes up with to replace code, it does not become forced down everyone's throat if it catches on in all the PHB mags. One-size paradigm/approach does NOT fit all.

    Perhaps he can strive to make all 3 methods (code, tables, diagrams) interchangeable. That way a given developer can use the representation that he/she likes the most without shop-wide mandates.

    • by sohp ( 22984 )
      Oh no! Another anti-OO "relational algebra is all we ever need" rant by Tablizer. Remember the term from back in the vinyl LP days, "broken record"? Now we say a CD is skipping.
    • I've gotten pretty far merging relational and object models.

      Personally, I find OOP can be a bit ridiculous when everything is mindlessly reduced to a rigid object model as dictated by some guy's rigid methodology. (Not all are rigid.)

      What I've found is that most of the time it's a matter of versatile interfaces. Wanting the best of both my procedural language and SQL, I found myself creating interfaces that implement smart tables. A smart table is an object that exposes an arbitrary number of properties, like a named collection. Unlike a normal named collection, a smart table allows you to implement ad hoc rules (changing this field causes that) using code, stored procedures, etc. Need more than just smart properties? Fine: derive from the Smart Table base class and add your own functions (usually stored procedures).
    • Perhaps he can strive to make all 3 methods (code, tables, diagrams) interchangeable. That way a given developer can use the representation that he/she likes the most without shop-wide mandates.

      It will have to be interchangeable at some level. No matter what you do at the highly-abstracted-development-interface level, the hardware is still procedural. All the fancy tables and relational tools, and all the OOP modules, and all the event-driven interfaces, have to be translated into step-by-step machine code eventually, or they do exactly nothing.

      I'm not saying any of your ideas are bad, but it's important to recognize at some level that all of this stuff is really just window dressing. If it makes things easier for you to understand, great, but it doesn't fundamentally change how things actually get done, and at the end of the day a good programmer still needs to have some understanding of how the machine actually works.

  • by Kaz Kylheku ( 1484 ) on Tuesday September 17, 2002 @01:54PM (#4275196) Homepage
    It's been done; for example Lisp represents programs as data structures rather than text. The structures are often obtained by scanning a text notation, but that is not strictly necessary. Sometimes the structure is manufactured by the program itself. Or it could come from some GUI manipulations, whatever. I wonder what Simonyi could be up to in this area that is original? (Original to the entire computing world, that is, not just ignorant pockets thereof).
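The point generalizes beyond Lisp: any language that exposes its own syntax tree lets you build and edit a program as a data structure rather than as text. A minimal sketch using Python's ast module:

```python
import ast

# Parse text into a structure, edit the structure directly, then execute it:
# the program is manipulated as data, not as a string.
tree = ast.parse("x = 1 + 2")
assign = tree.body[0]
assign.value.right = ast.Constant(40)   # replace the literal 2 with 40
ast.fix_missing_locations(tree)

ns = {}
exec(compile(tree, "<sketch>", "exec"), ns)
# ns["x"] is now 41
```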
  • by daviskw ( 32827 ) on Tuesday September 17, 2002 @01:57PM (#4275225)
    He isn't hitting anything new as far as technology goes. Five years ago there was a company called FastTech that had tools called Graphiq and Cellworks.

    Graphiq provided a rudimentary GUI that let you plan program flow with individual modules coded in something called C-- (this is no joke).

    CellWorks provided a much better GUI but a different low-level language that resembled Basic in only the worst possible ways.

    What we discovered using these tools is that they could indeed be powerful, and almost any yahoo could use them. But once you wanted to solve something complicated, the problem immediately started to look like Programming 101.

    In other words, complicated things are complicated, and it doesn't matter what the tool is. If you want to solve it you need someone specialized in that tool to solve it.

    It's as simple as that.
  • Hungarian (Score:5, Funny)

    by Quasar1999 ( 520073 ) on Tuesday September 17, 2002 @01:57PM (#4275226) Journal
    If it weren't for Charles Simonyi, I wouldn't be proud to be Hungarian at parties...

    Wait... I never actually get invited to parties... damn... day dreaming again... :P
  • Will Microsoft wait till the new company comes up with something truly nifty, and then buy it up (like they did to get their hands on Halo)?
  • by Dynedain ( 141758 ) <slashdot2&anthonymclin,com> on Tuesday September 17, 2002 @02:00PM (#4275268) Homepage
    Mr. Simonyi has left Microsoft with the right to use the intellectual property he developed and patented while working there.

    If he patented stuff, he owns the rights to it and can use it if he leaves MS. Now if his work was patented in MS's name, then he couldn't take it.
    • Companies cannot get patents. Typically a human gets the patent and assigns the rights to the company.

      A quick peek into the USPTO shows that Simonyi has something like 8 patents (probably from two applications, one of which was split into many parts), all of which are assigned to Microsoft.

      So Microsoft must have granted him rights to use the patents in his new venture. And Microsoft must have gotten something in return, or they have not acted in the interest of their shareholders. What they got is the mystery.
  • he's not the first (Score:5, Informative)

    by mirko ( 198274 ) on Tuesday September 17, 2002 @02:05PM (#4275323) Journal
    "simplify programming by representing programs in ways other than in the text syntax of conventional programming languages"


    Has he heard about COLORFORTH [colorforth.com] ?
  • Free blah di blah (Score:5, Informative)

    by frovingslosh ( 582462 ) on Tuesday September 17, 2002 @02:08PM (#4275352)
    "The New York Times reports (printable version) (Free blah di blah)

    Hey! The printable version that was linked to didn't blah di blah me when I tried to access it! Maybe this is the cure for all of the NYT registration stuff, link to the printable version rather than the one with ads. Of course, I'll miss seeing all of the ads, but I'm willing to make the sacrifice.

  • by leighklotz ( 192300 ) on Tuesday September 17, 2002 @02:11PM (#4275373) Homepage
    Charles Simonyi didn't just create "a text-editing program that later became Microsoft Word," as the Slashdot story says; he wrote the first WYSIWYG editor at the place that invented the concept, in 1974. Note that 1974/1975 saw the development of BITBLT, WYSIWYG editors, PDLs, icons, and pop-up menus.

    See PARC [xerox.com]'s history and search for "Bravo", or read the summary below:

    1975

    Engineers demonstrate a graphical user interface for a personal computer, including icons and the first use of pop-up menus. This interface will be incorporated in future Xerox workstations and greatly influence the development of Windows and Macintosh interfaces.

    1974

    ...Press, the first PDL, is developed by PARC scientists; it greatly influences the design of Interpress and PostScript.

    The Bravo word-processing program is completed, and work on Gypsy, the first bitmap What-You-See-Is-What-You-Get (WYSIWYG) cut and paste editor, begins. Bravo and Gypsy programs together represent the world's first user-friendly computer word-processing system.

    BITBlt, an algorithm that enables programmers to manipulate images very rapidly without using special hardware, is invented. The computer command enables the quick manipulation of the pixels of an image and will make possible the development of such computer interfaces as overlapping screen windows and pop-up menus.

  • "simplify programming by representing programs in ways other than in the text syntax of conventional programming languages,"

    Oh... you mean like Magic [magicsoftware.com] ?
  • by weird mehgny ( 549321 ) on Tuesday September 17, 2002 @02:16PM (#4275421)
    Here's one [fov120.com].
  • by deft ( 253558 ) on Tuesday September 17, 2002 @02:19PM (#4275440) Homepage
    'Mr. Simonyi has --left-- Microsoft with the right to use the intellectual property he developed and patented while working there.'"

    "Left", as in he left it there, for them to use, or...

    "Left", as in departed with that right so that it was no longer there and they couldnt use it.

    don't tell me I need to read the damn article... :)

    • I think they mean

      "Mr. Simonyi has [departed] Microsoft with the right to use the intellectual property he developed and patented while working there."

      Meaning that Microsoft holds the patents and he has been given the right to use them without forking over license fees.
  • by GuyZero ( 303599 ) on Tuesday September 17, 2002 @02:21PM (#4275462)
    Wow, I get to be the first person to post something actually informative.

    Simonyi was big on what he called 'Intentional Programming' (yes, as opposed to UNintentional programming, which is what we've been doing all along I suppose.) It's been in the works since at least '94 which is when a classmate of mine went to work on the project after graduating.

    He got shafted as the power inside the dev tools group shifted. Most of his group got cut loose and ended up looking for other positions. Oddly enough, Simonyi himself left the group and gave up on it a year or so ago, apparently without telling the remaining core of the group.

    See:

    http://web.archive.org/web/20000815211509/http://www.research.microsoft.com/ip/
    http://www.edge.org/digerati/simonyi/simonyi_p1.html
    http://www.omniscium.com/nerdy/ip/
    http://www.aisto.com/roeder/active/ifip96.pdf
  • It states:

    Mr. Simonyi's departure, to be announced today, will leave Microsoft with only three senior people from the team that led the company in the early 1980's: Bill Gates, a co-founder and the company's chairman; Steven A. Ballmer, the chief executive; and Jeffrey S. Raikes, a group vice president.

    I'm pretty sure Marc Macdonald is there again; Marc was the first employee of Microsoft.
  • by alispguru ( 72689 ) <bob.bane@me.PLANCKcom minus physicist> on Tuesday September 17, 2002 @02:53PM (#4275820) Journal
    Could it be that the real reason Simonyi wants away from Microsoft is that he's interested in aspect-oriented programming [xerox.com]? And the language that's getting the buzz in aspect-oriented programming is AspectJ [aspectj.org], where J stands for Java? And promoting Java would be a career-limiting move at Microsoft for anyone these days?

    Instead of the Times article, look at this one [washingtonpost.com] in the Washington Post which gets a little closer to this interpretation.
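    For anyone unfamiliar with the aspect-oriented idea mentioned above, here's a loose Python analogy (not AspectJ itself, which adds dedicated pointcut/advice syntax to Java): a cross-cutting concern such as call tracing is written once as an "aspect" and woven onto functions, instead of being scattered through every method body.

    ```python
    import functools

    calls = []  # the aspect's own state: a log of traced call names

    def traced(func):
        """Advice applied at each call join point: record the call, then proceed."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            calls.append(func.__name__)
            return func(*args, **kwargs)
        return wrapper

    @traced
    def transfer(amount):
        # the business logic stays free of any tracing code
        return amount * 2
    ```

    The decorator is a crude stand-in for weaving; the point is that `transfer` never mentions tracing, yet every call gets logged.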
  • The company itself (Score:5, Informative)

    by Nygard ( 3896 ) on Tuesday September 17, 2002 @02:55PM (#4275847) Homepage
    Odd that no-one's posted this yet.

    The company can be found at http://intentionalsoftware.com/ [intentionalsoftware.com] with some vague-but-cool-sounding stuff about changing the world.

    • The company can be found at http://intentionalsoftware.com/ [intentionalsoftware.com] with some vague-but-cool-sounding stuff about changing the world.

      Now the interesting thing I found out there is that the *other* founder is Kiczales, a Xerox PARC person who was a prime mover in the Aspect-Oriented programming movement. So it looks like what we have here is a start-up featuring really smart people whose efforts to do world-changing programming tool/language research did not get anywhere in the large companies they previously worked for. Or something like that.

      The success rate for start-ups is not very high, but this is at least an interesting sort of venture, unlike so many of the dot-coms of the past few years.

  • Blackmail? (Score:5, Funny)

    by CormacJ ( 64984 ) <cormac DOT mcgaughey AT gmail DOT com> on Tuesday September 17, 2002 @03:19PM (#4276049) Homepage Journal
    'Mr. Simonyi has left Microsoft with the right to use the intellectual property he developed and patented while working there.'

    Charles S.: I'm leaving to do my own stuff
    Bill G.: Charles, you'll have to give up your rights to all the stuff you've developed over the years
    Charles S.: Did I mention that I still have a copy of those memos that the government never saw?
    Bill G.: Well, when you put it like that, I'll give you the rights to all your stuff. Need any cash? No? Here, have some anyway. Anything else I can do? Anything at all? Coffee, water? Sure..?

  • Interview? (Score:5, Interesting)

    by PhilHibbs ( 4537 ) <snarks@gmail.com> on Tuesday September 17, 2002 @03:45PM (#4276262) Journal
    I for one would be interested to see a Slashdot interview with him.
  • I'm not alone! (Score:3, Insightful)

    by Trinition ( 114758 ) on Tuesday September 17, 2002 @08:01PM (#4278309) Homepage
    I've been increasingly troubled that I was perhaps alone in thinking the textual representation of source code is silly. As a Java programmer, I see that every IDE under the sun has a little side panel where the structure of your class is represented as a tree, and as you click on elements in the tree, the file jumps to that declaration.

    Turns out, though, that it doesn't really matter that method A appears before method B in the file. Code folding is a very simple step in this direction. And all of this arguing over tabs vs. spaces, curly-braces on their own line, etc. would be obliterated if code were stored in some other, unformatted manner.

    I know IBM's alphaWorks has a project [ibm.com] that transforms Java into XML and back. Once in unformatted XML, it is easier to see if a file changed functionally, whereas typical diff programs would highlight a curly brace being moved to its own line.
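    The parent's point is easy to demonstrate with Python's standard-library ast module (a Python stand-in for the Java-to-XML round-trip described above): once code is parsed into a tree, two different layouts of the same function compare as identical.

    ```python
    import ast

    version_a = "def area(w, h):\n    return w * h\n"
    version_b = "def area(w,h): return w*h\n"  # same function, different formatting

    # ast.dump omits line/column attributes by default, so only the
    # program's structure is compared, not its layout on the page
    tree_a = ast.dump(ast.parse(version_a))
    tree_b = ast.dump(ast.parse(version_b))

    print(tree_a == tree_b)  # → True: the formatting difference has vanished
    ```

    A tree-level diff of the two versions would report no change at all, while a line-based diff would flag every line.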

Think of it! With VLSI we can pack 100 ENIACs in 1 sq. cm.!
