Technology

On Coding Multiplatform Distributed Systems...

Wiggly asks: "I would like to program distributed systems using the same code base on multiple platforms and in multiple languages, so I am asking around..." And he's asking Slashdot. You've only read the tip of the iceberg, however; there's much more to digest if you decide to click on through.

"I will say first, though, that none of this is meant as flamebait, or to detract from what any of the projects/products mentioned here have achieved. I just have a wishlist, and I am looking for answers and opinions, not a holy war. I am sure that people use many of the things mentioned here on a regular basis for heavy-duty apps, quite happily and with great results.

There are a whole bunch of distributed programming frameworks around: RPC, ILU, CORBA, DCE, Java RMI and DCOM, to name but the most common. Many of these are available on multiple platforms, and there is a whole slew of interoperability tools to get them to talk to each other, with varying degrees of success. Right now I will focus on CORBA, as it has been getting much more press than any of the others recently, and because it is the system that I personally know the most about.

Commercially there are a few good ORBs, but they are terribly expensive. Developer kits for 'a well known brand' with good CORBA compliance start around 1500 - 1900 UK Pounds, and redistribution costs are around 1700 UK Pounds per processor. These kinds of costs don't really let people play with systems before buying, although I know that most commercial ORB vendors will give you trials if they think you are a good bet to buy. Additionally, most of the commercial ORBs support as few platforms as they possibly can.

On the Open Source side of things there are many, many implementations of CORBA to choose from, each with its own special focus: CORBA compliance, speed, interoperability, or whatever else that project's maintainers view as the most important goal(s). There is some great code out there, and a load of people spending every waking hour making it better.

What I cannot find at the moment is a system that targets multiple platforms and multiple languages. Want to use Perl to talk to C++ back ends? Well MICO/COPE is coming along. Want to use the same code on Windows NT as well? Too bad, NT support is very flaky (I have spent too many hours trying to get it working). Want to use Java Applets to talk to C? You have problems. Pick your favourite front/back end language combination and platform then try to find a solution. Problematic at best, and probably not possible at the moment.

Are these very strange requirements/wishes or would other people be willing to sacrifice ratified standards compliance and possibly performance for orthogonality of language/platform availability? I would like to be able to write code for Linux/Unices/Windows in my languages of choice (for me this would be Perl, Java and C++) without having to use multiple implementations on the different platforms.

The way things are shaping up I am thinking hard about rolling my own, because right now I have a need that I cannot fulfill from outside sources. Yes, Not Invented Here strikes again, but I can't find a solution. Am I alone in this? What do you think? Do you have any solutions?"

  • I guess you could write everything in Java and use the CORBA support Sun has added.
    You could also look into free (and GPLed) ORBs like omniORB from ORL [att.com] (now AT&T Labs).

    On NT I always use D/COM+ simply because it's reliable (laugh all you want - it's true) and ubiquitous.
    That all being said, I still primarily use sockets for cross-machine communication, and only use COM for IPC and in-process componentisation.
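
    A plain-sockets fallback like the one described above is about the smallest portable building block there is. A sketch only (Python is the editor's choice here for brevity, not the commenter's language, and the little upper-casing echo protocol is invented for illustration; the same calls exist in C or Java):

```python
import socket
import threading

def echo_upper(srv):
    # Server side: accept one connection, reply with the bytes upper-cased.
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024).upper())
    conn.close()
    srv.close()

# Bind and listen before starting the client so it cannot race ahead.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=echo_upper, args=(srv,))
t.start()

# Client side: connect/send/recv is identical source on every platform.
c = socket.create_connection(("127.0.0.1", port))
c.sendall(b"ping")
reply = c.recv(1024)
c.close()
t.join()
print(reply)  # b'PING'
```

    The point is not the echo itself but that nothing in the client or server code is platform-conditional, which is exactly why sockets remain the lowest common denominator for cross-machine communication.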
  • by Anonymous Coward
    I've recently completed a project involving an ASP-driven web site that talks to a J++ COM server, which talks to a pure-Java RMI client on NT, which talks via RMI to a pure-Java RMI server on Solaris, which talks through JNI to a shared object written in C, which finally talks to an ISAM database. It all works fairly well, except that the bridge between the J++ code and the RMI client is a new jre process forked on the command line. This works okay, and yes, it is ugly.
  • Awright, some people are probably gonna dump on me for this, but I'd say your best bet is java.
    Yes, I know some consider Sun Evil(tm), and that Java can't compare with C performance-wise. However, development time in Java tends to be shorter, it is a helluva lot more stable (no messing around with pointers causing memory leaks and access violations), and it has RMI and CORBA capabilities in the core libs.

    Do I come off as a java advocate? I didn't intend to; I hate advocacy. So please feel free to form your own opinion :)

    --The Bogeymeister
  • I am by no means familiar (aside from reading occasional user documentation) with distributed programming frameworks, but I am fairly familiar with C/C++. One thing I think should be noted is that while C/C++ syntax and semantics are well standardized, operating systems and architectures differ vastly, so they are apt to have different implementations of certain system calls (eg network sockets, file system calls beyond things like open/close, etc). AFAIK, interpreted languages like Java and Perl should be able to avoid this problem, since it's all handled by the interpreter.

    If you plan on using a lot of things like that while still maintaining cross-platform-ness, I would chalk that up as a con for C++.

    *donning asbestos suit*

    --Siva


    Keyboard not found.
  • by Anonymous Coward on Tuesday October 12, 1999 @11:02PM (#1618041)
    You might consider the fairly widely used ACE/TAO [128.252.165.3] framework from Washington University. It is Open Source and supports many different platforms. You can also get an idea of what ACE/TAO are good at by looking at the list of current projects using ACE/TAO [wustl.edu]. The framework is CORBA/C++ based, and implements several common design patterns for you.
  • Isn't the only real option Java? Unless you write the BASE stuff in ANSI C++ and just add in calls like MyWindowCreate(), which is written differently for each platform... something like that?


  • After all the promises Java failed to turn into reality (maybe because of MS, maybe because of other factors), the only language which has proven to be 100% portable is C.
    I love Java and the promise of 'write once, run anywhere', but after all it's just a promise and has not been reached yet. C programs like Apache, GIMP, etc. are well known to live up to this 'write once, run anywhere' promise, and are also extremely fast and powerful.
    So why not choose your favorite CORBA library written in C and join the development team to add the features you need? It seems fair to me.
  • s/C/C\/C\+\+/g
    I hope this is enough to keep the C++ advocates calm.
  • Well, the person asking the question mentioned Perl as one of his/her languages of choice. I'm pretty sure Perl interpreters are available for most platforms.

    I don't know about the availability of any sort of distributed frameworks for Perl, however...

    --Siva
    p.s. mooo

    Keyboard not found.
  • Use CORBA!
    There is no other way in your environment at the moment.
    I can recommend ORBacus (http://www.ooc.com).
    It's fast AND stable AND comes with source code.

    You may also have a look at Smalltalk. It's at least as cross-platform as Java, and probably still more mature. I can recommend VisualWorks, now available from Cincom (www.cincom.com). They also have CORBA support.


  • by JonJon ( 26343 ) on Tuesday October 12, 1999 @11:25PM (#1618047)

    I'd definitely have a look at XML-RPC [xml-rpc.com] (http://www.xml-rpc.com/).

    While implementations are not available in every language (of note, Java, Perl, and Python have implementations), it's simple enough to write your own easily.

    I've written a few programs using it with Python and Delphi, with great results.

    In essence, it's doing a procedure call, with the parameters and return values in an XML format, over HTTP. If you're familiar with both, it's dead simple to do; if not, it's a great excuse to learn :)
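
    To make the "parameters and return values in XML, over HTTP" part concrete, here is a hedged sketch using Python's standard `xmlrpc.client` module (the editor's choice of implementation, not necessarily the one the commenter used; `getTemp` is a made-up method name):

```python
from xmlrpc.client import dumps, loads

# Serialise a call to a hypothetical getTemp(city) method.  On the wire,
# this payload travels as the body of an HTTP POST.
request = dumps(("Boston",), methodname="getTemp")

# Any peer that speaks the format can decode it back into a method name
# plus a parameter tuple -- regardless of language or platform.
params, method = loads(request)
print(method, params)  # getTemp ('Boston',)
```

    The `request` string is a small `<methodCall>` XML document, which is why hand-writing an implementation in a language without one is feasible.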

  • C/C++ is 100% portable? Sure, a for loop works the same on all platforms. Or does it? Better make sure you have ANSI-compatible compilers. Better turn off any non-ANSI features. What about things that ANSI does not specify? Are your compilers going to treat these cases differently? What about library calls? There are NO guarantees here. When you get anywhere near the operating system, funky things happen - even with supposed POSIX standards! Have you seen what a mess signaling is across UNIX platforms? Java has to cover all the same ground; actually, Java pushes the limits even further with support for threading and graphical interfaces. Give me a C/C++ program that has threading. Next, get that to even pretend to work cross-platform. It ain't gonna happen. Give credit where credit is due!

    I haven't looked at Apache, but I would really be surprised if it did not contain plentiful #ifdefs and tons of platform-specific Makefile foo. Same with GIMP. Having personally done cross-platform porting for a commercial product, I can tell you C/C++ code porting is NOT a trivial task. Even with Java's deficiencies, I'd rather Sun do the work of converting operating-system-dependent things so I can spend my time making my code do something useful.
  • Huh?
    "all the promises Java failed to turn into reality"?
    I wouldn't exactly say that. For platform independence, Java beats C seven days a week. Sure, you do have to be a little careful when designing your app, but isn't that even more true for C as well?

    --Bogey
  • After re-reading your post/question I still am confused on what you are actually doing.

    are you creating a framework to allow other people to write programs to run on this distributed system?

    Are you going to be doing computation and need a way to communicate amongst all of the programs?

    other?

    In general you have two options for doing distributed stuff semi-easily:

    0) define a CORBA "system" where each application talks via an ORB (ie most languages have CORBA hooks, and this allows people to write in their own "language of choice"). So I grab your CORBA stubs, write my functions to interface properly, and then I should be able to talk to the other nodes.

    1) use Java :-) The Java method is not as nice at incorporating lots and lots of different languages into the mix (ie it is easier to use CORBA if you are going to have all kinds of crazy things participating in your system). If you are going to be writing most of whatever you are actually doing from scratch, then Java might be really nice for you.



    I would be really curious to understand a bit better what you are actually doing (ie details :-) ).


    msew

  • AFAIK there is a standard protocol called IIOP (Internet Inter-ORB Protocol) that lets different ORBs work together.

    I'm not really sure about that, but that's what I've filtered out.

    If that is true, you can use different ORBs on different machines and they will still work together.

  • I am by no means familiar (aside from reading occasional user documentation) with distributed programming frameworks, but I am fairly familiar with C/C++. One thing I think should be noted is that while C/C++ syntax and semantics are well standardized, operating systems and architectures differ vastly, so they are apt to have different implementations of certain system calls (eg network sockets, file system calls (beyond things like open/close), etc). AFAIK, interpreted languages like Java and Perl should be able to avoid this problem since it's all handled by the interpreter.
    Hmm. While not directly related to the original question (as far as I know, there is no CORBA support or similar), there is an attempt to make a "common" cross-platform C++ library at the wxWindows Project [ukonline.co.uk], which may be of interest.
    --
  • I'm not sure what you are trying to accomplish, but one possible way to achieve interoperability for a system across architectures, languages and operating systems is to use PVM. "PVM!" I hear you cry, "isn't that for math and physics nerds doing windtunnel simulations?" Well, yes, but actually PVM is a fairly general communications and synchronisation toolkit that just happens to have evolved from the needs of scientific computing. PVM solves all the interoperability issues above, and tends to scale fairly well. The only(!) drawback is that it's not well suited for widely (in a geographical sense) distributed processing.

  • Java is cross platform...and works all the time.

    MS didn't 'ruin' Java in any way, IMHO. One of the major reasons I'm now such an advocate of Java is because of MS and their extensions, which let me choose Java as an alternative RAD language when I only want a Windows application (other choices like VB aren't attractive anymore).
    I'm not a stupid programmer - if I want cross-platform, I'll refrain from using WFC, JNI or J/Direct. Basically, I treat Java as a damn fine language, and also as a cool way to write cross-platform apps as long as I'm willing to give up features.

    That said, the thing that Java has failed to deliver is features - because it's cross-platform, features obviously aren't left up to programmers but to the VM vendors (and then really only to Sun, because unauthorised additions are 'evil').

    For what this person is asking, I'd say Java is the way to go if they value their time.

  • How'd you like the MS JVM's handling of COM/Java? I think it's damned useful. I can see where the people who run around yelling that MS is trying to pollute Java are coming from - but from a developer's point of view, I think it's great. More choice.
  • by C-)) ( 66215 ) on Wednesday October 13, 1999 @12:21AM (#1618058)
    First off - IMHO there are too many choices - perhaps this is a problem with the xNIX world - but this is a side point...

    Second - This is where I get burnt...

    We've now written a good few distributed systems using NT and COM/DCOM. What's different about this? Well, we also communicate freely with several Sun-based products - this is perhaps where COM is kinda neat.

    Many of our Sun applications come with their own APIs and already support remote connections. The drawback here is that they need C/C++ to talk freely. So we create a small wrapper around this and make it into a COM component. We then use VB as the glue to manipulate these small components, as well as to talk to our databases.

    This all seems to work rather well - I know this will cause some laughs - but VB really is a great language for gluing things together and accessing databases through either ODBC or ADO. If you want fast crunching, create a component in C/C++ and expose it through an API.

    The whole thing is easy(ish) to maintain, but this is perhaps because we did a proper design first - I guess this is the main thing - get the design right.

    We are currently looking into Java - but this, as has been previously stated, is not quite the cornucopia it promises. Perhaps it's better to forget the concept of one language - after all, a language is merely a tool for getting a job done - so choose the right tool and the job is undertaken efficiently and correctly.

    C-))
  • I've been part of a development project with a major motor manufacturer for the creation/translation/distribution of technical documentation worldwide (workshop manuals, training guides, owners guides etc).

    The project is spread over AIX/Solaris/NT running on a series of boxes in the UK and US. We use a combination of Orbix on the AIX box, providing services in C++ talking to legacy data applications, and Java services running on NT/Solaris with OrbixWeb. The performance of the Java services has never been a problem with this application, and I don't want to start an advocacy thread, but we see massive benefits in running with Java services. We can move these services around the hardware on the network without code changes when we see changes in usage profiles. In reality this means we can release test services to small numbers of users on low-spec hardware and then roll the same services out onto more powerful kit.

    My belief is that the tooling provided with the commercial Java-based ORBs could easily be replicated within the ORB provided by Sun free as part of the J2EE platform.

    Given a blank sheet of paper I would very seriously consider using commercial ORBs for our legacy services (C++) and a pure Java solution for the rest of the network. Does anyone have experience of attempting to mix the J2EE ORB with other solutions?
  • I'd suggest (like many others) that your best approach to a multiplatform environment would be to go straight to Java 2 Enterprise Edition (J2EE) - and specifically Enterprise JavaBeans. This will give you access to a range of different distributed computing frameworks based on everything from RPC, to CORBA, to XML. With the backing of Sun and IBM, it's a well-supported way of working.

    One of the more off the wall reasons for going to J2EE is also Sun's recent purchase of Forte. Forte have been around a long time now, and have a mature distributed computing platform in their TOOL environment. Its partitioning and high availability features are second to none, and Forte are extending these to Java with their application server/distributed computing development environment SynerJ.

    Of course, if you want to target C code at a range of different platforms, TOOL will do the job for you... and will integrate with the Fusion XML-based messaging architecture.

    IBM's WebSphere is more immature, but promises standards-based integration with TXSeries and MQSeries environments if you have to target legacy systems. Don't overlook legacy system integration, as any large distributed application development is likely to need to link to existing infrastructure and data sources...

    This is not a new problem, and the tools do exist to help you solve your issues. However, they are likely to be proprietary or part of proprietary environments...

    S.
  • by Kitsune Sushi ( 87987 ) on Wednesday October 13, 1999 @12:33AM (#1618062)

    (more sleep-deprived posting from yours truly)

    C/C++ is 100% portable? Sure, a for loop works the same on all platforms. Or does it? Better make sure you have ANSI compatible compilers. Better turn off any non-ANSI features.

    If you're not using ANSI, you're not really using portable C. C is highly portable if you comply with ANSI. And you know, if you don't know how to force your compiler to compile under strict compliance with ANSI, you really don't have any business coding in C/C++. Besides, practically every C compiler in existence supports the ANSI C standard (in that it allows you to compile ANSI C either by default or by conscious choice). Those C compilers which do not provide ANSI support are not in very widespread use.

    What about things that ANSI does not specify?

    Like graphics programming? =P Yes, the original poster was incorrect in their assertion that C is 100% portable (as an entire language). ANSI C is extremely portable, however. One might even go so far as to say that, except for those parts of your program which go beyond the scope of what ANSI covers, if you're not using ANSI C, you're not really using C at all.

    Actually Java pushes the limits even further with support for threading and graphical interfaces. Give me a C/C++ program that has threading. Next get that to even pretend to work cross platform. It ain't gonna happen. Give credit where credit is due!

    Are you trying to tell me either a) that the Linux kernel doesn't support threading or b) that the Linux kernel is written in Java?

    I haven't looked at Apache but I would really be surprised if it did not contain plentiful #ifdefs and tons of platform-specific Makefile foo. Same with GIMP. Having personally done cross-platform porting for a commercial product, I can tell you C/C++ code porting is NOT a trivial task.

    This in no way refutes the original poster's assertions. No one said that portability in C was easy... Obviously portable C code is going to include #ifdefs - they're part of what makes C so portable. Indeed, by using them and ANSI C, you can make your C programs 100% portable to every platform you so desire. Or are you in some way trying to suggest that taking advantage of every resource C makes available to you in order to make your code portable is stupid? That's about as "sound" a theory as PC Week saying that no one would apply 21 security patches to Red Hat Linux.

    Even with Java deficiencies, I'd rather Sun do the work of converting operating system depending things so I can spend my time making my code do something useful.

    This quote, like the one before it, adds nothing useful to the conversation. If you'd rather "play it safe", that's your decision. Many of us, myself included, prefer the added speed, power, and flexibility inherent in, say, C++. I'm not frightened of pointers, thanks.

  • As someone who's just started playing around with CORBA, it seems like the source-code compatibility isn't too bad.

    The mapping between the IDL files (Interface Descriptions - kinda like header files) and the interface to the generated code is all pretty standardised. A few #defines and a couple of macros/small inline functions should solve the residual problems. The IDL files themselves are ORB-independent - that's the entire point! Finally, code compiled with one ORB on one platform can talk to another ORB with code in another language on another platform using IIOP.

    As far as I can see (and I admit I've only really been playing so far), CORBA will be the least of my portability worries... the joys of multiple GUI interfaces and incompatible build environments will probably be far nastier!

  • by Grimthorpe ( 36603 ) on Wednesday October 13, 1999 @12:50AM (#1618065)
    CORBA will allow you to do most of what you want, you just have to use multiple ORBs.
    I am part of a project that developed a CORBA facility, in conjunction with people from Norway. We each took the IDL and developed a client and server each, using different ORBs, platforms and languages (Delphi, C++, Java, DOS, Unix, NT), and managed to get them all talking to each other within 30 minutes of testing via the Internet.
    In addition, there are products (albeit expensive) that bridge CORBA and COM, and I have had Visual Basic talking to my CORBA objects, and vice-versa.

    As you said, there are plenty of open-source ORBs, each with their own set of benefits, but they should all be able to communicate via IIOP.
    I'm fairly sure that omniOrb, ILU, MICO, Sun's JDK 1.2 and TAO/ACE will all co-exist happily, and are all gratis, and all apart from the JDK are open source.

    Perhaps I am a bit biased, given that I've spent the last 5 years developing CORBA products...
  • by Anonymous Coward
    ACE is definitely worth a look; they've spent a lot of time on cross-platform issues, so you can write C++ code that will compile on different platforms without change. I've used it under NT and Linux. You can also choose the level at which to use it, from basic networking right up to full-blown CORBA. They have also provided platform-independent function calls for stuff like multithreading.

    The only disadvantages are that it's quite large and takes a bit of learning. The docs are a bit on the academic side, but the mailing list is excellent, with very prompt replies.

    I believe they are also working on a Java version, but I haven't looked into that yet.
  • I love Java and the promise of 'write once, run anywhere',

    Wrong! The promise is: "compile once, run anywhere".
    If you use C, it would be: "write once, compile anywhere".

    C programs like Apache, GIMP, etc. are well known to be compatible with this 'write once, run anywhere' promise and also are extremely fast and powerful.

    Well... no. GIMP will only compile if you have the GTK library available for your platform. Doh!
    And Apache? I'm no Apache expert, but does it use BSD sockets? Well... "write once, port a lot".

    So, why not choose your favorite CORBA library written in C and join the development team to add the features you need?

    Ah yes, and of course the CORBA library exists for all platforms? Yeah, right.

  • ACE [wustl.edu] is an open-source C++ framework that implements common concurrent design patterns, tested on a variety of platforms using a common source tree - not to mention a Java version. As for multiple languages, that's a little harder, because some languages make certain assumptions. It is easier to write C++ wrappers around them.

    Another future possibility may be OpenMP [openmp.org], which allows a sequential and a parallel shared-memory version to reside in the same codebase using compiler extensions. Although there are specifications for several languages/platforms, I don't think anyone has tested for inter-vendor compatibility as yet. However, it is still evolving.

    The major problem is that once you start wandering outside the most commonly used languages (C, C++, Java, Fortran) into more exotic variants (Amoeba, Occam, Z, etc.) you will run across differences in conceptual models (actor, CSP, timed lambda calculus, etc), which is like mixing different mathematical coordinate systems - ie not recommended unless you really grok the theory and have a firm grasp of what you're trying to do. Coding is complex enough without making life impossible for yourself. Keeping things simple will then become your best friend.

    LL
  • I love Java and the promise of 'write once, run anywhere', but after all it's just a promise and has not been reached yet

    You must be using some other Java! I am personally working on a distributed business object layer for "A Big Firm (tm)" which provides standard business functionality from any platform. For instance, we have Java classes which run on our Unix backend for automated processing, but which can also be wrapped (trivially) in a COM wrapper and run inside an Excel spreadsheet. Not only that, we can build them into a Swing applet and run it in a browser. All this without sacrificing any features, or writing complex "cross-platform enabling" code (like the usual spaghetti of #ifdefs found in C/C++). There is no way we could have got this level of portability and clean design from a C/C++ solution in the timeframe we have for this project.

    I was very sceptical about Java initially, but although it is still a work in progress, I would say it now has the power to do some pretty amazing stuff. I will certainly never use C/C++ for anything along these lines again... just too much stress.
  • I'm facing a cross-platform demand as well. We're going to use Java, as it is the safest network/cross-platform language there is. The other alternatives would be JavaScript or ActiveX, which aren't safe - especially ActiveX, which was developed as a Java competitor; there have been many security leaks, and we haven't seen the end of this yet.

    Cross-platform nowadays means something different than it used to. In the early days, cross-platform meant "multiple processors"; it now means "multiple processors, multiple OSes". If you want to develop cross-platform the hardware way, install Linux on all machines; if you want to be cross-platform in both hardware and OS, use something like Java, JavaScript or ActiveX.

    Try to avoid CORBA: it is middleware, and middleware is glue for different platforms, and a pain medicine. Though IF you HAVE to choose an ORB, DO choose CORBA - more than 300 organizations participate in CORBA's development, and there won't be an end to this for the next 10 years. In my opinion CORBA has continuity and has the future.

  • This is the way to go. It is an open, simple, cross-platform, language-independent web-based protocol.

    MS is basing its new SOAP [microsoft.com] "standard" for distributed objects around it, too - but don't let that put you off.

    The good thing about a standard like this is that it is SO simple that you can write your own server, so you really understand how it is working, if you want.

    It is highly scalable, too - all the solutions that have been developed for serving big web sites are immediately useful now.

    See Userland [userland.com] for more info.
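
    To back up the "simple enough to write your own server" point, here is a minimal end-to-end sketch using Python's standard library (the `add` method and loopback addresses are invented for illustration; this is the editor's stdlib-based example, not Userland's implementation):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# A one-method XML-RPC server on a free loopback port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda a, b: a + b, "add")

# Handle exactly one request in the background.
t = threading.Thread(target=server.handle_request)
t.start()

# The client call is just an HTTP POST carrying the XML payload.
proxy = ServerProxy("http://127.0.0.1:%d" % port)
result = proxy.add(2, 3)
t.join()
server.server_close()
print(result)  # 5
```

    Because the transport is plain HTTP, anything already built for scaling web sites (proxies, load balancers) applies directly, which is the scalability point made above.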

  • by Kitsune Sushi ( 87987 ) on Wednesday October 13, 1999 @02:10AM (#1618072)

    ...I would have to say that if you hadn't mentioned C++, I wouldn't have been able to maintain my relative calm. ;) However, I think that many are missing the entire point. The person who submitted this query doesn't want to stick to a single language. Therefore, I doubt they found the original post on this thread to be of much value. It did, however, seem to spawn a number of subthreads, many of the posts in which could be accurately assessed as part of a holy war - something else the person who submitted this query didn't want. A la:

    I would like to program distributed systems using the same code base on multiple platforms and multiple languages therefore I am asking around...
    I just have a wishlist and I am looking for answers and opinions, not a holy war.

    Let's stay on the ball, here, people. A Java vs. C holy war is completely missing the point of this entire discussion.

  • C programs like Apache, GIMP, etc. are well known to be compatible with this 'write once, run anywhere' promise

    Really? If that is true, why hasn't apache always run on NT? From http://www.apache.org/info/apache_nt.html [apache.org]:

    (December 22nd 1996)

    Apache has been ported to a very wide array of Unix boxes - in fact, we're not aware of any Unix boxes which Apache can't run on. This has been possible by making conservative architecture decisions, by modularizing the code as much as possible, and sticking to POSIX and ANSI wherever possible (and functional).

    However, due to the code's legacy, and use of metaphors and systems which are Unix-specific (such as, having multiple processes all accept()ing connections to the same port), the road to porting to Windows NT has not been a pretty one.

    Things are better now, but if C were completely portable (and useful in its entirely portable form) then this would never have happened, would it? Even now, I'd be highly surprised if the Apache source code didn't have a few #ifdefs in...

    Having been involved in porting a commercial product from Tru64 Unix to Solaris and Linux (which shouldn't be nearly as hard as going to NT), I can say that it's not as easy as you might think - especially when intricate optimisation (not to mention multithreading) is involved.

    Jon

  • Obviously portable C code is going to include #ifdefs... They're what helps make C so portable.

    To me, they're what signifies that C isn't truly portable. Or rather, it's port-able but not automatically port-ed, if you take my meaning. If you have to write code which conditions on which operating system it's on, then it's not portable IMO. If a piece of source code is portable, I should be able to compile it on a brand new operating system which supports that language, and have it work with *no* code changes. If you're conditioning code on operating system, that's not going to happen.

    ANSI helps to an extent, but it's not a silver bullet - otherwise the book in front of me, "The Annotated ANSI C standard" wouldn't have sections for undefined or unspecified behaviour. Also, I believe there are other things that ANSI doesn't provide, in terms of library functions. Is select() ANSI, for instance? (It may be, but it's not in the index of the book, so I'm guessing not :)

    Sure, when you're coding something that can be done in ANSI C, it makes sense to do it, being careful not to use any of the undefined/unspecified behaviour. It just isn't always that easy, unfortunately.

    Jon

    PS: I realise all this is rather perpendicular to the original topic, which has more to do with frameworks than languages - but I'd be interested to know just how far ANSI C can take you.
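
    By way of contrast, here is a sketch of the kind of network code that drags #ifdefs into C, written against Java's standard java.net library, where the platform differences are hidden by the JVM. The class name is invented and it's a local loopback round trip, not a real server:

```java
import java.io.*;
import java.net.*;

// Hypothetical sketch: a one-shot loopback echo using java.net.
// The same source compiles and runs unchanged on Unix or NT - the
// platform differences that force #ifdefs in C socket code are
// hidden inside the standard library.
public class PortableEcho {
    // Starts an echo server on a free port, sends msg to it,
    // and returns whatever comes back.
    public static String roundTrip(String msg) {
        try (ServerSocket server = new ServerSocket(0)) {   // 0 = any free port
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine());             // echo one line back
                } catch (IOException ignored) { }
            });
            echo.start();
            try (Socket c = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(c.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(c.getInputStream()))) {
                out.println(msg);
                return in.readLine();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

    No conditional compilation anywhere; that's the point being argued, not a claim that Java is always the right tool.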

  • Ok gang. We went through a similar thingie a 'few' years back when most of you were not even a twinkle in your daddy's eyes yet.

    It was the 'real programmers' versus pinko compilers. (Search your favorite search engine [hotbot.com] for 'real programmers don't eat quiche' to catch up on your hacking history.) 'Real programmers' only used assembly language and did not trust compilers. They did not believe. They are, of course, mostly extinct now. Nothing, except for some drivers and low-level code, gets implemented in assembly any more.

    What do we do these days? We do not even look at the compiler-generated assembly code any more.

    What's next you ask? Complete model-driven, implementation independent code generation, of course!

    How? What? When? Where?

    At this site [projtech.com] and this site [objectswitch.com].

    How about being able to generate C++, Java, C?

    Poor performance? No problem! Just swap the software architecture. Single process/single processor -> multi process/single processor -> multiprocessor/fully distributed. Whatever does the job...

    Why commit to platforms? Switch from CORBA -> DCOM, Sybase -> Oracle, C++ -> Java and so on (and back again). If XOOF++ comes along, all you have to do is modify the affected part of the software architecture and generate the new code. No changes to the source models are required to generate code based on a different implementation!

    Oh - did I mention that the resulting code is practically bug free? It does what you want, as fast as you want. No memory leaks, core dumps, etc.

    It would behoove the open source community not to get stuck in the implementation layer and not become the 'real programmers' of the future. Once these models really catch on, it will be great to have a set of open source tools and models ready for the ultimate blow to proprietary code.

    -- l2b
    #include "std.disclaimer"
  • Ok, I'm about to sound either (a) guru-like or (b) just really patronising. So I know you're not going to like this, but...

    Stop. Forget it. Forget distribution, cross-platform, whatever. Unless you're doing a major, million-dollar project for some telecoms company or something, you really can't justify the use of ORBs or any other distributed nasties like that.

    In your question, you don't mention your requirements once. Or rather, you don't mention the users at all. What do they want? I'll lay money that they didn't say something like "We want a payroll system, oh and we want CORBA/DCE/Whatever distributed processing and we want loads more machines than we actually need".

    Sorry to be dismissive, but I've worked on a number of projects that were 'distributed' or 'cross-platform' for no apparent reason. They all went way over time and way way over budget, and never really worked 100% anyway.

    If you *really* want cross-platform, use Java. If you *really* want distributed processing, use plain old sockets and a simple text-based request/reply protocol like HTTP, or roll your own.

    Go back to your requirements and ask yourself what technology you really need to meet the needs of the users. Then ask yourself again. Think about it.

    In the words of Kent Beck and others at his level, "Do The Simplest Thing That Could Possibly Work".
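
    For what it's worth, "a simple text-based request/reply protocol" really can be this small. Here's a hedged sketch in Java - the verbs (PUT/GET) and reply strings are invented, and the socket plumbing is left out so only the protocol itself shows:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a one-line-per-request text protocol, in the
// spirit of "the simplest thing that could possibly work". Transport
// (sockets, HTTP) is deliberately omitted.
public class TinyProtocol {
    private final Map<String, String> store = new HashMap<>();

    // Takes one request line, returns one reply line.
    public String handle(String request) {
        String[] parts = request.trim().split("\\s+", 3);
        switch (parts[0].toUpperCase()) {
            case "PUT":
                if (parts.length < 3) return "ERR bad-request";
                store.put(parts[1], parts[2]);
                return "OK";
            case "GET":
                if (parts.length < 2) return "ERR bad-request";
                String value = store.get(parts[1]);
                return value == null ? "ERR not-found" : "OK " + value;
            default:
                return "ERR unknown-verb";
        }
    }
}
```

    Human-readable on the wire, trivially debuggable with telnet - the same property that made FTP, SMTP and HTTP approachable.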

  • After many hit-and-miss attempts at combining a Java front end and a Perl back end, the solution eventually gravitated towards using XML over HTTP.

    Good to see others thinking the same :-) but here are my reasons anyway

    1) Vast array of HTTP servers (Apache with mod_perl worked wonders in this case).

    2) Debugging in human-readable format using a browser.

    3) XML parsers/checkers available in most languages (Perl and Java I know about).

    4) Resource location using good old DNS, rather than re-inventing the wheel (CORBA springs to mind).
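
    To make the Java side of this concrete, here is a hedged sketch of parsing the kind of small XML reply a mod_perl back end might emit over HTTP. The <result>/<status> element names are invented; the JDK's built-in DOM parser does the work:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Hypothetical sketch: pulling one field out of a small XML reply.
// Element names (<result>, <status>) are invented for the example.
public class XmlReply {
    public static String status(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            // Grab the text of the first <status> element.
            return doc.getElementsByTagName("status").item(0).getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

    Because the payload is plain text, the same reply can be eyeballed in a browser, which is the debugging win mentioned in point 2.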
  • I completely agree with you...

    EJB is a really nice object-oriented distributed application framework.

    It is extremely portable; anyone with real Java experience knows that Java portability on the server side is excellent. In fact, I've been using BEA WebLogic Server on NT, Solaris and Linux without hitting a single portability problem.

    Some vendors offer really high-performance implementations, WebLogic for example. Clustering and fault tolerance are becoming common on the high end.

    There are EJB server implementations costing anywhere from $0 up to $10,000+ per CPU.

    The really nice part about this framework is that the stuff you develop will run just as well on a small no-cost EJB server as on a high-end one. The only things that change are the deployment attributes.

    And with container-managed entity beans, you will no longer have to write a single SQL statement in your life!

    For inter-language interoperability, you can use CORBA to talk to most of the EJB server implementations. In fact WebSphere uses CORBA as its protocol to talk to the beans.

    This framework enables you to concentrate on the business logic instead of the plumbing.

    Note that the EJB 1.0 spec did not address some things that caused portability problems between different servers, but the EJB 1.1 spec resolves most of them.

    This technology is not perfect, but having used both CORBA and EJB, I have to say that EJB is far easier to use and offers more portability than CORBA. It also lets you develop applications at least five times faster than with CORBA.
  • by Anonymous Coward
    We use ACE for all our threading. It's probably the best thread package out there. We use TAO for our interprocess communication and it also is just super. I don't know of a better, more compliant CORBA package. And they are both free with source! Our applications are written in a mixture of C++, Java, and Delphi. We use CORBA across all the languages. Specifically we use the Java ORB which ships with 1.2, TAO, and the Naming Service that ships with TAO. You can now get commercial support for TAO and ACE from companies like OCI and RiverAce. Except for the Delphi code our apps run on both Windozes and Unix.
  • "Are these very strange requirements/wishes or would other people be willing to sacrifice ratified standards compliance and possibly performance for orthogonality of language/platform availability? I would like to be able to write code for Linux/Unices/Windows in my languages of choice (for me this would be Perl, Java and C++) without having to use multiple implementations on the different platforms."

    I work for an academic institution, and let me tell you, WE have weird requirements. Our IT shop is going full steam on CORBA and Java. Basically all client pieces we are writing and will be writing will be written in Java for heterogeneous campus machines (you can never tell what's out there). Our middle tier is also Java, while our third tier is native or an ugly native/Java hybrid. It all works lovely though, because we use CORBA and anything can talk to anything. We use Visigenic's ORB product, VisiBroker. I've heard of omniORB, an open source ORB, as well as other custom ones for other open source applications (like the Gnome and AllianceOS ORBs).

    Java's great because you have the exact same codebase pretty much wherever you run it. We can swap our objects around, load-share them, etc. Our middle tier ties in to web servers running servlets, so we can present applications in a browser if we want. The performance hit is really not that big a deal. It is certainly worth the automatic portability and flexibility. Plus Java brings some other cool stuff with it that is great for distributed computing (serialization for one).

    I don't know if it will fit your requirements, but CORBA and Java is the basis for our distributed heterogeneous system.
  • (yet more sleep-deprived posting)

    ..for someone who isn't a C trainer on the clock to explain in full detail. Most decent books on C should teach you all about most of this stuff.

    The only thing I'd ask you to keep in mind is that when a person talks about how portable a language is, they're usually talking about how portable software written in that language is rather than the language itself. Notable exceptions (sort of) include Perl, which is itself a C application. You should also note a distinct difference between the usage of the terms "portable", "extremely portable", and "100% portable". C is "extremely portable", not "100% portable".

    No, C does not do the porting for you. C doesn't do a lot of things for you. Languages are written with certain concerns in mind. None of them are a silver bullet. If there was a silver bullet language, we wouldn't have so many. C focuses on performance, power, flexibility, portability, etc. If C did the porting for you, you'd likely take a performance hit. It's a constant trade-off. Saying C isn't portable is an incorrect statement. Saying you'd rather have a language that made your software portable for you is an opinion, and one anyone is welcome to, like all opinions. Just don't confuse the issue.

  • Well, I hadn't planned to announce this yet, and we're probably going to get ./'d even though the web site is wrong (for example, it says community source, but I changed my mind and am now releasing as unencumbered open source).

    I've worked in distributed networking for a long time, first in video games, then in multi-user VR. One thing I was always upset about was the low quality and high cost of available solutions to this problem (I wanted to stick to 3d engine design). On the other end of the spectrum were the "enterprise" solutions, where I was upset about the low performance and VERY VERY high cost.

    To make a long story short, I'm about to release the first in a series of unencumbered open source libraries related to this problem. The initial release is of the distributed networking component.

    It's an implementation of a Java JMS provider, where JMS is Sun's reference interface-only specification for enterprise messaging: Java Message Service. Available enterprise message bus software like JMS providers can price up into the 7 digits, and still suck.

    We chose Java as a reasonable first pass at cross-platform, but the code was designed to port to ANSI C/C++ easily - we already have XP ANSI C/C++ versions of most of the applicable Java platform libraries. The network protocols are defined in ietf-draft format and have no Java dependencies whatsoever. We already intend to release native Win32 and Linux versions of the server, and possibly the client.

    I could type for days about performance and features, but suffice to say it implements the full JMS Publish/Subscribe spec, with substantial extensions for security (one evaluating user is preparing for NSA certification, as we use secure protocols and provide features like compartmentalization across bridged secure networks), packet and real-time streams managed using JMS "Topic" abstract addressing (one demo app is full duplex audio chat), and reliability (n-way hot failover, adaptive load balancing, and massive scalability across complex network topologies). It's about to interact with LDAP servers for user authentication, it uses XML-based configuration files, supports auto-update with dependency handling according to site policies, is getting a JSP-based remote admin feature, and washes dishes on alternate Thursdays.

    BTW, if you want to send CORBA, DCOM, OLE, XML, or any other funky object, just encapsulate it. JMS tries to abstract away these issues - it's a good API.

    If anyone is interested in this, the base API alpha release (including FULL source) should be early-end of next month, just email me. No mailing list yet, we're still setting up the public CVS server and new web site. I'm releasing a pre-pre-pre alpha next week, so it's not like I still have 90% of the code left to complete.

    Preemptive comment - Please minimize the "vaporware" status flaming, I mentioned that issue in my subject line. I'm giving away a substantial percentage of the IP I've personally developed over the last 5 years here, primarily 'cause I need more people to help me improve it and the rest of the VR system I'm building. I'm tired of making VC people rich, now I just want to see something cool make it to market. If I have to give it away, so be it...
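
    For readers unfamiliar with JMS, the "Topic" abstraction mentioned above can be sketched in a few lines of plain Java. This is a toy local simulation - not the javax.jms API and not the poster's product - just the shape of publish/subscribe addressing:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy simulation of publish/subscribe Topic addressing: publishers
// address a named topic, never an individual receiver. NOT javax.jms.
public class TopicBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // Register a listener for a named topic.
    public void subscribe(String topic, Consumer<String> listener) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(listener);
    }

    // Deliver a message to every subscriber on the topic; no-op otherwise.
    public void publish(String topic, String message) {
        for (Consumer<String> l : subscribers.getOrDefault(topic, Collections.emptyList()))
            l.accept(message);
    }
}
```

    The decoupling is the point: a real JMS provider adds persistence, transactions and delivery guarantees on top of exactly this addressing model.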

  • It depends what you need to be cross-platform. If it's the user end, use Java running in a browser, quick and easy, and use either sockets or RMI to call the back end. If you need back-end distribution then I suggest you use JavaSpaces, and then link the Java via JNI to call your C/C++ or Perl; see

    http://developer.java.sun.com/developer/Books/JavaSpaces/introduction.html

    for an introduction to JavaSpaces, also check out Java RMI over IIOP which might be another way to go

    http://www.javasoft.com/products/rmi-iiop/index.html

  • C compilers conforming to ANSI C behave very similarly, I agree. The standard has been around for something like 17 years. I don't agree that you can get such stable behavior with different C++ compilers. Still, in either case, what I mean by things ANSI doesn't specify are the ambiguous cases. Do you realize that ANSI has ambiguity in a few cases of the language? It simply is not a full specification of C/C++. Here's a stupid one, but the only one I could quickly find from the C FAQ: ptr = malloc(0); Tell me what that returns.

    ANSI unfortunately covers quite little when you think about it. It defines language semantics, not how the language interacts with the system. This application deals with sockets. What does ANSI say about that? That is the exact thing I want a cross-platform language to specify in a uniform manner. What is your opinion of Java and C/C++ in this regard? You've tried to defend C but haven't said much about what you think of Java. Java is very close to achieving this.
    Are you trying to tell me either a) that the Linux kernel doesn't support threading or b) that the Linux kernel is written in Java?
    The kernel is 100% portable C? You mean there are no CPU-dependent modifications to the kernel when cross-compiling? I think we have to clear something up here. Portable code DOES NOT need #ifdefs. #ifdefs are the hacks that people need to get around the inherent cross-platform deficiencies of C and C++. Did you know assembly is cross-platform? Yeah, I can just #ifdef around the code for my specific CPU at the time. Get my point? We are trying to determine the BEST way to have a cross-platform system. Your #ifdefs are not the best way of doing this.

    I still don't get why people say things like this...
    This quote, like the one before it, adds nothing useful to the conversation.
    and their comments do nothing more than criticize mine. Again, you must have lost the point of the original article, which was CROSS-PLATFORM development. You are going to spend a lot of time fiddling with getting C and C++ to work cross-platform when the better choice for this application is to take Java for what it is: a good, feature-rich, cross-platform language.

    If you have something intelligent to say, you'll log in or get moderated up so I can read it. Otherwise, you're wasting your time.
    Apparently I did get moderated up as I usually do. I have found very few reasons to actually log in. Having the chance for you to read my comments is not one of them.
  • by Anonymous Coward

    [noting that no real requirements were mentioned, this is just a general answer]

    Many people have suggested straight sockets, and, for the most part, I agree with them. Taking one example, telnet-based protocols have proven to be some of the most versatile, scalable, and approachable protocols around (FTP, SMTP, IRC, NNTP, HTTP, ...).

    If you're looking for a more structured protocol, Lightweight Distributed Objects (LDO) [casbah.org] is a modular set of specifications for building structured protocols. LDO also includes modules for basic RPC and distributed objects.

    -- Ken MacLeod
    ken@bitsko.slc.ut.us

  • While I have not used it, Xerox's ILU (Inter-Language Unification) project [xerox.com] has been around for a long time. It comes very highly recommended. From the same people that brought you GUIs, desktop publishing, Ethernet and distributed computing.
  • Java runs just about anywhere, including mainframes, so that should not be a problem.

    Perl runs on Windows, Linux, and Unixes, so that should not be a problem.

    For C++ you may want to try Qt, as it runs on Windows, Unix and Linux.

    Okay, now the 'beef'.

    Java: avoid J++, as it is not pure Java and may not work everywhere. Stick to 100% Pure Java. You can do most things in Java too; there is a very extensive set of APIs. You can do threads and sockets very easily. The only reason to use any other language would be speed.

    Incidentally, you never did mention what this program is supposed to do or what you are trying to program. If you are doing system-level programming, then cross-platform may be more difficult, but if you are doing a GUI program, try Perl/Tk for the front end and C++ for the back end. If you code right you can do it all in one language though, and still make it cross-platform, with only a few if (os == ...) {} statements.
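
    In Java, those "few if (os == ...) statements" usually key off the standard os.name system property. classify() here is an invented helper, just to show the shape:

```java
// Hypothetical helper: the occasional runtime platform check, done
// the usual Java way via the os.name system property.
public class OsCheck {
    public static String classify(String osName) {
        String n = osName.toLowerCase();
        if (n.contains("windows")) return "windows";
        if (n.contains("linux") || n.contains("sunos")
                || n.contains("hp-ux") || n.contains("aix")) return "unix";
        return "other";
    }

    public static void main(String[] args) {
        // Prints the classification of the running platform,
        // e.g. from "Windows NT" or "Linux".
        System.out.println(classify(System.getProperty("os.name")));
    }
}
```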

  • Why is it that some C++ advocates hate Java so much? Maybe I just haven't read enough comments, Usenet postings, etc, but I haven't seen Java advocates laying into C++ ever. Look at that last line - "I'm not frightened of pointers, thanks." Well I'm not frightened of pointers either, but I don't feel like an idiot for advocating Java, which seems to be the suggestion. I use and like C++ too, but the removal of pointers from Java does so much more than make it easier on a superficial level to learn. I'm sure you know this already, but it doesn't make your point for you. Do C++ advocates feel under some kind of threat from Java? Some of them certainly react to it as though it has them cornered (which I'm sure it doesn't). Or do C++ advocates feel that Java is a dumbing-down language, like VB? I hope this isn't taken as inflammatory, I really would like to know.
  • Could someone please explain the PVM acronym?

    And post a link?

    Thx.
  • Use Java for the remote procedure calls, but only for that. You can talk back and forth between Java and C, you know. I've actually done this; I wrote a C program that used the JVM libraries to make RMI calls to another machine that ran a small Java application that received them.

    You can be clever with threads, too, so that in your client you can make a call (that immediately returns) that dispatches a thread to perform the RMI call and do something when it returns. The thread is a standard Java thread, even though you're using C to instantiate it! I've done this with good success (admittedly not for a big huge highly important expensive project).

    RMI is simple and easy to use. That doesn't mean that you have to be a slave to the restrictions of Java (performance, etc.) everywhere in your code. Your network calls aren't expected to be that fast, anyway, so the layer that makes them may as well be a Java one that has clean, well-enforced, standard semantics.
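
    The thread-dispatch trick described above can be sketched in plain Java. slowRemoteCall() and the class name are invented local stand-ins for a real RMI stub invocation; the asynchronous pattern around it is the point:

```java
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Sketch: fire off a "remote call" on its own thread and return
// immediately, collecting the result later. slowRemoteCall() is an
// invented stand-in for an actual RMI stub method.
public class AsyncCall {
    static String slowRemoteCall(String arg) {
        try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        return "echo:" + arg;                  // pretend reply from the server
    }

    // Returns immediately; the call runs on a standard Java thread.
    public static Future<String> dispatch(String arg) {
        FutureTask<String> task = new FutureTask<>(() -> slowRemoteCall(arg));
        new Thread(task).start();
        return task;
    }

    // Convenience: block for the result, rethrowing unchecked.
    public static String await(Future<String> f) {
        try { return f.get(); }
        catch (Exception e) { throw new RuntimeException(e); }
    }
}
```

    The same dispatch method could be invoked from C via JNI, which is what makes the mixed C/Java arrangement the poster describes workable.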
  • Expect XML-RPC to be much slower than CORBA. Since it's human-readable, it requires a more complicated parser and it needs more bandwidth.
  • Correct.
    But you shouldn't use vendor-specific extensions to CORBA, which is harder to achieve than you might think.
  • I think that we have to assume that the original poster did enough analysis to arrive at the need for distributed/language independent processing.

    There are quite a few reasons to use distributed processing that make sense. First off, very few large or corporate systems exist in a vacuum. Most new systems either get data from, send data to, control, expose to a new audience (the web), or otherwise interact with other systems. So given that you need to send "messages" around, is rolling your own request/reply protocol on top of naked sockets or HTTP the simplest thing?

    I would argue that it is anything but. You have to build all sorts of features yourself that you could get for free from an orb or a Messaging system. There are RPC mechanisms being built on top of HTTP that emulate the other distributed processing mechanisms (interestingly enough Microsoft is ahead in this regard), but I don't get the sense that this is what you are recommending.

    I think that you have to ask yourself, what do you do after you have implemented all of these stand-alone systems? How do you get them to work together?
  • What you're asking for, NeXT had in spades. Since you are on a budget, the old NeXTSTEP development environment might fit your needs. Its multi-platform (NeXT, Solaris, HP-UX, Intel) code cross-compiled directly. Its Obj-C had some of the easiest APIs. It was CORBA compliant, with some POSIX support.

  • I couldn't find your email address, but I would be interested. mailto:noah@waquoit.com
  • "using the same code base on multiple platforms and multiple languages"

    Same code base with multiple languages? How about just doing it with one language. Smalltalk.

    VisualWorks Smalltalk from Cincom [cincom.com] runs on a boatload of platforms including Linux and has distributed features. The application can be generated once and dropped on any platform and will run without modifications (Where did you think Sun got the idea?)

    Also, IBM VisualAge for Smalltalk (VAST) has similar features that make it easy to program distributed apps. However, you will have to "recompile" for each targeted OS. They have not released VAST for Linux yet, however it is coming soon since VisualAge for Java is on Linux (VA/J was written using Smalltalk).

    In terms of development time, coding/testing a Java application would be faster than coding/testing a C/C++ application. Coding/testing a Smalltalk application would be faster than coding Java.
  • The Enterprise Java Bean architecture is a standard that will let you use any specific protocol (CORBA, RMI, vanilla socket, etc.) with any server (relational dbs, web servers, object oriented dbs, orbs, etc.). It encompasses all these other specifics in a distributed transaction framework that hides the details from developers who wish only to deploy apps.

    Using this standard, you can write the logic of your app and use the app server's tools to deploy it, and not spend time worrying about partial failures, network synchronization, how fine grained access control is managed, and all the other problems associated with distributed programming. And it doesn't mean you have to be a java fanatic -- the latest ejb spec isn't even based on rmi, it aims for iiop compliance instead.

    A couple of open source ejb app servers to consider:
    EJBoss [ejboss.org]
    JOnAS [bullsoft.com]

    Also take a look at the ejb-friendly xml work at Enhydra [enhydra.org] and of course, the world's greatest servlet engine, Apache JServ [apache.org].

  • *disclaimer* I am an employee of OCI directly involved in the commercial support of TAO.

    ACE/TAO actually provides several layers of abstraction to support distributed programming. At the high end there is CORBA, as implemented in TAO. TAO is built on ACE and capitalizes on ACE's services model. The services model is an abstraction layer which allows you to separate the transport mechanism from the service implementation. This layer allows you to write distributed services that can communicate via a variety of mechanisms such as sockets, fifos, pipes, whatever. Combined with the service configurator, which provides dynamic (run-time) configuration of applications, you are able to write very complex distributed applications.

    The next layer down is an OS abstraction layer. If the stuff described above is too much for you, ACE provides THIN (as in inlined functions) wrappers around most OS functions related to interprocess communications, threading, shared memory, etc. This is very useful in writing _portable_ multiplatform code. For work with sockets, there are some very decent C++ abstractions of sockets that take care of the dirty work for you (such as mapping a string hostname to an inet address struct, binding, etc)

    The footprint for ACE alone is about 700K for everything, and can be broken apart to some extent for smaller footprints. If you want CORBA on top of that, TAO adds an additional 1M (about).

    You can download precompiled binaries of ACE & TAO for linux from www.theaceorb.com.

    phil
  • Several years ago I was hired with a group of people to train C programmers in Objective-C and later Java. The degree of resistance was extraordinary. Every day it seemed we fought the same battles. The same people would say the same things over and over: "but without pointers...", "java is too slow, what about high-performance graphics?", etc. It's like, but we're not DOING high performance graphics. We're doing BASIC business applications.

    And as you said, I will concede to the goodness of C/C++. I used both of them long before java, and still do. There are pros and cons to both, times to use one, times to use the other. Seems to me the smart builder doesn't use just one tool, but knows which tool is the right one for the job. But they will generally not concede that there is any use to Java.

    But I had the same difficulty in teaching Objective-C, which DOES have pointers, and IMHO is a WAY nicer OO language than C++ any day. So I think it's not that Java is Java, it's that these people are resistant to any sort of change. They are either lazy or scared or both. Harsh, yes, but how else can you explain the one-sided (and borderline irrational) arguments that you see by these people?

    BTW, the aforementioned migration to java failed on all counts. A failure of java? of me? of them? hmm..
  • Yes, cross-platform C++ development is not there, and few people are trying to make it work. I consider this a problem.

    CORBA is neat. It is very cool and fun for writing distributed NETWORK applications. CORBA at this point is only well supported for C++. Several people do have C bindings, but all that I have played with are weak compared to the C++. If I wasn't distributing my processing across a network I might think again about why I need CORBA.

    Perl is nice; for any application of any size, be prepared to write your own interface. Consider Tcl: it doesn't do that much, but it embeds itself really nicely and doesn't have any licensing issues to worry about.

    Java is cross-platform ready and has some CORBA support, but I believe you have to be careful which ORBs you use, or write your own interface.

    I have worked on a couple of projects that did Unix/NT work. Both were with companies with a fair amount of money.

    One solution was to purchase RogueWave (which finally supports Linux!) [roguewave.com] and Iona's Orbix [iona.com]. RogueWave provides very nice C++ libraries, and they are cross-platform among Unixen and NT. I would love for someone to copy these to the open source world. Orbix is very feature-rich and deploys on Unix and NT (no Linux, boo), though at times it can be buggy, especially if you try to integrate security into your application. But Iona does provide easy programming interfaces. End users saw web pages in this particular application. We used CORBA to wrap legacy applications and then quickly integrate them into new applications. I have also seen nice CORBA implementations on mainframes, which is extremely nice, since talking to mainframes is generally a nightmare. But CORBA isn't very portable either, and you have to commit to a specific CORBA.

    The second project stuck with C and used plain old socket communication. We wrote an object layer in house (simple, but it did the job), then wrote a tool to create the .h and .c files out of the simple object language (simply implement objects as structures, or multi-dimensional arrays), and just used a C compiler to get the binaries. This doesn't give you any performance hit compared to C++, and it makes your underlying application code very portable. Abstract out your database interfaces and don't worry about them until you work with a new version. Build your own communication server that all applications register their messages with and receive messages from (an old style of programming, but it works) and use inet sockets. The end result is applications that are clusterable across n nodes, where the platforms don't matter. The GUI was a poor choice in this regard: they went with Neuron Data (now BlazeSoft, formerly Elements, formerly OIT) [blazesoft.com], which really is as slow as Tcl/Tk at run time without the portability. They now have a Java product that I haven't tried. Right now either Tk or Java makes the most sense for GUIs to me.

    Having seen what happens when you go vendor-bound, building code interfaces in house with extensibility and portability in mind gives you the greatest flexibility in the future.

    So those are my thoughts on how to tackle cross-platform distributed applications.
  • No, C does not do the porting for you. C doesn't do a lot of things for you. Languages are written with certain concerns in mind. None of them are a silver bullet. If there was a silver bullet language, we wouldn't have so many. C focuses on performance, power, flexibility, portability, etc. If C did the porting for you, you'd likely take a performance hit. It's a constant trade-off. Saying C isn't portable is an incorrect statement. Saying you'd rather have a language that made your software portable for you is an opinion, and one anyone is welcome to, like all opinions. Just don't confuse the issue.

    Okay, let's get some idea of what you mean by "portable" then. To a lot of people (I suspect), portable means "write once, compile and run anywhere" - and that's the goal of using ANSI C wherever possible, I would imagine. That's what Java does particularly well (in terms of language - the implementation of applet viewers is another matter, of course). That's using "portable" in the sense of "mobile" (a portable object being one that can be moved around at will).

    What do you mean by portable? Do you mean you can change the code to make it run on another box (ie it's "able" to be "ported")?

    As I've said before, C is good for portability when you want to do mostly processing, but when you start doing a lot of IO (particularly network and/or user IO) it loses out.

    Jon

  • I would strongly discourage you from writing your own. Developing an ORB is not easy.

    Instead, I suggest you reevaluate your requirements and restrict yourself to a given list of platforms and languages. IMO, if you go with Java, C++, and CORBA as the standard ORB, you'll be able to get enough done that the rest won't matter.

    There exist many excellent CORBA ORBs, some have already been mentioned. One that hasn't is Voyager, a product of http://www.objectspace.com [slashdot.org]. It's Java-based, and while it isn't specific to CORBA, Voyager objects can look like CORBA, RMI, or DCOM objects to remote clients. The core ORB is free to download and use (with some restrictions), and if you need more you can step up to the Professional version (which does cost $$$, yes).

    Disclaimer: I work for ObjectSpace. But I wouldn't if I thought Voyager sucked. :)

  • My home email is Kerry [mailto]

    And yes, I meant "/.", not "./"... :)

  • Why is it that some C++ advocates hate Java so much?

    I'd assume this is directed to me in particular. ;) I don't hate Java. Do I dislike it? Yes. Do I use it? No, because I dislike it. Why do I dislike it? It's just not my cup of joe. I also don't trust anything that comes from Sun just on basic principle. This is not to say that Java is a horrible language. It's good at what it was designed for, just as any good programming language should be. The kinds of applications I like to write, however, are better written in C/C++ or Perl. I don't use Java because I don't have a need to use Java. If I needed to use Java, I would.

    Maybe I just haven't read enough comments, Usenet postings, etc, but I haven't seen Java advocates laying into C++ ever.

    I'd say a lot of posts on this thread are dedicated to just that. ;) I myself had heard of but never actually seen any BSD users acting like elitist snobs spitting on Linux and using FUD tactics to get more "mind share".. Well, not until recently. =P Point is, there are zealots in all camps, and objective people in all camps. Just judge each OS, language, or whatever on their own technical merits and not what J. Random Holier-Than-Thou (insert OS, or language, or whatever) User claims as the Gospel Truth.

    Look at that last line - "I'm not frightened of pointers, thanks." Well I'm not frightened of pointers either, but I don't feel like an idiot for advocating Java, which seems to be the suggestion.

    You should take that in the context of the post I was replying to, and the fact that I found much of it quite offensive. Given that, I think I kept my relative cool. =P I have no problem with people advocating Java. Java programmers aren't necessarily idiots, just as people in general aren't necessarily idiots (even AOL users aren't necessarily idiots, contrary to popular belief). I'm not slamming Java, I was simply pointing out that just because C/C++ is a little more difficult to master, that doesn't make me an idiot for wanting to use it instead of Java, which takes care of a lot of things for me. Sometimes playing it safe is good. What language you use depends entirely on what you're trying to do. I prefer the added flexibility. It's just an opinion.

    I use and like C++ too, but the removal of pointers from Java does so much more than make it easier on a superficial level to learn. I'm sure you know this already, but it doesn't make your point for you.

    Perhaps I'm just too lazy to scan this thread at this point, but what point of mine doesn't it make for me? And yes, I already knew that.

    Do C++ advocates feel under some kind of threat from Java? Some of them certainly react to it as though it has them cornered (which I'm sure it doesn't).

    No, not really, on both counts, although I couldn't care less how popular Java is. If people want to use it, that's fine. Just like Perl isn't for everyone, neither is C/C++ or anything else. People just use what suits their needs best.

    Or do C++ advocates feel that Java is a dumbing-down language, like VB? I hope this isn't taken as inflamatory, I really would like to know.

    I'm sure some do. I don't. It is complex, in its own way. I just don't like how it was designed. Trying to learn it effectively after learning C++ first probably contributed to this a little. It hurts my mind, not because I don't understand it, but because I don't like it. Again, it's just a matter of opinion. It's not like I wish I could block off downloads of Java development tools, akin to how people like to protest in front of abortion clinics, not allowing people to go in. ;)

    And no, there's nothing inflammatory about that post at all. *shrug*

  • Many distributed systems can benefit greatly from a good group communication system. It provides a network architecture that allows for reliable, efficient communication between many hosts, as well as other useful features. You get the notion of groups, and without much effort your system can gracefully handle network partitions and systems coming up and down.

    SPREAD is a great system. It provides a consistent API on all platforms and is open source. REALLY FAST TOO! It is a client/server architecture, and clients can be Java, C, or C++ (or port it to your favorite language!). The server is written in C.

    Check it out at http://www.spread.org/
  • Say I write a CORBA "component", for example a spelling checker, and other people want to use it. One advantage of a COM-based component (yeah, I guess COM has some advantages ;-) is that anyone can register the dll/exe and then have access to it, because the COM infrastructure is going to be there.

    But with CORBA, you need to have an ORB installed. Is this an unrealistic expectation? CORBA may be a standard, but how standard is its installation on computers? Would it make more sense to distribute it as a static/shared lib instead?

    -Willy

  • This is long, but this is a problem I've been thinking about for a LONG time, so please bear with me and hear me out.

    I personally think that all forms of RPC are way over-hyped, and not actually terribly useful in the majority of situations in which you'd be tempted to use them. I know this is an unusual point of view, and needs justification.

    As a broad outline, you can plot different forms of IPC on a graph with one axis being speed, and the other being coupling or its inverse, portability. Speed and coupling, like space and time complexities in algorithms, can be related, but, IMHO, are largely orthogonal.

    For example, one naive form of IPC that I've seen commonly is for raw memory to be dumped onto a socket or a pipe. This is great for speed, but very bad for coupling (i.e. very non-portable). It might not even work for two different compilers on the same platform, and if you're trying to communicate between two different languages, each program needs to know intimate details about how the other lays things out in memory. Also, naively implemented, with little thought given to minimizing round-trip requests, it often isn't as fast as you'd think. (More on that later.)

    At the opposite corner in the graph are protocols like SMTP, or even worse, XML over HTTP. SMTP communicates using text messages that have to be parsed. SMTP parsing is pretty simple, although, looking at sendmail, it can get pretty hairy. XML parsing is even more complex. These parsing steps slow things way down. On the other hand, the well defined nature and robust structure of these protocols makes them extremely portable (i.e. low coupling).
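    As a toy illustration of that trade-off (the command and field layout here are invented, not SMTP's), compare parsing a text command with unpacking a fixed binary record: the text form survives changes in field width and ordering, while the binary form is fast but breaks the moment the two sides disagree about a single byte:

```python
# Sketch of the speed/coupling trade-off. The protocol is made up for
# illustration: a text command that must be parsed, versus a fixed
# binary layout that is merely unpacked.
import struct

def parse_text(msg: bytes):
    # Text form, e.g. b"SET temperature 42" -- robust, but needs parsing.
    verb, key, value = msg.decode("ascii").split()
    return verb, key, int(value)

def parse_binary(msg: bytes):
    # Binary form: 4-byte opcode, 12-byte key, 4-byte int -- fast, but
    # sender and receiver must agree on every byte (tight coupling).
    opcode, key, value = struct.unpack("!4s12si", msg)
    return opcode.rstrip(b"\0"), key.rstrip(b"\0"), value

print(parse_text(b"SET temperature 42"))
packed = struct.pack("!4s12si", b"SET", b"temperature", 42)
print(parse_binary(packed))
```

    Change the key field to 16 bytes on one side only and the binary version silently returns garbage, while the text version keeps working -- that is the coupling axis in miniature.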

    Now, the question to ask is where does RPC fit into all of this?

    Well, if you think about it, all forms of RPC, including CORBA, require a fairly explicit detailing of the messages to be sent back and forth. You have to specify your function signature pretty exactly, and the other side has to agree with you. Also, the way RPC encourages you to design protocols tends to create protocols in which the messages have a pretty specific meaning. In contrast, the messages in the SMTP or XML protocol have a pretty general meaning, and it's up to the program to interpret what they mean for it. This bumps RPC protocols up on the coupling scale a fair amount, despite the claims of the theorists and marketers.

    OTOH, the messages are platform and language independent. It's relatively easy to find a binding for any given language. Usually the code to decode the message from a bytestream and call your function is generated for you. And, the bytestream will be the same if you get your message from a perl program or a C++ program. This bumps it down on the coupling scale from the memory dumping protocols.

    As far as speed is concerned, there are two main problems.

    One is minor in that almost any protocol that is supposed to be language and platform independent is going to have it. That problem is marshalling. You have to get your data from the format your language keeps it in into your wire format and back again. While this problem is not as computationally intensive as parsing, it still exists.

    The other is major. The one thing you always want to avoid in any networking protocol is round trip requests. Round trip requests will always be inherently slow, if for no other reason than the speed of light. You never want to wait for your message to be processed, and a reply sent before you send another message. The X protocol works largely because it avoids round-trip requests like the plague. There is a noticeable lag when an X program starts up because the majority of the unavoidable round trip requests (for things like font information and a bunch of window ids and stuff) are made then. Even when the programs are on the same computer, round trip requests force context switches which eat CPU time. In short, they are BAD.
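    A back-of-the-envelope sketch of why round trips dominate, using assumed numbers (a 50 ms WAN round trip and 100 attribute fetches, both invented for illustration):

```python
# Cost of synchronous round trips versus one batched request,
# under assumed numbers.
rtt_ms = 50.0        # assumed network round-trip time
n_requests = 100     # assumed number of attribute fetches

naive = n_requests * rtt_ms   # one synchronous call per attribute
batched = 1 * rtt_ms          # one message carrying all 100 queries

print(f"naive:   {naive:.0f} ms")    # time spent mostly waiting
print(f"batched: {batched:.0f} ms")
```

    The hundredfold difference comes entirely from waiting on the wire, not from any computation on either end -- which is exactly why X front-loads its unavoidable round trips at startup.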

    Any RPC oriented protocol encourages you to think of the messages you are sending as function calls. In every widely used programming language, function calls are synchronous. Your program 'waits' for the result of a function call before continuing. This encourages thinking about your RPC interface in entirely the wrong way. It doesn't make you focus on the messages you're sending, and the inherently asynchronous nature of such things. It makes you focus on the interface as you would a normal class interface, or a subsystem interface. This leads to lots of round-trip requests, which is a major problem.

    CORBA generally advocates solving this problem by making heavy use of threading. But multiple threads have a lot of problems. They just move the decisions about handling asynchronous behavior to the most difficult part of your program to deal with it in: your internal data structures. Not only that, but multiple threads are a big robustness problem. It's difficult to deal with the failure of a single thread of a multi-threaded program in an intelligent way, especially if the job of that thread is intimately intertwined with what another thread is doing.

    Also, threads end up taking up a lot of resources. Both obvious ones like context switches and space for processor contexts, and less obvious ones, like stack space. One program I saw had a thread for every communications stream, and it needed to deal with over 300 streams at a time. It also needed a stack of 1M for each thread because some function calls ended up being deeply nested. That's at least 300M just for stack space. The program might have had no difficulty dealing with 300 communications streams at once had it not used threads. As it was, it constantly ran out of memory.
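    The arithmetic from that anecdote, plus an assumed few kilobytes of per-connection state for an event-driven alternative (the 4 KB figure is a guess, purely for illustration):

```python
# Memory cost of thread-per-stream versus a single event loop,
# using the numbers from the anecdote above.
streams = 300
stack_mb = 1.0    # 1 MB of stack per thread, from the anecdote
state_kb = 4.0    # assumed per-connection state in an event-driven design

thread_total_mb = streams * stack_mb
event_total_mb = streams * state_kb / 1024

print(f"thread-per-stream: {thread_total_mb:.0f} MB")
print(f"single event loop: {event_total_mb:.2f} MB")
```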

    In short, CORBA, and other RPC solutions look like a quick and easy answer to a difficult problem, but, measured on the coupling and speed scales, they are medium to highly coupled, and medium to low speed. Not as bad on speed as, perhaps, XML, but not all that great either.

    I would like to see (and am in the process of creating) a very message oriented tool for building communications protocols. It would concentrate heavily on what data the messages contained, not what the expected behavior of a program receiving the message should be. It would be supported by a communications subsystem that emphasized the inherently asynchronous nature of IPC, and made it easier to build systems that used that to provide efficient communications. It would provide auto-generated marshalling and unmarshalling of data, and even a provision for fetching metadata describing unknown messages, much like XML. It would also allow you to easily override the generated functions with your own functions that are tuned to the data you're sending. It would also make it easy to build and layer new protocols on top of existing ones, or transparently extend protocols in such a way that old programs would still work.
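    A minimal sketch of the "message, not function call" idea: each message is a length-prefixed, tagged record, and the receiver decides what the tag means for it. The wire format and names here are invented for illustration -- this is not the StreamModule system itself:

```python
# A self-describing message envelope: a tagged payload the receiver
# interprets, rather than a function signature both sides must share.
import struct

def encode(msg_type: str, payload: bytes) -> bytes:
    t = msg_type.encode("ascii")
    # length-prefixed type tag, then length-prefixed payload
    return struct.pack("!H", len(t)) + t + struct.pack("!I", len(payload)) + payload

def decode(wire: bytes):
    tlen, = struct.unpack_from("!H", wire, 0)
    t = wire[2:2 + tlen].decode("ascii")
    plen, = struct.unpack_from("!I", wire, 2 + tlen)
    payload = wire[6 + tlen:6 + tlen + plen]
    return t, payload

wire = encode("temperature.update", b"42")
print(decode(wire))
```

    Because the tag travels with the data, an old program can skip messages it doesn't understand instead of breaking -- which is how a protocol can be extended without recompiling every client.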

    As a last note, I would like to say that we've known for years how to handle the inherently asynchronous nature of user interaction. All of our UI libraries are heavily event oriented. Processing is either split up into small discrete chunks that can be handled quickly inside an event handler, or split off into a separate thread that communicates to the main program mainly through events. We need the same kind of architecture for programs that do IPC.
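    Applied to IPC, that event-oriented style might look like the following sketch (a socketpair stands in for real network peers so the example is self-contained; all names are illustrative):

```python
# The event-handler style of UI toolkits, applied to IPC: a selector
# dispatches small handlers as data arrives, instead of blocking per call.
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()  # stand-in for two real network peers
received = []

def on_readable(conn):
    # the small, discrete chunk of work an event handler should do
    received.append(conn.recv(4096))

sel.register(b, selectors.EVENT_READ, on_readable)
a.sendall(b"ping")

for key, _ in sel.select(timeout=1):
    key.data(key.fileobj)  # dispatch, exactly like a UI event callback

print(received)
```

    Nothing here ever waits on a particular peer; the program reacts to whichever stream has data, which is what lets one process juggle hundreds of streams without one stack per stream.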

    If anybody is interested in the beginnings of such a system, I have an open source project that I think is still in the 'cathedral' stage. I would like help and input on its development though, and I need help making it easy to port and compile in random environments. If you would like to help, please e-mail me. I call the system the StreamModule system, and it's architecturally related to the idea of UNIX pipes.

  • by Anonymous Coward

    The Netscape Portable Runtime offers non-GUI cross platform operating system facilities. It offers threads, I/O, memory management, networking, and shared library linking. Here are the docs at mozilla.org. It's been used in Netscape browsers and other enterprise products from the beginning, so it's well tested too. It's written in C.

    There is also Cosm [mithral.com], which is a set of protocols for distributed things like cracking DES and sifting SETI data. Probably a little too focused for your needs, but it looks cool, so I'd plug it ;-)

  • Two comments:

    1) For those who mention that you may not want to use Java because of speed issues, keep in mind that there are Java compilers available which can give you much better performance than interpreted bytecodes. It's write once, compile anywhere instead of run anywhere. I'm sure there are issues with compiling bytecodes, though, but it's just an idea.

    2) Another note is that, depending on your application, you may not need something so general as an ORB for use between your processes. For example, if your distributed processes are retrieving and updating shared data, why not use a network-accessible database? There are plenty of database interfaces available for many languages, and this might simplify your system if you have large, simple data structures. It's of course a speed loss and an increase in complexity, but databases are much more mature than ORBs. Think about whether or not a database might be better for your application.
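    As a sketch of that suggestion, here is the shared-data pattern with SQLite standing in for a real network-accessible database (so the example is self-contained; in practice you would point a client library at a networked server):

```python
# Processes coordinating through a shared table instead of an ORB.
# SQLite stands in for a networked database purely for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")

# "process A" publishes work; "process B" would poll the shared table
db.execute("INSERT INTO jobs (status) VALUES ('pending')")
db.commit()

rows = db.execute(
    "SELECT id, status FROM jobs WHERE status = 'pending'"
).fetchall()
print(rows)
```

    The database supplies the persistence, concurrency control, and network access you would otherwise get from middleware, at the cost of polling instead of invocation.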

    Of course, I haven't addressed the problem of using the same code on multiple platforms; but I think others are doing a good job of that. But I have to agree that more information on your specific application might be helpful. Good luck!
  • by Anonymous Coward
    Bamboo is exactly what you want. Currently supports perl/tcl/python/c/java. Built upon NSPR for portability, currently tested on SGI,linux,win32,sunos,aix. http://watsen.net/bamboo [watsen.net]
  • by Anonymous Coward
    PVM (Parallel Virtual Machine) is a software package that permits a heterogeneous collection of Unix and/or NT computers hooked together by a network to be used as a single large parallel computer. Thus large computational problems can be solved more cost effectively by using the aggregate power and memory of many computers. The software is very portable. The source, which is available free thru netlib, has been compiled on everything from laptops to CRAYs.

    PVM enables users to exploit their existing computer hardware to solve much larger problems at minimal additional cost. Hundreds of sites around the world are using PVM to solve important scientific, industrial, and medical problems, in addition to PVM's use as an educational tool to teach parallel programming. With tens of thousands of users, PVM has become the de facto standard for distributed computing world-wide. For those who need to know, PVM is Y2K compliant. PVM does not use the date anywhere in its internals.

    http://www.epm.ornl.gov/pvm/pvm_home.html
  • I can second the recommendation of ORBacus, I've been using it in a big project for the past few months and stability is way up over the previous custom socket-based system it replaced. It might be a good idea to start with the 4.0 beta release, as it implements newer CORBA standards -- particularly the POA and Interoperable Naming Service.
  • IBM did something very similar to this a number of years ago. They created a language called "Hermes" that allowed inter-process communication through pipe-like mechanisms. Sorry I don't have any references handy.
  • I agree. IDL means "Compile". "Compile" means "Test". "Test" means time and money down the toilet. Lotta pain. Little gain. When it's hard to adjust systems, kludges and work-arounds become common. Then those systems become a nightmare.

    I think you would love "Dynamic Data Objects for Java(tm)" if you don't want to compile even when elements are added. I can add checkboxes and datafields to data objects, corresponding "alter table add column..." statements in the database, and add a few table entries pointing to element names of the new data. Then a resync, and it's running... (Didn't see the "Compile" word, did you?)...

    After using this inexpensive and simple toolkit for Java, why in the heck would I ever want to go through CORBAtion again?
  • by jem ( 78392 )
    This could have been more useful if I'd posted 60 posts ago. Let me give you some context: I'm working with the guy who originally posted this. The "Ask Slashdot" question was posted over a month ago. I'll try and answer some of the questions that have been raised by your comments.

    We're working on a system that is intended to be a low-cost retail product. We'd rather not divulge the exact nature of the product yet (too early).

    This is a server-based system that requires a web front-end with plans to add other front-end(s) in future. The design specifies that the product (server, not client) must at least run on: Solaris, Linux and Windows NT. As porting code is more difficult than writing it cross-platform in the first place we decided these versions should be developed in parallel.

    The need for inter-process communication arose from a number of factors:
    • different parts of this system would be more effectively written in C++ / Java / Perl.
    • the system should be multi-tier
    • components of the system should be able to communicate with others on different servers

    After testing a number of ORBs on NT, Solaris and Linux we came to see the scope of our nightmare (RMI, Mico, ACE/TAO, omniOrb, Cope, Orbix and a number of others). We started looking into using different ORBs on different platforms, but this raised porting and cost issues. There were ways, but they weren't pretty, and most of it was damn expensive (for what we were doing).

    This process was so depressing that we thought long and hard about whether we really needed all of those platforms. The answer was yes. For sure.

    Going back to the drawing board, we realised that we only needed a fraction of the features of CORBA and started to roll our own. It hasn't been easy, but it now works on all the platforms we intended. Also, since our code so far does not use any third-party components, we can open source all or part of it in the future.

    Time will tell if we made the right decision. For now things are looking good. Watch this space.

    ------------------
    note: I'd appreciate if you'd moderate this up such that people can see it ;-)

    note(2): My web site contains no information about any of this whatsoever - but have at the pretty pictures anyway.
  • Has anyone tried using Modula-3 with Network Objects across platforms? Also, has anyone tried Obliq? It's an interpreted language based on M3's Network Objects. How cross-platform are we talking about? Will OS/2, UNIX of all types, and Win32 be enough? Besides Modula-3 being a language done right, it seems to be very cross-platform and very free. CORBA is an OK concept, but when you try to implement cross-platform ideas in non-portable languages (read: C/C++) it can cause headaches.

    Modula-3:
    http://www.m3.org

    Network objects paper:
    http://gatekeeper.dec.com/pub/DEC/SRC/research-reports/abstracts/src-rr-115.html

    Obliq:
    http://www.research.digital.com/SRC/personal/Martin_Abadi/Luca_Cardelli_Copy/Obliq/Obliq.html
  • not trying to add fuel to the flame, but you didn't really articulate your biases against Java, besides admitting it is somewhat "personal" in nature.

    You cite that the fact it came from Sun is a problem. I would claim that Sun is somewhere in the middle between a Microsoft and GNU as far as contribution to free software. That should be irrelevant to an objective discussion of technical merits, at any rate.

    You also acknowledge that Java is good at what it's designed to do. What exactly is that? Remember originally it was formulated for set-top boxes. Three years ago, people thought that Java was destined to take over the client, particularly with what I'd call "applet suites." That was a pipe dream, and Java's current successes and usage are almost all server-side. The point is Java has been over-marketed by Sun, but it is a fairly versatile platform nevertheless.

    Actually, a lot of people feel Java is a markedly simpler implementation language than C++, largely due to garbage collection (hence no programmer memory management), which actually is more of a runtime feature supported by the language.

    Some claims have been made that Java is twice as efficient to create Web-based systems, but such metrics are difficult to substantiate.

    Syntactically, Java borrows heavily from C/C++, so I wouldn't argue it's dumbed-down in any respect. Considering language-level support for multithreading and rich standard libraries, there's little merit to any language superiority arguments here (or in many places).

    This isn't an attack at all on your posts or position, but contribution to the thread at large. No "inflammation" intended. :)

  • I have to give a big thumbs up to Java for portability.. I've spent the last 4 years developing a framework for managing NIS, DNS, and the like in Java. The Java GUI client works fine on many UNIXes, Win95/98/NT, OS/2, and even Power Macintosh, all without so much as a recompile. It uses RMI for the client-server communications, and it works like a dream.

    The only problem is that it really isn't cross-platform in the most virtuous meaning of that term.. the platform is Java, and Sun controls the platform. Things like HTML/HTTP/CGI makes for a more generic and free (libre) cross platform environment, but you can do so many things with Java that would be impossible to do in the traditional web interaction patterns.

  • While Cosm will do things like DES, that's not the goal of the project, which is to handle any and all Distributed Computing project, not just trivial client-server ones.

    Cosm also has an OS/CPU layer that isolates all the functions of the kernel that are needed for Distributed Computing. This layer is in ways similar to NSPR and the dozens of other such libraries, each with a slightly different target set of functions.

  • I've been involved with several projects of this nature. Most folks here have been touting the benefits of one toolset or another, or one abstraction or another.

    I've seen some very nifty toolsets. We've invented one or two ourselves - in at least one case, what we came up with is better than I've ever seen commercially, or open source. But, like any other toolset, it's geared toward only one sort of problem.

    "Distributed computing" covers a lot of ground. If you need lots of transactions in an object-oriented framework, then yes, CORBA is worth a good hard look. If, on the other hand, you want to rework existing applications so that they trade data over a network, then a software bus architecture is something to look at. We implemented one of these, and it allowed us to change from a Smalltalk-based GUI to a C++/Interviews-based GUI without changing a single line of code in the back-end application. But this architecture was useful only because the problem could be decomposed into fairly large lumps with fairly high-level, low-volume traffic between them. An end-to-end heavy transaction system would lose big with this architecture.

    So, tell us a little bit about the application and its architecture, and you might get more useful information (less flamage would be too much to hope for around here :-).
  • not trying to add fuel to the flame, but you didn't really articulate your biases against Java, besides admitting it is somewhat "personal" in nature.

    To explain this, let's go back to where this whole "C++ advocates hating Java" scenario actually came from..

    From the original AC (troller =P):

    Even with Java deficiencies, I'd rather Sun do the work of converting operating system depending things so I can spend my time making my code do something useful.

    My response:

    This quote, like the one before it, adds nothing useful to the conversation. If you'd rather "play it safe", that's your decision. Many of us, myself included, prefer the added speed, power, and flexibility inherent in, say, C++. I'm not frightened of pointers, thanks.

    I never intended to get into any kind of Java vs. C/C++ discussion. I was simply annoyed at the insinuation that coding in C/C++ is a "waste of time" because it doesn't "do everything for you". Maybe I don't want to "play it safe" and have certain things like portability, etc. done for me. If I did, I'd be using another language, probably Java. =P This is why I didn't go into any technical merits. I never went about attacking Java, just the original AC poster. ;)

    You also acknowledge that Java is good at what it's designed to do. What exactly is that?

    Well, it was originally designed as a programming language to run on all sorts of different small devices (yes, I know, I've forgotten all the important details.. Java fans leave me alone.. I'm not a history textbook, ok? =P). Something that could run on damn near anything. I don't believe it was originally intended for personal computing per se, IIRC, but I'd certainly say that it has lived up to its dream of cross-platform compatibility (and I'd rather have it running on my PC than my toaster any day).

    That said, what it was designed to do and what Sun "meant" it to do are not necessarily the same thing. I also never said that it was good at everything it was designed to do (especially since it was designed to do different things throughout different parts of its life.. it's certainly come a long way from when it was called "Oak", however.. =P).

    The point is Java has been over-marketed by Sun, but it is a fairly versatile platform nevertheless.

    Precisely.

    Syntactically, Java borrows heavily from C/C++, so I wouldn't argue it's dumbed-down in any respect.

    Actually I've heard it argued that Java borrows more from Objective C than C++. I've never really followed Obj-C, however, so I'm not precisely qualified to substantiate or refute that claim. It's an interesting assertion, nonetheless. Personally, I must have multiple inheritance!

    That said, Java was indeed meant to be an easy-to-use language. It shies away from the more complex aspects of C++, like multiple inheritance or pointers, and does everything in its power to take care of complex tasks itself so the coder doesn't have to (memory management, portability). Therefore, it's a little more difficult for your programs to lose, as you're denied the ability to make some of the more common programming errors (most of which revolve around incorrect usage of pointers.. honestly, how much learning of C or C++ involves having all of the aspects of pointers imprinted upon your mind?).

    Anyway, as I haven't had anything to do with Java since 1.2, I don't think I'm particularly qualified to expound upon its current merits. Besides which, I don't really want to weigh C++ against Java right now, due to a lack of interest and time. :) I'm sure someone more well-versed in both languages could give a more objective comparison of the two, anyway. All I can really say is that Java is a good language, it's just not for me.

  • ..you're wasting my time. Why don't you go see what Merriam-Webster [m-w.com] has to say? Actually, I'll save you the 2 - 10 second search:

    usable on many computers without modification *portable software*

    You'll note the subjectiveness of the word "many". I could consider anything greater-than or equal to 1 to be "many", although for the purpose of this definition, I'd be considered rather stupid for equating "many" with "1" (as I probably would be in general, anyway), but certainly there is little room for debate that any number greater-than one could be considered to be "many" (remember, this is subjective, not objective).

    Okay, let's get some idea of what you mean by "portable" then. To a lot of people (I suspect), portable means "write once, compile and run anywhere" - and that's the goal of using ANSI C wherever possible, I would imagine.

    Well, I guess "a lot of people" would be, um, dead wrong .. The definition clearly states "many" not "all". Big difference, a la what I already said:

    You should also note a distinct difference between the usage of the terms "portable", "extremely portable", and "100% portable". C is "extremely portable", not "100% portable".

    Hopefully you can figure out what that sentence means at this point, because I'm not going to bother explaining it further.

    That's what Java does particularly well (in terms of language - the implementation of applet viewers is another matter, of course). That's using "portable" in the sense of "mobile" (a portable object being one that can be moved around at will).

    What does Merriam have to say about this?

    capable of being carried or moved about *a portable TV*

    So basically you're trying to tell me that Java is a physical manifestation that I can move about at will, sort of like a book on Java? Umm..?

    What do you mean by portable? Do you mean you can change the code to make it run on another box (ie it's "able" to be "ported")?

    I mean it the way the dictionary defines it, perhaps? Unlike some people, I don't attempt to refute the true definition of a word at every turn. Are you somehow trying to argue that you can't write portable C code? Sure, you can make modifications to make it even more portable, but that doesn't mean that it wasn't portable in the first place. The preprocessor is your friend.

    As I've said before, C is good for portability when you want to do mostly processing, but when you start doing a lot of IO (particularly network and/or user IO) it loses out.

    While you are certainly entitled to your opinion, I'm still not interested, and repeating yourself ad nauseam isn't likely to change my mind, but rather to cause me to begin simply ignoring you. That said, this is not an RFC. Don't expect further responses.

  • I don't know what kind of application you are planning on writing, but for 1-to-1 computer communication CORBA should be enough. Others have mentioned EJB, but that is only interesting if you plan on setting up an application server (i.e. n-to-1 communication) or something of the sort where you are sharing a server with multiple users.

    Since you've already pretty much decided on using CORBA as your middleware, and you have a need for multilanguage and multiplatform support, I would suggest you take a look at Orbacus (http://www.ooc.com). It has some features that you need. It is written in Java, so you can run it on any platform (at least theoretically). I ran an NT server with a Gemstone app server and a client on a Linux box with the Blackdown VM and the Gemstone class jars from the NT server, and had no problems, at least for that project. And I don't think it would be much different for any other platforms. Just make sure you use a proper VM and class jars.

    When it comes to the multi-language part, Orbacus generates ORB class code for at least Java and C++, if I am not mistaken too badly. I don't know how easy it is to get an ORB code generator for Perl, but there might be someone out there who has done it.

    You should check the Orbacus web site for the detailed specifications.

    Cheers



    Now for the disclaimer:
    It has been 6 months since I used Orbacus, so not everything I said may be completely correct.

    And for the usual one:
    "This disclaimer supersedes all previous ones.

    The views expressed here do not necessarily represent the views of my employer, the university, me or the view out my window. All things considered, the views might just as well be the views of my cat."

  • There are distributed application frameworks that meet your needs, they just make some assumptions about the sort of applications you want to develop.

    For example, HTTP is a pretty ubiquitous protocol, and there are programmer libraries for working with it in a whole lot of languages-- including all of the ones you mentioned.

    There's a whole slew of other protocols with the same basic feature: widely deployed on multiple platforms and within multiple programming languages. I'm thinking SMTP, NNTP, etc.
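
    As a sketch of the idea, here is a tiny "distributed" call made over plain HTTP using only the Python standard library. The /add endpoint and its query parameters are invented for this example; the point is just that any language with an HTTP client can talk to it.

```python
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AddHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse ?a=..&b=.. from the request URL and return their sum.
        query = urllib.parse.urlparse(self.path).query
        params = urllib.parse.parse_qs(query)
        result = int(params["a"][0]) + int(params["b"][0])
        body = str(result).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), AddHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" can be any language at all -- here, the same script.
url = "http://127.0.0.1:%d/add?a=2&b=3" % server.server_port
with urllib.request.urlopen(url) as resp:
    answer = int(resp.read())
print(answer)  # -> 5

server.shutdown()
```

    Of course this only covers transport; it says nothing about objects, discovery or type marshalling, which is exactly the gap the middleware fills.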

    The main disadvantage of building distributed applications out of these protocols is that they don't lend themselves well to applications whose scalability and performance requirements end up driving the design.

    Even then, you will usually find that parts of your application will depend on making use of those protocols.

    More than likely, you have an application specific need for a particular kind of framework, and you should just bite the bullet on a middleware solution and live within its weird limitations.

    Consider this, though: you can probably wall off the part of your distributed application that has the scary performance and scalability requirements by writing Perl/Python glue and Java servlets in the interface to your web server and other parts of your application that don't need the middleware.

  • wow.

    long? yes. well thought out? yes.

    I work in the guts of a distributed object infrastructure project, so this is the kind of discussion that my fellow geeks and I would spend HOURS on if management didn't walk by that often. I think you clearly see the empty spot on the scatter diagram you drew out there, and are obviously on the right track to fill it in.

    I also think you're reinventing the wheel. (don't we all at some point? ;)

    What you've described is the message passing style that existed before RPC. This is the style of programming that was taught by having to "run" your programs by carrying a shoe box of punched cards to the window by the machine room and giving it to an operator, who would run it and give you back a stack of cards. That stack of cards might of course itself be a program which could then be fed back to the operator in one or more additional shoe boxes, and the cycle began again... I still write scripts that end by enqueuing several more scripts for batch processing - and I WASN'T actively part of that era of computing. This style of communication is also still the big winner in the mainframe world of Transaction Monitors, ERPs, etc. - look at IBM's hugely successful MQ Series [ibm.com] for example. (here is a better read [ibm.com] for the un-initiated.)

    There are several implementations of frameworks for messaging protocols out there. One of my favorites in uni was the Parallel Virtual Machine architecture [lycos.com]. Another was the Message Passing Interface [lycos.com]. Many forms of parallel computation use the messaging model.

    Messaging is also being brought into the Java world with JMS [sun.com] (no, not the great maker [blockstackers.com]), the Java Message Service.
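
    The store-and-forward style described above can be sketched in a few lines. This is only an in-process analogy (a real system like MQ Series persists messages and delivers them across machines), and all the names here are invented for the example:

```python
import queue
import threading

mailbox = queue.Queue()   # the "queue manager" for one service

def worker():
    # Consumer: pull messages until told to stop; reply via the
    # reply-to queue carried inside each message.
    while True:
        msg = mailbox.get()
        if msg is None:
            break
        reply_to, payload = msg
        reply_to.put(payload * 2)   # "process" the message

threading.Thread(target=worker, daemon=True).start()

# Producer: enqueue a request and carry on; the reply arrives later.
replies = queue.Queue()
mailbox.put((replies, 21))
result = replies.get()    # pick up the reply when we are ready
print(result)             # -> 42
mailbox.put(None)         # shut the worker down
```

    The key property is decoupling: the producer never calls the consumer directly, so either side can be taken down, replaced or scaled independently, which is why this model still wins in the transaction-monitor world.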

    wow. This is the kind of discussion that makes me proud to log in to /. why can't there be more? Why aren't there? hrmm.
  • After reading everyone's posts, I wonder if there are any programming languages and development platforms that do not magically fulfill all of the given requirements... Mark
  • Several years ago I was hired with a group of people to train C programmers in objective-c and later java. The degree of resistance was extraordinary. Every day it seemed we fought the same battles. The same people would say the same things over and over: "but without pointers...", "java is too slow, what about high-performance graphics?", etc. It's like, but we're not DOING high performance graphics. We're doing BASIC business applications.

    Perhaps I'm just dense, but what in the hell does this have to do with C++ advocates? Since you never mentioned once that these people knew C++, instead saying they knew C and you were trying to "advance" them into Objective C or Java, I don't see how this is even mildly on topic.

    And as you said, I will concede to the goodness of C/C++. I used both of them long before java, and still do. There are pros and cons to both, times to use one, times to use the other. Seems to me the smart builder doesn't use just one tool, but knows which tool is the right one for the job. But they will generally not concede that there is any use to Java.

    I would like to assert that while an intelligent programmer will have more than one language of choice, he will not, I repeat not, program in every language imaginable. Thus, maybe he will never use Java throughout his entire life. Big.. deal..! By the way, did you really mean to say that "they" would assert that Java is useless, as you have, or were you trying to say that "they" would not assert that Java is useless?

    But I had the same difficulty in teaching objective-c, which DOES have pointers, and imho is a WAY nicer OO language than c++ any day. So I think it's not that java is java, it's that these people are resistant to any sort of change. They are either lazy or scared or both. Harsh, yes, but how else can you explain the one-sided (and borderline irrational) arguments that you see from these people?

    So, again, I have to wonder if you are still talking about C programmers (which is off topic) or are now talking about C++ programmers (in which case your arguments make even less sense)..

    Just because you think Obj-C and Java are totally badass and just flat-out wipe the floor with C++ doesn't mean that a C (or C++) programmer trying to learn either is going to agree with you. And since there is a third choice, C++, to "migrate" to, it seems like a bigoted statement to assert that C programmers who don't want to learn Obj-C or Java are "resistant to any sort of change" or "lazy or scared or both".. I myself managed to migrate from C to C++, which is a drastic change in philosophy and style, or had you not noticed that? Just because I don't want to go to your languages of choice, that makes me lazy, scared, resistant to change? Or are you trying to tell me that C++ isn't that much of a change? Well, surprise, it is. Obj-C, however, isn't much of a change. I'd rather use Java than Obj-C (which is practically worthless AFAIC).

    BTW, the aforementioned migration to java failed on all counts. A failure of java? of me? of them? hmm..

    Probably because they just didn't like the way it was designed? I'd imagine even many C programmers (those who haven't evolved into something else yet) would enjoy a good range of flexibility and not want to give it up so easily. I wouldn't know, because I learned Java after C++, not directly after C.

    At any rate, how can you even talk about others having "borderline irrational" arguments when the topic of this particular subthread is C++ programmers, who by their very nature do not fit into the mold you ascribe to them (given that they were once C programmers to begin with, of course), and you're just talking about how a bunch of C programmers didn't want to learn Obj-C or Java. Maybe you should tell us if any of these "clueless idiots" ever managed to learn C++, hmm?

    Since this was spun off from a comment I made, I find myself highly offended and directly insulted, as I do not like being referred to in such a manner. Or are you just completely abstracting the topic at this point to where it doesn't even apply to me anymore? That would make sense, since you're not even on the right topic. If not, then see this post [slashdot.org], which is the culmination of this subthread [slashdot.org], which you really should have let me spawn off from the comment [slashdot.org] you replied to before spouting off with your bigoted nonsense, as I believe it was a call for rational discussion, not your lopsided assertions. Again I have to wonder what makes you so holier-than-thou that you accuse others of borderline irrational arguments. Stay.. on.. the.. ball..!

  • I really should stop trying to do anything useful when I first wake up. I think it's kind of obvious that I clicked on the wrong checkbox. ;) Blah. Time for.. more.. caf.. feine..

  • but certainly there is little room for debate that any number greater-than one could be considered to be "many"

    LOL! In other words, when you think most people will disagree with you, you just say it's subjective and leave it at that. I don't think most people would consider 2 to be many. If someone said they had worked with many computers and it turned out to be 2, you'd be rather disappointed, wouldn't you?

    As you've said you'll be ignoring me from now on (which together with your tag line says all I feel I need to know about your opinion of yourself with respect to others), I'll leave it at that...

    Jon

  • ILU is not written in Python - it's written in C. It has support for Python using its own CORBA mapping, which is becoming a CORBA standard. It has support for CORBA C++, C and Java too. There is experimental and incomplete support for Perl (client only AFAIK).
  • Cheers, yes this looks quite nice, but really it is only an encoding/transmission scheme. It doesn't deal with distributed objects; you would have to implement that on top of XML-RPC.

    The scheme is very flexible but it doesn't address most of the surrounding problems with distributed systems. Service discovery, location transparency et cetera. All of these would have to be built on top of XML-RPC.
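
    To make the point concrete, here is a minimal XML-RPC round trip using Python's standard library. Note how little it gives you: the add function and its arguments are marshalled and transported, but naming, discovery and object references would all have to be layered on top.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: expose one function under the name "add".
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the call is encoded as XML and carried over HTTP.
proxy = ServerProxy("http://127.0.0.1:%d" % server.server_address[1])
total = proxy.add(2, 3)
print(total)  # -> 5

server.shutdown()
```

    Everything beyond the wire format - finding the server, passing object references around, lifecycle - is exactly what a full distributed-object system would have to add.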

  • I agree, the CORBA spec does do all of that. But when we talk about the actual state of the world a few things become apparent.

    • There still isn't a CORBA mapping specification for Perl.
    • Not all distributions are created equal.
    • Commercial ORBs can be redistributed but cost the earth.
    • Some 'free' ORBs can be redistributed without open-sourcing your whole application, but not all. Basically a mix of different licenses.
    • Not all ORBs are available on the platforms you want.

    I'm not saying that this is a complete list or that any one of them is a brick wall. But together they are giving me a headache.

    Perhaps a few free ORB developers should get together and pool efforts to come up with something orthogonal? Just a thought.

    Cheers,

  • I have no problem with any of that. UML is one of the things I want to 'get around to looking into'.

    I also don't believe that there is enough out there yet to warrant building an entire system on. Just my own personal opinion.

    Finally how does automatic system generation from higher level models help to create distributed systems?

    Cheers,

  • The thing you missed was the question: which distributed framework to use.

    Cheers,

  • I have no trouble believing that function/method call oriented RPC may not be useful to you. Personally I want to be able to split up a program that has an OO model at its base and not have to destroy the model to do it. For this some form of method-based RPC (CORBA/DCOM) is required.

    On the subject of synchronous calls I think you should go and look at the specifications for both RPC and CORBA. Both allow 'oneway' invocations to take place. Basically an asynchronous call.
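
    The synchronous vs. 'oneway' distinction can be sketched with a thread standing in for the remote call. This is only an analogy (a real ORB marshals the request and sends it over the wire without waiting for a reply), and the names here are invented for the example:

```python
import threading
import time

log = []

def remote_op(x):
    # Stand-in for a remote invocation: some latency, then a side effect.
    time.sleep(0.05)
    log.append(x)

# Synchronous call: the caller blocks until the operation completes.
remote_op("sync")
assert log == ["sync"]

# 'oneway' call: the caller gets control back immediately;
# the operation completes some time later, with no reply expected.
t = threading.Thread(target=remote_op, args=("oneway",))
t.start()                         # returns before remote_op finishes
t.join()                          # only so the demo can inspect the result
print(log)                        # -> ['sync', 'oneway']
```

    CORBA's oneway keyword gives you exactly this fire-and-forget semantics at the IDL level, without the caller having to manage threads itself.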

    On the subject of coupling I would have to ask whether or not you like the fact that libraries have specific interfaces that can be relied upon. This is the only restriction that RPC/CORBA/DCOM put upon your programs: the signature of a call has to be known, just like a library call's. Yes, this means that there is a higher degree of coupling than in a system where the data can be 'interpreted' by the callee, but it also means that you don't have to attempt to interpret the data, which gives you a performance boost. I don't want my libraries to have to interpret what I give them, and neither do I want my own systems to have such fuzzy interfaces. If I do want something like that then I'll implement it myself, but I won't choose it as a feature.

    Having said all of that however I would be interested to keep in contact and discuss your system with you.

    Cheers,

  • Thanks for all the feedback. Here is a quick update.

    The article was originally posted about a month ago. Since then many things have happened.

    I have started to roll my own CORBA-like implementation targeted at Perl and Java in the first instance, focusing on platform compatibility (problems there are all but eliminated with the above two languages) and adding features on an 'as-and-when-needed' basis. If a feature is required then it is implemented as close to the CORBA spec as possible. The major exceptions to this are marshalling/CDR and the transmission protocol, which are very simple right now.

    If anyone is interested in something like this then get hold of me at this address [mailto]. Since the system is being built to enable the main system, and not as a product, the licensing for it is currently undecided, but it may eventually be open sourced.

    Thanks for all the pointers and info so far...I'm still reading.

    Cheers,

  • I'm a little unclear on what this app will do/be -- is it a server application or a client application?

    Writing generic servers is very difficult, since servers have a greater need to be optimized than clients, and optimization is necessarily a function of the specific platform.

    Writing a multi-platform client is simpler. The stock answer nowadays is to make it web-based, and then concentrate on the server end, where you would use (for example) a Java-based application server. I would say, in general, that one of the cross-platform scripting languages is going to be your best bet. Perl, Tcl, and Python are all multi-platform (*nix, Win32, and, at least for Perl, Mac), and all support Tk bindings, so you can make your application graphical. Python has a C-like syntax, so users of C, C++, and Java should pick it up pretty quickly. Same for Perl. Tcl has its own syntax, but it is not difficult for a trained programmer to learn.

    However, if you are looking to create code that can be compiled into executables for multiple platforms from one code-base, I think your best option is going to be Java. Sure, it's slow (for now), and there are implementation issues between platforms, but porting a Java application from one platform to another is still hugely simpler than porting a C app (I'm assuming Win32 to Unix or a similar port). With the new graphical libraries (Swing and such) you can create much more sophisticated GUIs than you can with Tk. And, of course, you can also create command-line/text-based apps with Java.

    My personal recommendation? Use Perl. The Perl interpreter is fast and extremely powerful. Unless you're writing something that has to interact with a server in real-time, the startup/compilation phase of Perl scripts is unimportant--but, of course, once the script is loaded, it runs at the speed of the computer.

    darren

  • You seem to have the worst argumentative skills I have ever seen. You don't support your argument, you can't give a basis for anything you state, and you continuously try personal attacks. Now is my chance to attack you and hope that you try supporting your arguments next time so we can see that they are just wrong.

    As for C++ stupidity: first, it still has the same ambiguities as C. Here's another example from the FAQ: i = i++; How about typeid and templates? Here's yet another example:

    int i = 1;
    cout << i++ << i++; // order of evaluation is unspecified
    As for looking in the FAQ. It's called supporting one's statements with reference information. You clearly haven't done so much as try to do that. There are about four more C ambiguities there.

    I still just don't get how you can't make a comparison between system calls in C++ and in Java. This is a very important and relevant part of the article. By skipping talking about this, our whole argument will become irrelevant to the original article.

    Next point... if RedHat had a way of allowing you to download one patch that always made your system up to date, would you get it? Most people would. This is the point of Java: one code base that doesn't have to be modified every so often for new architectures. What is your response to portable assembly code and #ifdefs?

    The fact that you don't understand that "hack" is usually synonymous with "kludge" makes me really wonder about your experience in programming. Are you one of the people who thinks hack is ONLY supposed to mean some great inspired and clever programming? Are you still in high school? Once again I can support my arguments. Take a look in the Jargon File. The first definition of hack is...

    hack 1. /n./ Originally, a quick job that produces what is needed, but not well.

    Which is exactly what I've been saying all along. #ifdefs are workarounds and not the best way of doing cross-platform development.

    Now that you start thinking about what the article actually was talking about, you can understand that I can still answer the person's question with advice of how to do it better... maybe a way they haven't thought of. I may seem like an egotistical bigot but that's just because I know what I'm talking about and can even support my arguments. You on the other hand must spew your opinions which have no basis and get by on personal attacks.

    Your sig was just one more thing to inflame me. If you want to see bigotry, that was a GREAT definition.

    I logged on because I like to argue and yes, this time having the chance for you to read my comment was worth logging on. So when you prepare your next reply to me... support your opinions. Didn't you ever hear that in school? What grade are you in anyway?

  • Hear hear! XML-RPC, in one form or another, is going to be very big in the coming months (and maybe years). It is soooo simple to implement at its base level that it will be ubiquitous.

    Of course, as another poster mentioned, there are still some issues to make it an object model... but those aren't impossible issues!

    I should maybe mention, too, that KDE will soon have integrated XmlRpc. I don't think it will be ready for our upcoming KRASH release, but it will almost surely be in 2.0. Basically, *every* single KDE app will have the capability of being an XmlRpc server and/or client with very minimal amounts of coding -- in fact, the server part should be transparent to the majority of apps.
  • This is an interesting opinion. I have no doubt the experience you describe is real, and the conclusions are valid (based on that experience). However, somebody else in a quite similar situation might have a completely different experience, and arrive at different conclusions.

    IMO, there are some factual errors and misconceptions in the text, putting them in a different light might help.

    # RPC, ILU, CORBA, DCE, Java RMI and DCOM to name but the most common.

    This seems like comparing apples and oranges, partially. Some of them are specifications, some are implementations, and some are both. In particular, ILU is an implementation which supports (Sun) RPC and CORBA. AFAIK, DCE, DCOM, and Java RMI are supported at an experimental stage.

    Java RMI is not necessarily exclusive with CORBA, as future RMI implementations in the JDK will use the CORBA wire protocol (i.e. IIOP). Also, DCE and DCOM have great similarities.

    # Commercially there are a few good ORBs but they are terribly
    # expensive.

    (Some of) the terribly expensive ORBs are not good. In particular, Orbix has many features, but also many bugs and poor performance. There are some good commercial ORBs that are free for non-commercial use, e.g. ORBacus.

    # Additionally most of the commercial ORBS support as few platforms as
    # they possibly can.

    This is true for "most" indeed. Again, ORBacus comes with full source, and supports a wide variety of platforms - it easily installs from source.

    # On the Open Source side ... some great code out there

    This is definitely true.

    # What I cannot find at the moment is a system that targets multiple
    # platforms and multiple languages.

    This is a misconception. You don't need a single implementation that targets multiple platforms and multiple languages. For one language, using the same system on all platforms definitely helps (although portability of CORBA code is very high, across CORBA implementations). For a different language, you can use a different implementation: CORBA implementations *do* interoperate.

    # Want to use Perl to talk to C++ back ends?

    Well, with Perl you are quite stuck, AFAIK, which is unfortunate. There is COPE (on top of Mico, or ORBacus, or Orbix), and there is an ILU integration, but I don't think the code is portable between these two.

    For C++, you can do whatever you please - no matter what you were using for Perl. C++ is very well supported in CORBA.

    # Want to use the same code on Windows NT as well?

    Don't know about ILU-Perl-NT, or COPE-XYZ-NT. C++ support on NT is also very good.

    # Want to use Java Applets to talk to C? You have problems.

    For Java applets, I'd either use the VisiBroker for Java inside Netscape, or the JDK ORB, or any other CORBA-for-Java (which would work on any browser, but requires downloading the CORBA runtime system). CORBA-based Java applets port to another ORB at the byte code level, since the API for the stubs is defined in terms of interfaces.

    For C, you can choose either ILU or ORBit. Both work great, and on a wide variety of platforms (ILU probably on more than ORBit). Portability is adequate; you'll have to modify names of header files, and perhaps some function names, because the C mapping is not specified as thoroughly as the Java or C++ mappings.

    [More on C and Java]

    # Problematic at best, and probably not possible at the moment.

    Not true at all if you are willing to use different run-time systems for different languages. If you insist on getting both C and Java support from the same source, you'll probably have to use ILU. Due to the interoperability, there is no need to do so.

    # Are these very strange requirements/wishes or would other people be
    # willing to sacrifice ratified standards compliance and possibly
    # performance for orthogonality of language/platform availability?

    Neither, nor. The requirements stated above are all reasonable, and can be achieved without sacrificing standards compliance or performance.

    # I would like to be able to write code ... (for me this would be
    # Perl, Java and C++) without having to use multiple implementations
    # on the different platforms.

    Well, *that* is the strange requirement. If that requirement is dropped (and it can be dropped easily), everything is possible.

    For these three languages, there is a single source, though: ORBacus offers built-in support for C++ and Java, and COPE runs on top of ORBacus as well (AFAIK). Starting with ILU 2.0b1, the same set of languages is supported.
