The Internet

A New Protocol For Faster Web Services? 131

Roland Piquepaille writes "Jonghun Park is an Assistant Professor of Information Sciences and Technology at Pennsylvania State University. He says that a new protocol can improve Web services. Sandeep Junnarkar broke the story. "Jonghun Park has proposed a method for sharing information between systems linked on the Internet that promises to speed collaborative applications by up to 10 times current rates. The protocol is based on an algorithm that lets it use parallel instead of serial methods to process requests. Such a method boosts the efficiency of how resources are shared over the Internet. The new protocol is called the Order-based Deadlock Prevention Protocol with Parallel Requests." Check this column for some excerpts or read the CNET News.com article for more details. More information about Jonghun Park's work can be found at his homepage."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Sounds familiar (Score:4, Interesting)

    by Anonymous Coward on Sunday February 02, 2003 @11:12AM (#5210015)
    Wasn't this reportedly the theory behind the Irish Young Scientists Xwebs project?
  • Faster net? (Score:3, Funny)

    by Big Mark ( 575945 ) on Sunday February 02, 2003 @11:12AM (#5210016)
    Wasn't that what ISDN was meant to do?

    -Mark
  • Where's the info? (Score:2, Interesting)

    by KDan ( 90353 )
    I could find nothing about it on that dude's homepage, and the article is terse to say the least. Where's some actual information about this?

    Daniel
    • Re:Where's the info? (Score:5, Informative)

      by wordisms ( 624668 ) on Sunday February 02, 2003 @02:31PM (#5210867)
      Here is an article [psu.edu] from the IST department, posted down below. Also, if you look at his web page, the paper is still under review, which is why there are no links to it.

      New Protocol Speeds Up Internet Resource Sharing

      The new technology speeds the allocation of Internet resources by up to 10 times, said Park of his proposed Order-based Deadlock Prevention Protocol with Parallel Requests.

      "In the near future, the demand for collaborative Internet applications will grow," Park said. "Better coordination will be required to meet that demand, and this protocol provides that."

      Park describes his research in a paper, "A Scalable Protocol for Deadlock and Livelock Free Co-Allocation of Resources in Internet Computing," given Jan. 29 at the Institute of Electrical and Electronics Engineers' Symposium on Applications and the Internet in Orlando, Fla.

      Park's proposed algorithm enables better coordination of Internet applications in support of large-scale computing. The protocol uses parallel rather than serial methods to process requests. That helps with more efficient resource allocation as well as solves the problems of deadlock and livelock caused by multiple concurrent Internet applications competing for Internet resources.

      The new protocol also allows for Internet applications to choose among available resources. Existing technology can't support making choices, thereby limiting its utilization.

      Its other advantage: Because it is decentralized, Park's proposed protocol can function with its own information. That allows for collaboration across multiple, independent organizations in the open environment of the Internet. Existing protocols require communication with other applications - not feasible in the open environment of the Internet.

      Internet computing - the integration of widely distributed computational and informational resources into a cohesive network - allows for a broader exchange of information among more users than is possible today. Those can range from the military and government to businesses.

      One example of such collaboration is Grid Computing that, much like electricity grids, harnesses available Internet resources in support of large-scale, scientific computing. Right now, the deployment of such virtual organizations is limited because they require a more sophisticated method to coordinate the resource allocation.

      Park's decentralized protocol could provide that.

  • No information... (Score:2, Interesting)

    by 1nv4d3r ( 642775 )
    Putting the entire story in the slashdot posting is an interesting solution to the slashdot effect. Of course the content is a little more bland....

    I'd prefer if the article we picked had some actual information about the protocol... off to google....
  • I wonder if this is the idea behind the "revolutionary" yet proprietary SymDesk protocols?
  • It might lessen the effects of /.ing
  • by Angram ( 517383 ) on Sunday February 02, 2003 @11:17AM (#5210039)
    "The new protocol is called Order-based Deadlock Prevention Protocol with Parallel Requests"

    He should've spent more time on the name. No one will call it by its full name, and think of the acronyms:
    ODPPPR
    OBDPPPR
    OBDPPWPR

    It's bad for the system when no one can talk about it.
  • but... (Score:5, Interesting)

    by Interfacer ( 560564 ) on Sunday February 02, 2003 @11:18AM (#5210040)
    won't that make things more unsafe/unstable too?
    because http is plain and simple, it is easy to determine where each piece of functionality resides.

    if systems become more connected and integrated with each other, won't that make it much harder to determine what is going on on your system?

    i can imagine that msft will have a go at running parts of your system on their registration servers. this seems to me like another step towards DRM.

    i understand that this is just a protocol, but if people start interconnecting systems, there will be (security issues)++

    Int
    • Re:but... (Score:3, Funny)

      by LostCluster ( 625375 )
      From the CNET article linked in the story...

      "Web services is currently held up--in my opinion--by things like security and reliability," said Stephen O'Grady, an analyst at RedMonk.

      Doesn't that translate to "They won't let us do it because it doesn't work."?
      • "Web services is currently held up--in my opinion--by things like security and reliability," said Stephen O'Grady, an analyst at RedMonk.

        Doesn't that translate to "They won't let us do it because it doesn't work."?


        No, it's more like, "webservices are incredibly fucked up because the people writing this stuff don't know what the hell they are doing."

        Unfortunately, the same can be said for many other things ("the world is incredibly fucked up because the people running it don't know what the hell they are doing").
  • by slashuzer ( 580287 ) on Sunday February 02, 2003 @11:19AM (#5210045) Homepage
    Just look at this...

    Jonghun Park is an Assistant Professor of Information Sciences and Technology at Pennsylvania State University. He says that a new protocol can improve Web services. Sandeep Junnarkar broke the story. "Jonghun Park has proposed a method for sharing information between systems linked on the Internet that promises to speed collaborative applications by up to 10 times current rates. The protocol is based on an algorithm that lets it use parallel instead of serial methods to process requests. Such a method boosts the efficiency of how resources are shared over the Internet. The new protocol is called the Order-based Deadlock Prevention Protocol with Parallel Requests."

    First, there is this whole climate fuelled by RIAA/MPAA that makes the very mention of collaborative applications something criminal.

    Secondly, if there is to be a non p2p media sharing usage for this protocol, it has to get industry support. Read M$.

    This looks like a solution looking to solve a problem that doesn't exist. Where have we seen this before?

    • I think you've got the wrong perspective on the subject: web services are to be used more (at least in the beginning) in a B2B environment - for companies to communicate with each other. That's what UDDI, SOAP, etc. are for, and that's where the "collaborative" is used.

      But you've got a big problem - efficiency - just imagine how you can implement things like transactions over HTTP. And that's what this protocol is aiming to solve.

    • if there is to be a non p2p media sharing usage for this protocol, it has to get industry support. Read M$

      I agree. A great example is the archival world. PKware's zip format has been the standard compression scheme, despite gzip and bzip2's better compression ratios. But if you email your mother a compressed file, better make it a zip file.

      And don't get me started on the non-standard HTML implementation of IE. . .
  • by Anonymous Coward on Sunday February 02, 2003 @11:20AM (#5210048)
    Oh, how original.

    Anybody who's done real database engineering knows the two points necessary to prevent deadlocks: (of course, most designers/programmers don't do this...)

    1. Every process locks resources in the same order.

    2. No process ever escalates a lock.

    Enforce these two adages ruthlessly and you'll never get a deadlock.

    So all this guy is saying is "Engineer your distributed databases properly." Woot.
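The two rules above can be illustrated with a small Python sketch (the resource names are made up for the demo). Ordered acquisition alone is enough to defuse the classic two-resource deadlock: even though the two tasks below ask for the resources in opposite orders, both actually lock them in the same canonical order, so neither can hold one lock while waiting forever on the other.

```python
import threading

# Two shared resources, each guarded by its own lock.
locks = {"accounts": threading.Lock(), "orders": threading.Lock()}

def with_resources(names, fn):
    # Rule 1: every process acquires locks in the same global order.
    # Here the canonical order is simply the sorted resource name.
    ordered = sorted(names)
    for n in ordered:
        locks[n].acquire()
    try:
        # Rule 2: no escalation - everything needed is locked up front.
        return fn()
    finally:
        # Release in reverse order of acquisition.
        for n in reversed(ordered):
            locks[n].release()

results = []

def task_a():
    with_resources(["orders", "accounts"], lambda: results.append("a"))

def task_b():
    with_resources(["accounts", "orders"], lambda: results.append("b"))

ta = threading.Thread(target=task_a)
tb = threading.Thread(target=task_b)
ta.start(); tb.start()
ta.join(); tb.join()
print(sorted(results))  # both tasks completed - no deadlock
```

Without the sorting step, task_a and task_b could each grab their first-named lock and wait forever on the other's.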

    • RTFA (Score:3, Informative)

      by robbo ( 4388 )
      or even better, read his publications [psu.edu]. While deadlocks are deadlocks, his research isn't about databases but concurrency. If there weren't technical merit to his work, his peers would have rejected his publications.
      • Re:RTFA (Score:1, Insightful)

        by Anonymous Coward
        It's the same phenomenon. Just substitute "data store" for "database". If the data requests against the data store aren't concurrent, they can't deadlock, no matter what the damn target happens to be named.

        And just because it's been published doesn't mean it doesn't boil down to "build your infrastructure correctly, and enforce your design constraints".

      • by jbf ( 30261 )
        Unfortunately, duplicate ideas come up across fields quite often, and many times PCs (program committees) and journal reviewers don't have the expertise in the other areas...
        • Re:RTFA (Score:3, Interesting)

          by robbo ( 4388 )
          Absolutely. And the mark of a brilliant scientist is one who sees how to transfer existing knowledge about one domain into another.

          That being said, I highly doubt that Park's research has much to do with database mutexes. The courses I've taken in concurrency pretty much left me baffled. There's a lot more to it than thread safety.
        • Having actually seen the SAINT-2003 paper on which I'm assuming this article is based, the approach is indeed related to the standard "acquire all locks in a predetermined order" strategy. However, it's not exactly that. If I remember correctly :-), it's a variant of this strategy that allows a bit more flexibility in acquiring locks (and hence more parallelism) in certain circumstances. These circumstances, from what I recall, are when a service can acquire "any one" of a group of resources in order to get its task done (which is perhaps a reasonable assumption if we're to believe the web services hype of multiple providers, yada yada). In the more constrained case where a service needs to acquire a fixed set of specific resources, it degenerates into the simple order-based deadlock/livelock prevention scheme.

          Now, from the article he claims to have further refined the technique, so by now there could be more to it. But I believe this is still likely to be the general idea. Revolutionary? I don't think so, but it's really too early to tell. Indeed, as another poster noted, lock acquisition has not yet proven to be a bottleneck in web services, but that could be because the vision of multiple providers offering multiple (possibly equivalent) services has yet to materialize. Complex web services orchestration is more hype than reality at this point, but that could change.
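The "any one of a group" flexibility described above can be sketched in a few lines of Python. This is an illustration of the general idea, not Park's actual protocol: a requester probes a group of interchangeable resources without blocking, in a fixed global order, and only blocks (again in global order) if none is immediately free, so the ordering discipline - and thus deadlock freedom - is preserved while adding parallelism.

```python
import threading

# Three interchangeable resources (hypothetical names for the demo).
resources = {f"r{i}": threading.Lock() for i in range(3)}

def acquire_any(group, timeout=1.0):
    """Grab any one lock from a group of equivalent resources.

    Non-blocking probes in a fixed global order keep the scheme
    consistent with order-based deadlock prevention; if nothing is
    free right now, fall back to blocking on the lowest-ordered one.
    Returns the name acquired, or None on timeout.
    """
    ordered = sorted(group)
    for name in ordered:
        if resources[name].acquire(blocking=False):
            return name
    if resources[ordered[0]].acquire(timeout=timeout):
        return ordered[0]
    return None

# All three are free, so the first probe in canonical order succeeds.
got = acquire_any(["r2", "r0", "r1"])
print(got)  # r0
```

When every requester needs a fixed set of specific resources instead of "any one", this collapses back to plain ordered acquisition, matching the degenerate case described above.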
    • by Anonymous Coward
      I think it is more about a generalized proposal for dealing with concurrency at the service level, such as step-1, step-2, step-3, ... which can be done in parallel across multiple machines, ...etc. Anybody who has dealt with software construction (ala concurrent make) would understand.
  • by Omkar ( 618823 ) on Sunday February 02, 2003 @11:24AM (#5210060) Homepage Journal
    But it has something to say about it:

    For many years computer scientists have been proposing protocols to improve the efficiency of distributed computing systems, but Park asserts that his method works with greater efficiency for time-critical applications. The current protocol is generally known as the Order-based Deadlock Prevention Protocol, according to Park.
  • Pipelining (Score:3, Insightful)

    by Karamchand ( 607798 ) on Sunday February 02, 2003 @11:32AM (#5210087)
    That's called pipelining, right? We already have this in various protocols, including HTTP, which is used quite frequently for web services (think SOAP).
    • pipelining enables you to maintain an open TCP connection between the browser and the server and reuse that connection to send subsequent requests

      but... that is done serially! what this protocol aims to do is to be parallel!
      • Re:Pipelining (Score:1, Offtopic)

        by Karamchand ( 607798 )
        Uhm no. Pipelining is often made parallel. Just look at http - serial is keep-alive. parallel is pipelining.
        To quote RFC 2068 [ietf.org]: Pipelining allows a client to make multiple requests without waiting for each response, allowing a single TCP connection to be used much more efficiently, with much lower elapsed time. (for more details see RFC 2068, 8.1.2.2)
        Serial is done using the Connection: keep-alive header.
        • Re:Pipelining (Score:1, Insightful)

          by Anonymous Coward
          How about you read the entire paragraph of 8.1.1? The gist is that opening multiple TCP connections to a server increases network congestion and is largely unnecessary, given that most HTTP responses have a relatively small payload. The pipelining referred to means sending multiple requests on a TCP connection without waiting for a response from the server. Look at the sentence right before the one you quoted: "HTTP requests and responses can be pipelined on a connection". Also, take a look at the sentence you quoted yourself: "allowing a single TCP connection to be used more efficiently."
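The kind of pipelining being argued about above can be demonstrated with nothing but the Python standard library (the tiny local server and the /one, /two paths are invented for the demo): both requests go out on one TCP connection before either response is read.

```python
import http.server
import socket
import threading

# A tiny local HTTP/1.1 server so the demo is self-contained.
class EchoPath(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # keep-alive, so one connection serves both
    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the demo quiet

srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0), EchoPath)
threading.Thread(target=srv.serve_forever, daemon=True).start()

s = socket.create_connection(srv.server_address)
# Pipelining: write BOTH requests before reading either response.
s.sendall(b"GET /one HTTP/1.1\r\nHost: demo\r\n\r\n"
          b"GET /two HTTP/1.1\r\nHost: demo\r\nConnection: close\r\n\r\n")

data = b""
while chunk := s.recv(4096):
    data += chunk
s.close()
srv.shutdown()

print(b"/one" in data and b"/two" in data)  # both answered on one connection
```

Note the responses still come back in request order - which is exactly the "serial at the response level" property the thread above is debating.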
  • ATM networks do this (Score:3, Informative)

    by rootmonkey ( 457887 ) on Sunday February 02, 2003 @11:33AM (#5210096)
    ATM networks have a high speed channel and a low speed channel (I believe). We are implementing a new protocol in our systems at work. Basically, data that can be blocked is sent on one channel, and realtime data that cannot be blocked is on the other. The channels can easily be told apart by an indicator in the header of the message. Note this is different than having more than one port.
    • ATM networks have the ability to police traffic based on the configuration of the channels you build across the network.

      You can have 1000 channels if you want (try PVCs or SVCs)

      The thing is, you
      1) Consume more bandwidth to do this, because of the ATM cell overhead
      2) Fragment the crap out of your data, because ATM has a fixed cell length (i.e., your 1024-byte TCP frame gets cellified into 48-byte chunks),
      any one of which being lost causes the entire packet to get retransmitted (unless you have a decent cell buffering system on your ATM switch).

      It is generally not recommended for pure data networks, because of the above; ATM was designed more for pure video/telephone style apps (realtime) to compete with data apps (non-realtime).

      If all of the apps on your network use IP, ATM is a redundant waste of money and resources.

      If you have a video or voice system (or even a private line emulation system) that speaks NATIVE ATM, then it makes sense to go ahead and use that, build a CBR or VBR-RT PVC for that application, and let the data traffic run on ABR or VBR-NRT PVCs...

      Otherwise, you should just use QoS at the IP level, and let the routers handle the policing. If you are dealing with truly anal design specs, you will also have to install RSVP to 'reserve' bandwidth, but a proper analysis of most networks will show that a properly designed network app will not need to reserve any bandwidth, if the policing is set up properly on the routers.

      Just ask me for more details if you need 'em!
      • Disclaimer: Or you can ask me for details - I do this stuff as my day job at a major telecomm carrier, and since we can sell you any kind of network you want, I can usually be pretty objective about comparisons.

        While ATM is more important for voice and video applications than pure data, it's also valuable for environments where some applications are more latency sensitive than others, such as database queries. In a local area network, I agree that ATM isn't going to win compared to Ethernets; standard CSMA Ethernet is less efficient for most applications, but the fact that a 100 Mbps interface card costs $10 makes up for that, and ATM-to-the-desktop was pretty much dead before anybody deployed any of it.

        But in a wide area network, what matters isn't how much bandwidth you consume, it's how much bandwidth you _buy_ and how efficiently you can use it to meet your application needs, and ATM and frame relay networks can often do that quite well. Yes, there's ~15 percent overhead on your ATM packets, but that gives you and your carrier a lot of flexibility in tuning performance, and in balancing what kinds of switching and routing goes where. You've probably noticed that most DSL equipment is running ATM, which is one of the main reasons you can get DSL internet service from a large number of ISPs, even though most of them use a telco or Covad to provide the access.

        Fragmentation at Layer 2, which ATM is, is a much different issue than fragmentation at Layer 3. You really, really don't want fragmentation at layer 3, because that typically adds 20 bytes of IP header, but ATM only adds 5 bytes of layer 2 header per 48-byte data payload, and in return you get the ability to interleave different data streams, which matters a lot if you're trying to mix big file transfer packets and smaller interactive-application packets (whether voice or telnet or whatever.) On big pipes, you won't notice, but on a 56kbps connection, you'd really rather not have your voice packet stuck behind a 1500-byte ftp packet (which takes about 200ms), and even on a T1 line, voice is happier if you don't have to wait for more than one ATM cell (about 1/3 ms) as opposed to the ~10ms for the big packet. With two PVCs, you can do this (at least at one end of the connection; some routers are too dumb to interleave ATM cells from IP packets on different PVCs.)

        As far as the problem of losing one cell trashing your whole packet goes, modern ATM switches usually have big enough buffers to handle multiple TCP sessions well, and the early packet discard / partial packet discard capabilities let you trash the remaining half of a packet if you overflow a buffer (which is basically the same thing that happens on an IP router if there's not room for a whole packet). Before EPD/PPD, there was the "sandblasting" problem, where a switch would lose one or two cells from a lot of packets instead of lots of cells from a small number of packets, which was obviously a Bad Thing, but it isn't a problem now.

  • by MadocGwyn ( 620886 ) on Sunday February 02, 2003 @11:33AM (#5210099)
    And you'll notice the technology is for "Web services" - not web pages, but collaborative databases or applications over the Internet. It's not meant as a web server. And this protocol does have some advantages, such as its deadlock prevention methods (read the article).
  • by CrazyJ020 ( 219799 ) on Sunday February 02, 2003 @11:39AM (#5210118) Homepage

    "The proposed protocol is free from deadlock and livelock, and seeks to effectively exploit the available alternative resource co-allocation schemes through parallelization of requests for required resources,"
    This article is useless. This quote is the only information that is remotely informative in the entire article.

    And to get to my point, the management of resource access is hardly the job of the protocol. It is the job of the underlying web service implementation to deal with these issues. Why should the protocol even have knowledge of the resource state?
    • by GreyPoopon ( 411036 ) <gpoopon@[ ]il.com ['gma' in gap]> on Sunday February 02, 2003 @11:59AM (#5210185)
      Why should the protocol even have knowledge of the the resource state?

      I think providing the protocol with this knowledge is supposed to speed up the whole process while still preventing dead/livelock situations. However, as you said, the article is way too barren of any real information to assess how this is really supposed to happen. It may be intentionally devoid of details until the authors of this protocol determine whether or not they really "have something."

      • If the protocol has important info, like info about deadlocking, and not the underlying server itself, wouldn't this be a huge problem? Wouldn't this make it a field day for hackers, or at least people who want to break things? Just craft one or two fucking packets that say they can access something locked by someone else and poof. Error Error Error.

        • that's when the part about security kicks in (did you read the article?)

          both parts, efficiency and security, are the great problems with web services (think transactions over http!).

          this guy is dealing with the first. now please someone deal with the second. they are orthogonal aspects.

  • by archeopterix ( 594938 ) on Sunday February 02, 2003 @11:50AM (#5210147) Journal
    In other words, instead of concurrent applications collaborating, they will vie for resources or just freeze while waiting for the other to take a lead.
    "Better coordination will be required to meet that demand, and this protocol provides that," said Park, who presented his research this week at the Institute of Electrical and Electronics Engineers' Symposium on Applications and the Internet in Orlando, Fla. His paper, titled "A Scalable Protocol for Deadlock and Livelock Free Co-Allocation of Resources in Internet Computing," has not been published yet.
    As far as I can tell from the articles it's about a protocol for avoiding deadlock in a distributed environment.

    This is cool and schmool, but where exactly are the collaborating applications that need to share and lock resources across Internet? Locking is useful only in preventing concurrent access to a critical nondivisible resource. Of course, web browsers share servers, but they don't need to lock them (well, sometimes they "lock" them, but this is only a side effect known as "slashdotting"). P2P apps? I don't think they need to lock anything in order to share files.

    A-ha! Web services! Ok, what web services? Have you ever used a distributed web service application that needed to lock resources? I thought so.

    I am not saying that this protocol is bogus, but it will probably be useful for apps that don't exist yet, at least on the Internet.

    • This is cool and schmool, but where exactly are the collaborating applications that need to share and lock resources across Internet?

      Any web service which needs to conduct a database transaction will potentially need to lock resources. It may be that the application is structured so that a request is a single transaction, but more complicated application may require multiple interactions, thus a longer transaction and the need for locking.

      For instance, take a web-services application that allows you to edit an, I don't know, address book entry. You retrieve the address book entry in one request, and store the edited values in another. Now, if another instance of the application on another machine comes along and retrieves and stores the same address book entry between your retrieve and your store, then when you store, the previous edits are lost. Hence the need for locking. This is obviously a simple, contrived example. Believe me, I've lain awake at night because of the difficulty of distributed transactions.

      -c
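The lost-update race described above, plus the usual optimistic-versioning fix, in a minimal Python sketch (the address-book record, field names, and version counter are illustrative, not anything from Park's protocol): a write carrying a stale version number is rejected instead of silently overwriting someone else's edit.

```python
# A toy in-memory "data store" with one address book entry.
store = {"alice": {"phone": "555-0100", "version": 1}}

def read(key):
    """Client reads a snapshot of the record, including its version."""
    return dict(store[key])

def write(key, new_phone, expected_version):
    """Optimistic write: only applied if no one updated in between."""
    rec = store[key]
    if rec["version"] != expected_version:
        return False          # stale - someone else stored first
    rec["phone"] = new_phone
    rec["version"] += 1
    return True

a = read("alice")   # client A retrieves (version 1)
b = read("alice")   # client B retrieves the same entry (version 1)

assert write("alice", "555-0101", b["version"])   # B stores first - ok
ok = write("alice", "555-0199", a["version"])     # A's write is now stale

print(ok, store["alice"]["phone"])  # False 555-0101 - B's edit survives
```

Without the version check, A's second write would blindly clobber B's, which is exactly the lost update in the address-book scenario above. Pessimistic locking (hold a lock across both interactions) is the alternative, and is where the co-allocation/deadlock questions come in.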

    • I am not saying that this protocol is bogus, but it will probably be useful for apps that don't exist yet, at least on the Internet.

      And when they do exist, they'll use XA [ibm.com], a (relatively) open protocol developed by IBM, which has been proven over decades of distributed, heterogeneous transaction processing (banks, airlines, telcos, etc). You can already mix CICS, Tuxedo, Oracle and DB/2 transactions with XA. (Note to Slashbots: it's OK if you haven't heard of CICS and Tuxedo). What do we need some newfangled nonsense for?
  • by David McBride ( 183571 ) <david+slashdotNO@SPAMdwm.me.uk> on Sunday February 02, 2003 @11:55AM (#5210165) Homepage
    There appears to be a common misconception that the subject being discussed here is simple web hosting.

    This is not the case.

    Web _services_ are a set of programmatically-accessible services implemented on top of HTTP, using a protocol like XML-RPC or SOAP. These web services are being used in current Grid Computing prototypes, hence the references to "collaborative applications".

    The eventual aim of Grid Computing is to provide a means to expose resources (such as computational clusters, network links, visualisation suites, data-collecting instruments, SAN clusters, etc.); then, when jobs get submitted, the Grid infrastructure should automagically allocate resources for the task, taking into account what resources the submitter is permitted access to, what resources the job requires, what other jobs are already scheduled and potentially even what the monetary cost of using each resource is.

    See also here [gridcomputing.com] and here [globus.com].
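A minimal example of a web service in the programmatic sense described above, using Python's standard-library XML-RPC modules (the `add` function is just a stand-in): a procedure exposed over HTTP and invoked by another program rather than browsed.

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc import client
import threading

# Expose a function over HTTP as an XML-RPC "web service".
srv = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
srv.register_function(lambda a, b: a + b, "add")
threading.Thread(target=srv.serve_forever, daemon=True).start()

# A client calls it programmatically - the request and response
# travel as XML in the bodies of ordinary HTTP POSTs.
host, port = srv.server_address
proxy = client.ServerProxy(f"http://{host}:{port}/")
result = proxy.add(2, 3)
print(result)  # 5
srv.shutdown()
```

SOAP services work the same way in spirit (remote calls serialized over HTTP), with a heavier envelope and type system; it's resources behind endpoints like these that a Grid scheduler would be co-allocating.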
  • From the article:
    Park said that he will seek to commercialize the next generation of his protocol that he has been fine-tuning over the past year. Those refinements, he said, make the protocol less theoretical and more appropriate for real-world use.

    Translation:

    Park said that he has already filed an application for patents on this "technology" and any similar "technology" that looks like it might be useful to anyone. Current protocols already in widespread use may infringe on his pending patents, and he plans to form an intellectual property corporation in order to begin infringement lawsuits once the patents have been granted.

    Or something....

  • Oh man ... (Score:3, Funny)

    by Anonymous Coward on Sunday February 02, 2003 @12:07PM (#5210203)
    OBDPPWPR://www.slashdot.org

    that's a little too crazy ... ;)
  • If I combine this with the thing that the clever Irish teenager invented, XWEBS [slashdot.org], does it mean that I can surf 40 times faster?
  • ...can this protocol get you onto the Wired without the need for a computer? Does it lock into the Schumann Resonance of Planet Earth? Have I watched too much Serial Experiments: Lain on TechTV recently? [techtv.com]
  • Hasn't this been thought of before? Small-scale parallel processing sounds a lot like the queues in SEDA:

    SEDA Homepage [berkeley.edu]

  • by jsse ( 254124 ) on Sunday February 02, 2003 @12:52PM (#5210418) Homepage Journal
    May be the answer is to stay away from http.

    Web Services basically describes the kind of services run over http. Excessive services result in http request saturation, and thus people have to find ways to circumvent the performance problems.

    The reason why people nowadays mostly rely on http is the laziness of admins in handling corporate security. Services like RPC calls multiply the complexity of administration, and it's easier if we all target requests at a single channel - http, which most enterprises have already opened for their normal web servers. Web Services beat CORBA in terms of convenience of deployment, not in terms of technical merit. (for more information, see this comparison [xs4all.nl])

    The article and the links that follow are insufficient to tell what's inside this research. If he could really find a solution to the http saturation problem, that solution could absolutely be applied to everything else. I'm pretty skeptical of it. :)
    • by msobkow ( 48369 ) on Sunday February 02, 2003 @03:41PM (#5211177) Homepage Journal

      HTTP was designed to be efficient for cases where a relatively simple request is going to result in a relatively large result dataset. Distributed services don't follow that pattern. You often have a relatively complex request (save changes to customer information) producing a simple result (changes saved/lost.)

      HTTP was also designed as a stateless protocol, and does not have the facilities to ensure any time- or order-based serialization of requests and results. (Yes, it can be cobbled in via back-end stateful servers and session context data, but it isn't used by the HTTP server itself to serialize anything.)

      Abusing a simple protocol in order to make life "easier" for the network configuration and administration team is just a bass-ackwards way of dealing with things. Networks are an infrastructure service for providing information systems to business, as are databases, file servers, application servers, programming services, etc. Nothing ever seems to end up "easy" except with a loss of functionality, efficiency, or scalability.

      • You're absolutely right. In my sane mind I know building everything on HTTP is doomed to failure and they'll eventually go back to a CORBA-style solution to save the day. Just like client-server/terminal-based cycles. :)

        but in my more sane mind I work extensively on XML and related web services... well, maybe we shouldn't complain too much about those who made this mumbo-jumbo which secures our jobs and ensures our paychecks come regularly. :D
  • From the article...

    Park said that he will seek to commercialize the next generation of his protocol that he has been fine-tuning over the past year.


    Does this mean it will be closed source, proprietry and all that jazz? I hope not.
  • ...oh wait, it crashes. Better limit it at 10x speed.
  • Say what? (Score:3, Funny)

    by WeekendKruzr ( 562383 ) on Sunday February 02, 2003 @01:41PM (#5210625)
    Wait wait, OD-3P-R?? What is that, the long lost love child of R2-D2 and C-3PO??
  • Commercialize? Why? (Score:4, Interesting)

    by sean23007 ( 143364 ) on Sunday February 02, 2003 @01:43PM (#5210633) Homepage Journal
    Park said that he will seek to commercialize the next generation of his protocol that he has been fine-tuning over the past year.

    Why? Didn't he look at HTTP at all? The reason it was so successful and widespread was that Tim Berners-Lee did not commercialize it. If Park makes this protocol commercial, it will either not be adopted at all, or it will be bought and proprietized by Microsoft. Neither of those is particularly desirable. If he keeps it open and free, it could eventually garner as much popularity as HTTP. Tis too bad he cares only for getting a check.
    • Um, this is a typical worm to catch a vulture capitalist. Note the almost complete absence of hard details and the wild claims.

      As you say, if it did work, then his best bet is to publicize it in open form. I can already find a lot of good quality stuff for free on the internet with good implementations. If I really want top performance for a closed source project, no problem, I can code it up again. The important thing is that the protocol is out there, together with enough code to demonstrate it.

  • The protocol is based on an algorithm that lets it use parallel instead of serial methods to process requests.

    Isn't this already present, and called FTP? (FTP is notoriously ugly when dealing with firewalls.)

  • by docstrange ( 161931 ) on Sunday February 02, 2003 @02:00PM (#5210704) Homepage
    The speed increase will be offset by the length of time it takes to type the url.

    Hypertext Transfer Protocol
    http://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    versus

    Order-based Deadlock Prevention Protocol with Parallel Requests

    obdppwpr://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  • by wickedhobo ( 461297 ) on Sunday February 02, 2003 @02:08PM (#5210738)
    Most developers of enterprise systems don't need this. Traditionally, the bottleneck of data-driven medium-to-big apps is the database: connecting, connection pooling, reading, caching, whatever.

    I'm working on a large web-services product/project now using various J2EE technologies (JRun, Castor, Object Relational Mapping, Axis) and my biggest bottlenecks are the database (problem mostly solved through caching, and clustered caching), XML Serialization/Deserialization or marshalling/unmarshalling (problem solved using Castor XML) of the object graph to and from the SOAP body and Java objects, and simply the passing of large object graphs through XML protocols like soap.

    Go read TheServerSide.com, or Bitter Java; they'll tell you what the common bottlenecks are, and this usually isn't one of them.

    I assume .NET has similar bottlenecks, but I don't know, haven't worked with it.
  • do a really good job of it, and take into account the work of (and the many I have missed); HTTP is a very primitive protocol. I don't know when or if it will be overhauled or superseded, but if it is, it needs more than this suggestion - much more: lots of work, planning, foresight, architecture and engineering.
  • He's an idiot (Score:1, Interesting)

    by Anonymous Coward
    Great, so you can handle 10 simultaneous web requests. That might decrease the latency for a client, but it won't increase the performance of your server. Just the opposite, it will degrade it as the same number of client requests now generate 10 times as many connections. With each connection consuming threads and traffic, say bye-bye to your rock solid web server.

    This is an example of research without grasp of reality.
  • well, there goes the /. effect. Under new protocols, small webpages can get linked by slashdot, and still run the next day.
  • um... (Score:3, Funny)

    by VitrosChemistryAnaly ( 616952 ) on Sunday February 02, 2003 @10:11PM (#5212870) Journal
    Why do I need this protocol when I already bought a Pentium 4 processor to make the internet go faster? :)
  • Sure it's all fun and games until you realize that it would be a serious pain in the ass to type "Order-based Deadlock Prevention Protocol with Parallel Requests" - OBDLPPPR:// - to get to /..org
  • I fear odpppr://www... will never catch on; surely they could have thought of a better, less ugly name?
