Journal tomhudson's Journal: The web browser is a dead end. 39

Let's face it: the browser has come a long way, but ultimately it's a dead end. Once used just for viewing web pages, it's now being shoe-horned into something it can never be - an applications platform.

So we end up with these huge bandwidth-wasting javascript libraries that have to be continually updated, a platform that has a perma-broken security model, and we waste more of our time fixing display and layout issues than in writing real code.

At some point, enough is enough. Embedding an application in a browser is stupid. We have better tools.

I've never been a fan of Java because the performance sucked mightily. However, recent releases of java run acceptably on modern hardware, and allow programmers to concentrate on code, not stupidities like tweaking css files, or having to resort to tables because css is STILL broken in every single browser.

Which would YOU trust your banking information with - a weeble/feeble browser, or a java application that communicates directly with the server and does its own encryption, so you don't have to worry about phishing and MITM attacks on ssl certs, etc...?

The apache/[perl|php|python]/html/css/javascript/dom/xhr "platform stack" couldn't have been worse-designed if you set out to fuck programmers and end users over on purpose.

Think of it - how many times have you had to put aside writing CODE because you had to get something to "work" in all browsers?

The internet is more than just a bunch of web pages. With so much information out there, why do we insist on trying to make it all fit in browsers? If we treated the Internet the same way as *nix treats everything ("everything is a file"), instead of "everything is either a web page or a yadda yadda yadda ...", we'd be able to move forward again.

This leads me to ask the question my sociology prof said to always keep in mind - "Who benefits from the status quo?"

Oddly enough, the one who benefits the most is ... let's see ... there's Microsoft. By keeping the browser wars alive, and keeping everyone focused on "The Internet as the Web", we are kept locked into the idea that we have to have a browser to use the internet ... rather than moving on to the next big thing.

Who else benefits? Companies like Adobe Systems, who also cater to the "teh web is a series of pages" crowd and think that being able to use Dreamweaver makes them a "programmer" ... Then there's google ...

It's popular to throw rocks at Sun, but Sun had the right idea. Let's look at just one example ... cloud computing.

Originally, cloud computing was supposed to be that all the computers out there are part of this vast "cloud", and can freely exchange information. However, in a browser-based web, that's not practical - you need centralized servers to "host" the cloud, instead of client-side apps that can also act as their own servers, exchanging data.

A real "cloud computing" search engine is distributed among all the users' computers - each one connects to others in the cloud that it has a "relationship" with for searching. For example, I may have a couple of gigs of articles on Canadian politics that I "share" with the cloud. Others can connect and search while I'm "on the net". In doing so, we've disintermediated the search engines, killing off the "we'll make money with ads from google" gang. Funny thing - search is what made the internet big, and moving it into the user cloud will make the internet even bigger, but it will also kill off both google and Yahoo!.
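The idea sketches out in a few lines of code. Everything here - the `Peer` class, the one-hop fan-out to "relationship" peers - is a hypothetical illustration of the shape of the thing, not a real protocol or any networking:

```java
import java.util.*;

// Hypothetical sketch of the "user cloud" search idea: each peer keeps a
// local index of what it shares, and a query fans out to the peers it has
// a "relationship" with. No real networking - just the structure.
class Peer {
    final Map<String, Set<String>> localIndex = new HashMap<>(); // keyword -> shared docs
    final List<Peer> relationships = new ArrayList<>();

    void share(String keyword, String doc) {
        localIndex.computeIfAbsent(keyword, k -> new HashSet<>()).add(doc);
    }

    // Search my own shares plus one hop out into the cloud.
    Set<String> search(String keyword) {
        Set<String> results = new HashSet<>(localIndex.getOrDefault(keyword, Set.of()));
        for (Peer p : relationships) {
            results.addAll(p.localIndex.getOrDefault(keyword, Set.of()));
        }
        return results;
    }
}

public class CloudSearchSketch {
    public static void main(String[] args) {
        Peer me = new Peer(), you = new Peer();
        me.relationships.add(you);
        me.share("canadian-politics", "my-article.txt");
        you.share("canadian-politics", "your-article.txt");
        // Both documents come back, with no central server in the picture.
        System.out.println(me.search("canadian-politics"));
    }
}
```

A real version would obviously need discovery, transport, and trust between peers - this only shows where the index lives.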

There are a lot of vested interests in keeping everything centralized, whether it be data on search engines, cloud computing, or funneling everything through one particular vendor's implementation of the "web browser experience".

Three things have to happen:

  1. IPv6, with permanent assignments, so that everyone can find everyone else, any time, any place;
  2. Thinking outside the "box" that web browsers are keeping us in;
  3. Legal framework to make it a requirement to have more secure applications.

That last one is something that might make a few people uncomfortable, but for decades, software vendors have been getting away with putting out crap and disclaiming liability. They tout "best practices", but if "best practices" were more than a buzzword, we would NEVER see people accessing their bank accounts through ANY web browser.


Comments Filter:
  • Yes to all, it's interesting seeing someone articulate it so clearly ..
    • The next question is "What to do about it?"

      Obviously, the only way to make change is to convince people that their self-interest aligns with dumping the browser as a platform ... which is going to be hard, because a lot of people think that they can make easy money off "teh web", without having to actually ... you know ... *work*.

      For me, I'm just fed up with having to spend the majority of my time most days on non-programming stuff (I don't count futzing around with css, html, and javascript as "real pro

      • by rs232 ( 849320 )
        "The next question is "What to do about it?"

        This is top quality, have you considered submitting it to the tech press? It seems Slashdot finds Obama's taxes, the Zune and Mad mag more relevant .. :)
        • I want to see what people in this circle think about as viable alternatives, or whether some of them still believe that the web browser can be "saved."

          The current state of the browser is that, for every improvement, we're getting more potential "issues" than we are resolving. HTML5 won't fix that. Neither will adding still more scripting engines to the browser. Neither will css 3.0. The "target" is all wrong. Trying to change the DOM into a "real application platform" is in the same realm as teaching

      • You want me to start ranting, don't you? *g*

        I'm pondering whether I should really start to dissect your posting from the perspective of a 13+ year web monkey, because that's going to be a LONG post. To sum it up, while I agree with your notion that the Internet must move away from the page metaphor to a real network with interconnected nodes, moving back to the dark ages of regular applications and proprietary protocols and all the associated security nightmares isn't really an option for me. Let's just say with a b

        • Ok, I'm trying to piece this together, as I'm not a web-code-monkey, but rather more in the vein of sysadmin. I truly do want to try and understand the "solution" that needs to occur. Please don't take this question as a troll or a flamebait. I really am curious.

          Isn't that what XML was supposed to allow us to do? Return the requested data set, along with encodings that tell you what the values are for (in case they get mangled or aren't in the expected layout/format) and then let you piece that data tog

          • xml is one of my pet peeves. We had a feed to use last week, and it was available in csv, php, json, and xml. All we had to do to get the data we wanted was two lines of php: a foreach() and a list($junk1, $junk2, $junk3, $good_stuff, $junk4) = explode("\t", $row);

            Instead, "no, we're going to use the xml feed and parse it out yadda yadda yadda - xml is better!"

            Riiiight .... waste memory, waste code dev time :-(

             It has its place, but people seem to think it's a universal panacea, and ignore simpler
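For what it's worth, the "two lines of php" approach translates to any language. Here is the same tab-separated pull as a Java sketch - the feed contents and the field position are invented for illustration:

```java
import java.util.*;

// Pulling one column out of a tab-separated feed - the foreach()/explode()
// approach described above, with no XML parser in sight. The feed is fake.
public class CsvPick {
    public static void main(String[] args) {
        String feed = "j1\tj2\tj3\tgood-stuff-A\tj4\n"
                    + "j1\tj2\tj3\tgood-stuff-B\tj4";
        List<String> goodStuff = new ArrayList<>();
        for (String row : feed.split("\n")) {
            goodStuff.add(row.split("\t")[3]); // 4th field is the one we want
        }
        System.out.println(goodStuff); // [good-stuff-A, good-stuff-B]
    }
}
```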

            • by tqft ( 619476 )

              How much of usenet are you planning on re-inventing?

              Publishers/people set up topic-specific lists (aka sites in the current paradigm) and send information to them. People subscribe to the list and filter according to taste, e.g. you are subscribed to /. alt.slash.Frontpage; next message is slashdot main page story "Obama makes Canadian beer mandatory replacement for Budweiser in USA"
              You say yes send me more and app goes away and pulls from alt.slash.Frontpage.comment (based on original message number). You th

        • Good points. I certainly have no problems with open protocols, and open implementations of them. I just remember a time when people were saying that we'd eventually be able to cobble up our own customized apps simply by gluing a bunch of classes together. Don't like the spell-checker you have now - just plug in another one. Don't like a particular rendering? Ditto. Want to share data? Grab a common share-type class, tell it what you want to share or fetch, and you're done.

          The promise disappeared, becaus

  • I'm wondering if the model - view - controller paradigm might help here.

    There's all the information out there. what do we want it to do, and what do we want to do to it?

    In one regard, shoehorning everything into a browser makes sense. The information is, as a rule, 'out there', not local. Displaying it through a limited-capacity browser is comparable to looking outside your window - you don't expect full resolution, or visibility from all angles. With the cloud approach, the tools to
    • One of the core concepts of the original "cloud" was supposed to be about better sharing, w/o having to dictate what each of us takes from the experience - how we choose to view the data, etc.

      This has been suborned into yet another "let's offer the cloud as a service and we'll make money" scam. I don't, as a rule, trust gate-keepers. After all, who will keep watch over the guardians, especially when their best interests aren't the same as yours or mine?

      I agree that java seems to offer a way out of the

  • You might have a point if the browsers of today were anything like the browsers of yesterday. But they're not. They are designed from the ground up to be application platforms. And that is simply fact. (Look at the XUL/XPCOM design of Mozilla/Firefox if you don't believe me.)

    You may not like that fact, and you may even feel that it's the wrong way forward. But to claim that a worse set of technologies couldn't be designed if someone tried shows a significant lack of understanding both from a technological p

    • XUL/XPCOM is not a solution. It depends on css, javascript, and the DOM, along with all the accompanying issues we've been dealing with for the past decade. I would flat-out quit my job first.

      The browser is part of the problem. The solution involves getting away from the browser when appropriate. Saying to use XUL and XPCOM is like saying that the solution to the debt crisis is more debt.

      But to claim that a worse set of technologies couldn't be designed if someone tried shows a significant lack of under

  • We should put you in charge of the whole world :-)

  • I see a small problem: Many users enjoy believing that they can remain anonymous and/or secure behind their current IP addresses: In your post, you recommend Permanent assignment of IPv6 addresses - is this not going to threaten whatever anonymity might be available? Or have you relegated those wishing for it to go to "public" gatekeepers, (see also US libraries)?
    Also, how many times will I have to teach my work/school/home/laptop/replacement machine(s) with their unique IPv6 address about my preferences?

    • Good point. I should point out that the idea of a permanent ip address is so that YOU can find your computer from anywhere in the world. There's no reason that it can't at the same time participate in something like TOR, so that everyone, including you, maintains anonymity :-) Packets go to your computer and are routed to others in the cloud. W/O the decryption key, who's to say what is other people's data (which your box is just forwarding) and what is yours?

      Another thought: With 16gig USB drives goi

      • by RM6f9 ( 825298 )

        Actually, if I need to carry something, I'd prefer to carry one of the 1TB external drives (it'd need a very robust carrying case), but other than that small quibble, I like the way this is shaping up.

      • Ya know, that's what I want to do my grad research on, or something. Portable profiles. And there's no reason why we can't have a standard format for that, along with a way to acknowledge system redirects and such. And then there can be a certs folder that has your personal certs that your AD or LDAP or whatever can auth against at sign-on, pending recognition of a verifiable device. But since there are a lot of questions pertaining to this end, it's not something I would say is ready for a serious prop

  • Which would YOU trust your banking information with - a weeble/feeble browser, or a java application that communicates directly with the server and does its own encryption, so you don't have to worry about phishing and MITM attacks on ssl certs, etc...?

    You need to talk to Bruce Schneier. Doing "your own" encryption is always "fail". Besides, it has to do authentication right too, or it will suffer from MITM attacks. (As you said, yourself, the Internet is more than just the web, so do the MITM attack o

    • by Tet ( 2721 ) *
      Agreed 100%. I read the JE, and thought "WTF? You'd have to be utterly insane to trust a random java app doing its own encryption over a web browser doing HTTPS". Particularly for something where I need the security, like banking. Glad to see I'm not alone there. Similarly, the cloud is a stillborn concept (and rightfully so, IMHO). The web browser is far from perfect, but it's infinitely preferable to the half-baked alternatives being suggested here.
      • Just a few points:

        1. We don't need a browser to use secure sockets
        2. there's no reason why we can't throw extra encryption over and above that, on both ends (think encrypted comms over ssl)
        3. throw in one-time pad challenges for extra goodness on every packet exchange.

        BTW - 1024-bit keys will fail within the next few years, same as 128-bit keys did. http://arstechnica.com/old/content/2007/05/researchers-307-digit-key-crack-endangers-1024-bit-rsa.ars [arstechnica.com]

        • We don't need a browser to use secure sockets

          No, but you need a standard library for it and the most popular operating system doesn't have such an implementation (apart from in its browser)

          there's no reason why we can't throw extra encryption over and above that, on both ends

          Yeah, encrypting an encrypted stream always makes it stronger. (No, it doesn't, it can even break things)

          throw in one-time pad challenges for extra goodness on every packet exchange.

          Do you even know what you're talking about? You cann [wikipedia.org]

          • There's no reason your bank can't have a cipher-server on-site or at every ATM that you just plug a usb key into and it loads up x gigabytes of cyphers, linked to your bank account.

             Next step would be for the banks to allow people to make a "deposit" of keys for another user. Phone cos and ISPs could offer the same service. And there's always sneakernet, couriers, and snail-mail.

            • Yes, you confirm exactly what I said: not practical....

               I know banks, they will not do this. Besides, you haven't considered the volume requirements. Take a one-time pad and we'll make it 1GB downloadable at the ATM (never mind the time to download 1GB; we'll pretend they have 1 Gbps connections to ATMs, which is of course not the case). The one-time pad needs to be stored by BOTH the bank and the customer. For the customer: no problem... It's on his bank-issued USB stick. For a small bank with 100

              • The ATM could store a few terabytes of keys. All it has to do is report back which account it dished out which "key-pack" to, then delete that "key-pack".

                 1 gig of one-time pads is good enough for several millennia of banking, so we can reduce it to, say, 10 megs. That can be downloaded in seconds, so just keep 1 "key-pack" locally at a time. A bank with 1 million customers would need storage of 10 TB. With 2TB drives running $250, two or three (for redundancy) RAID6 boxes w. 8 drives each would do the j
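The back-of-envelope arithmetic above checks out. A quick sketch, using only the figures assumed in the comment (10 MB per customer, 1 million customers - not real bank numbers):

```java
// Back-of-envelope check of the key-pack storage estimate:
// 10 MB of one-time pad per customer, times 1 million customers.
public class PadStorage {
    public static void main(String[] args) {
        long padPerCustomerBytes = 10L * 1000 * 1000;      // 10 MB key-pack
        long customers = 1_000_000L;
        long totalBytes = padPerCustomerBytes * customers; // 10^13 bytes
        double totalTB = totalBytes / 1e12;                // decimal terabytes
        System.out.println(totalTB + " TB");               // 10.0 TB
    }
}
```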

        • by Tet ( 2721 ) *

          1024-bit keys will fail within the next few years, same as 128-bit keys did

          And? 4096-bit keys won't (barring major mathematical breakthroughs)

      • I also talk about experience, because in the early web banking days, I was maintaining such a Java App (okay, Applet) with a proprietary encryption layer [wikipedia.org] (The reasons back then were the export limitations from the US). It was a frigging nightmare.

        The JSP based version that succeeded it was infinitely more suited for the task.

        • ... perhaps the proper solution would have been to move the development of that part to somewhere not affected by the US definition of cryptography as "munitions."

          Again, though, applets are browser-based, and share the limitations of browser-based apps. There are work-arounds for some of them, but it's still a compromise at best.

          • ...perhaps the proper solution would have been to move the development of that part to somewhere not affected by the US definition of cryptography as "munitions."

             Ehm, that's exactly what they did. It was developed in Germany for a reason! I was there, in their HQ in Stuttgart. Once the "munition" ban was removed by the US, they went broke.

            Again, though, applets are browser-based, and share the limitations of browser-based apps.

            Just a tad bit more limited, but a full-fledged thick client will have the sam

    • I'm not saying come up with your own encryption algorithm - just don't trust a browser to it, simply based on their record ...

      • In that case the implementation will be flawed (good encryption is hard) or we'd have to use system libraries for them. The last one is preferable, but the most popular operating system doesn't even have something like ssh built-in.
        • If by "most popular operating system doesn't even have something like ssh built-in" you're referring to Windows, I'm not worried. Two things are going to change that drastically over the next 5 years:
           1. really cheap, really large, bootable usb keys (I just bought three 16 gig ones for $25.00 each, and they'll be turned into bootable linux keys). As the price continues to drop and capacity continues to grow, people are going to like the idea of carrying not just some of their data, but their whole environment, in th
          • Okay.... That I can see happening...

             I know I'm bitching, but 16 gig USB sticks are expensive for what they are, and the 1 gig sticks I tried to put operating systems on sucked donkey's balls.... Of course banks can buy them in volume.

             Anyway, booting from USB sticks is flaky. I'm pretty sure my wife's 6 year old P-IV can't do it. What about people still having Mac G4 or Mac G5?

            One of the problems with geeks and nerds is that they tend to see a 3-year old PC as obsolete. It isn't in the eyes of normal peop

            • A 6-year-old P4 should be new enough to offer an option at boot time to select boot media (something like Press F10 to select boot drive).

              As for the price of usb sticks, $25 for 16 gigs isn't bad ... but if you wait a bit, we'll probably see 32 gigs (or maybe even 64 gigs) for the same price before the year is up.

              Hardware detection on newer machines isn't as bad as on older stuff - there's been a lot of standardization in chipsets, etc. But you're right - one bank here WAS doing an "open an account, ge

  • Thick client companies have been touting this for years, but there is no universal thick client outside the browser to use, and no one wants to make a universal thick client, so you are stuck with embeddable thick clients, such as flash, that will be used...

    The internet needs to be split... straight info vs multimedia thick clients... then make a big thick client that's universal for all of them to use and that everyone is willing to adopt and use.
    • Why not:
      1. a container
      2. a list of components or widgets or classes or whatever you want to use (or just the directory they "live" in when you're building it ...)
      3. a specification of how you want them to be configured (could be done via a text file, or a gui app, like a netbean)
      4. an optional description of how you want them to communicate with each other, if you don't want their default behaviour
      5. a "make me" utility to build it

      You should be able to build either servers or clients from this.
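The five steps above can be sketched minimally. Every name here - the `Component` interface, the manifest-of-class-names format, the example components - is invented for illustration; configuration and inter-component wiring (steps 3 and 4) are left out, so this only shows the container and "make me" idea:

```java
import java.util.*;

// Hypothetical "make me" sketch: a container reads a list of component
// class names (standing in for a manifest file) and instantiates each one
// by reflection, then lets them run with their default behaviour.
interface Component {
    String describe();
}

class SpellChecker implements Component {
    public String describe() { return "spell-checker"; }
}

class Renderer implements Component {
    public String describe() { return "renderer"; }
}

public class MakeMe {
    public static void main(String[] args) throws Exception {
        // Stand-in for the manifest: which components to assemble.
        List<String> manifest = List.of("SpellChecker", "Renderer");
        List<Component> container = new ArrayList<>();
        for (String name : manifest) {
            container.add((Component) Class.forName(name)
                    .getDeclaredConstructor().newInstance());
        }
        for (Component c : container) System.out.println(c.describe());
    }
}
```

Swapping the spell-checker would then be a one-line manifest change, which is the "just plug in another one" promise mentioned upthread.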
