
XHTML 1.0 now a W3C Recommendation

thehermit writes "New info on the W3C's Web site as XHTML 1.0 became a W3C Recommendation on Jan. 26. The specification now features a single namespace, and takes a more cautious approach to Internet media types, following feedback from W3C members on the previous version of the specification. " W3C notes that "XHTML 1.0 is the first step toward a modular and extensible Web based on XML". The full XHTML spec is also available.
This discussion has been archived. No new comments can be posted.

  • Great.. Now that you've told the first posters that they can have advance shots at first posts, they inevitably will. The rest of us have been keeping it under our hats, why didn't you?

  • It's a sad day when reality is marked as 'flamebait'. I guess there aren't any professional web developers moderating lately. *shrug*

    Whatever.
  • by Matts ( 1628 )
    Fine. Pick nits. I was quoting Tim Bray. I know what XML is and isn't.
  • It's not backwards compatible with HTML

    I think it is backwards compatible enough, at least if you use the 'transitional' DTD. It isn't that hard to convert old sites to it.

    And anyhow, the idea here is that by starting to enforce syntactically correct mark-up, the browser engines can be a lot simpler and so browsers can potentially be less bloated.

    Old (existing) browsers can't parse it properly

    They can't? Why? If you follow the backwards compatibility guidelines that are provided in the appendix of the spec, you shouldn't have problems with existing browsers.

    As a real-life example, there is a site my girlfriend built last weekend for the local Kendo society at http://kendo.greywolves.org [greywolves.org]. That site is in XHTML, and I believe older browsers will mostly have problems with only the PNG images there.

    Compared to what general XML can offer, it's a pretty lame DTD

    That is not the point here. The point here is to allow a gradual transition from legacy SGML-based HTML to the XML-based one. And anyway, it is possible to make parts of an XHTML document follow another DTD by using XML namespaces. This way you can use any XML DTD you want, and still stay relatively compliant with existing browsers.

    There is a lot of hype around XHTML, but I think it really is a good idea (and standard), even if not everything that people write about it is correct.

    And as to who will be using it, I'm doing all my Web development work with it even now, and my employer is also slowly transitioning to the standard...

    I hope this helped at least slightly with the question...

    /Bergie


    --
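The namespace trick Bergie describes above - letting parts of an XHTML document follow another DTD - can be sketched in a few lines. The "inv:" inventory vocabulary and its URI here are made up for illustration; only the XHTML namespace is real.

```python
# Sketch of mixing vocabularies via XML namespaces, as described above.
# The inventory namespace is hypothetical; the XHTML one is the real W3C URI.
import xml.etree.ElementTree as ET

doc = """<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:inv="http://example.org/inventory">
  <body>
    <p>Stock level: <inv:count>42</inv:count></p>
  </body>
</html>"""

root = ET.fromstring(doc)
# The parser expands prefixes to {namespace-URI}localname, so elements
# from the two vocabularies stay distinct even in one document.
count = root.find(".//{http://example.org/inventory}count")
print(count.text)  # -> 42
```

An XHTML-aware browser can render the parts it knows and leave the foreign elements to other processing.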

  • A hairpiece? Really? I couldn't tell (:-) )
  • ...a CSV (colon-separated-value ;-) file is as good a technology as XML.

    Um, no. How do you e.g. assign context to a CSV "element"? In such a way that should the "element" be moved in the file, the context remains? Is a CSV document readable to both a machine and a human reader? XML gives you that and more.
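The context point above is easy to demonstrate: a CSV field is identified only by its position in the row, while an XML element carries its name with it wherever it is moved. A small sketch:

```python
# A CSV field is positional; an XML element is self-describing.
import csv, io
import xml.etree.ElementTree as ET

row = next(csv.reader(io.StringIO("Alice,42,Helsinki")))
# Which field is the age? Only the agreed-upon column order says so.
age_by_position = row[1]

record = ET.fromstring("<person><city>Helsinki</city>"
                       "<age>42</age><name>Alice</name></person>")
# The fields are in a different order, but are still found by name.
age_by_name = record.findtext("age")

print(age_by_position, age_by_name)  # -> 42 42
```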

  • I think I'm starting to understand. Kinda. This is for people who like HTML but want some of the extensibility that XHTML provides. Hmm.

    I can think of a lot of reasons why someone would stick with HTML. I can think of even more reasons why someone would move to various XML schemas/DTDs. XHTML is kind of a weird in-between hybrid that doesn't satisfy any particular set of requirements over and above what XML or HTML would provide.
  • I see so many people in here and on IRC saying that they don't want to go to XHTML because *gasp* they have to have quotes around their attribute values! People, there isn't much to change for XHTML 1.0. The true web designers always push good, proper HTML code. XHTML is now enforcing that. A few changes, and you get many benefits:
    • The code is easier to read and debug
    • Web browsers and parsers can understand it better
    • In the future, your XHTML code will be easily viewed on cell phones, Palm Pilots, web-enabled waffle makers, etc.
    • You can include other types of XML in your document
    And all you have to do is:
    • Include a DTD
    • Put quotes around attribute values
    • Lowercase your tags
    • Self-close some tags like <img/>
    That's not that bad. Though no current browsers understand a XHTML document as XHTML (besides Amaya and I guess Mozilla?), you can still use XML parsers that understand it. It's a good way to get a feel for things. My parser (Mino, at http://mino.portaldesign.net [portaldesign.net]) understands XHTML and will indent it nicely when outputting to the browser. It'll have tags for SQL and XSLT soon. If you want a brief overview of the changes in XHTML, visit Gelicon.com's XML section [gelicon.com]. It should help you get started.
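The payoff of the checklist above is that the converted markup is well-formed XML, so any off-the-shelf XML parser accepts it. A minimal sketch (the sample tags are illustrative, not from any particular site):

```python
# Before and after applying the XHTML checklist: quote attributes,
# lowercase tags, self-close empty elements.
import xml.etree.ElementTree as ET

old_html = '<P>Logo: <IMG SRC=logo.png ALIGN=left></P>'          # sloppy HTML
xhtml    = '<p>Logo: <img src="logo.png" align="left" /></p>'    # converted

try:
    ET.fromstring(old_html)
    old_ok = True
except ET.ParseError:
    old_ok = False

ET.fromstring(xhtml)  # parses cleanly as XML
print(old_ok)  # -> False: the unquoted attribute breaks the XML parser
```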
  • TipOfTheDay: Use tags like <br /> instead of <br/> when writing "tight" HTML, otherwise older browsers choke on it.

    I never thought of that. I thought it wasn't well-formed XML if there was a space before the slash. Is that an intentional feature of XHTML/XML, or just a workaround to make XHTML display well on older browsers? Am I breaking my XML by doing so?

    Jay (=
  • by deusx ( 8442 ) on Wednesday January 26, 2000 @09:13PM (#1333242) Homepage
    No one uses XML yet, it's harder to parse in a program than proprietary formats, so no one uses them

    WHAT?!

    XML is the best thing since sliced bread! And, no this isn't a troll, I mean this! Hard to parse? What are you talking about?

    First of all, YOU shouldn't be parsing it. I don't care what language you're coding in, you'll probably find that someone else has taken care of that for you. I use Perl primarily, and switch between XML::DOM and XML::Parser, both of which handle all of the dirty work of chewing on the tags and characters.

    As I mentioned in the story on the Slashdot code release, I have a project: Iaijutsu: Open Source Content Management and Web Application Framework [ninjacode.com]. And this project makes extensive, pervasive use of XML.
    • The documentation I'm writing [ninjacode.com] (other than POD in the Perl modules) is being done with the DocBook DTD [docbook.com], which lets me write in one common format and publish in HTML, Word doc format, etc... all from one document.
    • Content classes may be created using a hybrid Perl/XML format which defines the class' properties, methods, template accessors, and various other aspects.
    • Objects in my system may be imported and exported in a simple, self describing XML format listing all of their properties. You can write it by hand easily in Textpad [textpad.com] or Emacs to make lots of objects easily...
    • XML is used to syndicate news and headlines from other sites, like the service Slashdot offers in the backend [slashdot.org]. I've written content classes in Iaijutsu which download these syndication files to collect headlines. And, I believe, Slashdot uses these files to make slashboxes.
    yes, I *do* write import/export routines, everyone still uses comma-delimited or dbf files, occasionally Access files too

    Then you've REALLY missed the boat. XML is EASY. Screw comma-delimited; I've actually found it easier and more maintainable to write quick Perl scripts which use XML::DOM than to hack out a CSV parser. Hell, I even have Oracle DB servers spewing XML streams at me to handle.

    XML is far from failed. Go back and try it again. As for XHTML, I don't know that it will ever be truly adopted, but if it catches on... we could write web browsers and web service consumers in a fraction of the time and code.
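The poster works in Perl with XML::DOM; the same "let the library do the dirty work" point can be sketched with Python's bundled DOM implementation. The `<object>` export format below is hypothetical, made up to match the self-describing export described in the comment:

```python
# Parsing a self-describing object export with a stock DOM library -
# no hand-written parser required. The export format is illustrative.
from xml.dom.minidom import parseString

export = """<object class="NewsItem">
  <title>XHTML 1.0 released</title>
  <url>http://www.w3.org/TR/xhtml1/</url>
</object>"""

dom = parseString(export)
title = dom.getElementsByTagName("title")[0].firstChild.data
print(title)  # -> XHTML 1.0 released
```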

  • This is good. But I didn't see any notion of referential integrity in the spec.

    The biggest problem with the web is that we will still have to manage links manually. I hope someone takes the bull by the horns and figures out how to eliminate (or at least mitigate) the "404 Not Found" problem. Perhaps that now-open-source Udanax code could be mined to turn up some good algorithms?

    Another good idea (but really an unrelated spec) would be a file system redirector architecture that enabled all documents (per user preference, of course) to vend docs automagically via an HTTP or "blocks" server.

    .............. kris

  • Well.. since you seem to want compact code...

    while (<>) {
        my @arr = split /(?:[^\\]")?,(?:[^\\]")?/, $_;
    }

    finds comma-delimited fields that are surrounded by quotes, if present

    but if this were true CSV, the ?'s would disappear... the fields would have to be surrounded by quotes...




    ---
  • Think about it: I can write a page that links to one of your pages, and then you can just move it. There's nothing I can do to stop you.

    The only way that I can think of to enable distributed referential integrity is to have a totally closed system.

  • Announcing the Ecsponent Message DTD, which can be accessed at: http://www.ecsponent.com/opt/xml/message-xml1_0-strict.dtd [ecsponent.com].
    Hopefully it works as well -;@}=.



    Dean Swift [xirium.com]


    Ecsponent [ecsponent.com] - The future of ECommerce


  • by Zurk ( 37028 )
    Anyone know of any tools similar to SAX and DOM to interface to XHTML? This would be significantly better than XML since it's based on HTML 4.x as opposed to XML's loosely defined structure.
  • > Reread the post - I said that XHTML allows
    > you to extend the tagset without breaking
    > the standard - I never mentioned older browsers.

    Ah, my mistake, you were talking about the standard, I was talking about something useful (implementation). My mistake.
  • > Invented the goddamn web, fool.

    > He set up and runs the w3c in order to
    > maintain its sane development.

    Gosh, colour me impressed. Did he invent those cool 'Internet monkeys', too?

    Whether or not Tim Berners-Lee is a visionary and/or a genius is beside the point. XHTML isn't going to come and save us all from the horrors of HTML, simply because it CAN'T. People don't tend to upgrade their browsers very often, and many times, they CAN'T. I know people who run old Macs who can't even get the Netscape v3 INSTALL program to run, much less the browser itself, simply because of limited resources.

    The whole point of the web is to connect people, despite what platform they're on, and that includes OLDER machines, too. Yeah, it'd be great to be able to depend on everyone having 1024x768 with true colour, the latest javascript & JVM, all the fonts in the world, plus completely perfect implementations of JavaScript, Java, and CSS, but it ain't gonna happen. All the footstomping and crying about how big a visionary and genius Tim Berners-Lee is won't change that.
  • we shall see . . .

    check your logs and tell me how many browsers that don't support frames hit your sites . . .

  • What costs more?
    1. Bandwidth to send a few dozen quote marks
    2. Bandwidth to send a stupid background image
    3. Hours of developer time spent building cross-browser code

    XML will not just eat up bandwidth, it will enable new forms of data exchange and interaction, any of which will add far more dollar value than the scraps of bandwidth saved by tag stripping.

    HTML is yesterday's Web. XML is the future, and it demands some adjustment.

    -cwk.

  • Now I wish I hadn't used my last point on some troll elsewhere.

    But I wouldn't be so grim about it. Reality has always lagged standards, but it'll come around eventually. In another year or two we may actually be able to write off Netscape 4-series browsers...

    -cwk.

  • If end tags and quotes around attributes make any significant difference you have no content.

    Remove the pretty pictures if bandwidth is a problem.

    /mill
  • by Anonymous Coward on Wednesday January 26, 2000 @09:21PM (#1333255)
    I'm involved in the W3C working group, so maybe I can answer...

    XHTML, like all XML, is *required* to be "well-formed", which basically means matched tags, no missing quotes, etc. The XML 1.0 Recommendation *requires* implementations to stop normal processing of an XML document that isn't well-formed. In short, if it isn't well-formed, it isn't XML.

    Browsers will eventually get smart about this. Mozilla already is. :-) If a document declares itself to be HTML, normal (lenient) processing will take place. If a document declares itself to be XML, then strict processing will take place. When authors are unable to view malformed documents, that forces them to fix problems at the front end, which is A Good Thing.

    Anon on purpose. Moderate accordingly.

    Posted with M13
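The "stop normal processing" rule described above can be observed directly with any XML parser: it must halt at the first well-formedness error rather than guess, unlike a lenient HTML browser. A small sketch:

```python
# Draconian error handling: an XML parser refuses a malformed document
# outright instead of silently repairing it.
import xml.etree.ElementTree as ET

malformed = "<p>unclosed <b>tag</p>"   # the <b> element is never closed

try:
    ET.fromstring(malformed)
    halted = False
except ET.ParseError as err:
    halted = True
    print(err)   # reports the mismatched tag and its position

print(halted)  # -> True
```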


  • XHTML is specifically designed to work in existing browsers if you follow the considerations in Appendix C [w3.org] of the spec. You can code your pages in XHTML now and they will continue to work. Good luck trying to do a cross browser layout using only XHTML-strict and style sheets though.
  • Ummm, that's pretty much exactly what they did. HTML and XML both evolved from the same language, SGML.

    The information you are looking for is contained in the links. Read them carefully. If you don't understand, read it again.

    And I'd like to emphasise, because you seem to be a little clueless about this: this is not Microsoft hype. In fact, this has nothing to do with Microsoft. Frankly, you can bet they'll take this standard and warp it like they do everything else.


    If you can't figure out how to mail me, don't.
  • Unless you've got an ancient browser, like IE2 or NN2, any web site's XHTML will work as well in your "existing" browser as HTML would ... if it looks at it as HTML, which it probably will. The HTML spec always allowed lowercase tags, and almost all tags allow the matching end tags (which XHTML now requires).

    The "really ancient browsers" incompatibility relates to empty tags like "br", "hr", "link", and so on ... if the browser is that old, it probably doesn't understand the notational convention of "<br />" (space before the regular XML empty element terminator -- hope that shows up!).

    The reason for XHTML is so that tools have a more solid target than HTML can ever be. It's easy to get a good XML parser nowadays, and validators are getting more common (especially for Java programmers). That means that generating valid XHTML is something any tool can realistically do, so the bizarre hacks can start to fade away over time. Not quickly enough for me, probably. Browser bloat is with us for a long time.

    Best possible result: enough XHTML starts to show up that people start discarding all those really ancient browsers. NN3 is current enough, but designing a website to deal with older code is just plain awful.

  • Not too many, that's for sure. But even that's not the entire issue with frames. There's full and partial, good and bad, as with every feature.

    Let's, for example, look at Navigator 2 (yes, lots of Nav 2 people still around). If you make a site that uses frames, if you try to do something fancy, you might get bitten by the inability to make 0 pixel margins. Might get bitten pretty badly, and have lots of cute scrollbars show up. I see this all the time on the web by developers who don't know about that (or don't care, or don't have time, etc.)

    And then there are different WAYS of implementing this, since it's not standard (margins, that is): IE uses 'topmargin' & 'leftmargin'; Navigator, because Netscape is a stupid company, decided to implement the same feature (much later) as 'marginheight' and 'marginwidth'. Lovely. Okay, so we waste some bandwidth by using both. Not that big a deal, as both seem to work.

    Then we come upon BUGS. Okay, lovely, now we get to see that every version of Navigator from 2.x through 4.7 has a bug with fixed-width frames (specify size in pixels). Namely, it doesn't work. This can be very bad if you're depending on it to work. After MUCH harassment, I finally got them to fix it a few builds ago in Mozilla. Oy.

    So, where does this leave us with the previous discussion? Namely - sure, you can add features, and claim they're backwards compatible, but that doesn't mean they're going to work the way you want them to, or when you want them to, or with all the browsers you want them to. XHTML is likely a great idea, but claiming that it's going to be backwards compatible is, I think, a big mistake. Maybe we should paraphrase the Hitchhiker's Guide to the Galaxy: "Mostly Backwards Compatible". Which, of course, is worse than Compatible or not. This leads to a real hodge-podge of crap, and is among the many reasons why there's no program out there capable of creating HTML that's multibrowser (as in platform and generation) compatible.

    Good thing I prefer a text editor, anyway.
  • I never thought of that.

    Neither did I until it bit me.

    I thought it wasn't well-formed XML if there was a space before the slash.

    Nope. Take a look at rule 44 [w3.org] in the XML spec:

    [44] EmptyElemTag ::= '<' Name (S Attribute)* S? '/>'

    The place you can't have a space is after the slash and before the tagname in an ending tag: </ P>
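Rule [44] above is easy to check against a real parser: whitespace before the `/>` of an empty-element tag is legal, while whitespace between `</` and the name of an end tag is not. A quick sketch:

```python
# Rule [44] in practice: "<br />" is fine, "</ p>" is not.
import xml.etree.ElementTree as ET

ET.fromstring("<p>a break<br />here</p>")   # space before "/>" is legal

try:
    ET.fromstring("<p>text</ p>")           # space after "</" is not
    bad_end_tag_ok = True
except ET.ParseError:
    bad_end_tag_ok = False

print(bad_end_tag_ok)  # -> False
```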

  • I ran http://slashdot.org through the validator (http://validator.w3.org), and it puked.

    The sad part of this is that this is common among almost every major website. Nobody follows the standards.

    I hate the web :-)
  • ALL braces should start on new lines. If you look at handwriting, you'll notice that consistency is what makes something look 'nice' and readable. And that recommendation is NOT consistent. It's horrific.

    Consistency! What consistency is there to find between a function definition and a block of code or a structure definition? 8-character tabs are a UNIVERSAL constant; try breaking this, and soon enough your code will look crooked. You should probably not use tabs for indentation anyway. Real-life editors know the difference between tab size and indentation steps.
  • Absolutely right! TAB SIZE IS 8 FOREVER. You can indent to anything you want, but NEVER EVER CHANGE TAB SIZE.
  • Directories do this all day long. It's just a protocol implementation.

    The question is, when is this mechanism going to emerge, and how do you get people to use it?

  • Nope. Well, partly. I see karma as a game. By no means do I take it seriously, as you seem to.

    But that aside, I like making contributions. Sue me.


    If you can't figure out how to mail me, don't.
  • A csv parser?
    #!/usr/local/bin/perl
    my $delim = ',';
    my $size = 3;
    while (<>) {
        chomp;
        my @a = split($delim, $_, $size);
        print $a[2], $a[0], $a[1];
    }



    ---
  • That's exactly what I wanted to know. Thanks.


    If you can't figure out how to mail me, don't.
  • Someone has answered your question above.


    If you can't figure out how to mail me, don't.
  • I understand your concern - the management of links, and possibly the inclusion of bidirectional links, has been on the minds of many people.

    As part of the "suite" of XML standards, XLink is a standard for the management and declaration of more advanced linking features.

    I'm not sure if you ever took a look at Hyper-G, an experimental hypertext system from a few years back, but it had an excellent link management system. Dead links didn't exist by design, and there was an excellent link navigator that showed you the structure of links, not just the page text.

  • Spoken like someone who, I can only hope, has never done web development for a living.

    Anyone who tries to use these things that the W3C says won't break on older browsers is in for a rude shock. I'm sure XHTML will be the same as CSS and JavaScript were.

    1) There will be bugs in implementation. Things will break, even if (or especially if, in the case of MS, most likely) you adhere to the W3C standard.

    2) Things won't be fully implemented any time soon (yes, MS, I mean you. You, too, Netscape/Mozilla. You don't think Mozilla is going to be able to implement this right away, do you? Get a clue.)

    3) Trying to use these new standards for anything useful means you're likely going to try to depend on them, which means you're going to be doing things that can't BE done in the older standards (else why would you be using the new one?). Once you do this, you lose functionality, or more likely 'break' on the older browsers.

    Unless you're willing to make your server detect what browser is hitting your page, and spit out a version specifically for that type of browser, thus defeating this whole nonsense of 'it won't break existing browsers'.

    Don't be so naive. The W3C isn't made up of people who have had to make websites for a living under normal conditions, so they've little idea of what's going to work and what won't, so they've no hope in hell of ever coming up with any standard that's going to be completely backwards compatible - BECAUSE IT ISN'T POSSIBLE. They think anytime they come up with a new recommendation, it'll be implemented immediately, and bug-free, too, by gosh, and never realize how bad things can get when things aren't implemented completely, and/or are implemented badly (say hello to CSS!).

    IMO, anyway. :)
  • Look around your desktop. That is, your real one, not the one on your computer. If it's a total mess but you can find everything (like any self-respecting user), then you're good to go. Same goes for HTML. If _you_ can understand it, then why should you care that other people need to do the same?
  • Anyone who tries to use these things that the W3C says won't break on older browsers is in for a rude shock.

    Reread the post - I said that XHTML allows you to extend the tagset without breaking the standard - I never mentioned older browsers.

    Frankly, if you've read any of the abundant documentation available you'd know what I was talking about.

    Once again, XHTML allows you to extend the tagset while still staying within the XHTML standard.

  • XML is essentially a simplified version of SGML. One of the simplifications is that every element which is not empty must have an explicit end tag. In SGML one could use the - and O notation to indicate whether the start and end tags, respectively, are required or optional. Thus, for example, <!ELEMENT P - O (%inline;)*> is the SGML definition of the paragraph element in HTML 4.0, with the O specifying that the end tag is optional. I don't think the move to XHTML 1.0 will have any significant effect on existing browsers, as it is entirely backward compatible with HTML 4.0.
  • That's partially right, namely about that future being envisioned: cut the bloat associated with needing to handle any old garbage that shows up at the client.

    But it's also wrong. Extensibility is the "X" of XML; XHTML added nothing to XML's extensibility. Except to standardize one more 'vocabulary' of elements and attributes. That's useful; everyone knows the HTML vocabulary.

    The idea is pretty much like this. XHTML 1.0 has defined the vocabulary of HTML (tags and attributes), and its namespace. An upcoming version (XHTML 1.1 is its current codename :-) will modularize that, so you can have a "text" module or a "table" module or a "list" module.

    So that when you need to define a custom XML document type to fit into some custom application, with PDAs and cell phones being the classic examples, you can pick and choose: Text and lists may be plenty, you don't need bibliographic citations or definitions. BUT you do need your own particular biz-to-biz vocabulary addition; maybe you're providing catalog entries, and the descriptions are simple text but there's all sorts of ways to define fields to describe pricing options, ordering, stocking, etc.

    Or another way to look at it: you're going to be able to throw away HTML tags you don't need, and use only the ones you care about when you create new kinds of documents.

    That's one hundred and eighty degrees away from the "extend HTML" model. It's a new model for how information will show up, as part of the "semantic web".

  • This thread is no longer appropriate for /., so I've sent you an email.


    If you can't figure out how to mail me, don't.
  • After reading the Linux kernel's recommended coding style I'm absolutely horrified.

    Starting braces at the end of lines... except for function declarations? Yes, of course, that's because functions can't be embedded. Talk about MUNTED.
    And 8-character tabs? Geee... he talks about saving lines by not having braces take their own lines, but then talks about how saving horizontal space is irrelevant.

    ALL braces should start on new lines. If you look at handwriting, you'll notice that consistency is what makes something look 'nice' and readable. And that recommendation is NOT consistent. It's horrific.
  • Standards lag behind the technology

    They used to, back in the days of HTML 2.0->4.0. I think it's a bit different these days, since the CSS/DOM people have built up quite a big bunch of standards which are way beyond what most browsers support, and the browser writers are playing catch-up. Of course Microsoft are adding all sorts of weird extended style-sheet stuff, but I've never seen any of it actually used, probably because no-one really understands it.

    people are way too lazy to actually follow standards.

    Sometimes. I think in more cases, they just don't really know standards are even there to be followed. I think most content on the web is hacked up by people who've learnt HTML from reading other people's HTML, or from a woefully inaccurate "HTML for Tossers" book. Or they're using FrontPage, God help 'em.

    Ten years from now, there will still be messy "optimized for Netscape"

    You're right, of course. :-(

    browser writers will still fudge the standard, and people will still check their HTML on the only browser they have before putting it on the web.

    But ten years from now, we'll fantasise that politicians were honest, prices reasonable, Netscape implemented standards and JavaScript ever worked. :-)


    --
    This comment was brought to you by And Clover.
  • Why is everyone trying to use XML for every possible application while XML itself is not very well "standardizable"? It has no predefined way to attach any formal (or even not so formal) description of the semantics of the data to a DTD -- I would understand if I were able to attach pieces of, say, portable C code into "the definition", and say that everyone who wants to support my format can compile the code extracted from my "definition" using some standard parser/converter, link it with a standard parser library, feed the same DTD to that parser, and the result will be a "skeleton" of an input/output/display/... procedure compliant with my standard. But right now we have only a trendy-sounding TLA for a simple "open tag -- recursion -- close tag" format that isn't much better than anything else, but differs from any other format in a rather spectacular way -- no one so far has produced a completely compliant and usable parser for it in a compiled language (no, gnome-xml isn't compliant -- unicode conversion from charsets other than those hardcoded in the source shows its ugly head).

    I understand the need for standardization. I understand that comma-separated values or a plain key-value list poorly represent the complex nature of the data. I understand that the HTML standard committees royally screwed up under the pressure of companies. I understand that in general text is cheap. I understand that XML at least provides some means to show the structure and attributes of the data (but so did RFC-822 + MIME more than ten years ago -- just with a bit more waste of space). But sorry, this feeble attempt at meta-standardization just doesn't _do_ enough to justify itself now. The semantics of the data still have to be defined in English, and the quality of the definitions that I see declines rapidly. It helps with displaying that data, but displaying is a microscopic, almost unnoticeable piece of any serious data processing. Semantics still has to be handled by "manually" written, rewritten, everywhere-ported and debugged programs that actually are supposed to know what to do with the data. Programmers still can't derive any useful information about the nature of the data from a DTD, and have to rely on vague texts and their interpretations of them, so the effort XML saves (writing a parser for an arbitrary format) is a big fat zero compared to the real work a programmer still has to do to make his program work. No way to do a formal proof of anything except that the data is formatted as it's supposed to be. No way to derive a testing procedure for an implementation of the processing program. Nothing that actually helps programmers to write a useful program and make sure that it works.

    Parsers are written in languages that are nice for demos and small web sites, but don't scale to anything large (what is it, a conspiracy of hardware manufacturers?). I can churn out XML-like meta-standards at the rate of ten per week, but since all of them will share the same flaws, why would anyone care? Why do we see a lot of "uses" for XML, but no real progress in improving it in the most natural way -- standardizing the link between format and semantics? It's possible to keep XML as it is -- it's good enough to define some "canonical" form in which the data is (or can be) kept -- but without a useful way to handle semantics it's dead.

    I am afraid that this situation is created on purpose -- there are already some formats of data that have semantics attached. The problem is, they are proprietary, tied to platforms, languages and architectures. They have semantics; however, the formats they use for data exchange are unnecessarily cryptic or hard to serialize to a stream of bytes. By keeping the proprietary "guts" with algorithms, object models and transaction-level protocols, and adding "open" formatting of the data, vendors get the best of both worlds -- no one but them can make any sense of the data (both the implementation of the data handling and the object-handling engine itself are closed-source -- say, COM), but they look "open" and "nice".

  • For instance, say you have a string with commas in it. You could escape them, but the standard way is to quote the string. But then we have to deal with quotes in the string. That's usually handled by escaping with double quotes, so you get something like

    0,5,"Luigi ""scarface"" McDowd, Li McFadden",12,12,true

    But now you also have the potential for error conditions such as meeting a quote in an unquoted string ( ,abc"def, which you can treat as not an error if you wish) or a quote in a string after an opening quote where the quote is not followed by a comma or a second quote (,"abc"def",) or strings with unclosed quotes (,"abc,asdasd,123,) which generally break things pretty badly.

    Your snippet may work for quick and dirty hacks where you know the file format will be reasonably behaved, but it is not suitable for production code (unless you're producing for Microsoft).

    Try looking at the output from Excel sometime.

    That's not to say that XML is necessarily better, as I've never really used it; just that your example doesn't hold water.

    Rich
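The quoting rules described above (doubled quotes, embedded commas, quoted fields) are exactly what a real CSV parser has to handle. Python's csv module applies them, shown here on the example line from the comment:

```python
# Parsing the tricky CSV line from above with a real CSV parser:
# doubled quotes become literal quotes, and the quoted comma does not split.
import csv, io

line = '0,5,"Luigi ""scarface"" McDowd, Li McFadden",12,12,true'
fields = next(csv.reader(io.StringIO(line)))

print(fields[2])    # -> Luigi "scarface" McDowd, Li McFadden
print(len(fields))  # -> 6
```

A hand-rolled split on commas would produce seven fields and mangle the quotes.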

  • Who cares? They wouldn't bother with all the first post crap if your type didn't make such a big deal out of it.
  • It's braces actually ;).

    I used to like putting braces at the end of lines - Java convinced me of that - for a while. I went back to C++ and now I always have braces on their own lines. IMHO it's tidier... for some reason code looks all sloped with braces at the end.
    My reasoning is that braces have nothing to do with the code on the line; a brace simply should be used to identify a block, and as such it should start and end on the same indentation level.

    And yes, I probably will be flamed to hell ;)

    I normally use what VC++ gives me for indentation too ;), it's prolly around 4.
  • Looking over the spec, I see that the W3C will begin enforcing things that most browsers have allowed, such as <p> without a closing tag.

    Ugh! While I applaud efforts to bring more standardization to the Web I dislike the idea of forcing things like closing tags and quotation marks where they aren't needed.

    Granted there are people out there who live and die by following standards, but closing every tag is something I don't want to deal with. When you're running a large site that receives millions of accesses daily shaving off a few extra bytes from pages can make massive bandwidth savings.

    Ya, ya..DSL/Cable modems are more common and they enable great things, but anything that forces us to use more bandwidth than we have to in the name of "standards" seems silly to me, especially when most of America is still on 56k modems (not to mention the rest of the world).

    I expect some Holy Hand Grenades to come my way after this one, but I've seen what non-standard trimmed-up code can do, and it saves time, money, and most importantly mucho bandwidth. Go look at the source at Yahoo [yahoo.com] for good examples of stripping tags and quotes.

    case_igl

  • by Bob Ince ( 79199 ) <andNO@SPAMdoxdesk.com> on Thursday January 27, 2000 @12:16AM (#1333297) Homepage
    Incidentally, I don't see any support for such tricks as using tables to lay out a page

    But I don't see them specifically ruled out either, any more than in HTML 4.01. Sure, W3C don't want people using them, but there's nothing much they can do about that.

    Will this force people to recode their layouts with CSS (which they probably should do anyway)

    Yeah, I know it's very worthy and everything, but have you ever tried converting a table layout to CSS? It ain't fun.

    First, of course, browser support is terrible; Netscape tends to break if you have the temerity to put a positioned element inside another positioned element, and it messes the whole page up if you try to mix CSS-P with tables to achieve some kind of graceful degradation on

    But that's not what's wrong with the standard, obv. What's wrong is the total lack of flexibility in positioning. Normally with positioning you want to say things like "this element is to go 3 ems to the left of that element", or "this element should line up horizontally with that element and vertically with the other element". But CSS gives you only two choices: specify an absolute page position, or move the element a bit in some direction; you can't mix the two horizontally and vertically, and the latter option is usually useless anyway since it leaves an element-shaped hole in the parent.

    This could nearly be half-workable, since you can achieve more complex effects by putting elements inside other elements. But Netscape 4 breaks so very, very badly if you try that the page often becomes completely unreadable.

    So what you end up doing is either making every element absolutely-positioned to the page pixel, which is okay for the kind of fixed-layout fixed-width page which idiots write, but otherwise useless, or you end up writing a complete page-layout engine in several KB of JavaScript at the top of the page, slowing everyone down. And of course writing layout JavaScript that works with IE4+, Netscape 4 and the W3C DOM is a Sisyphean task. Oh, and of course people with JavaScript turned off are screwed.

    To summarise: CSS is not up to producing interesting, dynamic-page-size layouts, and browser-supported CSS is not up to anything at all.

    To summarise the summary: Style. Is a problem.

    To summarise the summary of the summary: Aaaarrrrrghhh.


    --
    This comment was brought to you by And Clover.
  • by noc ( 97855 )
    (I wish I could write the "is a subset of" character; let's pretend you can see it: XHTML ⊂ XML.) It's an XML application. It's based on XML. Any tool you can use to parse XML will parse XHTML.
  • Actually, I much prefer brackets at the end of a line (most java code is like that, IIRC). Just because you don't like something doesn't make it inherently wrong.

    I do think the 8 char tab space is kind of weird though, I personally only go with two, or whatever VC++ gives me :)

    The sad thing is, you'll probably be flamed to hell and back, or moderated into oblivion for saying anything 'anti Linux' around here...

    Amber Yuan (--ell7)
  • 'Trolling guilds'?!?!, 'The Order Of The Thousand Inch Fan'!?!??!?!

    What are you talking about? And what do I have to do with it?

    Are you trying to convince people that there's some kind of grand conspiracy amongst the trollers and that I'm someone involved in it?

    Whatever, I'm obviously up way too late :)

    Amber Yuan (--ell7)
  • by noc ( 97855 )
    [wow, that's not how it looked in the preview screen! Lemme try again:]

    XHTML is a member of the set XML. It's an XML application; it's an XML-based HTML.

    What I'm trying to say is that any tool you can use to parse XML will parse XHTML.

  • XHTML isn't as much a protocol as a language, though that may not have been explained to you.

    Probably the more interesting thing to come out of this discussion is this: you were led to believe that Microsoft was coming out with XML and XHTML. And this is exactly what Microsoft likes to happen. Every time they can make someone believe this, that's one more shoo-in customer.

    It's sad, really. And it's not your fault.


    If you can't figure out how to mail me, don't.
  • One would imagine these guys could author a decent page, but no...

    The text is unreadable due to the background graphic/colour. However, this is not always the case! The page seems to alternate between having a blue 'W3C Recommendation' background and a sane white background. Hit reload on it and see for yourself... Chaotic! (Or is this Netscape gone crazy?)

    Forget XML, XHTML, we need miniHTML! Just the minimal subset of P, UL, simple TABLE, ...
  • The primary goal of XHTML is to allow you to extend the core set of tags with your own tag sets so that you may add markup functionality without breaking the standard (as has been done in the past).

    No! This is a total misunderstanding. XHTML 1.0 is simply a recasting of HTML 4.01 into XML compliant syntax. You cannot extend XHTML as such by adding your own tags. You can produce hybrid documents by combining XHTML with other XML dialects, but the result would not be XHTML. You could even combine XHTML with XML dialects you create yourself. But you would be very foolish to do so.

    XML dialects are only useful if they serve a significant community who have tools which understand the dialect and can do useful things with them. If you just make it up yourself as you go along, then the only thing you can really do with it is use XSL to translate it back into standard XHTML, so you've gained nothing.

  • by Matts ( 1628 )
    Rather than moderate you down (just overrated IMHO - I don't want to take away your karma!) I thought I'd respond to you.

    I think XML isn't what you're looking for.

    XML is pure and simple an interchange format. It is designed for interoperability. I can be certain that an XML file that complies to my DTD does exactly what I say it does. I can be sure that I support all the character sets necessary. I can be sure that someone can author XML files in Windows, Unix or VMS, and still have them work. I can be sure that I can send someone my XML file and have them be able to read it and construe some sort of comprehension of the format.

    XML is not the be-all-end-all file format. It's not small. It's not pretty. It's not fast. But it is a standard that provides some nice features for developers. The key feature is standard tools. It wouldn't have mattered if the standard was some binary format - so long as all developers had access to these first class free tools that all work alike across platforms. I think that's still an achievement.

    I personally think you're ranting a bit, and haven't experienced how easy it is to develop cross-platform tools using XML for data interchange. Try it - you might like it. And if you don't, switch back to CORBA with all the nasties in there, or COM or some other supposedly "cross platform" method of data interchange. And write your own parsers for your own mini-format. There's More Than One Way To Do It (tm).
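
    To illustrate the interchange point: any conforming XML parser, on any platform, reads the same bytes into the same tree, character sets included. A minimal Python sketch (the document and its element names are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A made-up interchange document; element names are illustrative only.
doc = """<?xml version="1.0" encoding="UTF-8"?>
<order id="1042">
  <customer>Ren&#233; D&#252;rr</customer>
  <item sku="X-11" qty="3"/>
</order>"""

# A standard parser handles the syntax, entities, and encoding;
# the application only deals with the resulting tree.
root = ET.fromstring(doc)
print(root.tag, root.get("id"))   # → order 1042
print(root.findtext("customer"))  # → René Dürr
```

    The same bytes produce the same tree whether they were authored on Windows, Unix, or VMS, which is exactly the interoperability being claimed.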


  • by dingbat_hp ( 98241 ) on Thursday January 27, 2000 @01:54AM (#1333314) Homepage

    If XML is a failure, then I hope we should all fail so spectacularly ! I'll be writing the XML handlers that send out welfare cheques to you, and all the other unemployed CSV import coders.

    The downside and "failure" of XML is that it's still immature as a wetware discipline (not as a protocol). XML and especially schema design is regarded in the same way as database design was 5-6 years ago. For years before RDBMS design had been the sole preserve of gurus like Ted Codd (i.e. the SGML era), then along come M$oft with Access and suddenly everyone and their dog thinks they're a real database designer. Cue a whole pile of badly normalised (or just downright ugly) data models, or in today's situation a lot of nasty slapped-together XML structures. It will be a year or so before people realise that XML schema design is a discipline in just the same way as good RDBMS design is.

    TipOfTheDay: Use tags like <br /> instead of <br/> when writing "tight" HTML, otherwise older browsers choke on it.

  • Muhaha! Now you all must learn what I've known all along.. . the table, and font tags have been deprecated.. . you all need to learn Style Sheets to format your page :P
  • You meant to say docbook.org [docbook.org], not docbook.com. Also, DocBook is an SGML DTD, not an XML DTD (I guess you knew this already). Interestingly, Norman Walsh has written the DocBk XML DTD [nwalsh.com], an XML DTD based on DocBook. DocBook 5.0 will be XML compatible. Btw, if you want to use XML extensively, check out task-xml and task-xml-dev in Debian potato. Ganesan
  • The web will never be fully standardized. Standards lag behind the technology, and people are way too lazy to actually follow standards.

    Ten years from now, there will still be messy "optimized for Netscape" (whether or not Netscape is still even used) HTML on the web, browser writers will still fudge the standard, and people will still check their HTML on the only browser they have before putting it on the web.

    Who was it who said "The great thing about standards is that there are so many to choose from." ?
  • did anyone else notice that the Recommendation itself was authored in XHTML 1.0?
  • But it's also wrong. Extensibility is the "X" of XML; XHTML added nothing to XML's extensibility.

    Nor did I claim it did. I simply stated that XML is the mechanism by which the extensions are provided... and I count contractions as well as expansions as "extensibility".

  • First step to knowledge is realising when you don't know something.

    A year ago, I realised that I was missing the XML boat and played some rapid catch-up. Now I'm a real evangelist for it. XML is the most exciting new tech I've seen since reliable IP stacks on every desktop. Until Summer I had to push clients into using XML, Autumn I was first recruited because of my XML knowledge, and this year the phone has gone into meltdown.

    Should you rush out and dive into XHTML ? Not IMHO. Start out by getting a good grasp of XML in isolation. I don't know what you do all day, but many big markets will always be pure XML without any XHTML involvement. WAP/WML might be relevant to you too, if you're into palmtops or wireless.

    XHTML is less revolutionary than XML. XML is a way of doing new and exciting stuff that just wasn't practical before, XHTML doesn't really add much to that, it just lets developers roll it out without breaking every existing client. It's good stuff and we should adopt it, but it isn't going to invent new business models the way that XML has (how do I syndicate content from everything in the known universe without something universal like XML ?).

    I think this is another hype from Microsoft

    No, definitely not Microsoft's hype. Microsoft are keen on it, for sure, but they're riding the bandwagon, not generating the hype.

    Yes, Microsoft have broken things. Fortunately XML was up and running before Redmond woke up, so they didn't get to break it. OTOH, XSL has been thoroughly trashed by them and XML Schemas are under attack (it's neck & neck between MS & W3C). I haven't looked at this week's XPath goodies from Redmond (new MSXML download yesterday ! Go get it). Much of the M$oft steamroller effect is because they're actually implementing new and useful stuff like parameterising stylesheets (Caveat - I haven't yet seen what they've done, but I know I want it) and they're still the only people with a usable client-side XSL on desktop browsers. I hate IE5, but it's just so damn useful that I can't avoid it.

  • "XHTML has nothing to do with Microsoft (yet, anyway)."

    Two M$ staffers are listed as being members of the XHTML working group dude.

    But at least they are in the minority. IBM had more people listed than M$.

    ti_dave
  • Invented the goddamn web, fool.

    He set up and runs the w3c in order to maintain its sane development.

    The man is a visionary and a genius.

  • What about using the Xtensibility of this new (XML 1.0) standard to introduce some tags that would allow for client side scripting.
    Sorta like Javascript only with a syntax that's more consistent with the rest of what's going on.
    Actually I'd be happy with just a few basic data structures.
    For Instance:
    a <tree> tag that would enclose <node> tags ; or an <array> tag.
    Seems like I remember Tim Sweeney (of Unreal) talking about parametric data types and such being mentioned on these pages; this would be kind of the same idea.
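
    Nothing stops you from defining such a vocabulary today and reading it with a generic XML parser. A quick sketch (the <tree>/<node> tags here are the poster's hypothetical idea, not part of any standard):

```python
import xml.etree.ElementTree as ET

# Entirely hypothetical markup: <tree>/<node> are invented tags,
# not defined by XHTML or any W3C specification.
markup = """
<tree name="menu">
  <node label="File">
    <node label="Open"/>
    <node label="Save"/>
  </node>
</tree>"""

tree = ET.fromstring(markup)
# Walk every <node> in document order and collect its label.
labels = [n.get("label") for n in tree.iter("node")]
print(labels)  # → ['File', 'Open', 'Save']
```

    Getting a browser to *do* anything with such tags is of course the hard part; parsing them is the easy part.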
  • I personally think you're ranting a bit, and haven't experienced how easy it is to develop cross-platform tools using XML for data interchange. Try it - you might like it. And if you don't, switch back to CORBA with all the nasties in there, or COM or some other supposedly "cross platform" method of data interchange. And write your own parsers for your own mini-format. There's More Than One Way To Do It (tm).

    You completely missed the point. XML is just fine as interchange format -- as I have said, MIME is more wasteful, comma-separated lists are too simple, and key-value pairs are both. The problem is that it has "formal" DTD and is being used for standardization and declaration of formats for applications -- something where semantics (substance) must be primary and actual format (form) serves it. It's clearly unsuitable for this goal, and allows all kinds of abuse.

    Parsers are simple, no one even writes them by hand anymore for anything more complex than comma-separated list. Semantics is complex, and every protocol has its own one.

  • A csv parser?

    Okay, that's a good quick hack to parse: foo,bar,baz

    How about: "John Malkovich", "John \"Blah\" Doe", "Steven Wright", Cher, "Larry Wall"?

    Yeah, I know, you needed a quick hack to parse #1, but eventually someone will export an Excel file to what *it* calls CSV and get something like #2. Then, that little hack gets a lot bigger.

    #!/usr/bin/perl
    use XML::Parser;
    $parser = new XML::Parser(Style => 'Tree');
    $tree = $parser->parsefile('coolstuff.xml');

    And you get a pretty simple tree data structure of your XML, ready for quick hacks to walk through it and pluck out your data. It's not that hard.

  • Thank you for proving his point.

    What if the fields require a , ?

    Sure, just extend the syntax a bit, but soon you have something that cannot be parsed quite so simply. Also, what about more structured data?


  • by Anonymous Coward
    XML is not an interchange format. It is a meta language for describing interchange formats.
  • OK

    XHTML is basically just HTML 4.0 rewritten so that it complies with the XML standard. Luckily, this doesn't break older browsers at all. Actually, not really luck; just good design.

    The reason this is done is so that you can use a generic XML parser to parse (X)HTML files as well, which is great if you need to manipulate/create HTML files dynamically.

    For more information check out http://www.w3.org

    Cheers,

    Benno
  • Then you've REALLY missed the boat. XML is EASY. Screw comma delimited, I've actually found it easier and more maintainably elegant to write quick Perl scripts which use the XML::DOM, than to hack out a CSV parser. Hell, I even have Oracle DB servers spewing XML streams at me to handle.

    Why on earth would anyone write their own CSV parser? There are plenty of CSV parsers available to use. Just as is the case with XML parsers...

    Otherwise I totally agree with you -- of course one should try to use XML rather than CSV.

    W S B

    Fear the dingo and its mighty, poisonous fangs...
  • tidy -asxml yourfile.html > yournewfile.xhtml

    Get tidy here [w3.org].

  • Broken links are an inescapable part of the idea that the web allows each site to be accessed or linked from billions of other pages. Imagine if Yahoo had to keep a live connection for every incoming link and a java-style "listeners" array for every page so as to back-propagate changes. They would have to dedicate whole servers to it.
  • Looking over the spec, I see that the W3C will begin enforcing things that most browsers have allowed, such as <p> without a closing tag. Any idea how browsers such as mozilla or whatever will deal with this restriction?

    Are we going to be getting errors or unrenderable pages due to bad HTML? Frankly, I hope we do :-) It'd serve them right.

    Just an observation/question.


    If you can't figure out how to mail me, don't.
  • hehe, that was supposed to be a <p>. Sorry about that.


    If you can't figure out how to mail me, don't.
  • by rambone ( 135825 ) on Wednesday January 26, 2000 @08:36PM (#1333343)
    The primary goal of XHTML is to allow you to extend the core set of tags with your own tag sets so that you may add markup functionality without breaking the standard (as has been done in the past). The "X" comes from the fact that extensions are XML-compliant markup structures.

    While it might not be realistic, the W3 likes to envision a future where clients become much more lightweight and flexible by putting all parsing and presentation into standard XML parsers and stylesheet tools. Currently a significant amount of browser bloat is due to the fact that the browsers pretty much render anything you throw at them. Hopefully this will change lest our HTML parsers grow to 20MB.

  • What, pray tell, does XHTML 1.0 do? Is it extensions onto HTML?

    It is a method for making compliant extensions to traditional HTML.

    XHTML presumes that HTML will always need extensions, most of which will focus on small problem domains, so it no longer makes sense to grow HTML itself into a larger monolithic standard.

  • Here's how you use XHTML in your pages:

    Reach around the back.

    Pull out the plug.

    Place your computer in its original packaging. You *did* keep your original packaging, right?

    Return your computer to where you bought it.

    If they ask why, tell them you're too stupid to own a computer. (The more astute among you will recognize this punchline from an allegedly true WordPerfect Tech Support call and its support tech's response.)

    And to quote Craig McPherson: thank you.

    And if you were actually serious about your question, it will require more explanation than I'm ready to give in this comment. Suffice it to say that when it is finally adopted there will probably be some point-and-click MS program out there. You don't need to worry about that. Don't think. Let MS and its cruddy software do the thinking for you :-) That IS how you trained for your MCSE, right?

    I know this looks like a flame. It isn't. I'm assuming the guy above was a troll, so I am trying to be funny. I don't think it's working...


    If you can't figure out how to mail me, don't.
  • If someone could enlighten me as to the advantages of this protocol it would be helpful

    The general goal is to allow people with special domain needs to extend HTML for their own purposes without breaking the standard.

    How this will translate into rendering is another issue, although the folks at mozquito [mozquito.org] are building work-around tools to allow XHTML to be used in current browsers.

  • Who was it who said "The great thing about standards is that there are so many to choose from." ?

    Andrew Tanenbaum... Although he probably stole it from someone else.

  • Um, why wouldn't you be able to use SAX or DOM to parse XHTML?
  • Standards lag behind the technology

    They used to, back in the days of HTML 2.0->4.0. I think it's a bit different these days, since the CSS/DOM people have built quite a big bunch of standards which are way beyond what most browsers support, and the browser writers are playing catch-up. Of course Microsoft are adding all sorts of weird extended style-sheet stuff, but I've never seen any of it actually used, probably because no-one really understands it.

    They still do. At least, my company still does, because a significant enough percentage of our visitors use NN 3.0. Even when I have the chance to use HTML 4, I find myself restricted because I must support current versions of Communicator and IE on Windows and the Mac. My company want the content to look a certain way, and I try my best to get consistent results across all of the browsers that they choose to support. I know that this is way outside of what HTML was intended to do, but try explaining that to management. Plus, I think about things like PDAs, phone browsers, and web page readers for the blind. I try hard not to cut these folks off from the content, especially for the sake of aesthetics.

    Sometimes. I think in more cases, they just don't really know standards are even there to be followed. I think most content on the web is hacked up by people who've learnt HTML from reading other people's HTML, or from a woefully inaccurate "HTML for Tossers" book. Or they're using FrontPage, God help 'em.

    That's a big part of it. But I think a lot of them wouldn't care even if they did know. It's enough for them that it works on their web browser. Why make more work for yourself by trying it under other circumstances? I guess most of the folks making web pages just don't think like programmers.

    -Jennifer
  • I was at XML '99 and asked several people what the point of XHTML was and never got a satisfactory answer. Maybe the /. audience will help me out some.

    What's the point of XHTML?
    • It's not backwards compatible with HTML
    • Old (existing) browsers can't parse it properly
    • Compared to what general XML can offer, it's a pretty lame DTD


    Is it meant as a stop-gap between full XML support on the web? Or is it meant to leverage the existing HTML code base? Or is it meant to be a simpler migration to XML for people who know HTML?

    Or let's put it this way: If a new browser comes out with full support for XML plus a compatibility mode for regular (SGML-based) HTML, what's the benefit to having XHTML?

    I didn't want to say this to anyone's face at XML '99, but I don't get why people are spending so much time and energy on XHTML. Who will use it?
  • did anyone else notice that the Recommendation itself was authored in XHTML 1.0?

    Yes, my browser (iCab) rendered all of the text (I think). iCab also has the ability to generate an error log. It was really, really long. But, then again, I've never found a page in the wild that didn't have at least one error on it.

    -Jennifer

  • 2-4 seems like a nice tab size for me; I change it on occasion, depending on language, really.

    In shell scripts, you typically end up with tons of indentation (not to mention Lisp...)

    C, on the other hand, is recommended to have no more than three levels of indentation; divide it up into sub-procedures as much as you can.

    And GNU code... yes, I agree with those who say it should be indented six feet below ;)

  • The documentation I'm writing (other than POD in the Perl modules) is being done with the DocBook DTD, which lets me write in one common format and publish in HTML, Word doc format, etc... all from one document.



    I've spent a lot of time researching XML/SGML solutions for my company, and the picture painted by vendors (despite the hype) is that XML has not yet established itself in the market. It hasn't yet unseated SGML and, compared to SGML, is not yet ready for prime time.

    The most widely used applications for XML I've seen are the examples you've specified (web clipping and inter-application file formats). *Yawn*. The previous poster is right -- a CSV (colon-separated-value ;-) file is as good a technology as XML.

    BTW, what tool are you generating MS Word .doc files with? I've seen db2rtf publishers before but never one that generated db2doc.

  • ...what about the code trimmers of the world? I took a look at the XHTML 1 spec and found a lot of rather useless new syntax rules; for instance, you must use quotes around _everything_. This includes hex numbers for bgcoloring, table width, and more. W3 also said that coders must now close all of their <p> tags, which proves completely useless. Plus, now W3 is weaning the hard-coders of the world off of <br> and adding in <br /> since it's an empty tag.

    Maybe I'm just complaining, but when I take a look at the results of my hard-coded page to see how big it is and how long people have to wait, the little things _do_ count. I do agree that this is some extremely trivial stuff to be yacking on about, but I feel that there are a few old-schoolers out there that'll agree with me. And if there aren't, well, it doesn't matter.

  • A week and a half ago, I began converting my site [moby.org] from HTML 4.0 [w3.org] to XHTML 1.0 [w3.org]. Thanks to the W3C's validator [w3.org], it was pretty easy to do.

    Aside from changing the DOCTYPE and adding an XML declaration, all I had to do was make all elements and attributes lowercase, quote all attributes, and close all standalone tags (<br/>, <hr/>, <img src="tweet.jpg" ... />, etc.). It only took a little further tweaking to make it display nicely in Netscape 4.7, IE 5, and even lynx!

    Unfortunately, it seems that XHTML chokes Mac IE 4.5 (and presumably surrounding versions). That browser just displays the page source without rendering it. Since I want my site to be viewable by anybody on any platform (and IE5 is not yet out for Mac), I had to go back to HTML 4.0. Argh!

    I really like XHTML so far, though, and will probably convert to it as soon as Mac IE supports it (4.5 users: tough luck). If you want to see one of my preliminary XHTML endeavors, go to moby.org's mailing list archives page [moby.org]. Try it with any browser. AFAIK, it works fine with almost all of them.
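
    The conversion rules listed above (lowercase names, quoted attributes, closed empty tags) are exactly what makes a page digestible by a strict XML parser. A quick way to see the difference, with markup invented for illustration:

```python
import xml.etree.ElementTree as ET

# After conversion: lowercase tags, quoted attributes, closed empty elements.
xhtml = '<p class="intro">Hello<br />world<img src="tweet.jpg" alt="" /></p>'

# Before conversion: legacy HTML that any forgiving browser accepts.
legacy = '<p class=intro>Hello<br>world'

parsed = ET.fromstring(xhtml)   # parses cleanly
try:
    ET.fromstring(legacy)       # a strict XML parser chokes on this
    legacy_ok = True
except ET.ParseError:
    legacy_ok = False
print(parsed.tag, legacy_ok)  # → p False
```

    Which is also a plausible explanation for Mac IE 4.5's behaviour: a browser that doesn't recognise the XHTML document type may fall back to treating it as something other than HTML.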

  • Try working with the EDI X12 format for a few months. XML is a dream compared to EDI. As soon as the major players in all of the industries that use EDI can come up with an industry-wide XML standard (i.e. after we go ice skating in hell), XML will really be kicking ass. This may not be anything that you can use at your job now - comma-delimited text files are still pretty useful for certain tasks - but XML is worth looking at, especially in this age of buzzwords over substance.
  • XHTML is by and large compatible with existing browsers, if some simple guidelines are followed. Though it is not exactly backward-compatible with HTML, it comes very close, and you don't need to do much to convert a valid HTML 4.0 document to XHTML.

    The reason I want to use XHTML is to add XML functionality to my web pages. This will be nothing grand -- at least in the beginning. I will probably start out with some RDF [w3.org] metadata.

    XHTML might not be a thrilling DTD in itself, but its power lies in the fact that it is made of XML, so you can use other XML throughout your document without violating spec. IMHO, that's a pretty nice improvement over straight HTML. And XHTML has to be well-formed, which is a Good Thing.
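
    As a sketch of that idea: an XHTML page carrying a foreign-namespace element is still one well-formed XML document to any generic parser. The rdf:Description content below is invented for illustration, and whether the result stays valid against the XHTML DTD is a separate question:

```python
import xml.etree.ElementTree as ET

# An XHTML page mixing in a second vocabulary via XML namespaces.
page = """<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <head><title>Demo</title></head>
  <body>
    <rdf:Description>metadata lives here</rdf:Description>
    <p>ordinary XHTML content</p>
  </body>
</html>"""

root = ET.fromstring(page)
# ElementTree exposes namespaced tags as {namespace-uri}localname.
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
print(root.find(f".//{RDF}Description").text)  # → metadata lives here
```

    The parser neither knows nor cares what RDF means; it just keeps the two vocabularies cleanly separated by namespace, which is the whole point.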
  • XHTML is as much a standard language as the Linux Kernel Recommended Coding Style [kernelnotes.org]. According to the W3C press release [w3.org], "Authors writing XHTML use the well-known elements of HTML 4 (to mark up paragraphs, links, tables, lists, etc.), but with XML syntax, which promotes markup conformance." So, as I understand it, you write HTML 4, but throw in some extra informative tags and generally make sure your page plays nicely with hypothetical non-web-browser programs reading your code.

    Incidentally, I don't see any support for such tricks as using tables to lay out a page. Will this force people to recode their layouts with CSS (which they probably should do anyway), or just give coders another excuse to ignore W3C recommendations?

  • From the HTML 4.01 spec:

    9.3.1 Paragraphs: the P element

    Start tag: required, End tag: optional

  • I wasn't just refering to that. If you read the XHTML spec, you'll find that is no longer the case.


    If you can't figure out how to mail me, don't.
