Technology

JPEG 2000 Specs

Richard Finney writes: "'JPEG 2000 might be the graphics file format that even dazzles the pros,' says a ZDNet article entitled JPEG 2000 to give the Web a new image. JPEG 2K will have multiple channels, metadata, variable decompression levels, and progressive decompression."
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward
Why should this format take off when PNG clearly failed to take the world by storm?
  • by Anonymous Coward
    A large number of channels could be useful for transmitting data in scientific circles, since many microscopes, telescopes, etc. can record a very large amount of information per pixel. You could download a pic of the Andromeda galaxy, and view it in whatever set of electromagnetic frequencies you feel like.
  • by Anonymous Coward
    Yeah, the interesting pictures of the Earth are in infrared. Landsat (Landsat 7 was just launched on April 15 - your tax dollars at work) produces images in 7 channels. It would sure be nice to have a real, lossless format that supported more than 3 channels.
  • by Anonymous Coward
    The whole point of JPEG, MPEG (I'm talking about the people behind it now) and their ilk, is to be open, and to produce open standards. The whole exercise would be pointless otherwise.

This is too idealistic a view. The reality is that MPEG-2, for instance, is patented to death; there is a company dedicated to resolving its patent issues, see www.mpegla.com [mpegla.com]; you have to pay even to distribute an MPEG-2 video (not a coder or a decoder, just the video file itself)! MP3 encoders have also been targeted legally.

it's not as if there are any companies involved that could harvest money from patent infringements anyway.

They do harvest money from licensees (see the MPEG LA site); I don't think they sue that often, but sending an official letter saying "You're infringing on our patent #USJunkPatent, please stop distributing your product or give us 1%" is very effective.

For an example, see the League for Programming Freedom page [mit.edu] on the Unisys/CompuServe GIF controversy.

I won't be surprised at all if, on the contrary, companies try to force standards to use their patented technology; the MPEG LA example shows that there is an insidious incentive to accept the other party's patented technology if they accept yours: in the end, the standard is published, the patent owners happily cross-license (or form a patent pool from which they each collect a percentage), and the rest of the world is pissed.

  • by Anonymous Coward
    [disclaimer: i'm biased]

    DjVu [att.com] really is four compression formats in one:
    - IW44: a state-of-the-art wavelet-based continuous-tone image compression method. It's similar to what JPEG2K will be, but it's available now, with source code, and without patent restrictions if it is used in OSS projects.
    - JB2 lossless: a compression technique for "bitonal" images (i.e. black & white, no grayscale). Very efficient for scanned pages of text.
    - JB2 lossy: a lossy version of the above (smaller file sizes).
    - DjVu: a "layered" image format where the text and drawings are coded with JB2, and the backgrounds and pictures are coded with IW44.

    IW44 is the thing that should be compared with JPEG2K. It's progressive, and cheap to decode (no multiplications required, unlike many other wavelet methods, and unlike JPEG).

    As of now it doesn't support anything other than RGB (really YCrCb internally, like JPEG).
    Files are about a factor of 2 smaller than JPEG at relatively high compression rates, but they are only marginally smaller than JPEG at high-quality settings.
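
    A side note on the "no multiplications" claim above: lifting-scheme wavelets really can be computed with adds and shifts alone. Below is a minimal Python sketch of one level of the Le Gall 5/3 integer filter (which, as it happens, is the reversible filter JPEG 2000 eventually standardized). Illustrative only; IW44's actual filter may differ.

        # One level of the 1-D integer 5/3 lifting wavelet transform.
        # Adds and shifts only - no multiplications anywhere.
        def lift_53_forward(x):
            """Split x (even length) into lowpass s and highpass d."""
            even, odd = x[0::2], x[1::2]
            # Predict: highpass = odd minus the average of its even neighbours
            d = [odd[i] - ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
                 for i in range(len(odd))]
            # Update: lowpass = even plus a quarter of neighbouring highpasses
            s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
                 for i in range(len(even))]
            return s, d

        def lift_53_inverse(s, d):
            # Undo the lifting steps in reverse order - exactly invertible.
            even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
                    for i in range(len(s))]
            odd = [d[i] + ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
                   for i in range(len(d))]
            x = [0] * (len(even) + len(odd))
            x[0::2], x[1::2] = even, odd
            return x

        x = [10, 12, 14, 20, 30, 28, 26, 24]
        s, d = lift_53_forward(x)
        assert lift_53_inverse(s, d) == x   # perfect reconstruction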

    - One of the creators of DjVu
  • by Anonymous Coward
    I know people on the JPEG 2000 committee, and JPEG 2000 hasn't made the same incredible mistake that MPEG Audio made.

    So, while there are patented things in it, such as the SPIHT coder (which they will use without any doubt), the patents are merely precautions taken by academics to ensure nobody runs away with their ideas and profits from them.
    In other words: there will be absolutely no problem creating free software that writes or reads JPEG 2000 images.

    a knowledgeable Anonymous Coward.
  • by Anonymous Coward
    The biggest problem with the web today is the lack of professional color management. When I download a Slashdot article and print it out on my $10000 dye sublimation printer, I always feel very uneasy: is that really the shade of green that Rob intended when he made the logo? JPEG 2000 fixes that; now my Slashdot headlines and CNN stories will all be in true, calibrated color, ready for prepress.

    In addition, JPEG 2000 will ride to victory on the shoulders of wavelet compression formats, which are taking the web by storm. I have seen the future and it is wavelet. It's everywhere; here is a list of sites that use it:

    (nil)

    Not only that, JPEG 2000 isn't just a JPEG/GIF/PNG killer, it's also the RIAA-endorsed MP3 killer! Said RIAA spokesman Ana L. Retentive, "by promoting the revolutionary new JPEG 2000 file format for music distribution, we can stop the MP3 plague, ride the wavelet of the future, and increase the American public's sheet music literacy in one fell swoop. Once the filthy, groveling peons out there succumb to the sheer joy of making their own music from lyrics and sheet music (JPEG 2000 encoded, of course) they find on the web, they will quickly realize what poor musicians they are and will head in droves to record stores."
  • Will OSS have to step around patents to use this amazing new technology? I didn't see anything in the article, but I doubt there won't be patent issues with this new format.
  • Up to 256 channels. You don't have to use them all...
  • While this is true, Mozilla will fully support the alpha channel in PNG, meaning that in addition to CSS1, XML, etc, we can look forward to using another great image format - very cool.
  • by gavinhall ( 33 )
    Posted by Gargamelo:

    256 channels has to be one of the most absurd ideas I've heard of. What would you ever use them all for? Also, compression speed on wavelets may be agonizingly slow, but decompression speed should be faster than for most other formats.
  • by gavinhall ( 33 )
    Posted by Gargamelo:

    I understand this, but even the option of 256 channels feels like overkill to me.
  • Wavelet theory is cool.

    But then, we've been promised PNG for years, and it too contains many excellent ideas. But for the entire installed base of current-technology browsers to shrink down to 5% of all browsers in use, we shall have to wait an awfully long time. Especially as fewer and fewer people (proportionally) care about using the latest versions. Things will only get worse.

    So it's a fair bet to say that this new format will never amount to much on the Web. I hope I'm proven wrong, but the way things are going, the current Web standards are slowly but surely congealing into a qwertyuiop-like immovable mass. People are happy enough with GIF and JPEG, and as line speeds get upgraded, people no longer care whether an image takes .02 seconds to transfer rather than .12. It just doesn't matter any more.

  • Not only are there already too many graphics and video formats to choose from, but there will also be as many new formats as we care to write.

    What do I use for lossy compression? JPEG, due to its popularity. Lossless compression? GIF, PNG, xcf.bz2, ps.gz... Use the right tool for the job. :) Combining raw image formats with generic compressors seems like a great idea for lossless compression, even if that does only get you so far.

    I'd love to see a new, openly developed, lossy compression scheme. Unfortunately, I wouldn't know where to begin, because most of this work is done by commercial software companies and whatnot...
  • Sorenson Video, QDesign audio, and JPEG 2000 are all part of the new wave of high-stakes technology that we have to deal with somehow. Every new compression algorithm for the last 4 years has been patented to the core. For a while we were able to get some codecs through Xanim, but now the money riding on video compressors is so high that not even Xanim can get licensed to use them. As internet bandwidth increases and the stakes get higher, the patents and licenses can only get tighter. There are never going to be any more formats like JPEG and GIF.
  • ...PNG, which is now supported by browsers...

    Well, it is supported, but not very well. I don't remember which browser it is, but either Netscape 4.x or IE 4.x does not support transparency in PNGs. That really cuts into their usefulness for me.

  • Sorry to be anal, but alpha channels control the level of opacity, not the level of transparency :P
  • It's not reinventing PNG. It's reinventing JPEG. JPEG compressed photos are much smaller than PNGs... I have one 550K+ PNG which compresses to about 60K in JPEG. That's because PNG is lossless, and JPEG is lossy.

    JPEG2000 just looks like it's adding more features of PNG to JPEG, so you can still have mega-compression of photos but keep transparency, etc.
  • I have to agree with that. People want to be able to animate JPEG photos just like GIF images. I know you can do this with JavaScript, but why spend time coding when a format should support it natively?

    JPEG and GIF are here to stay. Style sheets have yet to prove useful for most websites.

    People worry about backwards compatibility, and are even more afraid of things being supported poorly by browsers.

    Copyrights will be a major issue with all formats in the future, which brings up a question: is it ethical to copyright an Internet standard? Standards should be open so that everyone can access them. Nobody has yet tried patenting HTML (then again, Microsoft or Netscape probably have a patent pending), so this is getting ridiculous.

    I'll stick to plain HTML, GIFs, and JPEGs for the near future.

    Thanks,

    AArthur
  • by SEE ( 7681 )
    ...Meanwhile PNG... From my limited work and reading, PNG appears to be an excellent format -- but one that hasn't reached the critical mass that Linux has. HowToHelp: plug ins. M$ probably won't listen...

    Actually, Microsoft has been very good about PNGs. IE has supported PNGs since version 4.0b1, and Office 97 uses PNG as its native compressed image format and also directly in its PowerPoint, Excel, Word and OfficeArt components.
  • I hope that there will be nothing in this standard that disallows its use in open source software.

    Something as simple as that could end up being a bad turning point -- what if, all of a sudden, we could not display images on Linux desktops, or if something like Mozilla were not allowed to handle JPEGs (just as it's not allowed to do 128-bit SSL or IM)?
  • Could you cite references to back this up?
  • Sun isn't listening, unfortunately. Although it has been in the top 25 of Java "requests for enhancements" by developers for more than a year, they have yet to announce support for PNG.
  • If you're interested, the latest rev of the standard is described in this paper [hp.com].

    I've followed the standard, and I think that it will do very well, even on the Web. The only problem I see is with licensing, but since HP is giving [hp.com] away licenses for the JPEG-LS algorithm (which they helped develop), I can't see this being a problem.

    --
    Christian Peel
    chris.peel@ieee.org
  • by rpk ( 9273 )
    PNG is for lossless images, such as UI elements and the like. The JPEG format is for photographic images.

    However, given the perceived "good enoughness" of GIF and current JPEG, any new format faces an uphill battle.

  • Can anyone here give us a meaningful comparison between JPEG 2000 and AT&T's DjVu compression [att.com], which is also supposed to be wavelet-based? Are these two similar, or are we talking apples and oranges?

    Also, it was unclear in the JPEG2K article whether the new image format maintained the current distinction between compression and file format. Currently most "JPEG" files use a format called JFIF, but the file format and compression are separate -- so the file format can be used to store information other than the image, or could be used to store images compressed with other compression algorithms. Conversely, you can have JPEG-compressed images stored in other file formats. Some digital cameras like the Kodak DC-260 do this -- use JPEG compression but not the JFIF file format.

    Anyway, that seemed like a good design, but it seems clear that the new JPEG2K requires both new compression (wavelet) and a new file format (for multiple channels). I hope they manage to keep the two separated as in the original JPEG.
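
    To make the container/compression distinction concrete, here is a tiny Python sketch that scans the start of a JPEG stream and reports whether it uses the JFIF container; the compressed image data is a separate layer entirely. (The file name is hypothetical.)

        # A JFIF file is an SOI marker followed by an APP0 segment tagged "JFIF".
        import struct

        def is_jfif(path):
            with open(path, "rb") as f:
                if f.read(2) != b"\xff\xd8":        # SOI marker: not a JPEG at all
                    return False
                marker, _length = struct.unpack(">2sH", f.read(4))
                # APP0 (0xFFE0) carrying the identifier "JFIF\x00" => JFIF container
                return marker == b"\xff\xe0" and f.read(5) == b"JFIF\x00"

        print(is_jfif("photo.jpg"))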

    --JT
  • So what's next?
    Advertising space in graphics files?

    And really, with all that information in the J2K file format, just how small is "small"?

    Are we talking 500K here?
  • by wergild ( 10636 )
    i don't really believe that the option to do anything is ever overkill. the more options the better.
  • From the description of it, it seems as if this file format is really more suited for print. The ability to work with a low-res image in your document while simply specifying that it become higher-res when you print is very valuable, and something that designers have to simulate with separate lower-res images right now. The extra channels, the lossless compression, and the support from programs like Acrobat and InDesign all make it seem like a print-oriented format.

    The advantages for the web could be many, but given the assumed slow pace of browser support and the need to be backward compatible with older browsers, I think this format might give TIFF a run for its money, but not JPEG.
  • compression speed on wavelets may be agonizingly slow

    Wrong. Wrong. Wrong.

    The DWT and its relatives are computationally very efficient algorithms.
    The reason they were not used earlier is that they were not developed earlier.
  • JPEG, even though it is lossy, makes most photos look better than they started off. This is due to the way that the sampling for the DCT works. It takes into account how the human eye works, and leaves out things that our brains wouldn't even notice anyway

    BS. The human eye works in nice square pieces? Utter bullshit. Do your homework. It is the other way around.

    I don't know what WT compression you looked at, but all the implementations I have seen (a lot) have MUCH better quality per bit and, most importantly, are free from those nice square artifacts of JPEG that the human eye is so well equipped to pick up.
    And when coded sanely, they are computationally MORE efficient.

    Do yourself a favor: pick up a good book (say, Mallat's A Wavelet Tour of Signal Processing) and read it...
  • About the human eye: most wavelet-transform-based compression algorithms achieve better quality because they naturally concentrate on changes - the edges of the image - which are the same things the human eye and mind concentrate on.
    Look at some papers [mathsoft.com] on image processing...
    Notice the one about deblocking of JPEG-compressed images, for example...

  • published in mid to late 80's, I think

    Huh? You should brush up on your understanding of the subject... Most of the developments happened later - I already mentioned some good references...

    From your description of your testing, I am positively sure you fucked it up. E-mail me if you want some more references on the subject...
  • ANY time I do a graphic with lots of colors I use JPEG. If I want to make a zippy website I will convert that JPEG into an adaptive-palette GIF. GIF is pretty lossy if you make something in 24-bit color and try to convert it to a 256-color GIF.

    The whole reason GIFs can do transparency and animation is that they are limited to 256 colors, so you can pick one of those colors to be transparent much more easily than trying to make a channel transparent in a JPEG (which would need special software in the JPEG viewer).

    I usually laugh when I see new multimedia formats trying to "become the standard". I cry when I find that a format that should be universally accepted (JPEG and GIF, since they are the internet standards) is not supported in a given program. If you want to make a format universal, forget about the licenses and make it GPL; that way everyone can get ahold of the codecs and write them into their programs. Down with licensing!
  • I have to agree. Web site designers are terrified to use PNGs on their site because they know that a sizeable percentage of their viewers can't... well, view 'em. And browser support for them still isn't where it should be.

    JPEG 2K might be God's own personal image format, but if a format falls in a forest and no one is around to implement it, does it make for pretty pictures?
  • Why don't they add a new compression type to the TIFF format?

    I guess because then they couldn't do a truncated download for low-res.

    Oh well. If they release a freely usable source library, I hope the format flourishes. Otherwise, it can go to hell with my bootprint on its ass.
  • I definitely believe that wavelets are superior to the DCT (less artifacting). However, there are several key points from my first post that I would like to elaborate on, since they contradict some of the things in your post.
    1. Wavelets are very very easy to implement... In some ways yes, but in most ways no, because of one thing: iterative math functions. Wavelets are orders of magnitude more processor-intensive than other encoding schemes such as LZW (GIF) and DCT (JPEG).

    2. ...for anyone who understands the algorithm...

      Silly me (sarcasm implied), I forgot, there's only one wavelet algorithm... I do understand the algorithm, if you are talking about Haar, etc. There are many wavelet transforms, related mathematically but with differing resolutions, etc. Even setting aside the iterative math problem above, determining an algorithm that will work well for the vast majority of images to be encoded/decoded is not an easy task.

    3. ...moving code to an alternate platform, isn't that just a simple recompile?

      Oh, let me just fire up my SGI, my Mac, my OS/2 box, my NetWinder, my 486/DX2, my Alpha, my HP... and do a recompile. I think you get my point. The problem isn't recompiling the code, it's coming up with code that is general enough to be easily moved between platforms without suffering serious performance degradation. Failing that, you have to have coders on those platforms prepared to deal with and optimise for the "truly hairy math"

    By the way, if you've just finished your dissertation on wavelets and want to do something about it for the OpenSource movement, get involved. [mailto]
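
    For concreteness, here is a minimal Python sketch of the simplest member of the wavelet family, a one-level 2-D Haar transform. Real codecs use longer filters plus quantization and entropy coding, which is where the truly hairy math lives.

        import numpy as np

        def haar2d_level(img):
            """One level of the 2-D Haar transform: an approximation band
            plus three detail bands, each a quarter of the original size."""
            a = img.astype(float)
            lo = (a[:, 0::2] + a[:, 1::2]) / 2.0    # average adjacent columns
            hi = (a[:, 0::2] - a[:, 1::2]) / 2.0    # difference of adjacent columns
            ll = (lo[0::2, :] + lo[1::2, :]) / 2.0  # approximation (half-size image)
            lh = (lo[0::2, :] - lo[1::2, :]) / 2.0  # horizontal-edge detail
            hl = (hi[0::2, :] + hi[1::2, :]) / 2.0  # vertical-edge detail
            hh = (hi[0::2, :] - hi[1::2, :]) / 2.0  # diagonal detail
            return ll, lh, hl, hh

        img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for real pixels
        ll, lh, hl, hh = haar2d_level(img)
        print(ll.shape)   # (4, 4): a quarter-size approximation; recurse on ll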
  • Probably not. However, it is going to have a lot of mainstream support, and from my early readings on the subject, it should be a useful addition to the array of graphics file formats already in existence.

    For those not familiar with DCT or wavelet compression methods, what we're talking about are ways of telling the computer how to generate the image mathematically from a much smaller set of data. The "ideal" algorithm would quickly reproduce any image accurately with the smallest possible dataset. The problem is, different techniques work better for different images. Which is why, until hard numbers are available, I won't add momentum to this particular bandwagon. One of the folks working with me on wavelets put it better than I ever could: "the math is truly hairy", which means that even if JPEG2000 is a wonderful algorithm, moving it into code for alternative platforms will still be a daunting task. "Hairy math" takes time for the computer to resolve.

    Meanwhile PNG... From my limited work and reading, PNG appears to be an excellent format -- but one that hasn't reached the critical mass that Linux has. HowToHelp: plug ins. M$ probably won't listen [ever tried to run IE for Linux? ;')] but Mozilla's developers will listen. Octave's developers will too. (I haven't checked to see if there is already a PNG code branch in Mozilla or Octave, so apologies if I'm speaking out of turn.)

    A last question for the /. world: (later in the article) Speaking about Lizard Tech's MrSID, I noticed a feature I wondered about: "...MrSID supports an exact coordinate system that lets a user zoom in on an area of the picture." Does anybody know if this particular feature is covered under any patents?

  • I did my homework. Firstly, I got this from a book (forget which one, published in mid to late 80's, I think) which was given to me as "THE book to read on different image formats". I have also read this in many other places. And I have observed it by eye. Further, the human eye does not pick up sharp borders between colours in natural scenes (except if you study them carefully). JPEGs also smooth out such borders - this is one of the most noticeable artifacts of JPEG compression. I should add, though, that the level of compression we use at work is never worse than the 2nd best quality/2nd largest. At tighter compression it becomes a waste of effort.
    For wavelets we started with the company that provides the wavelet decompression engine for Microsoft's Encarta (our direct competitor). If you have ever seen Encarta (at least '98 or earlier), the photos look like shit. Cloud-shaped artifacts on a blue sky are (barely) acceptable. Cloud-shaped artifacts on grass, a window, a (tiled) rooftop - that's pretty dire.
    Having tried that same wavelet compression engine, we compressed images with it at the highest possible quality. 640x480x8 images came out around 65-69K, with both wavelet and JPEG. The wavelets were of a quality we considered unacceptable. The JPEGs were very difficult to distinguish from the originals (excepting almost flat colour areas and sharp borders).

    You may also note that the first thing I said in my post was that this research was done more than a year ago. It was, admittedly, only done on the web, but I figured most companies touting such a technology at a level appropriate for multimedia applications for home-users would also be trying to provide us all with browser plug-ins.

    Oh, and comparing artifacts again: I think most people would have greater difficulty observing 2x2-pixel square artifacts than ones that look like 200x2-pixel squiggly lines running all over the place. Oh, and that attractive fuzzy "I can't focus anymore, maybe I need glasses" effect.

    Finally, though, thanks for the book recommendation. I'll check it out.

    ~m.
  • Interesting. But I have some *ahem* concerns. First off, let me mention that I work in "multimedia", and a bit more than a year ago I did a lot of research into different image compression formats, as we were starting to specify a new product.

    So, with that disclaimer:
    1). I seem to recall that JPEG already had a facility for storing multiple resolutions in one image, with all general compression info stored at the start of the file (in JFIFs), and the lowest res version next, followed by the 2nd lowest res, and so on, until you reach the highest res at the end.
    2). Whilst looking at different compression formats, I also looked at several different wavelet based implementations. Without exception they looked worse than JPEG at the same file size. Yes, you read right - WORSE. Both in a 256 colour display (JPEG at its worst, as dithering looks really quite bad in palettised display), and with millions of colours. We dropped the idea of using wavelets. Oh, and the decompression speed _appeared_ to be slower for wavelets than for JPEGs. Not significantly enough for it to have affected our decision, but still...
    3). JPEG, even though it is lossy, makes most photos look better than they started off. This is due to the way that the sampling for the DCT works. It takes into account how the human eye works, and leaves out things that our brains wouldn't even notice anyway. Kinda like the psycho-acoustic sampling used for MP3. Many people are reported to prefer that sound to the original digital sound as well.

    Of course, we would never use JPEGs for line drawings, or maps (with lots of text). For those we use a lossless compression. Sadly the choice here had to be GIF (supported by Java, and our app is being written in Java. *sigh*).
    JPEG2K does, however, sound interesting due to the combination of a lossy and a lossless compression scheme into the same "standard". On the other hand, the same thing could be done with JFIF, and already IS done with BMPs and PICTs. And coming from JPEG, it is likely to be fully supported by Sun years before PNG is finally supported. *sigh*
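
    For what it's worth, the multi-resolution facility mentioned in (1), progressive JPEG, is easy to produce. A short sketch using the Pillow library (file names hypothetical): progressive=True stores the image in successive scans, coarse detail first, so a decoder can show a rough preview before the download finishes.

        from PIL import Image

        im = Image.open("photo.png").convert("RGB")
        # Write a progressive JPEG: multiple scans, coarse detail first.
        im.save("photo_progressive.jpg", "JPEG", quality=85, progressive=True)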

    ~m.
  • The act of extracting a subregion of an image from an appropriate resolution level in a pyramid (or implied pyramid) can't be patented (effectively) since there is lots of prior art. However, the MrSID algorithms are tightly held, and I believe patented.

    I have implemented a reader for MrSID, and there is nothing special about its multi-resolution capability that you can't accomplish with a tiled and pyramided TIFF file (and I have). However, the compression is great, and it is taking the world by storm, which is why we need a public and popular wavelet standard.
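
    A minimal numpy sketch of the pyramid idea described above: precompute successively halved copies of the image, then serve a zoom request by picking the nearest level and cropping. (Function names here are made up for the example.)

        import numpy as np

        def build_pyramid(img, levels=4):
            """Each level is a 2x box-averaged copy of the previous one."""
            pyr = [img.astype(float)]
            for _ in range(levels - 1):
                a = pyr[-1]
                h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2  # even crop
                a = a[:h, :w]
                pyr.append((a[0::2, 0::2] + a[0::2, 1::2] +
                            a[1::2, 0::2] + a[1::2, 1::2]) / 4.0)
            return pyr

        def crop_at_zoom(pyr, x, y, w, h, zoom_out):
            """Region (x, y, w, h) in full-res coordinates, 2**zoom_out reduced."""
            s = 2 ** zoom_out
            a = pyr[zoom_out]
            return a[y // s:(y + h) // s, x // s:(x + w) // s]

        pyr = build_pyramid(np.arange(256.0 * 256).reshape(256, 256))
        print(crop_at_zoom(pyr, 64, 64, 128, 128, zoom_out=2).shape)   # (32, 32)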

  • The Independent JPEG Group (IJG), of which Tom Lane is the most visible member, is responsible for the currently popular OSS JPEG implementation. See http://www.ijg.org [ijg.org] for details.

    More information on JPEG 2000 can be found at http://www.jpeg.org [jpeg.org].

    I am wondering if the IJG is planning to (and has sufficient resources to) implement JPEG 2000 support quickly as the specification finalizes. Does anyone know? I asked Tom Lane about this indirectly a while ago, and he just pointed me to the www.jpeg.org web page.

    The IJG did a great job on the current library, and I hope that they can do JPEG2000. I also think that if they need support (manpower/money), it would behoove the OSS community to provide it.

    I for one agree that wavelet-based approaches to compression are the future of lossy continuous-tone compression. The MrSID technology [lizardtech.com], for instance, is great, but they keep a very tight hold on their proprietary technology. I think it is important to establish a popular, public format and technology to fill this void, or proprietary interests will damage OSS efforts.

  • by Frank Warmerdam ( 18907 ) on Friday April 23, 1999 @10:43AM (#1920427) Homepage
    Folks ... I contacted Tom Lane of the Independent JPEG Group and he says:

    Nothing is happening within IJG; we are waiting to see what emerges from the ISO JPEG committee, and in particular whether it is (a) patent-free and (b) enough better than JPEG-1 to be worth a universal upgrade cycle.

    On point (a), I have made my views quite clear to the JPEG committee, but I dunno whether they are listening. There will not be an IJG implementation of JPEG-2000 unless it is freely distributable and freely usable under essentially the same restrictions (ie, none to speak of) as our current code. Patent licenses are a show-stopper. But from what I've heard, all the proposals before the committee have some amount of patent encrustation.

    On point (b), the poor track record of progressive JPEG has left me unenthused about pushing incompatible standards that offer only marginal or special-purpose improvements. JPEG-1 took the world by storm because it was an order of magnitude better than anything else available. Unless JPEG-2000 is that much better again, it faces at best an agonizing uphill fight; the world might be better off without the ensuing confusion. (I have not heard anything about what performance improvements they actually expect to get ... but I am suspicious that we are going to see percentage points, not integer factors.)

    So, I'm waiting and watching.

  • It has 256 channels.
    Seriously though, I don't see it taking off very fast. Wavelet (de)compression is a slow process. While today's newest computers probably won't have much trouble with it, it will be painful to view these images on a (4|5)86, and you might as well totally forget about saving them. While messing with wavelets on my 486, it wasn't unusual for a 640x480 image to take 16 hours to compress. I could view the compressed image back rather quickly, but it definitely didn't flow onto the screen like JPEGs or PNGs.
  • I would hope the JPEG group will provide a sample implementation and C libs as they have done with past specifications.
    They have seemed pretty open in the past.
  • by Xenu ( 21845 )
    It sounds wonderful. My concern is that it will end up like TIFF, a standard with so many features that nobody implements them all.
  • by fornix ( 30268 )
    I understand this, but even the option of 256 channels feels like overkill to me.

    Ah, and 640K of RAM was more than one would ever need ;-)

  • People could encrypt a message, then hide the ciphertext in an obscure data channel of the pictures on their web page.

    Aren't things like this already done?
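
    Indeed they are; hiding bits in the least-significant bits of pixel data is the classic version. A toy numpy sketch (names hypothetical), with the caveat that this only survives lossless storage, since lossy compression scrambles low-order bits:

        import numpy as np

        def embed(channel, payload):
            """Write payload bits into the least-significant bits of a channel."""
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            flat = channel.flatten()                 # flatten() returns a copy
            assert bits.size <= flat.size, "channel too small for payload"
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
            return flat.reshape(channel.shape)

        def extract(channel, nbytes):
            bits = channel.flatten()[:nbytes * 8] & 1
            return np.packbits(bits).tobytes()

        chan = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        stego = embed(chan, b"meet at dawn")
        assert extract(stego, 12) == b"meet at dawn"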
  • Wavelet theory is cool.

    Whenever my professors start describing how wavelets work, my eyes glaze over while my brain silently hemorrhages.

    At least I was never asked to perform wavelet compression by hand on an exam. I did have to do LZH compression by hand on an exam once. I wonder if I could be sued for not licensing the compression algorithm when I used it on an exam?
  • One of the coolest things I have heard of wavelets being able to do, is this:

    1) Start with a picture of a teapot on a carpet.

    2) Transform the image into its wavelet representation (it turns into a really nasty mathematical formula).

    3) Eliminate some of the terms

    4) Use the new formula to create an image

    5) The teapot is now gone, with the texture of the carpet where the teapot used to be.

    Very very cool.
  • I guess I will explain this a little more.

    When you use a wavelet to compress an image, you reduce it to a mathematical expression. Each term in the expression represents a level of detail. Optimally expressed, the mathematical expression takes as much room as the original image. However, we can now hack off the terms that represent very high levels of detail without changing the visual quality of the image (to the naked eye). This is what gives us the saving in space.

    In order to accomplish the trick with the teapot, you instead eliminate some of the more significant terms of the expression. Since the carpet texture is highly detailed, it stays. And since no information remains indicating that the teapot ever existed, that space gets filled in with the carpet texture.

    There. I hope your brain did not hemorrhage.
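
    A 1-D illustration of that truncation trick in Python, using the simplest wavelet (Haar): zeroing the small detail terms barely changes the reconstruction (that's the compression); zeroing the large terms instead is what wipes out coarse structure like the teapot.

        import numpy as np

        def haar_forward(x):
            """Full 1-D Haar decomposition: coarsest average plus detail bands."""
            a = x.astype(float)
            details = []
            while len(a) > 1:
                details.append((a[0::2] - a[1::2]) / 2.0)  # detail at this scale
                a = (a[0::2] + a[1::2]) / 2.0              # next coarser average
            return a, details[::-1]                        # coarse-to-fine order

        def haar_inverse(avg, details):
            a = avg
            for d in details:
                out = np.empty(2 * len(a))
                out[0::2], out[1::2] = a + d, a - d
                a = out
            return a

        x = np.sin(np.linspace(0, 6, 64))                  # a smooth test signal
        avg, det = haar_forward(x)
        kept = [np.where(abs(d) > 0.01, d, 0.0) for d in det]  # drop tiny terms
        print(np.max(abs(haar_inverse(avg, kept) - x)))    # small reconstruction error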
  • Sorry to be anal about your being anal, but the alpha channels do determine the level of transparency. It's the exact same thing as controlling the level of opacity. The degree to which something is opaque is the degree to which it is not transparent. Isn't this kind of self-evident?
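
    In code, with alpha normalized to [0, 1] as opacity, the relation is one line (a trivial sketch):

        alpha = 0.25                                  # 25% opaque, i.e. 75% transparent
        fg, bg = 200.0, 40.0                          # foreground / background values
        composite = alpha * fg + (1.0 - alpha) * bg   # standard "over" compositing
        print(composite)                              # 80.0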
  • Sure, it all sounds great, but for web designers like me, this doesn't mean a thing. Even when the specification does get released, it won't even be supported by the 5.0 browsers, let alone the older browsers (which give us enough trouble as it is!). Not only that, but this format will have to go head to head with PNG, which is now supported by browsers and has a lot of industry support (it's the official "Next Big Thing" for web graphics).

    If this thing ever does get the support it needs, I'll gladly use it, but as things stand, absolute vs. relative positioning already gives me enough headaches...

    Peter
  • Just wait a few years and all the patents will be void :)
