The most obvious points of superiority are simply that they document what they actually measure and how they combine the individual measurements to produce a final result. Although Tiobe doesn't document what they do well enough to be sure, it looks like langpop.com covers a couple of types of sources that Tiobe doesn't (or at least doesn't imply they try to cover). One particularly interesting point is that they attempt to gather data about actual code, not just questions about code (e.g., they look at Freshmeat, ohloh, and Google Code).
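Just to illustrate the kind of "combine the individual measurements" step I mean, here's a minimal sketch. The function, source names, counts, and weights are all invented for illustration; a real site's methodology would be more involved:

```python
# Hypothetical sketch of combining per-source popularity counts into one
# ranking. All source names, counts, and weights below are made up.

def combine_rankings(sources, weights):
    """Normalize each source's counts to [0, 1], then take a weighted mean."""
    scores = {}
    for name, counts in sources.items():
        top = max(counts.values())  # normalize against that source's leader
        for lang, n in counts.items():
            scores[lang] = scores.get(lang, 0.0) + weights[name] * (n / top)
    total_weight = sum(weights.values())
    return {lang: s / total_weight for lang, s in scores.items()}

sources = {
    "search_hits": {"C": 900, "Python": 700, "Haskell": 100},
    "open_source_code": {"C": 500, "Python": 450, "Haskell": 80},
}
# Weight measurements of actual code more heavily than search chatter.
weights = {"search_hits": 1.0, "open_source_code": 2.0}

ranked = sorted(combine_rankings(sources, weights).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)
```

The point of documenting a step like this is that anyone can re-run the combination with different weights and see how sensitive the final ranking is to them.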
Oh, and no, I'm not affiliated with Langpop.com (or Tiobe) in any way.
The biggest problem is that a macro lens can be somewhat on the expensive side. If you want to stay cheap, Cosina makes a 100mm f/3.5 macro that looks and feels cheap, but has quite decent quality optics. This is widely available under various other names (Promaster, Quantaray, etc.) In fact, nearly any current, off-brand, 100mm f/3.5 macro lens is likely to be made by Cosina.
Compared to zooms, dedicated macro lenses are nearly always sharper, and (particularly) have extremely low distortion. FWIW, most of them work pretty decently as portrait lenses too.
Let's see now: the 1220 minute upgrade included 650 GB of data, but the low-end hardware only included a 320 GB hard drive. Does it really take any great brilliance to see a problem with that?
Mid-range, I believe, is the common one. Of course, if you only want one end, you can have "entry level" instead of "low end".
And, FWIW, "Mid Range Hardware" is what it's called in the original blog entry -- the 'mid-end' nonsense is just another artifact of the
I think "low end" is much more descriptive than "entry level" -- if anything, it seems to tend toward the opposite. Entry level users have relatively new machines with fast CPUs and big hard drives. It's the more experienced users who can get by with horribly obsolete hardware...
The Patent holder should have been required to submit their source code to get the patent to start with. [...]
The patent office used to require submission of a model for any patent, but stopped, largely because storing all the models became cumbersome and expensive. In theory, it wouldn't need to be so cumbersome for source code, but see more about that below.
[...] Facebook should only have to submit its source to an independent third party for review.
That's almost certainly the case -- it'll really be turned over to the opposing counsel (i.e. attorneys) and they'll hire (non-Facebook) experts to examine the code. Those experts, in turn, will be required to sign a protective order, promising they'll only use it for the specific purpose of proving claims in the current case, not anything else.
I've been in that position a number of times, and can honestly say I've never even been slightly tempted to steal from the source code I looked at. Quite the contrary, such work is usually done on a tight enough schedule that you're working too hard to meet deadlines to really think about much else, and by the time a case is over, you never want to look at any of it again!
I realize this isn't how software patents work, but they need to start requiring source code submissions for the applications.
Perhaps it's best to consider how patents on software came to be accepted in the first place. There was a patent on a machine for curing rubber. Somebody else built a machine that clearly did what that patent described -- but under control of software running on a CPU, instead of electronics designed specifically for that purpose. The case reached the Supreme Court, which ruled that the simple fact that the machine included a CPU and some software to control it didn't change the fact that it was a machine that carried out the patented process.
From a legal viewpoint, there's still not really a patent on software per se -- there's a patent on a machine that executes some software, or on a process of doing something that happens to be carried out by a computer under the control of some software.
As such, if you try to apply such a rule to "software patents", you almost inevitably have to apply it to patents on other kinds of machines. The minute you do that, however, you're back to the cumbersome, expensive storage of all those machines.
Facebook might be using something within their source that could be patentable that is not related to any existing patents, and they don't want to disclose their methods and routines to any outside party. This is not at all uncommon; we call these things "trade secrets". How do we know that this isn't just a ruse to get access to trade secrets or other unrelated code?
See above or just Google for "protective order". This is hardly the first court case involving information that might be sensitive...
Can Facebook simply provide the source code in obfuscated [wikipedia.org] form?
Probably not. The current Federal Rules of Civil Procedure state that you:
(C) may specify the form or forms in which electronically stored information is to be produced.
Doing so would be a bad idea anyway -- giving a judge the idea that you're trying to cover up what you've done will almost always do more harm than good.
You might be surprised how little obfuscation would accomplish though. Quite a few cases are developed just from disassembled executables, with no source code at all.
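To give a feel for how much structure survives in a disassembly, here's a toy example using Python's `dis` module. Python bytecode stands in for real machine code, and the function is invented, but the principle is the same: you can see what the code does with no source at all:

```python
import dis

# Invented example function: simple overtime pay calculation.
def pay_rate(hours):
    if hours > 40:
        return 40 * 10 + (hours - 40) * 15
    return hours * 10

# Disassemble it. The comparison, the constants 40/10/15, and the two
# return paths are all plainly visible in the instruction listing.
ops = [ins.opname for ins in dis.Bytecode(pay_rate)]
print(ops)
```

An expert looking at the real machine-code equivalent of that listing can usually reconstruct the logic, which is why obfuscating the source buys so little.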
Oh, there is one minor difference though: a rule 34 inspection is normally used for something like a large machine that can't reasonably be delivered to the other side.
The rest of it is pretty accurate though. For one example, I was involved in a case where the other side was ordered to produce a copy of a floppy disk -- so they sent a Xerox copy. This was recently enough that even the judge realized that was a problem, and told them that they needed to send a copy of the contents -- so they loaded executables into a text editor (Notepad, to be exact), and printed them out -- in a font that didn't have characters for many of the codes, so about half of it was the Windows Empty Square Box. The best part was the (literally) couple of thousand blank pages where a padding character (or something on that order) happened to correspond to a form-feed...
Tactics like that can be dangerous though -- the judge clearly recognized what was going on, and didn't like it a bit. For the rest of the case, he didn't cut them a break on anything. At the beginning of the case, I'd told our clients that IMO, the facts only favored them by about 60:40 or so, but by the end, there was virtually no way we could lose (and we didn't). In his decision, the judge even commented on the "assiduous and ongoing dishonesty" of our opposition (I think I'm quoting that correctly -- it was close to that anyway).
Jerry, Thank you for pointing out my omission of the networking requirement. I am not a lawyer, but I have worked on a few patent cases as an expert, so I know to read the patent before talking about it, even if I am not as careful as a lawyer at reading over it.
I'm in pretty much the same position, except that I've been doing it long enough that I'm probably more anal than most lawyers about how I read claims...
I believe the networking requirement you mention will be fulfilled by any system which needs to use a network to validate user information from a central source, such as Kerberos authentication or Windows Active Directory mechanisms. Of course, LDAP was mentioned in the patent, but these go beyond LDAP.
Active Directory (to use your example) certainly provides more than LDAP, but it does support LDAP, and from a viewpoint of the data and organization, it doesn't really provide a lot beyond the kinds of things LDAP can provide. It does add a lot of things like directory replication that LDAP doesn't address, but those aren't really relevant here. Those track things like whether a user is logged in, but this is talking about the applications and files the user has open. You could argue that those are equivalent, but I think with the specific mention of LDAP in the patent, they'd probably be fairly safe from that type of prior art.
As I said in my previous post, though, I'm not really trying to say the patent is necessarily valid. Maybe Facebook can and will come up with some really compelling evidence of prior art. If the suit settles out of court, it might be for precisely that reason. Then again, it could be just the opposite -- Facebook may have looked for prior art, found nothing even close, and given up. On the other hand, it could also be a simple matter of economics -- if Facebook figures it'll cost them five million dollars to defend themselves in court and gets an offer to settle for two million, there's a pretty good chance they'll take it, even if they're pretty sure they could win in court.
Having a judge presiding on a case whose technical details he is wholly ignorant of strikes me as terribly dumb.
The judge is only supposed to decide questions of law, not of fact (questions of fact are decided by the jury). As such, the judge's expertise is supposed to be primarily in applying the law to the case at hand. Our legal system does recognize, however, that in a technical case, the judge frequently needs to understand technical details to be able to apply the law intelligently. The court is allowed to appoint a "special master", who is a neutral expert in the technical field to advise the court (i.e. mostly the judge) about the technical questions involved.
Of course, leaving all the questions of fact to a jury isn't necessarily a huge improvement. Turning technical questions about code over to a bunch of people who couldn't get out of jury duty doesn't exactly guarantee an accurate answer to those questions...
So they basically claim they have a patent on the one-to-many Foreign Key?
NO! In fact, the patent itself specifically cites a one-to-many relationship as already being known. The attempt at claiming coverage of a one-to-many relationship appears to come only from the incompetent who wrote the summary.
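For anybody unsure what a plain one-to-many foreign key looks like (i.e., the thing the patent's background already treats as known art), here's a minimal sqlite3 sketch. The schema, table names, and data are all my own invention, purely for illustration:

```python
import sqlite3

# Minimal one-to-many relationship: one user owns many documents.
# All names and data here are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE documents (
        id INTEGER PRIMARY KEY,
        owner_id INTEGER NOT NULL REFERENCES users(id),  -- the foreign key
        title TEXT
    );
""")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.executemany("INSERT INTO documents VALUES (?, ?, ?)",
                 [(1, 1, 'notes'), (2, 1, 'draft')])

# The "many" side: every document row points back at exactly one user row.
rows = conn.execute("""
    SELECT u.name, COUNT(d.id)
    FROM users u JOIN documents d ON d.owner_id = u.id
    GROUP BY u.id
""").fetchall()
print(rows)  # [('alice', 2)]
```

Relationships like this predate the patent by decades, which is exactly why the patent's own background section lists them as known art rather than claiming them.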
After reading through the '761 patent, any operating system which initiates a user working-space at login, e.g., a shell, will fall under the main claim of this patent.
It's refreshing to see somebody at least try to read the patent. I have a hard time believing anybody could mis-interpret it this badly though. Let's look at part of claim 1:
a computer-implemented tracking component of the network-based system for tracking a change of the user from the first context to a second context of the network-based system
How would an operating system with a shell qualify as a "network-based system"? Answer: since it's not network-based, it's not even close. Even something like logging in remotely isn't really network-based -- it's based on one computer, and happens to have a network between the CPU and the terminal. Here they seem to be talking about something that's truly network-based -- something intended exclusively (or at least primarily) for access over a network, and (quite possibly) the "server" isn't necessarily a single server, but itself an entire network.

Exactly what "network-based" means for this patent doesn't seem entirely clear to me though -- and the patent specification doesn't really tell us either (the phrase "network-based" isn't mentioned in the specification). If that claim is part of the lawsuit, there will probably need to be a "Markman" hearing to decide how the claim should be construed.

The court is required to presume that the patent is valid, and therefore to attempt to construe the claims in a way that doesn't make prior art obvious -- and in this case, I think "network-based" is pretty easy to construe as meaning something that prevents a normal (or even remote) login from being prior art, so if the issue arises, there seems to be little question that the court would do so.
For those who've talked about tagging being an infringement, I'd note that "metadata tagging" is specifically mentioned in the "background of the invention" as being known related art. Likewise, those who've talked about a: "one to many relationship" (or various similar phrases), that's also mentioned in the background of the invention as already being known, not falling within the patent.
Now, I'm not going to try to argue that the patent is necessarily valid -- that's a question the court will probably need to address, and if Facebook's attorneys are doing their jobs, they'll have specialists in prior-art searching put a fair amount of effort into researching reasonable possibilities. It does look, however, like if there is prior art, they're going to have to do some serious work to find it. It might well exist -- quite a few people were working on similar ideas around the same time, and it's entirely possible somebody else beat these guys to it. If it is out there, however, it's going to take quite a bit of hard, careful work to find it and show that it really does include all the limitations in the claims of the patent.
Just FWIW, I'd also note that to invalidate a patent, you don't just have to find prior art to one of the claims -- you have to find prior art for all the claims, or at least all the claims at suit. Looking at their dependent claims, we find things like:
30. The system of claim 23, wherein the first user workspace is associated with a plurality of different applications, the plurality of different applications comprising telephony, unified messaging, decision support, document management, portals, chat, collaboration, search, vote, relationship management, calendar, personal information management, profiling, directory management, executive information systems, dashboards, cockpits, tasking, meeting and, web and video conferencing.
I don't think Facebook provides all those, so they're probably not being sued over that claim, but for statutory prior art to invalidate that claim, you'll need to find a web site (or something similar to a web site anyway) that provided every one of those applications by December of 2002 (and, of course, did the automated metadata-updating based on context, etc., cited in the earlier claims). It's certainly possible such a thing existed -- but if so, I'm pretty sure it's going to take some real work to find and prove it.
The difficulty is in actually building a real lens to do what you want. Most of how to do that is widely known as well -- but when you're creating something where the basic unit of measurement is a single wavelength of light, virtually everything has to be done quite precisely. It's possible to do things by hand -- but building a reasonably modern zoom lens would be several years of work for one person. Back when I was younger, grinding a mirror was a rite of passage for virtually every amateur astronomer -- it took months of painstaking work to grind just one of a size that's fairly easy to handle -- I can't imagine grinding a dozen or more, especially at the much smaller sizes necessary for a typical camera lens (keeping in mind that one element of a lens is already double the work of a mirror, since you have to grind and polish both sides, not just one).
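To put "the basic unit of measurement is a single wavelength of light" in numbers, here's some back-of-the-envelope arithmetic. The quarter-wave surface criterion is a common optics rule of thumb I'm assuming here, not something from the post:

```python
# Rough arithmetic on optical surface tolerances.
# Assumption: the common "quarter-wave" rule of thumb for a good surface.
wavelength_nm = 550                    # green light, roughly mid-visible
quarter_wave_nm = wavelength_nm / 4    # allowable surface error
quarter_wave_mm = quarter_wave_nm * 1e-6
print(f"surface tolerance ~ {quarter_wave_nm:.1f} nm ({quarter_wave_mm:.2e} mm)")

# A 12-element zoom has 24 surfaces to grind and polish to roughly that
# tolerance, versus a single surface for a telescope mirror.
elements = 12
surfaces = elements * 2
print(f"{surfaces} surfaces, each held to ~{quarter_wave_nm:.1f} nm")
```

So every one of a couple dozen surfaces has to be figured to an accuracy on the order of a hundred nanometers, which is why hand-building a modern zoom is a multi-year proposition.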
Especially for a zoom, the mechanics are extremely non-trivial as well. You have to build a lens barrel that allows a dozen (or so) different groups of elements to move relative to each other, maintain exactly the right distances, never tilt or go off-axis, etc. I've simply disassembled, cleaned, lubed, and reassembled a couple, and even that's not for the faint of heart. Designing and building one is definitely a serious task.
Today, bayer matrix and AA filters are glued on the chip in the manufacturing process, and it's impossible to get rid of it afterwards.
Not that it really makes a huge difference, but while the Bayer matrix is fabricated as part of the sensor chip, the AA filter is not.
Removing the color filters would not really affect the requirement for AA filtering either. And, just FWIW, there have been a few cameras built with Bayer filters, but not (physical) AA filters (e.g. the Kodak Pro dSLRs).
It would appear that in most cases, the AA filter doesn't really have much effect on final sharpness anyway. Just for example, if you read through a description of a test procedure, you quickly realize that very few pictures approach the maximum sharpness of which a current camera/lens combo is capable.
Finally, I'd note that if you really don't want a Bayer-pattern sensor, you can get a Sigma camera. They use a type of sensor developed by Foveon (which Sigma has since bought out) that has individual sensor elements "stacked", so there's a red, green, and blue element at each pixel location. While Sigma's cameras are perfectly good and work quite nicely in general, they're not really a whole lot better than most others (if anything, they seem to lag a bit behind the field in general). At least when you look at a JPEG, however, those extra sensor elements aren't doing a whole lot of good -- a JPEG sub-samples the chrominance channels, so they have half the resolution of the luminance channel anyway. This isn't quite identical to a Bayer pattern, with twice the density of green sensors as red or blue, but it works out pretty close.
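That sub-sampling comparison is easy to check with a little counting. The 8x8 tile size below is arbitrary; any even size gives the same ratios:

```python
# Compare sample densities: a Bayer mosaic vs JPEG 4:2:0 chroma subsampling.
# Tile size is arbitrary; any even size gives the same ratios.
N = 8  # pixels per side

# Bayer RGGB tile: green at half the sites, red and blue at a quarter each.
bayer = {"R": 0, "G": 0, "B": 0}
for y in range(N):
    for x in range(N):
        if (y % 2, x % 2) == (0, 0):
            bayer["R"] += 1
        elif (y % 2, x % 2) == (1, 1):
            bayer["B"] += 1
        else:
            bayer["G"] += 1

total = N * N
print(f"Bayer: G {bayer['G']/total:.0%}, R {bayer['R']/total:.0%}, "
      f"B {bayer['B']/total:.0%}")

# Common JPEG 4:2:0 subsampling: full-resolution luma, chroma sampled once
# per 2x2 block, i.e. each chroma channel keeps a quarter of the samples
# (half the resolution in each direction).
luma_samples = total
chroma_samples = (N // 2) * (N // 2)
print(f"4:2:0: luma {luma_samples}, each chroma {chroma_samples}")
```

So a Bayer sensor samples green at half the pixel sites and red/blue at a quarter each, while a typical JPEG keeps only a quarter of the chroma samples anyway -- which is why the stacked sensor's extra color samples mostly vanish by the time you're looking at a JPEG.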