Comment Re:Apple bashing (Score 5, Interesting) 452

Yeah, this.

Mid-2011, I was driving through the Rocky Mountains in Colorado along a road that wound around the outside of a canyon. My GPS told me to take a right turn onto "Route 82d." You know what was off to my right? Nothing. A steep drop, through a bunch of trees, ending in the canyon maybe 50 feet down.

I was so shocked by it that I turned around, drove the route again, and captured it with my phone: link

Bottom line: Don't blindly trust your GPS.

Comment Patent != intention (Score 5, Informative) 478

This article makes a mistake that I've seen a hundred times before on Slashdot: confusing "the patent says..." with "the patentee intends to..."

I write software patents for a living. (I didn't write this one.) Let me describe how the patent drafting process goes.

A client comes to me with a simple invention - we'd like to do (A), (B), and (C) to achieve result (X). I talk to them at length about what (ABC) is, and what critically sets (ABC) apart from every similar example. I ask questions about how each of (A), (B), and (C) could be varied; what other elements (D), (E), and/or (F) could be added; and whether (ABC) could also be used for results (Y) or (Z).

And when I write up the patent application, EVERYTHING goes in there. (ABC) is described as the base invention, but all of the other material about (D), (E), (F), (X), (Y), and (Z) is also included as optional extensions or uses of (ABC).

Now, here's the critical thing: I haven't fully considered whether (D) is a desirable feature, or whether (Y) is a desirable result. My client may not even know, or may say, "we don't really intend to implement (D) or pursue (Y)." None of that is relevant. All that matters is that they are all logical, valid extensions of (ABC), so, typically, they all go in. Anything that could make the basic technique more valuable, make it appear more useful, or more fully distinguish (ABC) over known techniques is helpful to add to the specification.

I read this patent the same way. The basic invention is: "Use a camera to count and identify people interacting with a device." Now, you can't just stop there - you haven't said what that information might be used for, and the patent office typically rejects applications that look like, "The technique is: Generate some data." So the patent discloses several uses of that information. That doesn't mean that Microsoft has any interest in using that technique - only that it's logically achievable from the basic techniques.

Look, we all agree that technology is neutral, right? For example, DRM has been *used* for lots of obnoxious purposes (including limiting fair-use rights), but the basic technology of DRM is neither good nor bad - it just is. The same principle applies here.

Comment Re:Changes incoming (Score 4, Insightful) 148

> You can bet the farm that because of this all major online retailers have already started work to change their registration and ordering systems to implement a clickthrough rather than ticking a checkbox that says 'I agree'.

Ah, but many of those ToS'es include terms that are supposed to apply to activities that don't require registration or ordering - e.g., ToS restrictions on copying content to another site, linking to the site without permission, or suing the company due to information presented on the website.

So, coming next: Visiting ANY major site, even anonymously, will present you with a click-through ToS before you get ANYTHING from them. And to ensure that it remains legal and binding (especially as ToS frequently change), the selection will not be persisted in a cookie; you'll have to complete the ToS click-through at the start of every new session with the website.

Ugh. The web is about to become uglier.

Comment Re:So you've invalidated his patent and then him? (Score 1) 503

> The initial impression is that he's describing a computer with a TV-tuner that does X, Y, and Z that are common functions already present in common software packages and/or as built-in OS features.

Every invention is a combination of previously known parts. What makes it an invention is a combination in a new way that provides new advantages. The first airplane was a combination of a known type of engine and some known aerodynamic structures.

Certainly, having video editing software in a TiVo-like device is a desirable feature (a quick Google search turns up a lot of people asking about it around 2006... four years after this patent was filed), and that combination hadn't been made yet. That makes it a patentable invention.

> That's what it describes, at any level, so that's all I needed to read.

You're free to assert that "this patent should not have been issued based on my understanding of the prior art." (And then we can have a discussion about what constitutes prior art.)

But you're not free to assert incorrect statements about "what the patent covers," which you gleaned by failing to read the claims. That is factually incorrect, and blatantly disregards how the patent system works. Worse, it's a very common mistake at Slashdot - other people in this very same thread are arguing, "the only thing that matters in the patent is the abstract / brief summary; the claims are irrelevant."

If you really want to criticize a system, you should try to understand its basic operation first. Pretty simple stuff.

Comment Re:It's worse than that. (Score 1) 503

> Actually, that's not true. Yes, the claims are used in court. But the full description of the patent, not the claims, are the basis for the PTO's approval or rejection. The claims are simply checked for accuracy -- are they properly descriptive of what's contained in the main body of the patent or not.

That's just horribly wrong. It's practically the exact opposite of reality.

Patent prosecution focuses ALMOST COMPLETELY on the content of the claims - and specifically the independent claims. The entire rest of the patent application - the title, background section, brief summary, detailed description, figures, abstract - exists primarily to support the claims (in addition to satisfying a few other minimal requirements - the written description requirement, the enablement requirement, and the "best mode" requirement).

I talk to examiners at the Patent & Trademark Office several times a week. In most cases, our conversation is ONLY about the claims. And in many cases, I feel quite certain that the examiner has only read the claims - the examiner often has ignored or misunderstood the invention and the field of art. I have to explain the invention to them by reiterating the content of the specification, because they didn't read it; they just read the claims. And that's because the claims are really all that matters in the patent.

Comment Re:So you've invalidated his patent and then him? (Score 1) 503

> 1. You can be relatively certain from the summary.

No, you can't. You have to read the claims. The summary section has nothing to do with the scope of the patent, and is often very different from the claims. Many patents don't even HAVE a summary section, because it's not required.

If you HAD read the claims, you'd have come across this: "wherein the system controller module provides a user-selectable option of editing one or more sections of the one or more video files..." Does your TiVo allow you to edit sections of video files? No? Then the patent isn't "essentially a TiVo."

The takeaway message from this hopefully humbling experience is simple: FOR THE LOVE OF GOD, READ THE INDEPENDENT CLAIMS BEFORE YOU JUMP TO ANY CONCLUSION ABOUT WHAT A PATENT COVERS. Don't just read the title, or the abstract, or the background, or PART of the independent claim. READ THE WHOLE INDEPENDENT CLAIM. Slashdotters are horrible about this, and they get these types of patent issues wrong over and over and over again.

Comment Re:Lousy summary (Score 2) 62

> I'm sorry, but I shouldn't have to RTFA just to understand the key word in the summary ("memristor"). It's sloppy writing not to explain it.

Couple that with the title "U.S. patent officer." There's no such thing.

Blaise Mouttet is a former patent *examiner* for the U.S. Patent & Trademark Office. The USPTO currently employs over 6,000 patent examiners, each of whom is expected to be of "ordinary skill in the art." There's no indication that this individual's opinion is any more significant than that of any other electrical engineer.

Either it's an error, or the title was sexed up to fabricate an aura of expertise. Can anyone explain why this article made it to the front page of Slashdot?

Comment "Nearly all?" (Score 1) 515

> If there is a piece of software, hardware, a technique, etc., I want to know everything about it. On the contrary, nearly all of my coworkers resent it and refuse to even acknowledge it, let alone learn about it.

I doubt that they resent *your* interest in learning about new technology. There's nothing wrong with that in isolation, and it's difficult to imagine your colleagues resenting your enthusiasm by itself.

Also, you mention "nearly all of my coworkers" - that implies many people. In any social conflict of one vs. many, what are the odds that all of the many are wrong?

I'd like to suggest three alternative explanations that seem more plausible:

  • (1) Because of your interest, your colleagues must step up their own learning or risk looking inadequate by comparison. In other words, your interest is pushing them to put more into their jobs than before - probably without additional compensation or even recognition by their employer.
  • (2) Your efforts to bring fresh tech into the area are creating additional work (e.g., transitioning to new hardware or software to achieve the same task) and/or causing problems (e.g., switching to bleeding-edge technology before learning of its flaws, whereas tried-and-true methods would have worked fine).
  • (3) Your enthusiasm comes with some attitudes that the (many?) others find unpleasant.

The bad news is that none of these problems is simply "their resentment"; they are real effects of your behavior. The good news is that when your behavior is the problem, the solution is simply changing your behavior. It's fully within your control. You can evaluate the adverse effects of your actions and find alternative behaviors with fewer adverse effects.

Submission + - A new commerce model for the software market?

tambo writes: I've been thinking a lot recently about how software is sold. I've felt that the pricing of software has been pretty crazy for a few decades, but I haven't been able to pin down the basis of that feeling.

Today, the crux of the problem finally occurred to me: Standard models of capitalism are unsuitable in the absence of scarcity.

As we all learn in Economics 101, standard models of commerce are based on the laws of supply and demand — based on the idea of achieving an optimal price point for a particular article. Both of these curves are critically based on scarcity — the concept that the transaction is about a particular, physical article. The supply side is critically driven by the cost of creating that particular article — the labor and materials needed to manufacture it, the shipping of that product to a store or customer, and associated costs (the costs of building the factory, maintaining a retail store, etc.). And the demand side is critically driven by the idea that only (n) articles are available, and that the price should be set to the highest value that (n) customers are willing to pay. In a nutshell, the point is to maximize the utility of the scarce number of goods by providing them to the consumers who want them the most.
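
To make the Economics 101 framing concrete, here's a toy equilibrium calculation (the linear curves and numbers are my own made-up illustration, not real market data): the market clears at the price where the quantity supplied equals the quantity demanded.

```python
# Hypothetical linear supply and demand curves (illustrative numbers only):
#   supply:  Q_s = 10 + 2*P   (producers offer more at higher prices)
#   demand:  Q_d = 100 - 3*P  (consumers buy less at higher prices)
def q_supply(p):
    return 10 + 2 * p

def q_demand(p):
    return 100 - 3 * p

# Equilibrium: 10 + 2*P = 100 - 3*P  ->  5*P = 90  ->  P = 18, Q = 46
p_eq = 90 / 5
print(p_eq, q_supply(p_eq), q_demand(p_eq))  # 18.0 46.0 46.0
```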

None of these concepts apply to a market without scarcity... like software.

Software is increasingly deployed electronically. Consumer-level storage capacities continue to grow at an astounding rate (1 terabyte drives for $100?!), and bandwidth and server costs continue to drop. As a result, the per-item cost of deploying a piece of software to a customer is essentially zero. (Sure, deployment servers and bandwidth cost money, but it's trivial on a per-software-deployment basis — especially with economies of scale for consolidated libraries like Steam and the App Store.) Additionally, the traditional model is deeply economically classist: only rich people can afford the best software, and even though computers are increasingly affordable, poorer users can't actually afford the software. (Are the copyright lawsuits inherently economically classist? Do they disproportionately target poorer computer users, who may resort to piracy to obtain software that they can't afford? Should we regard this as a socioeconomic inequity?)

Viewed from a pure utility perspective, the market should not be organized to ration the deployment of the software, but to deploy the software to everyone who wants it.

This doesn't mean that supply and demand aren't relevant to this market. They are still relevant — they dictate choices about what products may be developed — but they need to be reformulated in the absence of scarcity.

Things that won't work:

* Free-as-in-beer software. Professional development and quality cost money. There has to be some sort of payment mechanism.

* Pay-what-you-want software. These models never work well, because there is no incentive for customers to pay more than the absolute minimum.

* Centrally sponsored and dictated software, where the owner of the distribution system declares what software is to be created next. Again, these models never work, because centrally predicting demand is always inaccurate, if not outright corrupt.

Here's my idea: A software repository with a flat monthly user fee, where developer royalties are apportioned based on popularity.

Here are the details:

* For a flat periodic fee, each customer gets total access to the entire software library. They can install whatever they want — no quotas, no holds barred. Their choices determine demand, and their demand is limited only by their time and interest. Better still, because there is no incremental cost for new software, there is no reason for a user *not* to try any particular product.

* The owner of the distribution platform takes a cut to cover its administrative expenses, but the vast majority of the collected funds go back to developers *in proportion with the popularity of the product*. That is, the subscription revenue collected from customers each month is apportioned to developers based on the use of their products for that month.
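
To illustrate the apportionment step, here's a minimal sketch (the function name, the 10% platform cut, and the usage-hours metric are my own assumptions for illustration, not part of any existing store): the platform takes its cut, and the remainder is split among developers in proportion to how heavily their products were used that month.

```python
# Split one month's subscription pot by usage share (illustrative sketch).
def apportion(monthly_revenue, platform_cut, usage_hours):
    """usage_hours: dict mapping developer -> hours their titles were used this month."""
    pot = monthly_revenue * (1.0 - platform_cut)   # what's left after the platform's cut
    total = sum(usage_hours.values())
    return {dev: pot * hours / total for dev, hours in usage_hours.items()}

# Example: $1,000,000 collected, 10% platform cut, three developers.
print(apportion(1_000_000, 0.10, {"dev_a": 600_000, "dev_b": 300_000, "dev_c": 100_000}))
# -> {'dev_a': 540000.0, 'dev_b': 270000.0, 'dev_c': 90000.0}
```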

Benefits of this system:

* High utility through unlimited deployment. Everyone who wants a piece of software gets it.

* Economic equality. Again, everyone who wants a piece of software gets it — regardless of their socioeconomic status.

* A tight coupling of developer payment to demand — and to *continued* demand. If users keep using the software, the developer keeps getting paid, even though the customers don't need to keep paying *specifically* for the software.

* A more efficient reward system that is not pointlessly limited by inapplicable concepts of scarcity. Instead of losing sales to customers who are priced out of the market, developers benefit from the broadest possible deployment.

* An end to piracy. There is no need to pirate the software when it is free for all subscribing customers. Demand can be better gauged than conventional sales numbers that do not reflect pirated deployments.

* An end to the secondhand software market. Used software markets are inefficient, but are necessary to offset the pointless dependency of the current market on scarcity. (Poorer users who can't afford the retail price can get the software more cheaply, but later, and only when retail-purchasing customers are done with it.) This inefficiency is eliminated in this market — every deployment directly benefits the developer.

* A focus on value. Developers do not need to choose projects based on sales figures. Rather, developers strive to produce software that will achieve the broadest deployment — i.e., the highest popularity, based on the best value presented to customers. Accordingly, this model kills many "features" that benefit the developer at the expense of the user, such as paid-for ad-based sponsorship, sales of private user data, paid-for downloadable content (which is another form of socioeconomic inequity), and unwarranted "leased software" schemes (which often extract ongoing money from a customer without a return of value).

The biggest problem is cheating: a central repository is a potential source of corruption, as we've seen in Apple's draconian control of the availability of products on the App Store. The central repository might also lie about demand in order to skew royalty payments and control the market or sell private user data (a la Facebook). This could be offset by making the central repository an unaffiliated, open organization — maybe a nonprofit. Even better, its administrative costs could be totally sponsored by a government, thereby eliminating the motivation to seek additional funds.

Thoughts?

Comment Re:MPEG-LA prevents non-commercial use (Score 3, Insightful) 310

> And submarine patents do exist; there's much FUD by MPEG-LA members being spread about the possibility of Vorbis infringing yet unknown patents.

That's not a "submarine patent," which has a very specific meaning in this field.

What you describe is just MPEG-LA spreading FUD. And the standard response here is: "patent app serial numbers or STFU." Either MPEG-LA can point specifically to the applications which (if they actually mature into patents) it believes are being infringed - or it can't, and its accusations of infringement are meritless. It's that simple.

> What we really need is compulsory licensing at some percentage of the per head sale price.

Even looking past the obvious question ("How does this point relate at all to anything in this thread?")... compulsory licensing suggestions have a common problem: who establishes the pricing, and based on what data and guidelines?

Usually, what people mean by these suggestions is: "Let's craft a body that's allowed to grant licenses to patented technologies for $cheap!" The problem with all such suggestions is that if you establish a body that (based on applicants' estimations) consistently underprices the value of those licenses, applicants will simply abandon the patent system - and keep their inventions as proprietary trade secrets. No more industry coalitions, no more industry standards like 802.11 and USB and HDMI... every company will make its own protocols, just like back in the 80's. Is that your notion of an ideal computing industry?

Comment Re:We have it. It's called the World Wide Web. (Score 1) 363

Maybe this "World Wide Web" technology will catch on some day.

Ah, but what did Facebook bring us over Geocities and personalized web pages?

  • Standardized profiles - information is put in the same places on everyone's profiles
  • Searchability - a very easy ability to find someone's profile by name, city, etc.
  • Centralized news feeds - this is Facebook's killer app, really: the presentation of a single page featuring status updates by all of your friends, and the ability to handle status responses in a threaded manner
  • Metadata - the ability to tag friends in photos (and have those photos aggregated into albums for each of your friends), to "like" statuses, to tag friends in notes and to republish others' notes, etc.
  • The ability for non-technical users to create an account and an entire page in an extremely easy-to-use and non-technical way

And... well, that's about it, really. Other innovations (Facebook apps, in particular) are neither new nor particularly interesting or useful.

In exchange for these features, Facebook has imposed a whole swath of misuses and abuses, including (but hardly limited to):

  • The privacy violations documented in the parent post
  • Targeted advertising based on private information ("hey, I see you've written an email about a trip to Boston, here's an offer for a rental car while you're there...")
  • Taking control of users' data (the inability to archive your profile, the heavy compulsion to use Facebook's (terrible!) email system, etc.)
  • Extreme control over the arrangement of a user profile, either by the user or by a visitor... e.g., cluttering up the page with Facebook's advertising widgets and non-removable apps

In short - the social network and information delivery advantages that Facebook offers are starting to be outweighed by the costs that Facebook extracts as the owner of that social network.

Yes, the author is right - we need a free, open, non-centralized alternative to Facebook.

Of course, it can't be a return to the Geocities model. We do need the advantages of Facebook - discoverability, standardization of information, message delivery, and a clean and easily prepared presentation. But requiring everyone to buy web space, learn HTML or a CMS, and design and deploy their own profiles is just not a viable solution.

So what do we need? I propose the following:

  • A standardized personal information representation - probably an XML schema that holds all of a user's personal information in a standardized way. With the right renderer, the presentation of this information can end up looking exactly like Facebook's. But even better, the viewer of the information - i.e., the visitor of the user's profile - has complete control over the layout of this information, and can render it in a more pleasing way if desired. Better still, the standardization promotes automation - e.g., the automated synchronization of each user's contact information with your contact info database. (No more "my cell number changed, please update your records" email messages!)
  • A centralized database with pointers to user profiles (e.g., to the XML files of various users posted at various points on the net.) This really needs to be *one* registry. However, it serves a very specific and narrow purpose: if you want to find the guy named Joe from Ypsilanti who you met at an event last week, it needs to point you to his representation (if it's public.)
  • A security mechanism. Look, this one's much easier than anyone thinks. We've had RSA for over 20 years. Identifying certificates, and techniques for encrypting select pieces of information for access only by specified individuals, are quite well-conceived. We just need automated protocols that incorporate these techniques. If done well, this model vastly surpasses Facebook's security models - you have exquisite privacy control over *every* piece of information: every post, every piece of identifying information - and you can revoke it from anyone at any time.
  • An information delivery mechanism. This one's also a little difficult. Sure, every individual can have an RSS feed containing all of his or her statuses, and we can probably weave posts together in a "post X is a response to status Y" model. But if you have 500 friends, does your computer have to retrieve all 500 news feeds from each of your friends' representations? That's awfully inefficient. We will need a push mechanism: when you post a message, like a photo, etc., the information needs to be pushed into message boxes of all of your friends. It's definitely achievable in a standardized and decentralized manner. Hell, we could just fall back on email, with automated parsing and weaving to generate your news feed based on the received messages.
  • Easy deployment. This is also easy - just sign up for a service that will help you create your representation and then host it for you.
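
To make the first and third bullets a bit more concrete, here's a rough Python sketch (the element names, the choice of RSA-OAEP, and the use of the 'cryptography' library are my own illustrative assumptions, not a proposed standard): a tiny XML profile in which one field is encrypted so that only a designated friend holding the matching private key can read it.

```python
import base64
import xml.etree.ElementTree as ET
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The friend's keypair; in practice you'd hold only their public certificate.
friend_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# A minimal "standardized profile" document.
profile = ET.Element("profile", id="joe.from.ypsilanti")
ET.SubElement(profile, "name").text = "Joe Example"
ET.SubElement(profile, "city").text = "Ypsilanti"

# Encrypt the cell number with the friend's public key; everyone else sees only ciphertext.
cipher = friend_key.public_key().encrypt(b"+1-555-0100", oaep)
ET.SubElement(profile, "cell", recipient="friend-cert-01").text = base64.b64encode(cipher).decode()

print(ET.tostring(profile, encoding="unicode"))

# Only the holder of the matching private key can recover the field.
print(friend_key.decrypt(cipher, oaep))  # b'+1-555-0100'
```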

In other words - we can ditch Facebook for a decentralized model. It will be complicated for a while, and compatibility issues will definitely arise - but the end product can easily have all of Facebook's advantages, and many more, with none of its intrusive and abusive practices.

All we need is the motivation. And Facebook is giving it to us with their terrible business choices.

Comment Re:What's an "industry-recognized standard"? (Score 4, Interesting) 310

...any patent infringement claims against H.264 must be made known within 6 months of the passage of this law.

I don't think that's what the OP means. Here's what he wrote:

> any patents that contribute to an industry-recognized standard were unenforceable in the application of that standard.

I think he means that any patents contributed to an industry standard consortium (like the WiFi Alliance) can't be enforceable. You're suggesting something about patents not contributed to the standards body being enforced against implementations of the technology that are authorized by the standards body. Or something.

Honestly, I'm not entirely sure what either of you mean, or why. And IAAL - in fact, I practice in this area every day.

Is this about making sure that technologies issuing from the standards body are freely available for use by anyone? That's the whole point of the patents owned by the body - to ensure that implementations follow the guidelines of the standards body (particularly about compatibility.) So you're lobbying to allow people to implement standardized technologies in non-compatible ways - i.e., in favor of "embrace, extend, extinguish?" I don't think anyone wants that.

Or maybe you're arguing that if a company has technology and patents verging on the subject matter of an industry standard - e.g., a technology competing with WiFi - but chooses to keep it proprietary, then the company can't assert its patents against implementations issuing from the standards body. That's also a bad idea - should we really force the entire industry onto one standard? Doesn't that deter the advancement of technology through the development of alternative standards that might be better? Bluetooth was first conceived as a potential competitor for WiFi, but it has its own niche and is widely implemented for headsets and such. Under this type of rule change, Bluetooth would have been scrapped as soon as WiFi took hold.

As an aside - the "submarine patents" cited by the author of this post haven't existed for decades, because (1) the patent term calculation was changed to be measured not from the date of issue, but from the date of filing, and (2) most patent applications are published at 18 months.

This is a complex field. It's easy to get confused. But the field suffers from a wide range of folks who don't understand it, and yet still want to "fix" it. Hence, this post, and many like it on Slashdot and elsewhere.

Comment Re:Why Artificial Intelligence may never exist (Score 1) 483

"As soon as something becomes routinely doable by a computer, it is no longer considered a sign of intelligence; it's a mere mechanical activity."

I don't think that's a fair characterization.

"Intelligence" isn't just being able to solve a particular problem, regardless of its difficulty. If you throw the smartest chess algorithm in the world at a map, it won't be able to tell you how to get from point A to point B.

Obviously, "intelligence" involves many of the meta-qualities of problem-solving: inductive logic and generalization, deductive logic, the development and use of heuristics, the recognition of general problems and solutions in different domains - flexibility, spontaneity, personality, predictiveness, humor, semantic language skills, self-awareness, curiosity, intellectual growth, the development of goals...

We might be able to develop an algorithm to tackle one, or even a *few*, of these skills - but only in a narrow domain. Even our best language translators typically exhibit very little linguistic comprehension, and then only in a specific language or topical domain. Yet, the average five-year-old child demonstrates ALL of these capabilities.

Even at a basic level, these skills are what we would consider "intelligence." When we have a machine that demonstrates even a very rudimentary set of these capabilities, it will be considered intelligent. The rest will just be refinement and scaling up.

- David Stein
