And quite often the antibiotics used in farm animals aren't considered safe for humans.
See also: http://www.sciencedaily.com/releases/2013/05/130513095030.htm
I also wonder about how the concentrations of arsenic vary throughout the chicken - e.g. if you make pate from the chicken livers do you get a higher or lower dose? I suspect significantly higher.
Some people may also be allergic to the antibiotics or other stuff used and not actually allergic to the meat/vegetable itself.
Would different Strong AIs on the same hardware still be considered the same computer?
A derived finite-state machine (FSM) is obvious if an isomorphic finite-state machine already exists, even if no one has ever built the derived FSM before.
What do you mean by derived as opposed to isomorphic? The two terms are not exclusive.
So, for example, if some logic lithographed onto a silicon die or downloaded as a netlist into an FPGA causes a computer to paint the screen in red paisley and that's never been done before, it's new... but it might be nonobvious and in fact patent eligible.
It might, or it might not, if, for example, logic for causing the computer to paint the screen in blue paisley has been done before.
The problem is when modes of implementation of the FSM get compartmentalized into logic gates versus sequential imperative instructions, because patentability of FSMs in logic circuits has been established for decades, while patentability of FSMs in sequential imperative instructions has not.
"Has not been established" or "has been established as not patentable"? Also, again, what do you mean by logic gates vs. sequential imperative instructions, because the two are not necessarily exclusive.
And so, you get cases like CLS or Bilski where the judges want to invalidate the patent because it's stupidly obvious, but they have no evidence on the record that clearly establishes what an apparatus or machine is and is not... so they declare it an abstract idea and invalid.
Yes, they do - the patent claims begin with a preamble that states whether they recite a machine or apparatus, or a method. That's presumptively conclusive evidence.
For example, Intel patents not only the lithography process for the silicon dies of x86 processors, but also, in separate patents, the logic circuit therein. That logic circuit can be implemented in a sufficiently large FPGA, which has software-like characteristics: loading a netlist into an FPGA strongly resembles loading a different sequential imperative machine-code program into a general-purpose imperative-machine-code processor. The key difference is the lack of sequential imperative instructions in the FPGA or lithographed IC (ignoring the sequential pulses of timing waves that drive concurrent gate-flipping as the computation progresses in the FPGA or lithographed IC).
"strongly resemble", "ignoring"... Although useful for a doctoral thesis, glossing over steps is usually not acceptable in a legal conclusion.
Merely shoving some algorithm that is done by hand onto a computer is nothing novel. This isn't to say a particularly clever and non-intuitive software implementation couldn't be patented. But just doing it in software is not novel; it is obvious.
Sure, if the algorithm is already done by hand. But what if you come up with some novel, nonobvious algorithm, like a way to calculate interstellar warp coordinates? Then doing it either by hand or on a computer is novel... The question is whether it would still be patent eligible or not.
Under Bilski and CLS and current jurisprudence, a claim just to the algorithm would not be patentable... because someone could do it by hand. But if the claim had some additional limitations that specifically recited the computer, such that while you could do the algorithm by hand, you couldn't do the algorithm in the claim by hand, then it would be patentable (e.g. say it included a step of transmitting the data to a cloud service for distributed processing - that particular step may not be novel, but remember that the rest of the claim includes your novel, nonobvious algorithm).
I keep recommending these rules:
1. If it's already being done in the real world, doing it on a computer is not patentable per se.
That's currently the rule: computers are known, and if your method is known, then simply doing it on a computer is not patentable.
However, what if you have to do additional steps to make it work on a computer? For example, in the real world, we can look at someone and easily recognize their face as belonging to a friend... but machine vision and facial recognition is really, really difficult. There's a whole bunch of processing that has to be done, because computers don't inherently recognize faces. So, while the broad concept of "recognizing a face, on a computer" wouldn't be patentable, "detecting a first location corresponding to a first eye; identifying a second location corresponding to a second eye; determining an approximate facial width based on the inter-eye distance; identifying a mouth shape in a third location; etc., etc.," would be.
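Those claim steps can be sketched as code. This is a toy illustration, not a real vision pipeline: the landmark coordinates are assumed to come from hypothetical upstream detectors, and the 2.5x width ratio is a made-up anthropometric stand-in.

```python
import math

def estimate_face_geometry(left_eye, right_eye, mouth_center):
    """Given detected landmark coordinates (supplied by hypothetical
    upstream detectors), derive the quantities a narrower claim might
    recite: inter-eye distance, an approximate facial width, and the
    mouth's vertical offset below the eye line."""
    inter_eye = math.dist(left_eye, right_eye)
    # Made-up anthropometric assumption: facial width ~ 2.5x inter-eye distance
    facial_width = 2.5 * inter_eye
    eye_midpoint = ((left_eye[0] + right_eye[0]) / 2,
                    (left_eye[1] + right_eye[1]) / 2)
    mouth_drop = mouth_center[1] - eye_midpoint[1]
    return {"inter_eye": inter_eye,
            "facial_width": facial_width,
            "mouth_drop": mouth_drop}
```

The point of the sketch is that none of these steps exist in the broad concept of "recognizing a face"; they only exist in the computerized method.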
2. Doing a simulation of a real-world item is similarly not patentable per se.
Again, same as above - if the real world item is known, then simply simulating it isn't patentable... unless you have to do other things, or make approximations that don't exist in the real world. For example, the real world has a sky, and clouds, and changes smoothly from dark blue to light blue as you get near the sun... but doing volumetric lighting simulations and simulated Rayleigh and Mie scattering in a way that doesn't kill your GPU is really difficult. Why shouldn't a narrower claim to those be patentable?
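For a taste of the underlying math: the core Rayleigh relation is tiny, and the hard part alluded to above is evaluating it per pixel along volumetric light paths in real time. A minimal sketch of just the 1/wavelength^4 dependence (the 550 nm reference wavelength is an arbitrary choice):

```python
def rayleigh_relative(wavelength_nm, ref_nm=550.0):
    """Relative Rayleigh scattering intensity versus a reference
    wavelength. Intensity scales as 1/wavelength^4, which is why
    the sky is blue: shorter wavelengths scatter far more."""
    return (ref_nm / wavelength_nm) ** 4

# Blue light (~450 nm) scatters roughly 4.4x more than red (~650 nm).
blue_vs_red = rayleigh_relative(450.0) / rayleigh_relative(650.0)
```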
3. Doing something wirelessly formerly done over a network, or remotely formerly done locally, or on a lil' phone or tablet or tricorder, is also not patentable per se.
And again, that's how it works. You can't get a claim to "transmitting data over a network, wherein the network is wireless" but you can get one directed to some of the steps you have to do with wireless communications that you don't have to do with wired communications, like the additional signal/noise processing, frequency heterodyning, burst interference avoidance, spread spectrum broadcasting, etc.
4. This is not to say particularly clever implementations (the "machine" part of "virtual machine") could not be patented.
Or stuff that you only have to do with virtual machines, like dynamically provisioning them based on load, or having dozens of virtual machines sharing a single hardware network interface and single memory bus, and transparently distributing packets to them in such a way that each machine doesn't realize there are others using the card.
There are no "good" companies out there; that's the thing you don't seem to understand. There aren't any companies which are going to give you a generous raise each year (at least enough to match what you'd make at another job elsewhere, i.e., keeping up with the "market rate"); that's just not the way companies work any more. Companies treat workers like dirt because they're shortsighted, and because a fair number of employees (like you, it seems) put up with it because they're afraid of losing their jobs, and are willing to work 60-hour weeks for years on end just so they can be seen as "loyal", even though company management doesn't give a shit and will sack you as soon as it helps them make this quarter's financials look better.
A killer stereo and huge TV don't cost anything, BTW. You can get a huge TV now for under $1000; to someone making 6 figures, that's really not a lot of money. Nice stereos cost quite a bit less than that these days. And it's not like you're going to buy a new one of these every year. If you want to point at things which Americans usually waste a lot of money on, it's 1) car (with giant car payments), 2) cable/satellite TV (worse if you get the stupid sports packages), 3) alcohol (not really expensive from a store, but at bars and restaurants it's insanely overpriced). Living in a "ritzy" area is not a waste of money, because the alternative is living in a ghetto and getting shot at or robbed on a regular basis. Thanks to the housing boom, a decent house still costs $250k-500k in many cities, even after the housing bust (prices went down, but not that much; yes, decent houses are much cheaper in other places, but those are generally non-tech cities where Slashdotters are not going to have an abundance of jobs to choose from, and where salaries are consequently far, far less, even less than half as much).
As for your cousin, what was his field and his specialty, and where did he live? He was doing something seriously wrong if it took him 10 years to find steady work again. Was he one of those people who became a "web developer" in the dot-com days, with no degree or credentials whatsoever? If so, well no wonder he couldn't find any work after the bubble popped. People with degrees and real credentials and experience haven't had that problem. Moreover, was he one of those people who absolutely refused to move from whatever little city they (and their extended family) have lived in their whole lives? That's a career-killer too. You can't be a professional and be unwilling to move to where the work is, and do well. If you're dead-set on living in a certain place, then you need to forgo education altogether, and just get a job out of high school doing something that's in high demand in your local area (like working as a grocery cashier for minimum wage), or perhaps get an education in something that there's plenty of jobs in your area for (like medical technicians; every little city has several hospitals and lots of medical clinics). Don't bother getting a college degree if you don't want to move to where the work is.
Anyway, sorry about the asides, but the point is, if you have a good education and experience in the software field, and you're willing to move to where the work is, there's plenty of jobs open for software developers, regardless of the economy. I got laid off in 2009 when the economy sucked (along with my entire team; company decided to toss out the whole department because it didn't think its profit margin was high enough, even though it had customers lined up with guaranteed high volumes for years), and I had another job in a month at a 20% increase in salary. Combined with the 4-month-equivalent severance package, it was a pretty sweet deal. And I'm no rock-star performer either. All that stuff you read about high unemployment and no jobs doesn't apply to software people.
The input/output from the antenna is patentable, and presumably it was patented. The bus that transfers the i/o from the antenna to the processor is patentable, and again, it was patented.
The software that manipulates those i/o numbers is the algorithm under discussion - and should not be patentable.
Alone, yes, but why not together? Since you acknowledge that the antenna is patentable and the bus is patentable, then why can't I claim a device comprising an antenna that receives a signal, a bus that transports the signal, and a processor that executes an algorithm to convert the signal into geographic coordinates? I'm sure we can all agree that devices are patentable, yes?
Now let's be clear that I'm not talking about an FPGA! In an FPGA, if you "reload" the software you will change the machine, because in essence the "software" for an FPGA is like the internal gear system: you can configure it in 1000 different ways to do different tasks. One day it can be a conveyor belt and the next it will be a bottle cap remover. So before someone comes in here and blows a gasket at me, I fully accept the fact that an FPGA is very different than a computer.
Ah, but an FPGA can be simulated in software. Therefore, if you reload the software in the simulator, you change the virtual machine. How is the fact that one is emulated and the other is physical relevant?
The way I look at it, software certainly adds a new state machine into the picture. Whether that qualifies as a machine for patent purposes is a separate question, and there are legitimate arguments on both sides, but whichever way you decide, an FPGA should play by the same rules as software.
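To make the "same state machine, different substrate" point concrete, here's a minimal FSM as a software transition table; the example (recognizing bit strings that contain "11") is made up, and the identical table could just as well be loaded into an FPGA's lookup tables:

```python
# A tiny finite-state machine as a transition table. Burned into an
# FPGA's lookup tables, the state machine would be identical; only
# the substrate differs.
TRANSITIONS = {
    ("start",  "0"): "start",
    ("start",  "1"): "one",
    ("one",    "0"): "start",
    ("one",    "1"): "accept",
    ("accept", "0"): "accept",
    ("accept", "1"): "accept",
}

def run_fsm(bits, state="start"):
    """Return True if the bit string contains '11'."""
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return state == "accept"
```

Swap in a different table and you have a different state machine on the same hardware, which is exactly the situation the FPGA-vs-software argument turns on.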
Bear in mind also that qualifying as a machine simply distinguishes it from being an abstract idea... it's not enough to make something patentable on its own: it still has to be new and nonobvious, but those are different questions, with different tests - the most revolutionary, novel, nonobvious abstract idea that's ever been invented is still unpatentable, not because it's not new, but because it's not tied to a machine. Software that's tied to a machine only meets that first hurdle - it still needs to jump over the others.
only specific, newly invented machines are supposed to be patentable
And if the machine is implemented in software, the only thing that makes it novel and different from a bazillion other machines with the same physical implementation is the algorithms. You're not supposed to be able to patent those.
Why not? And no, I'm not asking "which Supreme Court decision said that you can't patent algorithms," I'm asking why they said that. It's not an arbitrary rule - there was reasoning behind it, and it's the reasoning that you need to look at when determining whether a machine executing novel and nonobvious software is patentable or not, not the one sentence of dicta.
So then, since the Jacquard loom used a different set of cards to 'program' each different cloth pattern, it qualifies for millions of separate patents.
No, because, after the first one, the others would likely be obvious in view of the first one. Nonobviousness, however, is a separate requirement from whether something is drawn to patent-eligible subject matter. And the judge's point here was that a programmed computer is a machine, and machines are patent eligible. They still have to be new and nonobvious and clearly described, but they're not immediately disqualified, regardless of how new or nonobvious they are, the way pure software is.
We're talking about technology almost 300 years old. I'm thinking some of those judges are not only technically inept, but are in fact Luddites in black robes.
If you came up with a brand new technology, not even 300 ms old, that was just a series of mental steps - say, a method for calculating time travel coordinates, or the like - it could be the newest, most revolutionary technology in the world... and still not be eligible for a patent, because it's just mental steps. But a machine is different. Embody your brand new method in a machine, and it may be patent eligible. That's what the judge was trying to say.
As a shape & color, red paisley would get a 14-year design patent, not a 20-year utility patent. The topic here is utility patents, which cover the "useful arts". Design patents, being outside the "useful arts", cover recognizable shapes & colors: trade-dress subject matter, outside of what can be trademarked.
Since I was citing and discussing the statutes involved, any reasonable person would assume that I was using an analogy to simplify the discussion, rather than discussing something unrelated having to do with design patents. And then there's you.
Now, would you like to go back and add something to the discussion, rather than just trying to be condescending and failing?
"So, for example, a patent claim of triangulating a position given three signals is not patentable, because we could do that on paper. But a patent claim that includes receiving those signals from a GPS satellite with an antenna is patentable. "
Which just shows the absurdity of the patent regime. Your argument that a human being could not do that is worse than wrong; it's entirely ignorant. There is no way to triangulate signals without having an antenna involved, and the type of antenna is a purely functional choice based on the situation. Absolutely anything that a computer can do, a person can do.
Yes, that is entirely ignorant of you: you acknowledge that you need a physical antenna to triangulate a signal, but then suggest that a person can do it without using any hardware. You're being inconsistent.
Allow me to help, since the prior post went over your head:
A person or a computer can perform calculations on input values and output processed values.
A person cannot necessarily receive those values directly regardless of form: people don't have antennas, don't have A/D converters, don't have signal amplifiers, etc. Similarly, while a person can output a set of values, without some additional hardware, a person cannot transmit a packet, close a switch, render a frame on a display, etc. That's the distinction - just performing calculations isn't patentable; doing something with them or doing something to get them is.
Lee added that the Scripps Hackers eventually used Wget to find and download "the Companies' confidential files." (Wget was the same tool used by Facebook's Mark Zuckerberg in the film The Social Network to collect student photos from various Harvard University directories.) The rest of the letter pretty much blamed the "Scripps Hackers" for the cost of breach notifications, demanded Scripps hand over all evidence as well as the identity and intentions of the hackers, before warning that Scripps will be sued.
Folks, there was a big bad security breach. Now, *adjusts his massive belt buckle* we're investigating this like we would any other serious crime. And right now we're just trying to identify weapons used in this heinous attack. Now, we've discovered that the hackers were using a very vicious mechanism in this attack. In a murder, you might find a revolver used to put two bullets into the back of a poor old defenseless lady's skull in order to get all her coupons and a couple of Indian head pennies out of her purse. Or perhaps in a pedophile case, you'll find the "secret candy" that was used to lure the children into a white panel van with painted over windows.
*expels a long tortured sigh*
Well, I gotta say, in my thirty years on the force, I wish we were only dealing with something like that today, honest to God Almighty I really do. Instead this artifact was discovered at the scene of the crime. Now, I'm not asking you to understand that -- hell, I'd warn you against even openin' up your browser to the devil's toolbox. But let me, a trained law enforcement professional, take the time to explain the gruesome evidence just one HTTP request away from you and your chillun'. The page is black. Black as a moonless night sky when raptors swoop from the murky inky nothing to take your kids and livestock back up with them silently. On it is a bunch of white text that makes no sense to any God fearun' man on this here Earth. That's what they call a "man page" probably because it is the ultimate culmination of man's sin and lo and behold it displays a guide to exact torture on innocent web servers across this great and holy internet.
Even if you want to use this "man page" for WGET to learn how to use Satan's server scythe, you would have to read through almost twenty pages of incomprehensible technobabble like what that kraut over in Cali -- the one who took his wife's life -- spoke. And if you want to just see an example, it's not at the top! No, why, it's all the way down at the bottom. For this one, they don't even have examples. Just enough options to kill a man. Probably gave Steve Jobs cancer, they never proved all these options in these pages didn't. Buried in the mud of a thousand evils lie more evils.
And why, oh why are we even wasting taxpayer money on these Scripps Journos? Who needs a trial when the evidence is in the tools they used? Folks, I think it's time we WGET one last thing, I'll WGET a rope and you WGET your pitchforks and torches
What the judges need to understand is that, fundamentally, people are general-purpose machines. Without any training at all, people are not capable of solving many problems; however, people can solve all sorts of problems based on what you teach them. Sure, you aren't going to be able to get a bushman in the Serengeti to build the Apollo spacecraft, but the potential is there given enough training. If that same bushman were to go to school and eventually graduate from college, they would be just as capable as anybody else. So how is receiving training through college or a technical school any different from a computer receiving a new set of instructions on how to solve a particular problem?
Computers only run algorithms (which aren't supposed to be patentable).
Except that 35 USC 101 explicitly says that processes are patentable, and a process is an algorithm. You have to go a bit deeper into understanding why the Supreme Court said that algorithms weren't patentable to understand the distinction. Specifically, they wanted to draw a line between thought and action: because one of the remedies for patent infringement is an injunction, you have to be able to order people to stop infringing. And while you can tell someone to stop performing the process for curing rubber, for example, you can't tell them to stop thinking of the equation for determining when to remove the rubber from the oven.
One way of drawing this line...
They follow a set of instructions step by step and can't do anything that you and I can't do with a pencil and piece of paper. (They just can do it a lot faster.)
... is to require that the claimed process has steps that you or I can't do with a pencil and piece of paper, no matter how slowly. So, for example, a patent claim of triangulating a position given three signals is not patentable, because we could do that on paper. But a patent claim that includes receiving those signals from a GPS satellite with an antenna is patentable. Think of it as a system - there's a black box that takes an input of some numbers and outputs some other numbers. You or I could do that as well as a computer (albeit slower), and it's unpatentable. But add on additional hardware that provides that input, or additional hardware that reacts to that output, and it's no longer something that we can do solely in our heads or on paper - you now need to perform an action of getting signals from something else, or closing a switch elsewhere, or whatnot. And so that's patentable, and you can be ordered not to take those physical steps that would infringe.
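The "could do it on paper" half of that example is just algebra. A minimal sketch with made-up beacon coordinates and no antenna anywhere in sight, solving for a 2D position given three known distances:

```python
import math

def trilaterate(b1, r1, b2, r2, b3, r3):
    """Solve for the 2D point at distance r_i from each beacon b_i.

    Subtracting the circle equations pairwise yields two linear
    equations in (x, y), which we solve directly. Nothing here that
    couldn't be done with pencil and paper, just slower."""
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    # Linear system A * [x, y] = c from (circle1 - circle2) and (circle2 - circle3)
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a11 * a22 - a12 * a21
    x = (c1 * a22 - c2 * a12) / det
    y = (a11 * c2 - a21 * c1) / det
    return x, y
```

The function is the unpatentable black box: numbers in, numbers out. It's the antenna feeding it the r values, or the hardware acting on the result, that supplies the physical steps.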
35 USC 101 requires that a claimed invention be directed to patent eligible subject matter: a process, machine, article of manufacture, or composition of matter (it also requires that the invention is useful). The courts have decided that these categories are very broad, but don't include "abstract ideas" (though that term is never defined), laws of nature, or natural phenomena.
If the claimed invention passes that low threshold for 101, 35 USC 102 requires that the invention must be new or novel. That's a higher bar, but not a huge one - if I'm the first person to make a red car and claim that in a patent application, that's new, even if blue cars existed.
If the claimed invention passes that threshold, then 35 USC 103 requires that the invention must be nonobvious. The red car is obvious if blue cars existed, even if no one has ever made a red car before.
So, for example, if some piece of software causes a computer to paint the screen in red paisley and that's never been done before, it's new... but it's obvious, and still unpatentable.
The problem is when these get confused or conflated into a single requirement, because obviousness and novelty require evidence, while subject matter eligibility does not. And so, you get cases like CLS or Bilski where the judges want to invalidate the patent because it's stupidly obvious, but they have no evidence on the record... so they declare it an abstract idea and invalid. In particular, here, the judges started carving out everything from the patent claim that made it non-abstract, declaring it irrelevant, until the only thing left was abstract. The outcome may be the right one, but it's for the wrong reason - it's like finding a murderer guilty because you hate his face. Maybe he was actually the murderer, but you're finding him guilty for the wrong reason.
Book Review: The Plateau Effect: Getting From Stuck To Success is identical to this Amazon review.
Book Review: The Death of the Internet is identical to this Amazon review.
Book Review: Everyday Cryptography is identical to this Amazon review.
Book Review: Liars and Outliers is identical to this Amazon Review.
It just keeps going