Well, take a process like quenching steel compared to regular steel: you still have the same basic ingredients, and you heat it up and cool it down, but the rapid quenching brings out new and novel properties in the steel. That should surely qualify for a patent; it's not as if the regular smith holds a patent on everything his smithy could do but that he has never done or even thought of doing. In the same way, it would be absurd to patent the Turing-complete machine and claim all software is merely an application of machine states. On the other end of the spectrum, if you add 0.01% table salt and claim your quenched steel+salt isn't infringing because the patent only says "steel", the courts will laugh at your attempt to trivially work around it. Most software is like that: trivial changes of inputs, instructions, ordering and so on are "new" but not in any sense novel, while software with functionality that's never been done before sounds novel and non-obvious to me.
Is there a value to sending people to school beyond testable knowledge? That's a big question.
No, because the obvious answer is yes. But do you have to lump it together with tests that measure specific knowledge? I've had years of regular full-time on-site university education; if what I need is to prove my ability in a specific topic, that should be possible without requiring a meager and largely irrelevant addition to my general interpersonal skills, particularly if my available hours, location or other duties make attendance impractical or impossible. At least for anything that can reasonably be assessed through exams and exercises. I don't really see how we could let loose doctors and lawyers without real-world experience with real patients and clients, which necessitates a controlled training program. Most fields are not like that, though: if it's all done on paper, on a computer or with inanimate objects, you should be able to read your way to a degree in most STEM fields.
To even form this structure in RAM would require, what, 40-50 more Moore's Law iterations? Which I doubt is even physically possible.
As the highly abused saying goes, the proof is in the pudding, in this case the grey matter. If the brain can achieve this processing and storage capacity, interconnectivity and power budget, then surely so can we; if we don't, it's because our silicon-based technology is inefficient and inferior compared to the organic "technology" of the brain. Using custom silicon that mimics the brain, rather than trying to emulate it on a von Neumann architecture, it should be doable in the same realm as supercomputers. I saw one research paper estimating, based on the cost of emulating neurons one by one, that it should eventually be possible at the size of a car with a 10 kW power supply. Still, the real issue is that such machines aren't usable for anything but research, since we really, really don't have a programming model for a system like this.
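For scale, here's a rough back-of-envelope sketch in Python. The neuron and synapse counts are commonly cited round numbers, and the bytes-per-synapse figure is my own assumption for illustration, not from the paper mentioned above:

```python
import math

# Back-of-envelope: RAM needed to hold a brain-scale connectivity graph.
# All figures are rough round-number assumptions for illustration only.
NEURONS = 86e9            # commonly cited human neuron count
SYNAPSES_PER_NEURON = 1e4 # rough average
BYTES_PER_SYNAPSE = 8     # assume one 8-byte pointer/weight per synapse

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15

# How many capacity doublings from a 16 GB machine would that take?
doublings = math.log2(total_bytes / 16e9)
print(f"~{petabytes:.0f} PB, ~{doublings:.0f} doublings from 16 GB")
```

On raw capacity alone this lands in the single-digit-petabyte range, well within supercomputer territory; the hard parts are the interconnectivity and the access patterns, not the storage itself.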
1) Tor is not a peer-to-peer approach. It does not remove the central server; it only keeps the individual routers unaware of a packet's contents. You still have to serve replies from a central server subject to a jurisdiction (the problem we were pretending we could solve). Tor works if you wish to obscure who wants what, but it is still an overlay on the client-server paradigm.
Yes, but good luck finding out what that jurisdiction is; at least they don't seem to have much luck locating and shutting down hidden services. If all you really need is a name that stays constant and doesn't have to be "easy", the onion system is just fine: you own an address by virtue of owning the private key, and they all look like ebiueabv35rwas.onion. Unlike an IP address, you can move the key around and run your site from any box you want, which is the most essential part of DNS. You probably won't type it, but if you find it on some web page and bookmark it, you'll have it.
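The "you own it by owning the key" part is literal: a (v2-style) onion name is derived directly from a hash of the service's public key. A minimal sketch, using placeholder bytes where a real DER-encoded RSA key would go:

```python
import base64
import hashlib

def onion_address(pubkey_der: bytes) -> str:
    """v2-style hidden service name: base32 of the first 80 bits
    (10 bytes) of SHA-1 over the DER-encoded public key."""
    digest = hashlib.sha1(pubkey_der).digest()[:10]
    return base64.b32encode(digest).decode("ascii").lower() + ".onion"

# Placeholder bytes standing in for a real DER-encoded RSA key:
addr = onion_address(b"not-a-real-key")
print(addr)  # sixteen base32 characters followed by ".onion"
```

Since the name is a hash of the key, anyone who can prove possession of the key can serve under that name from any machine, which is exactly the property described above.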
Robots will be so good at complex tasks that they will find it overkill to use one for simple tasks. They'll simply say, why waste a robot on this task when we have all of these stupid humans who are willing to do it for basically nothing. Half the quality at an eighth the price. Can't beat that.
Yeah, right. A robot that smart at complex tasks will use lesser computers and robots as tools, the way we use them as tools. Do you think companies will deal with hiring and training employees, with all their quirks and unreliability, when they can put in a purchase order for a $10 sensor and a $2 microcontroller and have the complex robot tell it how to do the job? Not bloody likely. Most of the reason computers suck at what they do is that we suck at telling them what to do; I expect a robot to suck equally badly at telling a human what to do, while being excellent at simulating what a cheap piece of hardware could do and transferring that control software with perfect accuracy in no time. Even the Matrix plot, where we end up as living potato batteries, is more plausible than robots needing us for simple tasks. We have a baseline cost of living; computers don't.
and the defendant in an infringement suit can show proof of actual damages to mitigate statutory damages.
As far as I know, that is not really true: the copyright holder can choose between actual and statutory damages, and while actual damages might affect where you land on the $750-$150,000 statutory scale, you can't prove your way below the statutory minimum. And that is pretty damn steep for a one-dollar tune or a porn flick. You can knock it down to $200 if you can prove you're an innocent infringer, but then the burden of proof is on you.
Yes, it's more like this: imagine you took a sack of marbles and spread them ever thinner; you'd expect the distance between any two marbles to also grow without bound. The proof shows primes are not like this: no matter how thinly they're spread, they keep clustering in pairs less than 70 million apart. The conjecture is that you'll always find another pair just 2 apart (like 5 and 7, or 11 and 13) no matter how big the numbers get.
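For small numbers you can see those gap-2 pairs directly. A quick sketch with a sieve of Eratosthenes:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

ps = primes_up_to(100)
# Consecutive primes exactly 2 apart:
twins = [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]
print(twins)
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```

The twin prime conjecture says this list never runs dry as you raise the bound; the 70-million result proves the weaker statement that *some* fixed gap recurs forever.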
I'm such a cheap drunk that I voluntarily observe a limit of zero when I'm driving. I remember one night when I was tired and hungry and managed to get completely blasted on one can of American beer.
For flying the limit is zero as well, with the requirement of eight hours from the last drink to takeoff.
The real solution is social: make it utterly unfashionable to drink and drive.
Can someone please tell me why this is a good idea?
The long story is here (PDF). The motherboard will still do the heavy lifting from 12 V down to 2.4 V, but the integrated VRM will handle distribution. The advantage is extremely clean, fine-grained, low-latency and flexible power delivery, supplying exactly as much power as needed, where it's needed, and probably (this is just speculation on my part) allowing the CPU to work across a wider voltage range, since less noise and ripple mean you don't need the same tolerance margins. It sounds perfect for smartphones, tablets and laptops that are primarily battery-limited, nice to have for average machines, but potentially an issue for overclockers. Then again, all you need is cooling; it shouldn't limit overclocking if you can keep the temperature down.
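A quick illustration of why the conversion stage matters: for a fixed power draw, current scales inversely with voltage, and resistive loss in the delivery path scales with the square of the current. The numbers below are my own illustrative assumptions, not from the linked PDF:

```python
# For a fixed power draw, I = P / V, and parasitic loss is I^2 * R.
# Illustrative round numbers only (assumed, not measured):
power_w = 100.0     # assumed CPU power draw
trace_ohms = 0.001  # assumed parasitic resistance of the delivery path

results = {}
for volts in (12.0, 2.4):
    amps = power_w / volts
    loss_w = amps ** 2 * trace_ohms
    results[volts] = (amps, loss_w)
    print(f"{volts:>5.1f} V -> {amps:6.1f} A, I^2R loss ~{loss_w:.2f} W")
```

The same 100 W needs roughly five times the current at 2.4 V as at 12 V, and about twenty-five times the resistive loss over the same path, which is why you want the final, high-current conversion as close to the die as possible.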
Actually I'd argue exactly the opposite: this is the earliest point at which it's possible to say, with reasonable certainty, when and which features are coming. Post-launch reviews are fun, but most news is about upcoming products/services/changes, and as you just said, it won't actually be in distros on release day anyway. So you can read about it now; it'll release in two months, be in most distros this fall and in Debian in 2016... maybe. That said, in the rush to get this story out and grab page hits, they forgot to write a decent summary of what's new and why it should matter. But hey, it would be the right time.
Well, if our plane crashes deep in the snowy mountains with some passengers killed on impact and no rescue in sight, and it's either eat or starve to death, I'll be happy to bury you under a tombstone that says "Here lies a true ethical vegetarian". Just kidding; actually I'll wait for you to die and eat you too. While I haven't eaten human meat, there's a lot in the don't-ask-don't-tell category that tastes just fine until you hear what it is...
All real-world currencies are usually legal tender - it is required that people accept legal tender as payment for goods and services
Generally no, only as payment for debts, for products already consumed or services already rendered, like eating at a restaurant and paying afterwards. Stores can refuse you for bills that are too large, like $100 notes, for change that's too small, like heaps of one-cent coins, or for pretty much any reason they want. Holding legal tender doesn't force anyone to do business with you.
Put that way, maybe Windows isn't quite so bad after all . . .
From the question:
They usually have enough business knowledge that they provide some value to the company, but from a technical perspective they are a slowly-increasing liability.
What you said:
Many people provide value to an organization in ways that aren't always easily visible to co-workers. It's entirely possible the coder who doesn't seem to be "as up to date" in his skills may be providing benefits to the organization in ways you don't yet have the experience or perspective to appreciate.
And if those people moved up or sideways to continue providing that benefit, great. The problem is when they are still doing technical work that leads to bad code, bad design, bad practices and bad solutions, which they can get away with by virtue of their seniority. It's very hard to rein in someone with more experience, a longer work history and more management clout, even when their decisions really are bad. There are at least three ways to make a senior person butt out: trust, vanity and distraction. The best case is that they simply relinquish control because they trust me, which usually comes with time. If that fails, there's always vanity, since no senior person really likes to hear that he's mired in the nitty-gritty implementation details. The third is to pile on requirements and business-side issues until they leave more of the tech side to you.
Is it a bit cynical and cruel? Perhaps, but I really don't want a manager I feel is holding everyone back; push them towards where they can shine and leave me some room to shine on my own. The best managers seem to have this figured out on their own: they know when to provide direction and guidelines, but also when to back off and let the people who know the details make the right decision. But the world isn't full of great managers, and really, you can waste a lot of time and great code building a flawed system that can never work or perform the way it was supposed to. Nobody cares how great your wall was when the whole house caves in.
You're not in the worst situation you could be in.
Our industry and the career options in our field change so fast that you have to learn new stuff every year, no matter how old you are. If your company keeps you around and basically pays you a salary to learn programming, what's your problem? Obviously they trust you, and you're valuable enough as a programmer to them.
Most production code is of low to mediocre quality anyway, and no one cares as long as it's finished before the deadline, so don't sweat it.
Good luck and enjoy your new career.