Comment The Windows/386 Ad (Score 1) 220
IANAL, but after the 7-minute mark, it's considered a crime against humanity in most civilized nations.
'Based upon' is trueish, but in a funny way:
Mach was built by modifying BSD (modifying's easier than rewriting!), but the final kernel was clearly no longer a Unix kernel. For NeXTSTEP, the microkernel was recombined into a monolithic design.
The research required for something better hasn't had funding for decades. Modern UNIX has been good 'nuff. It's got plenty of problems, but none big enough to justify the kind of research budget it would take to rethink the OS.
Actually, it's only really been Sun pushing things forward recently, and it's mostly incremental.
Unix first. It was rewritten in C. http://en.wikipedia.org/wiki/Unix
Unix came out in '69, C in '72.
I've heard some pretty amazing government fraud stories. The best so far is a guy just making a bill in Excel and sending it to the Navy. They ended up paying $3 mil before catching him.
I've been looking at HP c3000 chassis office-size blade servers, which may serve as your production+backup+testing setup, and scale up moderately for what you need. Compact, easily manageable remotely, and if you're good about shopping around, not terribly overpriced. Identical blades make a nice starting point for hosting identical VM images.
How's Nexenta vs the latest opensol? Most of the gnu commands are already the first in my PATH. Is there anything else different?
Not in the patent, but MS is happy to give Prof. Tufte credit in their blog: http://blogs.msdn.com/excel/archive/2009/07/17/sparklines-in-excel.aspx
Now, how openly admitting prior art helps their patent application....
Sorry, I was responding to the weird view that
There's something innate in being a good developer. The best description I've heard relates to being able to consciously build a mental model of a system and use it for analysis. With that skill, a CS program can make someone pretty good. That skill without a CS program can still lead to a good developer, but they'll have to teach themselves a *lot*.
The CS program does distinctly alter the likelihood that someone will become a good developer. Anecdotal point samples don't mean much. Take two people with the right innate skill, run one through Hospitality Management and the other through CS, and the latter is far more likely to end up the better developer.
Hmm, when your algorithm is exponential in the input size, you can spike a CPU with 100 elements. That's a real example from a bug experienced by real paying customers.
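To make that blow-up concrete, here's a minimal sketch (hypothetical, not the actual customer bug): naive subset-sum, where every element doubles the size of the recursion tree, so a worst-case 100-element list means on the order of 2^100 calls.

```python
# Naive subset-sum: tries every include/exclude choice, so the call
# tree has up to 2^n leaves for n items -- exponential in input size.
def subset_sum(items, target):
    if target == 0:
        return True          # found a subset that hits the target
    if not items:
        return False         # ran out of items
    head, rest = items[0], items[1:]
    # Branch: either use the first item or skip it.
    return subset_sum(rest, target - head) or subset_sum(rest, target)

# Fine for a handful of items...
print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # prints True (4 + 5 = 9)
# ...but a worst case over 100 items would need ~2^100 calls.
```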
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
Java - 18%
C - 17%
PHP - 10%
C++ - 10%
VB - 8%
Java's already stagnated and is sinking (1-yr delta: -1.9%). Unless CS programs plan to stick to (the incredibly venerable) C, people will always complain that they're sticking to the wrong languages.
Unless you're working on scientific computing, OS development, or similar subjects, you will probably never need to write any of those things.
Or if you just can't find what you're looking for....?
Look, schools can't spend 4 years teaching you to write release-quality code on your first day at a job. First, there's a good chance that the language you'll be using when you get out of school wasn't the most popular one around when you went in. Second, that stuff is muscle memory you'll develop normally at work. What school does cover is how to actually think about what code does, *in* *aggregate*. You know, beyond a single function or object.
When you cover a couple dozen bugs where a routine ran instantaneously for an input list of 10 items but spiked the CPU for 2 seconds when that list hit 100+, you'll see the value of big-O. It's the "hey, I can spell printf!" developers who've somehow convinced themselves that what they don't see can't hurt (or help) them.
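The shape of that bug is usually something like this (a hypothetical sketch, not any particular customer's code): a routine that passes its 10-item test but is accidentally O(n^2), so the cost explodes as the list grows.

```python
# "Remove duplicates, keep order" -- two implementations.

def dedupe_slow(items):
    seen = []                 # membership test on a list is O(n)...
    out = []
    for x in items:           # ...inside an O(n) loop -> O(n^2) total
        if x not in seen:
            seen.append(x)
            out.append(x)
    return out

def dedupe_fast(items):
    seen = set()              # set membership is O(1) on average...
    out = []
    for x in items:           # ...so the whole loop is O(n)
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

Both return identical results; only the big-O differs, which is exactly why the slow one sails through small unit tests and dies in production.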
See no science, hear no science, speak no science.
Clearly:
Don't contribute significantly to what you learn.
Bullshit. I work with groups of people in both categories. I see the code they put out, and I often have to help them make it work.
Code that only depends on knowing your basic data structures or various apis falls into two groups pretty quickly:
1) Easily composed of library elements --- the code ends up as a driver for library data structures and routines, really just friction between the underlying functionality and the client code.
2) A direct reification of business logic --- linear in time to write (bounded by typing speed). Exponential to test, since knowing how to test that logic requires better knowledge than what was required to write it. Equivalence-class analysis will reduce it, but don't ask that of someone who thinks finite-state machines or (heaven forbid!) Turing machines are unnecessary formalistic gobbledygook.
What's really going on is that people feel competent when they hit that point where they can write code well enough to be able to use the reference documentation effectively.
*THAT* *IS* *NOT* *COMPETENCE*
If they can't find an API for it, this is where the horror begins: they try to implement it themselves.
Imagine, if you will, a nightmarish world where half the type names and methods are synonyms, where the documentation screams that the author doesn't have the language (in English) to describe what terrible thing they've done in code -- because they don't understand the whole of it either. Often a tiny little language develops in there somewhere, primordial and malformed; hideous to the unsympathetic naked eye. Entire interfaces are inherited simply to perform simple transformations on their arguments and call other methods.
You go through it, slowly at first, hoping there's a genius that justifies the madness. Then you see that one routine --- the one that shouldn't be a routine at all, just a simple (a || !b) --- and it's a stack of if-then-else branches. The code's written by morons.
The code has broken the author's mind. They're lost -- drowning -- in a sea of logic. No API or language built-ins to save them.
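That (a || !b) horror looks something like this (translated into Python, names invented --- the original was presumably just a and b):

```python
# The stack of if-then-else branches, exactly as found in the wild:
def should_retry(ack_received, error_flag):
    if ack_received:
        if error_flag:
            return True
        else:
            return True
    else:
        if error_flag:
            return False
        else:
            return True

# ...which collapses to the one-liner it always was: a or not b
def should_retry_simple(ack_received, error_flag):
    return ack_received or not error_flag
```

Eight lines of branching to express two operators --- a sure sign the author never stepped back to look at the truth table.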
--
Call me what you want --- I know it'll be horrible. But I've seen it too often now. In most places, it was by people trained in other disciplines: EE or physics folks who did just enough to get their primary non-programming work done. They have a good excuse, and they have other real work to do.
But if your entire job is to put out software, and you don't actually want to study software to do it, then where can you go? Do you think the languages and tools are going to stay as stagnant as your skill level? Aren't you afraid that a tool or API is going to replace the majority of your work?
For God's sake, stop researching for a while and begin to think!