I agree. On the other hand, what can one do in C that one cannot do in Pascal?
You might as well ask "what can one do in assembler that one cannot do in Pascal?" C is for creating cross-platform system software. Its main benefit is its portability while still providing virtually all the capability of assembler. Pascal dialects extended for system software (e.g. Turbo Pascal) are non-portable while remaining fairly poor substitutes for assembler. Even TP would be difficult to use for something like an interrupt handler or a thread scheduler. As a result, a lot of TP code contained large amounts of inline assembler and was rarely factored for portability. Other platforms that relied on Pascal for programming, like MacOS and the UCSD p-System, were largely implemented in assembler.
I am the sort, though, who thinks that nobody should code in anything except maybe BASIC until they're exposed to some Assembly Language. Learn what you're actually moving electrons around in. Quit with the abstraction. Until a little later, anyways.
I would tend to agree with this. Start with some easy-to-use language (I think Python is a very good learning tool since it can teach concepts BASIC can't, like data structures, functional programming, threading, and networking) just to get started with a broad 30,000 ft view of how computers behave for the end user.

Then start with the lowest level: the machine itself. It doesn't terribly matter which one; any von Neumann architecture conveniently available. I think ARM is good personally because it's easy to program but exposes the details: easier than SPARC or MIPS to get started with, while more transparent and less 'magic' than x86.

Then switch to C as a portable assembler and learn things like inline assembler and factoring for portability. Give students projects to port, say, code from x86 or MIPS to ARM; let them go research the source machine and figure out how to get the code running on a different machine, refactoring as needed without losing the ability to build for the original architecture. While this may seem like a trivial skill, I think it's important to teach students to conceptually distinguish portable system software from its machine-dependent parts. Understanding the value of code reuse is a driving factor for many higher-level concepts, like OO factoring and switching from C to C++ (sans RTTI or exceptions) because of its better code reuse. As part of this, students could look at something like the Sun vnode layer as an example of OO factoring in C, and imagine how much cleaner it would read in C++.

Then the education keeps building like this until it meets up with the higher-level interpreters where it started (and which are used consistently throughout the education to explore new concepts, like network protocols, transactions/atomicity, or RPC mechanisms). I think the real issue ends up being how much someone can be taught in four or five short years. It's not much time.
No, I'm a programmer; I was referring to the fact that there are individuals working as programmers who only touch abstract Java code, and have little knowledge of how computers work.
This is really not such a bad thing. While the class in the story is a very elementary introduction to programming (for which all the various interpreted, easy-to-learn languages are great), the real problem domain for CS is, or should be, computing itself. It should be about making computers easy to use for non-CS people, so they can write programs in Python, Excel, Java, Pascal, or whatever their preference of the day is to implement business systems for their own problem domains.

No matter what the paradigm of the day is, systems will need to be implemented to do these things: web servers, database servers, compilers, kernels, runtimes, etc. don't just magically appear out of nowhere. And even as these paradigms shift around, computers essentially work much like they did 35 years ago, and the system software building blocks are the same. A queue is a queue, a scheduler is a scheduler, atomic transactions are still atomic, etc. While some minor things change, the problem domain is the same: implementing programmable, efficient, reliable, usable systems. What's mainly changed is the user's problem domains. But a computer is still a computer, and someone who did kernel work in 1985 could easily pick up Linux or a web server like Apache today. You still need C and assembler, still need to understand how the machine works, and still need to understand things like pipeline hazards, cache locality, and bus arbitration.
That said, I'd be skeptical of anyone enrolling in a CS program without already knowing how to program in, e.g., Python or Java. In fact, I'd assume they do, and that they're actually motivated by a desire to implement and understand runtime tools like this for themselves. But it's a good starting point to study algorithms, or at least their abstract side. An efficient implementation then clearly requires some degree of machine knowledge, plus tools that can leverage the benefits while avoiding the pitfalls. (It's a typical prototype-to-production step for this domain.)
Yes, why should anyone be concerned that their ability to afford food and heat in their declining years is dependent on the long-term stability of a system that can be radically damaged by a single mistyped letter?
I guess we're all just lucky this guy hit "b" and not "z".
Not only that, but a system where nobody even knows for sure why it dropped in the first place! Maybe it was a typo? Maybe it was overseas? Maybe something else? Maybe our savings will be there when we retire? Maybe not?
How about if we start applying the same dumb blind belief to software engineering... Maybe it'll work? Or if it fails, let's design it so it's impossible to tell why! Let's just guess! Blind belief!
It just doesn't feel right to have something so flaky and ineptly designed so centrally positioned in our economy...
And when your 'Organic Only' store is carrying a hybrid/cross breed that occurred in nature (that no one is testing for) you still have consumed a GMO crop.
They DO random genetic testing, you know. There are independent labs that test for GM markers, and distributors and retail chains that sell GM-free products do use them. That's not to say a GM strain can't accidentally make it into the pipeline; obviously that's bound to happen. However, I personally am less concerned with the occasional accident than with the regular daily stuff.
I'd expect a libertarian to hold the view that if you contaminate your neighbors' soybeans or corn, you're liable for cleanup and replanting, as well as medical care for people who ate it. This in turn would force growing to be geographically separated so as to prevent cross-pollination - or, if that's not possible, GM crops would be a commercial dead end. The liability litigation would simply bankrupt anyone who tried to grow them.
However, in the real world, as opposed to libertarian utopia, the best option is probably simply to restrict the growing of GM strains to laboratory settings.
All you "free market" enthusiasts out there, answer this question for me:
How would the unencumbered "free market" handle a problem like this?
I'm not a free market enthusiast, but even I can answer this: you take your business to a store that doesn't carry GM produce or products that contain it. My local Whole Foods store, for instance, is GM-free and predominantly organic.
If they had decided to start using 64-bit time on the 1st of January, 1970, none of these problems would have happened.
header FH_DATE_PAST_20XX Date =~
describe FH_DATE_PAST_20XX The date is grossly in the future.
Will match 2010 no matter how many bits are used to keep time. This is a problem that has been known for years; it's a total embarrassment to OSS practices that it wasn't fixed before 1/1/10, before it became critical. It's not based on a bad architectural decision, or even particularly bad code; it's just a typo whose fix wasn't pushed in a timely manner. I suspect a problem here is that open source development doesn't separate bug fixes from feature additions, which means non-critical fixes get backed up behind features until the whole is considered stable enough to ship. A critical fix can be spun separately (being backported to the current stable branch) - but this problem wasn't critical... yet...
Oh yes, they'd like us to believe that reverse engineering encryption is illegal. It is not.
Right you are. However, what is illegal is publicly stating that someone has committed illegal acts. Nohl should sue for slander.
It's so weird when reality looks like bad Photoshop.
They can't afford tabletop models anymore, so in this project they used paper cutouts on a black canvas.
If our Army is going to continue to make videogames, surely we can provide our citizens with Universal Single Payer Health Care....
The VA provides single-payer, single-provider, socialized health care for the Army.
In other words, we need GNU Ecmas.
As a CS student, I think you should focus on product development, not IT. You absolutely should intern at a technology company whose main focus is products - and whose _customers_ may include IT departments. You won't be paid a whole lot, but the tasks you get will also be very simple, relatively speaking, and while they may be important, once they're taken care of you'll have plenty of time to poke around with whatever interests you. You may be asked to, say, add an option to a compiler, tweak a kernel build, or add data gathering and instrumentation - things that the other developers would like to have but don't find time to do themselves. If in the process you find something you think might make an interesting project, by all means suggest it; chances are you'll get to go do it, unless it seems ambitious to the extreme.
Does the article discuss how much info each user leaked? I wouldn't be really surprised if the older users posted less info and were thus less concerned with privacy (it also wouldn't be shocking if they were simply less aware of it).
I guess at 45 I qualify as "older" in this context. I don't post personal details, or say things I don't want my business contacts to be aware of. FB serves the same purpose for me as a cocktail party - it's just a simple social function. I don't really care who wants to be my friend; I view the friends list more as a 'live Rolodex' - you never know who might be handy to know. I have about 300 friends on FB. My wife, who's a freelance writer and has to network as part of her business, has over 1500. It's just a tool, and like any other it becomes what you make of it. I personally like it better than LinkedIn, which is too formal.
Can anything be sadder than work left unfinished? Yes, work never begun.