I agree with the FB position. I do work with youth at Church. Several of them are my friends on FB, although these days more of my friends are professional colleagues. Their parents know that. My privacy settings are somewhat tighter than the default, to minimize the kids' exposure to others.
While I am not silly enough to put anything that matters on FB, some of the kids have said things that, while actually not very serious, they might not want other people to see. The difficulty with letting third parties use my account is that most of what's there isn't anything I posted; it's postings from my friends. And they might well not want my potential employer to see them.
I don't know what kind of suit FB has in mind, but if I were going to construct a case, I'd base it on compromising the privacy of minors without the consent of their parents.
OK, I resist change just like everyone else. But that's not what is going on here.
Monitors are getting bigger. I'm doing more things at once. I want better ways of managing that. But Metro just gives me one thing at a time. Sorry, that's not a solution to the problem. That's going back to the original Macintosh.
Apple isn't perfect, but at least they've been trying some new ideas. I don't think the new ideas on screen management have been all that successful, but at least they're attacking the right problem.
At the moment, nobody has a better idea for a smart phone or a tablet than to show one app at a time. The only way W8 makes sense is if they were adding a piece for portable devices and said "while we're at it, let's let desktop guys use it too." Fine. But only if they realize that the desktop systems still need new ideas as well. And if I were doing a ground-up redesign, I'd consider whether we might be ready for a better approach on tablets as well. The new iPad has more pixels than many monitors. I'm not sure one app at a time should be the only way to use it.
The skeptics I've read agree that temperatures have gone up. The questions are about models showing continuing rises, and what approach to take in dealing with it.
My concern is that we not exhaust the public's willingness to do something with approaches that will have almost negligible impact.
I would love to use Linux. But first and foremost the OS has to be able to do everything Mac OS or Windows can do. That includes licensed content. Currently iTunes won't run on Linux in any realistic way, and there's no real alternative. I doubt that I'm alone. I'm glad someone in the Linux community is working on things like this. I'm also going to need MS Office, as I have to be able to exchange documents with administrators and OpenOffice's compatibility isn't really good enough for that.
The Linux community has slowly understood that if you want to be a mainstream OS, real people have to be able to install it. But once they've installed it, it's got to do what they need to do.
Has anyone participating in this discussion actually done web design for accessibility? I've been looking at it for our course management system. It's not trivial, but it's also not difficult. It increases development time / cost, but probably not more than 10%. It's perfectly possible to design reasonable visual interfaces that work fine with common screen readers. A sighted user won't even be aware that it's been done. It's a combination of avoiding some standard pitfalls that a screen reader can't reasonably work around, and putting appropriate labels and tags on everything. A lot of tools are accessible. jQuery has been doing an increasingly good job. The CK editor has as well.
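To make the "labels and tags" point concrete, here's a rough sketch of the sort of thing I mean, done with plain DOM calls in TypeScript (the element ids are made up for illustration; jQuery's .attr() does the same job):

    // Sketch: give a search field an accessible name and make its results
    // counter a live region, so a screen reader announces both. The ids
    // "search-box" and "search-status" are invented for this example.
    function labelSearchControls(): void {
      const input = document.getElementById("search-box");
      const status = document.getElementById("search-status");
      if (!input || !status) return;

      // Accessible name for the control; a sighted user sees no difference.
      input.setAttribute("aria-label", "Search course materials");

      // A polite live region is read aloud when its text changes, instead of
      // being silently repainted.
      status.setAttribute("role", "status");
      status.setAttribute("aria-live", "polite");
    }

That's the flavor of most of the work: nothing visual changes, but the screen reader suddenly has something sensible to say.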
The issue isn't just blind people. Older people (like me, to be honest) sometimes need to increase font size, and would really like it if the web page design didn't fall apart.
There's no way you're going to get away with saying "sorry, they should know they're handicapped." The law won't allow it, and in my opinion shouldn't. I might feel differently if there weren't reasonable approaches to dealing with it. The big problem is getting web developers to think about it, and to try their software with a screen reader now and then.
These protocols were designed for a different world:
1) They were experiments with new technology. They had lots of options because no one was sure what would be useful. Newer protocols are simpler because we now know what turned out to be the most useful combination. And the ssh startup isn't that much better than telnet. Do a verbose connection sometime.
2) In those days the world was pretty evenly split between 7-bit ASCII, 8-bit ASCII, and EBCDIC, with some even odder stuff thrown in. They naturally wanted to exchange data. These days protocols can assume that the world is all ASCII (or Unicode embedded in ASCII, more or less), full duplex. It's up to the system to convert if it has to. They also didn't have to worry about NAT or firewalls. Everyone sane believed that security was the responsibility of end systems, that firewalls provide only the illusion of security (something that is still true), and that address space issues would be fixed by revving the underlying protocol to have larger addresses (which should have been finished 10 years ago).
3) A combination of patents and US export controls prevented using encryption and encryption-based signing right at the point where the key protocols were being designed. The US has ultimately paid a very high price for its patent and export control policies. When you're designing an international network, you can't use protocols that depend upon technologies with the restrictions we had on encryption at that time. It's not like protocol designers didn't realize the problem. There were requirements that all protocols had to implement encryption. But none of them actually did, because no one could come up with approaches that would work in the open-source, international environment of the Internet design process. So the base protocols don't include any authentication. That is bolted on at the application layer, and to this day the only really interoperable approach is passwords in the clear. The one major exception is SSL, and the SSL certificate process is broken*. Fortunately, these days passwords in the clear are normally on top of either SSL or SSH. We're only now starting to secure DNS, and we haven't even started SMTP.
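To show what "passwords in the clear on top of SSL" looks like in practice, here's a hedged sketch of HTTP Basic authentication (the host, user, and password are invented, and a Node runtime is assumed for Buffer and fetch). The password is merely base64-encoded, which is an encoding, not encryption; every bit of real protection comes from the TLS connection underneath:

    // Sketch of HTTP Basic auth over https. Base64 is reversible, so the
    // password is effectively sent in the clear; only TLS keeps it private.
    // Host, user, and password are made up for illustration.
    async function basicAuthFetch(): Promise<number> {
      const user = "alice";
      const password = "secret";
      const token = Buffer.from(`${user}:${password}`).toString("base64");

      const response = await fetch("https://example.org/protected", {
        headers: { Authorization: `Basic ${token}` },
      });
      return response.status;
    }

Strip away the https:// and the same header is the old telnet-era model: the application hands the password over and hopes for the best.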
*How is it broken? Let me count the ways. To start, there are enough sleazy certificate vendors that you don't get any real trust from the scheme. But setting up enterprise cert management is clumsy enough that few people really do it, hence client certs aren't used very often. And because of the combination of cost and clumsiness of issuing real certs, there are so many self-signed certs around that users are used to clicking through cert warnings anyway. Yuck.
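The click-through habit even has a direct analogue in code. A hedged sketch, assuming a Node runtime and an invented host name: setting rejectUnauthorized to false tells the TLS layer to accept any certificate at all, self-signed included, which is exactly what a user does when they dismiss the browser warning:

    import * as https from "https";

    // Disabling certificate verification: the programmatic version of
    // clicking through the warning. Don't do this outside of a test box.
    https.get(
      { host: "selfsigned.example.org", path: "/", rejectUnauthorized: false },
      (res) => {
        console.log("status:", res.statusCode);
      }
    );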
The problem with books is that most people learn by doing, and toy problems don't teach you what a real application is like.
I'd suggest picking an open-source project and doing something with it. Depending upon the type of programming you want to do, add something to Linux, OpenOffice, or any of a number of Java-based things. (I'm currently working with the Sakai course management system. There are plenty of things that need doing there.)
The languages aren't any worse than what you're used to. The problem is that real programming these days tends to involve lots of complex libraries and frameworks. Those are hard to learn in the abstract, which is the reason for my advice.
Whether it makes sense for someone to (re)enter programming as a job I can't say. That's a decision for you. There are a lot of problems with the profession. But there are also lots of important things that need to be done, and a lot of the people who think they're programmers aren't up to it. Programming approaches are changing often enough that skills go out of date in a few years. That's both good news and bad news for people like you. Since people have to learn new techniques all the time anyway, it's not like you have to relive the whole last 30 years.
The language depends upon what you want to do. Systems software and desktop applications typically use C-based stuff (C++ is probably the best place to start, although Objective C and other things have advantages). Web applications use Java or