(my PS3 is standing freely on a table without anything blocking its vents)
Then you just don't handle them!
I hear you, but that wasn't what I meant really. When I don't want to handle exceptions because I simply can't handle them, I don't mean I want to swallow them.
In my personal experience, in a lot of cases the only sensible course of action is to let the exception bubble up to some top-level handler that does 'handle' the exception by simply aborting the operation.
In a request/response Java EE situation, this simply means aborting the request and displaying a general error page. Runtime exceptions let you do exactly this without all the boilerplate code and verbosity, which can introduce its own bugs simply by obfuscating the real code in your system to your programmers.
No, you should declare your own exception, and wrap the exceptions of your dependencies in that new exception.
Indeed, this is a reasonable approach, but one that does lead to very deep levels of chained exceptions. Now if the exception is really exceptional this might not be the end of the world, but constantly applying the wrap-rethrow pattern can still add a lot of mental overhead to your code. Sometimes this is justified, but in a lot of cases all you want is for the exception to be handled by the top-level exception handler. You're then creating lots of specific exceptions and lots of specific try/catch code, but it's never really used, since it's only the first exception in the chain that counts and nothing is ever done with the intermediate ones.
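The wrap-rethrow pattern under discussion can be sketched as follows; a minimal, hypothetical example (all the exception and method names here are invented for illustration):

```java
public class WrapRethrowDemo {

    // layer-specific checked exceptions (names are made up for illustration)
    static class RepositoryException extends Exception {
        RepositoryException(String msg, Throwable cause) { super(msg, cause); }
    }
    static class ServiceException extends Exception {
        ServiceException(String msg, Throwable cause) { super(msg, cause); }
    }

    static void loadRow() throws RepositoryException {
        try {
            throw new java.sql.SQLException("connection reset"); // simulated failure
        } catch (java.sql.SQLException e) {
            throw new RepositoryException("could not load row", e); // wrap #1
        }
    }

    static void loadCustomer() throws ServiceException {
        try {
            loadRow();
        } catch (RepositoryException e) {
            throw new ServiceException("could not load customer", e); // wrap #2
        }
    }

    // Counts how deep the cause chain goes by the time a top-level
    // handler sees it: ServiceException -> RepositoryException -> SQLException.
    static int chainDepth() {
        try {
            loadCustomer();
            return 0;
        } catch (ServiceException e) {
            int depth = 0;
            for (Throwable t = e; t != null; t = t.getCause()) depth++;
            return depth;
        }
    }
}
```

Two layers already give a chain of three exceptions, even though only the root SQLException carries any real information.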
I do agree that checked exceptions are a great way to force people to document what exceptions a method is expected to throw, which is indeed a benefit.
Anyway, I didn't really start the move away from checked exceptions; I was just observing that this seems to be happening.
Elegance? Closures? Now you have me scared. Really scared.
Don't be afraid of the unknown, my brother. For these kinds of libraries, closures actually are very elegant. Apple added a similar thing (blocks) to C for their GCD library.
And about generics, they are definitely not a useless and dangerous disaster. Yes, you can go overboard with them, but when used in moderation (i.e. typing containers), they work perfectly fine. Much better than having to downcast Object all the time to whatever type I think was inserted into the collection.
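As a small illustration of 'typing containers', a sketch contrasting a raw, pre-generics collection with a generic one (the names are invented):

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {

    static String firstNameRaw() {
        // Pre-generics style: a raw list forces a downcast on every read,
        // and a wrongly typed element would only fail at run time.
        List rawNames = new ArrayList();
        rawNames.add("Alice");
        return (String) rawNames.get(0); // unchecked cast
    }

    static String firstNameTyped() {
        // With generics the element type is checked at compile time:
        // typedNames.add(42) would not even compile.
        List<String> typedNames = new ArrayList<>();
        typedNames.add("Alice");
        return typedNames.get(0); // no cast needed
    }
}
```

Both methods return the same value, but only the raw version can blow up with a ClassCastException if someone inserts the wrong type elsewhere.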
No, it is actually worse to have only RuntimeExceptions and handle none.
Checked vs unchecked is an endless debate in Java, but to add my 2 cents:
If you only have RuntimeExceptions, then the error will actually surface at run time if you don't explicitly handle it. This means your unit tests and integration tests (you do have those, don't you?) will most likely catch it.
Checked exceptions are only useful if you can handle them, but 99 out of 100 times you can't. If there's a SQLException being thrown, seriously, what can your code do about it? In practice what we see is that exceptions get wrapped and wrapped and wrapped... up to 10 levels or more deep. Alternatively, your code can declare the exceptions being thrown by its dependencies, but then you end up with dependencies bleeding into layers where they have no business.
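One common way out, sketched below under the assumption that nothing useful can be done locally, is to translate the checked exception into an unchecked one exactly once, at the boundary. The class names here are invented; some frameworks apply a similar idea with their own runtime exception hierarchies:

```java
public class UncheckedWrapDemo {

    // A hypothetical unchecked wrapper for persistence failures.
    static class DataAccessError extends RuntimeException {
        DataAccessError(Throwable cause) { super(cause); }
    }

    static String query() {
        try {
            throw new java.sql.SQLException("table missing"); // simulated failure
        } catch (java.sql.SQLException e) {
            // Translate once at the boundary; callers need no throws clause
            // and no try/catch, yet the root cause is preserved.
            throw new DataAccessError(e);
        }
    }

    // Stands in for the top-level handler that would normally
    // abort the request and show a general error page.
    static String rootCauseMessage() {
        try {
            return query();
        } catch (DataAccessError e) {
            return e.getCause().getMessage();
        }
    }
}
```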
Java checked exceptions do absolutely nothing to help when you're working with dynamically-loaded code, for instance.
That's only partially true. If the language had only checked exceptions and you were coding against interfaces (often still the case with dynamically-loaded code), then checked exceptions would still work, whether the code implementing said interfaces was dynamically loaded or not.
Now the problem here is that there are also unchecked exceptions which the code might throw, but this is really unrelated to the dynamic loading. One thing that could break things though, is if you not only dynamically load the code, but also execute said code via reflection. Then indeed checked exceptions might not do anything...
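The reflection case can be made concrete: when a method is invoked via reflection, any checked exception it throws arrives wrapped in an InvocationTargetException, so the compiler can no longer force the caller to deal with the original exception type. A small sketch:

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class ReflectionDemo {

    // Calling this directly would force a catch or throws for IOException.
    public static void fail() throws java.io.IOException {
        throw new java.io.IOException("boom");
    }

    // Calling it via reflection does not: the checked IOException
    // arrives wrapped in InvocationTargetException instead.
    static String invokeReflectively() {
        try {
            Method m = ReflectionDemo.class.getMethod("fail");
            m.invoke(null);
            return "no exception";
        } catch (InvocationTargetException e) {
            return e.getCause().getClass().getSimpleName();
        } catch (ReflectiveOperationException e) {
            return "reflection failed";
        }
    }
}
```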
People need to remember that if something is really open, then it also means it is open to someone doing their own thing with it.
Right, and so many companies actually did with Java. Only with one 'minor' difference: they didn't call it "Java SE".
The problem with MS was not that it was extending Java, but that it was luring programmers into thinking they were coding against the Java SE standard. If you were on an MS platform and only checked your code with Visual Studio, then you as a programmer wouldn't know. Until your users tried to run your Java app on their Linux or Solaris boxes, and found out it didn't work at all.
A very simple solution for MS would have been to provide flags in their environment for "standards-compliant Java development" and "non-standards-compliant Java development". If my memory serves me well, they had such flags for C++ in their compiler.
Java, while widely used, is on the down slide. There really haven't been any revolutionary additions to the language in about 7 years. In another 10 years, it will become what COBOL is to IBM.
The only thing that has remained the same over those 7 years is the stream of "Java is dead" posts; meanwhile Java is still the most widely used language, and in those 7 years additions have been made to the language (generics, annotations, type-safe enums, etc.) and soon we'll have some extra goodies like closures and automatic resource management. Maybe those are not revolutionary, but enough IMHO to keep the language up with the times.
Meanwhile, there is a lot of innovation going on in the platform, especially with Java EE and how it uses annotations for things other languages might use keywords for (e.g. annotations to make methods transactional).
Is Java doomed to get stuck behind in the single processor world
Far from it, actually... of course Java has had the absolute low-level concurrency primitives from the very beginning (threads, synchronized blocks, wait/notify). More than half a decade ago, the java.util.concurrent library was added to the platform, which added tons of goodies for concurrent/parallel programming, like concurrent maps, blocking queues, thread pools and executor services, cyclic barriers, programmatic locks with timeouts (which actually performed better than the built-in locks based on the synchronized keyword), etc.
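As a small taste of java.util.concurrent, a sketch that uses a fixed thread pool and Futures to sum a range in parallel (lambda syntax is used for brevity, although it arrived in Java later than the library itself; the chunking scheme is invented):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorDemo {

    // Sum 1..100 by splitting the range into four chunks of 25
    // and running each chunk as a Callable on a fixed thread pool.
    static int parallelSum() {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> parts = new ArrayList<>();
            for (int start = 1; start <= 100; start += 25) {
                final int lo = start, hi = start + 24;
                parts.add(pool.submit(() -> {
                    int sum = 0;
                    for (int i = lo; i <= hi; i++) sum += i;
                    return sum;
                }));
            }
            int total = 0;
            for (Future<Integer> f : parts) total += f.get(); // blocks per chunk
            return total;
        } catch (InterruptedException | ExecutionException e) {
            return -1; // for this sketch, just signal failure
        } finally {
            pool.shutdown();
        }
    }
}
```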
Now Java 7 will be extended with the fork/join framework, which is essentially a thread pool plus support code for (recursively) computationally intensive operations, and supports advanced features like work stealing. The fork/join framework has been specifically designed to scale into the many, many multi-core range. Not just quad, hex or oct cores, but well beyond that.
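A minimal sketch of how the fork/join framework is typically used: a RecursiveTask that splits large ranges in two and computes small ones directly, letting idle worker threads steal queued subtasks (the threshold and the summing problem are invented for illustration):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Recursive sum of an array with fork/join: ranges below a threshold
// are summed directly, larger ones are split and processed in parallel.
public class ForkJoinSum extends RecursiveTask<Long> {
    static final int THRESHOLD = 1_000;
    final long[] data; final int lo, hi;

    ForkJoinSum(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {            // small enough: just sum
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        ForkJoinSum left = new ForkJoinSum(data, lo, mid);
        ForkJoinSum right = new ForkJoinSum(data, mid, hi);
        left.fork();                           // queue left half for the pool
        return right.compute() + left.join();  // compute right here, then join
    }

    static long sum(long[] data) {
        return new ForkJoinPool().invoke(new ForkJoinSum(data, 0, data.length));
    }
}
```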
Parallel array is another topic on the agenda, which allows you to express in an almost declarative style operations on arrays, which the library will then execute for you in parallel. To really make this work elegantly, closures are needed, which were on and off the radar for the Java SE 7 release. Because of that, parallel array has somehow stalled. Now that closures are back, so might parallel array be, but I haven't heard anything about it for a while to be honest.
This blog post has a nice summary about some of the added concurrency items in Java 7: http://www.baptiste-wicht.com/2010/04/java-7-more-concurrency/
India and China are cranking out about 600,000 engineers a year, and each of those countries has 4X the US population. And wages in those countries are a tiny fraction of wages in the US or UK.
Newsweek has an interesting series of articles on that this week. Turns out there actually is a limit to the Chinese success story. I've read the article "Smart, Young, and Broke" with much interest, which is about a Chinese software developer who's in the same boat as the UK graduates. Graduated with excellent results, yet unable to find a decent job.
Another factor is that the outsourcing of IT jobs assumes these IT workers don't want to consume any of the IT goods themselves. I'm not sure this is going to be true indefinitely. If you're working in IT, and you produce for €1 the product that people in 'the west' are selling for €100, what do you do when you want that product yourself?
You can't possibly buy it for the price it's being sold for, since that's about 100x more than you get paid. But wait... you know it's actually manufactured for €1, so why not convince people to sell it locally for, say, €2? Then you can buy it, those people in the West can also still buy it for €100, and everybody is happy, right?
But then some clever kid will realize that software especially can travel just as easily the other way around, so if you can buy it locally for €2 and those crazy Western people are willing to pay €100, why not sell it to them for, say, €20? This clever kid will instantly make €18, which would be a lot of money for them, and the Western buyer would still feel he had gotten the product dirt cheap! Win-win, right?
Eventually this will not work, of course. Producing low and selling high will only work as long as the kids producing the stuff don't also want to consume, don't want to improve their standard of living. In manufacturing it may be possible to uphold this, but in IT we're talking about highly educated people who have the Internet at their fingertips. They also DO want that iPhone, and they DO want that 40" hi-def LCD, and they are tired of those crappy low-quality VCDs and prefer those shiny new Blu-rays.
Simultaneously, you have opposite forces working in the West. If you want to sell something for €100, then the people you want to buy that product also have to make at least €100. With manufacturing this worked, since the lowly paid jobs were outsourced and the local population was trained for higher-paid/highly educated jobs. With IT outsourcing this seems to be the other way around: the highly educated jobs are outsourced and the local population is supposed to take on lower-paid jobs?
So the Eastern guy producing the stuff we outsource is going to demand more, thereby increasing the average salary there, while the Western guy is going to have to demand less, thereby decreasing the average income there. Soon, "produce for €1 sell for €100" may not work anymore, since Honghui is going to demand €25 for his work and John will have no more than €30 to spare.
I don't quite agree with your ranting against CS. For starters, I don't really see you mentioning the fact that CS typically has different tracks.
At my university we had two main tracks, applied computer science and theoretical computer science, with the first being further subdivided into "computer systems" and "software engineering" and the theoretical track being subdivided into "algorithmics" and "foundational computer science".
In the bachelor part of the education (3 years), you get a mix of subjects from all tracks. The computer systems track will give you courses like computer architecture.
The software engineering track will give you subjects about requirements engineering, software engineering (obviously) and teach you diverse stuff like UML diagrams, development cycles, design patterns, etc.
The algorithms track, in turn, invited me to look at a diverse range of algorithms (obviously again), but also at data structures (how something like a hashmap works internally, what kinds of trees there are, what variations on linked lists exist, etc.).
The foundational track then let me look at stuff like Turing machines, grammars, finite state machines, theory of computation, etc. This is the stuff few 'programmers' would study by themselves if not told they should.
Finally, knowledge from several tracks was combined in the subject of compiler construction, where you had to write a Pascal-to-MIPS compiler in C. For this course you needed (C) programming skills, enough skill to understand a language you might not know yet (Pascal), an understanding of how a machine works at the low level (registers, assembly, etc.), and some idea about context-free grammars.
Now all of this is in the bachelor, meaning all the subjects are basically introductions to their respective fields. You're not a scientist yet if you have completed them. In the Master phase, you choose a specific track to specialize in, but you can still take subjects from the other tracks if you want. In my case I chose the computer systems track and learned some additional stuff about grids, parallel computing, software architecture, etc. Now the thing is, you can't really say whether CS educates you to become a scientist, or whether CS skills have practical value, without taking into consideration the track chosen by the student. Obviously an applied computer science track has more practical value for the average company than the theoretical track, but it depends on what you want to do, really.
Most of all, I don't agree with your point that CS somehow tries to cram knowledge into the heads of dumb students. Far from it... the way I experienced CS was as a period of my life where I was simply allotted the time and opportunity to dedicate myself to bettering myself. Classes weren't there to teach me stuff, but to *support* me in learning. Basically, what the CS program does is compile a list of books for you to choose from and a set of assignments to challenge you, and the rest really is up to you. If you don't give it your best, you don't progress much, if at all. People who don't realize this will drop out or will be sent away. That last thing is maybe a little controversial, but it does uphold a certain level of quality.
Besides that, CS gives you a base level of knowledge, but at least at my university we were also clearly invited to study and practice other material outside of the curriculum. If your goal is to become the ultimate über programmer (which is indeed not very scientific), participate in open source projects or take summer jobs that let you do some practical programming. Likewise, there are similar opportunities for those wishing to pursue a scientific career, like participating as an assistant in ongoing research.
Currently I'm the lead developer of a team of 9. We're building sophisticated enterprise software where we're dealing with some 500k LOC, a rich portal where customers can subscribe, log in, see their data, etc., and a highly clustered back-end capable of doing thousands of transactions per second. This product makes me responsible for the software architecture, the right design patterns being used in the right places, performance at very low levels of the system, setting up the development process for our team, working with complex and sometimes conflicting requirements, etc. etc.
A lot of stuff that I studied during university applies to my job... daily!
No, I don't write my own HashMap implementation each day, but when one day some particular HashMap was misbehaving, I found the problem in *minutes*, since I understand how these things work. The guy originally tasked with solving this particular issue had been staring at the problem for a whole day already and could only slam his fist repeatedly on the table crying: "It's broken I tell ya, it's broken!", but he simply didn't understand the (utterly simple) theory behind it.
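I obviously can't reproduce that exact bug here, but a classic way a HashMap appears to 'misbehave' is a key whose hashCode depends on mutable state; a hypothetical sketch (not necessarily the actual bug from the story):

```java
import java.util.HashMap;
import java.util.Map;

public class MutableKeyDemo {

    // A key whose hashCode depends on a mutable field -- a textbook
    // way to break HashMap lookups (invented example).
    static class Key {
        int id;
        Key(int id) { this.id = id; }
        @Override public int hashCode() { return id; }
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).id == id;
        }
    }

    static boolean lostAfterMutation() {
        Map<Key, String> map = new HashMap<>();
        Key k = new Key(1);
        map.put(k, "value");       // stored in the bucket for hash 1
        k.id = 2;                  // key now hashes to a different bucket
        return map.get(k) == null  // lookup misses: the entry seems "gone"
            && map.size() == 1;    // ...yet it is still sitting in the map
    }
}
```

Once you know that a HashMap locates entries by bucket first and equals() second, the "broken" behaviour is obvious; without that theory it looks like magic.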
I also don't construct new foundational CS concepts, but I do apply the stuff I learned practically. A while back some other guy was wrestling with a huge number of if/else statements and the many variables involved. After studying his code for a while, I proposed that he use a finite state machine, since it seemed to apply perfectly and would reduce the clutter and complexity of his code immensely. However, he didn't know what an FSM was. He quickly learned and did apply it, but because he had never learned about such things in advance, he would not have thought of it himself. Googling would not really have helped him here either, since he had no idea what to Google for.
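For illustration, a minimal enum-based FSM of the kind that can replace such an if/else tangle (the states and events here are invented):

```java
// Each state knows its own transitions, replacing a pile of
// if/else statements and boolean flag variables.
public class FsmDemo {
    enum State {
        IDLE, RUNNING, DONE;

        State next(String event) {
            switch (this) {
                case IDLE:    return event.equals("start") ? RUNNING : IDLE;
                case RUNNING: return event.equals("stop")  ? DONE    : RUNNING;
                default:      return DONE; // terminal state ignores all events
            }
        }
    }

    // Feed a sequence of events through the machine from the start state.
    static State run(String... events) {
        State s = State.IDLE;
        for (String e : events) s = s.next(e);
        return s;
    }
}
```

Adding a new state or event means touching one transition table instead of re-auditing every branch.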
These are just two simple examples, but there have been many, many more of such occasions.
To sum up, CS is absolutely very much worth the effort. Just don't think you can come in dumb, sit in class every day, come out smart, and never have to learn a thing again for the rest of your life. That's not how things work, of course. But if you come in reasonably prepared, work hard, and treat the education as a foundation to keep learning from, you'll reap the rewards for sure.
The Netherlands are the most environmentally unfriendly country in the world.
Yeah, all that cycling around instead of riding cars is really bad for the environment. And those windmills they historically used to keep the land (polders) dry... oh man, that really must have dealt some blows to the environment...
What is algebra, exactly? Is it one of those three-cornered things? -- J.M. Barrie