Comment Bad analogies abound (Score 1) 606

Different skills take differing amounts of time and experience to master. Professional musicians generally start playing in or even before high school; no one shows up at a university with no experience and expects to become a concert pianist or cellist. The expectation is that it takes at least 5-10 years of experience to become a top-notch musician. The counterexample offered, of a surgeon who would have been expected to operate on his pets in high school, is also silly: while experience with musical instruments is expected to begin in the early teens, a surgeon's ten years of experience begins in college and extends well beyond it, through medical school, internship, and residency. So it's incorrect to compare the three fields directly.

Part of the difficulty these academics are having is that "computer science" encompasses many different fields. Some academics are really borderline mathematicians. Some students really belong in a vocational school, because the general knowledge of computer science most university programs teach is neither needed nor useful if your plan is to write business logic in Java for some megacorporation. Those students would be much better served by a 1- or 2-year program that focuses on the specific technologies they'll be using. And there are other students who plan to make programming a career in any of several different fields, each involving its own specialised tools, terminology, and mixture of theory and practical knowledge. All of this lives under one roof in most universities. Complicating matters further, students show up with many different levels of experience; some may have grown up with little access to computers, while others may already be accomplished programmers in the open source community looking for a degree and the opportunity to gain advanced knowledge of theory. It will never be possible to come up with a plan for the first two semesters that works for all of these cases. In that sense, the one-size-fits-all approach is indeed broken. But that is completely different from insisting that "up-or-out" is wrong, or that the basics need to be introduced even more slowly or in even more courses.

Students in most programs get two semesters of extremely basic instruction. This covers things like what variables, expressions, and functions are, the concepts of sequence, decision, and iteration, the basics of syntax in one or two languages, what memory is, and maybe some simple data structures like arrays and structures. Anyone who comes in with any programming experience at all, in any language or context, already knows at least 90% of everything taught in these courses. Forcing everyone to take them constitutes a tax: value is given in the form of tuition and time spent, but none is received in the form of increased knowledge. Students with no experience may well benefit from these courses. To suggest that they're "too fast", however, is ridiculous. In those two semesters, students will receive about 80 hours of lecture instruction, two 300-page textbooks, usually at least one textbook with practical exercises in it, at least 15 hours of structured practical instruction from teaching assistants, generally unlimited access to computers, compilers, interpreters, and other tools as needed, unlimited access to a library with thousands of relevant documents ranging from trivial to cutting-edge research, access to a peer group, and dozens of office hours with the instructor and teaching assistants. It's silly to suggest that in 8 months a committed student with access to all those resources cannot pick up the basics of computer use and programming in at least one high-level language. And that's really all that's expected; there are separate courses for computer architecture, advanced data structures, operating systems, compilers, graphics, linear algebra, logic, calculus, programming language theory and features, algorithms, networking, databases, and so on. No one is expecting a student completing those two courses to be a master of anything. No one is suggesting that a student with those two courses completed should receive a computer science degree. And no one is suggesting that a student should be ready to get a job as a programmer after completing those two courses. They are introductory, covering the basics that other coursework will leverage to teach more advanced concepts and practices.
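To put that "90%" claim in concrete terms, here's a sketch, purely illustrative and in Java since that's a common choice for these courses, of roughly the territory those two semesters cover: variables, expressions, functions, sequence, decision, iteration, and a simple array.

```java
// Illustrative sketch of the CS1/CS2 basics described above
// (hypothetical example; any mainstream high-level language works the same way).
public class Cs1Basics {
    // A function: takes parameters, returns a value.
    static int sumOfEvens(int[] numbers) {
        int total = 0;                        // a variable holding state
        for (int n : numbers) {               // iteration over an array
            if (n % 2 == 0) {                 // decision
                total += n;                   // an expression with assignment
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5, 6};      // a simple data structure: the array
        System.out.println(sumOfEvens(data)); // prints 12
    }
}
```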

A student who can't successfully complete these two courses in 8 months needs to reevaluate his or her plans. That's not at all unusual for freshmen; most students change their field of study at some point. Some of those students may decide computer science is not interesting to them, or that they're ill-suited to it. Others may find that they lacked commitment and dedication and should repeat one or both of the courses with greater focus. But someone who is genuinely trying and using all the resources available and still can't grasp these basic concepts after 2 years has to face the fact that he or she simply is not going to master computer science or programming, regardless of what his or her plans were after graduation. Here's where the opportunity arises to make the right analogies with other fields. Not everyone is cut out to be a surgeon or master musician. No amount of training, practice, and study will ever put a mediocre violinist in the first chair at the San Francisco Symphony. So too are there people who simply aren't cut out to be programmers. No one knows why. Maybe that CS1 dropout will end up in neuroscience and be the first to figure it out. That's a much better outcome for everyone than extending the introductory material across two years, boring the more advanced students to tears and confiscating their time and money while stringing along the inevitable failures for a second wasted year instead of letting them know early on that this field of study isn't right for them so they can move on to something else.

While we're at it, we should be doing what other engineering disciplines do and specialising job functions and programs of study a bit more. For example, a graduate with a degree in civil engineering is a full engineer, with the knowledge required to design a wide range of structures. He or she can then take the professional engineer's exam, conferring mastery and opening doors professionally. That is not the same line of work as steelworker, concrete pump operator, excavator operator, or welder. All of those functions are necessary to construct something, but no one pretends that an engineering degree is needed to operate an acetylene torch, nor does anyone suggest that acetylene torch operators are qualified to design a dam.

One way to solve this problem would be to split off computer engineering. A few schools do this already. Unfortunately, in most cases the differences from computer science are superficial, usually consisting of dropping a theory course and adding one or two from an electrical engineering program. This is not necessarily bad for the computer engineering students, but it leaves far too much emphasis on practice for true computer science. Better would be to move computer science into the Mathematics department, which is usually separate from Engineering. All engineering disciplines require some study of mathematics; computer engineering is no different. That gives us true computer engineers, analogous to civil engineers. But it still leaves most employers hiring them to operate acetylene torches.

We solve that problem with vocational programs designed to teach students the practical aspects of basic programming in commonly encountered environments. This is where students go to learn about EC2 APIs, writing and deploying J2EE apps, customising PeopleSoft, or writing apps for iOS. Such a program needs an abbreviated version of a computer engineering degree, so that the students can understand the tools they're working with. But it does not require much theory, math, or advanced concepts. These tasks tend to be quick and dirty: get it done now, deploy it for a quick buck, and move on. To the extent that more thought is required, a computer engineer or a team of them should be involved in that, breaking down the project into tasks that can be done by people with practical knowledge but little understanding of engineering principles.

This is usually what "good" programmers or architects end up doing anyway; the fiction lies in the fact that they have the same apparent credentials as the functionaries who can't be trusted to choose a sane algorithm, do basic design, or create an interface. It's unfair to the "architects" who studied and mastered computer science to force them to compete for jobs with "functionaries" who lack that knowledge, and it's unfair to the "functionaries" to waste 4 years of their time and stick them with $150k in debt just to qualify them for menial jobs.

A building, bridge, or dam can collapse because of a bad weld, or because of bad design. So too can a computer system fail because of an off-by-one error or a misdesigned interface. For these reasons, we require that civil engineers understand the strength of various welds in different materials, and that computer engineers understand how to test modules and identify common errors by inspection or from QA data. But we do not require that welders understand fluid dynamics, nor do we give civil engineering students 4 semesters to figure out statics. Why haven't we reached the same point with computer engineering programs?
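For concreteness, here's a minimal illustration (hypothetical Java, not drawn from any real system) of the off-by-one failure mode mentioned above, the software analogue of the bad weld:

```java
// Hypothetical illustration of an off-by-one error: the kind of small,
// mechanical defect that inspection or QA data should catch.
public class OffByOne {
    public static void main(String[] args) {
        int[] readings = new int[5];

        // Buggy version: '<=' walks one element past the end and throws
        // ArrayIndexOutOfBoundsException when i == 5.
        // for (int i = 0; i <= readings.length; i++) readings[i] = i;

        // Correct version: valid indices run from 0 through length - 1.
        for (int i = 0; i < readings.length; i++) {
            readings[i] = i;
        }
        System.out.println(readings[readings.length - 1]); // prints 4
    }
}
```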

Comment Re:No legitimate use (Score 1) 374

Good points all, but maybe the point of my assertion was lost. New Zealand is smaller than the US state of Nevada, so it's conceivable that a major disaster could affect much or all of the country, and a national-scale warning system makes some sense there. My contention was not that improving emergency communications is a bad idea, only that the useful scale for such a thing is local or perhaps regional. In the United States, that would mostly mean the county or state level. For perspective: if a US-nationwide system were useful, then the messages disseminated near Christchurch should also have been sent to the residents of Perth, Australia. Silly, but the distances are directly comparable.

Comment Re:No legitimate use (Score 1) 374

But go ahead, prove me wrong: come up with a disaster that takes out Miami and Seattle but leaves the phones intact.

The Doritos factory explodes?

ZOMG, I didn't think of that! I'm sure the next time I'm stoned out of my mind this will seem really important, and the president's personal message of reassurance (and maybe a suggestion that I just go get a burrito instead?) will convince me that the whole thing was worthwhile after all.

Comment Re:No legitimate use (Score 1) 374

Radiological "Dirty Bomb"

Significant impact is limited to the immediate area of detonation and a limited distance downwind. This is unlikely to be important to people in more than a handful of states.

contaminated water supply

Intensely local. Most water supplies serve only a few hundred thousand people. Worst case, this is a regional disaster (like we're seeing with flooding in the Mississippi basin, actually).

biological or chemical attack

Unless I've missed something in weapons development, there's no way such a weapon could affect millions of square miles. Though a biological attack could lead to...

pandemic

...which develops slowly enough that existing warning systems are sufficient, though as I noted in a previous comment, by the time a pandemic is under way it's really too late to do much about it. Highly communicable diseases saturate their host populations very quickly once critical thresholds are crossed. Note that in the 1918 flu pandemic, Gunnison, CO had more than enough warning to isolate itself entirely and avoid a single case, despite communication networks that were primitive at best. Everyone else read about it in the newspapers and then got sick a few weeks or months later anyway.
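To illustrate that threshold effect, here's a toy discrete-time SIR-style sketch (invented parameters, not calibrated to any real disease) showing how fast infection saturates once the reproduction number exceeds 1:

```java
// Toy SIR model with made-up parameters, purely to show the threshold
// effect: with R0 > 1 the epidemic saturates within a few dozen steps,
// far faster than a nationwide alert could usefully act on.
public class SirSketch {
    public static void main(String[] args) {
        double s = 0.999, i = 0.001, r = 0.0; // fractions of the population
        double beta = 0.5, gamma = 0.2;       // assumed; R0 = beta/gamma = 2.5
        for (int day = 0; day <= 60; day += 10) {
            System.out.printf("day %2d: infected %.3f, recovered %.3f%n", day, i, r);
            for (int k = 0; k < 10; k++) {    // advance ten one-day steps
                double newInf = beta * s * i; // new infections this step
                double newRec = gamma * i;    // new recoveries this step
                s -= newInf;
                i += newInf - newRec;
                r += newRec;
            }
        }
    }
}
```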

rioting

Perhaps the most localised example of them all. Even the largest riots cover only a few square miles. Even if, as has occurred a few times, there are riots in several cities at once, the vast majority of US residents are completely unaffected. Those who are affected would be much better served by local warnings than some generic nationwide one.

Not sold, sorry.

Comment Re:No legitimate use (Score 1) 374

Yes, I read about this. The science is solid and there are plans afoot to build a similar system in California, using state funds. But that's not what I'm challenging here: why would someone in Indiana care that a quake is occurring in Arkansas? This was the point I was trying to make: all useful warnings are for events that are inherently local or at most regional. Even Japan's are; as I pointed out, the entire country is about the size of California, so even if the seismic detectors can't determine the affected area any more accurately than that, it's still silly for a detector in Berkeley to set off cell phones in Boston. It truly strains credulity to assert that any warning of a disaster relevant to a majority of US residents will be receivable or actionable. We're talking about places separated by 1/6 of the earth's circumference. So back to my point: the "presidential" warning level should be eliminated from this system if it's kept at all; it is not useful and at best serves only to feed conspiracy theories. Local and regional agencies already have effective mechanisms for predicting and disseminating information about emergencies. Keep the federal government out of something that's already working quite well in most places. If something needs improvement here, it's in places with ineffective local disaster management agencies (New Orleans, I'm looking in your direction).
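To put rough numbers on why quake warnings are inherently local, here's a back-of-the-envelope sketch. It assumes a typical crustal S-wave speed and an effectively instantaneous electronic alert, so the lead time is roughly the S-wave travel time; the catch is that shaking attenuates so quickly that distant cities never need the warning at all.

```java
// Back-of-the-envelope earthquake warning lead times. The S-wave speed is
// an assumed typical crustal value, and detection latency is ignored.
public class QuakeWarning {
    public static void main(String[] args) {
        double vS = 3.5; // km/s, assumed S-wave speed
        for (double km : new double[]{20, 100, 300, 4300}) { // 4300 km ~ Berkeley to Boston
            double leadSeconds = km / vS; // alert travels ~instantly vs. the S-wave
            System.out.printf("%5.0f km: ~%4.0f s of warning%n", km, leadSeconds);
        }
        // At 4300 km you'd get over 20 minutes of "warning" of shaking
        // too weak to feel, which is exactly why the alert is useless there.
    }
}
```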

Comment Re:No legitimate use (Score 1) 374

The irony of this example is that in the truly devastating 1918 outbreak, the government actually had all the communication ability it needed to prevent and contain the disease: it began at an army base in Kansas, and the government's refusal to limit troop movements, despite full knowledge of the situation, is what allowed a small outbreak to become a global disaster. Having special chips in every phone, or for that matter irresistible obedience implants in everyone's brain, would not have helped. In fact, it's very possible that had the national communication network been shut down, the government's orders would never have been received and the outbreak would have ended a few weeks later with a death toll in the dozens. By the time an epidemic of contagious disease becomes relevant to the entire nation, it's far too late to do much about it, so your example not only supports my point, it also emphasises that, if anything, the federal government is actively harmful in containing such outbreaks.

Comment No legitimate use (Score 4, Insightful) 374

Can anyone come up with an example of a "national disaster" (i.e., a disaster affecting most or all of the contiguous United States) in which any significant part of the telephone network would still be functioning? Because I can't. All sub-extinction-level disasters are inherently regional and nearly all are local. As an example, Japan just suffered a colossal earthquake and 15-meter tsunami... and yet despite the catastrophic loss of life and property, nearly all major damage is confined to a few prefectures; many parts of the country didn't even feel it. And Japan is about the size of California.

But go ahead, prove me wrong: come up with a disaster that takes out Miami and Seattle but leaves the phones intact.

Comment Re:Bravo! (Score 2) 537

Way to avoid addressing the underlying problem.

It's not under their control. If they start treating their employees better, their costs will rise and they will either lose money or have to bid higher, in which case they'll lose contracts (and lose money). Contract manufacturing (CM) is a very competitive business, and Foxconn, as a Taiwanese company, is already at a disadvantage in some ways relative to mainland CMs.

There are several ways this can play itself out. The employees can unionise (which on the mainland will require overthrowing or radically reforming the government) and demand better wages across the entire industry. In that case, three things will happen. First, prices of finished goods will rise. Second, these companies will begin investing in places where unions aren't allowed (a race to the bottom). Third, unemployment in places with unions will rise, encouraging the creation of non-union shops where standards are lower.

Another way this can play itself out is if the people who buy these goods start demanding verifiable standards of treatment for the people who manufacture them. This would have to be backed up with a willingness to (a) not buy products that fail to meet the standards, AND (b) pay much higher than current prices for them. This is unlikely because people in most developed countries are already living beyond their means and cannot afford to pay more.

We've seen all of this before. Some combination of these things will in fact happen, as they did in today's developed countries. Ultimately, unionised manufacturing workforces are not competitive and will die out, leaving these low-value activities to move to whatever country does the most to ensure that labour is cheap. This is nothing more complicated than Ricardian comparative advantage. As this happens, the more developed countries will find their unemployed union workers' children looking for higher-value work, shifting the economy from low-value manufacturing to higher-value engineering and services. Meanwhile, those in the third phase (debt collapse) will be forced to rejuvenate their own "old economy" sectors and become more competitive with the rising economies. This will mean a diminution of lifestyle as they pay down debt and accept lower-paying jobs in which their products are competitive.
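Since "Ricardian comparative advantage" gets invoked a lot, here's a toy worked example (made-up numbers, sketched in Java) of why low-value work migrates even when the developed country is absolutely better at everything:

```java
// Toy Ricardian example with invented numbers: the developed country is
// absolutely better at both activities, yet the rising economy has the
// lower opportunity cost in manufacturing, so manufacturing migrates
// there and each side specialises.
public class ComparativeAdvantage {
    public static void main(String[] args) {
        // Labour hours needed per unit of output (assumed values).
        double devEng = 1, devMfg = 2; // developed country
        double risEng = 6, risMfg = 3; // rising economy

        // Opportunity cost of one unit of manufacturing, measured in
        // units of engineering output forgone.
        System.out.printf("developed: %.2f eng units per mfg unit%n", devMfg / devEng); // 2.00
        System.out.printf("rising:    %.2f eng units per mfg unit%n", risMfg / risEng); // 0.50
        // 0.50 < 2.00: in real terms manufacturing is cheapest in the
        // rising economy, regardless of wages or exchange rates.
    }
}
```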

This is best viewed as a wave moving around the planet from east to west. Where the United States was in the 19th century, China is today. Where the United States is today, China will be in the 22nd century. None of this is new. None of it is surprising. It's just basic economics. While it may seem reprehensible to those of us with a recent cultural history of moral outrage over this sort of behaviour, the "robber barons" of 19th-century America were no different. In time the Chinese will develop their own moral outrage over it, and put a stop to it. But imposing that change externally is all but impossible, because it requires fighting economics. Simply put, people will stop being treated this way when they start refusing to be treated this way, even if it means not having a job. Happened before, will happen again.

Comment Re:€ (euro) (Score 1) 868

Broke, sort of. Sliding into bankruptcy, no. It is not possible for a US state to declare bankruptcy, any more than it is for any other sovereign. It is of course possible for them to default on their obligations. It remains to be seen whether any US state will default in the near future, but many have done so in the distant past. Based on their fiscal positions, most people think no US state will have to default, but no one is certain that their politicians will make the difficult choices necessary to continue meeting their obligations to bondholders. In fact, much the same could be said of the so-called PIIGS: Ireland, for example, could very easily have thrown its banks to the wolves and avoided both default and any bailout; it chose not to do so by a narrow margin, and I suspect a referendum would overturn that decision.

Comment Re:Birth Control (Score 1) 637

I doubt it would make much difference. A rational person of either sex is going to look at that list and sort it into two categories: things I need in order to fix the cash crunch (i.e., get a job) and things I don't. If there's no money left after making sure I'm positioned to find work, I'll have to find other ways to solve those problems -- in the case of birth control, skipping sex until I'm getting a paycheck again is an obvious answer.
