You mean the arm of Alcatel-Lucent that decided they will no longer do pure science, and concentrate on product-oriented R&D?
Your comment completely disregards the benefits of the social and emotional learning done in college. Also known as making friends (with a diverse set of people) and getting laid.
And none of this happens in the workplace? I beg to differ... apart from the getting laid part, perhaps. Dunno what your company policy is like.
It also assumes that someone working as an IT monkey will spend free time learning but that someone in college won't.
No, it assumes that both will continue to learn on their own, and thus end up at about par for the time spent. The experience may differ between the practical and theoretical, the mechanics and the concepts. I am assuming a self-motivated, intelligent individual in both cases. But my point isn't that one is better than the other, but that a degree is not necessary to be successful in the field, contrary to the previous poster.
As a blanket statement? For the IT field? Bullshit.
You can be very successful in technology with no degree whatsoever. You just have to be willing to put in the time and the work yourself.
If you're computer literate, can write basic code, and have a reasonably broad exposure to different OS and hardware platforms (enthusiastic hobbyist), you can get an entry-level job. You'll spend the four years that you would have spent in college in the field, doing grunt work. At the end of that time, chances are good that you and the recent college grad will be competing for the same jobs. They'll have skipped the cable-monkey/printer-minder stage, and you'll have skipped out on OS theory. However, you as an enthusiast are probably hanging out in the same community of user groups and nerds that they are, and that's where a lot of the real practical learning is - in the sharing of new information amongst peers, not the dictation of established doctrine in the classroom.
The hard part for the non-degreed is establishing credentials, and it's all up to you to do that. Your degree is a certification that you're (supposedly) not an idiot, and it's tangible. As someone without that piece of paper, you have to establish that cred yourself - by contributing to FOSS projects, writing your own tools, getting your name known in the community. I'm more likely to hire the guy with no degree and five OSS projects to his name than someone with a freshly minted BS in comp-sci, to be honest.
I've no degree - all my credentials come from my body of work, professional network and peers. I did my time as a field tech, parlayed my experience with my hobby pursuits into a systems management job, learned all I could in the process, then moved into systems analysis and engineering, picking up skills as needed on the fly. Now, I'm sitting nicely in a position with a solid six-figure salary and 17 years of an established reputation backing me up.
A BS is an ante into the game, but after that both the academic and the street-smart have to play their cards well to advance.
I'm going to go all terminology-pedant on you, because I've been hearing the wingnut teabaggers misuse "communist", "socialist", and "fascist" for a while now as fear words.
Communism is a *socioeconomic* philosophy, where property is held in common, particularly means of production, with common access to means of consumption. It has nothing to do with quantity or quality of government regulation.
Totalitarianism is a *political* philosophy where the state recognizes no bounds to its power to control the actions and lives of its citizens/subjects.
Many Communist countries also had Totalitarian governments, because unless you have a very small, commonly-aligned populace, everyone must be forced to participate in a communist system for it to function as intended. However, you can have one without the other.
You say "communist", you mean "totalitarian."
The link you provide speaks to the problems of bit-packing on the symbol states, and the solution of Trellis Modulation, which I mentioned. Trellis coding allowed for packing more than 4 bits to each symbol without increasing the error rate, leading to the development of the v.32bis standard and 14.4Kbps modems. Which is what I said - it wasn't high baud rates, but better bit packing that realized faster speeds.
And you're still saying "baud" when you mean "bits per second".
I think you're just misunderstanding the problem.
The "baud rate" of telephone lines is pretty slow. Baud rate is the number of symbol transitions per second the media can support. Baud rate and bits/second have not been equivalent since Bell 103A/V.21 frequency-shift-keyed modems, where 300 baud meant 300 bps, each state transition being a discrete tone that indicated a "mark" or "space" (0/1). From then on, Bell 212A/V.22 used phase-shift keying to get 1200 bps out of a 600 baud symbol rate, encoding two bits of information per symbol.
POTS lines are pretty pokey - the practical maximum baud rate is less than 3500 symbols/sec. The speed advancements in later evolutions of POTS modems came from increasing the number of bits that could be encoded per symbol, using QAM and Trellis Modulation. A 33.6 kbps modem is encoding nearly 10 bits per symbol onto a 3429 baud carrier.
So, when you kept hearing "phone lines max out at less than 4800 baud", that was correct. The engineers kept wringing higher bit rates out of narrow-band POTS by putting more information on each of the symbols transmitted.
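To make the arithmetic concrete, here's a minimal sketch of the baud-vs-bps relationship described above (figures taken from the standards mentioned; V.34's effective bits-per-symbol comes out fractional because of how it maps bits onto the constellation):

```python
# bits_per_second = symbol_rate (baud) * bits_encoded_per_symbol

def bps(baud: int, bits_per_symbol: float) -> float:
    """Line rate = symbols per second times bits encoded per symbol."""
    return baud * bits_per_symbol

# Bell 103/V.21: one bit per symbol, so baud and bps are equal.
assert bps(300, 1) == 300
# Bell 212A/V.22: phase-shift keying, 2 bits per symbol on a 600 baud carrier.
assert bps(600, 2) == 1200
# V.34 at 33.6 kbps on a 3429 baud carrier: working backwards,
# that's roughly 9.8 bits per symbol.
print(33600 / 3429)  # ≈ 9.8
```

The point of the sketch: the baud numbers never get much past 3400, and every speed bump after V.22 lives entirely in the second factor.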
Then, with V.90 (and later V.92), the modulation schemes took advantage of certain characteristics of non-muxed POTS lines to use PCM digital encoding instead of an analog audio carrier. Unfortunately, if you were serviced through a SLC-96 ("Slick") muxed subscriber loop, which multiplexed the signal from your subscriber line to the central office, you could only connect with older analog modulation schemes such as V.32/V.32bis/V.34.
Don't mean to harsh your smug, but we're doing just this - restart services from the failed component, service the failed resource on a non-critical timeframe. The small shop with a half-dozen server boxes doesn't give a damn about cooling costs or this level of service, for the most part. If they do, they're likely going to someone else to satisfy that requirement, not doing it in house.
I've got a stack of servers in my datacenter that are allocatable on demand. Any unused server blade is a potential spare. If a production blade tips over with a CPU fault, memory error, or similar crash, its personality (FC WWNs, MACs, boot and data volumes, etc.) is moved to another blade and powered on through an automated process. Since the OS and apps live on the SAN, both VMs and dedicated server hardware can be abstracted away from the actual services they provide.
This is a product my company's selling to the market at large right now, and that I designed. Any of our IaaS customers can take advantage of the redundancy and fault tolerance built into the system. Even the six-server small IT shop.
Even then, a small IT organization can easily virtualize and provide some level of HA services in hypervisor clusters now. It's just not that hard anymore. Take the handful of servers you're running on now, replace them with an equal number of nodes in a VMM cluster, and go to town. If any of those systems fails, shift the load to the other nodes and effect repairs.
"Cloud computing," while it has very nearly achieved meaningless buzzword status, is an attempt by the business and marketing types to get their heads around what is a very real evolutionary transformation occurring in IT. The drivers are the drying up of CapEx budgets, the need to reduce service delivery time, and the requirement to purchase and pay for only what an organization needs to fulfill their business requirements.
Capital expenditures are coming under increased scrutiny, and are under constant budgetary pressure. "Cloud" based, on demand services allow IT service procurers to shift from a CapEx based model of owning the infrastructure to an OpEx model. Operational expenditures are typically larger chunks of budget than capital purchases, and moving IT services there can allow them to get "lost in the noise". Less red tape, less stringent approval processes, etc.
Time to deliver service is a labor cost, and if a procurer can shift that operational expense from internal overhead required to deploy an IT architecture to acquiring those same services from a cloud provider, it's perceived as a big win. The provider gets to deal with the headaches of capacity management, infrastructure design and integration, and delivering IT resources. The purchaser gets the luxury of simply specifying how much they want, for how long, and letting the provider leverage its economies of scale and automated processes to deliver the resources within the terms of the provider's SLA. The tradeoff is that the consumer of cloud services loses the ability to specify the platform and all its parameters in exchange for rapid delivery of a standardized service.
IT organizations are also under increased pressure to abandon the concept of designing and purchasing for peak capacity. Cloud providers are specifically addressing these needs by allowing their customers to pay only for what they use, not the spare capacity. Since the "cloud" capacity is shared, reused, and managed by the provider, the customer is afforded the ability to scale their environment dynamically to meet the needs of the business and its budget.
Now, how this ties into Web Services is important. For a long time, Web Services were a solution in search of a well-defined problem. Now, with the "cloud" becoming a workable construct, Web Services come to the forefront as the way that stateless platforms can interact without intimate knowledge of the underlying infrastructure. Web Services will become more and more important as IT services are increasingly abstracted away from the hardware and OS platform. Having worked for the past two years as a design architect for an infrastructure-as-a-service type platform, I can say with some authority that they're an integral part of how we're going to need to deal with virtualized environments and stateless service contexts as they become pervasive elements of IT solutions.