Is it the burden of the individual to continually upgrade their skills on their own time at their own cost, or does the company have a responsibility to provide access to training if they need constantly updated skills from their workers?
Also, what is the role of the programmer? For example, if the programmer's job is to design efficient algorithms for the team to implement in code, do they really need to understand a specific language or can they communicate effectively in pseudocode?
As to the example coworker you described - I don't think the issue is that their programming language knowledge is out of date. It sounds more like the coworker is not a team player and is unable to take constructive criticism - the person could be up to date on every programming language and tool out there and this would still be a problem.
A couple more things to add: First, my concern is that a professor would use such a system as a component of your grade - for example, making 10% of your grade "reading the pages," so that you can only get an A if you do all the reading (or do perfectly in the class and do half the reading). However, as my wife says, college is "about learning what not to do" - I physically cannot keep up with the 100-page reading assignments of 2 or 3 classes a semester between class time, lab time, and work (in part because I typically read 10 pages an hour, and even fewer when I am reading critically to learn the material). I tried to do everything and failed miserably, and now that I am being more selective I am doing much better.
Second, and more importantly, it is used to sell e-textbooks. I can spend $160 and buy a physical textbook for 2 semesters and put it on my shelf - or I can spend $100 and have access to it for 2 semesters. What if I want to skip 1 semester of a 2-semester class because a better course is offered at that time? Then I get to buy the book again for $100 for 2 semesters - and if I want to refresh my basic skills later in life, I can buy it again for $100. It is the model that Microsoft is pushing with Office 365, and what Steam, iTunes, etc. are pushing - you are not buying a product, you are licensing it. It is only a lease, and if you want continued access you have to keep licensing it. Greater profits for the companies and disenfranchisement for the poor, who no longer have access to cheap educational sources - such as myself, who used used-book stores to get 5-year-old textbooks for $5-20.
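To make the arithmetic concrete, here is a minimal sketch using the prices quoted above ($160 one-time purchase vs. $100 per two-semester license). The three-window scenario (splitting the two-semester class, then refreshing once years later) is a hypothetical usage pattern, not data from any particular publisher.

```python
# Rough cost comparison: buying a physical textbook once vs.
# licensing an e-text per two-semester access window.
# Prices come from the comment above; scenarios are hypothetical.

PURCHASE_PRICE = 160   # one-time purchase, keep the book forever
LICENSE_PRICE = 100    # per two-semester access window

def total_costs(access_windows):
    """Return (purchase_total, license_total) for a given number
    of separate two-semester access windows."""
    return PURCHASE_PRICE, access_windows * LICENSE_PRICE

# One continuous two-semester run: licensing looks cheaper.
print(total_costs(1))  # (160, 100)
# Split the class across terms, then refresh once later: it flips.
print(total_costs(3))  # (160, 300)
```

The crossover comes after the second access window, which is exactly the "skip a semester, then refresh later" situation described above.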
I've worked many years in a restaurant as a waiter, and my wife has moved up from working as a server to a manager in the restaurant industry. Did you know many restaurant employees have employee meals? These are either discounted (50% typically) or free meals, and sometimes free fountain drinks also. If Google employees have to report the meal as income, does that mean the servers (who in my state make $2.33 an hour plus tips) have to report the 50% discount as income also? Additionally, almost every manager gets free meals during their work shift - so do they have to record those? What of free fountain drinks - do you charge one per shift, or is each soda refill a full drink charge?
I understand that some people are upset that they pay in post-tax dollars while others are getting a free perk. I don't get a company car - but should those that do be charged the lease value as income when they are provided with one? If a company provides you a uniform, do you pay taxes on it as if you earned the money to buy it? If you make personal printouts or use a company computer for personal activities, if you are provided a company mobile phone and free plan, or any other of the "perks" a job provides, does that have to be counted as income?
Again, I understand you are upset. However, if a company wants to give their employees a perk, I think it should be allowed to. If a company is required to report food given to an employee as income in goods, then the same should apply to every little perk and not just food. In other words, take a pen home, then be taxed on that income - which would require a much more totalitarian system at work to monitor every little thing every employee does.
If you're not into parsing out the particulars of form factors and use cases, here's a really easy way to figure out if your phone or phablet is too big: Can you hold the device in one hand and 1) unlock the phone, 2) type out a text message with your thumb, and 3) adjust the volume with the rocker without using your other hand? If not, you might need a smaller phone.
That is a big assumption on my usage. I do not typically use my Galaxy SIII one handed - I typically use it with both hands. What do I typically use my smartphone for?
I text far more often now to communicate with family, but since I can't get a phone with an integrated keyboard and I have yet to custom build a case that holds a small Bluetooth keyboard, I need a bigger screen - because I need buttons that fit my short, stubby fingers AND I hate having only one or two lines of text displayed when typing.
However, I far more often use my phone for internet browsing, reading Slashdot and Reddit, reading email, watching Netflix or YouTube videos on the go, checking the weather, and as an alarm clock. My type of usage is becoming more common.
(Oh, as to using Netflix on a phone I often get "why would you want to watch Netflix on such a tiny screen" to which I say "that's why I want a larger screen" - and then they say "why would anyone want a larger screen on a phone" and I say "because I mainly use it to watch Netflix and browse the web on the go or when on vacation." My coworkers mocked me getting a 3.8" smartphone as being "huge" - and yet within the year they all had 4" screens and didn't see a problem with it.)
Next, all three aspects are not a function of the size of the smartphone but of design decisions. You can place the volume rocker and the unlock button anywhere, and make a one-handed virtual thumb-board for texting on even the largest of devices - but you have to break the traditional model and move things around. Why are the volume and power buttons towards the top of smartphones when people more often hold them towards the bottom? Why do virtual keyboards mimic physical ones rather than adopting a novel and more functional layout for one-handed usage? They don't have to be designed that way - there was an active choice.
As to the Galaxy Note II (my next phone when I can afford it) - that uses a Wacom pen input. As a long-time user of what used to be called Tablet PCs but are now called either slates or convertible tablet PCs (as a coworker who now works at Microsoft insisted, since a tablet means an iPad-styled device only to him and his Microsoft cohorts *rolls eyes*), I love a pen interface. What is more natural than writing a to-do list or taking a note with a pen? That is definitely not a one-handed activity, and thus there is no need to keep the device to a one-handed size.
Finally, the pocket issue. How many times do I have to hear this one? First, we all needed Razrs or at least flip phones because the candybar form factor was too bulky for a pocket. Then physical keyboards or extended batteries made a phone too big for a pocket and too thick to hold in a hand... but nothing felt better than sliding out a keyboard and using my Galaxy S (and the SIII is so thin that a slide-out keyboard really wouldn't have been that horrible to add). Now it's that the large screen makes them too big for all but cargo pants. I don't buy it - I have plenty of space in the pockets of my slacks or jeans with my SIII in a case, even with the "larger screen" (something coworkers told me would be too "unpocketable" but was a non-issue). I've looked at the Note II and it will fit fine also. Even if it didn't, I could get pants with larger pockets - and I don't mean cargo pants. Again, a non-issue.
CONCLUSION: With all that said about it being a design choice and preference - if a person finds a "phablet" like the Note II too big for them, that's fine. Just recognize it as a choice. I am saddened that those who want small flip phones and not large-screen candybar phones have trouble finding them. People say "there is no market demand for it" - but a product has to be on the market in order for people to show a demand for it. I want a large, oversized screen for my usage, and I have no pocket issues (even if I did, I have deep coat pockets and a bag - or, heck, I'll buy a belt-clip case). Some people want tiny and basic. Give people a choice in the market - not everything has to be identical. Also, recognize that what may be issues to you may not be issues to other people.
"Looks like Netflix may be getting some much needed competition in the video streaming market."
Sorry, let me explain - in this case competition is not a good thing. I am fairly certain both services will want exclusive distribution (at least for a certain time frame), much in the same way premium pay cable channels want exclusivity. This is not a good thing - it will lead to a dozen different services. It is not just the need to pay $10-$20 a month to different services; it is having to maintain billing and secure logins for all of them.
Streaming internet video is one of the few places I think a proliferation of businesses would be harmful. I may be wrong, and this may be a better solution - but I only see a more costly and more complex system for the consumer.
If you are talking college, don't think of a 4 year Computer Science (CS) degree as anything to do with computers. Instead, think of it as a specialty field of mathematics that should have been called Computational Theory. True, you can learn about how to program your own compiler, make your own database engine, program your own operating system kernel, and other things related to computers - but there is going to be a lot of discrete math that requires calculus, and a lot of complexity theory and proofs to do also. So, yes, if you want a leg up get as much calculus out of the way and make sure you are taking a math course every semester to keep your skills sharp. (Trust me, I waited over 10 years before going back to college and even with "cramming" all the math in my brain before enrolling, I am far behind in my math skills.)
Along with CS is Computer Engineering (CE), which is more about building hardware. Circuit design, pathways, and all the stuff that one intro course made me not care to do - and a lot of optimization is done in that field. Of course, any engineering field is going to be math heavy, so no real change there either (and also master your calculus-based physics).
What you are probably looking for is a Software Engineering degree - which, as an engineering degree, will require mathematics also, but will focus more on programming and software design.
Note the trend? A 4-year college means lots of mathematics no matter what - and if you aren't in an engineering school but in a college of letters and sciences, then be prepared for a liberal arts education (read: basic biological studies, basic natural science studies, an ethnic study, literature courses, humanities courses, and social science courses that will be at least 1/6 of your total credit load)... as a coworker of mine said, "a lot of BS work that I will never use." I disagree with him; my most useful courses have been English composition, contemporary art courses (which gave me a new frame of reference to draw on), and my environmental studies courses. They introduce new ways of thinking... and CS is all about thinking of new and efficient ways to solve complex problems.
The only way to avoid heavy mathematics (namely, at the calculus level and above) is to opt for a vocational/technical college. You know, 2 year degrees with titles like "Web Programmer" or "Database Administrator". Also, there are multiple fields to choose within the computer industry.
Of course, I don't want to discourage the 4-year route. It is hard, but worth it... and if you find you like academia, there are graduate programs that will open up a whole new way of learning. Heck, UW-Madison has imitated Cornell and implemented a Games, Society, and Learning program... it's serious business, but their lab consists of a PS3, an Xbox, 5 networked computers, a library of video games, and tons of obscure board games - and something like that is most likely where I will be once I complete my undergrad (if tuition doesn't skyrocket). So if you are going 4 years, my advice is to do a 2-year transfer from a 2-year college that is less expensive and has significantly smaller class sizes... Chem I and II or Intro Calc Physics I and II is much better in a class of 40 than 300.
5. Creativity - digital citizens have a right to create, grow and collaborate on the internet, and be held accountable for what they create
10. Property - digital citizens have a right to benefit from what they create, and be secure in their intellectual property on the internet
When I saw it was a Republican-sponsored bill, I was skeptical. Sorry, just my bias that the Republican party represents business rights over individual rights. I added emphasis to two provisions that I think are really just back doors/foundations for later SOPA/PIPA-style legislation. Note the term 'digital citizen' - since corporations have personhood, I am sure entities like Reddit and Slashdot would be digital citizens subject to ensuring "their" (i.e., user-submitted) "creative content" (content and links to content) was not infringing on others' intellectual property. This wording is something I will not support.
I will be lost in the cacophony of answers, but one area of interest and fear for me was neurobiology. Specifically, studying human neurobiology. Imagine if one day someone could figure out how your brain works - what is perception, what is emotion, what is thought, what is learning, and what is memory. Imagine they could explain it in biomolecular terms - how electromagnetic fields from neurons combine with neurotransmitters to cause reactions. X set will induce violent rage, blocking Y while stimulating Z will create passivity, do process P and you can implant false memories of pattern T. Once a scientist understands how to manipulate these types of things, well... watch any mind-BLEEP movie like Videodrome or even Total Recall - you know the ones I'm talking about, the ones that leave you asking "was it really happening to the character in the film, or was it all in the character's head?"
I do have a bias. My sister is schizophrenic, and my other sister suffers from depression. I believe chronic depression or manic depression is common in my family. I have experienced both depression and manic episodes. I ponder things like perception and reality, and once drank to the black-out stage to try to understand what that loss of control and perception is like. I often delve into my own personal philosophical corners about what being human is and what existence is (both in a religious - Buddhist/Shinto - and nonreligious - Zen - manner; disclaimer: I am an atheist). Naturally, research into this area fascinates me. We've seen biology, chemistry, and physics explode - and the advances in neurobiology and psychology are taking great leaps now.
I guess in the end, most of these topics are reflections of our own fears. I see many of the topics raised being "what makes me feel powerless/helpless/lacking control". For me, it is losing who I am - either by accident (such as the traumatic brain injuries that can cause personality disorders or destroy your ability to form long-term memories... imagine living in a perpetual now where new encounters are not encoded for later recall!) or manipulation (a process is discovered that can manipulate mood or memories).
If technological advancement leads to greater and greater destructive powers, and destructive powers are much easier to develop and implement than constructive powers, then how do you explain the human population explosion? It seems to me that the constructive sciences have far outstripped the destructive ones - at least, so far.
I think destructive power is asymptotic, meaning that you can approach 100% destructiveness but never quite reach it. Remember, human populations have been pushed towards extremely low numbers in the past and we have continued to thrive as a species. In part, this is due to our adaptability as a species. In fact, I would argue that science has made us more resilient to seasonal variations and natural afflictions, but is also making us less resilient to rapid climate change and to virulent strains that target monocrops or humans directly. However, even if a disaster strikes, I think there will be some humans who survive - the question is whether they would thrive, or whether we would die off as a new dominant species out-competes us.
I think you could convince a pharmaceutical company that creating multiple strains of an existing virus could be worthwhile, if justified by a large potential for creating more efficient vaccines or remedies that are also likely to be effective against future natural mutations of a virus.
Wasn't this the reason why the mutant bird flu was developed? The goal was to create a strain in order to understand the biomolecular aspects of infection. If you can engineer a more virulent strain, analyze the differences, and see how and where it interacts with cells, you are more likely to construct a binding agent for its receptor sites that will negate the infectious nature of the disease. True, you have the potential to release a nasty superbug if it gets weaponized - but if you assume that is the only eventual outcome, wouldn't it be just to let humanity be wiped out, since we are such an amoral species? (I speak hypothetically, and I am not advocating bio-terrorism in any way - in fact, I don't even advocate using the bio-terrorist in a game of Pandemic, as it ruins the awesome cooperative nature of the game.)
After reading your comment, I can only think of the anime Akira (I never got to read the manga) - for sometimes it is in ignorance that we misuse our knowledge.
- “If something went wrong in the order, and an amoeba has the power of a man...”
- “Is that Akira?”
- “An amoeba doesn’t build houses or bridges. They only eat.”
Unless said Death Ray can be narrowly focused on a pinpoint target to attack cancer cells in a body, thus eliminating the cancer from the patient...
Not so fearful of grey goo (nanobots) as much as of the nano-particles that are starting to be used more often. After all, we once went around spraying DDT on our kids during picnics because it was so safe, and the government reversed its view on DES and let it be used for menopause... then to treat pregnant women... then in our chickens... then in our beef cattle, before eventually banning it outright - after all, it was perfectly safe despite disastrous reproductive results in every test animal. (Animals do not equal humans - even if every model species has negative results, we still cannot say it would harm humans, so it must be safe! Read Toxic Bodies by N. Langston if you want a real eye-opening tale about endocrine disruptors.)