Comment You should know enough to be able to debug (Score 2) 466

I have fielded this question a number of times.

Right now, the job market for developers is not very discriminating. They'll take anyone they can get, which means your barrier to entry is low. That said, I've done a bit of research, and the most lucrative and mobile entry-level development job you can land is probably web application developer. Not designer, but someone who makes a web-based application 'go'.

With that in mind, you'll need the following skills: SQL, HTML, CSS, JavaScript (jQuery specifically, but other libraries are good too), and a backing language - probably Java or C#/ASP.NET. You'll also need to become familiar with your web execution environment - Tomcat is big in the Java world, and naturally IIS is used in the .NET world. Luckily for you, there are many resources for learning all of these things absolutely free of charge, with huge communities of volunteers helping each other out.
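
To give you a feel for how those pieces fit together, here's a toy sketch of the kind of thing you'd be writing and debugging all day as a junior web app developer: a Java servlet, deployed under Tomcat, that takes a request parameter, runs a SQL query through a PreparedStatement, and emits HTML for your JavaScript/jQuery front end to consume. The 'customers' table and the jdbc/appDb datasource are made up for illustration; treat it as a sketch, not production code.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    // Hypothetical names throughout: a 'customers' table and a container-managed
    // DataSource registered as jdbc/appDb. Real code would also escape the HTML.
    @WebServlet("/customers")
    public class CustomerServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            try {
                DataSource ds = (DataSource) new InitialContext()
                        .lookup("java:comp/env/jdbc/appDb");
                try (Connection conn = ds.getConnection();
                     PreparedStatement ps = conn.prepareStatement(
                             "SELECT name FROM customers WHERE region = ?")) {
                    ps.setString(1, req.getParameter("region"));
                    try (ResultSet rs = ps.executeQuery()) {
                        out.println("<ul>");
                        while (rs.next()) {
                            out.println("<li>" + rs.getString("name") + "</li>");
                        }
                        out.println("</ul>");
                    }
                }
            } catch (Exception e) {
                // Wrap JNDI/SQL failures; the container logs the root cause.
                throw new ServletException(e);
            }
        }
    }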

So, what level do you need these skills at? Well, as a new hire - regardless of your skill level - you're unlikely to be handed a brand-new project to start on. More likely, your first few months will be a combination of learning your company's domain knowledge (finance, autos, whatever) and tackling bug fixes and/or small feature enhancements. For that you'll need to understand how the programs work so you can track down problems. You'll have to be familiar with IDEs and their debugging capabilities - especially how to set up and debug web-based programs on your local system, as well as remote debugging. You're going to have to be able to read code well enough that you can translate most of it into English in your head, without going line by line until you actually need to dig that deep. That means recognizing structures and flow easily (which is why I also recommend you avoid Ruby on Rails and Spring - and maybe even Hibernate/NHibernate - until you've learned more).
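
One concrete example on the debugging side: to remote-debug a running Tomcat, you generally start its JVM with the JDWP debug agent enabled and point your IDE at that port. Something along these lines - the port is arbitrary, and exactly where you set the option depends on how your Tomcat is launched, so treat this as a sketch:

    # In Tomcat's bin/setenv.sh (create it if it doesn't exist):
    CATALINA_OPTS="$CATALINA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"
    # Then, in Eclipse, create a 'Remote Java Application' debug configuration
    # pointing at localhost:8000 and set breakpoints as usual.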

You're also going to need to know enough about a development environment to ask an intelligent question about it. There's a world of difference between "I can't get it to work" and something like "I tried increasing the max heap size, but I'm still getting an out of memory error each time I execute a prepared statement after the first call." See here: http://www.catb.org/esr/faqs/s... . One important quote to take away from it: "What we are, unapologetically, is hostile to people who seem to be unwilling to think or to do their own homework before asking questions." That FAQ will help you get past the newbie phase without giving up.
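
To make that second question concrete: one classic way to end up asking it is creating prepared statements in a loop and never closing them, so driver-side resources pile up until the heap is exhausted no matter how large you make it. A contrived sketch of the bug - the 'orders' table and the method are made up for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class StatementLeakExample {
        // Anti-example: every iteration allocates a PreparedStatement that is
        // never closed, so the driver keeps accumulating resources until the
        // JVM eventually throws OutOfMemoryError regardless of heap size.
        static void markShipped(Connection conn, List<Long> orderIds) throws SQLException {
            for (Long id : orderIds) {
                PreparedStatement ps = conn.prepareStatement(
                        "UPDATE orders SET status = 'SHIPPED' WHERE id = ?");
                ps.setLong(1, id);
                ps.executeUpdate();
                // Missing ps.close() - that's the leak. The fix is a
                // try-with-resources block around the statement.
            }
        }
    }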

So, an unasked follow-up question: how long will it take to get there? Hour for hour, you could compress the entirety of a CS degree program into about four months of eight-hour days, five days a week, but you won't need all of that. To really be employable, I'd say worst case it'll take about 250 hours of study total. At a light pace of about 10 hours a week, you should be ready in roughly six months.

With today's environment, I wouldn't be at all surprised if you halved that and still got a job, but I would feel bad for suggesting that was an adequate amount of study and practice.

One last important thing that I've only touched on indirectly: you absolutely must learn how to teach yourself. New libraries and frameworks come out every day, and the flavor of the month changes at a rapid pace. At some point you'll realize that all languages do more or less the same thing; they just have different syntactic sugar or internal constructs that make a given task easier or harder, sometimes even between versions of the same language. You need to be able to stay on top of those changes while googling or asking for solutions to odd problems and configuration errors.

Comment That's a squirrelly definition of free speech. (Score 5, Insightful) 284

In the US, free speech is a blacklist-based phenomenon. There are a few things that are illegal to say - falsely shouting 'Fire!' in a crowded theater, for example. If it's not on the list, it's probably fair game, and you can't be jailed for it. Thus: the Westboro Baptists and the Illinois Nazis.

In many places in the world, the definition of free speech seems to refer to a government-approved whitelist: here are the things you are allowed to talk about, and anything not explicitly on the list (and oftentimes, even if it is) is a legal offense subject to prosecution. Heck, in these places it's standard to claim that opposing political parties are, by their language alone, seditionists, and to have them locked up. In part, this is why there's outrage against the US for allowing hate speech and open protest; in other countries, that requires a mandate from the government, explicit approval.

Even in western, supposedly enlightened countries, there are onerous restrictions; check out libel laws in England, Germany's stance on anything Nazi-related, or France's many, many restrictions - for example, it's illegal to insult a public employee (though I have no idea whether that's actually enforced).

Calling this 'free speech' is like calling tax laws in the US 'voluntary taxes'.

What we're describing here is not a "tightening grip on free speech". It's just "additional regulations" on an already locked-down system where participation is the exception, not the rule. The only thing free about it is that one is "free" to follow all the rules, or shut up.

Comment Re:mac only? (Score 1) 121

I'm actually surprised that I had to dig down into the FAQs to find text saying "Mac only right now". I thought maybe my ad blocker was hiding everything but the 'download for Mac' button.

I don't know about other folks, but Mac has never been the assumed default for any program I download, especially editors aimed at developers. The ones that are Mac-only actually say so up front. Otherwise we assume Windows, since it's still the majority desktop OS. Failing that, Slashdot links tend to point at Linux apps.

I don't think we can ever say that it's 'hipster' to expect the majority case.

Comment Re:Here's an experiment to try (Score 1) 191

I am sure. They correctly realized that there was little value in rote memorization.

Maybe it depends on the quality of institution you attend, but my teachers, at least, were more concerned with whether I knew how and when to apply a given formula than with rote memorization of it. Sure, there were limits; you could only bring in one sheet of formulas for a given midterm or final (which was well more than enough), but most of the time they wrote the necessary equations right on the board.

Now that we're even more connected, with everyone carrying a cellphone capable of accessing nearly the whole of recorded public human knowledge in seconds, people are coming around to the realization that memorization isn't as important as understanding - or, almost as good, knowing how to search for information to gain understanding.

Comment Here's an experiment to try (Score 2) 191

I wonder how well they'd remember if, instead of using a laptop to take notes, they used it to record and auto-transcribe the lecture so it could be replayed over and over - so that any part that causes confusion can be examined until it's understood, without worrying about missing the next part, and where, with the press of a button, a user can mark the clip with a note: "important part here" or "come back to this, it's confusing" or even "prof says this will be on the test".

Besides, what a stupid study. There are certain classes where 'remembering' is the most important part, but at least in my engineering and science classes, 'knowing' and 'understanding' had slightly higher priority. I can easily recall the last thing I was expected to memorize with no other expectations: in 7th grade US History, I had to memorize each president's name and their start and end years in office. Completely useless.

Is that a laudable goal to test college students against? That they're being judged at the 7th-grade level?

Comment Ideas made on company time are company property (Score 2) 72

I seem to remember a case from the mid-'90s where an engineer for a ... phone? company came up with an interesting idea for a software-based filter. He talked to a co-worker, who agreed it could be good, and took it to his superiors. They decided not to pursue it. A year later he left the company and decided to go ahead and write that software - he knew a lot of folks who'd pay for it.

He was still two or three months from completing it when his old company heard about it. They sued him, claiming that since he'd had the idea on company time - and they could show he'd discussed it with a co-worker, so it had been 'developed' - they owned it. He lost. The judge had him not only turn over his source code, build environment, and all rights, but made him finish the product and deliver it to the company, with threats of fines or jail time if he acted maliciously (like making it impossible to run, or obfuscating it, or anything other than delivering a finished product in a reasonable time frame).

Searching Google now for these terms just gets me lots of hits for 'bully bosses' and 'henry ford' for some reason, but ... the precedent is out there. I just can't find it.

The upshot is this... If he discussed ideas he had while at Zenimax with anyone else, and those ideas were shared with Oculus, even if they're not patentable or Zenimax had no desire to implement them, they may have a very good case.

Of course, I'm not a lawyer, so I may be bull-poopooing you inadvertently.

Comment I doubt it (Score 1) 347

Based on the wording, he's treating watching it on a given screen as equivalent to watching it in a movie theater. That is, you don't get to keep it - watch once, that sort of thing. Maybe a Netflix model. At 4 bucks, 10 years from now, for a large-screen TV, it sounds like some sort of rental, like the holy grail of DRM that's been promised to the MPAA folks: you can only watch it _x_ times, or only until date _y_.

Of course, like all models that revolve around these sorts of limitations, it requires increasingly restrictive DRM, enforced by both software and hardware, with no holes or alternative routes. We've all seen exactly how well that works. People, by and large, aren't in a big rush to adopt hardware for features that only benefit content distributors at the consumer's expense.

My guess is that, best case, they'll end up partnering with cable companies and/or Netflix for some sort of a la carte channel model with a monthly subscription fee. Direct digital distribution is unlikely, because they won't be able to set a price point that makes sense to the public - their price point will be built around giving up control completely, because once one person 'hacks' it, it's free.

Comment Mavenized projects are doing this to me. (Score 1) 230

I won't run down the whole story, but my last company had a completely horrible, 100% custom, Python-based build system for a very large product with hundreds of subparts. Despite that - or perhaps because of it - I was able to easily get all the active code running from an Eclipse instance, meaning that testing a code change usually required nothing more than republishing to an Eclipse-managed Tomcat instance. You could pull the previous day's changes from all the other devs and, in a minute or two, have some 200 components fully up to date and deployed locally. All very nice.

Then some folks came in and Mavenized the whole thing, completely ignoring the idea of even using Eclipse. Now your only real choice for getting things to work is to make your change to your one component, build the jar, war, or installer for it (depending on the complexity of the change), and then overwrite your installed instances. Then you can start everything up and attach a remote debugger, with all the limitations that entails.

Of course, that's only if you're touching a front-end component that doesn't get client-side customizations. If you made changes in a shared library, you now have to recompile the whole project, and frankly it's easier to run the installer than to try to make sure you get each dependent jar everywhere it's used in the system. The compile takes between 40 minutes and an hour with tests disabled, 3+ hours otherwise. You're still stuck with the remote debugger instead of a local Tomcat instance, because there's no Maven plugin that produces an Eclipse web-faceted project with all the libraries and dependent projects properly cross-linked. It's painful enough to set up the 30-plus dependencies and smattering of configuration settings required for one component by hand, but then the next change comes through and I have to do it all over again - and that's only if the change breaks a dependency; otherwise I won't even know to update things in the first place.

Comment Sounds like my old comp-sci professor. (Score 5, Insightful) 237

I remember he used to lament the fact that we had to use computers to run programs, because they were always so impure. Hacked up with model-breaking input and output considerations. He loved APL. Had us write our programs as math equations and prove that they had no side effects. On paper. Step by step, like how elementary teachers used to have you write out long division. He was a computer scientist before they HAD computers, he'd point out.

To be fair, APL was a wonderful language, and perfect so long as you didn't want to actually /do/ anything.

Well, that's unfair. As long as you meant to do a certain type of thing, these languages work out fairly well. The issue is the old percentage-split problem you normally see with frameworks and libraries: by making some fraction of tasks, X, easy, you create a high barrier to performing the remaining fraction, Y. The problem with adhering to pure functional languages is that Y is not only large, it often includes the most common tasks. Iteration, direct input and output, behavior driven by multiple variables, a slew of what we'd call flow conditions - these are very hard to do in a pure functional language. The benefit you get is far outweighed by the fact that you could use C, or the non-functional side of OCaml, or some other so-called 'multi-paradigm' language to solve the problem in a fraction of the time, even with the cost of managing side effects yourself.

Then, have you ever tried to maintain a complex functional program? There's no doubt you can implement those Y items above. The problem is that it makes your code very specific and interrelated, because you're forced to build a model that captures all of the intended behaviors. That's a lot of work - work that then has to be repeated every time you need to make additional changes. Adding a mechanism to, for example, play a sound at the end of a processing job based on its status is a line of code in most languages. Not so in a pure functional language.
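
For contrast, here's roughly what that 'play a sound when the job finishes' change looks like in an imperative language like Java: a one-line side effect bolted onto the end of the existing method, with no need to rethread the job's status through the rest of the model. The job class and status enum are made up for illustration.

    import java.awt.Toolkit;

    public class ReportJob {
        enum Status { SUCCEEDED, FAILED }

        Status run() {
            Status status = doTheActualWork();
            // The entire new requirement: one bolted-on side effect.
            if (status == Status.FAILED) {
                Toolkit.getDefaultToolkit().beep();
            }
            return status;
        }

        private Status doTheActualWork() {
            // Stand-in for the real processing.
            return Status.SUCCEEDED;
        }
    }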

The problem here isn't the oft-cited 'devs just have to think about things differently, and they'll see it's better'. It's more basic than that. It's simple throughput. Functional languages might be a theoretical improvement, but they're a practical hindrance. That, in a nutshell, is why they're not in common use in corporate environments, where "value" loses its theoretical polish and gets compared to hard metrics like time and cost for a given quality.

Comment We knowingly discriminate based on analytics (Score 1) 231

Another problem is that many of these analyses - let's assume they're generally accurate and not misleading - can produce numbers that are not politically correct: the sort of statistics that might drive law enforcement policies, or new laws targeting certain lifestyles or races. I don't know how we can differentiate between statistical analysis driving action and action born of discrimination, but simply ignoring the issue is not the right answer.

There are whole sets of these sorts of problems with certified causes and guaranteed solutions, but we're not even allowed to talk about them because they are almost wholly restricted to the behavior of a specific minority. It's simply not politically safe to grapple with these issues. So we ignore them. Strike that - we actually go a step further and demonize the people who bring them up. My personal experience is that if a white male says something like this:

"National crime rates show that low-income black males between the ages of 15 and 34 are responsible for over 50% of the violent crime in America, most in urban environments. If we want to lower the crime rate, we should start there." ... people will start mentally - if not verbally - tossing around terms like 'neo-nazi republican' and 'racist', and much much worse. It doesn't matter if that's what the statistics show. It's simply not politically correct to point out minority groups have negative traits, especially by white males - the majority of our politicians.

Then we have the deliberate ignorance of politicians who focus on emotional appeals in popularity and re-election bids - for example, attacking scary-looking rifles and calling them 'assault weapons' instead of the weapon of choice in something like 98% of US crimes involving firearms: sub-$400 handguns. It's just not as sexy, somehow. Not to mention that a fix like simply raising the price via a tax - as we've done with cigarettes, for example - would face a backlash for targeting the poor.

Yes, privacy is an issue; yes, poor input will produce inaccurate trend predictions or incorrect results; and yes, even the manner in which the data is collected can introduce bias. But I think these are minor issues compared to the glaring problem of not using the data once it's obtained. We have such a poor track record for making rational, data-driven decisions as a country that we don't need to speculate about misuse. The fact that we collect the data and then ignore it is misuse!

Comment Re:Correcting for aspirations (Score 1) 302

Yes. I've been beating this drum for the last 5+ years. There have been studies along these lines on motivation - at least one that I can remember. It looked at entrepreneurs - so no glass-ceiling issues - and the summary was this: women outperformed men in every case where they were compared on matching motivational goals.

So, between those interested in money - women made more money.
If family time was more important - women spent more time with family.
Where flexible working hours or vacation time ... you get the idea.

Of course, for the most part, women prioritized personal happiness, short commutes, family time - pretty much everything except money - whereas men prioritized money and professional recognition almost exclusively.

Unfortunately, the study is not online - at least, I haven't found it - I believe it was in a trade journal for industrial psychologists, or maybe just a business management magazine.

There are more items to throw into the mix, too. As someone pointed out, women are not entering the college programs whose degrees are associated with higher incomes. Yet studies show that when they do, they often advance faster than men in the same positions[1]. This is referred to as career discrimination - a self-selected discriminator, meaning the person affected is the person making the choice - and it accounts for the largest share of the wage disparity.

Career stability is another factor. Someone else brought up that women are more likely than men to take time off for family - having and raising children, and so on - and thus have employment gaps that signal family is more important than career. Not that this is bad, just that it does affect hiring decisions. This is why top execs (not surprisingly, the people who make the most money) are rarely female. Not every woman is Marissa Mayer (Yahoo's CEO, who went back to work almost immediately after giving birth). When choices are made, you pick the person who places the business over their family, and one- to three-year gaps suggest that's not the case.

Then there's a whole other set of things; men being more willing to negotiate salary, self-selective bias towards culturally expected roles & professions, so on and so on.

When all is said and done there's still some wiggle room, but most of the 'discrimination' can be explained by non-discriminatory sources. Make no mistake, there IS still discrimination, but in the US today it's easily offset by current affirmative-action-driven hiring standards and the apparent ability of women in general to outperform men - when they choose to.

If the goal is equality, I'd suggest we're already there. If the goal is to eliminate discrimination, that's not only more challenging, it will necessarily require removing gender- and race-based decisions from things like college acceptance and hiring. Obvious, you say? It would mean no more affirmative action, since that provides advantages based on gender, race, sexuality, or other minority status - which is literally discrimination along those lines.

[1] - Yes, this could be due to affirmative action more than ability; the studies tend to focus on large companies with lots of data, and those companies tend to want an appearance of gender and other forms of equality, as well they should. However, the first study I mentioned indicates that women outperform men, so I'm theorizing these are promotions earned on ability.

Comment We've already been through this (Score 1) 189

A pretty comprehensive study was done showing that, regardless of the language selected, the programs written by the developers with the most experience in that language were the ones with the fewest security bugs - regardless of traits of the language like attack surface or complexity.

It's short and sweet. A developer's experience with the specific language or framework an application is written in is a better indicator of application security than the language or framework an application is written in.

Comment Isn't there a better way to do this? (Score 1) 65

I've seen the various design concepts before, and they're all variations on the same intrinsically flawed theme: displays projected onto either a liquid or a gas, which require very still air and a very irritating environmental system to manage - not to mention that the image is disrupted whenever a user 'interacts' with it, because the hand interrupts the 'canvas'.

I don't know of any scheme that avoids these fundamental problems, and they will keep this from ever becoming a widely useful, much less consumer-level, technology.

I think we're just going to have to stick to visual overlays on 3D space, augmented reality style, at least until we can actually produce the sci-fi concept of projected holograms. Anything less is simply not useful enough.
