Comment: Re:Verbosity is easy? (Score 2) 383

by quietwalker (#49746165) Attached to: The Reason For Java's Staying Power: It's Easy To Read

You may know the quote: "Premature optimization is the root of all evil," but the whole quote in context explains why it is so:

"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
- Donald Knuth

In fact, it takes an experienced eye to know when and where to optimize, to identify that critical 3%. In the meantime, novices are so worried they'll miss it that they try to overoptimize everything, not yet having the experience of maintaining programs written that way.
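To put the "small efficiencies" Knuth mentions in concrete terms, here's a hypothetical Java sketch (names invented for illustration): a readable average next to a bit-twiddling one. The clever version belongs only in a profiled hot spot; the readable one is the right default 97% of the time.

```java
public class Average {
    // The readable default: obvious intent, fine for the 97%.
    static int average(int a, int b) {
        return (a + b) / 2;  // note: can overflow for very large inputs
    }

    // The "clever" version: branch-free bit twiddling that avoids the
    // intermediate sum. You'd only reach for it in a profiled hot spot
    // or an overflow-prone one; in noncritical code it just costs clarity.
    static int averageTricky(int a, int b) {
        return (a & b) + ((a ^ b) >> 1);
    }

    public static void main(String[] args) {
        System.out.println(average(4, 8));        // prints 6
        System.out.println(averageTricky(4, 8));  // prints 6
    }
}
```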

So when you see that sort of pattern abuse, especially classes named after patterns, like -Facade, -Decorator, -Adapter, -Mediator (and -Mixin, depending on the language), realize that what you're dealing with is a novice's code, written by an amateur with no clear understanding of the entirety of their job, and adjust your expectations accordingly.
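As a hypothetical illustration (all names invented), here's that smell in miniature: a class named after the Facade pattern that fronts exactly one delegate and hides no subsystem at all.

```java
// A class that does real work.
class PriceCalculator {
    int priceInCents(int quantity) { return quantity * 250; }
}

// Named after a pattern, but it simplifies nothing: one delegate, one
// pass-through method, no subsystem being hidden. Pure indirection.
class PriceCalculatorFacade {
    private final PriceCalculator delegate = new PriceCalculator();
    int priceInCents(int quantity) { return delegate.priceInCents(quantity); }
}

public class PatternAbuse {
    public static void main(String[] args) {
        // Two hops through "architecture" to reach one multiplication.
        System.out.println(new PriceCalculatorFacade().priceInCents(4)); // prints 1000
    }
}
```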

Comment: Re:The reason is : Corporate buy-in. (Score 1) 383

by quietwalker (#49746095) Attached to: The Reason For Java's Staying Power: It's Easy To Read

That may explain why it's still around even after it's been technologically eclipsed, in a similar vein to Java.

They're not around because they're easy to read, or the language of choice, or any other criterion that would make sense to a developer, but because there was a corporate investment. Money already spent means more to a business than any other metric.

Comment: Re:Verbosity is easy? (Score 5, Insightful) 383

by quietwalker (#49743657) Attached to: The Reason For Java's Staying Power: It's Easy To Read

Agreed.

The whole "convention over configuration" theme is maddening, because if you don't know the conventions, or haven't figured out the particular conventions a project uses to drive execution and data flow, you're going to be lost. What is gained in brevity is added cost on the maintenance side. I once spent two days helping someone troubleshoot a currency-formatting issue in a RoR app; someone had created a mixin that extended a base String class method, in a file describing a service. Fixing the code took all of 5 minutes, but finding it took forever.

With Java, it's worse now than it used to be. A decade ago, your major threat to readability was someone with pattern prejudice: the sort who encapsulated everything in a factory-factory to a factory to an interface to an abstract class, etc., etc., so that every data object change required edits to 5-10 files just to surface the change to the methods using it, purely for the sake of doing it.

Today you've got stuff like Spring. Ever try to do a manual, non-runtime code analysis of a decent-sized project that heavily uses Spring, even where it's not necessary (like interfaces that will only ever have a single implementation)? Or worse, have you seen the OSGi development model? Let me put it this way: imagine the original J2EE bean model, with home and remote interfaces and all that, only we've decided to use CORBA as our architectural focus. Every method can be a dependency-injected, service-provided module, ready for you to call the service factory and grab an instance of the feature you want, which incidentally makes static analysis of the code by humans more or less impossible.
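A minimal sketch of the problem, using a hand-rolled string-keyed registry rather than real Spring (all names invented): the call site mentions only the interface, so "find usages" on the lone implementation turns up nothing except the one wiring line, which in a real container would live in an annotation scan or XML file rather than in code you can grep.

```java
import java.util.HashMap;
import java.util.Map;

interface ReportService { String run(); }

// The only implementation -- but no call site ever names it.
class PdfReportService implements ReportService {
    public String run() { return "pdf-report"; }
}

// Stand-in for a DI container: a string-keyed lookup.
class Container {
    private static final Map<String, Object> beans = new HashMap<>();
    static void register(String name, Object bean) { beans.put(name, bean); }
    static Object lookup(String name) { return beans.get(name); }
}

public class WiringDemo {
    public static void main(String[] args) {
        // "Configuration" happens off to the side; grep the call site below
        // and you see only the interface and a magic string.
        Container.register("reportService", new PdfReportService());
        ReportService svc = (ReportService) Container.lookup("reportService");
        System.out.println(svc.run()); // prints pdf-report
    }
}
```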

Yes, I understand the concepts behind these features, and OSGi actually has some neat capabilities (that we've had since the mid-'90s via similarly laborious mechanisms like CORBA or, more often, DCOM), but they detract heavily from both the readability and the maintainability of the code. If a new developer can't determine the execution or data flow for a given scenario in a few minutes, maintaining the system is going to be onerous. All that pain in service of what we all know is the patent myth of polymorphism, that every project is going to need unplanned sibling code modules in the future: it doesn't seem like a good trade-off to me.

I've started to really appreciate code whose execution and data flow I can trace with tools, often just the 'find in files' feature of an IDE. It's becoming less and less common, and sourcing and correcting bugs is taking up more and more of my time because of that rarity. I sometimes look back on 'tricky' C code that used and abused pointer arithmetic and required complex memory management, and I feel wistful for when it was that simple, when you could know what the code was actually doing... aww, crud. I'm a few years away from complaining about those darn kids on my lawn, aren't I?

Comment: The reason is : Corporate buy-in. (Score 1) 383

by quietwalker (#49743529) Attached to: The Reason For Java's Staying Power: It's Easy To Read

Java was the first language on the scene that came with a top-to-bottom marketing rollout. There were certifications, training, professional tutorials and classes, sponsored competitions, commercials and corporate branding, full-page ads in trade magazines, and a one-size-fits-all modular system of components to mix and match for every need; you could even become certified as a Java evangelist, which used to be a real job at some corporations.

Once they got solid buy-in, well, companies are loath to change unless there's either a pain point or a generational technology gap, and sometimes not even for the latter (see batch-processing mainframes running COBOL and RPG programs).

So that's what it is. If anything, as the language matures, many people actually want to get away from readability: they want annotations and dependency injection controlled by some decentralized mechanism, such that you must either rely on everyone else who worked on the product getting things 'right', with it all magically working without your understanding the other processes (a theoretical goal of good OO development), or, more often, you must understand all the code running at all times. Try doing a manual analysis of a good-sized project that relies heavily on Spring, or worse, an OSGi project.

Projects using those features are not self-documenting code. Tracing execution or data flow through them is extremely difficult. All I can say is, at least it's not Ruby, Objective-C, or Smalltalk.

So, no, it's not readability. There's a good subset of programmers who are strongly against readability. It's money, plain and simple. It's a corporate investment, and that's all there really is to it.

Comment: Open source documentation challenges (Score 1) 244

by quietwalker (#49690295) Attached to: RTFM? How To Write a Manual Worth Reading

Open source code faces a special obstacle when it comes to quality documentation. By the very nature of the ecosystem, open source projects derive from the needs of those directly contributing to them, the experts, in fact, and so any artifacts a project produces will be based on their requirements, not those of casual users. It's not only required but expected that, in most cases, a user will be knee-deep in the lower-level details and have a high degree of proficiency with and knowledge of not only the system itself but the details surrounding it.

In fact, I can remember a time when creating a user-friendly webpage for an open source project, one with bubbly callouts, animated page features, and tutorial videos, was seen as a sort of sell-out, proof of corporate sponsorship or the like. Providing more than just a link to the source code and a paragraph or two is a significant effort, and organizing all that information in an immediately useful way is a huge time and effort sink. That's time and effort that could have gone into bug fixes or feature enhancements. For the intended users, it's sort of a waste.

Yeah, it's a chicken-and-egg situation: good docs = more users, and more users = more need for good docs, but that only holds if 'large number of users' is a higher priority than bug fixes or feature enhancements (and it very well may be!).

So it'd be nice to have better documentation, and not just hope that Stack Overflow will be there with a solution. But, like the example citing non-English speakers: is that part of the scope of the documentation? Is that the primary target? Is it worth our actual time? And if so:
    - How do we motivate people to write good documentation?
    - How do we attract quality technical writers?
    - How do we know when we've written good documentation/what are the criteria?
    - How do we know how to limit our documentation? How much is enough, and how much is too little?
    - How can we place an explicit value on that documentation, vs. other concerns like bugs and feature development?

The article pointed out some good arguments for better documentation, but barely touched on the bigger problem: how do we get people to actually write it, and how do we write it well once we've decided to?

Comment: Re:Talk to the legal team & managers (Score 1) 353

I read about this in a business journal article on IP law. I've searched for it many times on the net and could not find it. The relevant details were:

The employee worked for a utility or utility-hardware company, either power or phones. The case was brought somewhere in the '98-2000 time frame, though that's just a rough guess. This specific case was cited in a different article, relating to the Jacobsen v. Katzer case, about strange uses and abuses of IP law, but I can't find that article either.

I believe that the issue with making him finish the work was multipart:
    1) He was nearly done
    2) The seizure was treated like a subpoena for records - something which is allowed to incur a burden on the provider (who is allowed to charge a reasonable fee)
    3) At the time, the court could not understand any other mechanism for transfer of the IP other than making him finish his work and turn it over
    4) The prosecution made a case that because the company had no knowledge of or insight into this work at the time, it would be easy for the ex-employee to ...something something willful maliciousness ... and make the product he turned over useless. That includes providing an incomplete product without instructions for completion, etc. So he was required to deliver a professional-quality product with documentation, non-obfuscated source code, and a build environment, on fairly short notice, plus be available for support.

I'm pretty sure that he was to receive a set amount of remuneration for his work, something similar to his previous job, but he'd have no right to the code afterwards.

Today this may be done a bit differently, but fairly draconian IP ownership laws were the standard pre-2000, and few individuals had the ability to stand up to large companies in courts, even for what appears to be insane requirements. Just take a look at the non-compete clauses from that time.

Comment: Talk to the legal team & managers (Score 2, Insightful) 353

This is very easy. By default, anything you create - or even imagine - during business hours or in execution of your duties is 100% owned by the company. In fact, if you produce something at home and you can't show clean-room separation between your personal systems and code and your work systems and code, you're not likely to keep the rights to that either.

You know that part of the employment process where they ask you to list all your prior works? This is them giving you a chance to CYA. Granted, the legalese on that page usually states that you're allowing them to use those works for free, in perpetuity, if you include them in any of your work at the company, but that makes perfect sense. Think of the utility libraries you carry around with you from job to job. They don't want to own them, but they can't risk having their products 'poisoned' by arbitrary licensing.

In fact, there's even a case where a guy had an idea, spoke to a co-worker about it, discussed it with his immediate superior, and they decided not to follow up on it. After he quit the company, he started work on it himself, and was getting ready to finish and sell it when he was sued by his prior employer. Because it had been 'developed' (thought of, even if never written down) on company time, the judge sided with the company and full ownership was given to them. He had to finish the program and deliver it, along with the mechanisms required to build and distribute it, without malicious sabotage. Forced to write code for free, for a product the company didn't even want.

So! The only way this is really going to work for you is if you speak to your legal team and management.

I have, in the past, approached my manager(s) and asked permission to work on side jobs that were clearly and 100% outside the scope of my current job - working on banking applications while I was writing automobile-inventorying software - and was given permission. Got a signed statement, and I was good. Did open source work on the side as well, for a game engine; again, no problems.

However, it's extremely unlikely that anything you do at work will be allowed to be owned by you. No company likes giving away potential revenue and adding competitors with insider knowledge. I mean, really unlikely. Like, I can't even comprehend how you think it's a real possibility. Getting the company to go along with an open source thing might be one possibility, but an employee getting ownership?

Think of it this way: You work as a mechanic in a garage. You have access to all the tools and equipment there. You decide that you'll start your own business, in that garage, fixing cars, but you'll keep all the money instead of giving it to your employer, while still using his equipment and space. You still expect him to pay you for the hours you're working there.

Can you really see this happening? If so, you may need to lay off the cough syrup, because we're all worried about you.

Comment: I've seen paycheckism, never agism (Score 4, Informative) 429

by quietwalker (#49639329) Attached to: Why Companies Should Hire Older Developers

I worked as a contractor at IBM a few years back. They had just changed their hiring policies to basically three types for engineering positions:
    1. Foreign workers in areas with low cost of living that are paid location-adjusted wages
    2. New hires fresh out of college (preferred if they interned with IBM previously) for about 30% below market cost
    3. Individuals who were known in their field of study - acknowledged experts, basically, obviously a rarity.

Everyone else was being pushed out, or required to take on the work of the pushed-out experienced engineers on top of their own while training their own replacements. As in, "You can still work for us, but you have to move to Brazil and accept a location-adjusted $27k/yr equivalent."

This resulted in the majority of incoming employees being extremely young, low 20s, with zero experience. Older individuals were being skipped not because of their age, but because the company was not willing to pay real market value for them when it could get cheap labor, trainable up to the same point, for a third of the cost. Especially when the young kids are willing to put in 60-hour weeks because they don't have competing obligations.

This wasn't a case of IBM being evil; they were just following the industry trends. I've seen other companies do the same thing.

It's not that they aren't hiring people because of their age. If anything, they'd love to hire those experienced professionals. They just want them to work for below the average starting pay of a zero-experience, fresh college grad. Someone with 20 years of experience is expensive, after all, and budgets run quarter to quarter, not 5 years down the road; long-term ROI is hard to justify within a single level of management. Got 20 years of experience and you're willing to work for 40k in San Jose? You'll have no problem finding a job. Want a more reasonable 150-200k? Well, there are 5 guys in Vietnam who will do your job for 20k a pop, and that makes up for the loss in efficiency - on paper, at least.

Comment: You do not discharge anger from engaging in it (Score 2) 58

by quietwalker (#49586919) Attached to: Tech Credited With Reducing Nigerian Election Death Toll

This is a common misconception. You cannot "get your anger out" by indulging in it. Hitting a pillow, screaming until you're hoarse, or verbally thrashing someone on the internet does not make you act like a gentle person for the rest of the day.

If that were the case, you'd see most of our professional athletes - especially hockey players, football players, and boxers - as some of the most gentle, even-keeled people that ever existed.

If anything, indulging makes us more accustomed to binges of anger, as we acclimate to it physiologically and psychologically. Perhaps this is why those same athletes seem prone to excessive and often illegal violence on and off the field.

Comment: Re:Mobile (Score 1) 218

by quietwalker (#49564055) Attached to: JavaScript Devs: Is It Still Worth Learning jQuery?

I don't personally like jQuery Mobile because I don't like one-page apps, or indeed any framework that dictates how I organize my application. Though it may be an unpopular opinion, I dislike the whole convention-over-configuration theme that popped up during the RoR popularity phase. Not only can my IDE write out any boilerplate, but I can also search for the linkage - all without making 80% of my application easy at the cost of making the other 20% hard or impossible.

Now, most of the applications I end up writing are fairly complex, don't need fancy animated page-to-page transitions, and don't require touch/multitouch interfaces, so maybe I'm not the intended audience.

I suppose if you're writing a fairly simple application - say, showing a bus schedule or reading a review - it'll be fine. I once wrote a jQuery Mobile app that was just a series of short forms producing sets of calculations at the end; it made really complex calculations for initial settings on industrial tools accessible to an operator without requiring a shift manager or technician to come over and reset things for them. I remember it still being quite a pain to pull values from other pages and do some of the calculations. I especially disliked how my navigation logic ended up in the JavaScript rather than in the backend with the rest of my controller and business logic.

Really though, it's up to personal preference.

Comment: I don't get it (Score 5, Insightful) 218

by quietwalker (#49563887) Attached to: JavaScript Devs: Is It Still Worth Learning jQuery?

jQuery just encapsulates some primarily DOM-related JavaScript manipulation routines, with the added bonus that it tries to eliminate browser differences. So when you say that the browser provides features that jQuery was needed for, you're really saying that the browser does things that JavaScript is no longer needed for.

I'm just not seeing it though. With pure HTML & CSS and a fancy new browser, can I:

    - Write AJAX requests and parse and conditionally apply the results to various page elements?
    - Dynamically add and remove elements?
    - Perform liquid resizing based on a layout with glue elements and fixed-but-scalable areas - that is, dependent on the size of other elements rather than the explicit browser viewport height/width?
    - Perform custom input-box validation?
    - Set the value of a text box only when a value in a linked select box changes?
    - Pop up a dialog when a button is clicked?
    - Start an image upload when an image is dragged over a browser region?

In the age of ever-closer-to-desktop-application websites, I'm only seeing more and more use of JavaScript frameworks - of which jQuery is one - and frankly I don't see how anyone could do without one. Maybe if you're making static brochure sites, I suppose, but then you weren't using JavaScript for those anyway.

Maybe the original poster meant to ask "is it worth learning jQuery instead of another framework or library"? Otherwise I can't see anyone who actually creates web applications for a living even asking this.

Comment: Re:The "real" law (Score 5, Insightful) 294

by quietwalker (#49486951) Attached to: IT Worker's Lawsuit Accuses Tata of Discrimination

The "actual law" often says that discrimination is behavior towards a member of a legally recognized minority on the basis of their membership in said legally recognized minority. Of course it varies state to state and between municipalities, but that's usually the language of it. It's only the general, unwritten interpretation that provides the vague assurances of "racial discrimination is illegal" or "gender discrimination is illegal" or similar nice-sounding definitions.

Unfortunately, 'male' and 'white' are not legally recognized minorities, so by many actual, written laws, you cannot discriminate against someone if you disadvantage them because they are either white, or male, in the same way that it's not discrimination if you only hire the deaf over the non-hearing-disabled.

The same is true of the legal definition of rape in some states: rape is defined only as a male penetrating a female. All other combinations (male/male, female/female, female on male) are considered a lesser form of sexual assault. In those places, a female can never be charged with rape.

Comment: Re:No surprise here (Score 1) 131

by quietwalker (#49439071) Attached to: Why Some Developers Are Live-Streaming Their Coding Sessions

There's a big difference between a teaching environment where the teacher can learn something and a presentation-based one. Without real-time feedback, exposure to the ideas of others, and having to explain things to novices, it's just a vocalization of the thoughts you already had. Maybe you'll get some organizational benefit out of it, but the teacher really isn't learning anything.

I skimmed through a few of his videos, and I didn't explicitly see where he was responding to a chat log or taking questions. Perhaps I missed it, but it seems like mostly what he's doing is presenting, not teaching, and so I doubt he's learning anything.

On the other hand, I do think that people watching can learn quite a bit.

Comment: Not an April Fools post! (Score 4, Insightful) 265

Did you know that Texas, home of Big Oil, produces slightly more than 10% of its power from wind - about 14,098 MW, according to Wikipedia? They're the nation's leader in wind energy. Florida does solar better than anyone else, and Washington leads in overall green energy (mostly via dams).

On a related tangent, California claims to get almost 5% of its power from wind, though it produces only 5,917 MW and has about 10 million more people, so somewhere, something doesn't add up.
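A back-of-the-envelope check of those figures, bearing in mind that the MW numbers above are nameplate capacity while the percentage claims are about generation, so dividing one by the other only yields a rough implied total:

```java
public class WindShare {
    // Implied total capacity if the claimed share were literally
    // (wind capacity) / (total capacity). Capacity factors make the real
    // generation picture different -- which is exactly the discrepancy.
    static double impliedTotalMw(double windMw, double claimedShare) {
        return windMw / claimedShare;
    }

    public static void main(String[] args) {
        double texas = impliedTotalMw(14098, 0.10);      // 140,980 MW
        double california = impliedTotalMw(5917, 0.05);  // 118,340 MW
        // California's implied total comes out *smaller* despite having
        // ~10 million more people: the numbers only reconcile once you
        // separate capacity from generation (and account for per-capita use).
        System.out.printf("TX implied total: %.0f MW%n", texas);
        System.out.printf("CA implied total: %.0f MW%n", california);
    }
}
```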

My guess is that a lot of these "% power" claims, including the one in the article, come down more to clever accounting than actual, literal green draw.
