Comment: jQuery makes a nice introduction to JavaScript (Score 5, Insightful)

If you're doing front-end web development, at some point you're going to have to become good at JavaScript.

You've got all these different frameworks where everything gets abstracted three redirections deep into a pseudo-MVC layer populated with promise events and callback systems that work well for 80% of the job and make the other 20% incredibly hard, and STILL in the end tie into the DOM, and are just as breakage-prone, all the while explicitly demanding you adhere to their code organization because that way they can encode configuration into class names and function invocations, and all of them claiming the others are bad not because of intrinsic issues but because they don't make the same ideological architectural choices, like one-page apps or putting business logic (including navigation!) in pages - even though it's all more predicated on popularity than functionality - and ALL of that built on top of the mess that is JavaScript, and you are well and truly sunk, my friend. Not even a run-on sentence has enough space to cover all the horrible stuff that lies out there for the aspiring web developer.

So I recommend learning JavaScript. Once you have a good, strong feel for the fundamentals - once you've treated Crockford's "JavaScript: The Good Parts" the same way you've treated K&R's "The C Programming Language" - and you're solid on the basics, THEN jump off into these new worlds of overly pretentious web designers delving into their first languages and claiming you can't write a decent site without CoffeeScript, Haml, and Sass. So you can put them in their place, or at least understand how much they're complicating what is, after all, not that difficult a thing. Even experienced developers write really, seriously, completely awful JavaScript, thinking they're doing a decent job because they didn't invest the time and energy to understand why they're not.

The only problem is that JavaScript is bad. I like learning by example, but that is so not the way to go with JavaScript. Examples abound - but they're more than likely to be bad examples. There are reference books, like "JavaScript: The Definitive Guide," but they're not great for learning. So what I do is tell people to learn jQuery first.

With jQuery and a few minutes, they can pick up the basics. Someone who already knows CSS or writes code for a living should feel productive in less than a full work day. It's easy, it's fast*, it handles browser differences**, there are well-documented examples of how to use each feature on the jQuery doc pages, and best of all for me - it's non-intrusive, so you can fit in the new vanilla JavaScript you're slowly picking up, without needing to learn any framework-specific configurations or magic function naming, or guessing why a method isn't being called because it wasn't added to the object prototype by your pseudo-controller registration.
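
To give a flavor, a beginner's first hour with jQuery might look something like this (a minimal sketch; the selectors, IDs, and URL are invented for illustration):

    // Run once the DOM is ready
    $(function () {
      // DOM traversal & manipulation: find the active menu item and restyle it
      $('#menu li.active').addClass('highlight').find('a').text('Current');

      // Event registration: delegate clicks from a container element
      $('#articles').on('click', 'a.more', function (event) {
        event.preventDefault();

        // AJAX: fetch a fragment and insert it, with a simple animation
        $.get('/articles/preview', { id: $(this).data('id') }, function (html) {
          $('#preview').html(html).slideDown(200);
        });
      });
    });

Each line reads almost like the CSS selectors a designer already knows, which is exactly why it makes such a gentle on-ramp.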

For me, I hate frameworks that are all-or-nothing solutions. Big, heavyweight*** frameworks that tell you "This webpage is now a ____ application." First, they are never perfect solutions, and second, they always make some percentage of the work easier at the cost of making the rest painful. When learning a new language, especially if you've already got one or more under your belt, forcing someone to do it your way, or making familiar patterns or constructs hard, is a huge detriment. Besides, how can you expect someone to learn if they're not allowed to get it wrong? You end up with folks who don't understand the importance of things like memory management because they never had to do it, and how useful are they during crunch mode?

So, yeah. Learn jQuery. Learn DOM traversal and manipulation. Learn event registration. Learn AJAX. Try out some animations. Write a plugin or two. Many of these things are appearing natively in modern browsers now, but once you understand the concepts - the basics that are endemic to web development - and you can be productive, you'll have the grace time needed to properly learn actual JavaScript. Then you get out your Crockford book, and things like prototypal inheritance, closures & scope, and functions as first-class data will start making sense.
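
To give a taste of what that Crockford material covers, here's a minimal sketch (all names invented for illustration):

    // A closure: the local variable survives because the
    // returned function still references it
    function makeCounter() {
      var count = 0;
      return function () {
        count += 1;
        return count;
      };
    }
    var next = makeCounter();
    next(); // 1
    next(); // 2

    // Prototypal inheritance: objects delegate directly to other objects
    var animal = {
      describe: function () { return this.name + ' says ' + this.sound; }
    };
    var dog = Object.create(animal); // dog delegates lookups to animal
    dog.name = 'Rex';
    dog.sound = 'woof';
    dog.describe(); // "Rex says woof"

    // Functions as first-class data: pass behavior around like any value
    [3, 1, 2].sort(function (a, b) { return a - b; });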

Then you can tackle those foul turds of frameworks that are sold as the newest and best and maybe you'll get them to work for you, instead of the other way around. At least until everyone switches gears 3 months later when the next best steaming pile is excreted.

* - enough. It's fast enough.
** - most browser differences, and it's better than rolling your own in any case.
*** - heavyweight in this case meaning they make all my architectural decisions for me, and I have little or no ability to work around them.

Comment: Re:Tangentially related: Race-based admissions (Score 1)

I'm not sure where to start. I'd like to do a point-by-point, but there's no need. The issue on my part is very simple:

People should be judged based on their own merits, not the color of their skin or their gender, not based on the socioeconomic class they're from, not by handicap or caste or because someone has a quota or a chart that shows an irregular curve they'd like to smooth out.

What you're calling 'social fairness' is actually quite unfair to both society and individuals.

The issue with affirmative action-type policies is that they say, in no uncertain terms, that capability is not as important as quotas. That folks who have been deemed capable should not only be denied opportunity, but that opportunity should be given to someone who has not been deemed capable. That can't be considered fair in any society - and before you complain, consider that this is exactly the policy of apartheid and theocratic governments and of the pre-Civil War US: people judged based on their skin color, religion, or other minority status, rather than their ability.

This is not just a matter of treating people as individuals instead of as members of a group being the ethically and morally correct stance. Treating people who are not capable of a job as qualified, and allowing them to occupy important positions, is a danger to all of society. How that can be lost on you, I don't know. This is the very root of the problem with nepotism and cronyism!

Perhaps this straw-man scenario might help: "Your doctor isn't technically qualified to perform your gall bladder surgery, but we needed someone from a low-income family on staff to round out the numbers. So... yeah... good luck in there!"

I don't think we're there yet with doctors, but it is already happening with police, firefighters, and other emergency personnel. How can you justify this by saying it's acceptable, or worse, preferable, simply because some arbitrary binning of the population doesn't turn out equal numbers? I can't wait till someone notices there aren't enough people with low intelligence being recognized as geniuses and tries to fix that. ... yes, I know it's absurd, but the whole concept seems crazy to me.

Comment: Tangentially related: Race-based admissions (Score 3, Interesting)

I was listening to NPR the other day, and this story popped up: Examining Race-Based Admissions Bans On Medical Schools.

The short version is: certain states have ruled that colleges are not allowed to consider race as part of their admissions criteria, and medical schools are noticing that black and Latino graduation numbers have decreased since then.

The intent was to focus on merit-based evaluation. Seems noble, right? We want the best doctors we can get. However, the effect appears to be a reduction in the number of minority students admitted. This, of course, has people outraged and scrambling to find ways to work around the system - like sending recruiting teams to primarily black or Latino high schools and hoping that will increase the applicant numbers.

What shocked me is that everyone is dancing around the race issue (and only for certain races; not, for example, Indians or other Asians). Everyone agrees the minority graduation numbers have dropped because individuals from a given group don't actually meet the admissions criteria. They're not qualified to be students or doctors. That apparently hundreds or thousands of people's failing grades were ignored because of their race. That prior to the no-race rule, doctors, in this case, were not necessarily the best-qualified individuals for the job. In fact, some significant percentage of them should not have been allowed in.

This trend isn't new, either. When I was a lifeguard back in the '90s, the requirements changed from being able to swim a specific distance in a certain time to removing many of those fitness requirements altogether. The reason? They were apparently unfairly eliminating people with poor physical ability or handicaps. The new focus was to do all the lifeguarding from the side of the pool: hooks, ropes, and life preservers.

Heck, just last month there was a minor kerfuffle about fire departments force-promoting and expediting promotions of minorities over whites.

I can't help but see this girls-only computer science focus as another of these ill-considered plans, where capability takes a back seat to minority inclusion and political correctness. Sure, it's not as vital as our doctors, firemen, and lifeguards, but it's the same line of thought. In our rush to be politically correct and all-inclusive, we mistake equal numbers for fairness, and it serves no one well except those promoting our differences.

Am I the only one who thinks this is crazy? Like Harrison Bergeron crazy? I can't be the only one, right?

Comment: Re:Why not nursing (Score 1)

Studies seem to indicate that this is largely due to a constantly shifting cultural landscape which says, for example, that women make better psychologists and biologists now, whereas 30-40 years ago that was not the case. It's not that men in general are hardwired to avoid nursing and want to be garbage men, while women are selected to be secretaries and beauty stylists; rather, the culture they're brought up in acts as a filter that determines which jobs should be held by which gender, based on cultural norms rather than on any objective differences in ability or merit.

So it is the case that our culture is steering gender-based career selection, in an irrational, non-directed way. The bigger issue is: does it really matter all that much? Is it actively or passively hurting anyone? Would forcing equality, especially when it requires working counter to cultural values, really be any better?

My personal belief is that it wouldn't, but there are a lot of loud, angry people out there who seem to think otherwise.

Comment: Re:Equality (Score 2)

I'm assuming the article is referencing this data: http://www.randalolson.com/wp-.... That is a pretty striking curve, though there are two interesting points to make:
      - Women have made consistent gains year to year in nearly every engineering and "hard science" field, some more than others (biology is a standout)
      - The absolute number of women CS graduates is actually increasing as well - it's just that male graduate numbers are increasing at about twice the rate.

However, contrary to your claim, there are many, many people trying to ask why. There's loads of speculation, but the explanations that seem most reasonable to me point to a large-scale cultural shift that considers women better youth educators (pre-high-school teachers), psychologists, journalists, biologists, and pharmacists, plus a large drive into the agricultural job market. Studies have shown - as referenced elsewhere - that these cultural values are imposed early, and one of the major sources is the female child's mother, though popular media also drives some of this. This goes the other way too, with the cultural acceptance of male interest in video games potentially acting as a familiarization gateway to CS careers.

But really, we're only investigating ~that~ question because we're trying to solve a bigger one: How do we get more women with CS degrees and/or working as software engineers? If the studies are right, it seems the best way is to train new parents to stop limiting their children.

More important in my mind, though: is that a valuable goal?

There is some greater potential for creativity in problem solving - meaning things like faster software and better interfaces - when you include a diverse body of people, but it's hard to measure that potential, or determine whether it's been realized. There's nothing to indicate that, given two individuals, one can be expected to perform any better than the other based on gender in this field, so if you myopically focus on one with a disregard for their actual merits, you run the risk of hiring the worse performer.

Generally, I'd say this is more of a non-issue. As we raise an increasingly tech-savvy generation and the negative cultural stigma of IT workers falls by the wayside, I think we'll naturally see more women in the field, as the work is fairly light and the pay is reasonable. At the same time, I don't think deliberately pushing toward that goal really serves the public, or companies, or women in any real way, and it may even be hurting them, as per the article's original question about gender stereotypes.

Comment: Re:The Web needs a lot of things (Score 1)

This is not anything we're not already doing; the difference is that right now it's incredibly difficult to do well, and the result is an intractable mess. Markup - and CSS is just an externalized form of markup - lacks any real power. It's simplistic, which makes it approachable, but not very flexible or adaptive. To achieve even simple goals, we're using fairly complex and increasingly obscure and inflexible markup. When that fails, we turn to JavaScript to provide liquid layouts that work, and cart this baggage of per-page, non-reusable JavaScript around with us, all of it ready to break as soon as we make any changes - or hit a viewport we didn't consider.
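
That baggage typically looks something like this (a minimal sketch; the element IDs are invented): a resize handler recomputing a relationship the markup can't express on its own:

    // Keep two columns the same height, because markup/CSS alone
    // can't say "as tall as my sibling"
    function fixLayout() {
      var sidebar = document.getElementById('sidebar');
      var content = document.getElementById('content');
      // Reset first, or the measured heights can never shrink
      sidebar.style.height = 'auto';
      content.style.height = 'auto';
      var h = Math.max(sidebar.offsetHeight, content.offsetHeight);
      sidebar.style.height = h + 'px';
      content.style.height = h + 'px';
    }
    window.onload = fixLayout;
    window.onresize = fixLayout; // re-run on every viewport change

Change one ID or nest one more div and it silently breaks - which is exactly the point.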

What I'm suggesting is that we provide, in web browsers, the same controls we have for generating a user interface in local software. That means being able to programmatically adjust layouts, styles, etc., without having to wrap JavaScript around them to provide a responsive interface or implement common layout requirements. It means doing all the things we already do, but in a way that improves readability, maintainability, and ease of future modification.

Basically, it means internalizing something like Flash, without the proprietary nature and other detriments.

That's pretty much what this WebAssembly bytecode would do, and what I've been advocating for.

What you're talking about seems to have... absolutely no basis in either the article or my comment. I'm not even sure what it is you're suggesting. Letting other people run any program on your computer through your web browser? That does sound stupid.

Comment: Re:The Web needs a lot of things (Score 4, Informative)

I have also been saying this for years.

It's nice that HTML and CSS were accessible to the masses; without their simplicity, we could never have reached the level of adoption we now have. See the argument over the MEDIA vs. IMG tag from way back for a good example of that. Now, though, that additive simplicity is holding us back.

What we need is a language with flow control, variables, functions, templating - the whole 9 yards. Not just on top of HTML & CSS, but as part of them. So we can dynamically size page elements based on other page elements (think layout managers) without writing incredibly complex and highly specific CSS for a variety of media screen sizes. Common features like centering a dialog box, or a three-column display with header and footer sections, shouldn't be hacks you need to ferret out that may behave badly with long or short pages. It'd be nice to explicitly link action management with page elements and avoid much of the code required to identify actors and route events. Heck, some sort of built-in asynchronous request mechanism might be nice, rather than having to write custom JavaScript each time.
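
That last point is the sort of custom JavaScript I mean - raw XMLHttpRequest plumbing that gets rewritten (or wrapped) in every project. A minimal sketch, with an invented URL:

    // Hand-rolled asynchronous request; the kind of plumbing a
    // richer page language could provide natively
    function getJSON(url, onSuccess, onError) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url, true); // true = asynchronous
      xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return; // wait for completion
        if (xhr.status >= 200 && xhr.status < 300) {
          onSuccess(JSON.parse(xhr.responseText));
        } else if (onError) {
          onError(xhr.status);
        }
      };
      xhr.send(null);
    }

    getJSON('/api/items', function (items) { /* update the page */ });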

What we don't need is another life support extension to a markup language.

Comment: Re:Lack of mainframe skills maybe? (Score 1)

That depends on a number of factors.

There are a few parts to a banking system, and the 'core' is usually on a mainframe. The name 'core' in this context means "the software system used to store financial account data" - a database, for the most part. It stores the account data and occasionally a few other pieces of information, but mostly the account data. Account data never really changes, so neither have these "core" systems. They're also 20-30 years old, and so rarely break anymore. They're time-tested.

However, modern banks do not use that system, or usually even a mainframe, to perform the processing. First of all, most banks won't do real-time transactions. They make it look like real time to end users, but on the back end there are delays. Instead, they have proxy authorization systems that try to manage the minute-to-minute data and authorize transactions with the info they have. When you use an ATM and you're not at a bank location - or you are, but it's a distant branch - you're probably going through proxy authorization. If you're sneaky, and you've got someone on the other side of the country, you could probably withdraw the 'same' money twice before the proxies reconcile. Eventually it'll catch up to you, but it may work in the short term.

So we've got proxy authorizers that talk to each other and the core and keep the day's running total.
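
Conceptually, each proxy node does something like this (a toy sketch in JavaScript, not real banking code; all names invented):

    // Toy proxy authorizer: approve against the last known core
    // balance minus what this node has already authorized today
    function ProxyAuthorizer(coreBalances) {
      this.coreBalances = coreBalances; // snapshot from the last clearance
      this.pendingToday = {};           // account -> running total
    }

    ProxyAuthorizer.prototype.authorize = function (account, amount) {
      var pending = this.pendingToday[account] || 0;
      var available = this.coreBalances[account] - pending;
      if (amount > available) return false; // decline
      this.pendingToday[account] = pending + amount;
      return true; // approve now; the core gets updated later
    };

Two distant proxies working from stale snapshots can both approve the 'same' money - exactly the race described above.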

Then come the clearance systems. This is the end-of-day processing where all account transactions are reconciled. These started out on mainframes, but most modern banks have moved off them, leaving only the core behind. That may be because the clearance processors are updated fairly frequently. They need to be aware of things like when (and how) to charge fees, customer plans with their usage allowances, taxes, court-mandated child support deductions, all sorts of things. Any time there's a new plan or program or account type or feature, basically, the clearance nodes need to be updated.

So when do they do their thing? The clearance process goes through all the proxy authorization nodes and does all the accounting that's needed, and only when confirmed by a bank employee does it finally update the core with the new values. This usually happens once a day, but busy banks may run it 4 or more times (though I rarely hear of anyone even using 2, except for special circumstances where it's manually triggered).
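
Continuing the toy sketch from above (again, invented names, not real banking code), clearance is roughly:

    // Toy end-of-day clearance: sweep each proxy's pending totals,
    // apply fees and plans, and only then write to the core
    function runClearance(proxies, core, approvedByEmployee) {
      if (!approvedByEmployee) return; // a human signs off first
      proxies.forEach(function (proxy) {
        Object.keys(proxy.pendingToday).forEach(function (account) {
          var net = proxy.pendingToday[account]; // fees, plans, etc. omitted
          core.balances[account] -= net;
        });
        proxy.pendingToday = {}; // reset for the next business day
      });
    }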

So I'd guess the processing is probably not on a mainframe. It's probably on a cluster of servers running the clearance processing, and they do reasonably frequent updates. Some percentage of the time, those updates are what cause the "loss" of data in the first place.

Comment: Happens all the time. (Score 3, Informative)

So, I used to write banking software for a living, and that included being on the damage control team when things went south. Which they did. A lot. And which were brought to the technical staff's attention, without fail, every Friday at 4 PM - but I digress.

This isn't really a big deal.

This sort of thing happens on almost a daily basis. I'd say that 1 in every 500 banks "loses" a day's transactions each week. From hardware failures to networking problems, to someone entering bad data, to simple bugs in the software (usually data-dependent and hard to source), and even the occasional overwrite from a backup. Something breaks.

Once we had an FI (financial institution) lie about their data center setup - it wasn't redundant, offsite, or certified; it wasn't even a data center. They just had some machines running in their basement. Imagine our surprise when they called up wanting us to (remotely) fix their servers, which were under 6 feet of water during a flood. ... again, I digress.

The thing about these transactions is they're not really lost. Nothing really goes missing.

See, all financial software is super keen on accounting. I don't mean that in a strictly 'add up the numbers' way, but in an auditing way. There are logs upon logs. Using your ATM card to withdraw $20 probably generates around, oh, I dunno, 30-40 log messages, depending on how it's routed. There's a log of the transmission and response at each node along the way, using a standardized protocol. It's a severe pain in the butt, but even if a system goes down forever, we can regenerate those logs from the other systems. It's a great deal of effort, especially to preserve the sequence order, and it's tedious, but it can be done.
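
As a toy illustration of that reconstruction (field names invented; the real formats are standardized interchange messages):

    // Rebuild a dead node's log from its neighbors' logs: collect every
    // message that touched it, dedupe, and restore sequence order
    function reconstructLog(deadNodeId, neighborLogs) {
      var seen = {};
      var recovered = [];
      neighborLogs.forEach(function (log) {
        log.forEach(function (entry) {
          var touches = entry.from === deadNodeId || entry.to === deadNodeId;
          if (touches && !seen[entry.sequence]) {
            seen[entry.sequence] = true;
            recovered.push(entry);
          }
        });
      });
      // Sequence order is what makes replay safe
      recovered.sort(function (a, b) { return a.sequence - b.sequence; });
      return recovered;
    }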

This could easily account for the initial delay in fixing things.

Not only that, all these systems are very keen on sequence-order processing, so if it's just the case that the end-of-day processing (clearance) system went down, had a bug, etc., and none of the transactions were finalized, then they'll just stack up. Once they get that system up again, it'll start processing them again. It might take longer due to the large backlog, but it'll eventually complete.

98% of the time, the only people who notice are companies waiting on a payroll to go out, and most of them are happy enough to accept the financial institution's admission of fault - for a day or two, at least. Why this one made the paper, I can't tell you. Probably a customer was savvy enough with social media and decided to burn the bank. I guess they should just be happy the public in general doesn't realize how many problems there are, and how much actual work goes into making banking seem reliable and secure.

Comment: My wish list: (Score 3, Informative)

I saw the linked article with the things some folks want, and hated most of it. Vehicles? Really?

Here's what I'd like:

    - Companion characters & character development done by the BioWare teams (I'm going to ignore the low-to-average quality of Dragon Age: Inquisition). Bethesda Softworks does an admirable job with environment and atmosphere, but their NPCs are generally flat, with a few exceptions - companions most of all. Multiple companions might be nice; so would companion quests, idle-time and inter-party squawking, and scenarios providing different options with different companions.

    - Combat that always feels like a challenge, and not just in a ninja-monkey way where enemy stats scale to your level. Perhaps limit character growth and equipment attributes in a D&D 5th-edition sort of way. Adjustable, though (see 'customization' below).

    - They rock at allowing mods, but a truly made-for-third-parties-without-a-debugger-running sort of script evaluation (profiling), execution, merging, and management system would be swell. Knowing a module is going to crash - or even just which mod caused a crash - is a big help.

    - Enough customization to allow different play styles, not just different difficulty levels. For example, the optional 'hardcore' mode in New Vegas requiring food, drink, and sleep; or a checklist of 'collectables', easier combat, or excessive loot for folks more interested in achievements than in soaking up the atmosphere. This includes any time a dev said "But that won't work on console" - make it an option. None of this dumbing things down just because it has to run on a console.

    - That mod thing up there? I'm putting it here again because I like mods.

    - Oh, and an easy way to add songs to a playlist rotation, without necessarily requiring a mod with a new radio station.

Comment: Re:Verbosity is easy? (Score 2)

You may know the quote: "Premature optimization is the root of all evil," but the whole quote in context explains why it is so:

"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
- Donald Knuth

In fact, it takes an experienced eye to know when and where to optimize, to identify that critical 3%. Meanwhile, novices are so worried they'll miss it that they try to overoptimize everything, not yet having the experience of maintaining programs written that way.

So when you see that sort of pattern abuse, especially classes named after patterns - -Facade, -Decorator, -Adapter, -Mediator (and -Mixin, depending on the language) - realize that what you're dealing with is a novice's code, written by an amateur with no clear understanding of the entirety of their job, and adjust your expectations accordingly.

Comment: Re:The reason is : Corporate buy-in. (Score 1)

That may explain why it's still around even after being technologically eclipsed, in a similar vein to Java.

They're not around because they're easy to read or the language of choice or any other criterion that would make sense to a developer, but rather because there was a corporate investment. Money spent means more to a business than other metrics.

Comment: Re:Verbosity is easy? (Score 5, Insightful)

Agreed.

The whole "convention over configuration" theme is maddening, because if you don't know the conventions - or haven't figured out the new conventions that a particular project uses to drive execution and data flow, you're going to be lost. What is gained in brevity is added cost on the maintenance side. I once spent 2 days helping someone troubleshoot a currency formatting issue on a RoR app; someone had created a mixin to extend a base string class method, in a file describing a service. Fixing the code took all of 5 minutes, but finding it took forever.

With Java, it's worse now than it used to be. A decade ago, your major threat to readability was someone with pattern prejudice: people who encapsulated everything in a factory-factory to factory to interface to abstract class, etc., etc., making every data-object change require changes to 5-10 files just to surface the change to the end methods using it, just for the sake of doing it.

Today you've got stuff like Spring. Ever try to do a manual, non-runtime code analysis of a decent-sized project that heavily uses Spring, even where it's not necessary (like interfaces that will only ever have a single implementation)? Or worse, have you seen the OSGi development model? Let me put it this way: imagine using the original J2EE bean model, with home and remote interfaces and all that, only we've decided to use CORBA as our architectural focus. So every method can be a dependency-injected, service-provided module, ready for you to call the service factory and grab an instance of the feature you want - which incidentally makes static analysis of the code by humans more or less impossible.
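
The pattern in miniature (sketched in JavaScript rather than Java, with invented names, but the indirection is the same): the call site names only a string key, so no search for the implementing class ever lands on it:

    // Minimal service locator: all wiring goes through string keys
    var registry = {
      services: {},
      register: function (name, impl) { this.services[name] = impl; },
      lookup: function (name) { return this.services[name]; }
    };

    function WireTransferService() {}
    WireTransferService.prototype.process = function (order) {
      // ... the actual work lives here
    };

    // Registration happens in one module or config file...
    registry.register('paymentService', new WireTransferService());

    // ...and consumption in another. A 'find in files' search for
    // WireTransferService never hits this call site; you have to know
    // (or run) the registry to learn which implementation is invoked.
    var service = registry.lookup('paymentService');
    service.process({ amount: 100 });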

Yes, I understand the concepts behind these features, and OSGi actually has some neat capabilities (that we've had since the mid-'90s via similarly laborious mechanisms like CORBA or, more often, DCOM), but they detract heavily from both the readability and maintainability of the code. If a new developer can't determine the execution or data flow for a given scenario in a few minutes, it's going to be onerous for them to maintain the system. All that pain for the benefit of what we all know is the patent myth of polymorphism - that every project is going to need unplanned sibling code modules in the future - doesn't seem like a good trade-off to me.

I've started to really appreciate code where I can use tools - often just the 'find in files' feature of an IDE - to trace through execution and data flow. It's becoming less and less common, and sourcing and correcting bugs is taking up more and more of my time because of that rarity. I sometimes look back on 'tricky' C code that uses/abuses pointer arithmetic and requires complex memory management, and I feel wistful for when things were that simple, when you could know what the code was actually doing... aww crud. I'm like a few years away from complaining about those darn kids on my lawn, aren't I?

Comment: The reason is : Corporate buy-in. (Score 1)

Java was the first language on the scene that came with a top-to-bottom marketing rollout. There were certifications, training, professional tutorials and classes, sponsored competitions, commercials and corporate branding, full-page ads in trade magazines, and a one-size-fits-all modular system of components to mix and match and fit every need. You could even become certified as a Java evangelist - that used to be a real job at some corporations.

Once they got solid buy-in, well, companies are loath to change unless there's either a pain point or a generational technology gap, and sometimes not even for the latter (see batch-processing mainframes running COBOL and RPG programs).

So that's what it is. If anything, as the language matures, many people actually want to get away from readability; they want annotations and dependency injection controlled by some non-centralized mechanism, such that you must either rely on everyone else who worked on the product getting things 'right' and just magically working without your understanding of their processes (a theoretical goal of good OO development), or, more often, you must understand all the code running at all times. Try doing a manual analysis of a good-sized project that relies heavily on Spring, or worse, an OSGi project.

Projects using those features are not self-documenting code. Tracing execution or data flow is extremely difficult. All I can say is at least it's not Ruby, Objective-C, or Smalltalk.

So, no, it's not readability. There's a good subset of programmers who are strongly against readability. It's money, plain and simple. It's a corporate investment, and that's all there really is to it.
