
Comment: This brings into question our theories... (Score 1) 152

by Lodragandraoidh (#49175045) Attached to: Astronomers Find an Old-Looking Galaxy In the Early Universe

The big bang theory is just that: a theory. It is not yet proven indisputably as a law of nature.

New ideas and observations, such as this article on new equations and this article on the lack of expected gravitational waves, put the theory to the test. Furthermore, the Pope declaring the big bang theory 'right' only increases the need to check our models and assumptions on this subject (and now that I think about it, wouldn't the church have a vested interest in a non-permanent universe that meshes with end-times dogma?).

At least until we get some indisputable evidence, we need to continue to question our theories, record our observations, and try to see where the puzzle pieces fit. Being a dogmatic scientist is worse than being ignorant: the scientist should know better.

Comment: Re:Write it myself (Score 1) 153

by Lodragandraoidh (#49150443) Attached to: Invented-Here Syndrome

We need to address the real underlying problem you are describing right there: code written by different people to no common standard is hard to manage over its lifecycle - and this goes double for limited frameworks that may get some things right at the expense of not letting you get all things right.

This is one thing that open source has gotten right on occasion - think of the Linux kernel for example, and how many people contribute to that and keep it going.

So really the answer, I think, is twofold: on the one hand, people need better tools that make it easier to integrate their efforts; on the other hand, entities engaged in this activity need to develop standards that ensure when people develop things, they document them and build interfaces that are consistent - if not globally, at least among the groups expected to work on the code. Doing both of these things, along with the practices they imply (e.g. code reviews, agile development methods, etc.), goes a long way toward solving the problem.

Now, if you are only building software for yourself, then this isn't so important. However, if you expect other people to extend and manage your code over the long term, then I would still opt for leaning towards either creating and documenting standards, or selecting and learning existing well known standards - and sticking to that in your own code. Keep it consistent between all the things you build that you want to share, and you just might get people to help - if that's what you are looking for.

Comment: Re:Sociological problem: CYA (Score 1) 153

by Lodragandraoidh (#49149859) Attached to: Invented-Here Syndrome

I would not consider being overly risk averse to be rational behavior.

There are many rational reasons to take risks:

1. Risk gives you, and by extension your company, the opportunity to learn and grow. If you never take risks, you stagnate and learn nothing.

2. Real invention occurs through taking risks. If you never take risks, you don't innovate.

3. Taking responsibility, and therefore risk, is what men and women do. Being overly risk averse is immature, weaselly behavior.

If your company does not reward risk-taking - then you are in the wrong company.

Comment: Re:About time... (Score 3, Interesting) 153

by Lodragandraoidh (#49149745) Attached to: Invented-Here Syndrome

I've told this story elsewhere, but it applies directly to this issue, so I'll recap in short:

A vendor is contracted to create an integrated support application for large sums of money (millions of dollars) over a 6-month period; the contractor chooses an obscure commercial Java framework to build the system on. The application is delivered and appears to work fine for several months, then starts getting sluggish; a month later the application locks up and has to be restarted. This gets progressively worse, tracking the growth of the underlying customer base, and the application soon becomes completely useless - shutting down within minutes of being started with a memory-exhaustion error.

The main problem we found was the equivalent of a memory leak in Java. The code instantiated objects based upon the framework in the main loop, and they never went out of scope. Furthermore, the code imported hundreds of libraries that were never used, further clouding any understanding of what the thing was doing.

To make a long story short: since this was already in production, and there was now even more pressure to get a solution in place fast (and all the lawyers' threats in the world can't replace a knowledgeable developer), we rebuilt the whole system in Perl in a little over a week. That solution is still running today, even though we've scaled by orders of magnitude since then.

So - to your point - this stuff really does happen, and it wastes godawful amounts of time and money when a simpler home-grown solution would do just as well, if not better.

Comment: Re:About time... (Score 1) 153

by Lodragandraoidh (#49147891) Attached to: Invented-Here Syndrome

Programmers have to take more responsibility and think holistically about what they are building - and integrate testing to validate their assumptions against the hard light of the real world. To be a great programmer, you should know how to test and build tests and test rigs as needed. To be a great tester, you should know how to code - so you can automate what you're testing. I think the lines have to blur - a firewall between the two only leads to silos, and limits what can be done if they were to work seamlessly (the quote attributed to Aristotle applies here, "the whole is greater than the sum of its parts").

Of course, in many development shops the 'just a programmer' mentality is baked into the whole process - so as a developer you might feel that you are stuck. That being said, if you know better, then it is in the interests of your business if not yourself to champion the issue and effect change.

Comment: Re:Is semver too simplistic for kernels? (Score 1) 199

by Lodragandraoidh (#49066777) Attached to: Torvalds Polls Desire for Linux's Next Major Version Bump

It would all depend on your definition of 'significant rewrite/technology/architectural changes'. There is a lot of room in there for interpretation - particularly if a project was changing constantly.

By the same token, if a project has stabilized to the point of little or no change, then having a long-lived 'W' wouldn't necessarily be a bad thing either.

Human beings create these numbering schemes for human consumption - and therefore can reasonably adjust them to avoid confusion as necessary.

Comment: Re:Why not use commit date as version (Score 1) 199

by Lodragandraoidh (#49056237) Attached to: Torvalds Polls Desire for Linux's Next Major Version Bump

There are no stupid date formats - just stupid people.

Symbols - and the words, phrases, and sentences created with those symbols - are neither right nor wrong in and of themselves. In a given context (e.g. spoken words, computer file names, database representation, printed documents, etc.) each and every method has its place.

That being said, I do agree, and I myself use ISO-style date/time formats (yyyymmddhhmmss) when dealing with data, file names, and other things that I want to conveniently order by date/time. That does not preclude me from using other methods in different contexts.
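A compact sortable timestamp like the one above is a one-liner with `strftime`; the helper name here is just for illustration:

```python
from datetime import datetime

def timestamp_id(dt=None):
    """Render a datetime as a sortable yyyymmddhhmmss string."""
    dt = dt or datetime.now()
    return dt.strftime("%Y%m%d%H%M%S")
```

Because the fields run from most to least significant, plain string sorting of these values is also chronological sorting - which is exactly what makes the format convenient for file names.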

Comment: Re:Is semver too simplistic for kernels? (Score 1) 199

by Lodragandraoidh (#49056125) Attached to: Torvalds Polls Desire for Linux's Next Major Version Bump

We did at some point, but users were not able to remember the full version number. People already have trouble remembering even 3 numbers; they start telling you things like "I have the latest version" (which they often don't), or confuse 10.0.1 with 10.1.0. A 4th number makes the situation much worse.

Why is this important? Because when someone sends you a bug report, you want to know exactly which version they are using. You may or may not have fixed the bug already, so having accurate version numbers matters.

The fix for the human-factors problem is to automate the generation of the bug report on the user's system, so that it captures the version information for things critical to your app (e.g. kernel version, library versions, your application's version, etc.). Have the application do this itself upon failure, and/or provide a tool to capture the requisite information after the fact.

Then make it a policy not to accept bug reports without the appropriate error-log data attached (with clear instructions about how to generate the information and where to find the output file for sending). You can then easily filter out any non-compliant reports, making your life a lot easier.
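A capture routine along these lines can be sketched with the standard library alone; `APP_VERSION` and the field names are hypothetical, and a real tool would add whatever kernel and library details matter to the app:

```python
import json
import platform
import sys

APP_VERSION = "1.4.2"  # hypothetical; normally baked in at build time

def capture_environment():
    """Collect the version facts a bug report must carry."""
    return {
        "app_version": APP_VERSION,
        "python_version": platform.python_version(),
        "platform": platform.platform(),
        "executable": sys.executable,
    }

def write_report(path="bug-report.json"):
    """Dump the captured environment to a file the user can attach."""
    with open(path, "w") as fh:
        json.dump(capture_environment(), fh, indent=2)
    return path
```

Wiring `write_report` into the application's top-level exception handler gives you the "do this upon failure" behavior; shipping it as a standalone script covers the after-the-fact case.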

That's how I would do it, anyway. I've been burned - and wasted my limited time - too often by people raising 'bugs' without any supporting evidence that turned out, on closer inspection, to be user error or some other component of the system unrelated to the application. I no longer accept unsubstantiated bug reports.

Comment: Re:Is semver too simplistic for kernels? (Score 4, Interesting) 199

by Lodragandraoidh (#49048143) Attached to: Torvalds Polls Desire for Linux's Next Major Version Bump

I would argue for adding an extra decimal point: W.X.Y.Z

'W' - Major Release - reserved for significant rewrite/technology/architectural changes

'X' - New Feature Release - significant changes to existing architecture/technology

'Y' - Minor Release - minor changes to existing architecture/technology - could be for major bug patches, or other miscellaneous performance enhancements that we want to differentiate from previous releases.

'Z' - Patches - things that do not rise to the level of a full release; could be for minor bug fixes, or to track iterative evolution and re-factoring of a small component of the overall system. The extra number lets you keep each individual number smaller by selectively rolling the number above it without impacting your major release numbers - basically it splits the last number, which seems to get a lot of use, into two numbers to spread the load.
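One practical upside of a fixed W.X.Y.Z scheme is that versions compare mechanically; a small sketch (the helper name is hypothetical):

```python
def parse_version(s):
    """Parse a W.X.Y.Z version string into a tuple of four ints."""
    parts = tuple(int(p) for p in s.split("."))
    if len(parts) != 4:
        raise ValueError(f"expected W.X.Y.Z, got {s!r}")
    return parts
```

Tuples of ints compare lexicographically, so this also sidesteps the classic string-comparison trap where "10.0.0.0" sorts before "9.0.0.0".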

Comment: Re:Yes (Score 1) 716

by Lodragandraoidh (#49035527) Attached to: Is Modern Linux Becoming Too Complex?

Excellent points. I think all of the angst coming out of the systemd debacle is really the result of a long-standing de facto state in which most distros - because of their common POSIX-ish modular implementations - could work with just about any software out there. Even if your distro didn't support something (like a very small X11-compliant text-based window manager, which I managed to shoehorn onto an old AST 486 laptop with a 20 MB hard drive and 1 MB of RAM running a stripped-down version of Slackware 10), it could be made to work on the distro you were most happy with. People had their cake and, because of interoperability, could generally eat it too relatively easily - with some exceptions (e.g. device drivers).

Systemd, Dbus, et al created a situation where the choices that were once 'AND' choices, now became 'OR' choices - at first for the developers of key system components - but with enough momentum this trickled down to the end user. Developers who were once maintainers of alternative versions of various key applications are finding that the code they once depended on for porting no longer supports the old interfaces, and so they are faced with a hard choice - either spend their time working on the most widely distributed version of their software (for systemd based distros - abandoning general support across BSD and non-systemd Linuxes), or focus their energy on back-porting the code in external interfaces to work across non-systemd distros. A Hobson's choice for both developers and users who value interoperability/portability of their systems.

Frankly, I am surprised that Linux, BSD, and the shared GNU POSIX tool set was able to maintain this benign portability for as long as it has across such an eclectic assortment of distributions. I would argue that this gave Linux time to incubate, and grow up in a stable environment. With the systemd gauntlet thrown down it is now time for other alternatives to be put out there - the more the merrier! Maybe one new distro would be enough to address the complaints. Maybe 10 or a hundred. Who knows? The more of these there are, the more likely someone complaining about lack of options today will find something they like tomorrow without having to try to move a boulder up hill with a straw.

Comment: Re:So roll your own. (Score 5, Insightful) 716

by Lodragandraoidh (#49029667) Attached to: Is Modern Linux Becoming Too Complex?

I think you're missing the point. Linux is the kernel - and it is very stable, and while it has modern extensions, it still keeps the POSIX interfaces consistent to allow inter-operation as desired. The issue here is not that forks and new versions of Linux distros are an aberration, but how the major distributions have changed - the article is a symptom of those changes towards homogeneity.

The Linux kernel is by definition identically complex on any distro using a given version of the kernel (the variances created by compilation switches notwithstanding). The real variance is in the distros - and I don't think variety is a bad thing, particularly in this day and age when we are having to focus more and more on security, and small applications on different types of devices - from small ARM processor systems, to virtual cluster systems in data centers.

Variety creates a strong ecosystem that is more resilient to security exploitation as a whole; variety is needed now more than ever given the security threats we are seeing. If you look at the history of Linux distributions over time - you'll see that from the very beginning it was a vibrant field with many distros - some that bombed out - some that were forked and then died, and forks and forks of forks that continued on - keeping the parts that seemed to work for those users. Today - I think people perceive what is happening with the major distros as a reduction in choice (if Redhat is essentially identical to Debian, Ubuntu, et al - why bother having different distros?) - a bottleneck in variability; from a security perspective, I think people are worried that a monoculture is emerging that will present a very large and crystallized attack surface after the honeymoon period is over.

If people don't like what is available, if they are concerned about the security implications, then they or their friends need to do something about it. Fork an existing distro, roll your own distro, or if you are really clever - build your own operating system from scratch to provide an answer, and hopefully something better/different in the long run. Progress isn't a bad thing; sitting around doing nothing and complaining about it is.

Comment: Here's a great idea... (Score 1) 220

by Lodragandraoidh (#48928627) Attached to: Anonymous No More: Your Coding Style Can Give You Away

You can have/use this idea for free:

Before a system will build said code, have the build system verify the code not only by the public key/code hash, but also, as a secondary method, by the code fingerprint of the author in question.

This turns a creepy idea into something worthwhile.
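As a sketch of the two checks: the content hash is standard, while the stylometric fingerprint function is purely hypothetical here - it is the classifier the article describes, passed in as a stand-in:

```python
import hashlib

def verify_before_build(source: bytes, expected_sha256: str,
                        expected_author: str, fingerprint_fn):
    """Gate a build on both content integrity and authorship.

    fingerprint_fn is a stand-in for a stylometric classifier that
    maps source code to its most likely author.
    """
    if hashlib.sha256(source).hexdigest() != expected_sha256:
        return False  # content was tampered with
    return fingerprint_fn(source) == expected_author
```

The hash check catches modified code; the fingerprint check catches code that was swapped in wholesale by someone other than the expected author, which is the extra assurance the comment is proposing.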

Comment: I'll let you know when I've met one... (Score 1) 214

by Lodragandraoidh (#48928487) Attached to: Ask Slashdot: What Makes a Great Software Developer?

I have yet to meet a really competent programmer. I don't consider myself much beyond capable - I have too many flaws in my output to be considered really brilliant.

I have worked with, or dealt with the output of, other programmers whose performance was egregious - the most egregious being the contractor whose naive use of a commercial Java framework managed to produce the effect of a memory leak in Java (i.e. it hamstrung Java's built-in garbage-collection mechanism).

Experience has taught me some practical measures of a quality programmer, in no particular order:

1. They must know how to program at the simplest level (e.g. competency in structured programming in C would be a good starting point, with a basic understanding of LISP programming a plus) before tackling more complex programming tasks. I get the sense there are a lot of cut-and-paste programmers out there who really don't understand what the underlying code they are creating is actually doing.

2. Have an innate ability to focus on simple solutions rather than being clever. The KISS principle must be understood and brought into every design decision from the start. That is not to say there are no complexities - some things that are simple for the problem at hand look complex next to other systems - but a quality programmer understands what is simple given the problem, and avoids needless complexity.

3. Literate - they must be able not only to communicate effectively externally, but also to write comments in code that illuminate the subject matter in a clear, concise manner. Ideally you should be able to get workable technical documentation straight from their comments, via doxygen or the like (perldoc, pydoc, etc.).

4. Their code must be maintainable and extendable. If an average programmer cannot maintain the code, and is required to rewrite the system from scratch - then you have failed as a quality programmer. Change is inevitable - how resilient your system is to change is a measure of your ability as a programmer.

5. They must understand a lot about technology outside of the world of their application. Their application will live in a world of networks, machines (physical and virtual), storage systems, communication protocols, and APIs - they must understand the implications of software design choices given a set of environmental requirements. The best programmers not only know how to code up systems, but also how to give advice about what their systems will be capable of doing given the environment, or lack thereof - and act upon that if it is possible to adjust via changes to software alone (e.g. choosing multithreading/multiprogramming design over single thread of execution).

6. They must be able to create secure code. If the company they work for doesn't provide a guide for that, then they should develop one on their own, live by it, and consistently improve it. If they are using frameworks/libraries written by someone else, they should audit or test them to be sure the underlying implementation is secure.

7. Must be able to get along with others and work as part of a team. Ideally, a truly quality programmer will also mentor peers, sharing their ideals and capabilities to bring everyone up as much as possible. Quality programmers are not prima donnas.

That's it from my standpoint.

Comment: I'm flabbergasted at the sheer stupidity... (Score 1) 307

Define 'application'. Technically, the BlackBerry operating system is an application - so based upon his own statement, BlackBerry OS should be made to run on any other operating system. In the annals of dumb-assedness, this is one for the record books!
