Comment Yes, but GUIs have their place too (Score 2) 55

True security is done in logs.

I get what you're saying, and you certainly have a valid point about flashy GUIs not necessarily being effective GUIs.

However, speaking as someone who does a lot of UI work, there is also the other side of the coin, which is that CLIs and plain text log files are often neither the most efficient nor the most accurate way to configure or discover the things you care about.

In their favour, plain text formats are amenable to scripting and analysis using general text manipulation tools, and of course they have longevity. But they are also unstructured, they offer little interactive, real-time support, and ultimately they are limited to what you can express in sequences of characters (which is just about anything, but only if you're willing to write enough).

Even in highly technical environments, a good visualisation can present information in a form that is prioritised and draws attention to the most important features or anomalous results, or that gives a realistic overview of the current situation far quicker than scanning text output would. If you start to make those visual representations interactive, you can potentially also make complicated configuration work or progressive explorations of the data quicker and less error-prone.

Comment Re:Not enough Flash (Score 3, Insightful) 114

For example, take a classic list ordering GUI with up/down buttons. Works fine without javascript. Add javascript to make it also do drag&drop. It works better with javascript, but still works just fine without.

Web interfaces can gracefully degrade down to a very low level.

Yes they can, but not for free.

This sort of idea makes us geeks feel warm and fuzzy inside, but the reality is that you're talking about implementing two completely different versions of that UI feature. Doing so takes time and money, and you’d be spending that time and money purely to support a use case that probably represents a negligible number of users (people who want to run these UIs but have JS disabled).

Of course portability and compatibility are important for user interfaces, but this is a cost/benefit question. There is a line beyond which the results no longer justify the effort, and any resources you’re spending past that line aren’t being spent on implementing other features or improving usability elsewhere in your UI.

Comment Re:General goodness (Score 2) 114

Code monkeys never ask Rack monkeys what issues they face on the real field.

That’s not entirely fair. As a guy making UIs, I love hearing from the front line about what users actually want, what they like and what they would like to see improved.

However, most development roles aren’t naturally customer-facing, and the focus for most people between the customers and the developers is usually on features (and commercial matters like pricing, of course), so this is the information that will naturally flow through an organisation and drive development.

Likewise, from the user’s side, often the people who are in contact with suppliers and making buying decisions aren’t the people who are personally going to get that 4am wake-up call to actually use these products. If there are things that matter and they aren’t obvious in the way that a tick in a feature column or a discount on a price are obvious, someone has to tell the guys doing the buying/negotiations so they can pass it on.

Basically, picking up more general usability issues like the ones bertok mentioned above takes either an exceptionally enlightened and well-structured organisation where this kind of information routinely gets passed along too, or guys at both ends of the chain who form side channels to get the little details through, and that applies on both the supplier side and the customer side.

Comment Re:General goodness (Score 2) 114

Thank you for the insightful post. I create user interfaces professionally, I share many of your frustrations with the generally poor standards in the industry, and I find it reassuring that at least some people who use the kind of tools I build do actually value good usability!

The one big thing I would add to your points is that whatever kind of user interface you’re building — CLI, GUI, API, whatever — it’s always going to be limited by how well thought-out the underlying configuration model is. If you have a system that requires 745 interacting settings to be correct before it works, and the guy who changes those settings is doing it at 4am after his pager woke him up, you’re unlikely to see a happy ending no matter how polished the presentation of those 745 settings might be in any UI. It never ceases to amaze me how many products don’t get their fundamentals right first, as if everything will be fine as long as the UI is pretty, compatible with Brand X, compatible with Scripting Tool Y, compatible with Management Protocol Z, or offers some other useful but second-tier benefit.

Please do share any other rants, general frustrations, examples of things that were really useful, or other similar comments you have. These kinds of threads are gold for those of us who work in the industry.

Comment Mercurial is underrated (Score 2) 378

What about Mercurial? [...] I'm considering switching from Subversion to something else for my team at work, but the Git UI is awful. I've heard Mercurial is better, including its GUI integration (e.g. Tortoise).

In UI terms, there are respectable GUIs available for both Mercurial and Git these days. I’d say the biggest difference is in the CLI interaction, because the usability of Git is poor even in its native habitat on Linux, and on Windows you're basically stuck with Git Bash, which is a rather glitchy emulation of a Linux shell that IMHO is very irritating to use. Hg is much simpler and less cluttered.

In terms of functionality and the underlying models for how the system works, I’d say there are a few major differences between the Git and Mercurial workflows that you're likely to come across almost immediately. Some are “real” differences. Others look like differences when you first learn the tools, but they’re only consequences of how each tool works by default and you can work the other way if you prefer.

The first is that Git by default uses a two-stage commit: you identify the changes you want to commit from your working copy using git add, and then you actually commit them to your local repository with git commit. In Mercurial, the default is to commit everything with a single command. However, this is one of those illusory differences, because both systems have alternative commands/options that let you work the other way if you prefer.

The second is that Git and Mercurial (in)famously have different mechanisms for using branches, and here there really are meaningful differences in the underlying model for how things are stored and what you can do. Personally, I intensely dislike Git’s approach where you “forget” which branches things originally happened on or even what those branches were called, because I find the information it discards valuable. However, you’ll have no trouble finding a Git fan to tell you I’m just being silly and obviously the branches-as-moving-tags approach taken by Git is better for other reasons. In this case, there are tools built into Hg these days that basically work the same way as Git’s branches if that suits you, but I’ve yet to find any satisfactory way of getting Git to support history tracking including branches the way Hg does.

The third is rewriting history: in Hg, changes are basically permanent once committed, while in Git history is mutable by design using the rebase commands. Again, which is better for you will depend on your own workflow and personal preferences. In theory Git’s view is more flexible, but it’s also one of those areas where it’s a rite of passage to write a blog post about how not to screw up all your colleagues’ repos by rebasing something that was already pushed out to others, so use it with care. Hg’s approach is less amenable to certain workflows, but it’s also safer.

Finally, I just wanted to mention that whatever anyone tells you about how it’s obviously user error, both Git and Hg have had actual, verifiable, reproducible data-loss bugs, even in the past few months. I haven’t checked very recently whether any of the ones I knew about are still unfixed, but definitely make sure you’ve got the latest versions of everything. (If you’re using Hg, be really careful about cloning a repo directly on a network server, and in particular check whether you’re getting a truly independent clone or just an improperly-linked copy of the original that will corrupt both over time. And if you’re using Git with an external diff tool like Araxis Merge, beware that git difftool --dir-diff has been doing funny things when you quit the diff tool and may overwrite any changes you made in your working directory while the tool was open.)

Comment Re:Why does C++ matter? (Score 2) 476

I respectfully disagree. I’ve worked on heavily numerical code in both C++ and Java. Writing horrors like (a.Multiply(a).Add(b.Multiply(b))) instead of a*a+b*b gets old after about the first five minutes. Also, I’m still waiting to meet the programmer who will make those * operators do division just to trick me, yet who writes Multiply to do multiplication as we’d all expect.
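
Since a small code fragment may make the point clearer, here is an illustrative sketch (in Python rather than C++ or Java, purely because it is compact): a hypothetical Vec2 type written once with operator overloading and once with named methods only. The type, its names and its element-wise "multiply" are invented for the example; the readability argument is the same one made above.

    # Illustrative sketch only: a made-up Vec2 type, shown with and without
    # operator overloading, to echo the a*a+b*b point above.

    class Vec2:
        # 2D vector with operator overloading (the C++-style approach).
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __add__(self, other):
            return Vec2(self.x + other.x, self.y + other.y)

        def __mul__(self, other):
            # Element-wise product, purely for illustration.
            return Vec2(self.x * other.x, self.y * other.y)

    class Vec2NoOperators:
        # The same type with named methods only (the Java-style approach).
        def __init__(self, x, y):
            self.x, self.y = x, y

        def add(self, other):
            return Vec2NoOperators(self.x + other.x, self.y + other.y)

        def multiply(self, other):
            return Vec2NoOperators(self.x * other.x, self.y * other.y)

    a, b = Vec2(1.0, 2.0), Vec2(3.0, 4.0)
    r1 = a * a + b * b                      # reads like the mathematics

    p, q = Vec2NoOperators(1.0, 2.0), Vec2NoOperators(3.0, 4.0)
    r2 = p.multiply(p).add(q.multiply(q))   # the version that gets old fast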

Comment Re:Why does C++ matter? (Score 1) 476

I find that file you cite very readable. It's well formatted, it's clear what the code is supposed to do etc., comments where necessary. Why do you think it's sarcasm?

Well, for one thing, it’s just code in a file. There is no obvious indication of how this code fits into the wider design of the program, because C doesn’t have much of a module/namespace system. (There are some comments right at the end that seem to be about build order dependencies, but it’s not clear to me what they are trying to achieve. I assume there is some sort of project standard that requires them.)

Next, consider the first function, xor_blocks. It appears to take about 20 lines of code just to call one of four other functions based on how many entries are in an array that was passed in. A significant proportion of the code is only there because the input arrived as a void** and a count rather than a typed array. The rest repeats essentially the same pattern of code almost verbatim four times. It’s not clear whether the four do_N functions are completely different algorithms or just the same algorithm using defaults if there aren’t enough inputs provided. In the former case, you could express the entire function in about five or six lines in numerous other mainstream languages, with most of those lines just being a look-up table identifying the required functions. In the latter case, the entire 20+ line function would probably be redundant in many languages. And I see no reason why a language that can express this kind of logic without the overhead couldn’t generate code behind the scenes that is still 100% as efficient as the example.
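
To make the look-up table idea concrete, here is a rough Python sketch. It is not a translation of the kernel code: the do_N names and their signatures are hypothetical stand-ins for the four implementations the C function chooses between.

    # Hypothetical stand-ins for the four implementations; the real
    # functions would XOR the source blocks into dest.
    def do_2(dest, s1): ...
    def do_3(dest, s1, s2): ...
    def do_4(dest, s1, s2, s3): ...
    def do_5(dest, s1, s2, s3, s4): ...

    # One table replaces the repeated if/else blocks: index by source count.
    DISPATCH = {1: do_2, 2: do_3, 3: do_4, 4: do_5}

    def xor_blocks(dest, *srcs):
        # Pick the implementation matching the number of source blocks.
        return DISPATCH[len(srcs)](dest, *srcs)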

A little further down, we start defining macros like BENCH_SIZE. When these are later used elsewhere, you can’t tell whether you’re working with a constant or a function call with side effects. (This is a big objection I have to complaints that C++ overloaded operators could do almost anything, coming from people who then argue that we should use C instead because everything is explicit.)

That brings us to the second big function, do_xor_speed, in which we again encounter our ambiguous struct containing function pointers and void* parameters. This time, we also use a magic number, rely on (presumably) a global variable and implicit side effects for the main loop control logic, apparently try very hard not to let that loop be optimised in some unspecified way, and cause various implicit side effects on some other (I assume) global variable.

The final major function, calibrate_xor_blocks, has similar issues, and further complicates things by interweaving local macro definitions that mean some of the code isn’t executed, or is executed but is immediately overridden anyway, as well as apparently obfuscating a simple function call behind another macro with a name that looks like a regular function itself.

Now, I do realise that much of this is simply how a lot of industrial C gets written in practice. I also realise that there are few realistic choices for a low-level, systems programming language today, and none that I know of has much better readability than C. But that doesn’t negate the criticism that this C code has fairly horrible readability/maintainability properties compared to what could be achieved in a more expressive language.

Comment Re:Why does C++ matter? (Score 1) 476

C is and always will be more efficient with hardware than C++ (for equally skilled programmers).

Why would you say that?

There’s always been a great deal of emphasis in C++ on not paying any performance penalty for features you’re not using. Using the roughly common subset of the languages should yield similar results either way.

As for the extra features in C++, I don’t see any reason to assume that (for example) a virtual function dispatch via a vtable in C++ should be less efficient than the old “look up a pointer in a jump table” techniques in C that serve a similar purpose. If anything, it should be the other way around, as the C++ compiler has a little more semantic information that it could potentially use to optimise the generated code for each target hardware platform.

Comment Re:Why does C++ matter? (Score 3, Insightful) 476

True, but those few people who use C++ correctly seem to have learned their lessons with C.

That may be, at least in part, because many of the less than ideal aspects of C++ come from its C heritage.

I don’t understand some of the arguments made against C++ by certain “elder statesmen” of the OSS world. It seems they don’t like some of the extra functionality available in C++, seeing it as overcomplicated or too readily able to hide behaviour. In itself, that’s a reasonable concern. But then they use C, and reinvent the same wheels using crude text substitution macros that could be doing or interacting with anything.

In another forum discussion a few days ago, I saw someone argue that the Linux kernel is very readable, citing this C file as an example. I’m still not sure whether their comment was meant to be sarcasm.

Comment Re:Overblown (Score 1) 625

And it makes sense, why would someone not want to join a site where all your friends are?

I prefer to spend time with my friends and family in real life. Some of them I see often, and we don't always talk about big news. Those I see less often I enjoy catching up with when we can, and that gives us interesting things to talk about if we're going to be spending a few days together.

I am well known among my social group as a Facebook skeptic and privacy advocate, but I just don't see how meaningful relationships can be maintained with a couple of impersonal "sentences" of text speak, the occasional cat photo, and dutifully typing "Happy birthday!" each time a little box pops up telling me to. If that makes me a recluse, what should we call someone whose primary social interactions come in 140-character sound bites and who doesn't spend much social time with others away from their PC?

Comment Re:Oh honestly (Score 1) 436

From personal experience, the version of Java on Macs seems to have lagged significantly behind the version widely available on other platforms from Sun/Oracle. It's not clear to me yet exactly what this announcement/reaction refers to, but if it means clients who use Macs wind up downloading/installing up-to-date Java runtimes like everyone on other platforms, and have the latest version as a result, that sounds like a good thing.

Comment Re:Getting screwed in both directions (Score 1) 443

For what it’s worth, I can see a very strong case for type-safe rendering and systematic parsing of this kind of structured data. However, to my knowledge, no mainstream statically-typed language is expressive enough out-of-the-box to represent the structure of a typical JSON/XML/whatever schema in a concise, readable, maintainable form to support these goals.

Many popular statically-typed languages support all the basic arithmetic and logical operations for numeric and boolean data. Their standard libraries often include a bewildering array of additional mathematical functions as well. However, the basic text operations of rendering and parsing strings just don’t seem to get the same sort of support in most cases, perhaps because they are so much broader in scope. Likewise, manipulating structured data is often a weak point: today’s mainstream statically-typed languages tend to lack both the general flexibility you get with dynamic typing and the expressiveness and polymorphic tricks you get with algebraic data types and pattern matching in various functional programming languages.

Comment Re:Getting screwed in both directions (Score 1) 443

string formatting/regexes are about the same in java as they are in python.

I’m not sure I’d go quite that far. There are several subtle advantages in Python (and one or two not so subtle ones) that IMHO make working with formatted text significantly easier overall.

For example, Java’s basic string formatting tool, String.format, and its regex patterns rely on numerical indices to identify specific placeholders and capture groups. In Python, you can use meaningful names in each case.

Another small but often useful win for Python is having raw strings, which cut down dramatically on backslash pollution when you’re writing regex patterns.

Finally, in perhaps the fairest example of the wider statically vs. dynamically typed language comparison, we have Java’s infamous verbosity against Python’s famous readability. Simple things like matching a regex with capture groups can require several lines of code and the explicit creation of several objects in Java. In Python, you rarely need more than a single call to a function in the re module to get the same job done.
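
Since it may help to see the Python side of this, here is a short, self-contained sketch of the points above: named format fields, a raw string for the pattern, and named capture groups handled with a single call into the standard re module. The log line and pattern are invented for the example.

    import re

    # Named placeholders instead of positional indices.
    line = "{user} logged in from {host}".format(user="alice", host="10.0.0.5")

    # A raw string keeps the pattern free of doubled backslashes, and named
    # groups mean no counting of parentheses later on.
    pattern = r"(?P<user>\w+) logged in from (?P<host>\d+\.\d+\.\d+\.\d+)"

    match = re.search(pattern, line)
    if match:
        print(match.group("user"), match.group("host"))  # alice 10.0.0.5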

Comment Re:Getting screwed in both directions (Score 5, Informative) 443

If static languages are better, why is the bulk of web development done with dynamic languages?

I don’t know how much of that is reality and how much is popular perception. In any case, here are some general trends in mainstream statically-typed languages and mainstream dynamically-typed languages today that might contribute to the popularity of the latter for web development:

  • The dynamic languages do not require an extra compilation step in the build process, which probably speeds up prototyping. A lot of the web development in dynamic languages is probably done by small businesses or start-ups, and that sort of culture places a lot of emphasis on rapid prototyping.
  • The dynamic languages tend to have much easier basic text processing. Basics like string formatting and regular expression parsing are a horrendous chore in languages like C++, Java and C#, relative to the trivial one-liners widely available in “scripting” languages.
  • The dynamic languages also tend to have built-in support for structured data like nested hashes and arrays, where again you need to jump through hoops in typical mainstream static languages today. That kind of structured data is widely useful for defining easy interchange formats between browser-side code and server-side code. For example, on a current project, we have standardised JSON data that is accessed using several different programming languages in different contexts. In JavaScript or Python, it’s a breeze. In Java, it’s a chore. (There’s a short sketch of the Python side after this list.)
  • Integrations of popular dynamic languages with popular web servers are widely available and easy to set up. Setting up a Java-based web application is the sort of thing people write whole books about, dropping the names of half a dozen different technologies along the way.
  • Likewise, integrations of popular dynamic languages with popular database systems are widely available and easy to use.
  • A lot of web development projects are, rightly or wrongly, not treated as critical software systems where bugs are unacceptable. Encountering an error at run-time and dumping the visitor to some sort of error page is often considered an acceptable response, and people seem to expect and tolerate this behaviour without quite the same level of loathing they reserve for “Your application has crashed” dialogs or blue screens of death.
  • Perhaps most important of all, most web development software is small. More formal systems with static typing and well-specified interfaces probably have a better cost/benefit ratio on larger systems where it is harder for developers to see the big picture and more difficult to co-ordinate people working on different parts of a system without such tools.
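
As promised above, here is a small Python sketch of the structured-data point. The data shape is invented for illustration; the standard json module round-trips nested dicts and lists with no class definitions or schema plumbing.

    import json

    # An ad hoc nested structure: dicts and lists all the way down.
    config = {
        "service": "checkout",
        "endpoints": [
            {"path": "/cart", "methods": ["GET", "POST"]},
            {"path": "/pay", "methods": ["POST"]},
        ],
        "limits": {"requests_per_minute": 120},
    }

    payload = json.dumps(config)             # ready to hand to browser-side code
    restored = json.loads(payload)           # and straight back into dicts/lists
    print(restored["endpoints"][0]["path"])  # /cart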

I think these are more reflections of the languages in current use and their surrounding cultures than inherent traits of static vs. dynamic typing, but if we’re talking about the state of the industry today, that distinction doesn’t make much practical difference.

Comment This is a very simple question (Score 2, Insightful) 289

Whether to do The Big Rewrite always boils down to one very simple question: do the expected gains outweigh the expected losses?

Usually, the argument against doing a rewrite boils down to two key points:

  1. it takes time and resources just to get back to what you already had, which confers no immediate business benefit; and
  2. you risk losing the bug fixes and special cases that have accumulated during the real world use of the original implementation.

Those are certainly valid concerns, and IME it is often true that their impact is underestimated. However, what the doomsayers tend to ignore is all the potential benefits from writing a second version of something from scratch but with the experience gained from doing it once already:

  1. you can design based on the knowledge accumulated during the real world use of the original implementation, giving code that might be easier to maintain in future and/or allowing you to add new functionality that was not realistic before;
  2. you can refine your requirements based on that same experience, cutting out things that haven’t helped in practice and cleanly integrating requirements that weren’t anticipated the first time around, leaving you with a code base that is fitter for its purpose;
  3. while you lose all the old bug fixes and special case handlers, you also get to clear out all the old hacks and bolt-on workarounds that are maintenance hazards and likely sources of future bugs; and
  4. the best tool for the job the first time around might not be the best tool for the job any more, and a rewrite lets you revisit that decision and take advantage of any relevant advances in development tools, programming techniques, industry knowledge, etc.

I’m sure some rewrites really are just because a developer wants to write something new instead of working with what is already there, and those are almost always a bad idea IMHO. On the other hand, it can be annoying if someone comes in assuming that this is the only possible motivation for a rewrite, without considering whether there is another justification for the decision.
