Comment Re:A command explodes into objects (Score 1) 123

Try a modern linux with bash completions installed. Type "ls --" and hit tab.

Why you'd want to "arrow through" a large list of commands is beyond me.

First, you wouldn't want to do that through a large list, only a small one. Also, a well-designed interface would give you more powerful search tools that could be much faster (a tree that expands as you type, giving you shortcuts to jump to or prune branches - see the sketch below). I think that means you've missed the point.
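For the flavour of it, here's a minimal trie-based completer in Python - purely a sketch I'm making up to illustrate branches expanding and pruning as you type, not any existing tool:

```python
class TrieNode:
    """One branch point in the completion tree."""
    def __init__(self):
        self.children = {}   # next character -> TrieNode
        self.is_word = False

class Completer:
    """Each keystroke prunes dead branches; the survivors expand."""
    def __init__(self, words):
        self.root = TrieNode()
        for word in words:
            node = self.root
            for ch in word:
                node = node.children.setdefault(ch, TrieNode())
            node.is_word = True

    def matches(self, prefix):
        node = self.root
        for ch in prefix:               # walk down, pruning as we go
            if ch not in node.children:
                return []               # dead branch - no matches left
            node = node.children[ch]
        found = []
        def expand(n, suffix):          # expand the surviving subtree
            if n.is_word:
                found.append(prefix + suffix)
            for ch, child in sorted(n.children.items()):
                expand(child, suffix + ch)
        expand(node, "")
        return found

c = Completer(["ls", "ln", "locate", "less", "lesspipe"])
print(c.matches("le"))   # ['less', 'lesspipe']
```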

The point is you're still thinking "text and only text" as the output from any command. Text-based key completion (tab, arrow keys, etc.) is terribly old these days. I think you could find something like that on the old text-based Lotus 1-2-3, if not in early word processors' spelling checkers. "Modern" isn't a two-decade-old technology.

A lot of "words" on a command line have an implied object, the obvious example being the file names printed by "ls" - each file name implies there's a file. You can run "ls" again with a "-l" to see more attributes of the file, like size and permissions. You can use "more" to view the contents. And so on. By contrast, a GUI file manager shows you a representation of the file that you can manipulate, to list size and attributes, click to open, rename, and so on. The file appears on screen once, in a human readable form, you don't have to open a new view every time you want to see an attribute like you do with "ls".

So imagine a system where you could type "ls some_app/data" and get a huge list of files, but then decide to "select age > this month" to highlight only older files, then add more selects for more criteria, sort by size, etc. Say you find the file you want and want to view it, but don't know the name of the viewer command installed on this system (or whether it has one) - you can click on the file to bring up a menu and select "view" to see your options.

To do something like that now, you'd have to do your "ls" in a CLI, then open up a GUI file manager on the same directory to click on it. The question is, why can't "ls" output complete file objects to your window, instead of just one limited form (7-bit ASCII) of one single attribute (the name) of those objects? They don't need to look much different until you start clicking on them; you could keep doing things the CLI way until you need something more.
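To make that concrete, here's a rough sketch in Python of what an object-producing "ls" might look like - every name here ("FileObject", "object_ls", the "select" chain) is hypothetical, just to show filtering objects instead of parsing text:

```python
import os
import stat
import time

class FileObject:
    """What "ls" would emit instead of a bare name: an object carrying
    the file's attributes, ready to be filtered, sorted, or clicked."""
    def __init__(self, path):
        st = os.stat(path)
        self.path = path
        self.name = os.path.basename(path)
        self.size = st.st_size
        self.mtime = st.st_mtime
        self.mode = stat.filemode(st.st_mode)

class Listing:
    """The output of the hypothetical object-ls: a selectable,
    sortable collection instead of a wall of text."""
    def __init__(self, files):
        self.files = list(files)

    def select(self, predicate):
        return Listing(f for f in self.files if predicate(f))

    def sort_by(self, key):
        return Listing(sorted(self.files, key=key))

def object_ls(directory):
    return Listing(FileObject(os.path.join(directory, name))
                   for name in os.listdir(directory))

# "ls some_app/data", then narrow down: older than a month, sorted by size.
month_ago = time.time() - 30 * 24 * 3600
old_by_size = (object_ls(".")
               .select(lambda f: f.mtime < month_ago)
               .sort_by(lambda f: f.size))
for f in old_by_size.files:
    print(f.mode, f.size, f.name)
```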

I hope that makes it clearer why tab completion of text is insignificant compared to what could be done with CLI/GUI integration. That's one example; you should be able to imagine others (a revision control system, system stats, debugging a crashed program).

Comment A command explodes into objects (Score 1) 123

I think there's still a disconnect between GUI and CLI at a more fundamental level - people think of CLI as meaning text and only text, and GUI as only graphics (despite labels, fields, etc. being textual). Most (or every, if possible) UI item should be interactable (is that a word?) by keyboard or GUI, but for an example I'll start with a command line: when you run a command, it should create one or more interactable objects as its output. In a lot of cases (say, "cp" or "rm"), it could be an exit code that just shows up as a widget next to the prompt on the next line. If you want to know more, you can click on it to get execution details like execution time or whatever - normally stuff you're not interested in, so it stays out of your way. If something went wrong, the object displays an error message, with widgets for diagnostics - anything from a stack trace or the signal received to rerunning the command with a debugger attached.
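A minimal sketch of that exit-code-widget idea in Python - the class and method names are my invention, and a real shell would render these as clickable widgets rather than printed strings:

```python
import subprocess
import time

class CommandResult:
    """The object a command leaves behind: a status widget by the
    prompt, with details (timing, stderr) hidden until you ask."""
    def __init__(self, argv):
        start = time.time()
        self.proc = subprocess.run(argv, capture_output=True, text=True)
        self.elapsed = time.time() - start
        self.argv = argv

    def widget(self):
        # What you'd see next to the prompt: a tick or a cross.
        return "[ok]" if self.proc.returncode == 0 else "[FAILED]"

    def details(self):
        # What clicking on the widget would expand into.
        return {"command": self.argv,
                "exit code": self.proc.returncode,
                "ran for": "%.3fs" % self.elapsed,
                "stderr": self.proc.stderr}

r = CommandResult(["cp", "/no/such/file", "/tmp"])
print(r.widget())    # [FAILED]
print(r.details())   # "click" to see why
```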

A lot of commands would produce output objects. A "mkdir"-like command would create a folder icon you could click on to open, move, rename, etc. "ls" could create an explosion of objects in your terminal window that you could manipulate just as if you had clicked on a folder or selected files to view separately.

You might not scroll back through your output so much as flip back to previous window states, like the "Time Machine" interface on Mac OS X. In each case, you could modify and re-run your command, and it would fork into another tree of results. You could navigate these result trees until they expire, like web pages.

As for all the complicated options that some commands have, something like the <tab> key would create a command chooser (all commands matching the first letters you typed) that you could arrow through or click on. Once the command was selected, another <tab> could have the command create an option configurator (as Windows PowerShell does).
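You can fake a crude version of the command chooser with today's plumbing; here's a sketch using Python's readline module (the command table is made up, and the option-configurator step is only hinted at):

```python
import readline

# Hypothetical command set: each command has its own options.
COMMANDS = {
    "copy":   ["--recursive", "--verbose", "--force"],
    "commit": ["--message", "--amend"],
    "config": ["--list", "--edit"],
}

def completer(text, state):
    """First <tab>: choose among commands matching what's typed so far.
    Once a command is chosen, offer its options instead."""
    words = readline.get_line_buffer().split()
    if words and words[0] in COMMANDS:
        options = [o for o in COMMANDS[words[0]] if o.startswith(text)]
    else:
        options = [c for c in COMMANDS if c.startswith(text)]
    return options[state] if state < len(options) else None

readline.set_completer(completer)
readline.parse_and_bind("tab: complete")

while True:
    line = input("demo> ")   # type "co<tab>" to see the chooser
    if line in ("quit", "exit"):
        break
```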

And those are just some initial thoughts. Smarter people can probably come up with genuinely good ideas. Sadly, I've seen little of this even tried.

Comment Re:What we need is a new DNS system (Score 1) 449

I described above the idea of including a DNS hostname in a normal host string. I like that because it only needs client-library changes, DNS itself can stay the same, and those setting up alternative DNS hosts only need to include a few censored names, not the entire DNS database. If I had a few days, I'd whip up a demo myself, but that's not going to happen for a while.

Comment Make DNS recursive (Score 1) 449

I haven't had time to try this, but there's no reason not to include a DNS host in the hostname, to use as a resolver. An example to explain: imagine "oppressed-group.org" is blocked, but "freeworld.net" hosts a DNS server with a list of blocked domain names (just some, it doesn't have to be the entire DNS database). You could specify "oppressed-group.org(freeworld.net)" (or give the IP address of the DNS server). It could be chained with as many additional DNS servers as it takes (as in "host(dns2)(dns1)").

In the end, the servers see everything normally and the root DNS and other servers are unchanged; the only change is in the client code that does the lookup.

Alternative syntax could go in the other direction, using "/" or "!" (bring back bang paths!), looking like "freeworld.net!oppressed-group.org".
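For illustration, a sketch of that client-side change in Python, using the dnspython package for the directed queries. The chain semantics here are my reading of the proposal: the rightmost server is resolved first via the normal resolver, and each server then resolves the next name inward:

```python
import re
import socket

import dns.resolver  # dnspython - assumed installed (pip install dnspython)

def parse_chained_host(spec):
    """Split "host(dns2)(dns1)" into ("host", ["dns1", "dns2"]) -
    resolvers listed in the order they should be contacted."""
    m = re.match(r"^([^()]+)((?:\([^()]+\))*)$", spec)
    if not m:
        raise ValueError("bad host spec: " + spec)
    resolvers = re.findall(r"\(([^()]+)\)", m.group(2))
    resolvers.reverse()          # rightmost server is queried first
    return m.group(1), resolvers

def lookup(name, nameserver=None):
    """Resolve a name, optionally via one specific DNS server."""
    if nameserver is None:
        return socket.gethostbyname(name)   # normal system resolver
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver]
    return r.resolve(name, "A")[0].to_text()

def chained_lookup(spec):
    host, chain = parse_chained_host(spec)
    server = None
    for name in chain:           # each server resolves the next one
        server = lookup(name, server)
    return lookup(host, server)  # final lookup via the last server

print(chained_lookup("oppressed-group.org(freeworld.net)"))
```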

Comment Re:IBM? Huh? (Score 1) 99

IBM generally sells business solutions and technology - the latter sometimes as patent licenses, sometimes by developing products and selling them off. For example, the popular "swiping" method of keyboard entry on smartphones came from "ShapeWriter" (previously SHARK), an IBM product they sold to another company to commercialise.

I'm sure there are a lot of assets in WebOS that could be developed. For example, what if you go a step beyond Apple's Siri and integrate a smartphone interface with the deep AI of IBM's Watson Jeopardy champion (currently being commercialised for optimising medical treatment in the health insurance industry)? Sell or license that to application developers for everything from intelligent tourist guides to on-site first-aid agents, who sell their wares on the Android, iTunes, Blackberry or Microsoft app stores.

Lots of possibilities for a firm with cutting-edge research and development. Like HP used to be, once.

Comment Ignorant article (Score 5, Interesting) 128

Contrary to the article's claims, Sun had out-of-order SPARCs for years. Sun had a two-pronged strategy: one line aimed at single-thread performance (the UltraSPARC series), the other at multithreaded performance (the T series). The UltraSPARCs were never really that good, so they were eventually dropped in favour of the Fujitsu SPARC64 series, and their replacement (code-named "Rock") was dropped by Oracle because progress seemed stalled forever - but they did indeed have out-of-order execution and register renaming, and "Rock" had a promising "pre-execution" thread that was supposed to alert cache controllers ahead of time to pre-fetch data that couldn't be statically predicted, dropping cache misses to near zero.

The purpose of the multithreaded processor was to support mainly I/O-bound tasks, and lots of them - web servers are like this, though more so in the past, when web content was more static. In those systems, a T-series SPARC system noticeably outperformed similarly priced competition (with similar reliability - you could get a lot cheaper if you didn't care about component quality).

The single-thread improvements in the T series are being added because even I/O-bound systems often have compute-bound tasks. In particular, the T4 lets you assign one high-priority thread that gets to hog CPU resources, in addition to out-of-order execution and other techniques that all threads benefit from, so I/O-bound threads don't get hung up waiting for a single CPU-bound task to finish.

Comment Forest for the trees (Score 1) 1027

I suspect a lot of people don't think about that expression, "can't see the forest because of all the trees in the way". But that seems to be life for the vast majority of people. When it comes down to it, the way almost everything works is wrong, somehow. But it's like a local minimum: almost nobody seems to look past a few small tweaks and adjustments to see the global minimum.

Related, but separate: something that bothers me about most technically minded developers or engineers trying to design things is that almost none of them bother finishing the race. When designing the interface for something, they will develop it until it's possible to accomplish a task, but usually no further. A classic example is digital clocks - how to adjust the time on them. Even now, the interface for most sucks into the negative digits, but the earlier ones were painfully stupid: hold the "time" button, then either "fast" or "slow" as the clock advances - past the time you want to set. Repeat this for another 12 or 24 hour cycle and miss again. Repeat until frustrated. Newsflash, designers: you can add buttons to go backwards! Yes, you need to redesign the chip to subtract, but you only do that once. Millions of people have to set the time over and over.

And why doesn't every clock just have a "DST ON/OFF" switch? Is that really so impossible?

I see this over and over again, from things like Java libraries, Unix networking, Windows (classic: press "Start" to stop the computer - and why are there seven options, and why should I care which does what?), the damn "smart" photocopier (I needed to copy a slip of paper onto a page; the copier rotated it and cut off the bottom. I rotated the slip to match the new orientation; the copier rotated it and again cut off the bottom. Should I try diagonal?) - it's like technical types get far enough to see the finish line and say "theoretically we can finish the race, so let's just stop here".

This is the thing that separates a genuinely innovative product from the others: actually getting to the destination. GUI word processors let you see the format, rather than imagine how special codes would eventually make it look. iPods couldn't do as much as the competition, but the competition only made it possible to do more; iPods made it easy. Nintendo Wii games didn't require reading an instruction manual. A Roomba vacuum just had to be plugged in and turned on.

The common aspect of these things is designing for the end use, not the implementation. It's more work, but it's work that has to be done only once. The end user who has to figure out what the hell you were thinking has to do it every single time (until they get used to, say, a list of keystrokes or menu choices written on a post-it that they don't understand). If the user has to enter an email address, help out - have separate username and host fields, with an "@" label between them, rather than trying to parse it for validity after it's entered and just saying "keep trying until you get the format right". Basically, anywhere there are user instructions that give a list of steps to perform, make the computer do them - computers are better at following instructions!
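As a toy illustration of the email-field idea, a few lines of Python/Tkinter - two fields with a fixed "@" label, so the format rule is built into the form instead of policed afterwards:

```python
import tkinter as tk

# The "@" is part of the form, not something the user can get wrong.
root = tk.Tk()
root.title("Email entry (sketch)")

user = tk.Entry(root, width=15)
at = tk.Label(root, text="@")
host = tk.Entry(root, width=20)

user.grid(row=0, column=0)
at.grid(row=0, column=1)
host.grid(row=0, column=2)

def submit():
    # The address is assembled by the program - no format errors possible.
    print("address:", user.get() + "@" + host.get())

tk.Button(root, text="OK", command=submit).grid(row=0, column=3)
root.mainloop()
```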

This, fundamentally, is why almost all smartphones sucked before the iPhone - every possible operation needed a magic invocation (or a long menu path) that you had to memorize. Apple's designers (not one single person, but an entire design team) broke down what was needed, got rid of all exposed implementation, and put the effort into making it just work. I have no idea how complex the technology is that measures where I tap on an iPod keyboard and guesses which of the four keys my finger overlaps is the one I'm trying to press, but it gets it right, so I don't care - it should just work, and it does, so I'm happy.

When it comes down to it, whether it's a dictatorship or not isn't important. But without some strong insistence on finishing the job, most technical developers won't bother. And even then, they can come up with some amazingly hare-brained designs (search for design failure/hall of shame sites). A dictatorship is a way to avoid them, at least.

Comment Re:HP PA-RISC and Itanium (Score 1) 514

[...] since the compilers necessary to perform the required dynamic analysis would have to cover just about every possible scenario

Just to clarify, it's not the compilers (though those are important too), it's the runtime that does the optimisation. The problem is the overhead required. The end result may crunch numbers faster, but it may slow down unexpectedly once execution has been profiled long enough to reveal a hot spot and the optimiser kicks in. It's a little like the problem garbage collection has: in the end, GC is faster than manual memory management (with few exceptions), but the pauses when it kicks in are annoying.

Multiple cores help - the optimiser can work in the background, with no slowdown and only imperceptible pauses when the binary blocks are replaced. But it's still a trade-off because you might have a better use for that second core.

Anyway, dynamic optimisation optimises how the program actually executes; it doesn't have to predict anything. The main advantages are that it can optimise in ways hardware can't (e.g. remove dead code entirely), and it can be updated with a software upgrade.
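Here's a toy model of that in Python - hypothetical names throughout, and of course a real runtime patches machine code rather than swapping Python callables, but it shows the profile-then-replace cycle running on another thread:

```python
import threading

HOT_THRESHOLD = 1000  # calls before optimising is worth it (arbitrary)

class HotSpotDispatcher:
    """Toy dynamic optimiser: count calls, and once a function is hot,
    build a faster version in the background and swap it in."""

    def __init__(self, slow_fn, make_fast_fn):
        self.fn = slow_fn
        self.make_fast_fn = make_fast_fn  # the "optimiser"
        self.calls = 0
        self.optimising = False

    def __call__(self, *args):
        self.calls += 1
        if self.calls == HOT_THRESHOLD and not self.optimising:
            self.optimising = True
            # Optimise on another core so the program doesn't pause.
            threading.Thread(target=self._optimise).start()
        return self.fn(*args)

    def _optimise(self):
        fast = self.make_fast_fn()  # e.g. recompile with profile data
        self.fn = fast              # swap in place; callers never block

# Example: a naive implementation replaced by a direct one once hot.
def slow_square(x):
    return sum(x for _ in range(x))

dispatch = HotSpotDispatcher(slow_square, lambda: (lambda x: x * x))
for _ in range(2000):
    dispatch(7)   # first ~1000 calls run slow_square, the rest run x*x
```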

Oh, by CPU independence, I meant that the same code can run without caring what it runs on, like Java. Previously, Apple and NeXT simply included binaries for more than one CPU, but LLVM has the potential to perform the final compile step either at installation or at runtime, as well as dynamic optimisation while running.

Comment Re:HP PA-RISC and Itanium (Score 1) 514

In case you're still reading this thread...

For compatibility, I think they had an emulator to run PA-RISC programs on Itanium. One of the incentives for starting an incompatible design was a project called Dynamo, which was like a Java JIT: an interpreter that analyses a program as it's running and can optimise the most-used parts. One surprise was that they could speed up a program by translating from PA-RISC to PA-RISC - the runtime optimisations in software were better than the hardware's (at the time).

A similar strategy was followed by Transmeta, interpreting 80x86 programs on custom VLIW CPUs. It failed partly due to Intel manipulation (basically giving a much better price to customers who never used a competing product), but partly from inconsistent performance - that matters less on a server, but more on a laptop, which was Transmeta's focus. Java is used mostly on servers, so JIT compilation and optimisation work well there.

I think continued PA-RISC development could eventually have matched Itanium performance (had Itanium not been delayed), but at the time it wasn't a sure thing. The real question was whether to put the dynamic optimisation in software or hardware, and a number of research projects hinted that software might be capable of far more than hardware (in addition to HP's Dynamo, IBM's DAISY project did much the same thing, as did Java JITs and Transmeta, and the Macintosh had just switched from 680x0 to PowerPC using emulation). Many people thought instruction sets would become irrelevant (Transmeta released several CPUs, all mutually incompatible - no backwards-compatibility barriers to newer, faster designs).

The idea is not dead, though. Look up the Low Level Virtual Machine, or LLVM, which is used by Apple. One example is in the Macintosh OpenGL stack - rather than including a lot of branches to test for various settings, the LLVM optimiser basically strips all of those out when the settings don't change (or are handled by hardware), leaving behind simpler code for displaying windows and other graphics. Probably other things too, but that's all that's been publicly announced. I suspect Apple is aiming for CPU independence eventually.
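A toy analogy in Python of that kind of specialisation (the real thing happens on LLVM IR, not Python): resolve the settings once, up front, so the per-pixel code carries no branches:

```python
def make_blit(swap_channels, scale):
    """Return a per-pixel routine specialised for fixed settings.
    The branches are taken once, here, instead of per pixel - the
    same effect as stripping constant branches out of the hot path."""
    if swap_channels and scale != 1.0:
        return lambda px: tuple(int(c * scale) for c in reversed(px))
    if swap_channels:
        return lambda px: tuple(reversed(px))
    if scale != 1.0:
        return lambda px: tuple(int(c * scale) for c in px)
    return lambda px: px

# Settings are checked once; the pixel loop then runs branch-free code.
blit = make_blit(swap_channels=True, scale=0.5)
print(blit((255, 128, 0)))   # (0, 64, 127)
```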

Comment HP PA-RISC and Itanium (Score 1) 514

"Itanium" was originally HP's internal replacement for PA-RISC, to leapfrog the next generation of RISC processors in performance. The deal with Intel was intended to split development cost, so the competition couldn't keep up (HP already stopped making expensive fabs, hiring Intel to make PA-RISC). Management changes in both companies led to Itanium being handled by people who didn't understand the original strategy, or the technology, so it was "redesigned to death" until competitors caught up, and it was finally released.

It was a gamble by HP that static program analysis with simpler circuitry would be faster than dynamic analysis, in the same way that simpler RISC outperformed CISC, but it didn't pay off. It turned out that dynamic analysis became such a small percentage of the transistor count that it no longer mattered. RISC processors create their own VLIW-style instruction bundles internally (at least the last Alpha and later POWERs did), and CISC processors can translate code on the fly internally with almost no speed penalty. I think even the most recent Itaniums ignore the static information and regroup instructions dynamically for better performance.

Still, Itanium is among the top performers for number crunching, and had it kept to its original plan, it probably would have been a leader for at least a few years, which would have been great for HP. As it is, the main accomplishment was strategic - convincing most competitors to stop developing their high-end CPUs (Alpha, MIPS).

Comment Max Headroom (Score 1) 157

When I saw this done on Max Headroom, I was skeptical that it could work. Not because a regular news camera had an "infra-red" mode - I expected that could happen (and some do, just not sensitive enough to pick up heat yet) - but because I thought the keys would cool down too fast. Good to know how scientifically accurate a show about a simulated human infecting the world's computer networks was.

Comment Fewer products, larger production (Score 1) 350

One thing that Apple does is not waste time on "lower end" products. Every "stripped down" version of a product that companies typically make has engineering, marketing, logistical and other costs. Apple typically won't produce a product that has any overlap - there is no "iPod Touch Lite" between the Touch and the Nano (say, missing a camera or WiFi, or with less functional apps... memory size aside - that's a legitimate consumer preference), because the Touch would do everything the "lite" version does (a Nano is physically different enough that it has different uses than a Touch), so why bother?

That means that rather than a dozen different screens or boards or cases for a dozen products, Apple needs one. That means it purchases a dozen times more of what it does buy, and that leads to economies of scale, allowing it to make these gambles successfully more often.

Comment Apple Lisa (Score 1) 350

Like others have said, nobody stops any competitor from doing the same, but none have so far, for it is an immense gamble. Apple could also fail and buy out the full capacity of some widget that ends up in a landfill somewhere, but that hasn't happened so far either.

Look up the fate of the Apple "Twiggy" drives for the old Lisa.

The most important thing that Apple (as a company) does is pay attention to detail at every level. Back when Jobs was in charge of the initial Macintosh, he even complained if the motherboard looked messy. Most companies will cover up shoddy work with a shiny case or flashy interface, cheaping out on everything not visible to the end customer. I saw an Apple Store in Chicago; it had a bus stop out front with one of those scrolling billboards, and every ad on that scroll was an Apple ad.

Jobs has always been fanatical that everything must be done well, from sweeping the floors to the design of the head office building. Apple is not unique in that - it's pretty typical of Japanese companies like Toyota or Sony - but Apple takes it a little further. When everything looks good from the inside out, mistakes really stand out.
