Comment Re:Do you heat your house? (Score 1) 328

Yeah, I know that; I just didn't think what I was saying came across as unclear or wrong. If it did to you, then I apologize for the confusion. I studied physics through high school and college, and while I don't work with any of this professionally, I've had some exposure to heat pumps before and have had to explain to others how they can even work in the first place.

Regrettably, I got mod points shortly after I posted; otherwise I would have skipped commenting so that I could mod up your further clarification instead.

Comment Re:Do you heat your house? (Score 5, Insightful) 328

Perhaps the "pump" part of heat pump completely eluded you, since they do not defy the first law of thermodynamics as you seem to be implying.

Heat pumps work by having an external reservoir to pump heat from or into. Most of the ones I know of are geothermal, which work because the ground they exchange heat with stays at a roughly constant temperature year-round. So during the summer that source is cooler than the air above ground, and during the winter it's warmer. They extract heat from that reservoir rather than producing it themselves, which is why they can deliver more heat to a house than the electrical energy they consume.
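
Here's a rough back-of-the-envelope sketch of where the "more heat out than electricity in" comes from, using the ideal (Carnot) coefficient of performance; the temperatures below are made-up examples of mine, not anything from the article:

    # Ideal (Carnot) coefficient of performance for heating:
    #   COP_heating = T_hot / (T_hot - T_cold), temperatures in kelvin.
    # The example temperatures are assumptions for illustration only.

    def carnot_cop_heating(t_hot_c: float, t_cold_c: float) -> float:
        """Upper bound on heat delivered per unit of work input."""
        t_hot = t_hot_c + 273.15    # indoor supply temperature, K
        t_cold = t_cold_c + 273.15  # ground-loop temperature, K
        return t_hot / (t_hot - t_cold)

    # Ground loop at ~10 C, indoor supply at ~35 C:
    print(carnot_cop_heating(35.0, 10.0))  # ~12.3 ideal; real geothermal units manage more like 3-5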

So in that respect, they work much like the fan in your computer, since the air inside the case is much hotter when the machine is running than the air in the room. The fan can displace the heat generated inside quite efficiently just by pushing the hotter air out and drawing cooler room air in, and it does so without needing anywhere near as much energy as the equipment it's cooling; like the grandparent said, the fans use far less electricity than the computer itself. If that weren't the case, it would make more sense to seal computer cases completely, since the cooling benefit of the fans wouldn't make up for the dust they drag into the case during operation.
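
And to put a rough number on the fan analogy (my own figures, just for illustration), the heat an airflow carries away is roughly density times flow rate times specific heat times temperature difference:

    # Rough estimate of heat removed by case airflow: Q = rho * V * c_p * dT.
    # The flow rate, temperature rise, and fan power are assumed example values.

    RHO_AIR = 1.2    # kg/m^3 at room temperature
    CP_AIR = 1005.0  # J/(kg*K)

    def heat_removed_watts(flow_cfm: float, delta_t_c: float) -> float:
        flow_m3s = flow_cfm * 0.000471947  # 1 CFM ~= 4.72e-4 m^3/s
        return RHO_AIR * flow_m3s * CP_AIR * delta_t_c

    # A single ~2 W case fan pushing ~50 CFM with a 10 C case-to-room difference:
    print(heat_removed_watts(50, 10))  # ~285 W of heat moved for ~2 W of fan power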

So the next time you're tempted to call bullshit on a well-known physics principle, double-check that you're not the one making some stupid mistake, or you'll end up looking rather foolish again when someone else points out that you don't know what you're talking about.

Comment Re:The PC is dying claims are made every few years (Score 2) 291

The same could have been said of Palm Pilots and Blackberries over ten years ago. And yet, here we are: PDAs are dead and Blackberries are irrelevant. Not because they were terrible ideas, but because technology advanced past them, and they ended up as complementary devices rather than replacements for a desktop or laptop.

I also think that declaring the PC dead is far too much of an overstatement, and I see some echoes of the past here. Tablets and phones are good for one thing, and one thing only: consuming content. Try to do anything outside that sphere, and they come across as rather clumsy devices to use.

And I have a hard time seeing them ever bridge that gap, since the problem is inherent in the input method of choice. Much like speech recognition software hasn't replaced the keyboard, touch is not a replacement for the old mouse and keyboard. By its very nature, touch is simply a less precise input method.
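
To put rough numbers on "less precise" (my own illustration using the standard Fitts' law formulation; the target sizes are assumptions, not anything from this thread):

    # Fitts' law index of difficulty (Shannon formulation):
    #   ID = log2(D / W + 1), where D = distance to target, W = target width.
    # Target widths below are assumptions: ~9 mm for a fingertip-friendly
    # target versus a few millimetres for a mouse-sized widget.
    import math

    def index_of_difficulty(distance_mm: float, width_mm: float) -> float:
        return math.log2(distance_mm / width_mm + 1)

    d = 100.0  # same 10 cm reach for both pointers
    print(index_of_difficulty(d, 9.0))  # finger-sized target: ~3.6 bits
    print(index_of_difficulty(d, 3.0))  # mouse-sized target:  ~5.1 bits

In other words, to keep touch targets reliably hittable you have to make them bigger, which is exactly why a touch UI can't pack as much onto a screen as a mouse-driven one.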

That doesn't mean I'm a Luddite about touch devices; they do indeed have their place. But they aren't the be-all and end-all, and unless we move to Minority Report-like interfaces to make up for the loss in precision (which, BTW, would be completely impractical for long-term use because of the ergonomics involved), there will continue to be a need for the input paradigms we have now.

But that doesn't mean the desktop won't change because of touch. Gestures might very well become integrated with the desktop without too many problems; Opera was a pioneer here in some respects. However, touch is not going to replace the need for finer-grained controls. What such an input device would look like, I dunno. It could be much like a laptop trackpad, a touch area integrated into the keyboard, or something else entirely. But whatever it ends up looking like, I'm just not seeing the killer advancement that enriches or supplements the desktop... yet.

I think the main problem here is that many people involved in HCI prefer revolutions to incremental improvement, and then label anyone who doesn't want to jump on their latest bandwagon as technophobic, even when the new system they propose either can't or won't cover all of the use cases they think it will.

I get it. Developing for older systems can be boring (although I'm a rather strange one and actually love being on the incremental edge), and continually delving into unexplored areas is much more exciting. However, computing has never worked that way: every advancement has been an increment and refinement of older ideas, building on the work of previous generations of tech rather than trying to replace it entirely.

So in that respect, I fully expect touchscreen devices to end up in the same position as your Palm Pilots and Blackberries in ten years' time. They will have failed to live up to the hype, and they will be relegated to being mostly entertainment devices, possibly replacing TVs, gaming devices, eReaders, and so on. Heck, we're actually starting to see that now.

But they won't replace the need for an actual computer, much as Blackberries, iPods, and Palm Pilots didn't supplant it either. The input is just too clumsy for that, and there's really nothing that technology or software can do to change it. That doesn't mean I think the desktop needs to remain unaffected, or that it won't change as well because of touch; it's just that the change is going to be incremental rather than revolutionary.

Touch won't kill the need for a mouse, but something else that brings together the best of both forms of input just might. And that replacement will likely be just as inappropriate for a tablet as a tablet's interface is for the desktop, even if it takes a lot of hints from tablet paradigms.

Comment Re:C is the epitome of a programming language. (Score 1) 460

> And in this day and age, the fact that we're still typing to program computers just seems silly. There is no natural law that says computer code has to be in the form of ASCII text at some point. Why not visual directly to machine code? I don't see any reason why it can't be done.

You mean like Piet?

But seriously, there's no technology out there that can really beat text input at the moment for general-purpose programming. CLI interfaces, while rather demanding in terms of upfront knowledge, are about as powerful as you can get. Everything that tries to abstract away from that is going to take some power away from the programmer using it.

That doesn't mean we haven't come a long way with visual editors, from Visual Basic to Qt Designer or Alice, but they're never going to be a replacement for all development. There will always be a need to keep developing and tweaking algorithms as we come to understand them and their interactions better, and that requires a lower level of interaction. The farther you abstract away from the core, the more important it becomes that the intermediate layers perform well, and the less flexibility you're given to tweak how it all works.

I know there are a lot of people out there who would like computers to instantly understand what they want them to do, no matter how irrational that may be. However, we're never going to get away from well-performing code needing someone behind it who can actually think rationally, logically, and procedurally. That doesn't mean we haven't already come a long way: people can now throw some crap together in a few minutes to do a specific job at hand. Just don't expect that crap to be production quality, or maintainable for that matter. You'll still need someone who knows what they're doing for that.

Comment Re:Stupid jargon (Score 2, Interesting) 68

They're not the same thing.

UI is user interface. That can be a CLI (command line interface), a GUI, a touchscreen, or really any way you can think of to interact with a computer. As such, conflating it with a GUI, a graphical user interface, narrows things down too much, since UI is much more general by definition. Each input method is good at different things and bad at others, and that has to be taken into consideration when designing.

For instance, a CLI is almost always going to be the most powerful input method, although it suffers from low discoverability, since you need to learn some basic commands before you can become proficient with it. The best CLIs are the ones where you can chain commands together indefinitely and even string them into their own programming interface, so that you can set up a whole batch of jobs with a few keystrokes. Heck, I'd even classify voice command interfaces like Dragon NaturallySpeaking or even Siri as CLIs, since while they don't involve keyboards, they have the same strengths and weaknesses as user interfaces (although voice could be seen as a fuzzier input method, much like touchscreens are to GUIs, since you lose a bit of precision when the recognition software has to figure out what you intend to say).
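
As a small sketch of the kind of chaining I mean (my own example, scripted from Python rather than a shell, and the particular commands are just placeholders):

    # Sketch: chaining commands the way a shell pipeline does, driven from a
    # script so the whole batch can be set up and reused in one go.
    # Roughly equivalent to: ps aux | grep <pattern> | wc -l
    # (and, as with the shell version, the grep may match itself).
    import subprocess

    def count_matching_processes(pattern: str) -> int:
        ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
        grep = subprocess.Popen(["grep", pattern], stdin=ps.stdout,
                                stdout=subprocess.PIPE)
        ps.stdout.close()  # let ps receive SIGPIPE if grep exits early
        out, _ = grep.communicate()
        return len(out.splitlines())

    print(count_matching_processes("python"))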

UX, on the other hand, stands for user experience, which is a different concept entirely. UI only designates how someone interacts with a computer, while UX is about whether that interface is optimal for the task at hand, and whether the interaction is consistent across the interface. So in essence, the UI designates the what, while the UX is the how.

For instance, let's focus on a touchscreen interface, which is one GUI implementation, and compare it to mouse input. For starters, a touchscreen is never going to be a precision interaction method: you might be able to increase the screen size, but you'll never match a mouse without lowering the DPI of the screen drastically, which makes interaction clumsier. Likewise, a mouse is confined to a single pointer, while a touchscreen can take multiple inputs simultaneously, so the mouse will never quite match a touchscreen on that front. So while they both are graphical user interfaces, they do not share the same user experience. That's part of the reason you hear complaints from people who don't like having to use one on a desktop: forcing one UI onto both input methods means that, in order not to completely suck on one of them, it has to make compromises on the other.
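
To make the precision point concrete (my own numbers; the roughly 9 mm minimum touch-target size is a common design rule of thumb, not something from this thread):

    # How many pixels a comfortable touch target needs at a given screen
    # density, versus the single-pixel precision a mouse pointer can address.
    # The 9 mm target size is an assumed rule of thumb, not a hard spec.

    MM_PER_INCH = 25.4

    def touch_target_px(dpi: float, target_mm: float = 9.0) -> float:
        return dpi * target_mm / MM_PER_INCH

    for dpi in (96, 160, 300):
        print(dpi, "dpi ->", round(touch_target_px(dpi)), "px per touch target")
    # 96 dpi -> 34 px, 160 dpi -> 57 px, 300 dpi -> 106 px, while a mouse can
    # reliably hit a handful of pixels at any of those densities.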

Of course, some people seem to believe that designing for the fuzzier interface while providing single-input ways of doing tasks will automatically make it optimal for both (I'm looking at you, GNOME 3 and Windows 8), but this is sheer lunacy. Much like a CLI is not the most optimal choice for every case (e.g. graphical manipulation), despite being the more powerful alternative, a touchscreen is not going to be a replacement for the old tried-and-true mouse and keyboard, which let you cram and browse through more information on one screen than a touch interface can, since touch can't handle as much precision as the mouse and needs to be fuzzier by default in order to be useful.

Perhaps you don't care about any of this, and it does at least appear that you aren't in the industry, since you dismissed all of this as different names for the same thing. But at least you've now had a brief HCI (human-computer interaction) 101, and you can't claim ignorance of these terms as a defense any more. Surely you likewise wouldn't say, "CPU? GPU? RAM? Why do we need so many names for the same damn thing? It's not like we're using desktops anymore, so what difference does the C or G make?"

Comment Re:Towers of Hanoi? (Score 1) 260

The best one for years has been the bound that Bill Gates devised, which requires at most (5n+5)/3 flips for a stack of n elements, but the new best solution is only about 2% better than his.
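
For reference, a quick arithmetic check (assuming the commonly cited bounds: (5n+5)/3 from Gates and Papadimitriou's paper, and 18n/11 for the newer result):

    # Compare the Gates-Papadimitriou upper bound on pancake flips, (5n + 5) / 3,
    # with the newer 18n / 11 bound. Both bounds are assumptions taken from the
    # published results, not from the comment itself.

    def gates_bound(n: int) -> float:
        return (5 * n + 5) / 3

    def newer_bound(n: int) -> float:
        return 18 * n / 11

    n = 1000
    improvement = 1 - newer_bound(n) / gates_bound(n)
    print(f"old: {gates_bound(n):.0f} flips, new: {newer_bound(n):.0f} flips, "
          f"improvement: {improvement:.1%}")  # roughly a 2% improvement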

And funnily enough, that's really the only known contribution Bill Gates has made to the field of computer science, aside from a bit of programming on Windows 1.0 and the versions of DOS that came before it. After that, he didn't do anything more, or at least didn't acknowledge it if he did, since he became more occupied with managing Microsoft than with coding.
