
Comment Re:Price Adjustment (Score 1) 330

But enterprise business is going to care about the out-of-band management, because at least the business I work for is looking to standardize on vPro hardware for the massive savings in power management and standardized remote control that is based in hardware, rather than an agent that can break, running on an OS that can break.

I have to admit that vPro feels more like a line item than a feature. By that, I mean that I've never encountered anything that's leveraged vPro to make my life easier as a SysAdmin. Now I've got a machine with vPro built in and I haven't the slightest clue what I could do to at least play with it.... I should get out more :D

vPro was ironically one of the features of this Helix that inched me closer to deciding to purchase it, though it wasn't vPro explicitly. Someone on the Xen-Users mailing list made a note that every machine he'd looked at recently that had VT-d capability also had vPro. The Helix's marketing materials certainly made a strong point about vPro capability, and a phone call to Lenovo helped me dig up the proper technical documentation to determine that the Helix does have VT-d support in its BIOS, Chipset, and Processor.

using vPro to remote control the bluescreen'd PC while the OS is halted, reboot it and go into the BIOS, and change the setting. All remotely, from 1000 miles away.

...really? Even on a Wi-Fi-only machine like the Helix? That's.... wow that's useful. I want that kind of stuff on my own machines... especially the fleet of immediate-family-owned computers that are more trouble to support than any enterprise machine I've been paid to lay my hands on :P
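For anyone wanting to play with it: the AMT engine behind vPro can be driven from the command line. A rough sketch, assuming the open-source amtterm package and an AMT-provisioned machine (the hostname here is made up):

```
# amttool reads the AMT admin password from the AMT_PASSWORD environment variable
amttool amt-box.example.com info        # query power state, AMT version
amttool amt-box.example.com powercycle  # hard power-cycle, even if the OS is hung
amtterm amt-box.example.com             # serial-over-LAN console to watch it boot
```

The serial-over-LAN console is what makes the "poke at the BIOS from 1000 miles away" trick possible.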

The reason I wanted VT-d support has to do with a bit of an epiphany I had recently about the role of hypervisors in modern computing... I expect them to ultimately replace or supersede firmware in pretty much every system we use. Xen, particularly with the existence of its XenARM branch, is moving this way. VT-d and AMD-Vi can facilitate this already---albeit not yet with the degree of reliability that enterprise standards require---and broader compliance with standards like SR-IOV and MR-IOV will bring it to its full potential.

To illustrate, take the allure of VDI: independent systems for each user, centrally managed, with the ability to leverage datacenter-grade high availability and fault tolerance... but still subject to the same delivery restrictions as thin-client computing. Latency and bandwidth choke out the potential for true high-performance usage, and while server-grade processors pack extreme density per rack-unit of space, they lack the single-threaded performance of even modest desktop-grade chips. Now imagine that instead of delivering only the video output, when a client connects we migrate the whole kit and caboodle directly to the machine in question and simply wholesale-expose the entire PCI bus to the guest OS. When the user shuts down or disconnects, we disconnect the PCI bus and save state or migrate back into the datacenter instead.

Extending this to home computer use, I could migrate all of my machines off to my server instead of having to leave my desktop powered up all the time to get the functionality that I want. I could "lock" my desktop, "unlock" my Helix, and bam: I'm literally using the same computer. You or I might migrate to a local server, but one could see the average person migrating an OS into an AWS datacenter.
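For what it's worth, the pieces for that trick already exist in Xen's xl toolstack; a rough sketch of the workflow (the domain name, hostnames, and PCI address are hypothetical):

```
# Detach local hardware, then push the running domain to the home server
xl pci-detach desktop 01:00.0
xl migrate desktop homeserver

# Later, from the server: pull it back and re-attach the GPU
xl migrate desktop my-desktop-pc
xl pci-attach desktop 01:00.0
```

Live migration of a domain with PCI devices still attached generally isn't supported, hence detaching first.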

The allure is more grand for the case of ARM and Android. Lock your phone, throw it in a garbage disposal, whatever, then unlock your tablet: you're using the exact same OS that just "flew" over the WiFi, out of your pocket, and into the tablet.

I'm on a long tangent. Point is, it's a hell of a time to be a nerd :)

Comment Re:Price Adjustment (Score 4, Interesting) 330

Sorry... the point I'm making is that the real competition for Microsoft is the tablet itself. Excellent attempts have been made to shoehorn the Windows-on-Intel platform into the tablet form factor, and some of them, such as the Surface Pro and the ThinkPad Helix, have done a really good job of it given the constraints of the technology itself---the bound of which is mostly the Intel chips themselves.

The fact that my Helix has an Intel chip in it is enough for me to want it as the device that fits my needs as a tablet---aided greatly by the fact that it actually IS a tablet. With the catalogs of apps available on iOS and Android being so comprehensive, the benefit of the Helix's or Surface's pedigree doesn't shine as bright as it would have even a year ago. That benefit of course is that I can run damn near anything on it if I need to, "Full Windows" included. If that benefit itself becomes wholly irrelevant by the time Windows becomes cost-competitive in the tablet platform, then its market in that platform will cease to exist.

Comment Re:Price Adjustment (Score 4, Insightful) 330

I firmly believe that the Surface Pro has, at the very least, a decent niche with only two competitors

I'm typing this from a ThinkPad Helix, which I decided to purchase as I felt it offered me a little bit more of what I was looking for than the Surface Pro did. It's definitely got its faults, but it's worth pointing out that they're Lenovo's faults rather than anything to do with Windows.

It's the right product for me, but the thing holding it back is---of course---the price. Microsoft has a huge advantage with x86 on their side, but unless they can get the platform down to a price that's competitive with other products in the same market, that advantage will likely evaporate as other platforms' app catalogs close the gap and render "being Wintel" completely moot.

And we may be at least halfway there already. An iPad is a paradoxically capable device in a world that Microsoft has ruled for decades on compatibility and ubiquity alone, especially given the limitations of the hardware and form factor itself.

Comment Good advice for the OP, too. (Score 4, Informative) 241

You might want to invest in a newer router anyway.

The thing that limits the old GLs, aside from their pathetic RAM and flash space, is that they simply don't have enough CPU power. NAT for the number of connections today's computers and applications open is a lot of work for that aged ~200 MHz CPU. A faster router speeds up web browsing, of course, but the difference is more noticeable when you're doing several things at once. As my friend put it when I talked him into upgrading from a WRT54G v8 to a $50 dual-band TP-Link unit: "I was gaming on my Xbox for about an hour, and I came upstairs to find out that my wife had been watching Hulu the entire time. I had no idea..."

They'd never been able to do that before without his game lagging constantly. It wasn't a bandwidth thing either. They have 6 Mb/s DSL.

I recommend this model for the features. It'll run DD-WRT---you might want that anyway to ensure you have CoDel support---but the stock firmware works great and has most of the same features.

Here's a screenshot of DD-WRT's system status on the unit. I'm convinced that the version I'm running isn't quite stable... hence the high load. It's also serving as an AP for me instead of doing NAT work. My NAT is done by a similarly spec'ed device, a D-Link DIR-825, which runs much better and costs about the same, though it only does 300 Mbps on the 5 GHz interface. The D-Link might be a better candidate for DD-WRT if you're dead set on using it.

Comment You might be interested in Xen (Score 1) 196

On the Xen mailing lists, PCI passthrough and VGA passthrough---the latter being the same as the former, except that the VGA BIOS from the PCI device is also loaded into the guest VM, so you get the VM's BIOS output on the video card too---are a very hot topic. I have a single computer with an AMD 8-core chip in it that hosts four "heads," each a separate gaming computer for my friends to use when we play DotA 2.

Anyway, if your system supports VT-d for Intel or IOMMU for AMD, you can create virtual machines and pass video cards and USB controllers to each of them. Use Synergy or a KVM to switch between VMs, or, since PCI-E supports hotplugging, you can literally ping-pong the video card and other peripherals between them to change your outputs. It's really slick stuff.
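To make that concrete, here's roughly what the passthrough portion of an xl guest config might look like (the BDF addresses and guest name are made up for illustration):

```
# Hypothetical fragment of /etc/xen/gamer1.cfg
name    = "gamer1"
builder = "hvm"
memory  = 8192
vcpus   = 4
# GPU (plus its HDMI-audio function) and a USB controller for this head;
# "permissive" relaxes PCI config-space filtering, which some GPU drivers need
pci     = [ '01:00.0,permissive=1', '01:00.1', '00:1d.0' ]
gfx_passthru = 1   # VGA passthrough: expose the card's VGA BIOS to the guest
```

Each of the four "heads" gets its own config with its own GPU and USB controller.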

Here's a video if you're curious about performance.

Comment Re:saber rallying (Score 1) 213

Keep in mind: software vulnerabilities exist not because it's impossible to create perfect code, they exist because it's financially impractical. When something as deterministic and self-accountable as artificial intelligence is writing the code, those economies of scale will invalidate that statement.

That was actually my biggest gripe about the Terminator movies... computers wouldn't miss that frequently.

Comment Re:Yawn, another fork (Score 1) 219

Yes they do but the fact remains that this is a horrible business model. Someone, somewhere has already built an open or cheaper alternative to whatever software you can think up.

There are cheaper, open source alternatives to Windows. Its closed-source nature can't really be a horrible part of Microsoft's business model if it's profitable.

I'd like to see Windows (more specifically, the NT kernel itself) be both free and open source, but that has nothing to do with business.

Comment Re:Is it called Ouya? (Score 1) 143

they've created their own custom mechanisms for talking to controllers.

I've often wondered about this. A couple years ago, I got in on the first round for the iControlPad. The device was great in some aspects, and disappointing in many others. In particular, the d-pad is just awful. I blame Nintendo.

Anyway, one of the things I thought was odd was that it isn't, out of the box, a standard Bluetooth gamepad. It uses Bluetooth's serial port profile and communicates that way. It supports showing up as a gamepad, a keyboard, and a few other things, but this did puzzle me. I simply assumed that Bluetooth's gamepad (HID?) profile is... deficient in some way.

Can you shed any light on this for someone not accustomed to reading SDKs?

Comment Re:351 +2 (Score 3, Insightful) 119

To be fair, those TVs are probably all running WebKit.

I still prefer Chrome over Internet Explorer, but IE 10 (the "Metro" version, anyway) isn't a mind-numbingly terrible piece of software compared to the competition. It's good to know that, however ironic it may be, Microsoft, Mozilla, and Opera are all working opposite Google to keep the web from sliding into just a different monoculture.

Comment Re:ORACLE = One Raging Asshole Called Larry Elliso (Score 1) 405

Consoles (and gaming in general) are an unusual niche. Most software that does real-world stuff has to survive across multiple generations of hardware, where the low-level details change. The compiler, OS scheduler, etc. know better (and certainly will know better in the future) than you what resources are available and how to use them for most of your program's lifetime.

Thank you! That actually makes a lot of sense. "Trusting the scheduler to know better" is actually something I rely on a lot as an example with regard to hypervisors. I don't want to make the mistake of pinning VMs to certain CPUs or cores unless I know for a fact that it's a good idea, and I shouldn't have to! The scheduler should be smart enough to make that decision on its own, and if I can expect that, so should a programmer. Right? :D

In a fairly counter-intuitive way, hardware evolves to run the software faster. This is why we're still running on x86-lookalike machines---because Intel designs hardware to run existing software.

It certainly does. The thing that makes me step back and ask the questions I've raised is that "software bloat" is often blamed for eating up performance advancements in hardware. If proper computer science education's goal is to minimize this effect, what level of knowledge achieves it? Proper code construction? Is it more about knowing what type of loop to use than about how much memory you allocate or whether you thread properly?

Don't waste too much time on that if I'm just completely off base; as I said, I have no proper CS education, but the concepts behind it are some of the most interesting things I've ever read about. It makes me wish I had gotten into using assembler back in the DOS days, when it mattered more :P

Don't get me wrong - i'm not saying you won't get better performance in the short term. But over the long term, all that work you did will be invalidated next hardware cycle. And if you chose an intelligent algorithm in the first place, chances are the hardware will evolve to run it faster.

Unless you're in a high performance critical niche industry - Spend your time making the lower level details of your code correct, safe and secure instead.

That's the crux of my argument. Performance is basically the assembler-level expression of your code's higher-level construction. You do hit on a very important point: no one can learn everything, so it may indeed be a better idea to teach "safe" over teaching "fast." Regarding the OP's point, though, it's probably best to do that across several different programming languages. Start with something fully managed, then work your way down to something more "raw," like C?

The thing that gets me about Java is that it runs on top of the JVM. Perhaps the solution to that conundrum is to move the JVM into hardware, something I recall Sun attempting and mostly failing to bring to the mainstream. It'd be kind of funny if the ongoing lawsuit were between Oracle and Intel, but I digress.

Thanks for your input, sir!

Comment Re:ORACLE = One Raging Asshole Called Larry Elliso (Score 2) 405

Yes, and CPU implementation is a further subdivision of computer elements and architecture.

Then the details of register use are an even further subdivision.

I've done work in the field since the 70's. The last time I had to worry about CPU architecture was as a junior in college in 1971 when I was porting Spacewar from a PDP-1 to a PDP-8. Mostly these days it's all about algorithms in high level languages.

Aren't these details important to understand for what might be non-obvious reasons?

To give an example, take a look at gaming consoles. Performance and graphics get better over time because programmers write "to the hardware" with increasing precision as the product ages, and this provides a benefit that's almost completely unseen in modern general purpose computing.

The technique of unrolling a loop to increase performance is a common tactic taken by programmers, no? I know I've done it with scripts before. Are modern compilers simply so good at optimizing that expecting a programmer to extend this by hand, from a high-level language down to the assembly it compiles to, is no longer reasonable?

I know I'm showing my ignorance on the subject, but of all the things I've researched for my own enrichment, C/C++-type languages constantly fail to make sense to me... and I'd like to know more about the interplay between them and what actually goes on at the level of the generated assembly itself, but it seems like getting fluent enough even to understand a dozen lines of ASM is an impossible dream. :P
