Comment Re:Or else?? (Score 5, Insightful) 343

It's ok that they go on this track for consumers of things; but for god sake, make something for the rest of us that are producers of things.

The sad thing is that they actually have done that, but then layered the stupid mobile crap on top, hiding the productivity-enhancing goodness underneath!

For example, PowerShell 3.0 is a pretty big step forward. I've been using the CTP and now the RTM build on Windows 7, and I love it.

The guts of Windows Server 2012 are better than the previous versions, but it's all hidden behind the new Server Manager, which has been re-authored in the "formerly known as Metro" style -- but it's not really a Metro app, because Metro can't actually be used to... do things. The result is a hideous application that doesn't look like anything else in the operating system, and has a terrible control layout that's both confusing and slow. For example, after you open a "menu", you see about three items. About two seconds later, more items appear in the menu. That's just about the worst GUI design failure I've seen since I had the misfortune of using X11 applications, where some buttons perform their command when the mouse button is depressed, and some perform it when the mouse button is released.

The core: better than ever, better even than UNIX/Linux in many areas, including the command-line!

The skin: worse than ever, worse even than the inconsistency that UNIX/Linux is sometimes bashed for -- but all within one operating system that I assume follows some sort of "design guidelines".

Comment Re:Lots of work? (Score 4, Informative) 66

I've seen this MIT project before, but like the product you linked, it seems to be about "regular" arrays or arrangements.

I'm thinking more along the lines of ad-hoc arrangements of microphones, which is more like what Photosynth does -- it arranges arbitrary photos together to make a 3D scene, instead of taking specific, precisely aligned photos.

One interesting bit about the MIT project is that they have 1,020 microphones -- a world record -- generating 50MB/sec of data. A quick back-of-the-envelope calculation confirms that this represents 44.1 kHz at 8 bits per sample. If you think about it, this amount of data is peanuts to a modern PC. Just one high-end GPU might have 200GB/sec of memory bandwidth and over 2 teraflops of processing power! That translates to over 40,000 operations per sound sample, in real time, at 32-bit precision. That should be enough to track moving sound sources, figure out what's an echo and what isn't, correlate sounds across multiple microphones, perform Doppler-shift analysis, etc...
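For the skeptical, the napkin math is easy to spell out -- a quick sketch (the microphone count and data rate come from the project; the GPU numbers are round figures I'm assuming for a current high-end card):

    # Back-of-the-envelope check of the figures above. Mic count and the
    # ~50 MB/s rate are from the MIT project; the GPU specs are assumed.
    mics = 1020
    sample_rate = 44_100          # Hz
    bytes_per_sample = 1          # 8 bits

    data_rate = mics * sample_rate * bytes_per_sample
    print(f"Raw data rate: {data_rate / 1e6:.1f} MB/s")      # ~45 MB/s

    gpu_flops = 2e12              # ~2 teraflops at 32-bit precision
    ops_per_sample = gpu_flops / (mics * sample_rate)
    print(f"Ops per sample: {ops_per_sample:,.0f}")          # ~44,000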

Going to higher numbers of microphones ought to be easy, and could allow some fantastic applications, as well as some scary ones. There would be enough redundancy in the data to build a 3D scene with tracking of both moving sound sources and moving microphones. It may even be possible to determine room geometry, and the movement of large objects could be tracked based on their interaction with the sound field.

One application I can think of would be capturing sound during movie filming. Studios often have to discard the recorded sound and re-dub everything because of background noise, but this kind of technology would let the director perform arbitrary filtering after the fact, comparable to the light-field cameras that allow "refocusing" after an image has been captured. An actor's voice could be picked out and made louder, everything with a source "behind the camera" could be edited out, and surround sound effects could be generated from any scene setup.

Comment Re:Lots of work? (Score 3, Insightful) 66

Just altering the levels provides a lot of isolation (as seen in the video clips), but I have to wonder if there's an audio equivalent of "image stacking" or Photosynth: something that would correlate all of the audio streams, build a "model" of the audio-scape, and allow noise to be cancelled out -- or, more accurately, allow a voice to be extracted with higher specificity than simply taking 100% of one source.

I'm sensing that we're on the cusp of affordable setups where, instead of just a few microphones, rooms could be set up with hundreds of microphones recording in parallel, with analysis done to track and extract individual sound sources moving in 3D. I suspect that a modern GPU already has the compute power, or soon will. This would allow individual speakers to be isolated even if they weren't set up with little clip-on recorders ahead of time.
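To make the "correlate all of the audio streams" idea concrete, here's a minimal delay-and-sum beamformer sketch -- just the textbook technique, with made-up microphone positions and focus point (real systems need calibrated geometry and sub-sample interpolation):

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s
    SAMPLE_RATE = 44_100     # Hz

    def delay_and_sum(channels, mic_positions, focus_point):
        """Steer an ad-hoc mic array at a 3D point by delay compensation.
        channels: (n_mics, n_samples) array; positions in meters."""
        distances = np.linalg.norm(mic_positions - focus_point, axis=1)
        # Sound from the focus point reaches distant mics later, so skip
        # each channel's extra travel time to line the signals up.
        delays = (distances - distances.min()) / SPEED_OF_SOUND
        shifts = np.round(delays * SAMPLE_RATE).astype(int)
        n = channels.shape[1] - shifts.max()
        aligned = np.stack([ch[s:s + n] for ch, s in zip(channels, shifts)])
        return aligned.mean(axis=0)   # coherent sum favors the focus point

    # Toy usage: 8 mics scattered at random, "aimed" one meter ahead.
    rng = np.random.default_rng(0)
    mics = rng.uniform(-1.0, 1.0, size=(8, 3))
    recordings = rng.normal(size=(8, SAMPLE_RATE))   # stand-in audio
    voice = delay_and_sum(recordings, mics, np.array([0.0, 0.0, 1.0]))

Sources at the focus point add coherently while everything else averages toward zero, which is why more microphones buys you more isolation.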

Comment Re:WinRT is dead in the water (Score 5, Informative) 303

no significant loss of features aside from backward compatibility itself

That's a common misconception perpetuated by clever marketing, but it's flat out wrong.

Metro/WinRT is not Win32 modernized; instead, it is Silverlight 6 Tablet Edition.

It's severely sandboxed -- even more so, in some ways, than Silverlight 5 was -- which means that really important things that a lot of common applications require just Don't Work At All, and can't be made to work unless Microsoft relents and releases Windows 9 with a newer, more permissive API.

To give you an idea of just how restricted Metro/WinRT apps are, they're prevented from communicating with Desktop apps and traditional local services. That means that there's no shared memory, no named pipes, no Windows event passing, not even "localhost" sockets! Really major things can't be done, like runtime code generation (JIT), which directly impacts applications like Firefox and Chrome. Statically compiling Java code may work for some apps, but not if dynamic class loading is required.

Put yourself in the shoes of an Enterprise developer: Message Queues? Missing. LDAP? Nope. Background services? Blocked. Oracle client? Hah! Local database? Can't connect. Group Policy? Unavailable. PowerShell Integration? Desktop only.

Try this from a game developer's perspective: OpenCL? No JIT. PhysX? Can't talk to the driver. OpenGL? Over Ballmer's dead body.

Comment Re:D3 was rushed, but is aging well. (Score 1) 221

The core game design is fucking retarded. The gear upgrade path is market based. In some sense its much more efficient to gear up in D3 by playing "auction house trader" than "hack and slash dungeon crawler".

One thing that I haven't seen too many reviews cover is that auction houses ruin the sense of discovery or surprise when you find a new item, particularly unique items. In TL2, when I find a unique drop, it's like opening a Christmas present. With D3, it's like they showed me everyone's presents before putting them into the boxes. Sure, I don't know which present is in which box, but the surprise isn't quite the same, you know?

In TL2, I don't know what the maximum weapon DPS is. I don't know what modifiers are possible. I don't know how many sockets something can have. I don't know what socketables exist.

I like it that way!

Comment Re:Wha...? (Score 0) 171

Yep. I can't fathom why anyone in apple design thought sticking on a high pixel density display was a good idea.

Just because you have vision problems and can't appreciate higher DPI doesn't mean the rest of us are similarly limited.

I don't think the iPad 3 is a worthwhile upgrade for my parents, because they're past the age where they can see the difference. Ditto with technologies like 4K HDTV. They just can't see any difference.

I most definitely can.

Comment Re:Define premature (Score 1) 269

It's not just that they changed things, which they do with every release, but that they changed things for the worse.

Windows XP to Windows 7 had some pretty major changes -- including the task bar revamp -- but I got used to it. I didn't grumble, I didn't complain, because it wasn't worse, it was just different, and in some ways better. I like the previews. I like the jump lists. Let me reiterate that: it's different, but it's not worse.

The Windows 8 GUI isn't better in any way that I can see. On the contrary, every change is a change for the worse. Things take longer. More movement is required. More clicks are required. The design is schizophrenic and unpredictable. It splits the OS into two distinct styles, neither of which is remotely complete.

That's not even considering the ridiculous decision to eliminate the start menu:

-- It breaks 17 years of muscle memory for most users. I thought I used a lot of keyboard shortcuts, but I only realized how often I actually click the start menu when I tried Windows 8. Worse still is Windows Server 2012, where the "server manager" shortcut sits in the same location the start menu used to be! If you're an admin who's used to administering Windows Server, you will be punished with a one-minute wait, often.
-- The new shortcut is a tiny 2x2 pixel area or somesuch. Sure, it's easy to hit when it's in the corner of the screen, because you can just 'snap' your mouse into the corner. Unless you have a second monitor to the left of the main one. Or you're connecting over ANY kind of remote desktop that's not full screen. Like a Hyper-V console. Or a KVM. Or VNC. Or VMware.
-- Just press the "Windows key", right? You were about to tell me that, weren't you? Well guess what, it's not always available. Some keyboards don't have it. Some remote access scenarios don't pass it through. Sometimes I'm connected through 3 layers of remoting, and it's just not going to happen. Now what? Have you TRIED hitting a 2x2 pixel area over three hops with a total of 800ms of latency?
-- Accessibility is right out the window. Some people have movement impairments, and just can't hold sufficiently still to hit such a small target. Or end up hitting it accidentally when they actually were trying to click a taskbar icon.
-- Don't even get me started about people with flicker-triggered epileptic seizures. I can't wait for the first lawsuits from users literally collapsing in a twitching heap because of the non-stop full-screen transitions between the traditionally bright desktop applications and the new tiles screen with its dark background. Thankfully I don't have this problem myself, but I've found that in a dark room, using Windows 8 on a large monitor (e.g.: 24" or bigger) makes my eyes water. The repeated brightness transitions are almost painful.
-- It hides content. If you want to, say, search for something complex on your machine while referencing something in an application's screen... err... no such luck, you can only search full screen now.
-- It's enforced, unlike every other GUI transition Microsoft has ever made. I'm sure it's for our convenience, and not to force a new GUI paradigm down our throats just so Microsoft can leverage their monopoly to barge their way into yet another market.

However, you have a point: the GUI changes are a mere annoyance. I can grit my teeth and get used to them. As a developer, though, I can't help but gape open-mouthed at Microsoft's stunningly myopic "strategy". Put yourself in the shoes of a Microsoft Windows developer wanting to write a new full-featured, heavyweight Windows desktop application. Here are the options:

-- Win32: the assembler of GUI programming. Often referenced on microsoft.com as the "legacy" API.
-- MFC: Long dead, for masochists only.
-- WTL: never supported in the first place, requires C++ wizardry, and Microsoft's C++ track record is a joke.
-- .NET WinForms: Supposedly supported, but hasn't been improved for years. A dead-end for all intents and purposes.
-- .NET WPF: Looked really good but incomplete and slow. Windows 8 took it out to the back alley and quietly strangled it.
-- Silverlight: Officially dead.
-- Metro/WinRT: Also known as "Silverlight 6, tablet edition". Looks awesome, but it's nerfed. I mean that you can't use WinRT without also being restricted to the "tablet app APIs" for everything. It will never be supported on XP, Vista, or Windows 7, which alone eliminates it as an option for the majority of desktop application developers.

So, what exactly are Windows developers meant to do now? All "legacy desktop" APIs are getting end-of-lifed either officially or unofficially, and the new WinRT replacement is so massively cut-down that it cannot hope to act as a replacement.

This is not an improvement.

Comment Re:Comparing 2 different things... (Score 3, Interesting) 513

So hopefully I've made my point that the people who are of the mindset that they buy a device and it last six years are not who computer companies are targeting anymore, at least the mainstream ones. They want to sell a new device every two years to you, and that's why this update crap is a load of non-issue.

I get this mindset, I really do, even though I disagree with it. My IT purchasing habits back up my stance on the matter: I regularly replace my phone, laptop, and PC, usually every 2.5 to 3 years.

Until recently, I couldn't quite put my finger on what's wrong with the persistently popular opinion that this regular upgrade cycle is "crap", and somehow a "trick" pulled by the vendors. From a naive financial perspective it seems... correct. After all, we buy cars that are expected to last two decades, appliances that last up to three decades, and even electronics like TVs and HiFi systems usually last at least a decade.

Computers are different, and it's all down to the pace of Moore's law: essentially, paying a premium for something to last 6 years or longer is less efficient for everybody -- vendors and consumers alike -- than buying something cheaper and more disposable more often. If it weren't for the exponential increase in computing power, this wouldn't be the case; then it would make sense to buy more expensive computers with longer support and better physical build quality.

For example, I laugh at companies that "invest" in "big iron" that will "last them a decade". Sure, it will, but by the end of that decade it will be 3% as powerful as the "cutting edge" mainframe, thanks to Moore's law. Had the same company spent half as much every 5 years instead, at the end of the decade they'd have a computer that is 18% as powerful as the bleeding edge. Spending a third as much every 3 years or so would let them stay within 35% of the best possible performance at all times. Even if you assume that spending a third also cuts the performance to a third, the result is still about 12% of the best available, which is a lot better than 3%!

Sure, I'm simplifying an awful lot, but you get the idea: there's an ideal spending interval, and it's about 3 years. A lot of us IT geeks just "get" this intuitively, but we can't quite put the "why" into words without sitting down and doing the numbers.
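For anyone who does want to sit down and do the numbers, the whole model fits in a few lines -- a sketch assuming a clean Moore's-law doubling every two years, and (optionally) that a smaller budget buys proportionally less performance on day one:

    # Worst-case performance relative to the cutting edge, reached just
    # before each replacement. Assumes a doubling every two years.
    def worst_case_share(cycle_years, perf_fraction=1.0):
        # perf_fraction: how much of the contemporary cutting edge the
        # (cheaper) machine delivers on the day it's bought.
        return perf_fraction / 2 ** (cycle_years / 2.0)

    print(f"{worst_case_share(10):.0%}")      # big iron, decade:    3%
    print(f"{worst_case_share(5):.0%}")       # every 5 years:      18%
    print(f"{worst_case_share(3):.0%}")       # every 3 years:      35%
    print(f"{worst_case_share(3, 1/3):.0%}")  # 3 yrs, 1/3 budget:  12%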

By the way, this is one major reason why server virtualization (e.g.: VMware ESXi) is so popular: It allows corporations to make the migration process to the "next generation" trivial and virtually risk free. A smooth, regular upgrade cycle of server hardware is so much more efficient than buying "big iron" for a decade it isn't even funny.

Phones are much the same, unless you literally use them only for making voice calls. If you use them for more general-purpose tasks, the same argument applies. Newer phones do more, do it better, and do it all faster, and this pace of improvement is exponential. Sticking to a 6-year-or-slower upgrade cycle means spending the majority of the time near single-digit percentages of the best available performance. Why would you pay a premium for having less most of the time?

Comment Re:Why have such short limits? (Score 5, Interesting) 497

Every time I see any kind of password length limit somewhere, I instinctively know that somewhere behind the scenes there is this table column:

    user_password VARCHAR(16) NOT NULL

It's the same sinking feeling I get when I see the "the following special characters cannot be used in the password field" error message, which just tells me immediately that the code that submits the password field looks like:

    $cmd = "UPDATE ... user_password='" + $password + "' ... "

There really, really needs to be a "guild of programmers" or somesuch, along the lines of the Bar Association, so that anybody who writes code like the above can be summarily ejected from it.
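For contrast, here's roughly what the guild's entrance exam should demand -- a minimal sketch in Python (standard library only; the table and column names are invented for illustration). Parameters go to the driver, and only a salted, stretched hash of the unlimited-length password is ever stored:

    import os
    import hashlib
    import sqlite3

    def set_password(db, user_id, password):
        # A salted, stretched hash: password length and "special
        # characters" become a non-issue, because only the fixed-size
        # digest is stored.
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                                n=2**14, r=8, p=1)
        # Placeholders mean the password never touches the SQL string,
        # so there is nothing to inject.
        db.execute("UPDATE users SET pw_salt = ?, pw_hash = ? WHERE id = ?",
                   (salt, digest, user_id))
        db.commit()

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, "
               "pw_salt BLOB, pw_hash BLOB)")
    db.execute("INSERT INTO users (id) VALUES (1)")
    set_password(db, 1, "any length, any characters: ' OR 1=1 --")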

Comment Re:No kidding (Score 1) 242

You also missed my point, which was that the FDE (in combination with everything else involved) was doing exactly what it was intended for.

The cost overheads are negligible, and the article massively overstates them.

FDE, like passwords, is only a problem in environments with poor IT practices.

For example, I hear people go on about how "password management" is an "expensive headache" all the time. However, I only hear that in environments where the IT department failed to consolidate to a single directory system, and every password reset becomes a nightmare of synchronization, replication delays, incompatible password rules, and account lockouts. Meanwhile, in competent IT land, a password reset takes seconds and never fails.

FDE is similar. When used incorrectly, it requires extra steps and is a royal pain in the ass. I've seen some government environments that insist on using proprietary software to encrypt everything, including USB sticks, which then end up copying files at 100KB/s. That was because they were using old operating systems that didn't have Bitlocker built-in, and they picked the "cheapest" encryption product instead of the best. Had they simply kept up to date with new operating systems (which they were licensed for anyway under maintenance), they could have had a low-overhead system that you'd have to benchmark to notice.

He said you have to factor in user mistakes (like forgetting the password) as a cost of full disk encryption.

Except that normally Bitlocker is transparent to the user, and doesn't require a password. Hence, not an expense.

The password he was referring to was the recovery key held in Active Directory, which doesn't require memorization. If you're resorting to recovery keys, it had better be an unusual scenario, like a user who hasn't synchronized in a year.

It's a lot like complaining that passwords are an "overhead" because people who haven't been given a password can't access the system!

The lady's scenario is perfect. If she hadn't logged on for a year, it would be at least an hour or two to bring her computer up to scratch anyway. At the very least, it's going to require a couple of reboots worth of patches, a virus update, a full disk virus scan to be sure, and probably significant application package updates. Having to type in a 48-digit recovery code on top of that is going to add what, a couple of minutes tops to a multi-hour process? That's maybe $5 of employee time in exchange for hugely stronger security.

Comment Re:Truecrypt TCO (Score 3, Informative) 242

The main difference between Truecrypt and Bitlocker is that the latter allows transparent decryption, which is very hard to achieve without special hardware (a TPM). Additionally, Bitlocker has automatic key escrow to Active Directory; Truecrypt can only do the same kind of thing manually, which is useless when managing large numbers of computers.

If you can trust your users to remember passphrases, Truecrypt is much more secure. Bitlocker can be made more secure as well if you set it up to require a passphrase during boot; without one, the key never leaves the machine and is protected only by the TPM. The TPM chip is supposedly tamper-proof, but I bet there's at least one three-letter agency with a back door!

Comment Re:No kidding (Score 5, Insightful) 242

Now there are systems out there like that. They have central key stores, key recovery facilities and so on all while maintaining cryptographic security. However all the ones I've seen cost money. Then on top of that is the cost of administering such a system.

Security only costs extra if you had nothing to begin with, which basically never happens. Any corporation with data worth stealing is likely to have Active Directory, which has a convenient key escrow functionality built right in.

If you've already purchased Windows Server and have standardized on Windows 7, then full disk encryption with all the goodies is just a few clicks away, and costs nothing but the 60 minutes it takes to read the relevant TechNet articles and set a few Group Policy settings.

She also hadn't put the laptop on the 'net in like a year, so it was all desync'd with the Active Directory.

That's not her fault, that's the IT department's fault. That laptop can't possibly have been properly patched, its data synchronized, or up-to-date security policies applied. That should have rung alarm bells in the system, or locked her out until she did synchronize successfully.

Which can be done wirelessly these days. From home. Using transparent VPNs that require zero user interaction. All of which can be monitored centrally.

So he had to hook it up, go through this key recovery thing where the console give you a bigass key to enter in to the system, then get it to sync passwords, then he could log in and get everything working.

Wait, wait, wait... let me get this straight: she failed to authenticate with the system for something like a year, which then correctly locked her out after the timeout expired, protected the data on her laptop, allowed you to recover that data as designed -- and all of this required just a few minutes of typing? And to top it off, the security system insisted that her hopelessly out-of-date credentials cache be updated to verify her account?

OH MY GOD THE HORROR! The hassle! Why doesn't the crypto system just fall dead and recognize how important this lady is and unlock all of her data, despite her ongoing blatant violation of IT security policy! The nerve of Microsoft for designing such a thing! Next thing you know, they'll insist that you use passwords to log on to computers! Can you imagine?! We just won't be able to get any work done around here any more!

Clearly this is all just a giant conspiracy to drain valuable IT resources.

You have to count all that kind of thing in cost calculations.

Additional electricity due to use of AD Policy Driven Bitlocker encryption: $57.35
One hour support call to fix non-compliant user's locked out system: $197.50
Incompetent IT team: $457,350.00
Potential lawsuit due to leaking user data: Priceless.

Yes, you do have to factor that kind of thing in, you're right.

Comment Re:Come on, this is 2012 (Score 1) 290

Now we know single bit flip in an ethernet packet is just the sort of low hanging fruit of problems that we have network engineers for right? So I'm guessing you developed your own mathematically perfect CRC that you have published and that we should all use, to solve the 'low hanging fruit' of single bit flip errors?

Actually, CRC algorithms can be designed to detect all single-bit errors, and even larger errors, with certainty. For example, it's possible to design a checksum that detects every contiguous "burst" of erroneous bits up to a certain length.
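That claim is easy to check empirically -- a quick sketch using CRC-32, which by construction detects every single-bit error and every contiguous error burst up to 32 bits long:

    import zlib

    message = bytearray(b"critical telemetry frame, do not corrupt")
    good_crc = zlib.crc32(bytes(message))

    # Flip every single bit in turn; the checksum must change each time.
    for i in range(len(message)):
        for bit in range(8):
            corrupted = bytearray(message)
            corrupted[i] ^= 1 << bit
            assert zlib.crc32(bytes(corrupted)) != good_crc

    print(f"CRC-32 caught all {len(message) * 8} single-bit errors")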

Similarly, it's not at all unusual for mechanical designs to cater for the loss of a single fastener, or even several in a row.

Designing a critical power supply module so that it cannot be installed without every single bolt in place sounds like asking for trouble. Maybe it was done in the interests of saving weight, but still...

Comment A good answer for a bad question? (Score 2, Interesting) 454

As inane as the question is, I can think of a pretty good answer: ask if they like PowerShell!

It tests several things that someone from a UNIX background would want to see in a Windows administrator: it shows that they like the CLI and automation, it shows that they're up to date with Windows technology, and it shows that they prefer the "UNIX way". That last one may seem counter-intuitive, but PowerShell follows the UNIX philosophy better than any flavor of Linux or UNIX I've ever seen. A Windows administrator who likes PowerShell is the kind of administrator that a UNIX administrator can get along with!
