Comment Re:Subs as aircraft carriers (Score 1) 75

Well, aircraft are more flexible than cruise missiles.

Are they flexible enough to be worth neutering the sub? What kind of speed, range, noise, and depth limitations do you suppose you'll get from including a hangar and a runway?

If I had to guess, the trade-offs are disastrous---as evidenced by the fact that the US has zero in service.

Every square inch of hull must withstand 400+ lb of force at typical test depths. That is the physical constraint against which every "feature" must be weighed. Things that take up a lot of space get very expensive very quickly.
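For a rough sanity check of that figure, here's a back-of-the-envelope hydrostatic calculation. The 300 m test depth and seawater density are my assumptions, not the comment's:

```python
# Back-of-the-envelope check of the force-per-square-inch claim.
# Gauge pressure at depth: p = rho * g * d, converted to psi
# (pounds-force per square inch == pounds of force on each square inch of hull).

RHO = 1025.0            # assumed seawater density, kg/m^3
G = 9.81                # gravity, m/s^2
PSI_PER_PA = 1.0 / 6894.76

def psi_at_depth(depth_m):
    """Gauge pressure in psi at a given depth in seawater."""
    return RHO * G * depth_m * PSI_PER_PA

print(round(psi_at_depth(300)))  # ~438 psi at an assumed 300 m test depth
```

So at a few hundred meters, "400+ lb per square inch" is the right order of magnitude.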

Comment Re:Inflation, slow Internet, skill, slow PC (Score 1) 239

Chess, Go, poker are all games with spectator followings---just like football, soccer, and basketball.

Any game can be a spectator event if the experience of watching it is compelling. And once there is an established spectator community, it becomes a social event as well.

And let's not forget---an idiot with a camera is at least as entertaining as half of the sitcoms that populate (or plague) prime time television.

Comment Because competing on an even playing field is hard (Score 1) 119

Microsoft introduced HTTP.SYS, a kernel-mode HTTP listener, in Server 2003 to improve IIS 6.0 performance. They really wanted to beat Apache.

Each application pool has a dedicated request queue in HTTP.SYS, which provides very fast and low-latency network performance. This advantage may have been more significant on the slower machines of the time than it is today.

I am not a web developer or web admin, so I don't know how important the performance is---but I doubt it outweighs the security shortcomings.

As other OS functions (such as Windows Update) use the functionality provided by HTTP.SYS, this insecure design is difficult to fix.

Comment Re:To answer your question (Score 1) 279

Everyone sounds revolutionary in whitepapers when they're looking for money. See if that revolutionary talk sticks around after they mass-produce their new hardware and have to support it. That's always when the magic disappears.

Transmeta couldn't build anything that competed with Intel's offerings. The power consumption was lower, but the performance sucked. They were about a generation ahead on power consumption but about 2-3 generations behind on performance.

Code morphing has an inherent penalty---negligible for some instructions, severe for others. Same for emulation. Now that Intel is focusing on performance per watt, these "efficient" architectures are going to get buried by a truly efficient native x86 implementation.

Transmeta existed in an environment where Intel was focused on improving performance almost exclusively. They had a little niche of the market all to themselves, and they couldn't even survive then. Now that Intel cares about power consumption, I wouldn't bet on anyone else gaining a foothold.

Comment Re:Fast Lane = Not Faster (Score 1) 112

You might think that if you hadn't actually read the article.

From the description and the diagram, it appears rather clear that they are acting as a local seed for clients on their network.

This is an obvious win-win. It reduces their transit to other networks and keeps BT traffic off their backbone routers (provided the "local peer servers" are distributed regionally). Users get higher speeds from a virtually dedicated seed connection.

The obvious downside is that the ISP knows which torrents are being downloaded by which users, so there are potential privacy or legal issues. From a technical standpoint, however, both the ISP and the users would see an improvement.

This is actual innovation from an American ISP. I'm shocked that it happened, and even more shocked that people are upset about it.

Submission + - Driving Force Behind Alkali Metal Explosions Discovered (nature.com)

Kunedog writes: Years ago, Dr. Philip E. Mason (aka Thunderf00t on YouTube) found it puzzling that the supposedly "well-understood" explosive reaction of a lump of sodium (an alkali metal) dropped in water could happen at all, given the limited contact area on which the reaction could take place. Indeed, sometimes an explosion failed to occur: the lump of metal instead fizzed around the water's surface on a pocket of hydrogen produced by the (slower-than-explosive) reaction, which inhibited any faster reaction between the metal and the water. Mason's best hypothesis was that the (sometimes) explosive reactions must be triggered by a Coulomb explosion: sodium cations (positive ions) produced by the reaction repel one another and drive the metal's surface deeper into the water.

This theory is now supported by photographic and mathematical evidence, published in the journal Nature Chemistry. In a laboratory at Braunschweig University of Technology in Germany, Mason and other chemists used a high-speed camera to capture the critical moment that makes an explosion inevitable: a liquid drop of sodium-potassium alloy shooting spikes into the water, dramatically increasing the reactive interface. They also developed a computer simulation to model this event, showing it is best explained by a Coulomb explosion.
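As a rough illustration of why a surface crowded with like-charged cations would tear itself apart, here is a pair-wise Coulomb energy estimate. The 1 nm separation is an illustrative assumption of mine, not a number from the paper:

```python
# Rough estimate of the repulsive potential energy between two Na+ ions.
# The separation (1 nm) is an illustrative assumption; the takeaway is that
# the pair energy lands in the range of chemical bond energies (~1 eV).

K = 8.9875e9          # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C

def pair_energy_eV(separation_m):
    """Coulomb potential energy of two +1 ions, in electronvolts."""
    joules = K * E_CHARGE ** 2 / separation_m
    return joules / E_CHARGE

print(f"{pair_energy_eV(1e-9):.2f} eV")  # ~1.44 eV at 1 nm
```

Energies on that scale, multiplied across a dense layer of freshly formed cations, are plausibly enough to eject spikes of metal into the water.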

The YouTube video chronicles how the experimental apparatus evolved over time to keep the explosions safe, contained, reliable, and visible.

Comment Re:"Not intentional". Right. (Score 1) 370

That does not work since he is accessing Amazon, Netflix, and whatever CDNs are caching his shows.

With the growth of online video and IoT devices---and their resulting need for CDNs, redundancy, and resilience vs load/DDoS---simple IP filtering is becoming increasingly difficult for even a modestly connected household. And the difficulty is only going to increase over time.

I'm at the point where I'm about to give up on manual filtering entirely, and I don't have a lot of the shiny new networked devices. In the past, "dumber" devices have ended up being relegated to the cheapest bargain-basement hardware on the market. There are very few premium "dumb" devices anywhere.

I want privacy and security, and I want those things without being forced to buy the bottom-of-the-line crap. If they don't stop making invasive, insecure "smart" devices, I see bigger problems looming.

Comment Re:"Not intentional". Right. (Score 1) 370

Corporations have an obligation to follow the laws and make a profit.

And if we notice any behaviors harming our overall social environment while simultaneously enriching businesses, we can pass laws to prohibit those behaviors.

It's certainly feasible to prohibit known-bad behaviors preemptively rather than waiting for the market to discover and react to individual acts of malfeasance or breaches of trust.

In addition, a violation of established regulations or commerce law generally provides firm legal footing for a civil suit against the violators. These cases tend to be litigated much more quickly and successfully. Or settled without taking the matter to court at all, which is really ideal.

Comment Re:Hard To Imagine... (Score 1) 191

1. They probably derive more value from vendor lock-in than they expect from sharing. The rival OSes can already join an Active Directory domain (some require third-party tools, some don't). Right now, if you want to manage a fleet of Windows desktops you need a few Windows Server licenses for your domain controllers---and the requisite CALs. There are already open source AD clones anyway, which is probably why the 2008, 2008 R2, and 2012 functional levels have such nice new features. They want to maximize the number of Microsoft products you're using.

2. Until Microsoft storage demonstrates the reliability of EMC or Compellent, no one is going to care. Linux and Windows can both work as an iSCSI target, and that's good enough for people who want cheap, accessible storage. Customers who already demand reliability and performance are paying for it because they need it. Maybe there is a bigger market for people who could benefit from some middle tier of storage, but there are plenty of vendors in that range too. So, the question still boils down to "Why put that storage in a server and trust Microsoft to present it?"

3. At the enterprise level, if you're relying on AV detection to find malware, you're already behind the curve. Most of the new security features are targeted at the network-connected enterprise machines, with some trickle-down benefits for consumers. The VM/hypervisor idea will wreak havoc on the two performance areas that matter for Windows desktops---CAD and gaming. While I agree with the principle, it's not happening. Microsoft started research on Singularity almost 10 years ago, and little of that work has shown up in Windows.

Comment Re:Hard To Imagine... (Score 1) 191

Modern versions of Windows (Vista and newer) will find your license server automatically, unless you configure either your OS image or your license server to do otherwise.

It's a simple matter of having the right SRV record in DNS, and the license server will add it automatically if it's set up by a user with the necessary privileges.

The current license server supports all modern Windows versions. I wonder if that will change once Vista leaves its extended support phase. I expect it will take minimal effort to maintain activation support for an older OS, so I doubt they will simply drop it and risk irritating their enterprise customers. At this point, they're the only ones willing to pay a substantial amount of money for an operating system license.
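For reference, the discovery record in question (assuming the commenter means KMS-style volume activation, which registers itself under `_vlmcs._tcp`) looks like this in a DNS zone; the domain and host names below are made up:

```
; SRV record a KMS host registers so clients can auto-discover it.
; _vlmcs._tcp and port 1688 are the KMS defaults; names are illustrative.
_vlmcs._tcp.corp.example.com. 3600 IN SRV 0 0 1688 kms01.corp.example.com.
```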

Comment Re:Hope the trend continues. (Score 1) 263

If the company had a history of never patching vulnerabilities, or of being spotty and refusing to support newer products, then it would make sense to out them immediately.

But Microsoft has been issuing monthly patches for supported versions of Windows for years.

Yes, they'll delay or rescind a patch once in a while when it breaks things. Any company can be in that position though, and that's OK too provided they reissue a good patch when it's ready.

Instead of publishing exploit details and POC code automatically after 90 days, they should publish mitigation measures immediately (to actually help admins secure their assets) and sit on the more technical details for longer than 90 days if they reasonably expect the vendor to issue a patch. Maybe set a hard cap of 180 days to avoid being strung along indefinitely. While 90 days is a good starting point, no two bugs are the same.

An automatic one-size-fits-all approach is draconian and stupid. Some bugs require multiple rounds of testing because things get broken unexpectedly by the first "fix". Large software projects often end up with hidden dependencies that complicate bugfixing; it's a fact of life, and ignoring reality in favor of ideologically-driven rules usually ends poorly.
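The flexible policy sketched above (immediate mitigations, a detail embargo that tracks the vendor's patch timeline, a 180-day hard cap) is simple enough to write down; the function and parameter names here are mine, and the 90/180 numbers are the comment's:

```python
# Sketch of the proposed disclosure policy: mitigations publish immediately;
# full exploit details wait at least the base window, stretch to cover a
# credible patch ETA, and never exceed the hard cap.

BASE_DAYS = 90      # default embargo on technical details
HARD_CAP_DAYS = 180 # absolute limit, to avoid being strung along

def details_embargo_days(vendor_engaged, patch_eta_days=None):
    """Days to withhold exploit details and POC code after notifying the vendor."""
    if not vendor_engaged or patch_eta_days is None:
        return BASE_DAYS
    return min(max(BASE_DAYS, patch_eta_days), HARD_CAP_DAYS)

print(details_embargo_days(True, 150))    # 150: wait for a credible patch
print(details_embargo_days(True, 400))    # 180: hard cap kicks in
print(details_embargo_days(False, None))  # 90: unresponsive vendor, default window
```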

Comment Re:Try Again Next Time (Score 4, Insightful) 248

The fact that what they think went wrong was insufficient hydraulic fluid---and not an engineering process that allowed a major mistake to make it into the design undetected---is the *real* problem.

It was detected during testing. Their entire retrievable/reusable concept is being developed and tested right now. Their contractual requirement is to put payload into orbit. The landing mechanism is merely an economic advantage for the company that will keep their costs lower; their contracts certainly don't specify it as a requirement.

Some shops use an iterative design process. It usually comes with being new to the market (and thus lacking the funds for extended pre-operative testing).

Some shops even do iterative design as standard practice when they are well-funded.

They were only required to launch supplies to the ISS. The ability to test and refine their landing mechanism is a bonus for the company. Hell, NASA's other contractor doesn't even have a reusable vehicle.

In conclusion: Do you know what we call a service that fulfills its contractual requirements? A success.

Comment Re:Application installers suck. (Score 2) 324

Pretty much.

The Windows Store has more granular permissions, restricted UI modes, and reduced legacy API support. These things will lead to apps using modern security and UI conventions, which is mostly a good thing.

A curated app store is probably good for normal users. As long as sideloading apps is always supported, this should make some headway on taming the burden of legacy software.

I expect to see an unending avalanche of shitty Win32 apps for the rest of my life, but the Windows Store at least offers some vague hope that it will diminish over time.

Comment Re:Application installers suck. (Score 2) 324

Applications and config/data files that need to be available to multiple users can be installed under C:\Users\Public by default without admin privileges. This location is exposed in the %PUBLIC% environment variable in case the admin has moved it.

Applications with per-user installation or config files can use the %USERPROFILE% environment variable to find a safe place to store their data (defaults to C:\Users\username). Creating your own directory there is probably a good idea and is permitted by default.

There are guidelines for using the pre-established directories for Desktop, Documents, Downloads, Music, Pictures, and Videos though, since they are shared with the OS and other applications.
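A minimal sketch of those two lookups---the variables in question are %PUBLIC% and %USERPROFILE%---with fallbacks to the documented Windows defaults so it also runs where the variables are unset; "MyApp" is a placeholder name:

```python
# Resolving shared vs per-user data directories on Windows.
# Falls back to the documented defaults when the environment variables are
# absent (e.g. on a non-Windows box); "MyApp" is a made-up application name.

import os

def shared_data_dir(app="MyApp"):
    """Directory for config/data shared by all users (no admin rights needed)."""
    public = os.environ.get("PUBLIC", r"C:\Users\Public")
    return os.path.join(public, app)

def user_data_dir(app="MyApp"):
    """Per-user directory; creating your own subdirectory here is permitted."""
    profile = os.environ.get("USERPROFILE", r"C:\Users\username")
    return os.path.join(profile, app)

print(shared_data_dir())
print(user_data_dir())
```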
