
Comment Re:The bullshit is fresh and steamy (Score 3, Informative) 237

No, they enabled copy protection that the content producers want to see enabled before they let you stream 1080P/4K content. That's just how it is. It sucks, but don't go after Microsoft on this one.

The good news is that since 4K will be so hard to obtain, most end users will ultimately just stick with 720P content anyway. There's no demand for 4K content in the sense that if it's too fucking difficult to access, nobody will want it.

Comment Re: This brings us one step closer to many things (Score 1) 428

Here's a fun fact:

Anyone who knows anything about datacenters knew what the NSA would be collecting when they built the Utah datacenter. Its construction wasn't a secret.

You want to know who else has datacenters that size? Facebook and Google.

What the fuck did you think the NSA was going to do with a datacenter in Utah that rivaled a Facebook datacenter?

This shit was common knowledge. Here's an article about it a full 2.5 years before Mr. Idiot leaked information.

Comment Re:Not really. (Score 1) 233

Yes, people still "think" this. And it's in your best interest to read the platform documentation for the systems and applications you're leveraging before you decide whether to go physical or virtual.

As an example of the "scalability problem", Microsoft has documented their Exchange 2013 Preferred Architecture, which is pretty much 2U, 2-CPU servers with a JBOD of 7200 RPM disks. Essentially, you take everything you think you know as a VM platform guy (snapshots, SANs, LUNs, RAID, etc.) and throw it out the window, because none of it applies to Exchange 2013 (and newer). Microsoft, and other application vendors, have built resiliency into the application stack itself. Because of this, all of your traditional VM methods of failover (host failover, DRS, HA) do not apply to the technology. Or rather, they're an unsupported configuration which may result in performance problems at best and data resiliency problems at worst.

The links below don't necessarily say don't virtualize, but they do say to understand the design they're going for and build appropriately (scale out, not up). VM architecture is a massive overhead cost in comparison to throwing a bunch of dumb servers together and saying "make it work".

This is just one particular item that I think doesn't lend itself well to virtualization. Another area is SQL Server, where the disk i/o requirements are so intense that it's cheaper to build out a dedicated SQL cluster than to build a virtualization environment that meets the i/o needs of the databases.

Another application that leans heavily on iops is SharePoint: Microsoft strongly recommends dedicating a cluster to SharePoint (do not add an instance to a cluster running other applications), and at a large size you may need some serious iops.
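To see why raw iops push these workloads toward dedicated hardware, here's a back-of-the-envelope spindle count. The per-disk iops figures and the 10,000-iops workload are illustrative ballpark numbers I'm assuming for the sketch, not vendor specs:

```python
import math

# Rough iops sizing sketch: how many spindles does a database workload need?
# Per-disk iops figures below are common ballpark estimates, not vendor specs.
IOPS_7200_RPM = 80   # typical 7200 RPM SATA disk
IOPS_15K_RPM = 180   # typical 15K RPM SAS disk

def disks_needed(target_iops, read_pct, raid_write_penalty, per_disk_iops):
    """RAID write penalty: each logical write costs extra back-end i/o."""
    write_pct = 1.0 - read_pct
    backend_iops = target_iops * (read_pct + write_pct * raid_write_penalty)
    return math.ceil(backend_iops / per_disk_iops)

# A hypothetical 10,000-iops SQL workload, 70% read, on RAID 10 (write penalty 2):
print(disks_needed(10_000, 0.70, 2, IOPS_15K_RPM))  # 73 spindles of 15K disk
```

Dedicate that many spindles to one VM on shared storage and you've already bought the dedicated cluster, just with a hypervisor tax on top.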

Again, these things aren't impossible to virtualize. But the raw needs of both of these applications lend themselves better to physical hardware rather than being tossed on a VM cluster. By the time you dedicate enough resources (whether CPU time, dynamic memory usage, or io priority) to these apps, you would have been better off just buying some dedicated physical hardware, and you'd end up with much better performance.

Yes, for a lot of workloads VM is great. Things like low-end application servers and scale-out web servers (preferably using web clusters where you can) are great fits. You can get a lot of VM density on a great many workloads. But not everything can be done this way, unless you want to put 100Gbps links in every physical server going to your storage clusters, and SSDs everywhere.

Comment Not really. (Score 4, Insightful) 233

People keep saying this to me. "Oh, we won't need your type in a few years because cloud everything." Never mind the fact that around 99% of my work is software-based. I only rarely mess with hardware: every five years for a hardware refresh, plus the occasional drive swap from a vendor. Everything else I do is software-based, and it really doesn't matter whether it's "in the cloud" or "on premise". My job role stays the same. So I save a whole 15 minutes a year on not having to swap drives.

What you will see with "cloud", just like "virtualization", is a maturation of the technology's use inside a company. Not every workload is appropriate for virtualization, and not every workload will be appropriate or cost-effective in the cloud. The cloud is great for every "devops" guy who thinks he's going to write the next Facebook, Amazon, or Netflix, but for 99% of companies out there, workloads are entirely static. There's just little need for "SUPER HYPER SCALE AUTOSCALING UP AND DOWN CLOUD INFRASTRUCTURE" for a vast majority of business workloads.

Specific applications are hugely appropriate for "cloud", particularly e-mail (and I say this as an Exchange Administrator). But for those "we need this up 110% of the time" applications, companies will find that if the "cloud vendor" has a problem, there's nobody they can call to fix the issue. Never underestimate the value of management having someone they can call to "look at the issue right away at 2:30AM". That need will keep a lot of folks employed.

Finally, you can't depreciate cloud spending like you can capital expenses. So really, you're ultimately just comparing the cost of operating a datacenter against the cost of the cloud technology. And you can already avoid operating your own datacenter by simply using a colocated one.
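The capex-vs-opex point is easy to show with arithmetic. All the dollar figures below are made up for illustration; the accounting method (straight-line depreciation) is the standard one:

```python
# Illustrative capex-vs-opex comparison (all dollar figures are hypothetical).
# Owned hardware is a capital expense depreciated over its useful life;
# cloud spend is a flat operating expense with nothing left on the books.
def straight_line_depreciation(cost, salvage, years):
    """Annual depreciation expense under the straight-line method."""
    return (cost - salvage) / years

server_cost = 50_000      # hypothetical server purchase
annual_dep = straight_line_depreciation(server_cost, salvage=5_000, years=5)
cloud_monthly = 1_200     # hypothetical equivalent cloud bill

print(annual_dep)         # 9000.0 per year expensed against the asset
print(cloud_monthly * 12) # 14400 per year, pure opex, no asset to depreciate
```

The point isn't which number is smaller (that depends entirely on your real costs); it's that the two spend profiles hit the books differently, so the comparison is never apples to apples.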

So at the end of the day, no matter how much the technology changes: no, the "devops" revolution isn't actually going to happen, and being able to swap a drive or add some RAM will still be a necessary skill.

Comment Re:Hang on a minute... (Score 1) 747

I feel ya there. I'm trying to stay away from the concept of "Get off my lawn", and push more towards education.

It is no surprise that a vast majority of modern investments in technology are "web-based". A lot of people coming out of schools now had their first introduction to computing and networking through the web, and they learned backwards.

There were those of us who saw the evolution of the web "forwards". Who remembers installing the Trumpet Winsock TCP/IP stack on Windows, because Windows didn't ship with a TCP stack (or, for that matter, IP)?

What we're seeing now is a lot of "reinvention of the wheel" because those folks are quite literally working backwards. And not everything they're doing is a "bad thing". There are a lot of great technologies coming out to make platform management easier.

I try to meet in the middle. And no doubt, as time goes on, after dozens of years of abstraction from the hardware and the lower layers, these folks will ask themselves "HOLY CRAP, I CAN ACTUALLY DO WHAT WITH THE HARDWARE?!", and 50 years of technology will rush into their brains as they discover what those who have followed it have been saying all along.

Comment If you're surprised (Score 4, Insightful) 120

You're an idiot, plain and simple.

Selling "customers as a service" is the big new economy, and every single "startup" and "app" coming out of places like Y Combinator in the past few years has been about nothing more than selling your information. Every mobile app, every mobile game, every "CHECK OUT THIS FREE NEW THING!" For example, Life360. Think they're offering it for free? Life360 is currently valued at $250M. Facebook paid $19 billion for WhatsApp.

You're a complete moron if you haven't been watching this.

Comment Re: But does it matter any more? (Score 1) 181

It is incredibly important as an IT person to be able to MITM your connections on a company network. And we fully employ such functionality where we are.

First and foremost, compliance is a thing. As a personal user you may not have to care, but as a business the organization has to take special care when handling certain types of information. So we need to be able to see where that information is going.

Another reason is IPS. Many attacks, like spam, change the locations they come from, but a particular type of attack is almost never going to change. There are only so many ways, for example, to exploit any individual hole in a web browser. You can flag on that with significantly more success than simply blocking IP ranges, which is about all you can do if you do not MITM connections.
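The difference between the two approaches can be sketched in a few lines. This is a toy illustration, not a real IPS: the exploit "signature" is a classic heap-spray marker chosen for the example, and the blocklist uses a documentation IP range:

```python
import re

# Toy contrast: signature inspection on decrypted (MITM'd) payloads
# versus IP-range blocking, which is all you have for opaque TLS traffic.
EXPLOIT_SIGNATURES = [
    re.compile(rb"%u9090%u6858"),    # classic unicode-encoded heap-spray marker
]
BLOCKED_PREFIXES = ("203.0.113.",)   # stand-in blocklist (documentation range)

def inspect(payload: bytes, src_ip: str) -> str:
    if any(sig.search(payload) for sig in EXPLOIT_SIGNATURES):
        return "block: signature"    # catches the attack wherever it comes from
    if src_ip.startswith(BLOCKED_PREFIXES):
        return "block: ip range"     # breaks as soon as the attacker moves hosts
    return "allow"

print(inspect(b"GET /?q=%u9090%u6858", "198.51.100.7"))  # block: signature
print(inspect(b"GET /index.html", "203.0.113.9"))        # block: ip range
print(inspect(b"GET /index.html", "198.51.100.7"))       # allow
```

Without MITM, the payload in the first case is just ciphertext, and the only lever left is the IP check, which the attacker controls.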

There are real, legitimate concerns and reasons to MITM. If you don't like it, don't do non-company things on company Internet and equipment.
