
Comment Re:The bullshit is fresh and steamy (Score 3, Informative) 237

No, they enabled the copy protection that content producers want to see enabled before they let you stream 1080p/4K content. That's just how it is. It sucks, but don't go after Microsoft on this one.

The good news is that since 4K will be so hard to obtain, most end users will ultimately just stick with 720p content anyway. There's no demand for 4K content in the sense that if it's too fucking difficult to access, nobody will want it.

Comment Re: This brings us one step closer to many things (Score 1) 428

Here's a fun fact:

Anyone who knows anything about datacenters knew what they would be collecting when they built the Utah datacenter. Its construction wasn't a secret.

You want to know who else has datacenters that size? Facebook and Google.

What the fuck did you think the NSA was going to do with a datacenter in Utah that rivaled a Facebook datacenter?

https://defensesystems.com/Articles/2011/01/07/NSA-spy-cyber-intelligence-data-center-Utah.aspx

This shit was common knowledge. Here's an article about it from a full 2.5 years before Mr. Idiot leaked anything.

Comment Re:Not really. (Score 1) 233

Yes, people still "think" this. And it's in your best interest to read the platform documentation for the systems and applications you're leveraging before you decide whether to go physical or virtual.

As an example of the "scalability problem", Microsoft has documented its Exchange 2013 Preferred Architecture, which is pretty much 2U, 2-CPU servers with a JBOD of 7200 RPM disks. Essentially, you take everything you think you know as a VM platform guy (snapshots, SANs, LUNs, RAID, etc.) and throw it out the window, because none of it applies to Exchange 2013 (and newer). Microsoft, and other application vendors, have built resiliency into the application stack. Because of this, all of your traditional VM methods of failover (host failover, DRS, HA) do not apply to the technology. Or rather, they're an unsupported configuration that may result in performance problems at best and data resiliency problems at worst.

The links below don't necessarily say "don't virtualize", but they do say to understand the design Microsoft is going for and to build appropriately (scale out, not up); a rough sizing sketch follows the links. VM architecture is a massive overhead cost compared to throwing a bunch of dumb servers together and saying "make it work".

http://blogs.technet.com/b/exchange/archive/2014/04/21/the-preferred-architecture.aspx
http://blogs.technet.com/b/exchange/archive/2015/06/19/ask-the-perf-guy-how-big-is-too-big.aspx
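
To make the scale-out math concrete, here's a rough back-of-the-envelope sizing sketch in Python. Every number in it (mailbox count, quota, copy count, disk and server sizes) is an assumption I made up for illustration, not anything from Microsoft's documents; the point is just that the arithmetic naturally pushes you toward a row of cheap 2U JBOD boxes instead of one big virtualized SAN.

    # Hypothetical "scale out, not up" sizing sketch. All figures are assumptions.
    mailboxes        = 20_000   # assumed user count
    mailbox_quota_gb = 10       # assumed quota per mailbox
    db_copies        = 4        # multiple database copies, PA-style (assumed)
    overhead_factor  = 1.3      # logs, indexes, free space (assumed)
    disk_size_tb     = 4        # cheap 7200 RPM JBOD disk (assumed)
    disks_per_server = 12       # 2U server with a JBOD shelf (assumed)

    raw_tb_needed  = mailboxes * mailbox_quota_gb / 1024 * db_copies * overhead_factor
    servers_needed = -(-raw_tb_needed // (disk_size_tb * disks_per_server))  # ceiling

    print(f"Raw capacity needed: {raw_tb_needed:.0f} TB")        # ~1016 TB
    print(f"Commodity 2U servers needed: {servers_needed:.0f}")  # ~22

Swap in your own user counts and disk sizes; the shape of the answer (lots of cheap boxes, application-level copies instead of SAN-level redundancy) stays the same.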

That's just one particular workload that I think doesn't lend itself well to virtualization. Another area is SQL Server, where the disk I/O requirements are so intense that it's cheaper to build out a dedicated SQL cluster than it is to build out a virtualization environment that meets the I/O needs of the databases.
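
A crude way to see that cost argument is to price out what it takes to deliver a given IOPS number on dedicated boxes with local disks versus on a shared SAN behind a VM farm. Everything in the sketch below (the IOPS target, the per-device and per-IOPS prices, the overhead figure) is a made-up assumption for illustration only; plug in real quotes and see which way it tips for your workload.

    # Hypothetical cost comparison for hitting a database IOPS target.
    # All prices and IOPS figures are assumptions, not vendor quotes.
    target_iops = 50_000                 # assumed database workload

    # Option A: two dedicated physical SQL nodes with local SSDs (assumed figures)
    local_ssd_iops = 40_000              # per local SSD/NVMe device
    local_ssd_cost = 800                 # per device
    server_cost    = 12_000              # per cluster node
    ssds_per_node  = -(-target_iops // local_ssd_iops)          # ceiling
    dedicated_cost = 2 * (server_cost + ssds_per_node * local_ssd_cost)

    # Option B: shared virtualization platform plus a SAN sized for the same IOPS
    san_cost_per_iops = 3.0              # assumed $/IOPS for a shared array
    host_overhead     = 25_000           # extra hosts/licensing to isolate the load (assumed)
    virtualized_cost  = target_iops * san_cost_per_iops + host_overhead

    print(f"Dedicated physical cluster (rough): ${dedicated_cost:,.0f}")    # ~$27,200
    print(f"Virtualized + shared SAN (rough):   ${virtualized_cost:,.0f}")  # ~$175,000

Obviously the real numbers depend entirely on your vendors and workload; the point is just that shared storage layers priced by IOPS add up fast.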

Another application that leans heavily on IOPS is SharePoint (https://technet.microsoft.com/en-us/library/cc298801.aspx). Microsoft strongly recommends dedicating a cluster to SharePoint and not sharing it (don't add an instance to a cluster running other applications), and at large scale you may need some serious IOPS.

Again, these things aren't impossible to virtualize. But the raw needs of both of these applications lend themselves better to physical hardware than to being tossed onto a VM cluster. By the time you dedicate enough resources (whether CPU time, dynamic memory, or I/O priority) to these apps, you would have been better off just buying some dedicated physical hardware, and you'd end up with much better performance.

Yes, for a lot of workloads VMs are great. Things like low-end application servers and scale-out web servers (preferably using web clusters where you can) are a natural fit, and you can get a lot of VM density on a great many workloads. But not everything can be done this way, unless you want to put 100 Gbps links in all of your physical servers going to your storage clusters...and SSDs everywhere.

Comment Not really. (Score 4, Insightful) 233

People keep saying this to me: "Oh, we won't need your type in a few years because cloud everything." Never mind the fact that around 99% of my work is software-based. I only rarely mess with hardware: every 5 years for a hardware refresh, plus the occasional drive swap from a vendor. Everything else I do is software-based, and it really doesn't matter whether it's "in the cloud" or "on premise". My job role stays the same. So I save a whole 15 minutes a year by not having to swap drives.

What you will see with "cloud", just like with "virtualization", is a maturation of the technology's use inside a company. Not every workload is appropriate for virtualization, and not every workload will be appropriate or cost-effective in the cloud. The cloud is great for every "devops" guy who thinks he's going to write the next Facebook, Amazon, or Netflix, but again, for 99% of companies out there, workloads are entirely static. There's just little need for "SUPER HYPER SCALE AUTOSCALING UP AND DOWN CLOUD INFRASTRUCTURE" for the vast majority of business workloads.

Specific applications are hugely appropriate for "cloud", particularly e-mail (and I say this as an Exchange administrator). But for those "we need this up 110% of the time" applications, companies will find that if the "cloud vendor" has a problem, there's nobody they can call to fix the issue. Never underestimate the value of management having someone they can call to "look at the issue right away at 2:30 AM". That need will keep a lot of folks employed.

Finally, you can't depreciate cloud spending the way you can capital hardware purchases. So really, again, you're ultimately just comparing the cost of operating a datacenter against the cost of the cloud service. And you can already stop worrying about operating your own datacenter by simply using a colocation facility.
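
To make that comparison concrete, here's a trivial annualized-cost sketch. Every figure in it is a made-up assumption (purchase price, refresh interval, colo and admin costs, the cloud bill); the point is only that once you spread the capital purchase over its refresh cycle and add the colo costs, you're comparing two yearly numbers, and the cloud doesn't automatically win.

    # Hypothetical ownership-vs-cloud comparison over one hardware refresh cycle.
    # Every figure below is an assumption, purely to show the shape of the math.
    hardware_capex  = 120_000   # up-front purchase, spread over its useful life (assumed)
    refresh_years   = 5         # typical refresh cycle (assumed)
    colo_per_year   = 18_000    # colocation rack, power, remote hands (assumed)
    admin_per_year  = 10_000    # share of an admin's time for hardware care (assumed)
    cloud_per_month = 4_500     # equivalent cloud subscription (assumed)

    owned_per_year = hardware_capex / refresh_years + colo_per_year + admin_per_year
    cloud_per_year = cloud_per_month * 12   # pure opex; no asset on the books

    print(f"Owned gear in a colo, cost/year: ${owned_per_year:,.0f}")  # $52,000
    print(f"Cloud subscription, cost/year:   ${cloud_per_year:,.0f}")  # $54,000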

So at the end of the day, no matter how much technology changes, no, the 'devops' revolution isn't actually going to happen, and being able to swap a drive or add some RAM will still be a necessary skill.

Comment Re:Hang on a minute... (Score 1) 747

I feel ya there. I'm trying to stay away from the concept of "Get off my lawn", and push more towards education.

It is no surprise that the vast majority of modern investment in technology is "web-based". A lot of people coming out of school now had their first introduction to computing and networking via the "web", and they learned it backwards.

There were those of us who saw the evolution of the web "forwards". Who remembers installing the Trumpet Winsock TCP/IP stack on Windows, because Windows didn't ship with a TCP stack [or, for that matter, IP]?

What we're seeing now is a lot of "reinvention of the wheel", because those folks are quite literally working backwards. And not everything they're doing is a "bad thing": there are a lot of great technologies coming out that make platform management and the rest easier.

I try to meet in the middle. And no doubt, as time goes on, after dozens of years of abstraction from the hardware and the lower layers, these folks will ask themselves "HOLY CRAP, I CAN ACTUALLY DO WHAT WITH THE HARDWARE?!", and a world of 50 years of technology will rush into their brains as they discover what those who have followed it have been saying all along.

Comment Good Detail Included In Summary (Score 1) 107

Notably, s2n does not provide all the additional cryptographic functions that OpenSSL provides in libcrypto; it only provides the SSL/TLS functions. Furthermore, it implements a relatively small subset of SSL/TLS features compared to OpenSSL.

This is the kind of really important detail that is often left out of summaries and winds up making my eye twitch. Thanks OP and/or editors for rising above the common dross.

Comment Re:Big Data != toolset (Score 1) 100

Both Pointy-Headed Bosses and Slashdot loooove talking about tools. As the posts generally show, both PHBs and Slashdotters have no clue about what Big Data is used for. It's all about the buzzwords and technology, not about use and utility. There are no references to any algorithms.

Heh. I've been doing big data since 2000. Fifteen years' experience in a field that's five years old, I like to say. And let me say this: you nailed it. Your whole post, not just the part I quoted. I've used the tools, from Colt to R, and there is no substitute for the ability to analyze and match a business model, data system, algorithms, implementation, and business controls.

On the upside, give me (or, I'm guessing, you) a month or two to develop a big data strategy, and we'll generate a large, measurable improvement in the company's desired performance metric -- using whatever toolset the company is fawning over at the moment. It may not be what sells the PHBs, but it feeds the bulldog.

It is a shame, though, to see so many charlatans diverting so much revenue into ill-conceived projects. Alas.

Comment Re:More Bullshit (Score 1) 167

I guess they better crack down on paying anyone with beer/food as well.

If it really is pay - or in legal terms, "consideration" - then it is covered by this law exactly the same as money. What you do with your friends is neither pay nor consideration. You give them beer and sandwiches when they help you out for free.

If you claim you don't get the distinction, I believe you are being intentionally obtuse. A judge or magistrate would not be so.
