As soon as I saw the CowboyNeal option, I picked it. It's been far too long.
That's bonkers! Or maybe not...
Markets do change, though. For the longest time I was a non-gamer. Then Steam for Linux came out. Two years ago I bought a GTX 760, many times the cost of the GT 430 I'd bought before it. I chose the 760 because it was the best card I could run without upgrading my power supply (which has split 12-volt rails). But I've found the 760 is underpowered for driving my 1440p display at reasonable framerates (I have the display for productivity purposes). It's time to build a new computer this year, and I'll probably go with a GTX 980 Ti.
Not in all cases. I manage a cloud deployment that runs five figures a month, and I go over the bill every month looking for ways to reduce costs. Using the cloud is still cheaper than maintaining our own data center, even before considering how capital-intensive it would be to carry unused resources ourselves.
If I had to have enough spare resources to handle our occasional traffic spikes, I'd have to spend an extra $100,000 upfront for hardware that would sit around doing nothing almost all the time. But when our traffic triples in fifteen minutes and I need another fifty web servers, they're automatically provisioned and deployed behind the load balancer, and we spend an extra $100 or whatever for the day. Events like that happen maybe five times a year. $500 is a lot less than $100,000.
We also use a similar setup for work queues, scaling worker machines based on how long tasks are waiting to be processed. Some hours only one machine runs a queue; other hours, ten. We use spot pricing, too, on less urgent work, to keep costs down.
At first I was skeptical about cloud computing, but I'm a convert. It works, and it works beautifully. And it saves us a lot of money by letting us run far fewer servers on average.
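For anyone checking the arithmetic, here's the back-of-the-envelope version (all figures are the rough numbers above, not actual billing data):

```python
# Rough cost comparison: owning idle spare capacity vs. bursting in the cloud.
# All figures are ballpark numbers from the discussion above, not real billing.

UPFRONT_HARDWARE = 100_000      # spare servers sized for peak, idle most of the time
BURST_COST_PER_EVENT = 100      # ~50 extra web servers for a day
EVENTS_PER_YEAR = 5

yearly_burst_cost = BURST_COST_PER_EVENT * EVENTS_PER_YEAR
print(f"Cloud bursting: ${yearly_burst_cost}/year vs ${UPFRONT_HARDWARE} upfront")
# Even before power, cooling, and depreciation, bursting wins by ~200x.
```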
Thank you for a megapascal of a laugh
You may want to get a quote from beutilityfree.com, who resells the Chinese Ni-Fe batteries. They're based in Colorado.
Sorry to disappoint, but that's because Steam sniffs the user agent string for OS and filters what it displays by default. If you visited with a Windows or Mac machine you would probably see a different list. You can remove the filter if you wish. Personally, I think filtering by default is great since my time isn't wasted looking at games I can't install and play.
You laugh, but rumors are she was involved in a plot to keep him there.
Be careful with disassembling smoke detectors. First, americium is extremely toxic, like polonium. Second, the amount of americium-241 contained within may exceed the licensing exempt quantity when removed from a smoke detector, depending on your jurisdiction.
It really depends on scale. If you run a small site, one that gets less than 10 million hits a month or so, you're fine on a run-of-the-mill CMS like WordPress. That said, many frameworks will fall over at much less load due to poor design decisions.
It gets interesting when your concurrency goes higher. Things like the ORMs baked into many frameworks break down, and if your site is interactive, effective caching is much harder to add.
Over the last six years I designed a Linux-Nginx-MySQL-PHP stack that currently serves over 2 billion requests per month. Over 98% of requests are served entirely from cache, and every request gets a live view (no reverse caching proxy or the like). That's possible because I designed and essentially scratch-built a framework that caches intelligently, in a way that's just not possible with any ORM-based framework I've seen. The front end, which I did not build, is mostly JS and uses tools like jQuery, Angular, Less, and Grunt.
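For illustration, here's a minimal cache-aside sketch (in Python, though the real stack is PHP; all names here are hypothetical). The key point is that write paths invalidate known keys explicitly, rather than an ORM guessing about staleness:

```python
import time

_cache = {}  # stand-in for memcached/Redis; names are illustrative only

def cached(key, ttl, compute):
    """Return the cached value for key, recomputing only on miss or expiry."""
    entry = _cache.get(key)
    now = time.monotonic()
    if entry is not None and entry[1] > now:
        return entry[0]
    value = compute()
    _cache[key] = (value, now + ttl)
    return value

def invalidate(key):
    """Called from write paths that are known to touch this key."""
    _cache.pop(key, None)

# usage: serve an article from cache, recomputing at most every 5 minutes
article = cached("article:42", ttl=300, compute=lambda: {"id": 42, "title": "Hi"})
```

With the invalidation wired into the write paths, almost every read is a dictionary lookup, which is how a high cache-hit rate stays compatible with live views.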
We're starting to see mild growing pains (though we could still handle ten times our current traffic) and are migrating to a Cassandra/Kafka/Storm/Java stack to take things multiple orders of magnitude higher and make everything real-time. There simply aren't any off-the-shelf frameworks at that scale, but projects like Cassandra, Kafka, and Storm do a lot of the hard work and can be glued together with your own libraries.
It doesn't take a huge team to do it, either, if you're smart. We're a dozen people on the tech side, including design, front end, back end, ops, QA, and management.
Routers are probably the first thing you'd want to switch. I don't use FreeBSD myself, but it offers zero-copy networking for insanely fast routing, which Linux lacks.
I also have a fan on all the time. I'm simply uncomfortable in temperatures above 21C.
I'm comfortable in pants and a t-shirt down to 0C if it's not too windy, or -15C if it's calm and sunny and I'm moving around. Below those conditions I'll wear a jacket and perhaps gloves. Only once it gets to -25C do I get out the winter gear and start layering. I don't need it until then.
I have 175 Mbps symmetric at home, and it's good enough for my purposes at the moment. Having reasonable upload bandwidth like that with 3 ms ping to the office is useful for exporting X apps to my work desktop (yes, I do that). It's nearly as fast as a local app to the point where I could forget it's remote.
The decent upload is also really handy for remote backups. I have iSCSI targets in distant locations that I simply mount and use like a local file system. iSCSI without reasonable upload capacity or low latency is a frustrating experience.
I used to have 50 Mbps symmetric, and it was okay, but I did find myself waiting on things. I wait less now with 175 Mbps, but I'm also still throttling backup speed.
Most websites I visit don't fully utilize the bandwidth because of TCP ramp-up time; downloads generally finish before maximum speed is reached. Well-designed services like Mega will easily saturate my connection, though.
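A rough way to see why: under classic TCP slow start the congestion window doubles every round trip, so a small transfer can finish before the window ever covers the bandwidth-delay product. A toy model (the 30 ms RTT and initial window are assumptions for illustration, not measurements):

```python
# Simplified TCP slow-start model: the congestion window starts around
# 10 segments (per RFC 6928) and doubles each round trip until it covers
# the bandwidth-delay product. Ignores loss, pacing, and receive windows.

LINK_BPS = 175_000_000   # my 175 Mbps link
RTT_S = 0.030            # assumed 30 ms round trip
MSS = 1460               # typical max segment size in bytes

bdp_bytes = LINK_BPS / 8 * RTT_S   # bytes that must be in flight at full speed
cwnd = 10 * MSS                    # common initial congestion window
sent = 0
rtts = 0
while cwnd < bdp_bytes:
    sent += cwnd
    cwnd *= 2
    rtts += 1

print(f"~{rtts} RTTs and ~{sent / 1e6:.1f} MB transferred before full speed")
```

Under these assumptions it takes about six round trips and nearly a megabyte of transfer before the link is saturated, which is why typical web page assets never get there.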
With a 1 Gbps connection at the office I've seen download speeds up to 80 MB/s from a local free software mirror. It's handy to download a new distro ISO in 15 seconds. It really changes your perspective on what data is worth keeping locally.
If I were regularly downloading and uploading multi-gigabyte files, such as backing up video, 10 Gbps would be very useful! If online storage prices keep dropping it will be very tempting to keep everything in the cloud. Right now the cheapest storage VPS providers are around $20 per TB per month.
But the key point is not so much increasing download bandwidth beyond 1 Gbps, but increasing upload bandwidth to match.
A 50" 1080p TV has a dot pitch of approximately 0.58 mm. That's huge.
My 27" 1440p monitor has a dot pitch of 0.23 mm. I can clearly see pixel jaggies from 2' away. It can't render fonts smaller than 8 px without collapsing the whitespace within and between letters, and I can clearly read an 8 px font on that display from 7' away.
The pixels on the 50" TV would be discernible at 5'. I'd have to be 18' away before I couldn't read an 8 px font on it, and detail remains discernible from about three times that distance, so 4k would be an improvement over 1080p on a 50" TV anywhere closer than about 50' away. People who disagree might have less than 20/10 vision (20/10 is actually common).
For desktop work, where I'm usually about 24-30" away, 8k in a 30" format (~294 ppi) would be really nice. I have a feeling I'll be waiting a while for that though.
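For anyone who wants to verify these numbers, dot pitch falls straight out of the diagonal size and resolution (assuming square pixels); a quick sketch:

```python
import math

def dot_pitch_mm(diagonal_in, h_px, v_px):
    """Dot pitch in mm for a display of the given diagonal (inches)
    and resolution, assuming square pixels."""
    diag_px = math.hypot(h_px, v_px)      # diagonal length in pixels
    return diagonal_in / diag_px * 25.4   # inches per pixel -> mm

print(f'50" 1080p: {dot_pitch_mm(50, 1920, 1080):.2f} mm')  # ~0.58 mm
print(f'27" 1440p: {dot_pitch_mm(27, 2560, 1440):.2f} mm')  # ~0.23 mm
ppi = 25.4 / dot_pitch_mm(30, 7680, 4320)
print(f'30" 8k: {ppi:.0f} ppi')                             # ~294 ppi
```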
Not radioactive wolves, but rabid wolves. Probably the biggest danger in the zone along with decaying/collapsing buildings.