Nonsense. One order of magnitude more, at most. On-line storage costs are on the order of $100 per TB per year.
I was going by my experience with AWS, which is about $30 per TB per month for spinning storage, or $360 per TB per year. An 8 TB hard drive should typically last you about five years, and costs about $250, or about $6.25 per terabyte per year. That isn't quite two orders of magnitude, but it's pretty close. Of course, if you're willing to wait several hours to start getting your data back, you can use Glacier storage, and that's cheaper, but there are tradeoffs.
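Spelling out that arithmetic (using the prices above, which are snapshots from the discussion, not current rates):

```python
# Cloud vs. local storage cost per TB-year, using the figures quoted above.
aws_per_tb_month = 30.0                 # spinning storage on AWS, $/TB/month
aws_per_tb_year = aws_per_tb_month * 12 # $360/TB/year

drive_cost = 250.0                      # one 8 TB hard drive
drive_tb = 8
drive_life_years = 5
local_per_tb_year = drive_cost / drive_tb / drive_life_years  # $6.25/TB/year

ratio = aws_per_tb_year / local_per_tb_year
print(f"cloud: ${aws_per_tb_year}/TB/yr, local: ${local_per_tb_year}/TB/yr, "
      f"ratio: {ratio:.1f}x")  # ~58x: just shy of two orders of magnitude
```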
Upload time sucks, but only for the initial upload, which I did two years ago. After that, incremental additions are pretty negligible.
Must be nice. I backed up over 12 GB Sunday night, and that was only one week's worth of incremental backups for my personal laptop. Over my DSL connection (soon to be retired), that would have taken two days. It would take several hours even over my new cable modem service. It took five minutes to back up locally. That time difference makes the difference between me being willing to back up regularly and never backing up.
Obviously, YMMV, but I would imagine that somebody with multiple terabytes of personal data is probably either a photographer or videographer, and therefore has the same sorts of nightmare backups that I do. But I'm just guessing here. For all I know, it could be a porn collection.
Online backup is cheap. Most start at ~$60 a year for unlimited backup.
I'm having a hard time believing that $5 per month is even possible for anything approaching truly unlimited storage. Just storing 2 TB on Amazon Glacier storage would cost three times that much. I assume they count on most of their users treating unlimited as tens of gigabytes. If everybody were storing 2 TB, I'd expect those numbers to go way, way up.
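For what it's worth, here's the Glacier comparison spelled out. The per-GB price is my assumption based on Glacier's pricing around that time (roughly $0.007/GB/month); current prices are different:

```python
# Sanity check on the Glacier comparison.
# glacier_per_gb_month is an assumed period price, not a current one.
glacier_per_gb_month = 0.007
tb_stored = 2

monthly_cost = tb_stored * 1000 * glacier_per_gb_month  # ~$14/month
unlimited_plan = 5.0                                    # the ~$60/year plan
print(f"Glacier: ${monthly_cost:.2f}/mo, "
      f"{monthly_cost / unlimited_plan:.1f}x the 'unlimited' plan")
```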
But even if you assume that $5 is your total cost from the cloud provider, that still isn't your total cost. After all, time has value, plus your internet connection costs money. Backing up 2 TB over a typical home Internet connection can take anywhere from many days up to years, which means if your storage needs are that large, you're going to want a faster Internet connection or you'll lose your mind. Tack on another $30 a month for that.
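Back-of-envelope on why that initial upload hurts. The upstream speeds here are illustrative assumptions, not figures from anyone's actual connection:

```python
# Time to push 2 TB through a few representative upstream speeds.
# The speed tiers are illustrative assumptions.
data_bits = 2 * 10**12 * 8  # 2 TB (decimal) in bits

days = {}
for name, mbps in [("1 Mbps DSL", 1), ("10 Mbps cable", 10), ("100 Mbps fiber", 100)]:
    seconds = data_bits / (mbps * 10**6)
    days[name] = seconds / 86400
    print(f"{name}: {days[name]:.1f} days")
```

Even at 10 Mbps up, that's nearly three weeks of saturating your uplink.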
In addition, storing your backup in the same location as your main copy is not smart, even if it is in a bunker or fire proof safe.
Hence my suggestion of periodically cloning your RAID and keeping the clone at work.
The problem with cloud-based solutions is that the cost for backing up several terabytes of data is typically several orders of magnitude higher than building your own RAID array, and the performance of Internet-based backup absolutely sucks beyond measure unless you're the sort of person whose data needs are measured in tens of megabytes.
The ideal solution, if you can pull it off, would be to build a small concrete bunker in your yard, run power out to it, put a UPS and power conditioner in there to protect against bad power, put a RAID array in there, wire it with Ethernet to your house underground, put a watertight door on the thing, add a power cutoff that shuts down power if water does get inside (e.g. a GFI breaker and an unused extension cord whose output end is lower than your equipment), and hope for the best.
But more realistically, I would tend to suggest an IOSafe fireproof RAID array loaded up with five 6 TB drives (or maybe even 8 TB drives). Put it in a closet somewhere, and hope for the best. If you want to increase your protection a bit, you could also get two RAID expansion cabinets, store them at work, and periodically bring one home, clone your main RAID array to it, and bring it back.
Actually, try #3. That's the only term that is generic enough to encompass both the individual recording artists (regardless of the degree of artistry) and the record companies that represent them. I'm talking collectively about everyone involved in the process of bringing that content to market who might plausibly be involved in the decision-making process.
You have n(a) Android devices and n(i) iOS devices, with k(a) and k(i) failures respectively. k(i) divided by n(i) gives you 58%.
No, that's what failure rate is supposed to mean. However, what the numbers actually said are:
These two statements cannot both be true simultaneously by any proper definition of "failure rate". The iPhone 6 is a subset of all iOS devices. The claim is made that its failure rate was 29%. For the failure rate of all iOS devices to be 58%, that would mean that at least one iOS device must have a failure rate greater than 58% to pull the average up from 29% to 58%, which contradicts the statement that the iPhone 6 had the highest failure rate at 29%.
The only way you could even halfway make those numbers plausible would be if you erroneously divided the iPhone numbers by either the total number of iOS devices or worse, the total number of devices. Either of those approaches makes the numbers meaningless because you don't know the relationship between... to use your terminology... k(i) and n(i) at that point.
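The weighted-average point can be checked in a few lines of Python. The per-model device counts below are made up; the only constraint is that every per-model rate is at or below the claimed 29% worst case:

```python
# If every per-model iOS failure rate is at most 29%, the pooled iOS
# failure rate is a weighted average of those rates and can never
# reach 58%. Device counts are invented for illustration.
models = {                      # model: (n devices in use, k failed)
    "iPhone 6":  (1000, 290),   # 29% -- the claimed worst model
    "iPhone 6s": (800,  184),   # 23%
    "other iOS": (1200, 168),   # 14%
}
total_n = sum(n for n, k in models.values())
total_k = sum(k for n, k in models.values())
pooled = total_k / total_n
worst = max(k / n for n, k in models.values())
assert pooled <= worst          # the pooled rate can't exceed the worst model
print(f"pooled rate: {pooled:.1%}, worst model: {worst:.1%}")
```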
In your ramblings, you fail to consider that the vast majority of people who want to avoid expensive shipping charges will often bring their unit into a store... which eliminates many of the simpler problems.
The vast majority of people who want to avoid expensive shipping charges will Google the problem and find an answer themselves. People go to a store when that fails.
I don't really understand how this benefits Spotify as it doesn't improve the service in any way that I can see, and such a move likely makes it worse for users for petty business reasons that have nothing to do with the users.
In the short term, the only negative impact would be if the songs they're demoting are extremely popular and if the public perceives their absence as a loss in quality. Given the size of the musical corpus these days, that seems unlikely.
In the long term, this serves notice to content creators that there's no such thing as a free lunch. Normally, those content creators would have to balance the cost of exclusivity (fewer plays on those exclusive songs) against the benefits (presumably dramatically improved promotion and possibly a higher royalty per click). With this policy in place, those content creators have to factor in the loss of the vast majority of their income from the other providers—not just on new content, but also on old content. That significantly changes the balance in a way that discourages these exclusive deals.
And that's a good thing. Vendor exclusivity is inherently anti-consumer.
Under 32 hours and the law would say no benefits are required.
That's not true. You're required to pay for health insurance for anyone working 30 hours or more. Similarly, you're not allowed to exclude from your 401(k) plan any employee working more than 1,000 hours per year (a little over 19 hours per week).
They could cut the number of sick days or vacation days offered, but that's probably roughly the maximum extent to which they could reduce benefits other than salary.
No, it doesn't. 30 hours per week is considered full time for ACA purposes. It is, in fact, the lowest hour count at which the requirement kicks in, so if they hired someone to work 29 hours, you'd be right.
The percentages are percentages of the 58% of failing devices. Of the devices that failed, 29% were iPhone 6, 23% were the 6s, and 14% were the 6s Plus. Add those together and we're missing the final 34% of failed devices, but it's safe to assume that a random collection of 6 Plus, 5SE, 5s, 5c, etc. makes up that final 34% of the 58%.
So let me see if I understand this epic math fail correctly. Given n devices, there were k devices that were brought in for repair. Of those k devices, 58% were iOS devices, and of those 58%, 29% were iPhone 6 devices.
Which tells us absolutely nothing about the actual failure rate without knowing how the makeup of those n devices relates to the makeup of those k devices. It tells us nothing about the actual failure rate without knowing what percentage of each model within k were junked and replaced without notifying the service center in question. It tells us nothing about whether the Android and iOS users have similar levels of self-sufficiency in terms of figuring out how to solve their own problems. And there are probably at least three or four other fairly fundamental errors that make this data essentially pure noise.
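To make the share-of-repairs vs. failure-rate distinction concrete, here's a toy example with entirely invented numbers:

```python
# "Share of repairs" vs. "failure rate" -- invented numbers.
# Suppose one service center sees 100 broken devices, and iPhone 6
# units account for 29 of them (29% of repairs). Without knowing the
# installed base n, that tells you nothing about the failure rate.
n_iphone6_in_use = 50_000       # hypothetical installed base (unknown in the real data!)
k_iphone6_repaired = 29         # repairs seen at this one shop
total_repairs = 100

share_of_repairs = k_iphone6_repaired / total_repairs        # what the report measured
failure_rate = k_iphone6_repaired / n_iphone6_in_use         # what it says nothing about
print(f"{share_of_repairs:.0%} of repairs, "
      f"but only a {failure_rate:.3%} failure rate")
```

Same k, wildly different conclusions depending on n—which is exactly the missing denominator.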
Arguments over minor methodology points, such as whether to count specific types of failures in the reliability numbers, are basically moot, because the "data" is purely anecdotal and is not mathematically related to the actual rate of failure to begin with. This isn't statistically any better than saying, "Of my friends, more people have had problems with Android phones than iOS phones" or vice versa. If you know nothing about whether the sample population has similar distribution to the general population and you know nothing about whether the data is even an accurate measurement of the sample population itself, then these numbers are quite literally no better than a random number generator with a Gaussian distribution. You might as well arrive at the results by throwing darts at a dartboard. It will be approximately as meaningful.
Am I missing something?
Trust me, if even 1% of iPhone hardware failed during its warranty period, heads would roll, much less 58%.
As a 6D user, in my experience, the Wi-Fi is really nice if you're part of a group trip. You can have your cell phone out, and once in a while when there's a pause, you can snag a photo off your real camera and upload it to Facebook so that the folks back home can see what you're all doing. It's much easier than trying to take photos with two devices at once, because the extra time spent fiddling with your phone is while you're on a bus riding somewhere or whatever instead of while you're out sightseeing on a schedule.
It is also occasionally useful if you don't have (or forgot to bring) a remote-controlled trigger release. You can use it to see what the camera sees (in live view/EVF mode) and tell it to take photos, albeit with a lot of shutter lag. With the dual-pixel AF in the 5D Mark IV, it should be even better because you'll have actual phase-detection autofocus with continuous focusing in live view mode instead of contrast-detection AF.
With that said, it might be worth clarifying that at high MP counts, a hybrid system might be preferable, using sensor-based stabilization for fine correction after your finger hits the button.
Canon puts the stabilization in the lens for a good reason. Sensor-based stabilization is only useful on point-and-shoot cameras or mirrorless cameras with electronic viewfinders. As soon as you have an optical path to your eye, sensor-based stabilization is worthless, because it won't help you frame the shot. By contrast, lens-based stabilization locks the image in place so that you can actually see what you're taking a picture of.
This makes a huge difference even at 300mm. By 600mm, you'd be hard pressed to ever get a shot of anything without lens-based optical stabilization.
Except that it doesn't, because there are no gaps. The 5D Mark IV sensor uses a gapless microlens array. There are no boundaries between the pixels, period. All light that hits the sensor's surface goes into the sensor except for any that gets reflected when it hits the surface.
True, but the image from an equivalent sensor with larger photosites will always have less thermal noise.
Realistically, thermal noise is almost irrelevant except for long-exposure photography (e.g. astrophotography). For normal photographic purposes, it's the shot noise that kills you in low light. When the difference between one and zero photons makes a visually noticeable difference in the resulting value, individual pixels are going to have noticeably different values than the pixels next to them even when they're getting approximately the same amount of light, because a pixel either gets the photon or it doesn't.
But that shot noise basically goes away when you downsample. If you double the number of pixels, a "pixel crop" (one pixel on the individual photo to one pixel on your screen) will give you more noise on the one with smaller sites, but it will also be looking at a much smaller area. If you crop them to cover the same area and average the signals, you'll find that the same number of photons hit both sensors and were detected, so the result is approximately the same, with the exception of the small amount of loss caused by the wiring around the pixels. And by the time that starts to become significant, you're roughly at cell phone pixel densities, and you're either doing back-side illumination, microlens arrays, or both to get rid of that problem.
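You can watch this happen with a quick simulation (NumPy; the photon counts are invented, and shot noise is modeled as Poisson arrivals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same patch of scene: a mean of 400 photons per "big" pixel. The
# high-resolution sensor splits each big pixel into 4 small ones, so
# each small pixel sees a mean of 100 photons. Shot noise is Poisson.
big = rng.poisson(400, size=100_000)
small = rng.poisson(100, size=(100_000, 4))

# Per-pixel relative noise is 1/sqrt(N), so small pixels are ~2x noisier...
print(f"big pixels:    {big.std() / big.mean():.3f}")     # ~0.050
print(f"small pixels:  {small.std() / small.mean():.3f}") # ~0.100

# ...but summing the four small pixels over the same area recovers the
# big pixel's statistics, because the same photons were collected.
binned = small.sum(axis=1)
print(f"binned 4-to-1: {binned.std() / binned.mean():.3f}")  # ~0.050 again
```

This sketch ignores read noise and the wiring losses mentioned above; it only demonstrates the shot-noise part of the argument.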
"The pyramid is opening!" "Which one?" "The one with the ever-widening hole in it!" -- The Firesign Theatre