We have a pretty good idea; trees are still trees, and most paper today is acid-free (unlike the paper made from the late 1800s up until about 1980), so it doesn't degrade too badly. But if you're really concerned, buy certified 100% cotton, acid-free paper.
At more than 8 cents per gigabyte, archival DVDs are horribly expensive. You could cycle your backups across three hard drives for about the same amount of money, and then you have three backups instead of one.
Not to mention... have you ever tried backing up your 4 TB hard drive onto a spindle of 1,000 DVDs? Have you ever seen a spindle of 1,000 DVDs? It's slightly taller than an average person. Yes, if you don't have much data, you can do what you're proposing, but....
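To put rough numbers on that, here's a back-of-the-envelope sketch using the 8-cents-per-gigabyte figure above; the hard drive price is an assumed, illustrative value, not a quote:

```python
import math

# Figures from the discussion above; HDD_COST is an assumption.
DRIVE_SIZE_GB = 4000             # the 4 TB drive in the example
DVD_CAPACITY_GB = 4.7            # single-layer DVD
ARCHIVAL_DVD_COST_PER_GB = 0.08  # "more than 8 cents per gigabyte"
HDD_COST = 110                   # assumed price of one 4 TB drive (illustrative)

discs_needed = math.ceil(DRIVE_SIZE_GB / DVD_CAPACITY_GB)
dvd_cost = DRIVE_SIZE_GB * ARCHIVAL_DVD_COST_PER_GB
hdd_cost = 3 * HDD_COST          # three backup drives in rotation

print(f"DVDs needed: {discs_needed}")          # roughly a 1,000-disc spindle
print(f"Archival DVD cost: ${dvd_cost:.0f}")
print(f"Three hard drives: ${hdd_cost}")
```

Even with generous assumptions about drive prices, the spindle of discs costs about as much as three full drives, which is the point being made here.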
Hard drives are really the only viable backup medium unless you have a big enough collection of data for tape drives to make sense—maybe Blu-Ray, but only if you don't have more than about a 100-disc spindle worth of data (2.5 or 5 TB) to back up (and really, most people lose interest at more like ten or fifteen discs).
I think the point was that after you clone your backup drive to a new one, you can reuse the drive to replace or expand your main system drive, whereas once you burn an optical disc, "reburning" means throwing away the old plastic (or keeping an extra copy around). This effectively makes optical media a lot more expensive than magnetic media.
Admittedly, Roswell barely qualifies as 1990s, because it began in 1999, but it was one of the better sci-fi shows I've seen. Among other things, it turned the genre on its head by being told from the perspective of aliens, in the present day, on Earth. It had a lot of things going against it, of course, with network politics being the big one, and season two strayed awfully far into X-Files territory, but it had good writing, good acting, and much like Stargate, it didn't take itself too seriously, somehow managing just the right blend of humor, romance, dramatic tension, etc. And in spite of the main characters being teenagers, it managed to almost entirely avoid the usual teen drama that you'd expect to clog up such a series.
My favorite funny moment had to be when Jonathan Frakes (playing himself) told one of the alien teenagers that he just didn't make a believable alien. And my favorite episode was the Christmas special; it was almost pure character development, did nothing to drive the plot, but it was a breathtaking tear-jerker that gave a lot of insight into the main characters' personalities.
If you haven't seen Roswell, it's worth a look.
Your heart is true, you're a pal and a cosmonaut.
For once, this troll is on topic. It demonstrates why voice recognition needs to improve. After all, it's hard to wreck a nice beach.
Correction: Even the China Mobile iPhone 6 and 6 Plus aren't truly carrier-neutral, because they don't support CDMA. So you can either have LTE support in China or you can have CDMA support in the U.S., but not both.
The iPhone 5 had LTE. And it was not carrier-neutral. Each came in multiple models, none of which supported all the LTE bands. AFAIK, even the current iPhone 6 and 6 Plus are not fully carrier-neutral unless you buy the model designed for China Mobile.
But do realize, that was an outlier and is atypical of what Apple does.
No, it isn't atypical, at least for early-generation Apple products. The average support period for Apple is about three years, and there are a fair number of products that got less than that (mostly early models). For example, here's the time between the release date and last supported update of some other first-generation and second-generation Apple iOS devices:
- Original Apple TV: 3 years, 1 month, and 1 day
- Original iPhone: 2 years, 7 months, and 4 days
- iPhone 3G: 2 years, 4 months, and 11 days
The support period tends to vary based in part on how many of the devices are out there in active use, and in part on how badly underpowered the hardware was to begin with. So later products in a given line are likely to have longer support periods than earlier products.
Actually, the telcos in Europe are preparing to roll out G.fast, which makes telcos competitive with cable again.
Not really. We hit the bandwidth limits of a single twisted pair a long time ago. For G.fast to be usable, the phone company has to replace your phone line with fiber to within just a few hundred feet of your home. For it to reach maximum speeds, you need fiber within just 230 feet. In effect, this means that if the phone company replaces all of their copper with fiber, G.fast lets them skip the cost of running the fiber from the pole outside your house into your house, for now. That's about it.
If your community has no fiber, G.fast won't even connect unless you're within BB gun range of your central office or DSL-capable remote terminal.
It wasn't until I read more that I realised this was about DNA alone. I have no doubt that others did the same but didn't bother going deeper into it.
Including the people taking the survey, I suspect.
They're well defined now. AFAIK, they were nonstandard when initially proposed. Every time someone wants to deviate from accepted standards, there should be a darn good reason why, and I'm just not seeing any reasonable justification for creating a whole separate transport-layer protocol for something that basically behaves like a normal, connected stream.
And it isn't just explicit blocking that's a problem. Firewalls and NAT often make life miserable for users even when those firewalls aren't trying to block the VPNs. That's why as far as I'm concerned, if you're passing traffic, you should use TCP if you need the data to be robust and reliable, UDP if delayed delivery would make the data worthless, and ICMP for the usual network management purposes. IMO, everything else is anathema.
My point was that there was no valid reason for each of these VPNs to use its own transport-layer protocol. A normal, connected TCP socket would have done the job just as easily. Every time someone strays from the expectation that all packets are either TCP, UDP, or ICMP, it means every hardware-based firewall maker (and every software-based firewall IT person) has to do extra work to deal with it, and hardware that worked before suddenly doesn't work or (if you're lucky) requires firmware updates. The fact that using a different protocol makes it easier to block is just another in a long list of reasons why the proliferation of transport-layer protocols is a bad idea.
Okay, fair enough. I usually lump firewalls and routers in the same bucket, because outside of backbone hardware, most routers also act as firewalls. The point is that a lot of (badly designed) consumer routers (firewalls) do stupid things like routing only TCP and UDP, or treating those other protocols as "special" under the assumption that VPNs will always be used from the inside out, never from the outside in, resulting in all sorts of fun.
It doesn't help that most VPNs are so easy to detect and block at the IP header level. PPTP depends on the GRE IP protocol (47), and L2TP is usually tunneled over IPSec, which depends on the ESP IP protocol (50). By using different protocol numbers in the IP headers, the designers of these protocols made it mindlessly easy to block them, and made them harder to support, because routers have to explicitly know how to handle those nonstandard protocol numbers.
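To see just how easy that blocking is, here's a minimal sketch: the protocol number sits at a fixed offset in the IPv4 header, so a filter can classify or drop GRE and ESP traffic without any deep inspection at all. The protocol numbers are the standard IANA assignments; the packet bytes are fabricated for illustration.

```python
# IANA-assigned IP protocol numbers relevant to this discussion.
PROTOCOLS = {1: "ICMP", 6: "TCP", 17: "UDP", 47: "GRE", 50: "ESP"}

def classify(ipv4_packet: bytes) -> str:
    # Byte 9 (zero-indexed) of the IPv4 header is the protocol field.
    proto = ipv4_packet[9]
    return PROTOCOLS.get(proto, f"other ({proto})")

def naive_firewall_allows(ipv4_packet: bytes) -> bool:
    # The "badly designed consumer router" policy described above:
    # pass only ICMP, TCP, and UDP; silently drop everything else.
    return ipv4_packet[9] in (1, 6, 17)

# A fake 20-byte IPv4 header with protocol = 47 (GRE, PPTP's carrier):
gre_header = bytes([0x45, 0, 0, 20, 0, 0, 0, 0, 64, 47]) + bytes(10)
print(classify(gre_header))               # GRE
print(naive_firewall_allows(gre_header))  # False
```

One byte comparison is all it takes, which is exactly why tunneling over plain TCP or UDP would have made these VPNs both easier to support and harder to single out.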
Nah, it's more like whining that Chryslers should be able to burn the same 87 octane gas as Fords without having to buy overpriced filler necks on license from GM. Or that GE lightbulbs should be allowed to work on ConEd electricity. Standards exist for a reason. Letting monopolists enforce their own whims without accommodating the competition is bad for everyone in the long run. Ask John D. Rockefeller what happened to Standard Oil in the courts.
On the one hand, yes, on the other hand, no. Standards can only go so far. Suppose you design a laptop that has an innovative power storage system that can power it for a week, but in order to get the energy density high enough, you had to run the battery packs at 48VDC. Could you design it to be compatible with an existing 12–18V power supply? Sure. Would it be energy efficient? No.
The same goes for software. If you're designing a new OS, you could ostensibly add the necessary hooks to let it run Android apps, but your OS probably won't run them as efficiently, and you'd prefer folks to develop apps for your own native APIs anyway, because that results in a better, more consistent user experience.