Comment Roswell (Score 1) 480

Admittedly, Roswell barely qualifies as 1990s, because it began in 1999, but it was one of the better sci-fi shows I've seen. Among other things, it turned the genre on its head by being told from the perspective of aliens, in the present day, on Earth. It had a lot of things going against it, of course, with network politics being the big one, and season two strayed awfully far into X-Files territory, but it had good writing, good acting, and much like Stargate, it didn't take itself too seriously, somehow managing just the right blend of humor, romance, dramatic tension, etc. And in spite of the main characters being teenagers, it managed to almost entirely avoid the usual teen drama that you'd expect to clog up such a series.

My favorite funny moment had to be when Jonathan Frakes (playing himself) told one of the alien teenagers that he just didn't make a believable alien. And my favorite episode was the Christmas special; it was almost pure character development, did nothing to drive the plot, but it was a breathtaking tear-jerker that gave a lot of insight into the main characters' personalities.

If you haven't seen Roswell, it's worth a look.

Comment Re:The solution is obvious (Score 1) 579

But do realize, that was an outlier and is atypical of what Apple does.

No, it isn't atypical, at least for early-generation Apple products. The average support period for Apple is about three years, and there are a fair number of products that got less than that (mostly early models). For example, here's the time between the release date and last supported update of some other first-generation and second-generation Apple iOS devices:

  • Original Apple TV: 3 years, 1 month, and 1 day
  • Original iPhone: 2 years, 7 months, and 4 days
  • iPhone 3G: 2 years, 4 months, and 11 days

The support period tends to vary based in part on how many of the devices are out there in active use, and in part on how badly underpowered the hardware was to begin with. So later products in a given line are likely to have longer support periods than earlier products.

Comment Re:life in the U.S. (Score 1) 255

Actually, the telcos in Europe are preparing to roll out G.fast, which makes telcos competitive with cable again.

Not really. We hit the bandwidth limits of a single twisted pair a long time ago. For G.fast to be usable, the phone company has to replace your phone line with fiber to within just a few hundred feet of your home. For it to reach maximum speeds, you need fiber within just 230 feet. In effect, this means that if the phone company replaces all of their copper with fiber, G.fast lets them skip the cost of running the fiber from the pole outside your house into your house, for now. That's about it.

If your community has no fiber, G.fast won't even connect unless you're within BB gun range of your central office or DSL-capable remote terminal.

Comment Re:Defective by design. (Score 1) 222

They're well defined now. AFAIK, they were nonstandard when initially proposed. Every time someone wants to deviate from accepted standards, there should be a darn good reason why, and I'm just not seeing any reasonable justification for creating a whole separate transport-layer protocol for something that basically behaves like a normal, connected stream.

And it isn't just explicit blocking that's a problem. Firewalls and NAT often make life miserable for users even when those firewalls aren't trying to block the VPNs. That's why as far as I'm concerned, if you're passing traffic, you should use TCP if you need the data to be robust and reliable, UDP if delayed delivery would make the data worthless, and ICMP for the usual network management purposes. IMO, everything else is anathema. :-)
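The rule of thumb above can be sketched as a trivial classifier; this is a hypothetical helper (the names `traffic_kind` and `pick_ip_protocol` are mine, not from the post), using the standard `IPPROTO_*` constants from `<netinet/in.h>`:

```c
#include <netinet/in.h>  /* IPPROTO_TCP, IPPROTO_UDP, IPPROTO_ICMP */

/* Hypothetical sketch of the rule of thumb above:
   reliable data -> TCP, latency-critical data -> UDP,
   network management -> ICMP. */
enum traffic_kind { TRAFFIC_RELIABLE, TRAFFIC_REALTIME, TRAFFIC_MGMT };

int pick_ip_protocol(enum traffic_kind kind) {
    switch (kind) {
    case TRAFFIC_RELIABLE: return IPPROTO_TCP;   /* protocol number 6  */
    case TRAFFIC_REALTIME: return IPPROTO_UDP;   /* protocol number 17 */
    case TRAFFIC_MGMT:     return IPPROTO_ICMP;  /* protocol number 1  */
    }
    return -1;
}
```

Any traffic that fits one of those three buckets sails through essentially every firewall and NAT box ever made; anything else is at the mercy of whoever wrote the middlebox firmware.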

Comment Re:Defective by design. (Score 1) 222

My point was that there was no valid reason for each of these VPNs to use its own transport-layer protocol. A normal, connected TCP socket would have done the job just as easily. Every time someone strays from the expectation that all packets are either TCP, UDP, or ICMP, it means every hardware-based firewall maker (and every software-based firewall IT person) has to do extra work to deal with it, and hardware that worked before suddenly doesn't work or (if you're lucky) requires firmware updates. The fact that using a different protocol makes it easier to block is just another in a long list of reasons why the proliferation of transport-layer protocols is a bad idea.

Comment Re:Defective by design. (Score 1) 222

Okay, fair enough. I usually lump firewalls and routers in the same bucket, because outside of backbone hardware, most routers also act as firewalls. The point is that a lot of (badly designed) consumer routers (firewalls) do stupid things like routing only TCP and UDP, or treating those other protocols as "special" under the assumption that VPNs will always be used from the inside out, never from the outside in, resulting in all sorts of fun.

Comment Defective by design. (Score 4, Informative) 222

It doesn't help that most VPNs are so easy to detect and block at the IP header level. PPTP depends on the GRE IP protocol (47), and L2TP is usually tunneled over IPSec, which depends on the ESP IP protocol (50). By using different protocol numbers in the IP headers, the designers of these protocols made it mindlessly easy to block them, and made them harder to support, because routers have to explicitly know how to handle those nonstandard protocol numbers.
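To see how trivial the blocking is: the protocol number sits at byte offset 9 of the IPv4 header, so a single byte compare is enough. Here's a minimal sketch (the function names are hypothetical, not from any real firewall codebase); 47 and 50 are the IANA-assigned numbers for GRE and ESP mentioned above:

```c
#include <stdint.h>

/* IANA-assigned IPv4 protocol numbers for the VPN carriers above. */
#define PROTO_GRE 47  /* used by PPTP */
#define PROTO_ESP 50  /* used by IPSec, and thus typically L2TP */

/* The protocol field is byte 9 of the IPv4 header. */
static uint8_t ipv4_protocol(const uint8_t *ip_header) {
    return ip_header[9];
}

/* Hypothetical firewall check: one byte compare identifies the VPN. */
static int is_easily_blocked_vpn(const uint8_t *ip_header) {
    uint8_t proto = ipv4_protocol(ip_header);
    return proto == PROTO_GRE || proto == PROTO_ESP;
}
```

A censor doesn't need deep packet inspection for this; any router that can read a header can drop the traffic.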

Comment Re:Please develop for my dying platform! (Score 1) 307

Nah, it's more like whining that Chryslers should be able to burn the same 87 octane gas as Fords without having to buy overpriced filler necks on license from GM. Or that GE lightbulbs should be allowed to work on ConEd electricity. Standards exist for a reason. Letting monopolists enforce their own whims without accommodating the competition is bad for everyone in the long run. Ask JP Morgan what happened to Standard Oil in the courts.

On the one hand, yes, on the other hand, no. Standards can only go so far. Suppose you design a laptop that has an innovative power storage system that can power it for a week, but in order to get the energy density high enough, you had to run the battery packs at 48VDC. Could you design it to be compatible with an existing 12–18V power supply? Sure. Would it be energy efficient? No.

The same goes for software. If you're designing a new OS, you could ostensibly add the necessary hooks to let it run Android apps, but your OS probably won't run them as efficiently, and you'd prefer folks to develop apps for your own native APIs anyway, because that results in a better, more consistent user experience.

Comment Re:Please develop for my dying platform! (Score 1) 307

There is no fundamental difference other than the webpages are standardized and the interface between apps and the OS is not standardized. They are fundamentally the same -- apps can be converted to websites and vice versa.

There is no fundamental difference between ice and steam other than the temperature. I don't recommend trying to walk on steam or clean your carpets with ice.

The reality is that the layout system and DOM programming interfaces available for web programming are positively primitive compared with app programming. (I'm deliberately ignoring WebGL for the moment, which though powerful, is low-level enough that it isn't practical except for games, and still isn't broadly available.) And networking is even more limited (same-origin restrictions) without cooperation from every destination site.

So in theory, yes, but in practice, not even close. And the fact that even relatively straightforward stuff like HTML editing isn't fully standardized (or, frankly, fully working) across major browsers should give you serious pause when considering standardizing anything as complex as a full-blown collection of application APIs across multiple platforms.

Comment Re:Please develop for my dying platform! (Score 1) 307

OS companies go to great lengths to create system APIs that are incompatible with other OSes to prevent developers from developing platform-independent apps.

Uh... no. OS companies build their systems using entirely different programming languages, for philosophical reasons that diverged decades back. Because of that difference, they create system APIs that are incompatible with other OSes because it would not be feasible to create APIs that aren't. Additionally, there are a number of fundamental differences between the two platforms (including their security model) that require platform-specific handling. Those differences have nothing to do with wanting to be incompatible, and everything to do with designing APIs to meet their specific goals and ideals.

In fact, platform vendors have gone to a great deal of effort to reduce portability problems. That's why both Android and iOS support cross-platform APIs such as POSIX and OpenGL ES. By taking advantage of those technologies, developers can write much of their code in a platform-independent way (with lots of caveats, of course).
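As a small illustration of the POSIX point: threading code like the following compiles unchanged against both the Android NDK and the iOS SDK. (This is a made-up example; `square_task` and `run_square_task` aren't from any vendor API, just standard pthreads.)

```c
#include <pthread.h>

/* A trivial worker: squares the integer it's handed. */
static void *square_task(void *arg) {
    int *n = (int *)arg;
    *n = *n * *n;
    return arg;
}

/* Spawn the worker, wait for it, return the result.
   Pure POSIX -- nothing here is Android- or iOS-specific. */
int run_square_task(int value) {
    pthread_t tid;
    int n = value;
    if (pthread_create(&tid, NULL, square_task, &n) != 0)
        return -1;
    pthread_join(tid, NULL);
    return n;
}
```

The caveats show up at the edges: UI, lifecycle, and security-model code still has to be written per platform, but the computational core can live in shared code like this.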
