Fair enough on the specifics, but you did miss my entire point.
Actually, the Ariane 5 loss was caused by an overflow, not a divide by zero (see first paragraph at https://en.wikipedia.org/?titl...).
Even if it were a divide by zero problem, though, the error was in the code handling flight trajectory computations; I dare say an uncaught error in the code computing your trajectory is going to put the rocket somewhere that you don't want it, regardless. So you fail in one way or you fail in a different way. I doubt there's any case where ignoring a divide by zero error (or an overflow error) would actually keep the rocket on a correct trajectory.
This can be the right thing to do *if the entire (sub)system is designed to accommodate it*. But I certainly would not want, say, ICBM targeting code to drop a missile on my house because someone decided that continuing to run in the face of an error was better than aborting...
"Rather than failing when an unexpected condition arises, I want all software on my system to continue running with a possibly invalid or meaningless internal state."
Sure, what could go wrong?
No negotiation, replace the suite on both ends once per decade.
So, what... the Internet gets together and decides that January 1 of every year ending with '0' we'll upgrade every server, client, and embedded system in the world to the latest security protocol while disabling the previous decade's? And people whose systems are out of support or can't be patched (which would only be, what, 80% of the current internet?) are just SOL?
I think I see some flaws in your plan.
I would *love* to see a summary of the types of problems the video stream has, and the techniques used to recover them. Anyone feel like sorting through the ~70 pages of thread and cataloging them?
So, um, you do realize that there's not actually a technical differentiation between an ISP and anyone else peering with someone on the Internet, yes? None. A peer is a peer is a peer. There are a lot of companies that don't "pay an ISP for their bandwidth" because they're peering directly with all the big (and plenty of small) network players. The idea that a small handful of companies are "internet service providers" and everyone else must buy from them has never been an accurate representation of how the Internet actually works. And *I* most certainly *do* know the details.
Do you also realize that even if Netflix doesn't have "an ISP," that they still have to transit their own traffic to whatever peering points they use, right? That's far from free. The only reason Netflix would pay "their ISP" to start with would be to move Netflix's traffic from wherever Netflix originates it, to one of their peering points where they peer with Comcast. Not having "an ISP" do that for them doesn't negate the need. The data just doesn't magically appear at a peering point somewhere.
Also, do you realize that it's quite possible that Netflix would actually peer with Comcast in places that were actually *good* for Comcast? Netflix, in general, seems to want to offload their data onto end users' ISPs' networks as close to those users as possible, since that's how their users get the best quality service. Doing so means that transiting Netflix's traffic is actually *cheaper* for Comcast, because they don't have to haul it as far across their network to deliver it.
(This is why Netflix actually offers, to major ISPs, *free* servers that the ISP can put on their network in whatever locations they like, which will originate a large portion of Netflix's traffic. This means that the ISPs could put the sources of that traffic in the places that are cheapest and best for the ISP, at virtually no cost to them, and save them lots of money in the process, since they wouldn't have to transit the traffic from wherever they peer. Hell, shove one of those in the same buildings that terminate all your customers in a major metro area, and you practically eliminate Netflix as a source of traffic on that ISP's backbone in that area...)
Now, I realize you're just trolling, but I'm posting just in case someone out there doesn't realize that and tries to take you seriously.
Not just "popular" -- Yahoo News is the #1 news site in the world (by traffic), Yahoo Finance is the #1 finance site in the world, Yahoo Sports is one of the top three sports sites in the world (tends to bounce around a little), and Yahoo as a whole trades the #1 ComScore spot back and forth with Google quite regularly.
I'm sure there's a lot of "hip" companies out there that *wish* they could even come close.
So, I'm not sure about how the current linux implementations work, but when Solaris went 64-bit, they added an optimization where when you run an *unchanged* 32-bit executable, the libc would recognize that it's on a better processor, and use the improved features of the processor in places that it could, for performance. So, for example, if you called memcpy(), it would use 64-bit load/store instructions (and registers) to copy, giving you twice (or more) the performance for those calls, with no changes to your old code -- you don't even need the source code available!
Is this at all what is intended to be possible with the x32 implementations on linux, now? That would be an additional advantage that I haven't seen mentioned yet.
Overweight people (or at least the ones without chloroplasts) do eat too much.
Wow, and without even having read my medical file. Or gotten a medical history from me. Or even met me.
Do you take insurance? I'm totally coming to you for all my medical needs in the future. You're amazing!
The magic words you're looking for on accounts are "with rights of survivorship," which will give the named individuals direct access even after one dies. It's something you can just ask for on a joint account (if they don't give you the choice directly). I have my savings & investment accounts (and my deposit box) set up this way -- the last thing I want is for my partner to have no access to funds immediately after my passing.
Don't suppose you have a paper/website/whatever talking about your filesystem development work for this?
Why can't they add local echo, predictive typing, and resumable sessions to ssh or another TCP-based protocol? Yes, TCP can *possibly* take longer to recover from network errors, but this isn't something where you can just drop some missing packets (like some audio streaming things) to keep things flowing.
It's implemented over UDP, which means you *still* have to basically do all the functions of TCP, but now you get to do them with code that hasn't been tried and tested over the past several decades. Plus a new crypto implementation on top of that. And for what? Slightly better response time during network loss, which you shouldn't notice anyhow because of the predictive typing?
UDP just seems to solve no real problems here, yet *adds* a lot of them -- the firewall problem, for one, plus a fresh new daemon with unknown security issues.
I see no reason why you couldn't just tunnel this over ssh and get the vast majority of the benefits -- or better yet, patch the predictive typing/session resume/etc into ssh directly. Then you get to take advantage of the decades of work and bugfixing that's already been done for the majority of your protocol stack.
(And I don't buy for a minute that it's significantly more difficult to handle session resume when it's a TCP connection...)
The caps wouldn't be that bad if the service didn't *utterly* suck.
The gateway they give you is the only thing that works with the service (you can't use your own hardware, or at least nobody has found a way to). It won't do any kind of bridge mode. It won't talk to more than one IP per MAC address, so you can't put a router behind it (unless that router is doing NAT for *everything*). It randomly drops connections, especially long-lived ones -- I can't make local backups of my server in a remote datacenter anymore, because the connection will almost never stay alive long enough to transfer the whole ~400MB. Sometimes it starts blocking random incoming connections, even to static, un-natted, unfirewalled addresses -- one day I can't get to my webserver from the outside world for a few hours... the next I can't ssh into my home server ("unknown inbound session stopped").
Recently I've started to notice periodic problems downloading content (like the slashdot style sheet!) from akamai-based sites, which a little bit of googling shows has been an ongoing U-Verse problem since 2008.
The support sucks massively. If you call with basically any problem beyond "my internet is down," they will forward you on to their "advanced" support department, which charges a fee of $39 (might be $29... don't remember)... a fee they'll charge you even if all they do is tell you that they can't help you and you need to call regular support.
Netflix, on my 24Mbit downlink, varies from "great quality" to "OMG you can barely do SD quality"... many other people report this as well. Some days the performance is great, some days the performance is just absolutely miserable. I'd try to see if there was some common network path causing problems, but they basically disable traceroute for all of their internal nodes (I'm guessing they just stop them from sending TTL exceeded datagrams completely).
You can't switch back to ADSL -- they wouldn't even let me get U-Verse service unless they disconnected my ADSL at the same time. But ADSL is "no longer available," so now I'm stuck with this garbage.
I'd gladly take a usage cap if it meant any of this crap would get better. I'm somehow doubting it, since not a bit of it seems like it's related to network saturation... just lousy service. And my only other choice in this area (AFAIK) is Comcast, who also has caps, along with their own set of problems...
I'd say "welcome back to the 90s"
Hey AC, can I get your name and email address? I'd like to make certain that I never accidentally hire you. Thanks!
"Love your country but never trust its government." -- from a hand-painted road sign in central Pennsylvania