Furthest-most? When "furthest" is just not far enough?
Technically it should be "farthest", since it refers to a physical distance, whereas "furthest" means most distant in a figurative sense. For example, you say "furthest from the truth", not "farthest from the truth", but "Cape Spear is the farthest east you can go in Canada", not "furthest east". So to summarize: "furthest-most" should not have a hyphen, should not have the "most" added since it is redundant, and should actually be "farthest", since it refers to a physical distance.
As for the origin of the "cold spot" I understood that it was completely statistically consistent with quantum fluctuations in the early universe. So how about we rule out that explanation first before coming up with multiple universes or other crazy stuff.
Windows has had IPv6 stacks since Windows 95, and Microsoft even started supplying them as of Windows 98.
IPSec is perfectly usable.
Telebit demonstrated transparent routing (i.e., total invisibility of internal networks without loss of connectivity) in 1996.
IPv6 has a vastly simpler header, which means a vastly simpler stack. This means fewer defects, greater robustness and easier testing. It also means a much smaller stack, lower latency and fewer corner cases.
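To make the "vastly simpler header" point concrete, here's a sketch in Python (field layout per RFC 8200; the helper and the addresses are just illustrative): the IPv6 header is a fixed 40 bytes with eight fields and no checksum, options, or fragmentation fields, whereas the IPv4 header is a variable 20-60 bytes.

```python
import struct

def build_ipv6_header(payload_len, next_header, hop_limit, src, dst):
    """Pack the fixed 40-byte IPv6 header (RFC 8200).

    version (4 bits) = 6, traffic class and flow label = 0 here;
    src/dst are 16-byte addresses. No checksum, no options, no
    fragmentation fields -- this is the entire header.
    """
    ver_tc_flow = 6 << 28  # version 6, traffic class 0, flow label 0
    return struct.pack("!IHBB16s16s",
                       ver_tc_flow, payload_len, next_header,
                       hop_limit, src, dst)

hdr = build_ipv6_header(
    payload_len=1280,
    next_header=6,               # TCP
    hop_limit=64,
    src=bytes(15) + b"\x01",     # ::1, purely for illustration
    dst=bytes(15) + b"\x01",
)
assert len(hdr) == 40  # always exactly 40 bytes; IPv4 is 20-60
```

A fixed-size header with no options to parse is a large part of why the stack (and the router fast path) gets simpler.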
IPv6 is secure by design. IPv4 isn't secure and there is nothing you can design to make it so.
Actually, according to DC comic book history, he was born in Kansas. The spaceship that the Kents found was considered an artificial womb, and in at least one comic book storyline, the Supreme Court ruled that he was born when he left the spacecraft.
there has to be a good reason for it, and making it easier for bad programmers to produce more bad code is not a valid one.
If all you've got is bad programmers, and their bad code is nevertheless good enough to accomplish the tasks you need to get done, then a tool that allows bad programmers to produce more bad code may be just the thing you need. (of course some would argue that that niche is already filled by Java, but time will tell)
IPv6 would help both enormously. Lower latency on routing means faster responses.
IP Mobility means users can move between ISPs without posts breaking, losing responses to queries, losing hangout or other chat service connections, or having to continually re-authenticate.
Autoconfiguration means both can add servers just by switching the new machines on.
Because IPv4 has no native security, it's vulnerable to a much wider range of attacks and there's nothing the vendors can do about them.
Each level is given the parent's prefix plus one or two bytes. Yes, you can announce that and it is easily summarized.
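A quick sketch of that hierarchy with Python's `ipaddress` module (the specific prefix lengths are just an example): a provider's /32 gives each child a /40 (the parent's prefix plus one byte), each of those a /48, and the whole tree still summarizes back to the single /32 announcement.

```python
import ipaddress

provider = ipaddress.ip_network("2001:db8::/32")

# One extra byte of prefix per level: /32 -> /40 -> /48.
customers = list(provider.subnets(new_prefix=40))
sites = list(customers[0].subnets(new_prefix=48))

# Each level is the parent's prefix plus one byte...
assert customers[0].subnet_of(provider)
assert sites[0].subnet_of(customers[0])

# ...so any leaf summarizes back to the one /32 announcement.
assert sites[123].supernet(new_prefix=32) == provider
```

That summarization is what lets a router make its decision by looking at only the first few bytes of the address.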
Anycast tells you what services are on what IP: a packet sent to an anycast address is delivered to the nearest machine providing that service, which made it a natural fit for IPv6 bootstrapping. There are other service discovery protocols, but the multicast variant is very simple: multicast out a request for who runs a service, and the machine with the service unicasts back that it does.
Dynamic DNS lets you tell the DNS server who lives at what IP.
IPv6 used to have other features - being able to move from one network to another without dropping a connection (and sometimes without dropping a packet), for example. Extended headers were actually used to add features to the protocol on-the-fly. Packet fragmentation was eliminated by having per-connection MTUs. All routing was hierarchical, requiring routers to examine at most three bytes. Encryption was mandated, ad-hoc unless otherwise specified. Between the ISPs, the NAT-is-all-you-need lobbyists and the NSA, most of the neat stuff got ripped out.
IPv6 still does far, far more than just add addresses and simplify routing (reducing latency and reducing the memory requirements of routers), but it has been watered down repeatedly by people with an active interest in everyone else being able to do less than them.
I say roll back the protocol definition to where the neat stuff existed and let the security agencies stew.
Huh? This sounds like nonsense. Operating systems already cache frequently used data in ram.
-Matt
"it actually caused a bug that would crash the system"
It would be more accurate to say it revealed a bug. The bug was almost certainly a race condition that had always been present, but it took particular entry conditions (such as an unusually fast I/O device that the transcoder developers never tested against) to provoke the bug into causing a user-detectable failure.
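A toy illustration of the "revealed, not caused" point (pure Python, with the interleaving made explicit rather than left to a real scheduler, so it runs deterministically): the lost-update bug below is in the code all along, but only a particular timing of the two workers exposes it.

```python
counter = {"n": 0}

def worker():
    """Unsynchronized read-modify-write: the race is always present here."""
    local = counter["n"]        # step 1: read
    yield                       # suspension point; stands in for I/O timing
    counter["n"] = local + 1    # step 2: write back

def run(order):
    """Drive two workers 'a' and 'b' in the given step order, e.g. 'aabb'."""
    counter["n"] = 0
    gens = {"a": worker(), "b": worker()}
    for name in order:
        try:
            next(gens[name])
        except StopIteration:
            pass
    return counter["n"]

# Old, slow device: each request completes before the next begins.
assert run("aabb") == 2   # bug present, but invisible

# New, fast device: completions overlap -- the very same code loses an update.
assert run("abab") == 1   # the latent race finally causes a visible failure
```

Swap in a faster I/O device and the overlapped interleaving starts happening in practice, which is exactly how "new hardware caused a bug" stories usually go.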
That isn't correct. The queue depth for a normal AHCI controller is 31 (assuming 1 tag is reserved for error handling). It only takes a queue depth of 2 or 3 for maximum linear throughput.
Also, most operating systems are doing read-ahead for the program. Even if a program is requesting data from a file in small 4K read() chunks, the OS itself is doing read-ahead with multiple tags and likely much larger 16K-64K chunks. That's assuming the data hasn't been cached in ram yet.
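Back-of-the-envelope version of the queue-depth claim (the latency and transfer-size numbers here are assumptions for illustration, not measurements): by Little's law, achievable throughput is roughly queue depth times transfer size divided by per-request latency, capped at the drive's linear rate, so two or three outstanding 64K read-ahead requests already saturate a ~500 MB/s SATA SSD.

```python
def throughput_mb_s(queue_depth, transfer_kib, latency_us, linear_cap_mb_s):
    """Little's law: bytes in flight / latency = rate, up to the media cap."""
    bytes_in_flight = queue_depth * transfer_kib * 1024
    rate = bytes_in_flight / (latency_us / 1e6)   # bytes per second
    return min(rate / 1e6, linear_cap_mb_s)

# Assumed numbers: 64 KiB read-ahead chunks, ~250 us per request,
# ~500 MB/s linear SATA SSD.
qd1 = throughput_mb_s(1, 64, 250, 500)
qd2 = throughput_mb_s(2, 64, 250, 500)
assert qd1 < 500    # one tag in flight leaves the drive idle part of the time
assert qd2 == 500   # two tags already hit the linear cap
```

That's why the 31 tags AHCI offers are overkill for linear reads; the extra tags only matter for random I/O.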
For writing, the OS is buffering the data and issuing the writes asynchronously so writing is not usually a bottleneck unless a vast amount of data is being shoved out.
-Matt
Actually, large compiles use surprisingly little actual I/O. Run a large compile... e.g. a parallel buildworld or a large ports bulk build or something like that while observing physical disk I/O statistics. You'll realize very quickly that the compiles are not I/O constrained in the least.
Most server daemons are also not I/O constrained in the least. A web server can be IOPS-constrained when asked to serve, e.g., tons of small icons or thumbnails. If managing a lot of video or audio streams, a web server typically becomes network-constrained, but the IOPS will be high enough to warrant at least a SATA SSD rather than a HDD.
Random database accesses are I/O constrained if not well-cached in ram, which depends on the size of the database too, of course. Very large databases which cannot be well cached are the best suited for PCIe SSDs. Not a whole lot else.
-Matt
I mean, why would anyone think images would load faster? The CPU is doing enough transformative work processing the image for display that the storage system only has to keep ahead of it... which it can do trivially at 600 MBytes/sec if the data is not otherwise cached.
Did the author think that the OS wouldn't request the data from storage until the program actually asked for it? Of course the OS is doing read-ahead.
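Rough arithmetic on that point (every size and rate below is an assumed, illustrative figure): even a fairly large compressed image is read in a few milliseconds at SATA speeds, which is already small next to the CPU's decode time, so a faster bus buys nothing the user can see.

```python
# Assumed figures, for illustration only.
image_mb = 5.0        # compressed JPEG on disk
sata_mb_s = 600.0     # SATA SSD linear read
nvme_mb_s = 3000.0    # PCIe SSD linear read
decode_ms = 40.0      # CPU decode + display time for this image

read_sata_ms = image_mb / sata_mb_s * 1000   # a few ms
read_nvme_ms = image_mb / nvme_mb_s * 1000   # even fewer ms

# Worst case, read then decode sequentially; decode dominates either way.
total_sata = decode_ms + read_sata_ms
total_nvme = decode_ms + read_nvme_ms
assert read_sata_ms < decode_ms
assert (total_sata - total_nvme) / total_sata < 0.15  # under 15% difference
```

And since the OS overlaps the read-ahead with the decode anyway, the real-world difference is smaller still.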
And programs aren't going to load much faster either, dynamic linking overhead puts a cap on it and the program is going to be cached in ram indefinitely after the first load anyway.
These PCIe SSDs are useful only in a few special, mostly server-oriented cases. That said, it doesn't actually cost any more to have a direct PCIe interface versus a SATA interface, so I think these things are here to stay. Personally, though, I prefer the far more portable SATA SSDs.
-Matt
The Tao is like a glob pattern: used but never used up. It is like the extern void: filled with infinite possibilities.