Fixed the punchline link. (Score 1)
Dang. Typo broke the first, more-punchline-worthy, Schlock link.
I'm really beginning to hate the keyboard on this new laptop.
This actually looks good to me. Most helicopters can be shot down with a rifle. They are huge engines with large fuel tanks and large, whirling blades, and it is not that difficult to get them to destroy themselves with their own momentum, height, or fuel.
I concur. Helicopters are a collection of single points of failure, disasters waiting to happen. (Particularly the pilot - they have to be flown continuously, and they crash almost instantly if anything incapacitates him.) Their vulnerability is justified only because their extreme usefulness outweighs it. With eight rotors I'd be surprised if this vehicle couldn't at least come to ground safely with two of them destroyed, and the multicopter approach has been under autonomous computer control from the start - it's made practical only by that automation.
I envision this thing's missions as being primarily extreme rough-country ground transport, with short hops to bypass otherwise impassable terrain, reach otherwise inaccessible destinations or targets, attack from above, or put on a burst of speed when time is of the essence. Think a truck-sized "super jeep" a la Superman. Being primarily a ground vehicle lets it perform longer missions and reduces its visibility and vulnerability compared to a helicopter.
Just because you CAN fly doesn't mean you DO fly all the time. As is pointed out in the webcomic Schlock Mercenary: "Do you know what they call flying soldiers on the battlefield?"
Right now no one knows the opinion of those 2307 people so it can't be reported on.
In case you've been sleeping under a rock, this is the 21st Century. Not knowing something doesn't stop the media from reporting on it.
Feel free to start such a team, and get that competing implementation going.
If you need to add exceptions to get a tool to work... the tool is wrong for the job.
Well yeah - that's why you add exceptions: to indicate that this tool shouldn't be used for that job, because it's the wrong tool for the job.
Exactly this. I feel it's worth pointing out that the code was reviewed, and the reviewer missed the bug too.
Stephen Colbert doesn't even have an 'act' without his schtick. And with or without it, he's gonna crash and burn, Conan-style, in the big chair. I give it a year, tops, before someone else is brought in and Colbert is chased back to cable.
He doesn't have an act that you've seen. Have you ever seen him out of character? The man is brilliant, and I'm excited to see what he does with the new role.
A lot of that hardware does not have Linux drivers either.
So write one!
(Ba-dah-bing! Thank you, thank you. I'll be here all week.)
Seriously, though. If you're buying hardware with an embedded Windows OS as a necessary component, that's what you signed up for. Take that into account when negotiating with vendors for the replacement.
I never thought I'd see the day that anyone would claim Windows Vista was the pinnacle of OS innovation...
Looks to me like the claim was that XP was the pinnacle of OS innovation AT MICROSOFT.
After that they jumped the shark with creeping featuritis and failure to support (or provide an adequate, clean, easy upgrade path for) important functionality.
Nothing was said about OS innovation OUTSIDE of Microsoft.
There's also the issue of whether OS innovation was even a Good Thing (TM) for the users of the functionality of the time. (It can still be enabling and yet be a net loss if its costs outweigh its benefits.)
... making further distribution by Sony or their agents (i.e. YouTube, with Sony still on the hook for the money) subject to the $150,000 statutory damages penalty.
That is an explicit claim associated with Sony Pictures Movies & Shows. To get that, Sony had to upload content to the YouTube content system saying "I own this content. Anyone matching it is in copyright violation."
This is a very important legal argument to make in court. By submitting content to the system - or to YouTube in a way that would be interpreted as being "Copyright Sony, rights reserved" by the system - Sony knowingly made a claim of ownership.
This both disparaged the Blender Foundation's title and voided their license to distribute the content, making further distribution by Sony subject to the $150,000 statutory damages penalty.
And I've heard a LOT of REALLY BAD ideas.
Most of what has gotten worse in Unix/Linux over the last couple decades has been the progressive hiding of the system administration mechanisms - previously built on human-readable text configuration files - behind GUI configuration interfaces and excessive complexity. (See upstart and systemd for examples of the latter.)
Now they want to bury the kernel error messages in a QR code? That REALLY takes the cake.
"...We recommend that Microsoft Access be used solely for development purposes and not for production." - Microsoft
I do not see why TCP and IP could not have been created as a single layer.
That was one of the major divergences from other networking schemes of the time that gave TCP/IP an advantage.
IP is a lower layer than TCP. It's about getting the packet from router to router, and it is as deep into the packet as core routers have to look to do their jobs. Core routers are supposed to be "as dumb as rocks", putting as little effort as practical into forwarding each packet, in order to get as many of these "hot potatoes" moved on as quickly as possible and keep the cost of the routers down (and to drop any given packet if there's any problem forwarding it).
TCP is one of several choices for the next layer up. It runs only at the endpoints of a link. It does several things, which are all about building a reliable, persistent, end-to-end connection out of the UNreliable, "best effort", IP transport mechanism. Among these things are:
- Breaking a stream up into packet-sized chunks.
- Creating reliability by hanging error detection on packets and saving a copy of the data until the far end acknowledges successful reception, retransmitting if necessary to replace lost or corrupted packets.
- Scheduling the launching of the packets so that the available bandwidth at bottlenecks is fairly divided among many TCP sessions, while as much of it is used as practical.
- Adding an out-of-band "urgent data" channel to the connection (for things like sending interrupts and control information).
Some other networking schemes of the time did this on a hop-by-hop basis, requiring much more work by the routers. TCP put it at the endpoints only.
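The endpoint-only reliability described above can be sketched as a toy simulation. This is not real TCP - function names like `split_stream` and `send_reliably` are made up for illustration, and the "unreliable hop" is just a random-loss coin flip - but it shows the two core ideas: breaking a stream into packet-sized chunks, and keeping a copy of each chunk until it is acknowledged, retransmitting on loss:

```python
import random

random.seed(1)  # deterministic loss pattern for this demo


def split_stream(data: bytes, mtu: int = 4):
    """Break a byte stream into packet-sized chunks, as TCP does."""
    return [data[i:i + mtu] for i in range(0, len(data), mtu)]


def send_reliably(chunks, loss_rate=0.3, max_tries=10):
    """Toy stop-and-wait sketch: the sender keeps a copy of each chunk
    and retransmits until the (simulated) far end acknowledges it.
    All the work happens at the endpoints; the 'network' in between
    only drops packets."""
    delivered = []
    for seq, chunk in enumerate(chunks):
        for _attempt in range(max_tries):
            if random.random() >= loss_rate:  # packet survived the lossy hop
                delivered.append((seq, chunk))  # far end receives and ACKs
                break
            # no ACK: sender still holds the copy, so loop and retransmit
        else:
            raise TimeoutError(f"chunk {seq} lost {max_tries} times in a row")
    return delivered


chunks = split_stream(b"hello, unreliable world")
received = send_reliably(chunks)
reassembled = b"".join(c for _, c in received)
assert reassembled == b"hello, unreliable world"
```

Note that the routers in this model never see the sequence numbers or retransmissions at all - which is exactly the point: the per-hop schemes of the era made every router do this bookkeeping, while TCP pushed it to the two ends.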
If TCP/IP had included crypto, we'd all be using IPX nowadays...
The reason TCP/IP proliferated was because it was light-weight and easy to implement. Crypto would have killed that.
There would have been more resistance to adopting it, too.
As it was, there was substantial resistance among people and institutions based outside the US, because the Internet was a DARPA project, i.e. U.S. military. Other countries, organizations within them, and even some people in the US were concerned about what the US might be building in - like interception and backdoors for espionage and sabotage - or objected simply because "Military! Bad!". Including encryption from the then officially nonexistent, deepest-secret communications spy agency would have boosted that resistance substantially.
You will have many recoverable tape errors.