Which may be why Zimmerman's defense didn't invoke SYG.
IMHO, there was no proof beyond a reasonable doubt that Zimmerman was on top. It's quite possible he was on the bottom and was legitimately scared for his life. From what I know of the trial, I don't think I'd have convicted either.
On the other hand, from what I know from the trial, I also wouldn't have convicted TM if TM had managed to kill Zimmerman. It's a crappy situation for everyone.
You're assuming that the drive failures are independent. His point is that they might not be: the common cause may be write cycles.
Let's say that a drive under your write patterns will last 9 months. (Bad wear-leveling algo, combined with very rewrite-heavy data structures?) You put 5 of them in a RAID 5 enclosure, all brand new drives. 9 months later, they all fail within minutes of each other. Whoops, lost your data.
If they fail for different reasons, you're more likely to be safe. If they all fail from wearing out the ability to erase cells, you're more likely to be hosed, until you've swapped out enough to randomize the write count.
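A toy simulation makes the point. The lifetimes and rebuild window below are made-up numbers, not real drive data; it just contrasts "lifetimes clustered by shared write load" against "independent failure causes" for an array that tolerates one dead drive:

```python
import random

def array_survives(n_drives, correlated, trials=10_000):
    """Fraction of trials where no second drive dies before the first
    failure can be rebuilt (RAID 5 tolerates exactly one failure)."""
    survived = 0
    for _ in range(trials):
        if correlated:
            # Shared write load: all lifetimes cluster around the same
            # wear-out point (~9 months = ~270 days, hypothetical).
            base = random.gauss(270, 30)
            lifetimes = [base + random.gauss(0, 1) for _ in range(n_drives)]
        else:
            # Independent causes: lifetimes spread widely.
            lifetimes = [random.expovariate(1 / 270) for _ in range(n_drives)]
        lifetimes.sort()
        # Assume ~3 days to swap and rebuild onto a replacement drive.
        if lifetimes[1] - lifetimes[0] > 3:
            survived += 1
    return survived / trials
```

With correlated wear-out the second failure almost always lands inside the rebuild window; with independent failures it almost never does, which is the whole argument above in one number.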
Aka: Leading the country.
Getting elected is a MEANS, it is not the end.
"Did you put the disk in the DVD drive?"
Excuse me for being a horrible pedant, but I would also get confused if you told me to put a disk into the DVD drive. That drive takes discs... the ones that are visibly circular and have no case.
Ahem. Back to your point, and sorry for making the point of the original article.
I had a small OCZ SSD of some variety in my foo-server (which mounted the NAS for all the important changing data). One day I realized that / had gone read-only days earlier. The console showed a write failure to the journal (ext3).
Rebooted it, and it worked for ~1 day. Reformatted (it's a managed system; I have no idea if there was data corruption. Didn't seem to be any, but I didn't look for any) and it worked for around a week. At that point I gave up and replaced it. It had lasted just over a year when it failed.
The two Intel SSDs I've bought have not failed yet, nor has another OCZ brand SSD (Vertex3, fwiw).
Handling underflow/overflow was also so easy (write ahead as much as the device will take. Use an IOCTL when you need to stop... because the buffer won't run out for several seconds) that it amazes me that buffer sizes apparently have to be configurable in current sound-using applications. Crazy.
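The write-ahead approach is simple enough to sketch. A non-blocking pipe stands in for the sound device here (the real device and its ioctls are from the original comment, not modeled); the loop queues as much as the kernel will accept and stops the moment the buffer is full:

```python
import fcntl
import os

def fill_buffer(fd, data, chunk=4096):
    """Write ahead as much as the device will accept, then stop.
    Returns the number of bytes actually queued."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    written = 0
    while written < len(data):
        try:
            written += os.write(fd, data[written:written + chunk])
        except BlockingIOError:
            break  # buffer full: seconds of audio already queued
    return written

# Usage: the pipe's kernel buffer plays the role of the card's FIFO.
r, w = os.pipe()
queued = fill_buffer(w, b'\x00' * 1_000_000)
```

Once the buffer holds seconds of audio, underflow is a non-issue and "stop playback" becomes a single ioctl rather than careful buffer-size tuning.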
This. Like Enry, I've been using linux since pre-1.0. Unlike him, I've lost my desire to constantly upgrade versions.
The "KDE/GNOME are both Windows 95/XP look-alikes" era was probably the peak of usability, as far as I can tell. Newer KDE never got back to the same level, and newer GNOME makes me turn giant and green. (Look, my monitor is not 1024x768. Stop making UI decisions that only work on tiny-ass monitors.)
And unlike most here, I think that is reasonable. Normal people won't use Linux until the app they want is only available on it... and that won't happen until the developer likes it enough to run it as their default platform. So YES, make it nice for neckbeards first. And once it's (back to being) nice for the neckbeards, THEN go ahead and try and make it nice for your grandmother too... but DO NOT break it for the neckbeards.
And then you declare the basic desktop DONE for 3 years or so, and work on apps. Maintain the desktop in terms of bug fixes, and internal reworks and anything else you need to do, but religiously keep interfaces static for 3-10 years. And instead of going all 2nd system on the interface, work on other things. Maybe those are easier app-building tools? Maybe those are actually just killer apps. Maybe those are better tools for configuring the system, or for managing large numbers of desktops. Maybe that's "work on something completely different that doesn't affect the desktop". Whatever. Maybe that's "work on something completely different, like servers". I don't really care, as long as you stop breaking perfectly working desktops.
Your standard app was not written for silly-high DPR. You could show this on linux too: take your desktop, and crank the DPI to 300 or so, so that the X server thinks your screen is only 5" across. Now move far enough away from it that a 12 point font looks reasonable, and then look at how stupid apps look. Icons are microscopic (because they're defined in fixed pixel sizes). Layouts between menubars and borders look stupid (natural spacing was defined in fixed pixel sizes).
So Apple's approach here is to tell the application that the screen is 1440x900. Any primitives that can be scaled ("place the string 'pants' in font 'Helvetica', size 12pt, at X,Y"; "draw this 2kx2k pixmap in this 500px x 500px space") are then rendered at the screen's native resolution. Things that can't be scaled aren't ("draw this 96x96 pixmap here, in this 96x96 space"). Some apps then look horrible, some look great.
I personally would have rather they just let apps look like crap, and told people to fix their darn apps, but I can understand why they didn't.
iOS (tablet), Android (phone)
Linux in my television, Tivo, and game console.
Whatever the heck RTOS runs my car, car's GPS, my work telephone, microwave, the badge-swipe system at work, and my work monitor (no, not joking. Darn thing can lock up, and has a boot screen)
That's all I'm coming up with on a daily basis.
Less often: ATMs, routers (mostly Linux), NAS devices, smart switches (didn't seem like a Linux box, but had some copyright lines in the packaging), and anything else with a UI more complex than a mechanical watch. Increasingly, EVERYTHING has an OS: I'm sure it won't be long until someone finds a reason to put a fancy UI on a charcoal grill; and then all future grills will have an OS.
Let's see YOU, sir, figure out an IPv6 address map for... let's say a 40-person small business, in your head.
Our ISP gave us the subnet fd00:cafe:babe::/64. I'll put the RA daemon and router on ::1, and use autoconf for the rest.
That was easy.
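The whole plan fits in a few lines of Python's stdlib `ipaddress` module (prefix from the comment above; note fd00::/8 is ULA space, so "ISP gave us" is tongue-in-cheek anyway):

```python
import ipaddress

# The delegated /64 from the comment above.
net = ipaddress.IPv6Network('fd00:cafe:babe::/64')

# Router / RA daemon gets the ::1 address in that prefix.
router = net[1]

# Everything else autoconfigures its own interface identifier;
# a /64 leaves 2**64 of them, which covers 40 people comfortably.
iid_space = net.num_addresses
```

That really is the entire address plan: one memorized prefix, one memorized router suffix, and SLAAC does the rest.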
And then once they've excavated what your MAC address is, telling your router to route traffic to your node is trivial.
Could you further explain this attack vector, because I've not really understood it so far. The bad guy has your IP address. Exactly what is the additional harm in letting him know your MAC address?
I understand the issue of "probable iPhone MAC => iPhone-specific vulnerabilities", but that doesn't seem to be what you're talking about here. (And really, that's not a significant barrier to the attacker anyway. You did something that let him see your IP address: the odds are quite good that he could already figure out your OS more reliably than by using a MAC -> OS mapping.)
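For reference, the "MAC in the address" leak being discussed comes from SLAAC's classic EUI-64 scheme (before privacy extensions): the interface identifier is derived mechanically from the MAC, so it can be read straight back out. A sketch, with a made-up example MAC:

```python
def eui64_iid(mac: str) -> str:
    """Derive the EUI-64 interface identifier from a 48-bit MAC:
    flip the universal/local bit of the first octet and insert
    ff:fe between the OUI and device halves (RFC 4291 app. A)."""
    octets = [int(b, 16) for b in mac.split(':')]
    octets[0] ^= 0x02                          # flip the U/L bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ':'.join(f'{(iid[i] << 8) | iid[i + 1]:04x}'
                    for i in range(0, 8, 2))

print(eui64_iid('00:11:22:33:44:55'))  # 0211:22ff:fe33:4455
```

Reversing it is just as mechanical, which is exactly why the "excavate the MAC" step is easy: the low 64 bits of an EUI-64 address *are* the MAC with two bytes stuffed in the middle.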