Even that isn't a "No True Scotsman" fallacy, because there was no initial flawed assertion, nor a counterexample that disproves that assertion.
As many folks have already pointed out in other threads on the subject, Intel screwed up the Haswell line by using an entirely different pinout on the i7 than on the i5. The result is that any motherboard with soldered-on chips has to be specifically designed for one or the other.
Apple chose the i5, presumably because that's the hardware grade where most of the Mini's sales came from, rather than doubling their R&D cost by building two very different motherboards.
Here's hoping Intel doesn't screw up Broadwell in the same way.
They only fix 2 problems - weak passwords and keyloggers.
That's not true. They also provide protection against:
- Shoulder surfing attacks, which require no compromise to the internals of the endpoint
- Storage of data encrypted with a protocol that later proves vulnerable in some interesting way, such as a key compromise
For example, consider Heartbleed. If someone stores your encrypted communication and later compromises a host's private key, that attacker could then decrypt those stored communications. If you use a password, that password is compromised, and it's "Game over, man." If you use a physical token, only the PIN is compromised (assuming the actual verification happens in a separate process).
Ideally, you would still want to issue new PIN codes, but the account-hijacking risk would be largely mitigated by the physical token requirement, at least once the n-hour cookie expiration window passes. You could even eliminate that window by expiring every cookie in your authentication database before bringing it back online after fixing the Heartbleed vulnerability.
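That cookie-expiry step can be sketched in a few lines. This is a minimal in-memory sketch, not a real session store; the `sessions` dict, its field names, and the timestamps are all illustrative assumptions:

```python
# Hypothetical in-memory session store; structure is illustrative only.
sessions = {
    "cookie-abc": {"user": "alice", "issued_at": 1_700_000_000},
    "cookie-def": {"user": "bob",   "issued_at": 1_700_100_000},
}

def purge_sessions_issued_before(cutoff):
    """Drop every session cookie issued before the key rotation,
    so a stolen cookie can't outlive the fix."""
    stale = [c for c, s in sessions.items() if s["issued_at"] < cutoff]
    for c in stale:
        del sessions[c]
    return len(stale)

# After patching and rotating keys at some time T, invalidate
# everything issued earlier than T before bringing auth back online:
purged = purge_sessions_issued_before(1_700_050_000)
```

Purging by issue time, rather than wiping the whole table, means sessions created after the key rotation survive.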
So you're saying they made a new network for blackjack and hookers? You know what, forget the network. And the hookers.
Regardless of the fact that it may be legal for others to do so, it's unethical and clearly misrepresentation.
Not true. Lots of small homebrew hardware uses off-the-shelf chips like the ones FTDI builds without applying for their own VID/PID combo. This causes minor headaches because software can't tell them apart from one another, but as long as the final product doesn't have a USB logo on it, it is perfectly acceptable to sell it, even if your homebrew flash programmer looks like a USB to serial adapter to any software that asks.
If you want to use the USB logo, you have to apply for your own VID/PID combo and reprogram the chip to identify itself as being your product, and ship a custom driver that talks to it (which could be a modified version of the official FTDI driver, or the open source driver, or whatever).
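To see why software can't tell such devices apart, consider how a host picks a driver: it largely matches on the descriptor's (idVendor, idProduct) pair. A minimal sketch; the driver table and the 0x1234 combo are hypothetical, while 0x0403/0x6001 is FTDI's well-known FT232 pair:

```python
# Toy driver-matching table keyed on (idVendor, idProduct).
DRIVER_TABLE = {
    (0x0403, 0x6001): "ftdi_sio",      # stock FTDI FT232 VID/PID
    (0x1234, 0x0001): "acme_flasher",  # hypothetical custom combo
}

def match_driver(id_vendor, id_product):
    """Return the driver a host would bind, given only the descriptor IDs."""
    return DRIVER_TABLE.get((id_vendor, id_product), "no driver")

# A homebrew flash programmer that keeps the stock FTDI VID/PID is
# indistinguishable from a genuine USB-serial adapter:
match_driver(0x0403, 0x6001)  # matches "ftdi_sio" either way
```

This is why reprogramming the chip with your own VID/PID (plus shipping a matching driver) is what actually separates your product from a generic serial adapter in the host's eyes.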
As other people have pointed out, not all of the fake chips have those markings—or any markings, for that matter. This tells me that some company special-ordered batches of chips that were silk screened with those markings, but that the part normally comes blank.
And who would want a 9" pianist figurine anyway?
But this involves TECHNOLOGY, so it must be evil, because without TECHNOLOGY there would be no other possible way for the folks at the airport to calculate how long you might be waiting in line.
No siree, no way at all. You, standing there in full view of every person, in a public space. No way to check. None at all.
Look at how counterfeiting laws work for money. If you pay with a $100 bill in a smoky bar at night and get a $20 counterfeit bill in change, and don't realize it until the next day, you're out the $20. If you try to spend it, you're actually committing a felony - it doesn't matter whether you printed the phony bill yourself or just accepted it as change and are passing it forward, and it doesn't matter whether you realize it's counterfeit. That said, the Secret Service agents may agree to give you a pass the first time you try to spend phony money if you claim you didn't realize it was counterfeit and cooperate completely.
However, currency counterfeiting laws are very specific to money. Let's look at product counterfeiting, which works similarly but probably without the felony charges.
If FTDI discovered that a container of devices with counterfeit chips was en route, they could tell Customs, who would order the contents of the container destroyed once it arrived on the dock. This would be a problem for the shipping company, which accepted the devices for shipment and never delivered them, so it would have to pay out an insurance claim. The insurer then deals with the liability by going back to the shipper and saying "hey, your devices were destroyed by Customs; I had to pay out for failing to deliver the goods." I expect shipping companies deal with this all the time, though, and have a contract clause that absolves them of insurance liability in this case, which leaves the supplier out the money. The supplier's recourse would be to go back to the manufacturer and ask for a refund. Maybe the manufacturer will honor the request, maybe they won't.
If FTDI discovered that a shipment of devices with counterfeit chips had already gone to MicroCenter, they would call the Secret Service, who would contact MicroCenter, and MicroCenter would have to pull the devices off the shelves and destroy them, leaving MicroCenter without the money. Their only recourse would be to contact their supplier and say "hey, you sold us counterfeit goods, we want our money back." Maybe they'd get their money back, maybe they wouldn't. It's a risk.
So FTDI has now found a way to destroy a consumer device. As above, the consumer is similarly out of luck. Their recourse is to go back to MicroCenter and say "hey, this adapter, it's broke." Maybe they'll get their money back, maybe they won't. It's a risk. MicroCenter might eat the losses, or they might go back to their supplier, who might go back to the manufacturer.
In every case when the counterfeits are discovered they are destroyed, leaving somebody without the device and without the money.
I think FTDI may have pretty solid legal grounds for behaving like this, even though it's always a crappy experience for the person who got stuck with the phony. The main difference is that FTDI is doing this without asking the Secret Service to investigate the counterfeits first.
I know this is a radical idea, and I'm just spitballing here, but maybe the part about the unauthorized act being done on a computer should be a hint. If it's not your computer or your system, don't try to get into it.
Or are we going to make excuses for why it's acceptable to try to get into someone else's equipment when you're not supposed to, and then whine about the penalty when you're found out?
First, there's no such thing as "illegal access to software". The customer may be violating a licensing agreement, but as a rule, that's not a criminal offense.
Second, I'm pretty sure there are third-party FTDI drivers out there. So you really can't make the argument that the clone chip vendors don't have an alternate driver. The best you can do is state that if a clone gets bricked, it means that the commercial FTDI driver was loaded at least once by the customer for some reason (possibly with the intent to use it with the clone hardware, but possibly to use it with some other device), and that it matched the clone because it was attached while that driver was loaded.
Besides, they aren't FTDI's chips, so FTDI's statement about what uses their chips are certified for is irrelevant.
Actually, if you sell it as a "USB/Serial converter", then you are, because the USB mark is trademarked.
Only if they use the USB trident mark. The letters "USB" are likely to be held as descriptive.
If some medical device manufacturer uses a consumer-grade FTDI chip - counterfeit or not - in a medical appliance, then that manufacturer is the one who would be liable, as FTDI has already made it clear that these chips are not certified for such uses.
Liability is not binary. If the failure were accidental, you'd be correct. Because it is deliberate, at best, both companies would be held liable—the medical device vendor for choosing an unsuitable part and FTDI for deliberately breaking it, and at worst, FTDI would be held solely liable for deliberately breaking it.
No, I haven't solved any of the hard problems, because determining whether a colored ball or arrow is meaningful really isn't one of them. The hard problems are things like:
- recognizing and handling road signs
- dealing with potentially contradictory lane markings
- dealing with rain on the cameras
- determining which way to swerve when avoiding obstacles (like a dog running across the road), and whether to brake instead, or do both
- choosing whether it is better to hit the object in the road or swerve into the next lane (including computing the distance and speed of an oncoming vehicle correctly, even if it is a motorcycle)
- handling four-way stops when other vehicles don't follow the rules
- determining weather conditions sufficiently to compute braking distance correctly (Is it rainy or just cloudy?)
- recognizing that there are kids playing by the side of the road and that you should probably slow down, just in case one of them darts out into the street
Traffic lights are relatively straightforward by comparison, so long as they are working.
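For the braking-distance item in that list, even idealized physics shows why the wet-versus-dry call matters. A rough kinematics sketch, with textbook-style friction coefficients assumed rather than measured:

```python
def stopping_distance(speed_mps, mu, g=9.81, reaction_time=1.5):
    """Reaction distance plus braking distance: d = v*t_r + v^2 / (2*mu*g).
    The mu and reaction-time values below are rough textbook figures,
    not measurements from any real vehicle."""
    return speed_mps * reaction_time + speed_mps ** 2 / (2 * mu * g)

v = 27.0  # roughly 100 km/h, in m/s
dry = stopping_distance(v, mu=0.7)  # dry asphalt, assumed mu
wet = stopping_distance(v, mu=0.4)  # wet asphalt, assumed mu
```

With these assumptions the wet stop takes on the order of 40 extra meters, which is exactly the kind of margin a planner gets wrong if it can't tell "rainy" from "just cloudy."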
No, that's the next generation, when they add backscatter and/or millimeter-wave scanners.