This link should let you view the article until next Monday.
There's another possibility: commingled inventory. By default, multiple sellers selling the same item will have their inventory combined. When a third-party seller "sells" the product, Amazon just grabs one from that pile and ships it. There's rampant abuse there (fakes, not-quite-the-same products, etc.). It's a horrible practice, and sellers should always opt out (despite the added expense) IMNSHO.
The slow loading speed was caused by some hardware interface issues IIRC that had to be worked around in the customer
It was a curse at Commodore; they had two different bugs in their serial ports that doomed their floppy drives.
The VIC-20 had a flawed chip, the 6522 VIA, which was supposed to read data off the serial bus. Under some circumstances it would corrupt the incoming data, so they had to resort to slowing the bus down and using the processor instead.
The C-64 fixed this with the 6526 CIA. However, after 10,000 boards had already been manufactured, it was discovered that the trace on the printed circuit board from the serial port to the CIA had been omitted. Jack Tramiel deemed it too costly to scrap them, so the VIC-20 solution was reused.
This -- I watched it happen from within the electronic design automation (EDA) industry.
In the late 90s/early 00s, if you wanted a serious EDA workstation you were buying or leasing Sun hardware. I was writing placer and router code at a startup on a Sun Ultra 5 on my desk in 1999. A few guys were lucky and had Ultra 10s, and our main server was an Ultra 30. There were a few firms out there running HP-PA and AIX workstations, but Sun was our bread and butter.
A few years later, my Ultra 5 was replaced by a Blade 100. It was... pretty much the same machine. Maybe a tad faster. Meanwhile, we started getting Linux boxes that were comparable/faster, and a lot cheaper. Importantly, our customers were starting to do the same. The Sun salespeople kept deriding them, "But they're not RISC!" And the EDA industry collectively replied, "We don't care..."
There are things I preferred about Solaris over Linux -- the NFS implementation (I'm still finding weird dirent issues in Linux's implementation today), and memory management (I get Linus's reasoning about overcommitting memory and the OOM killer, but why-oh-why does the OOM killer seem to target sshd first, no matter how much I tell it never to kill that process?). But Sun basically lost the race to Intel, and tried to shift to PC platforms too late in the game.
I just noticed that Qt does have a startup edition that's available for $500 (instead of the usual $5000). It's the same as the regular commercial edition, but only sold to businesses with under $250k in yearly revenue.
It's a bit screwy if it's an embedded (non-mobile) device, though; seems they want a per-seat fee on top of that.
Their fix? Add 20-29 to the window. Old:
<define name="_year" extract="year">
<text><![CDATA[(20\d\d|19\d\d|[901]\d(?!\d))]]></text>
</define>
New:
<define name="_year" extract="year">
<text><![CDATA[(20\d\d|19\d\d|[9012]\d(?!\d))]]></text>
</define>
See you in 2030...
These aren't placeholders. The video I've seen shows a corrupted image arguably from another user's camera, and not something you'd show as a placeholder.
Xiaomi's explanation doesn't make much sense given the symptoms -- why would caching produce issues only when the network is problematic, and then return corrupted images? (Usually, when I see "caching," I think: You messed up your CDN configuration and are allowing the CDN to cache private data, but that isn't the case here.)
My best guess given the symptoms is that they weren't checking for an error condition on the server. Basically, the server expects the camera to upload a still image; it allocates a buffer based on the Content-Length header, but due to the aforementioned network issues the connection dies during the upload. Because they were ignoring the error condition, the upload buffer was only partly initialized: the beginning of the buffer holds some uploaded data, but the remainder is whatever happened to be in memory beforehand -- and the whole buffer gets stored as the image you uploaded.
If access patterns are very predictable (e.g., this server only handles uploaded images, so the allocated buffer is the same as a previously returned pointer value), you could very well see the image that was previously uploaded. This assumes they're using C or C++ (or features of another language that allow uninitialized memory access) on the server; that's not exactly a common way of coding web servers today, though not infeasible.
I think phone marketers are borrowing a page from restaurateurs.
A nice restaurant might carry a few ridiculously expensive wines, with a menu price of $500 or $1000/bottle or more. They don't expect many people to actually order it (maybe only a half-dozen per year will -- buying the most expensive wine on the menu enables them to flaunt their wealth), but suddenly it makes the $80/bottle wine not seem so ridiculous to the average consumer.
For Apple and Samsung, I'm guessing that having a $1200+ phone enables them to push $800 phones to the average consumer. Even this price point seems ridiculous to me (but I was happy with my iPhone 5, and upgraded to a 7 because my 5 was dying and I needed some radio feature that the 5 didn't have -- probably an LTE band or somesuch).
This is an excellent summary of both the history and design tradeoffs. Hats off.
The one thing I would add: designing ASICs is trickier than designing FPGAs, and both require significantly different skill sets than designing software (whether for CPUs or GPUs). With the advent of cloud-based FPGA platforms (where you can try out your FPGA designs without having to invest in a dedicated hardware lab), I suddenly had folks asking about translating their web-based applications onto AWS F1 instances, as though there were magic translators that could automatically make their software faster, more efficient, etc., without effort. (Translation tools do exist, but they're mostly in the research realm/experimental and not that great.) Too many folks see ASIC and get starry eyed, and don't realize just what an investment it is to tape out a chip.
But the bigger problem is that the forward windshield-pillar blind spot is NOT a problematic blind spot, because you have direct line of sight while looking forward (just move your head).
False. It is a problem that has caused accidents, and a blind spot is any part of your surroundings you can't see *without* moving your head. People don't move their heads while driving; they move them only for a select and usually very slow set of maneuvers. If moving your head during normal operation were the norm, there wouldn't be such accidents, because cars wouldn't have blind spots -- between all your mirrors and looking around the pillars, a driver with a non-static head has nearly 360-degree vision from the driver's seat (blind only straight up and down).
Yet accidents still happen, and we still call them blind spots.
Side curtain airbags haven't helped here. Don't get me wrong -- I'm all for additional safety against side impact accidents. But if (like me) you grew up driving vehicles whose slim A-pillars could barely hide a chihuahua, it's jarring that an entire F-150 can now hide in there at a 4-way stop. It's been an adjustment.
Y'all hear about the geometer who went to the beach to catch some rays and became a tangent?