Now that the requirement for Dual_EC_DRBG has been dropped from NIST's checklist, it would be possible for LibreSSL to meet FIPS requirements without that troublesome component. Most of FIPS certification is about throwing money at testing vendors, as OpenSSL themselves have described. Doing that would still be incompatible with the crusade LibreSSL is on, though, because the result is believed by some to be less secure than using a library that isn't bound to the FIPS process. I don't see those developers ever accepting a process that prioritizes code stability over security.
Oh goodie, a lesson on ABX testing I didn't need. Carbonation is more obvious than the taste differences people often fail to confirm in blind tests. Slate even did some coverage on container carbonation differences talking about it. According to that, I didn't necessarily describe the cause and effect correctly in my quick comment--it may be from gas escaping rather than a bottling difference--but the effect I was describing is real.
Have you ever noticed the difference between flat soda and fresh? If so, why do you believe carbonation level and bottle-specific characteristics are never distinguishable? There's a motion component to it. A major reason flat soda tastes different is that you expect a different taste from the bubbles, whether or not there even is a taste difference outside of that. Your perception of carbonation turns into a taste even though it's not really a taste, exactly. The same way that knowing the brand alters how you taste--the bit that screws up non-blind taste tests--sensing the carbonation in your mouth changes how you taste too.
Fine, you say that's still me claiming something, not a test result. I spent five minutes looking for a blind test showing a difference between two Coke product packages that also included observations on how the product's "fizziness" affected preference. Here's a recent blind comparison with untrained testers doing exactly that. I don't think it's studied more because it's too obvious to bother.
Serious Coke drinkers can even tell what type of container the soda was stored in. Larger containers are carbonated more heavily so they can survive being opened more times, and that makes them taste different.
What? No. Mouse vs. Keyboard shows that the mouse is better for moving around, compared to one of the UNIX-style editors where moving the cursor takes many keys. That's it. If you are doing a job other than moving the cursor and/or text around, keyboard beats mouse. Navigation is the thing the mouse is good at.
The context for TFA is writing new content, and there a save keyboard shortcut is far more efficient than anything else. It's only when you change your focus from there to editing that the mouse becomes a viable alternate navigation method.
All "NoSQL" means is that the database doesn't use SQL as its interface, nor the massive infrastructure needed to implement the SQL standard. This lets you build some things that lighter than SQL-based things, like schemaless data stores. There several consistency models that let you have a fair comparison. It's not the case that NoSQL must trade consistency for availability in a way that makes it impossible to move toward SQL spec behavior.
- Less durability for writes. Originally PostgreSQL only offered high durability and NoSQL low. Now both have many options, ranging from "commit to memory is good enough to move on" up to requiring that multiple nodes have the data first.
- No heavy b-tree indexes on the data. Key-value indexes are small and efficient to navigate.
- No complicated MVCC model for mixed read/write workloads.
Today NoSQL solutions like MongoDB still have a better story for sharding data across multiple servers. NoSQL also gives you flexible schemaless design, scaling by adding nodes, and simpler/lighter queries and indexes.
PostgreSQL is still working on a built-in answer for multi-node sharding. A lot of the smaller NoSQL features have been incorporated, with JSON and the hstore key-value index being how Postgres does that part. Both systems have converged enough that either one is good enough for many different types of applications.
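If you want to see what that convergence looks like from code, here's a rough sketch using libpq; the table name "docs", the connection string, and the sample document are all made up for illustration. It stores a schemaless JSONB document and relaxes durability for the session with synchronous_commit, the same class of knob NoSQL systems expose as write concern.

    /*
     * Rough sketch only: table name, connection string, and the sample
     * document are assumptions, not anything from a real system.
     * Build with: cc pg_jsonb.c -lpq
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    static void die(PGconn *conn, const char *msg)
    {
        fprintf(stderr, "%s: %s", msg, PQerrorMessage(conn));
        PQfinish(conn);
        exit(1);
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK)
            die(conn, "connect failed");

        /* Schemaless storage: one JSONB column, no fixed schema. */
        PGresult *res = PQexec(conn,
            "CREATE TABLE IF NOT EXISTS docs (id serial PRIMARY KEY, body jsonb)");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            die(conn, "create table failed");
        PQclear(res);

        /* Relaxed durability for this session: commit returns before the
         * WAL is flushed to disk.  Other levels (local, on, remote_write,
         * remote_apply) trade speed back for stronger guarantees, much
         * like NoSQL write-concern settings. */
        res = PQexec(conn, "SET synchronous_commit = off");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            die(conn, "set failed");
        PQclear(res);

        res = PQexec(conn,
            "INSERT INTO docs (body) "
            "VALUES ('{\"user\": \"alice\", \"tags\": [\"demo\"]}')");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            die(conn, "insert failed");
        PQclear(res);

        /* Reach into the document with the jsonb ->> operator. */
        res = PQexec(conn, "SELECT body->>'user' FROM docs");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            die(conn, "select failed");
        for (int i = 0; i < PQntuples(res); i++)
            printf("user: %s\n", PQgetvalue(res, i, 0));
        PQclear(res);

        PQfinish(conn);
        return 0;
    }

None of that needs anything outside stock PostgreSQL; the schemaless part and the tunable-durability part are both plain SQL-level features at this point.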
ZFS will also corrupt itself in situations where the drive lies about writes. Running ZFS with unreliable writes has the same properties as running without NVRAM-protected storage, which "can lead to data loss, application level corruption, or even pool corruption".
No, it's OCZ. ext4 is the most popular filesystem that expects good behavior from drive write caches, so of course it also has the most problem reports. The way write barriers work in ext4, the filesystem struggles when hardware lies about data being flushed to disk. See ext4 and data loss for an introduction.
As outlined there, ext3 gets lucky in some situations that ext4 just doesn't tolerate, so some people see that as a bug in ext4. But the reason for the change is improved performance. You just can't get a fast filesystem and rugged behavior in the face of lying drives at the same time; you have to pick a side there. In the classic "good-fast-cheap--pick two" trio of trade-offs, OCZ always picks cheap and fast.
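If it isn't obvious why a lying drive can't be fixed from software, here's a minimal C sketch of the contract every filesystem journal and database ultimately relies on; the file name is just an example.

    /*
     * Minimal sketch of the durability contract in question.  write() puts
     * data in the OS cache, fsync() asks the kernel to push it down and
     * issue a cache flush to the drive.  A drive that acknowledges that
     * flush without committing the data to stable media still lets fsync()
     * return 0, so neither the application nor the filesystem's journal
     * barriers can tell anything went wrong.
     */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "committed record\n";

        int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, msg, strlen(msg)) != (ssize_t) strlen(msg)) {
            perror("write");
            close(fd);
            return 1;
        }
        /* The only durability promise the software stack can make. */
        if (fsync(fd) != 0) {
            perror("fsync");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }

ext4's write barriers are built out of the same cache-flush request; when the hardware fakes the acknowledgement, the correctness argument falls apart no matter which filesystem is on top.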
Bad drives aren't tolerated by ZFS or btrfs either. It's just that ext4 is deployed on far more servers than they are.
When I give someone a fizzbuzz-style program to do, I point out to them that part of my grading is how it handles errors. The example programs people swipe online don't help very much, because they usually don't worry about things like boundary checking. If I can break someone's fizzbuzz by giving it a negative number, that's a failing grade. That is much, much more important to me than language mastery. For C programming as one example, I'll trade you a dozen people who know the correct order of arguments for calloc for one who knows how that library call might fail.
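To make it concrete, here's roughly what I'm looking for. The program itself is just a sketch made up for this comment, but the range check and the calloc check are the parts the canned online examples usually skip.

    #include <stdio.h>
    #include <stdlib.h>

    /* Returns 0 on success, -1 on bad input or allocation failure. */
    static int fizzbuzz(long n)
    {
        if (n <= 0) {
            fprintf(stderr, "fizzbuzz: count must be positive, got %ld\n", n);
            return -1;
        }

        /* calloc isn't really needed here; it's included to make the point
         * about knowing how a library call fails, not just its arguments. */
        char *buf = calloc(64, sizeof *buf);
        if (buf == NULL) {
            fprintf(stderr, "fizzbuzz: out of memory\n");
            return -1;
        }

        for (long i = 1; i <= n; i++) {
            if (i % 15 == 0)
                snprintf(buf, 64, "FizzBuzz");
            else if (i % 3 == 0)
                snprintf(buf, 64, "Fizz");
            else if (i % 5 == 0)
                snprintf(buf, 64, "Buzz");
            else
                snprintf(buf, 64, "%ld", i);
            puts(buf);
        }

        free(buf);
        return 0;
    }

    int main(int argc, char **argv)
    {
        long n = 100;  /* default when no count is given */

        if (argc > 1) {
            char *end = NULL;
            n = strtol(argv[1], &end, 10);
            if (end == argv[1] || *end != '\0') {
                fprintf(stderr, "usage: %s [count]\n", argv[0]);
                return EXIT_FAILURE;
            }
        }
        return fizzbuzz(n) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }

Feed that a negative number and it tells you why it refused instead of silently printing nothing or spinning forever.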
I'm very impressed if someone finishes a program that calculates an infinite series in a 15 minute code interview.
Most programmers fall into the average range
Most people fall into the average range. That's how normal distributions work. It's also important to remember that 90% of everything is crap.
Damn it, Elwood, the Bluesmobile shouldn't be carrying around Uber fares.
The FCC is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, unless you slow it to modem speeds.
This is the hard way to mess with douchebags who use this app. The easy way is to create a throwaway account, sell a random parking spot outside where you live/work, and then get a good laugh at the buyer who shows up. Watching a hipster asshole circle the block for 15 minutes, frustrated "their" spot isn't open...now that's quality entertainment.
What's that? You'll get a bad rating and not be able to do that again? Yeah, that Uber feedback model works fine when there are a small number of sellers relative to buyers. For this system to be useful on SF streets, you need to capture the one-shot traffic that parks then leaves. Anything you tailor to make it easy for a city visitor to sell a spot will be easy to abuse for fake sales.
$100K for a 4-year degree is a cheap school now. Take a look at college ROI charts. Top schools can easily hit people with a bill over $200K today.