I am skeptical of the idea, tbh. For commonly used commands the effort of learning them is relatively small and the rewards are great (if I ever had to type "change the permissions such that" instead of "chmod" I would just give up and use a GUI).
I believe the tradeoff of the CLI is between working more efficiently (by typing commands and not having to interrupt your flow by reaching for the mouse)
and a steeper learning curve (learning commands and their params, config-file locations and their syntax, etc.).
This shell seems to provide a lot of features that most people are not interested in, or for which they already use specialized tools. It is unclear to me why one would prefer such a shell for executing SQL or modifying the DOM of a webpage rather than spawning a full-featured query tool or Firebug, respectively.
Their syntax coloring looks pretty poor, and they seem to ask you to "double-click" whenever you want to do anything. I am currently using terminator + fish, which I can highly recommend: it makes me way more productive, has very interesting completion features, and uses a really large number of colors to make things more easily distinguishable.
The fact that you can move things around is quite cool, but I don't see any significant advantages, although I've only watched the first ~6 minutes of the video. Can someone competent perhaps weigh in on what this brings?
My bad, it's n^s instead of s^n. I don't know where the factorial in your analysis is coming from, or how it magically disappeared at the end of your comment.
You have a space of `m' accounts, `n' common passwords and a threshold `s'.
The first step is to find a subset of `s' users who all have easy passwords. There is no better
way than to try all such subsets, which gives binom(m,s).
For such a subset, you have to try all assignments of passwords. You have s
people, each of whom can have one of n passwords. That's n^s tries.
The total time is binom(m,s) * n^s * C, where C is the time it takes to test whether your guess is correct.
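The counting argument above can be sketched in a few lines; the parameter values below are purely illustrative, not from either comment.

```python
from math import comb

def guessing_cost(m, n, s):
    """Number of verification calls (ignoring the per-test cost C):
    comb(m, s) subsets of s accounts, times n**s password assignments
    per subset, since each of the s accounts can hold any of the n
    common passwords."""
    return comb(m, s) * n ** s

# Hypothetical parameters: 1000 accounts, 100 common passwords,
# threshold of 3 accounts.
print(guessing_cost(1000, 100, 3))  # -> 166167000000000
```

Even for these small numbers the search space is ~1.7e14 tests, which is the point of making each test expensive.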
The latter is not feasible, because you don't need to guess passwords, you need to guess user-password pairs.
Yes, you could. This doesn't have any better security guarantees than just doing that.
His whole argument, however, is that the salt needs to be re-entered into memory manually
after a system crash, whereas with his mechanism memory gets the needed data automatically
after a few users log in.
Yes, it does make it more secure. The security of the hash files relies on a secret stored in memory. To
get that secret, you either need to know the passwords of K users (user i has password p_i) or you need
access to memory. The point is that access to disk is not sufficient (regardless of how weak the users'
passwords are).
Plus, for most websites, you can just register 10 accounts, giving you the 10 known passwords.
In any case, the threat model is that you can access all the data in the DB, but not all the data in memory (as
is the case with SQL injection and most other attacks). The
memory is used to cache the first n-1 passwords; only after the system crashes
and the cache data is lost does the n-th user need to wait.
But in such a threat model, the problem can be solved in a far simpler fashion: just store in memory a key
with which all the hashes are encrypted. Write the key down on a piece of paper. If the machine crashes,
just reload the key from the paper into the server's RAM.
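A minimal sketch of that simpler scheme, keying the stored hashes with a secret ("pepper") that lives only in RAM. All names and parameters here are illustrative, not from any particular system.

```python
import hashlib
import hmac
import os

# The in-memory secret. After a crash, an operator would type it back
# in from the paper copy instead of generating a fresh one.
PEPPER = os.urandom(32)

def hash_password(password: bytes, salt: bytes) -> bytes:
    # Salted, slow hash as usual...
    inner = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    # ...then keyed with the in-memory secret. The value stored on disk
    # is useless to an attacker who only has the DB, regardless of how
    # weak the password is.
    return hmac.new(PEPPER, inner, hashlib.sha256).digest()

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt), stored)
```

Disk access alone yields only HMAC outputs under an unknown key, so offline dictionary attacks against the DB dump get you nothing.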
And this is the only threat model you can work with anyway: if you assume the attacker gets root,
then there's obviously not much security left to preserve. This is why authentication should be interactive.
Can you detail how this can support any of the features of a relational database: filtering rows, joining tables, aggregation, ordering?
To clarify the misconceptions in other posts (I attended a talk of hers a few days ago), here are the encryption types involved:
* RND (salted symmetric-key encryption) - used for columns where no SQL manipulation is needed
* DET (unsalted symmetric-key encryption) - used for columns that need to be looked up by equality
* Partially homomorphic encryption - used for aggregations such as SUM()
* Order-preserving encryption - useful for inequality WHERE clauses, indexes, and aggregations such as MIN()
* Searchable encryption - allows something like ILIKE on text columns
OPE is the most dangerous, but is rarely needed for the most sensitive fields. They've run CryptDB on top of phpBB and
some other things with acceptable overhead. Let me know if you have other questions.
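To illustrate why DET columns support equality lookups, here is a toy stand-in built from HMAC, a keyed pseudorandom function. Note this is illustrative only: HMAC is not invertible, so it is strictly an equality index rather than encryption, and CryptDB itself uses a deterministic block-cipher construction, not this.

```python
import hashlib
import hmac

# Hypothetical per-column key; in a real system it would be derived
# from the user's master key, not hard-coded.
COLUMN_KEY = b"illustrative-column-secret-key!!"

def det_token(value: str) -> str:
    """Deterministic: the same plaintext always maps to the same token,
    so the DB server can match WHERE name = 'alice' on tokens without
    ever seeing the plaintext."""
    return hmac.new(COLUMN_KEY, value.encode(), hashlib.sha256).hexdigest()

stored_token = det_token("alice")   # what sits in the DET column
query_token = det_token("alice")    # what the proxy sends in the query
assert stored_token == query_token  # equality is preserved
assert det_token("bob") != stored_token
```

The price of determinism is that equal values are visible as equal ciphertexts, which is exactly why RND is preferred for columns that never need lookups.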
Well, other sources do not typically threaten to send you to a special place to burn forever.
The ability to evolve the ability to evolve may itself evolve.
I wasn't aware TLS-SRP patched browsers exist. In any case, these mechanisms will likely be adopted only if they can be embedded in HTML. Few designers are going to sacrifice their fancy login form for that ugly-ass browser window that asks for credentials. But allowing proper authentication in HTML forms would imply that you get all or nothing. Either all HTML forms that contain an input type="password" must use TLS-SRP for sending the credentials, or this cannot be adopted. Otherwise a MITM would simply alter the form to switch from secure authentication to plaintext authentication.
To the best of my knowledge, HSTS is merely the Strict-Transport-Security response header; the lists are just something
extra. The "not very useful" comes from the fact that you are still unprotected the first time you access the website. If the
attacker is present the first time you visit a website, he can strip that header via MITM. Otherwise you should be fine.
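For reference, the header in question is a single response line (directive values here are just a common example):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Once the browser has seen it over HTTPS, it rewrites subsequent http:// requests to that host for max-age seconds, which is exactly why the very first, header-less visit is the vulnerable one.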
No, I meant what I said: "https.example.com" is an example of a host supporting HTTPS, yet the browser accesses
it by default as "http://https.example.com". You don't seem to have understood what I said at all.