Pfffft. I should have expected Korny jokes. (Ba-dum-csh.)
Also the scientific method of formulating a hypothesis and performing an experiment to test it.
Slow down, cowboy! The article is about anthropology — 'social studies' — with some speculation thrown in. It doesn't have to be science, just 'news for nerds', whatever that means.
Now let's say some well-meaning and compassionate politicians decided to take care of them, and built high-rise apartments for everyone who was having trouble paying rent to live in. Sounds good and compassionate, right?
By Hanlon's razor as modified by Clark's law, this is more likely a simple backfire of what was intended to be a helpful act rather than some sort of evil plot; nothing deliberately planned would have gone so horribly right.
Starting with an assumption is not an action that is compatible with science.
Wait, what? At its core, to apply the scientific method is to create a hypothesis and attempt to falsify it. Creating a relevant hypothesis usually requires some sort of initial assumption, though one must be open to the possibility the process will demonstrate that said initial assumption is incorrect.
The technical solution would be to design their storage in such a way that it is _impossible_ for the company to read a customer's data.
"Impossible to read a customer's data" is not a strong enough condition. For example, if a provider uses a convergent encryption scheme, they clearly cannot read their customers' data, yet it becomes possible to deduplicate encrypted data — and consequently to identify everyone who has copies of a given plaintext, or perhaps to guess at a password embedded in a configuration file.
A shell / powershell script is plain text.
Well then, the obvious solution is to disable #! recognition and set-executionpolicy restricted, so shell scripts become useless!
Oh well, back to the drawing board.
Doesn't this create an awkward tension between two objectives of lawful authority? On the one hand, collecting taxes on income generated by commerce; on the other, extinguishing that commerce that tempts the wrath of other laws of the land (and with it the taxable income that commerce ostensibly generates).
This is oddly close to what I think DRM ought to be: advisory, not enforcing. Remove the accountability aspect, not least because it's a farce that leaves the most recent honest party holding the bag, and you have my concept of an ideal DRM engine: provenance meta-tags that let you know what color your bits are, which you can use if it affects you or ignore if it doesn't, leaving no rights-holder the wiser no matter what course you take.
Accountability-oriented DRM, which prevents no action but forces your use of certain combinations of certain colors of bits into public record, would be prone to false positives. Pulled in some GPL code to a local build of 7-Zip? Chances are the other code doesn't have a GPL exception to allow linking against the non-Open unrar, so the resulting software likely cannot be conveyed (in the GPLv3 sense) at all — but creating or using the resulting combined work infringes no one's rights, and shouldn't require you to post public notice that you have created such a work if you have no intent to convey it.
All I see here is a bunch of stuff that all depends on trusted third parties... and in security circles, "trusted" means "can screw you over if they act against your interests". In this case it relies on trusted identity providers, labeled 'Verification Agent' in the paper.
It all breaks down if a verification agent is compromised, and even a single breached identity can have severe consequences: once information is in the hands of bad actors, the accountability system has no way to trace it.
The authors effectively admit that this entire mechanism relies on the honor system; it explicitly cannot strictly enforce any access control, because in the context of medical data access control may stand between life and death.
Finally, the deliberate gathering of all this information-flow metadata would add another layer to the panopticon the net is turning into.
My point is that there is probably some dollar value at which the cost to find the next vuln would never increase beyond that... That's what I'm calling the infinite bug threshold.
In other words, some point at which the marginal cost of finding a zero-day roughly levels off; your "infinite bug threshold" is the hypothetical limiting case where that cost goes completely flat and the price elasticity of supply approaches infinity.
On the other hand, there is clearly a segmented market of consumers: software vendors themselves, white hat groups, script kiddies, organized crime rings, information warfare organizations, and covert intelligence-gathering organizations, to name a few. The same vulnerability will have a significantly different price elasticity of demand for each type of consumer, and I expect covert intelligence-gathering organizations backed by world-power governments to have awesomely inelastic demand curves.
Whether or not the marginal cost of finding a new vulnerability levels off, there will almost invariably be some actor willing and able to pay the price to obtain it.
Bad cooling? As long as you don't end up with Din's Fire, I guess. Though if it's that loud, don't they complain about that awful din? I guess you wouldn't have to worry about a Power failure, though.
(Sorry, that last one's a gold... er... rupee mine of jokes.)
One laptop of mine with amazing internal wireless capabilities got named khaydarin for its preternatural ability to communicate across space. The thing could get a signal just about anywhere, and was even able to call for help with VoIP over WWAN when my vehicle broke down somewhere my mobile phone got no service at all.
What is to prevent CLONING (copying) bitcoins?
Because there's nothing to clone? Bitcoin basically acts like a secure multi-party accounting system. Bitcoin balances are held in the form of collections of unspent transaction outputs. Once a transaction spending a previous transaction output is committed to the blockchain (in other words, posted to the global general journal), any other transaction that attempts to spend that transaction output is invalid.
What can be cloned are the private keys that confer the right to spend a transaction output. Two parties having possession of the same private key simply allows either party to authorize the spending of funds signed over to the corresponding public key; it does not provide a way to clone the coins themselves. There's a fair amount of malware floating about that tries to obtain your private keys so it can sign over all your unspent outputs to its master.
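The double-spend protection described above can be sketched as a toy ledger. The class and names here are illustrative only — they are not Bitcoin's actual data structures, and signature checking is waved away entirely.

```python
# Minimal sketch of why "cloning" a coin buys you nothing: balances are
# unspent transaction outputs (UTXOs), and the shared ledger rejects any
# second spend of the same output.
class Ledger:
    def __init__(self):
        self.unspent = {}   # txid -> amount
        self.counter = 0

    def add_output(self, amount):
        txid = f"tx{self.counter}"
        self.counter += 1
        self.unspent[txid] = amount
        return txid

    def spend(self, txid):
        # Whoever holds the private key can authorize this -- but only
        # once. A copied key just means two parties race to spend the
        # same output; the loser's transaction is simply invalid.
        if txid not in self.unspent:
            return False    # already spent (or never existed)
        amount = self.unspent.pop(txid)
        self.add_output(amount)  # new output signed over to the payee
        return True

ledger = Ledger()
coin = ledger.add_output(50)
assert ledger.spend(coin) is True    # first spend commits
assert ledger.spend(coin) is False   # the "cloned" copy is now worthless
```

The total of unspent outputs is conserved by every spend, which is the accounting-system property doing all the work.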
As for what's backing it: nothing but agreement among the people involved with it that it has some value, rather like a number of physical commodities that have been used as money in the past. As for the computation cycles expended "mining" it? That's just a measure of the resources you are willing to commit to the network's integrity, and (on average) you should expect to be rewarded proportionately for providing that service.
I have access to both a MFC-7840W and a MFC-9325CW.
Well, there's your problem.
In my experience, Brother HL-series black lasers get questionable after about 150,000 duplexed pages due to roller wear, but other than that they're solid. I don't much care for their MFCs, though; besides the build becoming awkward due to integrating a scanner, the fax capability is usually about as useful as a USB pet rock, and the driver software they come with is frankly crap -- doubly so if you use the network-based interfaces.
As for personal use, my venerable HP LaserJet 4L continues to serve, and its no-corona-discharge-wire design is a nice touch. Old enough that it could probably use some new rollers to reduce misfeeds, but it still works.
Those who can, do; those who can't, simulate.