Comment Security funding is finite (Score 1) 35
Taking money out of present-day threats to prepare for hypothetical ones is a bad strategy - and it is already difficult to get CEOs to spend money on cryptography!
Honestly, I don't think Microsoft gives two shits about locking down tech-savvy at-home users - figuring out how to secure home machines doesn't generate profit.
What they do care about is corporate site licenses for tens of thousands of installs, with wide-ranging support contracts - Fortune 500 companies are very interested in locking down each machine so it can't run untrusted applications. The trick is they also want control over what gets trusted - I have no doubt Microsoft can privatize its AI for big customers and tune it to their needs, at least if there are enough zeros on the purchase order. There is zero chance they will risk those contracts with anything resembling the word "mandatory". (Code signing isn't really a solution here either: most companies lack the mature internal signing, distribution, and key-management processes required to make it work at scale.)
That's the point of the AI - essentially, the idea is to gather metrics on what individuals choose to trust and use that to drive decisions about what to trust by default. The delicate bit is that this has to be shaped by feedback on whether that trust was well earned. The authors tune the AI to favor metrics that suggest trustworthiness and disfavor ones that suggest the opposite. The value of this approach is that it works at scales human review can't... except it stands on shaky fundamentals.
The key problem is that any such tool will have both a false-positive and a false-negative rate - so it becomes a back-and-forth game where malware authors try to mimic legitimately trusted software (manufacturing false positives) while Microsoft tries to drive that false-positive rate down. Unfortunately, AI as it exists today is mostly built to model behavior that is fundamentally cooperative - so while this offers some promise against naively written malware, there is no way to know whether it is effective (or even outright dangerous) against well-prepared, strategic adversaries.
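To see why those two error rates matter at ecosystem scale, here is some back-of-the-envelope arithmetic. All the numbers are hypothetical, chosen only to illustrate the shape of the problem ("positive" here means "classified as trustworthy," matching the trust-gate framing above):

```python
# Purely illustrative numbers: even a classifier with low error rates,
# applied to a large software population, makes a lot of mistakes.
apps_scanned = 1_000_000
malware_fraction = 0.02     # assumed base rate of malicious submissions
fpr = 0.01                  # fraction of malware wrongly marked trusted
fnr = 0.05                  # fraction of benign software wrongly distrusted

malware = apps_scanned * malware_fraction
benign = apps_scanned - malware

malware_trusted = malware * fpr   # slips past the gate entirely
benign_blocked = benign * fnr     # broken installs, support tickets

print(f"malware admitted: {malware_trusted:.0f}")   # 200
print(f"benign blocked:   {benign_blocked:.0f}")    # 49000
```

Note the asymmetry: the adversary only needs to land in that small "malware admitted" bucket once, while every blocked benign app is a cost Microsoft pays up front.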
My professional opinion is that AI involves too many "maybes" and "guesses" to be valuable against human adversaries - virus authors have already proven themselves up to the task of innovating against code scanning, and it stands to reason they can learn to exploit AIs that lack actual intuition. This is boldly underlined by research showing that a model's tuning can be used against it if exposed - forcing Microsoft into the ugly position of either uploading all software to the cloud for analysis or risking costly exposures.
What you are supposed to do is write the code with a maintainable structure first, then measure to determine where optimization is actually necessary - that way you don't waste time optimizing something that turns out not to matter in the important use cases.
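A minimal sketch of that "measure first" workflow, using Python's stdlib profiler. The function and data here are made up for illustration, not from any real codebase:

```python
# Write the readable version first, then profile it before optimizing.
import cProfile
import io
import pstats

def build_report(rows):
    # Deliberately simple, maintainable implementation.
    return "\n".join(f"{name}: {value}" for name, value in rows)

rows = [(f"item{i}", i * i) for i in range(10_000)]

profiler = cProfile.Profile()
profiler.enable()
report = build_report(rows)
profiler.disable()

# Inspect the top hot spots; only optimize what actually shows up here.
stats = pstats.Stats(profiler, stream=io.StringIO())
stats.sort_stats("cumulative").print_stats(5)
```

If the profile shows `build_report` is nowhere near the top in real use, you leave it alone - that's the whole point.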
The problem is that a lot of key advantages arrive either all at once or not at all - blacksmithing was around for thousands of years, but without key ingredients like the Bessemer process it simply couldn't scale to the extent required for industrialization.
I care... but not that much.
The biggest problem in crypto is just getting people to care at all - look at the sheer scale of data breaches where internal connections and databases are completely unprotected. Unfortunately, that often means purposely setting the bar as low as we can without scaring off users - for instance, better to have users encrypting with only 3DES than with nothing at all. This is closely coupled to integrating low-level crypto primitives (e.g., AES) into high-level libraries accessible to typical developers (e.g., TLS 1.2) across a wide range of platforms and devices.
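That last point - high-level libraries for typical developers - is visible in Python's stdlib. A sketch, assuming nothing beyond the standard `ssl` module:

```python
# A typical developer never touches AES directly: create_default_context()
# picks protocol versions, cipher suites, and certificate checks for them.
import ssl

context = ssl.create_default_context()
print(context.check_hostname)                    # hostname verification is on
print(context.verify_mode == ssl.CERT_REQUIRED)  # certificates must validate
# Wrapping a socket with this context is all most applications need;
# the underlying AES/ChaCha machinery stays out of sight.
```

Compare that to hand-rolling AES modes and key exchange - the gap between those two experiences is exactly why adoption hinges on the high-level wrappers.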
Today we're facing serious headwinds from national governments and corporations suggesting that it is better to leave people unprotected in order to better support surveillance. Add in a frustrating muddle of patented algorithms and copyrighted libraries, and a lot of organizations are happy to make a token effort and then drop the subject...
I'm simply not optimistic enough to expect quantum-resistant algorithms to be widely adopted anytime soon - the unfortunate reality is that any adversary who can afford a quantum computer can already breach just about anyone for a whole lot less.
Relevant xkcd:
https://xkcd.com/1838/
https://xkcd.com/1897/
Same is true in IT - it's usually taught from exactly the same textbooks. Big schools provide networking and brand recognition, but as far as content goes, mastering it is really on the individual student.
Seems like making all transactions public in a permanent distributed ledger is the exact opposite of hiding assets...
Not to mention the blockchain permanently links the buyer to the seller - so as soon as the seller becomes known as a sanction target, the buyer now has a serious risk of being marked as an "associated person" and having their assets seized by exchanges.
The main problem with bitcoin as it exists today is that you are very limited in what you can do with it without access to exchanges - no one is selling $100M luxury yachts or mansions for bitcoin. The reality is that if you are on a sanctions list, you run a very real risk your bitcoin becomes radioactive and no one will touch it - worth less than the bits it's printed on.
10 months of debugging, staring at a couple of octets? That's an almost inhuman amount of patience... I'm kinda glad it paid off for him - it's rare to find something like this!
You're saying coal isn't burned to create cheap electricity? Like, say, this one? https://arstechnica.com/tech-p...
It's nothing but greenwashing fantasy to imagine the whole blockchain is being driven on excess renewable energy.
Editors fall prey to the anti-pattern that garbage news is better than no news at all. Or is that news? I'm confused now.
The sole purpose of a social media app is to gather invasive data from users - things you can't typically access through a browser session (which is plenty invasive on its own). Considering the site is subject to the same laws as any other company, their lawyers are all but certain to steer clear of the liability that comes with platforming extremists - this is just a way to tap into some of that sweet Facebook money by exploiting a partisan fanbase.
Were there fewer fools, knaves would starve. - Anonymous