As a security consultant, I've run phishing campaigns for quite a few clients, usually as part of a pen test where we'd use any captured credentials as a foothold for further testing. With a convincing email and website combination, I typically expect about 1-5% of recipients to click the link and enter their credentials.
Ten years ago, I might have placed most of the blame on users for missing obvious warning signs in the email and on the page behind the link, but these days I put the majority of the blame on the engineers and developers building the legitimate systems those employees use.
10-20 years ago, one could be pretty sure that any credentials for a given company (let's call them "TransferLicious") would be entered somewhere on the website at the single domain associated with that company ("transferlicious.com"). Over time, devs and engineers embraced vanity/novelty domains for a variety of purposes, and now the same company might legitimately have login forms on "transferlici.os", "xfrlcs.io", "transferliciousbanking.com", and so on. Those URLs might be further masked by link-shortening services.
How many enterprise/social-media single-sign-on services involve redirections to other domains? Now the problem is multiplied: the user's employer uses "BlueSkies SSO", and its devs and engineers do the same thing. Am I being sent to a login page on "blueski.es" instead of "online.blueskies.com" because it's a phishing attack, or because a BlueSkies dev thought it would be "sick" to use a vanity domain?
Browser vendors have made hiding technical information from users a priority, and a huge number of users are on mobile devices that don't support things like hovering the cursor over links anyway, so there's another "how to spot a malicious link" technique down the drain.
Users shouldn't have to care about details like that in the first place, but the people building the systems and browsers have done such a terrible job that there aren't even any consistent rules that users can keep in mind. This makes it easy for me to phish people during pen tests, which is great, but it's sad from just about every other perspective.
If malicious content isn't written to disk[1], it's much less likely to be picked up by AV/antimalware components, because most of those hook into file read/write operations within the OS for their real-time protection. Additionally, this technique can sometimes be used to bypass application-whitelisting tools, if the tool injecting the malicious code into process memory is already on the whitelist. That's why it's treated as something special/"magic".
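To make the distinction concrete, here's a minimal, benign sketch (my illustration, not from any real tool) contrasting the two paths. It uses Python's exec as a stand-in for in-memory execution; real fileless tooling injects native code into a process's memory, which isn't shown here. The point is just that only the first path creates a file for filesystem hooks to scan.

    import os
    import subprocess
    import sys
    import tempfile

    payload = "print('payload ran')"

    # File-based: the payload touches disk, so the file read/write hooks
    # that real-time AV relies on get a chance to inspect it.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(payload)
        path = f.name
    subprocess.run([sys.executable, path], check=True)
    os.remove(path)

    # "Fileless": the same payload is compiled and executed from a string
    # in memory; nothing is written to disk, so file-I/O hooks see nothing.
    exec(compile(payload, "<memory>", "exec"))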
Post-exploitation tools that avoid writing malicious code to disk are inherently different from more basic tools which *do* write the code to disk. If not "fileless", how would you suggest referring to them?
[1] Doesn't matter if it's magnetic media, SSD, RAM disk, etc., but it needs to be something the OS considers a "disk", not just a random place in memory.
When they analyze all the data that exists, that's the opposite of cherry picking. [Geoffrey Landis]
Indeed. I made this same point after Jane/Lonny baselessly accused Layzej of "cherry-picking" when Layzej loaded all the UAH data. Jane/Lonny then suggested cherry-picking at 1998, and keeps insisting that this somehow isn't "cherry-picking".
Ironically, I even gave Jane/Lonny R code which calculates trends and accelerations of global mean sea level (GMSL) data. That graph accounts for autocorrelation; the red lines are 2-sigma uncertainties. The trends and accelerations are calculated over periods which all end at 2009.5. The new significance.zip (backup copies) contains my R statistics folder, including many data sets.
Again, note that this approach avoids cherry-picking by using the entire dataset. Also note that all the best-fit accelerations are positive.
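The R code itself isn't reproduced here, but for flavor, here's a minimal sketch of that kind of calculation in Python (not the author's code). The file and column names are hypothetical, and a quadratic fit with Newey-West (HAC) standard errors is just one common way to account for autocorrelation; the author's R approach may differ in detail.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("gmsl.csv")           # hypothetical file: columns year, gmsl_mm
    df = df[df["year"] <= 2009.5]          # every fit period ends at 2009.5

    for start in (1880, 1900, 1920, 1940, 1960):
        sub = df[df["year"] >= start]
        t = sub["year"].to_numpy() - sub["year"].mean()    # center time
        X = sm.add_constant(np.column_stack([t, t ** 2]))  # 1, t, t^2
        fit = sm.OLS(sub["gmsl_mm"].to_numpy(), X).fit(
            cov_type="HAC", cov_kwds={"maxlags": 12})      # autocorrelation-robust
        accel = 2 * fit.params[2]          # acceleration (mm/yr^2): 2x quadratic coeff.
        accel_se = 2 * fit.bse[2]          # its standard error
        print(f"{start}-2009.5: accel = {accel:+.4f} +/- {2 * accel_se:.4f} mm/yr^2 (2 sigma)")

Varying the start year while holding the end fixed at 2009.5 matches the "all periods end at 2009.5" setup, and the longest fit uses the entire dataset rather than a cherry-picked start date.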
Once again, that's consistent with this NOAA article:
"Sea level is rising at an increasing rate
And once again, that's consistent with the 2013 IPCC AR5 SPM:
"Proxy and instrumental sea level data indicate a transition in the late 19th to the early 20th century from relatively low mean rates of rise over the previous two millennia to higher rates of rise (high confidence). It is likely that the rate of global mean sea level rise has continued to increase since the early 20th century."
That's also consistent with the US NAS's statement that "Sea level is rising faster in recent decades".
You can't cheat the phone company.