IIRC, just a simple TRIM command can zap all data from a drive with no chance of recovery, no way, no how... and all modern operating systems routinely use TRIM to return freed-up space to the SSD's controller.
I wonder how long it will be before a phone takes over the desktop role in a meaningful way (assuming a docking station). We have had some attempts at this, especially the Motorola Atrix line (RIP), which was pretty good, although the best use (IMHO) was as a Citrix receiver.
Already, we are seeing the tablet/desktop line blur as Microsoft's Surface Pro models get better. I wouldn't be surprised to see, in a few years, a phone with 256-512 GB of SSD usable in a docking station for basic desktop functionality, with USB 3.1 ports, maybe even Thunderbolt ports.
: Would be nice to have a multiplatform F/OSS project comparable to Citrix XenDesktop. No, VNC with its eight-character max password does not count. X11 over SSH is good, but doesn't play well with MS-Windows based items.
: The Pro is the key word... The plain old Surface is ARM based. The Pro is an x86-64 machine.
I wonder how many generations of ransomware we will see before backups come back into "style". It used to be in the '90s that people actively did some type of backups, and even PCs shipped with some form of tape drive. Then disks got cheap and offsite storage became viable, so backups were not done, or if done, were just kicked to the cloud.
Any backup is better than none, but I wouldn't be surprised if the next generation of ransomware either encrypts files slowly (using a shim driver to decrypt on the fly until it is done, then zapping all decryption keys and telling the user to pay up), or, if it notices a backup program running, actively or passively corrupts it... or just erases the hard disk or the file share being backed up to. A simple TRIM command would make the data on an SSD unrecoverable. An overwrite of a directory synced with a cloud service will make that unrecoverable too.
I wouldn't mind seeing tape come back, as it isn't slow and it is relatively cheap (I've seen ads for LTO-6 tapes for $10 each.) The drives are pricey, but tapes are reliable, and LTO-4 and newer have AES-256 encryption in hardware (very easy to turn on, be it by third-party software, the tape silo's web page, or the backup utility.) A tape sitting on a shelf takes zero energy to store (other than HVAC), and if dropped, unless there is major physical damage, the media will almost certainly be usable.
Will tape be 100% proof against malware? Nope. However, it keeps the data offline, so a single "erase everything" command won't touch it. One can buy WORM tapes to protect against erasure/tampering, or simply flip the write-protect tab.
In a ransomware scenario, WORM tapes would be very useful, especially if the malware tries to force an erase on all backups. The fact that tapes tend to be offline brings even more security: if the tape isn't physically in the drive, it can't be touched. Again, nothing is 100%, but the bar for ransomware to destroy all backups is a lot higher with offline media than with cloud storage or an external HDD.
I wouldn't mind seeing backups be done again, and done in a smart, time-tested way... done to local, archival-grade media that is very inexpensive yet super reliable.
: I think there is a market niche for USB 3 tape drives at the consumer level. Newer drives have variable speeds to minimize/prevent "shoe-shining", and with all the surface area on a tape, areal densities anywhere near those of HDDs would store quite a lot of data, even with multiple layers of forward ECC. LTO tape drives are even bootable, so a bare-metal restore can be done with just the tape in hand and the drive on the machine, no other media.
: In the past decade, at multiple IT shops, I've gone through thousands, possibly tens of thousands, of LTO tapes. The total number of tapes I introduced to the degausser was fewer than five, and the errors thrown on read/write were all soft errors, so all data was recoverable. This is purely anecdotal, but it has impressed me personally with the reliability of these drives. It is wise to have a backup process of rotating tapes and a task that verifies data when nothing else is going on, and it goes without saying to use multiple media just in case hard read errors do happen.
: One can tell a tape silo to zero out all tapes sitting in it, but that is going to take some time, not be instant. It can be done... but if one has a basic offsite procedure in place (where all tapes leaving get the write-protect tab set), even this can be mitigated without much time and effort.
Tripwire/AIDE is passive. It can tell me if a binary is changed, but won't actively block a dropped script.
SELinux is great for assigning roles and denying execution in directories. However, it doesn't sign executables, nor does it keep a manifest in place.
AppArmor is similar to SELinux.
All of these are quite useful, but an addition that would stop this type of Trojan cold would be something that checks an executable against a manifest, verifies its signature, then allows/denies/logs execution. One can use noexec mount flags and SELinux rules for similar effect, but having some feature overlap wouldn't hurt.
Sometimes I wonder if Linux should have functionality similar to AIX's trustchk.
This command on AIX can make a list (signed with an OpenSSL key), then either warn when something runs that isn't on that list, or block it entirely. Functionality can be turned on to watch libraries as well, so if a library was changed, execution stops or a syslog entry is generated. In fact, it can be locked down so a reboot into another OS instance would be required to modify the trustchk settings.
If someone has static scripts that don't change often, this functionality would come in handy and would nip something creating scripts or executables on the fly almost immediately.
Even better would be to combine trustchk with BSD's securelevel so that a signed list of executables can be created, then locked down until the machine reboots.
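The allow/deny/log idea above can be sketched in a few lines of Python. This is not how trustchk is actually implemented; the HMAC here stands in for trustchk's OpenSSL key signing, and the key, paths, and return strings are all invented for illustration:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-real-secret"  # hypothetical signing key


def file_digest(path):
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(paths):
    """Map each executable path to its digest, with an HMAC over the whole list."""
    entries = {p: file_digest(p) for p in paths}
    blob = json.dumps(entries, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return {"entries": entries, "sig": sig}


def check(manifest, path):
    """Allow only if the manifest is intact and the file's hash matches it."""
    blob = json.dumps(manifest["entries"], sort_keys=True).encode()
    want_sig = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(want_sig, manifest["sig"]):
        return "deny: manifest tampered"
    want = manifest["entries"].get(path)
    if want is None:
        return "deny: not on manifest"
    if file_digest(path) != want:
        return "deny: binary changed"
    return "allow"
```

A real implementation would hook the kernel's exec path (the way IMA/EVM or trustchk do) rather than checking from userspace, and would use an asymmetric signature so the verifying host doesn't hold the signing secret.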
If one uses a VPN and only a limited number of IP addresses, it might make sense to just block everything except those ranges, with the VPN (preferably a lesser-known but reliable provider, maybe even a static IP on a Linode box) providing access when one is outside them.
Of course, the attacks I see coming in are often from compromised Windows boxes on DSL or cable modem IP ranges, so blocking Elbonia directly may not help much. The best bang for the buck is probably blocking the obvious hotspots, then rate-limiting dynamic IP pools.
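The range-based allowlisting described above is trivial with Python's ipaddress module. The ranges below are documentation-only example blocks (TEST-NET addresses), not anyone's real office or VPN egress:

```python
import ipaddress

# Hypothetical allowed sources: an office /24 and a small VPN egress block.
ALLOWED = [ipaddress.ip_network(n)
           for n in ("198.51.100.0/24", "203.0.113.64/28")]


def permitted(addr):
    """True if the connecting address falls inside any allowed range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED)
```

In practice this check would live in a firewall ruleset rather than application code, but the module is handy for generating or auditing those rules.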
I've wondered, at an extreme, about a custom sshd with a list of IPs in place: if someone connected from a blacklisted IP, it would randomly deny them, or perhaps give them a fake shell before closing the connection. Of course, tarpitting can't hurt either, but against a botnet that only connects 2-3 times per IP, that won't help much.
Another idea would be to combine this with port knocking, so that sshd gives bogus responses to anything that connects unless it previously knocked on another port. Of course, this would be in combination with blacklists.
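The port-knocking half of that could be sketched as a small state machine: each source IP must hit the right ports, in order, within a time window, before the real service responds. This is a toy in-process tracker, not a real firewall hook; the ports and window are arbitrary:

```python
import time

KNOCK_SEQUENCE = (7000, 8000, 9000)  # hypothetical knock ports, in order
KNOCK_WINDOW = 10.0                  # seconds allowed to finish the sequence


class KnockTracker:
    """Tracks each source IP's progress through the knock sequence."""

    def __init__(self):
        self.progress = {}  # ip -> (next_index or "open", deadline)

    def knock(self, ip, port, now=None):
        """Record one knock; returns True once the full sequence lands in time."""
        now = time.monotonic() if now is None else now
        idx, deadline = self.progress.get(ip, (0, now + KNOCK_WINDOW))
        if idx == "open":
            return True
        if now > deadline or port != KNOCK_SEQUENCE[idx]:
            self.progress.pop(ip, None)  # wrong port or too slow: start over
            return False
        idx += 1
        if idx == len(KNOCK_SEQUENCE):
            self.progress[ip] = ("open", deadline)
            return True
        self.progress[ip] = (idx, deadline)
        return False

    def is_open(self, ip):
        return self.progress.get(ip, (None, 0.0))[0] == "open"
```

A production knock daemon (knockd, for instance) watches the firewall log and inserts an accept rule for the knocking IP instead of tracking state in the service itself.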
I use fail2ban and RSA keys as my primary login mechanism... but I also use RFC 6238 TOTP tokens (the Google Authenticator code is available from git, or just fetch it from EPEL if on Red Hat or a downstream distro like CentOS. For an app, one can use Red Hat's FreeOTP, Google's app, Amazon's, or a slew of others.)
This isn't 100%, but two-factor authentication should be the minimum standard for Internet-facing logins these days.
After that, what may or may not help is the push to run everything in containers (think zones in Solaris, or WPARs in AIX.) Docker seems to have a lot of enterprise support even though it is relatively new, and that would put another layer of security in place.
This isn't to say malware can't misbehave in a container. In fact, malware running in the user context on Windows can do a lot of mayhem. However, containers provide better defense in depth, same as SELinux does.
A swimming pool noodle cut to fit works perfectly with gaps in the hot/cold aisles. Don't ask how I know...
The advantage of even an easy-to-bump tumbler lock is that attacking it requires physical presence, with immediate risk (a big dog waiting for his/her next meal behind the front door.)
The problem with these devices is that someone can be -anywhere- and break them. Done right, one button push from a script kiddie in Elbonia could unlock hundreds of thousands of deadbolts without warning, with no way for the perp to ever face consequences. This could be done out of sheer malice, or as extortion/blackmail against each and every user of the device, as well as the device maker.
Of course, if they have an easy mechanism to get flashed, that means an easy mechanism to get hacked, or perhaps bricked as well.
I can put packages down for a second while I stick my key in the lock. Fumbling for an app on my smartphone to unlock the deadbolt actually would take longer.
The problem is that there is a conflict between a password strong enough for protection (i.e. 20+ characters) and something quick enough to type in a short time.
mSecure addresses this in an interesting way -- the extra-long sync password used for the cloud is cached. The password used to encrypt the synchronized database sitting in iCloud or Dropbox is different from the app's passphrase. Since most phones have decent innate protection, it is very difficult (though not impossible) to dump the data on a locked device, so one can have a fairly easy-to-type PIN on the device while the synchronized backend file is protected with a much longer (and more secure) passphrase.
: iOS on the iPhone 4 and up always encrypts. Android since 3.x has the option of using dm-crypt and encrypting the data partition.
Done right, storing passwords on the web can be decently secure, especially if there is some part of the decryption key (be it a public key, a secondary authenticator, or a keyfile) that is not available to the attacker, in combination with the master passphrase.
I'd say the best implementation of this would be a utility that piggybacked on the cloud provider of choice, so one isn't limited to GDrive, Dropbox, Box, SkyDrive, iCloud, or others. The utility would ask for permission just for its own directory (if possible) and would store its main DB file, as well as some backups, in that directory. That way, the password program author or company doesn't have to maintain a cloud infrastructure.
Hate responding to my own posts, but adding another idea... Each endpoint device has its own private key, so the data stored on the backend cloud provider would be conventionally encrypted but unlockable by any key on the access list, similar to a PGP attachment encrypted to multiple public keys. That way, one can add and remove devices by their keys, and no common secret needs to be shared.
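That multi-recipient idea is essentially envelope encryption: one random data key encrypts the database, and that data key is wrapped once per enrolled device. Here's a toy Python sketch of the bookkeeping; to stay stdlib-only it fakes the wrap with an XOR against a hash, which is NOT real cryptography (a real design would wrap the data key with each device's public key via RSA-OAEP, ECIES, or similar), and all names are made up:

```python
import hashlib
import os


def _wrap(device_key, data_key):
    # Toy key wrap: XOR against a hash of the device key.
    # A real system would use an AEAD or public-key encryption here.
    pad = hashlib.sha256(device_key).digest()
    return bytes(a ^ b for a, b in zip(data_key, pad))


_unwrap = _wrap  # XOR is its own inverse


def make_envelope(device_keys):
    """One random 32-byte data key, wrapped once per enrolled device.

    Returns the plaintext data key (which would encrypt the actual DB)
    and the per-device wrapped copies that go to the cloud."""
    data_key = os.urandom(32)
    wrapped = {name: _wrap(key, data_key) for name, key in device_keys.items()}
    return data_key, wrapped


def recover(wrapped, name, device_key):
    """A device recovers the data key from its own wrapped copy."""
    return _unwrap(device_key, wrapped[name])
```

Revoking a device is then just deleting its entry and re-wrapping under a fresh data key; nothing shared ever has to be re-distributed to the other devices.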
I'd probably say KeePass is as secure as things get, since it doesn't use the Web in any way, shape, or form.
What I'd like to see with password apps that use a cloud provider for backend storage (be it 1Password, mSecure, or so on) would be a keyfile that is manually transferred between devices and never put on the cloud backend. This way, if/when the cloud provider is hacked, the password file is protected not just by the passphrase, but by a keyfile an attacker would have to compromise a physical device to get.
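The keyfile idea boils down to mixing a device-local secret into the key derivation input, so the passphrase alone can't decrypt a stolen cloud copy. A minimal sketch using stdlib PBKDF2 (this does not reflect how 1Password or mSecure actually derive keys; the iteration count is just a plausible default):

```python
import hashlib


def derive_key(passphrase, keyfile_bytes, salt, iterations=200_000):
    """Mix the passphrase with the device-local keyfile before the KDF,
    so the cloud copy alone is not brute-forceable from the passphrase."""
    material = passphrase.encode() + keyfile_bytes
    return hashlib.pbkdf2_hmac("sha256", material, salt, iterations)
```

An attacker who only has the synced database and salt must now also steal the keyfile from a physical device before any passphrase guessing can even start.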
There is also the fact that some failure modes will take both sides down. I've seen disk controllers overwrite shared LUNs, hosing both sides of the HA cluster (which is why I try to at least quiesce the DB or application so RTO/RPO in case of that failure mode is acceptable.)
HA can also live at different points in the stack. Take an Oracle DB server, for example: it can be clustered at the Oracle application level (active/active or active/passive), or it can sit in a VMware instance clustered using vSphere HA, where the DB itself thinks it is a single instance but in reality sits active/passive across two boxes.
Even if the backup stays up, failing back can be an issue. I've seen HA systems where it will happily drop to the backup node... but failing back to the primary can require a lot of downtime. For active/active setups, it can require a performance hit for resyncing.
Even on fairly simple things (yum updates from mirrors, AIX PTFs, Solaris patches, or Windows patches released from WSUS), I like babysitting the job.
There is a lot that can happen. A backup can fail, then the update can fail. Something relatively simple can go ka-boom. A kernel update doesn't "take" and the box falls back to the wrong kernel.
Even something as stupid as having a bootable CD in the drive and the server deciding it wants to run the OS from that rather than from the FCA or onboard drives. Being physically there to rectify that mistake is a lot easier when planned, as opposed to having to get up and drive to work at a moment's notice... by which time someone else has likely discovered it and is sending scathing E-mails to you, CC:'ing five tiers of management.