Who are you calling old?
... to virtual sex online... This is what computers were made for.
They just need a 2-column setup... current posts in one pane, and "important"/interesting posts in the other.
There's plenty of space for it on the main Twitter web page; mobile users may have to swipe left or right between screens... TweetDeck could do it easily...
Fusion-io did that 5 years ago with their PCI-e flash cards. The drives were very vocal about any trauma they might have suffered, and would drop into a reduced-write mode to get your attention if you didn't heed the warnings... if you still ignored them, they would go read-only and you'd be forced to copy your data off.
The pricing looks to be around $1/GB, which is a ton cheaper than building a SAN of that capacity, plus it takes much less power/space/cooling.
Aah, managed to download this plugin and get it working. It connects to a different server than chat.facebook.com as well.
I've noticed in the comments that some folks can still use xmpp with chat.facebook.com, but whatever local server that DNS is pointing me at is no longer accepting connections.
Pretty much all of them as we move into 3D NAND... I know Samsung already has one out, the 850 EVO... everyone else should be right behind them with similar products.
Newer 3D NAND uses a charge-trap design, which basically solves the electron leakage issue found with the older floating-gate NAND...
Also, the move to the newer 3D NAND brings us back up to ~40nm processes vs the 10nm-class gates we are currently working with, allowing for much better reliability.
Disclaimer: I've been selling enterprise flash storage for the last 6 years.
I get hit with 1-2 job opportunities every day or two from LinkedIn alone...
Some are good, some are cruft... it all becomes noise since I'm not looking for a job right now.
You can't read game files faster because the process that reads them is single-threaded rather than multi-threaded... so it can only read as fast as a single thread can.
A fully saturated CPU has enough PCI-e lanes to move about 26-28GB/sec, but a single I/O thread might only be able to do 100-200MB/sec.
Before today there was never any reason for the OS to make that any faster, because the spinning disks that fed data to the CPUs couldn't do more than 100MB/sec.
Now that we have all this great flash, code needs to be re-written to use otherwise-idle CPU time to spin up more threads and read the data faster.
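A minimal sketch of that idea in Go (purely illustrative, not from the original comment; the file name "game.dat", the worker count, and the 1MB buffer are all assumptions): split the file into ranges and let each goroutine issue positional ReadAt calls against its own slice of it.

    package main

    import (
        "fmt"
        "os"
        "sync"
    )

    // Illustrative sketch: read one large file with N parallel workers,
    // each issuing positional reads (ReadAt) against its own slice of
    // the file. File name, worker count, and buffer size are made up.
    func parallelRead(path string, workers int) (int64, error) {
        f, err := os.Open(path)
        if err != nil {
            return 0, err
        }
        defer f.Close()

        info, err := f.Stat()
        if err != nil {
            return 0, err
        }
        size := info.Size()
        chunk := (size + int64(workers) - 1) / int64(workers)

        var (
            wg    sync.WaitGroup
            mu    sync.Mutex
            total int64
        )
        for w := 0; w < workers; w++ {
            wg.Add(1)
            go func(off int64) {
                defer wg.Done()
                buf := make([]byte, 1<<20) // 1MB reads; tiny reads won't saturate flash
                end := off + chunk
                if end > size {
                    end = size
                }
                for pos := off; pos < end; {
                    n := int64(len(buf))
                    if end-pos < n {
                        n = end - pos
                    }
                    read, err := f.ReadAt(buf[:n], pos) // ReadAt is safe for concurrent use
                    if read > 0 {
                        mu.Lock()
                        total += int64(read)
                        mu.Unlock()
                        pos += int64(read)
                    }
                    if err != nil {
                        return // sketch only: a real version would surface the error
                    }
                }
            }(int64(w) * chunk)
        }
        wg.Wait()
        return total, nil
    }

    func main() {
        n, err := parallelRead("game.dat", 8) // "game.dat" is a placeholder
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("read %d bytes\n", n)
    }

The trick is that ReadAt never touches the shared file offset, so all the workers can hammer the same open file at once and keep the device's queue full.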
It's worth remembering that 98k IOPS will be measured at a very small block size, and the number will rapidly drop as you increase the block size toward 4K-1MB, since a larger transfer size directly equates to fewer I/Os for the same bandwidth.
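Back-of-the-envelope (my numbers, not the parent's): 98,000 IOPS x 4KB is only about 0.4GB/sec, while a drive moving 2-3GB/sec with 1MB transfers would post just 2-3k IOPS. The bandwidth ceiling stays roughly fixed, so the IOPS figure has to fall as the block size grows.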
The real problem is that apps are not written for multi-threaded I/O, which is what you really need in order to take advantage of the throughput provided by PCI-e flash.
Booting from RAID is better supported, and the support is baked into nearly every BIOS out there... booting a PCI-e device via NVMe and UEFI is brand new, very few things support it, and all the code is new.
Booting through a RAID card that has its own BIOS is nothing like booting off of a native PCI-e device.
Booting from PCI-e today uses either a UEFI driver or NVMe, two technologies that are still kinda in their infancy.
The code is not yet fully optimized, and you may see reduced speeds, at least until you can get into an OS layer and load up a more full-featured driver.
The PCI-e native SSDs are indeed faster; the problem is, the code reading data off of them (your application/OS) isn't written to take advantage of the increased speeds. Single-threaded reads cap out at the read speed of a single thread, and that isn't very fast. This is especially true for 4K reads vs 1M reads, as you aren't going to saturate anything until you get up into larger read sizes.
To really take advantage of the bandwidth SSDs enable, you need multiple apps issuing reads in parallel, or an app that can do multi-threaded reads.
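To make the 4K-vs-1M point concrete, here's a rough single-threaded timing sketch in Go (again illustrative; "game.dat" and the buffer sizes are placeholders, and a real benchmark would bypass the OS page cache, which this doesn't do):

    package main

    import (
        "fmt"
        "io"
        "os"
        "time"
    )

    // Illustrative: time one full single-threaded pass over a file at a
    // given read size. Larger reads mean fewer syscalls and fewer device
    // round-trips per byte moved. (A real benchmark would bypass the OS
    // page cache, e.g. via O_DIRECT; this sketch does not.)
    func timedRead(path string, bufSize int) (float64, error) {
        f, err := os.Open(path)
        if err != nil {
            return 0, err
        }
        defer f.Close()

        buf := make([]byte, bufSize)
        var total int64
        start := time.Now()
        for {
            n, err := f.Read(buf)
            total += int64(n)
            if err == io.EOF {
                break
            }
            if err != nil {
                return 0, err
            }
        }
        secs := time.Since(start).Seconds()
        return float64(total) / (1 << 20) / secs, nil // MB/sec
    }

    func main() {
        for _, size := range []int{4 << 10, 1 << 20} { // 4K vs 1M reads
            rate, err := timedRead("game.dat", size) // placeholder file
            if err != nil {
                fmt.Println(err)
                return
            }
            fmt.Printf("%8d-byte reads: %.0f MB/sec\n", size, rate)
        }
    }

On flash, the 1M pass will typically report several times the throughput of the 4K pass, simply because each syscall and device round-trip moves far more data.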
It is difficult to soar with the eagles when you work with turkeys.