Comment Re:dont need it (Score 1) 221
My point was, even back then there was a push to improve graphics quality. I don't see why that should stop now.
Would a company that big actually insure against theft loss from a single store? I would think it would be cheaper for them to insure themselves.
It's funny you mention that game, because it shipped with an extra RAM module to upgrade the graphics quality. How many console games make you install RAM before you can play?
I'm having a hard time finding any new phones that support tethering. It's like they want the fact that the feature ever existed to slowly disappear. It's one of the only reasons I stay with T-Mobile, with their super cheap $20/month internet plan.
I use tethering on my T-Mobile Wing when I need it. It's not very fast; in the Bay Area I at least get EDGE speeds, but things like YouTube video and large downloads usually end up timing out.
I like the tethering option because it's unlimited internet on my phone and my laptop ($20/month). If I was with any other carrier, it would be a separate $60/month plan (yikes). And for occasional use, I don't need the speed.
My guess is that 3G networks just aren't ready yet for the kind of use laptop users want.
RAID0 with MLC is nice: 0.1ms access time and 300MB/sec read/write throughput. It's the access time, I think, that you'd worry about with compilers and lots of random reads.
The OS would not be responsible for re-issuing the write. The wear leveling of the SSD should do this automatically and mark the sector as bad.
What should happen is a SMART status update telling the host OS that the drive is running out of writable sectors due to a high % of bad sectors.
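To make the remap-on-failure idea above concrete, here's a toy sketch of what a flash translation layer (FTL) does: retire a worn physical block, remap the logical sector to a spare, and expose only a shrinking spare count to the host (roughly what a SMART "available reserved space" attribute reports). Real controllers are far more complex; all names here are invented for illustration.

```python
# Toy model of SSD wear leveling: failed writes are remapped to spare
# blocks transparently; the host only sees the spare count shrinking.

class ToyFTL:
    def __init__(self, n_blocks, n_spares):
        self.mapping = {i: i for i in range(n_blocks)}  # logical -> physical
        self.spares = list(range(n_blocks, n_blocks + n_spares))
        self.bad = set()

    def write(self, logical, ok=True):
        """ok=False simulates a failed program/erase on the physical block."""
        if not ok:
            self.bad.add(self.mapping[logical])          # retire worn block
            if not self.spares:
                raise IOError("out of spare blocks; drive goes read-only")
            self.mapping[logical] = self.spares.pop(0)   # remap to a spare
        return self.mapping[logical]

    def spares_left(self):
        # Roughly what a SMART reserved-space attribute would report.
        return len(self.spares)

ftl = ToyFTL(n_blocks=4, n_spares=2)
ftl.write(1, ok=False)       # physical block 1 wears out, sector remapped
print(ftl.mapping[1])        # -> 4 (first spare)
print(ftl.spares_left())     # -> 1
```

The host OS never re-issues the write itself; it only sees the SMART counters degrade as spares run out.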
The benchmarks I've seen on 1TB drives show about 80MB/sec average, ranging from 60MB/sec to 100MB/sec depending on where on the drive the read happens; SSDs don't have this non-linearity. Also, 350MB/sec is reaching the limits of the RAID controller: a single drive does about 150MB/sec, so scaling drops off a little as you add more. I'll probably try a PCI Express adapter with more bandwidth in the future.
Still, the main speed boost is in the latency. 0.2ms vs 8ms is a huge leap. Also, I like my system as silent as possible, so I'm willing to pay a premium for that.
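A quick back-of-the-envelope calculation shows why that latency gap dwarfs the sequential numbers: for serial (queue depth 1) random reads, the maximum operations per second is roughly 1 divided by the access time. The 0.2ms and 8ms figures are the ones quoted above.

```python
# Max random-read ops/sec at queue depth 1 is about 1 / access time.
ssd_latency = 0.0002   # 0.2 ms
hdd_latency = 0.008    # 8 ms

ssd_iops = 1 / ssd_latency   # ~5000 reads/sec
hdd_iops = 1 / hdd_latency   # ~125 reads/sec

print(f"{ssd_iops / hdd_iops:.0f}x")   # prints "40x"
```

So on small random reads the SSD is about 40x faster, regardless of what the sequential MB/sec figures say.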
I save additional files on an external 1TB drive when needed.
As far as price goes, if you avoid the big-name Samsung and Intel parts, you can get fairly good prices.
Okay, I just ran this benchmark on my 3-SSD RAID0 array...
Write performance from 0.5KB to 128KB block sizes (in KB/sec):

0.5 KB: 3928
1 KB: 7368
2 KB: 12579
4 KB: 19931
8 KB: 48306
16 KB: 83492
32 KB: 143772
64 KB: 233510
128 KB: 252352
The reason for the low performance on small block sizes is the option called "Direct I/O" in ATTO Disk Benchmark. What this probably does is turn off your system's caching, so of course you're going to get ridiculously slow rates. It's good for comparison, but saying your system will be slow because of it is ridiculous, because in the real world your OS will cache everything. If you look around, you'll see that 7200 RPM hard drives also do badly on these write benchmarks. They may do better on reads, because hard drives have a RAM buffer, but that shouldn't matter in the real world if you're using the OS's cache.
And ATTO is more interested in selling disk controllers anyway, so really this is a disk-controller benchmark, not a measure of real-world drive performance.
It would be the equivalent of turning L2 cache off on your CPU and publishing those benchmarks as real world performance.
I think a more accurate benchmark would be some type of MySQL database test.
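Here's a rough sketch of the kind of database-style benchmark I mean, using Python's bundled sqlite3 in place of MySQL so it runs anywhere. Random single-row lookups hammer access time rather than sequential transfer rate; the table name, row count, and blob size are all arbitrary choices for illustration.

```python
# Random-lookup micro-benchmark: stresses access latency, not
# sequential throughput, which is what actually matters for databases.
import os, random, sqlite3, tempfile, time

path = os.path.join(tempfile.mkdtemp(), "bench.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE kv (id INTEGER PRIMARY KEY, val BLOB)")
db.executemany("INSERT INTO kv VALUES (?, ?)",
               ((i, os.urandom(512)) for i in range(10_000)))
db.commit()

random.seed(1)
ids = [random.randrange(10_000) for _ in range(1_000)]
start = time.perf_counter()
for i in ids:
    db.execute("SELECT val FROM kv WHERE id = ?", (i,)).fetchone()
elapsed = time.perf_counter() - start
print(f"{len(ids) / elapsed:.0f} random lookups/sec")
```

On a small dataset like this the OS cache absorbs most of it, which is exactly the point: a realistic benchmark has to go through those cache layers, not around them.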
The amount of misconception out there about flash memory performance is astonishing. There just isn't a good understanding of how the layers of caching in the OS work.
SSDs are not slow and do not "die young". I just built a new system with 3 SSDs in RAID0 and I'm getting 350MB/sec sequential read and nearly 250MB/sec sequential write. In fact, I'm less worried about adding more drives to a RAID0 array, because SSDs fail by total wear rather than at a single point of failure, and if the wear is spread across drives it should be less of a concern. I haven't benchmarked random write performance yet, but I'm guessing it won't be that bad.
100% of the problems now come down to some of the controllers and to how the OS caches data before writes. For example, why would you write thousands of small 1KB blocks instead of caching them in memory first if the write is going to happen a second later? In fact, most OSes do this intelligently.
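You can see the coalescing effect with nothing but Python's own buffered I/O, which works much like the OS page cache sitting in front of the SSD controller. CountingRaw is an invented wrapper that just counts how many writes actually reach the "device".

```python
# Write coalescing demo: 1024 small 1 KB writes get merged by a
# 128 KB buffer into a handful of large writes to the raw layer.
import io

class CountingRaw(io.RawIOBase):
    def __init__(self):
        self.calls = 0
        self.bytes = 0
    def writable(self):
        return True
    def write(self, b):
        self.calls += 1          # one "device" write
        self.bytes += len(b)
        return len(b)

raw = CountingRaw()
buffered = io.BufferedWriter(raw, buffer_size=128 * 1024)
for _ in range(1024):
    buffered.write(b"\x00" * 1024)   # 1024 tiny 1 KB writes
buffered.flush()

print(raw.bytes)    # -> 1048576: all data arrived
print(raw.calls)    # far fewer than 1024 writes hit the raw layer
```

Same data, a fraction of the write operations, which is why the "small block sizes are slow" benchmarks don't reflect what a cached OS actually sends to the drive.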
So here's the real problem: a bunch of the early SSDs were incredibly cheap and used really, really bad controllers. Most of the new SSDs have moved beyond these deficiencies, but because of those past products there's huge confusion in the market and people just don't "trust" the new ones.
Except the fonts are still unreadable, even 13 years later, unlike Windows. In fact, this problem still exists on most distros I've tried.