Data Storage

Submission + - Stanford creates everlasting battery electrode (extremetech.com)

MrSeb writes: "If it wasn’t for one niggling, deal-breaking factor — reliability — alternative, renewable energy sources would probably overtake fossil fuels in terms of commercial viability and desirability. Wind and solar power plants are awesome, cost-effective, infinite-until-the-Sun-burns-out solutions — but when the sun goes in, or the wind dies down, you need a backup power source. Today, that’s fossil and nuclear power — but thanks to a discovery made by Stanford University researchers, we might soon be able to use batteries. Stanford has developed a new, mega-rugged, high-voltage battery cathode, made from copper hexacyanoferrate nanoparticles, that can survive 40,000 charge/discharge cycles — enough for 30 years of use on the grid. If that wasn't enough, the cathode works with an electrolyte that is water-based and "basically free." To make an actual battery, however, the Stanford researchers now need to find a matching low-voltage anode — but they already have some "promising candidates," so here's hoping."

Comment Re:Google Patents? (Score 1) 117

It's been said that the whole reason Oracle bought Sun was to clobber Google with the Java patents so they could cross-license the map/reduce patents and get back to an Oracle database that could scale.

Sure it's been said. By those who don't know what they're talking about.

Patent licensing -- in my opinion -- would be somewhere very near the bottom of any list of reasons for acquiring Sun. It was, plainly put, a great deal at a great time, and some huge innovations have already come out of it. Just a few:

* Exadata V2. Many times the performance for Oracle Database at much reduced power, space, cooling, and price compared to the V1 offering with HP. I'm seriously blown away by how much performance these things can put out... and we have hundreds side-by-side here in the data center where I work. Sun hardware made the difference, and it's gotta be seen to be believed.

* Sun/Oracle 7000-series NAS appliances. I'll be the first to admit they got off to a rocky start, but since the 2010Q3.4 release they've been solid and high-performing in mirrored and triple-mirrored configurations. ZFS on RAIDZ/RAIDZ2 definitely needs some work to improve IOPS for OLTP loads, but mirrored configs really cook under anything but the most extreme high-data, low-IOPS loads we can throw at it.

* Oracle Secure Backup on Sparc. We had to go this route to get the I/O performance we need to back up the world's largest Oracle databases. Nothing else had what it took; x86 just couldn't keep up with the I/O to dozens of T10000 T2 tapes simultaneously.

Sun hardware is making this and many other things possible. End-to-end custom-tailoring our apps onto custom-built hardware has enabled performance gains we only dreamed of five years ago.

Disclaimer: Yes, I work for Oracle. By choice, not necessity.

Comment Re:Oracle vs Postgres (Score 1) 117

Postgres combined with commodity hardware and database-joining extensions like dblink allows you to partition data across dozens (or hundreds) of commodity servers, providing massively parallel access to massively large datasets without compromising performance. The cost is developer competence.

You nailed it. Postgres is a superb product, and a stellar example of open-source done right. Incredibly flexible, incredibly tough, easy to use for a newbie yet powerful enough for an expert, and an all-around winner. But if someone is going to shoot themselves in the foot, Postgres (and MySQL) gives them the gun & bullets.

The difference between Postgres and Oracle seems to be one of focus: Oracle seems to seek to limit the damage caused by idiot developers, while Postgres seems to seek to maximize the capabilities of competent developers. It's left as an exercise for you to decide which approach is more productive in the long haul.

If I hadn't already participated in this discussion and therefore couldn't moderate, I'd totally mod you up for that. Very insightful comment.

However, I don't necessarily agree with your implication that Oracle mainly seeks to minimize damage over maximizing performance or capability, nor do I agree that the focus of the two databases is their primary difference. Your comment above is simply one of those statements that really *feels* true, even if you couldn't point to it on a comparison spec sheet :)
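For what it's worth, here's roughly what the dblink fan-out described in the quoted comment looks like from the coordinator's side. This is only a sketch: the shard hosts, table, and columns are hypothetical, and it assumes the dblink extension is installed on the coordinator node.

    # Sketch: query several Postgres shards through dblink and union the results.
    # Hosts, database, table, and columns are placeholders, not a real schema.
    import psycopg2

    SHARDS = ["host=shard1 dbname=app", "host=shard2 dbname=app", "host=shard3 dbname=app"]

    # Build one statement that pulls the same query from every shard.
    query_parts = [
        f"SELECT * FROM dblink('{conninfo}', 'SELECT id, total FROM orders') "
        f"AS t(id int, total numeric)"
        for conninfo in SHARDS
    ]
    sql = " UNION ALL ".join(query_parts)

    with psycopg2.connect("host=coordinator dbname=app") as conn:
        with conn.cursor() as cur:
            cur.execute(sql)          # the coordinator gathers rows from every shard
            rows = cur.fetchall()
    print(len(rows), "rows gathered from", len(SHARDS), "shards")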

Disclaimer: I work for Oracle; my opinions are my own, and not those of my employer.

Comment Re:Oracle = pain (Score 2) 117

In what world do you live? Oracle DB is the only great product they have, the rest is complete and utter crap, and they don't even know how to maintain them.

Disclaimer: I work at Oracle. Your description does not match my experience.

We've been trying for months now to get someone from Oracle to explain to us why our OSB does not work as it should, and even the guy from engineering they shipped halfway around the world was just as clueless as we are.

My team and I use OSB every day (about an 80/20 split with NetBackup) to back up petabytes of data. The chances are very good you've encountered one of the numerous hardware-related gotchas that can derail success with OSB. I like the product, but the hardware support can be a killer.

Let me know the My Oracle Support SR number you filed and I'll take a look; if my team has seen the problem before, we'll chime in.

Siebel has been going downhill ever since they purchased it.

I was part of the Siebel acquisition.

In fairness, Siebel struggled because within a few months of the acquisition, developers had to change focus. Previously it was almost exclusively a Windows/Internet Explorer product, heavily tailored to that environment, including Windows on the back end, most often with AIX running DB2 for the database. The slowdown in Siebel development -- in my opinion -- was largely due to the huge Linux porting and HTML standardization effort.

Today's Siebel runs on Linux middle-tiers with Oracle database under the hood. And its stability is better for it. Every time you use My Oracle Support, you're actually using Siebel.

Additionally, you're seeing the results of that effort reflected in CRM Fusion. Of course the product has bugs, but we're eating our own dogfood every day and we're acutely aware of most of them!

Fusion is still just a dream that doesn't really work, the list goes on...

Wow, I must dream like every day. I support the boxes it runs on, I use the product, and so do tens of thousands of others. It's ambitious, and it was a huge effort, but I fully expect it to mop the floor with other products. We're using portions of it throughout many of our other enterprise products right now.

And you know what, it's because all the good engineers don't want to work at Oracle.

Really? I work with world-class teams every day that amaze me with innovations large and small. There are pockets of unhappiness, for sure, and now that the tech market is improving there are occasional defections. I've been here almost eight years now, and it's not for lack of options. I interview at least four times a year for other positions elsewhere. I choose to remain.

Why? Great environment. Great commute. Incredibly intelligent co-workers. Highly-focused training in an area I love (storage & backup). Market pay. Lots more.

And they are trying hard to hide that they have people leaving the company in droves, leaving people with very little experience maintaining software that they have no clue about.

Depends on the team. I'm friends with people all over tech, and right now the market has everyone job-hopping. Oracle is HUGE, so from what I can see it's affecting us about as much as anyone else... but with tens of thousands of employees, everybody knows somebody who's left recently.

There were mass defections shortly before & after the Sun acquisition. I've been through numerous acquisitions with several companies, and it apparently comes with the territory. I don't see any more "droves" of people leaving than before. But Oracle's working a little harder to keep the good ones now that the tech job market has improved so much.

To sum up... it's unfortunate you've had a bad experience with support on a few products. Rapid changes in our product offerings definitely have an effect on our support models. But give me the MOS SR # you filed against your OSB problem and if my team has seen it before I'll chime in.

Feel free to email me. And remember all opinions I expressed here are my own, not those of my employer.

Comment Ultimately, that's why I have one of each... (Score 3, Interesting) 381

The E-Ink versions of the Kindle do what they are supposed to do very, very well. If I sit down to read a book on an E-Ink screen, I can read for several hours without eyestrain. The Kindle E-Ink UI is sluggish, but it is generally consistently sluggish, and my brain soon ignores the sluggishness. The slow page-turning stops mattering after a while -- it takes some time to flip a page on a physical book, too! -- and the lack of glare, easy-read screen, and ability to read in sunlight combine to create a pleasant reading experience.

I cannot sit and read for hours on my iPad. After a two- or three-hour reading session on the iPad -- even with regular breaks! -- the world around me is fuzzy and I'm often nursing the beginnings of a headache. The Barnes & Noble Nook Color had the same problem. I don't expect any different from the Fire. Close-range LCD causes eyestrain in many people, despite manufacturer claims to the contrary. I can't read an LCD comfortably outdoors in sunlight, and the glare is horrendous in many situations.

The Kindle Fire would only be interesting to me as a replacement for my iPad. So what would I get for $200? A device that isn't a great book reader, because I can't read on it for longer than an hour without eyestrain. And now reports claim it shares the problem every Android device I've used so far suffers from: inconsistently sluggish performance. That's the very reason I own an iPad 2 instead of one of the many excellent, high-spec Android tablets out there. UI sluggishness bugs the heck out of me most when it's inconsistent, and I suspect I'm not alone in that observation. The human brain is an organ of prediction, and performance must be predictable to take advantage of that fact.

The Kindle Fire? Meh, I'll pass, while once again pondering the thought of selling my iPad 2. That is, until the next time I play Dungeon Defenders, want to surf quickly without firing up the laptop, or watch a movie when the kids are using the big screen. The Kindle Fire might survive in that ecosystem and might not. I see no compelling reason to pick one up.

Comment Re:response to OP, please read parent as well (Score 1) 320

Taking one way snapshots over a network link to a remote location, for instance using rsync and a remote filesystem that supports snapshots, can be a viable solution for short term backups, but if you want longer term retention, "old hat" backup equipment still is a viable solution.

I agree, for varying values of "short term". "Short term" can mean *years* for low-variability filesystems.

If you plan to drop data somewhere and not change it very often after that, snapshots offer a great long-term storage option as long as you have some sort of off-site replication taking place.

That said, I administer a gigantic SL8500 tape library for exactly those cases where disks won't do.

Comment Re:Thoughts on OCFS (Score 1) 320

Be aware that most of the time, snapshots are not whole copies of the volume they've snapshotted, merely a diff to the live version - one must copy the snapshot through some other method before a full copy exists (although it is conceivable that the snapshot mechanism will handle this automatically).

Not quite. Imagine a snapshot this way. You have a file "A". It takes up blocks 1, 2, and 3 on your filesystem. You then snapshot the filesystem containing "A" with the name "first". You lose about 32 KB (or so, depending on your filesystem's structure) of data space to store the block layout.

Then you lengthen file "A" into revision "B". "B" is twice as big, but the first half of the file is unchanged. It now occupies blocks 1, 2, 3, 4, 5, and 6. You snapshot again with the name "second".

Then you alter file "B" into revision "C". This version also takes six blocks, but the changes are in blocks 2 and 3. A snapshotting filesystem cannot change those existing blocks! It has to allocate new blocks because the old blocks are marked as parts of snapshot "first" and "second". So your file now occupies blocks 1, 7, 8, 4, 5, and 6. You snapshot again with the name "third".

Then you don't need the file anymore and delete it. Then you take a snapshot of the filesystem called "fourth".

A full copy of the file still exists in the snapshots. If you delete snapshot "first", blocks 1, 2, and 3 are still not available for overwrite, because snapshot "second" also owns them.

So basically, you got snapshots backward :) A snapshot is a block-level operation that flags existing blocks as frozen in time until the snapshot that owns them is deleted and the blocks are freed. Subsequent revisions to a file, and later snapshots of the filesystem, can re-use a data block, but only if that individual block is unchanged. It is in no way a "diff" from the live filesystem!
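If a small simulation helps, here's a tiny Python sketch of the same copy-on-write bookkeeping, using the exact block numbers and snapshot names from the example above. The data structures are purely illustrative and don't represent any real filesystem's internals.

    # Toy model of copy-on-write snapshot accounting from the A/B/C example.
    class Filesystem:
        def __init__(self):
            self.live = {}          # filename -> list of block numbers
            self.snapshots = {}     # snapshot name -> {filename: blocks} at that moment
            self.next_block = 1

        def allocate(self, count):
            blocks = list(range(self.next_block, self.next_block + count))
            self.next_block += count
            return blocks

        def snapshot(self, name):
            # A snapshot only records which blocks the live files point at; those
            # blocks stay pinned until every snapshot referencing them is deleted.
            self.snapshots[name] = {f: list(b) for f, b in self.live.items()}

        def referenced_blocks(self):
            refs = {b for blocks in self.live.values() for b in blocks}
            for snap in self.snapshots.values():
                refs |= {b for blocks in snap.values() for b in blocks}
            return refs

    fs = Filesystem()
    fs.live["A"] = fs.allocate(3)            # file A: blocks 1, 2, 3
    fs.snapshot("first")

    fs.live["A"] += fs.allocate(3)           # grow into revision B: blocks 1..6
    fs.snapshot("second")

    new = fs.allocate(2)                     # rewrite blocks 2 and 3: CoW gives 7 and 8
    fs.live["A"] = [1] + new + [4, 5, 6]     # revision C: 1, 7, 8, 4, 5, 6
    fs.snapshot("third")

    del fs.live["A"]                         # delete the file from the live filesystem
    fs.snapshot("fourth")

    del fs.snapshots["first"]                # drop the oldest snapshot
    print(sorted(fs.referenced_blocks()))    # blocks 1..8 are still pinned by "second"/"third"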

If this explanation is confusing, feel free to email me directly: matthew@barnson.org. I'm glad to clear up misconceptions about what a snapshot is or is not, because it's a far more robust and reliable system than you suggested!

Comment Re:Thoughts on OCFS (Score 1) 320

That's the exact same approach we (those in my storage department) follow in an awful lot of development, test, & staging environments: snapshots for primary backup, and physical backups only upon specific request.

The strategy works, as long as you are fully aware of the window of loss you're looking at. My home backup strategy has me taking important documents off-site to a lockbox at a friend's house once every six months. Other than that it's just snapshots. I could tolerate losing six months of data, although it would be far from ideal.

Comment Re:No ZFS? (Score 1) 320

Most of the top Solaris talent jumped the Oracle ship long ago.

I beg to differ. I work at Oracle, and there's plenty of amazing ZFS & Solaris development talent everywhere you look. And you can scarcely throw a rock around here without thunking an Open Source or Free Software enthusiast in the head. Including yours truly.

Comment Re:I know this isn't what you asked but... (Score 1) 320

ZFS with a RAIDZ2 VDEV. 3 disks of data, 2 disks of parity, 1 disk spare for resilvering when one of your cheap-ass 3TB disks eats it (and it will!). If it were me and I were hard up against a budget, that's the way I'd go. Decent performance and 9TB storage with all the data integrity, variable block size, compression, encryption, and deduplication benefits of ZFS, but more spindles would be better.

If performance is what you want, a triple mirror is hard to beat. You can pick up high-capacity 7200RPM drives dirt cheap. Good data redundancy and performance, at just three times the price :)
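To make the two suggestions concrete, here's roughly what creating each layout might look like, scripted from Python. The pool and device names are placeholders; pick one layout for a given set of disks and double-check the zpool syntax on your platform before running anything.

    # Hedged sketch of the two layouts suggested above.
    import subprocess

    def zpool_create(*args):
        # Thin wrapper so the calls below read like the CLI.
        subprocess.run(["zpool", "create", *args], check=True)

    # Layout 1: RAIDZ2 (3 data + 2 parity) plus a hot spare.
    # Roughly 9TB usable from six 3TB disks; survives two simultaneous disk failures.
    zpool_create("tank", "raidz2",
                 "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf",
                 "spare", "/dev/sdg")

    # Layout 2 (an alternative, not in addition): a three-way mirror.
    # One third of the raw capacity, but much better random-read performance.
    zpool_create("fast", "mirror", "/dev/sdh", "/dev/sdi", "/dev/sdj")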

Comment Re:Obligatory: RAID is not a backup (Score 2) 320

Mass delete.

ZFS with a snapshot schedule. Sorted, as long as you catch it within the reach of your oldest snapshot.

Overwrite with bad data.

ZFS with a snapshot schedule. Sorted.

Silent filesystem corruption.

ZFS. Sorted.

Batches of disks at one end of the bathtub curve.

ZFS verifies the data, and when your disks poop out the data is rendered read-only long before just about anything else would have realized there's a problem.

Trees going through your roof.

ZFS scheduled remote replication to a second array at your buddy's house. All your data remains intact, including snapshots to protect against all the above issues.

Bets are off if the tree hits you, though.
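For the "mass delete" and "overwrite with bad data" cases above, a snapshot schedule really is just a small script on a cron job. Here's a minimal, simplified single-tier sketch, assuming a hypothetical dataset name and a 30-day retention window; the zfs snapshot/list/destroy commands are standard, but verify the flags on your platform.

    # Take a timestamped snapshot of a dataset and prune old automatic snapshots.
    import subprocess
    from datetime import datetime, timedelta

    DATASET = "tank/home"            # hypothetical dataset
    KEEP_DAYS = 30                   # retention window
    PREFIX = "auto-"

    def zfs(*args):
        return subprocess.run(["zfs", *args], check=True,
                              capture_output=True, text=True).stdout

    # 1. Take a new snapshot named after the current time.
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    zfs("snapshot", f"{DATASET}@{PREFIX}{stamp}")

    # 2. Destroy automatic snapshots older than the retention window.
    cutoff = datetime.now() - timedelta(days=KEEP_DAYS)
    for line in zfs("list", "-H", "-t", "snapshot", "-o", "name", "-r", DATASET).splitlines():
        name = line.split("@", 1)[-1]
        if not name.startswith(PREFIX):
            continue
        try:
            taken = datetime.strptime(name[len(PREFIX):], "%Y%m%d-%H%M")
        except ValueError:
            continue
        if taken < cutoff:
            zfs("destroy", line.strip())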

Comment Re:Obligatory: RAID is not a backup (Score 1) 320

I'm gonna throw in another vote for ZFS with Remote Replication. I currently manage a few hundred petabytes of storage and we rely on it day-in, day-out for disaster recovery, archival, and site mirroring. Combine regular snapshots with continuous or scheduled remote replication and a decent backup strategy storing tapes off-site and you have a pretty bullet-proof disaster recovery and data integrity plan.

That last bit is really key. ZFS is much better than plain old RAID at verifying data integrity. It's a huge selling point, and the stories of multiple-component failures we've recovered data from only because the underlying filesystem was ZFS are legion.

Throw out the idea of "cluster" filesystems. Kind of pointless for what you're talking about. Set up two ZFS arrays on the two computers you're going to use. I'd recommend mirrored or triple-mirrored vdevs if you're performance-conscious, RAIDZ2 or RAIDZ1 + spare if performance is less critical and you're tight on disks; you want to be able to weather at least two simultaneous disk failures (and preferably a path failure, too) without any issues. Make sure both systems are already set up with the IP address range you expect them to use; moving a remote replica to a new IP is sometimes an exercise in frustration. Or set up an SSH tunnel as described in the FreeNAS documentation.

Get your initial replica up and running and set up a cron job to kick off replications at regular intervals. You can also write a little daemon that monitors the replication and starts the next RR job the moment the previous one completes (we do that all the time), but it's a bit more involved than cron.
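The remote replication I'm describing on the 7000-series is a product feature, but on a FreeNAS-or-similar home setup the same idea can be approximated with incremental zfs send/receive over SSH from a cron job. A rough sketch follows, with hypothetical host, dataset, and state-file names; it's not a tested production script.

    # Snapshot the source dataset, then incrementally send the delta since the
    # last replicated snapshot to a remote dataset over SSH.
    import subprocess
    from datetime import datetime

    SRC = "tank/data"                        # hypothetical source dataset
    DST_HOST = "backup.example.org"          # the box at your buddy's house
    DST = "backup/data"                      # receiving dataset on that box
    STATE_FILE = "/var/run/last_repl_snap"   # remembers the last snapshot sent

    new_snap = f"{SRC}@repl-{datetime.now():%Y%m%d-%H%M}"
    subprocess.run(["zfs", "snapshot", new_snap], check=True)

    try:
        last_snap = open(STATE_FILE).read().strip()
    except FileNotFoundError:
        last_snap = None

    if last_snap:
        # Incremental send: only blocks changed since the last replicated snapshot.
        send = subprocess.Popen(["zfs", "send", "-i", last_snap, new_snap],
                                stdout=subprocess.PIPE)
    else:
        # First run: full send to seed the replica.
        send = subprocess.Popen(["zfs", "send", new_snap], stdout=subprocess.PIPE)

    subprocess.run(["ssh", DST_HOST, "zfs", "receive", "-F", DST],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

    with open(STATE_FILE, "w") as f:
        f.write(new_snap)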

One read/write master, and one read-only replica. At any time you can also reverse the relationship if needed. Set up hourly, daily, weekly, and monthly snapshots so you can recover from an "oops".

Backing up to tape is where you really get hit in the pocketbook. Whether you need tape or not is up to you; for many situations tape makes great sense, for other situations it does not. Many less-critical installations do fine with an outsized area for snapshots (typically we reserve about 25% of the total space for snapshots) and an extended snapshot preservation window. It all really depends on the volatility of your data. If you're like most users, you don't really "churn" your data a lot: things tend to stay where they get put once they are where they're supposed to be. And you flush out old movies or whatever a few times a year.

The cool thing about ZFS is that it scales very well. Whether you need just a snapshotting filesystem for a single drive in your notebook computer or a 200-spindle half-petabyte array synchronizing data across a continent, it can handle most tasks. There are a few corner cases where I wouldn't use it -- mammoth media farms and OLTP databases requiring huge throughput as well as great transactional performance come to mind -- but for a home user it's easy to use and overkill all at the same time :)

Disclaimers: Yes, I work for Oracle. Yes, I'm a huge fan of ZFS, and I was exposed to it because I work here. But that's really irrelevant to the fact that it beats the tar out of every home-brew snapshotting/backup/replication system I've tried over the past seventeen years.

Comment Re:SPARC is dead (Score 1) 128

We chose Solaris on SPARC T3 for media servers to drive a massive StorageTek SL8500 library because Linux on x86 can't keep up with the I/O. With real-world performance in excess of 1.5Gbit/sec, the latest T10kC drives with T2 tapes will bring many a backup media server to its knees. And we can pump data to quite a few drives from a single T3.

Disclaimer: I work for Oracle because THEY pay ME to play with their giant toys :)
