Don't they mean watching their acting lessons? I'm wondering which team gets the best chance at an academy award this year.
I chose other, meaning to pick this. So +1 to Wabbit season!
... more of a commentary around the importance some people place on social media. Slightly tabloid, this slashdot article is. Mmmm...
Heh! When I were younger, in the early '90s, our uni had a Pyramid with 2 whole CPUs running AT&T Unix! Programming students could bring it to its knees with their buggy chat client projects. Unix, or something very much like it, is now not only potentially free, but infinitely more stable. And now Oracle gives you high availability services that will auto restart things, or you're running on a RAC cluster, and when an instance crashes the end user simply experiences a bit of a pause, if that. There's nothing like being able to patch the database software across all 3 nodes in a cluster (1 instance down at any one time) while the users are logged in and actively using the application it back-ends.
Four Yorkshiremen aside, sometimes you need those 3 instances, as one crashes because you're using the new feature and another inexplicably refuses to do any work because of some buggy load balancing algorithm. (ok, perhaps some exaggeration here, but not by much)
Dude, I reckon you're behind the times. There have been four major releases since 10gR1 - 10gR2, 11gR1, 11gR2 and now 12c. Even if you count R1/R2 as one major release, that's still two major versions. 10g is what, 8 years old, and afaik unsupported by Oracle except maybe in extended fashion on the latest patch level. What were the limits back in '05 on "current" Postgres?
Invalidation of interdependent objects will show you ultimately which part of your application is broken when you compile a new package or change some DDL, and thus when it's safest to do the more invasive stuff - unless of course you've used object editioning as it's supposed to be used (as of 11gR2) so both old and new code can coexist on your database, and you can switch users over to the new code, piecemeal or wholesale without interruption or pesky invalidations if you know what you're doing.
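A minimal sketch of what that looks like in practice with 11gR2 edition-based redefinition (the edition, schema and procedure names here are invented for illustration):

```sql
-- Create a child edition of the default ora$base edition.
CREATE EDITION release_2 AS CHILD OF ora$base;

-- The schema owner must be editions-enabled first.
ALTER USER app_owner ENABLE EDITIONS;

-- Point this session at the new edition...
ALTER SESSION SET EDITION = release_2;

-- ...and recompile: sessions still on ora$base keep seeing
-- and running the old code, with no pesky invalidations.
CREATE OR REPLACE PROCEDURE app_owner.greet AS
BEGIN
  DBMS_OUTPUT.PUT_LINE('release 2 says hello');
END;
/
```

When you're ready, switch users over piecemeal by setting their session (or service) edition to release_2.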
There are still gotchas though, like the magical GoldenGate, which can give you an architecture independent replica of your database (for a price of course) oh except for the handful of special tables you want to copy but it doesn't support yet.
I suggest playing with XE to see what the free version can do for you now. And read up on 12c features. It may only be a matter of time before Postgres can do the ones you want, and Postgres may even do them better when it implements them (I've no doubt Oracle's database code resembles spaghetti even now). Postgres exists for a reason, and I only wish it were better recognised than MySQL as a real long time ACID compliant RDBMS.
Yep, I've joined the dark side, but perhaps I can still be turned back...
Amen! At least they don't license based on available RAM yet. Oh darn, I hope Larry doesn't read this and get any ideas.
From experience, TOAD isn't great for managing RAC stuff (I use TOAD, but not for that!) I'd use the database management interface that comes with the database installation, or take a step up from that and use Oracle's OMS, especially given that its license is now essentially free if you avoid trying to use certain largely irrelevant frills like cloud management. OMS will help you do your impdp and if you know how, compare / make changes to multiple databases simultaneously if you dare. You can even download and install a limited license Oracle database for free - http://www.oracle.com/technetwork/products/express-edition/overview/index.html - it comes with a browser based GUI and all, if resource limited - but comes in setup.exe and RPM form.
Manual management of tablespaces??? Create your datafiles for your tablespaces on an ASM diskgroup, set them to autoextend, set up an email alert for when the tablespace gets to 90% full or whatever suits so you can add more files if you underprovisioned, and set up another alert to tell you when your ASM diskgroup is getting full. Manual management is very much dark ages stuff (although some currently supported software insists that is how you configure things... or asserts its configuration on you which then you have to bend to your will... grrrr).
Datafile size limitations - default sizes these days, with default block / ASM extent sizes, amount to 32G per datafile in 11g. If you know the data will be huge, increase the block size - doubling the block size doubles the maximum file size, and so on - or create a BIGFILE tablespace, which is a single file of essentially unlimited size.
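By way of a sketch (assuming an ASM diskgroup named +DATA; the tablespace names are invented):

```sql
-- Smallfile tablespace on ASM, autoextending up to the 32G per-file limit.
CREATE TABLESPACE app_data
  DATAFILE '+DATA' SIZE 1G AUTOEXTEND ON NEXT 256M MAXSIZE 32G;

-- Bigfile tablespace: one very large, autoextending file.
CREATE BIGFILE TABLESPACE app_big
  DATAFILE '+DATA' SIZE 10G AUTOEXTEND ON;
```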
Recompiling invalid objects? From the database server, log in as SYSDBA and run: @?/rdbms/admin/utlrp
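If you'd rather not run the script, the same can be done from any suitably privileged session via the UTL_RECOMP package (which is what utlrp.sql drives under the hood):

```sql
-- Recompile all invalid objects serially...
EXEC UTL_RECOMP.RECOMP_SERIAL();

-- ...or in parallel with, say, 4 job processes.
EXEC UTL_RECOMP.RECOMP_PARALLEL(4);
```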
Oracle databases are more complex because the software can do more for you. If you don't need the complexity, install Postgres - I'd choose Postgres in a heartbeat if all I needed was a solid RDBMS with a useful interface. And I'm sure you can still do funky things with Postgres anyway if you want to practice black magic! Horses for courses though. If it doesn't make sense to use Oracle, and you're not forced to use it, then don't!
The UI is much prettier these days, and improving with each release... thankfully! Never had to do any installations up to version 8, but 9i was _bleep_, 10g was pretty crap... 10gR2 bearable. 11g was a step in the right direction and 11gR2 not too shabby. Haven't had the time to install 12c yet myself, but the installers for their other (predominantly weblogic) work pretty well when you can get your head around the installation guides, assuming you're doing something non-vanilla. IMNSHO of course.
Oh yeah there was the time I uninstalled a piece of application software with the OUI, and as well as what was expected it deleted 1 library out of another oracle installation. Finding that was LOADS of fun. But that was in the 10g days. If you go with 12c, when they release an upgrade you're supposed to install the new software, unplug your database from the old home and plug it into the new one (aka pluggable databases). I'd wear a little pain with the installer for that feature.
Also not for the faint of heart, the grid infrastructure software which provides the cluster / ASM support weighs in around the ballpark of 1G of RAM usage, before starting up your RDBMS. Not much of an issue when the cost of RAM is such that the average home enthusiast can afford 16G or more of the stuff. But when you use the features (because you need them), it's approximately worth it.
> Flashback queries and flashback archives (they are really cool)
Is that the same as time travel?
Nope, in Oracle you can run this query on any table to view the data it held yesterday:

select * from emp as of timestamp (sysdate - 1);

Or list the rows that have been deleted since then:

select * from emp as of timestamp (sysdate - 1)
 where empid not in (select empid from emp);

And put them back:

insert into emp
select * from emp as of timestamp (sysdate - 1)
 where empid not in (select empid from emp);
At database level it's common, before a potentially risky data change, to create a guaranteed restore point; if it messes up, you shut down, flash the database back to that point, and pretend the changes never happened. It happens as fast as all the necessary extents can be written back to the data files and the database can be restarted.
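A hedged sketch of that workflow (run as SYSDBA; the restore point name is invented, and your instance must be configured to allow flashback database):

```sql
-- Before the risky change: pin the flashback logs.
CREATE RESTORE POINT before_change GUARANTEE FLASHBACK DATABASE;

-- If it all goes wrong: rewind.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_change;
ALTER DATABASE OPEN RESETLOGS;

-- If it all goes right: release the space the guarantee holds.
DROP RESTORE POINT before_change;
```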
But but but
I've been a relatively mild-mannered open source advocate for over 20 years now, and have been running Linux for all of it. My first DBA job was with Postgres (6 or 7, ~12 years ago now!) and now Oracle. This is all about databases, completely ignoring the application related acquisitions they've made in the last decade...
A lot of difference I see and is evident from the discussions here is that Oracle usually has the features earlier (not always, but yes, usually). The earliest example I've witnessed is Postgres' Write-Ahead Logging, which was definitely cool, but Oracle were there first. More recently, with 11gR2 you have advanced compression (pay $$$$ and it will store all your data compressed if you want) and with 12c there are a bunch of features that make me drool. Pluggable databases is just one of them.
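For what it's worth, turning the compression on is a one-liner once you've paid for the Advanced Compression option (the table names here are invented):

```sql
-- 11gR2 OLTP compression on an existing table (applies to new blocks).
ALTER TABLE orders COMPRESS FOR OLTP;

-- Or compressed from the start:
CREATE TABLE orders_archive COMPRESS FOR OLTP
  AS SELECT * FROM orders;
```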
Again, not entirely sure about Postgres, but Oracle build a lot of instrumentation into the database software itself. Tracing custom events is a great way of profiling your application as well as database deficiencies. Pay for the license to unlock the full power of ASH or AWR and you have a great deal of ability to see exactly what's going on and figure out how best to resolve any performance issues. The best bit is that this instrumentation doesn't make the database run like a dog. A few percent overhead gives you a lot of debugging power, and it's ALWAYS turned on with basic event tracking always happening anyway. But you can add MOAR.
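As a concrete example, the classic 10046 extended SQL trace - no extra license needed for this one - can be flipped on per session:

```sql
-- Level 12 = SQL statements + bind variables + wait events.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the workload you want to profile ...

ALTER SESSION SET EVENTS '10046 trace name context off';

-- The trace file appears under the diagnostic destination;
-- format it with tkprof for a readable report.
```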
I see some impressive performance on Oracle databases these days, but I'm not entirely convinced that Postgres cannot meet them. But then, Oracle can run on anything from 32 bit x86 to some seriously beefy hardware (and when it does, it runs well). I'm not entirely sure about Postgres, but I know Oracle has been compiled for RISC architectures (Power, SPARC, HP-UX platforms, others??) for a long time. These days they tend to lean towards x86 - and will even sell you a "database machine" (google for Exadata). This extends to scaling out on any of the supported architectures with their cluster software (Grid Infrastructure), which is quite mature now. Again, Postgres probably does this, but each generation sees a significant improvement for Oracle.
Having said all that, leading edge can also be bleeding edge... The biggest problem for me with Oracle continues to be the time it takes to resolve software bugs, combined with their support infrastructure. While it usually gets there in the end, for the price you pay for enterprise support one might expect quicker resolution if you happen to be the first person to hit upon a specific problem. Unfortunately this tends to tie in with the need to certify against all the Oracle applications they release and support. The one and only bug I reported when I was a Postgres DBA was a date calculation issue - from the behaviour I reported it was tracked down and patched in ~2 days, and I had a workaround for the meantime anyway.
Oracle have also done some cool stuff in the open source domain with OCFS (and now OCFS2) and the free domain with their base GI cluster software, as well as the plain cool domain with ASM (dynamically manageable disk pooling with Stripe And Mirror Everything methodology providing solid data robustness) and ACFS which lets you carve out clustered POSIX compliant filesystems on top of ASM at will. This all helps with scaling (don't need OCFS2 now if you use ACFS tho).
Hmmm, it seems they really are turning me to the dark side.... heeellllllppppp!!!!
... when the laser's mounted on a Frickin' shark's head.
So our democracy allows us to choose who to vote for (of course, you can always vote Donkeh!), but does not allow us to choose not to vote.
OCFS was originally designed specifically for storing Oracle datafiles, in a cluster, in a non-POSIX fashion. After that came OCFS2, which is POSIX compliant, but can deadlock when NFS exported due to the way NFS handles locking - something that can be worked around with the "nordirplus" NFS mount option (not available on all OSes, but Linux is ok). They since developed ASM (Automatic Storage Management), which threw away the traditional filesystem presentation of your Oracle datafiles, then subsequently bundled that into the 11gR2 clusterware release and extended the functionality to give us ACFS - the ASM Clustered Filesystem.
11gR2 clusterware is designed for shared storage and, depending on the options chosen at creation time, will happily give you a POSIX compliant clustered filesystem for any occasion - datafiles, regular files, whatever. It is Oracle's implementation of their "best practice" Stripe And Mirror Everything methodology, aiming for not only high availability but consistently high performance by spreading all your data across all your disks, and implementing mirroring in a sane way too (split your disks into two (or three!) failure groups, and the software will ensure there are 2 (or 3!) copies of each block). All you do is add disks to the pool(s), and if you have the space you can dynamically remove disks from the pool too. You can fsck, mkfs, mount and unmount it, take snapshots (!), and the lead-up to all that is not much of a stretch from LVM. Google for Oracle ACFS and see the "Basic Steps to Manage Oracle ACFS Systems" section.
OCFS was only ever available for Linux, but ACFS now supports other platforms... probably doesn't matter to you. The one catch I've found so far is the ~1GB RAM overhead to run the clusterware PER NODE. There's other reasonable stuff too, like the network layer needing to be up before the ACFS supporting services can start, so you can't put anything related to the basic boot process on those volumes.
The cost of 11gR2 clusterware?
As for the fencing method, it all works via heartbeat to disks in your ACFS pool. If the clusterware can't "ping" the disk within the threshold, it forces the system that's having the issue to reboot. Such is the nature of ensuring sanity when using shared disk. I suggest looking at it if your boxen can spare the RAM and you're happy to accept their OTN license agreement, as it really does seem to be one of Oracle's better products at an amazing price for what you get.
Real Programmers think better when playing Adventure or Rogue.