I remember seeing numbers at some point. It didn't impress in a single thread, but it could easily saturate a 10Gb link in multi-threaded tests; they were testing an FTP server on a T2 Plus. As for cores, we have everything from dual UltraSPARC IIIi machines to T4-based systems, including some M-class. I believe the T3-4 has the highest core count: 64 cores and 512 threads, although a single Solaris instance can only see 256. The M9000 and M9000-64 should have the same problem, but the biggest M-series machine I've worked with is an M8000.
I've been a Solaris admin for some time, and I can tell you that it doesn't have the best TCP/IP stack. It has all the bells and whistles, but speed-wise it's not even in the same ballpark as FreeBSD; it's probably Linux-fast if you tune it properly. It does have cool features: virtual switches, link aggregation, hardware crypto usable by OpenSSL, OpenSSH and IPsec. But the cost of all those features is mediocre performance in simple, yet performance-hungry scenarios.
On sunny days they make tons of "free" electricity.
On cold dark winter nights, where does the power come from?
They can build backup coal/gas plants that typically run well under nameplate capacity, or they can buy nuclear power from the French.
Oh, the irony...
You've got it. What I don't understand is why nuclear electricity is put in the same basket as coal and gas plants. The incidents nuclear has gone through in the past 60 years only reinforce my view that it's a safe solution: if, given all the fsck-ups that produced Chernobyl, Fukushima and Three Mile Island, that's all that happened, I'd say it's pretty much OK. I'm saying this because coal/thermal plants have their exhaust-pipe problems, which affect a much greater percentage of the population, and hydro is in general an ecological mess that also involves massive population relocation.
Nothing in that machine is laptop hardware. Like previous Mac Pros it has a workstation CPU (Xeon), workstation graphics (FirePro) and workstation RAM (registered, ECC). Indeed, the Mac mini has a laptop CPU and SO-DIMMs for memory, but we're talking about the Mac Pro.
Furthermore, I don't get the "doubts about the Thunderbolt displays". Thunderbolt can act as a plain Mini DisplayPort (with audio, too). So go grab your $150 Dell DisplayPort monitor and plug it in; all it takes is an $8 Mini DisplayPort (M) to DisplayPort (M) cable. Whether you want the more advanced features of Thunderbolt is a matter of taste, but for a lot of external hardware, USB is not an option even in its 3rd incarnation.
Stop scratching the machine if you don't want scratches on it. My workstation is always on, and apart from dusting it, I don't think I've actually touched it in over a year. Now, on to the serious stuff...
Upgrades are allowed. It features 6 Thunderbolt ports, so you can add as many 10GigE NICs, FC HBAs, high-performance external direct-attached storage arrays and video-capture controllers as you want. There are a few Thunderbolt to PCI Express 2.0 x8 adapters available if you want to use your own hardware.
I guess the only non-upgradable parts are the video cards. I think they are swappable, but due to their proprietary format there would be no 3rd-party alternatives.
P.S.: Have you noticed how Google managed to come up with a decent Maps app in only 6 months? They completely neglected the app they distributed on iOS for years and only improved the Android one, until Apple kicked their arse back to work. I find that kind of competition healthy!
If you were Apple, you wouldn't have survived the 90's.
While Apple's maps data is not the best in some places, I can say they're doing a much better job of improving it than everyone else. It took Google a few years to have any roads listed in most European countries; Apple started with complete maps. I've compared the coverage of Apple, Google, Nokia, Bing and OSM on quite a few occasions, and OSM is the only one better than the rest. Google, Apple, Nokia and Bing all fail to show a third of the motorways in Romania. I'm not talking about a forgotten secondary road somewhere up in the mountains; I'm talking about (admittedly only a few) hundreds of kilometers of motorway.
The application isn't bad at all; it's still superior to Google's, at least for now. The data might be flawed in some places, but give them a few months to get it right. I'm quite sure that when Google Maps first appeared, its data wasn't great either. Their maps are now much better thanks to community effort in tools like Map Maker.
In case you're an idiot and couldn't figure this out by yourself, I'm going to spell it out: it makes perfect business sense to build your own maps application if your biggest competitors (Google, Microsoft, Nokia) all have their own solutions. What do you think the licensing costs would be if Apple attempted to license a maps solution from Nokia's Navteq or from Microsoft's Bing?
You haven't been to the Netherlands recently. NS should stand for "No Show"!
In my experience, while traveling between FR, DE, BE, LU, CH, AT and NL, the moment a train (including a high-speed one) crosses the Dutch border, it's instantly delayed. Should I count the part where the trains between NL and BE are rebranded as "high-speed" trains even though they travel at normal speed, which is just an excuse for making the prices 3-4 times higher and adding mandatory reservations (unless you buy the tickets in Belgium)? Should I count the hours I've wasted on their platforms, mostly in bad weather?
The Dutch are good at a lot of things, but punctuality hasn't been one of them in a long time. Whether you're talking about KLM, KPN (especially Getronics) or NS, they have completely forgotten what punctual means. Furthermore, they have replaced their BS-free attitude with a disgusting "politically correct/tongue up your arse" attitude where, in order not to lose your business, they tell you what you want to hear instead of the ugly truth. Fortunately, the Germans and the French are still frank enough.
You didn't read what I said. Yes, ZFS + snapshots, but you also need at least Sun Cluster replication and tape backup. ZFS + snapshots doesn't save you from fires, floods, software bugs and ill will. It does save you from idiots and disk failure, though.
RAID is a method of reducing the chances of a disk failure being fatal to the data. RAID is not a backup solution. Anyone who answers a question about backup with RAID is an IDIOT who doesn't deserve his oxygen quota and should be put down.
Disk failure is not the only reason for keeping backups. More often than not you run into an idiot user (who happens to be an executive) who deleted stuff by mistake and needs it back.
Furthermore, disk failure can happen on all the disks at once. You have: fires, idiots, floods, more idiots, bad wiring, idiot admins, software bugs, and my personal favourite: tired admins.
"Always have an off-line backup and an off-site replica" is my personal favourite.
> what you're looking for (ZFS) hasn't been invented on any of the OSs that you're using.
Actually, there is MacZFS. It runs just fine on OS X. I have the OS, apps, and my home folder on an HFS+ partition on an SSD. Everything else is on ZFS, exported via SMB to all my Win boxes.
And there's the Tens Complement implementation, which is even better. That still doesn't cover Windows and Linux, though: there is no Windows implementation, and the Linux one is alpha quality at best.
Two aspects to your problem:
1) Recovering from the current situation
If you haven't made ANY changes to the filesystem since it was corrupted, you still have a chance with software like DiskWarrior or Stellar Phoenix. Never work on the original corrupted filesystem unless you have copies of it: grab a second drive, connect it over USB, and use hdiutil or dd to image the disk onto it. Once you've done that, run DiskWarrior or Stellar Phoenix on one of the copies while keeping the other intact. Always have an intact copy of the original FS. You may need several attempts with different methods, so KEEP AN INTACT COPY.
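The imaging step can be sketched like this. The device name `/dev/disk2` and the `Rescue` volume are hypothetical; check yours with `diskutil list` before running anything:

```shell
# Unmount (don't eject) the corrupted disk first; assumed to be disk2 here.
diskutil unmountDisk /dev/disk2
# Image the raw device onto the healthy USB drive. conv=noerror,sync keeps
# going past read errors and pads bad blocks so offsets stay aligned.
sudo dd if=/dev/rdisk2 of=/Volumes/Rescue/corrupted.img bs=1m conv=noerror,sync
# Make a second working copy: repair tools touch only this one.
cp /Volumes/Rescue/corrupted.img /Volumes/Rescue/working.img
# Sanity-check that the two images really are identical before you start.
cmp /Volumes/Rescue/corrupted.img /Volumes/Rescue/working.img && echo "copies match"
```

If one repair attempt makes things worse, throw the working copy away and re-clone it from `corrupted.img` rather than re-reading the failing disk.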
2) Avoiding it in the future
NTFS is good at surviving a crash if and only if the crash occurs in Windows. Paragon NTFS for Mac/Linux and NTFS-3G don't use journaling to its full extent (for both metadata and data), so if you crash while in Mac OS X or Linux, chances are you'll get data corruption.
The same goes for HFS+. While Mac OS X uses journaling on HFS+, Linux doesn't: a journaled HFS+ volume mounts read-only in Linux. Furthermore, HFS+ journaling is metadata-only anyway.
Now we get to the last journaled filesystem available to all 3 OSs: EXT3. It's the same crap as above.
Because of the three points above, I have a conclusion: what you're looking for (ZFS) hasn't been invented on any of the OSs that you're using.
Thus, I have a simple recommendation:
Use ZFS in a VMware machine, exported via CIFS/WebDAV/NFS/AFP to Linux, Windows or Mac OS X. A small FreeNAS VM with 256MB of RAM can run in VMware Player or Workstation on Windows/Linux and in Fusion on OS X.
ZFS checksums every filesystem block, which lets you detect silent corruption. Furthermore, by design it can roll back any incomplete filesystem transaction. I've had my arse saved by ZFS more times than I care to remember. The most difficult thing for my home storage system is finding external disk arrays that give me direct access to all the disks (not their RAID crap). A proper home storage setup is RAIDZ2 (basically RAID6) + a hot spare.
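As a sketch, that layout looks something like the following. The pool name `tank` and disk names `da1`..`da7` are made up; substitute your own devices, and note that creating a pool destroys whatever is on them:

```shell
# Hypothetical 7-disk pool: 6 disks in RAIDZ2 (any two can fail without data
# loss) plus one hot spare that ZFS pulls in automatically on a disk failure.
zpool create tank raidz2 da1 da2 da3 da4 da5 da6 spare da7
# Scrub periodically so ZFS reads every block, verifies its checksum, and
# repairs silent corruption from the redundancy where possible.
zpool scrub tank
zpool status tank
```

This is exactly why the array must expose raw disks: if the enclosure's own RAID layer sits in between, ZFS can detect a bad checksum but has no redundant copy to repair it from.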
Another way is to have a simple, Time Machine-like backup solution on at least one of your operating systems. But even that doesn't catch silent data corruption, let alone warn you about it. Which brings us back to: ZFS.
I do appreciate your sarcasm. It's of quite reasonable quality, unlike most.
UTF-8 encoding comes with a lot of additional processing. IP communication (v4 or v6) needs to be implementable in anything from ASICs to Java in as few lines as possible. Adding something like a UTF-8 decoder increases the complexity of the whole thing and definitely increases the latency. Since we're in a jitter- and latency-sensitive world, decoding each packet that passes through each router interface would most probably add a noticeable amount of latency to the whole equation.
The whole article starts from the wrong premise. What I'm debating is the whole anti-IPv6 movement of idiots who aren't able to understand the need for, or the features of, IPv6. To be completely on-topic: Apple hasn't stopped using or providing IPv6. Apple still provides IPv6 on their APs/routers; it's just that their newest configuration tool doesn't offer a way to configure it. So what Apple is missing in the whole IPv6 equation is not IPv6 support, but:
A) Support for configuring IPv6 in AirPort Utility version 6.0 (5.6 still does the job, and both versions can be installed in parallel). Following Apple's standard behaviour, by July 1st they will release AirPort Utility 6.1, which 'reintroduces' IPv6 support. Fortunately, version 5.6 is still available for download.
B) Support for PPPoEv6. Apple supports static IPv6, 6to4 tunnels and automatic allocation (incl. DHCPv6), but no PPPoEv6. This is the only thing that is really missing on the AirPort/Time Capsule side of things (not just in the config tool). PPPoEv6 is mandatory for most DSL providers that actually give you the option of using your own router (turning that expensive VDSL2 router into a simple bridge).