I remember seeing some numbers at one point. It didn't impress in a single thread, but it could easily saturate a 10Gb link in multi-threaded tests; they tested an FTP server on a T2+. Regarding cores, we have anything from dual UltraSPARC IIIi to T4-based systems, including some M-class. I believe the T3-4 has the highest core count: it should be 64 cores and 512 threads, but a single Solaris instance can only see 256. I believe that the M9000 and M9000-64 should have the same problem, but the biggest M-series machine I've worked with is the M8000.
While I've been a Solaris admin for some time, I can tell you that it doesn't have the best TCP/IP stack. It has all the bells and whistles, but it's not even in the same ballpark as FreeBSD speed-wise. It's probably about as fast as Linux if you tune it properly. It does have nice configuration features: virtual switches, link aggregation, and hardware crypto usable by OpenSSL, OpenSSH, and IPsec. But the cost of all those features basically means that it has mediocre performance in simple, yet performance-hungry scenarios.
On sunny days they make tons of "free" electricity.
On cold dark winter nights, where does the power come from?
They can build backup plants running on coal/gas, typically operated under nameplate capacity, or they can buy nuke power from the French.
Oh, the irony...
You've got it. What I don't understand is why nuclear electricity is put in the same basket as coal and gas plants. The incidents nuclear has gone through in the past 60 years only reinforce my view that it's a safe solution: if, given all the fsck-ups that produced Chernobyl, Fukushima and Three Mile Island, that's all that happened, I think it's pretty much OK. I'm saying this because coal/thermal plants have their exhaust-pipe problems, which affect a much greater percentage of the population, and hydro is in general an ecological mess that also involves massive population relocation.
Nothing in that machine is laptop hardware. Like previous Mac Pros it has a workstation CPU (Xeon), workstation graphics (FirePro) and workstation RAM (registered, ECC). Indeed, the Mac mini has a laptop CPU and SO-DIMMs for memory, but we're talking about the Mac Pro.
Furthermore, I don't get the "doubts about the Thunderbolt displays". Thunderbolt can act as a plain Mini DisplayPort (with audio, too). So go grab your $150 Dell DisplayPort monitor and plug it in; all it takes is an $8 Mini DisplayPort male to DisplayPort male cable. Whether you want the more advanced features of Thunderbolt is a matter of taste, but for a lot of external hardware, USB is not an option even in its 3rd incarnation.
Stop scratching the machine if you don't want scratches on it. My workstation is always on, and except for dusting it, I don't think I've actually touched it in over a year. Now, on to the serious stuff...
Upgrades are allowed. It features 6 Thunderbolt ports, so you can add as many 10GigE NICs, FC HBAs, high-performance direct-attached external storage arrays, and video capture controllers as you want. There are a few Thunderbolt to PCI Express 2.0 x8 adapters available if you want to use your own hardware.
I guess the only non-upgradable parts are the video cards. I think they are swappable, but due to their proprietary format there would be no 3rd-party alternatives.
P.S.: Have you noticed how Google managed to come up with a decent Maps app in only 6 months? They completely neglected the iOS app for years and only improved the Android one, until Apple kicked their arse back to work. I find that kind of competition to be healthy!
If you were Apple, you wouldn't have survived the 90's.
While the Apple maps data is not the best in some places, I can say that they're doing a much better job improving it than everyone else. It took Google a few years to list any roads at all in most European countries; Apple started with complete maps. I've compared the coverage of Apple, Google, Nokia, Bing and OSM on quite a few occasions, and OSM is the only one better than the rest. Google, Apple, Nokia and Bing fail to show one third of the motorways in Romania. I'm not talking about a forgotten secondary road somewhere up in the mountains, I'm talking about (albeit only a few) hundreds of kilometres of motorways.
The application isn't bad at all. It's still superior to Google's, at least for now. The data might be flawed in some places, but give them a few months to get it right. I'm quite sure that when Google Maps first appeared, their data wasn't optimal either; their maps are now much better thanks to community effort in apps like Map Maker.
In case you're an idiot and couldn't figure this out by yourself, I'm going to spell it out: it makes perfect business sense to build your own maps application if your biggest competitors (Google, Microsoft, Nokia) all have their own solutions. What do you think the licensing costs would be if Apple attempted to license a maps solution from Nokia's Navteq or from Microsoft's Bing?
You haven't been to the Netherlands recently. NS should stand for "No Show"!
In my experience, while traveling between FR, DE, BE, LU, CH, AT and the NL, the moment a train (including a high-speed train) crosses the Dutch border it's instantly delayed. Should I count the part where changing trains between NL and BE to "high-speed" trains, even though they travel at normal speed, is just an excuse for making the prices 3-4 times higher, with mandatory reservations (unless you buy the tickets from Belgium)? Should I count the time I've wasted on their platforms, mostly in bad weather?
The Dutch are good at a lot of things. Punctuality hasn't been one of them in a long time: whether you're talking about KLM, KPN (especially Getronics) or NS, they have completely forgotten what punctual means. Furthermore, they have replaced their BS-free attitude with a disgusting "politically correct/tongue up your arse" attitude, where, in order not to lose your business, they tell you what you want to hear instead of the ugly truth. Fortunately, the Germans and the French are still frank enough.
You didn't read what I said. Yes, ZFS + snapshots, but you also need at least Sun Cluster replication and tape backup. ZFS + snapshots doesn't save you from fires, floods, software bugs and ill will. It does save you from idiots and disk failure, though.
RAID is a method of reducing the chances of a disk failure being fatal to the data. RAID is not a backup solution. Anyone who answers a question about backup with RAID is an IDIOT who doesn't deserve his oxygen quota and should be put down.
Disk failure is not the only reason for using backups. More often than not you run into an idiot user (who happens to be an executive) who deleted stuff by mistake, and you need it back.
Furthermore, disk failure can happen on all the disks at once. You have: fires, idiots, floods, more idiots, bad wiring, idiot admins, software bugs, and my personal favourite: tired admins.
"Always have an off-line backup and an off-site replica" is my personal favourite.
> what you're looking for (ZFS) hasn't been invented on any of the OSs that you're using.
Actually, there is MacZFS. It runs just fine on OS X. I have the OS, apps, and my home folder on an HFS+ partition on an SSD; everything else is on ZFS, exported via SMB to all my Win boxes.
And there's the Ten's Complement implementation, which is even better. But neither covers Windows and Linux: there is no Windows implementation, and the Linux one is alpha quality at best.
Two aspects to your problem:
1) Recovering from the current situation
If you didn't make ANY changes to the filesystem after it was corrupted, you still have a chance with software like DiskWarrior or Stellar Phoenix. Never work on the original corrupted filesystem unless you have copies of it. So grab a second drive, connect it over USB, and copy the damaged volume to it using hdiutil or dd. Once you've done that, use DiskWarrior or Stellar Phoenix on one of the copies while keeping the other one intact. Always have an intact copy of the original FS; you might need multiple attempts with different methods, so KEEP AN INTACT COPY.
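The copy-before-recovery step can be sketched in Python. This is a hypothetical stand-in for dd, not a real tool: it clones an "image" block by block and hashes it so the copy can be verified; on a real disk you would read the raw device (e.g. /dev/disk2) instead of a throwaway file.

```python
import hashlib
import os
import tempfile

def clone_image(src_path, dst_path, block_size=1 << 20):
    """Copy a disk image block by block (what dd does) and return a
    SHA-256 digest so the clone can be verified against the original."""
    digest = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            digest.update(block)
            dst.write(block)
    return digest.hexdigest()

# Demo on a fake 4 MiB "image" file so the sketch is self-contained.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "damaged.img")
    dst = os.path.join(tmp, "work-copy.img")
    with open(src, "wb") as f:
        f.write(os.urandom(4 * 1024 * 1024))
    copy_hash = clone_image(src, dst)
    with open(src, "rb") as f:
        orig_hash = hashlib.sha256(f.read()).hexdigest()
    assert copy_hash == orig_hash  # clone verified; now recover on the copy
```

The point is the workflow, not the code: you only ever run recovery tools against a verified copy.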
2) Avoiding it in the future
NTFS is good at surviving a crash if and only if the crash occurs in Windows. Paragon NTFS for Mac/Linux and NTFS-3G don't use journaling to its full extent (for both metadata and data). So, if you get a crash while in Mac OS X or Linux, chances are you get data corruption.
The same goes for HFS+. While Mac OS X uses journaling on HFS+, Linux doesn't; Linux mounts a journaled HFS+ volume read-only. Furthermore, HFS+ journaling is metadata-only anyway.
Now we get to the last journaled filesystem available to all 3 OSs: EXT3. It's the same crap as above.
Because of the three points above, I have a conclusion: what you're looking for (ZFS) hasn't been invented on any of the OSs that you're using.
Thus, I have a simple recommendation:
Run ZFS in a VMware machine and export it via CIFS/WebDAV/NFS/AFP to Linux, Windows or Mac OS X. A small FreeNAS VM with 256MB of RAM can run in VMware Player and Workstation on Windows/Linux, and in Fusion on OS X.
ZFS checksums the filesystem blocks, which lets you know about silent corruption. Furthermore, by design, it can roll back any incomplete filesystem transactions. I've had my arse saved by ZFS more times than I care to remember. The most difficult thing for my home storage system is finding external disk arrays that give me direct access to all the disks (not their RAID crap). A proper home storage setup is RAIDZ2 (basically RAID6) plus a hot spare.
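To illustrate what block checksumming buys you, here's a toy sketch in Python. This is not ZFS code; the block-store class and names are made up for illustration. The idea is simply that every block carries a digest, so a read can detect that the data rotted since it was written:

```python
import hashlib

class ChecksummedStore:
    """Toy sketch of ZFS-style block checksumming: every block is stored
    with a SHA-256 digest, and reads verify the data before returning it."""
    def __init__(self):
        self.blocks = {}  # block_id -> (data, digest)

    def write(self, block_id, data):
        self.blocks[block_id] = (data, hashlib.sha256(data).hexdigest())

    def read(self, block_id):
        data, digest = self.blocks[block_id]
        if hashlib.sha256(data).hexdigest() != digest:
            raise IOError("silent corruption detected in block %d" % block_id)
        return data

store = ChecksummedStore()
store.write(0, b"important data")
assert store.read(0) == b"important data"

# Simulate bit rot on disk: the stored data no longer matches its digest.
data, digest = store.blocks[0]
store.blocks[0] = (b"imposter data!", digest)
try:
    store.read(0)
    corruption_detected = False
except IOError:
    corruption_detected = True
assert corruption_detected
```

A plain filesystem would happily hand back the rotten block; a checksumming one refuses, and with RAIDZ2 redundancy it can reconstruct the good copy instead.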
Another way is to have a simple, Time Machine-like backup solution on at least one of your operating systems. But even that doesn't catch silent data corruption, let alone warn you about it. As such, we get back to: ZFS.
I do appreciate your sarcasm. It's of quite reasonable quality, unlike most.
UTF-8 encoding comes with a lot of additional processing. IP communication (v4 or v6) needs to be implementable in anything from ASICs to Java in as few lines as possible. Adding something like a decoder increases the complexity of the whole thing and definitely increases the latency. Since we're in a jitter- and latency-sensitive world, decoding each packet that passes through each router interface would most probably add a noticeable amount of latency to the whole equation.
The whole article starts from the wrong premise. What I'm debating is the whole anti-IPv6 movement from idiots that aren't able to understand the need or the features of IPv6. If we're completely on-topic, Apple hasn't stopped using/providing IPv6. Apple still provides IPv6 on their AP/routers, however, their newest configuration tool doesn't provide a method for configuring it. So, what Apple is missing in the whole IPv6 equation is not IPv6 support, but:
A) Support for configuring IPv6 in AirPort Utility version 6.0 (5.6 still does the job, and both versions can be installed in parallel). Following Apple's standard behaviour, by July 1st they will release AirPort Utility 6.1 that 'reintroduces' IPv6 support. Fortunately, the 5.6 version is still available for download.
B) Support for PPPoEv6. Apple supports static IPv6, 6to4 tunnels and automatic allocation (incl. DHCPv6), but no PPPoEv6. This is the only thing that is really missing on the AP/Time Capsule side of things (not in the config tool). PPPoEv6 is mandatory for most DSL providers that actually give you the option of using your own router (while turning that expensive VDSL2 router into a simple bridge).
OK. It seems that I am well rested, so let's see why you're an idiot:
1) NAT doesn't work. It only works properly for trackable connections (TCP, for example). Otherwise NAT requires hacks such as NAT-PMP and UPnP. Can you please explain to me why we need the intervention of a complex protocol (like UPnP) just to get layer 3 working properly? Understanding NAT traversal and implementing it properly is more difficult than just understanding and implementing IPv6.
2) NAT is used as a security feature only by idiots (thus, my assumption that you're an idiot). Sane router defaults and enabling the firewall that comes with your operating system might do a better job. Even blondes have heard of a firewall. Not doing that is as inexcusable as not locking your car and then complaining that it got stolen/vandalised. In order to do some things (such as using a computer), you need to accept that you need to learn shit (such as enabling a firewall).
3) Getting IPv4 and IPv6 to play nice is not a problem. Getting both of them at the same time might duplicate some of the work, but that's what you get when you migrate from something old to something new. Some things still need to be done twice. However, since they are independent protocols (none assumes or requires the other one), you don't have to get them to "play nice" and you don't "default" to one or the other. Google "CCNA Semester 1" if you're missing the basics about IPv4 and IPv6 and the layered OSI model.
4) You make the ASSumption that if you have both protocols, somehow all requests will first go through IPv6 and then, after timing out, will attempt IPv4. That ASSumes a few things that need to go wrong and usually don't:
4a) the requested resource advertises both protocols (most only advertise IPv4)
4b) the application defaults to IPv6. Applications don't default! Applications do as they (or, in this case, the OS) are configured.
4c) your system imagines it's connected to both an IPv4 and an IPv6 network that can route to the requested resource, when in fact it's only connected to an IPv4 network that can route to it. If your network doesn't provide IPv6, even if your system supports it, the applications will NOT use IPv6, let alone time out. Same with IPv4: if your network only provides IPv6, your applications will not attempt to connect via IPv4. Actually, some applications will, but they will instantly get a "no route to host" on the misconfigured protocol and only then attempt the other one. But even in this scenario you don't get a time-out, you get an instant exception.
5) Making IPv6 somewhat backwards compatible with IPv4 would make it IPv4.
6) Not having experience at something should be an incentive for us to get better at it, not a reason to stick with IPv4. We've already had almost 15 years to learn what IPv6 is all about, but some 'experienced' fucks are too damned lazy to give IPv6 6-12 hours of their life.
7) It's about time we move on and get rid of all the crap around IPv4 (such as: IPsec not being mandatory in all implementations, DHCP/BOOTP, ARP, RARP, 32-bit addressing, no autoconfiguration).
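To illustrate point 1, here's a toy NAPT table in Python (addresses and port ranges are made up for the sketch): outbound flows create a mapping, replies match it, and unsolicited inbound packets have no mapping at all, which is exactly why hacks like UPnP and NAT-PMP exist.

```python
import itertools

class Napt:
    """Toy NAPT: outbound flows get a public-port mapping; inbound packets
    are only translated if they match an existing mapping. Unsolicited
    inbound traffic (servers, P2P) simply has nowhere to go."""
    def __init__(self, public_ip="203.0.113.1"):
        self.public_ip = public_ip
        self.ports = itertools.count(40000)        # next free public port
        self.out_map = {}  # (private ip, private port) -> public port
        self.in_map = {}   # public port -> (private ip, private port)

    def outbound(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.out_map:
            pub = next(self.ports)
            self.out_map[key] = pub
            self.in_map[pub] = key
        return (self.public_ip, self.out_map[key])

    def inbound(self, dst_port):
        # None means the router drops the packet on the floor.
        return self.in_map.get(dst_port)

nat = Napt()
pub_ip, pub_port = nat.outbound("192.168.1.10", 51000)
assert nat.inbound(pub_port) == ("192.168.1.10", 51000)  # reply translated
assert nat.inbound(40999) is None                        # unsolicited: dropped
```

With IPv6 the whole table disappears, because every host has a routable address and reachability becomes a firewall policy decision instead of a translation hack.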
BTW, everybody should pray that we still use horses for transport as much as possible, because investing in tarmac is so expensive and time-consuming. God only knows what happens when the switch is flipped and we move to cars.
Thank God you're out of corp IT because you're definitely not able to adapt to the natural evolution of things.
> IPv6 is terrible if those "20 bytes more" are relevant for your application.
This is a ridiculous argument. Over the internet you don't have any guarantee of the MTU. A common value is 1280, another one is 1500, but you might end up with the packets fragmented to a lot less than that (sometimes even 400 bytes). There are bigger differences in path MTU sizes over the internet than the 20 bytes that might be different between IPv4 and IPv6.
If you're talking about intranet, then I should remind you that Jumbo Frames have been around for about 10 years. If you're still not using at least Gigabit Ethernet, then it's your design that is at fault not IPv6.
Sometimes admins and developers need to suck it up and go with the wave. We can't keep using Lotus Notes 6, Windows 95 and IPv4 over PPP/POTS forever.
IPv6 is something that we need and you need to adapt your application to that. If you don't, it means that you're not doing your job. It's your duty to find out any hiccups and if you can't directly fix them, at least report them upstream as near-term risks for the infrastructure.
If developers did their job properly, IPv6 would work without any intervention from them. Microsoft introduced an IPv6 stack for testing back in Windows NT 4.0. If you use the correct APIs, you should be using IPv6, IPv4 or even IPX, depending on your network conditions, almost transparently. Apple has also documented the correct, protocol-agnostic APIs for looking up hosts and getting sockets for a few years now. Even if you didn't follow the OS vendor recommendations, IPv6 has been clearly visible on the horizon for 10-12 years. I presume your application is not 20 years old, so you have no excuse for ignoring compatibility with a disruptive upcoming technology that everyone knew was unavoidable.
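For the record, the protocol-agnostic pattern looks like this in Python, using the standard getaddrinfo() from RFC 3493: you try whatever address families the network actually offers, in order, instead of hard-coding IPv4. The demo listener is local so the example is self-contained.

```python
import socket

def connect_any(host, port):
    """Try every address getaddrinfo() returns (AAAA and/or A records),
    in order, and return the first socket that connects."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses returned")

# Self-contained demo: a local IPv4 listener; connect_any() just picks
# whatever family the "network" (here, loopback) actually provides.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

client = connect_any("127.0.0.1", port)
assert client.family == socket.AF_INET
client.close()
server.close()
```

Code written this way needed no changes when IPv6 arrived; code that hard-coded AF_INET and 4-byte addresses is the code that "has an IPv6 problem".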