I'll wait for 4.1, and then I'll wait for 4.1.2 just to be safe.
RAID 10 can survive SOME 2-drive failures (in a 4-drive RAID 10), and has significantly faster write speeds than RAID 5.
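For a 4-drive RAID 10 (two mirrored pairs), you can enumerate exactly which 2-drive failures are survivable -- a minimal sketch, not tied to any particular controller's layout:

```python
from itertools import combinations

# 4-drive RAID 10: two mirrored pairs, drives (0,1) and (2,3).
# The array survives as long as each mirror pair keeps at least one drive.
mirror_pairs = [(0, 1), (2, 3)]

def survives(failed):
    return all(any(d not in failed for d in pair) for pair in mirror_pairs)

two_drive_failures = list(combinations(range(4), 2))
survived = [f for f in two_drive_failures if survives(set(f))]
print(f"{len(survived)} of {len(two_drive_failures)} 2-drive failures survive")
# 4 of the 6 possible 2-drive failures are survivable; only losing both
# halves of the same mirror (0+1 or 2+3) kills the array.
```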
Personally, I use a combination of RAID 0 and RAID 6 (not in the same array), because RAID 5 for large arrays is almost useless. I've seen too many RAID 5 arrays die when the bad drive is replaced and the added stress of the rebuild then kills a second drive. Ouch.
No. You assume that a phone call stating that you won a free trip is false because of two things: you know that the odds of winning such a thing are minuscule, and you've heard of phone scams, so the odds of it being the latter are higher.
Flash at least crashes
Fixed that for you.
There is nothing in
Well if it is based on fatalities, then:
Accident rate in general: 4-5%
Accident rate so far with only 48 vehicles: 0%
Without more details, you have no basis to assume the claim of "someone else's" fault is false.
Ah, I thought RAID1 would warn you somehow of bit flips which I assume would be the way heat-deteriorated storage would show up.
It does. The description of how RAID 1 works was incorrect. No RAID controller that I am aware of implements RAID 1 that way. That includes Dell's PERC RAID controllers, Intel's ICH RAID controllers, Adaptec RAID controllers, LSI's RAID controllers, RocketRAID controllers, and Windows' software implementation.
Sorry, the same applies to distributed parity (RAID 5) and dedicated parity drives (RAID 4) during reads as well.
During writes, all the data in the affected stripe needs to be read (or, for a small write, just the old data block and the old parity) so that the correct parity can be calculated.
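The parity math itself is just XOR. A small sketch of both write paths -- the full-stripe recompute and the read-modify-write shortcut (this is the general RAID 5 technique, not any specific controller's code):

```python
def xor_blocks(a, b):
    """XOR two equal-length byte blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Full-stripe write: parity is the XOR of all data blocks in the stripe.
data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = data[0]
for block in data[1:]:
    parity = xor_blocks(parity, block)

# Read-modify-write for updating a single block:
#   new_parity = old_parity XOR old_data XOR new_data
# Only the old data block and old parity need to be read.
new_block = b"\xaa\xbb"
new_parity = xor_blocks(xor_blocks(parity, data[2]), new_block)

# Sanity check: recomputing parity over the full stripe gives the same answer.
full_recompute = xor_blocks(xor_blocks(data[0], data[1]), new_block)
assert new_parity == full_recompute
```

The shortcut is why small random writes on RAID 5 cost four I/Os (read old data, read old parity, write new data, write new parity) -- the "RAID 5 write penalty".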
For a RAID1, most RAID controllers (and software RAID implementations) will absolutely read from all devices so as to service the read ASAP.
No, almost every RAID 1 controller I've ever encountered does not do that at all. It balances the reads across the drives to maximize throughput and IOPS. Only when one drive attempts to read a sector, detects an error through its internal CRC checks, and is unable to rectify it (a short retry period for RAID-class drives, a long one for desktop-class drives) will it request the data from the alternate drive and have the original drive correct itself.
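A toy model of that read path -- round-robin balancing across mirrors, with fallback to the alternate drive only on a read error (a sketch of the general behavior, not any vendor's firmware):

```python
class Raid1Reader:
    """Toy RAID 1 read path: balance reads across mirrors,
    fall back to the other mirror only on a read failure."""

    def __init__(self, mirrors):
        # mirrors: callables block_number -> bytes, raising IOError on failure
        self.mirrors = mirrors
        self.next = 0

    def read(self, block):
        n = len(self.mirrors)
        for attempt in range(n):
            drive = (self.next + attempt) % n
            try:
                data = self.mirrors[drive](block)
                self.next = (drive + 1) % n  # round-robin for throughput
                return data
            except IOError:
                continue  # unrecoverable CRC error: try the alternate mirror
        raise IOError("all mirrors failed")

def good(block):
    return f"data-{block}".encode()

def flaky(block):
    if block == 7:  # simulate an unrecoverable sector on one mirror
        raise IOError("unrecoverable CRC error")
    return good(block)
```

In a real controller the fallback read would also trigger a rewrite of the bad sector on the failing drive; that repair step is omitted here.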
Just out of curiosity, I fired up a D14 VM and loaded SQLIO on it. It came back with 253,713 IOPS. If you got less than one, you were doing something very, very wrong. BTW, there is no reason to install SQL Server of any kind on the machine, since SQLIO doesn't use it, so uh...yeah.
Did you set up a VPN to your local machine and then test how many IOPS you get to your local machine over a network share? LOL.
As for AWS vs. Azure performance, I'm not sure how you were testing, but based on your "expert" opinions, which are sorely misinformed, I'm guessing you did something really boneheaded. A P3 instance is guaranteed a minimum of 735 IOPS. So if you were getting less than one, I'd say the problem was with the user. We use Azure to run our production sites and haven't seen anything like what you describe.
On Azure, SQL databases can be connected to from anywhere, any IP. There is no firewall at all.
Uh, no. You have to explicitly allow IPs in. Here's a link to "Azure SQL Database Firewall": https://msdn.microsoft.com/en-...
If you don't understand firewalls, there is a pretty picture if you scroll down. The first step, "SQL Database Firewall", is a server-level IP firewall. Yes, the configuration is stored in the database, but that has very little meaning; it could have been stored in a flat file and it would behave exactly the same.
I've never tried a D14 VM with enterprise installed, but if you were getting poor performance, I'm guessing you did something extremely boneheaded like try to put the database on a remote drive instead of the local SSD.
You obviously know nothing about networking or the Slammer worm, because it hit the discovery/locator service on port 1434, not 1433. And yes, I know networking very well, thank you.
You might as well say the next time there is an http exploit all the web servers in the world are going to get hosed. Maybe we should put all web servers behind our firewalls to be safe. LOL.
Staged publishing on Azure is limited to 5 slots, not 2. But you are correct, it doesn't scale out to 200 slots. However, in Azure, you pay for the first slot, the additional 4 are free, while the same isn't true in AWS.
As for SQL performance in Azure, I can't make an exact comparison, but you always have the option of running your own SQL Server if you don't like the SQL service. Even the cheapest Azure plan gets you point-in-time backup/restore, although the retention window increases as you move up service levels (Basic: 7 days, Premium: 35 days), and the cost of a P1 database is roughly half that of a single "db.m3.xlarge" instance on AWS ($465 vs. $968.40), and the price gets worse from there. Multi-zone databases on AWS cost essentially full price, while on Azure (as geo-replicas) they become cheaper. Then tack on some more if you care about provisioned IOPS (which come free with Azure, as all tiers are allocated that way), and you've quickly run up a huge AWS bill for the same performance you'd get from Azure.
As for AWS RDS being the "full" version of SQL Server, I suppose that is true if you only need the features in SQL Standard (Amazon doesn't offer Enterprise). Unfortunately, we use features only available in Enterprise (like having the optimizer automatically use indexed/materialized views in queries), and AWS RDS doesn't support that while Azure does -- so your "full" version is less "full" than your "not-full" version of Azure's SQL service. Amazon's RDS is also limited to 1 replica, vs. 4 in Azure (we don't actually use this, yet).
As for the "Slammer" idea, that is quite funny, when the exact opposite would be true assuming vulnerabilities still exist in the locator service. Azure likely doesn't use the same code, while AWS does, so AWS would be more likely to be hit than Azure *IF* another vulnerability exists. But good job bringing a 13-year-old, irrelevant vulnerability into the discussion.