Do not stare at Internet with remaining eye?
Even if it had all the features, from TFA, the guy pointed it at a group of people standing still and dumped it.
The car would've assumed the driver knew what he was doing (it only works when moving slowly and not accelerating hard) and plowed into the fools anyway.
Just to clarify, I was a TPG customer for >14 years, from back when they were a no-frills technically oriented dial up ISP operating off the back of an older corporate IP/X.25 WAN provider. Most of that time I was on an unlimited plan of some description. When they decided to drop any focus on quality and push for pure price competition is when they started going downhill - early '00s.
I remember when Mr Teoh used to switch to international transit whenever he was negotiating with Telstra for better domestic transit pricing. Anything to the gang of 4 would go all the way to San Jose then back home again. Having to manually choose a proxy in order to browse during peak because the transparent load balancer was a bit iffy. More recently, every time they go on an advertising spree the network goes down the toilet again. It's far worse down in Sydney or Melbourne I hear.
6 months of jitter and packet loss on my home DSL prompted the move and I couldn't believe I'd waited that long. There was nothing wrong with the tail, it was all on the TPG network edge - multiple refreshes to get a page to load, packet loss, serious jitter to anything outside their network. SPT and TPG resources were fine and it was only during peak - I was monitoring 24/7 with SLA probes. Internode is more expensive but rock solid - I've only been with the 'node for 3 years, the difference is night and day.
I work in the telco space, several of my employers' upstream providers are now owned by TPG. They're nearly as bad in wholesale/corporate as residential. There's been some improvement but they're consuming companies rather than absorbing their strengths, where those strengths cost a little extra. That was the point I was trying to make.
They're being bought by the second worst ISP in Australia: http://www.itnews.com.au/News/401960,iinet-board-seriously-concerned-about-culture-post-tpg-buy.aspx, http://www.afr.com/technology/iinet-shareholders-hit-out-at-board-over-tpg-m2-takeover-battle-20150507-ggvyow.
They've already destroyed several large players in the infrastructure space (PIPE Networks for example, AAPT is in progress), and now one of the highest ranked customer service ISPs (if not the highest) is about to be consumed in a primarily cash-based deal, leaving the original team with no control or say in the combined company.
There's little chance of TPG allowing anything to continue that costs more than the bare minimum. Where you previously had people who knew their stuff proactively supporting many-thousand-$-per-month corporate fibre WANs and the like, you now get a bored dude from the Philippines working through a residential ADSL support flowchart who wouldn't know a VLAN if it was trunked right up his bum.
iiNet/Internode/Westnet/etc are the last service-oriented consumer ISPs in the marketplace. Their legal defence of their common-carrier status and their continued protection of customers is just one example. It would be a shame if they were absorbed by a company that is their exact opposite.
(What's the worst ISP though? I reserve that title for Dodo).
All true to a degree; however, in AU at least, there are a couple of caveats.
First, all physical endpoints must be identifiable. There are some exceptions, but the ACMA carrier licensing regulations around voice and data mean that in 99% of instances, much of the data you're describing must already be logged and made available when presented with a warrant. Much of the infrastructure is already in place. For example, it is illegal to activate a mobile SIM without providing ID (driver's licence details). Your phone number is bound to your SIM identity, so when you're making calls, it doesn't matter what the cell infrastructure or backhaul is doing, the CID and IPND data is traceable through all the carriers involved. All services hooking into the PSTN are required to provide valid endpoint location and responsible person data, even IP voice.
Secondly, with data, the vast majority of Internet connections in Australia are either PPP or mobile. Most residential services (e.g. DSL, NBN, residential fibre) are delivered as PPPoE/A, directly linking an authenticated username with all its account details to an IP history. Actions taken by that IP are easily cross-matched without worrying about matching physical circuits. HFC cable, EoC, fibre ethernet or other L2 tails are uncommon for residential internet and when in place, service providers are still required to supply similar means of match-up to comply with ACMA requirements. Mobile broadband acts similarly, the accounting systems make tracking easy.
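To illustrate why PPP-style authentication makes the cross-match trivial, here's a minimal Python sketch. The record layout, usernames and addresses are entirely made up for illustration; real accounting stores (e.g. RADIUS detail logs) carry far more fields, but the lookup is essentially this:

```python
from datetime import datetime

# Hypothetical, simplified RADIUS-style accounting records:
# (username, assigned IP, session start, session stop).
SESSIONS = [
    ("user_a@isp", "203.0.113.10", "2015-05-01 08:00", "2015-05-01 18:00"),
    ("user_b@isp", "203.0.113.10", "2015-05-01 18:05", "2015-05-02 02:00"),
    ("user_c@isp", "203.0.113.44", "2015-05-01 09:30", "2015-05-01 11:00"),
]

FMT = "%Y-%m-%d %H:%M"

def who_had_ip(ip, when):
    """Return the authenticated username holding `ip` at time `when`."""
    t = datetime.strptime(when, FMT)
    for user, addr, start, stop in SESSIONS:
        if addr == ip and datetime.strptime(start, FMT) <= t <= datetime.strptime(stop, FMT):
            return user
    return None

print(who_had_ip("203.0.113.10", "2015-05-01 20:00"))  # user_b@isp
```

Note the same IP maps to different customers at different times, which is why session start/stop is the critical data, not just username-to-IP pairs.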
All of this stuff is already in place. All the ISPs I'm aware of are very particular about traffic accounting and logging beyond even what is required by current law. The laws being proposed (as far as I can tell) increase the storage time and expand beyond the scope of the accounting data required now, almost to the point where you're going to be logging netflows, archiving proxy/DNS logs and hanging on to them for a couple of years - huge amounts of data. Unfortunately, all doable, all scalable off the back of existing diagnostic and accounting systems. I've been involved in scoping some of this myself for my employer.
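For a sense of "huge amounts of data", a rough back-of-envelope (every figure here is an assumption for illustration, not from any real ISP):

```python
# Rough back-of-envelope for two years of flow-record retention.
# All figures below are illustrative assumptions, not real ISP numbers.
flows_per_sub_per_day = 50_000   # assumed average flow records per subscriber/day
bytes_per_flow        = 50       # assumed compact binary flow record size
subscribers           = 500_000  # assumed mid-size ISP
retention_days        = 2 * 365

total_bytes = flows_per_sub_per_day * bytes_per_flow * subscribers * retention_days
print(f"{total_bytes / 1e15:.1f} PB")  # 0.9 PB
```

Getting on for a petabyte before you even touch proxy or DNS logs - expensive, but as noted, entirely doable with commodity storage.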
It'll be expensive, which is what ISPs and CSPs are griping about loudest right now, but there's no crippling technical limitations, no matter how much I wish there was.
It seems you're unfamiliar with the performance of Office 365 cloud storage.
Short answer: your connection will not be the bottleneck.
It's even more fun trying to migrate terabytes of data back out of the MS cloud.
Speaking for myself, it's a market I care quite strongly about (having a Mac and being a fan of Dropbox). It's also a market that's used to paying for decent features.
iCloud doesn't work well on anything but my single Mac. I've barely tried Google Drive or OneDrive, but their clients were just terrible each time I did. Dropbox works very well on the 3 workstation OSes and 2 phone OSes I use day-to-day.
Reinforcing your point, I find my MBP to be an excellent dev box, with all the bells, whistles and software vendor support I could want. Bonus points for being lightweight and high performance with a great battery life, especially compared to the regular (HP, Toshiba, LG) "high performance" employer-issue dev laptops which seem to be either slow or not very portable.
And Australia's had it for years too. When I ordered it, I had to make sure my phone was the international version with NFC support, because the US model doesn't have it.
What I recall from my Australian education is that water structure is constantly changing, and that no "memory" lasts more than a few nanoseconds. No structure has been observed to persist in any form for longer than this, nor any kind of cyclical/regenerative state based on non-reacting impurities or solutes in the water.
Of course, this is all in relation to room- or body-temperature water, which is quite energetic and liquid. Environmental effects are a bit different. Closer to freezing everything slows down and the molecules start to line up in preparation of forming ice crystals. Usually, I'd hope this doesn't happen in a purification plant in-pipe or a human body. Either scenario is unpleasant.
But I digress. The point is this: AC power is a waveform, oscillating at 60 Hz. It cannot vary much at all...because within the same grid, everything is interconnected. Every generator is in sync, or has a synchrophasor to re-sync the power coming from it before it hits the grid. Otherwise, you get some power from A and some from B, with waveforms that are out of sync...and the combined waveform varies in both frequency and amplitude, and shit blows up.
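What "out of sync" actually does to the combined waveform is easy to sketch numerically. Assuming (purely for illustration) one generator drifting to 61 Hz against a 60 Hz grid:

```python
import math

# Illustrative only: two generators at slightly different frequencies
# (60 Hz vs 61 Hz) summed on the same bus. Instead of a clean sine, the
# combined waveform "beats": its envelope swings between roughly double
# amplitude and near-total cancellation once per second.
def combined(t, f1=60.0, f2=61.0):
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

rate = 6000  # samples per second
# Near t=0 the waves reinforce; near t=0.5 s they cancel.
peak_reinforced = max(abs(combined(i / rate)) for i in range(0, 600))
peak_cancelled  = max(abs(combined(i / rate)) for i in range(2970, 3030))
print(f"reinforced peak ~ {peak_reinforced:.2f}, cancelled peak ~ {peak_cancelled:.2f}")
```

That swing from ~2x voltage to near zero within a second is the "shit blows up" part.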
You may wish to engage in a quick review of:
And numerous other examples of various subcarriers being successfully overlaid on the 50/60 Hz power waveform. When used for data transmission, BPL technologies, while commonly confined to short-range deployments due to EMI problems, can deliver hundreds of megabits, up to multiple gigabits, of bandwidth over tens of kilometres - this was deployed and trialled for wide-coverage broadband delivery in Australia. These capabilities indicate we already have consumer technology which can work through the noise to transmit and receive such a high-precision signal on a shared medium, and which would not create the chaos described.
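A toy numerical sketch (not real BPL or OFDM signalling, just the underlying orthogonality trick) of how a small subcarrier can ride the mains waveform and still be recovered despite the enormous 50 Hz component:

```python
import math

# Toy illustration: a tiny 10 kHz BPSK-style subcarrier riding on a 230 V
# 50 Hz mains wave. Correlating one full mains cycle against the subcarrier
# frequency rejects the 50 Hz component (the two are orthogonal over the
# cycle) and recovers the bit.
MAINS_F, SUB_F, RATE = 50.0, 10_000.0, 200_000

def line_signal(t, bit):
    mains = 230 * math.sin(2 * math.pi * MAINS_F * t)                   # big 50 Hz wave
    sub = (1 if bit else -1) * 0.5 * math.sin(2 * math.pi * SUB_F * t)  # tiny subcarrier
    return mains + sub

def detect(bit):
    # Correlate one mains cycle (20 ms) of the line signal with the subcarrier.
    n = RATE // 50
    corr = sum(line_signal(i / RATE, bit) * math.sin(2 * math.pi * SUB_F * i / RATE)
               for i in range(n))
    return corr > 0

print(detect(True), detect(False))  # True False
```

The 230 V component contributes essentially nothing to the correlation, so a sub-volt subcarrier comes through cleanly - the same principle, scaled up across many subcarriers, is how data gets through the noisy shared medium.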
I'm not disagreeing with this being highly unlikely as a useful tool for tracking without a lot of infrastructure, but the power networks are in no way clean or perfectly in sync. Phases are locked (or the generators will get yanked into line, potentially disastrously), but beyond mechanical low-frequency synchronisation at the production end, there's a lot of noise and variation. I've personally seen several scenarios, mostly large industrial estates, which vary very significantly in voltage and frequency (both over 20%) depending on time of day and resultant grid load. IT gear doesn't agree with this and requires heavy duty power conditioning.
And I've been getting increasingly nostalgic for WW1&2 shooters (Codename Eagle, BF1942, ET, the original CoD) over the current crop of modern-warfare clones. This game might be right up my alley.
Don't have too much time to game these days, but if TF2 or PlanetSide 2 isn't hitting the spot, I might give the new Wolfenstein a try.
And you get the usual proprietary issues from both.
I'm not entirely sure what you're angling at VMware with that, but for AWS it makes more sense.
The promise of OpenStack is that you develop in house, then push it out to whatever commodity provider(s) meet your needs at the time [...snip...] [compatible] at the machine level instead of the app level.
I was under the impression that OpenStack is a management and deployment framework - it will work on top of whatever supported hypervisors are in use (KVM, Xen, VMware, etc). One would assume you won't be exposed to the majority of OpenStack's APIs and direct management systems if you're using a third-party cloud provider.
Unless you're planning your own cloud system or are looking at a deployment on the scale where you would be closely looking at running up some of your own hardware with an IaaS partner for rapid scaling, I don't see any direct benefits to users. Especially for SMEs and non-IT-centric businesses, which are the primary targets for the "outsource everything to the cloud, it's worry free!" propaganda.
Reminds me of: http://en.wikipedia.org/wiki/C...
I'd have to agree. VMware VI Client (the
NetApp and 3PAR's management toolkits crap all over HP MSA/EVA or the various IBM SAN consoles for usability. It's been a while since I used NetApp, though.
"I got everybody to pay up front...then I blew up their planet." "Now why didn't I think of that?" -- Post Bros. Comics