NOTHING escapes a black hole, not matter or energy or gravity.
Strawman, since no one has claimed otherwise.
We detect the activity around and affected by the black hole, none of which is inside it by any definition.
Well, considering that conventional warfare is a no-no, and nuclear warfare is a BIG no-no, but economic warfare is fair game, I'd say you have a point.
But once you have done that once, that's it, the economic weapon has been used, and you've got nothing left. Of course, there's always the threat of using it, or selling off a few million dollars of shares every now and again just to prove the point.
As others have mentioned, it may not be single use after all...
But even assuming it was, once the economic weapon has been used, doesn't that still leave conventional warfare and the ICBMs that weren't used?
So the government earned it? I don't think so.
Assuming the heirs in question weren't butchered by a mob along with their billionaire parents and the entire estate burnt down... Yes, the government earned it.
These heirs wouldn't have ANY money, nor their parents, if it wasn't for the government. If they had anything of value, they would likely have been killed and the valuables stolen if not for the government.
Not to mention they probably would have died during birth if not for the hospitals and medicine made possible by a stable economy, roads to get them to said hospital, and police to protect their safety and possessions.
If they want to claim they owe the government nothing, they also lose the right to complain when the rest of us murder them and take their stuff. I seriously doubt they really would go with that option if given the choice.
Either defend yourself 100%, or stop complaining and pay up for the protection offered.
And them going through it to profile you, handing a copy to the government, and the likelihood that they get hacked one day (either en masse, or just your account, or some disgruntled employee) and it gets out to someone/everyone else: are those just free perks then?
That can be a risk, yes, but it is easily defeated.
If the files I upload are intended to be distributed and shared, say software I write or videos I publish or whatnot, then that profiling is only a good thing since I want to rise in the search results as the source and creator.
If the files are a backup of personal/company data, stuff not to be shared anyway, I'll be uploading an encrypted container.
Yes, my profile will have a single entry stating I use encryption (which I don't know how to avoid) but my actual data isn't profiled since they can't read it.
You also eat some business risk that they may decide to discontinue the service, with little or no notice.
True, that is always a risk when you go 3rd party and don't do it "in house".
But that shouldn't be TOO much of a problem, especially for the backup situation. By definition backups are a copy, and by best practices you shouldn't ever have only one copy but many. Losing one may be an inconvenience in replacing it, but only a fool would delete their originals after uploading a backup.
(Not to say there aren't fools out there who do exactly that - only to say this is the fool's fault, not the hosting provider's)
For the distribution situation, it is admittedly a much larger inconvenience, and to many more people.
Hopefully a person operating this way has their own domain name and website, and can be the authoritative source for where to download the files.
Then it's just time being spent to upload everything to a new host to distribute and updating the links that point to them.
Yes users wanting to download will be upset, but that's mainly a result of you not having redundancy in place.
I realize redundancy isn't always an option for many reasons, and am not attempting to place blame. (If you can't afford it, you can't afford it.)
But this is the case with most everything online, be it file hosting, web hosting, backup services, ISPs/uplinks using BGP vs a single DSL/cable provider, etc.
I don't see that as a fault with the 3rd party really, just an unfortunate truth to the nature of what we do.
And they may lose it. Google's lost data before after all. They're far better administrators than the average joe consumer, but they aren't magical. You should probably still arrange for another backup.
Agreed! No one is perfect, and expecting otherwise can only bite you.
But for most, especially individuals and hobbyists but small companies too, Google's or Amazon's (or even Microsoft's) admins are going to have more time and people on hand to take care of it.
It's always a cost/benefit to figure out.
That said, I don't object to making use of cloud storage where appropriate... but google storage? Really? Don't they have enough of your data already?
Perhaps. I think this one is going to boil down to personal preference.
There is a certain extent to which I do trust Google, and a line I wouldn't trust them past. I suspect my line is a bit closer to "trust" than yours is.
But assuming we are both knowledgeable about what Google does, what their business model is, and what specific thing we will be using them for, I don't think there is much either of us can say to move the other's line, and that doesn't make either of us wrong.
...compared to paying you anything for the service of "oops, all your data was lost because this crappy consumer level drive failed"
Of course, one could maintain a couple copies. So when the drives inevitably fail, you've got more copies.
According to the statements my comment replied to, no, actually you can't.
Parent specifically said the solution was one single 4 TB hard drive.
You can't do a RAID with one disk.
You can have multiple copies on that one disk, but that won't help against most failure modes.
Parent specifically argued against all the options that would let you protect a single consumer drive from failure, and didn't account for other failure modes either, such as bitrot (checksums to detect it, parity to fix it) or user error (i.e. deleting something by accident).
And really most data isn't worth backing up. My music / movies -- not going to sweat 99% of them. Vacation photos etc? I replicate copies to my family (and they to me). Odds of all of us losing them at once are near enough to zilch -- that whatever catastrophe manages to do it will probably make the lost photos the least of our concerns.
Strongly agreed there too.
Personally I do not backup any of my movies. I keep my original DVDs and whatnot, and yes it would suck to re-rip everything and sort it nicely again, but it can be done.
My music ended up backed up due to wanting it at multiple locations, but I didn't intend on that originally.
My vacation photo situation sounds identical to yours too. If 5 US states disappear overnight, including the one I live in, Fallout has taught me photos will not be of help in fighting off the raiders and mutants.
But - I do have a ~250 MB encrypted volume I DO backup everywhere I can.
It contains my tax paper work, scans of important documents I need to keep for some time, my own unencrypted private certificate authority cert, etc.
I also have a smaller encrypted volume I store with family and friends that contains my password basis and schemes, plus some of the more important ones along with my rotation schedule and whatnot.
My will contains this volume's password and a list of which family members and friends have a copy, intended to be available to my heir once I pass on.
The more help for her in this case the better, since I won't be around to ask!
While these days I run a fleet of my own servers to host files I distribute, I too started out with free hosting online (zomg ftp.netcom.com and geocities!) then moved up to paid hosting (usually of the web hosting form) before finally moving to colo servers and now adding VPSes.
Professionally, on the other hand, I do have a metric crapton of data that does need to be backed up. I have ~250 employees here I have to protect against themselves (be it a boot drive image to be restored after an infection, or files on the storage server to be placed back after being overwritten incorrectly).
But I will fully admit at work I keep everything in-house. They provide much more funds for that than I am willing to spend for myself at home, so thankfully this isn't an issue.
In the end it all boils down to the specific case at hand, what concerns one has, and which options are easiest to address those concerns.
But none of that was mentioned in my previous reply, since that was on a pure technical level between a single consumer drive on one PC, and enterprise grade hardware setup by best practices, administrated by competent people.
Hopefully it is clear however that even the political side of the argument isn't as black and white either.
A 4 TB drive is under 200 USD from several vendors. That is only $0.05/GB, paid once, versus roughly $0.24/GB/yr at typical cloud pricing.
And a 74 GB SAS drive is $300, one which won't fall over and puke on itself the second a second user tries to read from it, nor die within zero to a few years.
That is only $4 per GB, and it provides significantly higher speed, bandwidth, and lifetime than your option.
Not to mention, I would rather pay Google $0.02 per GB for the service of "storing my data", compared to paying you anything for the service of "oops, all your data was lost because this crappy consumer level drive failed"
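For anyone who wants to check the arithmetic, the per-gigabyte figures in this thread reduce to a one-liner. A minimal sketch, using the prices and capacities quoted above:

```python
# Cost per gigabyte for the two drives discussed above.
# Capacities use decimal GB (4 TB = 4000 GB), matching drive marketing.
def cost_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb

print(round(cost_per_gb(200, 4000), 3))  # 0.05 -- the 4 TB consumer drive
print(round(cost_per_gb(300, 74), 2))    # 4.05 -- the 74 GB SAS drive
```

The 80x price gap per gigabyte is the whole trade-off in one number: you are paying for reliability and speed, not capacity.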
Haven't dealt much with cable tech support, have you? They couldn't figure out what is wrong with a line if you handed them a cable cut in half. They'd first ask you to reboot the computer to make sure it wasn't that.
What does the average tech support flow chart reading monkey have to do with automated CPE monitoring setup by the network engineers?
I assume it is rate limited in some way
Just to clarify, it is rate limited in the same way your existing connection is (though likely more so)
DOCSIS 3.1 configured with 4096-QAM can push 10 Gbps down and 1 Gbps up the coax.
Out of that, your service will be allocated some bandwidth over a number of channels, depending on what the ISP feels like offering and how much you are paying them. In the US, let's say you get a 20 Mbps down package (for our UK friends, pretend it's 100 Mbps down) - and that is your rate limit.
Now they can allocate a new channel for the other virtual circuit. This is equivalent to having two people in your house each subscribing to the same cable ISP and having their own cable modem on the wire.
Short of massive bandwidth packages requiring many channels, both modems can live and operate quite happily on the same coax, each tuned to a different channel. (Only if multiple channels are bonded to give more bandwidth are dedicated coax runs involved)
In this case, there is a channel your modem uses for your own service, capped at whatever you pay for.
There is a separate channel the modem also tunes to and sends to the built-in wifi access point, that other subscribers can log in to.
This unrelated channel will also be capped, and likely much lower than your own service.
That channel is bound to a virtual circuit that isn't under your name, and shows up as a dialup pool or the like, where RADIUS logs can link usernames and login times with DHCP logs and the IP(s) being used by whom.
In both cases, a metric crapton of unused and unallocated bandwidth over the coax is sitting there idle. Instead of 10,000 - 20 Mbps unused, there will be something like 10,000 - 20 - 5 (or whatever they end up allocating to the wifi).
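The allocation arithmetic can be sketched in round numbers. All figures are illustrative, matching the 10 Gbps coax and 20 Mbps package example above:

```python
# Round-number view of how little of the coax the hotspot actually costs.
coax_capacity = 10_000   # Mbps a DOCSIS 3.1 plant can carry on the coax
your_service  = 20       # Mbps you pay for
public_wifi   = 5        # Mbps hypothetically carved out for the shared hotspot

print(coax_capacity - your_service)                 # 9980 Mbps idle before
print(coax_capacity - your_service - public_wifi)   # 9975 Mbps idle after
```

Either way, the last mile is nowhere near saturated; the hotspot carve-out is noise against the idle capacity.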
The bottlenecks are further upstream within the ISP network (typically at their edge routers, which link them to other networks) - no longer at the last mile.
In fact the only difference between two virtual circuits terminating in the same modem (one going to ethernet and wifi radio 1 for you, the other going to wifi radio 2 for others) is the hardware being used to do it.
Accounting, bandwidth, and cost wise there is no difference between this setup, and both you and the person next door subscribing to the same ISP.
As far as the network itself goes, this is already a well known and quite solved problem, and has been going on for decades.
The only real concern is the piece of hardware servicing these two circuits in the same software stack. Any security flaws that would let one circuit route to another in any way differently than if they were separate routers would be a "very bad thing"(tm)
Right now I can only reach you over the network by that ethernet jack in the cable modem, that your firewall names "the outside". Any packets I send must abide by your firewall rules to make it through.
A flaw in the router might possibly allow routing between wifi radio 2 and ethernet/wifi radio 1 in a different way than from coax to ethernet/wifi radio 1 and coax to wifi radio 2.
Imagine iptables setup on a machine with 3 ethernet jacks. #1 is ISP, #2 is you, and #3 is the roommate. Packets from #3 to #2 should NOT flow if they wouldn't also be able to go from #1 to #2, or from #1 to #3 even.
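That three-jack thought experiment can be modeled as a toy policy function. This is a sketch of the isolation rule only, not real iptables; the interface names and the "you expose HTTPS only" rule are made up for the example:

```python
# Toy model of the isolation policy: traffic between the two subscriber
# interfaces must face exactly the same rules as traffic from the WAN.
WAN, YOU, ROOMMATE = "eth1", "eth2", "eth3"  # hypothetical interface names

def wan_policy(dst_iface, dst_port):
    """Rules each subscriber applies to traffic arriving from the outside."""
    allowed = {YOU: {443}, ROOMMATE: set()}   # e.g. you expose HTTPS only
    return dst_port in allowed.get(dst_iface, set())

def forward_allowed(src_iface, dst_iface, dst_port):
    # Core rule: subscriber-to-subscriber packets get no special trust;
    # they are filtered exactly as if they had arrived from the WAN.
    if src_iface in (YOU, ROOMMATE) and dst_iface in (YOU, ROOMMATE):
        return wan_policy(dst_iface, dst_port)
    if src_iface == WAN:
        return wan_policy(dst_iface, dst_port)
    return True  # subscriber traffic heading out to the WAN

print(forward_allowed(ROOMMATE, YOU, 22))    # False -- blocked, same as from WAN
print(forward_allowed(ROOMMATE, YOU, 443))   # True -- allowed only because WAN could too
```

The bug class being worried about is exactly the case where the first branch is missing and roommate traffic bypasses `wan_policy` entirely.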
DOCSIS even provides security features where all the cable modems on the same coax can only communicate with the CMTS. You and the person next door, or even the roommate in the same house, willingly communicating over the network will have your packets routed out to the cable co and back to the same house to the roommate. Replies take the same long path back. Each cable modem encrypts using unique keys.
Having two such encryption channels in the same cable modem is part of the 3.1 spec at least, so this is more like using an existing feature instead of inventing a brand new home grown solution out of a linux box with multiple network adapters.
(Which I'm not knocking! But sometimes carrier grade router gear is the better bet, and with the public masses involved this would be one of those times)
Some also question the competence of the IT staff Comcast chooses to retain, and whether they are capable of realizing such a problem exists and of applying the industry standard "fix" defined in the 80s (i.e. correct filtering rules on the correct interfaces).
There are a number of ways to do this setup properly and securely. But this is Comcast here, not the network professionals.
Except your argument has been proven false - many eyes DID catch the bug!
You are posting to an article plainly stating the bug exists, while your post claims such an article doesn't and can't exist, because this very bug you are commenting on supposedly hasn't been and can't be found.
You state this falsehood while at the same time arguing that the only process that "works" is one where this bug would not only have been around for a decade, but would still to this day be known only to the black hat hackers who will use it for ill, depriving the software's users (us) of being allowed to even know there is a problem, under threat of lawsuits.
I have to seriously question your motives for such a desire and why you don't want people to be secure...
(let's call him Rob)
No no no, scientifically his name must be Carl! Did no one teach you your A B C's?
And of course, when you start dealing with SSDs or more expensive drives with smarter controllers, your ability to actually do a write to every sector to achieve this goal is somewhat questionable.
Hard drives have had onboard processors since the IDE days, and a modern drive's controller is already more powerful than most hobbyist computers sold as actual computers just a decade before.
The translation between an address on disk to read or store a byte has not matched a static physical location since MFM drives, which most people these days have never seen or heard of.
Some brilliant hackers are only just recently reverse engineering these controllers, learning to run code directly on them.
This guy even has a Linux kernel running on a 2 TB Western Digital HD controller chip, and reprogrammed it to silently watch for a certain string to be written by the PC and then return additional data.
His idea was to create a program that could be triggered remotely by getting said string to be written to disk, say by utilizing a webserver log file which puts even invalid requests into an error log.
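The trigger path he describes (an attacker-chosen string in an invalid request ends up appended to an error log, i.e. written to disk where hooked firmware could watch for it) can be illustrated with a toy log writer. The filename and magic string here are invented for the example:

```python
# Toy illustration: a remotely supplied string reaches the platters via
# a web server's error log, without the attacker having any disk access.
import os
import tempfile

MAGIC = "x9-TRIGGER-DEADBEEF"  # hypothetical firmware watch-string

def log_bad_request(logfile, request_line):
    # What a typical web server does with a malformed request:
    # append it to an error log, which is a disk write.
    with open(logfile, "a") as f:
        f.write("invalid request: %s\n" % request_line)

log = os.path.join(tempfile.mkdtemp(), "error.log")
log_bad_request(log, "GET /%s HTTP/1.1" % MAGIC)

# The attacker-chosen string is now in data written to disk.
print(MAGIC in open(log).read())  # True
```

The point is that "gets written to disk" is a very low bar for an attacker to clear, which is what makes a firmware-level trigger so nasty.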
That drive has a 150 MHz 3-core ARM processor, which has a 32-bit memory map, direct access to the SATA bus, and direct access to the raw storage.
By pausing the HD CPU, memory locations can be changed and the currently running program modified, then the CPU can be unpaused and the code continues to run.
Basically, anything you do from the SATA interface is pretty much guaranteed not to be able to touch, or even be aware of, the specific locations on the platters where data is stored.
No he means Silicone. Calculators obtain that by displaying 58008
You can't spell out "mama" on a 7 segment display
(Kids these days!)
And while I am at it, the order of the domain should have been reversed. So instead of e.g. tech.slashdot.org.us, It would have been better to go for us.org.slashdot.tech as you then follow the tree. Even neater if there would have been no dots, but slashes instead:
http://us/org/slashdot/tech//d... (Please note the second double slashes to show where the domain ends and the file system begins.)
Actually in the 80s that is pretty much how it was.
UUCP mail was routed from one mail server to another to another before finally (hopefully!) landing in a user's mail spool on a server they checked more frequently than others. This was done with what's called "bang paths", since they used ! as the separator; the route was listed left to right, ending with the username.
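A bang path is trivially machine-parseable. A minimal sketch, with the relay names chosen as illustrative examples of old UUCP hosts:

```python
# A UUCP bang path names each relay host left to right, with the user last,
# e.g. seismo!mcvax!example!alice routes via seismo, then mcvax, then example.
def parse_bang_path(addr):
    *hops, user = addr.split("!")
    return hops, user

hops, user = parse_bang_path("seismo!mcvax!example!alice")
print(hops)  # ['seismo', 'mcvax', 'example']
print(user)  # alice
```

Note the contrast with DNS: the sender had to know (and spell out) the whole route, not just a name to be resolved.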
Even at the time DNS replaced hosts.txt on the ARPAnet, there were still other connected networks like BITnet and CSnet using different protocols with mixed forms of routing paths, and neither network required the same official sponsorship to join that the ARPAnet did.
BITnet was built on IBM's RSCS protocol, and anyone who had a machine running RSCS (or an emulation of it, such as on a VAX) and could afford a leased line was able to get on the network and get data to/from the ARPAnet.
There was a serious perceived threat from these other protocols, most of which lacked a unified or centrally managed naming lookup scheme (though RSCS effectively provided one, for its own network only).
At the time each protocol pretty much only looked out for its own, except for DNS, which was advertised as "generic" and "non-proprietary" since only IP was required. DNS was also an open standard like IP and TCP. That was enough for DNS to "win" and become the one true naming system.
I'm not sure why they decided to use a right to left hierarchy beyond just trying to differentiate themselves from existing protocols...
But it doesn't follow the URL/URI standard because that wasn't to be invented for another 10 years or so.
As you say, hindsight is always 20/20
Excluding all ccTLDs, the original gTLDs are:
The first expansion added:
Then ICANN opened this new gTLD program. The listing of new gTLDs approved are here
I had the idea to use it for pre-blacklisting each and every one in my mail and web filters, but opted instead to go with a whitelisting approach hoping for easier maintenance (Thus the easy copy/pasting of the list at the top - sorry, I don't have link references anymore)
The applicant status page makes for better comedy however, as it lists the existing company name that requested the new top-level instead of the fake company name set up to handle domain registrations. (Currently the English TLDs start at page 4)
Most make sense from the twisted world view of trademark holders, but some are true WTF moments...
Amazon for example requested some obvious ones like
But they also have some strange requests like
Amazon requested a whole 76 TLDs, Google requested 102, Microsoft only 11, and surprisingly Apple only requested
ICANN bitched and moaned about not wanting to create
Also interesting is they already approved and delegated
Filtering on similarities shows
A whole 6 pages worth of results have objections linked to them, which sounds promising, except there are 56 pages total.
Sadly there is way too much money involved for much success of a massive grass-roots preemptive blocking and agreement to not allow such TLDs to resolve.
But I have no qualms about doing so and only white listing individual and specific domains if any of our customers or vendors go the retarded route of making their primary email or websites use one of these.
I'd give our non-english speaking friends a break, because despite the great technical problems involved at least they have a valid reason wanting a TLD in their native language.
Beyond that however, the rest so far look like money grubbing land grabs, stupid branding, or obvious scamming/spammer havens.
I wasn't disagreeing with the facts that were cited, only pointing out that the amount of work you will get out of a particular amount of charge for a given application is directly proportional to that amount of charge, regardless of the current or voltage levels. For any single given electrical application, the power demands tend to be invariant; under such circumstances, more charge available means powering that particular application for more time, which results in more work being done.
A 12 volt 1 amp-hour battery will store the exact same amount of energy as a 6 volt 2 amp-hour battery. Both store 12 watt-hours of energy.
However, if your load requires 12 volts (10.5 volts minimum), then being powered by the 12 volt 1 amp-hour battery will provide for an hour of useful work, while being powered by the 6 volt battery will likely result in NO work whatsoever, despite both providing the same amount of energy.
It's hard to argue 1 hour of work is less than zero hours of work, or that one equals zero.
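The watt-hour arithmetic above can be written out directly. A minimal sketch, assuming a constant-power load and a hard minimum-voltage cutoff:

```python
# Energy stored is voltage times amp-hours; usable runtime also depends on
# whether the pack voltage meets the load's minimum requirement.
def watt_hours(volts, amp_hours):
    return volts * amp_hours

def runtime_hours(volts, amp_hours, load_watts, min_volts):
    if volts < min_volts:
        return 0.0  # same stored energy, but none of it is usable
    return watt_hours(volts, amp_hours) / load_watts

print(watt_hours(12, 1))               # 12 Wh
print(watt_hours(6, 2))                # 12 Wh -- identical stored energy
print(runtime_hours(12, 1, 12, 10.5))  # 1.0 hour of useful work
print(runtime_hours(6, 2, 12, 10.5))   # 0.0 -- below the minimum voltage
```

Same watt-hours in, completely different work out, which is the whole point of the comparison.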
Meanwhile, money made from selling Windows software to computer makers slid by three percent due to continued soft demand by consumers for personal computers.
Yes, I too have been both softly demanding and loudly demanding a personal computer OS from Microsoft, yet all they want to push is some tablet OS unsuited for business work on a personal computer.
At least they aren't acting surprised about their choice.
"A car is just a big purse on wheels." -- Johanna Reynolds