I've been wondering for a while whether the affected ISPs would have cause to sue the government/courts/publishers for compensation as a result of losing customers due to the enforced filtering (which doesn't apply to smaller ISPs). TTIP sounds like it would open up that possibility if they can't already...
Or I could just buy it easily from Amazon, and strip the DRM for backup purposes.
My take on this is that if I'm required to infringe copyright on a legally purchased product in order to make sensible use of it, why should I actually purchase it instead of just infringing copyright and getting it for free from a torrent?
For the record, I don't do either - I've steered away from ebooks entirely until the publishers stop taking the piss. Since books were invented there have been various generally accepted things that everyone did with them that ebooks don't allow you to do: e.g. if I buy a paper book, I can read it, then pass it on to my wife to read, lend it to a friend, stick it on the bookshelf for years, then hand it on to my kids, who can hand it on to their kids, or I can sell it, etc. Compare that to the T&Cs of Google Play (as an example), which say I'm not even allowed to lend my tablet to my wife so that she can read an ebook I purchased, let alone actually transfer it to someone else's device. When I can get ebooks with the same rights as I have for paper books, I'll think about buying some.
Meetings can be made efficient. My meetings usually are. I invite people for their topic to the correct minute. Yes, minute. Give or take 5, but it's patently USELESS to have someone sit in a meeting for an hour if all that matters to him is about 10 minutes of it.
The problem there is that you end up cutting important stuff short to stay on schedule. Thankfully, where I work now (my own company) I just arrange ad-hoc meetings and hammer details out till we're done, which is very productive; but I used to work for $large_multinational, and meetings where we got into a detailed discussion about something really important, only to have the chairperson halt the discussion to keep the meeting on schedule, were the norm. The result: meetings were so superficial that they were useless, because they never got down to the nitty-gritty detail that actually _needed_ to be discussed. The same goes for anything that demands the meeting stay on some kind of schedule - e.g. shared meeting rooms where you're required to wrap up your meeting by a specific time so the next person it's booked to can start theirs.
1. Limit what you're going to cover in the meeting - spending an hour hammering out a single detailed design point is better than having a uselessly superficial discussion on 20 points.
2. Limit who's going to be in the meeting - if you're discussing 10 different things and one of those things needs an extra person, schedule a separate meeting for that one thing rather than either wasting that person's time or abandoning discussions in order to stay on schedule.
3. Figure out if a meeting is actually the best plan - it might be that a good chunk of the discussion would be better done by email, which gives time for people to research their arguments and present them in a more coherent way.
4. Ensure everyone has plenty of "overrun time" so you can extend the meeting unexpectedly. i.e. if you're expecting to spend the 2 hours after the meeting doing some coding then that's fine since you can just postpone the coding, but if you're expecting to have to drive off to see a customer right at the end of the meeting then you're screwed if you're not on time.
5. Make sure everyone has plenty of information to prepare with before the meeting (another good reason for having detailed email discussions first!).
This allows you to tell the Daily Mail readers that Something Is Being Done, just as it ought to.
Well, right up until the Daily Mail gets blocked... and given how offensive the lies the DM peddles are, it really should be caught by any kind of "offensive content" filter.
So, the designers of IPv6 could not conceive that somebody could have less than 2^64 devices and still want to put them in separate networks?
Networks are allocated as /64s
So now my ISP will have a say in how many internal networks I have?
Yes and no. You _can_ allocate networks smaller than a /64 (though stateless autoconfiguration won't work on them).
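To put some numbers on this, Python's ipaddress module can show what a hypothetical /56 delegation from an ISP gives you - 256 standard /64 networks, each of which can technically be cut smaller still:

```python
import ipaddress

# Hypothetical /56 delegation from an ISP (2001:db8::/32 is documentation space)
prefix = ipaddress.ip_network("2001:db8:0:ff00::/56")

# Each standard IPv6 LAN is a /64, so a /56 contains 2^(64-56) = 256 of them
subnets = list(prefix.subnets(new_prefix=64))
print(len(subnets))     # 256
print(subnets[0])       # 2001:db8:0:ff00::/64

# You *can* carve a /64 smaller (e.g. into /80s) - it just breaks SLAAC
smaller = list(subnets[0].subnets(new_prefix=80))
print(len(smaller))     # 65536
```

So whether you get one internal network or 256 is decided by the prefix length the ISP hands you, which is the crux of the complaint above.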
And this is supposed to be better than IPV4 with NAT?
Oddly enough, yes - ISPs really shouldn't be restricting your internal infrastructure. If your ISP is being a dick about this then the answer is pretty obvious - switch to another ISP, it isn't as if ISPs are thin on the ground.
People who think they need end-to-end connectivity for everything don't understand networking. It's not only not required, it is undesirable in most cases.
It's undesirable in _some_ cases, and it's absolutely required in others. So if you have a single IP address and have to NAT everything, you win in the "some cases" situation and lose in the "others" (even worse with CGNAT). If you get rid of NAT and stick a stateful firewall in, you get the best of both worlds and can choose what's best for the situation at hand.
As someone who's not really a networking guy, this!
I like the extra layer NAT provides. It's no substitute for a firewall of course, but having your internal boxes not publicly addressable at all adds an extra layer of warm and fuzzy.
Is this attitude wrong? Probably. But it is also pervasive.
That attitude is definitely wrong. The warm fuzziness you're currently feeling is false security - there are lots of ways to trick a NAT into giving access to internal machines that you think are unaddressable. What you need is a stateful firewall - that gives you real security without breaking all the stuff that NAT breaks.
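For anyone unclear on the distinction, here's a toy sketch (class and method names are mine, not any real firewall API) of the stateful part: inbound packets are only accepted if they match a connection the inside initiated, and no address rewriting is involved at all:

```python
# Toy model of stateful filtering: inbound traffic is accepted only if it
# matches a connection an internal host initiated. No address rewriting
# (the NAT part) is needed to get this protection.

class StatefulFirewall:
    def __init__(self):
        self.conntrack = set()  # (internal, remote) pairs seen outbound

    def outbound(self, internal, remote):
        # Record the connection and let it through
        self.conntrack.add((internal, remote))
        return True

    def inbound(self, remote, internal):
        # Only allow replies to connections the inside initiated
        return (internal, remote) in self.conntrack

fw = StatefulFirewall()
fw.outbound("2001:db8::10", "192.0.2.1")
print(fw.inbound("192.0.2.1", "2001:db8::10"))    # True: reply to our connection
print(fw.inbound("203.0.113.9", "2001:db8::10"))  # False: unsolicited probe
```

The "warm fuzzy" part of NAT is just this connection table; the address rewriting on top of it is what breaks end-to-end connectivity.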
WTF do you need a /64 for?
/64 is only big enough for a single network.
IPv6 would help both enormously.
In the long term, yes. In the short term, going offline for the 93.69% of their users who don't have IPv6 yet would certainly be seen by most as a completely dickish move - I'm pretty sure their investors would be upset, for one thing.
Lower latency on routing means faster responses.
How does IPv6 yield lower latency? If anything, the latency on IPv6 is often slightly higher than IPv4 owing to the prevalence of IPv6-over-IPv4 tunnels where native IPv6 interlinks aren't available, along with larger headers slightly increasing the latency of cut-through routing.
IP Mobility means users can move between ISPs without posts breaking, losing responses to queries, losing hangout or other chat service connections, or having to continually re-authenticate.
Does anyone actually implement IP mobility? It requires support from your ISP, and I've not heard anything about any ISPs implementing it.
Autoconfiguration means both can add servers just by switching the new machines on.
DHCP does pretty much the same under IPv4 - I can't see this being a boon to Google/Facebook. (TBH I wouldn't be surprised if their infrastructure was too complex for any of these protocols - they've probably got some home baked protocol for doing that stuff).
Because IPv4 has no native security, it's vulnerable to a much wider range of attacks and there's nothing the vendors can do about them.
So no different from IPv6 then... both protocols have IPsec support (I think it's mandatory for IPv6, whereas the IPv4 version is an optional backport, but all major OSes support it in both cases, so that's neither here nor there). However, IPsec use is currently pretty much reserved for VPNs - you can do ad-hoc IPsec but no one does. About the only thing you get from IPv6 is that IP addresses are much sparser, so scanning/attacking by picking addresses at random isn't effective.
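Some rough arithmetic (the host count is plucked out of thin air) on why random scanning stops working in a /64:

```python
# Rough numbers behind "random scanning doesn't work on IPv6":
hosts_on_lan = 100          # invented figure for illustration

ipv4_space = 2**32          # the entire IPv4 address space
ipv6_subnet = 2**64         # a *single* IPv6 /64 subnet

# Probability that one random probe into the /64 hits one of our hosts
p_hit = hosts_on_lan / ipv6_subnet
print(p_hit)                               # ~5.4e-18 per probe

# Average number of probes needed to find just one host
print(ipv6_subnet // hosts_on_lan)         # ~1.8e17 probes
```

One /64 is four billion times larger than the whole IPv4 Internet, so attackers have to fall back on harvesting addresses from DNS, logs, and the like rather than sweeping ranges.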
Whilst CFLs worked as a stop-gap until LED lights could become feasible, I do wonder if they have done long term harm to people's acceptance of efficient lighting - for a long time, "energy efficient lighting" is going to be associated with "takes 5 minutes to get bright enough to see" thanks to CFLs...
That said, I might miss CFLs in my bedside lights if I ever have to replace them with LEDs - that's the one place where a slow start-up is quite nice!
Yeah, I never understood that - why try and recover the clock signal from the data stream? If I were designing it, I would have my DAC monitor the stream to work out what the clock signal is supposed to be, then generate my own damn clock signal.
My guess is:
If you recover the clock from the stream, you just need to roughly control the motor (CD) RPM and stream what you read. If you run your own clock you need a buffer and you then either need to dynamically tweak the motor speed based on how fast the buffer is filling/draining, or you need to read the CD a bit too fast and stop/resume every so often. Clock recovery sounds much simpler to me.
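For the curious, here's a back-of-the-envelope simulation of that buffer-plus-feedback approach (all constants are invented for illustration):

```python
# Toy simulation of the "run your own clock" approach: the DAC consumes
# samples at its own fixed clock rate, and a feedback loop nudges the disc
# read speed to keep the buffer near a target fill level.
BUFFER_TARGET = 500        # aim to keep 500 samples buffered
DRAIN_RATE = 10            # samples/tick consumed by the DAC's own clock

level = 300                # start with the buffer under-filled
read_rate = 10.0           # samples/tick read from the disc (what we control)

for _ in range(10000):
    level += read_rate - DRAIN_RATE
    # Proportional control: read faster when below target, slower when above
    read_rate = DRAIN_RATE + 0.1 * (BUFFER_TARGET - level)

print(round(level))        # settles at the 500-sample target
```

Even this trivial controller works, but it's still a motor-speed servo plus a buffer plus a control loop - which supports the point that recovering the clock from the stream is the simpler design.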
Will it have the same line-of-sight limitations as current satellite Internet? I'm in Seattle, and with providers like HughesNet you need a very good line of sight to the south to get service. IIRC, where I used to work we had the dish pointed only 24 degrees above the horizon.
These sats are going into LEO, not GEO, so their position in the sky won't be fixed. I imagine you'll use a phased-array antenna to track them. The good points: lower latency, and no requirement to see the southern horizon specifically. The bad point: you'll need a view of a bigger chunk of the sky to avoid signal dropouts as the satellites move - how big a chunk depends on how many satellites they have up there (and therefore how many are above the horizon at the same time). If they have enough satellites, it may work out better for you.
The people want it to stay this way and a massacre or two will not change that.
Although crazies keep voting for UKIP, who have said they want to legalise firearms...
If a program needs to look at stuff in other file structures then give it read access
Great! $malware got read access to your bank details.
You want it to be able to write to files in those other directories? Fine - it reads in a file it isn't allowed to overwrite or change, and then saves its own copy that it can molest in whatever way it wants.
So now instead of having a single copy of the file, you have a separate copy saved by each application that has been used to process it - creating a mountain of almost-identical files that the user has to keep track of is not a user-friendly way of doing things.
Better is to have a versioned filesystem - each time a file is changed (by any application!) the delta is saved and the filesystem keeps the old data hidden away. Most of the time everything behaves as normal - you have one copy of a file, no matter how many times it is edited. If you need to roll back some changes then you just ask to see previous versions of that file, much like a source control system. And indeed, there are a number of file systems that do exactly this - if you care about such things there's nothing stopping you doing it.
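As a toy illustration (this is not how any real versioned filesystem is implemented internally), version-on-write semantics look something like:

```python
# Minimal sketch of version-on-write: every save keeps the previous
# contents around, so any older version can be recovered later.
# (A real filesystem would store deltas, not whole copies.)

class VersionedStore:
    def __init__(self):
        self.versions = {}  # path -> list of historical contents

    def write(self, path, data):
        self.versions.setdefault(path, []).append(data)

    def read(self, path, version=-1):
        # Default: latest version; older ones stay addressable by index
        return self.versions[path][version]

fs = VersionedStore()
fs.write("notes.txt", "original text")
fs.write("notes.txt", "MANGLED BY MALWARE")
print(fs.read("notes.txt"))             # MANGLED BY MALWARE (latest)
print(fs.read("notes.txt", version=0))  # original text (rolled back)
```

The user still sees one file per document; the history only surfaces when they ask for it, exactly like a source control checkout.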
It doesn't stop malware reading your files or modifying them, but it does mean you can recover the unmodified versions... but then doing backups (which everyone should be doing anyway) gives you similar protection.
And, hell, why do applications get the run of every file I use under my account? Should they not have to request such things first? Even on Unix-likes, if you get on as my user, you can trash all my data - why?
Because anything else would require popping up numerous "would you like to allow this application to do $foo" boxes, and then you end up training the user to just hit "yes" on everything because it's too damned annoying to make a decision every time when the vast vast majority of access requests really are legitimate.
Sandboxing based on applications making their own decisions and being relatively trustworthy might not be a bad plan though - i.e. if your web browser has an immutable list of files it needs access to, and you trust your web browser, that provides some level of protection when some malware compromises the browser, so long as the immutable list really is immutable and the malware can't modify it.
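A toy sketch of that allowlist idea (the paths and names are made up, and real sandboxes enforce this at the OS level rather than inside the application):

```python
# Toy allowlist-based sandboxing: the application ships a fixed list of
# directories it may touch, and every file access is checked against it.
from pathlib import PurePosixPath

# The hypothetical "immutable list" - in practice this would live somewhere
# the application (and any malware riding in it) cannot modify.
ALLOWED = [PurePosixPath("/home/user/.browser"), PurePosixPath("/tmp")]

def may_open(path):
    p = PurePosixPath(path)
    # Allowed iff the path sits under one of the permitted directories
    # (PurePath.is_relative_to needs Python 3.9+)
    return any(p.is_relative_to(base) for base in ALLOWED)

print(may_open("/home/user/.browser/cookies.db"))  # True
print(may_open("/home/user/bank-details.txt"))     # False
```

The whole scheme stands or falls on that last caveat in the comment: if the compromised application can rewrite its own allowlist, the check is worthless.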
I'm sorry, but the very concept of a virus scan happening "at scheduled intervals" or after you've already double-clicked on the file just tells you that it's too late before you start.
Well no, if you can roll back everything that happened between the "all clear" scan and the "you've been cracked" scan then that's certainly much better than nothing.
Fact is, I didn't install it and I have no idea what it ACTUALLY does.
You don't know what most software ACTUALLY does, even if you did install it - most software people use is closed source, but even the open source is a black box unless you actually audit it.