Comment: Re:Expert?? (Score 1) 436

by Cyberdyne (#47693501) Attached to: Is Storage Necessary For Renewable Energy?

There is no such thing as a negative energy price, unless you're being irrational. Why would you pay someone to take your excess energy, when you can just dump it into the atmosphere through a resistive heating element? They are not that expensive, even if you have to finance one. Of course you might be benevolent and give it away for free, or even, out of love for thy neighbor, pay someone to take it, but from a selfish capitalist view you can get rid of energy very easily; it's not like trash, which is costly to dispose of.

You might not have a massive resistor handy at the instant you need it, but I suspect subsidies will play a part in this. If you get paid a certain subsidy per unit of electricity you produce, in addition to receiving whatever the wholesale price is at the time, you could still end up turning a profit by paying someone to receive your surplus electricity. (In Europe, there are also obligations for power companies to get a certain % of power from renewable sources - so it could be better for them to take this power now, giving it away for free or even paying a big industrial customer a tiny bit to use it, just to meet the government targets.) Hopefully, dumping that power into your own resistor bank doesn't earn you subsidy payments!
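To make that arithmetic concrete, here's a toy sketch; the subsidy and price figures are invented, not real tariff numbers:

```python
# Toy arithmetic: a fixed production subsidy can make it profitable to sell
# at a negative wholesale price. All figures here are invented.
def net_revenue_per_mwh(wholesale_price, subsidy):
    """Generator's net income per MWh: market price (possibly negative) plus subsidy."""
    return wholesale_price + subsidy

# Paying buyers 20/MWh to take surplus, with a 50/MWh subsidy, still nets +30.
print(net_revenue_per_mwh(-20, 50))  # 30
```

As long as the payment to the buyer stays below the subsidy, the generator comes out ahead by continuing to produce.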

Comment: Re:Cox (Score 1) 93

by Cyberdyne (#47689185) Attached to: Groundwork Laid For Superfast Broadband Over Copper

I'm currently working on a project for Cox Communications in which they are chemically dissolving the foam inside of the coaxial cable conduit & then air blowing fiber through the newly created space inside the conduit. Pretty cool stuff. This avoids the costs associated with permitting, digging new trench & burying separate fiber conduit & they can use the DWDM hardware they already have on hand instead of buying new systems like this.

Wow, that is neat. (At first, I misinterpreted that as dissolving the foam dielectric inside the cable itself - which could also be neat at some point, for doing FTTH, but rather more demanding.) I take it this is the "final mile" conduit between the cabinet and individual homes, or is it just pushing the HFC boundary down to street level for much shorter runs of coax?

Comment: Re:Fibre optic is almost here (Score 1) 93

by Cyberdyne (#47689077) Attached to: Groundwork Laid For Superfast Broadband Over Copper

Why are we still flogging the dead horse?
FTTH will always outperform copper, without exception, and it's gaining traction more quickly than telcos are embracing G.fast

In the long term yes - but the economics are very different short term. A couple of telco engineers could install VDSL2 (or, presumably, G.fast) for a whole wiring cabinet - a hundred or more households - in the time it would take to run fiber to a single one of those premises. Apart from anything else, it seems right now all the engineers are busy installing those FTTC services; switching them to putting in more FTTP/FTTH would not only mean more expense, it would take longer, leaving everyone else on ADSL for longer. I suspect things will be different in a few years, once that FTTC rollout is complete and manpower is freed up.

I actually have the option of FTTH right now, if I wanted: 330 Mbps down, 30 Mbps up, using GPON. The problem is, I'd have to pay heavily for it: high three figures installation, then a three year contract lock-in at GBP 100 per month - just for the line to the exchange, that doesn't include any actual Internet connectivity! Needless to say, I'm staying on FTTC (VDSL2) for now: 80 Mbps down and 20 up, for a fraction of that price.

Now, when it comes to new housing, it's another matter: if you've got to go and dig up a road anyway to put in the wires to a new housing development, it's much the same cost whether it's copper or glass you put in (or both). So, you can sometimes get a fiber connection for the price of VDSL2!

Comment: Re:Seems like it would've worked (Score 1) 97

by Cyberdyne (#47688809) Attached to: How California's Carbon Market Actually Works

Sadly, much of it comes from coal, but e.g. in Norway a huge amount comes from hydroelectric plants. That is why oil refining and metalworking are large industries in Norway.

Yes, Norway's quite good in that respect - as is the US Pacific North-West, as I recall: the abundant hydro-electric power gave Microsoft and Amazon a cheap, clean electricity supply for their early "cloud" offerings. Eroded now by global expansion, I think: once they built huge hosting sites elsewhere, they used whatever power was present in that area, usually something much less clean. Of course, Norwegian oil refining activity will also be boosted significantly by the small detail of having a major oil supply, unlike other European countries...

I've seen a few hosting outfits offering "carbon neutral" services, which I think you can do quite affordably by locating somewhere with suitable clean power. A bit of a niche market compared to more mainstream hosting services, but there's obviously some demand there.

In a sense, CA is getting what it voted for: stamping out "dirty" industry. Other countries are getting what their governments want/permit: economic growth, regardless of CO2 etc. I suspect both will regret it to some extent and change course: China has a major smog problem and is trying to clean up, CA has a major economic problem and power shortfall and will have to give that a higher priority soon, if it isn't already.

Comment: Re:Microsoft naming practices (Score 1) 413

They need to pick a name which is similar, to be identifiable, but less tarnished by past bad experiences. I propose Infernal Excrement: still "IE", but much less off-putting than the name they have soiled so badly with IE6 and other fiascoes.

To be fair ... it does suck much less now. I suppose it's rather like working for a surviving offshoot of Enron or Lehman Bros... Who, thinking about it, have probably done less economic damage globally than IE has.

Comment: Re:Dead as a profit source for Symantec, well, ... (Score 2) 327

by Cyberdyne (#47688447) Attached to: Ask Slashdot: How Dead Is Antivirus, Exactly?

Otherwise, it would have been nice to allow only certain binaries or software developers/publishers to run. It would also be nice to sign the binaries and not allow changes.

That would be less help than you might expect (although OS X does do exactly this by default now). Remember all those Word macro viruses of a few years ago? Totally unaffected: it's a genuine copy of MS Word that's running, it's just doing something it really, really shouldn't be. Likewise any browser exploit. Trojans have always relied on the user to execute - and in general, they will execute them, whatever dire warnings you may put in place, unless you can give them a totally locked down system (which, even in a strict corporate setting, is often politically impossible). In a University setting, I've had very senior academics call me up with "I can't open this CampusLife.pdf.exe file someone sent me ... and it won't open on my secretary's PC either." Of course it was malware - but any computer restrictions to prevent that would probably have resulted in unemployment rather than a more secure PC. Telling people at the top of the food chain "you aren't allowed to do that" just won't work. (Fortunately, opening that particular worm did nothing anyway - it either relied on Outlook, or having outbound port 25 open, neither of which applied at that time.)

Ultimately, for anything more than the most limited functionality, you will have security holes - just like you will get hard drives and power supplies failing, keyboards and mice getting choked up with gunk. Reduce the risks where it makes sense (RAID and redundant PSUs for servers, good patch management, sensible firewall settings) and then deal with things that go wrong effectively when it does happen (spares, backups, etc).

Like real life, take sensible security precautions - but going too far can do as much harm as having poor security. Do you drive everywhere in an armored vehicle with armed escorts? Unless you're POTUS or equivalent, that would just be silly - I seem to recall there have been cases of people dying after getting trapped in "panic rooms" after false alarms, because medical help couldn't get to them in time! So, don't be the computer equivalent: blocking attachments entirely is secure, but is it useful?

Comment: Re:Dead as a profit source for Symantec, well, ... (Score 4, Interesting) 327

by Cyberdyne (#47688349) Attached to: Ask Slashdot: How Dead Is Antivirus, Exactly?

The controller feels that this is more or less an acceptable trade-off over time -- my labor cost to rebuild the PCs vs. the ongoing cost of AV.

They are probably right there - of those 3 rebuilds, how many do you think would have been prevented by paying more for any given AV product? Thinking back, I can remember several PCs needing recovery work because of the AV system in use (good old McAfee pulled down an update which declared a piece of Windows XP itself to be malware needing deletion - leaving a machine you couldn't log in to until that file was reinstalled), and probably two nasty infections for me to clean, which got in despite McAfee being present with fairly paranoid settings.

Comment: Re:Technical People (Score 1) 194

by Cyberdyne (#47681363) Attached to: The Billion-Dollar Website

Technical people should have the professionalism to analyse requirements and check that the requirements fit the purpose. Unfortunately the way of the world is that technical people would be quickly shuffled out of the way by sales and marketing if they started to reduce revenue by telling a customer what they really wanted instead of what the spec says.

All too true, sadly. Tendering processes seem to exacerbate this: when a government control freak puts out a document announcing that the government is really determined to buy a chocolate teapot, whatever the price, the bidder saying "here's a stainless steel teapot which will do the job for $5" gets dumped, while the one saying "we'll stick bars of premium Swiss chocolate together with chewing gum for $1m" gets handed the million - then another million to patch the chocolate teapot with cement to make it hold hot liquids. Then it turns out they were actually needing a milkshake dispenser in the first place but didn't understand anything about beverages, so they have to start again from scratch, $2m down.

One large government contract I was involved in stipulated in minute detail exactly what error message had to appear when the service was offline. There was no SLA, however, not even an incentive in the contract to improve it! (This was the result of the previous project for that department having been a high-profile failure, with servers overwhelmed by the load. The bureaucrats responded to that with "next time, let's make sure it can show an error when busy!" rather than requiring scalability or load tests.) On the bright side, the winning bidder had the integrity to make sure it didn't fall apart anyway.

Comment: Re:Seems like it would've worked (Score 4, Insightful) 97

by Cyberdyne (#47681229) Attached to: How California's Carbon Market Actually Works

I can see it now--we'll have trans-Pacific transmission lines from India and China!

No, just more imported products of energy-intensive industrial processes, like steel and aluminum. It's already happening to an alarming extent in Europe for exactly that reason, with large metal-working plants (which can consume hundreds of megawatts each) getting moved overseas. Just because you can't import the electricity itself doesn't mean the resulting products have to be made in the US!

Comment: Re:Cat blog (Score 1) 148

by Cyberdyne (#47639355) Attached to: Google Will Give a Search Edge To Websites That Use Encryption

But, but... That doesn't make any sense!
Using HTTP, the connection isn't encrypted in either direction. If they can see the original request, they can also see the original response, so why not just cache that?

It's an absolutely crazy implementation, I agree (particularly speaking as someone implementing something which analyzes HTTP downloads right now). It's not caching, but some sort of content analysis; my guess, and it is only a guess, is that it's intended as a workaround to copyright. Genuine caching is OK, for cacheable content, but I don't think this use would be covered by that copyright exemption: by fetching their own copy from the server like a regular web spider, they're no longer "making a copy". The other possibility is bandwidth: being a major ISP, it might be easier to intercept only the requests in-line, then queue them up for spidering by a separate system; intercepting the downloaded content as well would mean forcing all traffic through the analysis system in realtime.

Mine just hashes and logs the objects as they get fetched. Of course, I'm doing it in the firewall, with the user's knowledge and consent. I just remembered, though, a friend who works for an anti-malware vendor mentioned to me that their security proxy does the same bizarre duplication rather than scanning in transit, which IIRC screwed up music streaming services, so presumably there's a good reason for it. (Weird, because if I were shipping malware, I'd find that all too trivial to circumvent by serving different content to the client and the scanner.)
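For what it's worth, the hash-and-log approach amounts to roughly this (a simplified sketch, not my actual firewall code; the URL and function name are illustrative):

```python
import hashlib

def hash_and_log(url, body, log):
    """Record a SHA-256 digest of each fetched object as it passes through,
    rather than re-fetching it later (illustrative sketch only)."""
    digest = hashlib.sha256(body).hexdigest()
    log.append((url, digest))
    return digest

log = []
digest = hash_and_log("http://example.com/object", b"response body bytes", log)
print(len(log), digest[:12])
```

Hashing in transit means the scanner sees exactly the bytes the client received, which is precisely what the re-fetching approach cannot guarantee.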

Comment: Backward (Score 2) 72

by Cyberdyne (#47625175) Attached to: Expensive Hotels Really Do Have Faster Wi-Fi

Conversely, I seem to find (in the UK at least) that cheaper hotels and shops are more likely to have free WiFi, while pricier hotels and bigger chains are more likely to charge for it. The poshest one I've spent any time in - part of the same chain as the Savoy in London - charges crazy prices (and has lousy mobile reception), though it's a rock-solid signal throughout the large building; a much cheaper hotel nearby just had a WiFi access point on ADSL somewhere, with no password, for anyone to use.

A question of attitude I suppose: a small hotel thinks £20 or so a month is a trivial investment to make guests happier, like having newspapers in reception; a bigger chain sees it as spending millions across the chain to roll out a service which should generate revenue.

Comment: Re:Cat blog (Score 4, Informative) 148

by Cyberdyne (#47623877) Attached to: Google Will Give a Search Edge To Websites That Use Encryption

Still, HTTPS would at least prevent your ISP from monitoring your browsing activity.

That's part of it - a valuable enough part in itself, IMO; at least one UK ISP, TalkTalk, has started duplicating HTTP requests made by their customers: so, if you request http://example.com/stuff on one of their lines, 30 seconds later they'll go and request the same URL themselves for monitoring purposes. Obviously, enabling SSL prevents this kind of gratuitous stupidity - and the previous incarnation of such snooping, Phorm. If enough websites enable SSL, ISPs will no longer have the ability to monitor customer behavior that closely, all they will see are SSL flows to and from IP addresses, and whatever DNS queries you make to their servers, if any. (Use encrypted connections to OpenDNS or similar, and your ISP will only ever see IP addresses and traffic volume - exactly as it should be IMO!)

Comment: Re:Useless (Score 1) 177

by Cyberdyne (#47621549) Attached to: Algorithm Predicts US Supreme Court Decisions 70% of Time

So, I agree with you that simply predicting reverse/affirm at 70% accuracy may be easy, but predicting 68000 individual justice votes with similar accuracy might be a significantly greater challenge.

In fact, it looks like very much the same challenge: with most decisions being unanimous reversals, it seems only a small minority of those individual votes are votes to affirm the lower court decision. So, just as 'return "reverse";' is a 70+% accurate predictor of the overall court ruling in each case, the very same predictor will be somewhere around 70% accurate for each individual justice, for exactly the same reason. (For that matter, if I took a six-sided die and marked two sides "affirm" and the rest "reverse", I'd have a slightly less accurate predictor giving much less obvious predictions: it will correctly predict about two-thirds of the time, with incorrect predictions split between unexpected reversals and unexpected affirmations.)
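You can check this with a quick simulation (the 72% reversal rate is a stand-in figure, not the actual SCOTUS statistic):

```python
import random

def accuracy(outcomes, predictor):
    """Fraction of cases a zero-information predictor gets right."""
    return sum(predictor() == outcome for outcome in outcomes) / len(outcomes)

random.seed(1)
# Stand-in caseload in which ~72% of decisions are reversals.
outcomes = ["reverse" if random.random() < 0.72 else "affirm"
            for _ in range(100_000)]

always_reverse = lambda: "reverse"
weighted_die = lambda: random.choice(["affirm"] * 2 + ["reverse"] * 4)

static_acc = accuracy(outcomes, always_reverse)
die_acc = accuracy(outcomes, weighted_die)
print(round(static_acc, 2), round(die_acc, 2))  # static tracks the base rate
```

The constant predictor's accuracy simply equals the base rate, whatever it is; the die is somewhat worse, because its wrong guesses are spread across both kinds of error.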

This is the statistical problem with trying to measure/predict any unlikely (or indeed any very likely) event. I can build a "bomb detector" for screening airline luggage, for example, which is 99.99% accurate in real-world tests. How? Well, much less than 0.01% of actual airline luggage contains a bomb ... so a flashing green LED marked "no bomb present" will in fact be correct in almost every single case. It's also completely worthless, of course! (Sadly, at least two people have put exactly that business model into practice and made a considerable amount of money selling fake bomb detectors for use in places like Iraq - one of them got a seven year jail sentence for it last year in England.)

With blood transfusions, I understand there's now a two stage test used to screen for things like HIV. The first test is quick, easy, and quite often wrong: as I recall, most of the positive readings it gives turn out to be false positives. What matters, though, is that the negative results are very, very unlikely to be false negatives: you can be confident the blood is indeed clean. Then, you can use a more elaborate test to determine which of the few positives were correct - by eliminating the majority of samples, it's much easier to focus on the remainder. Much the way airport security should be done: quickly weed out the 90-99% of people/bags who definitely aren't a threat, then you have far more resources to focus on the much smaller number of possible threats.
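The two-stage logic falls straight out of the base rates; a sketch with invented test figures (not real HIV assay numbers):

```python
def screening_stats(prevalence, sensitivity, specificity):
    """Return (positive predictive value, negative predictive value)
    for a test applied to a population with the given prevalence."""
    true_pos = prevalence * sensitivity
    false_neg = prevalence * (1 - sensitivity)
    true_neg = (1 - prevalence) * specificity
    false_pos = (1 - prevalence) * (1 - specificity)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Invented figures: 0.1% prevalence, 99.9% sensitivity, 95% specificity.
ppv, npv = screening_stats(0.001, 0.999, 0.95)
print(round(ppv, 3), round(npv, 6))  # most positives false; negatives near-certain
```

With rare conditions, even a quite specific first test yields mostly false positives, yet a negative result is still trustworthy - which is exactly why the cheap test goes first.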

Come to think of it, the very first CPU branch predictors used exactly this technique: they assumed that no conditional branch would ever be taken. Since most conditional branches aren't, that "prediction" was actually right most of the time. (The Pentium 4 is much more sophisticated, storing thousands of records about when branches are taken and not taken - hence "only" gets it wrong about one time in nine.)
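A toy comparison of the two approaches (illustrative only - real branch predictors are hardware, and the counter widths and trace are my own invention):

```python
def static_not_taken(history):
    """Accuracy of 'always predict not taken' on a branch outcome trace."""
    return history.count(False) / len(history)

def two_bit_counter(history):
    """2-bit saturating counter (0-3): predict taken when counter >= 2,
    then nudge the counter toward the actual outcome."""
    counter, correct = 0, 0
    for taken in history:
        correct += (counter >= 2) == taken
        counter = min(3, counter + 1) if taken else max(0, counter - 1)
    return correct / len(history)

# A loop branch: taken 9 times, then falls through, repeated 100 times.
history = ([True] * 9 + [False]) * 100
print(static_not_taken(history))  # 0.1
print(two_bit_counter(history))   # 0.898 - the counter learns the loop
```

On a loop-heavy trace like this the static assumption is almost always wrong, while the dynamic counter only misses at each loop exit - much like the "one time in nine" figure above.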

Now, I'd like to think the predictor in question is more sophisticated than this - but to know that, we'd need a better statistical test than those quoted, which amount to "it's nearly as accurate as a static predictor based on no information about the case at all"! Did it predict the big controversial decisions more accurately than less significant ones, for example? (Unlikely, of course, otherwise they wouldn't have been so controversial.)

Comment: Re:No towers in range? (Score 1) 127

by Cyberdyne (#47621027) Attached to: T-Mobile Smartphones Outlast Competitors' Identical Models

Usually, a terrestrial phone doesn't need to do anything much to "look" for a tower, besides keeping its receiver turned on. Towers emit beacons, and if you don't hear the beacon, there's no point in you sending anything - you won't receive a reply because you don't even hear the tower's beacon.

True - the problem AIUI is that "just" keeping the receiver turned on constantly consumes a significant amount of power in itself. Once synced with a tower, the phone can turn off the receiver, knowing that it has, say, 789ms until the next beacon it needs to check for; if it's waiting, it needs to be listening constantly. Worse, it doesn't know what frequency the tower might appear on - so until it finds one, it will be constantly sweeping all the different frequency bands a tower could be using, until it actually finds one - on a modern handset, cycling between at least three different modes (GSM, 3G and LTE), each on several different frequency bands. Also, because of the possibility of roaming, it may be hitting other networks then checking whether or not it can use those ("Hi network 4439, I'm from network 4494, can I connect? No? Kthxbye")
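A rough duty-cycle calculation shows why syncing matters so much (all current figures here are invented for illustration, not measurements from any real handset):

```python
def avg_current_ma(rx_on_ma, sleep_ma, wake_ms, period_ms):
    """Average current draw when the radio is awake for wake_ms out of
    every period_ms of the beacon cycle."""
    duty = wake_ms / period_ms
    return rx_on_ma * duty + sleep_ma * (1 - duty)

# Invented figures: 60 mA with the receiver on, 2 mA asleep.
searching = avg_current_ma(60, 2, 1000, 1000)  # receiver on constantly
synced = avg_current_ma(60, 2, 10, 1000)       # wake ~10 ms per beacon period
print(searching, round(synced, 2))  # 60.0 2.58 - over 20x difference
```

A phone that knows exactly when the next beacon arrives can sleep through the rest of the interval; one that's still searching has no such luxury, which is why "no signal" drains the battery so fast.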

Comment: Request to remove or alter content (Score 2) 81

I can't imagine that absolutely none of the requests were verifiable facts. {like a mis-typed date}

That wouldn't come under "right to be forgotten" though, a simple edit or correction request would address that.

The whole notion of a "right" to prohibit someone else from making a factually accurate statement on one website about the content of another site seems utterly absurd to me. Removing the destination page itself could perhaps be excused in some cases ... but to accept that the owner of a page making a statement about somebody has a right to keep it, even if it's out of date, then turn round and gag the likes of Google from making current factual statements about that page? Every "judge" supporting that nonsense needs to be unemployed ASAP.
