


Comment: Re:Yup, Hegel 101 (Score 1) 564

by dnavid (#48628265) Attached to: Reaction To the Sony Hack Is 'Beyond the Realm of Stupid'

Anyone believing the "terrorist" propaganda must somehow also believe that the DPRK has millions of bomb strapping terrorists stationed in the US ready to flock into Star and AMC to bomb people for watching a comedy.

Anyone who thinks that is nuts. But that's the problem. There's no way to predict how many nuts out there think this would actually happen, and decide they want to get in on the action. 13,000 theaters being attacked by North Koreans is a really bad sequel to Red Dawn, not a credible threat. But a couple dozen crazy people attacking theaters because they heard about the North Korean threat is still a lot of potential casualties. That is a legitimate risk worth thinking about.

Comment: Re:A tech gloss over racial profiling? (Score 1) 218

by dnavid (#48525663) Attached to: 'Moneyball' Approach Reduces Crime In New York City

I would very much like to know the racial makeup of that list. Given it came from the police themselves, it certainly leads to questions about how such individuals end up on those lists.

1. The article links to a document that describes the system and how it works in detail.

2. The article and linked document describe the function of the lists as prioritization lists for prosecutors after arrest, not used by the police to target people for arrest. If you think about it, that's logical: the police do not need to help the city make a list of people to target for arrest. If the police are biased in targeting people for arrest, they can still stop and arrest those people whether they are on the list or not.

3. Given the finite resources of the NYC criminal justice system, any data-driven system designed to focus more resources on critical repeat offenders will by necessity reduce the focus on everyone else. That means that while any system can have racial bias, including this one, it's more likely to reduce the impact of that bias simply due to the practical reality of reducing the need for dragnet-style law enforcement. Even if the impact is small, more time spent on publicly vetted priorities is less time available to do anything else.

Comment: Re:Let's do the math (Score 2) 307

by dnavid (#48454131) Attached to: Complex Life May Be Possible In Only 10% of All Galaxies

What evidence is there of an infinite universe that had no beginning?

Bear in mind also that if an infinite universe exists, which had no beginning, then light would also have had an infinite amount of time to travel here from absolutely everywhere else. Although the intensity of radiation that reaches a point is inversely proportional to the square of the distance to that point, the volume of space at a given average distance from a point grows by more than the square of that distance, and so the number of radiating things in that volume would be correspondingly greater, more than cancelling out the inverse-square falloff in the intensity reaching a point some fixed distance away. Every point in the universe would be perpetually saturated in radiation reaching it from every other point in the universe, infinitely far away, and certainly things like life-bearing planets could not exist.

Critical observation suggests that the universe is finite.

This is known as Olbers' paradox, and it does not hold for expanding universes, where redshift stretches the wavelength of light from distant sources until it falls outside the visible band and there ends up being an observable horizon, even in an otherwise infinite space with an unbounded lifetime.

Your point about distance and volume is wrong for a separate mathematical reason, though. The number of light sources at a given distance grows as the square of the radius, not as the volume. The *total* number of sources within a radius is proportional to the cube of the radius, but using that would be double counting: the volume includes the closer sources already counted, plus the new shell of sources added as the radius increases. Olbers' paradox doesn't rely on that error in the math.
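The shell-counting argument can be sketched numerically. This is a toy model, not anything from the article: density, luminosity, and shell thickness are arbitrary stand-in values chosen to make the cancellation visible.

```python
import math

# Toy model of Olbers' paradox: in a static, infinite, uniform universe,
# every spherical shell of equal thickness delivers the same total flux
# to an observer at the origin, so the total diverges as shells pile up.

def shell_flux(r, dr=1.0, density=1.0, luminosity=1.0):
    """Flux at the origin from all sources in a shell of radius r, thickness dr."""
    n_sources = density * 4 * math.pi * r**2 * dr        # shell count grows as r^2
    flux_per_source = luminosity / (4 * math.pi * r**2)  # inverse-square falloff
    return n_sources * flux_per_source                   # the r^2 factors cancel

# Each shell contributes the same flux regardless of distance:
print([round(shell_flux(r), 6) for r in (1.0, 10.0, 100.0, 1000.0)])
# [1.0, 1.0, 1.0, 1.0]
```

Since every shell contributes equally and there are infinitely many shells, the summed flux diverges; an observational horizon (from expansion and redshift) is what breaks the argument.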

Comment: Re:Global warming is bunk anyway. (Score 5, Interesting) 367

We shouldn't be fooling around like this. It's obvious we don't understand, or are too corrupt and greedy to admit, that there's no problem.

It's ironic that one of the potential benefits of geoengineering research is that it will force many climate change deniers to admit that it's possible for human activity to have major deleterious effects on Earth's climate.

Comment: Re:They WILL FIght Back (Score 1) 516

by dnavid (#48422935) Attached to: Rooftop Solar Could Reach Price Parity In the US By 2016

Yes, many US states require free net metering and power resale. It's the law, so utilities have to do it. But all you're doing at the moment is transferring the solar generators' share of the infrastructure costs onto the non-solar generators. So when you report that these people can "break even", is that really a fair comparison?

It is true that net-metering customers are often using infrastructure they are effectively not paying for, or not paying the true costs of, when their net-metering contracts allow them to offset generation against usage one for one. However, it's also true that the statement "solar could reach price parity" can hold even while recognizing that fact, for two reasons: electric utilities overstate the costs of the infrastructure, and the per-customer costs of that infrastructure are affected by grid defections.

In Hawaii, where I live, the electric utility recently proposed a plan to deal with the high growth rate of residential solar, installations it had been delaying on the assertion that the grid was unprepared for the volume (which I concede is likely a mostly true statement). Their plan proposed a cost structure under which a customer currently paying about $200/month (what their documents stated was the residential average), who today could reduce that bill with solar to close to $15/month, would instead pay about $150/month. That's for someone who deployed a net-zero solar system. The difference comes from a proposed $55/month infrastructure fee and from paying only about half the rate for customer-generated power that they charge for customer-used power.
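The proposal's arithmetic for a net-zero customer works out roughly like this. This is a back-of-the-envelope reconstruction from the figures above; the actual tariff structure is an assumption on my part.

```python
# Rough reconstruction of the proposed bill for a net-zero solar customer,
# using only the figures stated above: ~$200/mo average usage at retail rate,
# a $55/mo fixed infrastructure fee, and customer generation credited at
# about half the retail rate.

usage_charge = 200.0           # $/month of usage billed at the full retail rate
generation_credit_ratio = 0.5  # generation credited at ~half the retail rate
infrastructure_fee = 55.0      # $/month fixed fee

# Net-zero system: generation equals usage, so the credit offsets
# only half of the usage charge.
credit = usage_charge * generation_credit_ratio
bill = usage_charge - credit + infrastructure_fee
print(bill)  # 155.0 -- in the ballpark of the ~$150/month figure cited
```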

Given those numbers, it's entirely possible that the continued drop in costs for residential solar could bring the cost of a completely off-grid system of batteries and an emergency generator (for stretches of low sunlight) down to a point where it becomes competitive with that $1800/yr of "infrastructure" costs. Moreover, even if costs don't reach full parity, just getting close could lead enough customers to defect off the grid to push the per-customer costs of maintaining it even higher, since those costs are largely fixed and don't go down when there are fewer customers.

The combination of lowering costs of completely off-grid solar systems combined with the rising costs of utility power per customer due to defections could still cause a meet-in-the-middle where solar reaches price parity even for systems that don't need utility connections. Maybe not by 2016, but I wouldn't bet a lot of money against it either.

The problem with the "solar customers are not paying their fair share of the grid" statement is not that it is untrue: it's that the best way to resolve that problem in the long run will be for solar customers to defect off the grid entirely. If electric utilities continue to pursue the strategy of making solar customers "pay their fair share", then eventually, and it may take time, the technology will reach the point where that fair share is zero, because those customers will stop using the grid. Once the technology reaches that point, it will be too late for the electric utilities to do anything except watch customers leave.

They should be working now to find a more suitable, non-adversarial relationship between the utility and residential solar customers. In the short run the utilities have a lot of power over the situation, but in the long run they have very little. They should use that power while they can to create something that ensures a future where their customers still need them. There are lots of ways to do that. Demanding solar customers pay huge amounts for the privilege of being utility customers is not one of them.

Comment: Re:Replace Cisco, and Akamai and then maybe.. (Score 4, Insightful) 212

by dnavid (#48413403) Attached to: Launching 2015: a New Certificate Authority To Encrypt the Entire Web

Replace Cisco and Akamai, and then maybe I'll be convinced it's better than the current situation. But it's still an oxymoronic service: a central authority that *REQUIRES* trust, for people who don't trust anybody.

First, if you don't trust Cisco and Akamai to that extent, how do you intend to avoid transporting any of your data on networks that use any of their hardware or software?

Second, I think a lot of people really have no idea how SSL/TLS actually works. There are two forms of trust involved with SSL certificate authorities. The first involves the server operators: they have to trust that CAs behave reasonably when it comes to protecting the process of acquiring certs for a domain name. But that trust has nothing to do with actually using the service. Whether you use a particular CA or not, you have to trust that *all* trusted CAs behave accordingly. If Let's Encrypt, or Godaddy, or Network Solutions is compromised or acts maliciously, it can generate domain certs that masquerade as you whether you use that CA or not. As a web server operator, if you don't trust Let's Encrypt, not using their service does nothing to improve the situation, because they can issue certs for your domain regardless - and so can Mozilla, Microsoft, or Godaddy.

The real trust is actually on the end-user side: end users, or rather their browsers, trust CAs based on which signing certs are in their trust stores. It's really end users who have to decide whether to trust a server and server identity or not, and the SSL cert system is designed to assist them, not server operators, in making a reasonable decision. If you, as an end user, decide not to trust Let's Encrypt, you can remove their cert from your browser. Your browser will then no longer trust Let's Encrypt certs and will generate warnings when communicating with any site using them, and you as the end user can decide what to do next, including deciding not to connect at all.

Seeing as the point of the service is to improve the adoption of TLS for sites that don't currently use it, refusing to trust a Let's Encrypt-protected website that was going pure cleartext last week seems totally nonsensical to me, unless you also refuse to connect to anything that doesn't support HTTPS.

Lastly, if you literally don't trust anybody, I don't know how you could even use the internet in any form in the first place. You have to place a certain level of trust in the equipment manufacturers, the software writers, the transport networks. If all of them acted maliciously, you can't trust anything you send or do.

I don't necessarily trust the Let's Encrypt people enough to believe they will operate the system perfectly, and I don't believe they are absolutely immune from compromise. But I do think the motives of people adding encryption to things currently not encrypted at all is likely to be reasonable, because no malicious actor would be trying to make it easier to encrypt sites that have lagged and would otherwise continue to lag behind adopting any protection at all. Even if they are capable of compromising the system, that is quixotic at best. Even in the best case scenario they would be making things a lot harder for themselves, and in the long run getting people accustomed to using encryption with a system like this can only accelerate the adoption of even stronger encryption down the road.

Comment: Re:Who is this guy (Score 0) 15

by dnavid (#48404631) Attached to: Interviews: Warren Ellis Answers Your Questions

Notable works? Most popular recognizable work? I'm not gonna go googling so do your fucking job editors

If you have no idea who Warren Ellis is and have no intention of actually doing any research, what possible benefit would listing any of his works provide toward appreciating or giving context to an interview?

In any case, three of his best-known works are actually *mentioned* in the interview; namely Planetary, Transmetropolitan, and his run on Hellblazer. I'm also fond of his work on The Authority, which is also very well known. I also know he scripted the animated Justice League Unlimited episode "Dark Heart", the one about the nanotech weapon that landed on Earth.

And jeez, learn to use the rest of the internet, or unlearn the little random bits.

Comment: Re:Reliable servers don't just crash (Score 1) 928

by dnavid (#48283529) Attached to: Ask Slashdot: Can You Say Something Nice About Systemd?

Actually that statement is 100% correct since the definition of a reliable server is one that does not crash.

It's trivially easy to make server software that doesn't crash: just send all exceptions into an infinite loop. Not crashing is a common prerequisite for a reliable server, but far from a sufficient one. In fact, for some software like filesystems and databases, crashing is almost irrelevant to reliability: crashing to prevent data corruption is exactly what reliable systems do in some contexts.
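As a hypothetical illustration of that anti-pattern (the handler and its inputs are made up for the sketch), here is software that can never crash because it swallows every exception, and is unreliable for exactly that reason:

```python
# Anti-pattern sketch: a "crash-proof" handler that swallows every
# exception -- failures are silently converted into bogus results
# instead of being surfaced.

def uncrashable_handler(request):
    try:
        return 100 / request["divisor"]  # raises on a missing key or zero
    except Exception:
        return None  # the failure disappears; the caller just gets None

print(uncrashable_handler({"divisor": 4}))  # 25.0
print(uncrashable_handler({"divisor": 0}))  # None: ZeroDivisionError hidden
print(uncrashable_handler({}))              # None: KeyError hidden
```

A reliable design would do the opposite: validate input, let genuinely unexpected errors propagate, and fail loudly rather than return garbage.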

"Reliability" is when the software does what it is designed to do. That can include protective crash dumping. "Availability" is when the software is always running. Well designed and implemented software is both reliable and available, which is another way of saying its always running, and always running correctly.

Comment: Re:Gay? (Score 1) 764

by dnavid (#48274023) Attached to: Tim Cook: "I'm Proud To Be Gay"

I don't see why it should be a reason to be "proud". Gay is the way he is rather than something he has chosen but it does not confer some form of superiority on him.

I have no idea why people keep saying this, as if the only valid reasons to express pride are to express feelings of superiority or to request acknowledgement of accomplishment. The way I use the word, and the way the dictionary defines it, allows pride to express positive feelings of association, and to express self-esteem, particularly in contrast with the expectation of the opposite. For example, during and immediately following World War II, many Japanese Americans expressed pride in being Japanese Americans. Some served in the military during the war and were proud of their accomplishments, but others did not and were only expressing pride in being associated with a demographic that was often denigrated but did noteworthy things. I don't recall hearing people ask why a Japanese American would express pride in being Japanese when that was not their choice: the reason for making the statement was obvious at the time, as are the reasons for declaring pride in being gay today. Except to people being deliberately obtuse.

Comment: Re:Blades (Score 1) 56

by dnavid (#48172487) Attached to: Making Best Use of Data Center Space: Density Vs. Isolation

The SAN is usually less of a single point of failure because they usually have a lot of redundancy built-in, redundant storage processors, multiple backplanes, etc. You're right that off-site replication is still important, but usually more for whole site loss than storage loss.

People assume the biggest source of SAN failures is hardware failure, and believe hardware redundancy makes SANs less likely to fail. In my experience, that's false. The biggest source of SAN failures is externally induced (usually human) error. Plug in the wrong FC card with the wrong firmware, knock out the switching layer. Upgrade a controller incorrectly, bring down the SAN. Perform maintenance incorrectly, wipe the array. SANs go down all the time, and often for very hard-to-predict reasons. I saw a SAN that no one had made any hardware or software changes to in months suddenly crap out when the network connecting it to its replication partner began flapping - which noticeably affected no one except the SAN, which decided a world without reliable replication was not worth living in and committed suicide by blowing away half its LUNs.

Keep in mind, the last time I saw a hard drive die and take out a RAID array was so long ago I can't remember. The last time I saw a RAID *controller* take out a RAID array - and blow away the data on the array - was only a couple of years ago. It's important to understand where the failure points in a system are, particularly when it comes to storage. These days, they often are not where most people are trained to look. Unless you are experienced with larger-scale storage, you're not trained to look where the problems tend to be.

Storage fails. Any vendor that tells you they sell one of something that doesn't fail is lying through their teeth. Anything you have one of, you will one day have to deal with having zero of. It doesn't matter how "reliable" the parts in that one thing are. You should plan accordingly.

Comment: Re:You could lock down Windows (Score 1) 334

by dnavid (#47933943) Attached to: Ask Slashdot: Remote Support For Disconnected, Computer-Illiterate Relatives

For the purposes of the discussion, I'm assuming they are on Windows 7. If they aren't on Windows 7, they need to get there, at least. If they are still on XP that just sucks because a lot of the below stuff isn't there.

Something to look at, which works for both Windows XP and Windows 7, is software restriction policies, a form of whitelisting built into Windows. With Windows 7 Enterprise or Ultimate editions, you can also use AppLocker, a more sophisticated version of software restriction policies. I'm not an expert on SRP or AppLocker, but I believe both can be used to lock down a desktop and prevent users from running, or otherwise causing to run, any executables except the ones you whitelist. That won't prevent all possible malware from infecting the system, but between that and Malwarebytes I think it would provide significant protection for this specific use case, and you wouldn't have to retrain the users to switch from Windows to a Linux desktop.

Comment: Re:It's a production system (Score 1) 85

by dnavid (#47923861) Attached to: Why Is It Taking So Long To Secure Internet Routing?

The internet is in production. No one wants to touch anything that's already in production unless they literally can't make it any worse. Otherwise we would have IPv6 as well.

Lots of people want to touch production systems. In the case of the internet and BGP, however, evolution has weeded out the people who like to touch production systems, and the only people with administrative rights are still getting over having to support 32-bit AS numbers and wondering where their pet dinosaur went.

Comment: Re: Magic (Score 1) 370

by dnavid (#47912205) Attached to: The State of ZFS On Linux

I was just reading up on Ceph a bit. One thing that does have me concerned is that it does not appear to do any kind of content checksumming. Of course, if you store the underlying data on btrfs or zfs you'll benefit from that checksumming at the level of a single storage node. However, if for whatever reason one node in a cluster decides to store something different than all the other nodes in a cluster before handing it over to the filesystem, then you're going to have inconsistency.

The problem you're describing is one that neither ZFS nor BTRFS is capable of handling either. Both checksum data on disk, but both are vulnerable to errors that occur anywhere else in the write path, from network clients through the OS. That's why ECC or fault-tolerant memory is explicitly recommended for enterprise ZFS servers; a bit flip in memory is impossible for ZFS to correct, or in most cases even detect.
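A tiny sketch of why that is, using a CRC as a stand-in for the filesystem's checksum (the payload and the single-bit flip are made up for illustration):

```python
import zlib

# If data is corrupted in RAM *before* the filesystem checksums it, the
# checksum is computed over the already-corrupted bytes. Any later read
# or scrub re-verifies against that same checksum, so it passes cleanly.

data = bytearray(b"important payload")
data[0] ^= 0x01                      # simulated bit flip in memory, pre-write

checksum = zlib.crc32(bytes(data))   # filesystem checksums the corrupted block
# A later scrub re-reads the block and finds the checksum matches:
assert zlib.crc32(bytes(data)) == checksum
print("checksum verifies despite corruption")
```

On-disk checksumming only detects corruption introduced *after* the checksum was computed, which is why end-to-end protection needs ECC memory upstream of the filesystem.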

"In the face of entropy and nothingness, you kind of have to pretend it's not there if you want to keep writing good code." -- Karl Lehenbauer