Comment Re:Gonna get lambasted for this but... (Score 1) 698

no one foresaw anything like UEFI. How could they?

Well that's just not true. IBM PowerPC and Sun Microsystems hardware had sophisticated boot firmware (Open Firmware) for a very long time. x86 was way behind the curve on this one, and firmware was well understood by the time UEFI arrived. Perhaps UEFI has its own quirks, but it was hardly novel.

Comment So basically, it's like universal health insurance (Score 1) 285

Collectively empathizing with people who have fallen on hard times due to sickness, and helping each other out with small donations to pay those bills? So basically it's health insurance, except one that the poor can actually afford. It won't be nearly as effective, though, because it isn't mandatory.

Comment Flawed? (Score 1) 303

There are three main factors: number of conspirators, the amount of time passed since it started, and how often we can expect conspiracies to intrinsically fail (a value he derived by studying actual conspiracies that were exposed).

I don't see how this analysis could possibly be as conservative as the author believes. He estimates a lower bound on a failure parameter based on exposed conspiracies, but that can't be convincing to anyone who believes there exist long-term conspiracies that have yet to be revealed. Such undiscovered conspiracies would skew that parameter estimate considerably, and not in the author's favour.

In other words, conspiracy theorists would simply dismiss exposed conspiracies as outliers, not representative samples. That makes their position even less plausible, since it requires accepting still more unlikely premises, but I doubt that's a problem for conspiracy theorists.
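The selection bias here is easy to demonstrate with a tiny simulation (all numbers are made up for illustration, not taken from the paper): if you estimate a per-year exposure rate only from conspiracies that were actually exposed within some observation window, the truncated sample makes the rate look much higher than it really is.

```python
import random

random.seed(42)

TRUE_RATE = 0.01      # hypothetical true per-year exposure rate
WINDOW = 50           # we only observe conspiracies exposed within 50 years
N = 100_000

# Draw exponential times-to-exposure for every conspiracy, seen and unseen.
times = [random.expovariate(TRUE_RATE) for _ in range(N)]

# An analyst studying only *exposed* conspiracies sees just the ones that
# failed inside the observation window -- a truncated sample.
exposed = [t for t in times if t <= WINDOW]

# Naive rate estimate from the truncated sample: 1 / mean time-to-exposure.
naive_rate = len(exposed) / sum(exposed)

print(f"true rate:  {TRUE_RATE}")
print(f"naive rate: {naive_rate:.4f}")  # comes out several times too high
```

With these numbers the naive estimate lands around four times the true rate, purely because the long-lived, still-hidden conspiracies never enter the sample.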

Comment Re:Still not seeing all the fuss about AI (Score 1) 167

AI? Dangerous? I mean, yeah, in the same way that humans are. Being afraid of AI is like being afraid of very, very smart children.

I think you're downplaying the danger. AIs are like intelligent, immortal children that can communicate and coordinate across the globe faster than you can blink and whose values and perceptions of the world are completely inscrutable.

Comment Re:I don't understand the concern, personally. (Score 2) 167

Absolutely anything that you would have to worry about an artificial intelligence doing that might be troublesome to our society, you would also have to reasonably worry about a malicious person doing, albeit perhaps more slowly.

A lot more slowly. A coordinated action would be much easier for an AI than for humans, and much harder for us to spot.

Also, we can somewhat anticipate and understand human reasoning, even when it's couched in different cultural values, because we share most of our biology. AI reasoning might very well be completely alien and unpredictable to us, and its motivations completely inscrutable. That's a good reason to be very cautious. I think the movie Ex Machina did a decent job highlighting this.

Further, AIs can be effectively immortal if any copies exist. That kind of opponent is incredibly dangerous.

Humans are at a huge disadvantage compared to AIs, so there are plenty of rational reasons to fear them. I don't think we're anywhere near the point where this fear is justified on a daily basis, but it's not inconceivable that we could get there sometime in the 21st century.

Comment Re:Sounds like an MBA plan! (Score 2) 216

Yes, just make your users your testers, great idea.

User acceptance testing is the only meaningful testing of this sort. There's really no way around it. Black-box testers don't have the domain knowledge that end users have, so nearly any usability feedback from them is practically meaningless. How could a black-box tester possibly know whether a particular screen layout, or a particular way of organizing information, would make sense to an end user?

Developers are frequently not very in tune with their target end users' perspective. That's why I see my fellow developers get frustrated with testers and even clients, wondering why the users can't do things the way the developer thinks they should.

If users are confused, it's an impedance mismatch between what a developer thinks the user knows and what the user actually knows. That's a spec failure. The developer can't magically divine what a hypothetical user would know at any given moment. How could a QA team know this information? And if they somehow did, why not just communicate it to the developer in the spec in the first place?

Comment Re:Sounds like an MBA plan! (Score 1) 216

Please point out where I said "unit tests" anywhere in my comment. I specifically said automated tests, which encompass any testing procedure that can conceivably be run on a computer (which is just about anything a human can do).

And automated tests follow a fixed procedure by definition: they're computer programs. People performing tedious, repetitive tasks, like manually testing programs, make frequent mistakes. This is a well-known fact from the manufacturing industry.

Comment Re:Sounds like an MBA plan! (Score 2) 216

By all means add more automated testing and hopefully make the QA process smoother, but skipping it entirely (they fired the QA team) is a recipe for disaster, and the only thing they have to measure the effectiveness of this is the number of issues in their issue tracker, which was *fed* by the QA team.

Which can be replaced by an easy issue-submission process embedded in the program itself. Combined with a trace of the user's actions, which Yahoo already uses as part of its distributed architecture, the developer would be able to reproduce precisely any behaviour seen.

The testers are using the software much like a human would, and they do unexpected human things like the end user would be doing. This particularly helps identify usability issues, which isn't really an automation sort of thing.

But you can automate entry of unexpected data too, without having to generate the data/tests yourself. See QuickCheck and the other testing tools it inspired, which invoke functions with a large set of values drawn from the full domain of their parameters, starting with upper and lower bounds, i.e. if your function accepts an int, it will be invoked thousands of times with int.MaxValue, int.MinValue, and a number of randomly selected values in between.
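The core idea fits in a few lines of Python (a toy sketch; the check_property helper is something I'm inventing here, and real QuickCheck additionally shrinks counterexamples and generates structured data, not just ints):

```python
import random

INT_MIN, INT_MAX = -2**31, 2**31 - 1  # treat inputs as 32-bit int bounds

def check_property(prop, trials=1000, seed=0):
    """Call prop(x) with boundary values first, then random ints in between.
    Returns the first counterexample found, or None if all cases pass."""
    rng = random.Random(seed)
    cases = [INT_MIN, INT_MAX, -1, 0, 1]
    cases += [rng.randint(INT_MIN, INT_MAX) for _ in range(trials)]
    for x in cases:
        if not prop(x):
            return x  # counterexample
    return None

# A property that holds: abs() is never negative for Python ints.
assert check_property(lambda x: abs(x) >= 0) is None

# A deliberately broken property is caught immediately at a boundary value.
assert check_property(lambda x: x < INT_MAX) == INT_MAX
```

Trying the bounds before the random values is what makes this cheap: off-by-one and overflow bugs cluster at the edges of the input domain, so they're usually found on the first few calls.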

Sometimes a single-minded march toward matching the spec is a bad idea, because the spec is bad and needs to be revisited.

Sure, but you can't fix spec problems in the testing phase with a QA team that aren't domain experts. Only end users can validate something like this, which is why shorter feedback cycles with end users are key to delivering a good result.

Re: humans having a hard time using an interface, this is generally obvious to the developers too while they're writing and testing their own code. They just don't have a process by which they can question the viability of the spec and be taken seriously.

Comment Re:Sounds like an MBA plan! (Score 1) 216

We were tired of being constantly bogged down by all these mistakes and bugs, so we got rid of the people who kept telling us about all the mistakes and bugs.

His argument is sound though. Testers don't always follow proper testing procedure, so plenty of bugs will still slip through, and it takes far too long for results to feed back into the develop-test cycle. Better to spend all that QA time writing more automated tests of various kinds, which the devs can run themselves during their much shorter develop-test feedback cycle. That obviously improves robustness in matching the spec, and you're left with just acceptance testing to ensure your spec matches client expectations.
