Comment It needs lots of enemies (Score 4, Informative) 85

The original Doom games were noteworthy for having big levels that contained lots and lots of enemies. I haven't played Doom 3, but I've heard that it has much more beautiful 3D graphics, and as a result you would be attacked by only a few monsters at a time (because too many would overwhelm the graphics adapters that were current when that game came out).

My favorite thing in the original Doom games was getting the monsters to fight each other. If you could get an Imp to hit a Cacodemon with a fireball, for example, the two would get into a fight. Frequently I would lure some monster into the line of fire, and as soon as it was hit, it would forget about me and go kill whatever monster hit it. This is more fun to me than just shooting everything. I hope the new game has this.

The specific rules: monster special attacks don't hurt other monsters of the exact same type... for example, Imp fireballs don't hurt Imps. But the zombie soldiers shoot bullets and bullets hurt anything, so you could get soldiers to fight each other. And anytime a monster hit a different kind of monster it would do damage.
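For anyone curious how simple the rule is, here is a tiny Python sketch of the infighting logic as I described it above. This is purely my own toy illustration, not code from the actual Doom engine, and the monster names are just examples.

# Toy model of Doom-style infighting damage, as described above.
def attack_hurts(attacker_type, attack_kind, target_type):
    """Return True if the attack damages the target."""
    if attack_kind == "bullet":
        return True  # bullets (zombie soldiers) hurt anything, even the same type
    # special attacks (fireballs etc.) never hurt monsters of the same type
    return attacker_type != target_type

print(attack_hurts("imp", "fireball", "imp"))            # False: Imps ignore Imp fireballs
print(attack_hurts("imp", "fireball", "cacodemon"))      # True: the Cacodemon turns on the Imp
print(attack_hurts("zombieman", "bullet", "zombieman"))  # True: soldiers can fight each other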

P.S. Doom was once modified as a way to control processes on a system. Kill a process with a shotgun! https://www.cs.unm.edu/~dlchao/flake/doom/chi/chi.html

One side-effect of this is that processes on a system can get into a fight with each other. Two processes enter, one leaves. Not recommended for critical systems.

Comment Two opposed positions on abortion, both libertarian (Score 2) 459

If you say something about my freedom stopping at his nose, then I remind you that the baby's right to live stops at the aborter's saline injection, scraping blade, etc.

Libertarians might agree that abortion should be illegal, or might not. I'll explain why:

The core of libertarian philosophy: force and fraud are not acceptable, but as long as people are free to choose, the state shouldn't intervene.

Thus a libertarian would not be in favor of the state forbidding drugs like alcohol or tobacco or marijuana. If a person chooses to use such drugs it is his/her choice.

But a libertarian would agree that murder should be illegal.

So it comes down to: is an abortion murder?

Libertarians who believe that life begins at conception, and that even a one-week-old embryo counts as a person, would believe that abortion is murder and thus should be illegal.

Libertarians who believe that an embryo isn't a person yet would believe that abortion should be the choice of the mother.

The question of whether an embryo is a person is not one that is decided by libertarian philosophy, and thus two people who are libertarians might have opposite opinions.

All libertarians would agree that the state should not be using tax money to fund abortions. Some libertarians think the state should be very small, and others (the "anarcho-capitalists") want no state at all; none would consider funding abortions to be a legitimate function for the state.

P.S. I read an essay by Carl Sagan where he suggested that before brain activity starts up, a fetus is not a person, but after the brain is functioning it should be considered an unborn person. IIRC he put that at around the third trimester. (Note: I did a Google search and found one web page saying brain activity starts around 25 weeks, which would be shortly before the start of the third trimester.)

Comment Re:the point (Score 5, Informative) 130

The point of Docker is to have a single package ("container") that contains all of its dependencies, running in isolation from any other Docker containers. Since the container is self-contained, it can be run on any Docker host. For example, if you have some wacky old program that only runs on one particular set of library versions, it might be hard for you to get the Docker container just right to make it run; but once you do, that container will Just Work everywhere, and updating packages on the host won't break it.
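As a tiny illustration of that portability, here is a sketch using the Python Docker SDK (the docker package); the Alpine image tag is just an example I picked, not anything from the story. The same self-contained image behaves the same way on any host that runs a Docker daemon:

# Minimal sketch using the Python Docker SDK (pip install docker).
import docker

client = docker.from_env()
# The container carries its own userland, so upgrading packages on the host
# does not affect what runs inside it.
output = client.containers.run("alpine:3.3",
                               ["echo", "hello from an isolated container"],
                               remove=True)
print(output.decode().strip())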

The point of the news story is that someone did a better job of stripping the container down, removing libraries and such that were not true dependencies (weren't truly needed).

Not only does this make for smaller containers, but it should reduce the attack surface by removing resources that are available inside the container. For example, if someone finds a security flaw in library libfoo, stripping out libfoo wherever it is not needed protects against that flaw; it's pretty hard for an exploit to call code in a library that isn't present. Also, presumably all development tools and even things like command-line shells would be stripped out. Thus a successful attacker might gain control over a Docker container instance, but would have no way to escalate privileges any further.

If the stated numbers are correct (a 644 MB container went down to 29 MB) yet the new small package still works, then clearly there is a lot of unnecessary stuff in that standard 644 MB container.
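If you wanted to sanity-check numbers like that yourself, the Python Docker SDK makes it easy to compare image sizes; the image tags below are hypothetical placeholders for a standard and a stripped-down build:

# Compare image sizes with the Python Docker SDK; tags are placeholders.
import docker

client = docker.from_env()
for tag in ("myapp:standard", "myapp:stripped"):
    image = client.images.get(tag)
    print(f"{tag}: {image.attrs['Size'] / 1e6:.0f} MB")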

Comment Hey AMD, show us your new CPUs for 2016 (Score 5, Informative) 152

Hey, AMD, show us your new CPUs for 2016. Everything you got now is long in the tooth.

How right you are. But AMD's basic problem has been that they are still stuck on old semiconductor fabrication processes. Intel has spent a lot of money on fab technology and is about two generations ahead of AMD. It doesn't help that AMD's current architecture isn't great, either.

I'm not a semiconductor expert, but as I understand it: the thinner the traces on the semiconductor, the higher the clock rate can go or the lower the power dissipation can be (those two are tradeoffs). Intel's 4th-generation CPUs were fabbed on a 22 nm process, and their current CPUs are fabbed on a 14 nm process. AMD has been stuck at 28 nm and is in fact still selling CPUs fabbed on a 32 nm process. It's brutal to try to compete from so far behind. But AMD is simply skipping the 22 nm node and going straight to 14 nm. (Intel has 10 nm in the pipeline, planned for a 2017 release, but it should be easier to compete at 14 nm vs 10 nm than at 32/28 nm vs 14 nm! And while it took AMD years to get to 14 nm, there are indications that they will make the jump to 10 nm more quickly.)

But AMD is about to catch up. AMD has shown us their new CPU for 2016; its code-name is "Zen" and it will be fabbed on a 14 nm process. AMD claims the new architecture will provide 40% more instructions-per-clock than their current architecture; combined with finally getting onto a modern fab process, the Zen should be competitive with Intel's offerings. (I expect Intel to hold onto the top-performance crown, but I expect AMD will offer better performance per dollar with acceptable thermal envelope.) Wikipedia says it will be released in October 2016.

http://www.techradar.com/us/news/computing-components/processors/amd-confirms-powerhouse-zen-cpus-will-arrive-for-high-end-pcs-in-2016-1310980

Intel is so far ahead of AMD that it's unlikely that AMD will ever take over the #1 spot, but I am at least hoping that they will hold on to a niche and serve to keep Intel in check.

The ironic thing is that Intel is currently making the best products, yet they still feel the need to cheat with dirty tricks like the Intel C Compiler generating deliberately slower code paths for CPUs with a non-Intel CPUID. I also don't like how Intel segments their products into dozens of tiers to maximize money extraction. (Oh, did you want virtualization? This cheaper CPU doesn't offer that; buy this more expensive one. Oh, did you want ECC RAM? Step right up to our most expensive CPUs!)

Intel has been a very good "corporate citizen" with respect to the Linux kernel, and they make good products; but I try not to buy their products because I hate their bad behavior. I own one laptop with an Intel i7 CPU, but otherwise I'm 100% non-Intel.

I want to build a new computer and I don't want to wait for Zen, so I will be buying an FX-8350 (fabbed on a 32 nm process, ugh). But in 18 months or so I look forward to buying new Zen processors and building new computers.

Comment Re:Team Reviews are far superior (Score 1) 186

When I look at the list of 100 bugs found by a single tester in my team, who is not busy having review meetings and counting metrics, in a week, I laugh at these numbers.

If your tester is finding 100 bugs a week, you're doing it wrong. Your underlying quality is much too low. It's much more expensive to find a bug by functional testing than by code inspection. This is because all those bugs need to be fixed and retested. This usually requires a rebuild and other ancillary tasks that drive up cost.

Worse, it usually turns into a geometric progression: for every hour spent fixing bugs, some fraction of new bugs is introduced that also has to be removed by the process. The cycle repeats until the defect count is acceptable. Even with a relatively low coefficient of bug introduction, the geometric series usually adds 20-30% of additional cost to the development.
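To make that concrete, here is a quick sketch of the series (my own illustration with made-up coefficients, not figures from any particular study):

# Toy model of the rework spiral: each hour of bug fixing introduces new
# defects costing a fraction r of that hour to remove, and so on.
def extra_rework_fraction(r, rounds=50):
    """Approximate r + r^2 + ... , i.e. r / (1 - r)."""
    return sum(r ** k for k in range(1, rounds + 1))

for r in (0.15, 0.20, 0.25):
    print(f"r = {r:.2f}: about {extra_rework_fraction(r):.0%} extra cost")
# A coefficient around 0.2 already means roughly 25% more effort on top of
# the initial fixing time, which is where the 20-30% figure comes from.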

Sometimes I think a lot of software processes are held up as improving quality not because they actually work, but because the reduced productivity makes the quality metrics look better.

This comes back to my earlier point about people ignoring published research because they feel they know better. Do you know there are properly controlled scientific trials that establish the truth of what I'm saying? Why is your thinking superior to this research? Why is this research defective?

Comment Re:Team Reviews are far superior (Score 2) 186

No offense meant, honestly, but your place sounds miserable to work at. It's not the process, but the ridiculous level of formalization and standardization.

Code inspections work best when they're formal, with clearly defined roles and clear reporting steps. There have been large-scale studies that confirm this; the research fed into the development of the Cleanroom methodology pioneered at IBM.

The less formal the structure, the less well it works.

One of my big bugbears with software development as a craft is our failure to really learn from experience. Lots of studies were done on the craft decades ago that cleanly establish these basic principles. We choose to ignore them because developers feel they know better than the published research.

The truth is that people suck at writing software. Even the very best developers in an organisation are not as good as a team of lower-quality people that inspects its own output. Teams > individuals.

Honestly, it isn't as corporate as it first appears. Once the roles are defined, the work turns to inspecting the source. It takes a few seconds to cover off that part of the meeting and from there the real work begins.

There are other benefits.

One is that everyone has read everybody's source. There's none of this "Only Bill knows that piece of code." The whole team knows the code very thoroughly.

Another is that relatively junior people end up producing code just as solid as that of a person with 25 years of experience, and they learn a lot along the way. Do not underestimate the tremendous power of that.

My teams enjoy the process and they certainly enjoy not getting as many bugs coming back to bite them in the future when the feature is out in production. Once they're done, they tend to be done and are free to move on to the next feature.

The benefits of a cleaner code base, fewer issues, and more accurate delivery times have a huge effect on morale.

Comment Re:Team Reviews are far superior (Score 1) 186

Please mention the place so I never get within a mile of it. How would Linus have created Linux without people like you? Didn't he understand the technical debt he was creating? He could have been finding bugs at a rate of 1.25 per applied man-hour instead of actually creating something useful! Silly man. You process guys are useless.

I find this example really odd, because Linux is built around a huge amount of code review. They do it differently because they're a distributed team, but they absolutely have a rigorous code review process.

Comment Re:Team Reviews are far superior (Score 3, Interesting) 186

You sound like a bean counter, and your organisation sounds like it is hell to work in. 1.25 bugs per man hour? Christ.

Well I'm the head of development at our place so I inhabit both worlds. Businesses like to measure return on investment. By being able to speak that language, I can generally frame activities developers naturally want to do in those terms. This leads to developers getting more of what they want.

You know what developers really, really, really hate? Having to work with technical debt and having no process to remove that technical debt because the program is now "working".

The best way around technical debt is not to put it into the program in the first place. This process does a sterling job of that, so our developers are generally a pretty happy bunch.

Comment Team Reviews are far superior (Score 4, Interesting) 186

In our organisation, we have teams of six people that work together on their sprint. QA staff are included in these teams.

On major features, the team code reviews the feature together in a special session. Roles are assigned: the author is present, and a reader (who is not the author) reads the code. There is an arbitrator who decides whether a raised issue gets fixed; this role is rotated through the team on an inspection-by-inspection basis. Finally, there is a timekeeper who moves the conversation to a decision if one topic is debated for more than three minutes.

This process typically finds a humongous number of issues. It takes us about 4 hours of applied effort to discover a bug in pure functional testing, whereas this process discovers bugs at a rate of 1.25 bugs per man-hour of applied effort. So if you have five people in a room for one hour, you have applied 5 man-hours and you'd expect to find 6-7 bugs. If you include all the stylistic coding-standards issues, it's typically 10-15 per hour.
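To spell out the arithmetic with the figures above (just a back-of-the-envelope sketch using the numbers I quoted, nothing more):

# Back-of-the-envelope comparison using the rates quoted above.
hours_per_bug_functional_testing = 4.0   # ~4 applied hours per bug found in test
bugs_per_man_hour_inspection = 1.25      # inspection discovery rate

people, session_hours = 5, 1
applied_hours = people * session_hours
bugs_found = applied_hours * bugs_per_man_hour_inspection
print(f"{applied_hours} man-hours of inspection -> ~{bugs_found:.1f} bugs")
print(f"Finding those same bugs in functional testing -> "
      f"~{bugs_found * hours_per_bug_functional_testing:.0f} applied hours")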

So while on the surface it looks expensive to have all those people in a room talking, the net result is that it tends to accelerate delivery, because so many issues are removed from the software. Better still, the review occurs before functional testing begins, which means the QA staff on the team can direct their testing at the areas highlighted by the inspection process. This further improves quality.

It's true that about 50% of the issues are stylistic. But usually we get 1 or 2 bugs per session that represent a serious malfunction in the program; the rest could be problems under some circumstances, or are minor faults.

Team reviews are vastly, vastly superior to pair-programming. There really is no contest.

Comment Server 54 was walled off (Score 3, Interesting) 332

Only 4 years, not 18+, but still a good story. At the University of North Carolina they took an inventory of their servers and realized they couldn't find one. Eventually, by following cables, they discovered that it had been sealed up behind a new wall four years previously. The server had been chugging along with no problems during that whole time.

http://www.informationweek.com/server-54-where-are-you/d/d-id/1010340?

Comment Re:Unbiased source? (Score 4, Interesting) 110

Right. This is why I think VP9 actually could win and become the new standard (replacing H.264).

H.265 and VP9 seem like they are definitely in the same ballpark on quality. And H.265 is heavily encumbered with patents; you have to pay royalties, and you never know what the royalties might cost in five years. VP9, on the other hand, is simply free: no royalties, no restrictions on what you may do with the video.

Even if VP9 takes a lot more CPU time to encode, and even if H.265 is slightly better than VP9, not having to pay royalties (not even having to keep track of what you do with the video!) is such a huge benefit. It seems like a no-brainer.

And Google will be making sure that all the Android phones at least will have good hardware support for VP9 decoding. VP8 never had a chance against H.264 because the hardware support wasn't there, and large companies were content to pay the capped fees as you noted.

All that's left is possible legal FUD around VP9, but even that seems pretty cut-and-dried to me. MPEG-LA tried for something like a year to find patents to put into their patent pool to extract royalties from VP8, and in the end Google gave them a one-time payment of (to Google) a relatively small amount of money. Thanks to that one-time payment we know MPEG-LA won't ever come after anyone for using a VP8-derived codec, and I have no reason to think anyone would be able to prevail in court if they try it.

Given all of the above, it seems to me that VP9 is the obvious choice for the new video standard, and I kind of wonder why anyone is still interested in using H.265 and paying the royalties.

Comment Build your own O2 headphone amp (Score 4, Informative) 135

The O2 headphone amplifier is an extremely clean amp that can drive almost any headphones. It sounds great. Pair it with a clean DAC, rip all your CDs to FLAC, and you can listen to your music from your computer with the very highest in fidelity.

If you can solder, you can build the O2 amp for $30 to $40 worth of parts.

http://nwavguy.blogspot.com/2011/08/o2-summary.html

The guy who designed the O2 also designed a really good DAC. He wanted to release it as a DIY project but the realities of the DAC chip business mean that it was only practical to sell a complete DAC board. But you could make a project out of building an O2 amp in an enclosure with the DAC board built-in. (I have such a device but I can't solder; I bought mine from JDS Labs, pre-built.)

http://nwavguy.blogspot.com/2012/04/odac-released.html

I am friends with a world-class audio expert, and he agrees that the O2+ODAC is the best way to spend your money. It's as clean as $1000+ solutions.

P.S. Article about the guy who designed the O2 and ODAC: "the audio genius who vanished"

http://spectrum.ieee.org/geek-life/profiles/nwavguy-the-audio-genius-who-vanished

Comment Re:That's exactly right (Score 1) 645

I'm glad that you are so happy with the cost of electricity. However, I keep reading magazine articles about what a disaster the energy policy in Germany has been, and your one data point does not convince me.

The Economist wrote:

The simultaneous dash to renewables and new fossil-fuel power plants resulted in overcapacity and caused wholesale prices to tumble, which has battered the utilities' profits.

At the same time, the prices paid by consumers have been rising. This is because of the above-market prices guaranteed for renewable energy.

[...]

This means that traditional utilities have turned instead to much more climate-damaging coal for generation. The result is that prices have gone up and the use of renewable sources has expanded, but Germans have ended up emitting more carbon dioxide as a result of the extra coal...

But it gives me no happiness to think that the energy plan in Germany is failing. I hope that it will work out eventually.

What Germany really needs, what everyone really needs, to make renewable energy work is storage. I am hoping for new storage technologies to make grid-level storage practical... the liquid metal batteries from Ambri, or pumped electrolyte batteries, or whatever. The only currently practical technology is pumped hydro, and the energy policy in Germany has led to pumped hydro facilities shutting down. If your energy policy leads to coal plants continuing to operate and pumped hydro shutting down, You're Doing It Wrong.

On the other hand, I am also reading that companies in Germany are planning to build more pumped storage within a decade despite the current economic disincentives, and that coal use is going down. Perhaps it will work out in the future.
