Comment Performance testing is unpredictable (Score 1) 483

Performance testing is unpredictable. Sometimes it is not enough to run a profiler and then naively tweak the code until the performance meets the spec. Sometimes you need to change data structures, because you discover, after integration, that your complex data structure behaves slightly differently than it did during unit testing.

Sometimes you find out that some of your algorithms need to be rewritten. On other occasions, your architecture will never be fast enough with the real data you are getting, unlike in your simulations, because of an inevitable and unpredictable interaction between components. You can't always know these things early enough; in practice you almost never do.
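Here is a toy sketch (my own illustrative example, not from any real project) of that kind of data-structure surprise: the same lookup code looks fine on unit-test-sized data but collapses on realistic data, because membership testing in a list is O(n) per lookup while a set is roughly O(1).

    import time

    def contains_all(needles, haystack):
        # Looks perfectly fine in a unit test with a few hundred items.
        return all(n in haystack for n in needles)

    for n in (1_000, 20_000):
        data = list(range(n))

        t0 = time.perf_counter()
        contains_all(data, data)            # list membership: O(n) per lookup
        t_list = time.perf_counter() - t0

        t0 = time.perf_counter()
        contains_all(data, set(data))       # set membership: ~O(1) per lookup
        t_set = time.perf_counter() - t0

        print(f"n={n:>6}: list {t_list:.3f}s   set {t_set:.4f}s")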

Comment Works only if you have all ingredients upfront (Score 1) 483

You can estimate time only if you know all the required components upfront. You should already know all the related technology, ideally from having used it in prior projects. Once you try something no one has managed to complete before, you can't estimate what can be done in a single sprint.

Let's say you have an NP-hard problem, e.g., a variant of set cover or bin packing. You write a prototype that maps the problem into SAT and invokes a state-of-the-art SAT solver, and all is well. The next week you try to solve a real set-cover instance; after it has been running for a couple of hours, you kill it. After tweaking the set-cover-to-SAT converter for the rest of the week, you manage to get a trivial test case to pass. Now, how on earth can you estimate the number of weeks and experiments it will take to get a realistic problem to run reasonably well? NP-hard problem solving sometimes behaves exponentially and sometimes almost linearly, and you don't always know in advance, because it depends on the microstructure of the inputs.
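To make that unpredictability concrete, here is a minimal sketch (a deliberately naive stand-in for the SAT route described above, using brute force instead of a solver): an exact set-cover search that tries covers of size 1, 2, 3, and so on. On friendly instances it answers instantly; on adversarial ones the number of combinations explodes, and the input size alone does not tell you which case you are in.

    from itertools import combinations

    def min_set_cover(universe, subsets):
        """Return indices of a smallest sub-collection of subsets covering universe."""
        for k in range(1, len(subsets) + 1):              # try ever larger cover sizes
            for combo in combinations(range(len(subsets)), k):
                covered = set().union(*(subsets[i] for i in combo))
                if covered >= universe:                   # superset test: everything covered
                    return combo                          # first hit is minimal, since k grows
        return None                                       # universe cannot be covered at all

    universe = set(range(10))
    subsets = [{0, 1, 2}, {2, 3, 4}, {4, 5, 6}, {6, 7, 8}, {8, 9}, {1, 3, 5, 7, 9}]
    print(min_set_cover(universe, subsets))               # -> (0, 1, 3, 5) for this toy instance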

The process of writing software that solves real-world NP-hard problems looks completely stochastic. You can read dozens of papers and try hundreds of different algorithms, approximations, heuristics, technologies and new ideas before you find out how to solve the problem at hand, assuming you ever do. How can you estimate that time upfront?

On the bright side, NP-hardness, undecidability and other tough algorithmic problems are only a niche in the world of programming. Most of the time software development is "only" a matter of engineering, planning and experience, where Scrum could well be the right answer.

Comment Re:It's Israel (Score 2, Interesting) 303

You are reversing the order of events.

1. Build border settlements

The towns near Gaza are not settlements; they are, and always were, within Israel's internationally recognized borders. Ashdod is 43 km (27 miles) away, which does not make it a "border settlement": most of Israel (including Tel Aviv) is less than 43 km from a border.

2. Whine about rocket attacks

The US president would not act nicely towards Mexico if it launched rocket attacks on San Diego either.

they are the mechanism by which Israel is stealing the entire area that was the Palestinian state.

There was no Palestinian state, ever. The U.N. decided to divide the British-controlled area between the Jews and the Arabs. When the British left in 1948, the Arab states conquered the parts that we now call Palestine. This land was an integral part of Jordan and Egypt up until the 1967 war.

Just look at a map from 1948 and a map from today. If you have time, check the map every decade between, you'll see Israel increasing steadily in area.

You are trolling; this is simply false, and the trend has actually reversed lately. Since the peace talks began, parts of the occupied territories were handed over to the Palestinian Authority (1994-95), and some of the newer maps mark these areas correctly. Unfortunately, due to later unrest, Palestinian control was massively eroded (call it retaliation or a security necessity). Despite that, these lands are still marked as Palestinian on many maps.

Gaza and the West Bank are becoming more and more overpopulated as the Palestinian lands shrink, effectively making them concentration camps.

This is only a half truth. The West Bank is shrinking due to Israel's actions, and people there do suffer from it, but this is not so for Gaza (where the rockets come from). Gaza is within its 1948 borders, from when it was part of Egypt, and it is the most overpopulated part of Palestine; Israel has nothing to do with that. Are you saying that Israeli actions deprive Gazans of land they could use in the West Bank? Wrong: Gaza does not border the West Bank. People could never move freely between these two places, not even under Arab rule. Geographically they are two different nations; they were linked together only through political and strategic moves by all sides (Israeli, Palestinian, American, European, Egyptian and Jordanian).

The people of Gaza have only two possible directions to expand: towards Israel (beyond the 1948 borders) or towards Egypt. The former is what many of them want, and it is one of the reasons the peace talks stalled: Israel did not want to let a large percentage of Palestinians immigrate to Israel, and the Palestinians did not want to give that idea up.

Say what you want about Hamas. They were elected fairly, in elections overseen by Jimmy Carter. Whatever you, the UN or your government may think of them, they are the democratically elected party

So was Slobodan Milosevic, and that did not give him the right to do what he did. Hamas does not promote peace; they promote violence, or at most a temporary ceasefire. They do not promote equality, but segregation by gender and religion. If anyone wants peace, she should hope that Hamas is taken out of the equation.

Comment Documentation should not be retrofitted (Score 1) 769

In OSS there is a tendency to code first (or, if you are good, design first), and a year later someone else tries to retrofit user documentation. This will never work right, and this is why:

To have even passably good documentation, the design and code must be easy to explain. There should be relatively few user-visible corner cases, and feature X should behave similarly to feature Y even when they are designed and coded by different people at different times.

What usually happens in OSS is that developer X has an itch and implements his stuff without thinking about developer Y, who is working on a seemingly unrelated feature (proprietary software is no better). They end up with documentation that has to cover two different features built around subtly different ideas. Very confusing.

It is quite possible that features X and Y are technologically independent, but that is not something the user should have to be aware of (most of the time). It takes extra work to make them look similar from the outside, which in turn makes them easier to document.

Consider, for example, the concept of a "file system". Most of the time the user does not have to know whether it is XFS, NTFS or ext4. The documentation is relatively simple and covers 99% of the cases. However, if every file system had different system calls, documenting them would be hell.
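A tiny sketch of that point (my own trivial example): the code that uses a file never mentions the filesystem, so a single page of documentation covers XFS, NTFS and ext4 alike.

    # The same calls work whether this file lives on XFS, ext4 or NTFS;
    # the user-facing API (and therefore the documentation) never changes.
    with open("report.txt", "w", encoding="utf-8") as f:
        f.write("same API, whatever the filesystem\n")

    with open("report.txt", encoding="utf-8") as f:
        print(f.read())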

If every application has different UI shortcuts and concepts, it is much harder to document. Why can't it resemble other applications? Because the coder did not consider the cost of explaining and documenting the thing, only the technology (certainly), the functionality (probably) and the ease of use (hopefully). The documentation was written only after the fact, and by that point many concepts and ideas are set in stone; changing them to ease use and documentation ranges from difficult to impossible.

I have approached it from the wrong direction and then seen the users' pain too many times. I hope I have learned my lesson. It is simply impossible to document the beast in a reasonable way down the road.

Comment What's the point? (Score 1) 386

The amount of resources it reportedly takes makes this not so practical.

What would one want deduplication for? The cost of disk storage has two big elements: speed (latency and throughput) and backup.

It does not seem that this technology would help much in the speed department; it might actually hurt, since managing copy-on-write has several potential costs. It may help backup if the backup program knows the fine details of the deduplication scheme, but that means old backup software will have to be replaced.
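For a feel of the bookkeeping involved, here is a minimal sketch of block-level deduplication (my own toy model, not how any shipping filesystem actually implements it): blocks are stored once, keyed by a content hash, and a file becomes a list of references. Every reference adds a lookup, which hints at where the speed cost comes from, and any in-place write then needs copy-on-write handling.

    import hashlib

    BLOCK_SIZE = 4096
    block_store = {}                  # sha256 digest -> block bytes, each stored once

    def store_file(data: bytes):
        """Split data into blocks, dedup identical blocks, return the reference list."""
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            block_store.setdefault(digest, block)    # stored only if not seen before
            refs.append(digest)
        return refs

    def read_file(refs):
        return b"".join(block_store[d] for d in refs)

    a = store_file(b"hello world" * 2000)
    b = store_file(b"hello world" * 2000)             # second copy adds no new blocks
    print(len(block_store), "unique blocks held for two identical files")
    assert read_file(a) == read_file(b)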

It reminds me of the compressed file system I used to have on my old SLS Linux PC with its small disk (1992, if memory serves me right). It was dog slow to run X11 on. I have not seen a compressed file system since; there was no need, since disk capacity grows much faster than my need for data.

Comment Microsoft's excuse for not updating (Score 5, Informative) 211

After reading "Windows Can but Won't" I am still unimpressed. The article tries to hide a substantial feature present in Linux but not in Windows. Call it a misfeature, a bug, an engineering decision or a precaution, but as it seems, Microsoft's file systems do not handle the removal of in-use files well. If a DLL is in use you can't remove it without dire consequences; you are left with modifying the original file.

On Linux, you can remove the DLL without destabilizing running applications, because the file is merely unlinked from the directory structure: it appears to be removed, but its old contents remain accessible to any application that already has it open. An update mechanism can therefore remove the DLL and put a new one in its place without affecting running applications at all; they simply continue using the old DLL, posing no substantial stability risk.
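A small demonstration of those unlink semantics (Linux/Unix only; the file name is a made-up stand-in for a shared library): a process that already has the old file open keeps reading the old contents even after the file is removed and replaced.

    import os

    with open("libdemo.so", "w") as f:                # stand-in for the old library
        f.write("old library contents\n")

    running_app = open("libdemo.so")                  # a "running application" holds it open
    os.unlink("libdemo.so")                           # the update removes the old file...

    with open("libdemo.so", "w") as f:                # ...and drops a new version in its place
        f.write("new library contents\n")

    print(running_app.read(), end="")                 # still prints: old library contents
    running_app.close()                               # the old inode is freed only now
    os.unlink("libdemo.so")                           # clean up the demo file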

The Linux way isn't perfect either, because running applications do not benefit from the update: such an application will effectively keep using the old DLL until it is restarted, giving a false sense of security. If an affected service is not restarted, the computer is still at risk.

Comment GCC was forked off the FSF (Score 1) 306

The danger of forking is not reserved for commercial entities.
If the community is not happy with whoever controls the code, then it's fork, fork, fork.

If the company that controls the code plays this well, it may have a chance to merge the fork back.

Cygnus was fed up with the FSF's attitude regarding GCC development, and so were other developers.
In fact, most of the active community was fed up. So Cygnus forked GCC into EGCS, which started to thrive.

The FSF came to its senses and reached an agreement with Cygnus and the other developers to merge EGCS back.
EGCS effectively became the official GCC and was released as GCC 2.95.
The smartest thing Cygnus and the other developers did was to keep assigning all copyright to the FSF even during
the fork. That is what allowed the merge back.

If the major players play well, it is possible to merge any fork back.
Things need not be as bad as you say for a company that creates an open source product; things can be fixed if
the developing company is willing to avoid the arm-wrestling game.

The question is how interested Sun, Oracle and the developers are in avoiding a fork.

Comment Re:Price (Score 1) 128

Just so everyone knows:

Tesla Roadster (all electric): $98,000

Liv Inizio (all electric): $100,000

Lightning Hybrids car (biodiesel): $39,000-$59,000

After taxes it should cost about as much as my house.
My bank will surely give me a mortgage for one of those. This investment is definitely safer than what they were doing a couple of years ago.
