
Comment Re:I don't get it (Score 1) 388

No, the logic problem is with sugar-free cans. Many obese people drink sugar-free sodas, hence no calorie intake,
and my argument fails. However, in the case of sugar-free drinks the correlation != causation problem is reversed:

Who drinks sugar-free sodas? Obese people, probably more than others. So if you are obese, my guess is you are
more likely to drink soda, with or without sugar. The paper talks about calorie intake, but with sugar-free
sodas you have practically no calorie intake. And yet, you get BPA for free.

Comment Re:We already know soda drinkers are fat (Score 1) 388

I can think of a dozen other things they did not account for. They are simply measuring the wrong thing.

Did they consider the timing of soda drinking? During lunch at school (more likely cans), or
while watching TV (more likely bottles). There are studies that correlate drinking at lunchtime
with weight issues.

How about 2L bottles being used by other members of the family and by visitors, making it difficult to estimate
any one member's caloric intake from soda? Caloric intake is very easy to get wrong unless you perform complicated
measurements. Counting calories according to what you think you ate is only slightly better than guessing.

Some people forget to add up the calories of the huge amount of calorie-loaded sauce they add to their salad,
or the obscene amounts of oil and butter in almost everything they eat.
So maybe the lack of BPA is correlated with overestimation of caloric intake for the drinkers of 2L bottles?

Also, obese people like to drink soda during lunch (canned). If they somewhat care for their bodies, they hope that
drinking sugar-free soda will save them. So they are more likely than people with a healthy weight to get a lot of
BPA without additional calories. So, maybe they are obese because of all those artificial sweeteners? Did they check
the urine for that?

If I really wanted to "prove" that artificial sweeteners are the cause of obesity, I would probably find a way
that would look as convincing as this study. I would also be able to "prove" that people who eat a lot of quinoa are more likely to get smallpox, despite vaccination.
[hint: the link is the vegetarian/new-age crowd and the vaccination level of their friends]

I am not saying that they did it; they almost certainly did not. But cooking up the desired results is significantly easier than
detecting it in a review. Similarly, when the distance between the alleged cause and the effect is as big as it is here,
it is almost impossible to rule out correlation != causation errors.

Comment I don't get it (Score 1) 388

Many, if not all, canned drinks contain bisphenol-A.
Don't people who drink substantial amounts of canned, sweet beverages instead of water become obese?
Hence, if you get a lot of BPA in your system, you have a good chance of being obese.

So what's the news?

(There is a small problem with this logic, which can easily be fixed; I was lazy)

Comment Re:I'm no longer conerced about it (Score 2) 212

The stricter US internet laws become, the bigger the chance the US will be cut off from the rest of the internet. If most internet users live in freer countries, they will use a different set of DNSSEC resolvers. This means that internet addressing will become fragmented between the US and the free world, such that the same address means one thing in the US and another in the free world.

The economic impact of such fragmentation on the US will be considerable. It will be a natural continuation of the US economy's decline. As usual, the brilliant politicians will blame the situation on pirates and will continue to draft even stricter free-speech (anti-piracy) laws.

I am glad that I do not live in the US, but this situation may change. Unfortunately, many countries tend to copy the worst US practices (laws and behaviour) over time. We may end up in the US fragment of the internet and not where the rest of the world is.

Comment Re:How Absurd (Score 1) 545

The problem is that slow typists try to avoid doing much typing. This means that they avoid things like detailed comments and meaningful symbol names.

Being a fast typist, I don't mind writing a lot of comments that I intend to delete at the end of the day. I don't mind writing two or three variants of the same code just to see which approach is more understandable and maintainable. I don't see the slow typists writing comments describing every parameter, pre-condition, and post-condition. When one argument of a method changes its meaning, I don't see the slow typists rewriting the method's description, even when it would be much clearer to do so.

Comment Re:Generalized Sudoku is NP-complete (Score 1) 86

The Sudoku problem is in general NP-complete.

Their website says they are solving a simpler problem:

here we solve a 4x4 grid version. However, expanding on the same principles, our E. coli can theoretically solve larger grids, for example 9x9 grids.

9x9 Sudoku problems that you find in magazines or online are trivial Constraint Satisfaction Problems (CSP). Trivial CSP solvers can solve thousands of these in one second. Given that they solved only such a trivial instance, I am not sure their work can be scaled to more complex variants. Their 4x4 variant is so simple that it can even be solved efficiently by a trivial program written from scratch in a couple of hours (see the sketch below). This is in contrast to the general problem, which is in a completely different class.
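
For a sense of scale, here is a minimal 4x4 backtracking solver in Python. It is purely my own illustration of how little work the 4x4 variant takes; it has nothing to do with the E. coli encoding, and the sample puzzle is made up:

    # Minimal Sudoku backtracking solver. 0 marks an empty cell;
    # boxes are n x n (n=2 gives the 4x4 variant).
    def solve(grid, n=2):
        size = n * n
        for r in range(size):
            for c in range(size):
                if grid[r][c] == 0:
                    for v in range(1, size + 1):
                        col = [grid[i][c] for i in range(size)]
                        br, bc = r - r % n, c - c % n
                        box = [grid[i][j] for i in range(br, br + n)
                                          for j in range(bc, bc + n)]
                        if v not in grid[r] and v not in col and v not in box:
                            grid[r][c] = v
                            if solve(grid, n):
                                return True
                            grid[r][c] = 0  # backtrack
                    return False  # no value fits this cell
        return True  # no empty cells left: solved

    # A made-up 4x4 instance:
    puzzle = [[1, 0, 0, 0],
              [0, 0, 3, 0],
              [0, 4, 0, 0],
              [0, 0, 0, 2]]
    if solve(puzzle):
        for row in puzzle:
            print(row)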

Comment Your kid can't find his keys, again! (Score 1) 242

Assume that your kid forgets/loses the house keys every other week, in which case your neighbor sends an urgent e-mail titled "Your kid can't find his keys, again!", which happens to be the only e-mail he/she ever sends you. Now assume that you never reply to this e-mail but instead rush home to open the doors.

Is this Google feature going to downgrade this repeating e-mail, just because I never reply to it?

For you single nerd types, how about an automated "XXX system is going down for reboot"? You may want to look at it in time, especially if you are trying to analyze a recurring issue with that XXX system. (I guess that most people would not use Google for this due to privacy concerns.)

Comment Random testing (Score 1) 396

Injecting random, but smart, inputs and automatically checking the outcome is much more stimulating than the boring stuff you are talking about. It will exercise your mind and your code far more than going over a list of events one by one.

In my experience, around 80% of the bugs that my random tests detect happen in a combination of events that I would never have thought of testing. I also make sure that the simple events a tester would think of have a reasonable chance of being tested.

So, what's the magic? Pure random testing does not work; it has almost zero chance of hitting bugs. Instead, think of corner cases in your input and give them a higher probability (e.g. zero-sized string, MAX_INT, MAX_INT-1 and so on). Finally, generate different tests with different "entropy", otherwise you are testing the same thing over and over again. This "entropy" thing is tricky to explain and to notice. For example, if your input is a bool[1000] and you randomize the elements independently (half True and half False), then you would have practically zero chance of getting 1000 elements of the same value (which could trigger something interesting). A sketch is below.
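
To make the "corner cases plus entropy" idea concrete, here is a minimal sketch in Python. The corner-value list and the probabilities are arbitrary choices of mine, not a recipe:

    import random

    MAX_INT = 2**31 - 1
    CORNER_INTS = [0, 1, -1, MAX_INT, MAX_INT - 1, -MAX_INT]

    def random_int():
        # ~30% of the time, pick a corner value instead of a uniform draw.
        if random.random() < 0.3:
            return random.choice(CORNER_INTS)
        return random.randint(-MAX_INT, MAX_INT)

    def random_bools(n=1000):
        # Draw a per-test bias first; p at 0 or 1 yields all-False/all-True
        # arrays that independent fair coin flips would practically never
        # produce.
        p = random.choice([0.0, 0.01, 0.5, 0.99, 1.0])
        return [random.random() < p for _ in range(n)]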

The final thing is to check your outputs by validating invariants in your code, both from inside, using assertions, and from outside. Finding good invariants is like a riddle; the details are too complicated for one post, but here is a tiny taste.
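
A tiny example of the invariant idea, using a sort routine as a hypothetical stand-in for the code under test:

    from collections import Counter
    import random

    def my_sort(xs):            # stand-in for the code under test
        return sorted(xs)

    def check_sort_invariants(xs):
        ys = my_sort(xs)
        # Invariant 1: the output is ordered.
        assert all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1)), "not ordered"
        # Invariant 2: the output is a permutation of the input.
        assert Counter(ys) == Counter(xs), "not a permutation of the input"

    for _ in range(10000):
        n = random.randint(0, 50)
        check_sort_invariants([random.randint(-5, 5) for _ in range(n)])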

Comment Performance testing is unpredictable (Score 1) 483

Performance testing is unpredictable. Sometimes it is not enough to run a profiler and then just naively tweak the code to get the performance that meets the spec. Sometimes you need to change data structures because you figured out, after integration, that your complex data structure behaves slightly differently than it did during unit testing.

Sometimes you find out that some of your algorithms need to be rewritten. On other occasions, unlike in simulations, your architecture will never work fast enough with the real data you are getting, because of an inevitable and unpredictable interaction between components. You can't always know these things early enough; in fact, you almost never do.

Comment Works only if you have all ingredients upfront (Score 1) 483

You can estimate time only if you know all the required components upfront. You should know all the related technology, probably because you have used it in prior projects. Once you try something that no one has managed to complete before, you can't estimate what can be done in a single sprint.

Let's say you have an NP-hard problem which is, e.g., a variant of set-cover or bin-packing. You write a prototype that maps the problem into SAT and invokes a state-of-the-art SAT solver (roughly as in the sketch below), and all is well. Next week you try to solve a set-cover instance; after it runs for a couple of hours, you terminate the run. After tweaking the set-cover-to-SAT converter for the rest of the week, you manage to get a trivial use case to pass. Now, how the hell can you estimate the number of weeks and experiments it will take you to get a realistic problem to run reasonably well? NP-hard problem solving sometimes behaves exponentially and sometimes linearly; you don't always know in advance, as it depends on the microstructure of the inputs.
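
To make the set-cover-to-SAT step concrete, here is a toy sketch in Python of the decision version ("is there a cover of size at most k?"), emitting DIMACS CNF. The instance is made up, and the naive binomial at-most-k encoding is only for illustration; real converters use far more compact encodings:

    from itertools import combinations

    # Made-up instance: variable i is True iff set i is chosen.
    sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a", "d"}}
    universe = {"a", "b", "c", "d"}
    k = 2  # decision version: is there a cover using at most k sets?

    clauses = []
    # Coverage: every element must be covered by at least one chosen set.
    for e in universe:
        clauses.append([i for i, s in sets.items() if e in s])
    # Cardinality: forbid choosing any k+1 sets simultaneously
    # (binomial encoding -- blows up quickly, fine only for tiny instances).
    for combo in combinations(sets, k + 1):
        clauses.append([-i for i in combo])

    # Emit DIMACS CNF, ready to pipe into any SAT solver.
    print(f"p cnf {len(sets)} {len(clauses)}")
    for c in clauses:
        print(" ".join(map(str, c)), 0)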

The process of writing software that solves real-world NP-hard problems looks completely stochastic. You can read dozens of papers and try hundreds of different algorithms, approximations, heuristics, technologies and new ideas before you find out how to solve the problem at hand, assuming you ever do. How can you estimate this time upfront?

On the bright side, NP-hard, decidability and other tough algorithmic problems are only a niche in the world of programming. Most of the time software development is "only" a matter of engineering, planning and experience -- where Scrum could well be the right answer.

Comment Re:It's Israel (Score 2, Interesting) 303

You are reversing the order of events.

1. Build border settlements

The towns near Gaza are not settlements; they are, and always were, within Israel's international borders. Ashdod is 43 km (about 27 miles) away; it is not a "border settlement". Most of Israel (including Tel-Aviv) is within 43 km of its borders.

2. Whine about rocket attacks

The US president would not act nicely towards Mexico if it launched rocket attacks on San Diego, either.

they are the mechanism by which Israel is stealing the entire area that was the Palestinian state.

There was never a Palestinian state. The U.N. decided to divide the British-controlled area between the Jews and the Arabs. When the British left in 1948, the Arab states conquered the parts that we now call Palestine. This land was an integral part of Jordan and Egypt up until the war of 1967.

Just look at a map from 1948 and a map from today. If you have time, check the map every decade between, you'll see Israel increasing steadily in area.

You are trolling; this is simply false, and it has even been reversed lately. Since the peace talks began, parts of the occupied territories were handed to the Palestinian Authority (1994-5), and some of the newer maps mark these areas correctly. Unfortunately, due to later unrest, Palestinian control was massively eroded (call it retaliation or a security necessity). Despite that, these lands are still marked as Palestinian on many maps.

Gaza and the West Bank are becoming more and more overpopulated as the Palestinian lands shrink, effectively making them concentration camps.

This is only a half-truth. The West Bank is shrinking due to the actions of Israel, and people there do suffer from it, but this is not so with Gaza (where the rockets come from). Gaza is within its 1948 borders, from when it was part of Egypt, and Gaza is the most overpopulated part of Palestine; Israel has nothing to do with that. Are you saying that Israeli actions deprive Gazans of land they could use in the West Bank? Wrong: Gaza does not border the West Bank. People could never move freely between these two places, not even under Arab rule. Geographically they are two different nations; they were linked together only due to political/strategic moves by all sides (Israeli, Palestinian, American, European, Egyptian and Jordanian).

The people of Gaza have only two possible expansion directions: towards Israel (beyond the 1948 borders) or towards Egypt. This is what many of them want, and it is one of the reasons why the peace talks stalled: Israel did not want to let a large percentage of Palestinians immigrate to Israel, and the Palestinians did not want to give this idea up.

Say what you want about Hamas. They were elected fairly, in elections overseen by Jimmy Carter. Whatever you, the UN or your government may think of them, they are the democratically elected party

So was Slobodan Milosevic; that did not give him the right to do what he did. Hamas does not promote peace; they promote violence, or at most a temporary ceasefire. They do not promote equality, but segregation by gender and religion instead. Anyone who wants peace should hope that Hamas gets out of the equation.

Comment Documentation should not be retrofitted (Score 1) 769

In OSS there is a tendency to code first (and, if you are good, design first), and a year later someone else will try to retrofit user documentation. This will never work right, and here is why:

In order to have even reasonably good documentation, the design and code must be easy to explain. There should be relatively few user-visible corner cases, and feature X should behave similarly to feature Y even when they are designed/coded by different people at different times.

What usually happens in OSS is that developer X has an itch and implements his stuff without thinking about developer Y doing a seemingly different feature (proprietary S/W is no better). They end up with documentation that has to cover two different features built around subtly different ideas. Very confusing.

It is quite possible that features X and Y are technologically independent, but that is not something the user should be aware of (most of the time). This means that it takes more work to make them look similar from the outside, so that they are easier to document.

Consider for example the concept of a "file system". Most of the time the user does not have to know if this is XFS, NTFS or EXT4. The documentation is relatively simple and covers 99% of cases. However, if every file-system had different system calls, documenting it would be hell.

If every application has different UI shortcuts and concepts, it is much harder to document. Why can't it resemble other applications? Because the coder did not consider the cost of explaining and documenting the thing, only the technology (certainly), functionality (probably) and ease of use (hopefully). Documentation was written only after the fact. At that point many concepts and ideas are set in stone, and changing them to ease use and documentation ranges from difficult to impossible.

I have approached this from the wrong direction, and then seen the pain of the users, too many times. I hope I have learned my lesson. It is simply impossible to document the beast in a reasonable way down the road.

Comment What's the point? (Score 1) 386

The amount of resources it reportedly takes makes this not so practical.

What would one want deduplication for? The cost of disk storage has two big elements: speed (latency & throughput) and backup.

It does not seem that this technology would help much in the speed department; it might actually hurt, since managing copy-on-write has several potential costs. It may help backup if the backup program knows the fine details of the deduplication, but that means that old backup software will have to be replaced.

It reminds me of the compressed file system I used to have on my old SLS Linux PC with a small disk (1992, if memory serves me right). It was dog slow running X11. I have not seen a compressed file system since; there was no need, as disk storage grows much faster than my need for data.

Comment Microsoft's excuse for not updating (Score 5, Informative) 211

After reading Windows Can but Won't, I am still unimpressed. This article tries to hide a substantial feature present in Linux but not in Windows. Call it a misfeature, a bug, an engineering decision or a precaution, but, as it seems, Microsoft's filesystems do not support file removal well. If a DLL is in use you can't remove it without dire consequences; you are left with modifying the original file in place.

On Linux, you can remove the DLL without destabilizing running applications. This is because the file is unlinked from the directory structure, appearing as if it were removed, while the old file contents are still accessible to running applications. An update mechanism can thus remove the DLL and put a new DLL in its place without affecting any running applications: they continue using the old DLL, posing no substantial stability risk.

The Linux way isn't perfect either, because running applications do not benefit from the update. Such an application will effectively use the old DLL until it is restarted, giving a false sense of security: if an affected service is not restarted, the computer is still at risk.
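
A minimal demonstration of this unlink behaviour in Python (POSIX only; the file name is made up):

    import os

    # An open file handle keeps the old contents alive after the file is
    # removed and replaced, just like a DLL held open by a running process.
    with open("libdemo.so", "w") as f:
        f.write("old library code")

    reader = open("libdemo.so")          # simulates a running application
    os.unlink("libdemo.so")              # "remove" the DLL; reader is unaffected
    with open("libdemo.so", "w") as f:   # drop the updated version in its place
        f.write("new library code")

    print(reader.read())                 # -> old library code
    print(open("libdemo.so").read())     # -> new library code
    reader.close()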

Comment GCC was forked off the FSF (Score 1) 306

The danger of forking is not reserved for commercial entities.
If the community is not happy with whoever controls the code, then it's fork, fork, fork.

If the company that controls the code plays this well, it may have a chance to merge the fork back.

Cygnus was fed up with the FSF's attitude regarding GCC development, and so were other developers;
in fact, most of the active community was fed up. So Cygnus forked GCC into EGCS, which started to thrive.

The FSF came to its senses and made an agreement with Cygnus and the other developers to merge EGCS back.
In fact, EGCS became GCC again; the merged line shipped as GCC 2.95.
The smartest thing that Cygnus and the other developers did was to assign all copyright to the FSF even during
the fork. This allowed the merge back.

If the major players play well, it is possible to merge any fork back.
Things should not be as bad as you say for a company that creates an open-source product. Things can be fixed if
the developing company is willing to avoid the arm-wrestling game.

The question is: how interested are Sun, Oracle and the developers in avoiding a fork?
