
Comment Re:Definition of Calorie ABSORBED FROM FOOD is Bro (Score 1) 425

Humans don't digest anything (except complex carbohydrates, via saliva in the mouth). Gut bacteria digest food. What is available for the host human to absorb after the bacteria are done changes significantly -- not by some little correction factor, but by up to an order of magnitude -- depending on a number of factors such as food particle size, prevalence of cell walls and connective tissue, the exact ratio and distribution of gut bacteria species, and so forth, for a given "energy content" of food. (A human will typically absorb as much chemical energy from a 4-oz. medium-well hamburger patty as from a 16-oz. rare steak, and as much from a 2-oz. piece of cake as from a 6-oz. hunk of black bread.)

What the human body then does with that chemical energy depends on a number of genetic, environmental, and experiential factors. Having lost a significant amount of weight lowers energy demand, permanently, by up to 30%. Food availability to the mother during gestation affects the metabolic efficiency of the offspring. Hormones and hormone analogs in _microgram_ quantities affect the efficiency and completeness of energy absorption by the gut and whether abdominal fat stores the glucose. (Subcutaneous fat responds to glucose levels, not hormone levels.) Oddly, there is a strong correlation between maternal soy consumption during pregnancy and non-obese offspring: but then soy is an estrogen mimic. Most plastics also shed endocrine mimics.

The "fuel" model of food is overly simplistic. The conflation of extreme overweight and obesity is overly simplistic (yes, obese people can diet and exercise to normal weight -- 5% of the time; the other 95%, other mechanisms keep the fat from turning into energy). The worldwide obesity crisis cannot be solved by diet, exercise, and willpower, because it is not caused by overeating, lack of exercise, and self-indulgence. _Overweight_ can be so addressed; obesity cannot.

Comment Palm m105 (Score 1) 508

A working Palm m105 can be had for $25-35. "Typed" student papers (Graffiti'd in) can be transferred to your computer by IrDA or a serial cradle. If you're willing to reformat electronic readings to ePub format, readings can be transferred to students the same way. "Notes" of up to 4kB hold about a page and a half of single-spaced text; the screen is small but readable, and free applications raise the editor limit to 32kB (about 12 pages single-spaced). A pair of batteries lasts 1-2 weeks; the IrDA is the big current suck, so use a serial cradle for everything. The "supercaps" in the m100 series don't hold a charge while switching batteries, so the device resets. "Hotsync" to a PC before & after a battery swap makes that irrelevant. For those who must keyboard, an attachable full-size keyboard that folds up to pocket size is another $35.

Submission + - Ask Slashdot: So now that .NET's going open source...? 1

Rob Y. writes: The discussion on Slashdot about Microsoft's move to open source .NET core has centered on

1. whether this means Microsoft is no longer the enemy of the open source movement;
2. if not, whether it means Microsoft has so thoroughly lost the web server arena that it's resorting to desperate moves; or
3. nah — it's standard MS operating procedure: embrace, extend, extinguish.

What I'd like to ask is whether anybody who's not currently a .NET fan actually wants to use it, open source or not. What is the competition? Java? PHP? Ruby? Node.js? All of the above? Anything but Microsoft? Because as an OSS advocate, I see only one serious reason to even consider using it — standardization. Any of those competing platforms could be as good or better, but the problem is: how do you get a job in this industry when there are so many massively complex platforms out there? I'm still coding in C, and at 62, will probably live out my working days doing that, but I can still remember when learning a new programming language was no big deal. Even C required learning a fairly large library to make it useful, but that's nothing compared to what's out there today. And worse, jobs (and technologies) don't last like they used to. Odds are, in a few years, you'll be starting over in yet another job where they use something else.

Employers love standardization. Choosing a standard means you can't be blamed for your choice. Choosing a standard means you can recruit young, cheap developers and actually get some output from them before they move on. Or you can outsource with some hope of success (because that's what outsourcing firms do — recruit young, cheap devs and rotate them around).

To me, those are red flags — not pluses at all. But they're undeniable pluses to greedy employers. Of course, there's much more to being an effective developer than knowing the platform so you can be easily slotted in to a project. But try telling that to the private equity guys running too much of the show these days...

So, assuming MS is 'sincere' about this open source move (big assumption),

1. Is .NET up to the job?
2. Is there an Open Source choice today that's popular enough to be considered the standard that employers would like?
3. If the answer to 1 is yes and 2 is no, make the argument for avoiding .NET.

Submission + - What Happens to Society When Robots Replace Workers? (hbr.org)

Paul Fernhout writes: An article in the Harvard Business Review by William H. Davidow and Michael S. Malone suggests: "The 'Second Economy' (the term used by economist Brian Arthur to describe the portion of the economy where computers transact business only with other computers) is upon us. It is, quite simply, the virtual economy, and one of its main byproducts is the replacement of workers with intelligent machines powered by sophisticated code. ... This is why we will soon be looking at hordes of citizens of zero economic value. Figuring out how to deal with the impacts of this development will be the greatest challenge facing free market economies in this century. ... Ultimately, we need a new, individualized, cultural, approach to the meaning of work and the purpose of life. Otherwise, people will find a solution — human beings always do — but it may not be the one for which we began this technological revolution."

This follows the recent Slashdot discussion of "Economists Say Newest AI Technology Destroys More Jobs Than It Creates" citing a NY Times article, and other previous discussions like Humans Need Not Apply. What is most interesting to me about this HBR article is not the article itself so much as the fact that concerns about the economic implications of robotics, AI, and automation are now making it into the Harvard Business Review. These issues have otherwise been discussed by alternative economists for decades, such as in the Triple Revolution Memorandum from 1964 — even as those projections have been slow to play out, with automation's initial effect being more to hold down wages and concentrate wealth than to displace most workers. However, we may be reaching the point where these effects are hard to deny, despite going against mainstream theory, which assumes infinite demand and broad distribution of purchasing power via wages.

As to possible solutions, the HBR article mentions using government planning to create public works like infrastructure investments to help address the issue. There is no mention in the article of expanding the "basic income" of Social Security currently only received by older people in the USA, expanding the gift economy as represented by GNU/Linux, or improving local subsistence production using, say, 3D printing and gardening robots like Dewey of "Silent Running". So, it seems like the mainstream economics profession is starting to accept the emerging reality of this increasingly urgent issue, but is still struggling to think outside an exchange-oriented box for socioeconomic solutions. A few years ago, I collected dozens of possible good and bad solutions related to this issue. Like Davidow and Malone, I'd agree that the particular mix we end up with will be a reflection of our culture. Personally, I feel that if we are heading for a technological "singularity" of some sort, we would be better off improving various aspects of our society first, since our trajectory coming out of any singularity may have a lot to do with our trajectory going into it.

Comment Agile is not a golden bullet (Score 5, Interesting) 597

The major problem with Agile is that it is the new software development buzzword, and thus is perceived as a golden bullet for software development. Agile has a specific application: development of experimental software, where the project sponsors know they need something in a particular area but do not know exactly what. Agile (and iterative development in general) lets the target change over time as knowledge is gained. Unfortunately, iterative development is expensive, probably twice as expensive as waterfall for the same result: "refactoring" is another word for "rework," and there is a great deal of it in iterative development. Agile in practice is typically waterfall without a project plan: the project sponsor knows what is desired, and when, and is trying to get it for cheap. Iterative development fixes the schedule ("timeboxing") and the cost (level of effort); what is unknown is the scope — how much functionality you can fit into a sprint. Starving Agile has the same result as starving typical development: you only get the 1/3 of the software that is apparent, not the 2/3 that makes that 1/3 truly functional, reliable, and maintainable.

Comment Go back to school — a little, anyway (Score 1) 565

The change from imperative/procedural languages to object oriented languages can probably be made through reading and experimenting, but an instructor-led course would be far easier. If you can take one or two semesters of some object oriented language at a local college — Java and C# are the two most approachable, and conveniently have free IDEs and toolchains — that would put you back in the game enough to learn through reading and through playing around. Once you have your mind wrapped around OO concepts, a GUI framework (Swing, WinForms, GTK+, etc.) is easy enough to pick up. Similarly, learning how to exploit relational DBs deserves a course: I've never seen anyone self-teach more than about half of a relational DB's capabilities. I'd put off C++ to start: the language is fearlessly exploring what happens when OO concepts are applied orthogonally across a language, but Java and C# fit nicely into the 90% solution space that people actually use.
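For a procedural programmer sizing up the jump, the core OO ideas — encapsulation, inheritance, and polymorphism — fit in a few lines of Java. This is a minimal sketch; the class names here are purely illustrative, not from any particular course or framework:

```java
// Encapsulation: state is private, behavior is exposed through methods.
// Inheritance + polymorphism: callers work with the abstract type and
// each subclass supplies its own implementation.
abstract class Shape {
    abstract double area();                 // every shape must define this
}

class Circle extends Shape {
    private final double radius;            // hidden state
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override double area() { return width * height; }
}

public class Main {
    public static void main(String[] args) {
        // The loop never asks which concrete shape it has — the right
        // area() is dispatched at runtime. That dispatch, not syntax,
        // is the mental shift from procedural code.
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        double total = 0;
        for (Shape s : shapes) total += s.area();
        System.out.println(total);
    }
}
```

The procedural equivalent would be a switch on a type tag inside one big area function; in OO that branch disappears into the type system, which is most of what the first semester is teaching.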
