
Comment Re:Etherpad lite is pretty close (Score 1) 42

I was going to mention this one. I've been using Etherpad on the Sandstorm.io service for a few weeks and I haven't had any problems yet. I haven't used it extensively, though, and I've only been sharing documents between two accounts. But so far, so good.

Comment Re:Finally! (Score 1) 221

Good point in your criticism of my first comment. Old C++ libraries are baggage that newer languages don't have, but a clean-slate new C++ program can ignore old libraries just as easily as a new language can. I was being hypocritical with that argument and didn't notice. Thanks.

I would assume that for most development, unless you're hitting specific performance-related problems you need to debug, you do most of your compiling without the optimizer. Once the program is more or less feature-complete, you do a few optimized builds before release. But even so, I work in web development. Python and PHP may not be finished getting out of bed for a day of number crunching when C++ is already in the bar enjoying a beer, but in web work, when you're accustomed to a recompile-and-redeploy cycle of "Ctrl-S, Alt-Tab, F5," working with C++ feels like sticking thumbtacks into your face.

In terms of abstraction levels and such, that's a whole separate debate. I've come around to the Clojure community's philosophy of working with the simplest data structure that gets the job done - see, for example, the video https://www.youtube.com/watch?... Of course, I don't know the particulars of your work, so I'd be foolish to assert without specific evidence that you're doing something wrong by building your own data structures. Clojure supports interface-based polymorphism, and you can even use Java's inheritance-based polymorphism, but the accepted best practice is to work as much as possible with sets, vectors, lists, maps (aka associative arrays), and keywords (roughly, enums). By staying at that level, you get code that's much easier to inspect and reason about, and a staggering number of libraries will work on your data as-is, instead of your needing the Visitor pattern and the Strategy pattern and the Decorator pattern and all the other Design Patterns that neatly solve one problem at the cost of lots of extra work to interact with other code. So maybe Go with simple data structures and the right code would do your work well - minus any performance deficits, of course - but maybe not.
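To make that concrete - and this is just my own toy sketch, written in C++ rather than Clojure since C++ is what this thread is about, with made-up customer data - here's what "the simplest data structure that gets the job done" can look like: a plain map of vectors that standard algorithms already understand, with no bespoke class hierarchy or Visitor in sight.

#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // A "map of name -> vector of amounts" instead of a bespoke Order class.
    // (Hypothetical data, purely for illustration.)
    std::map<std::string, std::vector<double>> orders = {
        {"alice", {19.99, 5.00}},
        {"bob",   {42.50}},
    };

    // Generic code already understands this shape; no Visitor needed.
    for (const auto& [customer, amounts] : orders) {
        double total = 0.0;
        for (double a : amounts) total += a;
        std::cout << customer << " spent " << total << "\n";
    }

    // Asking a new question needs no new interface, just another algorithm.
    auto repeat_customers = std::count_if(
        orders.begin(), orders.end(),
        [](const auto& kv) { return kv.second.size() > 1; });
    std::cout << repeat_customers << " repeat customer(s)\n";
}

The same shape also prints, serializes, and diffs with off-the-shelf code, which is the whole point of staying at that level.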

I hadn't followed the evolution of the D language much. I stumbled across the language a few years ago when D 2.0 seemed relatively stable. I admit, I would have been furious if I had been an early adopter of D 1.0, because the transition between 1.0 and 2.0 was enormous.

Comment Re:Already propagating (Score 1) 663

I've read Protein Power and Dr. Atkins's books, among others, and what they write, and the research they cite to back it up, makes sense. But when I cut my carbohydrate intake to 50 grams per day or less and ate that way for three months, I only lost fat for the first month and a half or so, and even then barely a pound per week. I had a good intake of fat, protein, and vegetables at the time, too.

So while the connections between saturated fat intake and good cholesterol, and between protein intake and overall health, might be rock solid, I am not sold on the fat loss. I have an easier time cutting calories, at least for a little while, on a diet with a high proportion of fat and protein in the food. But I still have to cut calories to slim down, period.

Comment Re:Finally! (Score 1) 221

Go as a C++ replacement should be fine unless you need manual memory management. D as a C++ replacement should be awesome - D has garbage collection by default but makes manual memory management optional. I'm surprised the Mozilla team didn't go with D. I don't have anything against Rust, mind; it just seems to me that, in general terms, D already provided almost everything the Rust designers had in mind.

C++ drawbacks:
1. The language has a long history, which by its nature means there is tons of older C++ code around. So while you can write a new program with a small, safe, consistent subset of the language, you will often find yourself reading and calling older code that deals with corner cases and language features you don't understand. (I mean "you" in the general sense; for all I know serviscope_minor knows every revision of the C++ standard backwards, forwards, sideways, and inside out, along with all the compiler quirks and corner cases dating back to Stroustrup's first release.) Go, Rust, and D adoption is hindered by the fact that there isn't half a billion or more lines of code in those languages in use around the world, but with all three you won't get bitten by misunderstanding older versions of the code or by compiler quirks. They were designed with the benefit of hindsight, using C++ as a starting point.
2. C++ has a preprocessor and header files. In the time it takes to compile a three-million-line C++ program, you can compile a three-million-line Go or D program and build a two-car garage. I think Rust is in the middle, but closer to Go and D than to C++ for compile times on large projects. The preprocessor and header files are fundamental to the language; you can't get rid of them without breaking most older C++ projects, so this is a headache that will never go away. (There's a small sketch of the header problem just below.) Slow compile times are the fundamental reason most websites are not written in C++, despite the fact that C++ demolishes its competitors on performance. I expect to see more websites written in Go, D, and Rust as time goes on, because they're getting close to C++ in raw performance, but their edit-compile-run-test loop for any website much bigger than "Hello World" is much closer to PHP's than to C++'s.
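Here's that sketch - my own illustration, not anything from the original thread, with hypothetical data. In a real project the struct below would live in report.h and be #included from many .cpp files; the preprocessor pastes <map>, <string>, <vector>, and everything they in turn include into every one of those translation units, so each of N source files re-parses tens of thousands of lines, and touching the header forces all N to recompile. Go and D avoid this by importing compiled package/module interfaces rather than raw text.

// In a real project this struct would live in report.h; everything below
// the includes is roughly what each .cpp file re-parses after the
// preprocessor has pasted the headers in.
#include <map>
#include <string>
#include <vector>

struct Report {
    std::string title;
    std::map<std::string, std::vector<double>> series;  // name -> samples
};

int main() {
    // Hypothetical data, just so the sketch compiles and runs.
    Report r{"load average", {{"web-01", {0.3, 0.7, 1.2}}}};
    return static_cast<int>(r.series.size());
}

Forward declarations, precompiled headers, and the pimpl idiom mitigate the cost, but they're workarounds layered on top of the textual-inclusion model rather than a fix for it.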

Comment Re:note 4 (Score 1) 208

Planned obsolescence is more profitable for vendors. But really, the only reason to upgrade from Samsung Galaxy S3-era Android phones has been the camera. I think 2 GB of RAM, or Android 4.x, or both were the tipping point at which Android became stable and simple to use. Every enhancement past that has been window dressing.

But of course, you need CyanogenMod or something similar to keep getting security updates on older phones.

I think the really cost-effective thing to do would be to get a Samsung Galaxy S3 or something similar plus a separate digital camera, and just get into the habit of carrying both around. But I forget my phone often enough as it is.

Comment Re:Google Search (Score 1) 208

My HTC One Max has a 1920x1080 (not quad HD) screen and a 3300 mAh battery; I can get two days between charges with moderate usage. My daughter can watch Netflix on it for about six or eight hours before it needs a charge. (I know it's no good for her to watch that much, but sometimes I'm just too damn tired to make her do something else.) The phone is good in all respects except the camera, which just can't match mid-range or better Android cameras.

Comment Re:Already propagating (Score 1) 663

I think saying "calories in minus calories out," while technically true, is not useful. It's like saying, "The secret to limitless wealth is to earn a lot more than you spend," or "The secret to being immortal is avoiding death!"

While 1,500 calories of cotton candy has the same energy as 1,500 calories of tuna, eating the cotton candy will probably wreak havoc with your insulin and blood glucose levels over the short term and leave you feeling hungry again in short order. The 1,500 calories of tuna will be much harder to eat - you'll get tired of the taste quickly - but will leave you feeling satiated for much longer. Human beings are not machines; the sense of satiation matters.

With respect to "low fat" dieting, the discussion is still heated, but I think the evidence has accumulated that it was a failure. You can even ask whether it merely accompanied the American obesity epidemic or actively contributed to it. Wheaties in skim milk is a standard part of a low-fat diet, but the same number of calories in the form of eggs and bacon will keep your hunger at bay for many hours longer.

Comment Re:Already propagating (Score 3, Interesting) 663

I switched to diet soda about 12 years ago, and over the course of a year I effortlessly lost about 15 pounds with no conscious change to my eating habits (aside from the switch to diet). Then over the next two years, still with no conscious change to my eating habits, I gained all of the lost fat back. So who knows.

Comment Re:Already propagating (Score 2) 663

Studies have shown that people who drink zero-calorie drinks, or switch to them, are on average not thinner than others. What studies have not yet demonstrated is that most overweight and obese people who drink zero-calorie drinks make up the difference by eating more elsewhere.

I am not ruling out the possibility, of course; it's still likely. But remember that many health recommendations that seemed obvious and intuitive 20, 30, and 50 years ago are now viewed as incorrect. We have a food pyramid instead of four equal food groups. We no longer recommend smoking to suppress appetite for fat loss. Dietary cholesterol intake has been shown not to have a direct link to blood levels of bad cholesterol (LDL). And so on.

Not that I'm suggesting we take anything that comes out of a research group funded by an industry at face value.

Comment Re:Why worry? (Score 1) 133

My phone is new enough to get security updates and I still don't do any online banking, social media, Amazon shopping, etc... on it for the same reason. I make phone calls, send and receive texts, turn on location services when I need navigation, play casual games, and browse news websites.

Comment Re:Um.. we don't see it as advancing our career (Score 1) 125

This is a problem across all industries, and it's not as bad in software as elsewhere. I've been writing software for fourteen years, and I was only asked to work long hours once, for one week. My employers won't insist on it because I'll leave; I'm not easy to replace, and even if they find someone just as skilled, it takes a few months for a new hire to become productive. I'm sure forced overtime does happen, of course. But if my boss asked me to work a 50-hour week, I would quit today - and probably be back to work within a month.

In general, I don't see any solution other than the socialist one: unionize and demand fairer treatment. Just don't let the union morph from something "for the workers, by the workers" into a monster as focused on its own goals, and as indifferent to the treatment of its members, as corporate management - which has been most of my family's experience with American labor unions.

Comment Re:Um.. we don't see it as advancing our career (Score 3, Insightful) 125

The problem with being a software developer at 45 or 50 is that when you learn Node.js or CoreOS or whatever the new hotness is and a 28 year old learns the same technology, a lot of HR managers will think they can get 10 extra hours of quality work per week out of the younger man (or younger woman) for 20% lower compensation. That belief is often wrong because 40 hours of quality work from you trumps 60 hours of quality work from most people 20 years younger, but it's a common belief nonetheless.

Conversely, there are three good things you can do as a manager who spent fifteen or twenty years as a software developer:
1. You can manage from experience, with a real understanding of the work your employees are doing. My best managers have all been former developers - or in some lucky cases, people that get to do half management, half development.
2. You can make informed decisions about what technology stacks to use or to avoid and what priorities matter. At my last job I turned down a management role repeatedly, and I was pleased with my choice until the person who took the management role drove me out of the job with poor decisions.
3. You can understand that a development manager's most important job is running interference between skilled employees and the rest of the company. Yes, it's less fun than developing. But you'll gain respect, trust, and productivity from your team if you point them to the target and then spend your own time leaving them alone, keeping them out of wasteful meetings, and trying to remove any obstacles that would slow them down. My current manager does that, and she's awesome.

"Just think, with VLSI we can have 100 ENIACS on a chip!" -- Alan Perlis

Working...