
Comment Surely not? (Score 1) 386

This strikes me as being not so much about studying "free will" as about our inability to operate on extremely small timescales.

Second of all, I would be very reticent to accept as a general principle the idea that there is no free will.

  • If free will is an illusion, then so be it (and I will have been fated to deny such a reality anyway, so stop bugging me)
  • But if free will is in fact reality, we have two main choices
    1. We can choose to assert that free will exists, and continue exercising it
    2. We can choose to assert that free will is indeed an illusion, and give up any sense of responsibility -- a corollary to this is absolving an agent of responsibility for any behaviour or action as being "destiny," and using this to explain away, and sometimes justify, all sorts of unpalatable or unethical behaviour.

If free will exists but we assert it does not, we surrender it of our own volition: if there is hope, but we do not keep hope, then from our own decision, there is no longer hope.

Comment What about SSH+vim...? (Score 1) 168

No holy wars please. s/vim/$YOUREDITOR/g

If you're mainly using a LAMP stack (which it seems you are), SSHing in and using a command-line text editor could be of interest.

That being said, if you can create a dev environment that matches your production, why not git around? Where has this workflow failed you previously? Surely also you are developing on multiple servers for different projects, and there are things you do common to all servers?

For my sites I generally use two git repos - one for my toolchain (custom connection, management and setup scripts), one for base content. Given that I use WordPress for a lot, I can duplicate some of the base work. I use DigitalOcean, always an Ubuntu 14.04 server, and always set up with the same scripts. I do have a test VM directly on my machine to test things that might seriously break stuff, or for dev when I'm on the go without network, but otherwise a lot of what I do is "in the cloud" - just over SSH.

As to how this would map to your requirements:

1) No syncing hassles across machines
If you're a team of one, SSH plus a CLI editor should work - it also demands less processing power from your device, so better battery longevity. You can also quickly move back and forth between server logs and code from there too.
2) No installation of toolchains to get working or back to work — a browser and a connection is all that would be required
Unless I'm mistaken, you'll always have toolchains to install and update on every new site/server. So long as you maintain an easy way to deploy them fresh, and the new instances you spin up are always predictably similar, I see no problem with maintaining a toolchain deployment recipe in one place, and easy management of instances along the way in the way you want (Ansible, Chef, Puppet, custom scripts...).
3) Easy teamwork
If the solution you want is for live collaborative editing, then a web interface which facilitates this may indeed be interesting, though you'll want to check that this mode of "team" working suits your team best, or whether a revision-control-based solution works better for their workflows. Maybe a question for the web-based solution is: does it integrate well with individuals who like to work offline and sync as needed? I'll assume you've already had this talk with them.
4) Easy deployment
I'd say this is down to DRY - Don't Repeat Yourself, or, have you automated everything you could have? :-)
5) A move to Chrome OS for ultra-cheap laptop goodness would become realistic.
That's a personal preference I guess. I wouldn't try it just yet. I try to run a lean machine, but I like to know I Have The Power when I need it.
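On point 2, a toolchain deployment recipe can be as simple as a versioned shell script. Here is a minimal sketch (the package list, repo URL and paths are hypothetical examples, not a real recipe), with a DRY_RUN switch so the plan can be reviewed like any other code before it runs:

```shell
#!/usr/bin/env bash
# Minimal sketch of a one-place server setup recipe.
# Package list, repo URL and paths are hypothetical.
set -euo pipefail

PACKAGES="apache2 mysql-server php git"

run() {
    # DRY_RUN=1 prints each step instead of executing it.
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

provision() {
    run sudo apt-get update -qq
    run sudo apt-get install -y $PACKAGES
    run git clone https://example.com/me/toolchain.git /opt/toolchain
}
```

Kept in its own git repo (like the toolchain repo described above), the same script gives every fresh droplet the same treatment.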

At the end of the day, so long as you have revision control in whatever solution you choose, you have a decent chance of developing on production. Now about the databases...
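One concrete way to get easy deployment with nothing but git on the server is a bare repository with a post-receive hook - after the one-time setup below, "deploy" is just a git push. The paths here are hypothetical, a sketch rather than a drop-in:

```shell
# Sketch: deploy-on-push via a bare repo and a post-receive hook.
# Pushing to the bare repo checks the code out into the web root.
setup_deploy_repo() {
    local bare="$1" webroot="$2"
    git init --bare "$bare"
    mkdir -p "$webroot"
    # Hook runs on the server after every push; it checks out
    # the pushed master branch into the web root.
    cat > "$bare/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE="$webroot" git --git-dir="$bare" checkout -f master
EOF
    chmod +x "$bare/hooks/post-receive"
}
```

From the dev side you would then add the server as a remote (`git remote add production ssh://myserver/path/site.git`) and push to deploy.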

Comment Who cooked up such a misleading summary?? (Score 5, Informative) 281

1/ The Mythical Man Month was "referenced" (and mangled) in an unrelated article by Eddy Baldry. It is he who misstates the premise of the MMM.

2/ Mythical Man Month's statement is "Adding more manpower to an already late project delays it further." This is different from the premise stated by Baldry.

3/ Anders Wallgren mentions nothing of the Mythical Man Month

4/ Wallgren actually does say that microservices, which underpin some of the arguments for DevOps, are not suited for all projects.

5/ The summary is a lie.

Submission + - Is curl|bash insecure? Sandstorm thinks not

taikedz writes: I can see several flaws in these arguments, so much so that where I previously dismissed the curl|bash offer as non-indicative of Sandstorm's security otherwise, I am now not so sure.

What do you think? From the article:

Sandstorm is a security product, so we want to address that head-on.

When you install software on Linux, no matter what package manager you use, you are giving that software permission to act as you. Most package managers will even execute scripts from the package at install time – as root. So in reality, although curl|bash looks scary, it’s really just laying bare the reality that applies to every popular package manager out there: anything you install can pwn you.

Realistically, downloading and installing software while relying on HTTPS for integrity is a widely-used practice. The web sites for Firefox, Rust, Google Chrome, and many others offer an HTTPS download as the primary installation mechanism.
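For comparison, a middle ground between blind curl|bash and a full package manager is to download first, check a published checksum, and only then run. This is a generic sketch (not Sandstorm's actual installer); the URL is a placeholder and the expected hash would come from a separately obtained, trusted source:

```shell
# Download-then-verify instead of piping straight to bash.
verify_and_run() {
    local script="$1" expected="$2"
    # sha256sum -c expects "HASH  FILENAME" (two spaces).
    echo "$expected  $script" | sha256sum -c - >/dev/null 2>&1 || {
        echo "checksum mismatch, refusing to run" >&2
        return 1
    }
    bash "$script"
}

# Typical use (placeholder URL and hash):
#   curl -fsSL https://example.com/install.sh -o install.sh
#   verify_and_run install.sh <published-sha256>
```

Of course this only moves the trust question to wherever the hash is published, which is exactly the point the article is making about HTTPS.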

Comment Use an alternative? (Score 1) 492

Have you considered using something other than BitLocker?

And I'm gonna say it - why not use disk-encrypted Linux and put Windows in a VM for those one or two programs that are Windows-only? This way you have full control of your system, the whole disk is encrypted, and you can stick to Windows 7...

Comment Linux for the .... middle? (Score 1) 189

So two thoughts come to me after reading the summary and the comments

1/ This clearly demonstrates that Linux is the right technology to use when supporting business systems with legacy hardware (I'll second the opinions that enterprise keeps hardware around the longest - I supported users on obsolete AS/400s which, whilst not as old as what some people here talk about, still mean we're frequently learning old technology in Support). The point that the leadership (Linus right now, hopefully the same with whomever comes to replace him eventually!) can be more amenable to keeping up support for old hardware is great. I dream of desktop uptake, but enterprise and research are where it's at.

2/ However I also wonder - isn't this an offshoot problem of the fact that Linux is a monolithic kernel? Can this kind of interface-specific support not be modularized? Say, an in-kernel API/ABI standard that allows the kernel to plough on with currently evolving requirements, whilst maintaining a stable interface for previously integrated kernel features that have been split off into modules...?

(and no, I'm not at all familiar with the ins and outs of kernel development and architecture - I just read newsposts and Wikipedia ...)

Comment It makes sense but.... (Score 1) 54

So the major players want to bring some order to the bazaar. So be it - they can try. There are small projects that will probably decide to cooperate, and will because they are a one- or two- person effort - but the projects that truly behave like a bazaar will remain as coordinated or uncoordinated as they still are.

I don't see this effort being capable of shoving an agenda down anybody's throats - if you don't care for the agenda, don't. Submit your code to the project as and when you see fit, and work on the bits you want to. If tomorrow they want to address what they see as glaring issues in GNU's netcat, they'll be able to throw resources at it collectively - but I doubt they'll be able to tap GNU's shoulder and say "hey, give us some of 'your' devs to fix this."

In the end, if the effort results in a pooled selection of developers, incentivized directly and collectively (read: employed) by the companies, to work on aspects of open source projects they have communal stake in, to common goals and specification, that is probably going to be a good thing.

If they fork any of the technologies that is fine too - that's exactly what GNU GPLv3 was meant to allow them to do. They just can't expect to fork the maintainers and community too.

If however there is a scenario in which volunteers can be coerced into their way or the highway, that scenario must be understood and countermeasures prepared by those who would stand to lose from it. Don't take it too seriously, but don't take it in any way lightly either.

Submission + - Haiku debates kernel switch to Linux... or not.

taikedz writes: A very interesting discussion is taking place in the Haiku mailing list. A developer has created a working prototype implementation of the BeOS API layer on top of the Linux kernel, and he is wondering if the project is worth pursuing.

Both 'sides' make a lot of compelling arguments, and it gives a lot of insight into decisions that went into the Haiku project, both past and present.

Comment Insufficient (Score 1) 79

The "highly restricted" spec is meant to catch suspicious combos like in the mybank example - but it does not catch full-ASCII trickery (which is an even more restrictive level) like (notice the two "v" chars). That combo in particular is now known, but it goes to demonstrate that trickery does not need charsets larger than 7-bit... some people simply get caught by it.
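The pure-ASCII trick can even be checked for mechanically. A tiny sketch (the digraph list here is illustrative, by no means exhaustive):

```shell
# Flag pure-ASCII look-alike digraphs in a hostname: every
# character is plain 7-bit ASCII, so no charset restriction
# will ever catch these.
flag_lookalikes() {
    local host="$1" pair digraph
    for pair in vv:w rn:m cl:d; do
        digraph=${pair%%:*}
        case "$host" in
            *"$digraph"*) echo "$digraph mimics ${pair##*:}" ;;
        esac
    done
}
```

Running it on a hostname containing "vv" reports the "w" mimicry, while the legitimate "www" spelling passes clean.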
