


Comment Re:Which is why sometimes small engines ... (Score 1) 238

Whereas with a bigger engine this is less of the case and you can get equivalent mpg

ah, i wrote a diesel truck simulator in 1993 for Pi Technology: there is actually much more to it than that. with a bigger, higher-torque engine it is possible to keep the vehicle operating more often in its peak-torque range, where it has better acceleration, better fuel economy, or both.

with a smaller engine the effect you mention - that people put their foot to the floor - means that the engine has to rev its nuts off and thus operates waaay outside of its efficiency band.
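the effect is easy to see with a couple of illustrative numbers. a minimal sketch (nothing to do with the actual 1993 simulator - the BSFC figures below are made up but representative): fuel flow is power demanded times brake-specific fuel consumption, and BSFC is at its minimum near the peak-torque band.

```python
# illustrative sketch: same cruise power, two engines at different
# operating points on their BSFC (brake-specific fuel consumption)
# curve. all figures are hypothetical but representative.

def fuel_grams_per_hour(power_kw, bsfc_g_per_kwh):
    """fuel mass flow = power demanded * specific consumption."""
    return power_kw * bsfc_g_per_kwh

CRUISE_KW = 30.0  # steady-state power needed to hold cruising speed

# small engine: must rev well past its peak-torque band to deliver
# 30 kW, where BSFC is poor (hypothetical figure).
small_engine = fuel_grams_per_hour(CRUISE_KW, bsfc_g_per_kwh=330.0)

# big engine: delivers the same 30 kW at low rpm near peak torque,
# where BSFC is near its minimum (hypothetical figure).
big_engine = fuel_grams_per_hour(CRUISE_KW, bsfc_g_per_kwh=240.0)

# small_engine -> 9900 g/h, big_engine -> 7200 g/h: roughly 27% less
# fuel for the same cruise, purely from the operating point.
```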

Comment watch the program on 5th gear (Score 4, Interesting) 238

before making *any* judgement you *need* to watch the program on 5th gear which covers exactly this question in some detail. basically the test was originally designed for people driving sensibly, and it was designed, i think, well over 20 - possibly even 30 - years ago. so it has a very *very* gentle acceleration and deceleration curve: gentle acceleration because that is not only fuel-efficient but also because the cars of that time simply could not accelerate very hard, and gentle braking because, again, that is more fuel-efficient, but also because drum brakes would overheat.

people no longer drive sensibly: they are more aggressive with other drivers (not keeping a safe distance), they put their foot down hard on the accelerator and they put their foot down hard on the brake. also, as cars have become more reliable, people tend not to maintain them properly: until i watched another program on 5th gear about how badly old oil affects fuel economy and the lifetime of the engine, i had absolutely no intention of changing the oil regularly in the decade-old cars i buy.

so, in effect, people should stop complaining and start driving in more fuel-efficient ways... *regardless* of how aggressive the person behind them gets when they set off from the lights at the same acceleration rate as a 40 tonne cargo lorry. that's the other person's problem.

Comment love descent (Score 1) 251

i love descent, and i love that it's now software libre. i hope the guy who maintains d2x has stopped the daft practice of including patched versions of standard libraries such as libsdl without providing an option to replace them, forcing the patched versions to overwrite pre-installed software - but yes, awesome.

the thing about descent was that it was the first game with 6 degrees of freedom. i actually bought a special joystick that was capable of dealing with it (one designed for flight simulators) and after 2 to 3 weeks of practicing i was competent at side-motion circular slides firing at a target at the centre. the first 2 weeks were spent mostly getting motion sickness and having the nose of the craft bashed against a corner :)

it was also fun to watch spectators swaying from watching the screen! but, again, after a couple of weeks you got used to it, both as a player and as a spectator.

yeah - to those people who set up LAN parties: i hear ya :) i did the same. i think the lowest spec i got away with was a 486 SX 25 with 12mb of RAM, with the screen set to 320x240, and it was just about tolerable. i had to use 10base2 coax with terminators for goodness sake - what the heck i was doing with 5 networked computers in my house back in 1996 with just a 28.8k modem i _really_ don't know!

so yes, absolutely: descent (the software libre version *or* a commercial version) gets my vote... *as long as* it has a community portal similar to that of Dark Reign, with a chat room so that people can meet other players, set up a match and play. that is bizarrely what's missing from bzflag: although bzflag has in-game chat it doesn't have an out-of-game community chat - very odd.

also, it would be awesome to see planetary-surface action as well, not just in mines (no matter how large). i always felt a little claustrophobic and the attack vectors would be very different in free space... interesting to think about the possibilities here, hmmm :)

Comment Re:depinit (Score 1) 533


"i have never even seen a PAM module which does this trick. it would be awesome to do the same trick for ssh as well."
you mean like pam_ssh for ssh keys or if you just want it to work with gpg and ssh you could also run the gnome key manager as I do.
True single sign on with all ssh and gpg keys.

no not pam_ssh. not "ask for a 2nd passphrase at a 2nd prompt which is entered into the ssh system to unlock the ssh key" - have ABSOLUTELY NO login credentials AT ALL, and LITERALLY use the success/fail of the ssh passphrase (or gpg passphrase) unlocking *AS* the login. no /etc/shadow, no password field in /etc/passwd - nothing BUT unlock the gpg or ssh key.

Comment depinit (Score 4, Informative) 533

depinit. written by richard lightman because he too did not trust the overcomplexity of sysv initscripts and wanted parallelism, it was adopted by linux from scratch and seriously considered for adoption in gentoo at the time. richard is extremely reclusive and his web site is now offline: you can however still get a copy of depinit via archive.org.

using depinit in 2006 i had a boot to X11 on a 1ghz pentium in 17 seconds, and a shutdown time of under three. depinit has two types of services: one is the "legacy" service (supporting old-style /etc/init.d/backgrounddaemon scripts) and the other relies on stdin and stdout redirection. in depinit you can not only chain services together by their dependencies but also chain their *stdin and stdout* _and_ stderr together.

that has some very interesting implications. for example: rather than have some stupid system which monitors /var/log/apache2/logfile for security alerts or /var/log/auth.log for sshd attacks, what you do is run sshd or apache2 as a *foreground* service outputting log messages to stderr, chained to a "security analysis" service which then chains to a log file service.

the "security analysis" service could then *immediately* check the output looking for unauthorised logins and *immediately* ban repeat offenders by blocking their IP address, rather than having to either poll the files (with associated delays and/or CPU utilisation) or set up some insanely complex monitoring of inodes which _still_ has associated delays.
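as a concrete sketch of what such a chained analysis service might look like (my illustration, not richard's code - the log format and the ban threshold are assumptions):

```python
# sketch of a "security analysis" stage sitting between a foreground
# sshd (stderr) and the log-file service: lines pass straight through,
# but repeat offenders get banned the instant they cross the threshold.
import re
from collections import Counter

FAIL_RE = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 3  # assumed policy: ban on the third failure

def analyse(lines, ban):
    """yield every line unchanged (the downstream log service still
    sees everything) and call ban(ip) on repeat offenders."""
    failures = Counter()
    banned = set()
    for line in lines:
        m = FAIL_RE.search(line)
        if m:
            ip = m.group(1)
            failures[ip] += 1
            if failures[ip] >= THRESHOLD and ip not in banned:
                banned.add(ip)
                ban(ip)  # in real life: insert a firewall rule here
        yield line

# demo with canned input; under depinit this would read sshd's stderr
# on stdin and write to stdout, chained onward to the log service.
sample = ["Failed password for root from 10.0.0.5 port 22\n"] * 3
bans = []
forwarded = list(analyse(sample, ban=bans.append))
# bans is now ["10.0.0.5"]; forwarded is identical to sample
```

the point being: no polling, no inode monitoring - the analysis happens on the line the instant the daemon emits it.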

also depinit catches *all* signals - not just a few - and allows services to be activated based on those signals. richard also had a break-in on one system where the attackers deployed the usual fork-and-continue trick, so he wrote some code which allowed the service-stopping code to increase the aggressiveness with which it hunted down and killed child processes. this also turned out to be very useful in cases where services went a bit awry.

basically the list of innovations that richard added to depinit is very very long, in what is actually an extremely small amount of code. i simply haven't the space to list them all, and no, richard was not a fan of network-manager either.

btw you might also want to look at the replacement for /bin/login that richard wrote. it was f****g awesome. basically what he did was use gpg key passphrases as the login credentials.... and ran gpg-agent automatically as part of the *login*. i have never even seen a PAM module which does this trick. it would be awesome to do the same trick for ssh as well.
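to make the idea concrete, here's a sketch of the *structure* of such a login (richard's actual code is offline along with his site; try_unlock_gpg_key below is a placeholder for what would really be a call out to gpg/gpg-agent):

```python
# sketch of a /bin/login replacement with NO password database: the
# single credential check is "did this passphrase unlock the gpg key?"
# try_unlock_gpg_key is a stand-in; a real version would ask gpg to
# decrypt a test blob with the supplied passphrase.

def try_unlock_gpg_key(user, passphrase):
    """placeholder for the real unlock attempt via gpg/gpg-agent."""
    demo = {"richard": "correct horse battery staple"}  # illustrative only
    return demo.get(user) == passphrase

def login(user, passphrase):
    # no /etc/shadow lookup, no password field: the key unlock IS the
    # auth, and on success gpg-agent is already primed with the
    # unlocked key - single sign-on for free.
    if try_unlock_gpg_key(user, passphrase):
        return f"exec shell for {user}"
    return None  # unlock failed: no session
```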

it's fascinating what someone can get up to when they have the programming skill and the logical reasoning abilities to analyse existing systems that everyone else takes for granted, work out that those systems are actually not up to scratch, and write their *own* replacements. it's just such a pity that nobody seems to have noticed what he achieved.

Comment learn how to learn (meta-learning) (Score 1) 247

there is actually something which is far more useful than any amount of books read, something which is only really possible effectively and efficiently now that internet searches are quick and accurate: meta-learning. in its crudest, most disparaging form one might mistakenly call this cut-and-paste programming, but it is actually nothing of the sort.

basically what you do is treat everything as a black box, and use the principles of the 6 different types of knowledge (listed on the wikipedia page for Advaita Vedanta, mentioned specifically because the western word Epistemology is woefully inadequate) to reverse-engineer the subject matter, in effect teaching yourself *on the go* by analysing the results achieved, even though you are starting out from quite literally zero knowledge.

it does however take a hell of a lot of balls to do this *whilst being paid*, and most employers simply will not believe you when you tell them that this is something you can do... and that you can be *more effective* at applying this technique than people who have been explicitly trained or who "have experience" in the field.

to be fair to those who genuinely do have experience: such people *may* have encountered the circumstances before, and so *may* reach the answer much quicker than you-who-has-no-experience-at-all. *but* the critical, critical thing to ask prospective employers is: what happens when something falls *outside* the experience of the person who "has experience"? if they had to choose one or the other rather than both, whom would the employer rather have - the person who will get there in the end, regardless of what they are asked to do, or the person who can get there *most* of the time but who lacks the skills or intelligence to work out the all-important remaining 10% of the job, without which the contract remains unfulfilled and the company goes bust?

in short: no amount of reading will substitute for learning how to learn and applying that skill *every single moment of your life*. when i hear people say "i am too old to learn" it makes me cringe - i cannot say anything, so i remain silent - but i feel sad for them, because i know that inside they have given up. the only time to give up learning is when you are actually dead, and not before!!!

Comment cost now (losses) vs cost (funding) (Score 2) 80

y'know... there is a moral to this tale: if the businesses and individuals making money from software libre had properly funded it, putting some of the money saved by not purchasing proprietary software into the hands of those software teams, would we be talking about this now? in all probability, no: those teams would have been able to expand, take on more people, pay for security audits and so on - which, as we have discovered, they are otherwise in no position to do.

so my take on this is that it is really really simple: businesses have received what they paid for, and got what they deserved.

i have been through this experience - directly - a number of times. i worked on samba - quietly - for three years. whilst the other members of the team were receiving shares from the Redhat and VA Linux IPOs, which they were able to sell for huge cash sums, i was busy reverse-engineering Windows NT Domains so that businesses world-wide could save billions of dollars... and not a single one of those businesses called me up to say "thank you, have some cash". as a result, about a year after terminating work on samba, i was working on a building site as a common labourer.

it was the same story with the Exchange 5 reverse-engineering, which the Open Exchange Team mirrored (copied, minus the Copyright and Credits).

there is a second moral here: unlike proprietary software, which has a price tag commensurate with its perceived value, the act of even *offering* payment for a software libre project - usually downloaded from a completely different location (via a distro) - is completely divorced from the developers' actual efforts.

even in shops in rural districts it is understood that if the door is unlocked and the shopkeeper not there, you help yourself, open the till, sort out your own correct change and walk out. but in the software libre world there is often not even that level of expectation! the software is "free", therefore it is monetarily zero-cost, therefore we should not have to pay, right? and businesses are pretty pathological about taking whatever they can get without paying for it.

so the short version is: there is a huge disconnect in software libre between service provision (the software) and paying for that service, and i really cannot see a solution here. perhaps this really should be bigger news: perhaps in this openssl vulnerability we have an opportunity to make that clear.

Comment Re:parallelism (Score 1) 117

You're assuming a lot there. How would you know if osx or windows NT kernels are 'fully parallelized'? Have you seen the source?

someone else answered about OSX. NT - heavily influenced by the MACH kernel design - has been fully re-entrant and multi-threaded for a looong time. also, given that the service control manager (a parallelised start/stop daemon service) is fully parallelised, i'd be incredibly surprised if the same attention to detail wasn't carried through to device-driver initialisation as well. although... the one piece of evidence against that is "Debug Startup" mode, which initialises drivers in sequence (and shows you the sequence) - but that could well be an artefact of debug mode itself rather than the underlying design. honest answer: don't know.

Comment bowling for columbine (Score 1) 1633

wasn't it michael moore who did that documentary, showing that canada is awash with firearms - an enormous number per household - yet had only a tiny number of gun-related murders in the entire country that year? by contrast, i remember the camera panning over the city of detroit while he pointed out the vastly greater number of gun-related murders in the united states.

no: if canada's population can be sensible about guns, then gun "control" in america is not the answer. basically we may reasonably deduce that there's something terribly wrong with american society, resulting in many individuals placing so little value on another person's life, and being sufficiently stressed or outright pathologically insane, as to be capable of killing. passing laws to remove the guns *will not stop that*. it is simply not connected.

if [sensible] citizens are not permitted to defend themselves from their own government, what we then have is a situation where the Oligarchy of the United States (see http://politics.slashdot.org/s... ) could basically murder those people who see it as their duty to protect their fellow citizens from tyranny.

hmmm... where have we seen that happen before? and before anyone *outside* of the united states imagines this to be a "local problem", remember that the united states has been doing things like bombing other countries and cutting off communications (cutting underwater mediterranean cables for example) of any country that attempts to e.g. start selling oil *not* on the $USD standard. so basically if the united states ends up in chaos it means the rest of the world ends up in chaos as well.

sensible U.S. Citizens: please make your voices heard. loudly.

Comment parallelism (Score 3, Interesting) 117

.... um, it's 2014, the linux kernel is a critical part of the planet's internet infrastructure, is used in TVs, routers and phones all over the world, and you're *seriously* telling me that its internals aren't fully parallelised? i thought the linux kernel was supposed to be a leading example of modern operating system design and engineering.

Comment serious problems with networking equipment in HFT (Score 3, Informative) 342


this article explains in depth what the problem is. the SEC has now been alerted to the problem, and is investigating. the people who found the issue believed originally that this was deliberate, but it actually just turned out to be a systemic problem of the speed differentials between different routes that high-frequency trades come in at.

what they originally discovered was that they could see a price on a screen, but the moment they put in a bid via a number of brokers, the price would DISAPPEAR. they thought this was deliberate - that someone was scamming them: it turned out not to be true, but it took a couple of years of investigation to find out. what they did was put in *individual* bids *directly*, and found that those were accepted. they then investigated various combinations, introducing delays into the bids, and found, amazingly, that it all came down to the *time of arrival at the exchange* of the bids as they were routed via the various brokers.

so it was only when they invented a tool (which they called "Thor") that *deliberately* introduced networking delays - such that the bids would, as best they could manage, arrive at the exchanges at almost exactly the same moment - that they managed to trade successfully.
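the core of the trick is simple arithmetic - measure each route's latency and hold back the fast ones. a minimal sketch (the broker names and latencies are invented; the real tool obviously did this down at the network layer):

```python
# delay-equalisation sketch: delay each broker's send so that all
# slices of the order arrive at the exchanges at (nearly) the same
# instant. latencies are hypothetical one-way figures in milliseconds.

def send_offsets(latencies_ms):
    """return how long to hold back each send so arrivals coincide:
    the slowest route is sent immediately, everything else waits."""
    slowest = max(latencies_ms.values())
    return {broker: slowest - ms for broker, ms in latencies_ms.items()}

routes = {"broker_a": 2.1, "broker_b": 4.7, "broker_c": 3.3}
offsets = send_offsets(routes)
arrivals = {b: offsets[b] + routes[b] for b in routes}
# every arrival now lands at t = 4.7 ms; broker_b (slowest) waits 0 ms
```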

if however any one of those bids happened to go via a different ISP, or a different router, or any other random combination, then the bids would *FAIL*.

the problem it turns out is that these delay effects are well-known. most of the money in high-frequency trading is therefore made by seeing a slightly slower broker's prices, then putting in an undercutting bid *knowing full well* that the other broker has a slower network. and this aspect of high-frequency trading is what is currently under investigation by the SEC.

*this is why the introduction of networking delays is so absolutely important*.

the people who discovered this phenomenon basically had to set up their own independent exchange in order to solve the problem. they needed to introduce a delay of around 350 microseconds as a way to make things fair for everyone, and they did this by coiling 38 miles of fibre-optic cable into a shoe-box-sized enclosure in the basement of the server farm that they leased.
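it's worth sanity-checking that figure: light in glass travels at roughly two thirds of c, so 38 miles of coiled fibre buys you a delay in the low hundreds of microseconds (my back-of-envelope numbers, not from the book):

```python
# rough propagation-delay check for 38 miles of coiled optical fibre.

C = 299_792_458       # speed of light in vacuum, m/s
FIBRE_FACTOR = 0.66   # approximate velocity factor of optical fibre
MILES = 38
METRES = MILES * 1609.344

delay_s = METRES / (C * FIBRE_FACTOR)
delay_us = delay_s * 1e6
# delay_us comes out around 300 microseconds, one way - the right
# order of magnitude for the equalising delay described above
```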

it turns out that once investors discovered this, they began *specifically demanding* that their trades be brokered *exclusively* through this new exchange with its 350-microsecond shoe-box delay. it actually caused a lot of embarrassment for a number of brokers and trading houses, because the brokers were explicitly disobeying their clients' instructions - they simply didn't understand how important this really is.

anyway: you really have to read that article (or the book) fully because it's quite complex. it's basically an inherent flaw arising from the fact that internet (TCP/IP) traffic takes unpredictable routes with unpredictable latencies, introducing gross unfairness that has very recently become the subject of intense investigation.

so yes, *all* trading should be done with at least a 350-microsecond delay.
