Submission Summary: 0 pending, 8 declined, 2 accepted (10 total, 20.00% accepted)


Submission + - Google vs Oracle decided in favour of Google. (twitter.com)

serviscope_minor writes: The Oracle vs Google trial has just ended, and it has been decided in favour of Google. Officially, reimplementing APIs is fair use.

No stories have been published yet, because as of the time of writing the verdict is only 32 minutes old, but thankfully there are live tweets direct from the courtroom here, thanks to Sarah Jeong.

Our industry is safe for now, it seems.

PS Sorry for the only link being a twitter feed. The news doesn't seem to have hit the usual outlets yet.

Submission + - What's the smallest biggest number you can think of?

serviscope_minor writes: If you think exponentials, factorials or even Ackermann's function grow fast, then you're thinking too small. For truly huge, but well-defined, numbers you need to enter the realm of non-computability.

The Busy Beaver function BB(n) is the largest number of steps that an n-state Turing machine can run for when fed a blank tape, excluding non-halting machines. It grows faster than any computable sequence, yet starts off rather pedestrian: 1, 6, 21, 107. By BB(7) it reaches at least 10^10^10^10^10^7, and at some point it becomes non-computable. It must, because if it didn't you could solve the Halting Problem: run a program for BB(N + the extra states needed to encode the initial tape state) + 1 steps, and if it gets that far then you know it never halts. So at some point it must transition from values that can be computed to ones that can't.
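For the pedestrian end of the sequence, the definition can be checked directly by brute force. A minimal sketch, with the caveat that the step cap is only safe because the true values for n ≤ 2 are already known to be small (for larger n this approach silently misses long-running halters):

```python
from itertools import product

HALT = -1

def run(machine, cap):
    """Simulate a 2-symbol Turing machine on a blank tape.
    Returns the step count if it halts within `cap` steps, else None.
    The halting transition itself counts as a step (standard convention)."""
    tape, pos, state, steps = {}, 0, 0, 0
    while steps < cap:
        symbol = tape.get(pos, 0)
        write, move, nxt = machine[(state, symbol)]
        tape[pos] = write
        pos += move
        steps += 1
        if nxt == HALT:
            return steps
        state = nxt
    return None

def busy_beaver(n, cap):
    """Brute-force BB(n) by enumerating every n-state, 2-symbol machine.
    Only correct when `cap` exceeds the true BB(n)."""
    states = list(range(n)) + [HALT]
    cells = list(product([0, 1], [-1, 1], states))  # (write, move, next)
    keys = [(s, sym) for s in range(n) for sym in (0, 1)]
    best = 0
    for assignment in product(cells, repeat=len(keys)):
        steps = run(dict(zip(keys, assignment)), cap)
        if steps is not None:
            best = max(best, steps)
    return best

print(busy_beaver(1, cap=10))  # -> 1
print(busy_beaver(2, cap=10))  # -> 6
```

Already at n = 3 the search space (and the need to separate slow halters from non-halters) makes this painful, which is exactly the point of the submission.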

And now there's some new and rather interesting insight into that, which essentially reduces the problem to code golf, or the International Obfuscated Turing Code Contest (as if there were any other sort). Imagine you have an axiomatic system, say ZFC (which underlies almost all of modern maths), whose consistency you can't prove from within the system (Gödel showed you can't). If you write a program that systematically generates and tests hypotheses from the axioms, you can't prove whether it halts, since that's equivalent to proving consistency.

This insight, along with a first concrete bound (an explicit 7918-state machine showing that the value of BB(7918) cannot be pinned down in ZFC), comes from this new paper. It turns out that writing a ZFC axiom evaluator directly as a Turing machine is rather tricky and long-winded, so the authors wrote a small interpreter for a higher-level language, then wrote the axiom evaluator in that. Now, finding a smaller such uncomputably large number is a question of writing even smaller programs that attempt to compute undecidable things. Think you can do better? A good starting point would probably be the existing code on GitHub.
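The shape of the machine in the paper is "enumerate consequences of the axioms; halt if you ever derive a contradiction". A toy sketch of that shape, under heavy assumptions: the function name is made up, formulas are plain strings, and the only inference rule is modus ponens, so unlike real ZFC a finite toy system can saturate. With an inconsistent toy system it halts; with a consistent one it would loop (or here, saturate):

```python
def halts_iff_inconsistent(axioms, max_rounds=None):
    """Breadth-first proof search: repeatedly apply modus ponens
    ("P" and "P->Q" yield "Q") and halt as soon as some sentence F
    and its negation "~F" have both been derived.
    Deciding whether this halts for a real axiom system like ZFC
    is equivalent to deciding that system's consistency."""
    known = set(axioms)
    rounds = 0
    while max_rounds is None or rounds < max_rounds:
        new = set()
        for f in known:
            if "->" in f:
                p, q = f.split("->", 1)
                if p in known and q not in known:
                    new.add(q)
        known |= new
        rounds += 1
        for f in known:
            if "~" + f in known:
                return rounds  # contradiction derived: halt
        if not new:
            return None  # toy system saturated without contradiction
    return None

# An intentionally inconsistent toy system: derives B, then ~A, clashing with A.
print(halts_iff_inconsistent(["A", "A->B", "B->~A"]))  # -> 2
```

Shrinking the 7918-state bound amounts to golfing exactly this kind of searcher down to as few Turing machine states as possible.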

(I hope I've got the explanation at least half way right!)

Submission + - LibReSSL unaffected by DROWN

serviscope_minor writes: The OpenBSD people forked and heavily cleaned up OpenSSL to create LibreSSL, due to dissatisfaction with the maintenance of OpenSSL culminating in the Heartbleed bug. The emphasis has been on cleaning up the code and improving security, which includes removing things such as SSLv2, which has fundamental security flaws. As a result, LibreSSL is not affected by the DROWN bug.

LibreSSL is largely compatible with OpenSSL. The main exceptions are cases where programs use insecure functions removed from LibreSSL, or require bug-for-bug compatibility with OpenSSL.

Submission + - Ask Slashdot: Clusters on the cheap?

serviscope_minor writes: Dear Slashdotters,

A friend of mine has recently started a research group. As usual with these things, she is on a shoestring budget and has heavy computational demands. The task is very parallel (though implementing it on GPUs is an open research problem, and not the topic of her research) and very CPU-bound.

Can slashdotters advise on a practical way of getting really high bang for buck? The budget is about 4000 GBP (excluding VAT/sales tax), though it is likely that the system will be expanded later.

The computers will probably end up running a boring Linux distro and Sun GridEngine to manage batch processing (with home directories shared over NFS?).

Submission + - Best removable storage filesystem for Linux?

serviscope_minor writes: What filesystem do you use for portable disks, especially large ones, under Linux? FAT is simply not very good, but using a proper filesystem (e.g. ext3) preserves the ownership and permissions from the original machine, which is rather annoying when the disk is moved to a different machine with different user IDs. So is there a way to have a good filesystem that supports all the unixy things, such as symlinks and an execute bit, but does not require lots of chown'ing as root when moved to a different machine?
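The root of the annoyance is that unix filesystems store ownership as raw numeric IDs, not names. A minimal illustration (the helper name is made up for this sketch):

```python
import os
import tempfile

def file_owner_uid(path):
    """Return the raw numeric UID the filesystem records for `path`."""
    return os.stat(path).st_uid

# ext3/ext4, xfs and friends store a numeric UID/GID pair on disk;
# the user *name* is resolved per-machine via /etc/passwd. Move the
# disk to a box where your account has a different UID and the same
# files suddenly belong to "someone else".
with tempfile.NamedTemporaryFile() as f:
    print(file_owner_uid(f.name) == os.getuid())  # True: we created it here
```

On a second machine where your login maps to a different UID, that same comparison against the on-disk files comes out False, which is exactly when the chown'ing starts.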
Portables

Submission + - ARM Based netbook.

serviscope_minor writes: Shopping in Robert Dyas of all places (note to non-English readers: this is a fairly generic hardware store with only a small selection of electronics at best), I noticed Inkia ARM-based netbooks being advertised, though careful readers will note that the specs seem to differ slightly between listings. The specs are the usual netbook ones: an 800x480 screen, 64 MB of RAM, 1 GB of flash, a 400 (or maybe 533) MHz Samsung ARM processor, and WinCE. So it looks like the first non-x86 netbooks have arrived. Sadly, this one is rather expensive, being only slightly cheaper than the EEE 2G, with a painfully small amount of RAM, less storage and less battery life. But this brings up several interesting questions: are they going to get much cheaper, are there models with more memory, and will it run OpenBSD? The specs are very similar to the Sharp Zaurus 3000 series, which runs OpenBSD very well, but running Firefox in 64 MB is somewhat painful.
Government

Submission + - Liquid explosives: no danger and no plot.

serviscope_minor writes: It has already been established in a previous article that bringing down an aircraft with liquid explosives mixed on a plane would be very difficult. The men accused of the plot were brought to trial, and a verdict has now been reached: there was not enough evidence to convict any of them of targeting a plane. So apparently there was not much evidence of a plot that could not have worked anyway.
Software

Submission + - Good profiling tools for C/C++ under un*x?

serviscope_minor writes: It should be well known to any developer that you should only optimize the parts of a program which need optimizing, and the way to find those parts is through profiling. That glosses over one point: profiling is difficult. The obvious way is to enable profiling in the compiler and use gprof, but this has problems. Firstly, there is no point in profiling a program without turning on -O3 (or whichever level you ship with), since optimization can change the results dramatically. Secondly, -O3 will inline functions, which can ruin profiling results by making them far too coarse; even when it doesn't, there is no way of determining which part of a function is taking up all the time. So that brings me to my question: does anyone know of profiling tools which do not suffer from these problems? My platform is C++ (using g++) on Linux.
Linux Business

Submission + - Ubuntu on Dell: interesting to a wide audience.

serviscope_minor writes: You heard earlier today that Dell will be shipping Ubuntu on selected models. Naturally, this is interesting to Slashdotters, but the interest generated among a wider audience will ultimately be more important. Well, apparently this is the 3rd most popular topic on the BBC at the moment, so it does seem to interest a general audience. I believe that this bodes very well for the future.
