Comment Risk vs. benefits (Score 1) 528

If every teacher had one of these along with the training on how to use it when that nut kicked in the door

A. ...you would maybe have stunned the one real wannabe mass killer who would otherwise have done something stupid this year (and thus maybe saved a couple dozen potential victims).

B. ...and you would have a country filled with countless cases of abusive tasering (badly behaving kids who got on their teachers' nerves. Not that the brats were within their rights to begin with, but using a potentially lethal weapon to deal with verbal threats or bad behaviour *is* inappropriate) and several extra taser-related deaths on top of the usual ones.
Just look at how many cases of inappropriate tasering there have been since tasers became popular among the various security branches.

(And I'm not even counting the potential for malevolent kids to steal their teacher's electrical weapon for nefarious purposes.)

Comment Still waiting... (Score 1) 331

...yup, but sadly, two decades after Maastricht, there still isn't a common EU taxation law.

So currently, corporations get to game the system all they want - thanks to the increased mobility offered by the European Union.
But there's no way for the state to get money to finance education, health, and so on, because there is no European-level tax.
(Rather than an actual tax - i.e. an extra tax to be paid by individuals and corporations - it would work better as a flow of money between states: if all the corporations run away to Ireland, it would be fair for Ireland to pay the EU a share of the increased income coming from the companies moving in.)

Curiously, although it's not even actually part of the EU but merely has bilateral conventions with it, and although it's considered a tax haven, Switzerland DOES currently give money to the EU.

Comment BUT THEY ARE LEGAL (Score 1) 331

These tax avoidance schemes play a significant role in the current global financial crisis and in the debt problems of countries like the US. They also represent a highly unfair competitive advantage for large multinationals competing with smaller, nationally scaled companies.

But it is *still* *completely* *LEGAL*. Unfair, unethical, but legal.

If you have a problem with that, don't sue them.

If you have a problem with companies gaming the system in order to take advantage of it, don't try to hit them (others will take their turn, and you're in for a huge game of whack-a-mole). Try to change the system itself. Try to bring in new laws, try to create a new international tax scheme.
(And try to find a way to do it without alienating said companies and having them run away from you.)

Comment Huge difference (Score 2) 331

There's a practical difference (at least that's how it's defined in Switzerland - which is one of the possible tax avoidance destinations, although far less attractive than the ones in the summary).

- One is *lying*: giving false information and not paying the taxes you're required by law to pay. You pretend you don't have money and try to hide it, in order not to pay taxes that the law says you should be paying. This is illegal. A person or a company doing so should be prosecuted.

- The other is just shifting money around. You're absolutely honest and hand over all the needed information. You simply move the money to another place, where the taxes happen to be lower than in the first place. Once there, you openly cooperate with the local tax authorities, declare all the money you have and pay all the taxes you're required to pay. It just happens that those taxes are lower than in the country of origin. But nothing is hidden, all the money is openly accounted for, and no one pretends anything false. This *IS LEGAL*. A person or a company doing so is just cleverly playing the system. WHAT NEEDS TO BE DONE is to cooperate with the local government so that some tax money is funnelled back to the original country.

Ireland, for example, has almost no taxes. There's nothing wrong in law with storing your money there, and nothing wrong with paying almost no taxes (as long as you declare everything and hide nothing). If you're unhappy with this, you shouldn't take the companies putting their money there to court; you should instead write to your politicians asking that the European Union finally come up with a solution for EU-level taxes (so the money is shared between Ireland and the other countries where the money was before the transfer but where the company isn't paying taxes).

Comment New Wifi (Score 1) 553

So if some new technology comes out... won't work for Windows 7. Imagine if this spring, some new version of WiFi is released that works over distances of 20 miles, at gigabit speeds, and allows infinite porn downloading.

You mean, like IEEE 802.11ac? :-D

More seriously:
A new feature will probably be supported the same way Bluetooth was supported before Microsoft included it in a service pack: as an ugly vendor-specific hack bundled with the driver.
So the end user will be stuck between two hard choices:
- keep the older OS version and put up with the crappy stack
- pay to upgrade to the newer OS version with built-in official support

Comment The reverse (Score 1) 161

The reverse would be comparing the number of stars in the sky with the number of hairs.
It's probably a gross under-estimation (I, in turn, am no astrophysicist), but the listener clearly gets the point:
there is an insane number of stars out there besides our sun.

Comment Speciality (Score 1) 161

but i am an astrophysicist, not a neuroscientist ;)

Don't worry. Some of us here realise it and take it with the necessary amused grain of salt.
We understand what you actually meant:
a gigantic number-crunching machine with batshit-crazy computing capability.

Comment Biology is against you (Score 1) 161

A neuron is more complex than a transistor. Let's say it does a job, for the sake or argument, that would take about 16 transistors

Let's say that you completely underestimated the computing power of a neuron.

First of all, a transistor just takes a few inputs, integrates them and gives one signal out. To oversimplify (medical doctor/researcher speaking here, so it will be an abusive oversimplification), it works like a basic boolean operator on two input bits.

A neuron is several orders of magnitude more complex than that. It takes many more different inputs. The connection between two neurons is a synapse: a neural impulse arrives at one extremity of the source neuron; at the synapse this causes the release of a chemical (a neurotransmitter) across a gap; and there's a corresponding receptor on the target neuron. When the chemical docks into the receptor, it opens a small gate which lets a few ions flow through, changing the electrical properties of the target neuron.

There can be several *thousand* synapses on a single neuron, meaning that a single neuron can receive input from thousands of its peers.
Also, the integration of all these inputs is complex (a docked neurotransmitter will only *change* the electrical properties of the target neuron, not necessarily fire it; whether the neuron fires or not depends on the net result of all the activity at all the synapses - some raise the probability of firing, others lower it). At that point we're already far from the "two bits in; boolean operator; one bit out" of a computer transistor. We're talking about "thousands of signals in, followed by complex and subtle integration".

Conversely, a single neuron projects its output onto a lot of target neurons too, so the overall network (synapses) can get very complicated for a given number of nodes (neurons).

The signal itself isn't binary. It's not a "fire / no fire" duality like the bits in a byte. In fact most neurons are constantly firing; what varies is the rate at which they fire, and this rate can vary across a wide range. So neurons aren't even digital: they are analogue, with a lot of subtlety (and little signal loss, because they encode in the time domain instead of in signal amplitude).
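
To make the contrast concrete, here's a toy sketch (purely my own illustration, not anything from real neuroscience software; the synapse count, weights and rate ranges are made-up assumptions). Even this crudest possible rate-coded model needs thousands of multiply-accumulates to produce one graded output, versus a single boolean operation for the idealised transistor:

```python
import random

def boolean_gate(a: int, b: int) -> int:
    """The over-simplified 'transistor': two bits in, one bit out."""
    return a & b

def neuron_firing_rate(input_rates_hz, weights,
                       baseline_hz=5.0, max_hz=200.0):
    """Toy rate-coded neuron: thousands of graded inputs in, one
    continuously variable firing rate out (not a single bit).
    Excitatory synapses get positive weights, inhibitory ones negative."""
    net_drive = sum(r * w for r, w in zip(input_rates_hz, weights))
    # The net drive shifts the baseline rate; clamp to a plausible range.
    return max(0.0, min(max_hz, baseline_hz + net_drive))

# Assumed numbers: ~5000 synapses, each firing somewhere in 0-50 Hz.
n_synapses = 5000
rates = [random.uniform(0.0, 50.0) for _ in range(n_synapses)]
weights = [random.choice((1, -1)) * random.uniform(0.0, 0.01)
           for _ in range(n_synapses)]

print(boolean_gate(1, 0))                  # always exactly 0 or 1
print(neuron_firing_rate(rates, weights))  # a graded rate in Hz
```

And real neurons add non-linear membrane dynamics, spike timing and plasticity on top of this, which is what the rest of this post is about.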

And that's only the short-range neuron-to-neuron communication (at the synapse level). There are also a lot of molecules circulating in the bloodstream which can have an impact on the activity of neurons (hormones, etc.).

And all of that only covers simulating the activity at a single point in time. But neurons are living objects, and constantly changing.
Their metabolism changes, they might change their inner structure, etc. For example: the whole point of treating depression is to encourage the neurons to produce more receptors for serotonin.

Even if neural cells don't reproduce, the network changes over time: new synapses are created (e.g. more synapses along often-used paths), others are removed.
The total population of neurons changes too: on the one hand, old neurons may die (or get poisoned by drugs and toxins); on the other hand, stem cells (in the hippocampus region) produce new neurons which can then migrate and insert themselves into the network, compensating for the loss (well, as long as the individual isn't suffering from dementia).

You will actually need much more than 16 transistors to simulate such complexity. You might well need a small computer (or at least a whole separate thread) just to simulate one neuron. You definitely need a massive super-cluster if you want to fully replicate the work of even the simplest animal brain.

Even though we *do* use neural nets in computer research and data processing, such nets use virtual neurons which are much more simplified than their real biological counterparts. That's good enough for research and data processing; it's not good enough to simulate a brain.

So definitely no, you can't count on a 1:16 neuron-to-transistor ratio in your cyber-brain.

Comment Not exactly so (Score 1) 119

The main problem between Mr. Torvalds and Nvidia is that, up to that point, Nvidia had never ever wanted to collaborate, nor even tried to use whatever already exists in Linux.
Nvidia wanted the lazy way, sharing as much of their own code as possible across platforms: their current way of doing things is to throw everything else in the garbage and do things their own way. The problem is that this doesn't play nice with everyone else (including Intel hardware), and thus some functionality is completely broken (Optimus only works through third-party hacks) even though the necessary functionality is already in the kernel (but ignored by Nvidia).

Recently they've started to wise up. They are open to collaborating better and trying to use what exists. But now the situation has reversed, and currently the Linux developers aren't playing nice (the necessary technology has been licensed as GPL-only from the beginning).

Maybe Valve will solve the problems by simply forking Linux for their Steambox project

...and that would be stupid.
Instead, Valve went the hard route and started a collaboration between their own developers and the GPU driver developers (both the official binary teams at AMD and Nvidia, and the official open-source ones at Intel and AMD), all of which is leading to several improvements.
Thanks to them, the binary drivers are being improved, and even the open-source stack (the Intel driver is officially open source, and AMD officially supports open-source efforts in addition to their own in-house binary drivers, all based on the same standard Linux infrastructure) has seen an increased pace of development (among other things, Valve is responsible for adding several debugging-related extensions to Gallium3D, paving the way for future OpenGL 4.x).

And these efforts do indeed pay off: the Source engine with an OpenGL back-end is much faster than with Direct3D... even on Windows.

Comment Android convergence vs. Linaro (Score 1) 60

The Android convergence was about bringing the specific bits needed to run the Android userland back into the mainline kernel tree, instead of keeping them in a special separate fork.
(Among other things, Android uses its own special type of IPC, Binder.)
Since some recent kernel (was it 3.5? 3.6? memory fails me) these modifications are back in the main tree.
So, today, *as long as the hardware is supported*, you can run Android using any modern kernel. (For example, there are some interesting experiments with both Ubuntu and Android running on the same ARM device using the same kernel.)

Linaro is about the rest - the "as long as the hardware is supported" part. As godrik said above:

No two ARM machine boots the same. No two ARM processors expose component the same way.

As I've explained elsewhere in this topic, this has led in the past to a situation where every single ARM machine has a separate monolithic patch which was custom-written in one shot by the machine maker.

The point of Linaro is to bring some discipline into this, to bring cooperation between all the makers & developers, and to help modularise all of it into small independent components.
So when the next machine comes to market, instead of writing yet another monolithic patch completely from scratch, as much work as possible can be leveraged from what already exists.
And the other way around: when a new kernel version comes out, instead of having to port several thousand different monolithic patches (which never happens - end users are always stuck with hardware-specific versions), it will be easier to just maintain the few modules affected by the changes.

When the two are combined, it'll be incredibly easy to run Android on as many devices as possible. For a manufacturer, that's a no-brainer: the mainline kernel contains everything needed to run Android, and contains a lot that can be leveraged to support the hardware. Either write just the few new drivers needed, or even better (if you're lucky) only write new settings for some generic driver.

BTW: Microsoft has chosen a different path - screw everything else, we're only supporting one specific way, one single type of machine; either go our way or pick another OS. My bet is that this isn't going to work very well, especially given the flexibility that Android offers on the other hand.

Comment Linaro is not *a distribution* (Score 1) 60

Linaro is not a distribution. It's a joint effort to *help Linux run on ARM hardware*.

For example there are about three linux based distros for the openmoko, all with custom UI front ends.

And for these "Linux based distros" to work, you need a running Linux (kernel) on which to base them. That's the work of the software company named Linaro.

To go into more detail:
As a point of comparison, in the x86 processor world things are rather standardised and well modularised. There's more or less only one single main platform (the PC), with some weirder variants (BIOS vs. (U)EFI, or even weirder, PC vs. Apple's custom Intel Macs) and a few exceptions (which are really rare and don't matter much for mass consumption). The components are clearly organised (no matter if you go for Intel, AMD, VIA or rarer hardware, you've got the same basic CPU, northbridge, southbridge, PCIe bus, etc.), with well-defined procedures to initialise and control everything. On the software side, all of this is handled by clearly modularised and segregated components.
When something new arrives on the market (like the jump from BIOS to EFI, or the move from PCI to PCIe), you only need to write one module and leverage what already exists for everything else.
To boot Linux, you use the same kernel everywhere, and only load different drivers depending on the local variations.

In the ARM hardware world, things are much messier: lots of weird SoCs are put together, with far less standardisation. Also, whereas x86 hardware is general-purpose and widely available to everyone (just think of all the beige boxes everywhere; then consider that whether you need a server, a compute node or a home theater PC, you use the same components - and branded machines (brand servers) use the same components too), a lot of ARM hardware tends to be put together for very specific custom usage (one company using its own SoC + PCB for a router; another for a smartphone; another for a NAS; another for a TV set-top box; etc.). Much of this hardware is one-shot (there's little design re-use between a router and a smartphone).

As a consequence, before Linaro, Linux on ARM was a big mess: each time a company put together some hardware, they also did their own port in a one-shot fashion. They downloaded the Linux source, wrote a big monolithic patch to support their own weird variant, compiled it, sometimes even published the code (to comply with the GPL), and called it a day.
When another company wanted another ARM-based machine, they just did the same.
No modularisation. No code re-use. No easy rebasing of the kernel.
That's why, for several pieces of hardware, you're basically stuck with one very specific kernel version (OpenMoko is still at 2.6.x-something) even when the source is available: the hardware depends on a huge monolithic platform driver which is tightly bound to the very specific kernel version against which the patch was written.
If there's any known kernel bug, your only hope is to wait for the back-port; you can't just move to a recent kernel (to 3.6.x).
If you want to provide a distribution for several pieces of hardware, you need one separate kernel per device, and sometimes different kernel versions (depending on which kernel version each patch was written against).
A big mess.

It's no surprise that Microsoft is having big difficulties with Windows 8 on tablets and smartphones: the hardware landscape is really weird (and their own approach is to impose one single specific type of platform, so that writing Windows 8 is easy, and to have everyone else standardise on it). (That's also why they won't end up being as popular on smartphones as they wished.)

The role of Linaro was to put some order into this mess by gathering together several of the people involved (there are hardware companies in there) and giving them opportunities to work together and coordinate their efforts - among themselves, as well as with the kernel developers and the rest of the community.
Split everything into small components which are easy to re-use (for newer, slightly different hardware) or re-base (new kernel = keep as much of the drivers as possible). Organise everything cleanly.
The target: achieve the same situation as on x86 hardware - the same kernel everywhere, with all the magic in modules and components. Re-use should be easy and cheap (just write new drivers for the new hardware elements and re-use as much as possible of the rest, or even just write a new batch of settings for a generic driver). Everything should be as modular as possible, and everything possible should go back into the official kernel (no monolithic patches).
The end users see benefits (they get better-quality code and longer maintainability; this requires open source code, but also code which is easy to work with - the past mess wasn't, the future Linaro strives for is).
The hardware makers see benefits (the initial cost of developing new embedded hardware is lower - no need to write full monolithic patches, leverage as much as possible from what already exists - and the overall development/maintenance is simpler: no need to keep one separate kernel for every piece of hardware produced, and it's easier to get updates from the mainline kernel).
And the kernel development community sees benefits (it's easy to keep and maintain modules in the mainline kernel, and there's less headache when big changes arrive in newer kernels, since everything is nicely modularised).

If today you're starting to see interesting distros on ARM (lots of smartphone-specific distros, big-name distros like Ubuntu and openSUSE, etc.), that's exactly because of the behind-the-scenes work of developers like those at Linaro, making the Linux kernel easy to get working everywhere.

Comment Use cases: AMD leverage ARM+Radeon (Score 1) 213

Also, AMD can spit out some interesting use cases where it can find a nice empty niche market:
by leveraging their built-in GPUs.
Not simply pairing a low-powered mobile GPU (AMD Radeon's Qualcomm Adreno cousin, PowerVR, etc.) but something more high-powered (some of their own low-power Radeon designs):

- Coupled with a multi-core ARMv8 CPU, this can be useful for netbooks with good graphics performance (the same kind of market Nvidia is chasing with their own Tegra series).

- Some numerical workloads can benefit from a low-power CPU core and a decent GPU.
When building server farms, performance-per-watt matters, and that's why ARM is starting to eat into x86 territory.
With a low-power ARM and a decent GPU, AMD's creations could also end up as low-power GPGPU nodes with crazy-low energy requirements. (I've seen ARM + custom parallel chip combos appearing; there might be a market for something like this.)

So there is definitely some room for a multicore 64-bit ARM by AMD.

Best part for us geeks? AMD is likely to keep their open-source policies.
So you can expect documentation pushes to coreboot & co. (for the ARM part) and to the open-source radeon stack & co. (for the GPU part).
And this is a radically different situation from the present ARM one, where most of the graphics cores are closed:
either completely closed (PowerVR),
or undergoing some reverse engineering (Mali/Lima),
or getting some docs (Adreno's common ancestry with Radeon helps),
or being an open-source farce (Broadcom's VideoCore is basically a software 3D engine running on a dedicated core; their "fully open-source driver" is a thin layer which uploads the firmware to that core and then forwards the OpenGL ES calls as-is).

Currently there are no nice open-source graphics in the embedded world, unlike with Intel (and AMD) on the desktop.
But if AMD keeps their open-source trend, and Nvidia keeps their promises (after Linus' "fuck you" scandal they ended up promising to open up a little bit more), we're going to see an interesting battle of "ARM with big GPUs" in the embedded Linux world.

Comment 3rd Party (Score 5, Informative) 467

If you tell Facebook your secret, it's not a secret anymore and you're a moron for thinking it would be.

The problem isn't what they told Facebook. The problem is that the girls got added to a queer-themed group, and group-adding on Facebook doesn't require user confirmation or anything.
A third party just clicked a group button while the girls were online, and their homophobic parents saw "Girl1 and Girl2 joined the group 'lesbian chorus singers'" and freaked out. The girls never needed to do anything; they didn't even need to write their preferences into their profiles, and in fact their accounts could even have been dormant.

The biggest problem is not only that clueless users can mess up their own privacy online, but that morons can mess up other people's privacy as well (in a few cases, including the privacy of people who aren't even on Facebook themselves).

Comment small is better than zero (Score 1) 262

Good luck breaking RSA.

Yup.
For a 4096-bit key, it would give the correct numbers only a very tiny fraction of the time. But that's still better than zero.

Consider that multiplying the factors back together to verify an answer is trivial.
If the quantum unit is fast enough and spits out answers at a sufficient rate, then even if only a small fraction of those answers are correct, by constantly verifying the unit's output you may, after a while, end up with the correct result at a sufficiently high confidence.
If the answer-spitting rate is high enough compared to the correct-answer rate, this "after a while" might be shorter than the time needed to brute-force-factorise the numbers with current technology.
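
Here's a minimal sketch of that repeat-and-verify loop (my own illustration; noisy_quantum_factor is a hypothetical stand-in for the quantum unit, faked here with classical random guessing so the loop actually runs). With a correct-answer probability p per sample, the expected number of samples before success is 1/p, so the expected wall-clock time is roughly 1 / (p * sample rate):

```python
import random

def noisy_quantum_factor(n: int) -> int:
    """Hypothetical stand-in for a quantum unit that emits a candidate
    factor of n, correct only a small fraction of the time.
    (Faked here with uniform random guessing.)"""
    return random.randrange(2, n)

def factor_by_repeated_sampling(n: int, max_tries: int = 1_000_000):
    """Keep sampling candidates; verification is one cheap modulo."""
    for tries in range(1, max_tries + 1):
        candidate = noisy_quantum_factor(n)
        if n % candidate == 0:  # trivial classical verification
            return candidate, n // candidate, tries
    return None  # gave up

# Tiny demo number (3*5*7*11*13) so the random stub succeeds quickly;
# a real 4096-bit modulus would need a far better-than-random oracle.
print(factor_by_repeated_sampling(15015))
```

Note that the loop itself never changes: any oracle that beats random guessing by a useful margin just shrinks the expected number of iterations.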

Yes, you can't have nice things (for free).
But quantum is the kind of weird technology which can give you lots of *shitty* things (for free).
Okay, those things are *shitty* and not *nice*, but you still get them for free.
And you can still try putting them in a big pile, setting them on fire, and using the heat to cook a nice dinner. And a dinner is *nice*.

Maybe the production rate is forced to scale badly compared to the true-positive rate, and they cancel each other out exactly the way a perpetual motion machine always ends up with an output energy of zero or worse.
But that's not a reason not to even try proving *that*. We need to understand quantum computing at least well enough to see whether this 50% vs. 48% difference will become an obstacle in the long term or not. We have to test and prove your predictions.

Maybe cooking dinner over quantum virtual shit isn't worth it, but at least we have to prove it.
