Comment Re:Of course! And you never need more than 640K RA (Score 1) 373

Yup, they missed the boat. Anyone who has used a SSD will go back to using a regular HD when they stop making SSDs, and the last available one breaks.

SSDs really are the bee's knees.

I thought I would be one of those people.

But my latest work laptop has 16GB of RAM, which is basically enough to cache pretty much anything I touch during the day. So once the cache is hot, I don't notice any slowdowns from the HDD.

As a result, the laptop is perfectly usable with a HDD once booted (I only ever suspend to RAM). And even then, Ubuntu 12.04 boots pretty quickly from cold, though things like Lotus Notes and Eclipse can be slow to start up.

By contrast, my old WinXP based laptop was totally transformed by an SSD, to the point that I genuinely didn't even notice it thrashing to swap when it ran out of RAM.

SSD good. RAM better!
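The hot-cache effect described above can be sketched with a toy read-through cache. This is a pure illustration: the real mechanism is the OS page cache, and the 0.01 s "disk" latency below is made up.

```python
import time

def slow_disk_read(block: int) -> bytes:
    """Simulate a slow HDD read (hypothetical stand-in for real disk IO)."""
    time.sleep(0.01)  # roughly, seek + rotational latency
    return b"data-%d" % block

cache: dict[int, bytes] = {}

def cached_read(block: int) -> bytes:
    """Read-through cache: only the first access pays the disk cost."""
    if block not in cache:
        cache[block] = slow_disk_read(block)  # cold miss: go to the disk
    return cache[block]                       # hot hit: served from RAM

# First pass is slow (cold cache); second pass is nearly free (hot cache).
start = time.perf_counter()
for b in range(20):
    cached_read(b)
cold = time.perf_counter() - start

start = time.perf_counter()
for b in range(20):
    cached_read(b)
hot = time.perf_counter() - start
```

The second pass never touches the simulated disk, which is why a large RAM cache can make an HDD feel like an SSD for a repeated working set.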

Comment Re:Hmmm (Score 1) 221

Enriched uranium allows it to react faster to the decay, which promotes more neutrons, which causes more decay, which causes even more neutrons, and the process continues. Chernobyl - This is also what happened in Japan due to the tsunami, just not a complete failure like Chernobyl (Thank god / science / whatever you believe in that's good)

What are you on about? The Fukushima reactors were scrammed minutes before the tsunami arrived, in response to the original earthquake. The meltdown was the result of the cooling system failure and the residual decay heat. Absolutely nothing like Chernobyl or an atomic weapon.
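The residual decay heat mentioned above can be estimated with the classic Way-Wigner approximation. This is a sketch only: the 0.066 coefficient and the one-year operating period are illustrative, and real plant analyses use more detailed standards.

```python
def decay_heat_fraction(t_s: float, t_op_s: float) -> float:
    """Way-Wigner approximation for decay heat as a fraction of full
    thermal power, t_s seconds after shutdown, following t_op_s seconds
    of steady operation."""
    return 0.066 * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

one_year = 365 * 24 * 3600.0

# One minute, one hour and one day after scram, for a reactor
# that had run steadily for a year:
for t in (60.0, 3600.0, 86400.0):
    frac = decay_heat_fraction(t, one_year)
    print(f"{t:>8.0f} s after scram: {frac * 100:.2f}% of full power")
```

A few percent of several gigawatts thermal is still tens of megawatts, which is why losing the cooling system even after a successful scram was enough to melt the cores.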

Comment Re:Huh? (Score 1) 241

In summary, get a cheap old laptop/netbook, and configure it accordingly. A laptop with a broken screen can be had cheap as chips.

Bad advice... You're wasting a LOT of power, and you're spending a lot more money, for a device with lesser capabilities.

A lot of power? Hardly. A few watts, especially as the screen is never on these days. I've never measured the power draw, but it probably doesn't go above 10-15W, and that's not much more than newer routers that have to power those internal hubs.

Any wasted power would be on the order of $10-$20 a year. Big deal.
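That estimate is easy to check: a continuous load in watts converts to kWh per year and then to dollars. The $0.12/kWh rate below is an assumption; substitute your own tariff.

```python
def yearly_cost_usd(watts: float, usd_per_kwh: float = 0.12) -> float:
    """Cost of a continuous electrical load: W -> kWh/year -> dollars."""
    kwh_per_year = watts * 24 * 365 / 1000.0
    return kwh_per_year * usd_per_kwh

# A laptop drawing 10-15 W around the clock, at an assumed $0.12/kWh:
print(yearly_cost_usd(10))  # ≈ $10.5 per year
print(yearly_cost_usd(15))  # ≈ $15.8 per year
```

So even the pessimistic 15 W figure lands inside the $10-$20/year range.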

You not only need to buy the laptop, you're also buying USB ethernet adapters, and a separate network switch to connect to it, while home APs/routers have all that built-in.

And 99% (made up figure) of users don't use any of the internal switch ports. Most people connect to the interweb using wireless connections these days. And an Eee PC has a wireless port built in, with an ethernet port for the upstream modem connection. I have the USB ethernet connection to break out to a further wireless router, but I don't strictly need it, and will retire it now that I'm happy with how the AP functionality is working.

Just get something with a USB port that is compatible with DD-WRT or OpenWRT. I know an 8-port D-LINK DIR-632(a) has been available for $40 on Amazon for the past 6 months, which I'm sure ends up FAR cheaper than your solution, and will lower your power bill.

Perhaps, but then it has no built in UPS, and if you also use the machine as a NAS, a UPS comes in handy as well.

Comment Re:Huh? (Score 4, Insightful) 241

Cheap home routers tend to have crappy power supplies and inadequate cooling.

I've an old Asus EEE PC 701, augmented with a USB upstream ethernet, that does perfect service as a router with OpenWRT. Built in UPS (which I presume also conditions the power for the mainboard).

Uptime: 612d 3h 48m 4s, though I'll power it down soon to swap the RAM with a machine more deserving of the 2GB installed in there currently.

In summary, get a cheap old laptop/netbook, and configure it accordingly. A laptop with a broken screen can be had cheap as chips.

Comment Re:You know there is no explosive force in space.. (Score 1) 311

All a nuclear device would do in space is heat it up, pretty rapidly, maybe enough to thermal-stress-fracture it into several pieces, but nevertheless a nuclear weapon in space is not going to blow an asteroid (or anything else) to bits.

The heat would vapourise the rock, which would at least expand and exert some force on the rest of the asteroid. If the nuke were embedded in the asteroid before exploding, the vapourised rock would expand inside the asteroid and probably fracture it significantly, perhaps into several pieces. And those individual pieces, each with less mass than the combined mass before (because of the mass lost to vapourised rock), would also be on a different trajectory to before, and so might miss Earth entirely. I think that's the point.

Also, don't think of asteroids as necessarily solid rock like you'd find on earth. They are just as likely to be coalesced space rubble, and not very tightly bound together due to insufficient gravity.
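The deflection argument can be put in rough numbers: even a tiny velocity change, applied years before the predicted impact, shifts the arrival point a long way. This crude along-track estimate ignores orbital mechanics entirely and is only meant to show the scale.

```python
def miss_distance_km(delta_v_m_s: float, lead_time_years: float) -> float:
    """Crude estimate: a velocity change delta_v applied t before the
    predicted impact shifts the arrival point by roughly delta_v * t."""
    seconds = lead_time_years * 365.25 * 24 * 3600
    return delta_v_m_s * seconds / 1000.0

# Even 1 cm/s, applied a decade out, moves the asteroid by thousands of km:
print(miss_distance_km(0.01, 10))  # ≈ 3156 km
```

Three thousand km is about half an Earth radius, so a small nudge plus enough lead time is the whole game; a full shattering isn't required.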

Comment MySQL/MariaDB does OpenCL now? (Score 1) 112

The problem is you need to handle two approaches in your parallel programming: (1) Multithreaded across multiple cores, and (2) vectorization. These are the two facets to today’s multicore programming, and your code needs to handle both aspects correctly.

WTF would you need vectorization in a DB for?

I'd stake my dog's life that scalability differences are not down to compiler switches and SIMD instruction selection. Amdahl's law is more likely to be applicable, and as DBs have many serialization points (disk IO, lock management) it is these that are more likely to affect scalability rather than compiler flags. Xeon Phi? Come on!
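Amdahl's law makes the point concrete: with any serial fraction at all, speedup saturates no matter how many cores (or how wide the SIMD units) you throw at the workload. The 90% parallel figure below is illustrative.

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Amdahl's law: speedup is capped by the serial fraction,
    regardless of how many cores are added."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A DB workload that is 90% parallelizable (10% stuck behind locks/IO):
for n in (4, 16, 64, 1024):
    print(f"{n:>5} cores: {amdahl_speedup(0.9, n):.2f}x")
# Speedup saturates near 1/0.1 = 10x no matter the core count,
# which is why removing serialization points beats compiler flags.
```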

Comment Re:Greed (Score 3, Informative) 292

These same rare earths are needed for nuclear power plants (neodymium magnets, copper wires and suchlike). Indeed they are needed for all power plants.

But once they were used in nuclear power plants, radioactive contamination makes them impossible to recycle.

That's just pure FUD. Anything on the clean side of the reactor (basically anything this side of the primary heat exchanger) is just like any other power plant. I can assure you anything copper is nowhere near the "dirty" side of the reactor; it just isn't a suitable material. And I'm not sure why you'd need neodymium magnets anywhere. I'd imagine any generator or motor magnets would be electromagnets.

Even for materials exposed to nuclear waste, things like metals can be cleaned and then recycled, with the cleanup waste then being treated as nuclear waste. Most metals can be recycled. Concrete that's been exposed to nuclear waste (like water from cooling ponds) can be tricky, but metal cladding is used for such ponds, which can be stripped and cleaned, leaving the underlying concrete clean of nuclear contaminants.

Comment Re:Greed (Score 1) 292

No, it's just very safe and reliable if done correctly. The number of hours of skilled people needed to build, maintain and operate new nuclear plants make them too expensive unless electricity prices go up a lot, which they won't.

Night-time prices will slowly decline, but day-time prices will only go down faster from now on. Wholesale electricity prices in Germany already drop below 1 Euro cent / kWh regularly, and that's _after_ shutting down most nuclear plants.

Wow, you should work in the futures market.

Perhaps Germany could just buy in some of France's cheap nuclear surplus energy to keep the costs down?

Comment Re:Greed (Score 5, Insightful) 292

There is evidence that even when things were "done correctly" at Fukushima there were completely unexpected failure modes that no-one had predicted. That's the biggest challenge in engineering safety - handling things that are literally unpredictable.

Fukushima was a catalogue of retrospective bad design, cover-ups, mismanagement, a huge freaking earthquake, and the largest tsunami in memory devastating huge swathes of Japanese countryside and killing many thousands of people.

And still no deaths can be attributed to the nuclear aspect of the regional disaster. Perhaps even the destructive hydrogen explosions could have been avoided (thus preventing much of the fallout) if it had been allowed to vent, but as I understand it, that wasn't allowed due to the fear of "radioactive gases" being vented.

Three Mile Island and Fukushima show us nuclear is inherently safe; only Chernobyl has had anything like a devastating effect on anything other than an economic scale. And the Chernobyl reactors were a picture of how not to do nuclear power.

Comment Re:Um... (Score 1) 612

While I hope he is, I've noticed something similar. On mornings when I haven't gotten enough sleep, the automatic (with cruise control!) made it hard to stay awake. Manual engaged me more and kept me awake.

The study I've seen (no citation, sorry) indicated that drivers who were intoxicated were rubbish at the details, like fine maneuvers (think parking and fitting through small gaps) but not so affected by monotonous tasks, like long highway drives, whereas sleep deprivation was the opposite.

In short, get plenty of sleep, stay off the sauce.

Oh, and cayenne8 is a total twat.

Comment Re:Internet != Network (Score 2) 213

Connecting flight controls to "The Internet" would be the stupidest of all ideas. If they do this, anyone getting on board would be a candidate for the Darwin awards.

I'm sure they meant to say that all these systems are networked together, using ARINC or other aviation network technologies.

TFS says "an internet". A network -> network connection is an internet connection, regardless of whether it's routed to "the internet".

Comment Re:yea they fell by 44% (Score 1) 77

OS binaries and libraries are often read in a random IO pattern, as the process jumps from one section of code to another. This is where a low latency drive on OS/application startup helps.

The only runtime linker I'm familiar with will prefault the entire binary and then let it be demand paged out, on the basis that binaries are usually small and mostly-used, that reading the entire binary is as cheap as faulting in a few pages, and if some pages aren't used for a while then they can be paged out at no cost later.

And what about libraries? An app could contain hundreds of MB of code, even if only a small fraction of it is referenced. I'd rather that code not push out my working set of data.

User data, on the other hand, is usually read/written in a sequential IO pattern, from start to finish.

Since this is the sort of thing that usually deserves a big fat [citation needed], I'll skip that and just point you straight to a peer-reviewed citation that roundly refutes that idea:

A File is Not a File: Understanding the I/O Behavior of Apple Desktop Applications, published at ACM Symposium on Operating Systems Principles, 2011.

The paper above doesn't entirely refute my assertion:

Summary: A substantial number of tasks contain purely sequential accesses. When the definition of a sequential access is loosened such that only 95% of bytes must be consecutive, then even more tasks contain primarily sequential accesses. These "nearly sequential" accesses result from metadata stored at the beginning of complex multimedia files: tasks frequently touch bytes near the beginning of multimedia files before sequentially reading or writing the bulk of the file.

This was based on observations of IO patterns from the studied applications in the paper.
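The paper's loosened 95% definition can be checked mechanically against an access trace. The (offset, length) trace format below is hypothetical, just to make the definition concrete.

```python
def sequential_fraction(accesses: list[tuple[int, int]]) -> float:
    """Fraction of bytes that continue directly from the previous
    access's end offset. accesses is a list of (offset, length) pairs;
    the first access can never count as consecutive."""
    total = sum(length for _, length in accesses)
    if total == 0:
        return 0.0
    consecutive = 0
    expected = None  # end offset of the previous access
    for offset, length in accesses:
        if offset == expected:
            consecutive += length
        expected = offset + length
    return consecutive / total

# A "nearly sequential" trace: a small header probe, then a long
# sequential read of the bulk of the file.
trace = [(0, 512)] + [(512 + i * 65536, 65536) for i in range(8)]
frac = sequential_fraction(trace)
print(f"{frac:.1%} of bytes consecutive -> "
      f"{'sequential' if frac >= 0.95 else 'non-sequential'} under the 95% rule")
```

Under the strict definition the header probe makes the trace non-sequential; under the loosened 95% definition it still counts, which is exactly the paper's point about multimedia-style files.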

Loading that Word doc? Word will read and parse the file in one fell swoop. Saving the updated document? Why not just write it out in one go, rather than update the document in place (not sure if this is how Word works, BTW).

See the above paper.

Yeah, this was a bad example, as Word docs are highly structured. SQLite files operate similarly.
