Comment PostgreSQL + PostGIS (Score 5, Informative) 316

At my place of employment we use PostgreSQL and PostGIS extensively for exactly the kind of problems you describe. We recently contributed back to the PostGIS project by extending the TSP solver to support an endpoint different from the starting point. I'm not generally the one writing code like this, but I maintain the servers and I know how much performance can be gained. The PostgreSQL and PostGIS teams also work closely together, with code and improvements from the PostGIS side landing directly in PostgreSQL itself.

We also looked at the MS solutions and found them ridiculously expensive to host and scale for services aimed at businesses with real-life budgets rather than huge corporations. We have tools in use in nearly all of the counties in Wisconsin, handling many requests per day and per second (not allowed to give numbers) on only a few servers.

Personally, stay open source and stick with PostgreSQL. It has a track record of extremely stable systems that can be upgraded with very little downtime as advancements are made. You can tune the internals to get the performance you need, with help from online research, many books, or even consultants such as EnterpriseDB. Good luck with your developers; go with PostgreSQL and you won't look back.
Social Networks

Submission + - Introversion And Solitude Increase Productivity (nytimes.com) 1

bonch writes: Author Susan Cain argues that modern society's focus on charisma and group brainstorming has harmed creativity and productivity by removing the quiet, creative process. 'Research strongly suggests that people are more creative when they enjoy privacy and freedom from interruption. And the most spectacularly creative people in many fields are often introverted, according to studies by the psychologists Mihaly Csikszentmihalyi and Gregory Feist. They’re extroverted enough to exchange and advance ideas, but see themselves as independent and individualistic. They’re not joiners by nature.'

Comment Re:Bundling / wrapping is old news (Score 1) 228

Correction: the cache line size of modern CPUs is 64 bytes. This means that any random RAM access will load 64 bytes (as a single read). The CPU is then capable of extracting 1-, 2-, 4-, 8-, or even 16-byte (SSE2 vector loads/stores) sections of that into registers, as a single operation if the data is aligned to its own size, or as a few micro-instructions if not. This is no different between 32-bit and 64-bit CPUs.
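The arithmetic is easy to check: with 64-byte lines, an access at address a touches line a // 64, and a value aligned to its own size can never straddle two lines. A minimal sketch (the 64-byte line size is an assumption typical of current x86 parts, not universal):

```python
CACHE_LINE = 64  # bytes; typical of modern x86 CPUs (assumed, not universal)

def lines_touched(addr, size):
    """Return how many cache lines an access of `size` bytes at `addr` spans."""
    first = addr // CACHE_LINE
    last = (addr + size - 1) // CACHE_LINE
    return last - first + 1

# A naturally aligned 8-byte load stays within one line...
assert lines_touched(64, 8) == 1
# ...but a misaligned one can straddle two lines, costing extra micro-ops.
assert lines_touched(60, 8) == 2
```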

Comment Re:Bundling / wrapping is old news (Score 3, Informative) 228

It's full of errors, especially the spiel about alignment. In 64-bit mode you don't have to align everything to 64 bits for best performance, only 64-bit-sized values (including memory pointers). The example's 16-bit value actually only needs 16-bit alignment for best performance, which is no different from the 32-bit version of the program.
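The per-type alignment point can be checked directly: each type's required alignment follows its own size, not the pointer width. A small sketch using Python's ctypes (the resulting offsets assume a conventional x86-64 ABI):

```python
import ctypes

# Required alignment tracks the type's own size, not the machine word size.
assert ctypes.alignment(ctypes.c_uint16) == 2  # 16-bit value: 2-byte alignment
assert ctypes.alignment(ctypes.c_uint32) == 4
assert ctypes.alignment(ctypes.c_uint64) == 8  # only 64-bit values need 8 bytes

class Record(ctypes.Structure):
    _fields_ = [("flag", ctypes.c_uint16),       # offset 0
                ("count", ctypes.c_uint32),      # padded to offset 4, not 8
                ("big", ctypes.c_uint64)]        # padded to offset 8

assert Record.count.offset == 4  # 32-bit field aligned to 4 even on 64-bit builds
assert Record.big.offset == 8
assert ctypes.sizeof(Record) == 16
```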

2: The increase in pointer size doesn't explain Windows x64's extra 300MB of memory use. My bet is that it loads both the 64-bit and 32-bit versions of a bunch of libraries in order to support the Windows components that are still 32-bit (as well as any 32-bit software you run).

3: As for saying that a 64-bit version of a program won't be faster: two things actually work in its favour. 64-bit mode exposes more and larger registers, and it guarantees certain instruction set enhancements (SSE2) are present. The latter especially is a huge speedup if you take advantage of it.

Comment Re:Now for something completely different... (Score 1) 627

Interesting, and thanks for the info. I am currently looking at going this exact route with the Asus Transformer Prime TF201, though with SSH and my desktop at home to back me up. I will mostly be browsing and writing emails, though I will also write a fair amount of code in vim, offloading anything processing-heavy, like big compiles, to the desktop. I will be working primarily in Java, Node, and JavaScript, possibly some PHP. After glancing at your blog, it looks like I should consider dual-booting Ubuntu the same way you did. We'll see how it goes, and I'll have to let you know what I decide on. My largest motivation for the switch is the pitiful battery life of my laptop, plus the Tegra 3 beats my laptop's Core 2 in LINPACK. However it goes, I'm looking forward to the challenge.

Comment Re:It'd better happen quick then (Score 1) 311

What you really want is failure rate within first N years of operation, which you can't calculate from the MTBF figures.

Actually, MTBF is the reciprocal of the failure rate during the "constant failure rate" stage of a component's lifetime, which lies between the initial failure stage (aka infant mortality) and the wear-out failure stage. So it really describes the failure rate over the regular service life of the component (typically 1-5 years).
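Since MTBF is the reciprocal of that constant failure rate, you can turn it into an annualized failure rate for the service life. A back-of-the-envelope sketch (the simple hours/MTBF approximation is valid when the period is much shorter than the MTBF; figures are illustrative):

```python
HOURS_PER_YEAR = 8766  # ~365.25 days

def annualized_failure_rate(mtbf_hours):
    """Under a constant failure rate, the fraction of units expected to fail per year."""
    return HOURS_PER_YEAR / mtbf_hours

# A 100,000-hour MTBF means roughly 8.8% of units failing per year
# during the flat part of the bathtub curve.
afr = annualized_failure_rate(100_000)
assert 0.085 < afr < 0.09
```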

Comment Re:It'd better happen quick then (Score 5, Insightful) 311

MTBF is not the lifetime of a single disk; it's the average failure rate across a population of disks. If you have disks with a 100,000-hour MTBF and run 100 of them (whether in a RAID array, a cluster, or 100 individual desktops in a company), then you will replace roughly one failed disk every 1,000 hours (100,000-hour MTBF / 100 disks), or about every 42 days.

It doesn't try to pretend that a single disk lasts 100,000 hours. That's stupid.
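The fleet arithmetic above can be sketched directly (numbers from the comment):

```python
def hours_between_replacements(mtbf_hours, n_units):
    """Expected hours between failures across a fleet of identical units."""
    return mtbf_hours / n_units

# 100 disks at 100,000-hour MTBF: one replacement per 1,000 hours.
gap_hours = hours_between_replacements(100_000, 100)
assert gap_hours == 1000
assert round(gap_hours / 24) == 42  # about 42 days between replacements
```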
