Comment Re:Vendor Hype Orange Alert (Re:hmm) (Score 1) 381

A lot of times, people who don't know about joins do the basic join of select x.a, y.b from x, y where x.c = y.c, not realizing that most SQL engines will take all the records of x and cross them with y, so you will have x.records * y.records loaded in your system; then it goes and removes the non-matches. So O(n^2) in performance, vs. if you do a select x.a, y.b from x left join y on x.c = y.c.

Sorry, but it doesn't work that way. As far as I know, none of the decent SQL engines choke on it, though I'm not sure about Access :P

Also, a lot depends on the size of the dataset and the other parameters in the where clause. Real-life example, with len(r) = ~1M and len(g) = ~20k: select * from core_report r, core_guild g where r.guild_id = g.id and g.id = 7. With this query, Postgres executes it as: scan the core_report_guild_id index, looking for id=7. Then look up g by primary key and join it in a nested loop with loops=1. Without the g.id = 7, it executes as: table scan g, hash it, table scan r and join the two with a hash join. Note that the query planner switched from fetch-by-primary-key, which is O(log n) per row * n rows -> O(n log n), to table scan x2, O(n), but with a much lower actual cost, because walking a BTree isn't cheap. It also ordered things so that only the 20k rows get hashed and copied into the main dataset, not the other way around. That's the advantage of using a proper DBMS.
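To see that a modern engine treats the implicit comma join and the explicit JOIN identically, here's a minimal sketch using Python's built-in sqlite3 (not Postgres, and the table contents are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x (c INTEGER PRIMARY KEY, a TEXT);
    CREATE TABLE y (c INTEGER PRIMARY KEY, b TEXT);
    INSERT INTO x VALUES (1, 'a1'), (2, 'a2'), (3, 'a3');
    INSERT INTO y VALUES (2, 'b2'), (3, 'b3'), (4, 'b4');
""")

implicit = "SELECT x.a, y.b FROM x, y WHERE x.c = y.c"
explicit = "SELECT x.a, y.b FROM x JOIN y ON x.c = y.c"

# Same rows either way; the planner rewrites the implicit form,
# it does not materialize an O(n^2) cross product first.
assert conn.execute(implicit).fetchall() == conn.execute(explicit).fetchall()

# Peek at the plan: a scan of x plus an index search on y's
# primary key, no cross join in sight.
for row in conn.execute("EXPLAIN QUERY PLAN " + implicit):
    print(row)
```

The same exercise against Postgres with EXPLAIN ANALYZE shows the plan switches described above.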

You can pry PostgreSQL from my cold, dead hands. It's just so much easier to do meaningful things in a relational database, and until you hit the limit of db size > largest SSD (it used to be RAM) you can buy, there is absolutely no reason to limit yourself to glorified tuple stores and hash tables. Okay, sometimes ORMs can be slightly too eager to join stuff (causing queries like this one), but that's easily fixed by rewriting the line executing the query. Or just ignore it; even that monstrosity (1 index scan, 1 fetch-by-id loop, 3 full table scans) took only 1 s max - who cares on a homepage/intranet/most websites.

Comment Fixed point numbers? (Score 5, Insightful) 626

Use fixed point numbers? You know, in financial apps you never store things as floating point; use cents or 1/1000ths of a dollar instead!

Computers don't suck at math; those programmers do. You can get arbitrary-precision mathematics on even 8-bit processors, and most of the time the compiler will figure everything out for you just fine. If you really have to use a 24-bit counter with 0.1 s precision, you *know* that your timer will wrap around every 466 hours, so just issue a warning to reboot every 10 days, or auto-reboot when it overflows.
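A sketch of both points - integer cents for money, and the wrap-around arithmetic - with made-up values:

```python
# Money as integer cents: exact, no float rounding surprises.
price_cents = 1999                 # $19.99
total_cents = 3 * price_cents
dollars, cents = divmod(total_cents, 100)
print(f"${dollars}.{cents:02d}")   # $59.97

# A 24-bit counter ticking at 0.1 s wraps after 2**24 ticks.
wrap_hours = (2**24 * 0.1) / 3600
print(round(wrap_hours))           # 466 hours, so ~19 days max uptime
```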

Comment Re:WiFi in general is going to die (Score 1) 259

Parent meant a single-frequency network with single-channel multicast - several transmitters simultaneously send the same signal over the same frequency channel. It's way more efficient than what we have now, where adjacent senders cannot use the same frequency because it causes interference. For example, on FM here, 101.20 and 101.50 carry the same channel but from different towers - one covers North Holland, the other South Holland.

Comment ONE THOUSAND?! (Score 3, Insightful) 404

Let's have your grandma walk down the street, get mugged, break her hip and be traumatized. How many CCTVs would you be willing to put up to reduce the chances of that ever happening again? This privacy thing is getting incoherent: when you're in public, you're in public, unless someone has CCTVs pointing into your house. Appreciate the fact that if someone knifed you in the street, you'd have a better chance of catching that person.

Comment Re:Facebook's application is poorly coded (Score 1) 370

This really makes me doubt their ability to benchmark / scale things properly. In the article, he makes it sound like Facebook is completely CPU-bound, and yet he's slamming the latest generation of server processors from Intel and AMD?

From all the benchmarks I've seen, like AnandTech's, and from personal experience, web servers scale pretty much linearly with clock speed * IPC and the number of cores present in the system. The addition of HyperThreading is good for another ~20% throughput.

What they need to do is look at their setup and make sure there isn't another bottleneck - have they spawned enough threads and processes to utilize the system completely? PHP may be "thread safe", but that usually means there's a huge lock around everything that could be dangerous, and a single process refuses to use more than 100% of one core, so serve it with apache-prefork + a load balancer + a separate static file server. Same thing for Python: fork off more copies via mod_wsgi even in threaded mode, as many as you can afford within the available RAM, or the Global Interpreter Lock will limit CPU usage to one core.
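For the mod_wsgi case, a hedged sketch of what "fork off more copies" looks like in the Apache config - app name, paths and counts are all made up; the point is daemon mode with multiple processes, which sidesteps the GIL:

```apache
# Hypothetical daemon-mode setup: 8 worker processes dodge the GIL for
# CPU-bound work; a few threads per process overlap I/O waits.
WSGIDaemonProcess myapp processes=8 threads=4 maximum-requests=1000
WSGIScriptAlias / /var/www/myapp/app.wsgi process-group=myapp
```

Size processes to fit in RAM; each one is a full copy of the interpreter plus your app.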

If you have set up the environment well and there are no other bottlenecks, web services scale perfectly with the available CPU power. And that has increased by an insane amount from the Xeon 54xx to the 55xx: it almost doubled performance in most server apps (OLTP, VM), and even the PHP test case, which failed to scale to 16 cores in a single process, was good for +39%.

Comment Re:Joking aside... (Score 1) 724

It's an older chipset, but Intel X38-based motherboards accept ECC DIMMs. The price premium? 5 euros per 2 GB; I've installed 4 of them in this system. As for something more recent, the X58 Nehalem boards are ready for ECC, but to use it you need to install a Xeon.

Comment Bad defaults (Score 1) 830

This is a classic case of bad defaults. Yes, you will always have a trade-off between performance and data safety, but going for either extreme is bad usability!

People expect that, without explicit syncing, their data is safe after a short period of time, measured in seconds. The old defaults were: 5 seconds in ext3; in NTFS, metadata is always journaled and data is flushed ASAP, with no hard guarantees. In practice, people don't lose huge amounts of work.

What happened is that the ext4 team thought waiting up to a *minute* to reorder writes was a good idea - going for the extreme performance end.

My question is: WHY? Does it really matter to home users that KDE or Firefox starts 0.005 seconds faster? Apparently the wait period is long enough to have real-life consequences even with a limited number of testers; imagine what happens when it gets rolled out to everyone. On servers, it's redundant. Data is worth much, much more than anything you hope to gain, and SSDs, battery-backed write caches on controllers and SANs have taken care of fsync()s already. If you run databases, those sync their disks anyway, so you just traded a huge chunk of reliability for "performance" on stuff like /home, /var/mail and /etc.
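A minimal sketch of what "those sync their disks anyway" means at the application level - the file name is invented; fsync is the only durability promise an application actually gets from the filesystem:

```python
import os
import tempfile

# Hypothetical critical file: without the fsync, this write could sit
# in the page cache for the filesystem's whole commit window.
path = os.path.join(tempfile.mkdtemp(), "important.txt")
with open(path, "w") as f:
    f.write("data worth more than 0.005 s of startup time\n")
    f.flush()              # push Python's userspace buffer to the kernel
    os.fsync(f.fileno())   # ask the kernel to put it on the platter now
```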

The "solution" of mounting the volume with the sync-everything flag is just stupid. Yay, let's go for the other extreme and sync every bit moving to the disk. Isn't it already obvious that either extreme is silly?

Just set innodb^W ext4_flush_log_at_trx_commit to something less stupid already; flushing once every second shouldn't kill any disk. Copy Microsoft for config options:
* Disable flush metadata on write -> "This setting improves disk performance, but a power outage or equipment failure might result in data loss".
* Enable "advanced performance" disk write cache -> "Recommended only for disks with a battery backup power supply" etc etc.
* Enable cache stuff in RAM for 60s -> "Just don't do it okay, it's stupid."
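For the flush interval itself, ext4 already exposes the knob; a hypothetical /etc/fstab line (device and mount point invented):

```
# commit=N sets the ext4 journal commit interval in seconds; 5 matches
# the old ext3 default argued for above.
UUID=...  /home  ext4  defaults,commit=5  0  2
```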

Power

Submission + - Superefficient Solar Cells from Silicon Nanocrystals 1

chinmay7 writes: "Researchers at the National Renewable Energy Laboratory (NREL), have shown that silicon nanocrystals can produce two or three electrons per photon of high-energy (blue and UV) sunlight. The small size of nanoscale crystals results in the conversion of this energy into electrons instead of heat. Solar cells made of silicon nanocrystals could theoretically reach more than 40% efficiency, compared to 20% efficiency of the best conventional silicon solar cells.
An article in the Tech Review goes into more detail."
Power

Submission + - Thorium the Key to Non-Proliferation?

P3NIS_CLEAVER writes: Nuclear energy has been proposed as an alternative to coal power plants, which generate carbon dioxide and emit mercury. As we are seeing now in Iran, the desire for nuclear energy has created a gray area that places peaceful civilian power generation at odds with nuclear non-proliferation. An article at Resource Investor claims that thorium reactors can replace existing reactors without creating isotopes that may be used in nuclear weapons.
Movies

Submission + - Matt Groening On Futurama and Simpsons Movie

keenada writes: "Though The Simpsons has declined in popularity in recent years, it still has a cult and popular following worldwide. Matt Groening (rhymes with raining) sits down with Crave to discuss his new movie, and the future of Futurama."
Businesses

Submission + - New technique for recycling PCBs

MattSparkes writes: "PCBs from discarded computers, cellphones and other devices could be recycled less harmfully using a technique developed by researchers in China. Unlike current methods, it can be used to reclaim metals such as copper without releasing toxic fumes into the air. Only a small number of PCBs are currently recycled."
The Almighty Buck

Submission + - Dow Jones Plunge Fueled by Overwhelmed Computers

cloudscout writes: "The Dow Jones Industrial Average dropped over 400 points today. While there were various valid financial reasons for such a decline, some of the blame is being placed on computer systems that couldn't keep up with the abnormally high volume at the New York Stock Exchange and the resulting tremor as they switched over to a backup system. In other words, Dow Jones got Slashdotted."
Businesses

Submission + - Stock Market Drop Blamed on Computer Error

WebHostingGuy writes: "Today the Dow Jones Industrial Average dropped a little over 3% in value. Stock market swings come and go, but it is interesting that this sudden drop was the result of a computer glitch. According to MSNBC, the computers were not properly calculating trades. This led to a switch to a backup system, which caused a delay of several seconds, which impacted the Dow. Even now, after the close of the market, spokesmen for the NYSE Group Inc. could not confirm whether all closing share prices were even valid."
