Comment Re:Too late (Score 3, Informative) 105

That's how to do it. It has to be managed in a retry loop, either on the transaction side or in the application, and that loop is remarkably difficult for most programmers to get right. For example, if a trigger is later added to the target table that causes an error in some other table, most loop-based upsert code I've seen will spin forever.
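
For reference, roughly the shape of a retry loop that avoids that trap, modeled on the upsert example in the PostgreSQL documentation; the table and column names here are hypothetical:

CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- key not present, so try to insert it; a concurrent insert
        -- of the same key will raise a unique-key failure
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- do nothing, loop back and try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

The important detail is that the handler traps only unique_violation; a blanket WHEN OTHERS handler is exactly what turns an unrelated trigger error into the infinite loop described above.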

Comment Re:Not rocket science (Score 1) 244

Postgres HS/SR *is* the master/slave replication feature. It's better than anything mysql offers as long as you are ok with the main caveat: it replicates the entire database cluster, and the slaves must be the same version. The only separate tooling you'd need is for things like managing failover, if the built-in tools for that aren't working for you. Logical replication handles all other cases. BDR (http://bdr-project.org/docs/stable/index.html) layers on top of that to provide clustering. It's not quite there yet; most, but not all, of the necessary foundation has been moved into the core product, which can make installation and setup a bear. But the underlying infrastructure is really well designed and will ultimately compete with commercial solutions both in terms of power and ease of use.
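
As a hedged illustration of the built-in tooling, assuming a primary/standby pair is already streaming (the view and functions below are stock PostgreSQL):

-- on the primary: one row per connected standby
SELECT client_addr, state, sync_state FROM pg_stat_replication;

-- on the standby: confirm it is in recovery and see how far behind it is
SELECT pg_is_in_recovery(), pg_last_xact_replay_timestamp();

Failover itself is little more than promoting the standby (pg_ctl promote); deciding when to do that automatically is where the third-party cluster managers earn their keep.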

Comment Sarah, the LKML SJW (Score 5, Interesting) 928

I was curious and did some research on this. I know Linus and some of the other guys can be a lot to take. However, after reading a lot of the posts Sarah made complaining about people and things, I started to get the feeling she's attention-seeking and disruptive. She constantly brings up gender in irrelevant ways and appears to be the self-styled 'girl kernel developer'. She also punches below the belt. For example:

"*Snort*. Perhaps we haven't interacted very often, but I have never seen you be nice in person at KS. Well, there was that one time you came to me and very quietly explained you had a problem with your USB 3.0 ports, but you came off as "scared to talk to a girl kernel developer" more than "I'm trying to be polite"."

Linus tends to be very direct, as are a lot of people in important open source communities. The key people are very busy and get frustrated when others display various kinds of incompetence. In fact, it appears to me that they were treating Sarah very gently precisely *because* she was a girl. Or maybe it was the intel.com email address -- who knows.

Comment linus was right (Score 2, Interesting) 757

Linus was right (I didn't agree with him when he wrote that, but I do now). Jeff doesn't answer any of the major issues with C++: lack of a standard ABI (preventing interop with other languages), an insanely complex grammar, years of paradigm shifts, action at a distance, lack of abstraction away from the machine, etc. Java/C# have completely displaced it in the business world and C still dominates systems programming. C++ would already be obsolete except that it caught on big with the gaming industry -- real-time games can't tolerate GC languages, and C is considered too baroque by many developers.

Comment Re:HDD is fine for .. 98%? (Score 1) 256

Let's be honest here -- outside of a small percentage of users doing raw uncompressed video work, HDDs are more than fast enough. Drives and OSes both do heavy caching of frequently used objects, which reduces seek/startup time differences to a very small amount. The biggest difference is at startup, and even there... do those 5, 10, 15 extra seconds really matter that much? How often are you booting? Or even resuming from hibernation, if that's your thing?

As to power, idle is now around 5 or 6 watts and standby around 1. Even in a laptop, the difference in power use between HDD and SSD is not going to make or break the deal. Your screen, however, is another story.

That's silly. Anyone who does anything on their computer besides browsing the net and email will quickly observe that the move from slow to fast storage is the single greatest performance improvement in the history of the computer. It's very simple: if you are writing any non-trivial amount of data, or you are reading from datasets that exceed unreserved RAM (gaming being a very typical example), then the hard drive is the primary performance bottleneck in the computer.

Comment Re:duh (Score 1) 256

disks (and to some extent tape) will always have scaling advantages over litho-fabed storage

I could not disagree more. Disks spin and have complicated assemblies and pricier raw materials. The main cost inputs to SSDs are capital investments (which amortize to zero over time) and energy. There is a limit to flash density (which AIUI we are already close to), but flash is already denser than hard drives. Tapes have an advantage in that they are not active and so are very cheap for offline data. Disk drives, OTOH, have no fundamental advantages over flash -- they are being rapidly displaced in user-facing devices. Warm storage (NAS etc.), where SSD performance advantages don't come into play, will take longer -- maybe 3-4 years and it's done.

Comment Re:We live like kings and queens already (Score 1) 256

Moreover, storage is specializing. Desktop/portable computing devices of all types are only going to be sold with SSDs Real Soon Now (in many cases this has already happened). Hard drive storage is primarily going to be used in dedicated storage appliances. This has already happened to a significant degree in the enterprise, depending on how progressive the IT dept is.

Comment not that simple... (Score 1) 48

You can have both. Let's take transaction-time performance, for example -- bitcoin does not provide fast resolution (compared to, say, the visa network), but nothing is keeping a transaction broker from layering on top and providing those services. A 'bitcoin visa' payment service would then provide near-instant times, allow for chargebacks, etc. by absorbing the risk through fees and making a profit on the difference.

Comment Re:The consumer trend seems to be clear (Score 1) 263

I used to say the same thing, but unfortunately it's not so clear cut. The Intel drives which post such great random I/O numbers only do so because they are configured in write-back cache mode with a volatile cache. The X25-M in write-through mode can post about 50 IOPS writing -- I'm not kidding. Also, wear and tear on the drive is much higher. IOW, the Intel controller does not perform magic -- they cheated. The X25-E is configured the same way -- the performance drop for going to write-through is not as steep (you can eke 1000-ish IOPS out of a drive), but the drives are expensive and the math doesn't work out all that well. The basic problem is that flash is plain and simple lousy at random writing, just like hard drives. With a small NV cache on the drive, things could be completely different (and some boutique manufacturers IIRC already offer this), but until you see Intel, Seagate, or WD on a drive with an NV guarantee at a semi-reasonable price, you will not see serious intrusion into the enterprise.

Comment Re:Our approach (Score 1) 244

For the cases where you can't do the query directly, we push the logic into a function call and dyna-sql it. (To hide the internals, it's actually mostly function calls over the low-security interfaces.) We also wrote a libpq wrapper that lets us send and receive extremely complicated structures over the libpq protocol efficiently (here, if you're curious: http://libpqtypes.esilo.com/).
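
A minimal sketch of the dyna-sql pattern; the table, function, and parameter names below are hypothetical, and the real functions are much hairier:

CREATE TABLE items (id INT PRIMARY KEY, name TEXT, category TEXT);

CREATE FUNCTION search_items(p_col TEXT, p_value TEXT)
RETURNS SETOF items AS
$$
BEGIN
    -- %I quotes the dynamically chosen column as an identifier;
    -- the value itself goes in as a bound parameter via USING
    RETURN QUERY EXECUTE format(
        'SELECT * FROM items WHERE %I = $1', p_col)
        USING p_value;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

SECURITY DEFINER is what lets the low-security interface call the function without holding any direct rights on the underlying tables.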

Comment Our approach (Score 1) 244

We use PostgreSQL. We expose the libpq protocol, on a non-default port, directly to the internet through pgbouncer. What we did:

*) Modify pgbouncer to only accept extended-protocol (parameterized) queries
*) Auto-generate the list of queries used by the app and store it in a whitelist
*) Block everything except the auth function until the connection is authenticated, and anything outside the whitelist afterwards (sketched below)

We have had zero problems. Curious what you think.
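
A rough sketch of the function-lockdown piece, assuming an application role named webuser and an auth function named login(); both names are hypothetical, and the query whitelist itself is enforced inside the modified pgbouncer rather than in SQL:

-- functions are executable by PUBLIC by default, so revoke that broadly,
-- for existing functions and for ones created later
REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA public FROM PUBLIC;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;

-- the only call the app role may make before authenticating
GRANT EXECUTE ON FUNCTION login(text, text) TO webuser;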
