Comment Re:But streaming is easy! (Score 1) 200

They are potentially using more of their bandwidth that way -- by sending streams that may never be watched. It may cost Hulu more to show you the latest episode versus an older show. Still, you could "pin" a few shows in advance, which would get them more overall views, since they know some users cannot always stream.

They also cannot count show watches or ad views that way... I suppose they could pre-send the ads with the content to your cache, and then send your ad-watch/skip data back when you reconnect. But if you cannot "click" the ad, some advertisers may refuse to participate.

Comment But streaming is easy! (Score 3, Interesting) 200

Yes, downloading videos in advance over a wired or local wireless network does save you precious mobile bandwidth when you view the content later.

But streaming is easy. The consumer does not have to pre-decide what they want to watch if they stream: they may not know whether they want a TED talk or the final Colbert Report while "roaming".

With Google Play, I can "pin" a show on wifi and watch it later, assuming I want to watch it later. It's still DRM protected. The bandwidth-savvy consumer would like to download more content and play it back at any time, but are those consumers even the majority anymore?

Comment Re:Please specify a better scenario (Score 1) 272

Instead of "sharding" (splitting customers across multiple copies of the database), you should try a NoSQL solution as the first layer to handle the flood of writes. Then a recurring process can query the data in your NoSQL object store (by timestamp) and aggregate it into an SQL database for reporting. You could archive those processed entries (or wait until they age out) to another object store for your "data warehouse" -- basically just an archive in case you need to do different aggregate reporting in the future (storage size permitting, of course).
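The two-layer idea above can be sketched in a few lines of Python. This is only an illustration of the shape of the pipeline, not any particular product: a plain list stands in for the NoSQL write layer, an in-memory SQLite database stands in for the reporting SQL DB, and the event fields and table layout are my own invented examples.

```python
import sqlite3
import time
from collections import defaultdict

# First layer stand-in: a fast, schema-less append-only store.
# Each entry is (timestamp, key, value).
nosql_store = []

def ingest(key, value):
    """Absorb the raw write as fast as possible; no SQL involved."""
    nosql_store.append((time.time(), key, value))

def aggregate(conn, since):
    """Recurring job: pull raw entries by timestamp, roll them up,
    and write the much smaller aggregate into the SQL reporting DB."""
    totals = defaultdict(float)
    for ts, key, value in nosql_store:
        if ts >= since:
            totals[key] += value
    with conn:
        conn.executemany(
            "INSERT INTO report (key, total) VALUES (?, ?)",
            totals.items(),
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (key TEXT PRIMARY KEY, total REAL)")
ingest("sensor-a", 1.5)
ingest("sensor-a", 2.5)
ingest("sensor-b", 1.0)
aggregate(conn, since=0)  # many raw writes become two SQL rows
```

The point is that the SQL database only ever sees the aggregated rows, so its write load is decoupled from the raw request volume.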

I must ask, do you really need to store each full piece of information written by these clients at such a high volume?

Depending on your use of the data, you could even just store the results in memory for X hours/minutes, then aggregate-process them and write the results to your SQL DB. A single DB with many application servers would be fine under those conditions, with writes only every X hours/minutes. (You are probably already flat-file logging the incoming requests; that is your archive if you *really* need to go back.) If you cannot afford to lose that memory when an app server dies, solutions like EhCache (Java) will persist the cache to disk in case of hardware/software failure.
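A minimal sketch of that in-memory window, assuming the simplest possible case (counting hits per key, flushed to SQLite as the SQL stand-in; in production the flush would run on a timer and, as noted above, a crash loses at most one unflushed window unless the cache is persisted):

```python
import sqlite3
import threading

class WindowBuffer:
    """Hold raw hits in memory; flush() turns a whole window of traffic
    into one small batch of SQL writes. Table/column names are illustrative."""

    def __init__(self, conn):
        self.conn = conn
        self.lock = threading.Lock()   # many app threads may record at once
        self.counts = {}

    def record(self, key):
        with self.lock:
            self.counts[key] = self.counts.get(key, 0) + 1

    def flush(self):
        # Swap the buffer out under the lock, then write without holding it.
        with self.lock:
            batch, self.counts = self.counts, {}
        with self.conn:
            self.conn.executemany(
                "INSERT INTO hits (key, n) VALUES (?, ?)", batch.items())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (key TEXT, n INTEGER)")
buf = WindowBuffer(conn)
buf.record("page-a"); buf.record("page-a"); buf.record("page-b")
buf.flush()  # one small SQL write instead of three
```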

Comment Re:Use PostgreSQL (Score 1) 272

Was your 5000 tps achieved using normal insert/update/delete statements or using the COPY statement? (I guess it's a form of batching: you issue large COPY statements instead of many INSERT statements, if your application can batch its data that way.)
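For context, PostgreSQL's COPY ingests rows from a tab-separated text stream, so the batching amounts to serializing many rows into one buffer and handing it over in a single statement. A small sketch of that serialization step (the `events` table, column names, and minimal escaping are my assumptions; with psycopg2 you would feed the buffer to `cursor.copy_from()`):

```python
import io

def rows_to_copy_buffer(rows):
    """Serialize rows into the tab-separated text format COPY consumes.
    Escaping here covers only backslash/tab/newline -- assumes simple data."""
    buf = io.StringIO()
    for row in rows:
        fields = []
        for v in row:
            if v is None:
                fields.append(r"\N")   # COPY's NULL marker
                continue
            s = str(v)
            s = s.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n")
            fields.append(s)
        buf.write("\t".join(fields) + "\n")
    buf.seek(0)
    return buf

buf = rows_to_copy_buffer([(1, "alpha"), (2, None)])
# With a live connection, roughly:
#   cur.copy_from(buf, "events", columns=("id", "name"))
```

One round trip carrying thousands of rows is where the throughput difference over per-row INSERTs comes from.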

Also, was your hstore experience with 9.3+, or in which version(s) did you hit problems?

Comment Crowd Funding (Score 1) 480

Do you see crowd funding as an economically viable means of supporting the development of Free Software and even Free Hardware?

Have any small projects grown to a critical mass where only a few funding rounds bootstrapped them into having a sustainable Libre/Free product?
