Comment Re:Subset holy wars; STL without exceptions (Score 1) 793

So write that logic into your allocator. Generally there are two options when your allocator comes back with NULL: retry or abort, and writing either into your allocator is trivial. That's what's so good about C++, btw. (You're still incurring no runtime overhead aside from whatever extra cycles you're burning in your allocator on these checks, which you needed anyway.)
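For the "retry or abort" option, a minimal sketch -- the allocator name and the retry count of three are mine, purely illustrative:

```cpp
#include <cstddef>
#include <cstdlib>

// Minimal STL-compatible allocator that handles NULL from malloc itself:
// retry a few times, then abort. No exception machinery involved.
template <typename T>
struct RetryOrAbortAllocator {
    using value_type = T;

    RetryOrAbortAllocator() = default;
    template <typename U>
    RetryOrAbortAllocator(const RetryOrAbortAllocator<U>&) {}

    T* allocate(std::size_t n) {
        for (int attempt = 0; attempt < 3; ++attempt) {
            if (void* p = std::malloc(n * sizeof(T)))
                return static_cast<T*>(p);
            // Retry path: release caches, wait, or log before trying again.
        }
        std::abort();  // Retries exhausted: fail fast instead of throwing.
    }

    void deallocate(T* p, std::size_t) { std::free(p); }
};

template <typename T, typename U>
bool operator==(const RetryOrAbortAllocator<T>&, const RetryOrAbortAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const RetryOrAbortAllocator<T>&, const RetryOrAbortAllocator<U>&) { return false; }
```

Drop it into a container -- std::vector<int, RetryOrAbortAllocator<int>> v; -- and your failure policy travels with the container, at zero cost on the happy path.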

Comment Re:Eh? This is how Skype works? (Score 3, Informative) 396

having the world's largest mobile phone manufacturer, Nokia, by the balls?

HAHAHAHAHAHAHAHAHAHA, Microsoft + Nokia, taking the world by storm! Windows Phones everywhere! HAHAHAHAHA! They're gonna expand that 0.41 percent market share into something important real soon now!

Anyway, that's all I -- HAHAHAHAHAHAHAHAHA

Comment Re:Not worth it. (Score 3, Informative) 260

This.

With such a wide range of storage sizes, you're going to have serious trouble setting up any kind of redundant encoding. To mirror a segment of data (or the moral equivalent with RAID-5 or RAID-6), you need segments of the same size, and those segments can be no larger than the smallest drive. That means the larger drives have to store multiple segments, but the segments have to be arranged such that a failure of one of the large drives doesn't take the whole RAID down. If the drives can't be bisected -- that is, divided into two piles of the same total size -- this is impossible, and the fact that your range runs from 0.1 TB to 3 TB suggests that may be the case.
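To make the "bisected" point concrete, it's the classic subset-sum check; a quick sketch, with made-up drive sizes for illustration:

```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// Can the drives be split into two piles of equal total size?
// Drive sizes (in GB) are hypothetical examples, not from the post above.
bool canBisect(const std::vector<int64_t>& sizesGB) {
    int64_t total = std::accumulate(sizesGB.begin(), sizesGB.end(), int64_t{0});
    if (total % 2 != 0) return false;        // odd total: no even split
    const int64_t half = total / 2;
    std::vector<bool> reachable(half + 1, false);
    reachable[0] = true;                     // the empty pile sums to zero
    for (int64_t s : sizesGB)
        for (int64_t t = half; t >= s; --t)  // walk down so each drive is used once
            if (reachable[t - s]) reachable[t] = true;
    return reachable[half];
}

// canBisect({3000, 1500, 1000, 500, 250, 100}) == false: 6350 GB total,
// and no subset of those sizes reaches the 3175 GB halfway mark.
```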

Think about it -- it's probably going to take most, if not all, of those smaller drives to "mirror" the larger drive to make it redundant (and mirroring is the best you can do with just two drives). But having one side of the mirror spread across nine drives makes failure laughably likely -- if each drive has, say, a 5% chance of dying in a given year, that side fails with probability 1 - 0.95^9, roughly 37% -- to the point where you're paying performance penalties for nothing.

Your alternative is to use a JBOD setup and have just contiguous space across all of the disks. This is the same problem, except when a drive goes you lose some random segment of data. That's acceptable for two or three drives in scratch storage, but you don't want to actually store things on that.

Make no mistake -- those drives are going to die.

Trust me on this; don't go down this road. Your actual options are to either pair up the disks as best you can, supplementing with strategic purchases, and make two or three independent RAIDs (maybe even RAIDing those together, though it'll be painful), or just write the whole thing off, put disks in where you have obvious candidates in your hardware, and donate the rest.

Comment Re:Mr. Wall, please sit down... (Score 5, Insightful) 577

That's actually one of Google's defenses: they didn't copy the entire Java API, just a portion of it. So no -- if the ruling goes in Oracle's favor (which is unlikely but not impossible), you can't get away with fair use.

This is really, really scary for open source and GNU-like projects -- it's an attempt by a corporation to define copyright law in a way that lets big business completely shut down the academic "free exchange" culture once and for all.

This is serious, guys.

Comment Re:GPU programming is a nightmare. (Score 5, Insightful) 57

Half of our department's research sits directly on CUDA now, and I haven't really had this experience at all. CUDA is as standard as you can get for NVIDIA architectures -- ditto OpenCL for AMD. The problem with trying to abstract that is the same problem as trying to use something higher-level than C: you're targeting an accelerator meant to take computational load, not a general-purpose computer. It's very much systems programming.

I'm honestly not really sure how much more abstract you could make it -- memory management is required because it's a fact of the hardware: the GPU is across a bus, and your compiler (or language) doesn't know more about your data semantics than you do. Pipelining and cache management are already facts of life in HPC, and I haven't seen anything nutso you have to do to support proper instruction flow on NVIDIA cards (although I've mostly just targeted Fermi).
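For anyone wondering what that memory management actually looks like, here's roughly all there is to it -- staging data across the bus by hand. A minimal sketch; the kernel, sizes, and launch configuration are illustrative, not from any particular project:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: scale every element of x by a.
__global__ void scale(float* x, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float* host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));                              // allocate on the card
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice); // cross the bus

    scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);                    // run on the GPU

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost); // and back
    cudaFree(dev);
    std::printf("%f\n", host[0]);  // prints 2.000000
    delete[] host;
    return 0;
}
```

The explicit cudaMalloc/cudaMemcpy pairs are the "memory management" -- the compiler can't hide them because only you know when the data actually needs to move.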

Comment Re:Thanks gcc! (Score 2) 192

This is more handwaving. Let me say it again.

GCC's license has no effect on your source code or your finished binary. It doesn't matter if GCC is GPLv2, GPLv3, or GPLv9. It has no effect. It doesn't matter. It is exactly the same as if it were a proprietary product that you purchased. Better, in a lot of ways, but in no way worse. It doesn't change or affect your license on your final product in any way, and it doesn't require a "legal team" to ascertain that.

LLVM's development has nothing to do with licensing. It started as a research project and continued partly as a reaction to GCC's relatively tangled, heavily compartmentalized code base. GCC can get away with that because it's good, but there's plenty of room for competitors, of course.

Companies following the procedures you describe are shooting themselves in the foot. Are you implying that when purchasing a product for integration into your toolchain there aren't licensing issues? Does your company just buy the first thing on a Google search without considering how it affects your product?

And regarding mixing GPL 2/3 -- again, that's completely irrelevant to the discussion. Your product is not GCC, nor is your code being "incorporated" with it.

Typically this attitude is the result of smear campaigns by Microsoft salespeople, pitched at upper management, against vague licensing boogeymen. Is that the case here?

Comment Re:Thanks gcc! (Score 5, Interesting) 192

This is just fearmongering. It's not complicated at all. If you don't hook GCC's (internal) intermediate code generation to run some custom process over it, then you are covered by the compilation exemption.

Configuring your build to output GCC intermediate, retain that output, modify it with an external tool, and resume the build with the modified intermediate code is not something that will happen by accident. The implications of GCC being GPLv3 are, exactly, none.

FreeBSD's philosophical objections to GPLv3 are well known and they have the right to maintain those objections, but that has little bearing on GCC's use for a proprietary end product.

I would be interested to hear about your build process that you feel is likely to accidentally create a non-exempt compilation. Do you have an example?

Comment Re:Thanks gcc! (Score 5, Informative) 192

Deliberate misinformation. You are free, of course, to do whatever you want with binaries produced by GCC. GCC's license is completely irrelevant unless you're modifying or extending GCC itself.

Nice try, though.

Comment Re:I still don't think..... (Score 1) 249

I'm going to speak frankly here. Ad-supported video streams are, by definition, dependent upon actually showing ads. If you put the logic for that on the client side, then about an hour after push there's going to be a Firefox addon that removes the ads, greatly reducing the value of your particular advertising space.

Video stream ad integration has to happen server-side. You'll see more pressure here as more content hits the streaming model, and particularly as major-network content moves to the Web.
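If you're wondering what "server-side" means in practice: the server splices ad segments into the playlist before the client ever sees it. A hedged sketch -- the function and segment URIs are hypothetical, not any real streaming API:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Server-side ad stitching, conceptually: interleave ad segments into the
// content segment list so the client receives one seamless playlist and
// has nothing to strip out.
std::vector<std::string> stitch(const std::vector<std::string>& content,
                                const std::vector<std::string>& adBreak,
                                std::size_t breakEvery) {
    std::vector<std::string> out;
    for (std::size_t i = 0; i < content.size(); ++i) {
        if (i > 0 && i % breakEvery == 0)
            out.insert(out.end(), adBreak.begin(), adBreak.end());
        out.push_back(content[i]);
    }
    return out;  // ads are now indistinguishable parts of the stream
}
```

Since the ad segments arrive as ordinary video chunks in the same stream, a client-side addon has no marker to key on -- which is exactly the point.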

Comment Re:Good (Score 1) 299

Which "couple hundred million"? The, as you imply, current top? But the very premise of the article is the "top" results are poisoned by SEO pages. If you use that as your standard set, then you're canonizing all of those obviously content-free sites -- already significantly reducing the information content of your supposedly comprehensive set.

And what about ambiguity? I can google "cream" and my top result is a band, not a dairy product. Which one is more informative? Which one is more relevant? How about if the band had half as many fans? Twice as many? What about in twenty years when they're much less popular? Who decides that? Google employees? How? By counting links to the band?

Beyond that, you're suggesting there will never be a new popular search topic? Suppose a new plague hits. Search terms spike and new literature is published. A network-based search engine (present-day Google) can respond very rapidly. A search engine that puts new topics on a "TODO" list, as you suggest, will be left in the dust.

You've already outlined this, though you may not have realized it -- you state outright that the basis of such a "network-free" search engine should be the results they've built by being a networked search engine. How do you think new topics could reach consensus without that same treatment?

Put another way, the whole point of search engines was to remove the need for manual human classification, since there's simply too much information out there for that to be practical. Remember the web directories of the 90s?
