So how's that XBox handheld working out for you?
Having the world's largest mobile phone manufacturer, Nokia, by the balls?
HAHAHAHAHAHAHAHAHAHA, Microsoft + Nokia, taking the world by storm! Windows Phones everywhere! HAHAHAHAHA! They're gonna expand that 0.41 percent market share into something important real soon now!
Anyway, that's all I -- HAHAHAHAHAHAHAHAHA
With such a wide range of storage sizes, you're going to have serious trouble setting up any kind of redundant encoding. To mirror a segment of data (or the moral equivalent with RAID-5 or RAID-6) you need segments of the same size; those segments can be no larger than the smallest drive. That means larger drives have to store multiple segments, and the segments have to be arranged so that a failure of one of the large drives doesn't take the whole RAID down. If the drives can't be bisected -- that is, divided into two piles of the same total size -- this is impossible, and with a range of sizes that wide, a clean bisection is unlikely.
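To make the bisection question concrete, here's a quick sketch (in Python, with made-up drive sizes in GB) of the check itself -- it's the classic partition problem, solved with a subset-sum pass:

```python
def can_bisect(sizes):
    """Return True if the drives can be split into two piles of equal total size.

    Classic subset-sum / partition check: look for a subset whose capacity
    sums to exactly half the total.
    """
    total = sum(sizes)
    if total % 2:
        return False  # an odd total can never split evenly
    target = total // 2
    reachable = {0}  # subset sums buildable so far
    for s in sizes:
        reachable |= {r + s for r in reachable if r + s <= target}
    return target in reachable

# Hypothetical examples:
print(can_bisect([500, 500, 1000, 2000]))  # True: 2000 vs 500+500+1000
print(can_bisect([320, 500, 750, 2000]))   # False: no subset hits half of 3570
```

With a grab bag of mismatched sizes, the second case is the typical outcome.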
Think about it -- it's probably going to take most-to-all of those smaller drives to "mirror" the larger drive to make it redundant (and mirroring is the best you can do with just two drives). But having one side of the mirror spread across 9 drives makes failure laughably likely, to the point where you're paying performance penalties for nothing.
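The failure math backs that up. Assuming (purely for illustration) a 5% annual failure rate per drive, the chance that at least one drive in a 9-drive mirror side dies in a year works out like this:

```python
# Illustrative, assumed numbers: per-drive annual failure rate, drives per side.
p_fail = 0.05   # hypothetical 5% annual failure rate for one drive
drives = 9      # one mirror side spread across 9 small drives

# P(at least one of n drives fails) = 1 - P(all n survive)
p_side_fails = 1 - (1 - p_fail) ** drives
print(f"single drive: {p_fail:.1%}, 9-drive side: {p_side_fails:.1%}")
# single drive: 5.0%, 9-drive side: 37.0%
```

One side of the mirror is now roughly seven times as likely to lose a drive as a single disk would be, and you paid the mirroring overhead to get there.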
Your alternative is a JBOD setup: just contiguous space across all of the disks. That has the same problem, except that when a drive goes you lose some random segment of data. That's acceptable for two or three drives of scratch storage, but you don't want to actually store anything important that way.
Make no mistake -- those drives are going to die.
Trust me on this; don't go down this road. Your actual options are to either pair up the disks as best you can, supplementing with strategic purchases, and make 2-3 independent RAIDs (maybe even RAIDing those, though it'll be painful), or just write the whole thing off, use disks where you have obvious candidates in your hardware, and donate the rest.
That's actually one of Google's defenses: they didn't copy the entire Java API, just a portion of it. So no, if the ruling goes in Oracle's favor (which is unlikely but not impossible), you can't get away with fair use.
This is really, really scary for open source and GNU-like projects -- it's an attempt by a corporation to define copyright law in a way that lets big business completely shut down the academic "free exchange" culture once and for all.
This is serious, guys.
This would hold water if Microsoft weren't a convicted monopolist.
They did some things right -- they gambled on backwards compatibility at the expense of efficiency and won big-time. But they pulled a lot of dirty tricks too, and their market position partly reflects that.
Half of our department's research sits directly on CUDA, now, and I haven't really had this experience at all. CUDA is as standard as you can get for NVIDIA architecture -- ditto OpenCL for AMD. The problem with trying to abstract that is the same problem with trying to use something higher-level than C -- you're targeting an accelerator meant to take computational load, not a general-purpose computer. It's very much systems programming.
I'm honestly not sure how much more abstract you could make it -- memory management is required because it's a fact of the hardware: the GPU sits across a bus, and your compiler (or language) doesn't know more about your data semantics than you do. Pipelining and cache management are already facts of life in HPC, and I haven't seen anything nutso you have to do to support proper instruction flow on NVIDIA cards (although I've mostly targeted Fermi).
This is more handwaving. Let me say it again.
GCC's license has no effect on your source code or your finished binary. It doesn't matter if GCC is GPLv2, GPLv3, or GPLv9. It has no effect. It doesn't matter. It is exactly the same as if it were a proprietary product that you purchased. Better, in a lot of ways, but in no way worse. It doesn't change or affect your license on your final product in any way, and it doesn't require a "legal team" to ascertain that.
LLVM's development has nothing to do with licensing. It started as a research project and continued as a reaction to GCC's relatively tangled, highly domain-specific code base. GCC can get away with that because it's good, but there's plenty of room for competitors, of course.
Companies following the procedures you describe are shooting themselves in the foot. Are you implying that when you purchase a product for integration into your toolchain there are no licensing issues? Does your company just buy the first thing in a Google search without considering how it affects your product?
And regarding mixing GPL 2/3 -- again, that's completely irrelevant to the discussion. Your product is not GCC, nor is your code being "incorporated" with it.
Typically this attitude is the result of smear campaigns against vague licensing boogeymen by Microsoft sales people for upper management. Is that the case here?
This is just fearmongering. It's not complicated at all. If you don't hook GCC's (internal) intermediate code generation to run some custom process on, then you are covered by the compilation exemption.
Configuring your build to output GCC intermediate, retain that output, modify it with an external tool, and resume the build with the modified intermediate code is not something that will happen by accident. The implications of GCC being GPLv3 are, exactly, none.
FreeBSD's philosophical objections to GPLv3 are well known and they have the right to maintain those objections, but that has little bearing on GCC's use for a proprietary end product.
I would be interested to hear about your build process that you feel is likely to accidentally create a non-exempt compilation. Do you have an example?
Deliberate misinformation. You are free, of course, to do whatever you want to with binaries produced by GCC. GCC's license is completely irrelevant unless you're modifying or extending GCC itself.
Nice try, though.
Either way, it's fitting that you used a toy analogy. After all, Linux is, if anything, a tinker-toy desktop OS.
...which is presumably why over 90% of the top 500 supercomputers in the world run Linux, and all but one run something from the *NIX/BSD family.
But yeah, Windows is srs bzns.
I'm going to speak frankly, here. Ad-supported video streams are, by definition, dependent upon actually showing ads. If you put the logic for that on the client side, then about an hour after push there's going to be an addon for Firefox that removes the ads, greatly reducing the value of your particular advertising space.
Video stream ad integration has to happen server-side. You'll see more pressure here as more content hits the streaming model, and particularly as major-network content moves to the Web.
Uh, based on a novel, sorry...
Which "couple hundred million"? The, as you imply, current top? But the very premise of the article is the "top" results are poisoned by SEO pages. If you use that as your standard set, then you're canonizing all of those obviously content-free sites -- already significantly reducing the information content of your supposedly comprehensive set.
And what about ambiguity? I can google "cream" and my top result is a band, not a dairy product. Which one is more informative? Which one is more relevant? How about if the band had half as many fans? Twice as many? What about in twenty years when they're much less popular? Who decides that? Google employees? How? By counting links to the band?
Beyond that, you're suggesting there will never be a new popular search topic? Suppose a new plague hits. Search terms spike and new literature is published. A network-based search engine (present-day Google) can respond very rapidly. A search engine that puts new topics on a "TODO" list, as you suggest, will be left in the dust.
You've already outlined that, although you may not have realized it -- you state outright that the basis of such a "network-free" search engine should be the results it built by being a networked search engine. How do you think new topics could reach consensus without that same treatment?
Put another way, the whole point of search engines was to remove the need for manual human classification, since there's simply too much information out there for that to be practical. Remember the web directories of the 90s?
Yes, we've had success for years with those techniques -- in specific domains.
What you're talking about is the ability to do that, definitively, for every webpage. As the other responder noted, this could include classification of technical words -- but what is a technical word?
You've got someone that searches for "foobaz". Most webpages that have "foobaz" in the h1s or repeated frequently also mention "fizzbang", "barbar", and "carslop". Some have different ratios of those words or omit one or two of them. How do you decide which one has "good content"? That's the problem you're attempting to formulate.
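A toy sketch of that formulation (made-up words and pages throughout): score pages by raw keyword frequency, and a genuine article and an SEO spam page with the same word mix come out indistinguishable:

```python
from collections import Counter

def keyword_score(text, query_terms):
    """Naive relevance: the fraction of a page's words that are query terms."""
    words = text.lower().split()
    counts = Counter(words)
    return sum(counts[t] for t in query_terms) / len(words)

# Two hypothetical pages with identical term ratios.
genuine  = "foobaz tuning guide foobaz works with fizzbang and barbar pipelines"
seo_spam = "foobaz fizzbang barbar foobaz buy cheap deals click here now"

q = ["foobaz", "fizzbang", "barbar"]
print(keyword_score(genuine, q), keyword_score(seo_spam, q))  # identical scores
```

Both pages score 0.4; nothing in the frequency signal says which one has the good content. That judgment is exactly the part the program would have to supply.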
You're talking about writing a program that not only can meaningfully parse the syntax of every English webpage -- and we're not even to that yet -- but is enough of a general expert on every subject on the internet as to be able to determine to a statistically significant degree of accuracy how useful the provided information is. That's 100% impossible for the foreseeable future; we'll have strong AI long before we have that.
Remember Darwin: building a better mousetrap merely results in smarter mice.