MPEG-LA claims to have full H.265 patent coverage, so it'll be decided in the courts whether MPEG-LA can defend its H.265 claims against HEVC Advance. My guess is that MPEG-LA knows what it's got and HEVC Advance is making a big show for shareholders. Technicolor already stated in its last quarterly earnings report that it had massive profit potential from its HEVC patents. To me this looks like a fake-out by companies like Technicolor to pump up the value of their patents while MPEG-LA continues to do real business on reasonable terms. By the time Technicolor et al.'s stockholders realize they aren't making anything off those ludicrous terms, they'll have moved on to the next scam.
Not to mention bandwidth. How are you going to move 500TB to the cloud and back in a reasonable time frame? You're looking at well over a month each way, even over a saturated gigabit connection.
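As a rough sanity check (idealized numbers, assuming the link stays fully saturated with no protocol overhead):

```python
# Rough transfer-time estimate for 500 TB over a 1 Gbit/s link.
# Real-world throughput would be lower (TCP overhead, contention, retries).
data_bytes = 500 * 10**12          # 500 TB
link_bps = 10**9                   # 1 gigabit per second
seconds = data_bytes * 8 / link_bps
days = seconds / 86400
print(f"{days:.0f} days")          # ~46 days one way, so ~3 months round trip
```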
What are your performance requirements? If you just need a giant dump of semi-offline storage, then look into building a Backblaze Storage Pod.
For about $30,000 you could build four Storage Pods. Speed would not be terrific, and redundancy is handled through RAID. If you want something faster, more redundant, or fully serviced, your next step up in price is probably a $300,000 NAS solution, which might serve you better anyway.
That's unfair; I use a lot of software that I pay for but want to make peripheral changes to. For instance, I use a compute job scheduler that costs about $180 per compute node plus maintenance. It's worth the price for the existing features, but I also want to implement esoteric features that maybe nobody else needs but that help it work better with our workflow, so I have forked a number of the built-in options. The company even has a GitHub repository of the latest release so that you can get performance and reliability bug fixes from the developer while still keeping your one-off tweaks, or even share them with other companies that need similar fixes.
This same company eventually even bought a substantial addition I made and turned it into a core feature. This model works well for everybody: if you need a custom feature, you just add that one small feature without starting from scratch, and I don't have to worry about maintaining the codebase or adding big core features like moving to a new database, extending the SDK, writing a web interface, or creating a native Python library.
As the author stated, this sort of situation doesn't lend itself well to a support model, since most of the users have the same needs. So it's only fair that everybody pay a share of the updates, and many of the studios that use the software *could* write it themselves if they were just going to fund its development indirectly anyway.
No, we have two different statistics competing. I can say that 20% of the population died this year: oh my god, end-of-days tragedy! But I can also point out that 50% of the population procreated this year and had a child. Yay! Population growth, everything is fine, no need for doom and gloom!
There are more hives, but bees are still dying in record numbers; both can be true simultaneously. If 50% of the bees in every hive died, you would have roughly 50% fewer bees even if the number of hives increased slightly.
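With toy numbers (purely illustrative, not real beekeeping statistics), the two headlines coexist easily:

```python
# Illustrative only: hive count can grow while the total bee population falls.
hives_before, bees_per_hive = 100, 50_000
hives_after = 110                  # 10% more hives...
survival = 0.5                     # ...but 50% of the bees in each hive died

total_before = hives_before * bees_per_hive           # 5,000,000 bees
total_after = int(hives_after * bees_per_hive * survival)  # 2,750,000 bees
print(total_after / total_before)  # 0.55 -> 45% fewer bees, yet more hives
```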
I was just going to say that I am using a CPU raster renderer from 1993 on my latest project. Why? Because for simple data passes it's the fastest renderer available, and it's single-threaded: I can run 14 concurrent instances on one machine and render in near real time on the CPU, but with proper shading and filtering, unlike GPU rendering.
It's pretty much all Phong and Blinn shading, but that goes back to the mid-1970s and the birth of computer graphics.
As I read the article, TESTING this kind of thing requires destroying it. They will NOT be doing 100% testing-to-failure of their stock of struts, except to prove to themselves how bad their supplier really was.
Nobody said they would be testing to failure. You can proof-test every unit to, say, 150% of its rated load. If the material doesn't actually fail until many times the rated load, then a 150% proof test should be harmless. And if a unit doesn't fail at 150% once, it probably won't fail at 100% a hundred times; at worst you've used up one load cycle, so you're at 99 cycles until mean failure instead of 100.
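A minimal sketch of the idea, with made-up numbers (the load values and the `proof_test` helper are hypothetical, not from the article):

```python
# Hypothetical proof-test sketch: pull each unit once to 1.5x its rated load.
# The proof load stays far below the nominal material failure load, so
# in-spec units survive undamaged while out-of-spec units are caught.
RATED_LOAD = 10_000      # rated strength, arbitrary units (illustrative)
PROOF_FACTOR = 1.5

def proof_test(unit_failure_load, rated=RATED_LOAD, factor=PROOF_FACTOR):
    """Return True if the unit survives a single pull to factor * rated."""
    return unit_failure_load > rated * factor

good_unit = 60_000       # fails only at 6x rated load: passes the proof test
bad_unit = 12_000        # out-of-spec part that fails just above rated load
print(proof_test(good_unit), proof_test(bad_unit))  # True False
```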
So because a song supposedly written in the last 80 years or so still has a copyright on it, you have zero respect for the GPL?
1) If Microsoft wanted to include a secret NSA screen-recording app, they could hide it in the code and you would never know.
2) If you're worried about exploits, then you should just worry about the fact that your GPU's drivers already offer this capability.
3) Recording your screen is the most useless way to learn things about you that I can think of. If you have access to the system at such a level that you can execute arbitrary code, it's far more effective to run a keylogger than a video capture system, which would require gigabytes of data to yield meaningful information. Install your keylogger and have millions of computers dump their keystrokes to a database; that doesn't require you to sneak terabytes of data from millions of computers to your servers. Then run some data-mining software to identify likely username/password combinations.
This is Occam's razor shit, people. Screen-capture software only requires 1-2 MB, so it's not like it can't be hidden. And even if it couldn't be easily hidden, it's mostly useless.
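A back-of-envelope comparison of exfiltration volume per machine per day (all rates here are my own illustrative assumptions):

```python
# Illustrative per-machine, per-day data volumes: keylogger vs. screen capture.
keystrokes_per_day = 20_000        # a heavy typist (assumption)
bytes_per_keystroke = 2            # key code + small timestamp delta (assumption)
keylog_bytes = keystrokes_per_day * bytes_per_keystroke       # 40 KB/day

video_bitrate_bps = 1_000_000      # 1 Mbit/s, a low screen-capture bitrate
hours_recorded = 8                 # one workday of footage
video_bytes = video_bitrate_bps / 8 * hours_recorded * 3600   # 3.6 GB/day

print(video_bytes / keylog_bytes)  # video is ~90,000x more data to sneak out
```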
The Linux community doesn't really stand a chance in this comparison, since Microsoft isn't 100% technical, whereas most of the "Linux community" doesn't have any overhead: no advertising, accounting, datacenters, call centers, executives, web designers, game developers, or... For instance, until recently Microsoft's highest-ranking woman was in HR. The Linux community has no HR (although maybe it should).
The other contributing factor is that Microsoft does hire a lot of women in technical positions, but many of them are international hires from places where tech is viewed as just a "good high-paying job," not as "a bunch of geeks and mouth-breathing virgins." That's why I always bang my head on the table when stories go something like this: "Tech is a toxic soup of misogynistic assholes... and we need more women to choose computer science!" Regardless of whether it's true, as long as that stigma sticks around, women aren't going to be knocking down the doors to be the first to be victimized and discriminated against. And while women are far more likely to pursue tech in a developing country like India, it's mostly because "tech is a good high-paying job," not because "tech gives you the opportunity to contribute to an ideologically driven project in an unpaid position in your free time!" That's the opposite of a good high-paying job; that's a no-paying job.
Also, the "Linux community" is all around pretty small. It doesn't take *that* many people to create an operating system. Even if the Linux community had the same demographics as Microsoft, it's safe to say that Microsoft employs about as many people to develop Windows as the number of people working on the Linux project; both projects are similar in scope and design. By comparison, Microsoft doesn't just make Windows; they also have Office and Xbox and Azure and Microsoft Game Studios and movies and music and Hotmail and MSN.com and Cortana and Bing and Here and Lumia and Surface and... So you would need to do an apples-to-apples comparison of Microsoft's Windows team vs. the "Linux community."
Yeah, I'm surprised that computer geeks don't more broadly embrace electric vehicles based solely on the principle of flexibility.
Electric cars embody the Unix philosophy of modularity and portability far better than ICEs do. If you're in the woods, you could trickle-charge your car off solar. Or you could put a turbine in a stream and power your car from a creek, or hire a few people on bicycles to pedal away for a couple of days to charge it up, or set up a wind turbine, or run a small ICE generator burning gasoline, or a generator fed from nuclear power, or a generator run from a wood fire or coal or anything.
The beauty of electricity is that it's a common currency, just as text is for a Unix application. You don't mandate an energy source; you can mix, match, and switch power sources dynamically.
Electric vehicles are also simple and easy to understand. An electric motor has essentially one moving part. Even battery technology is modular: you could have an electric car with no batteries at all, just an ICE generator in the trunk providing electricity.
From a form-factor perspective it should also appeal to computer geeks' aesthetic ideals: a design in which you might need wheels, but otherwise the sky is the limit on where you place critical components.
You mean the VFX companies thought they'd chosen an "incredibly slow" production renderer, rather than Arnold?
RenderMan has been a hybrid renderer for some time now.
Yes, I am saying that. There are a couple of reasons for it. For one thing, there is a lot of inertia in the industry, and for good reason: you don't want to abandon a tool that you know works, that has worked on dozens of features previously, and adopt something that might not. Arnold is only now creeping into production. Render TDs aren't familiar with raytracing in general, and lighting TDs are used to cheating everything.

Historically, only one production renderer (PRMan) had been the renderer of choice for feature film work, so only it had gotten the AOVs, custom shaders, flexible scene-graph manipulation, and so on that feature film teams want. So your choice before Arnold was essentially: "We can take a mostly ready renderer like Arnold, Brazil, or V-Ray and hammer it into what we want, or we can keep using PRMan, which does everything we want, albeit slowly." The "right" choice was PRMan.

A few groups who didn't have the same constraints, such as ILM's rogue and digimatte departments, went with raytracers (for instance, the opening forest shot of Avatar is a Brazil shot). DD's commercial division was mostly V-Ray, so when Tron came along they had a moment of power and sort of injected V-Ray into Digital Domain at large (which is, in my opinion, why V-Ray suddenly started getting focused development towards being a usable feature-production renderer). And of course Arnold re-emerged after 12 years of underground development in SPI's basement. So DD and Sony did decide to put in the herculean effort to take a promising raytracer and turn it into a competitive production renderer.
But as I'm sure you know, speed isn't everything. PRMan is, exactly as you say, all about:
... since most Blender-users are not going to be rendering multi-billion polygon scenes with massive displacement, instancing, complex shader networks, complex AOV output and custom-written Renderman shaders.
Arnold, years ago: too immature to do any of that.
Brazil: massive displacement was extremely glitchy until shortly before it was acquired and killed. It also had no implicit hair/spline rendering.
V-Ray: had nobody using it in features, so it was ass-backwards for feature production, with feedback and bug reports coming only from arch-viz artists. It also didn't have the stability or the capability to handle multi-billion-polygon scenes at the time.
And none of the above had anything like RIB.
So while they were all faster, if you can't rely on a renderer for every shot, it's a bad choice. You don't want to get 90% of the way into a shot, have the renderer shit the bed, and then have to change renderers and redo all of your work. RenderMan has always been spectacularly reliable. Fast? No. Reliable? Absolutely.
What finally happened, though, was that Arnold popped up out of stealth maturation in SPI's pipeline and made a mockery of RenderMan's performance. More and more effects were raytraced, and RenderMan's raytracer was a kludgy, tacked-on piece of shit. Arnold overcame the deficiencies of V-Ray and Brazil for features and implemented all of the stuff that feature films want: bulletproof displacement, solid instancing, implicit hair shapes, AOVs, and a RIB-like scene description in its .ass format, plus it brought the speed and artist-friendly workflow of something like Brazil or V-Ray.
So yes, the RenderMan team went back to the drawing board and turned RenderMan into a path tracer before they lost the entire market. But it's only been in stable release for about two years, compared to the optimization and tuning that Arnold has had for well over a decade. RenderMan is catching up, but it's definitely on defense.
But all of that is irrelevant for Blender users. Even now, you don't really want a path tracer unless you're doing animation. Most Blender artists are doing essentially arch-viz, and path tracing is sllllloooowwwwww. There is a reason Brazil and Arnold, which were animation-focused renderers, spent a lot of time on brute-force GI performance, while V-Ray, which was dominating the arch-viz scene, focused on the quality of its irradiance-caching/final-gather/whatever-you-want-to-call-it sub-sampled GI. For stills, sub-sampled GI is an order of magnitude faster. It's flickery, useless garbage for animation beyond fly-throughs and such, but it's what the vast majority of Blender users would benefit from most. So even with RenderMan catching up to the legacy raytracers on performance, it's still not focused on performance for hobbyist applications. And none of the CPU production renderers can compare to the responsiveness of a lightweight non-production renderer like Octane or Cycles for simple, small scenes.
A free RenderMan isn't really useful to Blender users except for the whizbang "yay, we're using otherwise expensive software" bragging rights. The comments on the BlenderNation article are telling: "No GPU? Cycles is faster with a GPU than PRMan on the CPU." This is true. In fact, PRMan has historically been incredibly slow compared to other production renderers for almost a decade, let alone compared to stripped-down hobbyist renderers. Yes, Pixar uses it; yes, ILM uses it. But outside of ILM's and Pixar's applications, PRMan is poorly suited to what most smaller boutique studios do, let alone what a hobbyist does. A hobbyist would read comments in Cinefex magazine with wonderment when a VFX supervisor says that Iron Man was the first film where ILM used area lights, or that Cars was one of the first films where PRMan did raytracing in any substantial way.
Whatever you're rendering in Blender, you probably aren't benefiting from PRMan's really distinguishing features.
How is this different from the Linux kernel? Answer: it's not. If you aren't on a maintenance branch, you're going to need to update in order to keep receiving security updates. Only maintenance branches get guaranteed security updates without features tagging along.
It's a false dichotomy because there are actually three obnoxious groups.
1) "God put that oil in the ground for our benefit and he'll return before the world gets too warm!"
2) "Oil and Nuclear power are driven by evil chemicals!"
But there is also the
3) "I'm going to smugly pretend that not having an opinion makes me balanced and superior."