Oops I mean "over a hundred million instanced trees" obviously haha.
Don't forget that when new technology is announced, they always list its upper limits. So it has 1,000 times the potential of current best-case-scenario NAND, but you won't see that 1,000× performance boost for three decades, when they finally tap out the technology's maximum potential.
I'm rendering a project right now that has over a hundred instanced trees in a forest. So the forest is pretty much instanced, but each unique tree model is around 1GB of memory and there are about 12 individual models. Then there's terrain geometry, villages, and trains, and that isn't even getting into the sparse voxel octrees for volumetrics like smoke from chimneys... Anyway, long story short, 32GB is already gone *with* massive-scale instancing.
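Here's the back-of-envelope version of where the memory goes. The tree figures come from above; the terrain, village, and volumetrics numbers are made-up placeholders just to show how fast it adds up:

```python
# Rough memory budget for the scene described above.
# Instancing means each unique tree model is stored once,
# no matter how many copies appear in the forest.
unique_tree_models = 12
gb_per_tree_model = 1.0                    # ~1 GB per model (from above)
trees_gb = unique_tree_models * gb_per_tree_model   # 12 GB

# Hypothetical figures for the rest of the scene:
terrain_gb = 8.0
villages_and_trains_gb = 6.0
volumetrics_gb = 8.0                       # sparse voxel octrees for smoke

total_gb = trees_gb + terrain_gb + villages_and_trains_gb + volumetrics_gb
print(f"Scene memory: ~{total_gb:.0f} GB")  # ~34 GB -- past 32 GB already
```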
Obviously this would follow regular air traffic regulations, where there are staggered altitude exclusion zones around airports, national security sites, etc.
If I can buy a TB of RAM that's maybe DDR4 speeds but not DDR5 speeds, priced somewhere between NAND and DRAM, that would be huge.
Also incredibly useful for something like a phone where you might want to shoot 4K video. The CPU would have a hard time processing that in real time, but if you buffered to, say, a 64GB cache and processed afterward, you could shoot high-speed for a minute instead of 2-3 seconds.
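Rough numbers to illustrate; the data rate here is an assumption for lightly compressed high-frame-rate 4K, not a spec for any real sensor:

```python
# How long can you record high-speed video into a fixed-size buffer?
data_rate_gb_per_s = 1.0   # assumed: ~1 GB/s off the sensor pipeline

small_buffer_gb = 2.5      # assumed on-chip buffer today
big_buffer_gb = 64.0       # the hypothetical new-memory cache

print(f"small buffer: {small_buffer_gb / data_rate_gb_per_s:.1f} s")  # ~2.5 s
print(f"big buffer:   {big_buffer_gb / data_rate_gb_per_s:.1f} s")    # ~64 s
```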
One of the articles says the initial products will be PCIe and NVMe.
The Tom's Hardware article is much better:
Intel indicated the new memory would connect to the host system via the PCIe bus, which is yet another reason that Intel and Micron have been vocal proponents of NVMe. The NVMe protocol was designed from the ground up for non-volatile memory technologies, and not NAND in particular. Now it is apparent that Intel and Micron were laying the groundwork for something more as they developed the new protocol.
Clearly this memory will necessitate new motherboards. But I would also love to see this on Nvidia cards.
MPEG-LA claims to have full H.265 patent coverage, so it'll be decided in the courts whether MPEG-LA can defend their H.265 claims against HEVC Advance. My guess is that MPEG-LA knows what they've got and HEVC Advance is making a big show for shareholders. Technicolor already stated in their last quarterly earnings report that they had massive profit potential from their HEVC patents. To me this looks like a fake-out by companies like Technicolor to pump up the value of their patents while MPEG-LA continues to do real business with reasonable terms. By the time Technicolor et al.'s stockholders realize that they aren't making anything off of their ludicrous terms, they'll have moved on to the next scam.
Not to mention bandwidth. How are you going to move 500TB to the cloud and back in a reasonable time frame? You're looking at several months even over a gigabit connection.
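The round-trip math is straightforward:

```python
# Transfer time for 500 TB each way over a 1 Gb/s link.
tb_each_way = 500
bytes_total = tb_each_way * 1e12 * 2     # "to the cloud and back"
bits_total = bytes_total * 8

link_bps = 1e9                           # gigabit line rate
days = bits_total / link_bps / 86400
print(f"{days:.0f} days at full line rate")   # ~93 days

# Real-world throughput is lower (protocol overhead, contention),
# so "several months" is if anything optimistic.
```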
What are your performance requirements? If you just need a giant dump of semi-offline storage, then look into building a Backblaze Storage Pod.
For about $30,000 you could build four Storage Pods. Speed would not be terrific, and redundancy is handled through RAID (which is not the same thing as a backup). If you want something faster, more redundant, or fully serviced, your next step up in price is probably a $300,000 NAS solution, which might serve you better anyway.
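For scale, assuming the four pods together cover the ~500TB mentioned upthread (that capacity is my assumption, not a quoted spec):

```python
# Cost per TB for the two options above.
pods_cost, capacity_tb = 30_000, 500   # four DIY Storage Pods (assumed ~500 TB)
nas_cost = 300_000                     # turnkey NAS at similar capacity

print(f"DIY pods: ${pods_cost / capacity_tb:.0f}/TB")   # ~$60/TB
print(f"NAS:      ${nas_cost / capacity_tb:.0f}/TB")    # ~$600/TB
```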
That's unfair; I use a lot of software that I pay for but want to make peripheral changes to. For instance, I use a compute-job scheduler that costs about $180 per compute node plus maintenance. It's worth the price for the existing features, but I also want to implement esoteric features that maybe nobody else needs but that help it work better with our workflow, so I have forked a number of the built-in options. The company even maintains a GitHub repository of the latest release so that you can get performance and reliability bug fixes from the developer while still keeping your one-off tweaks, or even share them with other companies that need similar fixes.
This same company eventually even bought a substantial addition I made and turned it into a core feature. This model works well for everybody: if you need a custom feature, you just add that one small feature without starting from scratch, and I don't have to worry about maintaining the code or adding big core features like moving to a new database, extending the SDK, writing a web interface, or creating a native Python library.
As the author stated, this sort of situation doesn't lend itself well to a support model, since most of the users have the same needs; it's only fair that everybody pay a share of the updates, and many of the studios that use the software *could* write it themselves if they each had to develop it independently.
No, we have two different statistics competing. I can say that 20% of the population died this year: oh my god, end-of-days tragedy! But I can also encourage 50% of the population to procreate this year and have a child: yay, population growth, everything is fine, no need for doom and gloom!
There are more hives, but bees are still dying in record numbers. Both can be true simultaneously. If 50% of the bees in every hive died, you would have 50% fewer bees even if the number of hives increased slightly.
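Toy numbers to make the point concrete (all made up):

```python
# More hives and fewer bees at the same time.
hives_before, bees_per_hive = 100, 50_000
bees_before = hives_before * bees_per_hive       # 5,000,000

survivors = bees_before * 0.5                    # 50% of every hive dies...
hives_after = 110                                # ...yet hive count rises

print(f"hives: {hives_before} -> {hives_after} (up 10%)")
print(f"bees:  {bees_before:,} -> {int(survivors):,} (down 50%)")
```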
I was just going to say that I am using a CPU raster renderer from 1993 on my latest project. Why? Because for simple data passes it's the fastest renderer available, and it's single-threaded! I can run 14 concurrent instances on one machine and render in near real-time on the CPU, but with proper shading and filtering, unlike GPU rendering.
It's pretty much all Phong and Blinn shading, but that goes back to 1975 and 1977 respectively, practically the birth of computer graphics.
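For anyone who hasn't seen it, the whole model fits in a few lines. A minimal NumPy sketch of Blinn-Phong for a single light, assuming all direction vectors are normalized (the coefficients are arbitrary):

```python
import numpy as np

def blinn_phong(n, l, v, base_color, light_color,
                kd=0.8, ks=0.4, shininess=32.0):
    """Classic Blinn-Phong shading for one light.
    n, l, v: unit surface normal, light direction, view direction."""
    n_dot_l = max(np.dot(n, l), 0.0)          # Lambertian diffuse term
    h = (l + v) / np.linalg.norm(l + v)       # Blinn's half vector
    n_dot_h = max(np.dot(n, h), 0.0)
    diffuse = kd * n_dot_l * base_color
    specular = ks * n_dot_h ** shininess      # Blinn's specular highlight
    return light_color * (diffuse + specular)

up = np.array([0.0, 1.0, 0.0])
print(blinn_phong(up, up, up,
                  base_color=np.array([0.8, 0.3, 0.2]),
                  light_color=np.ones(3)))
```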
Testing this kind of thing requires destruction, as I read the article. They will NOT be doing 100% testing to failure of their stock of struts, except to prove to themselves how bad their supplier really was.
Nobody said they would be testing to failure. You can proof-test every unit to, say, 150% of its operating load. If the part is rated to survive 1000% of the operating load, then a 150% test should be safe. If it doesn't fail at 150% once, it probably won't fail at 100% a hundred times; the proof test just spends one of those load cycles, so you're at 99 cycles until mean failure instead of 100.
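A toy Monte Carlo version of that screening argument; the strength distribution and defect rate are entirely made up, not real strut data:

```python
import random

# Toy model: most parts have a healthy ~10x margin, a few are defective.
def make_part_strength():
    if random.random() < 0.02:             # 2% defective batch
        return random.uniform(0.8, 1.4)    # fails near operating load
    return random.uniform(8.0, 12.0)       # healthy ~10x margin

parts = [make_part_strength() for _ in range(100_000)]

proof_load = 1.5                           # 150% of operating load
screened = [s for s in parts if s > proof_load]   # proof-test survivors

fail_before = sum(s <= 1.0 for s in parts) / len(parts)
fail_after = sum(s <= 1.0 for s in screened) / len(screened)
print(f"failure rate at 100% load: {fail_before:.3%} -> {fail_after:.3%}")
```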
So a song supposedly written in the last 80 years or so still having a copyright on it means that you have zero respect for the GPL?
1) If Microsoft wanted to include a secret NSA screen-recording app, they could hide it in the code and you would never know.
2) If you're worried about exploits, then you should just worry about the fact that your GPU's drivers already offer this capability.
3) Recording your screen is the most useless way to learn things about you that I can think of. If you have access to the system at such a level that you can execute arbitrary code, it's far more effective to run a keylogger than a screen recorder, which would require gigabytes of data to yield meaningful information. Install your keylogger, have millions of computers dump their keystrokes to a database, and you never have to sneak terabytes of data from millions of machines back to your server. Then run some data-mining software to identify likely username/password combinations. (Rough numbers on the data-volume gap below.)
This is Occam's razor shit, people. Screen-capture software only requires 1-2MB; it's not like it can't be hidden. But even if it couldn't be easily hidden, it's mostly useless.
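On point 3, a ballpark comparison; both rates are assumptions for the sake of illustration:

```python
# Rough daily exfiltration volume per machine: keylogger vs. screen capture.
keystrokes_per_day = 20_000                # a heavy typist
keylog_bytes = keystrokes_per_day * 2      # keycode plus minimal overhead

screen_mbps = 2.0                          # modest compressed screen recording
hours_active = 8
video_bytes = screen_mbps / 8 * 1e6 * 3600 * hours_active

print(f"keylogger:  ~{keylog_bytes / 1e3:.0f} KB/day")     # ~40 KB
print(f"screen rec: ~{video_bytes / 1e9:.1f} GB/day")      # ~7.2 GB
print(f"ratio: ~{video_bytes / keylog_bytes:,.0f}x more data")
```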