Picture Editing, Video Editing, Sound Editing
Right now, the NPU portions of these new processors are mostly sitting dormant. I just bought an Arrow Lake processor for the future.
Sure, they can be used for chatbots, but where they will really start to shine is in productivity software.
We are starting to see the beginnings of this with real-time background removal and better green-screen removal.
How about suggestions on what can be cut from your video and which transitions to use? What if it learns how you edit and starts making suggestions that fit how you already work? How about better upscaling? Sure, your 1080p video is fine, but what if you want to zoom in on something and have it still look really good?
Some of these things are available today through special tools that use your graphics card across multiple steps, but what if it were all available in your favorite video editing package out of the box?
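To make "upscaling" concrete: the classical baseline is plain interpolation, like the bilinear sketch below in NumPy. This is just a toy illustration of the technique, not any product's actual algorithm; NPU-backed "AI" upscalers are interesting precisely because they try to reconstruct detail that interpolation like this smears out.

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Classical bilinear upscaling of a 2-D grayscale image.

    Each output pixel is a weighted average of its four nearest
    input pixels -- the baseline that learned upscalers aim to beat.
    """
    h, w = img.shape
    new_h, new_w = h * factor, w * factor
    # Map output pixel centers back into input coordinates.
    ys = (np.arange(new_h) + 0.5) / factor - 0.5
    xs = (np.arange(new_w) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0, 1)[:, None]   # vertical blend weights
    wx = np.clip(xs - x0, 0, 1)[None, :]   # horizontal blend weights
    tl = img[y0][:, x0]        # top-left neighbors
    tr = img[y0][:, x0 + 1]    # top-right
    bl = img[y0 + 1][:, x0]    # bottom-left
    br = img[y0 + 1][:, x0 + 1]  # bottom-right
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

small = np.arange(16, dtype=float).reshape(4, 4)
big = bilinear_upscale(small, 4)
print(big.shape)  # (16, 16)
```

Interpolation can only blend the pixels that are already there, which is why a 4x bilinear zoom looks soft; a learned model can hallucinate plausible edges and texture instead, and an NPU makes running such a model cheap enough to do on every frame.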
How about photo editing? Again, there are specialized tools like Photo AI, Gigapixel, Luminar, etc. that use your GPU. Google's Pixel phones already use the NPU for better object removal. We have smart removal tools, but what if they could use the NPU to look nearly perfect most of the time, with very little work? Focus correction that doesn't add artifacts? Noise removal, face recovery. Sure, a lot of these things can be done now by brute force on a GPU, but an NPU is designed for this. What if, using both, you could start doing things in real time that currently mean minutes of waiting and hoping things turn out right? As it stands, you wait minutes for each tweak instead of seeing the result instantly.
I haven't looked into it much, but I understand there are three tools available for Audacity that only work with an NPU. So we are starting to see it help out in audio too: removing background noise, that kind of thing.
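For a feel of what noise removal involves, here is a toy spectral-gating sketch in NumPy: estimate the noise floor per frequency bin, then zero out bins that don't rise well above it. This is not how any Audacity plugin actually works; modern NPU-backed denoisers use neural networks, but they do the same kind of per-frame frequency-domain math, just with far more of it, which is exactly the workload NPUs are built for.

```python
import numpy as np

# Synthesize a 1-second test clip: a 440 Hz tone buried in white noise.
rng = np.random.default_rng(0)
sr = 16000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * rng.standard_normal(sr)
noisy = clean + noise

def spectral_gate(signal, noise_sample, n_fft=512):
    """Crude spectral gating: zero frequency bins whose magnitude
    does not rise well above the estimated noise floor."""
    out = np.zeros_like(signal)
    # Per-bin noise floor, estimated from a noise-only sample.
    floor = np.abs(np.fft.rfft(noise_sample[:n_fft]))
    for start in range(0, len(signal) - n_fft + 1, n_fft):
        frame = signal[start:start + n_fft]
        spec = np.fft.rfft(frame)
        mask = np.abs(spec) > 2.0 * floor   # keep only strong bins
        out[start:start + n_fft] = np.fft.irfft(spec * mask, n_fft)
    return out

denoised = spectral_gate(noisy, noise)

def snr_db(reference, estimate):
    """Signal-to-noise ratio of an estimate against the clean reference."""
    err = reference - estimate
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(err ** 2))

print(f"SNR before: {snr_db(clean, noisy):.1f} dB")
print(f"SNR after:  {snr_db(clean, denoised):.1f} dB")
```

Even this crude gate measurably improves the signal-to-noise ratio; the catch is that simple thresholding also eats quiet parts of the real signal, which is why the learned, NPU-friendly approaches sound so much better.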
I am not going to say everyone needs these things, but I would argue that in the future a large portion of people will start using these features, probably without even thinking about it, just expecting them to be there.