GPU speed database

questions about practical use of Neat Video, examples of use
Zach
Posts: 38
Joined: Sat Jun 01, 2013 12:37 pm

Post by Zach » Sun Dec 18, 2016 3:03 am

True enough.

Although I have serious doubts that dropping in a second GTX 1070 would offer a worthwhile performance benefit for the money involved.

lansing
Posts: 38
Joined: Sat Apr 21, 2012 6:52 am

Re: GPU speed database

Post by lansing » Thu Dec 20, 2018 11:05 pm

I think the database needs an update and a redesign. I am planning to buy a new machine and was looking at this database for GPU choices, and I really need a sorting function to see which card is the fastest. The "color depth" column was probably misinterpreted as well: there should be no 32-bit-per-channel video out there, not even 16-bit; it just doesn't make sense. There should also be new logic that sorts CPU vs. CPU+GPU results. Finally, a "GPU RAM usage" column should be added to the table, to give people an idea of how much VRAM they'll need for their cards.

So what is the best GPU right now? GTX Titan?
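Even a minimal sort over the published numbers would go a long way. A rough sketch of what I mean (column names and speeds are made up for illustration, not taken from the actual database):

```python
# Hypothetical database rows; the real column names and values may differ.
rows = [
    {"gpu": "GTX 1070",    "fps": 18.2},
    {"gpu": "GTX 1080 Ti", "fps": 31.5},
    {"gpu": "RTX 2080 Ti", "fps": 45.5},
]

# Sort descending by measured speed so the fastest card is listed first.
fastest_first = sorted(rows, key=lambda r: r["fps"], reverse=True)
print([r["gpu"] for r in fastest_first])  # fastest to slowest
```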

NVTeam
Posts: 2385
Joined: Thu Sep 01, 2005 4:12 pm
Contact:

Re: GPU speed database

Post by NVTeam » Thu Dec 20, 2018 11:20 pm

The database is maintained by a third party, so it may take time to change.

Video processing inside Neat Video and inside the host application is nowadays often done at 32-bit-per-channel precision. 8-bit and 16-bit cases are possible as well. Since the speed depends on it, this parameter is taken into account too.
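To illustrate why the processing precision matters (an illustration only, not Neat Video's actual pipeline; Python floats stand in for 32-bit values here): an 8-bit channel holds only 256 levels, so intermediate results are kept in floating point and quantized back only at the end.

```python
# One 8-bit channel sample, in the range 0..255.
value_8bit = 200

# Normalize to a float in 0..1, as 32-bit-per-channel pipelines typically do.
as_float = value_8bit / 255.0

# Some intermediate processing step (here: halving the brightness).
halved = as_float * 0.5

# Quantizing back to 8 bits is exact here, but repeated intermediate
# rounding at 8 bits would accumulate error; floats avoid that.
back_to_8bit = round(halved * 255)
print(as_float, back_to_8bit)
```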

We recently posted a blog entry about GPUs that may be generally useful when choosing a new GPU. Since that post, some new GPU models have become available or been announced that are also very good:
- NVIDIA GeForce RTX 2070 / 2080 / 2080 Ti
- NVIDIA Titan RTX
- NVIDIA Titan V

I think the 2080 Ti is a good choice considering performance and price. The 1080 Ti is probably the best option in the previous generation of NVIDIA GPUs.

Vlad

lansing
Posts: 38
Joined: Sat Apr 21, 2012 6:52 am

Re: GPU speed database

Post by lansing » Fri Dec 21, 2018 12:46 am

NVTeam wrote:
Thu Dec 20, 2018 11:20 pm
We recently posted a blog entry about GPUs that may be generally useful when choosing a new GPU. Since that post, some new GPU models have become available or been announced that are also very good:
- NVIDIA GeForce RTX 2070 / 2080 / 2080 Ti
- NVIDIA Titan RTX
- NVIDIA Titan V

I think the 2080 Ti is a good choice considering performance and price. The 1080 Ti is probably the best option in the previous generation of NVIDIA GPUs.

Vlad
Thanks for the article. From what I read, in terms of VRAM any high-end card with more than 4 GB will be enough, so a 6 GB GPU is more than enough for the program. And there wasn't much of a difference between the GTX 1070 and the 1080; the big jump came from the 1080 to the 1080 Ti. So my choice will be between the 1070 and the 1080 Ti: either go for the highest, or save $300 with the lower one.

NVTeam
Posts: 2385
Joined: Thu Sep 01, 2005 4:12 pm
Contact:

Re: GPU speed database

Post by NVTeam » Fri Dec 21, 2018 9:47 am

Neat Video itself does not require a lot of memory (it can adapt to the available amount), but other applications (including the host application) and plug-ins may require more GPU memory. For example, modern versions of Premiere and AE require 4 GB of video RAM, and Resolve requires 8 GB for 4K work. Etc.

Fifonik
Posts: 64
Joined: Sat Apr 14, 2012 1:51 am
Location: Australia, Brisbane

Re: GPU speed database

Post by Fifonik » Tue Dec 25, 2018 4:54 am

Neat Video now has a lot of different options, and it is not possible to change them in the optimizer directly. As a Vegas user, I have to change the project settings (for the 8/32-bit pixel format), apply the plugin to a fragment, run Auto Profile, switch to the Filter Settings tab, change some settings to match the "defaults" the database should use, and only then go to the Optimizer. Then repeat a few more times (for different settings). This is too complicated for a typical user.
So I simply do not know how to update it in a way that would be usable. Unfortunately.

NVTeam
Posts: 2385
Joined: Thu Sep 01, 2005 4:12 pm
Contact:

Re: GPU speed database

Post by NVTeam » Tue Dec 25, 2018 9:54 am

Yes, a typical user would not try to measure different combinations; they would likely just optimize for the frame size, bit depth, and filter settings they actually use in their project. Those who want to add reasonably complete data to the database will likely try different combinations.

We usually run our tests using 1920x1080, 8-bit and 32-bit, and default filter settings to make the results directly comparable. When Neat Video users send us their results, we ask them to use those settings for the same reason.

I agree with lansing: it would help to be able to sort the database to find the fastest GPU for the chosen settings.

Thank you,
Vlad

djmarksf
Posts: 9
Joined: Sun May 03, 2015 8:28 pm

Re: GPU speed database

Post by djmarksf » Mon Jan 14, 2019 3:10 am

I have a few results to post. This first one is using a "MacBook Pro (13-inch, 2017, Four Thunderbolt 3 Ports)" with a 3.1 GHz i5:

GPU detection log:

Looking for NVIDIA CUDA-capable devices...
Failed to load CUDA driver ("/usr/local/cuda/lib/libcuda.dylib")
If you use an NVIDIA card, please install the latest CUDA driver from NVIDIA.

Looking for AMD OpenCL-capable devices...
OpenCL driver version: 20181029.214316
OpenCL initialized successfully.
Checking OpenCL GPU #1:
GPU device name is: Intel(R) Iris(TM) Graphics 650
1536 MB available during initialization
This device is not supported
Check failed - will not use the device


Neat Video benchmark:

Frame Size: 1920x1080 progressive
Bitdepth: 8 bits per channel
Mix with Original: Disabled
Temporal Filter: Enabled
  Quality Mode: Normal
  Radius: 2 frames
  Dust and Scratches: Disabled
  Slow Shutter: Disabled
Spatial Filter: Enabled
  Quality Mode: Normal
  Frequencies: High, Mid, Low
  Artifact Removal: Enabled
  Detail Recovery: Disabled
  Edge Smoothing: Disabled
  Sharpening: Disabled


Detecting the best combination of performance settings:
running the test data set on up to 4 CPU cores

1 core: 1.59 frames/sec
2 cores: 3.34 frames/sec
3 cores: 3.72 frames/sec
4 cores: 4 frames/sec

Best combination: 4 cores

djmarksf
Posts: 9
Joined: Sun May 03, 2015 8:28 pm

Re: GPU speed database

Post by djmarksf » Mon Jan 14, 2019 3:15 am

Next we have that same 13-inch MBP with an Asus TB3-PCI chassis, containing an Asus "Strix"-branded AMD Radeon RX Vega 64 GPU:

GPU detection log:

Looking for NVIDIA CUDA-capable devices...
Failed to load CUDA driver ("/usr/local/cuda/lib/libcuda.dylib")
If you use an NVIDIA card, please install the latest CUDA driver from NVIDIA.

Looking for AMD OpenCL-capable devices...
OpenCL driver version: 20181029.214316
OpenCL initialized successfully.
Checking OpenCL GPU #1:
GPU device name is: Intel(R) Iris(TM) Graphics 650
1536 MB available during initialization
This device is not supported
Check failed - will not use the device
Checking OpenCL GPU #2:
GPU device name is: AMD Radeon RX Vega 64 Compute Engine
8176 MB available during initialization
Check passed - will attempt to use the device


Neat Video benchmark:

Frame Size: 1920x1080 progressive
Bitdepth: 8 bits per channel
Mix with Original: Disabled
Temporal Filter: Enabled
  Quality Mode: Normal
  Radius: 2 frames
  Dust and Scratches: Disabled
  Slow Shutter: Disabled
Spatial Filter: Enabled
  Quality Mode: Normal
  Frequencies: High, Mid, Low
  Artifact Removal: Enabled
  Detail Recovery: Disabled
  Edge Smoothing: Disabled
  Sharpening: Disabled


Detecting the best combination of performance settings:
running the test data set on up to 4 CPU cores and on up to 1 GPU
AMD Radeon RX Vega 64 Compute Engine: 8176 MB currently available, using up to 100%

CPU only (1 core): 1.69 frames/sec
CPU only (2 cores): 3.33 frames/sec
CPU only (3 cores): 3.72 frames/sec
CPU only (4 cores): 4.03 frames/sec
GPU only (AMD Radeon RX Vega 64 Compute Engine): 5.71 frames/sec
CPU (1 core) and GPU (AMD Radeon RX Vega 64 Compute Engine): 4.9 frames/sec
CPU (2 cores) and GPU (AMD Radeon RX Vega 64 Compute Engine): 4.63 frames/sec
CPU (3 cores) and GPU (AMD Radeon RX Vega 64 Compute Engine): 6.21 frames/sec
CPU (4 cores) and GPU (AMD Radeon RX Vega 64 Compute Engine): 5.43 frames/sec

Best combination: CPU (3 cores) and GPU (AMD Radeon RX Vega 64 Compute Engine)

djmarksf
Posts: 9
Joined: Sun May 03, 2015 8:28 pm

Re: GPU speed database

Post by djmarksf » Mon Jan 14, 2019 3:22 am

Now we use that same Vega 64 card in a new Windows workstation PC: Asus Prime X299-Deluxe II mainboard, Intel i9-7960X (16 cores, 2.8 GHz), 128 GB DDR4-3200 RAM, Windows 10 Home.

GPU detection log:

Looking for NVIDIA CUDA-capable devices...
Failed to load CUDA driver (nvcuda.dll).
If you use an NVIDIA card, please install the latest video driver with CUDA support.

Looking for AMD OpenCL-capable devices...
OpenCL driver version: 2580.6
OpenCL initialized successfully.
Checking OpenCL GPU #1:
GPU device name is: Radeon RX Vega (gfx900)
8176 MB available during initialization
Check passed - will attempt to use the device


Neat Video benchmark:

Frame Size: 1920x1080 progressive
Bitdepth: 8 bits per channel
Mix with Original: Disabled
Temporal Filter: Enabled
  Quality Mode: Normal
  Radius: 2 frames
  Dust and Scratches: Disabled
  Slow Shutter: Disabled
Spatial Filter: Enabled
  Quality Mode: Normal
  Frequencies: High, Mid, Low
  Artifact Removal: Enabled
  Detail Recovery: Disabled
  Edge Smoothing: Disabled
  Sharpening: Disabled


Detecting the best combination of performance settings:
running the test data set on up to 32 CPU cores and on up to 1 GPU
Radeon RX Vega: 8176 MB currently available, using up to 100%

CPU only (1 core): 1.98 frames/sec
CPU only (2 cores): 4.08 frames/sec
CPU only (3 cores): 6.06 frames/sec
CPU only (4 cores): 8.13 frames/sec
CPU only (5 cores): 10 frames/sec
CPU only (6 cores): 11.8 frames/sec
CPU only (7 cores): 13.7 frames/sec
CPU only (8 cores): 15.9 frames/sec
CPU only (9 cores): 17.5 frames/sec
CPU only (10 cores): 19.2 frames/sec
CPU only (11 cores): 20.8 frames/sec
CPU only (12 cores): 22.2 frames/sec
CPU only (13 cores): 23.8 frames/sec
CPU only (14 cores): 25.6 frames/sec
CPU only (15 cores): 27 frames/sec
CPU only (16 cores): 28.6 frames/sec
CPU only (17 cores): 28.6 frames/sec
CPU only (18 cores): 28.6 frames/sec
CPU only (19 cores): 28.6 frames/sec
CPU only (20 cores): 29.4 frames/sec
CPU only (21 cores): 29.4 frames/sec
CPU only (22 cores): 29.4 frames/sec
CPU only (23 cores): 29.4 frames/sec
CPU only (24 cores): 29.4 frames/sec
CPU only (25 cores): 29.4 frames/sec
CPU only (26 cores): 28.6 frames/sec
CPU only (27 cores): 28.6 frames/sec
CPU only (28 cores): 28.6 frames/sec
CPU only (29 cores): 27.8 frames/sec
CPU only (30 cores): 27 frames/sec
CPU only (31 cores): 26.3 frames/sec
CPU only (32 cores): 25.6 frames/sec
GPU only (Radeon RX Vega): 19.2 frames/sec
CPU (1 core) and GPU (Radeon RX Vega): 15.6 frames/sec
CPU (2 cores) and GPU (Radeon RX Vega): 14.5 frames/sec
CPU (3 cores) and GPU (Radeon RX Vega): 17.5 frames/sec
CPU (4 cores) and GPU (Radeon RX Vega): 18.9 frames/sec
CPU (5 cores) and GPU (Radeon RX Vega): 23.3 frames/sec
CPU (6 cores) and GPU (Radeon RX Vega): 23.3 frames/sec
CPU (7 cores) and GPU (Radeon RX Vega): 24.4 frames/sec
CPU (8 cores) and GPU (Radeon RX Vega): 26.3 frames/sec
CPU (9 cores) and GPU (Radeon RX Vega): 26.3 frames/sec
CPU (10 cores) and GPU (Radeon RX Vega): 29.4 frames/sec
CPU (11 cores) and GPU (Radeon RX Vega): 29.4 frames/sec
CPU (12 cores) and GPU (Radeon RX Vega): 33.3 frames/sec
CPU (13 cores) and GPU (Radeon RX Vega): 35.7 frames/sec
CPU (14 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (15 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (16 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (17 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (18 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (19 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (20 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (21 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (22 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (23 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (24 cores) and GPU (Radeon RX Vega): 35.7 frames/sec
CPU (25 cores) and GPU (Radeon RX Vega): 35.7 frames/sec
CPU (26 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (27 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
CPU (28 cores) and GPU (Radeon RX Vega): 33.3 frames/sec
CPU (29 cores) and GPU (Radeon RX Vega): 33.3 frames/sec
CPU (30 cores) and GPU (Radeon RX Vega): 33.3 frames/sec
CPU (31 cores) and GPU (Radeon RX Vega): 32.3 frames/sec
CPU (32 cores) and GPU (Radeon RX Vega): 30.3 frames/sec

Best combination: CPU (13 cores) and GPU (Radeon RX Vega)
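For anyone who wants to pull the winner out of a log like this automatically, a short script does it. This is just a sketch parsing the line format shown above, not an official tool:

```python
import re

# A few lines in the same format as the optimizer output above (abridged).
log = """\
CPU only (16 cores): 28.6 frames/sec
GPU only (Radeon RX Vega): 19.2 frames/sec
CPU (13 cores) and GPU (Radeon RX Vega): 35.7 frames/sec
CPU (14 cores) and GPU (Radeon RX Vega): 34.5 frames/sec
"""

# Each result line ends in "<speed> frames/sec"; capture combination and speed.
pattern = re.compile(r"^(.*): ([\d.]+) frames/sec$", re.MULTILINE)
results = [(combo, float(fps)) for combo, fps in pattern.findall(log)]

# The optimizer's search is effectively an argmax over these measurements.
best = max(results, key=lambda r: r[1])
print(best)  # ('CPU (13 cores) and GPU (Radeon RX Vega)', 35.7)
```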

djmarksf
Posts: 9
Joined: Sun May 03, 2015 8:28 pm

Re: GPU speed database

Post by djmarksf » Mon Jan 14, 2019 3:24 am

Now the same workstation PC as above gets a GeForce RTX 2080 Ti in place of the Vega 64:

GPU detection log:

Looking for NVIDIA CUDA-capable devices...
CUDA driver version: 10000
NVIDIA CUDA initialized successfully.
Checking CUDA GPU #1:
GPU device name is: GeForce RTX 2080 Ti
9209 MB available during initialization (11264 MB total)
Check passed - will attempt to use the device

Looking for AMD OpenCL-capable devices...
Failed to initialize OpenCL.
If you use an AMD card, please install the latest AMD driver with OpenCL support.

Neat Video benchmark:

Frame Size: 1920x1080 progressive
Bitdepth: 8 bits per channel
Mix with Original: Disabled
Temporal Filter: Enabled
  Quality Mode: Normal
  Radius: 2 frames
  Dust and Scratches: Disabled
  Slow Shutter: Disabled
Spatial Filter: Enabled
  Quality Mode: Normal
  Frequencies: High, Mid, Low
  Artifact Removal: Enabled
  Detail Recovery: Disabled
  Edge Smoothing: Disabled
  Sharpening: Disabled


Detecting the best combination of performance settings:
running the test data set on up to 32 CPU cores and on up to 1 GPU
GeForce RTX 2080 Ti: 9209 MB currently available (11264 MB total), using up to 100%

CPU only (1 core): 1.99 frames/sec
CPU only (2 cores): 4.12 frames/sec
CPU only (3 cores): 6.06 frames/sec
CPU only (4 cores): 8.33 frames/sec
CPU only (5 cores): 10.2 frames/sec
CPU only (6 cores): 12 frames/sec
CPU only (7 cores): 14.1 frames/sec
CPU only (8 cores): 15.9 frames/sec
CPU only (9 cores): 17.5 frames/sec
CPU only (10 cores): 19.2 frames/sec
CPU only (11 cores): 20.8 frames/sec
CPU only (12 cores): 22.2 frames/sec
CPU only (13 cores): 24.4 frames/sec
CPU only (14 cores): 25.6 frames/sec
CPU only (15 cores): 27 frames/sec
CPU only (16 cores): 28.6 frames/sec
CPU only (17 cores): 28.6 frames/sec
CPU only (18 cores): 28.6 frames/sec
CPU only (19 cores): 28.6 frames/sec
CPU only (20 cores): 28.6 frames/sec
CPU only (21 cores): 27.8 frames/sec
CPU only (22 cores): 29.4 frames/sec
CPU only (23 cores): 29.4 frames/sec
CPU only (24 cores): 29.4 frames/sec
CPU only (25 cores): 28.6 frames/sec
CPU only (26 cores): 28.6 frames/sec
CPU only (27 cores): 28.6 frames/sec
CPU only (28 cores): 27.8 frames/sec
CPU only (29 cores): 27.8 frames/sec
CPU only (30 cores): 26.3 frames/sec
CPU only (31 cores): 26.3 frames/sec
CPU only (32 cores): 25.6 frames/sec
GPU only (GeForce RTX 2080 Ti): 29.4 frames/sec
CPU (1 core) and GPU (GeForce RTX 2080 Ti): 22.2 frames/sec
CPU (2 cores) and GPU (GeForce RTX 2080 Ti): 24.4 frames/sec
CPU (3 cores) and GPU (GeForce RTX 2080 Ti): 26.3 frames/sec
CPU (4 cores) and GPU (GeForce RTX 2080 Ti): 29.4 frames/sec
CPU (5 cores) and GPU (GeForce RTX 2080 Ti): 31.3 frames/sec
CPU (6 cores) and GPU (GeForce RTX 2080 Ti): 30.3 frames/sec
CPU (7 cores) and GPU (GeForce RTX 2080 Ti): 33.3 frames/sec
CPU (8 cores) and GPU (GeForce RTX 2080 Ti): 38.5 frames/sec
CPU (9 cores) and GPU (GeForce RTX 2080 Ti): 38.5 frames/sec
CPU (10 cores) and GPU (GeForce RTX 2080 Ti): 38.5 frames/sec
CPU (11 cores) and GPU (GeForce RTX 2080 Ti): 40 frames/sec
CPU (12 cores) and GPU (GeForce RTX 2080 Ti): 41.7 frames/sec
CPU (13 cores) and GPU (GeForce RTX 2080 Ti): 43.5 frames/sec
CPU (14 cores) and GPU (GeForce RTX 2080 Ti): 43.5 frames/sec
CPU (15 cores) and GPU (GeForce RTX 2080 Ti): 43.5 frames/sec
CPU (16 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (17 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (18 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (19 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (20 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (21 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (22 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (23 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (24 cores) and GPU (GeForce RTX 2080 Ti): 47.6 frames/sec
CPU (25 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (26 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (27 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (28 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (29 cores) and GPU (GeForce RTX 2080 Ti): 45.5 frames/sec
CPU (30 cores) and GPU (GeForce RTX 2080 Ti): 43.5 frames/sec
CPU (31 cores) and GPU (GeForce RTX 2080 Ti): 43.5 frames/sec
CPU (32 cores) and GPU (GeForce RTX 2080 Ti): 43.5 frames/sec

Best combination: CPU (24 cores) and GPU (GeForce RTX 2080 Ti)

djmarksf
Posts: 9
Joined: Sun May 03, 2015 8:28 pm

Re: GPU speed database

Post by djmarksf » Mon Jan 14, 2019 3:29 am

And finally, a result from the Late 2014 27-inch 5K iMac that has been my main workstation for the last four years. This has a 4.0 GHz i7-4790K with 32 GB RAM and an AMD Radeon R9 M295X GPU.

GPU detection log:

Looking for NVIDIA CUDA-capable devices...
Failed to load CUDA driver ("/usr/local/cuda/lib/libcuda.dylib")
If you use an NVIDIA card, please install the latest CUDA driver from NVIDIA.

Looking for AMD OpenCL-capable devices...
OpenCL driver version: 20180524.200703
OpenCL initialized successfully.
Checking OpenCL GPU #1:
GPU device name is: AMD Radeon R9 M295X Compute Engine
4096 MB available during initialization
Check passed - will attempt to use the device


Neat Video benchmark:

Frame Size: 1920x1080 progressive
Bitdepth: 8 bits per channel
Mix with Original: Disabled
Temporal Filter: Enabled
  Quality Mode: Normal
  Radius: 2 frames
  Dust and Scratches: Disabled
  Slow Shutter: Disabled
Spatial Filter: Enabled
  Quality Mode: Normal
  Frequencies: High, Mid, Low
  Artifact Removal: Enabled
  Detail Recovery: Disabled
  Edge Smoothing: Disabled
  Sharpening: Disabled


Detecting the best combination of performance settings:
running the test data set on up to 8 CPU cores and on up to 1 GPU
AMD Radeon R9 M295X Compute Engine: 4096 MB currently available, using up to 100%

CPU only (1 core): 1.92 frames/sec
CPU only (2 cores): 3.88 frames/sec
CPU only (3 cores): 5.46 frames/sec
CPU only (4 cores): 6.71 frames/sec
CPU only (5 cores): 6.71 frames/sec
CPU only (6 cores): 6.49 frames/sec
CPU only (7 cores): 6.29 frames/sec
CPU only (8 cores): 5.95 frames/sec
GPU only (AMD Radeon R9 M295X Compute Engine): 6.85 frames/sec
CPU (1 core) and GPU (AMD Radeon R9 M295X Compute Engine): 6.1 frames/sec
CPU (2 cores) and GPU (AMD Radeon R9 M295X Compute Engine): 5.78 frames/sec
CPU (3 cores) and GPU (AMD Radeon R9 M295X Compute Engine): 7.81 frames/sec
CPU (4 cores) and GPU (AMD Radeon R9 M295X Compute Engine): 8.13 frames/sec
CPU (5 cores) and GPU (AMD Radeon R9 M295X Compute Engine): 8.85 frames/sec
CPU (6 cores) and GPU (AMD Radeon R9 M295X Compute Engine): 10.2 frames/sec
CPU (7 cores) and GPU (AMD Radeon R9 M295X Compute Engine): 9.9 frames/sec
CPU (8 cores) and GPU (AMD Radeon R9 M295X Compute Engine): 9.35 frames/sec

Best combination: CPU (6 cores) and GPU (AMD Radeon R9 M295X Compute Engine)

Fifonik
Posts: 64
Joined: Sat Apr 14, 2012 1:51 am
Location: Australia, Brisbane

Re: GPU speed database

Post by Fifonik » Mon Jan 14, 2019 5:46 am

NVTeam wrote:
Tue Dec 25, 2018 9:54 am
Yes, a typical user would not try to measure different combinations; they would likely just optimize for the frame size, bit depth, and filter settings they actually use in their project. Those who want to add reasonably complete data to the database will likely try different combinations.

We usually run our tests using 1920x1080, 8-bit and 32-bit, and default filter settings to make the results directly comparable. When Neat Video users send us their results, we ask them to use those settings for the same reason.
I can change the tool so it will only accept certain 'standard' settings and refuse anything else, but it would require too many things to be changed by the user to fit the requirements.

NVTeam
Posts: 2385
Joined: Thu Sep 01, 2005 4:12 pm
Contact:

Re: GPU speed database

Post by NVTeam » Mon Jan 14, 2019 10:59 am

Thank you, djmarksf, for posting the comparative results. It is nice to see a good speedup with the latest cards.

Vlad

NVTeam
Posts: 2385
Joined: Thu Sep 01, 2005 4:12 pm
Contact:

Re: GPU speed database

Post by NVTeam » Mon Jan 14, 2019 11:01 am

Fifonik wrote:
Mon Jan 14, 2019 5:46 am
NVTeam wrote:
Tue Dec 25, 2018 9:54 am
Yes, a typical user would not try to measure different combinations; they would likely just optimize for the frame size, bit depth, and filter settings they actually use in their project. Those who want to add reasonably complete data to the database will likely try different combinations.

We usually run our tests using 1920x1080, 8-bit and 32-bit, and default filter settings to make the results directly comparable. When Neat Video users send us their results, we ask them to use those settings for the same reason.
I can change the tool so it will only accept certain 'standard' settings and refuse anything else, but it would require too many things to be changed by the user to fit the requirements.
Perhaps it just makes sense for testers to run the tests with the 'standard' settings and submit those to the database.
But I guess it would be excessive not to accept other results.
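If the tool were to flag non-standard submissions rather than reject them, the check itself would be tiny. A sketch, with field names invented for illustration rather than taken from the actual tool:

```python
# Hypothetical reference profile; real key names would come from the tool itself.
STANDARD = {"frame_size": "1920x1080", "bitdepth": 8, "temporal_radius": 2}

def is_standard(submitted: dict) -> bool:
    """True if every standard field matches; extra fields (e.g. the
    measured result itself) are ignored, so such entries stay comparable."""
    return all(submitted.get(k) == v for k, v in STANDARD.items())

sample = {"frame_size": "1920x1080", "bitdepth": 8, "temporal_radius": 2, "fps": 45.5}
print(is_standard(sample))  # True
```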
