Ask Slashdot: How Much Faster Is an ASIC Than a Programmable GPU?

dryriver writes: When you run a real-time video processing algorithm on a GPU, you notice that some math functions execute very quickly while others take far more cycles, slowing the algorithm down. If you implemented that exact GPU algorithm as a dedicated ASIC, or perhaps on a beefy FPGA, what kind of speedup, if any, could you expect over a midrange GPU like a GTX 1070? Would hardwiring the same math operations as ASIC circuitry lead to a massive reduction in execution time, as some people claim (e.g. 5x or 10x faster than a general-purpose Nvidia GPU), or are GPUs and ASICs close to each other in execution speed? Bonus question: Is there a way to estimate the speed of an algorithm implemented as an ASIC without actually having a physical chip produced? Could you port the algorithm to, say, Verilog or a similar hardware description language and then use a software tool to calculate or predict how fast it would run if implemented as an ASIC with certain properties (clock speed, core count, manufacturing process… )?
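A minimal sketch of the kind of back-of-envelope estimate the bonus question is after: if you assume a clock rate and a number of parallel pipelines for a hypothetical fully pipelined ASIC datapath, and a sustained fraction of a GPU's peak FLOPS for the same kernel, both can be turned into frames per second for a given per-pixel workload. Every number here (500 MHz, 64 pipelines, 10% of GPU peak, 200 ops per pixel) is an assumption chosen only to make the arithmetic concrete; a real estimate would come from synthesizing the Verilog and reading the resulting timing and area reports.

```python
# Back-of-envelope throughput model for "how fast would my algorithm be as an ASIC?"
# All hardware numbers below are illustrative assumptions, not measurements or specs.

def asic_results_per_sec(clock_hz: float, pipelines: int) -> float:
    """A fully pipelined fixed-function datapath retires one result
    per pipeline per clock cycle."""
    return clock_hz * pipelines

def frames_per_sec(results_per_sec: float, width: int, height: int,
                   ops_per_pixel: int) -> float:
    """Frames per second if each pixel needs ops_per_pixel datapath results."""
    return results_per_sec / (width * height * ops_per_pixel)

# Hypothetical ASIC: 500 MHz clock, 64 parallel pipelines (assumptions).
asic_rate = asic_results_per_sec(500e6, 64)       # 3.2e10 results/s

# A GTX 1070 peaks at roughly 6.5 TFLOPS FP32; assume the kernel sustains
# only 10% of that peak due to divergence and memory stalls (assumption).
gpu_rate = 6.5e12 * 0.10                          # 6.5e11 useful FLOP/s

for name, rate in (("hypothetical ASIC", asic_rate),
                   ("GPU at 10% of peak", gpu_rate)):
    fps = frames_per_sec(rate, 1920, 1080, ops_per_pixel=200)
    print(f"{name}: {rate:.2e} results/s -> {fps:.0f} fps at 1080p")
```

With these particular numbers the GPU still comes out ahead on raw arithmetic throughput, which hints at why reported ASIC speedups vary so widely: the gain depends heavily on how far below its peak the GPU actually runs on that workload, and on power and latency, rather than on raw FLOPS alone.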


Source:
https://hardware.slashdot.org/story/19/11/26/2218213/ask-slashdot-how-much-faster-is-an-asic-than-a-programmable-gpu?utm_source=rss1.0mainlinkanon&utm_medium=feed