
Here's a much more cost-effective answer: use https://cloud.google.com/tpu/

Unless you have an unlimited supply of free electricity and don't care about the increased hardware management overhead, it's a waste of money to buy Pascal GPUs for large-scale deep learning.

The following cards have much more optimized deep learning silicon and are publicly available /right now/:

- Nvidia Tesla V100 (Tensor cores only: 120 TFLOPS FP16)

- Google TPU2 (180 TFLOPS FP16)
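For a rough sense of how "cost effective" might be compared, throughput per dollar is simple arithmetic over the FP16 TFLOPS figures above. The hourly prices below are hypothetical placeholders for illustration only; neither V100 nor TPU2 cloud pricing is stated in this thread.

```python
# Rough throughput-per-dollar comparison of FP16 accelerators.
# TFLOPS figures come from the comment above; the hourly prices are
# HYPOTHETICAL placeholders, not real quotes.
accelerators = {
    "Tesla V100 (Tensor cores)": {"tflops_fp16": 120, "usd_per_hour": 3.00},  # hypothetical price
    "Google TPU2":               {"tflops_fp16": 180, "usd_per_hour": 4.00},  # hypothetical price
}

for name, spec in accelerators.items():
    tflops_per_dollar_hour = spec["tflops_fp16"] / spec["usd_per_hour"]
    print(f"{name}: {tflops_per_dollar_hour:.1f} TFLOPS per $/hour")
```

Until real pricing is published, the right-hand column of that comparison is unknowable, which is exactly the objection raised below.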

Additionally, Intel Xeon chips with the Nervana deep learning accelerator built-in will probably be available early next year.

If you must control the physical hardware yourself and can't use cloud services, go buy Tesla V100s or wait for the Nervana Xeons.



That makes no sense: the TPU2 isn't really out yet for actual consumers. AFAIK it's only in closed alpha right now, so if you're doing work today it's not a real option. There's also no public pricing, so the "cost effective" claim can't actually be evaluated.


If and when it does go on sale, I wonder how many hardware and driver problems customers will face over the years.



