
Yes. The earliest consumer PC 3D graphics cards just rasterized pre-transformed triangles; the CPU had to do pretty much all the math (but drawing the pixels was considered the hard part). Hardware Transform and Lighting (T&L) arrived later, circa 1999-2000, with cards like the GeForce 256.


And even then, you couldn't really get any sort of serious matmul out of it; the transforms were per-vertex, not per-pixel.

Per-pixel matmul (which is what you really need for anything resembling GPGPU) came with Shader Model 2.0, circa 2002; the Radeon 9700, the GeForce FX series and the like. CUDA didn't exist (nor really any other form of compute shaders), but you could wrangle it with pixel shaders, and some of us did.
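
For anyone who never saw that trick: you stored each matrix in a texture, rendered a full-screen quad into a float render target, and let the pixel shader compute one output element per pixel. A rough sketch of the idea, with the shader written in GLSL for readability (the era's actual tools were DX9 HLSL ps_2_0 or ARB_fragment_program assembly, and instruction limits usually meant unrolling or splitting the inner loop across several passes); the texture layout and all names here are illustrative, not anyone's shipped code:

  /* Classic "matmul via pixel shader" sketch. A is MxK, B is KxN, one
     float element per texel; each pixel of the render target computes
     one element of C = A*B. */
  static const char *matmul_fragment_shader =
    "uniform sampler2D matA;  /* row = output row t, column = k    */\n"
    "uniform sampler2D matB;  /* row = k, column = output column s */\n"
    "uniform float K;         /* shared inner dimension            */\n"
    "void main() {\n"
    "  vec2 uv = gl_TexCoord[0].st; /* s -> out column, t -> out row */\n"
    "  float sum = 0.0;\n"
    "  for (float k = 0.5; k < K; k += 1.0) {\n"
    "    sum += texture2D(matA, vec2(k / K, uv.t)).r\n"
    "         * texture2D(matB, vec2(uv.s, k / K)).r;\n"
    "  }\n"
    "  gl_FragColor = vec4(sum);    /* one element of the product */\n"
    "}\n";

Host-side you drew that quad into a floating-point pbuffer and pulled the result back with glReadPixels; float textures and render targets were themselves fairly new at that point.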


Oh man, I forgot about doing vector math using OpenGL textures as "hardware acceleration". And it would be many more years before it was reasonable to require a GPU with programmable shaders; having to support fixed-function was a fact of life for most of the 2000s.


There were actually some completely insane workarounds even before shaders. I don't think it ever shipped in real software, but I saw something that used 11 or 18 passes or so to do dot3 texture blending even on unextended OpenGL 1.0: painstakingly doing one color channel at a time, and handling values above and below zero on source and destination separately…

Granted, if you didn't have the “squared blend” extension, it would be an approximation, but still a pretty convincing one.
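
To give a feel for why the pass count balloons (this is a reconstruction of the general approach, not necessarily what that demo did): the framebuffer can only hold unsigned values, so each signed factor is split into a positive and a negative half, and for every color channel you need all four products of those halves, accumulated into separate positive and negative running sums that get combined at the end. The fixed-function building blocks are a multiply pass and an add pass; roughly, in OpenGL 1.x terms (the helpers and texture ids below are hypothetical):

  #include <GL/gl.h>

  /* Hypothetical helpers/ids, just for the sketch. */
  extern void draw_fullscreen_quad(void);
  extern GLuint normal_pos_tex, product_term_tex;

  /* Per-channel multiply pass: the framebuffer already holds one factor
     (say the positive half of L); drawing the other factor with this
     blend mode leaves src * dst in each channel. */
  void multiply_pass(void)
  {
      glEnable(GL_TEXTURE_2D);
      glEnable(GL_BLEND);
      glBlendFunc(GL_DST_COLOR, GL_ZERO);  /* result = incoming * framebuffer */
      glBindTexture(GL_TEXTURE_2D, normal_pos_tex);
      draw_fullscreen_quad();
  }

  /* Accumulate pass: add a previously computed product term into a running
     sum. Positive and negative terms need separate accumulators, since the
     framebuffer can't hold signed values; they get combined at the end. */
  void accumulate_pass(void)
  {
      glBlendFunc(GL_ONE, GL_ONE);         /* result = incoming + framebuffer */
      glBindTexture(GL_TEXTURE_2D, product_term_tex);
      draw_fullscreen_quad();
  }

Chaining these also needs copies between scratch buffers (glCopyPixels and friends), which is how a single per-pixel dot product turns into a dozen-plus passes.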



