
Interesting. On the trifecta of money vs pain vs speed, Go seems to be a reasonable compromise.


FWIW I personally found Rust the least-painful language, but that may well be confirming my pre-established biases :)

- With Zig I kept running into compiler bugs, plus no package manager (I’ve vendored SDL and Clap into the source tree)

- C++ I’d occasionally shoot myself in the foot in ways that other languages would have caught, plus no package manager (OS-level package management does an OK job, so long as you don’t mind using old versions, and faffing about with different operating systems acting very differently)

- The pain from Rust was one time where the compiler wanted me to specify a lifetime, and I didn’t understand, so I just spammed lifetime specifiers in various places until it compiled. I’ve been using Rust for a couple of years now and I still don’t really understand lifetimes, but thankfully 99% of the time I can avoid them.

- Nim was a relatively nice language but massively lacking in available libraries (like even parsing command line arguments took me a day just trying to find a library which worked)

- Go is pretty nice; my main pain is the tolerable but constantly-annoying verbosity of error handling (`err := foo(); if err != nil { return err }` compared to Rust's `foo()?`)

- PHP I just hate on a deep and personal level thanks to years of being a PHP4/5 developer. The language is actually mostly-ok-ish these days, but the standard library is still full of frustration like inconsistent parameter orders within a family of functions.

- Python is all-round really nice to write, but the test suite takes like 20 minutes to run, which really messes with my flow-state
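The Go-vs-Rust error-handling contrast mentioned above can be sketched like this (a minimal example; `parse_and_double` is a made-up function, not from the emulator):

```rust
use std::num::ParseIntError;

// The `?` operator propagates the error to the caller in one character,
// where Go needs an explicit `if err != nil { return err }` after each call.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.trim().parse()?; // early-returns the Err on failure
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double("21"), Ok(42));
    assert!(parse_and_double("nope").is_err());
}
```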


"Rust the least-painful language"

" I’ve been using Rust for a couple of years now and I still don’t really understand lifetimes"

Seems like a major pain point.


It would be if I ran into it regularly -- but after using the language for a variety of professional and personal projects for a couple of years, this is the only time I've actually needed to manually specify lifetimes, since the compiler normally figures them out for me :)
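A minimal sketch of that elision: with a single reference parameter, the compiler infers that the returned reference borrows from the input, so no annotation is needed (the function name is just illustrative):

```rust
// No lifetime annotations anywhere: the elision rules tie the
// output's lifetime to the single input reference automatically.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
}
```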


As long as you don't get too crazy with references in structures or async code, lifetimes are not going to chase you.
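Storing a reference inside a struct is one of the cases where an annotation does become mandatory; a toy sketch (the `Excerpt` type is made up for illustration):

```rust
// A struct that holds a reference must declare a lifetime parameter;
// this is usually where annotations first show up in practice.
struct Excerpt<'a> {
    part: &'a str,
}

fn main() {
    let novel = String::from("Call me Ishmael. Some years ago");
    let first = novel.split('.').next().unwrap();
    // `e` cannot outlive `novel`, and the compiler enforces that.
    let e = Excerpt { part: first };
    assert_eq!(e.part, "Call me Ishmael");
}
```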


Generics too right?


I know people say that about everything, but Rust generics are very readable and do make sense once you understand what problem they're solving.

They do look intimidating to start with, admittedly, and I'll concede that's a negative point for Rust. But it does get better if you practice for a bit.


No, I just meant that using generics seems to require explicit lifetimes fairly often. I don't understand why.


Not sure that's the case, btw. I've noticed it with some libraries, but I've used and created a fair amount of generics without having to annotate anything with lifetimes.
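A sketch of that: a generic function with no lifetime annotations at all, since it returns an owned value rather than a borrow (the `largest` function is just an illustration):

```rust
// Generic over any copyable, ordered type -- no lifetimes needed,
// because nothing borrowed escapes the function.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &item in &items[1..] {
        if item > max {
            max = item;
        }
    }
    max
}

fn main() {
    assert_eq!(largest(&[1, 5, 3]), 5);
    assert_eq!(largest(&[0.5, 1.25]), 1.25);
}
```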

Lifetimes are necessary when you want to explicitly say "variable X will live just as long as variable Y". Sometimes it's more complex (e.g. you have to specify two or more separate lifetimes and then return something tied to only one of them), but it's still fairly predictable if you keep it all in your head while coding.

Don't get me wrong, I still hate it, but it's not as terrible as many people make it out to be. It's hard to get into, but also very logical and graspable.
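The "X lives as long as Y" case can be sketched like this -- two input lifetimes, with the return value tied to only one of them (the names are made up):

```rust
// The compiler can't guess which input the output borrows from when
// there are two reference parameters, so we name the lifetimes.
// Only `'a` appears on the return type: the result borrows from `x` alone.
fn first<'a, 'b>(x: &'a str, _y: &'b str) -> &'a str {
    x
}

fn main() {
    let long_lived = String::from("hello");
    let result;
    {
        let short_lived = String::from("world");
        // Fine: `result` is tied to `long_lived`, not `short_lived`,
        // so it may outlive this inner scope.
        result = first(&long_lived, &short_lived);
    }
    assert_eq!(result, "hello");
}
```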


The reason it's not a pain would be explained by the rest of the sentence which you omitted: "but thankfully 99% of the time I can avoid them"


They are saying rust is painful. Just that they find the other languages even more painful.


Except that they don't really say Rust was painful... just that there was one specific moment/aspect they found tricky.


This is actually a really nice write up for people deciding which language to learn/use if they aren't constrained.


Re CLIs in Nim: most find https://github.com/c-blake/cligen easy to use.


Nim does seem nice. I worry that it will end up like D though... An interesting/cool/neat language with seemingly relatively little adoption. I'm keeping my eye on it though.


C++ has two relatively good package managers, conan and vcpkg.


It's probably "slow" because of SDL and cgo, not native Go code. A GB emulator doesn't actually do much: it's mostly fixed arrays, bit shifting, switch cases, etc.

I ran a quick pprof and indeed it's spending a lot of time in cgo:

  Showing nodes accounting for 28720ms, 67.67% of 42440ms total
  Dropped 145 nodes (cum <= 212.20ms)
  Showing top 10 nodes out of 53
      flat  flat%   sum%        cum   cum%
   13080ms 30.82% 30.82%    16600ms 39.11%  runtime.cgocall
    4720ms 11.12% 41.94%     4750ms 11.19%  main.(*RAM).get
    2840ms  6.69% 48.63%    33070ms 77.92%  main.(*GPU).tick
    1970ms  4.64% 53.28%     3720ms  8.77%  runtime.mallocgc
    1450ms  3.42% 56.69%     1470ms  3.46%  main.(*RAM).set
    1160ms  2.73% 59.43%    41350ms 97.43%  main.(*GameBoy).tick
    1000ms  2.36% 61.78%     3160ms  7.45%  runtime.exitsyscall
     890ms  2.10% 63.88%     1610ms  3.79%  main.(*CPU).tick_interrupts
     820ms  1.93% 65.81%      850ms  2.00%  runtime.casgstatus
     790ms  1.86% 67.67%     5530ms 13.03%  main.(*CPU).tick


> It's probably "slow" because of sdl and cgo not native Go code

The other languages are also using SDL via their respective interacting-with-C interfaces - what makes Go special here?



The author stated that the benchmarks run in headless mode, so I am not sure that it's SDL that's slowing it down here.

Even if it is SDL slowing it down, Go FFI being slow is still a real disadvantage compared to the other languages, and you can't just pretend like it doesn't exist in this case.


Yup... a 240x performance difference (zig vs. py) has very little to do with the language, VM, or whatever. As soon as I saw that, I dismissed the benchmark.

By these standards, a 10-year-old CPU with a beefy GPU will beat any new CPU as well.


>Yup.. a 240x performance difference (zig-py) has very little to do with the language, vm, or whatever.

It absolutely does. A simple for loop in the standard Python interpreter will literally take 100x longer than the same thing in a language like C/C++; try it yourself if you don't believe me. CPython is unbelievably slow.


Very interesting... A simple loop (0 -> 320000000) adding 1 to a variable.

I couldn't reproduce 100x (no optimization flags on the C side, since otherwise the compiler optimizes the loop away entirely):

  Apple clang version 14.0.0 (clang-1400.0.29.102) -> 0m0.347s
  ruby 3.1.2p20 -> 0m11.314s
  Python 3.8.12 -> 0m19.662s
So Ruby runs in a bit under 60% of Python's time, but the C version is "only" ~33x faster than Ruby (~57x faster than Python).

I thought the differences would be smaller these days


What does a GPU have to do with any of this? I think you may be a little confused about something but I am not sure what.


It depends, really. In this case, a game emulator, you can only get away with it because it's emulating an ancient console. Otherwise you definitely can't afford a 5x slowdown relative to C++ for a game.


Maximize pain for mediocre speed and money?


Also C# is quite reasonable.



