
Well, I can share my experience from a few days ago: I gave the same task (a major refactor) to both Claude and Codex.

Codex finished in 5 minutes; Claude was still spinning after 20. Claude also burned through all my usage, about twice over: the 5-hour window rolled over in the middle of the task, so that one task added up to 192% of a window's quota. Codex used 9%. That's roughly a 21x difference (192 / 9 ≈ 21), lol

They say there are bugs lately in how usage is measured, but buggy usage metering isn't exactly more encouraging...

So I was on task #4 with Codex while Claude was still spinning on #1.

I didn't like the results Codex gave me, though. It has a habit of doing "technically what you asked, but not what a normal human would have wanted."

So given "Claude is great but I can't actually use it much" and "Codex is cheap and fast but kinda sucks," the current optimum seems to be having Claude write detailed specs and delegating execution to Codex; a sketch of that follows. (OpenAI isn't banning people for using third-party orchestration, so this is something you could actually do without problems. The reverse is not true.)
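A minimal sketch of what that orchestration could look like, assuming both CLIs are installed and support non-interactive invocation. The flags here (`claude -p` for Claude Code's print mode, `codex exec` for Codex's headless mode) reflect the CLIs as I understand them; verify them against your installed versions before relying on this.

    import subprocess

    # Example task; substitute your own.
    TASK = "Refactor the payments module to use the new client API"

    # Step 1: ask Claude to produce a detailed spec. Assumes `claude -p`
    # prints a single response to stdout and exits (an assumption about
    # your CLI version).
    spec = subprocess.run(
        ["claude", "-p", f"Write a detailed implementation spec for: {TASK}"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Step 2: hand the spec to Codex for execution. Assumes `codex exec`
    # runs one non-interactive task from a prompt string (again, verify
    # against your CLI).
    subprocess.run(["codex", "exec", spec], check=True)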




> Claude was still spinning after 20 minutes.

I have been using Claude Code on a medium codebase (~2000 files, ~1M lines of code) for over a year and have never had to wait this long. I'm also on the Max plan and haven't hit these limits at all.


Just yesterday it thought for 591 seconds for me, which is almost ten minutes. There have been times this week when it ran even longer and I assumed it was just bust and stopped it.

Just chipping in to say that I've never seen it churn for more than 20 minutes in two years' worth of usage. The longest I've ever seen it run is when I had it give an extremely detailed analysis of five fictional novels simultaneously.

Fictional novels? Did it have to write them first?



