Comment Re: Recent experience (Score 1) 26
Context windows make a big difference, both in how well the model stays on track and in how much memory and processing power you need. On my PC with a 24GB GPU and 64GB of DDR5 I can run reasonably effective models (up to 12GB) with a 60K context window. That's good enough for small stuff.
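To make the memory cost concrete, here is a rough back-of-the-envelope sketch of why a bigger context window eats GPU memory: the KV cache grows linearly with context length. All the architecture numbers below (layers, KV heads, head dim) are assumptions for a hypothetical mid-size model, not measurements of any specific one.

```python
# Sketch: estimate KV-cache size for a given context window.
# Architecture numbers are assumed, not from a real model card.

def kv_cache_gib(layers, kv_heads, head_dim, context_len, bytes_per_value=2):
    """Two tensors (K and V) per layer, fp16 values (2 bytes each)."""
    total = 2 * layers * kv_heads * head_dim * context_len * bytes_per_value
    return total / 2**30

# Assumed: 40 layers, 8 KV heads (grouped-query attention),
# head dim 128, a 60K-token window, fp16 cache.
print(round(kv_cache_gib(40, 8, 128, 60_000), 1))  # → 9.2 (GiB)
```

That's on top of the model weights themselves, which is why a 60K window is already a squeeze on a 24GB card and why quantized KV caches are popular.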
Codex 5.3 can do 230K, can go much higher via the API, and automatically compacts the token window. That makes a big difference.
What also helps is maintaining a task list: tell it to split any assignment into tasks on that list, to work only on tasks from the list, and to document each result on the list as well. That way you get more persistent memory of what it did, and it only needs to keep the context for the task at hand.
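For illustration, a minimal sketch of what such a task file can look like (the filename and task names are made up, not a prescribed format):

```
# TASKS.md

- [x] Split "add CSV export" into subtasks (this list)
- [x] Add export_csv() to report module
      Result: done, unit test added, handles empty input
- [ ] Wire export button into the UI
- [ ] Update user docs
```

The checked-off entries with a one-line result are the part that acts as persistent memory: a fresh session can read the file and pick up the next unchecked task without needing the old conversation in context.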