Comment Re: The steps (Score 4, Interesting) 103
You can. I have been experimenting with some this week: Claude Code running locally against qwen3.5-35b and GLM-4.7 flash. It is very slow on my 3060 Ti/8GB, even though I have a 16-core CPU and 64GB of RAM. However, it is actually very capable if you can tolerate the wait. Still much faster than I could be at writing code by hand, writing tests, running them, even collecting logs and reverse engineering payloads from my IoT sensors.

If it was paid work, it could never compete with the cloud offerings. Waiting 5 minutes between prompts is common. ChatGPT Codex 5.3 is about 100x faster; I'm doing a free monthly trial right now. Not sure how much better the local models will get. Qwen3 Coder Next exceeds both the VRAM and RAM capacity of my system. It is, however, possible for me to upgrade to a 16GB VRAM GPU and 128GB of RAM. But prices are way too high, and I may not pull the trigger on a $1200 upgrade that would, at best, maybe get me 2x the token speed.