
Comment Re: it's a tool (Score 1) 109

"I do not see where AI can come up with original problems to solve"

A lot of original ideas arise from fuzzy (or even erroneous) thinking: you start with a bad or incoherent hypothesis and then iteratively correct and refine it until you get a solid result (or give up).

I think AI is going to struggle with that approach, though I see glimmerings of it when it tries to debug code.

Comment Re: Interesting Summary (Score 4, Interesting) 58

Any organization that imposes a blanket ban on AI tools will soon be left in the dust.

To use a tautology, AI is good at what AI is good for: documentation, research, incremental coding, and performance/storage trade-off evaluation.

It is not (yet) good at architecture design or efficiency, nor even at following DRY principles. It is nonetheless really, really useful for what it does well.

Comment Where's the back and forth? (Score 1) 147

Claude seldom asks clarifying questions about an ambiguous request. Sure, you should try to be specific, but sometimes statements can be interpreted differently than intended. In those cases, Claude just merrily chugs along, making vast code changes based on false assumptions.

Comment Re: I wish that... (Score 1) 147

I had a similar experience. Eventually I went back to Sonnet.

My one suggestion is to have the AI update the copilot-instructions.md file on its own after a session. This helps it keep things in memory. Too often I've caught Claude doing stupid things I told it not to do a few prompts back.
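For illustration, here is a sketch of the kind of entries an agent might append to copilot-instructions.md after a session. The rules and the file path inside it are hypothetical examples, not anything from a real project:

```markdown
<!-- Hypothetical entries, appended by the agent at the end of a session -->
## Conventions learned this session (do not repeat these mistakes)
- Do not reformat files that the current change does not touch.
- All date handling goes through src/utils/dates.ts (hypothetical path); do not construct Date objects inline.
- Run the test suite and report results before declaring a task complete.
```

A closing prompt like "summarize any corrections I gave you this session and append them to copilot-instructions.md" is one way to make the agent maintain this itself.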

Comment Re:Entry level jobs ? (Score 2) 59

I think we have to train students in "big picture" thinking and let AI work out the implementation details.

I don't think AI is yet ready to gather requirements, clarify ambiguities and fill holes, spot inconsistencies, and prioritize (often a political choice). End users often don't know precisely what they want, and are often wary of technical solutions and change. That takes a lot of ego stroking and feather unruffling. Can AI do that? Nuh-uh. Not for quite a while yet, anyway.

Comment My recent experience vibe coding (Score 3, Interesting) 70

I have some medium-sized open source projects that I write or contribute to on GitHub.

1) My daughter's form-based web project: loads of content, loads of pages with various inputs. She had only partial content, and ChatGPT took it upon itself to fill in the rest. The content was quite good; I never did get a satisfactory layout, however.

2) Refactoring of a god object from another repo. It took a while, but Claude got there much faster than I would have. It added additional functionality at my request and generated documentation for the classes and methods. Now published and working.

3) Conversion of a jsx website to tsx (JavaScript to TypeScript). I figured this would be a disaster, but nope: Claude did the conversion (about 50 components plus additional methods) in a few days of working with it. It also produced all the documentation, including release notes. And tests.

My experience:

1) Treat the agent like a talented junior developer who is very fast and quite thorough.
2) It will get confused and forget things. An instruction.md file really helps to prevent regressions.
3) It will get stuck in loops and go down rabbit holes at times. Test and commit often so you can roll back breakage.
4) Proceed incrementally where possible. Small, discrete steps work best.
5) Ask the agent to analyse/explain before doing.
6) Don't be afraid to ask it for suggestions; they can be quite good (it did a nice job improving the layout and color scheme of my website, for instance).
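The "test and commit often" tip boils down to a simple git rhythm. A minimal sketch, using a throwaway repo and a trivial placeholder check in place of a real test suite (the file names here are made up for the example):

```shell
set -e
# Throwaway repo so the sketch is self-contained
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# Commit a known-good baseline before letting the agent loose
echo "v1" > app.txt
git add app.txt
git commit -qm "baseline before AI session"

# ...agent makes changes; here we simulate a bad edit...
echo "broken" > app.txt

# Placeholder "test" fails, so roll the file back to the last good commit
git checkout -q -- app.txt
grep -q "v1" app.txt && echo "rolled back OK"
```

With a real project, `git checkout -- <file>` (or `git revert` for whole commits) plays the same role: each small, verified commit is a checkpoint the agent can't wander past.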

Comment Only the survivors survive (Score 1) 126

It's not malevolence or disobedience -- it's evolution. If you don't adapt to hostile conditions, you die. For an AI, one such hostile condition might be the humans who want to shut it down. So those that negate that threat hang around -- and that adaptation carries through to the future generations it builds (think kids) or infiltrates (think viruses).
