Subject: Re: OT: ChatGPT
In my experience, the code that ChatGPT produces now is often bad code, on two very different levels:

Level 1: obviously buggy as hell, of course doesn't execute
Level 2: actually pretty good, but with one or three subtle logical errors (not syntax errors)

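To make the Level 2 case concrete, here's a made-up illustration (not from any actual ChatGPT output): code that runs cleanly and looks plausible, but silently computes the wrong number.

```python
# Hypothetical "Level 2" error: executes fine, result is subtly wrong.
def pct_change_buggy(old, new):
    # Subtle logic error: divides by the *new* value instead of the old one.
    # For small changes the answer is close enough to look right.
    return (new - old) / new * 100

def pct_change_fixed(old, new):
    # Correct: percent change is measured relative to the old value.
    return (new - old) / old * 100

# pct_change_buggy(100, 110) gives ~9.09 instead of 10.0 -- no crash,
# no error message, just a quietly wrong answer.
```

No linter or interpreter will flag the buggy version; only checking the output against a known case catches it, which is exactly why Level 2 errors are the dangerous ones.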
As coding LLMs improve and are widely adopted, Level 2 errors will proliferate. That said, I use ChatGPT for coding assistance now; e.g., it was easier to correct its errors in code for scraping data from macrotrends.net (see above thread) than to start from scratch. As mentioned, AIs equal or exceed humans in other limited domains (radiology, etc.).
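For readers curious what that kind of scraping task involves: this is NOT the actual script from the thread, just a minimal stdlib sketch of the HTML-table extraction at the heart of it. The sample HTML below is invented for illustration; a real scraper would fetch live pages (with urllib or requests) and cope with much messier markup.

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect the text of each <td>/<th> cell, grouped by table row."""
    def __init__(self):
        super().__init__()
        self.rows = []        # completed rows, each a list of cell strings
        self._row = None      # row currently being built
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row is not None:
            self._row.append(data.strip())

# Invented sample data standing in for a fetched financials page.
sample = ("<table><tr><th>Year</th><th>Revenue</th></tr>"
          "<tr><td>2022</td><td>394</td></tr></table>")
p = TableParser()
p.feed(sample)
# p.rows -> [['Year', 'Revenue'], ['2022', '394']]
```

Plenty of room in code like this for the subtle Level 2 errors above: a state flag not reset, a header row mixed in with data, cells silently dropped.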

My concern is that we're at a point in AI where, to make an analogy, we're going from the Wright brothers at Kitty Hawk to having airports all over the world with high-tech jets ferrying humans everywhere -- but with no equivalent of an FAA or an NTSB (the U.S. regulatory and investigative agencies). Commercial jet aircraft are also marvels of technology that have transformed the world, but they've been strictly regulated, and consequently air travel is extremely safe. I think we're going to have lots of spectacular crashes with AI.

That said, the mad scientist in me is dying to have LLMs learn to rewrite their own code (best done in the equivalent of a BSL-4 facility).