Subject: Re: Using AI to generate backtesting programs
I assume you know Python and can double-check the generated Python code.

I use R and have been using perplexity.ai a lot recently to generate code from plain English prompts.

FWIW, my take-aways:
1) It's often very good at generating logically correct, executable code from plain English prompts for small, well-defined tasks.
I know this because I always double-check it, and it's usually very good.
But 'usually' isn't perfect; it still needs double-checking.

2) It quickly becomes marginal at generating logically correct, executable code from plain English prompts for somewhat larger tasks. Even if the code executes, it can contain subtle logical errors (the worst kind!); see the sketch after this list.
It always needs double-checking.

3) It is very good at reading my old code and cleaning it up.
Again, I double-check it, but it does an excellent job of cleaning up existing code.
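
To make 2) concrete, here is a hypothetical Python sketch of the sort of subtle logical error I mean in a backtesting context. The simulated prices, the moving-average crossover rule, and the specific bug (look-ahead bias from trading on the same bar the signal is computed on) are my own illustration, not actual perplexity.ai output.

# Hypothetical sketch of a look-ahead bias bug; simulated data, not AI output.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

# Small, well-defined task: a moving-average crossover signal (1 = long, 0 = flat).
fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
signal = (fast > slow).astype(int)

returns = prices.pct_change()

# Subtle bug: applying today's signal to today's return uses information
# that is only known at the close, after the position would have been taken.
biased = (signal * returns).sum()

# Fix: trade on the next bar, after the signal is known.
unbiased = (signal.shift(1) * returns).sum()

print(f"with look-ahead bias:    {biased:.4f}")
print(f"without look-ahead bias: {unbiased:.4f}")

Both versions execute without error, which is exactly why this kind of mistake slips through if you don't check the logic.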

SUMMARY:
For small, well-defined tasks it usually produces logically and syntactically correct code. It has definitely increased my productivity: it's much easier to check small code chunks than to produce them ab initio myself. It's also nice to have something come in and clean up my code (see the sketch below for the kind of cleanup I mean). Checking the resulting code is essential: it's pretty good, but "pretty good" isn't good enough.
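
For what it's worth, here is a hypothetical before/after of the kind of cleanup I mean; both the loop-based "old" function and the simplified rewrite are my own illustration, not actual perplexity.ai output.

# Hypothetical before/after cleanup example; not actual AI output.
import pandas as pd

prices = pd.Series([100.0, 101.5, 99.8, 102.3, 103.1])

# "Before": loop-based cumulative return, the sort of old code I mean.
def cum_return_loop(p):
    total = 1.0
    for i in range(1, len(p)):
        total *= p.iloc[i] / p.iloc[i - 1]
    return total - 1.0

# "After": the cleaner equivalent (the product of ratios telescopes).
def cum_return_clean(p):
    return p.iloc[-1] / p.iloc[0] - 1.0

# The two agree, which is exactly the check I do on the cleaned-up code.
assert abs(cum_return_loop(prices) - cum_return_clean(prices)) < 1e-12
print(f"cumulative return: {cum_return_clean(prices):.4%}")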

CONCERN:
AIs are getting integrated into IDEs now, and if humans aren't carefully checking the generated code, then:
(1) Critical code can be generated that produces wrong results (perhaps due to a subtle logical error, or an edge case that was never checked). It may look "sort of right", but "sort of right" isn't good enough.
(2) Generated code is getting deposited into GitHub, which can pollute the code base used to train future AIs.

ANALYZING COMPANIES:
I've been doing a lot of the following recently with perplexity.ai:

"Examine the financial health of NVDIA over the past few years, it's present health, and it's prospective health as well as estimated returns for the next few years, using analysis of balance sheet, cash flow, and income statement, including any other information that you may have."

I don't take what perplexity.ai outputs as true, but the output provides a nice entry point for getting into the analysis yourself. You can also ask follow-up questions, e.g.
"What do you see as potential major problems?"
Always double-check important conclusions.

CONCLUSION:
I agree that it's startling how good they are, yet sobering that they aren't good enough.

A lot has been said about the dangers of AIs becoming smarter than us, somehow taking over or whatever.
Certainly they will have a place on the battlefield, where 'move fast and break things' is unfortunately the whole point.
But I think that there's now a very present danger with "AM", i.e., "Automated Mediocrity".
They're just not all that good yet, but they're everywhere.