I asked ChatGPT to write some Python code to manipulate ms/excel spreadsheets, and it quickly wrote code. No surprise there.
The code didn’t work. Still no surprise.
After a few iterations, it still doesn’t work. Perhaps a little surprising.
Then it gave me instrumentation (print statements) to add to the code, asked me to run the code, and report back the results. That surprised me.
It iterated a few times by giving me new instrumentation to add and run. Now I am stunned.
The code still didn’t work. I told it that a particular cell it hadn’t asked me to print out was blank in the spreadsheet and being treated as a NaN (Not a Number) in the program, along with several other observations, and ultimately (after it re-introduced several different unrelated bugs, some of them more than once), the code worked and is happily running on real data.
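The blank-cell-as-NaN issue is a classic trap: the post doesn't say which library was in use, but assuming a typical pandas- or openpyxl-style reader, a blank cell comes back as a floating-point NaN, and NaN never compares equal to anything, not even itself. A minimal sketch of why the naive checks fail:

```python
import math

# Stand-in for a blank spreadsheet cell as many readers return it (assumption:
# a pandas/openpyxl-style library that maps empty cells to float NaN).
value = float("nan")

print(value == value)     # False: NaN is never equal to anything, even itself
print(value == "")        # False: a blank cell is not an empty string either
print(math.isnan(value))  # True: the reliable way to detect it
```

So an `if cell == "":` guard silently does nothing, which is exactly the kind of bug that survives several rounds of instrumentation.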
While I suppose that structurally the instrumentation interaction isn’t that different from other interactions I have had with getting it to write English text (alternate endings for Broadway musicals), it felt fundamentally different.
I really got the feeling I was collaboratively problem-solving with a peer rather than handing grunt work to a junior assistant. While that in and of itself is the beginning of it feeling like a person, it isn’t enough for me to feel it.
I know that ChatGPT is neither conscious nor alive. But this level of “cooperation” makes it almost easy for me to forget.
And very easy to imagine it being a tiny step from here for these tools to be experienced as true partners. Not just programming partners but even business or romantic partners. A week ago, I would have expected that to be years away. Now, well, maybe next month(?)
How long before we have relationship coaches for use with our AI collaborators? Is there a Susan Calvin in our near future?