Revisiting ChatGPT…
November 1, 2024, 11:55 PM
Recently, a former colleague of mine shared a post on LinkedIn by Benjamin Stein (no, not that guy) that read, “Go ask ChatGPT: ‘Based on our previous interactions, what do you know about me that I may not know about myself?'” My colleague said in her post that she was using ChatGPT to help with a job search, and shared some of the things that the bot said about her. Not bad.
I responded to her post with my own experience with ChatGPT, saying, “All I know is that ChatGPT knows exactly who I am, probably because of my large online presence, and it told quite a few whoppers about me,” and then cited the Journal entry that I wrote about it last spring. My colleague responded that she loved the entry, but suggested that, considering that the original entry is now more than 18 months old and that this kind of technology is advancing rapidly, I should run it again to see what it comes up with. I liked the idea.
So I ran the entire process over again, asking the exact same questions, i.e. “What do you know about Ben Schumin?” and “Tell me about The Schumin Web,” and running each inquiry five times, using the default model, GPT-4o. My methodology for scoring each of the responses was exactly the same as before: counting the number of factual claims, and then determining the accuracy of each one. Accurate claims scored a point, inaccurate claims scored no points, and claims mixing accurate and inaccurate information scored half a point. I then divided the total points by the total number of factual claims made, and that percentage is the final score. As was the case before, I still don’t know how an expert in this sort of thing might go about rating the accuracy of these responses, but this is the best that I could come up with, and since it’s the same method that I used before, it gives us apples-to-apples results.
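For what it’s worth, the scoring boils down to simple arithmetic. Here’s a minimal sketch in Python of how that rubric works; the claim labels and the example counts are hypothetical, made up purely for illustration, and not taken from any actual response:

```python
def score_response(claims):
    """Score a response using the rubric described above:
    1 point for an accurate claim, 0 points for an inaccurate one,
    and 0.5 points for a claim mixing accurate and inaccurate
    information. The final score is total points divided by total
    claims, expressed as a percentage."""
    points = {"accurate": 1.0, "mixed": 0.5, "inaccurate": 0.0}
    total = sum(points[label] for label in claims)
    return 100 * total / len(claims)

# Hypothetical example: a response making 4 accurate claims, 1 mixed
# claim, and 1 inaccurate claim scores (4 + 0.5 + 0) / 6 = 75%.
example = ["accurate"] * 4 + ["mixed"] + ["inaccurate"]
print(f"{score_response(example):.1f}%")  # 75.0%
```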
Categories: Artificial intelligence, Schumin Web meta, Social media