Dark pine trees against a mountainous grey background.

Whatever the Wind Brings

ChatGPT now gets equations right, but it still doesn't know math

Some time ago, I wrote this very long piece about the so-called "AI" and why it isn't intelligent, explaining in detail why it will never replace humans in creative fields and will never reach AGI. One of the arguments I made used math as proof of my point: if an LLM that has absorbed all the knowledge of the human race on the internet still doesn't know how to do basic math, it means it's incapable of learning, since the dataset should, supposedly, contain thousands of books in every language explaining math.

I usually make this argument because math is simple (well, kind of): it has objective results you can check and double-check, and, most of the time, it doesn't rely on subjective interpretation. If a machine can't get a multiplication right, it will never completely master human language, which requires understanding meaning, subjectivity, and tons of other undefined variables regarding context. The article linked above goes into detail about this, using studies and other articles as the basis for my argument.

Well, the other day, I decided to do a test and see how things are going with the GPT, since I saw people saying they use it to solve math problems. And, to my surprise, the LLM now gives correct results! BUT it only does so because, when it detects a math equation, it runs a Python script on it. Meaning it's just an unnecessary layer between a human and a proper calculator.
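To make that point concrete, here's a minimal sketch of what that delegation amounts to, assuming the model simply passes the expression to a Python interpreter and wraps the returned number in a sentence. The real plumbing inside ChatGPT isn't public, and the function below is purely illustrative:

```python
# Purely illustrative sketch: the assumption is that the LLM, upon spotting
# an equation, hands the expression to a Python interpreter and wraps the
# result in a sentence. The function name and mechanism are hypothetical;
# ChatGPT's actual tool pipeline isn't public.
def answer_math_question(expression: str) -> str:
    # The model computes nothing itself; the interpreter does the actual
    # arithmetic, exactly like a calculator would.
    result = eval(expression, {"__builtins__": {}}, {})
    return f"The answer is {result}."

print(answer_math_question("123456 * 789"))  # -> The answer is 97406784.
```

Strip away the generated sentence around the number and what's left is just a calculator call.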

ChatGPT still doesn't know math. It still doesn't understand concepts like numbers and operations. Everything I said in my original article is still correct, as it uses another piece of technology to make the calculations for it. What gets me is that people who don't understand technology will think the good ol' GPT is now smarter, when it's just a trick. It's all smoke and mirrors, a different version of the Mechanical Turk.

I'm a translator, so I've been dealing with technology that mimics human writing for quite a while now. As I also said in my previous article, I don't feel threatened by LLMs, but by people making decisions about stuff they don't understand and pushing LLMs into my job. Every multimedia translator I know is against this tech because it doesn't make our jobs easier or faster, but companies still want to pay less for our work anyway.

And, as always, whenever something goes wrong, "it's the translator's fault", not the machine's, and I'm getting really tired of that.

#"ai" #llm #rambling