ChatGPT graduated from high school to college-level IQ: study





















Call it pomp and circuitry.








New research out of UCLA finds that ChatGPT “performs about as well as college undergraduates” when it comes to reasoning questions that often show up on standardized testing.





The large language model has proverbially graduated to higher education since late 2022, when its “clean” writing style was described to The Post as that of “a very smart 12th-grader” by Furman University assistant philosophy professor Darren Hick.





UCLA’s study tested GPT-3 against 40 undergrad Bruins in Southern California (a more advanced model, GPT-4, is now available and outdid its predecessor in parts of the research).






GPT-3 answered about 80% of IQ questions based on Raven’s Progressive Matrices correctly, putting it “well within the range of the highest human scores,” while the average human participant got only about 60% right.





New UCLA research found that ChatGPT outdid many human undergrads on an IQ test.







Last January, GPT-3 was given a Wharton School MBA exam as well, scoring in the B to B- range.





Weeks before, Hick said he felt “abject terror” about how programs like ChatGPT could interfere in academia after he caught students allegedly using the program to complete assignments.





At the time, his greatest fear was that the AI would continue learning from its mistakes to the point where professors could no longer tell the difference between bot and human work.






The newest UCLA findings only further such a worry.





“Surprisingly, not only did GPT-3 do about as well as humans but it made similar mistakes as well,” said senior study author and UCLA psychology professor Hongjing Lu.





A ChatGPT model performed surprisingly like people when tested, according to UCLA research.







Co-author Keith Holyoak even said “GPT-3 might be kind of thinking like a human.”





GPT-3 especially succeeded at analogical reasoning, a problem-solving skill long believed to be exclusive to humans, which relies on rational thought and logical comparisons.






“Language learning models are just trying to do word prediction so we’re surprised they can do reasoning,” Lu added.





Still, these findings shouldn’t come as too much of a shock, since people are responsible for GPT’s reinforcement learning from human feedback, the process of using human ratings to refine and improve the model’s responses.





As a failsafe, the program was also tasked with answering SAT analogy questions that had never been published, ruling out the possibility that such questions were part of the model’s training data.






There is new research showing that AI can outdo prestigious college students in testing.




“They compared GPT-3’s scores to published results of college applicants’ SAT scores and found that the AI performed better than the average score for the humans,” according to a release on the study.





Next, researchers want to better understand how AI language models learn and improve their so-called IQs, as much of that process remains a mystery to the untrained public.








“People did not learn by ingesting the entire internet, so the training method is completely different [than people],” Holyoak said. “We’d like to know if it’s really doing it the way people do, or if it’s something brand new — a real artificial intelligence — which would be amazing in its own right.”

































