Sal Khan, the founder of the non-profit educational organization Khan Academy, said that early in the development of the popular GPT-4 language model, the system was spitting out inaccurate calculations. Khan and his team, who got an early look at the next-gen AI system, tried to find workarounds, but after they shared the problem with OpenAI, the GPT-4 developers discovered that the language model's training data had poorly labeled math.
The issue was resolved, and Khan said GPT-4 is much better at math now, even though it doesn't have a calculator natively programmed into the system. It's an interesting tidbit from the behind-the-scenes development of the much-hyped large language model, especially since few have had access to GPT-4's development process or training data. Khan is an admitted skeptic of the AI boom, yet he's at the center of it, with his nonprofit now tied into the core of Silicon Valley.
Khan Academy and its new Khanmigo AI learning platform were among the few big projects that OpenAI touted with the release of its new LLM. Khan said Khanmigo is the team's first step toward creating a sort of all-in-one learning and tutoring platform. But unlike so many companies embedding AI into their products to ride the hype, Khan isn't trying to wow anyone. In an interview with Gizmodo, he shared both his excitement and his qualms about AI. In his mind, AI may be one of the few ways to stop people from abusing AI itself.
Khan said that nearly six months ago, before ChatGPT saw its initial release, OpenAI CEO Sam Altman and President Greg Brockman approached his nonprofit. They were looking for a few companies to partner with on certain "socially positive use cases," and specifically, the OpenAI team wanted to make their AI capable of passing traditional standardized tests like the SAT.
Although initially skeptical, Khan said "my mind was blown" once he saw the full capabilities of OpenAI's latest language model. He said he started thinking about how an AI could act as a democratized tutor or teaching assistant. The Khanmigo AI is currently limited to select users, though a waiting list is available.
Khan said he and his team wanted to take a more thoughtful approach than others in Silicon Valley, one in which people using the program know exactly what they're getting, including the potential for harm. Big tech companies like Meta, Microsoft, and Google are racing to see who can add more AI to their products fastest, all while telling users not to fully trust the results. A Microsoft executive recently said, "Sometimes [the AI] will get it right, and other times it will be usefully wrong."
"Think about what Tesla did," Khan said. "When they came out with self-driving cars, people paid for the privilege of testing something that could send you crashing into a wall at 80 miles an hour."
Khanmigo is divided into teacher and student activities. If a student asks the algebra program to answer a simple problem like "3x + 7(x - 4) = 5," the AI will first ask the student to break the problem down into steps, starting by simplifying the expression on the left, and so on. Other activities aim to "ignite your curiosity" about topics like American history. An AP Psychology practice exam asked who the "father of modern psychology" is, and although most people would assume Sigmund Freud, the system answers flatly that it is in fact Wilhelm Wundt, the first to establish a psychology laboratory, at the University of Leipzig in the late 19th century.
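For readers curious about the sample algebra problem above: working it through by hand follows the same step-by-step path the article describes. The sketch below is an illustration of that manual process, not Khanmigo's actual output.

```python
from fractions import Fraction

# Solve 3x + 7(x - 4) = 5 the step-by-step way a tutor would:
# Step 1: distribute the 7      -> 3x + 7x - 28 = 5
# Step 2: combine like terms    -> 10x - 28 = 5
# Step 3: add 28 to both sides  -> 10x = 33
# Step 4: divide by 10          -> x = 33/10
x = Fraction(33, 10)

# Check the answer by substituting back into the original equation.
assert 3 * x + 7 * (x - 4) == 5
print(x)  # 33/10, i.e. x = 3.3
```

Using exact fractions rather than floats keeps the check free of rounding error, which is why the final verification holds exactly.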
Ultimately, Khan said he envisions an AI-based system that functions as an all-in-one teaching and learning tool. An educator could ask their class to hop on their laptops and use the AI to help them write an essay. If a student goes off on their own and has another AI, like ChatGPT, write the essay for them, a teacher could tell from the chat logs that the student didn't do the work they were supposed to. That could go a long way toward easing lingering fears about students using AI to cheat in class.
Khan said his system has an extra layer of checks for science and math questions. When the AI gets an answer wrong or misunderstands a question, users are supposed to give it a thumbs down.
And will it still get things wrong? Rarely, but OpenAI has acknowledged the system will sometimes make mistakes. That's a problem, but is it any more or less accurate than a Google search can be? Khan thinks the hardest part will be continuing to refine the model while also convincing people to stay skeptical and not treat the AI as an "authoritative" source.
Want to learn more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to the best free AI art generators and everything we know about OpenAI's ChatGPT.