OpenAI CEO Sam Altman has some clear words about the GPT-4 hype. He also cautions the education system against relying on AI text detectors.
Altman calls the rumor mill surrounding GPT-4 a “ridiculous thing”. Rumors of gigantic model sizes are circulating on Twitter and elsewhere, and Altman has “no idea where it all came from.”
OpenAI co-founder and chief scientist Ilya Sutskever put it more vividly, posting on Twitter a parody of an image that has been circulating for months, which shows a supposedly gigantic GPT-4 model next to a small GPT-3 model and is meant to suggest a huge leap in performance. The figures, however, came out of thin air, or “complete bullshit,” as Altman calls it.
GPT-4 will soon be launched.
And it will make ChatGPT look like a toy…
→ GPT-3 has 175 billion parameters
→ GPT-4 has 100 trillion parameters
I think we are going to see something absolutely breathtaking this time!
And the best part? 👇 pic.twitter.com/FAB5gFjveb
—Simon Hoiberg (@SimonHoiberg) January 11, 2023
Moreover, such figures describe an AI model’s performance only by its parameter count. Back in September 2021, Altman hinted that GPT-4 might differ from GPT-3 in efficiency and data quality rather than in sheer number of parameters.
Models such as Chinchilla or Sparse Luminous Base show that language systems with fewer parameters, a more efficient architecture, and more training data can still perform well.
“People are just begging to be disappointed – and they will be,” Altman says of the possible expectation that OpenAI’s GPT-4 will already have general AI capabilities.
Altman believes in the diversity of values in language models
OpenAI, like many other companies, is very concerned about the safety of large language models, including the factual accuracy of their output and the values they express. For example, they must not generate hate speech.
Altman expects that in the future there will be a variety of these models with different values, from completely safe to more “quirky” and “creative” ones. The next step, according to Altman, would be for users to give the AI system specifications for how it should behave and which values it should adopt.
Altman would not comment further on Google’s statements that chatbots are not yet secure enough to be widely deployed. However, he said he hopes reporters will call Google out on that statement if the company releases a product anyway. The search company is reportedly planning chatbot research with a focus on safety, as well as up to 20 AI products, for 2023.
The education system should not rely on AI text detectors
Language models such as ChatGPT or GPT-3 give students an easy way to automate homework and essays. The tools help them write faster, or can even generate texts entirely on their own.
The technology is therefore controversial in education: should its use be encouraged and learners empowered, or is it better to ban AI systems?
“I understand why educators feel the way they feel about this. […] We will try to do some things in the short term, and there may be ways to help teachers, or anyone else, be a little more likely to detect the output of a GPT-like system,” says Altman.
However, Altman added, no one should rely on such detectors in the long term, because a “determined person” will find ways around them. According to Altman, they matter for the transition phase, but it is impossible to develop perfect detectors.
OpenAI is working on watermarking technology for its language models. But such systems could become useless within months of release, Altman said.
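OpenAI has not disclosed how its watermarking works. As a rough illustration only, published research on language-model watermarking describes biasing generation toward a pseudo-randomly chosen “green” subset of the vocabulary, which a detector can later count; the sketch below assumes that approach, and every name in it is hypothetical.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Derive a pseudo-random vocabulary partition from the preceding token.
    # A watermarking generator would nudge sampling toward "green" tokens.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all tokens count as "green"

def green_fraction(tokens: list[str]) -> float:
    # Detector side: measure how often the text landed on green tokens.
    # Unwatermarked text should score near 0.5; watermarked text higher.
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

The weakness Altman points to is visible even in this toy: paraphrasing or swapping a few tokens shifts the green fraction back toward chance, so a determined person can wash the signal out.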
“We live in a new world where we have to adapt to generated text. That’s good,” Altman says, drawing a comparison to the advent of calculators. Language models are “more extreme,” he says, but also offer “extreme benefits.”