One day last week, Tudor Achim asked an artificial intelligence bot named Aristotle to solve a logic puzzle. The puzzle described a 10-by-10 grid in which a hundred numbers are hidden. If you take the smallest number in each row and the largest number in each column, the question asked, can the largest of the small numbers ever be greater than the smallest of the large numbers? The bot answered “No,” which is correct. But that alone was not surprising; an ordinary chatbot, one would like to think, might give the right answer too. The difference was that Aristotle also supplied a proof that its answer was correct: it generated a computer program demonstrating that “No” was indeed the right response.
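The article does not reproduce Aristotle's proof, but the fact behind its “No” is the classical max-min inequality. A brief sketch of the reasoning, offered here as an illustration rather than the bot's actual output: write the hidden numbers as $A_{i,j}$, with $i$ indexing rows and $j$ indexing columns. For any row $i$ and any column $j$,

\[ \min_{j'} A_{i,j'} \;\le\; A_{i,j} \;\le\; \max_{i'} A_{i',j}. \]

Since this holds for every pair of row and column, every row minimum is at most every column maximum, and so the largest of the row minimums can never exceed the smallest of the column maximums:

\[ \max_i \min_{j'} A_{i,j'} \;\le\; \min_j \max_{i'} A_{i',j}. \]

That is exactly why the answer to the puzzle is “No.”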
Chatbots such as ChatGPT and Gemini can answer questions, write poetry, summarize news articles and create images. But they also let you down from time to time. In some cases, they make things up entirely, a phenomenon known as hallucination.
Achim, the chief executive and a co-founder of a Silicon Valley startup called Harmonic, is among the researchers developing a new generation of AI systems that do not hallucinate. For now, the technology is confined to mathematics, but many researchers believe the same techniques can eventually extend to computer programming and other fields. Because math is a rigid discipline with formal rules for what counts as a correct proof, companies such as Harmonic can build AI systems that check and verify their own answers.
Some researchers believe such an AI could eventually become better at mathematics than any human. That is the goal Achim and his co-founder, Vlad Tenev, are working toward; their company, Harmonic, has raised $75 million from Sequoia Capital and other investors. Others believe these techniques can stretch even further, producing systems that verify facts about the real world, not just mathematical statements.
A system that checks its own answers could also become a way of generating enormous amounts of trustworthy information for training other machines. This is what researchers call “synthetic data”: data created by AI to train other AI. Many experts expect this idea to be important to AI's continued development. Achim and Tenev believe that after years of training, Aristotle will surpass any human at math. “We want it to solve problems that have never been solved,” Tenev says.
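One way to picture the synthetic-data loop described above, as a rough sketch rather than Harmonic's actual pipeline: a model proposes candidate solutions, an automatic checker accepts only those it can verify, and the verified pairs are kept as new training examples. The generate_candidate and verify functions below are hypothetical placeholders for a real model and a real formal checker.

    import random

    def generate_candidate(problem: str) -> str:
        # Stand-in for a model proposing a solution; a real system would call an AI model here.
        return f"candidate answer {random.randint(0, 9)} for {problem}"

    def verify(problem: str, candidate: str) -> bool:
        # Stand-in for a formal checker (for example, a proof assistant) that only
        # accepts solutions it can mechanically confirm.
        return problem in candidate

    def build_synthetic_dataset(problems, attempts=5):
        dataset = []
        for problem in problems:
            for _ in range(attempts):
                candidate = generate_candidate(problem)
                if verify(problem, candidate):  # only verified answers become training data
                    dataset.append({"problem": problem, "solution": candidate})
                    break
        return dataset

    print(build_synthetic_dataset(["sum of the first 100 integers"]))

Because everything that enters the dataset has passed the checker, the data can be trusted even though a machine, not a person, produced it; that is the appeal of the approach described above.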