Google has unveiled a major new update to its AI chatbot Bard that significantly improves its powers of logic and reasoning.
The latest version of the ChatGPT rival is now capable of both writing and executing code by itself, the tech giant announced, allowing it to figure out problems on a far deeper level than current generative AI systems.
Google’s artificial intelligence tool performs these tasks through a technique called “implicit code execution”, which enables Bard to detect computational prompts and run code in the background.
The result is that Bard should, in theory, respond more accurately to mathematical tasks and coding questions, as it will already have tested the outcomes it proposes.
Until now, large language models (LLMs) like ChatGPT and Bard have been better suited to language and creative tasks, as they draw from their training data to predict what word will come next when talking about a specific subject.
This allows them to produce text quickly but without deep thought, making them weaker when it comes to areas like reasoning and mathematics.
“Our new method allows Bard to generate and execute code to boost its reasoning and maths abilities,” Google wrote in a blog post on Wednesday.
“With the latest update, we’ve combined the capabilities of both LLMs and traditional code to improve accuracy in Bard’s responses. Through implicit code execution, Bard identifies prompts that might benefit from logical code, writes it ‘under the hood,’ executes it and uses the result to generate a more accurate response.”
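Google has not published the implementation details, but the idea described in the quote above can be sketched in a few lines of Python. Everything here is illustrative: the prompt-detection heuristic, the `generate_code` helper and the hard-coded expression extraction are assumptions standing in for what an LLM would do, not Bard’s actual method.

```python
import re


def looks_computational(prompt: str) -> bool:
    """Heuristic stand-in for Bard's prompt detection: does the
    prompt contain an arithmetic expression?"""
    return bool(re.search(r"\d+\s*[-+*/]\s*\d+", prompt))


def generate_code(prompt: str) -> str:
    """In the real system an LLM would write this code; here we
    simply extract the arithmetic expression from the prompt."""
    match = re.search(r"(\d+(?:\s*[-+*/]\s*\d+)+)", prompt)
    return f"result = {match.group(1)}"


def answer(prompt: str) -> str:
    if looks_computational(prompt):
        code = generate_code(prompt)
        scope: dict = {}
        exec(code, {}, scope)  # run the generated code "under the hood"
        return f"The answer is {scope['result']}."
    # Non-computational prompts would fall back to ordinary LLM generation.
    return "(free-text LLM response)"


print(answer("What is 17 * 24 + 5?"))  # The answer is 413.
```

The point of the pattern is the last step: the numeric result comes from executed code rather than from next-word prediction, which is why it can raise accuracy on maths questions.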
The new method improved Bard’s accuracy for coding and maths problems by roughly 30 per cent during internal tests, Google claimed.
Accuracy remains one of the biggest issues with AI chatbots, with Google warning that despite the upgrade, Bard “won’t always get it right”.
Unreliable or fabricated information generated by these AI tools is known as a hallucination, and such answers are typically delivered with a confidence that can make them even more misleading for the user.
ChatGPT creator OpenAI announced a potential new method to combat AI misinformation last month, involving two AI systems debating each other until they agree on the correct answer.
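OpenAI’s approach is only loosely described here, but the debate loop it implies can be sketched as follows. This is a toy sketch under a strong assumption: that “debating” simply means each model sees the other’s latest answer and retries until the two agree. The function names and the stub models are invented for illustration.

```python
from typing import Callable, Optional

Model = Callable[[str], str]


def debate(model_a: Model, model_b: Model, question: str,
           max_rounds: int = 5) -> Optional[str]:
    """Query two models, sharing each other's answers each round,
    until they give the same answer or the round limit is hit."""
    context = question
    for _ in range(max_rounds):
        a, b = model_a(context), model_b(context)
        if a == b:
            return a  # consensus reached
        # Show each model what the other said, then try again.
        context = f"{question}\nA said: {a}\nB said: {b}"
    return None  # no agreement within the round limit


# Stub models: B initially disagrees, then adopts A's answer once it sees it.
def model_a(ctx: str) -> str:
    return "413"


def model_b(ctx: str) -> str:
    return "413" if "A said" in ctx else "400"


print(debate(model_a, model_b, "What is 17 * 24 + 5?"))  # 413
```

A real system would need a judge or a tie-break rule for the no-consensus case; returning `None` here just makes that failure mode explicit.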