
Google may soon roll out AI ‘personal life coach’

2023-08-18 12:58

Google is reportedly planning to roll out a new artificial intelligence tool that provides “life advice” and acts as a “personal life coach”, alongside other AI tools designed to perform tasks such as writing and tutoring.

The tools under development are reportedly part of the tech giant’s effort to advance its research on generative AI systems such as ChatGPT as it competes with rivals including Microsoft and OpenAI.

Google’s AI teams are testing ways to turn the technology behind chatbots such as OpenAI’s ChatGPT and the company’s own Bard into a personal life coach that offers advice on topics ranging from career decisions to relationship troubles, the New York Times first reported.

The tech giant has reportedly teamed up with the AI training company Scale AI to evaluate the new “life coach” chatbot.

Over 100 experts with doctoral degrees in various fields are also testing the bot rigorously, according to the New York Times.

Since the surge in popularity of OpenAI’s ChatGPT, many tech companies, including Google, Facebook, and Snapchat, have attempted to develop their own generative AI systems to interact with users more naturally and offer human-like responses to queries.

However, some of these AI tools have raised concerns over the accuracy of their responses as well as over user privacy.

Experts have also flagged multiple instances of chatbots inventing facts, a phenomenon widely known as “AI hallucination” that many say may not be fixable.

In one instance, an American non-profit supporting people with eating disorders was forced to take down its AI chatbot after it was found to be offering harmful advice rather than helping users.

AI experts continue to warn that while such chatbots are adept at giving convincing answers, they can often provide information that is not factually accurate.

Google’s latest attempt to use AI to offer personalised life advice runs counter to its current guidelines for the Bard chatbot, which warn users not to rely on the tool’s responses for “medical, legal, financial, or other professional advice.”

Bard’s guidelines also warn users not to include “confidential or sensitive information” in their conversations with the chatbot.

