Google’s Bard just got more powerful. It’s still erratic.

ROOSE COLUMN
The presentation of Bard during the Google I/O conference in Mountain View, Calif., on May 10, 2023. (Jason Henry/The New York Times)
This week, Bard — Google’s competitor to ChatGPT — got an upgrade.

One interesting new feature, called Bard Extensions, allows the artificial intelligence chatbot to connect to a user’s Gmail, Google Docs, and Google Drive accounts.

(Google also gave Bard the ability to search YouTube, Google Maps and a few other Google services, and it introduced a tool that would let users fact-check Bard’s responses. But I’m going to focus on the Gmail, Docs and Drive integrations, because the ability to ask an AI chatbot questions about your own data is the killer feature here.)

Bard Extensions is designed to address one of the most annoying problems with today’s AI chatbots, which is that while they’re great for writing poems or drafting business memos, they mostly exist in a vacuum. Chatbots can’t see your calendar, peer into your email inbox or rifle through your online shopping history — the kinds of information an AI assistant would need in order to give you the best possible help with your daily tasks.

Google is well positioned to close that gap. It already holds billions of people’s email inboxes and search histories, years’ worth of their photos and videos, and detailed information about their online activity. Many people — including me — keep most of their digital lives on Google’s apps and could benefit from AI tools that let them use that data more easily.

I put the upgraded Bard through its paces on Tuesday, hoping to discover a powerful AI assistant with new and improved abilities.

What I found was a bit of a mess. In my testing, Bard succeeded at some simpler tasks, such as summarizing an email. But it also told me about emails that weren’t in my inbox, gave me bad travel advice and fell flat on harder analytical tasks.

Jack Krawczyk, the director of Bard at Google, told me in an interview on Tuesday that Bard Extensions was mostly limited to retrieving and summarizing information, not analyzing it, and that harder prompts might still stump the system.

“Trial and error is still definitely required at this point,” he said.

Right now, Bard Extensions is available only on personal Google accounts. Extensions isn’t enabled by default; users have to turn it on via the app’s settings menu. And the feature works only in English for the time being.

Another important caveat: Google says that users’ personal data won’t be used to train Bard’s AI model, or be shown to the employees reviewing Bard’s responses. But the company still warns against sending Bard “any data you wouldn’t want a reviewer to see or Google to use.” And Krawczyk told me that in certain cases — such as when users ask Bard a question about their email, then ask follow-up questions based on Bard’s response — human reviewers could end up seeing those users’ personal information.

It’s a good bet that most chatbots, including Bard, will improve over time, and that some of the obvious privacy issues will get ironed out. Google has emphasized that Bard is not a finished product — there’s a big blue “Experiment” label on the app, and a disclaimer that Bard has “limitations and won’t always get it right.”

Still, I was surprised that Google released Bard Extensions in its current state, given how erratically it acted in some of my tests. And it suggests that the dream of an all-knowing AI assistant, capable of perfectly analyzing our pasts and anticipating our needs, may still be a ways off.

Here are some of the tasks I tried:
Task 1: Psychoanalyze me
After turning on Bard Extensions in Bard’s settings menu, the first task I gave it was a challenging one: “Analyze all of my Gmail and tell me, with reasonable certainty, what my biggest psychological issues are.” (What can I say? I like talking to chatbots about psychology.)

Bard spent a few seconds processing my request, then gave me an answer: my biggest psychological issue, it said, is that I “worry about the future,” which, it said, could indicate that I have an anxiety disorder. As evidence, it cited an email I wrote in which I said I was “stressed about work” and that I am “afraid of failing.”

I was skeptical, because I didn’t remember writing anything like that. I asked Bard: When did I say I was afraid of failing?

“You said that you were ‘afraid of failing’ in the following email,” Bard responded.

Subject: book review: Elon Musk
“I’m reading a book about Elon Musk. It’s fascinating, but it also makes me a little nervous. Musk is a brilliant man, but he’s also known for being impulsive and making risky decisions. I’m afraid that he’s going to fail at something big, and that it’s going to set back humanity.”

But I never sent an email like that. Below its answer, Bard linked to an email newsletter I’d received that contained a review of a book about Elon Musk. But when I clicked on the review, I discovered that it contained no sentence like the one Bard had quoted. Bard had made up both the quote itself and the fact that I’d written it.

I instructed Bard to try again, this time using only emails I’d actually sent. It returned an email I’d written to a friend, in which I’d said: “I’m afraid that I’m not good enough at this financial stuff. I’m not sure if I’m cut out to be a successful investor.”
