Want to conduct better online research? Think of artificial intelligence as a librarian.
Today, many public libraries and universities are hiring “AI librarians” to help people make use of the latest search technologies.
Watching librarians at work, it’s clear they treat AI with the same healthy skepticism a cautious buyer might show toward a used-car salesman. A librarian is someone you can trust—someone who verifies and double-checks. Trevor Watkins, a librarian at George Mason University, explains: “The instant gratification of information—which I define as the urge to accept information without rigorously fact-checking it—is dangerous.”
Geoffrey A. Fowler of The Washington Post enlisted three professional librarians to help evaluate AI research tools—from ChatGPT 5 to Google’s AI—by asking them a set of tough questions.
Testing the Best AI Tools
Our goal was to find the kind of AI that could save us time without feeding us wrong answers. Here are five practical lessons from our tests and discussions:
1. Start with Google’s AI Mode—not Google’s AI Overviews.
The tool librarians picked as the winner in our AI research test isn’t most people’s first stop online—but it should be.
AI Mode is a hybrid of Google Search and a chatbot. You can access it via the tab at the upper-left corner of the results page.
AI Mode outperformed every tool we tested, including ChatGPT 5, and it handily beat Google’s AI Overviews, the summaries that appear by default at the top of most results. Why is it better? Google says it uses AI “reasoning” to break a query into subtopics, then runs dozens of searches simultaneously. It also taps into Google’s most up-to-date AI model upgrades.
It still struggles with some queries, but for needle-in-a-haystack research, its wide-scope searching is invaluable.
2. Be extremely specific.
Most AI tools aren’t yet “smart” enough to ask key follow-ups. Watkins notes this is where librarians shine compared to AI. With AI, “it’s up to you, the user, to reframe your question and provide more context.”
Chris Markman, librarian at Palo Alto City Library, explains that nine times out of ten, what you’re seeking lies two or three layers deeper than your initial question. Add examples, timeframes, or constraints. San Jose State librarian Sharsley Rodriguez suggests prompts like “discuss the opposing view,” “point out weaknesses in your reasoning,” or “what biases might be present here?”
3. Watch out for weaknesses.
All AI tools had blind spots:
Recent events: AI models are frozen in time. For example, without live web search, ChatGPT 5 knows nothing beyond September 30, 2024.
Visuals: AI still struggles with analyzing images.
Specialized sources: Access to academic or paid media is often limited due to publishing restrictions. Rodriguez notes, “Much scientific research is still not open or easily discoverable—even with search engines.” For certain queries, a human librarian with database access is better.
4. Double-check citations—seriously.
Sometimes AI provides answers with source links—but the content may still be wrong. Rodriguez warns: “It’s easy to be fooled when answers sound so confident.” Always click through, verify authors, institutions, and publication dates. AI often pulls from blogs, low-quality news, or SEO filler rather than peer-reviewed sources.
⚠️ Red flag: answers citing Reddit or X (formerly Twitter). Opinions are fine—but they’re not facts.
5. Ask the same question more than once.
If you have time, open multiple tabs and pose your query to different AI tools. The differences may surprise you—or reassure you when results align.
Markman emphasizes that having at least some subject knowledge helps you recognize wrong answers and avoid time-wasting rabbit holes. “You don’t need to know everything,” he says, “but you should know enough to spot mistakes.”
And when in doubt? “Ask a librarian,” says Watkins. Many public and academic libraries now offer “virtual reference” services—where users can pose questions to professional librarians (not chatbots) online.
Asharq Al-Awsat