After years of rapid growth and stock market breakthroughs since its initial surge, generative artificial intelligence (AI) is expected to face serious challenges in 2026.
Is the AI bubble about to burst?
The sector is clouded by anxiety and tension. In mid-November, several major investors, including Japan’s SoftBank and billionaire Peter Thiel, co-founder of PayPal and Palantir, announced that they had sold all their shares in NVIDIA, the American chipmaker. Does this move indicate that the AI bubble is on the verge of bursting?
Massive investments in the sector seem disproportionate to the profits achieved. Furthermore, tech giants and chip manufacturers invest in AI startups, which then sell them products and services in a fragile circular economic system—vulnerable to collapse if markets fluctuate. This recalls the early stages of the 2000 dot-com bubble.
In an interview with the BBC, Sundar Pichai, CEO of Alphabet (Google’s parent company), said: “No company, including us, would survive if the bubble bursts.”
Is the end of office jobs near?
Philip Jefferson, Vice Chair of the U.S. Federal Reserve, recently stated that “the AI phenomenon exists and is affecting how companies view their workforce.”
Some major tech companies that heavily invested in AI have cited new productivity gains as justification for laying off thousands of administrative employees.
However, experts differ on the extent and speed of AI’s impact on the job market.
While some experts believe these changes could be profound enough to require a universal basic income to maintain social stability, others expect a gradual shift. McKinsey estimates that about 30% of U.S. jobs will be automated by 2030, meaning they will be performed by machines or software rather than humans, while Gartner predicts that AI will create more jobs than it eliminates by 2027.
When will superintelligent AI become a reality?
When will humanity create AI capable of matching or even surpassing human abilities? Opinions vary among specialists.
Dario Amodei, co-founder and CEO of the American company Anthropic, predicts that artificial general intelligence (AGI) will exist by 2026. Meanwhile, Sam Altman, CEO of OpenAI, believes it will be possible to create AI capable of making scientific discoveries by 2028.
Meta has made superintelligent AI a priority, investing hundreds of millions to assemble a top-tier research team. However, Yann LeCun, the company’s chief AI scientist, who is preparing to leave, considers the idea of “creating geniuses in a data center” to be nonsense.
What future awaits the media?
In an interview with AFP, consultant David Caswell, who previously worked at Yahoo! and BBC News Labs, said: “Generative AI is driving the biggest transformation in the information ecosystem since the invention of the printing press.”
Traditional media are struggling due to chatbots and AI summaries from Google, which reuse their content without requiring users to visit news sites. This reduces website traffic and ad revenue.
Experts suggest potential solutions, such as turning media content into high-value premium products, as The Economist and Financial Times have done, applying content-blocking technologies, suing AI companies, or partnering with them, as The New York Times did with Amazon or Mistral did with AFP.
Will low-quality content continue to spread?
Despite exaggerated claims that AI can address climate issues or improve cancer detection, its most visible impact in daily life is the proliferation of low-quality images and videos generated by AI tools.
From bears jumping on trampolines on Instagram to scenes of exploding cities on TikTok, this fake content, often mistaken for real footage, contributes to confusion and misinformation.
Although platforms have implemented measures to classify, monitor, and remove AI-generated content, the overwhelming volume of such content appears unstoppable.
Sky News Arabia