Google has announced a major update to its AI assistant, Gemini, moving beyond traditional text and visual responses into the realms of "Spatial Intelligence" and "Functional Computing." This update allows users to generate 3D models and live physical simulations directly within the chat interface.
According to reports from Android Headlines and other tech sources, the core of this update is interactivity. Instead of merely explaining physical laws or chemical structures via text, Gemini now builds a miniature virtual environment.
For example, if a user asks about "the Moon's orbit around the Earth," the AI will not just provide a description but will render a 3D model that can be rotated and zoomed. It also includes control sliders to modify variables such as speed or gravity, allowing users to see the immediate effect on the simulation.
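The article does not describe how Gemini implements these simulations internally, but the idea of a "gravity slider" driving an orbit can be illustrated with a minimal two-body sketch: the slider would simply map to the gravitational parameter fed into the integrator. (The function name `simulate_orbit` and all constants here are illustrative, not from Google.)

```python
import math

def simulate_orbit(mu, r0, v0, dt=60.0, steps=100_000):
    """Two-body point-mass orbit via semi-implicit Euler.

    mu: gravitational parameter GM of the central body (m^3/s^2);
    this is the value a "gravity" slider would control.
    Returns the trajectory as a list of (x, y) positions in metres.
    """
    x, y = r0, 0.0          # start on the x-axis
    vx, vy = 0.0, v0        # moving tangentially
    path = []
    for _ in range(steps):
        r = math.hypot(x, y)
        ax, ay = -mu * x / r**3, -mu * y / r**3   # inverse-square pull
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path

# Earth's GM and the Moon's approximate orbital radius and speed (rounded).
MU_EARTH = 3.986e14
path = simulate_orbit(MU_EARTH, r0=3.844e8, v0=1022.0)
```

Re-running with a different `mu` immediately changes the orbit's shape and period, which is the kind of instant cause-and-effect feedback the reported sliders provide.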
Key Features of the Update:
Live Physical Simulations: The ability to run dynamic experiments, such as a double pendulum or fluid flow, based on user prompts.
3D Model Generation: Support for WebGL technology to render complex objects, ranging from biomolecules to engineering and architectural models.
Generative UI: Gemini writes the code for the interactive tool in the background and embeds it directly in the response, providing a unique educational and practical experience.
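As a rough illustration of the double-pendulum experiment mentioned above (this is a generic textbook simulation, not Gemini's actual code), the system's equations of motion can be integrated in a few lines:

```python
import math

def double_pendulum(theta1, theta2, t_end=2.0, dt=1e-4,
                    m1=1.0, m2=1.0, l1=1.0, l2=1.0, g=9.81):
    """Integrate the standard double-pendulum equations with explicit Euler.

    theta1, theta2: initial angles (radians) of the two rods.
    Returns (theta1, theta2) samples, one every 100 integration steps.
    """
    w1 = w2 = 0.0                      # start at rest
    samples = []
    for i in range(int(t_end / dt)):
        d = theta1 - theta2
        den = 2 * m1 + m2 - m2 * math.cos(2 * d)
        # Angular accelerations from the classic closed-form equations.
        a1 = (-g * (2 * m1 + m2) * math.sin(theta1)
              - m2 * g * math.sin(theta1 - 2 * theta2)
              - 2 * math.sin(d) * m2 * (w2**2 * l2 + w1**2 * l1 * math.cos(d))
              ) / (l1 * den)
        a2 = (2 * math.sin(d) * (w1**2 * l1 * (m1 + m2)
              + g * (m1 + m2) * math.cos(theta1)
              + w2**2 * l2 * m2 * math.cos(d))
              ) / (l2 * den)
        w1 += a1 * dt
        w2 += a2 * dt
        theta1 += w1 * dt
        theta2 += w2 * dt
        if i % 100 == 0:
            samples.append((theta1, theta2))
    return samples

trajectory = double_pendulum(math.pi / 2, math.pi / 2)
```

A chat interface like the one described would render such a trajectory as an animated 3D scene (reportedly via WebGL) and expose parameters like `g` or the rod lengths as sliders.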
Target Audience and Availability
Reports indicate that this feature is primarily aimed at students, researchers, and engineers. It enables them to test preliminary designs and visualize complex data during brainstorming sessions without needing specialized external software.
The feature is currently available to Gemini Pro users on Android, iOS, and the web. Users can activate these capabilities with voice or text commands such as: "Show me how a car's suspension system works" or "Help me visualize the covalent bond in a water molecule."
Observers view this move as a strategic step by Google to outpace competitors like OpenAI and Anthropic. Google is betting on the ability of models like Gemini 3.1 Pro to merge programming code with computer vision, providing functional tools that don't just "tell" the user information but allow them to "experience it" in an interactive digital environment.
Source: Al Jazeera