
From API to Action: Gemini 2.5 takes control

Plus: a complete n8n course that teaches you how to build AI agents

Welcome, Prohumans.

Here’s what you’re going to explore in this post:

  • Google’s newest AI agent can use a computer like you do

  • Google’s AI subscription just went global: what’s inside?

  • An AI just directed a film, and Hollywood isn’t thrilled

Just happened in AI

A complete masterclass on n8n:

🚀 Thursday, October 9 | 10 AM EST

Join our 3-hour Deep Dive LIVE Session where we’ll teach you n8n step by step and introduce you to 12+ Advanced Plug-n-Play AI Agents built to handle the heavy lifting in your business and profession.

💡 In this session, you’ll:

  • Learn how to master n8n, the ultimate workflow automation tool

  • Discover ready-to-use Plug-n-Play AI Agents for client acquisition, customer support, content creation, and more

  • Explore how to launch your own AI Agency or become an AI Consultant overnight

  • See how top professionals scale smarter and faster with AI

Walk away with practical n8n skills and a ready-to-deploy digital AI team to supercharge your work.

⚡ Seats are limited.
👉 Grab Your Spot Now

Gemini 2.5 Computer Use opens a new chapter in interface automation

Image Credits: Google

Google DeepMind has launched Gemini 2.5 Computer Use, a specialized AI model that doesn’t just generate text. It uses apps, clicks buttons, fills forms, and navigates UIs like a human. And it’s already outperforming rivals on real-world tasks.

Here’s everything you need to know:

  • Built on Gemini 2.5 Pro, this new model lets developers create agents that operate web and mobile apps via visual interaction.

  • The agent analyzes screenshots, UI context, and user requests to perform actions like clicking, scrolling, and typing in a loop (a minimal sketch of this loop follows the list).

  • It’s already beating the competition in key benchmarks like WebVoyager and AndroidWorld, with lower latency and higher accuracy.

  • The model works best in browsers today, but shows promising results on mobile interfaces too.

  • Real-world use cases include UI testing, agentic personal assistants, and automated workflows that used to rely on brittle scripts.

  • Google has embedded safety checks to prevent misuse, including step-by-step verification and high-risk action controls.

  • Early adopters report up to 50% faster execution and major improvements in reliability across complex workflows.
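
To make that perceive-act loop concrete, here is a minimal, hypothetical sketch in Python. It does not use Google’s actual Computer Use API: the `query_agent` function, the action schema, and the `pyautogui`-based executor are all assumptions made for illustration, not Google’s interface.

```python
# Hypothetical sketch of a screenshot -> model -> action loop.
# query_agent, TASK, and the action dictionary format are placeholders,
# NOT the real Gemini 2.5 Computer Use API.
import pyautogui  # pip install pyautogui; provides screenshots and input events

TASK = "Fill in the signup form and submit it"  # example user request

def query_agent(screenshot, task, history):
    """Placeholder for a call to a computer-use model.
    In practice this would send the screenshot, the task, and the action
    history to the model and receive a structured action back, e.g.
    {"type": "click", "x": 640, "y": 360} or {"type": "done"}."""
    return {"type": "done"}  # stubbed so the sketch terminates

history = []
while True:
    shot = pyautogui.screenshot()               # 1. perceive: capture the current UI
    action = query_agent(shot, TASK, history)   # 2. decide: model proposes the next action
    history.append(action)

    # 3. act: translate the proposed action into real input events
    if action["type"] == "click":
        pyautogui.click(action["x"], action["y"])
    elif action["type"] == "type":
        pyautogui.write(action["text"])
    elif action["type"] == "scroll":
        pyautogui.scroll(action["amount"])
    elif action["type"] == "done":
        break                                   # task finished (or a safety stop)
```

The loop structure (screenshot in, single action out, repeat until done) is the part the announcement describes; the safety checks Google mentions would sit between steps 2 and 3, vetoing or confirming high-risk actions before they are executed.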

This is a big step toward real digital agents: not just copilots that suggest ideas, but coworkers that can take action. The gap between AI "thinking" and "doing" is closing fast. The next frontier won’t be smarter answers. It’ll be smarter execution.

Google AI Plus expands to 36 more countries

Image Credits: Google

After rolling out Google AI Plus in Indonesia and 40 countries earlier this year, Google is now doubling down, bringing its premium AI subscription to 36 more regions worldwide.

Here’s everything you need to know:

  • Google AI Plus gives users access to its latest AI models and features at a lower price than you might expect.

  • The plan includes upgrades to tools like Nano Banana (for image editing), video generation in Gemini, and embedded AI in Gmail and Docs.

  • Subscribers also get 200GB of storage, extended NotebookLM access, and deeper AI integration across Google’s suite.

  • With this week’s expansion, the service is now available in 77 countries globally.

  • Google is offering 50% off for the first six months to attract early adopters.

  • While pricing varies by region, the strategy is clear: scale up usage fast, then build long-term value.

  • The move positions Google to compete directly with OpenAI’s growing ecosystem of paid consumer AI tools.

Google’s trying to make AI a utility, something you pay for monthly like storage or streaming. The big question is whether users will keep paying once the novelty wears off. It’s not just about features. It’s about habit.

The Sweet Idleness imagines an automated future

Image Credits: Andrea Iervolino AI

What happens when 99% of jobs are automated and filmmaking is one of them? That’s the world imagined in The Sweet Idleness, a new film directed not by a human, but by an AI system named FellinAI.

Here’s everything you need to know:

  • Produced by Andrea Iervolino, the film is set in 2135, where nearly all labor is automated and the remaining 1% are stuck in the mines.

  • It’s the first feature directed by an AI agent, developed as a tribute to European cinema and named after Fellini.

  • The digital actors were created using Actor+, a system that generates virtual performances based on real human faces and licensed likenesses.

  • The teaser, with nods to Nino Rota’s scores, blends dystopian narrative with nostalgic visuals.

  • The film drops as tensions in Hollywood remain high over AI’s growing role in content creation.

  • Critics have called the project “the death of art,” while others argue it’s an alternative form, not a replacement.

  • Iervolino says he’s still “a human in the loop,” and insists the tech can empower creatives to scale their visions, not erase them.

We’ve seen automation reshape factories, offices, even code. Cinema might be next. Whether that’s a creative expansion or an existential threat depends on who’s holding the camera and who gets to decide what’s “real” art.

Thanks for reading…

That’s a wrap.

What's on your mind?

Share your best ideas with us at theprohumanai@gmail.com

We'll bring your ideas to life. Send them our way, and we'll get to work on making them a reality.

Did you find value in our newsletter today?

Your feedback can help us create better content for you!


I hope this was useful…

If you want to learn more, visit this website.

Get your brand, product, or service in front of 700,000+ professionals here.

Follow us on 𝕏/Twitter to learn more about AI: