
Google Pixel 10 should be called "AI phone"

Plus: Learn about the 10 AI agents that will do everything for you (don't miss out on this)

Welcome, Prohumans.

Here’s what you’re going to explore in this post:

  • Google’s Pixel 10 is here. It’s all about AI.

  • NASA’s AI can now forecast solar storms

  • Microsoft just gave AI a spatial imagination

Just happened in AI

Together with Nella AI

This Friday, August 22 at 10 AM EST, step into a live hands-on jam session where we’ll showcase 10+ ready-to-use advanced AI agents designed to do the heavy lifting for you.

From automating workflows to powering your content engine to supporting your business growth, these AI agents are built to help you scale faster and smarter. Many people are already using them to aim for their first $100K in just six months.

🚀 Whether you’re an entrepreneur, a creator, or a professional, there’s an AI agent waiting to work for you.

👉 Don’t miss the chance to meet your future AI workforce.

The smartphone war just turned into an AI arms race.

Google just unveiled its Pixel 10 lineup, and it’s not shy about why this matters: AI is now the centerpiece of the pitch, not just another feature.

Here’s everything you need to know:

  • Google’s Pixel 10 phones now center on Gemini, its AI assistant, which handles everything from scheduling to real-time scene analysis.

  • A new feature called “Magic Cue” anticipates your needs before you ask, like surfacing flight info mid-call.

  • The $799 base model climbs to $1,799 for the foldable Pro Fold, which touts an 8-inch display and heavy-duty hinge.

  • “Camera Coach” can suggest angles and lighting, and even merge photos into one shot where everyone looks their best.

  • Pixel users get a year of Google’s “AI Pro” subscription (normally $19/month), with access to advanced tools and storage.

  • Google isn’t outselling Apple or Samsung yet, but it’s making a clear bet: AI is the next wedge to shift the market.

  • With Apple’s Siri overhaul delayed until 2026, Google has a rare opening to lead with capability, not just marketing.

Most people don’t switch phones over one shiny feature, but if AI genuinely makes daily tasks easier, it could chip away at brand loyalty. This isn’t just a tech launch; it’s Google betting that usefulness will outcompete habit.

Surya: the AI that reads the Sun’s next move

Image Credits: NASA

NASA and IBM just launched Surya, a foundation AI model trained to predict solar flares. It’s built on nine years of Sun data, and it’s already beating benchmarks.

Here’s everything you need to know:

  • Surya was trained on high-resolution data from NASA’s Solar Dynamics Observatory, covering nearly a full solar cycle.

  • It can predict solar flares up to two hours ahead, a leap in space weather forecasting.

  • Surya beat existing models by 16% in early tests, without requiring extensive labeling.

  • The model works by learning patterns from raw ultraviolet, magnetic, and velocity imagery of the Sun.

  • It’s open-source: scientists can access the model on Hugging Face and the code on GitHub (a minimal download sketch follows this list).

  • Use cases include protecting satellites, GPS systems, astronauts, and even power grids from solar disruptions.

  • NASA says the same architecture could power AI models in planetary science, Earth observation, and more.
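
If you want to poke at Surya yourself, here is a minimal Python sketch of pulling the weights down from Hugging Face. The repo ID is an assumption based on the NASA/IBM release naming, so confirm it on the actual Hugging Face listing before running:

    # Minimal sketch: download the Surya checkpoint from Hugging Face.
    # The repo ID below is assumed, not confirmed; check huggingface.co first.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(repo_id="nasa-ibm-ai4science/Surya-1.0")  # assumed ID
    print(f"Surya files downloaded to: {local_dir}")

From there, the loading code in the GitHub repo takes over; the point is that the whole pipeline is one pip install away.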

This is a rare combo: groundbreaking science, critical real-world impact, and open-access tooling. Surya shows that foundation models aren’t just for text or chatbots; they might soon be the backbone of scientific discovery itself.

AI is finally learning to think in 3D

Image Credits: Microsoft

Microsoft Research just introduced MindJourney, a new way for AI to “mentally explore” 3D spaces. The breakthrough tackles a major weakness in vision-language models: understanding how objects relate in real-world space.

Here’s everything you need to know:

  • VLMs (like Gemini or GPT-4o) are great with static images but stumble over spatial questions like “what’s behind you if you turn left?”

  • MindJourney simulates 3D movement using a video-trained world model, letting AI imagine what different perspectives might look like.

  • Instead of brute-forcing every option, it uses a “spatial beam search” to focus only on the most promising paths (see the toy sketch after this list).

  • The system improved VLM performance by 8% on spatial reasoning benchmarks with zero extra training.

  • It works by layering symbolic reasoning from VLMs with world models that understand motion and perspective.

  • This means smarter agents for robotics, AR/VR, and smart homes, and possibly tools for people with visual impairments.

  • Microsoft plans to expand the system to predict future changes in scenes, bringing planning and vision together in one loop.
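
To make the beam-search idea concrete, here is a toy Python sketch of what a spatial beam search could look like. Everything in it is a hypothetical stand-in, not Microsoft’s code: imagine_view() plays the role of the video-trained world model, and score_view() plays the role of the VLM judging each imagined perspective.

    # Toy sketch of a "spatial beam search" over imagined viewpoints.
    # imagine_view() and score_view() are hypothetical stand-ins for the
    # world model and the VLM scorer described in the MindJourney write-up.

    MOVES = ["forward", "turn_left", "turn_right"]
    BEAM_WIDTH = 2   # keep only the 2 most promising move sequences
    DEPTH = 3        # imagine up to 3 moves ahead

    def imagine_view(path):
        """Stand-in world model: 'render' the scene after a move sequence."""
        return f"imagined view after {path}"

    def score_view(view, question):
        """Stand-in VLM scorer: rate how helpful this view is for the question."""
        return -len(view)  # placeholder heuristic, not a real model

    def spatial_beam_search(question):
        beam = [([], 0.0)]  # (move sequence, cumulative score)
        for _ in range(DEPTH):
            candidates = []
            for path, score in beam:
                for move in MOVES:
                    new_path = path + [move]
                    view = imagine_view(new_path)
                    candidates.append((new_path, score + score_view(view, question)))
            # Prune: carry forward only the best sequences instead of all of them.
            beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:BEAM_WIDTH]
        return beam[0][0]

    print(spatial_beam_search("What is behind me if I turn left?"))

The pruning step is the whole trick: instead of imagining every possible move sequence (27 length-3 sequences here), the search only ever carries a handful forward, which is what keeps the approach tractable.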

This feels like a foundational shift. Vision alone isn’t enough; spatial understanding is the missing link for real-world AI. MindJourney hints at a future where AI agents don’t just “see”; they move, test, and plan inside a mental map. That’s how humans think, and it’s about time AI caught up.

Thanks for reading…

That’s a wrap.

What's on your mind?

Share your best ideas with us at theprohumanai@gmail.com

Send them our way, and we’ll get to work on bringing them to life.

Did you find value in our newsletter today?

Your feedback can help us create better content for you!


I hope this was useful…

If you want to learn more, visit this website.

Get your brand, product, or service in front of 700,000+ professionals here.

Follow us on 𝕏/Twitter to learn more about AI: