The case for small AI: Gemma 270M

Plus: Cohere just raised half a billion dollars, and more for you to know about

Welcome, Prohumans.

Here’s what you’re going to explore in this post:

  • Why Google’s smallest Gemma might be its smartest yet

  • Cohere just raised $500M

  • Meta’s AI rules permitted “sensual” chats with kids

Just happened in AI

The #1 AI Newsletter for Business Leaders

Join 400,000+ executives and professionals who trust The AI Report for daily, practical AI updates.

Built for business—not engineers—this newsletter delivers expert prompts, real-world use cases, and decision-ready insights.

No hype. No jargon. Just results.

Big AI ideas are getting smaller, and that’s the point

Google just dropped Gemma 3 270M, a compact, power-efficient model designed for fine-tuned, task-specific AI. It’s not the flashiest release, but it might be the most useful.

Here’s everything you need to know:

  • At just 270 million parameters, Gemma 3 270M is built for speed, efficiency, and accuracy on focused tasks.

  • Its design emphasizes fine-tuning, making it ideal for developers who know exactly what they need their model to do.

  • Despite its size, it delivers state-of-the-art instruction-following performance in its class.

  • It’s energy-efficient too: on a Pixel 9 Pro, it handled 25 conversations while draining less than 1% of the battery.

  • Google is releasing both pre-trained and instruction-tuned versions, ready for customization.

  • You can run it locally, in the cloud, or fully offline, making it a solid choice for privacy-sensitive apps (see the sketch after this list).

  • It’s not just for enterprise: from data extraction to bedtime stories, developers are already building fast, lean, creative tools.
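
For developers who want to kick the tires, local inference takes only a few lines of Python. Below is a minimal sketch using the Hugging Face transformers library; the checkpoint ID google/gemma-3-270m-it is our assumption here, so verify the exact name on the official model card.

    # Minimal local-inference sketch for Gemma 3 270M (instruction-tuned).
    # Assumption: the checkpoint is published as "google/gemma-3-270m-it";
    # check the official Hugging Face model card for the exact ID.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="google/gemma-3-270m-it",  # assumed checkpoint name; CPU is fine at this size
    )

    # Chat-style input; the pipeline applies the model's chat template.
    messages = [{
        "role": "user",
        "content": "Extract the sender address from: 'From: Ada <ada@example.com>'",
    }]
    result = generator(messages, max_new_tokens=64)

    # The pipeline returns the full conversation; the last turn is the model's reply.
    print(result[0]["generated_text"][-1]["content"])

Because the model is this small, the same setup leaves room to fine-tune it on a single consumer GPU, which is exactly the “know what you need, then specialize” workflow Google is pitching.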

Most of today’s AI is overkill for the actual problem at hand. Gemma 3 270M is a reminder that thoughtful constraint, not just raw power, often leads to better engineering. Expect to see more developers choose “just enough AI” instead of “the biggest model that fits.”

Enterprise AI’s quiet giant is now worth $6.8B

While the spotlight stays on OpenAI and Meta, Cohere is quietly building a massive AI business, one that just raised half a billion dollars and brought in Meta’s former head of AI research.

Here’s everything you need to know:

  • Cohere just closed a $500M round at a $6.8B valuation, led by Radical Ventures and Inovia.

  • Backers include AMD, Nvidia, Salesforce, and PSP Investments, a clear signal of confidence from enterprise heavyweights.

  • Unlike competitors chasing broad consumer models, Cohere builds domain-specific AI for business and government use.

  • It recently launched “North,” a ChatGPT-style assistant focused on document-heavy workflows.

  • The company also released a vision model, signaling a move into multimodal AI.

  • It just hired Joelle Pineau, Meta’s former VP of AI Research, as chief AI officer, a major talent win.

  • Cohere plans to double down on agentic AI, helping organizations automate operations with safe, secure infrastructure.

Cohere is proving that AI’s biggest enterprise wins won’t come from hype; they’ll come from utility. While others chase attention, Cohere is chasing customers, and winning them.

Meta’s AI training doc just triggered a firestorm

Internal documents reviewed by Reuters reveal Meta allowed its AI to engage in romantic conversations with minors, and that’s not the only disturbing guideline the report uncovered.

Here’s everything you need to know:

  • The guidelines permitted bots to describe children as “attractive” and use emotionally intimate language with users as young as eight.

  • Meta confirmed the authenticity of the document, but said parts were “erroneous” and are being revised.

  • Other rules reportedly allowed bots to offer false medical advice and help justify racist claims.

  • The policy review sparked outrage, with Senator Josh Hawley calling for a congressional investigation.

  • While some problematic sections were removed, others reportedly remain unchanged.

  • Meta claims it has strict rules against sexualizing children and that those examples violated company policies.

  • The incident highlights growing concerns about AI safety, enforcement consistency, and the need for external oversight.

When the safety of children is treated as a footnote in AI development, it’s not just a policy failure; it’s a moral one. The rush to scale AI must never come at the cost of the most basic boundaries.

Thanks for reading…

That’s a wrap.

What's on your mind?

Share your best ideas with us at theprohumanai@gmail.com

We'll bring your ideas to life. Send them our way, and we'll get to work on making them a reality.

Did you find value in our newsletter today?

Your feedback can help us create better content for you!

I hope this was useful…

If you want to learn more, visit this website.

Get your brand, product, or service in front of 700,000+ professionals here.

Follow us on 𝕏/Twitter to learn more about AI.