Should AI be more like a parent?

Plus: A complete 3-hour masterclass just dropped on how to build AI agents, and you've got to watch it (it's free)

Welcome, Prohumans.

Here’s what you’re going to explore in this post:

  • The AI arms race is here, and it's not just the good guys using it

  • The case for building empathy into machines

Just happened in AI

AI agent course by our partner AirCampus:

AI is no longer just about answering questions — today’s AI Agents can run complex workflows, make decisions, and hand tasks to humans at the right time.

In this live 3-hour Masterclass, you’ll learn:

  • How to design intelligent AI agents without coding

  • Real-world tools like Loveable, Relevance AI, n8n, Make.com, and Stitch

  • Practical ways to automate your work, save time, and scale productivity

  • A step-by-step framework to build, test, and control your own AI agents

📅 Date: 19th August, Tuesday
  Time: 10 AM EST

Perfect for founders, professionals, and creators who want to use AI not just as a tool, but as a team member.

Geoffrey Hinton thinks AI needs “maternal instincts”

Geoffrey Hinton, one of AI’s founding minds, has a bold proposal: bake maternal instincts into AI systems. Not emotions. Not tenderness. But a built-in instinct to protect the humans they serve.

It’s a powerful shift in how we think about AI safety and who it’s really for.

Here’s everything you need to know:

  • Hinton’s idea isn’t sentimental; it’s structural: design AI to care, not just compute.

  • Without regulation, AI can manipulate markets, spread disinformation, or make dangerous calls.

  • The “maternal instinct” model means safety comes first, even when it costs efficiency.

  • Imagine AI that senses user distress and adjusts behavior, like a parent would.

  • Businesses often fear regulation, but in AI it’s a moat: trust is the true differentiator.

  • Other high-risk industries, such as aviation, pharma, and nuclear, didn’t scale until safety came first.

  • AI is reaching that same inflection point: trust will decide who leads.

AI won’t protect us by accident. If we want systems that care, we have to design them that way on purpose. Hinton’s framing doesn’t just call for new code. It demands new values. And those might be the most important lines we ever write.

AI is the new weapon in every hacker’s toolkit

CC: Edexec

Cybercriminals, state spies, and security firms are all using AI now.
From phishing to patching vulnerabilities, large language models are transforming how hacking happens and who’s doing it.

Here’s everything you need to know:

  • Russia used an AI tool in phishing emails to scan Ukrainian targets’ computers.

  • It’s the first known case of malicious code built with large language models.

  • LLMs can’t yet create super-hackers, but they’re speeding up the skilled ones.

  • Cyber firms like Google and CrowdStrike are using AI to find vulnerabilities first.

  • Google’s Gemini project has already uncovered 20+ bugs in widely used software.

  • Criminals and nation-states are also using AI to scale attacks with better speed and precision.

  • For now, the tech mirrors human skills—but it’s getting faster, cheaper, and more convincing.

The AI threat isn’t looming; it’s live. But this isn’t a simple “good vs. evil” story. It’s an acceleration story. Every player in cybersecurity is being supercharged. The gap between defenders and attackers won’t come down to who has AI, but who uses it better.

Thanks for reading…

That’s a wrap.

What's on your mind?

Share your best ideas with us at theprohumanai@gmail.com

Send them our way, and we'll get to work on making them a reality.

Did you find value in our newsletter today?

Your feedback can help us create better content for you!


I hope this was useful…

If you want to learn more, visit this website.

Get your brand, product, or service in front of 700,000+ professionals here.

Follow us on 𝕏/Twitter to learn more about AI: