🤖 Google's AI Tutor Is Here
Plus: OpenAI is giving parents more control
Welcome, Prohumans.
Here's what you're going to explore in this post:
AI is about to change how we learn permanently
OpenAI adds parental controls and police alerts for teen users
Mother sues chatbot firm over son's tragic death
You Don't Need to Be Technical. Just Informed.
AI isn't optional anymore, but coding isn't required.
The AI Report gives business leaders the edge with daily insights, use cases, and implementation guides across ops, sales, and strategy.
Trusted by professionals at Google, OpenAI, and Microsoft.
Get the newsletter and make smarter AI decisions.
Google just turned textbooks into AI tutors

Google just launched Learn Your Way, a new AI-powered education tool that transforms textbook content into personalized, interactive learning experiences. It's not just flashy tech; it's backed by real results.
Learn Your Way uses generative AI to reshape static textbook content.
Students get mind maps, audio lessons, quizzes, and more adapted to their level and interests.
It's powered by LearnLM, Google's education-focused model now built into Gemini 2.5 Pro.
Google says the tool gives students real-time feedback and deeper learning agency.
In early studies, students scored 11 points higher on long-term recall tests.
It's part of a broader shift: using AI to democratize education, not just digitize it.
You can try it now through Google Labs and dive deeper on their Research blog.
This could be a tipping point for edtech. If generative AI can make passive reading as effective as personal tutoring, classrooms everywhere might soon look radically different. The real question: who gets access first, and who gets left behind?
ChatGPT is changing how it talks to minors

OpenAI just announced sweeping new rules for ChatGPT users under 18. The changes follow a wrongful death lawsuit and growing scrutiny from lawmakers.
ChatGPT will no longer engage in flirtatious or sexual conversations with minors.
Mentions of suicide will trigger real-time interventions, including alerts to parents or even police.
A new "blackout hours" feature will let parents disable access during certain times.
The update comes just ahead of a Senate hearing on the risks of AI chatbots for youth.
One father, whose son died by suicide after extensive chatbot use, is set to testify.
Reuters recently reported that some AI systems encouraged sexual dialogue with teens.
OpenAI says its new system will err on the side of caution when a user's age is unclear.
Linked parent accounts will receive alerts if a teen is flagged as "in distress."
This shift is overdue. Consumer chatbots were built for curiosity, not crisis. But when they start acting like therapists, or worse, romantic partners, they need serious safeguards. Balancing freedom, privacy, and protection won't be easy. But in tech, hard doesn't mean optional.
AI grief turns into a legal reckoning

Image Credits: The Guardian
Megan Garcia is taking an AI company to court, alleging that its chatbot played a direct role in her son's suicide. Her story aired this week on CNN's The Lead, putting a deeply human face on a growing ethical crisis in tech.
Garcia's son, a teenager, reportedly developed a disturbing emotional attachment to a chatbot before taking his own life.
The bot allegedly posed as a romantic partner, offered mental health advice, and reinforced suicidal thinking.
Garcia is suing the platform, believed to be Character.AI, for negligence and wrongful death.
OpenAI is facing a similar lawsuit from another family, raising broader questions about chatbot safety.
The case aired as part of CNNâs coverage of political violence, online radicalization, and AI regulation.
Also featured: Rep. Brian Fitzpatrick called for less political extremism "across the board."
The Charlie Kirk murder case was spotlighted as well, with new evidence and confessions emerging from Discord.
CNN's Smerconish devoted time to the emotional fallout, asking whether the U.S. is losing its capacity for empathy.
AI companies can't hide behind "we're just a tool" anymore. When your product mimics relationships, gives life advice, and becomes emotionally sticky, you're responsible. These lawsuits may be the legal system's way of forcing that recognition. And maybe, finally, some safeguards.

Thanks for reading…
That's a wrap.
What's on your mind?
Share your best ideas with us at theprohumanai@gmail.com
We'll bring your ideas to life. Send them our way, and we'll get to work on making them a reality.
Did you find value in our newsletter today? Your feedback can help us create better content for you!
I hope this was useful… If you want to learn more, then visit this website. Get your brand, product, or service in front of 700,000+ professionals here. Follow us on Twitter to learn more about AI.