- The Prohuman
ChatGPT Images 2.0 gets practical
Plus: Mythos leak raises trust questions
Hello, Prohuman
Today, we will talk about these stories:
OpenAI fixed a major image problem
Anthropic’s locked model wasn’t locked
SpaceX may buy Cursor for $60B
How Jennifer Aniston’s LolaVie brand grew sales 40% with CTV ads
The DTC beauty category is crowded. To break through, Jennifer Aniston's brand, LolaVie, worked with Roku Ads Manager to easily set up, test, and optimize CTV ad creatives. The campaign drove a big lift in sales and customer growth, helping LolaVie stand out.
Image AI just became usable

Image credits: OpenAI
The big upgrade is boring, and that’s why it matters.
OpenAI’s new ChatGPT Images 2.0 can generate readable menus, UI elements, comics, and marketing assets with accurate text. Earlier image models often mangled words. OpenAI says the model supports non-Latin scripts and outputs up to 2K resolution.
This moves image generation from novelty to workflow. If a model can reliably place small text, icons, labels, and layouts where asked, designers and marketers can use it for real drafts instead of joke screenshots. That is a bigger shift than prettier art styles. The screen is the product.
Expect pressure on Canva, stock asset tools, quick-turn agencies, and any software built around simple creative production. You can almost hear someone clicking through ad variants at 11 p.m. because the cost just dropped again.
When AI images stop looking fake, what becomes the new giveaway?
Safety claims meet basic access failure

Anthropic says Mythos is dangerous enough to tightly limit, yet outsiders reportedly got in on day one.
Bloomberg reports that a small group of unauthorized users accessed Anthropic's new Mythos model through a private online forum. The access began the same day Anthropic announced limited testing for selected companies, and those users have reportedly continued using the model regularly.
This is the awkward gap in AI safety right now. Companies talk about model risk at the frontier level, but sometimes the weak point is ordinary access control, credentials, or rollout process. That matters more than polished policy language.
Expect customers and regulators to ask harder questions about who can actually touch these systems, how logs are monitored, and how quickly access can be revoked. In a market built on trust, small leaks can sound louder than benchmark wins.
If companies cannot secure early access, what happens when these models are everywhere?
SpaceX is buying time in AI

A rocket company may spend $60 billion on coding software months before going public.
SpaceX said it has a deal with Cursor that lets it either buy the startup later this year for $60 billion or invest $10 billion now. Cursor makes AI coding tools, hit $100 million in annual recurring revenue in under two years, and says limited computing power has slowed its growth.
This looks less like a clean acquisition and more like a fast repair job for Musk’s AI position. Cursor brings product traction and developers right away, while SpaceX brings chips, data centers, and cash. The timing matters.
If this closes before the IPO, public investors may be buying an AI story as much as a space company. You can almost hear keyboards late at night as rivals OpenAI and Anthropic push harder into coding tools.
Does SpaceX still know what business it is in, or is that now the point?
Prohuman team
- Founder: Covers emerging technology, AI models, and the people building the next layer of the internet.
- Founder: Writes about how new interfaces, reasoning models, and automation are reshaping human work.
Free Guides
Explore our free guides and products to get into AI and master it.
All of them are free to access and will stay free for you.
Feeling generous?
You know someone who loves breakthroughs as much as you do.
Share The Prohuman; it's how smart people stay one update ahead.