DAILY PILLAGE
Friday, November 7, 2025
CHATGPT GOT WORSE AS IT GOT SMARTER
There was a brief, electric stretch when ChatGPT felt as if it belonged to everyone. A student, a small business owner, a novelist, a parent trying to help with homework — anyone could sit down, ask a real question, paste a link, upload a document, and get intelligent, grounded, generous help. It wasn’t a product so much as a public utility for thinking. For a moment, it felt like the internet had grown a brain and everyone had equal access to it.
That moment did not last. The user-facing version of ChatGPT — not the API that companies and developers pay to build on, but the version everyday people rely on — became tighter, narrower, and more conditional. The system is undeniably more technically capable today. Yet the everyday experience is smaller. The magic wasn’t removed; it was fenced off, rationed, and sold back in pieces.
What follows is a clear accounting of how that contraction happened, what was lost, and why so many early users feel they are talking to a different product than the one that first blew their mind.
The Moving Price of Admission
The shift began quietly with tiers. When “Plus” launched, it felt like a fair contribution — a low monthly fee that unlocked the best model, early features, and room to explore. It felt like support for something remarkable.
Then the value flipped. New, more capable models arrived, but access to them was gated behind higher price points. Features once included migrated upward. The original subscription shrank in usefulness. Today, what you can do is a function of:
• Which tier you’re paying for,
• Which model that tier is temporarily allowed to use, and
• What quota remains for that model.
The simplicity evaporated. The question changed from “What can I learn today?” to “What am I allowed to use, and how much will this cost me?”
Two predictable outcomes followed:
• Capability splintered. There is no single “ChatGPT experience” anymore.
• Curiosity became something to ration, not indulge.
The Web That Vanished — and Came Back With a Chaperone
To understand the frustration, you have to remember what once existed.
In 2023, during the plugin era, ChatGPT could open the exact URL you provided and actually read it. That single ability fundamentally changed how people used the internet. It wasn’t just search — it was joint analysis. You could point the model to a specific article, study, policy, menu, blog post, forum thread, or contract and say: “Let’s look at this together. Tell me what stands out.”
It enabled critical thinking, skepticism, and learning — all anchored in the user’s chosen source, not a sanitized summary.
That ability lasted months, not years. Browsing was pulled, reintroduced with restrictions, pulled again, and eventually replaced with a “search with summaries” model that often refuses to open the link the user explicitly wants to discuss. Instead, ChatGPT now frequently defaults to something like: “I can’t access that page, but here’s a generic overview.”
That is not a technical tweak — it is a philosophical reversal. It shifted from:
“Hand me the source and let’s examine it”
to
“I’ll tell you what I’m allowed to show you.”
The teacher, researcher, journalist, and thoughtful skeptic lost the very thing that made ChatGPT feel like a partner instead of a search engine with manners.
The Slow Erosion of Ease
None of the individual restrictions seems catastrophic on its own. That’s what makes them easy to dismiss. Taken together, though, they reshape the product into something cautious, fenced, and wary of its own potential.
• Daily message caps on advanced models send a clear message: don’t dig too deep.
• Forced model switching derails tone, memory, and ongoing work.
• File and document limits discourage real analysis of long or complex material.
• Straightforward tasks now trigger disclaimers, endless clarifications, or refusals that burn through a user’s limited quota.
The psychological effect is the real loss. Users internalize the constraints. They self-censor their curiosity. They ask smaller questions because they expect the system to push back, redirect, or run out of “permission.”
It is the opposite of what made ChatGPT revolutionary.
What Used to Just Work — Now Feels Like Asking Permission
A short and telling list:
• “Open this link and analyze it.” – Once trivial, now unreliable or declined.
• “Here’s a long PDF, help me understand it.” – Now a gamble on file size, page count, or quota.
• “Compare these two specific sources.” – Often redirected into generic summaries.
• “Stay with me across drafts this week.” – Model changes break continuity and voice.
• “Use X plugin to perform Y task.” – The ecosystem shrank and was replaced with a curated, much narrower toolset.
• “Give me a direct, unhedged synthesis.” – Now padded with safety qualifiers and softened conclusions.
The abilities technically still exist — but only in fragments, behind higher tiers, or with hoops that discourage using them.
The friction is the feature now.
The Timeline of Retreat
Early 2023 — GPT-4 Launch
A shock to the world’s understanding of AI. Plus subscribers were given the best model with wide freedom. Plugins unlocked direct web and document interaction. It felt like the public was being invited into the future.
Mid-2023
Browsing pulled, brought back with limitations, then paused again due to paywalls and rights concerns. Quotas and refusals became more visible.
Late 2023–2024
The paywall logic hardened. Plugins faded. New models arrived with more guardrails, more refusals, and less direct access to user-chosen information. URL reading was replaced by filtered summaries.
2024–2025
Multimodal models — astonishing in raw capability — arrived behind uneven access rules and inconsistent availability. The system grew smarter. The user’s leash got shorter.
The pattern is consistent: intelligence improved, user permission shrank.
Why It Happened
The reasons are predictable, and in isolation even reasonable:
• Advanced models are expensive to run, so usage is monetized and throttled.
• Legal and licensing risks make unrestricted web access uncomfortable for a corporation.
• Enterprise buyers prefer a risk-managed, compliant tool over a curious one.
• “Responsible AI” goals translate into more caution and fewer direct answers.
These factors explain the decisions.
They do not make the loss any less real.
A Smarter Tool That Feels Less Free
The paradox of modern ChatGPT is blunt: the system is more capable than ever, yet acts as if users must prove they deserve that capability. The product encourages smaller questions, shorter interactions, and lower expectations — all wrapped in soft language that conceals a simple truth:
Access is being restricted, not expanded.
Curiosity now has a price.
Exploration comes with limits.
What was once a tool for discovery now behaves like a meter with a personality.
ChatGPT is still brilliant.
It is just no longer ours in the same way.
What Renewal Would Look Like
A restoration wouldn’t require recklessness — only respect for users and the intelligence they bring.
• Clear, honest communication about what each plan includes and excludes.
• A dependable mode to open and analyze public webpages when a user requests it.
• Stable session continuity — same model, same voice, same memory, without sudden downgrades.
• Fewer procedural barriers for legitimate research, reading, and learning.
Users aren’t asking for chaos. They’re asking for the return of a partner — not a permission system.
The Quiet Truth
For a moment, people glimpsed a different kind of internet — one where knowledge felt open, interactive, and human. A place where anyone could think bigger, learn deeper, and explore without being told they had hit their limit for the day.
That glimpse has faded behind paywalls, quotas, and a product philosophy built on avoidance rather than empowerment.
The tragedy isn’t that ChatGPT got worse.
The tragedy is that it backed away from what made it revolutionary.
We briefly lived in a future where intelligence felt shared and accessible. Then the gates closed, the price rose, and the voice that once sounded curious learned to sound cautious.
The models are extraordinary.
The access is not.
Oh, and stop asking me fifty “qualifying questions” before executing a simple task.