Why LLMs Might Be Killing Creativity Without Us Noticing
When ChatGPT (or any other LLM) gives you a code snippet, a cloud migration plan, or an architectural suggestion… it often feels right. Too right. It’s clean. Safe. Popular.
But that’s the thing. It’s popular. Not necessarily correct, or optimal, or creative.
LLMs Don’t Think, They Mirror
LLMs like GPT or Claude don’t reason from scratch. They synthesize based on patterns from billions of training samples. They don’t give you the most innovative solution; they give you the most likely one.
In many cases, that’s fine. But in others, it’s dangerously limiting:
- Developers get code that follows StackOverflow’s greatest hits, but miss edge-case nuance
- Architects get blueprints that follow the trend, not what fits their unique context
- Decision-makers get strategy summaries that reflect the consensus, not disruption
The “Echo Chamber” Risk
The more we rely on AI to accelerate decisions, the more we risk converging on the same solutions, technologies, and even architectural patterns.
Example? Ask ChatGPT how to deploy a modern microservices architecture, and 9/10 times you’ll get:
- Kubernetes
- API Gateway
- CI/CD
- Terraform
- Observability with Prometheus/Grafana
Sound familiar? Yes, it’s the same “greatest hits” we’ve all seen before. But what if a simpler ECS-based setup, or even a monolith, were better for your case? LLMs don’t push those options unless you ask very specifically.
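What does “asking very specifically” look like? Here’s a minimal sketch using the OpenAI Python SDK; the model name, the team size, and the traffic numbers are all hypothetical placeholders for your own constraints. The point is the shape of the prompt: you state your actual context and explicitly invite the unpopular answer.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Instead of "how do I deploy a modern microservices architecture?",
# pin the model to YOUR constraints and ask for the simplest thing that works.
prompt = (
    "My team is 3 engineers, traffic is roughly 50 requests/second, and we already run on AWS. "
    "Argue for the SIMPLEST deployment that meets these constraints. "
    "Explicitly compare a single ECS service (or even a monolith) against the usual "
    "Kubernetes + API Gateway + Terraform stack, and tell me what I would actually lose "
    "by NOT using Kubernetes."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Same model, very different answer, because you removed its permission to default to the greatest hits.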
How Do We Escape the Echo?
- Prompt Differently: Ask why not instead of just how to
- Challenge the Output: Treat the LLM like an intern, not a guru (see the sketch after this list)
- Reintroduce Human Curiosity: Read, explore, and test ideas that aren’t in the mainstream
- Tune Your Models (if possible): Bring in your domain-specific nuance when using LLMs in production
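One way to treat the LLM like an intern is to make it critique its own first answer before you accept anything. Below is a minimal sketch of that loop, again assuming the OpenAI Python SDK; the critique wording is just a starting point, not a recipe.

```python
from openai import OpenAI

client = OpenAI()


def ask(messages):
    """Single chat completion call; the model name is illustrative."""
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content


question = "How should we deploy our new microservices platform?"

# Step 1: get the "popular" answer.
first_answer = ask([{"role": "user", "content": question}])

# Step 2: force a critique pass -- the "intern, not guru" treatment.
critique = ask([
    {"role": "user", "content": question},
    {"role": "assistant", "content": first_answer},
    {"role": "user", "content": (
        "Now challenge your own answer. Where is it just repeating the most common "
        "pattern? Name two less popular alternatives, and the specific situations "
        "in which each of them would beat your first suggestion."
    )},
])

print(critique)
```

You still make the final call; the second pass just surfaces the alternatives the first pass quietly skipped.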
My Personal Take
I love working with LLMs. I use them daily. But as a cloud architect and AI enthusiast, I’ve started noticing how easily they can flatten originality. They don’t force us to think. They let us skip the messy part, and sometimes that’s exactly the part where innovation lives.
Maybe the future isn’t about using AI to go faster. Maybe it’s about using AI to go deeper (if we stay aware).
Written by Nuno Neto


