I want to start this blog with a prediction.
The current trajectory of large language models (LLMs) is not eliminating software engineering as a discipline. In my opinion, it is redistributing where human effort is applied within it. Tedious, repetitive, and predictable tasks are being offloaded to LLMs, while higher-order reasoning stays in the human domain.
What can be safely automated
At present, mundane coding tasks such as boilerplate generation, CRUD endpoints, schema wiring, and basic UI scaffolding can be delegated to LLMs with minimal risk. These are transformations whose correctness can be verified quickly. The role of the engineer is not diminished but reframed: one must understand system architecture, constraints, and integration points well enough to guide the model and validate its outputs.
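As a concrete illustration of the kind of boilerplate that is cheap to delegate and cheap to verify, consider a hypothetical in-memory CRUD module. Nothing here comes from the blog's actual code; it is a generic sketch of the category of task being described.

```javascript
// Hypothetical in-memory CRUD store: the sort of boilerplate an LLM
// can generate reliably, because every operation is small and each
// behavior can be confirmed with a one-line assertion.
class CrudStore {
  constructor() {
    this.items = new Map();
    this.nextId = 1;
  }
  create(data) {
    const id = this.nextId++;
    const item = { id, ...data };
    this.items.set(id, item);
    return item;
  }
  read(id) {
    return this.items.get(id) ?? null;
  }
  update(id, data) {
    if (!this.items.has(id)) return null;
    const updated = { ...this.items.get(id), ...data, id };
    this.items.set(id, updated);
    return updated;
  }
  delete(id) {
    return this.items.delete(id);
  }
}
```

The point is not the code itself but the verification cost: a reviewer can confirm each method does what it claims in seconds, which is exactly why this layer is safe to offload.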
Vibe coding vs guided coding
It is important to differentiate this from what is often called vibe coding. In that mode, a user with little understanding of the system simply prompts "implement feature X" and accepts the result. This can produce something that works on the surface but lacks guarantees around correctness, maintainability, and scalability. Informed use of LLMs, in contrast, is closer to supervised synthesis: the engineer specifies intent, constraints, and invariants, and the model accelerates implementation within those boundaries.
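One way to make "the engineer specifies invariants" concrete is to write the invariant checks before accepting the generated code. The sketch below assumes a hypothetical model-generated `slugify` function; the function and the invariants are my illustration, not anything from the post's system.

```javascript
// Supervised synthesis sketch: the engineer writes the invariants,
// the model writes the implementation. `slugify` stands in for a
// hypothetical model-generated function.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse non-alphanumeric runs to dashes
    .replace(/^-+|-+$/g, '');     // trim leading/trailing dashes
}

// Invariants specified up front, independent of the implementation:
// lowercase output, URL-safe characters, and idempotence.
function checkInvariants(fn, input) {
  const out = fn(input);
  if (out !== out.toLowerCase()) throw new Error('not lowercase');
  if (!/^[a-z0-9-]*$/.test(out)) throw new Error('not URL-safe');
  if (fn(out) !== out) throw new Error('not idempotent');
  return out;
}
```

The checks are what distinguish this from vibe coding: the implementation can be regenerated freely, but it is only accepted once it satisfies constraints the engineer chose deliberately.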
Future of toolkit systems and SaaS
From this shift, a second order effect emerges. This is where my personal opinion becomes more speculative. I believe SaaS platforms and modular toolkit systems will lose relevance over time. These systems exist to abstract complexity for users with limited technical depth by packaging reusable functionality behind configurable interfaces. LLMs generalize this abstraction layer. Instead of selecting predefined modules, users can describe the system they want and have it generated.
In my opinion, many existing SaaS offerings will lose their value proposition. If a user can generate a tailored solution in a few hours using a general-purpose model, the incentive to pay recurring fees for rigid functionality decreases.
A simple example is this blog. It is built using Eleventy for static generation and a minimal SQL-backed comment system. Everything else is raw HTML, CSS, and JavaScript. No external UI frameworks, no third-party comment providers, and no platform dependencies. The entire system, from design concept to deployed version, took around three hours.
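To give a sense of how small such a comment system can be, here is a sketch of its core shape. The real system described above is SQL-backed; this version uses an in-memory array purely for illustration, and every name (`addComment`, `listComments`, `escapeHtml`) is hypothetical rather than the blog's actual API.

```javascript
// Minimal comment-system sketch (in-memory stand-in for a SQL table).
const comments = [];

// Escape user input before it is ever rendered into raw HTML,
// since the site uses no framework that would do this automatically.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function addComment(postSlug, author, body) {
  const comment = {
    postSlug,
    author: escapeHtml(author.trim()),
    body: escapeHtml(body.trim()),
    createdAt: new Date().toISOString(),
  };
  comments.push(comment);
  return comment;
}

function listComments(postSlug) {
  return comments.filter((c) => c.postSlug === postSlug);
}
```

In a SQL-backed version, `addComment` becomes a parameterized INSERT and `listComments` a SELECT filtered by slug; the surface area stays this small, which is what makes the three-hour build time plausible.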
Given that, I think it is reasonable to ask why someone would pay for a hosted comment system or a customizable website builder when the cost of generating and deploying such a system is close to zero. This is again my opinion, not a fact, but it seems like a logical direction based on current trends.
Conclusion
As LLM capabilities improve, I expect this pattern to extend to more complex domains. What is feasible today for simple web systems could become feasible for internal tools, data pipelines, and more advanced applications.
For engineers and researchers, this shift is mostly positive. It reduces time spent on necessary but uninteresting implementation details and increases the time available for ideation, system design, and conceptual work. The focus moves away from syntax and toward semantics.
In my view, the role is not being replaced. It is being compressed and elevated at the same time. The execution layer becomes cheaper and faster while the importance of correct abstraction, problem framing, and architectural decisions increases.
If this trajectory continues, the key skill will not be writing code line by line. It will be the ability to precisely specify systems: knowing what should be built, why it should be built that way, and how to verify that it behaves as intended.
That is the layer LLMs do not remove. They make it more visible.