How do you envision the role of AI in software development evolving in the future?
Last Updated: 29.06.2025 00:47

In the past 3 years there have been 3 pivotal moments:
the introduction of code-completion tools (GitHub Copilot etc.), which liberate devs from memorizing precise syntax,
conversational LLM agents (ChatGPT, Claude etc.), which can accelerate research, simulate brainstorming and perform small technical tasks,
agent-centric IDEs (Cursor, Windsurf, Claude Code…), which empower agents to reason over an entire codebase and thus provide more actionable answers and perform more useful tasks.
We are entering a new phase of uncertainty. In the late 2010s/early 2020s (the “pre-Copilot era”) the developer experience was consolidating around fewer, widely adopted tools. Now the market for these tools is fragmented again. We’re still waiting to see how the dust is going to settle, IMO.
The trends I expect to continue are:
Developers will spend less time typing code and more time thinking about code, i.e. describing their projects and discussing what they want to achieve with an agent, which requires reasoning about and formalizing what they want to accomplish.
I think that “vibe coding”, i.e. giving a brief description of what you want to achieve and getting fully functional code as a result, is going to have very limited impact. It works, yes, but only in very specific cases; it doesn’t scale well, and the savings it creates are not worth the trouble in the general case.
In the “pre-Copilot era” there was a general push towards code quality, in the sense that developers were nudged into writing code that was easier for their fellow developers to maintain. Code quality is going to evolve into: code that AI agents find easy to work with. Those two things are not incompatible, but it means things like more comments and more tests (see the sketch at the end of this answer).
A larger part of the code in codebases is going to be generated. This doesn’t mean that a large portion of the tasks once handled by humans can be entirely delegated to AI, but rather that, in a typical commit, an increasingly large proportion of the lines of code changed will be written automatically.
Developers will spend more time on quality assurance, both upstream and downstream: thinking about how a piece of code should integrate into the larger whole, what the signals are that it’s broken, and what logs, testing, monitoring and alerting to put in place.
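To make the code-quality and quality-assurance points a bit more concrete, here is a minimal sketch (in Python, with made-up names and values) of what “code that agents find easy to work with”, plus some basic instrumentation, might look like; it is an illustration of the idea, not a prescription:

```python
import logging

logger = logging.getLogger(__name__)

def apply_discount(price: float, discount_pct: float) -> float:
    """Return `price` reduced by `discount_pct` percent.

    Raises ValueError if the discount is outside the 0-100 range,
    so a bad upstream value fails loudly instead of silently
    producing a negative price.
    """
    if not 0 <= discount_pct <= 100:
        # Log before raising: this is the kind of signal that tells you
        # (or an on-call developer, or an agent) that something is broken.
        logger.warning("rejected discount_pct=%s for price=%s", discount_pct, price)
        raise ValueError(f"discount_pct must be between 0 and 100, got {discount_pct}")
    return price * (1 - discount_pct / 100)

# A small test doubles as documentation that an agent can read and extend.
def test_apply_discount() -> None:
    assert apply_discount(100.0, 25.0) == 75.0
    try:
        apply_discount(100.0, 150.0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for an out-of-range discount")
```

Nothing here is AI-specific: explicit types, docstrings, tests and logging are exactly the things that make a codebase legible to agents as well as to humans.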