A lot has been written about AI, its future, and the future of jobs and of humanity itself. I follow many of the people writing about it, such as Andrej Karpathy and Simon Willison, read books like Empire of AI, and regularly check Hacker News. The field is evolving so rapidly that it is difficult to know where we are now, let alone predict the future. Some, like Ray Kurzweil, predict extraordinary changes, while others predict doom. Andrej Karpathy keeps things grounded with the example of self-driving cars: they were already working in 2015, yet they are still not commonplace on our roads. The iPhone, likewise, was introduced in 2007 but did not take over our lives overnight. The ecosystem takes a long time to develop.
As a software developer in the healthcare field, I understand another aspect of that ecosystem. In certain fields, it is not sufficient to be 99% good, or even 99.99% good. If the machine you are building causes a critical accident once in every 10,000 hospital operations, that is not good enough. The same holds for aviation, automobiles, and so on. Even when a machine achieves the required accuracy, it is difficult to convince regulatory bodies of it. Add a layer of unpredictability like AI to that stack, and it becomes next to impossible.
So, where does it go from here? Having worked in multiple domains during my career, such as automotive, consumer electronics, and healthcare, I know there are fields where everything need not be 99.999% perfect; 99% is fine. Others, like healthcare, are less forgiving. Even within healthcare, there are functions that can assist a doctor without being 100% perfect. Radiology is one such area, where AI assistance frees the radiologist to work on more complex cases. The radiologist always has the last word and hence stays in control. In such fields, I think a human will always be in control.
Although the news sometimes scares me, when I think deeply about the above, I feel more optimistic. Most technologies have both positive and negative effects: Google Maps and Google Translate made life so much easier, but they have almost destroyed my sense of direction and my motivation to learn new languages, respectively. I think the same holds for coding, too. It will evolve, and many people will need to change how they approach software or be left behind. But for those willing to change, it holds promise.
Twenty years ago, when I graduated as an electronics engineer, I had studied exactly how a computer works. Doping silicon with boron and phosphorus to create p-type and n-type regions, assembling the resulting transistors cleverly into circuits that implement AND, OR, NOT, XOR and other logic, and using billions of these together to make a microcontroller, a processor, memory, and so on. Then controlling the switching on and off of these invisible parts by driving the device's inputs with voltage. Next, rather than talking in 0s and 1s, creating a language of hexadecimal values to send commands to the device. Next, an assembly language where you can say MOV AX, 0x1234 to move some data into a particular register of the processor. Next, a compiler, so you can simply write int var = 0x1234; the compiler converts that into assembly, which an assembler then turns into binary. And we now have ever more powerful languages such as C++, Python, and Rust that further abstract the way humans speak to computers.
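To make the layering concrete, here is a minimal sketch in C of that same assignment viewed from two levels at once. The assembly in the comment is illustrative only, since the exact instructions a compiler emits depend on the architecture and optimization level.

```c
#include <stdio.h>

int main(void) {
    /* At the C level: store a value in a variable. */
    int var = 0x1234;

    /* A compiler might lower the line above to something like
     * (x86-64, unoptimized, illustrative):
     *
     *     mov DWORD PTR [rbp-4], 0x1234
     *
     * The assembler turns that mnemonic into a few bytes of machine
     * code, and executing those bytes ultimately switches transistors
     * built from doped silicon.
     */
    printf("var = 0x%x\n", var);
    return 0;
}
```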
Each new abstraction layer shifted our attention upward: from electrons to logic, from logic to language, from language to intent. AI is just the next layer: we describe intent, and the machine fills in the structure. But each new layer also distances us a little further from the physical world, making trust and verification the new frontiers of engineering. Even today, the advanced programmer knows what a stack overflow means in hardware terms, yet still uses modern tools to detect and prevent one. Extrapolate this, and you can see that AI, and the stack that will be built around it in the next few years, is simply the next level of tooling. The keen engineer will still know what is happening underneath, but there is no need to stay stuck with old tools forever.
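As one concrete instance of such tooling, here is a minimal sketch of a classic stack buffer overflow; the function and the string are made up for illustration. Building it with GCC or Clang's -fsanitize=address flag makes AddressSanitizer abort at the bad write with a precise diagnostic, so nobody has to reason about the stack layout by hand.

```c
#include <string.h>

/* Illustrative helper: copies a caller-supplied name into a small
 * stack buffer without checking its length. */
static void greet(const char *name) {
    char buf[8];
    strcpy(buf, name);  /* writes past buf whenever name exceeds 7 chars */
}

int main(void) {
    /* This 20-character string overflows the 8-byte buffer above.
     * Compiled with: cc -fsanitize=address overflow.c
     * AddressSanitizer reports a stack-buffer-overflow at the strcpy. */
    greet("a-very-long-username");
    return 0;
}
```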
In 2022, I started using GitHub Copilot. Until then, software development had always involved domain learning, language learning, endless Stack Overflow tabs, and a lot of trial and error while solving bugs. At first, Copilot felt like an intelligent autocomplete: Python was smooth, C++ not so much. And it had a context window of 4,000 tokens, which was too little for most things. But I was already using Stack Overflow a lot less, and bug fixing became a bit easier.
Slowly, the context window grew and more advanced models were released, and for those who believed in the technology, the productivity gains were immense. It was also a lot more fun. Every day I would discover new ways to code, new coding patterns, and new ways of extracting more from the model, almost as though I were watching a junior developer grow into their role. As always, there was a negative side: I committed less and less to memory. When I know how to rediscover something, why memorize it? Just as with Google Maps, I no longer need to know which highways and interchanges take me to Frankfurt.
2025 is when agents appeared. I let the agent see my complete workspace, guide it on the design I need, and ask it to code in a certain pattern. Within a few seconds to a few minutes, I have the implementation ready. I now think more in terms of design and review rather than getting stuck in the implementation. I will write more detailed posts on this in the coming days, but agentic coding is really changing how I think about code.
In the near future, software engineers will spend less time writing syntax and more time defining intent, structure, and verification criteria. Our job won't vanish; it will evolve into a blend of architecture, ethics, and orchestration. That's not the end of engineering; it's engineering growing up. I think software developers will become designers equipped with advanced AI tools, while pure coders will matter less. As AI becomes another layer in the software abstraction stack, the challenge shifts from understanding how to code to understanding how to trust what's coded. In safety-critical systems, that means rethinking verification, not just automation. If three years can bring so much to software, I am certain the next five will redefine it completely.