I don't agree with the premise of this article, for several reasons. First, generative AIs suffer from many of the same issues as humans: they produce flawed results and struggle with accuracy. You can see it in AI-generated images, where the faces of subjects sometimes look smudged or distorted, and in AI-generated code, which is sometimes missing a closing brace or doesn't quite do what was intended.
Even in the best case, assuming that an AI could write code exactly as asked, we would still face the same communication issues that have plagued the software industry since its inception. An AI could not implement an optimal solution to a problem unless it understood the business domain within which it operates, because there is a significant gap between what people say they want and what they actually want. A large part of a software engineer's role is to use their understanding of the business domain to fill in gaps and adjust for flaws in communication, so that they can implement optimal solutions and focus on the most important problems.
An AI capable of solving the technical communication problem would also be capable of solving any business or strategic problem; such an AI would make entrepreneurs and businesspeople redundant long before it made developers redundant. Figuring out an optimal strategy at a high level is a lot easier than figuring one out at the low, technical level. At the code level, every single detail matters, and in a competitive business environment, finding optimal solutions at that level is extremely challenging.