The same reasoning in civil engineering would translate to saying that you only need a shovel and a mason's trowel to build anything. Yes, you can: the "competitive features" like improved insulation or simply fashionable architecture don't depend on how you put the parts in place. But more powerful tools let you do more in limited time.
Having said that, building those tools is complicated enough that it takes too much time and generally isn't done, whether the tools in question are languages, debuggers, or anything else. You correctly, if obliquely, refer to a well-identified problem: insufficient tooling, even in languages generally considered expressive enough not to require additional syntax or semantics.

Interestingly, this is independent of whether you work in a more specialized language that has the concepts in question embedded in its core primitives, or in a less specialized language that requires you to first build those higher-level concepts (of interest to you) out of its lower-level constructs. The problem even appears to be of roughly the same scope whether or not you're building a new language (in the traditional lexical-syntactic sense, not just in terms of new APIs). The "standard approach" you mention involves non-specific analysis and debugging tools that are all too often of only marginal benefit, because they don't provide the views a large system might require to be more easily comprehended or modified, by a newcomer, for example.

Building improved tools unfortunately requires some kind of model, whether that model is explicit in the form of another language or merely implicit in the code of the tools and the usage patterns of some API you're building. But the designers of programming environments can't possibly anticipate all the domains you might want to use their environment for, so even if you decide not to use or develop another language for an application, unfortunately not much changes: there are still tooling problems to be solved. You've merely shifted the burden from one kind of tool to another, and one could successfully argue that accessible techniques for building better tools (bringing them into the realm of what Eric Raymond calls "casual programming" in TAoUP) are highly desirable for overall productivity.
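To make the explicit-versus-implicit-model point concrete, here's a minimal Python sketch (the routing domain and all names are hypothetical, chosen only for illustration): the same concept captured once as a tiny parsed notation and once as patterns of API use. Either way, a better tool, here a trivial duplicate-route checker, needs access to that model:

    # 1. Explicit model: a tiny notation for routing rules, parsed into a
    #    structure that tools (validators, visualizers) can inspect.
    def parse_rule(text):
        # "GET /users -> list_users" becomes a structured rule
        head, handler = text.split("->")
        method, path = head.split()
        return {"method": method, "path": path, "handler": handler.strip()}

    # 2. Implicit model: the same concepts encoded in API usage patterns.
    routes = []
    def route(method, path):
        def register(fn):
            routes.append({"method": method, "path": path, "handler": fn.__name__})
            return fn
        return register

    @route("GET", "/users")
    def list_users():
        return []

    # Either form feeds the same trivial "tool"; merging the two
    # representations below deliberately creates a duplicate to flag.
    def check_conflicts(rules):
        seen = set()
        for r in rules:
            key = (r["method"], r["path"])
            if key in seen:
                print("duplicate route:", key)
            seen.add(key)

    check_conflicts([parse_rule("GET /users -> list_users")] + routes)

The point is only that the model has to exist somewhere, in a grammar or in API conventions, before any improved tool can be built on top of it.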
there is nothing done in new languages that can't be done in others
Then why aren't we programming everything in assembly? (That said, I agree the greatest value today may not lie in developing new completely general languages, but rather in finding ways to formulate ideas in specific application domains more concisely while keeping them readable, so that you can achieve more complex but beneficial things using less brain time. That includes, circularly, applying the same idea to the development of such languages themselves, so that the process actually makes sense to do in practice and doesn't require getting a PhD in computer science. The world indeed doesn't need another rehash of C++ or Lisp.)
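As a toy illustration of "more concise in a specific domain while keeping readability" (Python again, everything hypothetical and deliberately tiny), compare spelling out domain rules as control flow with lifting them into a declarative table plus a one-line interpreter:

    # Toy state machine for a turnstile, written directly as control flow:
    def step_verbose(state, event):
        if state == "locked" and event == "coin":
            return "unlocked"
        if state == "unlocked" and event == "push":
            return "locked"
        return state

    # The same domain knowledge as data, with one small "interpreter".
    # The table now reads almost like a specification:
    TRANSITIONS = {
        ("locked", "coin"): "unlocked",
        ("unlocked", "push"): "locked",
    }

    def step(state, event):
        return TRANSITIONS.get((state, event), state)

    assert step("locked", "coin") == step_verbose("locked", "coin") == "unlocked"

Nothing here needs a new lexer or a PhD; it's the kind of "casual" domain-specific layering an existing language can already support.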
"This was a computer of the 'last' generation--last, because no other could have greater calculating power. Limits were imposed by such properties of matter as Planck's constant and the speed of light. Greater calculating ability could be achieved only by the so-called imaginary computers, designed by theorists engaged in pure mathematics and not dependent on the real world. The constructors' dilemma arose from the necessity of satisfying mutually exclusive conditions to pack the most neurons into the smallest volume. The travel time of the signals could not be longer than the reaction time of the components; otherwise, the time taken by the signals would limit the speed of calculation. The newest relays responded in one-hundred-billionth of a second. They were the size of atoms, so that an actual computer had a diameter of barely three centimeters. A computer any larger would be slower. The Hermes' computer did indeed take up half the control room, but that was for its peripherals: decoders, hierarchic assemblers, and so-called hypothesis generators, which, with the linguistic modules, did not operate in real time. But decisions in critical situations, in extremis, were made by the lightning-swift core, which was no bigger than a pigeon's egg."
niche and egghead languages aren't how the world at large does things.
The fallacy of appeal to popularity (argumentum ad populum) applies to programming languages as well. Not to mention that many people who are not doing things "how the world at large does [it]" aren't going to talk about it much if they find themselves in a competitive environment. Why give your competition ideas? The probability of your competition "getting it" is often very low, but it's still non-zero.
the incompetent buffoons who
...identified an attack, identified the people behind it, and took out a competitor.
Given that perfect security is effectively impossible, I'll take that booby prize.
The clash of ideas is the sound of freedom.