My argument in this case is that, to produce an intelligence explosion, the machine would have to be smarter not than a single person, but than an entire group of people combined, all with different expertise and internal creative processes. That's a very different conclusion.
But it's the wrong conclusion. This is a mistake (IMO) that I see a lot of people make when talking about machine intelligence in comparison to humans.
The thing is, these machines that are smarter than humans are still machines. They are not humans. When you say a machine is "as smart as a human", people seem to automatically assume that means it's as fallible as a human too. That's not necessarily going to be the case, and I would argue it likely will not be. It's not necessary, nor is it probably the easiest route, to reach a human level of intelligence by designing a machine brain exactly like a human's. More likely, we'll design it in some brute-force digital way so that it is computationally as powerful as a human brain or more, but has neither some of our capacity for creative thought nor any of our problems with memory or senses or whatever. (The former is not guaranteed, though; creative thought and intelligence are linked, so a "smart" machine may be just as creative as we are - especially one pre-programmed with a set of overriding directives, as it no doubt would be, because otherwise what's the point?)
So yes, a single machine "smarter" than a human could have its own designs in memory and could probably fairly easily teach itself how to build a copy of itself simply by studying how to do it. It would never forget anything until it ran out of memory. It would never be distracted by thoughts of love or sex, or by being too tired or bored. It would presumably be built with the precision and dexterity of a robot, so that's not an issue. And it wouldn't need to worry about gaining experience, as a human does, because its limbs will just do whatever its "brain" tells them to. It could also pretty easily build machines smarter than itself, either by just adding more computing power or by linking itself to the copies it produces.
Humans are usually not limited in what they can do by their intelligence, but by all of their fallibilities - not to mention a desire for leisure time. We don't want to be working all the time, and we want to do what we love to do, even if it means we can only build one part of a robot instead of the whole thing. If your field is welding, maybe you have no interest in learning how to design memory chips. A robot or android is not going to have that "problem"; it will learn to do whatever it needs to do in order to build whatever it needs to build to satisfy whatever directives it's programmed with. And it'll do it without any leisure time, short of stopping to literally recharge its batteries.
*That* is what's really dangerous about all this, IMO. I don't even think robots need to be *as* smart as humans to cause us real problems. A robot with an IQ equivalent of about 50 (which is still far beyond where we are today) but a large amount of memory and good basic dexterity could probably replicate itself and then defend itself (with its buddies) if programmed with an innocuous directive like self-preservation. We are counting on the fact that our higher intelligence will protect us against dumber machines because we will be able to think more creatively and keep one step ahead, but all they really need to do is go to a library and get the right books to study, then hide out in the woods for a while building up a dumb but formidable army.
Even a single semi-intelligent machine programmed poorly could just waltz into a gun store, take a gun off the rack, and start shooting people. And it'd probably take an RPG to bring it down. We are already almost there - autonomous gun robots have already gone berserk and killed people. Someday these things are gonna be roaming the streets.
The "singularity" and runaway machine intelligence is not even necessary for things to start spiraling out of control. A *little* bit of intelligence combined with the law of unintended consequences is all that's necessary.