What you're looking at is called IPC (instructions per cycle). It depends entirely on the processor design, which has too many factors to list here. In the Pentium 4/Athlon 64 era, AMD's processors had higher IPC, yes, but they don't any more - the main exception being Intel's Atom versus AMD's C/E-Series (the Bobcat architecture), where an AMD C-50 at 1GHz will match a 1.6GHz dual-core Atom, and the 1.6GHz AMD E-350 will smash them both. What processors are you looking at, specifically?
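To put rough numbers on it, per-core performance is approximately IPC × clock speed. Here's a minimal Python sketch of that first-order model - the IPC figures below are purely illustrative guesses (not measured values) chosen to show how the C-50/Atom match-up can happen:

```python
# Rough first-order model: relative throughput ~ IPC * clock (GHz).
# The IPC figures are illustrative assumptions, NOT measured values.
def relative_throughput(ipc, clock_ghz):
    """Approximate instructions retired per nanosecond, per core."""
    return ipc * clock_ghz

atom = relative_throughput(ipc=0.5, clock_ghz=1.6)  # hypothetical Atom IPC
c50 = relative_throughput(ipc=0.8, clock_ghz=1.0)   # hypothetical C-50 IPC

print(atom, c50)  # 0.8 vs 0.8 - even, despite the 600MHz clock deficit
```

The takeaway: a clock speed on its own tells you nothing until you know the design's IPC.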
At the moment, Intel's Core i series has a MUCH higher IPC than AMD's Bulldozer/FX series. That doesn't automatically make Intel's chips better, but because the two have similar clock speeds at the high end (on the desktop, e.g. the i7-3770K vs the FX-8150), the Intel chips are faster in all but the most heavily multi-threaded applications, such as video conversion (ignoring Quick Sync).
You can still get decent performance from a low-IPC chip, such as the Pentium 4 or AMD FX, by running it at a very high clock speed (e.g. Pentium 4s got up to 3.8GHz and the AMD FX-4170 runs at 4.2GHz), but this comes at the cost of a lot of heat and power (sketched below) - fine in a desktop (reduced overclocking headroom aside), but not in a laptop, which is why laptops generally run at low clock speeds. The problem for AMD is that Intel's i7-2700K and 3770K both run at 3.5GHz while maintaining incredibly high IPC.

But you pay more for the CPU and motherboard on the Intel platforms, so there's a lot to take into account. Plus, 95% of people couldn't tell the difference between two differently-performing chips if their life depended on it.

Then there's graphics. AMD's A-Series integrated graphics are dramatically faster than anything Intel can offer, but higher-end systems usually use dedicated graphics cards, which makes that irrelevant.
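On that heat-and-power point: a common rule of thumb (an illustrative assumption here, not a spec for any of these chips) is that dynamic power scales with voltage squared times frequency, and higher clocks usually need more voltage, so power grows much faster than clock speed:

```python
# Rule-of-thumb CMOS dynamic power: P ~ C * V^2 * f.
# All values (capacitance factor, voltages, clocks) are made up for illustration.
def dynamic_power(cap, volts, freq_ghz):
    return cap * volts**2 * freq_ghz

stock = dynamic_power(cap=1.0, volts=1.2, freq_ghz=3.4)  # hypothetical stock point
high = dynamic_power(cap=1.0, volts=1.4, freq_ghz=4.2)   # higher clock, more voltage

print(f"{high / stock:.2f}x the power")  # ~1.68x the power for only ~1.24x the clock
```

That's why the high-clock approach is tolerable in a desktop but a non-starter in a laptop.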
tl;dr - depends on the chip design. Just compare what you're looking at directly in benchmarks of the applications that matter to you. Hope I didn't add more confusion to the mix.