Um, no. They have much _lower_ failure rates since there are no moving parts. Typical mean time between failures (MTBF) is usually 1 million hours or more.
Ahh, statistical sampling for reliability... as a former FPGA guy, one of my favourite topics.
MTBF is a terribly inadequate metric for SSDs *specifically* because they have no moving parts and their technology isn't repairable. Technically, SSDs don't even have an MTBF; they have an MTTF. That's because failures in an SSD generally can't be repaired: they're catastrophic and require the drive to be replaced completely. Remember: MTBF = MTTF + mean time to repair (MTTR).
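To make the distinction concrete, here's a toy calculation. All figures are made up for illustration, not vendor specs:

```python
# Toy illustration of the MTBF / MTTF / MTTR relationship.
# All figures below are made-up examples, not vendor specs.

mttf_hours = 1_500_000   # mean time to (first) failure
mttr_hours = 8           # mean time to repair, for a repairable device

# For a repairable device (e.g. a mechanical drive whose controller
# can work around a bad sector), MTBF spans failure *and* repair:
mtbf_hours = mttf_hours + mttr_hours

# For an SSD there is no meaningful repair -- the failed unit gets
# discarded -- so the only honest figure is the MTTF itself.
print(f"MTBF = {mtbf_hours:,} h, MTTF = {mttf_hours:,} h")
```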
Also, MTBF/MTTF numbers don't indicate how long you can use a drive before you should expect it to fail. They indicate the relative risk of getting a defective unit from the manufacturer: it's how long their sample group lasted, on average, before an error was encountered. But it doesn't tell you much about the nature of the failure, and that's very important.
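A quick sanity check on what those hours actually mean for a single drive: a 1.5-million-hour MTTF obviously doesn't promise 171 years of service, it's a fleet statistic. Rough sketch, assuming a constant failure rate (exponential lifetime model) and an illustrative MTTF figure:

```python
import math

# Convert a vendor MTTF figure into an annualized failure rate (AFR),
# assuming a constant failure rate (exponential lifetime model).
# The MTTF value is illustrative, not from any specific datasheet.

mttf_hours = 1_500_000
hours_per_year = 8766            # average year, including leap days

afr = 1 - math.exp(-hours_per_year / mttf_hours)
print(f"AFR = {afr:.2%}")        # ~0.58% of a fleet fails per year
```

In other words, the number tells you roughly what fraction of a large population fails per year, not how long your particular drive will live.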
Right now failure numbers are more or less the same for consumer-level SSD and HD technology, usually in the 1.5 - 2.0 million hour range. The difference, and this is critical, is the MTBF vs. MTTF distinction. An error in a traditional mechanical drive is not uncommon, and the tech has advanced over the years to the point where, despite a failure, it can usually be worked around by the drive's controller. The failures aren't catastrophic; at least not right away. That's not the case for SSD drives. We don't yet have a good way to work around bad blocks in these devices that isn't expensive (read: that isn't found in consumer-grade SSD tech).
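What "worked around by the controller" means, in toy form: the firmware keeps a pool of spare sectors and transparently remaps failed ones. This is a made-up sketch, not real firmware logic:

```python
# Toy model of a mechanical drive's controller remapping a failing
# sector to a spare pool. Purely illustrative; real firmware is far
# more involved.

class ToyDriveController:
    def __init__(self, spare_sectors):
        self.remap = {}                  # bad LBA -> spare sector
        self.spares = list(spare_sectors)

    def read(self, lba):
        physical = self.remap.get(lba, lba)
        return f"data@{physical}"

    def mark_bad(self, lba):
        if not self.spares:
            raise RuntimeError("spare pool exhausted -- drive is dying")
        self.remap[lba] = self.spares.pop()   # transparent to the host

ctrl = ToyDriveController(spare_sectors=[9001, 9002])
ctrl.mark_bad(42)
print(ctrl.read(42))   # served from spare sector 9002
```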
Mechanical drives enjoy the slow death. This keeps their MTBF numbers low but their effective lifespan pretty high, because once failures start to occur you've usually got time to shut the drive off, replace it, and recover the data. You don't get that with SSDs. The numbers might look the same on paper, but the effect of a failure is dramatically different between the two technologies.
What they do suffer from is the limited number of rewrites per memory cell: you can only rewrite each block on the drive a finite number of times, typically 3,000-5,000. So, e.g., if you want to wear out a 256GB SSD, the conservative estimate is you will need to write 768 terabytes of data to it before it starts failing. The number of reads is effectively unlimited. I doubt one user in a million ever gets anywhere close to that number. It's worth pointing out that it _could_ be dangerous to run an SSD with a mostly-write workload when it doesn't have a lot of free space and is therefore forced to re-use the same blocks.
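The back-of-the-envelope arithmetic behind that 768TB figure, using the conservative 3,000-cycle end of the range (the 20 GB/day workload is my own assumption for scale):

```python
# Back-of-the-envelope SSD endurance check for the figures above.
# Cycle counts vary by flash type; 3,000 is the conservative end.

capacity_gb = 256
pe_cycles = 3_000                 # program/erase cycles per block

total_writes_tb = capacity_gb * pe_cycles / 1_000
print(f"{total_writes_tb:,.0f} TB of writes before wear-out")   # 768 TB

# At an assumed 20 GB of writes per day, that's roughly a century:
years = (total_writes_tb * 1_000) / 20 / 365
print(f"~{years:.0f} years at 20 GB/day")
```

Note this ignores write amplification, which eats into the budget when the drive is nearly full, hence the caveat about mostly-write workloads on low free space.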
This is one of those areas where people haven't had to think very hard for a very long time because spinning-platter disk tech has gotten so good. It's almost unreasonable to ask consumers to put this stuff into their heads in order to enjoy the benefits. The best current advice for them is: know that when SSDs fail they fail hard and you lose access to everything. So keep a backup. There's no click-of-death to warn you failure is coming. There's no reduced performance to tip you off. It's just *blink* and it's gone.