
Creative Destruction

More than a decade ago, Michael Kinsley, the journalist and former editor of this magazine, developed Parkinson's disease--a degenerative condition that impairs motor and speech control, producing tremors, rigidity, and eventually severe disability. While the standard regimen of medications helped, he knew that his symptoms were bound to get steadily worse with time. He needed something better--something innovative--before the disease really progressed. In 2006, he got it at the famed Cleveland Clinic in Ohio.

The treatment Mike received is called Deep Brain Stimulation, or DBS for short. It began with a physician--one of the world's top Parkinson's specialists--drilling two holes in his head, into which were implanted two thin electrodes made of titanium. The electrodes were attached to wires, which the physician threaded under the skin behind Mike's ear and down his neck into his upper chest, where they were connected to a pair of tiny battery-powered controllers implanted beneath the skin. After the surgery, the doctor activated the controllers using a remote device, unleashing a steady stream of small electrical pulses that ran across the wires, through the electrodes, and--finally--to the part of the brain that regulates movement. DBS doesn't cure Parkinson's, but it has been shown to control the symptoms for extended periods of time. And that's what happened for Mike (who is also, full disclosure, a friend).

DBS represents the cutting edge of Parkinson's treatment; the Food and Drug Administration approved it only ten years ago. It is also very costly. Medtronic, a company that makes the electrodes, says the whole procedure costs between $50,000 and $60,000. And, because the treatment's main effect is to suppress and delay the onset of symptoms, rather than cure the disease, Mike started wondering whether a system of universal health insurance would pay for it--and, if so, in which cases.

After all, in universal coverage systems, the government typically defines a minimum set of benefits--a list put together based on frank assessments of cost-effectiveness. (Even if the government achieves universal coverage through private plans rather than through a single-payer system, most insurers would likely end up offering something very close to that same set of benefits.) The government might decide that $50,000 or $60,000 is simply too much to spend on something that doesn't cure Parkinson's--or it might, at the least, limit the treatment to certain people, such as those in more advanced stages of the disease. Mike could always have paid for the procedure out of his own pocket. But most Americans couldn't. If the government decided the treatment wasn't cost-effective, he pointed out, many Americans would be forced to go without it--unless they could find a doctor and hospital willing to do it for free.

And that prompted another thought--not from Mike but from me. All of this was assuming DBS even existed. The United States is famously the world leader in medical innovation--in part, it would seem, because we spend like drunken sailors when it comes to medical care. Today, we devote 16 percent of our gross domestic product to health care, by far the largest proportion of any country in the world. (The highest-spending country in Europe, Switzerland, devotes just 12 percent.) That huge, largely uncontrolled spending translates into large profits for health care companies, offering an incentive for them to do research and development--the kind, presumably, that plays a significant role in breakthroughs like DBS. Universal health care would attempt to bring health care costs under control by, among other things, using the government's leverage to drive down prices of everything from medical services to drugs and devices. And, if the payoff for something like DBS weren't as big as it is now, who's to say a company would have bothered developing it in the first place?

As Mike himself acknowledged, none of this seals the case against universal health care. On the contrary, maybe the trade-offs between covering everybody and fostering innovative health care are inevitable--and perhaps innovation has to come second. Maybe what is good for some people with Parkinson's isn't necessarily in the best interests of the country as a whole. On the other hand, people with Parkinson's can contribute more to the economy (and society in general) if their symptoms subside. They might also need less ongoing care, which could actually save money. Besides, true innovation ultimately benefits everybody by pushing the boundaries of the medically possible. Can we really count on a universal coverage system to weigh all of that? In other words, can we really be sure that universal health care won't come at the expense of innovative medicine?

It's a valid set of questions, which is more than you can say for most of the arguments against universal health care circulating these days. If you've listened to Rudy Giuliani or any of the other Republican presidential candidates lately, then you've probably heard them claim that creating universal health care would necessarily lead to inferior treatments, particularly for deadly diseases like cancer. But that just isn't so. While the United States is a world leader in cancer care, other countries, such as France, Sweden, and Switzerland, boast overall survival rates that are nearly comparable. For some cancers--such as cervical cancer, non-Hodgkin lymphoma, and two common forms of leukemia--the U.S. survival rate, although good, lags behind those of at least some other countries. You may also have heard critics complain that universal health care inevitably leads to long lines for treatments, as it sometimes has in Britain and Canada. Again, the facts just don't back that up. According to the Organization for Economic Cooperation and Development, France and Germany don't have chronic waiting lists. Access to care in those countries turns out to be as easy as, if not easier than, in the United States, where even people with good private insurance must sometimes wait to see a specialist or go through managed-care gatekeepers to get tests and treatments recommended by their physicians. As National Review's Ramesh Ponnuru recently acknowledged, in a refreshing burst of candor, "[T]he best national health-insurance programs do not bear out the horror stories that conservatives like to tell about them."

But one argument against universal health insurance isn't so easy to dismiss: the argument about innovation and the cutting edge of medical care. It goes more or less along the lines of my conversation with Mike Kinsley: In a universal coverage system, the government would seek to limit spending by forcing down payments to doctors and pharmaceutical companies, while scrutinizing treatments for cost-effectiveness. This, in turn, would lead to both less innovation and less access to the innovation that already exists. And the public would end up losing out, because, as Tyler Cowen wrote last year in The New York Times, "the American health care system, high expenditures and all, is driving innovation for the entire world."

Cowen, a George Mason University economist, is a self-described libertarian. But it's not just libertarians, or even just conservatives, who say such things. Liberals have been known to voice similar concerns, albeit more carefully. Notable among them is David Cutler, a highly respected Harvard economist, whose book Your Money or Your Life makes a powerful argument that spending a lot of money on health care is frequently worth it--specifically, that investments in areas like neonatal and cardiovascular care have produced longer and healthier lives, more than justifying their exorbitant price tags. And, while Cutler's work on this subject remains somewhat iconoclastic, most economists would concede that it's possible a universal system could stifle innovation by pushing too hard on prices or applying the wrong kind of scrutiny to medical treatments.

But it's one thing to say that universal coverage could lead to less innovation or reduce the availability of high-tech care. It is quite another to say that it will do those things, which is the claim that opponents frequently make. That argument requires several leaps of logic, many of them highly suspect. The forces that produce innovation in medicine turn out to be a great deal more complicated than critics of universal coverage seem to grasp. Ultimately, whether innovation would continue to thrive under universal health care depends entirely on what kind of system we create and how well we run it. In fact, it's quite possible that universal coverage could lead to better innovation.

The story of Deep Brain Stimulation actually holds some important lessons about how innovation frequently takes place--and why it's not all that dependent on a non-universal, private health care system like the one we have in the United States. For one thing, it turns out that DBS isn't exactly an American innovation. If anybody deserves credit for developing it, it's the French--and one French doctor in particular.

That doctor's name is Alim-Louis Benabid. A recently retired neurosurgeon who did his work at the University of Grenoble, near the French Alps, Benabid spent the early part of his career treating Parkinson's patients with what was, at the time, the standard regimen: first, medication; then, when the medication stopped working, surgery. The surgery involved making lesions in the brain--that is, deliberately damaging or destroying small areas of tissue--in the hope of knocking out the part that was causing the tremors and disability. This procedure sometimes alleviated symptoms, but it was also a clumsy, irreversible move with the potential for severe side-effects. (It was easy to damage the wrong part of the brain.) That's why it was reserved for patients with the worst symptoms--those for whom medication had either stopped working or never worked at all.

The key challenge in surgery was always figuring out where, exactly, to perform the lesions. To do that, surgeons would begin by applying small electrical charges to different parts of the brain--then observe which part of the body reacted. (Patients were kept under local anesthesia only, so their bodies could respond to the stimuli.) Benabid was doing that to a patient one day in 1985 when serendipity struck: One of the shocks suddenly caused a tremor to stop altogether. As he later explained in an interview with Technology Review, Benabid at first thought he had hurt the patient and apologized. But the patient said, "No, no, it was nice." So Benabid tried again--and, once again, the charge stopped the tremor. "My first thought was, I was relieved it wasn't a complication. The concomitant thought was, 'That's interesting!'"

Benabid theorized that applying a charge on a constant basis might suppress symptoms for long periods of time. And a prototype of hardware for doing that already existed: Years before, neurosurgeons had begun using small implanted electrodes to treat severe chronic pain, such as the kind that often followed a stroke. Benabid began experimenting in 1987 with the use of electrodes in Parkinson's patients and, in 1996, published what is now considered the seminal paper demonstrating that DBS can work.

The development of DBS was one part basic knowledge--an understanding of how Parkinson's works and how the brain responds to electrical stimulation--and one part sheer luck. Profits, on the other hand, had relatively little to do with it. According to Robert Gross, an Emory University neurosurgeon and expert in the field, Benabid had actually approached the companies that already made electrodes for use in treating chronic pain, suggesting they develop a device specifically for Parkinson's. But they declined initially, so Benabid had to use the existing devices and adapt them on his own. "The companies did not lead those advances," Gross says. "They followed them."

In this sense, DBS offers an important window into the way medical innovation actually happens. The great breakthroughs in the history of medicine, from the development of the polio vaccine to the identification of cancer-killing agents, did not take place because a for-profit company saw an opportunity and invested heavily in research. They happened because of scientists toiling in academic settings. "The nice thing about people like me in universities is that the great majority are not motivated by profit," says Cynthia Kenyon, a renowned cancer researcher at the University of California at San Francisco. "If we were, we wouldn't be here." And, while the United States may be the world leader in this sort of research, that's probably not--as critics of universal coverage frequently claim--because of our private insurance system. If anything, it's because of the federal government.

The single biggest source of medical research funding, not just in the United States but in the entire world, is the National Institutes of Health (NIH): Last year, it spent more than $28 billion on research, accounting for about one-third of the total dollars spent on medical research and development in this country (and half the money spent at universities). The majority of that money pays for the kind of basic research that might someday unlock cures for killer diseases like Alzheimer's, AIDS, and cancer. No other country has an institution that matches the NIH in scale. And that is probably the primary explanation for why so many of the intellectual breakthroughs in medical science happen here.

There's no reason why this has to change under universal health insurance. NIH has its own independent funding stream. And, during the late 1990s, thanks to bipartisan agreement between President Clinton and the Republican Congress, its funding actually increased substantially--giving a tremendous boost to research. With or without universal coverage, subsequent presidents and Congress could ramp up funding again--although, if they did so, they would be breaking with the present course. It so happens that, starting in 2003, President Bush and his congressional allies let NIH funding stagnate, even though the cost of medical research (like the cost of medicine overall) was increasing faster than inflation. The reason? They needed room in the budget for other priorities, like tax cuts for the wealthy. In this sense, the greatest threat to future medical breakthroughs may not be universal health care but the people who are trying so hard to fight it.

So is that the end of the story? No. Somebody still has to turn scientific knowledge into practical treatments. Somebody has to apply the understanding of how, say, a cancer cell reacts in the presence of a chemical in order to produce an actual cancer drug. It's a laborious, frustrating, and risky process--one for which, traditionally, the private sector has taken primary responsibility. And, yet, that doesn't mean the private sector always performs this function particularly well. Unlike the NIH, whose support for medical research seems to represent a virtually unambiguous good, the private sector's efforts to translate science into medicine are much more of a mixed bag.

As books like Marcia Angell's The Truth About the Drug Companies and Merrill Goozner's The $800 Million Pill point out, a lot of the alleged innovation we get from private industry just isn't all that innovative. Rather than concentrating on developing true breakthroughs, for the last decade or so the pharmaceutical industry has poured the lion's share of its efforts into a parade of "me-too" drugs--close replicas of existing treatments that offer little in the way of new therapeutic advantages but generate enormous profits, because they are patented and because companies have become exceedingly good at promoting their sales directly to consumers.

The best-known example of this is Nexium, which AstraZeneca introduced several years ago as the successor to Prilosec, its wildly successful drug for treating acid reflux. AstraZeneca promoted Nexium heavily through advertising--you may remember the ads for the new "purple pill"--and, as a result, millions of patients went to their doctors asking for it. Trouble was, the evidence suggested that Nexium's results were not much better than Prilosec's--if, indeed, they were better at all. And, since Prilosec was going off patent, competition from generic copies was about to make it a much cheaper alternative. (The fact that Prilosec's price was about to plummet, needless to say, is precisely why AstraZeneca was so eager to roll out a new, patented drug for which it could charge a great deal more money.)

The Nexium story highlights yet another problem with the private sector's approach to innovation. Because the financial incentives reward new treatments--the kind that can win patents--drug- and device-makers generally show little interest in treatments that involve existing products. Yet sometimes finding a new way to use an old remedy is the best way to innovate. As Goozner notes in his book, even as Prilosec and its competitors (like Tagamet) were flying off the drugstore shelves, academic scientists were arguing that it made more sense to treat some patients with a regimen of older drugs--antibiotics--that could actually cure ulcers rather than merely suppress their symptoms. But no drug company was going to make a fortune repackaging old antibiotics. So the industry, having already invested heavily in products like Nexium, basically ignored this possibility.

Just to be clear, this doesn't mean that private industry plays no constructive role in medical innovation. Computed Tomography (CT)--which a survey of internal medicine doctors recently ranked the top medical innovation in recent history--owes its existence to basic scientific discoveries about physics. But it's the steady involvement of companies like General Electric, which have poured untold sums into research and development of CT scanners, that produced the technology we have today--and will produce even better technology tomorrow.

Yet even this story has a downside, as Shannon Brownlee chronicles in her new book, Overtreated. It's the potential to sell many more such devices, at a very high cost, that has enticed companies like GE to invest so much money in them. In fact, compared to the rest of the developed world, the United States has a relatively high number of CT machines (although Japan has more). But experts have been warning for years of CT overuse, with physicians ordering up scans when old-fashioned examinations would do just fine. (Some experts even worry that over-reliance on scans may be leading to atrophied general exam skills among physicians.) Studies have shown that the mere presence of more CT scanners in a community tends to encourage more use of them--in part because the machine owners need to justify the cost of having invested in them. The more CT devices we buy, the less money we have for other kinds of medical care--including ones that would offer a lot more bang for the buck.

And don't forget one other thing: At least performing too many CT scans doesn't usually injure anybody. The same can't be said for the overuse of other medical interventions, many of which carry serious side-effects.

The ideal would be to come up with some way of achieving the best of both worlds--paying for innovation when it yields actual benefits, but without neglecting less glitzy, potentially more beneficial forms of health care. And that is precisely what the leading proposals for universal health care seek to do. All of them would establish independent advisory boards, staffed by leading medical experts, to help decide whether proposed new treatments actually provide clinical value. The fact that Barack Obama's plan includes such a provision is particularly telling, since one of the plan's architects is David Cutler--the economist constantly promoting the value of innovation.

Of course, the idea of involving the government in these decisions is anathema to many conservatives--since, they argue, the private sector is bound to make better decisions than a bunch of bureaucrats in Washington. But, while that's frequently true elsewhere in the economy, health care may be an exception. One feature of the U.S. insurance system is its relentless focus on the short term. Private insurers have little incentive to pay for interventions that don't yield immediate benefits, because they are gaining and losing members all the time. As a result, money invested in patient health may very well end up helping a competitor's bottom line. What's more, the for-profit insurance industry--like the pharmaceutical and device industries--answers to Wall Street, which cares more about quarterly earnings than long-term financial health. So there's relatively little incentive to spend money on the kinds of innovations that yield long-term, diffuse benefits--such as the creation of a better information infrastructure that would help both doctors and consumers judge what treatments are necessary when.

The government, by contrast, has plenty of incentive to prioritize these sorts of investments. And, in more centralized systems, it can do just that. Several European countries are way ahead of us when it comes to establishing electronic medical records. When fully implemented, these systems will allow any doctor, nurse, or hospital seeing a patient for the first time to discover instantly what drugs that person has taken. It's the single easiest way to prevent medication errors--a true innovation. Thousands of Americans die because of such errors every year, yet the private sector has neither the will nor, really, the way to fix this problem.

Another virtue of more centralized health care is its ability to generate savings by reducing administrative waste. A universal coverage system that significantly streamlined billing (either by creating one common claims form or by simply replacing basic insurance with a single, Medicare-like program) and cut down on the need for so many insurance middlemen would leave more resources for actual medical care--and real medical innovation.

None of which is to say a universal coverage system couldn't have a chilling effect on innovation while severely pinching access to medical care that is expensive but, arguably, worth it. All it would take is a system with both a rigid budget and very low funding. The British have such a system, or something approximating it. Even after some recent spending increases, they still devote just 9 percent of their gross domestic product to health care, less than many European nations and a little more than half of what the United States spends. And that shows up in the availability of cutting-edge care. Relative to other highly developed countries, Britain is one of the slowest to get the latest cancer drugs to its patients. And that probably helps explain why British cancer survival rates generally lag, too.

But few of the plans under discussion in this country would create such a strict budget. And nobody in this country seriously proposes reducing U.S. spending to British levels. Rather, the goal is to reduce our spending moderately and carefully; the savings, most likely, would materialize over time. In the end, we would probably still spend more than what even the higher-spending countries in Europe pay. And that should be enough, given that the citizens of those countries are not exactly missing out on cutting-edge medical treatments. France and Switzerland--traditionally the two highest spenders--get the newest cancer drugs to their patients with virtually the same speed as the United States does. And, when it comes to cancer radiation equipment, France actually has more per person than we do.

So what, then, would have happened to my friend Mike Kinsley if such a system had been in place here? From the looks of things, exactly what has happened already: He would have gotten the DBS treatment. Nearly every country in Europe covers DBS under its national health insurance system--even England, with its famously low spending and its close scrutiny of new treatments. People over 70 can't always get the treatment in those countries, but that's partly because many physicians believe it's not usually worth the risks at that age. (And they may be right, depending on which studies you believe.) Medicare, meanwhile, also covers it, making it available to all of this country's elderly. Working-age Americans, on the other hand, may face some obstacles: According to Medtronic, private insurers occasionally deny coverage--to say nothing of those people who don't have insurance at all. DBS is just one example, to be sure, but it seems to be emblematic of the truth about universal health insurance: You don't have to choose between universal access and innovation. It's possible to have both--as long as you do it right.

Jonathan Cohn is a senior editor at The New Republic, a senior fellow at Demos, and the author of Sick: The Untold Story of America's Health Care Crisis--and the People Who Pay the Price (HarperCollins).