Patrick Reilly

While the suggestions are noteworthy, albeit hardly news to most practitioners, the desire to avoid potential willfulness arguments means that larger companies will continue to aim for the minimum level of sufficiency in their applications.

The ever-decreasing number of patent prosecution specialists inside large companies only compounds the problem, encouraging a "throw it against the wall (USPTO) and see what sticks" mentality.

I am hopeful, however, that proposed changes to U.S. patent law, combined with recent infringement case law, will ultimately result in increased patent quality. But the current backlogs mean we won't see any of these improvements for at least another five years.

In the meantime, mutual assured destruction (MAD) is the name of the patent strategy game.

Gregory T. Kavounas

This is a most sensible article. Metrics should be developed to drive prosecution better.

The article points to the wide variety of conditions under which patents are prosecuted, and the correspondingly widely varying results.

The following is very well known to patent managers and practitioners: "More time and energy (...in preparing a patent application...) up front will hopefully maximize active patent term life and minimize pendency, minimize file wrapper estoppel, reduce inconsistencies and adverse claim constructions, reduce cost and litigation, and maximize the likelihood that the claims will be upheld when tested."

Alas, it is not well known to non-patent managers. Indeed, no credit is given when that extra time and energy is spent on a special patent: we will never learn about the infringers it deterred. We, but not the rest of the world, will learn that this special patent was licensed more quickly because competitors could not avoid or invalidate it, or did not dare litigate it. So we cannot easily prove that better care up front results in an advantage.

In the absence of such credit, the pressure from non-patent management is to demand we procure large numbers of patents on budgets as small as possible; thus many shortcuts are taken at preparation.

This is a vicious cycle. It results in patents that defendants will chance litigating. That in turn results in management losing faith that well-cared-for patenting can do the job, and instead demanding large numbers at cut-rate budgets, and so on.

Metrics can hopefully get us out of this vicious cycle.

Paul F. Morgan

This is a difficult and complex subject, due largely to conflicting interests. One clear indicator of patent quality should be whether patent claims can survive a competent prior art search. But the small amount of time that USPTO examiners have to devote to prior art searching and adequate claim scope evaluation is too often inadequate to achieve that. And a gross increase in patent application fees and examiner staffing to provide that is not politically feasible or economically possible. Thus, thorough post-grant reexamination systems for the small percentage of enforced patents that anyone actually cares about are the only practical answer. [Jury validity trials are clearly not.]

Furthermore, as Tom Arnold pointed out years ago, this is a strange business in which companies willing to spend millions to defend their patents in litigation are at the same time often trying to obtain those patents as cheaply as possible. There is also excessive use of gross total numbers of patents as an alleged indicator of technological level for PR purposes. Changing views of the CAFC as to what good patents should contain, or not contain, and practitioners who have difficulty keeping up with it all, are another quality-related issue. "Metrics" are not going to define or solve these underlying issues.
