Article Date: 12/1/2009

Deciphering Clinical Study Data
EVALUATING STUDIES

Evaluate research with a critical eye to ensure you get the best guidance for your practice and your patients

By Sheila B. Hickson-Curran, BSc(Hons), MCOptom, FAAO



Sheila Hickson-Curran is director of Medical Affairs for Vistakon, Division of Johnson & Johnson Vision Care, Inc.

As in many fields, eyecare practitioners are moving more and more toward evidence-based medicine. But making the best decisions can be confusing when the "facts" of various studies seem to contradict one another. How can you determine what is reliable? And what is most relevant for clinical decision-making?

The goals of this article are to review the essential elements of clinical research, propose key questions to help you critically evaluate comparative studies, and share insights from colleagues about how they use research findings in clinical practice. It is important to emphasize that before initiating any clinical research, protocols must be reviewed by appropriate Ethics/IRB committees for approval.

Study Objective and Hypotheses

The study objective, or the reason for initiating the clinical study, should be clearly stated and unbiased. It should be based on something more than just proving that product A is better than product B. Hypotheses are the individual questions the study seeks to answer. These should be carefully selected based on prior research, clearly defined, and not changed once the study protocol has been finalized.

Study Design and Randomization

The optimal study design varies depending on the objective and hypotheses. In contact lens research, both parallel group and crossover studies are common.

In parallel group studies, one group of subjects uses the test lens and the other group uses a control lens. Parallel group studies are more "real world" and relatively easier to conduct and analyze, but they require larger sample sizes that need to be carefully balanced. This design doesn't lend itself to determining patient preference, because neither group experiences both lens types.

In crossover studies, each group uses either the test or control lens first and then crosses over to the other lens. These studies can be performed with fewer subjects, and they allow for analysis of preferences among and within subjects. However, this design makes the analysis more complex and may introduce carryover effects or problems with inadequate washout periods between treatments.

A study can be bilateral (same lens in both eyes) or contralateral (a different lens in each eye). A contralateral design may initially be appealing because variables such as environmental and patient-specific factors (tear film quality, for instance) are well controlled, a smaller sample size is needed, and the study may be shorter. However, in the real world, there may be unforeseen variables such as between-eye differences, inter-eye interactions, or compliance problems such as patients mixing up which eye each lens is in. Typically, parallel group bilateral studies identify differences that are most clinically relevant, while crossover and contralateral designs tend to pick up smaller differences in product performance.

Multicenter and prospective studies are generally considered more robust, but single-site studies and retrospective reviews of existing data can also provide valuable information. Randomization, a process by which subjects are equally likely to be assigned to either the test or control lens, or to a sequence of lenses, helps to minimize bias and produce groups that are comparable for both known and unknown factors.
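The randomization principle described above can be sketched in a few lines of code. This is a minimal illustration of simple 1:1 assignment (not any study's actual allocation procedure; the subject count and arm names are invented for the example):

```python
import random

def randomize(subject_ids, arms=("test", "control"), seed=None):
    """Assign subjects to study arms with equal probability and
    balanced group sizes (simple 1:1 randomization sketch)."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)  # every ordering of subjects is equally likely
    # Alternate down the shuffled list so the arms stay balanced.
    return {sid: arms[i % len(arms)] for i, sid in enumerate(ids)}

# 40 hypothetical subjects, split 20/20 between test and control:
assignments = randomize(range(40), seed=1)
```

Because every subject is equally likely to land in either arm, known and unknown prognostic factors tend to balance out across the groups as the sample grows.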

The study period is another important factor to consider. A typical rule of thumb is that the study period should be at least 50 percent longer than the average time to onset of the effect being measured.

"I prefer to see data over multiple time points, but I certainly want to see that the time point chosen is consistent with what I know about the condition," says Paul Karpecki, OD, in private practice with Koffler Vision Group in Lexington, Ky. He would be skeptical, for example, of a treatment for conjunctivitis in which the effect was measured only at 10 days. "We typically expect conjunctivitis to resolve over a few days to a week, so measuring efficacy much earlier or later than that isn't clinically relevant," he says.

All Variables Controlled

Poorly controlled variables can bias the results of a clinical study. Factors that may have an influence on the results (e.g., previous lens-wearing experience, refractive cylinder) should be taken into account with appropriately adjusted statistical analysis. In the contact lens field, we frequently see this principle violated. If, for example, a study compares lens B and lens C in patients who are all habitual wearers of lens A, the patients' preferences may be skewed by a similarity of one of the test lenses to lens A in material (e.g., oxygen permeability, modulus), design, or other factors. The findings, therefore, should only be extrapolated to patients who have previously worn lens A. Beware of "refitting" studies in which previous wearers of one lens type are refit into another lens type. The sample is inherently biased, so the results must be interpreted in that context and cannot be projected to the general wearing population.

Missing data should be clearly explained. In addition to the number of subjects enrolled, researchers should also report the number of patients seen at each scheduled visit and the number who successfully completed the study. Discontinuations are normal, but check whether they are lens-related and/or affect one group more than another.

Masking Subjects/Investigators

Masking is used to reduce bias. Figure 1 shows the relative credibility of different types of study masking. Investigator masking is most important for subjectively graded variables such as redness or corneal staining, while subject masking is more important for subject-reported variables. Double-masked studies, in which both the subject and the investigator are masked, are ideal, but these are not always possible in contact lens research because of unique lens markings and packaging differences.

Figure 1. Diagram showing the hierarchy of masking; sponsor masking adds credibility at every level. Masking is key to minimizing bias.

It is also preferable for the identity of the sponsor to be masked during the clinical trial. Industry funding for a study and/or its authors should always be disclosed in study reporting and is certainly worth considering in weighing the conclusions. It doesn't necessarily bias the results, however.

"In a way, I find industry-sponsored studies to be more credible than most because they are so carefully scrutinized," says Mile Brujic, OD, a private practitioner in Bowling Green, Ohio. "I think if you just step back and ask yourself whether a new lens or a new treatment is in the patient's best interest, you can't go wrong," he says. "If I can meet the patient's need for healthy, comfortable, high quality vision, those benefits will trickle up into increased revenue and referrals for my practice and eventually, more contact lens sales for the manufacturer. We all win, especially when my decisions are driven by sound clinical data."

Appropriate Number and Composition of Subjects

What is the right sample size? Does every good study need to enroll hundreds of patients? Not necessarily. Sample size should be calculated in advance and depends on study design, the sensitivity of the scales used, primary variables, and the size of difference the study is looking for (Figure 2). Statistical power, or the probability of detecting an effect that is truly present, is directly related to sample size.

Figure 2. Sample size should be calculated in advance and depends on the study design and the size of difference the study is looking for.

Larger samples are needed to measure subjective responses, such as comfort, because there is more variation in individual response, while smaller samples may be enough to identify statistically significant objective results.
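The link between sample size and the smallest detectable difference can be made concrete with a standard normal-approximation formula for comparing two means. This is a generic textbook sketch, not the calculation from any study cited here; the inputs (a 5-unit difference with a standard deviation of 10) are purely illustrative:

```python
from math import ceil
from statistics import NormalDist

def subjects_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate subjects needed per group to detect a true
    difference `delta` between two means, given a common standard
    deviation `sd` (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # ~0.84 for 80 percent power
    return ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)

n_large_effect = subjects_per_group(delta=5.0, sd=10.0)  # 63 per group
n_small_effect = subjects_per_group(delta=2.5, sd=10.0)  # 252 per group
```

Note that halving the difference to be detected quadruples the required sample size, which is why studies hunting for small subjective differences need many more subjects.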

As examples, two published studies supported by Vistakon were both well-constructed but had vastly different sample sizes. In a study to determine whether contact lens wear affects children's self-perceptions, 484 children were enrolled (Walline et al, 2006). But in a study comparing the rotational stability of astigmatic lenses with accelerated stabilization design versus a prism-ballast design, the sample size that was needed, based on the accuracy and standard deviation of the response, was only 20 patients (Young and McIlraith, 2008).

Remember that the composition of the study sample determines whether and how far the results can be extrapolated beyond the study. If only myopes were studied, for example, the study findings may not apply to hyperopes.

Statistical Analysis and Results Presentation

Pay close attention to statistical analysis and how results are presented. With any clinical study, we want to know the probability that a result is real rather than having occurred by chance. The level of probability, or confidence level, is set in the study protocol (typically 95 percent).
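As an illustration of how a 95 percent confidence interval around a reported mean is computed, here is a minimal sketch using the normal approximation (adequate for larger samples; the comfort scores are invented for the example):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def confidence_interval(data, level=0.95):
    """Two-sided confidence interval for the mean, using the
    normal approximation (reasonable for larger samples)."""
    m = mean(data)
    se = stdev(data) / sqrt(len(data))         # standard error of the mean
    z = NormalDist().inv_cdf(0.5 + level / 2)  # ~1.96 at the 95% level
    return (m - z * se, m + z * se)

comfort_scores = [7, 8, 6, 9, 7, 8, 7, 6, 8, 7]  # hypothetical 1-10 ratings
low, high = confidence_interval(comfort_scores)
```

A higher confidence level widens the interval: demanding 99 percent confidence rather than 95 percent produces a wider range around the same mean.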

Averages are common in reporting clinical results, but they have little meaning in isolation and can mask large, clinically meaningful variations in response. Means and/or medians, standard deviations (SD) or interquartile range, and sample size (n) should all be given. Preference results can often mislead as well when reported as "of those who expressed a preference." For example, in a study of 100 subjects, the statement that "70 percent preferred lens A over lens B" is of little clinical relevance if 90 percent of patients actually expressed no preference. The results would therefore be based on a sample of 10 subjects who did express a preference.
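The arithmetic behind that preference example is worth spelling out. Using the hypothetical numbers from the text:

```python
# Hypothetical figures from the example above: 100 subjects enrolled,
# 90 percent express no preference, and 70 percent of the remainder
# prefer lens A.
total_subjects = 100
no_preference = 90

expressed = total_subjects - no_preference   # 10 subjects
preferred_a = round(0.70 * expressed)        # 7 subjects

# "70 percent preferred lens A" describes only 7 of 100 subjects:
share_of_all = preferred_a / total_subjects  # 0.07
```

A headline of "70 percent preferred lens A" thus rests on just 7 subjects, and only 7 percent of the enrolled sample actually preferred lens A.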

Examine the scales on graphs carefully. Specifically, watch out for a truncated y axis that suppresses the zero. This is a common visual trick to emphasize an effect, as is evident in Figure 3. A 3-D graphic can also accentuate differences between results. The "well-dressed" graph should include the following features: title, footnote with further details, timeframe, sample sizes (n=), clearly labeled axes, units of measurement, meaningful scales, error bars or box-and-whisker graphs, and P values (Figure 4).

Figure 3. These two graphs were created from the same data. The graph on the left misleads by making lens B appear to be much more comfortable than lens A. When visual tricks—the truncated y axis, 3-D bars, lack of error bars—aren't used, it is easy to tell from the graph on the right that lens B's comfort advantage is minimal and not statistically different.
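The distortion from a truncated y axis can be quantified. In this sketch (with invented comfort scores of 7.2 and 7.5 on a 10-point scale), the same two values are rendered as bar heights on a full 0-to-10 axis and on an axis truncated to 7.0-7.6:

```python
def bar_height_fraction(value, axis_min, axis_max):
    """Fraction of the axis height that a bar of `value` occupies."""
    return (value - axis_min) / (axis_max - axis_min)

# Invented mean comfort scores for two lenses on a 1-10 scale:
lens_a, lens_b = 7.2, 7.5

# Full axis (0 to 10): the bars look nearly identical.
full_a = bar_height_fraction(lens_a, 0.0, 10.0)   # 0.72
full_b = bar_height_fraction(lens_b, 0.0, 10.0)   # 0.75

# Truncated axis (7.0 to 7.6): lens B's bar towers over lens A's.
trunc_a = bar_height_fraction(lens_a, 7.0, 7.6)   # ~0.33
trunc_b = bar_height_fraction(lens_b, 7.0, 7.6)   # ~0.83
```

On the honest axis, lens B's bar is only about 4 percent taller than lens A's; on the truncated axis it appears roughly 2.5 times taller, even though the underlying data are identical.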

Figure 4. An example of a well-dressed graph.

Most clinicians are not statisticians. Fortunately, the review process for scientific journals (e.g., Ophthalmology, Eye & Contact Lens, Optometry and Vision Science, or Cornea) includes a rigorous review of the statistical methods used; the application of improper statistical analysis is a frequent reason for rejection of papers submitted to these journals. For clinicians, therefore, just knowing that a study has been accepted and published by a peer-reviewed journal provides greater confidence in the validity of the data.

Meaningful Performance Differences

"We have to be aware of the difference between statistical and clinical significance," notes Dr. Brujic. "I want to know whether a study's findings are statistically significant, but that doesn't always mean the difference will be meaningful to my patients."

He is absolutely right on that point. In Figure 5, you can see that there are significant differences in slit lamp findings between the lenses (P=0.01). However, because conjunctival redness less than Grade 2 rarely requires clinical action, the difference is not very important in practice. Conversely, where results are not found to be statistically significant, it may still be worth considering their clinical significance.
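Dr. Brujic's distinction can be demonstrated numerically. In this sketch (two-sided z-test under the normal approximation, with invented numbers), a clinically trivial 0.1-grade difference in redness is non-significant in a 30-per-group study but highly significant at 1,000 per group:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_p(mean_diff, sd, n_per_group):
    """Two-sided p-value for a difference between two group means
    (equal group sizes, common standard deviation, normal
    approximation)."""
    se = sd * sqrt(2 / n_per_group)  # standard error of the difference
    z = abs(mean_diff) / se
    return 2 * (1 - NormalDist().cdf(z))

# A 0.1-grade difference in redness (sd = 0.5) is clinically trivial,
# yet a large enough sample makes it "statistically significant":
p_small_study = two_sample_p(0.1, 0.5, n_per_group=30)    # p ~ 0.44
p_large_study = two_sample_p(0.1, 0.5, n_per_group=1000)  # p < 0.001
```

With 1,000 subjects per group the p-value is tiny, even though a 0.1-grade difference in redness would rarely change clinical management.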

Relevant and Accurate Conclusions; Appropriate Recommendations

Consider whether the conclusions drawn by a speaker, author, or advertisement are meaningful, relevant, and substantiated by the data. The study objective stated at the outset should have been met and the hypotheses either proved or disproved.

You may have to go more in-depth than an abstract or single graph to understand whether the stated or implied conclusions are truly representative of the study. Nuances and limitations are usually discussed more fully in the paper than any snapshot graph can convey.

"E-mail news blasts and trade journals are excellent sources for staying on top of the latest research in a broad way," says Dr. Karpecki. "But when a study piques your interest or the findings are surprising, you should take the time to read the full study," he says.

Any recommendations should also reflect any cautions in interpreting the results and indicate where further research is needed.

Published Results

The publication of study results is an important part of the scientific process. Peer-reviewed publications offer readers a greater degree of confidence because the work has been reviewed by a panel of experts in the field. Published papers should include details of the study objective and hypotheses, study design, sample size and composition, analysis method, and conclusions so that clinicians can fully evaluate the results. Company-sponsored work is often referenced as "data on file" that can't easily be reviewed by those outside the company. Studies that are published or presented at conferences carry more weight, as readers and attendees have the reassurance that the work has been scientifically reviewed and registered on www.clinicaltrials.gov.

Using Research in Clinical Practice

Knowing how to evaluate the details of a clinical study is just the first step, of course. The next challenge for practitioners is determining how study findings affect their patient care.

Many practitioners say that new data often reinforces what they are already doing. "I've always been a big advocate of fitting kids with contact lenses," says David Kading, OD, a practitioner in Seattle, Wash. "But the Contact Lenses in Pediatrics (CLIP) and Adolescent and Child Health Initiative to Encourage Vision Empowerment (ACHIEVE) studies have really provided the scientific evidence to support that fitting children is not only possible, but desirable."

The CLIP studies showed that children ages 8-to-12 can successfully wear and care for their contact lenses, that they can be fit in a reasonable amount of chair time, and that contact lens wear improves their quality of life compared to spectacle wear (Walline et al, 2007; Walline et al, 2008). The ACHIEVE study is a large, multi-site, prospective study that enrolled nearly 500 ethnically diverse myopic children ages 8 to 11 (Walline et al, 2008). The children were randomly assigned to either contact lens or spectacle wear and followed for three years. To date, the ACHIEVE researchers have reported that contact lens wear does not cause myopic progression and that children have higher self-esteem and feel more confident about their participation in sports and other activities when they are wearing contact lenses (Walline et al, 2009).

Figure 5. An example of a statistical significance that has no real clinical relevance.

Figure 6. A 10-point study checklist.

Both of these studies were supported by funding from Johnson & Johnson Vision Care, Inc. "ACHIEVE is a good example of a study that provides valuable data to our profession, but that could never have been accomplished without financial support from industry," says Dr. Kading.

Dr. Brujic agrees. "And the important lesson from these two studies is not that any particular contact lens is best for children, but that children can benefit from contact lenses," he says.

Sometimes new research can turn current practice on its head. "It's really easy to become biased by your own anecdotal experience," says Dr. Brujic. "Sometimes there is no scientific backing at all for the way things have always been done." Newer, better products may also overcome problems that influenced treatment in the past.

Over time, as a body of evidence mounts, the entire profession can shift gears. We are seeing this happen now with the shift in prescribing patterns toward silicone hydrogel contact lenses and away from older hydrogel lens materials that may not be as healthy for the eyes.

Don't Take Their Word for It

"I read about a lot of new studies and will often try to duplicate them in my practice," says Dr. Kading. "For example, I was initially very skeptical about a controversial report of corneal staining with certain contact lens solutions (Andrasko and Ryen, 2008)," he says. "But when I got similar results myself, I began to at least think differently about solution-lens interactions."

"Good patient care demands that we stay up-to-date on the latest clinical research," says Dr. Karpecki. "Once I understand the study design and conclusions, I ask myself whether those conclusions make sense based on my own clinical experience, and then apply the results to a select group of patients so that I can monitor the study's validity in a clinical setting."

When he's trying out a new product or considering a change in patient care, Dr. Karpecki says he also asks patients a lot more questions. "I recognize that a study emphasizing one feature may have left out something that is important to me and my patients," he says. "So I ask very specific questions about vision, comfort, ease of use, and any side effects or problems."

Share the Data with Patients

While the details of a peer-reviewed paper may be difficult for lay people to understand, "Patients do want to know that your clinical decision-making is based on research, not just guesswork," says Dr. Kading. Letting them know about the latest research gives them confidence in your ability to stay current. It can also simplify the conversation.

"For example, when parents are worrying about whether their kids are ready to try contact lenses, I can tell them that the majority of parents in the CLIP study were happy that they allowed their children to try contact lenses," says Dr. Brujic. Study data also helps you explain why you are recommending a new product or a change in a therapeutic regimen, especially when a patient may have concerns about the cost.

"A lot of my patients will ask about less expensive alternatives," says Dr. Karpecki. "Sometimes the cheaper lens or the generic medication is a perfectly reasonable option; other times I have good clinical reasons for recommending a more expensive but more clinically effective product. Patients generally choose what is best for their health if you can just explain why that is so," he says.

Conclusion

Clinical research is costly and time-consuming to conduct, and no clinical study is entirely without flaw. Figure 6 provides a 10-point checklist to help you evaluate clinical study results. You can also receive a 10-Point Guide to Evaluating Clinical Research by e-mailing pa@its.jnj.com.

Best practice health care depends on clinicians understanding clinical research and applying the results. CLS

This paper is based on material presented by Jane Veys, MCOptom, and Cristina Schnider, OD, at the 2007 British Contact Lens Association Clinical Conference.

For references, please visit www.clspectrum.com/references.asp and click on document #169.



Contact Lens Spectrum, Issue: December 2009