QUALITATIVE CASE STUDY
Shaping a Future Customer 'Mindset'
Our Client wanted to understand the individual mental 'steps' that prescribers needed to take before they could adopt a dramatically new attitude toward treating a common type of cancer.
Testing future or hypothetical mental states is always difficult, especially with physicians who believe that those mental states will be dictated solely by clinical data.
We had to devise a way to identify barriers (both subtle and obvious) to adoption of these mindsets and to walk Rx'ers through a number of potential steps needed to get them there.
We made a tiny disruptive tweak to the usual card sort methodology by asking respondents to rank a group of four 'Mindsets.' With the rankings in hand, we broke with convention by asking the doctors to focus on the 'Mindset' they least agreed with.
As they sat with the problematic Mindset staring up at them, we asked them to choose three Reasons to Believe (RTBs) that would get them closest to embracing the ideas behind that Mindset.
The exercise harnessed each respondent's contrarian side and essentially enlisted them to find RTBs that would persuade them (despite not wanting to be persuaded!). The resulting insights gave our customer a huge leg up in nudging the market toward its hoped-for Mindsets.
QUALITATIVE CASE STUDY
Understanding 'Natural Language'
Our Client had a pipeline oncology drug with some atypical side effects. They were unsure how important these effects would be to prescribers compared with those of existing cancer drugs.
Key was identifying the natural language physicians would use to describe the side effects to patients, or whether they would mention them at all.
Also of interest was how they prioritized these side effects in presentations to patients--was the order based on how disruptive the side effect would be, how many patients would have it, or only on likelihood of its interrupting treatment? No one knew.
Traditional approaches to this kind of problem are abstract (and honestly a bit lame), with moderators asking docs, "How would you describe this side effect to one of your patients?" This leads to overgeneralized answers that fail to take individual patient traits into account the way real-world conversations would.
Our Client had developed a number of internal Patient Profiles that were in need of validation, i.e., figuring out whether oncologists would think of each as a distinct "type" or lump some together as having essentially similar needs with regard to treatment.
With the luxury of time--our Client had planned ahead--we built a methodology exposing each oncologist to professional actors who had been trained to play metastatic cancer patients with very specific individual profiles.
Each "character" was devised to have a unique level of education, extent of disease, number of prior drug regimens, number of dependents, etc.
Each also had a unique set of personal priorities, e.g., ability to spend time being physically active, living until a child's graduation, not having side effects that interfered with cognition.
We trained actors from local theater companies in four US cities (later in EU markets) and worked for weeks to prepare them for their "roles."
Physicians first did an upfront interview with the moderator, then consulted with two Actor patients, followed by a moderator debrief at the end. These debriefs proved transformative, as they provided context for each physician's behavior in the simulated patient encounters.
The oncologists loved having someone to react to, rather than something, and dove into the exercise with gusto. Without any prompting, they were customizing how they educated each patient about all of the available treatment options.
The Client was able to see stark contrasts in how physicians described each treatment, which side effects they mentioned, and whether they actually presented all available options (as they had initially claimed to) or narrowed the selection to one or two treatments that they felt were the "right fit."
The data were so compelling, yet so at odds with what the Client's Team expected, that it significantly shaped key decisions they made prior to launch.
QUANTITATIVE CASE STUDY
Identify caregiver segment(s) to target at launch for Client's at-home treatment of a pediatric CNS condition. Client anticipated that caregivers would drive demand through HCP gatekeepers.
Caregivers ensure correct treatment administration and are de facto decision-makers.
Treatment is a first-to-market digital therapeutic in an established pharmacologic market.
Client has limited promotional means.
Caregivers have wide-ranging attitudes towards the condition, the need for treatment, and even the idea of using a medical device.
A self-administered Internet survey with a nationwide sample of caregivers. We analyzed the data via cluster and latent-class analyses, as well as dimensional segmentation solutions:
Collected basic demographics and medical history related to symptoms and treatment
Levels of agreement on attitudinal and emotional statements related to healthcare behaviors, the specific condition, and approaches to its treatment
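The clustering step above can be sketched as grouping respondents by the similarity of their agreement ratings. Below is a minimal k-means illustration in plain Python; the actual study also used latent-class models, and all ratings here are hypothetical:

```python
def kmeans(points, k, iters=20):
    """Toy k-means for attitudinal segmentation.

    points: list of respondent vectors (e.g., 1-7 Likert agreement
    ratings on attitudinal statements). Returns a segment label per respondent.
    """
    # Simple deterministic init: evenly spaced respondents as starting centers.
    centers = [list(points[i * len(points) // k]) for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each respondent to the nearest segment center.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Recompute each center as the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

# Hypothetical ratings: two attitudinal statements per caregiver.
ratings = [[1, 2], [2, 1], [1, 1], [6, 7], [7, 6], [7, 7]]
labels = kmeans(ratings, k=2)
print(labels)  # caregivers with similar attitudes share a segment
```

In practice the number of segments (k) is itself a judgment call, which is one reason the study compared cluster and latent-class solutions before settling on one.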
Found differences among segments sufficient to allow identification and tailoring of future marketing initiatives, including optimal channels of communication
Created Detailed Customer Portraits
Designed a simple classification scheme that can be embedded
into a caregiver-facing Website
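A "simple classification scheme" of this kind is typically a short battery of screener questions plus a scoring rule that maps a new caregiver to the nearest segment. A minimal sketch, where the segment names, statements, and centroid values are all made up for illustration:

```python
# Hypothetical typing tool: score a caregiver's agreement ratings (1-7) on a
# few classifier statements against pre-computed segment centroids.

SEGMENT_CENTROIDS = {
    # Segment name -> mean agreement on the classifier statements
    # (illustrative values, not the study's actual solution).
    "Device-Ready Proactives": (6.2, 6.5, 2.1),
    "Wary Traditionalists":    (2.4, 3.0, 6.1),
}

def classify(ratings):
    """Assign a caregiver to the segment with the closest centroid."""
    def dist(centroid):
        return sum((r - c) ** 2 for r, c in zip(ratings, centroid))
    return min(SEGMENT_CENTROIDS, key=lambda name: dist(SEGMENT_CENTROIDS[name]))

print(classify((7, 6, 1)))  # -> "Device-Ready Proactives"
```

Because the rule is just a handful of questions and a distance calculation, it is easy to embed in a Website form and keeps newly classified caregivers consistent with the original segmentation.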
WHY IT WORKED
Good (and frequent) collaboration between the Client's team and its marketing and advertising partners
Workshops among key stakeholders at critical analytic points to reach alignment and to ensure that the resulting caregiver segments were usable and made sense while having robust support
The team at Armature has done multiple segmentation studies--spanning diverse therapeutic categories with HCPs, Caregivers, and C-Suite Executives.
QUANTITATIVE CASE STUDY
Support the development of promotional messaging materials for a cancer product by finding the most compelling 'story' for presentation to HCPs.
Client needed robust, empirical data as rationale for message selection choices and story flow.
Quantitative story building generally yields low consensus and often over-weights efficacy.
Treatment was a second-to-market agent for a rare cancer type that required special diagnostic tests that were not yet routinely adopted.
Set of 28 messages, divided among 14 categories (there are over 1.2MM potential message-story combinations). In other words, a lot of material to test. (Maybe we should have listed that among Challenges, too.)
A self-administered Internet survey with a nationwide sample of HCP Specialists
We chose an adaptive choice-based conjoint (ACBC) approach in which an individual's response history determines the specific message combinations they react to. Multiple message combinations were tested via a task that mirrors daily decisions (not rating, ranking, or MaxDiff). This in turn reduced respondent fatigue, yielding better-quality data.
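The core logic of deriving message importance from choices can be illustrated with a plain, non-adaptive choice model: respondents repeatedly pick one message bundle over another, and a logit fit recovers each message's part-worth utility. This toy sketch uses simulated respondents; the real study used a full adaptive ACBC design, and every number and utility here is invented for illustration:

```python
import math
import random

N_MESSAGES = 4
TRUE_UTILITY = [1.5, 0.5, -0.5, -1.5]  # hidden preferences we try to recover

def simulate_choice(bundle_a, bundle_b, rng):
    """Simulated respondent: picks bundle A with logit probability."""
    ua = sum(TRUE_UTILITY[m] for m in bundle_a)
    ub = sum(TRUE_UTILITY[m] for m in bundle_b)
    p_a = 1.0 / (1.0 + math.exp(ub - ua))
    return 1 if rng.random() < p_a else 0

def fit_partworths(tasks, lr=0.1, epochs=200):
    """Logistic regression on bundle-feature differences -> per-message utilities."""
    w = [0.0] * N_MESSAGES
    for _ in range(epochs):
        for bundle_a, bundle_b, chose_a in tasks:
            # Feature vector: +1 for messages in A, -1 for messages in B.
            x = [0.0] * N_MESSAGES
            for m in bundle_a:
                x[m] += 1.0
            for m in bundle_b:
                x[m] -= 1.0
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            grad = chose_a - p  # gradient of the log-likelihood
            w = [wi + lr * grad * xi for wi, xi in zip(w, x)]
    return w

rng = random.Random(42)
tasks = []
for _ in range(400):
    a, b = rng.sample(range(N_MESSAGES), 2)
    tasks.append(([a], [b], simulate_choice([a], [b], rng)))

partworths = fit_partworths(tasks)
ranking = sorted(range(N_MESSAGES), key=lambda m: -partworths[m])
print("derived importance order:", ranking)
```

The adaptive part of ACBC sits on top of this machinery: each respondent's earlier answers prune the design space so later tasks concentrate on the messages still in contention, which is what keeps fatigue down.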
Additional stated task: Respondents selected the messages they believed to be a necessary part of a sales call in which the rep could discuss only three (3) statements with them.
We correlated stated and derived importance to further build support for which messages to include.
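Correlating the two importance measures amounts to converting stated pick shares and derived utilities to ranks and computing a rank correlation. A minimal Spearman sketch, with made-up per-message values:

```python
def spearman(xs, ys):
    """Spearman rank correlation (no-ties case) between two importance measures."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: -vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical values across 5 messages:
stated = [0.40, 0.25, 0.15, 0.12, 0.08]   # share of HCPs picking each message
derived = [1.2, 0.9, 0.1, 0.3, -0.5]      # conjoint part-worth utilities

print(round(spearman(stated, derived), 2))  # -> 0.9
```

A high correlation, as in this toy example, is the "convergence" evidence: messages HCPs say matter are also the ones their choices reveal to matter, which strengthens the case for including them.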
WHY IT WORKED
No other method would thoroughly test the message set given the practical limitations of time and specialist universe size.
Our analysis showed where implicit and explicit decision-making converged. This gave our Client the confidence that the recommended 'stories' had robust support. Client also gained insight into what HCPs really need to know vs. what they say they need in evaluations of 'willingness to Rx.'
Experience: Principals at Armature have done multiple message optimization studies using univariate and choice-model techniques, and are fluent in adaptive choice-based methodologies.