The application of new computerized methods of data analysis to vast collections of medical, biological, and
other data is emerging as a central feature of a broad vision of precision medicine (PM) in which systems based
on artificial intelligence (AI) assist clinicians in treatment, diagnosis, or prognosis. The use of AI to analyze big
data for clinical decision-making opens up a new domain of inquiry into ethical, legal, and social implications (ELSI), addressing a possible future in which the implications of genetics and genomics become embedded in algorithms, pervasive yet implicit and difficult to identify. An important target of inquiry is therefore the development, and the developers, of these
algorithms. There are three distinctive features of the application of AI, and in particular machine learning
(ML), to the domain of PM that create the need for ELSI inquiry. First, the process of developing ML-based
systems for PM goals is technically and organizationally complex. Thus, members of development teams will
likely have different expertise and assumptions about norms, responsibilities, and regulation. Second, machine
learning does not operate solely through predetermined rules, which makes it difficult to hold such systems accountable for their conclusions or reasoning. Third, designers of ML systems for PM may be subject to the diverse and divergent interests and needs of multiple stakeholders, yet remain unaware of the ethical and value implications these hold for design. These distinctive features of ML in PM could make misalignment between design and values difficult to detect, and could lead to a breakdown of responsibility for realignment. Because machine learning in the context of
precision medicine is such a new phenomenon, we have very little understanding of actual practices, work
processes, and the specific contexts in which design decisions are made. Importantly, we have little knowledge
about the influences and constraints on these decisions, and how they intersect with values and ethical
principles. Although the field of machine learning for precision medicine is still in its formative stage, there is
growing recognition that designers of AI systems have a responsibility to ask questions about values and ethics. To ask such questions, designers must first be aware that design expresses values. Yet there are few practical options for designers to develop this awareness. Our specific aims are:
Aim 1 To map the current state of ML in PM by identifying and cataloging existing US-based ML in PM
projects and by exploring a range of values expressed by stakeholders about the use of ML in PM through
a combination of multi-method review and interviews of key informants and stakeholders.
Aim 2 To characterize decisions and rationales that shape ML in PM and explore whether and how
developers perceive values as part of these rationales through interviews of ML developers and site visits.
Aim 3 To explore the feasibility of using design rationale as a framework for increasing awareness of the
existence of values, and multiple sources of values, in decisions about ML in PM through group-based
exercises with ML developers from academic and commercial settings.