Stanley L. Witkin, University of Vermont
The development of the Salisbury Statement on Practice Research reflects the belief and value position among certain academics (and, one hopes, practitioners, although they were not represented among those who produced this statement) [1] that research can and should be more responsive and useful to practice. This is not a new issue, and the current focus on practice research can be viewed as the latest incarnation of various attempts to bring these two activities into closer alignment.
A primary justification for these attempts is the belief that a closer connection between research and practice will be beneficial to individuals and society. In general, research is seen as providing important knowledge for practice, while practice can provide contextual relevance to research. However, differences in their aims, language, expertise, audiences, and settings (among others) keep them estranged. Thus, the “issue of practice research” as used here refers to beliefs and values about practice and research that lead to a perceived discrepancy between the current state of affairs and a more desirable one.
In this paper, I think (and write) “out loud” about how this issue has been (and is) understood and addressed, and its implications for social work. I also propose some alternative ways of considering this issue (if considered an issue at all) that challenge how we think about practice and research. As my title suggests, my intent is to provide neither a polemic nor a critique of the Salisbury Group’s efforts. Rather, I hope that my “comments and musings” will serve as a heuristic, stimulating new ideas and encouraging further dialogue on practice and research and their relationship.
I begin by problematizing a common assumption embedded in the Salisbury Statement (and more generally in the social work literature): that there is a “gap” between research and practice. Early in their report, the Salisbury Group notes “Much of the contemporary meaning [concerning the connection of research and practice] turns on the issue of how to bridge the gap between the world of research and the world of practice” (Salisbury Statement, p. 3). This assumption of a gap is of long duration and widely considered to accurately portray the current relationship between research and practice in several disciplines and professions [2]. It requires no justification and forms the backdrop for practice research efforts.
The word gap is used somewhat metaphorically to imply distance: activities that are carried out in different spaces (or “worlds,” as stated in the Report). These can be physical spaces (such as where practice and research take place), but more importantly social spaces, analogous to how the term “generation gap” is used to describe the different life worlds of youth and elders and the attendant challenges in achieving mutual understanding. Similarly, the notion of a practice-research gap connotes different contexts that impede closer coordination. It is seen as a problem that invites solutions aimed at bridging or reducing it.
Kirk and Reid (2002) describe these attempted solutions as following two strategies: using research as a model for practice or more commonly, using research-generated information as the basis for practice. In the former case, it is argued that practice should be structured and/or conducted in a manner similar to (conventional) research; for example, problems should be defined in operational terms, hypotheses formulated about causes, data collected, and outcomes evaluated. The latter strategy rests on the belief that research should provide the knowledge base for practice (Kazdin 2008). In other words, practice should become either an expression of research or it should prioritize research generated information (as in evidence-based practice).
How to implement these strategies and what they might look like remain challenges. Several approaches have been tried including: (1) using exhortation and rhetoric (usually aimed at practitioners), for example, arguing that research-based practice is more effective or more ethical; (2) lobbying for societal and institutional change, for example, implementing legal mandates that require service programs to use research or evaluation in order to retain funding; (3) increasing the accessibility of research to practitioners, for example, revising research requirements of educational curricula or developing training programs; (4) increasing the relevance of research to practice, for example, conducting research on problems identified by practitioners or inviting practitioner involvement; (5) facilitating the use of research in practice, for example, adapting research tools and methods to the practice setting such as rapid assessment instruments and single-case designs; and (6) conceptualizing practice as analogous to research, for example, characterizing it as a problem-solving process.
Most of these strategies maintain the superior epistemological position of research, privileging research knowledge and directing change toward practice or practitioners. Interestingly, the term “practice research” may support this position. In this expression, “practice” functions as an adjective, modifying the noun, “research.” Thus, practice research refers to a type of research that is relevant to practice where relevance refers to research topics germane to practice. Curiously, the reverse terminology, “research practice,” rather than being the converse of this position – a type of practice that is relevant to research – refers to a type of practice that emulates research. Relevance in this case refers to practice that utilizes or functions like research, precisely the two strategies identified by Kirk and Reid. Therefore, using the term practice research to characterize the response to the gap may contribute to the authority of research and limit the field of potential solutions.
What is striking about all these efforts to bridge the gap is that, despite their numerous manifestations over decades, they have not been successful in substantially changing the situation (or why else the need for the Salisbury Statement?). Although there have been many examples of individual projects that demonstrated a research-practice connection (some in this issue), the general situation remains largely unchanged. In fact, it is the lack of general change that makes these projects worthy of publication.
If practitioners are not using research-generated information, as implied by the gap, then from where does their information derive? A bit of reflection generates some likely candidates: formal education, professional development activities, readings, experience with clients, conversations with colleagues, supervision, media sources, and observations. Since research-based information is involved to varying degrees in all of these sources, we might wonder why this is not sufficient. What is the desired utilization level of research-based knowledge and how would we know when this level was reached? For those associated with evidence-based practice groups like the Campbell Collaboration that provide evaluations of the empirical efficacy of different practice approaches and methods, the response might be that practitioners should use the most empirically supported methods based on the Collaboration’s evidentiary assessment. This approach essentially restricts research-generated information to formal research studies (and often, to a subset of these). It also suggests implicitly that the “real” issue may not be one of research-based information per se, but its source. That is, in order to “count” as research information for practice, the information needs to come directly from research studies rather than being garnered from the alternative sources identified above.
The above example can be read as a political issue. Researchers (and others who benefit from this position) want greater control of practice. One way to do this is to privilege information generated from research studies (with certain types of studies, such as experiments, being given greatest authority) and to define practice problems and outcomes in ways that fit research data [3]. In my view, this represents the position of many who advocate that social work should be more scientific, and it is within this context of a scientific social work that questions such as how to make research relevant to practice or how to make practice more research based get asked. Method is privileged. Issues that are less amenable to research influence, such as the relational aspects of practice, are of less concern or get redefined in ways that conform to research protocols.
In a broader sense, the social work as science position is inherently conservative. The ideology of science begins with a relatively uncritical stance toward “how things are.” Dominant discourses are considered true or accurate representations of the extant world, and research focuses on discovering the particularities of this assumed reality. In contrast, professions like social work are concerned with how things should be. And although philosophers caution against deriving ought from is, what is taken as reality forms the boundaries of the possible.
We can take this analysis a bit further by asking what success would look like. That is, how would we know if we were successful in bridging or closing the gap? The answer, as already hinted at, will depend on the particular model of research and practice to which one subscribes. For the conventional researcher, success might be complete adherence to evidence-based practice. For others it might involve greater collaboration (although again it is not clear how much) between researchers and practitioners. Or, as in some forms of participatory action research, it might mean collapsing the distinctions between practitioners and researchers, with each functioning in both roles and working collaboratively on specific problems. Still for others, no model of practice research would be considered successful without the inclusion of service users - the people to whom all of these efforts are directed.
Also critical, and touched on previously, is whether practice research is to be applied to practice in a general sense, or to specific sites of practice. Success in the first sense seems an expression of the belief that research produces generalizable knowledge and therefore, such knowledge should be widely applied. The latter approach is more modest, circumscribed, and contextual. In this case, success might be the implementation of a practice-research project in a particular setting. From this perspective, there are successes but not Success.
Finally, we might ask how we would know whether the situation was changing. Would we need to conduct research? And if so, would this be another instantiation of a research perspective? Addressing these questions might help clarify some of the complexities associated with practice research and some of the implications of its expressions.
In sum, the perception and interpretation of a practice-research gap are related to various understandings of practice, research, and their relationship, as well as “political” considerations involving control over knowledge production. Discussions about practice-research cannot be divorced from these factors. I suspect that doing so (that is, not recognizing and granting legitimacy to the diversity of practice and research models or acknowledging “political” dynamics) has contributed to the perennial nature of the issue and to the sometimes contentious positions taken by those with different preferences about how to conduct research and practice. Are there alternatives? It is to this issue that I now briefly turn.
If my assessment is accurate, it might be beneficial to shift the focus from trying to bridge the gap to exploring other ways of understanding it (while recognizing it as an arbitrary way to construe the situation). One possibility is to view “the gap” as an asset rather than a problem in need of change. According to the asset position, having a research-practice gap and its attendant tension is beneficial for both groups. An analogy from social work education illustrates this position. Several years ago (Witkin, 1998) I wrote about the tension between the academy (specifically, academic social work programs) and its field constituency (the organizations that supervised students). This tension arose, in part, from the different kinds of knowledge valued within their respective contexts. These different knowledges were, in my view, both important and complementary. I further argued that another source of this tension was the attempts of each group (i.e., practitioners and academics) to reproduce itself, or more specifically its perception of itself, in the other. Academics wanted practitioners to be scholarly, theoretical, research-literate, and reflective; practitioners wanted academics to be grounded in the “real world” (their real world), to be politically astute (e.g., sensitive to the needs of organizations), practical, and skill oriented. I wrote:
From the vantage point of each group this desire [to reproduce itself] makes sense: each “knows” what social workers need. Unfortunately, given these positions, strategies to accomplish this reproduction are, when detected, resisted (trying to make practitioners into researchers is one example). Ironically, if one group were successful in transforming the other into itself, the capacity of the relationship to generate creative solutions through a combining or synthesis of differences, would diminish. In fact, there would hardly be a relationship, just a reflection. (390).
Therefore, to preserve its valued knowledge, each group found it important to resist the other’s efforts at reproduction. I think something similar might operate between practitioners and researchers.
Building on this argument, we might entertain another meaning of “gap,” that of a passageway through mountains. Used in this sense, the practice-research gap might come to mean the way one navigates through the differences between practice and research and the different ways each is understood.
How then do these activities acquire their labels as practice and research? In part, they derive from the people who perform them – those who hold certain credentials and who identify themselves as practitioners or researchers. A second criterion is the activity’s primary aim, in this case, helping or learning. Helping (to enhance well-being) is the sine qua non of practice. Whatever the form of practice (e.g., case management or therapy) or the level at which it is conducted (micro or macro), the broad aim is helping. In a similar fashion, learning is the general aim of research. It would make no sense to conduct research if you did not think you could learn something new. The forms this learning might take are diverse – new discoveries or interpretations of situations, recognizing something as an X rather than a Y, judgments about the best course of action in a particular situation and so on. Taken together, one might say that practice research is about learning how to provide “better” help.
Obviously, the helping-learning distinction is not absolute. One may learn something new in practice or help others through conducting research; however, these are not their primary aims (at least as these activities are conventionally understood). Further, these differences illuminate the asymmetry of the practice-research relationship: to help others one needs knowledge, but gaining knowledge does not require helping. It would not be accurate, however, to think of research as entirely divorced from such considerations. Social work research is expected not to be harmful and to provide benefits to participants and/or society.
Understanding the place of helping in the research enterprise has been a focus of research ethics. While it is beyond the scope of this paper to provide an in-depth analysis of this issue, a few points are relevant, particularly in relation to the populations served by social workers. Within research ethics, helping is expressed by the notion of beneficence. It is part of the “big three” ethical principles of respect for persons, beneficence, and justice (Putney and Gruskin 2002) codified in documents such as the Declaration of Helsinki. Based on the utilitarian principle of the greatest good for the greatest number, its meaning in the research context is typically expressed as the assessment of potential benefits versus potential harms to research participants or society. Utilitarianism and its applications have been criticized on many fronts (e.g., Sen 1979), but particularly relevant for social workers are its limitations regarding the understanding of research benefits with indigenous people and vulnerable groups. When considered from their perspective, a good deal of research aimed at providing benefits can be considered harmful. As expressed by Māori scholar Fiona Cram, “… they are finding out about us but this knowing does not challenge the status quo that maintains our marginalization” (2004: 2). Such statements suggest the kind of “benevolent colonialism” that Jim Ife (2007) cautions social workers about - the privileging of our own assumptions and beliefs and their imposition on others.
These differing viewpoints raise other issues germane to practice research. First, they prompt us to be mindful of whose perspective we are operating from. Researchers, practitioners, research participants, and service users may have different assessments. Second, they encourage the inclusion of various standpoints, particularly as a counterbalance to the more dominant research perspective. Third, they challenge the belief that research and research-generated information are ipso facto beneficial. It is not only the actions of researchers that require scrutiny, but the research itself, for instance, how it is structured. Fourth, they encourage consideration of what counts as the benefits and risks of research and how these are determined. Such assessments require looking beyond obvious indicators to more subtle factors such as how problem conceptualization in research information functions. For example, Nespor and Groenke (2009) discuss “responsibility at a distance”: “. . . how the processes researchers examine in the lives of the people and events they encounter directly are also constitutive of lives and events elsewhere” (998). That is, research information does not remain in the setting in which it was generated. Its potential impact can extend beyond the population or events studied.
Collectively, these concerns suggest that the benefits of research are not self-evident and that research may even be harmful. A lesson to be learned from the experiences of indigenous groups is the need to re-examine research that claims to depict the realities of others (e.g., service users) without their involvement. An appreciation of reflexivity and the relation of knowledge to power can enhance the sensitivity of such re-examination.
Social workers want to be inclusive and accommodating of different positions. These are positive values. However, there may be some cases where differences are so great that it makes little sense to try to include them under the same conceptual umbrella. I believe this is the case with practice research. This does not mean that some forms of research should be eliminated, but that we must recognize the multiple ways of understanding this activity and their impact on our judgments about what is desirable.
Let me give a brief example. What we might call conventional research seeks to discover generalizable knowledge, while alternative approaches (e.g., those based on postmodern ideas) see research as generating local knowledge. Those practicing from the second perspective think the aims of the first perspective (discovery and generalizability) are not possible. They also believe there are some troubling value implications associated with this position. These researchers would be hard pressed to endorse a position that calls for greater use of conventional research, even in the service of practice. On the other side are those who see the alternative research position as quite limited and as having its own set of troubling value implications, for example, researcher bias and subjectivity. From within the community of each, these positions are reasonable and cogent.
What does it mean for these different researchers to engage in practice-research? Can they agree on principles to which both should adhere? Should they? One possibility is accepting that people working within different research traditions have different kinds of practice research. For instance, conventional researchers may interpret practice research as synonymous with evidence-based practice; whereas for alternative researchers it may mean greater collaboration with clients.
These meanings may not be congruent because they are based on different assumptions about knowledge, values, research, and practice. In this sense, practice research as a uniform enterprise may not be a useful project. Instead we might consider practice researches (plural) that allow those operating from different traditions to figure out the best ways to improve practices. Such a position may transform practice research from a site of struggle to one of peaceful coexistence.
The research on risk assessment provides a good example of some of the complexities involved in trying to formulate a position on practice-research. In the past two decades, there have been increasing attempts to assess risk factors related to various individual and social issues such as disease, criminality, child and elder abuse, suicide, and psychopathology. So common have these analyses become that the term “risk society” is now widely used to characterize social life. Some view this development as an important advancement in society’s ability to predict and prevent unwanted or dangerous conditions or events. From this perspective one could argue that risk assessment studies represent an excellent illustration of practice-research; to wit, the provision of risk data that practitioners can use in their practice. For some, such data could be a cornerstone of “preventive” practice. On the other hand, there is another literature that presents the current enthusiasm for risk assessment as having questionable ethical implications, an inflated aura of precision, and deleterious effects on practice. More specifically, these concerns include increased government surveillance and “governmentality” (a term coined by Foucault to refer to various forms of social control and the techniques used to achieve such control). Such surveillance is rationalized as protecting society; however, as Parton & Kirk (2009) argue, “Those considered potentially at risk are the subject of increased state surveillance, intervention and control, even though many, if not most of them, would never have become ‘cases’ of the problem and even when some of them may suffer harms from such state and professional intervention” (p. 29).
Risk assessment may also have an impact on the way in which practitioners conduct their practice. For example, within the context of the English criminal justice system, Fitzgibbon (2007) discusses how an emphasis on risk assessment (and evidence-based practice) has led to a shift in accountability from the client to the public and how the client-worker relationship has been supplanted by concern with controlling risk. According to Fitzgibbon, the sense of the client as a whole person is lost, disassembled into risk categories.
My point here is that practice-research can be many things, some of which we may want to promote and some of which we may not. Although the term practice research is a convenient way to identify a relationship between two activities, it creates the impression of a methodology or cohesive approach. I would argue it is neither. Rather, practice research will be whatever researchers call it. As with any such enterprise, there will be multiple perspectives and evaluations of the consequences of these efforts. Its value will be assessed, as is all research, by the various audiences to whom it is directed, which may or may not include those who are its intended beneficiaries. Further, given the hierarchical authority accorded to research-generated information relative to information generated by practitioners’ and clients’ experience, we must be mindful of how practice research, no matter how well intentioned, gets taken up and used.
I have proposed that the metaphoric use of a gap to describe the research-practice relationship, and its interpretation as a problem, frames our thinking about and conduct of practice research. Metaphors are inevitable and useful (even in research writing), but no particular metaphor is mandated by any situation. Experimenting with other tropes may offer novel ways of construing the research-practice relationship. For example, instead of a gap, the relationship could be viewed as a dance, a duet, a theatrical production, or a conversation. How might these metaphors invite new responses? For instance, in a theatrical production the actors can assume various roles, revise, and improvise; duets require sensitivity to the other, accommodation, and adaptation. Instead of bridging, would we be exploring coordinating, collaborating, or harmonizing?
I have also suggested that much of our thinking about practice research reflects and maintains a research standpoint. In my view we need to interrogate this standpoint from the multiple perspectives identified above. If not, then practice research can become a closed system in which it is constructed and evaluated from one perspective whose assumptions and core beliefs are incorrigible. In this case we risk re-making practice as research and reducing the possible ways of identifying, understanding and responding to social ills.
As I review what I have written I recognize two impressions. The first is the sense that I have done this before, or to quote the famous American baseball player, Yogi Berra, it feels like “déjà vu all over again.” The recurrent manifestations of the practice research issue express, in my view, the multifaceted and value-based nature of social work that positions it somewhere between science and art. Therefore, it is not surprising to find that my former involvement in the area of the practice-research relationship (see, for example, Witkin & Gottschalk, 1989; Witkin, 1991) is still somewhat relevant. It also reinforces the notion that this issue may continue to be experienced as a tension or conflict. However, I would again add that although uncomfortable at times, it is better to live with the tensions and conflicts of multiplicity than to silence or marginalize others.
My second impression is that of further complicating an already complex issue. I have offered no specific recommendations other than keeping things complicated! Some of these complications include questioning assumptions, applying multiple interpretive frameworks, and remaining open to and tolerant of the many expressions of practice-research. If anything, I would like to see greater efforts devoted to fostering dialogue among researchers, practitioners, policy makers, and service users: identifying different perspectives on problems and working out how to address them collaboratively.
Throughout this paper I have implied that an underlying theme of the practice research issue is the relevance of research to practice. But relevance, like many of the issues discussed, is in the eyes of the beholder, relative to many factors and not easily reduced to a single definition or simple formula. In my work as a social work educator I often hear students question the relevance of their academic course work. For these students relevance is a proximal issue. They are interested in how well their courses prepare them, in the present, to fulfill the responsibilities of a social worker. As an academic, I look at relevance in the long term – will students be able to critically assess new ideas that they encounter long after they have left the university? Both positions have merit. Relevance will be determined in multiple ways, by different people, with different interests, from different standpoints.
I have identified five ways of addressing the practice-research relationship and hence, practice research: to continue struggling, trying to promote certain views or approaches over others; to redefine the tension between practice and research as positive; to generate new metaphors to characterize the situation; to “trouble” the practice-research dualism by, for instance, integrating them into a single approach (a laudable, but pragmatically, conceptually and ideologically challenging project); and finally, to embrace multiplicity; that is, to accept that there are many understandings of practice and research that will influence how we view their relationship. One implication of this last position is to spend less energy on figuring out the way to do practice research and more on finding new ways to communicate our understandings and interests in order to work together in mutually agreeable ways. Practice research need not be a solution to a problem. Instead, it can be a site for dialogue among all with interests in working toward a more just and humane world.
[1] Although some members of the Committee engage in practice, it appears that their primary affiliation is with an academic institution.
[2] A Google Scholar search (June 2010) on “research-practice gap” generated 93 references where this term appeared in the title and 1210 references where it appeared anywhere in the article.
[3] This can be taken back another level, at least in the U.S., where it can be argued that government (particularly under the Bush administration) wanted to control research by defining rigor in traditional ways (i.e., experimental research) and linking it with funding priorities.
Cram, F. with Ormond, A. and Carter, L. 2004: Researching Our Relations: Reflections on Ethics and Marginalization. Paper presented at the Kamehameha Schools Research Conference on Hawaiian Well-being, Kea’au, HI, pp. 1-13. Retrieved May 1, 2010, from: www.ksbe.edu/pase/pdf/KSResearchConference/2004presentations/Cram.pdf
Fitzgibbon, D.W.M. 2007: Risk Analysis and the New Practitioner: Myth or Reality? Punishment & Society, 9 (1), pp. 87-97.
Ife, J. 2007: The New International Agendas: What Role for Social Work? Council on Social Work Education, October 2007, San Francisco, USA.
Kirk, S. A. and Reid, W. J. 2002: Science and Social Work: A Critical Appraisal. Columbia University Press.
Nespor, J. and Groenke, S. L. 2009: Ethics, Problem Framing, and Training in Qualitative Inquiry. Qualitative Inquiry, 15, pp. 996-1012.
Parton, N. and Kirk, S. A. 2009: The Nature and Purposes of Social Work. In I. Shaw, K. Briar-Lawson, J. Orme and R. Ruckdeschel (eds) The Sage Handbook of Social Work Research. Sage, pp. 23-36.
Putney, S. B. and Gruskin, S. 2002: Time, Place, and Consciousness: Three Dimensions of Meaning for US Institutional Review Boards, American Journal of Public Health 92(7), pp. 1067-1070.
Sen, A. 1979: Utilitarianism and Welfarism, The Journal of Philosophy, 76 (9), pp. 463-489.
Witkin, S. L. 1991: Empirical Clinical Practice: A Critical Analysis, Social Work, 36 (2), pp. 158-163.
Witkin, S. L. 1998: Mirror, Mirror on the Wall: Creative Tensions, the Academy, and the Field. Social Work, 43(5), pp. 389-391.
Witkin, S. L. and Gottschalk, S. 1989: Considerations in the Development of a Scientific Social Work. Journal of Sociology and Social Welfare, 16(1), pp. 19 - 29.
Author's Address:
Professor Stanley L. Witkin
University of Vermont
Department of Social Work
Burlington / Vermont
05405
USA
Email: switkin@uvm.edu
urn:nbn:de:0009-11-29248