Marcus J. Hollander
Simon Fraser University
Publications
Featured research published by Marcus J. Hollander.
The Permanente Journal | 2013
Dan MacCarthy; Rivian Weinerman; Liza Kallstrom; Helena Kadlec; Marcus J. Hollander; Scott B. Patten
OBJECTIVES An adult mental health module was developed in British Columbia to increase the use of evidence-based screening and cognitive behavioral self-management tools, as well as medications, in ways that fit within busy family physician time constraints and payment systems. The aims were to enhance family physician skills, comfort, and confidence in diagnosing and treating mental health patients using the lens of depression; to improve patient experience and partnership; to increase the use of action or care plans; and to increase the mental health literacy and comfort of medical office assistants. METHODS The British Columbia Practice Support Program delivered the module using the Plan-Do-Study-Act cycle for learning improvement. Family physicians were trained in adult mental health, and medical office assistants were trained in mental health first aid. Following initial testing, the adult mental health module was implemented across the province. RESULTS More than 1400 of the province's 3300 full-service family physicians have completed or started training. Family physicians reported high to very high success implementing self-management tools into their practices and noted the overall positive impact this approach had on patients. These measures were sustained or improved at 3 to 6 months after completion of the module. An Opening Minds Survey for health care professionals showed a decrease in stigmatizing attitudes of family physicians. CONCLUSIONS The adult mental health module is changing the way participants practice. Office-based primary mental health care can be improved through reimbursed training and support for physicians to implement practical, time-efficient tools that conform to payment schemes. The module provided behavior-changing tools that seem to be changing stigmatizing attitudes towards this patient population. This unexpected discovery has piqued the interest of stigma experts at the Mental Health Commission of Canada.
Healthcare Management Forum | 1990
Marcus J. Hollander; Alan Campbell
The extent to which health administration can be considered a profession is examined in the context of the five major models of the professions in the literature (historical, trait, functional, economic and power). A synthesis of the various models into a proposed integrative model, or typology of occupations, is also presented. Control over entry and control over conduct are identified as the two major dimensions by which the professional status of various occupations is measured. Health administration is seen as a profession in this context, and the prospects for further professionalization are discussed.
Healthcare Management Forum | 1990
Marcus J. Hollander
Two other key concepts also need to be explained. A construct is a generalization representing a complex set of principles. For example, a chair is a construct. It is understood as an abstraction. There are many different types of chairs, but most people understand the term “chair” as a concept. However, if one tries to define a chair as, for example, being an object having a seat and four legs which is used by humans for sitting on, one raises the question of what is a stool. Is a stool a chair or something conceptually different? Under what conditions can a stool function as a chair? The process of trying to define objective criteria for a construct (as we have tried to do above) is called developing an operational definition. An operational definition is one which defines objective criteria (a seat, four legs, used for sitting on, etc.) as being representative of an abstract construct. In order for an operational definition to be relevant, people have to agree that it is a valid representation of a construct. There are at least four types of validity. Content or face validity refers to the degree to which experts in a field judge that the operational definition or measurement instrument is developed in such a way that it measures what it purports to measure. Criterion-related validity refers to the extent to which the instrument or method corresponds to some valid criterion of measurement. There are two types of criterion validity: concurrent and predictive. Concurrent validity uses a contemporaneous standard, for example, measuring a watch (a new and potentially more accurate tool to measure time) against a sundial. Predictive validity uses a criterion which does not become available until a future point in time; for example, validating a system to pick winners in horse races by comparing the system's predictions to the actual winners. Construct validity refers to the degree to which an instrument or method corresponds to a theoretical construct.

Having developed a valid operational definition of a construct, the tool which measures the operational definition must be such that, when applied, it always provides the same score when it measures the same object in the same state. In other words, the measurement instrument must be reliable. There are at least two major tests of reliability. Test-retest reliability refers to an analysis in which an instrument is used to measure a standard object or condition at two points in time to determine that the instrument provides a consistent score when measuring the same thing. For example, a psychological depression scale should provide the same score at two different points in time if the subject is in the same state of depression when both measurements are taken. Inter-rater reliability refers to the degree of consistency in test scores when different raters rate the same object using the same measurement instrument. Two different nurses measuring the same person, using the same instrument, should get the same score on, for example, a functional deficit scale.

How were these key concepts used in the previous parable? The chairman of the board gave the administrator a goal which was composed of multiple and, possibly, contradictory constructs. This meant that the administrator effectively had to try to read the chairman's mind in determining the relevant constructs and acceptable operational definitions. In developing indicators, one must be careful to understand numbers within their organizational context. For example, if economy refers to lowest cost inputs, sub-unit B, with a cost per nursing hour of $17.58, was more economical than sub-unit A, with a comparative cost of $19.78. Given that sub-unit A had nurses who were at the top of the pay scale, it would be impossible, given a legal contract, to roll these costs back. In theory, it may be possible to get nurses to accept a rollback in pay at the bargaining table. However, this is very difficult. Therefore, is the hourly salary rate an appropriate measure of economy in this case? If so, how realistic is the expectation that sub-unit A can increase its economy (that is, lower its unit input cost)? If efficiency is a measure of lowest cost output, is sub-unit B more efficient at a cost per treatment hour of $33.80 compared to $46.37? If so, what can be done to increase efficiency in sub-unit A? The cost per case for regular clients could be argued to be an indicator of efficiency or effectiveness. In this paper, it is argued that it is a measure of effectiveness, as it conforms to the overall goal of providing the best care at the lowest cost. In this case, efficiency and effectiveness are in conflict, as increasing efficiency compromises effectiveness and vice versa. For example, increasing effectiveness (lowest cost per case) decreases efficiency (lowest cost per treatment hour) due to the additional time required for training and community development. The parable also points out the relationship of competing goals. Under what conditions do goals of system efficiency and effectiveness take precedence over agency goals of efficiency and effectiveness? Who defines which goals are paramount? For example, the agency was reluctant to increase its effectiveness (lowest cost per case) because it would increase its cost per treatment hour, and it did not wish to go back to its funding agency for an increase in its grant. Similarly, if system efficiency and effectiveness can be improved through the use of home care, how can these improvements be realized if the system is administratively segmented so that community and facility budgets are treated separately? With regard to validity, validity is at its core a question of consensus by relevant actors. While there are methodological techniques to test validity, in an applied setting these tests are seldom carried out. Therefore, within the applied setting of an agency, measures are valid if they are accepted as such by key actors. If the number of visits per nurse per day is seen as a valid operational definition of sub-unit efficiency, then this will form the reality that shapes decision making in that agency.

Taking Steps to Use Analysis
The following are a series of steps which can be taken by those who wish to use analysis to improve service delivery.

Assessing the Agency's Readiness for Analysis
Relevant analysis can only take place in fertile ground. This means that key actors should have an appreciation of the analysis enterprise and a desire to use analysis as a tool for decision making. While the board and administrator do not need to be researchers, they need to have a basic understanding of the critical nature of the process of translating goals to constructs, to operational definitions, and to valid and reliable measurement instruments. While the analyst can carry out the process of going from goals to measures, the analyst must have access to senior decision makers to jointly discuss and work through the process of clarifying goals and obtaining agreement on operational definitions. This should be done before the study is carried out; otherwise, as often happens, the study will be discredited because it does not answer the “real” question of concern or because there is no consensus on the validity of operational definitions. There is little which is more wasteful and disappointing than analysis which misses the mark. If there is not an adequate readiness to use research, administrators can expand their knowledge of the research process, show key actors

Healthcare quarterly | 2009
Marcus J. Hollander; Guiping Liu; Neena L. Chappell

Healthcare quarterly | 2009
Marcus J. Hollander; Helena Kadlec; Ramsay Hamdi; Angela Tessaro

Healthcare quarterly | 2007
Marcus J. Hollander; Neena L. Chappell; Michael J. Prince; Evelyn Shapiro

Healthcare quarterly | 2009
Marcus J. Hollander
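The indicator arithmetic in Hollander's parable, and the inter-rater reliability check described in the same essay, can be sketched in a few lines. This is a minimal illustration, not anything from the paper itself: the hourly cost figures are the ones quoted in the text, while the rater scores and all function names are hypothetical, added only for the example.

```python
# Indicators as defined in the essay:
#   economy    = lowest-cost inputs  (cost per nursing hour)
#   efficiency = lowest-cost output  (cost per treatment hour)

sub_units = {
    "A": {"cost_per_nursing_hour": 19.78, "cost_per_treatment_hour": 46.37},
    "B": {"cost_per_nursing_hour": 17.58, "cost_per_treatment_hour": 33.80},
}

def most_economical(units):
    """Sub-unit with the lowest input cost (the economy indicator)."""
    return min(units, key=lambda u: units[u]["cost_per_nursing_hour"])

def most_efficient(units):
    """Sub-unit with the lowest output cost (the efficiency indicator)."""
    return min(units, key=lambda u: units[u]["cost_per_treatment_hour"])

def inter_rater_agreement(rater_1, rater_2):
    """Proportion of clients to whom two raters assign the same score."""
    matches = sum(a == b for a, b in zip(rater_1, rater_2))
    return matches / len(rater_1)

print(most_economical(sub_units))   # B
print(most_efficient(sub_units))    # B
# Two hypothetical nurses scoring four clients on a functional deficit scale:
print(inter_rater_agreement([3, 2, 4, 1], [3, 2, 5, 1]))  # 0.75
```

Note that no such computation settles which indicator is paramount; as the essay argues, that remains a question of consensus among key actors.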
Healthcare quarterly | 2011
Rivian Weinerman; Helen Campbell; Magee Miller; Janet Stretch; Liza Kallstrom; Helena Kadlec; Marcus J. Hollander
Healthcare quarterly | 2010
Marcus J. Hollander; Jo Ann Miller; Helena Kadlec
Healthcare quarterly | 2010
Marcus J. Hollander; Christopher Corbett; Paul Pallan