
Publications


Featured research published by Daniel Lafond.


Human Factors | 2012

Dealing With Task Interruptions in Complex Dynamic Environments: Are Two Heads Better Than One?

Sébastien Tremblay; François Vachon; Daniel Lafond; Chelsea Kramer

Objective: This study examined whether teaming up mitigates individual vulnerability to task interruptions in complex dynamic situations. Background: Omnipresent in everyday multitasking environments, task interruptions are usually detrimental to individual performance. This is particularly crucial in dynamic command and control (C2) safety-critical contexts because of the additional challenge imposed by the continually evolving situation during the interruption. Method: We employed a firefighting microworld to simulate C2 in the context of supervisory control to examine the relative impact of interruptions on participants working in a functional dyad versus operators working alone. Results: Although task interruption was detrimental to participants’ efficacy of monitoring resources, the negative impact of interruption was reduced for those working in teams. Teaming up translated into faster resumption time, but only if both teammates were interrupted simultaneously. Interrupting only one team member was associated with increased postinterruption communications and slower resumption time. Conclusion: These findings suggest that in complex dynamic situations working in a small team confers more resistance to task interruption than working alone by virtue of the reduced individual workload typical of teamwork. The benefit of collaborative work seems nevertheless mediated by the coordination and communication overhead associated with teamwork. Application: The present findings have practical implications for operators dealing with unexpected events such as task interruptions in C2 environments.


Small Group Research | 2011

Evidence of Structure-Specific Teamwork Requirements and Implications for Team Design

Daniel Lafond; Marie-Eve Jobidon; Caroline Aubé; Sébastien Tremblay

This article reports an experiment using the C3Fire microworld—a functional simulation of command and control in a complex and dynamic environment—in which 24 three-person teams were organized according to either a functional or multifunctional allocation of roles. We proposed a quantitative approach for estimating teamwork requirements and comparing them across team structures. Two multiple linear regression models were derived from the experimental data, one for each team structure. Both models provided excellent fits to the data. The regression coefficients revealed key similarities and some major differences across team structures. The two most important predictors were monitoring effectiveness and coordination effectiveness regardless of team structure. Communication frequency was a positive predictor of performance in the functional structure but a negative predictor in the multifunctional structure. In regard to communication content, the proportion of goal-oriented communications was found to be a positive predictor of team performance in functional teams and a weak negative predictor of team performance in multifunctional teams. Mental load was a useful predictor in functional teams but not in multifunctional teams. Results show that this method is useful for estimating teamwork requirements and support the claim that teamwork requirements can vary as a function of team structure.
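To make the modeling approach concrete, here is a minimal sketch of fitting one multiple linear regression per team structure and comparing standardized coefficients, in the spirit of the abstract. The data file and column names (monitoring, coordination, communication frequency, goal-oriented communications, mental load, performance) are hypothetical placeholders, not the authors' materials.

```python
# Minimal sketch (not the authors' code): one regression per team structure.
import pandas as pd
import statsmodels.api as sm

PREDICTORS = ["monitoring", "coordination", "comm_frequency",
              "goal_oriented_comm", "mental_load"]

def fit_structure_model(df: pd.DataFrame, structure: str):
    """Regress team performance on teamwork predictors for one structure."""
    sub = df[df["structure"] == structure]
    # Standardize predictors so coefficients are comparable across them.
    X = (sub[PREDICTORS] - sub[PREDICTORS].mean()) / sub[PREDICTORS].std()
    y = sub["performance"]
    return sm.OLS(y, sm.add_constant(X)).fit()

teams = pd.read_csv("c3fire_teams.csv")  # hypothetical data file
for structure in ["functional", "multifunctional"]:
    model = fit_structure_model(teams, structure)
    print(structure, "R^2 =", round(model.rsquared, 2))
    print(model.params.round(2))
```

Comparing the two sets of coefficients (rather than a single pooled model) is what lets structure-specific teamwork requirements show up, as in the reported sign reversal for communication frequency.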


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2009

Decision Analysis Using Policy Capturing and Process Tracing Techniques in a Simulated Naval Air-Defense Task

Daniel Lafond; Julie Champagne; Guillaume Hervet; Jean-François Gagnon; Sébastien Tremblay; Robert Rousseau

Research in cognitive systems engineering (CSE) and decision support requires an understanding of the psychological processes involved in a given task. The purpose of the present study is to investigate how policy capturing and process tracing may help understand the decision mechanisms involved in a naval air-defense task and characterize how human decision making effectiveness can be improved. We report results from a study in which participants performed a threat evaluation and weapons assignment task within a naval air-defense microworld. Policy capturing and process tracing techniques provide both converging perspectives and complementary insights into complex decision making processes.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2007

Assessing temporal support for dynamic decision making in C2

Robert Rousseau; Sébastien Tremblay; Daniel Lafond; François Vachon; Richard Breton

Temporal awareness is key to successful decision making in a wide range of command and control situations, yet Decision Support Systems (DSS) for time-critical decisions provide little explicit support for maintaining temporal awareness. In the context of simulated weapon-target scheduling, the present study compared the decision support gained from two display formats: a typical geospatial display and a temporal display. The results demonstrated that the temporal display facilitates scheduling performance, though its beneficial impact seems to require greater familiarization.


IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support | 2011

Supporting situation awareness: A tradeoff between benefits and overhead

François Vachon; Daniel Lafond; Benoît R. Vallières; Robert Rousseau; Sébastien Tremblay

The prevalence of surveillance and information collection technologies provides decision-makers with a greater volume and complexity of information to monitor, and on which to base decisions, than ever before. In this increasingly dynamic and information-rich environment, the role of decision support systems (DSS) in augmenting cognition and situation awareness (SA) is becoming crucial. However, unless a better understanding is gained of the factors that promote SA without interfering with other critical cognitive functions, the design and development of such technology may undermine rather than enhance the desired effect. The present study investigates how DSS designed to support particular aspects of SA may affect task performance. In the context of a functional computer-controlled simulation of single-ship naval anti-air warfare, a baseline condition was compared to two conditions in which a DSS was integrated into the original interface to support different facets of SA. Participants in the two DSS conditions showed increased SA, as measured by the QUASA technique, compared to those in the control condition. Despite this benefit, the two DSS actually led to reduced performance, as indexed by defense effectiveness. These findings suggest that the benefits of DSS in terms of SA may be accompanied by an overhead with adverse effects on task performance, particularly in situations of high cognitive load and time constraints. This calls for more holistic evaluations of the cognitive impacts of decision support technologies and the development of methods to simultaneously address competing constraints when designing DSS.


Computational Intelligence and Security | 2011

Complex decision making experimental platform (CODEM): A counter-insurgency scenario

Daniel Lafond; Michel B. DuCharme

The complex decision making experimental platform (CODEM) is intended as a shareable research tool to stimulate multidisciplinary research on complex dynamic situation management and as an environment for training and testing cognitive readiness. The experimenter can set general parameters, configure the interface, specify the model, insert events and define the resources and capabilities of each player using the scenario development tool. No programming skills are required. Task complexity can be varied by introducing feedback loops, delayed effects, time pressure, situational uncertainty, adjusting model transparency and changing the relationships between system elements. CODEM creates detailed logs of events and actions essential for cognitive process tracing and the evaluation of decision making effectiveness. The first task designed with CODEM is a counter-insurgency scenario in which a coalition force seeks to stabilize a failing state. A genetic algorithm is used to estimate the best strategy in that scenario for comparison with human results. An adversarial version also allows insurgents to be controlled by a human opponent (or a red team) rather than an artificial agent. CODEM can be used as a cognitive engineering testbed and as a training environment for improving decision making and adaptation skills in complex situations.
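The abstract mentions using a genetic algorithm to estimate the best strategy in the counter-insurgency scenario as a benchmark for human performance. The sketch below is only an assumption about what such a search could look like: a generic genetic algorithm over a resource-allocation vector, with a toy scoring function standing in for the CODEM scenario simulation.

```python
# Minimal genetic-algorithm sketch (illustrative, not CODEM's actual code):
# evolve a resource-allocation strategy against a scenario scoring function.
import random

N_ACTIONS = 10          # hypothetical number of allocation decisions
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.1

def score(strategy):
    """Placeholder for the scenario simulation returning a stability score."""
    return -sum((a - 0.5) ** 2 for a in strategy)   # toy objective

def mutate(strategy):
    return [min(1.0, max(0.0, a + random.gauss(0, 0.1)))
            if random.random() < MUTATION_RATE else a for a in strategy]

def crossover(a, b):
    cut = random.randrange(1, N_ACTIONS)
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(N_ACTIONS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=score, reverse=True)
    parents = population[: POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=score)
print("best estimated strategy:", [round(a, 2) for a in best])
```

The estimated optimum then serves as a yardstick against which human (or red-team) strategies can be compared.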


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2010

The Ubiquitous Nature of the Hebb Repetition Effect: Error Learning Mistaken for the Absence of Sequence Learning

Daniel Lafond; Sébastien Tremblay; Fabrice B. R. Parmentier

Sequence learning is essential in cognition and underpins activities such as language and skill acquisition. One classical demonstration of sequence learning is the Hebb repetition effect, whereby serial recall improves across repetitions of a repeated list relative to random lists. When addressing the question of which mechanism underlies the effect, the traditional approach is to prevent the action of processes thought to be responsible for sequence learning: if the typical Hebb repetition effect is reduced, researchers conclude that these processes are key to the effect. By reanalyzing the data of F. B. R. Parmentier, M. T. Maybery, M. Huitson, and D. M. Jones (2008), who reported no Hebb effect for sequences of auditory-spatial stimuli, we revealed that error learning can be mistaken for the absence of sequence learning: incorrect responses are reproduced increasingly often over repetitions. Our findings suggest that the Hebb repetition effect can be associated with response learning as well as stimulus processing.
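The core diagnostic here is whether the same incorrect responses recur across repetitions of the Hebb list. A minimal sketch of that check is shown below; the data file and column names are hypothetical, not the published dataset or analysis script.

```python
# Illustrative sketch: do errors repeat across repetitions (error learning)?
import pandas as pd

# Hypothetical file with columns: participant, repetition, position,
# response, correct_item (one row per recalled item).
recall = pd.read_csv("hebb_recall.csv")
errors = recall[recall["response"] != recall["correct_item"]]

def error_repetition_rate(group: pd.DataFrame) -> float:
    """Proportion of errors that repeat an earlier error at the same position."""
    seen, repeated, total = set(), 0, 0
    for _, row in group.sort_values("repetition").iterrows():
        key = (row["position"], row["response"])
        if key in seen:
            repeated += 1
        seen.add(key)
        total += 1
    return repeated / total if total else float("nan")

print(errors.groupby("participant").apply(error_repetition_rate).mean())
```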


Journal of Cognitive Engineering and Decision Making | 2012

Support Requirements for Cognitive Readiness in Complex Operations

Daniel Lafond; Michel B. DuCharme; Jean-François Gagnon; Sébastien Tremblay

The authors report two experiments studying the requirements for effective decision making in a complex environment. The focus lies on three components of individual cognitive readiness: situation awareness (SA), problem solving, and decision making. Participants performed a simulated society management task in which they could allocate resources to stabilize a national crisis involving multiple interrelated factors (political, economic, environmental, and social). A striking aspect of this simulation is that even though information about the causes and effects within the system is available, most individuals fail to bring the system to the targeted state because of unintended consequences of their decisions. The experiments test the impact of two cognitive support tools designed to improve anticipation of future outcomes. Results show that supporting short-term anticipation (with perfectly accurate projections) was insufficient to improve effectiveness, but supporting long-term anticipation (with approximate projections) successfully improved performance in this complex environment. We conclude with a review of requirements that training and technological support should address to augment individual cognitive readiness for operations in complex environments and propose an extension to SA theory by conceptualizing a Level 4 SA (long-term projection) that may be particularly important to overcome the “wall of complexity.”
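The contrast between short-term (accurate, one-step) and long-term (approximate, multi-step) anticipation can be illustrated with a toy projection over an interrelated-factor system. The system matrix, factors, and noise level below are my own illustration of the idea, not the authors' simulation model.

```python
# Toy sketch: one-step vs. approximate multi-step projection of a 4-factor
# system (political, economic, environmental, social). Values are invented.
import numpy as np

A = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.0, 0.8, 0.1, 0.1],
              [0.1, 0.0, 0.9, 0.0],
              [0.0, 0.1, 0.1, 0.8]])
state = np.array([0.2, 0.5, 0.4, 0.3])
action_effect = np.array([0.05, 0.0, 0.0, 0.02])   # effect of one allocation

def project(state, steps, noise=0.0):
    """Roll the system model forward; noise > 0 makes the projection approximate."""
    s = state.copy()
    for _ in range(steps):
        s = A @ (s + action_effect) + np.random.normal(0, noise, size=s.shape)
    return s

short_term = project(state, steps=1)               # accurate one-step look-ahead
long_term = project(state, steps=10, noise=0.01)   # approximate long-range look-ahead
print("next state:", short_term.round(2))
print("state after 10 steps (approx.):", long_term.round(2))
```

Even a rough multi-step look-ahead of this kind exposes delayed and indirect consequences that a perfectly accurate one-step projection cannot, which is the intuition behind the proposed Level 4 SA.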


Journal of Cognitive Engineering and Decision Making | 2017

Judgment Analysis in a Dynamic Multitask Environment: Capturing Nonlinear Policies Using Decision Trees

Daniel Lafond; Benoît Roberge-Vallières; François Vachon; Sébastien Tremblay

Policy capturing is a judgment analysis method that typically uses linear statistical modeling to estimate expert judgments. A variant to this technique is to capture decision policies using data-mining algorithms designed to handle nonlinear decision rules, missing attributes, and noisy data. In the current study, we tested the effectiveness of a decision-tree induction algorithm and an instance-based classification method for policy capturing in comparison to the standard linear approach. Decision trees are relevant in naturalistic decision-making contexts since they can be used to represent “fast-and-frugal” judgment heuristics, which are well suited to describe human cognition under time pressure. We examined human classification behavior using a simulated naval air defense task in order to empirically compare the C4.5 decision-tree algorithm, the k-nearest neighbors algorithm, and linear regression on their ability to capture individual decision policies. Results show that C4.5 outperformed the other methods in terms of goodness of fit and cross-validation accuracy. Decision-tree models of individuals’ judgment policies actually classified contacts more accurately than their human counterparts, resulting in a threefold reduction in error rates. We conclude that a decision-tree induction algorithm can yield useful models for training and decision support applications, and we discuss the application of judgmental bootstrapping in real time in dynamic environments.
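The model comparison described in the abstract can be sketched with off-the-shelf tools. The assumptions here: scikit-learn's CART decision tree stands in for C4.5, logistic regression stands in for the linear policy-capturing model, and the contact attributes and threat judgments are randomly generated placeholders rather than the study's data.

```python
# Sketch of the policy-capturing model comparison via cross-validation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                          # placeholder contact attributes
y = (X[:, 0] + 0.5 * (X[:, 1] > 0) > 0).astype(int)    # placeholder judgments

models = {
    "decision tree (CART stand-in for C4.5)": DecisionTreeClassifier(max_depth=4),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "linear (logistic) baseline": LogisticRegression(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```

Cross-validated accuracy, rather than fit alone, is the relevant comparison when the captured policy is meant to be reused for training or decision support.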


IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support | 2016

Comparing methods for assessing operator functional state

Olivier Gagnon; Marc Parizeau; Daniel Lafond; Jean-François Gagnon

The assessment of an operator's functional state (i.e., the multidimensional pattern of human psychophysiological conditions that mediates performance) has great potential for increasing the safety and reliability of critical systems. However, live monitoring of functional state using physiological and behavioral data still faces several challenges before achieving the level of precision required in many operational contexts. One open question is the level of granularity of the models: is a general model sufficient, or should subject-specific models be trained to ensure high accuracy? Another challenge concerns the formalization of a valid ground truth for training classifiers, which is critical in order to train models that are operationally relevant. This paper introduces the Decontextualized Dynamic Performance (DDP) metric, which allows models to be trained simultaneously on different tasks using machine learning algorithms, and reports the performance of various classification algorithms at different levels of granularity: a general model, task-specific models, and subject-specific models. Results show that the classification methods do not lead to statistically different performance and that the predictive accuracy of subject-specific and task-specific models was comparable to that of a general model. We also compared various time-window sizes for the new DDP metric and found that performance degraded as the window size increased.
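The granularity comparison could be set up along the lines sketched below. The feature table, column names, DDP-style labels, and the choice of a random forest classifier are all assumptions for illustration; the paper does not prescribe this pipeline, and each subject or task group is assumed to contain enough labeled windows for cross-validation.

```python
# Sketch: general vs. subject-specific vs. task-specific functional-state models.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("ofs_features.csv")        # hypothetical feature table
features = [c for c in data.columns if c.startswith("feat_")]

def mean_cv_accuracy(X, y):
    return cross_val_score(RandomForestClassifier(n_estimators=100),
                           X, y, cv=5).mean()

general = mean_cv_accuracy(data[features], data["ddp_label"])   # one model for everyone
by_subject = data.groupby("subject").apply(
    lambda g: mean_cv_accuracy(g[features], g["ddp_label"])).mean()
by_task = data.groupby("task").apply(
    lambda g: mean_cv_accuracy(g[features], g["ddp_label"])).mean()

print(f"general: {general:.2f}  subject-specific: {by_subject:.2f}  "
      f"task-specific: {by_task:.2f}")
```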

Collaboration


Daniel Lafond's top co-authors:

Helen M. Hodgetts, Cardiff Metropolitan University
Richard Breton, Defence Research and Development Canada