Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Logan Michael Yliniemi is active.

Publication


Featured research published by Logan Michael Yliniemi.


AI Magazine | 2014

Multirobot Coordination for Space Exploration

Logan Michael Yliniemi; Adrian K. Agogino; Kagan Tumer

Teams of artificially intelligent planetary rovers have tremendous potential for space exploration, allowing for reduced cost, increased flexibility, and increased reliability. However, having these multiple autonomous devices acting simultaneously leads to a problem of coordination: to achieve the best results, they should work together. This is not a simple task. Due to the large distances and harsh environments, a rover must be able to perform a wide variety of tasks with a wide variety of potential teammates in uncertain and unsafe environments. Directly coding all the necessary rules that can reliably handle all of this coordination and uncertainty is problematic. Instead, this article examines tackling this problem through the use of coordinated reinforcement learning: rather than being programmed what to do, the rovers iteratively learn through trial and error to take actions that lead to high overall system return. To allow for coordination, yet allow each agent to learn and act independently, we employ state-of-the-art reward shaping techniques. This article uses visualization techniques to break down complex performance indicators into an accessible form, and identifies key future research directions.
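As a rough illustration of the reward-shaping idea described above, the sketch below computes a difference reward for one rover: the system return with the rover acting, minus the return with its action replaced by a null action. The system-return function, the null action, and all names are illustrative assumptions, not the article's exact formulation.

def system_return(joint_actions):
    # Placeholder global objective; a real rover domain would score exploration
    # coverage, observation value, etc.
    return sum(joint_actions.values())

def difference_reward(agent_id, joint_actions, null_action=0.0):
    # Counterfactual: the same joint action with this agent's contribution removed.
    counterfactual = dict(joint_actions)
    counterfactual[agent_id] = null_action
    return system_return(joint_actions) - system_return(counterfactual)

Each rover can then learn from its own difference reward, which stays aligned with the overall system return while filtering out the noise introduced by its teammates' actions.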


Simulated Evolution and Learning | 2014

PaCcET: An Objective Space Transformation to Iteratively Convexify the Pareto Front

Logan Michael Yliniemi; Kagan Tumer

In multi-objective problems, it is desirable to use a fast algorithm that gains coverage over large parts of the Pareto front. The simplest multi-objective method is a linear combination of objectives given to a single-objective optimizer. However, it is proven that this method cannot support solutions on concave areas of the Pareto front: a point on a convex part of the Pareto front or an extreme solution is always more desirable to the optimizer. This is a significant drawback of the linear combination. In this work we provide the Pareto Concavity Elimination Transformation (PaCcET), a novel, iterative objective space transformation that allows a linear combination in the transformed objective space to find solutions on concave areas of the Pareto front in the original objective space. The transformation ensures that an optimizer will always value a non-dominated solution over any dominated solution, and it can be used with any single-objective optimizer. We demonstrate the efficacy of this method on two multi-objective benchmark problems with known concave Pareto fronts. Instead of the poor coverage produced by a simple linear sum, PaCcET produces a superior spread across the Pareto front, including concave areas, similar to that discovered by more computationally expensive multi-objective algorithms like SPEA2 and NSGA-II.
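For context, here is a minimal sketch of the two ingredients the abstract contrasts: the weighted-sum scalarization handed to a single-objective optimizer, and a Pareto-dominance check. This is not the PaCcET transformation itself; the weights and objective vectors are illustrative, and minimization is assumed for all objectives.

def linear_scalarization(objectives, weights):
    # A single-objective optimizer minimizing this weighted sum can only reach
    # solutions on convex regions (or extremes) of the Pareto front.
    return sum(w * f for w, f in zip(weights, objectives))

def dominates(a, b):
    # True if solution a Pareto-dominates solution b (all objectives minimized).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

PaCcET's contribution is the iterative transformation of the objective space so that the same cheap weighted sum, applied to the transformed objectives, also values non-dominated points lying on concave regions.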


Genetic and Evolutionary Computation Conference | 2014

Evolutionary agent-based simulation of the introduction of new technologies in air traffic management

Logan Michael Yliniemi; Adrian K. Agogino; Kagan Tumer

Accurate simulation of the effects of integrating new technologies into a complex system is critical to the modernization of our antiquated air traffic system, in which many layers of interacting procedures, controls, and automation are all designed to cooperate with human operators. Adding even simple new technologies may result in unexpected emergent behavior due to complex human/machine interactions. One approach is to create high-fidelity human models, drawn from the field of human factors, that can simulate a rich set of behaviors. However, such models are difficult to produce, especially ones that capture the unexpected emergent behavior arising from many human operators interacting simultaneously within a complex system. Instead of engineering complex human models, we directly model the emergent behavior by evolving goal-directed agents representing human users. Using evolution, we can predict how an agent representing a human user reacts given his or her goals. In this paradigm, each autonomous agent in the system pursues its individual goals, and the behavior of the system emerges from the interactions, foreseen or unforeseen, between the agents. We show that this method reflects the integration of new technologies in a historical case, and we apply the same methodology to a possible future technology.


Simulated Evolution and Learning | 2014

Multi-objective Multiagent Credit Assignment Through Difference Rewards in Reinforcement Learning

Logan Michael Yliniemi; Kagan Tumer

Multiagent systems have had a powerful impact on the real world. Many of the systems the field studies (air traffic, satellite coordination, rover exploration) are inherently multi-objective, but they are often treated as single-objective problems within the research. A very important concept within multiagent systems is that of credit assignment: clearly quantifying an individual agent's impact on the overall system performance. In this work we extend the concept of credit assignment to multi-objective problems, broadening the traditional multiagent learning framework to account for multiple objectives. We show in two domains that by leveraging established credit assignment principles in a multi-objective setting, we can improve performance by (i) increasing learning speed by up to 10x, (ii) reducing sensitivity to unmodeled disturbances by up to 98.4%, and (iii) producing solutions that dominate all solutions discovered by a traditional team-based credit assignment schema. Our results suggest that in a multiagent multi-objective problem, proper credit assignment is as important to performance as the choice of multi-objective algorithm.
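Under the usual difference-reward formulation (our notation, not necessarily the paper's), the multi-objective extension simply computes one counterfactual-shaped reward per objective, as in the illustrative sketch below; the objective functions and null action are placeholders.

def multi_objective_difference_rewards(agent_id, joint_actions, objectives, null_action=0.0):
    # objectives is a list of system-return functions, one per objective.
    counterfactual = dict(joint_actions)
    counterfactual[agent_id] = null_action  # replace this agent's action with a null action
    return [G(joint_actions) - G(counterfactual) for G in objectives]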


Genetic and Evolutionary Computation Conference | 2017

Monopolies can exist in unmanned airspace

Scott Forer; Logan Michael Yliniemi

With the increased use of unmanned aerial vehicles (UAVs) for both commercial and private purposes comes the inevitability that over-saturated airspaces will exist. If an airspace becomes congested and difficult to traverse, the possibility of an entity abusing, controlling, or even monopolizing the space becomes extremely dangerous. In this paper we show that this type of monopolization can exist. We use cooperative coevolutionary algorithms to examine multiple teams of UAVs coexisting in the same airspace. Consider two equally sized teams, A and B: if Team A chooses to cooperate with Team B and considers Team B's losses as its own, the system can work fluidly. If Team A chooses to focus on its own concerns while ignoring impacts on Team B, Team B can suffer a 99% increase in midair conflicts. If Team A chooses to actively prevent Team B from fluid operation, Team B's number of midair conflicts can suffer a 394% increase.
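A hedged sketch of the kind of cooperative-coevolution evaluation step such a study relies on: each UAV keeps its own policy population, randomly assembled teams are simulated together, and each sampled policy is credited with the resulting score. The simulator, mutation operator, and Policy object with a fitness field are assumptions made for illustration.

import random

def ccea_generation(populations, simulate, mutate, k_evals=10):
    # populations: one list of candidate policies per UAV; each policy carries a
    # mutable .fitness attribute (an assumption of this sketch).
    for pop in populations:
        for policy in pop:
            policy.fitness = float("-inf")                   # reset before this generation
    for _ in range(k_evals):
        team = [random.choice(pop) for pop in populations]   # one policy per UAV
        scores = simulate(team)                              # e.g. negated midair-conflict counts
        for policy, score in zip(team, scores):
            policy.fitness = max(policy.fitness, score)      # best-of-k credit assignment
    for pop in populations:
        pop.sort(key=lambda p: p.fitness, reverse=True)      # fitter policies first
        survivors = pop[: len(pop) // 2]
        pop[:] = survivors + [mutate(p) for p in survivors]  # refill with mutated copies

Whether Team A's simulate() scores also account for Team B's conflicts is exactly the cooperation choice the abstract varies.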


Knowledge Engineering Review | 2017

Autonomous Unmanned Aerial Vehicle (UAV) landing in windy conditions with MAP-Elites

Sierra A. Adibi; Scott Forer; Jeremy Fries; Logan Michael Yliniemi

With the recent increase in the use of unmanned aerial vehicles (UAVs) comes a surge of inexperienced aviators who may not have the requisite skills to react appropriately if weather conditions change quickly while their aircraft are in flight. This creates a dangerous situation in which the pilot cannot safely land the vehicle. In this work we examine the use of the MAP-Elites algorithm to search for sets of weights for use in an artificial neural network. This neural network directly controls the thrust and pitching torque of a simulated 3-degree-of-freedom (2 linear, 1 rotational) fixed-wing UAV, with the goal of obtaining a smooth landing profile. We then examine the use of the same algorithm in high-wind conditions, with gusts up to 30 knots. Our results show that MAP-Elites is an effective method for searching for control policies, and that by evolving two separate controllers and switching which controller is active when the UAV is near ground level, we can produce a wider variety of phenotypic behaviors. The best controllers achieved landing at a vertical speed of −1 and at an angle of approach of
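Below is a minimal sketch of a MAP-Elites loop over network weight vectors, assuming an evaluate() that returns a landing fitness plus a behaviour descriptor (for example, touchdown speed and approach angle); the binning resolution and all function names are illustrative assumptions.

import random

def map_elites(evaluate, random_weights, mutate, n_init=100, n_iters=10000):
    archive = {}  # descriptor bin -> (fitness, weights)

    def try_insert(weights):
        fitness, descriptor = evaluate(weights)
        key = tuple(round(d, 1) for d in descriptor)       # coarse behaviour bins
        if key not in archive or fitness > archive[key][0]:
            archive[key] = (fitness, weights)

    for _ in range(n_init):
        try_insert(random_weights())                        # seed the map
    for _ in range(n_iters):
        _, parent = random.choice(list(archive.values()))   # pick a random elite
        try_insert(mutate(parent))                          # fill or improve cells
    return archive

The returned archive holds one elite controller per behaviour bin, which is what yields the variety of landing profiles the abstract describes.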


Knowledge Engineering Review | 2017

Preface to the special issue: adaptive and learning agents

Daan Bloembergen; Tim Brys; Logan Michael Yliniemi

Adaptive and learning agents are able to optimise their behaviour in unknown and potentially changing environments, while using previous experience to improve their performance with respect to some evaluation measure. The community of Adaptive and Learning Agents (ALA) studies systems that are capable of acting autonomously and adapting to their surroundings. While the development of a single learning agent may already present a serious challenge, current research frontiers also have a large focus on systems where multiple agents interact in a shared environment. Often, these systems are inherently decentralised, rendering a centralised single-agent learning approach infeasible. Examples of such systems include multi-robot set-ups, decentralised network routing, distributed load-balancing, electronic auctions, traffic control, and many others. In multiagent settings, agents not only have to deal with a dynamic environment, but also with other agents that act, learn, and change over time. When agent objectives are aligned and all agents try to achieve a common goal, coordination among the agents is still required to reach optimal results. When agents have conflicting goals, a clear optimal solution may no longer exist and an equilibrium between agent behaviours is generally sought. These issues have given rise to an important research track studying coordination mechanisms in multiagent learning. In addition, current research within the ALA community focuses on how agents can share experience with other agents, or how human operators can guide the learning process. Work in this direction falls under the scope of transfer learning, human-agent interaction, teaching, reward shaping, and advice. This special issue contains selected papers from the 2016 Adaptive and Learning Agents (ALA) workshop, held as a satellite workshop of the Autonomous Agents and MultiAgent Systems conference (AAMAS) in Singapore. The goal of the ALA


Genetic and Evolutionary Computation Conference | 2016

Multiobjective Neuroevolutionary Control for a Fuel Cell Turbine Hybrid Energy System

Mitchell K. Colby; Logan Michael Yliniemi; Paolo Pezzini; David Tucker; Kenneth M. Bryden; Kagan Tumer

Increased energy demands are driving the development of new power generation technologies with high efficiency. Direct-fired fuel cell turbine hybrid systems are one such development; they have the potential to dramatically increase power generation efficiency, respond quickly to transient loads, offer general operating flexibility, and provide fast start-up times. However, traditional control techniques are often inadequate in these systems because of extremely high nonlinearities and coupling between system parameters. In this work, we develop a multi-objective neural network controller via neuroevolution and the Pareto Concavity Elimination Transformation (PaCcET). To keep the training process computationally tractable, we develop a computationally efficient plant simulator based on physical plant data, allowing for rapid fitness assignment. Results demonstrate that the multi-objective algorithm is able to develop a Pareto front of control policies that represent tradeoffs between tracking desired turbine speed profiles and minimizing transient operation of the fuel cell.
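One plausible way to score a candidate controller on the two objectives the abstract names, sketched under assumed interfaces (the reduced-order plant_step simulator, the state keys, and the objective definitions are all illustrative assumptions, not the paper's implementation):

def evaluate_controller(controller, plant_step, speed_profile, initial_state):
    # Roll the controller out on the fast plant simulator and accumulate
    # (1) turbine-speed tracking error and (2) fuel cell transient activity.
    state = dict(initial_state)
    tracking_error, fc_transient = 0.0, 0.0
    prev_fc = state["fuel_cell_flow"]
    for target_speed in speed_profile:
        action = controller(state)                          # neural network output
        state = plant_step(state, action)
        tracking_error += abs(state["turbine_speed"] - target_speed)
        fc_transient += abs(state["fuel_cell_flow"] - prev_fc)
        prev_fc = state["fuel_cell_flow"]
    return tracking_error, fc_transient                     # both minimized, e.g. via PaCcET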


Adaptive Agents and Multi-Agent Systems | 2016

Using Awareness to Promote Richer, More Human-Like Behaviors in Artificial Agents

Logan Michael Yliniemi; Kagan Tumer

The agents community has produced a wide variety of compelling solutions for many real-world problems, and yet there is still a significant disconnect between the behaviors that an agent can learn and the rich behaviors exhibited by humans. This problem exists both for agents interacting solely with an environment and for agents interacting with other agents. The solutions created to date are typically good at solving a single, well-defined problem with a particular objective, but they lack generalizability.


Genetic and Evolutionary Computation Conference | 2015

Complete Multi-Objective Coverage with PaCcET

Logan Michael Yliniemi; Kagan Tumer

The Pareto Concavity Elimination Transformation (PaCcET) is a promising new development in multi-objective optimization. It transforms the objective space so that a computationally cheap linear combination of objectives can attain Pareto-optimal points, even on concave regions of the front. In this work we propose a simple extension to the PaCcET framework that biases the optimization process toward less-covered areas of the Pareto front.

Collaboration


Dive into Logan Michael Yliniemi's collaborations.

Top Co-Authors


Kagan Tumer

Oregon State University

Drew Wilson

Austin Peay State University

David Tucker

United States Department of Energy

Paolo Pezzini

United States Department of Energy
