
Publication


Featured research published by Naveen Sundar Govindarajulu.


Philosophy and Theory of Artificial Intelligence | 2013

Toward a Modern Geography of Minds, Machines, and Math

Selmer Bringsjord; Naveen Sundar Govindarajulu

We herein report on a project devoted to charting some of the most salient points in a modern “geography” of minds, machines, and mathematics; the project is funded by the John Templeton Foundation, and is being carried out in Bringsjord’s AI and Reasoning Laboratory.


A Construction Manual for Robots' Ethical Systems | 2015

Ethical Regulation of Robots Must Be Embedded in Their Operating Systems

Naveen Sundar Govindarajulu; Selmer Bringsjord

The authors argue that unless computational deontic logics (or, for that matter, any other class of systems for mechanizing moral and/or legal principles) for achieving ethical control of future AIs and robots are woven into the operating-system level of such artifacts, such control will be at best dangerously brittle.


International Joint Conference on Artificial Intelligence | 2017

On Automating the Doctrine of Double Effect

Naveen Sundar Govindarajulu; Selmer Bringsjord

The doctrine of double effect (DDE) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate DDE. We briefly present DDE, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: One can use it to build DDE-compliant autonomous systems from scratch; or one can use it to verify that a given AI system is DDE-compliant, by applying a DDE layer to an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the DDE layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we end by sketching initial work on how one can apply our DDE layer to the STRIPS-style planning model, and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful for other researchers in incorporating DDE in their own frameworks.
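To make the "DDE layer" idea concrete, here is a minimal, hypothetical sketch in Python of the verification mode the abstract describes: an existing system exposes a few parameters per candidate action, and a separate checker tests them against simplified DDE clauses (the harm is not intended, the harm is not a means to the goal, and the good outweighs the bad). All names and the exact clauses are illustrative assumptions, not the paper's deontic cognitive event calculus.

```python
# Hypothetical sketch of a DDE verification layer over an existing system.
# The Action fields stand in for the "few parameters" a model would expose.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    good_utility: float          # total utility of the good effects
    bad_utility: float           # total (negative) utility of the bad effects
    bad_effect_is_means: bool    # is the harm instrumental to achieving the goal?
    bad_effect_intended: bool    # is the harm itself intended?

def dde_permits(a: Action) -> bool:
    """Simplified DDE check: the harm must be neither intended nor used as
    a means, and the good effect must outweigh the harm overall."""
    return (not a.bad_effect_intended
            and not a.bad_effect_is_means
            and a.good_utility + a.bad_utility > 0)

# Trolley-style illustration: diverting causes harm as a side effect,
# while pushing uses the harm as a means to the good outcome.
divert = Action("divert", good_utility=5, bad_utility=-1,
                bad_effect_is_means=False, bad_effect_intended=False)
push = Action("push", good_utility=5, bad_utility=-1,
              bad_effect_is_means=True, bad_effect_intended=False)

print(dde_permits(divert))  # True
print(dde_permits(push))    # False
```

Note that the checker never inspects how the underlying system chose the action; like the software verifier the abstract analogizes to, it only examines the exposed parameters.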


Archive | 2016

Leibniz’s Art of Infallibility, Watson, and the Philosophy, Theory, and Future of AI

Selmer Bringsjord; Naveen Sundar Govindarajulu

When IBM’s Deep Blue beat Kasparov in 1997, Bringsjord (Technol Rev 101(2):23–28, 1998) complained that despite the impressive engineering that made this victory possible, chess is simply too easy a challenge for AI, given the full range of what the rational side of the human mind can muster. However, arguably everything changed in 2011. For in that year, playing not a simple board game, but rather an open-ended game based in natural language, IBM’s Watson trounced the best human Jeopardy! players on the planet. And what does Watson’s prowess tell us about the philosophy, theory, and future of AI? We present and defend synoptic answers to these questions, ones based upon Leibniz’s seminal writings on a universal logic, on a Leibnizian “three-ray” space of computational formal logics that, inspired by those writings, we have invented, and on a “scorecard” approach to assessing real AI systems based in turn on that three-ray space.


Archive | 2015

How Models of Creativity and Analogy Need to Answer the Tailorability Concern

John Licato; Selmer Bringsjord; Naveen Sundar Govindarajulu

Analogy is a major component of human creativity. Tasks from the ability to generate new stories to the ability to create new and insightful mathematical theorems can be shown to at least partially be explainable in terms of analogical processes. Artificial creativity and AGI systems, then, require powerful analogical subsystems—or so we will soon briefly argue. It quickly becomes obvious that a roadblock to such a use for analogical systems is a common critique that currently applies to every one in existence: the so-called “Tailorability Concern” (TC). Unfortunately, TC currently lacks a canonical formalization, and as a result the precise conditions that must be satisfied by an analogical system intended to answer TC are unclear. We remedy this problem by developing a still-informal but clear formulation of what it means to successfully answer TC, and offer guidelines for analogical systems that hope to progress further toward AGI.


Artificial General Intelligence | 2014

Toward a Formalization of QA Problem Classes

Naveen Sundar Govindarajulu; John Licato; Selmer Bringsjord

How tough is a given question-answering problem? Answers to this question differ greatly among different researchers and groups. To begin rectifying this, we start by giving a quick, simple, propaedeutic formalization of a question-answering problem class. This formalization is just a starting point and should let us answer, at least roughly, this question: What is the relative toughness of two unsolved QA problem classes?


International Conference on Unconventional Computing and Natural Computation | 2013

Small Steps toward Hypercomputation via Infinitary Machine Proof Verification and Proof Generation

Naveen Sundar Govindarajulu; John Licato; Selmer Bringsjord

After setting a context based on two general points (one, that humans appear to reason in infinitary fashion; and two, that actual hypercomputers aren’t currently available to directly model and replicate such infinitary reasoning), we set a humble engineering goal of taking initial steps toward a computing machine that can reason in infinitary fashion. The initial steps consist in our outline of automated proof-verification and proof-discovery techniques for theorems independent of PA that seem to require an understanding and use of infinitary concepts (e.g., Goodstein’s Theorem). We specifically focus on proof-discovery techniques that make use of a marriage of analogical and deductive reasoning (which we call analogico-deductive reasoning).


Archive | 2018

Toward a Smart City Using Tentacular AI

Atriya Sen; Selmer Bringsjord; Naveen Sundar Govindarajulu; Paul Mayol; Rikhiya Ghosh; Biplav Srivastava; Kartik Talamadupula

The European Initiative on Smart Cities [2] is an effort by the European Commission [4] to improve quality of life throughout Europe, while progressing toward energy and climate objectives. Many of its goals are relevant to and desirable in the world at large. We propose that it is essential that artificial agents in a Smart City have theories of the minds of its inhabitants. We describe a scenario in which such theories are indispensable, and cannot be adequately and usefully captured by current forms of ambient intelligence. Then, we show how a new form of distributed, multi-agent artificial intelligence, Tentacular AI, which among other things entails a capacity for reasoning and planning based in highly expressive cognitive calculi (logics), is able to intelligently address this situation.


Archive | 2018

Are Autonomous-and-Creative Machines Intrinsically Untrustworthy?

Selmer Bringsjord; Naveen Sundar Govindarajulu

Given what has been discovered in the case of human cognition, this principle seems plausible: An artificial agent that is both autonomous (A) and creative (C) will tend to be, from the viewpoint of a rational, fully informed agent, untrustworthy (U). After briefly explaining the intuitive, internal structure of this disturbing (in the context of the human sphere) principle, we provide a more formal rendition of the principle designed to apply to the realm of intelligent artificial agents. The more-formal version makes use of some basic structures available in one of our cognitive-event calculi, and can be expressed as a (confessedly, for reasons explained, naive) theorem. We prove the theorem, and provide simple demonstrations of it in action, using a novel theorem prover (ShadowProver). We end by pointing toward some future defensive engineering measures that should be taken in light of the theorem.


Archive | 2018

The Epistemology of Computer-Mediated Proofs

Selmer Bringsjord; Naveen Sundar Govindarajulu

Epistemology includes in large part investigation of the conditions by which rational human knowledge and belief, of the propositional variety, can be secured. Our particular instance of this investigation arises from the stipulation that a human (a) receives a partial or complete formal argument/proof (𝒜) for/of a conclusion ϕ, where some computing machine ℳ “stands between” or mediates a’s receiving 𝒜 and ϕ. The mediation can take any number of forms, ranging from the simple and mundane (e.g., a is a teacher who types into a text-editing system a proof of some easy theorem for a math class, and then prints out the proof for subsequent study and presentation to the class) to the exotic and famous (e.g., a receives a too-big-to-survey printout of a computer-generated proof of the four-color theorem). Under what conditions is it rational for a to believe ϕ? Once we have erected at least a reasonably precise framework for understanding the structure of arguments and proofs, classifying computing machines, ranking strength of knowledge and belief, and distinguishing at least roughly between types of computer mediation, the result is a framework in which this question (and other, related ones) can eventually be answered.

Collaboration


Dive into Naveen Sundar Govindarajulu's collaborations.

Top Co-Authors

Selmer Bringsjord (Rensselaer Polytechnic Institute)
John Licato (Rensselaer Polytechnic Institute)
Rikhiya Ghosh (Rensselaer Polytechnic Institute)
Kartik Talamadupula (University of Tennessee Health Science Center)
Atriya Sen (Rensselaer Polytechnic Institute)
Daniel Arista (Rensselaer Polytechnic Institute)
Evan McCarty (Rensselaer Polytechnic Institute)
Joe Johnson (Rensselaer Polytechnic Institute)
Logan Gittelson (Rensselaer Polytechnic Institute)