
Publication


Featured research published by Moninder Singh.


International Workshop on Mobile Commerce | 2002

Framework for security and privacy in automotive telematics

Sastry S. Duri; Marco Gruteser; Xuan Liu; Paul Andrew Moskowitz; Ronald Perez; Moninder Singh; Jung-Mu Tang

Automotive telematics may be defined as the information-intensive applications that are being enabled for vehicles by a combination of telecommunications and computing technology. Telematics by its nature requires the capture, storage, and exchange of sensor data to obtain remote services. In order for automotive telematics to grow to its full potential, telematics data must be protected. Data protection must include privacy and security for end-users, service providers, and application providers. In this paper, we propose a new framework for data protection that is built on the foundation of privacy and security technologies. The privacy technology enables users and service providers to define flexible data models and policy models. The security technology provides traditional capabilities such as encryption, authentication, and non-repudiation. In addition, it provides secure environments for protected execution, which is essential to limiting data access to specific purposes.


IBM Journal of Research and Development | 2007

Statistical methods for automated generation of service engagement staffing plans

Jianying Hu; Bonnie K. Ray; Moninder Singh

In order to successfully deliver a labor-based professional service, the right people with the right skills must be available to deliver the service when it is needed. Meeting this objective requires a systematic, repeatable approach for determining the staffing requirements that enable informed staffing management decisions. We present a methodology developed for the Global Business Services (GBS) organization of IBM to enable automated generation of staffing plans involving specific job roles, skill sets, and employee experience levels. The staffing plan generation is based on key characteristics of the expected project as well as selection of a project type from a project taxonomy that maps to staffing requirements. The taxonomy is developed using statistical clustering techniques applied to labor records from a large number of historical GBS projects. We describe the steps necessary to process the labor records so that they are in a form suitable for analysis, as well as the clustering methods used for analysis, and the algorithm developed to dynamically generate a staffing plan based on a selected group. We also present results of applying the clustering and staffing plan generation methodologies to a variety of GBS projects.
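The staffing-plan generation step described above can be sketched roughly as follows: given a cluster of similar historical projects (each recorded as labor hours by job role), average each role's share of total hours and scale the result to the expected size of the new engagement. The role names, hour counts, and scaling approach here are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of plan generation from a selected project cluster.
# Each historical project maps job role -> labor hours (invented numbers).

def plan_from_cluster(cluster, expected_total_hours):
    """Average each role's share of hours across the cluster, then scale
    the shares to the expected total hours of the new project."""
    totals = {}
    for project in cluster:
        project_total = sum(project.values())
        for role, hours in project.items():
            totals[role] = totals.get(role, 0.0) + hours / project_total
    n = len(cluster)
    return {role: round(share / n * expected_total_hours, 1)
            for role, share in totals.items()}

cluster = [
    {"architect": 200, "developer": 600, "tester": 200},
    {"architect": 100, "developer": 700, "tester": 200},
]
plan = plan_from_cluster(cluster, expected_total_hours=2000)
print(plan)  # → {'architect': 300.0, 'developer': 1300.0, 'tester': 400.0}
```

In practice the paper's taxonomy also distinguishes skill sets and experience levels within each role; this sketch collapses those into a single role dimension.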


Mobile Networks and Applications | 2004

Data protection and data sharing in telematics

Sastry S. Duri; Jeffrey G. Elliott; Marco Gruteser; Xuan Liu; Paul Andrew Moskowitz; Ronald Perez; Moninder Singh; Jung-Mu Tang

Automotive telematics may be defined as the information-intensive applications enabled for vehicles by a combination of telecommunications and computing technology. Telematics by its nature requires the capture, storage, and exchange of sensor data to obtain remote services. Such data likely include personal, sensitive information, which requires proper handling to protect the driver's privacy. Some existing approaches focus on protecting privacy through anonymous interactions or by stopping information flow altogether. We complement these by concentrating instead on giving different stakeholders control over data sharing and use. In this paper, we identify several data protection challenges specifically related to the automotive telematics domain, and propose a general data protection framework to address some of those challenges. The framework enables data aggregation before data is released to service providers, which minimizes the disclosure of privacy-sensitive information. We have implemented the core component, the privacy engine, to help users manage their privacy policies and to authorize data requests based on policy matching. The policy manager provides a flexible privacy policy model that allows data subjects to express rich constraint-based policies, including event-based and spatio-temporal constraints. Thus, the policy engine can decide on a large number of requests without user assistance and causes no interruptions while driving. A performance study indicates that the overhead remains stable as the number of data subjects increases.
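The policy-matching idea can be illustrated with a minimal sketch: each policy allows a data type for a stated purpose within a time window, so most requests can be decided automatically without interrupting the driver. The data types, purposes, and time windows below are invented examples; the paper's policy model is considerably richer (event-based and spatial constraints as well).

```python
# Minimal, illustrative policy-matching sketch (not the paper's implementation).
from datetime import time

policies = [
    # (data type, purpose, earliest, latest) - all values are hypothetical
    ("location", "traffic_info", time(0, 0), time(23, 59)),
    ("speed", "insurance", time(9, 0), time(17, 0)),
]

def authorize(data_type, purpose, at):
    """Grant a request only if some policy covers this data type and
    purpose at the requested time."""
    return any(d == data_type and p == purpose and lo <= at <= hi
               for d, p, lo, hi in policies)

print(authorize("location", "traffic_info", time(12, 0)))  # → True
print(authorize("speed", "insurance", time(20, 0)))        # → False (outside window)
```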


IBM Systems Journal | 2002

An architecture of diversity for commonsense reasoning

John McCarthy; Marvin Minsky; Aaron Sloman; Leiguang Gong; Tessa A. Lau; Leora Morgenstern; Erik T. Mueller; Doug Riecken; Moninder Singh; Push Singh

Although computers excel at certain bounded tasks that are difficult for humans, such as solving integrals, they have difficulty performing commonsense tasks that are easy for humans, such as understanding stories. In this Technical Forum contribution, we discuss commonsense reasoning and what makes it difficult for computers. We contend that commonsense reasoning is too hard a problem to solve using any single artificial intelligence technique. We propose a multilevel architecture consisting of diverse reasoning and representation techniques that collaborate and reflect in order to allow the best techniques to be used for the many situations that arise in commonsense reasoning. We present story understanding—specifically, understanding and answering questions about progressively harder children’s texts—as a task for evaluating and scaling up a commonsense reasoning system.


Conference on Computer Supported Cooperative Work | 2000

Algebra jam: supporting teamwork and managing roles in a collaborative learning environment

Mark K. Singley; Moninder Singh; Peter G. Fairweather; Robert G. Farrell; Steven Swerling

We are building a collaborative learning environment that supports teams of students as they collaborate synchronously and remotely to solve situated, multi-step problems involving algebraic modeling. Our system, named Algebra Jam, provides a set of tools to help students overcome two of the most serious impediments to successful collaboration: establishing common ground and maintaining group focus. These tools include tethered and untethered modes of operation including discrepancy notification, a goal-oriented team blackboard, object-oriented chat with collabicons, reification of problem solving roles, and the optional inclusion of a tutor agent as a virtual team participant. The tutor agent not only offers help and feedback on problem solving actions but also accumulates evidence about individual and group problem solving performance in a Bayesian inference network. The system is envisioned as a testbed for developing theories of teaming.
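The evidence-accumulation idea can be shown with a one-node illustration: Bayes-updating the probability that a student has mastered a skill from a sequence of observed correct and incorrect actions. The actual Bayesian inference network in the system is far richer; the slip and guess parameters here are invented for illustration.

```python
# One-node Bayesian evidence accumulation over observed problem-solving actions.
# SLIP/GUESS values are hypothetical, not from the paper.

SLIP, GUESS = 0.1, 0.2  # P(wrong | mastered), P(correct | not mastered)

def update(p_mastered, correct):
    """Bayes update of P(mastered) after one observed answer."""
    if correct:
        num = p_mastered * (1 - SLIP)
        den = num + (1 - p_mastered) * GUESS
    else:
        num = p_mastered * SLIP
        den = num + (1 - p_mastered) * (1 - GUESS)
    return num / den

p = 0.5  # uninformed prior
for outcome in [True, True, False, True]:
    p = update(p, outcome)
print(round(p, 3))  # → 0.919
```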


International Conference on Pattern Recognition | 2008

K-means clustering of proportional data using L1 distance

Hisashi Kashima; Jianying Hu; Bonnie K. Ray; Moninder Singh

We present a new L1-distance-based k-means clustering algorithm to address the challenge of clustering high-dimensional proportional vectors. The new algorithm explicitly incorporates proportionality constraints in the computation of the cluster centroids, resulting in reduced L1 error rates. We compare the new method to two competing methods, an approximate L1-distance k-means algorithm, where the centroid is estimated using cluster means, and a median L1 k-means algorithm, where the centroid is estimated using cluster medians, with proportionality constraints imposed by normalization in a second step. Application to clustering of projects based on distribution of labor hours by skill illustrates the advantages of the new algorithm.
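The following is a simplified sketch of L1-distance k-means on proportional vectors. For the centroid step it uses the per-dimension median followed by a normalization step, i.e. the shape of the second baseline described above; the paper's new algorithm instead builds the proportionality constraint directly into the centroid computation. All data values are invented.

```python
# Simplified L1 k-means sketch for proportional (sum-to-one) vectors.
import random

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def median_centroid(points):
    """Per-dimension median, renormalized so the centroid sums to 1."""
    dims = list(zip(*points))
    med = [sorted(d)[len(d) // 2] for d in dims]
    s = sum(med)
    return [m / s for m in med] if s else list(med)

def kmeans_l1(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: l1(p, centroids[i]))].append(p)
        centroids = [median_centroid(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Labor-hour distributions by skill (fractions), two obvious groups
data = [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.1, 0.1, 0.8], [0.2, 0.1, 0.7]]
print(kmeans_l1(data, 2))
```

Each returned centroid is itself a valid proportional vector, which is the property the constrained formulation guarantees by construction.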


International Conference on Data Mining | 2012

An Analytics Approach for Proactively Combating Voluntary Attrition of Employees

Moninder Singh; Kush R. Varshney; Jun Wang; Aleksandra Mojsilovic; Alisia R. Gill; Patricia I. Faur; Raphael Ezry

We describe a framework for using analytics to proactively tackle voluntary attrition of employees. This is especially important in organizations with large services arms, where unplanned departures of key employees can lead to big losses by way of lost productivity, delayed or missed deadlines, and hiring costs of replacements. By proactively identifying top talent at a high risk of voluntarily leaving, an organization can take appropriate action in time to actually avert such employee departures, thereby avoiding financial and knowledge losses. The main retention action we study in this paper is that of proactive salary raises to at-risk employees. Our approach uses data mining to identify employees at risk of attrition and balances the cost of attrition/replacement of an employee against the cost of retaining that employee (by way of increased salary) to enable the optimal use of limited funds that may be available for this purpose, thereby allowing the action to be targeted towards employees with the highest potential returns on investment. This approach has been used to carry out a proactive retention action for several thousand employees across several geographies and business units of a large, Fortune 500 multinational company. We describe this action and discuss the results to date, which show a significant reduction in voluntary resignations in the targeted groups.
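The cost-balancing logic can be sketched as a simple budgeted ranking: score each at-risk employee by the expected saving of a retention raise (attrition probability times replacement cost, minus the raise), then spend the limited budget on the best cases first. All names, probabilities, and costs are invented, and the sketch makes the simplifying assumption that a raise fully prevents departure, which the real model does not require.

```python
# Hedged sketch of budget-constrained retention targeting (illustrative only).

def plan_retention(employees, budget):
    """employees: list of (name, p_leave, replacement_cost, raise_cost).
    Greedily fund the raises with the largest positive expected saving."""
    scored = []
    for name, p_leave, repl, raise_cost in employees:
        # Simplifying assumption: the raise prevents the departure outright.
        expected_saving = p_leave * repl - raise_cost
        if expected_saving > 0:
            scored.append((expected_saving, raise_cost, name))
    scored.sort(reverse=True)
    chosen, spent = [], 0
    for saving, cost, name in scored:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

employees = [
    ("ana", 0.9, 100_000, 10_000),  # expected saving 80k
    ("bo",  0.2, 120_000, 15_000),  # expected saving 9k
    ("cy",  0.7,  50_000, 20_000),  # expected saving 15k
]
print(plan_retention(employees, budget=30_000))  # → ['ana', 'cy']
```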


IBM Journal of Research and Development | 2012

Sales-force performance analytics and optimization

M. Baier; J. E. Carballo; A. J. Chang; Yu Lu; Aleksandra Mojsilovic; M. J. Richard; Moninder Singh; Mark S. Squillante; Kush R. Varshney

We describe a quantitative analytics and optimization methodology designed to improve the efficiency and productivity of the IBM global sales force. This methodology is implemented and deployed via three company-wide initiatives, namely the Growth and Performance (GAP) program, the Territory Optimization Program (TOP), and the Coverage Optimization with Profitability (COP) initiative. GAP provides a set of analytical models to measure and optimize sales capacity and profitable sales growth. TOP develops a set of analytical models and methods for the analysis and optimization of assigning customers to sellers and other sales channels. COP provides additional recommendations on sales-coverage adjustment on the basis of an improved estimation of customer profit. We discuss these three programs in detail and describe how they work together to provide an analytics-driven transformation of the IBM global sales force to improve various sales metrics, such as revenue and cost.


Conference on Information and Knowledge Management | 2005

Automated cleansing for spend analytics

Moninder Singh; Jayant R. Kalagnanam; Sudhir Verma; Amit Jaysukhlal Shah; Swaroop K. Chalasani

The development of an aggregate view of procurement spend across an enterprise using transactional data is increasingly becoming a very important and strategic activity. Not only does it provide a complete and accurate picture of what the enterprise is buying and from whom, it also allows the enterprise to consolidate suppliers and negotiate better prices. The importance, as well as the complexity, of this cleansing exercise is further magnified by the increasing popularity of Business Transformation Outsourcing (BTO), wherein enterprises are turning over non-core activities, such as indirect procurement, to third parties, who now need to develop an integrated view of spend across multiple enterprises in order to optimize procurement and generate maximum savings. However, the creation of such an integrated view of procurement spend requires the creation of a homogeneous data repository from disparate (heterogeneous) data sources across various geographic and functional organizations throughout the enterprise(s). Such repositories get transactional data from various sources such as invoices, purchase orders, and account ledgers. As such, the transactions are not cross-indexed, refer to the same suppliers by different names, and use different ways of representing information about the same commodities. Before an aggregated spend view can be developed, this data needs to be cleansed, primarily to normalize the supplier names and correctly map each transaction to the appropriate commodity code. Commodity mapping, in particular, is made more difficult by the fact that it has to be done on the basis of unstructured text descriptions found in the various data sources. We describe an on-demand system to automatically perform this cleansing activity using techniques from information retrieval and machine learning. Built on standard integration and application infrastructure software, this system provides enterprises with a fast, reliable, accurate, and on-demand way of cleansing transactional data and generating an integrated view of spend. This system is currently in the process of being deployed by IBM for use in its BTO practice.
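The commodity-mapping step can be illustrated with a toy nearest-neighbor classifier over bag-of-words vectors: an unlabeled transaction description is assigned the commodity code of its most similar labeled example. The deployed system uses richer information-retrieval and machine-learning techniques; the codes and descriptions here are made up.

```python
# Toy commodity-mapping sketch: nearest neighbor over bag-of-words vectors.
import math
from collections import Counter

labeled = [
    ("laptop computer 15 inch", "IT_HARDWARE"),
    ("ballpoint pens box of 12", "OFFICE_SUPPLIES"),
    ("server rack rail kit", "IT_HARDWARE"),
]

def vec(text):
    """Lowercased token-count vector for a free-text description."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_commodity(description):
    """Assign the commodity code of the most similar labeled description."""
    v = vec(description)
    return max(labeled, key=lambda ex: cosine(v, vec(ex[0])))[1]

print(map_commodity("refurbished laptop computer"))  # → IT_HARDWARE
```

A production variant would weight tokens (e.g. TF-IDF), normalize supplier names separately, and fall back to human review for low-confidence matches.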


International Conference on Data Mining | 2015

Identifying Employees for Re-skilling Using an Analytics-Based Approach

Karthikeyan Natesan Ramamurthy; Moninder Singh; Michael Davis; J. Alex Kevern; Uri Klein; Michael Peran

Modern organizations face the challenge of constantly evolving skills and an ever-changing demand for products and services. In order to stay relevant in business, they need their workforce to be proficient in the skills that are in demand. This problem is exacerbated for large organizations with a complex workforce. In this paper, we propose a novel, analytics-driven approach to help organizations tackle some of these challenges. Using historic records on skill proficiency of employees and human resource data, we develop predictive algorithms that can model the adjacencies between the skills that are in supply and those that are in demand. Combined with another proposed approach for predicting the learning ability of people based on human resource data, we develop a framework for identifying the propensity of each individual to be successfully re-trained to a target skill. Our proposed approach can also ingest data on manual skill adjacencies provided by the business to augment the predictive modeling framework. We evaluate the proposed approach for a representative set of target skills and demonstrate high performance, which improves further when information about manual skill adjacencies is added. Feedback on preliminary deployment of this approach for re-skilling indicates that a large percentage of employees recommended by the analytics framework were accepted for further review by the business. We will incorporate the observations made by the business to iteratively improve the predictive learning approach.
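One simple way to realize the skill-adjacency idea is to treat each skill as the vector of employee proficiencies in it, measure adjacency between a supply skill and the target skill by the similarity of those vectors, and score each employee's re-skilling propensity as their proficiency weighted by adjacency to the target. This is a hypothetical simplification of the paper's predictive framework; the skills, employees, and proficiency values are invented.

```python
# Illustrative skill-adjacency and re-skilling propensity sketch.
import math

skills = ["java", "python", "cobol"]
prof = {  # employee -> proficiency (0-1) in each skill, by column order above
    "devi": [0.9, 0.8, 0.1],
    "ed":   [0.2, 0.1, 0.9],
    "fay":  [0.8, 0.9, 0.2],
}

def column(skill):
    """Proficiency vector of one skill across all employees (fixed order)."""
    i = skills.index(skill)
    return [prof[e][i] for e in sorted(prof)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def propensity(employee, target):
    """Sum of the employee's proficiencies, weighted by each source skill's
    adjacency (cosine similarity) to the target skill."""
    t = column(target)
    return sum(prof[employee][i] * cosine(column(s), t)
               for i, s in enumerate(skills) if s != target)

ranked = sorted(prof, key=lambda e: propensity(e, "python"), reverse=True)
print(ranked)  # → ['devi', 'fay', 'ed']
```

Manually supplied adjacencies could be blended in by averaging them with the data-driven cosine scores before the weighted sum.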
