
Publication


Featured research published by Taiga Nakamura.


Signal Processing | 2002

An audio watermarking method using a two-dimensional pseudo-random array

Ryuki Tachibana; Shuichi Shimizu; Seiji Kobayashi; Taiga Nakamura

In this paper, we describe a multiple-bit audio watermarking method that is robust against wow-and-flutter, random sample cropping, and pitch shifting. Though these operations are easy to perform, they are difficult for audio watermarks to survive because they introduce mis-synchronization between the embedded and detected watermarks. Our main ideas against these mis-synchronization attacks are a two-dimensional pseudo-random array (PRA), magnitude modification, and non-linear subbands. The embedding algorithm modifies the magnitudes of segmented areas in the time-frequency plane of the content according to the PRA, while the detection algorithm correlates the magnitudes with the PRA. The two-dimensional array makes the watermark robust against cropping because, even when some portions of the content are heavily degraded, other portions can still match the PRA and contribute to watermark detection. Second, the magnitude modification enables detection even from displaced detection windows, because magnitudes are less affected than phases by fluctuations of the analysis windows caused by random cropping. The last idea, wider bandwidths at higher frequencies, preserves the correspondence between the embedded and detection PRAs even for pitch-shifted content. We theoretically and experimentally analyze the robustness of the proposed algorithm against a variety of signal degradations.
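
The following is a minimal sketch (not the authors' implementation) of the core idea: scale time-frequency magnitudes up or down according to a shared two-dimensional pseudo-random array, and detect by correlating observed magnitudes with that array. The frame size, number of frames, strength parameter `alpha`, and seed are illustrative assumptions.

```python
import numpy as np

FRAME = 512          # samples per analysis window (assumed)
N_FRAMES = 64        # time extent of the PRA (assumed)
N_BINS = FRAME // 2  # frequency extent of the PRA
ALPHA = 0.1          # embedding strength (assumed)

def pra(seed):
    """Two-dimensional +/-1 pseudo-random array shared by embedder and detector."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(N_FRAMES, N_BINS))

def embed(signal, seed):
    """Scale the magnitude of each time-frequency cell up or down by the PRA."""
    a = pra(seed)
    out = signal.copy()
    for t in range(N_FRAMES):
        frame = signal[t * FRAME:(t + 1) * FRAME]
        spec = np.fft.rfft(frame)
        spec[:N_BINS] *= (1.0 + ALPHA * a[t])   # per-bin magnitude scaling
        out[t * FRAME:(t + 1) * FRAME] = np.fft.irfft(spec, n=FRAME)
    return out

def detect(signal, seed):
    """Correlate observed magnitudes with the PRA; high value = watermark present."""
    a = pra(seed)
    mags = np.empty_like(a)
    for t in range(N_FRAMES):
        frame = signal[t * FRAME:(t + 1) * FRAME]
        mags[t] = np.abs(np.fft.rfft(frame))[:N_BINS]
    mags -= mags.mean()
    return float(np.sum(mags * a) / (np.linalg.norm(mags) * np.sqrt(a.size)))

rng = np.random.default_rng(0)
host = rng.standard_normal(FRAME * N_FRAMES)
marked = embed(host, seed=42)
print(detect(marked, seed=42), detect(host, seed=42))  # marked signal scores much higher
```

Because the detection statistic sums contributions from every cell of the array, heavily degraded regions simply contribute noise while intact regions still push the correlation above threshold, which is the cropping-robustness argument in the abstract.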


Proceedings of SPIE | 2001

Audio watermarking method robust against time- and frequency-fluctuation

Ryuki Tachibana; Shuichi Shimizu; Taiga Nakamura; Seiji Kobayashi

In this paper, we describe an audio watermarking algorithm that can embed a multiple-bit message robust against wow-and-flutter, cropping, noise addition, pitch shifting, and audio compression such as MP3. The algorithm calculates and manipulates the magnitudes of segmented areas in the time-frequency plane of the content using short-term DFTs. The detection algorithm correlates the magnitudes with a pseudo-random array that maps to a two-dimensional area in the time-frequency plane. The two-dimensional array makes the watermark robust because, even when some portions of the content are heavily degraded, other portions can still match the pseudo-random array and contribute to watermark detection. Another key idea is the manipulation of magnitudes: because magnitudes are less affected than phases by fluctuations of the analysis windows caused by random cropping, the watermark resists degradation. When a signal transformation causes pitch fluctuations in the content, the frequencies of the embedded pseudo-random array shift, reducing the portion of the watermark signal that still correctly overlaps with the corresponding pseudo-random array. To keep the overlapping area wide enough for successful watermark detection, the widths of the frequency subbands used for the detection segments should increase logarithmically with frequency. We theoretically and experimentally analyze the robustness of the proposed algorithm against a variety of signal degradations.
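
A small sketch of the logarithmic-subband idea only, under assumed band edges and an assumed 3% pitch shift: when band edges are spaced geometrically, every subband spans the same frequency ratio, so a small multiplicative pitch shift usually leaves a component inside its original detection band.

```python
import numpy as np

def log_subband_edges(f_min=100.0, f_max=8000.0, n_bands=20):
    """Subband edges whose widths grow in proportion to frequency."""
    return np.geomspace(f_min, f_max, n_bands + 1)

def band_index(freq, edges):
    """Index of the subband containing `freq` (or -1 if out of range)."""
    idx = np.searchsorted(edges, freq, side="right") - 1
    return idx if 0 <= idx < len(edges) - 1 else -1

edges = log_subband_edges()
for f in (200.0, 1000.0, 5000.0):
    shifted = f * 1.03  # an assumed 3% pitch shift
    print(f, band_index(f, edges), band_index(shifted, edges))
# Each band spans the same ratio (~1.24 here), so the shifted component
# usually stays in the band where the watermark energy was embedded.
```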


International Conference on Service Oriented Computing | 2015

Pricing IT Services Deals: A More Agile Top-Down Approach

Aly Megahed; Kugamoorthy Gajananan; Mari Abe; Shun Jiang; Mark A. Smith; Taiga Nakamura

Information technology service providers bid on high-value services deals in a competitive environment. To price these deals, the traditional bottom-up approach is to prepare a complete solution, i.e., determine the detailed services to be offered to the client, find the exact costs of these services, and then add a gross profit to reach the bidding price. This is a very time-consuming and resource-intensive process. There is a business need for quick (agile) early estimates of both cost and price using a core set of high-level data for the deal. In this paper, we develop a two-step top-down approach for doing this. In the first step, we mine historical and market data to produce estimates of cost and price. We provide numerical results based on industry data showing statistically that there is a benefit to using historical data in this step alongside the traditional use of market data. Because the bidding price is not the sole factor affecting the chances of winning a deal, we then enter the different price points into a predictive analytics model (step two) to calculate the relative probability of winning the deal at each point. These probabilities, together with the corresponding prices, can provide significant insights to the business and help it reach quick, reliable pricing decisions.
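
A hedged sketch of step two: given candidate price points for a deal, estimate the relative probability of winning at each point with a model trained on historical won/lost deals. The synthetic data, the single "price relative to market" feature, and the choice of logistic regression are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Historical deals: price expressed as a ratio to a market benchmark,
# outcome 1 = won, 0 = lost. Cheaper-than-market deals win more often.
price_ratio = rng.uniform(0.7, 1.3, size=500)
won = (rng.random(500) < 1.0 / (1.0 + np.exp(8.0 * (price_ratio - 1.0)))).astype(int)

model = LogisticRegression()
model.fit(price_ratio.reshape(-1, 1), won)

# Candidate price points for a new deal; step one would have produced the
# cost/price estimates these points are built around.
candidates = np.array([0.85, 0.95, 1.00, 1.05, 1.15]).reshape(-1, 1)
for ratio, p_win in zip(candidates.ravel(), model.predict_proba(candidates)[:, 1]):
    print(f"price ratio {ratio:.2f} -> estimated win probability {p_win:.2f}")
```

The resulting price-versus-win-probability table is the kind of output the abstract describes: it lets the business trade margin against the chance of winning rather than committing to a single price.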


IEEE International Conference on Services Computing | 2015

A Progress Advisor for IT Service Engagements

Peifeng Yin; Hamid R. Motahari Nezhad; Aly Megahed; Taiga Nakamura

Monitoring the status of ongoing sales opportunities in IT service engagements is important for sales teams to improve the win rate of deals. Existing approaches aim at predicting the final outcome, i.e., the eventual chance of winning or losing a deal, given a snapshot of the deal data. While this type of prediction indirectly advises on the deal status, it offers limited guidance and insight. During the engagement, numerous milestones and key events occur whose occurrence and status are important for achieving the desired outcome of the deal. These interim milestones and events may happen in different time intervals during the lifecycle of a deal, depending on the deal size and other parameters. In this paper, we describe a novel Bernoulli-Dirichlet predictive model for predicting the occurrence of key events and milestones within a service engagement process to assist in monitoring the progress of ongoing deals. This model enables predicting the timeline and status of the next event(s), given the current history of milestone activity in the engagement lifecycle. Through such step-by-step guidance, sales teams may have a higher chance of success by anticipating upcoming events and preparing to counter undesired ones. We show empirical evidence of the significance and impact of this approach in a real-world service provider environment.
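
A minimal sketch, under simplifying assumptions, of predicting the next milestone from historical milestone sequences using Dirichlet-smoothed transition counts. The milestone names, the prior strength, and the first-order (Markov) simplification are illustrative; the paper's Bernoulli-Dirichlet model is richer than this.

```python
from collections import Counter, defaultdict

MILESTONES = ["qualified", "solution_review", "proposal_sent", "negotiation", "signed", "lost"]
ALPHA = 1.0  # Dirichlet pseudo-count (assumed)

historical = [
    ["qualified", "solution_review", "proposal_sent", "negotiation", "signed"],
    ["qualified", "proposal_sent", "negotiation", "lost"],
    ["qualified", "solution_review", "proposal_sent", "signed"],
    ["qualified", "solution_review", "lost"],
]

# Count observed transitions between consecutive milestones in past deals.
transitions = defaultdict(Counter)
for seq in historical:
    for cur, nxt in zip(seq, seq[1:]):
        transitions[cur][nxt] += 1

def next_milestone_distribution(current):
    """Posterior predictive P(next | current) with a symmetric Dirichlet prior."""
    counts = transitions[current]
    total = sum(counts.values()) + ALPHA * len(MILESTONES)
    return {m: (counts[m] + ALPHA) / total for m in MILESTONES}

# An ongoing deal whose latest milestone is "proposal_sent":
for milestone, prob in sorted(next_milestone_distribution("proposal_sent").items(),
                              key=lambda kv: -kv[1]):
    print(f"{milestone:16s} {prob:.2f}")
```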


IEEE International Conference on Services Computing | 2016

A Top-Down Pricing Algorithm for IT Service Contracts Using Lower Level Service Data

Kugamoorthy Gajananan; Aly Megahed; Mari Abe; Taiga Nakamura; Mark A. Smith

Information technology (IT) service providers competing for high-value contracts need to produce a compelling proposal at a competitive price. The traditional approach to pricing IT service deals, which builds up the bottom-up costs from the hierarchy of services, is often time-consuming, resource-intensive, and available only late in the process, as it requires granular information about the solution. Recent work on a top-down pricing approach enables efficient and early estimates of costs and prices using high-level services to mitigate and complement these problems. In this paper, we describe an extended top-down pricing method that uses the secondary service level. The method makes use of data from lower-level services to calculate improved estimates, yet still requires minimal input. We compare the previous and new approaches on industrial data covering historical and market deals, and demonstrate that the new approach generates more accurate estimates. In addition, we show that mining historical data yields more accurate estimates than using market data for services; these experimental results are consistent with our findings in previous work.
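
A hedged sketch of rolling lower-level (secondary) service data up into a top-down estimate: each high-level service price is built from the median historical unit prices of its child services, falling back to market rates when no history exists. The service names, rates, and quantities are illustrative assumptions, not the paper's data.

```python
from statistics import median

# Historical unit prices observed for secondary-level services (per unit).
historical_unit_prices = {
    "helpdesk.l1": [18.0, 21.0, 19.5],
    "helpdesk.l2": [34.0, 36.5],
    "server.wintel": [95.0, 102.0, 99.0],
}
market_unit_prices = {"helpdesk.l1": 22.0, "helpdesk.l2": 38.0,
                      "server.wintel": 105.0, "server.unix": 140.0}

# The deal to be priced: high-level services and quantities of their children.
deal = {
    "end_user_support": {"helpdesk.l1": 1200, "helpdesk.l2": 300},
    "data_center":      {"server.wintel": 80, "server.unix": 15},
}

def unit_price(service):
    """Prefer historical medians; fall back to the market rate."""
    if service in historical_unit_prices:
        return median(historical_unit_prices[service])
    return market_unit_prices[service]

for high_level, children in deal.items():
    estimate = sum(qty * unit_price(svc) for svc, qty in children.items())
    print(f"{high_level}: estimated price {estimate:,.0f}")
```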


Requirements Engineering | 2010

Extending Automated Analysis of Natural Language Use Cases to Other Languages

Avik Sinha; Amit M. Paradkar; Hironori Takeuchi; Taiga Nakamura

Natural language is the preferred form for writing use cases. While a few linguistic techniques exist that extract or validate structured information from unstructured natural language use cases, they often cannot be extended beyond their primary language. Extending linguistic analysis and automated validation capabilities across multiple languages is necessary not only for widespread industrial adoption but also for analyzing collections of multilingual use cases (quite common in multi-national projects) that need to be aggregated. We have published a UIMA (Unstructured Information Management Architecture) based linguistic engine for analyzing English use cases. In this paper, we report on the extension of our linguistic technique to Japanese and the effect of this extension on the automated requirements validation suite.
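
The following is a purely illustrative sketch (not the authors' UIMA engine) of the architectural idea: keep use-case validation language-neutral while plugging in language-specific analyzers. The regex "analyzer", the actor glossary, and the validation rule are hypothetical stand-ins.

```python
import re
from typing import Callable, Dict, List, Optional, Tuple

# Each analyzer turns one use-case step into (actor, action) or None.
Analyzer = Callable[[str], Optional[Tuple[str, str]]]
ANALYZERS: Dict[str, Analyzer] = {}

def register(lang: str):
    def deco(fn: Analyzer) -> Analyzer:
        ANALYZERS[lang] = fn
        return fn
    return deco

@register("en")
def analyze_en(step: str):
    m = re.match(r"(?:The\s+)?(\w+)\s+(\w+)", step)
    return (m.group(1), m.group(2)) if m else None

# A Japanese analyzer would be registered the same way, backed by a
# morphological analyzer instead of a toy regex; omitted here.

ACTORS = {"User", "System", "user", "system"}

def validate(steps: List[str], lang: str) -> List[str]:
    """Language-neutral check: every step must name a known actor and an action."""
    issues = []
    for i, step in enumerate(steps, 1):
        parsed = ANALYZERS[lang](step)
        if parsed is None or parsed[0] not in ACTORS:
            issues.append(f"step {i}: no recognizable actor/action in {step!r}")
    return issues

print(validate(["User enters credentials", "The system validates the password",
                "Displays the home page"], "en"))
```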


Workshop on XML Security | 2003

Optimistic fair contract signing for Web services

Hiroshi Maruyama; Taiga Nakamura; Tony Hsieh

Reliable and atomic transactions are a key to successful e-Business interactions. Reliable messaging subsystems, such as IBM's MQSeries, or broker-based techniques have traditionally been used for this purpose. In this paper, we take a radically different approach to this problem by applying the idea of Optimistic Fair Contract Signing recently proposed by Asokan, Shoup, and Waidner. We show a design of the protocol based on the latest XML and Web Services Security standards and discuss the benefits and limitations of this approach.
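
A minimal protocol-flow sketch of optimistic fair contract signing in the Asokan-Shoup-Waidner style that the paper builds on: both parties exchange signatures directly in the normal (optimistic) case, and a trusted third party (TTP) is contacted only if the counterparty stops responding. HMAC is used purely as a stand-in for the real XML / WS-Security signatures (which would be verified against public keys); all names and keys are illustrative.

```python
import hmac, hashlib

def sign(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

CONTRACT = b"<contract>deliver 100 widgets by 2003-12-01</contract>"
KEY_A, KEY_B, KEY_TTP = b"alice-secret", b"bob-secret", b"ttp-secret"

# 1. A commits to the contract and sends her signature to B.
sig_a = sign(KEY_A, CONTRACT)

# 2. Optimistic case: B verifies and replies with his own signature; both
#    parties now hold a fully signed contract and the TTP is never involved.
def bob_responds(contract: bytes, sig_from_a: bytes) -> bytes:
    assert hmac.compare_digest(sig_from_a, sign(KEY_A, contract))  # stand-in verification
    return sign(KEY_B, contract)

# 3. Recovery case: if B never answers, A presents the contract and her
#    signature to the TTP, which either issues a token that counts as B's
#    commitment or aborts the exchange for both parties, preserving fairness.
def ttp_resolve(contract: bytes, sig_from_a: bytes) -> bytes:
    assert hmac.compare_digest(sig_from_a, sign(KEY_A, contract))
    return sign(KEY_TTP, contract + b"|resolved-on-behalf-of-B")

sig_b = bob_responds(CONTRACT, sig_a)          # normal run
recovery_token = ttp_resolve(CONTRACT, sig_a)  # run used only on timeout
print(len(sig_b), len(recovery_token))
```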


Conference on Information and Knowledge Management | 2017

Tone Analyzer for Online Customer Service: An Unsupervised Model with Interfered Training

Peifeng Yin; Zhe Liu; Anbang Xu; Taiga Nakamura

Emotion analysis of online customer service conversations is important for good user experience and customer satisfaction. However, conventional metrics do not fit this application scenario. In this work, by collecting and labeling online customer service conversations on Twitter, we identify 8 new metrics, named tones, to describe emotional information. To better interpret each tone, we extend the Latent Dirichlet Allocation (LDA) model to Tone LDA (T-LDA). In T-LDA, each latent topic is explicitly associated with one of three semantic categories, i.e., tone-related, domain-specific, and auxiliary. By integrating tone labels into learning, T-LDA interferes with the original unsupervised training process and is thus able to identify representative tone-related words. In evaluation, T-LDA shows better performance than baselines in predicting tone intensity. We also conduct a case study to analyze each tone via the T-LDA output.
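
A toy sketch of the "interfered training" idea only (not a full T-LDA implementation): during topic assignment, a word carrying a tone label is restricted to the topic associated with that tone, while unlabeled words may go to any topic. The topics, tone lexicon, and probabilities are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

TOPICS = ["tone:frustrated", "tone:satisfied", "domain:billing", "auxiliary"]
# Words observed with an explicit tone label in the training conversations.
TONE_LEXICON = {"terrible": "tone:frustrated", "thanks": "tone:satisfied"}

def assign_topic(word: str, p_topic_given_word: np.ndarray) -> str:
    """Sample a topic; interfere with the sampling when the word has a tone label."""
    probs = p_topic_given_word.copy()
    if word in TONE_LEXICON:                      # the interference step
        allowed = TOPICS.index(TONE_LEXICON[word])
        probs = np.zeros_like(probs)
        probs[allowed] = 1.0
    probs /= probs.sum()
    return TOPICS[rng.choice(len(TOPICS), p=probs)]

# Current (unconstrained) estimate of P(topic | word) for a few words.
current_estimates = {
    "terrible": np.array([0.4, 0.1, 0.3, 0.2]),
    "invoice":  np.array([0.1, 0.1, 0.7, 0.1]),
    "thanks":   np.array([0.2, 0.5, 0.1, 0.2]),
}
for w, p in current_estimates.items():
    print(w, "->", assign_topic(w, p))
```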


International Conference on Service Oriented Computing | 2016

A Discrete Constraint-Based Method for Pipeline Build-Up Aware Services Sales Forecasting

Peifeng Yin; Aly Megahed; Hamid R. Motahari Nezhad; Taiga Nakamura

Services organizations maintain a pipeline of sales opportunities with different maturity levels (belonging to progressive sales stages), lifespans (time to close), and contract values at any point in time. As time goes on, some opportunities close (contract signed, or lost) and new opportunities are added to the pipeline. Accurate forecasting of contract signings by the end of a time period (e.g., a quarter) is highly desirable for appropriate sales activity management with respect to the projected revenue. While the problem of sales forecasting has been investigated in general, two specific aspects of sales engagement for services organizations, which entail additional complexity, have not been thoroughly investigated: (i) capturing the growth trend of the current pipeline, and (ii) incorporating the current pipeline build-up in updating the prediction model. We formulate these two issues as a dynamic curve-fitting problem in which we build a sales forecasting model by balancing the effect of the current pipeline data against the model trained on historical data. There are two challenges in doing so: (i) how to mathematically define such a balance, and (ii) how to dynamically update the balance as more new data become available. To address these issues, we propose a novel discrete-constraint method (DCM). It achieves the balance by fixing the values of certain model parameters and applying a leave-one-out algorithm to determine the optimal number of free parameters. By conducting experiments on real business data, we demonstrate the superiority of DCM in sales pipeline forecasting.
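
A hedged sketch of the discrete-constraint idea: start from a curve fitted on historical quarters, decide how many of its parameters to re-fit ("free") on the current quarter's pipeline build-up, and pick that number by leave-one-out error. The quadratic growth curve, the historical coefficients, and the data points are illustrative assumptions, not the paper's model.

```python
import numpy as np

def design(t):
    """Quadratic growth curve: columns 1, t, t^2."""
    t = np.asarray(t, dtype=float)
    return np.column_stack([np.ones_like(t), t, t * t])

theta_hist = np.array([5.0, 2.0, 0.30])                  # fitted on past quarters (assumed)
weeks = np.array([1, 2, 3, 4, 5, 6], dtype=float)        # weeks elapsed this quarter
signed = np.array([8.0, 12.5, 16.0, 21.0, 27.5, 33.0])   # cumulative signings so far ($M)

def fit_with_k_free(t, y, k):
    """Re-fit only the first k coefficients; keep the rest at historical values."""
    X = design(t)
    theta = theta_hist.copy()
    if k > 0:
        residual = y - X[:, k:] @ theta_hist[k:]
        theta[:k], *_ = np.linalg.lstsq(X[:, :k], residual, rcond=None)
    return theta

def loo_error(k):
    """Leave-one-out squared error of the k-free-parameter fit."""
    errs = []
    for i in range(len(weeks)):
        mask = np.arange(len(weeks)) != i
        theta = fit_with_k_free(weeks[mask], signed[mask], k)
        pred = design(weeks[i:i + 1]) @ theta
        errs.append((pred[0] - signed[i]) ** 2)
    return float(np.mean(errs))

best_k = min(range(len(theta_hist) + 1), key=loo_error)
theta = fit_with_k_free(weeks, signed, best_k)
print("free parameters:", best_k, "forecast at week 13:", design([13.0]) @ theta)
```

Choosing k = 0 reproduces the purely historical model, while k equal to the full parameter count ignores history; the leave-one-out step is what picks the balance point between the two, as the abstract describes.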


International Conference on Service Oriented Computing | 2016

Top-Down Pricing of IT Services Deals with Recommendation for Missing Values of Historical and Market Data

Aly Megahed; Kugamoorthy Gajananan; Shubhi Asthana; Valeria Becker; Mark A. Smith; Taiga Nakamura

In order for an Information Technology (IT) service provider to respond to a client’s request for proposals for a complex IT services deal, it needs to prepare a solution and enter a competitive bidding process. A critical factor in this solution is the pricing of the various services in the deal. The traditional way of pricing such deals has been the so-called bottom-up approach, in which all services are priced from the lowest level up to the highest one. A previously proposed, more efficient approach and its enhancement aim at automating the pricing by mining historical and market deals. However, some of the services of the deal to be priced might not exist in those mined deals. In this paper, we propose a method that deals with this incomplete-data issue by modeling the problem as a machine learning recommender system. We embed our system in the previously developed method and show statistically that doing so can yield significantly more accurate results. In addition, using our method provides a complete set of historical data that can be used to deliver various analytics and insights to the business.
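
A hedged sketch of the recommender framing for missing values: treat historical deals as "users" and services as "items", and impute a missing unit price from similar deals that did include that service. The deal/price matrix and the cosine-similarity-over-overlap scheme are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

SERVICES = ["helpdesk", "wintel", "unix", "storage"]
# Unit prices per historical deal; NaN marks a service absent from that deal.
prices = np.array([
    [20.0, 100.0, np.nan, 55.0],
    [22.0,  95.0, 140.0,  np.nan],
    [18.0, 105.0, 150.0,  60.0],
    [np.nan, 98.0, 145.0, 58.0],
])

def similarity(a, b):
    """Cosine similarity restricted to services both deals actually priced."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    if mask.sum() < 2:
        return 0.0
    va, vb = a[mask], b[mask]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def impute(deal_idx, service_idx):
    """Similarity-weighted average of the service's price in other deals."""
    weights, values = [], []
    for other in range(len(prices)):
        if other == deal_idx or np.isnan(prices[other, service_idx]):
            continue
        weights.append(similarity(prices[deal_idx], prices[other]))
        values.append(prices[other, service_idx])
    return float(np.average(values, weights=weights))

print("deal 0, missing", SERVICES[2], "->", round(impute(0, 2), 1))
print("deal 3, missing", SERVICES[0], "->", round(impute(3, 0), 1))
```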
