
Publication


Featured research published by P. Takis Metaxas.


ACM Symposium on Parallel Algorithms and Architectures | 1996

Improved methods for hiding latency in high bandwidth networks (extended abstract)

Matthew Andrews; Tom Leighton; P. Takis Metaxas; Lisa Zhang

In this paper we describe methods for mitigating the degradation in performance caused by high latencies in parallel and distributed networks. Our approach is similar in spirit to the “complementary slackness” method of latency hiding, but has the advantage that the slackness does not need to be provided by the programmer, and that large slowdowns are not needed in order to hide the latency. Our approach is also similar in spirit to earlier latency-hiding methods, but is not restricted to memoryless dataflow types of programs. Most of our analysis is centered on the simulation of unit-delay rings on networks of workstations (NOWs) with arbitrary delays on the links. For example, given any collection of operations (including updates of large local memories or databases) that runs in t steps on a ring of n workstations with unit link delays, we show how to perform the same collection of operations in O(t log^3 n) steps on any connected, bounded-degree network of n / log^3 n workstations for which the average link delay is constant. (Here we assume that the bandwidth available on the NOW links is O(log n) times the bandwidth available on the ring links; an extra factor of log n appears in the slowdown without this assumption.) The result makes non-trivial use of redundant computation, which is required to avoid a slowdown that is proportional to the maximum link delay. The increase in memory and computational load on each workstation needed for the redundant computation is at most O(1). In the case where the average link delay d_ave in the network of workstations is not constant, the slowdown needed for the simulation degrades by an additional factor of O(d_ave). This is still far superior to a slowdown of Θ(d_max), which can occur without redundant computation. As a consequence of our work on rings, we can also derive emulations of a wide variety of other unit-delay network architectures on a NOW with high-latency links.
For example, we show how to emulate an N-node two-dimensional array with unit delays, using slowdown s, on any connected bounded-degree network of O(N/s) workstations with average link delay d_ave. The emulation is work-preserving and the slowdown is close to optimal for many configurations of the network of workstations. We also prove lower bounds that establish limits on the degree to which the high-latency links can be mitigated. These bounds demonstrate that it is easier to overcome latencies in dataflow types of computations than in computations that require access to large local databases.
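The “complementary slackness” idea the abstract refers to can be sketched with a toy cost model (illustrative only; the paper's contribution is precisely that it does not require the programmer to supply this slackness). A physical processor that time-slices c virtual processors can stay busy while each virtual processor's remote request, with latency d, is in flight:

```python
# Toy model of "complementary slackness" latency hiding (illustrative;
# not the paper's construction). A physical processor time-slices c
# virtual processors round-robin; each virtual step issues a remote
# request with latency d that must complete before that virtual
# processor runs again.

def physical_steps(t, d, c):
    # One round of c physical steps advances every virtual processor by
    # one step, but a round cannot finish early: if c < d the processor
    # stalls waiting for requests, so each round costs max(d, c) steps.
    return t * max(d, c)

def efficiency(t, d, c):
    # virtual work done (t steps on each of c virtual processors)
    # divided by physical steps spent
    return (t * c) / physical_steps(t, d, c)

print(efficiency(100, 8, 8))  # slackness c = d hides latency fully: 1.0
print(efficiency(100, 8, 1))  # no slackness: utilization 1/d = 0.125
```

With slackness c ≥ d the latency is fully overlapped with useful work; with c = 1 the processor idles d − 1 out of every d steps, which is the slowdown the paper's automatic method avoids.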


Symposium on the Theory of Computing | 1996

Automatic methods for hiding latency in high bandwidth networks (extended abstract)

Matthew Andrews; Tom Leighton; P. Takis Metaxas; Lisa Zhang



International Conference on Computer Graphics and Interactive Techniques | 1998

The art and science of multimedia: an interdisciplinary approach to teaching multimedia at a liberal arts college

Naomi Ribner; P. Takis Metaxas



Human Factors in Computing Systems | 1995

Interactive multimedia conference proceedings

Samuel A. Rebelsky; James Ford; Kenneth Harker; Fillia Makedon; P. Takis Metaxas; Charles B. Owen

In this paper we describe methods for mitigating the degradation in performance caused by high latencies in parallel and distributed networks. Our approach is similar in spirit to the “complementary slackness” technique for latency hiding, but has the advantage that the slackness does not need to be provided by the programmer and that large slowdowns are not needed in order to hide the latency. For example, given any algorithm that runs in T steps on an n-node ring with unit link delays, we show how to run the algorithm in O(T) steps on any n-node bounded-degree connected network with average link delay O(1). This is a significant improvement over prior approaches to latency hiding, which require slowdowns proportional to the maximum link delay (which can be quite large in comparison to the average delay). In the case when the network has average link delay d_ave, our simulation runs in O(d_ave · T) steps using n/d_ave processors, thereby preserving efficiency. We also show how to simulate an n × n array with unit link delays using slowdown O(d_ave^(2/3) log^(5/3) n) on an (n^2 d_ave^(-2/3) log^(-5/3) n)-node array with average link delay d_ave. We anticipate that our results will be of interest in the context of parallel and distributed computing on networks of workstations (NOWs). NOWs typically have the luxury of very high-bandwidth links but suffer from latency problems caused by long wires or delays in accessing the network. Our work suggests an approach for overcoming such problems in a way that can be made transparent to the programmer (e.g., by making the network appear to function as if it were comprised of low-latency links).
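The gap the abstract exploits, between the average and the maximum link delay, is easy to see with assumed numbers (not taken from the paper): a single slow link inflates d_max but barely moves d_ave.

```python
# Assumed link delays (illustrative, not from the paper): one slow link
# among many fast ones. A simulation whose slowdown tracks the maximum
# delay pays for the one bad link; one that tracks the average does not.
delays = [1] * 99 + [100]          # 100 links, one of them very slow

d_max = max(delays)                # naive slowdown ~ d_max
d_ave = sum(delays) / len(delays)  # redundant computation targets ~ d_ave

print(d_max)   # 100
print(d_ave)   # 1.99
```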


ACM Computing Surveys | 1995

Fundamental ideas for a parallel computing course

P. Takis Metaxas



Archive | 2010

From Obscurity to Prominence in Minutes: Political Speech and Real-Time Search

Eni Mustafaraj; P. Takis Metaxas



Archive | 2011

On the predictability of the U.S. elections through search volume activity

Catherine Lui; P. Takis Metaxas; Eni Mustafaraj

At Wellesley College, very rarely do the Fine Art and Computer Science faculty cross paths. At least that was the case until last year, when we taught an experimental course that brought together the work we were doing in our respective corners of multimedia into one class. The course was taught for a second time in the spring 1998 semester and has been incorporated into the curriculum. This paper describes our experience organizing and teaching such a course.


Archive | 1994

Parallel Computation: Practical Implementation of Algorithms and Machines

Peter A. Gloor; Fillia Makedon; James W. Matthews; Donald B. Johnson; P. Takis Metaxas; Matthew Cheyney; Scott Dynes

Computer technology has changed the way that conference proceedings can be archived and presented. No longer are researchers limited to printed text; electronic proceedings allow virtual participants in the conference to search the proceedings for ideas, to add and share annotations, and to create paths of related concepts through the proceedings. Proceedings that incorporate nontextual materials, such as audio, video, and slides from conference presentations provide further opportunities for virtual participants. In this demonstration of the DAGS interactive multimedia conference proceedings, we present an electronic conference proceedings interface that incorporates both papers and presentations. This interface presents a wide variety of features, admits nonlinear interactions, and suggests new roles for conference proceedings.


Journal of Universal Computer Science | 1998

The Roles of Video in the Design, Development, and Use of Interactive Electronic Conference Proceedings

Samuel A. Rebelsky; Fillia Makedon; P. Takis Metaxas; James Ford; Charles B. Owen; Peter A. Gloor

In this note we analyze an introductory undergraduate (or early graduate) course that spans the spectrum of what we call parallel computing. Topics should include algorithms, interconnection network architectures, theory, and programming. The focus of the course should be on the undergraduate level; that is, it should make no unreasonable assumptions about student background, and it should build on top of the core CS courses, in particular data structures and fundamental algorithms. Some knowledge of machine organization is useful but not required. Even though the field of parallel processing is rather young, many of its fundamental ideas have been known for a long time in other fields. Recognizing this fact is essential to our approach, as we propose to teach parallelism using metaphors and knowledge with which students are familiar, either because they appear in real life or because they were learned in other courses. As a general aphorism we may say that the basic idea behind parallelism is to employ many resources to solve a particular problem. For this approach to work, it is necessary to break down the problem into subproblems of lesser complexity, allocate the resources needed to solve the subproblems, and combine the partial solutions into a global one. The subproblems may be independent or overlapping; depending on the situation, a different technique is employed. A closer look at algorithms developed over the last fifteen years on a variety of models and platforms reveals a set of ideas (basic techniques) that is used repeatedly. These ideas should be taught in any introductory course. We mention them in the following and suggest a context in which each one can be introduced. Divide and conquer is one of the fundamental paradigms in sequential computation that happens to be inherently parallel. In this technique, the subproblems are independent and can be solved without communication.
The degree of concurrency to be achieved is bounded by the number of subproblems at hand. Divide and conquer can be introduced in the context of Eratosthenes' prime sieve, or computing π, where partitioning is straightforward, or quicksort, which has a slightly more complicated partitioning. Interestingly, it is conceptually easier to understand quicksort in the context of parallelism than in the context of sequential recursion. Pipelining is a parallel technique every student knows because it has the assembly-line analogy in real life. This technique is appropriate when a large number of similar, time-dependent problems is to be solved (sometimes they are called tasks to be executed). Because of their dependency, they cannot be performed in parallel; however, each one can be broken into a sequence of subproblems/subtasks, which then can be performed in a pipelined fashion. The degree of concurrency is limited by the number of identifiable subproblems.
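A minimal sketch of the divide-and-conquer paradigm described above, using the π-computation example: the integral of 4/(1+x²) over [0,1] is split into independent subranges, each solved in its own process, and the partial sums are combined. Chunk counts and names here are illustrative, not from the note.

```python
# Divide and conquer with independent subproblems: approximate pi as the
# integral of 4/(1+x^2) over [0,1] by the midpoint rule, split across
# processes. Chunk counts and names are illustrative.
from concurrent.futures import ProcessPoolExecutor

N = 100_000        # rectangles in the midpoint rule
CHUNKS = 4         # independent subproblems

def partial_sum(chunk):
    # integrate over this chunk's subrange of [0, 1]
    lo = chunk * (N // CHUNKS)
    hi = lo + N // CHUNKS
    h = 1.0 / N
    return h * sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(lo, hi))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=CHUNKS) as pool:
        pi = sum(pool.map(partial_sum, range(CHUNKS)))  # combine step
    print(round(pi, 6))  # 3.141593
```

Because the subranges are independent, the workers need no communication until the final combine, which is exactly the property the note highlights for this class of techniques.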


Archive | 2015

Spread and Skepticism: Metrics of Propagation on Twitter (Extended Abstract)

Samantha Finn; P. Takis Metaxas; Eni Mustafaraj

Collaboration


Dive into P. Takis Metaxas's collaborations.

Top Co-Authors

Fillia Makedon

University of Texas at Arlington


Tom Leighton

Massachusetts Institute of Technology


Charles B. Owen

Michigan State University
