Featured Research

General Literature

One More Revolution to Make: Free Scientific Publishing

Computer scientists are in a position to create new, free, high-quality journals. So what would it take?

General Literature

Oprema -- The Relay Computer of Carl Zeiss Jena

The Oprema (Optikrechenmaschine = computer for optical calculations) was a relay computer whose development was initiated by Herbert Kortum and which was designed and built by a team under the leadership of Wilhelm Kaemmerer at Carl Zeiss Jena (CZJ) in 1954 and 1955. Basic experiments, design and construction of machine 1 were all done, partly concurrently, in the remarkably short time of about 14 months. Completed shortly after the electronic G2 of Heinz Billing in Goettingen, it was the 7th universal computer in Germany and the 1st in the GDR. The Oprema consisted of two identical machines. One machine consisted of about 8,300 relays, 45,000 selenium rectifiers and 250 km of cable. The main reason for the construction of the Oprema was the computational needs of CZJ, which was the leading company for optics and precision mechanics in the GDR. During its lifetime (1955-1963) the Oprema was used by CZJ and a number of other institutes and companies in the GDR. The paper presents new details of the Oprema project and of the arithmetic operations implemented in the Oprema. Additionally, it briefly covers the lives of the two protagonists, W. Kaemmerer and H. Kortum, and draws some comparisons with other early projects, namely Colossus, ASCC/Mark 1 and ENIAC. Finally, it discusses the question of whether Kortum should be regarded as a German computer pioneer.

General Literature

Paths to Unconventional Computing: Causality in Complexity

I describe my path to unconventionality in my exploration of theoretical and applied aspects of computation, aimed at revealing the algorithmic and reprogrammable properties and capabilities of the world, in particular in relation to applications of algorithmic complexity to reshaping molecular biology and to tackling the challenges of causality in science.

General Literature

Philosophical Solution to P=?NP: P is Equal to NP

The P=?NP problem is philosophically solved by showing that P is equal to NP in the random access machine with unit-cost multiplication (MRAM) model. It is shown that the MRAM model empirically best models computational hardness. The P=?NP problem is shown to be a scientific rather than a mathematical problem. The assumptions involved in the current definition of the P=?NP problem as a problem involving nondeterministic Turing machines (NDTMs) from axiomatic automata theory are criticized. The problem is also shown to be a problem neither in pure nor in applied mathematics. The details of the MRAM model and of the well-known Hartmanis and Simon construction, which shows how to encode and simulate NDTMs on MRAM machines, are described. Since the computational power of MRAMs is the same as that of NDTMs, P is equal to NP. The paper shows that the justification of the NDTM P=?NP problem by means of a letter from Kurt Gödel to John von Neumann is incorrect, by showing that von Neumann explicitly rejected automata models of computational hardness and used his computer architecture, which is exactly the MRAM model, for modeling computation. The paper argues that Deolalikar's scientific solution, which shows P not equal to NP when assumptions from statistical physics are used, needs to be revisited.
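
As an aside not taken from the paper, a small worked calculation gives a feel for why unit-cost multiplication is so powerful. Starting from 2 and squaring repeatedly, at one MRAM step per multiplication,

    x_0 = 2, \qquad x_{k+1} = x_k \cdot x_k \quad\Longrightarrow\quad x_k = 2^{2^k},

so after n steps a single register holds a number with 2^n bits, and register-wide operations then touch exponentially many bits at once. This register-level parallelism is what the Hartmanis-Simon simulation exploits; their 1974 result identifies polynomial time on unit-cost MRAMs with PSPACE, a class that contains NP.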

General Literature

Questions for a Materialist Philosophy Implying the Equivalence of Computers and Human Cognition

Issues related to a materialist philosophy are explored concerning the implied equivalence of computers running software and human observers. One issue explored concerns the measurement process in quantum mechanics. Another concerns the nature of experience as revealed by the existence of dreams. Some difficulties stemming from a materialist philosophy as regards these issues are pointed out. For example, a gedankenexperiment involving what has been called "negative" observation is discussed that illustrates the difficulty with a materialist assumption in quantum mechanics. Based on an exploration of these difficulties, specifications are briefly outlined that would provide a means to demonstrate the equivalence of computers running software and human experience, given a materialist assumption.

General Literature

Re-run, Repeat, Reproduce, Reuse, Replicate: Transforming Code into Scientific Contributions

Scientific code is not production software. Scientific code participates in the evaluation of a scientific hypothesis. This imposes specific constraints on the code that are often overlooked in practice. We articulate, with a small example, five characteristics that scientific code in computational science should possess: it should be re-runnable, repeatable, reproducible, reusable and replicable.
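
As a minimal sketch of two of these characteristics (my own illustration in Python, not the paper's example, and with names of my own choosing), the toy "experiment" below is repeatable because its randomness is controlled by an explicit seed, and it moves toward reproducibility by recording the environment alongside the result:

    # Toy stochastic "experiment": repeatable via an explicit seed,
    # reproducible (in part) via recorded provenance. Illustrative only.
    import json
    import platform
    import random
    import sys

    def walk(seed, steps=1000):
        """One-dimensional random walk; the same seed yields the same result."""
        rng = random.Random(seed)
        position = 0
        for _ in range(steps):
            position += rng.choice((-1, 1))
        return position

    if __name__ == "__main__":
        result = walk(seed=42)
        # Store the result together with the provenance needed to rerun it later.
        record = {
            "result": result,
            "seed": 42,
            "python": sys.version,
            "platform": platform.platform(),
        }
        print(json.dumps(record, indent=2))

Reusability and replicability ask for more than a script like this can show: code organized so it can serve new analyses, and descriptions precise enough that an independent implementation could obtain the same findings.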

General Literature

Real Time Models of the Asynchronous Circuits: The Delay Theory

This book chapter introduces delay theory, whose purpose is to model the asynchronous circuits of digital electrical engineering with ordinary and differential pseudo-Boolean equations.
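
For orientation only (this illustration is mine, not drawn from the book), two standard ways of relating a Boolean-valued input signal u : \mathbb{R} \to \{0,1\} to a delayed output x are

    x(t) = u(t - d)                               (pure delay of duration d > 0)
    x(t) = \bigwedge_{\xi \in [t-d,\, t]} u(\xi)  (an inertial-style condition: x is 1 only when u has been 1 throughout the last d time units, so 1-pulses shorter than d are filtered out)

Equations of this general flavor over \mathbb{R} \to \{0,1\} signals give a sense of what modeling delays with pseudo-Boolean equations can look like.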

General Literature

Recruitment, Preparation, Retention: A case study of computing culture at the University of Illinois at Urbana-Champaign

Computer science is seeing a decline in enrollment at all levels of education, including undergraduate and graduate study. This paper reports on the results of a study conducted at the University of Illinois at Urbana-Champaign which evaluated students' attitudes regarding three areas that can contribute to improved enrollment in the Department of Computer Science: recruitment, preparation and retention. Two themes emerged from our results. First, the department's tight research focus appears to draw significant attention away from other activities -- such as teaching, service, and other community-building activities -- that are necessary for a department's excellence. Yet, as demonstrated by our second theme, one partial solution is to better promote to its students and faculty the activities the department already employs. Based on our results, we make recommendations for improvements and enhancements based on the current state of practice at peer institutions.

General Literature

Research Methods in Computer Science: The Challenges and Issues

Research methods are an essential part of conducting any research project. Although they have been theorized and summarized based on best practices, every field of science requires an adaptation of the overall approaches to perform its research activities. In addition, any specific research project needs particular adjustments that specialize the generalized approach to suit the project at hand. However, unlike most well-established science disciplines, computing research is not supported by well-defined, globally accepted methods. This is because of the field's infancy and the ambiguity of its definition, on the one hand, and its extensive coverage of and overlap with other fields, on the other. This article discusses research methods in science and engineering in general and in computing in particular. It shows that, despite several special parameters that make research in computing rather unique, it still follows the same steps that any other scientific research would. The article also shows the particularities that researchers need to consider when they conduct research in this field.

General Literature

Rethinking Abstractions for Big Data: Why, Where, How, and What

Big data refers to large and complex data sets that, under existing approaches, exceed the capacity and capability of current compute platforms, systems software, analytical tools and human understanding. Numerous lessons on the scalability of big data can already be found in asymptotic analysis of algorithms and from the high-performance computing (HPC) and applications communities. However, scale is only one aspect of current big data trends; fundamentally, current and emerging problems in big data are a result of unprecedented complexity--in the structure of the data and how to analyze it, in dealing with unreliability and redundancy, in addressing the human factors of comprehending complex data sets, in formulating meaningful analyses, and in managing the dense, power-hungry data centers that house big data. The computer science solution to complexity is finding the right abstractions, those that hide as much triviality as possible while revealing the essence of the problem that is being addressed. The "big data challenge" has disrupted computer science by stressing to the very limits the familiar abstractions which define the relevant subfields in data analysis, data management and the underlying parallel systems. As a result, not enough of these challenges are revealed by isolating abstractions in a traditional software stack or standard algorithmic and analytical techniques, and attempts to address complexity either oversimplify or require low-level management of details. The authors believe that the abstractions for big data need to be rethought, and this reorganization needs to evolve and be sustained through continued cross-disciplinary collaboration.

