Publication


Featured research published by Juan R. Loaiza.


International Conference on Data Engineering | 2015

Oracle Database In-Memory: A dual format in-memory database

Tirthankar Lahiri; Shasank Chavan; Maria Colgan; Dinesh Das; Amit Ganesh; Michael J. Gleeson; Sanket Hase; Allison L. Holloway; Jesse Kamp; Teck-Hua Lee; Juan R. Loaiza; Neil Macnaughton; Vineet Marwah; Niloy Mukherjee; Atrayee Mullick; Sujatha Muthulingam; Vivekanandhan Raja; Marty Roth; Ekrem Soylemez; Mohamed Zait

The Oracle Database In-Memory Option allows Oracle to function as the industry's first dual-format in-memory database. Row formats are ideal for OLTP workloads, which typically use indexes to limit their data access to a small set of rows, while column formats are better suited for analytic operations, which typically examine a small number of columns from a large number of rows. Since no single data format is ideal for all types of workloads, our approach was to allow data to be simultaneously maintained in both formats with strict transactional consistency between them.
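To make the dual-format idea concrete, here is a minimal Python sketch of a toy table that mirrors every insert into both a row store and a column store under one lock, so point lookups and column aggregates see the same data. It is purely illustrative; the class and method names are hypothetical, and it does not reflect Oracle's actual DBIM implementation.

```python
# Illustrative sketch only: a toy dual-format table that keeps a row store
# (good for OLTP point lookups) and a column store (good for analytic scans)
# consistent under one lock. Oracle's actual DBIM uses far more sophisticated
# journaling and repopulation; all names here are hypothetical.
import threading

class DualFormatTable:
    def __init__(self, columns):
        self.columns = columns
        self.rows = {}                          # row_id -> tuple (row format)
        self.cols = {c: [] for c in columns}    # column -> values (column format)
        self.lock = threading.Lock()            # stands in for transactional consistency

    def insert(self, row_id, values):
        """Insert a row and mirror it into the columnar format atomically."""
        with self.lock:
            self.rows[row_id] = tuple(values)
            for c, v in zip(self.columns, values):
                self.cols[c].append(v)

    def get(self, row_id):
        """OLTP-style point lookup served from the row format."""
        with self.lock:
            return self.rows.get(row_id)

    def column_sum(self, column):
        """Analytic-style aggregate served from the column format."""
        with self.lock:
            return sum(self.cols[column])

t = DualFormatTable(["region", "amount"])
t.insert(1, ["EMEA", 100])
t.insert(2, ["APAC", 250])
print(t.get(1), t.column_sum("amount"))   # ('EMEA', 100) 350
```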


Very Large Data Bases | 2015

Distributed architecture of Oracle database in-memory

Niloy Mukherjee; Shasank Chavan; Maria Colgan; Dinesh Das; Michael J. Gleeson; Sanket Hase; Allison L. Holloway; Hui Jin; Jesse Kamp; Kartik Kulkarni; Tirthankar Lahiri; Juan R. Loaiza; Neil Macnaughton; Vineet Marwah; Atrayee Mullick; Andy Witkowski; Jiaqi Yan; Mohamed Zait

Over the last few years, the information technology industry has witnessed revolutions in multiple dimensions. Increasingly ubiquitous sources of data have posed two connected challenges to data management solutions: processing unprecedented volumes of data, and providing ad-hoc real-time analysis in mainstream production data stores without compromising regular transactional workload performance. In parallel, computer hardware systems are scaling out elastically, scaling up in the number of processors and cores, and increasing main memory capacity extensively. These data processing challenges, combined with the rapid advancement of hardware systems, have necessitated the evolution of a new breed of main-memory databases optimized for mixed OLTAP environments and designed to scale. The Oracle RDBMS In-Memory Option (DBIM) is an industry-first distributed dual-format architecture that allows a database object to be stored in a columnar format in main memory, highly optimized to break performance barriers in analytic query workloads, while simultaneously maintaining transactional consistency with the corresponding OLTP-optimized row-major format persisted in storage and accessed through the database buffer cache. In this paper, we present the distributed, highly available, and fault-tolerant architecture of the Oracle DBIM that enables the RDBMS to transparently scale out in a database cluster, both in terms of memory capacity and query processing throughput. We believe that the architecture is unique among all mainstream in-memory databases. It allows complete application-transparent, extremely scalable, and automated distribution of Oracle RDBMS objects in memory across a cluster, as well as across multiple NUMA nodes within a single server. It seamlessly provides distribution awareness to the Oracle SQL execution framework through affinitized, fault-tolerant parallel execution within and across servers, without explicit optimizer plan changes or query rewrites.
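As a rough illustration of distribution-aware execution, the sketch below hash-assigns columnar chunks of a table to cluster nodes and runs an aggregate as node-local scans whose partial results are merged. It is a conceptual analogue only, with hypothetical names; it is not Oracle's architecture.

```python
# Illustrative sketch only: hash-distributing columnar chunks across cluster
# nodes and running an aggregate as node-local scans whose partial results are
# merged, mimicking the idea of distribution-aware parallel execution.
from collections import defaultdict

def distribute_chunks(chunks, nodes):
    """Assign each columnar chunk to a node by hashing its chunk id."""
    placement = defaultdict(list)
    for chunk_id, chunk in enumerate(chunks):
        node = nodes[chunk_id % len(nodes)]
        placement[node].append(chunk)
    return placement

def local_scan(chunks, predicate):
    """Node-local scan: each node only touches the chunks it owns."""
    return sum(v for chunk in chunks for v in chunk if predicate(v))

# A table's 'amount' column split into chunks, spread across three nodes.
chunks = [[120, 40, 310], [55, 900, 15], [70, 230], [400, 5, 60]]
placement = distribute_chunks(chunks, ["node1", "node2", "node3"])

# Distribution-aware execution: scan locally on each node, then merge.
partials = {node: local_scan(owned, lambda v: v > 50) for node, owned in placement.items()}
print(partials, "total =", sum(partials.values()))
```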


International Conference on Data Engineering | 2016

Fault-tolerant real-time analytics with distributed Oracle Database In-memory

Niloy Mukherjee; Shasank Chavan; Maria Colgan; Michael J. Gleeson; Xiaoming He; Allison L. Holloway; Jesse Kamp; Kartik Kulkarni; Tirthankar Lahiri; Juan R. Loaiza; Neil Macnaughton; Atrayee Mullick; Sujatha Muthulingam; Vivekanandhan Raja; Raunak Rungta

Modern data management systems are required to address new breeds of OLTAP applications. These applications demand real-time analytical insights over massive data volumes, not only on dedicated data warehouses but also on live mainstream production environments where data gets continuously ingested and modified. Oracle introduced the Database In-Memory Option (DBIM) in 2014 as a unique dual row and column format architecture aimed at the emerging space of mixed OLTAP applications along with traditional OLAP workloads. The architecture allows both the row format and the column format to be maintained simultaneously with strict transactional consistency. While the row format is persisted in underlying storage, the column format is maintained purely in memory without incurring additional logging overheads in OLTP. Maintaining columnar data purely in memory creates the need for distributed data management architectures: analytics performance suffers severe regressions in single-server architectures during server failures, as it takes non-trivial time to recover and rebuild terabytes of in-memory columnar format. A distributed and distribution-aware architecture therefore becomes necessary to provide real-time high availability of the columnar format for glitch-free in-memory analytic query execution across server failures and additions, besides providing scale-out of capacity and compute to address real-time throughput requirements over large volumes of in-memory data. In this paper, we present the high-availability aspects of the distributed architecture of Oracle DBIM, which include an extremely scaled-out, application-transparent column format duplication mechanism, distributed query execution on the duplicated in-memory columnar format, and several scenarios of fault-tolerant analytic query execution across the in-memory column format at various stages of redistribution of columnar data during cluster topology changes.
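The following sketch illustrates the duplication idea in miniature: each columnar chunk gets a primary and a secondary owner on different nodes, and a scan falls back to the surviving copy when a node fails rather than waiting for an in-memory rebuild. It is hypothetical and conceptual, not Oracle's high-availability mechanism.

```python
# Illustrative sketch only: placing a duplicate copy of each columnar chunk on
# a second node so a scan can keep running when one server fails, instead of
# waiting to rebuild the in-memory format. All names are hypothetical.

def place_with_duplicates(num_chunks, nodes):
    """Give every chunk a primary and a secondary owner on different nodes."""
    placement = {}
    for chunk_id in range(num_chunks):
        primary = nodes[chunk_id % len(nodes)]
        secondary = nodes[(chunk_id + 1) % len(nodes)]
        placement[chunk_id] = (primary, secondary)
    return placement

def scan_chunk(chunk_id, placement, alive):
    """Pick whichever copy of the chunk lives on a surviving node."""
    for node in placement[chunk_id]:
        if node in alive:
            return f"chunk {chunk_id} scanned on {node}"
    raise RuntimeError(f"both copies of chunk {chunk_id} are unavailable")

nodes = ["node1", "node2", "node3"]
placement = place_with_duplicates(6, nodes)

# node2 fails: chunks whose primary was node2 are served from their duplicates.
alive = {"node1", "node3"}
for chunk_id in placement:
    print(scan_chunk(chunk_id, placement, alive))
```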


Very Large Data Bases | 2015

Engineering database hardware and software together

Juan R. Loaiza

Since its inception, Oracle's database software has primarily run on customer-configured off-the-shelf hardware. A decade ago, the architecture of conventional systems started to become a bottleneck, and Oracle developed the Oracle Exadata Database Machine to optimize the full hardware and software stack for database workloads. Exadata is based on a scale-out architecture of database servers and storage servers that optimizes both OLTP and analytic workloads while hosting hundreds of databases simultaneously on the same system. By using database-specific protocols for storage and networking, we bypass limitations imposed by conventional network and storage layers. Exadata is now deployed at thousands of enterprises, including 4 of the 5 largest banks, telecoms, and retailers, for varied workloads such as interbank funds transfers, e-commerce, ERP, Cloud SaaS applications, and petabyte data warehouses. Five years ago, Oracle initiated a project to extend our database stack beyond software and systems and into the architecture of the microprocessor itself. The goal of this effort is to dramatically improve the performance, reliability, and cost effectiveness of a new generation of database machines. The new SPARC M7 processor is the first step. The M7 is an extraordinarily fast conventional processor with 32 cores per socket and an extremely high-bandwidth memory system. Added to its conventional processing capabilities are 32 custom on-chip database co-processors that run database searches at full memory bandwidth rates and decompress data in real time to increase memory bandwidth and capacity. Further, the M7 implements innovative fine-grained memory protection to secure sensitive business data. In the presentation we will describe how Oracle's engineering teams integrate software and hardware at all levels to achieve breakthrough performance, reliability, and security for the database and the rest of the modern data processing stack.
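As a loose software analogue of scanning compressed data directly, the sketch below dictionary-encodes a column and evaluates an equality filter on the integer codes instead of decoding every value first. It is purely conceptual; it does not model the SPARC M7 co-processors, and all names are hypothetical.

```python
# Illustrative sketch only: evaluating a filter directly on dictionary-encoded
# column codes rather than decompressing every value first -- the kind of work
# the abstract describes offloading to on-chip database co-processors.

def dictionary_encode(values):
    """Replace each value with a small integer code plus a lookup dictionary."""
    dictionary = sorted(set(values))
    code_of = {v: i for i, v in enumerate(dictionary)}
    return [code_of[v] for v in values], dictionary

def scan_equals(codes, dictionary, target):
    """Find matching row positions by comparing codes, not decoded strings."""
    try:
        target_code = dictionary.index(target)   # one lookup, then integer compares
    except ValueError:
        return []
    return [pos for pos, code in enumerate(codes) if code == target_code]

regions = ["EMEA", "APAC", "EMEA", "AMER", "APAC", "EMEA"]
codes, dictionary = dictionary_encode(regions)
print(scan_equals(codes, dictionary, "EMEA"))   # [0, 2, 5]
```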


Archive | 1997

Pluggable tablespaces for database systems

William Bridge; Jonathan D. Klein; J. William Lee; Juan R. Loaiza; Alex Tsukerman; Gianfranco Putzolu


Archive | 1997

Automatic failover for clients accessing a resource through a server

Hasan Rizvi; Ekrem Soylemez; Juan R. Loaiza; Robert J. Jenkins


Archive | 2001

System and method for providing fine-grained temporal database access

Jonathan D. Klein; Amit Ganesh; Juan R. Loaiza; Gary C. Ngai


Archive | 1998

System and method for scheduling a resource according to a preconfigured plan

Ann Rhee; Sumanta Chatterjee; Juan R. Loaiza; Kesavan Srinivasan


Archive | 1997

Planned session termination for clients accessing a resource through a server

Hasan Rizvi; Ekrem Soylemez; Juan R. Loaiza


Archive | 1997

Method and apparatus for restoring a portion of a database

Cornelius G. Doherty; Gregory Pongracz; William Bridge; Juan R. Loaiza; Mark Ramacher
