Lukasz Lopacinski
Brandenburg University of Technology
Publication
Featured research published by Lukasz Lopacinski.
Design and Diagnostics of Electronic Circuits and Systems | 2015
Lukasz Lopacinski; Joerg Nolte; Steffen Buechner; Marcin Brzozowski; Rolf Kraemer
Transmission efficiency is an important topic for data link layer developers. The overhead of protocols and coding should be reduced to a minimum, since this maximizes the link throughput. This is especially important for high-speed networks, where even a small loss of efficiency degrades the throughput by several Gbps. We describe a redundancy balancing algorithm for an adaptive hybrid automatic repeat request scheme with Reed-Solomon coding. We introduce the testing environment, the most important technical issues, and results generated on a field programmable gate array. The hybrid automatic repeat request and Reed-Solomon algorithms are explained. We provide a mathematical description and a block diagram of the adaptation algorithm. All necessary algorithm simplifications are explained in detail. The algorithm can be represented by basic operations in hardware. In most cases, it finds the optimal coding for a predefined bit error rate.
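For illustration, a minimal sketch of this kind of redundancy selection, assuming RS(255, k) codewords over GF(2^8) and independent bit errors; the thresholds and code parameters below are illustrative, not those used in the paper.

# Minimal sketch (not the paper's hardware): choose the smallest number of
# Reed-Solomon parity symbols that meets a target codeword error rate for a
# given channel BER. RS(255, k) over GF(2^8) and independent bit errors
# are assumed here.
from math import comb

def symbol_error_prob(ber, bits_per_symbol=8):
    # An 8-bit symbol is corrupted if any of its bits flips.
    return 1.0 - (1.0 - ber) ** bits_per_symbol

def codeword_error_prob(ber, n, parity):
    # RS corrects up to t = parity // 2 wrong symbols per codeword.
    t = parity // 2
    ps = symbol_error_prob(ber)
    p_ok = sum(comb(n, i) * ps**i * (1.0 - ps)**(n - i) for i in range(t + 1))
    return 1.0 - p_ok

def pick_parity(ber, n=255, target=1e-6, max_parity=64):
    for parity in range(0, max_parity + 1, 2):
        if codeword_error_prob(ber, n, parity) <= target:
            return parity            # smallest redundancy that suffices
    return max_parity                # fall back to the strongest code

print(pick_parity(1e-5))   # little redundancy for a good channel
print(pick_parity(1e-3))   # much more redundancy for a bad channel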
Science and Information Conference | 2015
Lukasz Lopacinski; Marcin Brzozowski; Rolf Kraemer
This paper presents simulation results of the data link layer for 100Gbps wireless communication. A frame aggregation and segmentation, forward error correction codes, and a hybrid automatic repeat request scheme are in the scope. The frame segmentation suffers from the acknowledge frame length currently, and this aspect has to be improved. We favor Reed-Solomon codes because of relative high throughput and sufficient error performance comparing to convolutional codes. We verified hybrid automatic repeat request scheme in two versions. Additionally, we proposed some possible improvements for the mentioned techniques. The frame format and simulation models (two state Markov chains) are explained in details. At the end parameters of the data link layer and their influence on the performance is done. A physical layer turnaround time seems to be the leading factor. Finally, we mentioned a hardware accelerator processor for the Reed-Solomon codes.
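For illustration, a minimal sketch of a two-state Markov (Gilbert-Elliott style) error model of the kind used for such simulations; the transition probabilities and per-state bit error rates below are assumed values, not the ones from the paper.

# Two-state Markov error model: a "good" state with a low BER and a "bad"
# state with a high BER, with random transitions between them. All numbers
# are illustrative.
import random

def gilbert_elliott_bits(n_bits, p_gb=1e-4, p_bg=1e-2,
                         ber_good=1e-6, ber_bad=1e-2, seed=1):
    rng = random.Random(seed)
    bad = False
    errors = []
    for _ in range(n_bits):
        # State transition: good -> bad with p_gb, bad -> good with p_bg.
        if bad:
            if rng.random() < p_bg:
                bad = False
        else:
            if rng.random() < p_gb:
                bad = True
        ber = ber_bad if bad else ber_good
        errors.append(1 if rng.random() < ber else 0)
    return errors

errs = gilbert_elliott_bits(1_000_000)
print("simulated average BER:", sum(errs) / len(errs))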
IEEE International Conference on Ubiquitous Wireless Broadband | 2015
Lukasz Lopacinski; Marcin Brzozowski; Joerg Nolte; Rolf Kraemer; Steffen Buechner
One of the most computationally intensive operations in 100 Gbps wireless packet processing is forward error correction (FEC). We are using standard field programmable gate arrays (FPGAs) to prepare a data link layer demonstrator. Therefore, we need to find a highly parallelized FEC structure for our device. The difficulty is to design a 100 Gbps FEC engine that can be realized in an FPGA. In one of our previous papers, we proposed a solution based on convolutional coding, but the engine consumed logic equivalent to 23 FPGAs [1]. That solution cannot be implemented in today's FPGAs. In this paper, we propose parallel Reed-Solomon (RS) coders to reach the 100 Gbps throughput. The main task is to select the best candidates from the available correction codes for the targeted 100 Gbps wireless communication and fit them into one or two high-end FPGAs. Finally, we demonstrate a system with two FPGAs that achieves a continuous user data rate of 97 Gbps and negotiates the RS parameters according to the channel bit error rate.
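A back-of-the-envelope sketch of why parallel RS lanes are needed, assuming each lane processes one 8-bit symbol per clock cycle; the clock frequency and lane width are illustrative, not the demonstrator's actual figures.

# Estimate how many parallel RS lanes an FPGA needs to sustain a target
# throughput at a modest clock rate (one 8-bit symbol per lane per cycle
# is assumed here).
from math import ceil

def required_lanes(target_bps, f_clk_hz, bits_per_symbol=8, symbols_per_cycle=1):
    per_lane_bps = f_clk_hz * bits_per_symbol * symbols_per_cycle
    return ceil(target_bps / per_lane_bps)

# Example: 100 Gbps at a 200 MHz FPGA clock, one byte per lane per cycle.
print(required_lanes(100e9, 200e6))   # -> 63 parallel RS lanes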
URSI Atlantic Radio Science Conference | 2015
Lukasz Lopacinski; Marcin Brzozowski; Rolf Kraemer; Joerg Nolte; Steffen Buechner
Achieving 100 Gbps wireless transmission requires more than a very fast physical layer. The effort spent on the analog transceiver can be wasted due to the overhead induced by the higher network layers. Delays and latencies caused by duplex switching can dramatically reduce the goodput of the link: every microsecond of delay wastes 12.5 kB of data transfer. Therefore, we need to extend the frame size, but that leads to a higher packet error rate.
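The trade-off can be illustrated with a small goodput model, assuming a fixed per-frame turnaround delay and independent bit errors; the numbers below are illustrative, not measurements from the paper.

# Minimal goodput model: longer frames amortize the turnaround delay but
# fail more often, so an optimum frame size exists.
def goodput(frame_bytes, ber, rate_bps=100e9, turnaround_s=1e-6):
    bits = 8 * frame_bytes
    per = 1.0 - (1.0 - ber) ** bits             # packet error rate
    t_frame = bits / rate_bps + turnaround_s    # air time + turnaround
    return (1.0 - per) * bits / t_frame         # delivered bits per second

for size in (1_500, 12_500, 100_000, 1_000_000):
    print(f"{size:>9} B  ->  {goodput(size, ber=1e-7) / 1e9:5.1f} Gbps")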
Local Computer Networks | 2015
Steffen Büchner; Jörg Nolte; Rolf Kraemer; Lukasz Lopacinski; Reinhardt Karnapke
Today's applications and services are becoming more dependent on fast wireless communication, and data-rate demands of 100 Gbit/s can easily be expected in the upcoming years. However, fulfilling that demand is a task that cannot simply be solved by upscaling existing technologies. While most of the research tackles the challenges regarding the transmission technology, from the physical layer up to base-band processing, we focus on the challenges concerning the handling of that vast amount of data. The overall goal is to bring the transmission technology and the operating system together to create a suitable end-to-end communication solution. In this paper, we argue that communication can be understood as a soft real-time problem and show how this view helps to introduce parallelism into protocol processing.
Wireless Communications and Mobile Computing | 2017
Lukasz Lopacinski; Marcin Brzozowski; Rolf Kraemer
This paper presents a hardware processor for a 100 Gbps wireless data link layer. A serial Reed-Solomon decoder requires a clock of 12.5 GHz to fulfill the timing constraints of the transmission. Receiving a single Ethernet frame on a 100 Gbps physical layer may be faster than accessing DDR3 memory. Processing such fast streams on a state-of-the-art FPGA (field programmable gate array) requires a dedicated approach. Thus, the paper presents a lightweight RS FEC engine, frame fragmentation and aggregation, and a protocol with selective fragment retransmission. The implemented FPGA demonstrator achieves nearly 120 Gbps and accepts a bit error rate (BER) up to . Moreover, the redundancy added to the frames is adapted to the channel BER by a dedicated link adaptation algorithm. Finally, ASIC synthesis results are presented, including detailed statistics of the consumed energy per bit.
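For illustration, a minimal sketch of selective fragment retransmission with a per-fragment acknowledgement bitmap; the fragment size, loss probability, and control flow are assumptions for the sketch, not the authors' hardware design.

# Selective fragment retransmission: the receiver reports which fragments
# of an aggregated frame arrived, and only the missing ones are resent.
import random

def transmit(fragments, loss_prob, rng):
    # Returns the positions (within this round) that arrived intact.
    return {i for i in range(len(fragments)) if rng.random() > loss_prob}

def send_frame(fragments, loss_prob=0.05, seed=7):
    rng = random.Random(seed)
    pending = set(range(len(fragments)))     # fragment indices still missing
    rounds = 0
    while pending:
        sent = sorted(pending)
        received = transmit([fragments[i] for i in sent], loss_prob, rng)
        # Map positions in this round back to original fragment indices.
        pending -= {sent[pos] for pos in received}
        rounds += 1
    return rounds

frags = [bytes(64)] * 256                    # one aggregated frame, 256 fragments
print("transmission rounds needed:", send_frame(frags))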
Personal, Indoor and Mobile Radio Communications | 2016
Lukasz Lopacinski; Joerg Nolte; Steffen Buechner; Marcin Brzozowski; Rolf Kraemer
In this article, an improved turbo product decoding scheme is proposed. The new method is almost as effective as hard-decoded low-density parity-check codes (HD-LDPC). Due to the modified codeword shape, no external interleavers are required to correct burst errors. If the decoder uses Reed-Solomon (RS) codes, the error correction performance against burst errors is significantly higher than the gain provided by HD-LDPC with an external interleaver. An additional advantage is the possibility of designing a dedicated decoder for Virtex-7 field programmable gate array (FPGA) serial transceivers. In our case, we use the method in a 100 Gbps data link layer processor dedicated to wireless communication in the terahertz band. The targeted platform is a Virtex-7 FPGA, but the solution can easily be scaled to other technologies.
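The burst-spreading effect of a product-code layout can be sketched as follows; the block dimensions and burst length are illustrative and do not reflect the exact codeword shape used in the paper.

# Minimal sketch of the burst-spreading property of a product-code layout:
# data is written row by row into a 2D block, so a burst of consecutive
# symbol errors is distributed over many column codewords, each of which
# only has to correct a few symbols.
def burst_errors_per_column(block_width, burst_start, burst_len):
    errors = [0] * block_width
    for offset in range(burst_len):
        col = (burst_start + offset) % block_width   # row-wise write order
        errors[col] += 1
    return errors

# A burst of 40 consecutive symbols in a 32-column block hits every column
# at most twice, which a moderate column code can already correct.
cols = burst_errors_per_column(block_width=32, burst_start=100, burst_len=40)
print(max(cols))   # -> 2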
Local Computer Networks | 2016
Steffen Büchner; Lukasz Lopacinski; Jörg Nolte; Rolf Kraemer
With the recent roll-out of 100 Gbit Ethernet technology for high-performance computing applications and the technology for 100 Gbit wireless communication emerging on the horizon, it is just a matter of time until non-HPC applications will have to utilize these data rates. Since 10 Gbit/s protocol processing is already challenging for current server machines and simply upscaling the computing resources is no solution, new approaches are needed. In this paper, we present a stream-processing-based design approach for scalable communication protocols. The stream processing paradigm enables us to adapt the communication protocol processing to a certain hardware configuration without touching the protocol's implementation. We use this design technique to develop a prototype communication protocol for ultra-high-throughput applications and demonstrate how to adapt the protocol processing for a stable-throughput as well as a low-latency scenario. Finally, we present the evaluation results of the experiments, which show that the measured throughput and latency of the adapted protocol scale nearly linearly with the number of provided interfaces.
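For illustration, a minimal sketch of the stream-processing idea, with the protocol expressed as a chain of small stages over a stream of packets; the stages and data are toy examples, not the prototype protocol from the paper.

# The protocol is described as independent stages over a packet stream, so
# the same description can later be mapped onto one core, several cores, or
# several interfaces without changing the stage implementations.
def deframe(stream):
    for raw in stream:
        yield raw.strip(b"\x00")           # toy "deframing" stage

def check(stream):
    for payload in stream:
        if payload:                        # toy "validation" stage
            yield payload

def deliver(stream):
    return [payload.decode() for payload in stream]

packets = [b"hello\x00\x00", b"\x00", b"world\x00"]
print(deliver(check(deframe(packets))))    # -> ['hello', 'world']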
Iberian Conference on Information Systems and Technologies | 2017
Mohamed S. Abouzeid; Lukasz Lopacinski; Eckhard Grass; Thomas Kaiser; Rolf Kraemer
In this paper, a space-time block code (STBC) based on the golden number, the Golden code, is proposed for massive MIMO-RFID systems. Based on channel modelling for the massive MIMO-RFID system, the proposed space-time code is applied at the tag side. Simulation results show that the proposed code for massive MIMO-RFID systems outperforms the Alamouti code while reducing receiver complexity. The bit error rate (BER) performance of the proposed technique demonstrates that a high diversity gain is achieved for the tag, leading to a highly reliable and more robust RFID range. Furthermore, the link capacity between the tagged item and the reader can be increased. The proposed RFID technique provides superior performance compared to state-of-the-art RFID techniques.
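For reference, the standard forms of the two codes compared above, as given in the literature (general definitions, not parameters specific to this paper): with two transmit antennas and complex symbols $s_1,\dots,s_4$, the Alamouti code sends one symbol per channel use,
\[
X_{\mathrm{Alamouti}} = \begin{pmatrix} s_1 & s_2 \\ -s_2^{*} & s_1^{*} \end{pmatrix},
\]
while the Golden code, built on the golden number $\theta = \tfrac{1+\sqrt{5}}{2}$ with $\bar\theta = 1-\theta$, $\alpha = 1 + i(1-\theta)$ and $\bar\alpha = 1 + i(1-\bar\theta)$, sends two symbols per channel use:
\[
X_{\mathrm{Golden}} = \frac{1}{\sqrt{5}}
\begin{pmatrix}
\alpha\,(s_1 + s_2\theta) & \alpha\,(s_3 + s_4\theta) \\
i\,\bar\alpha\,(s_3 + s_4\bar\theta) & \bar\alpha\,(s_1 + s_2\bar\theta)
\end{pmatrix}.
\]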
Frequenz | 2017
Steffen Büchner; Lukasz Lopacinski; Rolf Kraemer; Jörg Nolte
Abstract: 100 Gbit/s wireless communication protocol processing stresses all parts of a communication system to the utmost. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires rethinking the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra-high data rates of 100 Gbit/s and beyond. Furthermore, we present an ultra-low-power, adaptable, and massively parallelized FEC (forward error correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with very low protocol processing overhead.
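For orientation, the quoted energy figures translate into the following FEC power at line rate (simple arithmetic, assuming a sustained 100 Gbit/s stream):
\[
P = R \cdot E_{\mathrm{bit}}, \qquad
100\,\mathrm{Gbit/s} \times 1\,\mathrm{pJ/bit} = 0.1\,\mathrm{W}, \qquad
100\,\mathrm{Gbit/s} \times 13\,\mathrm{pJ/bit} = 1.3\,\mathrm{W}.
\]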