An Implementation and Analysis of a Kernel Network Stack in Go with the CSP Style
Harshal Sheth & Aashish Welling
Abstract
Modern operating system kernels are written in lower-level languages such as C. Although the low-level functionalities of C are often useful within kernels, they also give rise to several classes of bugs. Kernels written in higher level languages avoid many of these potential problems, at the possible cost of decreased performance. This research evaluates the advantages and disadvantages of a kernel written in a higher level language. To do this, the network stack subsystem of the kernel was implemented in Go with the Communicating Sequential Processes (CSP) style. Go is a high-level programming language that supports the CSP style, which recommends splitting large tasks into several smaller ones running in independent “threads”. Modules for the major networking protocols, including Ethernet, ARP, IPv4, ICMP, UDP, and TCP, were implemented. In this study, the implemented Go network stack, called GoNet, was compared to a representative network stack written in C. The GoNet code is more readable and generally performs better than that of its C stack counterparts. From this, it can be concluded that Go with CSP style is a viable alternative to C for the language of kernel implementations.
Acknowledgments

We would like to thank our mentor, Cody Cutler, for his constant guidance and encouragement. We would also like to acknowledge Prof. Frans Kaashoek, who recommended an initial direction for this project. Finally, we would like to thank the MIT PRIMES program for providing us with this opportunity.
Modern operating systems utilize a kernel to interface with the hardware available to them. Most current operating system kernels are written in the C programming language, which allows them to perform low-level functions effectively. However, C also allows a variety of problems to occur. This paper explores the viability of writing a kernel with CSP style in the Go programming language as a means of avoiding some of the problems associated with current operating system kernels. The network stack, one of many kernel subsystems, was built to evaluate the advantages and disadvantages of this approach. To ensure that using Go with CSP style does not hurt the performance of the network stack, the stack’s performance was then compared to that of a conventional C language network stack. The readability, modularity, and concurrency of the two network stacks’ code were also evaluated.
Computers are an integral part of modern day society. Computers are expected to be both reliable and efficient. This requires a stable and bug-free operating system kernel, as otherwise, the bugs within the kernel may make other user applications operate unstably and unreliably. The operating system kernel serves as a bridge between the applications and users of a computer and the hardware of the machine. The kernel manages the system resources, including memory and hard disk space, and handles the scheduling of processes on the CPUs. It also provides users access to input and output devices and enables network access. User applications run on top of the kernel, and make use of the kernel’s functionality through its library of system calls.

Most commodity operating system kernels are implemented in the C programming language. C is the most popular kernel language because it gives a high degree of control over memory usage and other lower level aspects of the program operation. This freedom comes at the cost of allowing problems such as double-free bugs (freeing memory twice), out of bounds errors on arrays (accessing memory that is not part of an array), and deadlocks. It also does not ensure type safety (preventing misinterpretation of data by interpreting data of one type as another type).

As the number of microprocessor cores per computer increases [1], the ability to take advantage of multithreading is increasingly advantageous to a kernel’s overall design. However, kernels implemented in C are not able to easily take advantage of all of the cores of a machine, because C does not lend itself to leveraging modern microprocessor features. Threads in C, which are used to distribute work among cores, are expensive in both memory and CPU usage; synchronizing these threads is even more difficult and sometimes convoluted.
One way to overcome some of these drawbacks is to implement the kernel in a higher level language. This may eliminate many of the problems associated with kernels implemented in C. For example, many higher level languages provide array bounds checking and garbage collection. However, programs that are written in higher level languages generally run slower than those written in C, and sometimes incur additional overheads from interpreters, automatic memory management, and garbage collection. In addition, the more abstracted higher level languages could make it difficult to perform some of the kernel’s low-level tasks.
There have been a few attempts to implement kernels using higher level languages. None of these have achieved widespread adoption, for a variety of reasons.
Mirage
Mirage is a Linux Foundation project that focuses on turning a web application into a “standalone, specialized unikernel that runs under the Xen hypervisor” [2]. It contains rudimentary implementations of the kernel subsystems, written in OCaml. Because it is built for use within a unikernel, a single-user single-process kernel designed specifically to run in a Virtual Machine, it does not satisfy the needs of most users. In addition, it is not able to achieve parallelism on multiple cores, as it was built for running within a single process.
Pycorn
Pycorn is an operating system written in Python. It currently is compatible with only 16-bit ARM-based microcomputers [3]. Because Python is an interpreted language, Pycorn is extremely slow in practice, and performance is not one of the project’s goals. Because Pycorn has limited target platforms and is not focused on performance, it has never been fit for widespread use. The project has been inactive since late 2012.
Since implementing an entire kernel is a massive engineering effort, a single kernel subsystem was implemented instead. The subsystem chosen was the network stack, which is a necessary feature of any kernel. The network stack’s functionality and performance can easily be tested, making it an ideal subsystem to implement.
A kernel subsystem was built in Go to demonstrate the comparative advantages of writing the kernel in a higher level language. Go, specifically, was chosen because it lends itself to the Communicating Sequential Processes (CSP) style. The CSP style promotes deconstructing complicated tasks into smaller, more manageable subtasks. These subtasks can be done with individual processes, which communicate with each other to complete the original, larger task. Goals of the CSP style include helping the programmer design, implement, and validate complex computer systems [4], and this is especially important when designing software as complicated as a kernel. Go provides a thread-safe way of using CSP style through its version of threads, called goroutines, and a synchronized communication construct called a channel. The Go runtime automatically schedules goroutines onto the physical cores of the system. The CSP style allows the Go programmer to easily take advantage of all of the cores of a computer while maintaining readability and reducing bugs. This is because the network stack can be split into multiple subtasks that can all run inside their own goroutines, which are dynamically scheduled to efficiently take advantage of all available cores. This also improves the modularity of the code, which improves readability and makes it easier to debug. The CSP style is only feasible in a garbage collected language. Go provides the necessary garbage collection as well as strong typing, which eliminates entire classes of bugs including incorrect type casting, double free errors, and use after free errors. This, among other things, makes Go code simple and easy to maintain. Furthermore, Go’s defer statement allows for easier cleanup at the end of functions, reducing the likelihood of deadlocks from neglecting to unlock mutexes.
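The constructs described above can be sketched in a few lines of Go. This is a generic illustration, not GoNet code: `sum` and `counter` are made-up names, showing channel-based hand-off between goroutines and defer-based mutex cleanup.

```go
package main

import (
	"fmt"
	"sync"
)

// A minimal CSP-style example: a worker goroutine receives values over a
// channel instead of sharing mutable state, and reports its result on a
// second channel.
func sum(in <-chan int, out chan<- int) {
	total := 0
	for v := range in {
		total += v
	}
	out <- total
}

// counter shows Go's defer statement: the unlock is registered up front,
// so the mutex is released on every return path.
type counter struct {
	mu sync.Mutex
	n  int
}

func (c *counter) inc() int {
	c.mu.Lock()
	defer c.mu.Unlock() // runs when inc returns, so the unlock cannot be forgotten
	c.n++
	return c.n
}

func main() {
	in := make(chan int)
	out := make(chan int, 1)
	go sum(in, out)
	for i := 1; i <= 4; i++ {
		in <- i
	}
	close(in)
	fmt.Println(<-out) // 10

	c := &counter{}
	c.inc()
	fmt.Println(c.inc()) // 2
}
```

The channel both transfers the data and synchronizes the two goroutines, which is the property the CSP style relies on throughout GoNet.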
The advantages of Go and CSP style may come with a cost. For example, garbage collection has a performance overhead and causes the entire Go runtime to suspend briefly [5]. In addition, using multiple cores requires communication between these cores, which can be expensive. The purpose of this project is to determine whether the benefits of Go, a higher level language, and CSP style outweigh the disadvantage of having decreased raw speed.
To implement a fully independent network stack, the Go stack, named GoNet, was built on the tap interface. For full functionality, all basic network protocols, including Ethernet, ARP, IPv4, ICMP, UDP, and TCP, were implemented. To ensure that GoNet’s performance was not impacted, latency and throughput were measured and compared to that of a similar network stack written in C.
To fully simulate an independent network stack, GoNet operates on a virtual network interface called tap. A tap interface is a virtual network interface that mimics actual hardware with simple software. GoNet reads and writes to the tap interface as if it were a normal, physical interface, and the tap interface, in conjunction with the bridge interface, acts as a router into a subnetwork of the host operating system. This allows GoNet to even utilize its own MAC address and IP address, and to connect to external networks.
GoNet implements protocols on the data-link, network, and transport layers [6]. Each layer runs independently of the other layers and protocols, as shown in Figure 1. This allows for increased concurrency, as well as increased efficiency under high loads.

The implementation of each protocol uses a similar structure: a “packet dealer”. The IP packet dealer is illustrated in Figure 2. The packet dealer reads packets from the lower layer, transmitted through channels. Channels are represented by the arrows in Figures 1 and 2. The IP packet dealer sends packets to different IP readers running in their own goroutines, represented in Figure 2 by separate boxes. As IP readers finish processing the packets they receive from the IP packet dealer, they forward the processed data to the next layer packet dealers.
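The packet dealer structure can be sketched as follows. This is a hypothetical illustration of the pattern rather than GoNet's actual code: the `packet` type, the `packetDealer` function, and the reader registration map are all assumed names.

```go
package main

import "fmt"

// packet is a simplified stand-in for a received frame: a protocol
// number plus its payload.
type packet struct {
	proto   uint16
	payload []byte
}

// packetDealer is the dispatching goroutine: it reads packets arriving
// from the layer below and forwards each one, over a channel, to the
// reader registered for its protocol number.
func packetDealer(in <-chan packet, readers map[uint16]chan<- packet) {
	for p := range in {
		if r, ok := readers[p.proto]; ok {
			r <- p // hand off; the reader runs in its own goroutine
		}
		// packets with no registered reader are dropped
	}
}

func main() {
	in := make(chan packet)
	ipv4 := make(chan packet, 1)
	arp := make(chan packet, 1)
	readers := map[uint16]chan<- packet{0x0800: ipv4, 0x0806: arp}
	go packetDealer(in, readers)

	in <- packet{proto: 0x0800, payload: []byte("ip")}
	in <- packet{proto: 0x0806, payload: []byte("arp")}
	fmt.Println(string((<-ipv4).payload)) // ip
	fmt.Println(string((<-arp).payload))  // arp
}
```

Because the dealer only dispatches and never processes, slow handling of one protocol does not block delivery to the others.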
Ethernet
The Ethernet layer allows for different network layer protocols to bind to a specific Ethernet protocol. For example, the IPv4 implementation binds to Ethernet protocol 2048 to receive all IPv4 packets, and the ARP implementation binds to Ethernet protocol 2054.

Figure 1: This flowchart illustrates how each network stack protocol receives and processes independently from the other protocols. Each protocol runs in its own set of goroutines. Therefore, each protocol can run concurrently with one another. Received data are passed up the stack in channels, which are represented by the black arrows.

Figure 2: This flowchart shows the design of the IPv4 protocol packet dealer. Each box represents a goroutine, and each black arrow represents a channel. The IPv4 packet dealer reads packets from the output channel of the Ethernet layer and forwards these packets to the correct IP reader using channels. The packets are processed by the IP readers and are then forwarded to the packet dealer of the protocol above.
ARP
In order to send data to other network stacks on a local network, GoNet needs the MAC address of the target machine. The Address Resolution Protocol (ARP) is implemented to enable GoNet to obtain this information. ARP allows GoNet to get the MAC address from the destination computer’s target protocol address, such as the destination computer’s IP address [7]. The GoNet implementation of ARP creates a goroutine for each ARP request. This allows each ARP request goroutine to block until either the main ARP packet dealer notifies it of a response or the request times out.
IPv4
The Internet Protocol Version 4 (IPv4) design is illustrated in Figure 2. As explained earlier, it uses a packet dealer structure. It also includes multiple IP readers, and fragment assemblers when needed. Communication between all of these components is accomplished through the channels represented by the black arrows in the figure.

IPv4 fragmentation is utilized when an IP payload would make the IP packet larger than the maximum transmission unit (MTU). The IP segment is split up into multiple fragments, each containing data needed for reassembly. When the fragments arrive at the destination, they must be reassembled into the original IP segment [8]. GoNet’s fragment assemblers illustrate the advantages of CSP style.

Each fragment assembler encapsulates both the process of reassembling a fragmented IP segment and the associated data. This approach reduces the complexity of the code, because each fragmented IP packet has its own designated assembler, in the style of CSP. This is contrary to traditional fragment assembly methods, where a global data structure manages the fragment assembly for all packets. This localization of data is made more feasible by the lightweight design of goroutines and the garbage collected Go language, which greatly reduces the likelihood of memory leaks.
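The per-segment assembler described above can be sketched as a goroutine that owns its own reassembly buffer. The `fragment` type and `fragAssembler` function here are simplified assumptions; a real assembler would also handle overlapping fragments, timeouts, and the actual IP header fields.

```go
package main

import "fmt"

// fragment is a simplified stand-in for one IP fragment: its byte offset
// within the original segment, its data, and whether it is the last one.
type fragment struct {
	offset int
	data   []byte
	last   bool
}

// fragAssembler collects the fragments of a single IP segment. The
// reassembly buffer is local to this goroutine, so no global fragment
// table or locking is needed. The finished payload is sent on done.
func fragAssembler(frags <-chan fragment, done chan<- []byte) {
	buf := make(map[int][]byte) // offset -> data, owned by this goroutine
	total := -1
	got := 0
	for f := range frags {
		buf[f.offset] = f.data
		got += len(f.data)
		if f.last {
			total = f.offset + len(f.data) // last fragment fixes the total size
		}
		if total >= 0 && got == total {
			out := make([]byte, total)
			for off, d := range buf {
				copy(out[off:], d)
			}
			done <- out
			return // the goroutine and its buffer are garbage collected
		}
	}
}

func main() {
	frags := make(chan fragment)
	done := make(chan []byte, 1)
	go fragAssembler(frags, done)

	// Fragments may arrive out of order.
	frags <- fragment{offset: 5, data: []byte(" OK"), last: true}
	frags <- fragment{offset: 0, data: []byte("GoNet")}
	fmt.Println(string(<-done)) // GoNet OK
}
```

When the assembler returns, both the goroutine and its buffer simply disappear, which is the memory-leak resistance the paragraph above refers to.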
ICMP and Ping
GoNet implements the ping portion of the Internet Control Message Protocol (ICMP). The ICMP implementation follows the normal packet dealer structure. The ping implementation also has its own packet dealer, which handles all of the ICMP ping packets sent first to the ICMP packet dealer. The ping packet dealer forwards received ping requests to a special set of goroutines that reply to ping requests. If GoNet has sent ping requests, then the ping packet dealer forwards the replies to a dedicated goroutine that is started for each of the mentioned ping requests.
UDP
The User Datagram Protocol (UDP) is a connectionless protocol. Because UDP is a relatively simple protocol, the GoNet implementation just uses a basic packet dealer to forward packets to their associated UDP readers.
TCP
The Transmission Control Protocol (TCP) is a connection-oriented transport layer protocol that guarantees in-order delivery of data. Because TCP is connection-oriented, it utilizes a server and a client to initialize a connection. Once a connection is established, it is managed by a Transmission Control Block (TCB) [9].

The GoNet implementation of TCP uses the standard packet dealer structures to manage source and destination ports. Each TCB utilizes two long-running goroutines. One processes incoming packets. The other waits for and sends data, as well as creates additional goroutines that manage the retransmission of single packets. Each of these two long-running goroutines represents half of the duplex TCP connection. Internally, it also uses channels to synchronize and manage all of the goroutines that are created. For example, the incoming packet processor goroutine uses channels to notify packet retransmission goroutines when an acknowledgment packet arrives.
GoNet’s performance was compared to that of tapip, a multi-threaded network stack written in C [10]. This comparison allows for the evaluation of the pros and cons of a network stack written in a higher level language with the CSP style. Both stacks implement similar protocols, operate in user space, and utilize a tap interface. This allows the performance of both stacks to be compared fairly. The testing was performed on an Ubuntu 14.04 machine with Linux 3.13.0, 16 GB of memory, and an Intel Xeon quad-core dual-socket processor.
The first performance metric that was evaluated was latency. To measure latency, the response times of 50 ping requests were averaged. The ping requests were sent from the Linux kernel that both stacks were running on. To determine the stacks’ performance under increased load, multiple pings were sent from the Linux kernel simultaneously. The test was run from 1 to 1000 concurrent ping “connections” to simulate possible loads that a network stack might endure. To ensure that the tests on the two stacks were run fairly, all other variables were held constant, including the number of ping requests each ping “connection” would send, the ICMP receive buffer size, the interval between the ping requests, and the ping request packet size.
The second performance metric that was evaluated was throughput. The throughput of a stack is the amount of data that it can send or receive in a given amount of time. The following process was used to measure the throughput of the stacks:

1. A TCP server was initialized.

2. A TCP client was initialized. The connection was made over the local network (localhost) to eliminate any overhead caused by the tap interface.

3. Four kilobytes (kB) of data were sent from the client to the server.

4. The total real time that the stack took to complete these procedures was measured. This time, along with the specific amount of data sent, was used to calculate throughput.

The stacks’ performances were measured as the number of clients increased, to test the comparative scalability of the stacks. The test was done with up to 100 concurrent clients.

A variety of precautions were taken to ensure that the throughput was measured precisely. For example, all comparable buffer sizes were set equal. In tapip, each client and server connection ran in its own thread; GoNet was similar, except it used goroutines instead of threads. Additional precautions were taken to ensure that each connection had completed before stack termination and that the payloads of each connection were transferred in their entirety.
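The steps above can be sketched for a single client using Go's standard `net` package. The 4 kB payload matches step 3; the loopback address and the `measure` helper are arbitrary choices for this sketch, not part of the paper's test harness.

```go
package main

import (
	"fmt"
	"io"
	"net"
	"time"
)

// measure starts a TCP server on localhost, connects one client, sends
// payload bytes, and returns the byte count and the elapsed real time.
func measure(payload int) (int, time.Duration) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		io.Copy(io.Discard, conn) // server side: drain the client's data
	}()

	start := time.Now()
	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	if _, err := conn.Write(make([]byte, payload)); err != nil {
		panic(err)
	}
	conn.Close()
	return payload, time.Since(start)
}

func main() {
	n, elapsed := measure(4 * 1024) // 4 kB, as in step 3 of the procedure
	mbit := float64(n) * 8 / 1e6 / elapsed.Seconds()
	fmt.Printf("sent %d bytes in %v (%.1f Mbit/s)\n", n, elapsed, mbit)
}
```

Throughput is then bytes sent times eight, divided by the elapsed seconds; the concurrent-client tests repeat this with one such client per goroutine.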
The code of GoNet was far simpler than that of its C stack counterpart. In addition, GoNet’s performance, in both latency and throughput, was actually better than that of tapip.
In terms of protocol operation, both GoNet and tapip were correct. This was determined by successfully testing both stacks against a Linux kernel TCP endpoint. However, tapip leaked memory during the test. This is because tapip stores packets in packet buffers, and these buffers are sometimes double freed or not freed at all. When tapip double frees memory, it either crashes or causes undefined behavior. When tapip does not free memory, the unfreed memory accumulates and hogs resources until the system eventually crashes. Go makes it easy to avoid these types of problems with its built-in garbage collection.
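The contrast with tapip's manual buffer management can be made concrete with a short sketch. In Go there is no free() to call twice or to forget: dropping the last reference lets the garbage collector reclaim the buffer. The `process` function below is a made-up example, not GoNet code.

```go
package main

import "fmt"

// process consumes packet buffers from a channel and returns the total
// number of bytes seen. Each buffer is simply dropped when the loop
// moves on; the garbage collector reclaims it. Neither a double free nor
// a missing free can be expressed here.
func process(pkts <-chan []byte) int {
	n := 0
	for pkt := range pkts {
		n += len(pkt)
		// pkt goes out of scope here; no explicit free call exists
	}
	return n
}

func main() {
	pkts := make(chan []byte, 2)
	pkts <- make([]byte, 64)
	pkts <- make([]byte, 64)
	close(pkts)
	fmt.Println(process(pkts)) // 128
}
```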
It is hard to quantitatively evaluate the merits of writing code in the Go language compared to the C language. The following code comparisons are used to illustrate some of the advantages of higher level languages. The code that is being compared is all part of the IP fragment reassembly process. The C code is on the top of each comparison segment, and the Go code is on the bottom.
Fragment Reassembly Initialization
The following code segments compare the steps that tapip and GoNet take to initialize a new fragment reassembler when a new fragmented IP segment arrives. GoNet creates a new goroutine for each packet that is being reassembled, while tapip uses a global structure to hold the data for all of the ongoing reassemblies.

    struct fragment *frag;
    frag = xmalloc(sizeof(*frag));
    list_add(&frag->frag_list, &frag_head);
    list_init(&frag->frag_pkb);
    return frag;

    ipr.fragBuf[bufID] = make(chan []byte, FRAG_ASSEM_BUF_SZ)
    quit := make(chan bool, 1)
    done := make(chan bool, 1)
    didQuit := make(chan bool, 1)
    go ipr.fragAssembler( /* ... */ )
    go ipr.killFragAssembler( /* ... */ )

Adding Fragments to a Reassembly Queue

These code comparisons show how the structure defined in the fragment reassembly initialization makes adding fragments to the processing queue easier in GoNet than in the C stack. This allows the goroutine that processes an IP segment in GoNet to simply forward the packet to its respective reassembler and move on to subsequent packets. This improves the modularity of GoNet’s code, as well as its readability and concurrency.

    int insert_frag( /* ... */ ) {
        /* additional fragment processing */
        list_add(&pkb->pk_list, pos);
        return 0;
    frag_drop:
        free_pkb(pkb);
        return -1;
    }

    ipr.fragBuf[bufID] <- b
Dealing With Completed Fragments
The following code segments underscore the advantages that CSP style and the Go language provide over current C stacks. Tapip has to deal with fragmented packets before it can move on to subsequent packets. This introduces a variety of problems. For example, it forces the C IP implementation to track the states of all of the ongoing fragment reassemblies at the same time. This encourages the use of possibly complicated global variables and structures and makes thread synchronization difficult. In contrast, GoNet spawns a separate fragment assembler goroutine for each new fragmented IP packet that it receives. Each goroutine is responsible for all of the separate fragments that make up the IP segment. After the fragment assembler finishes assembling the packet, it simply sends the reassembled segment back into processing. This process is completely independent of the main IP packet processing goroutines, and hence allows for concurrency and parallelism, as well as far more readable, understandable, and clean code.

    if (complete_frag(frag))
        pkb = reass_frag(frag);
    else
        pkb = NULL;
    return pkb;

    struct pkbuf *reass_frag(struct fragment *frag) {
        /* more processing */
        delete_frag(frag);
        return pkb;
    }

    ipr.incomingPackets <- append(fullPacketHdr, payload...)
    done <- true
Fragmentation Cleanup
Both stacks have to delete an entry from a data structure. The data structure tracks channels for Go and defragmentation structures for tapip. In addition, tapip has to explicitly free the memory allocated for each fragmented packet buffer, as well as the memory from the defragmentation structure itself.

    struct pkbuf *pkb;
    list_del(&frag->frag_list);
    while (!list_empty(&frag->frag_pkb)) {
        pkb = frag_head_pkb(frag);
        list_del(&pkb->pk_list);
        free_pkb(pkb);
    }
    free(frag);

    delete(ipr.fragBuf, bufID)
The latency and throughput of both the C stack and GoNet were measured and compared.
The trends of the latency test results can be seen in Figure 3. The drop rates of both stacks were negligible. With 1 ping, tapip outperformed GoNet by over three times, with a latency of 0.074 ms when compared to GoNet’s latency of 0.234 ms. However, with 1000 concurrent pings, GoNet outperformed tapip by almost five times, with a latency of 0.717 ms when compared to tapip’s latency of 3.279 ms. GoNet begins to outperform tapip when the number of concurrent connections becomes greater than about 600. GoNet’s latency grows linearly, while tapip’s latency appears to grow exponentially. GoNet’s latency trend is superior to tapip’s latency trend because, at low numbers of concurrent pings, the latencies of both stacks are small enough to be negligible, but at higher numbers of concurrent pings, the absolute difference in latency is much larger.

Figure 3: This graph displays the latency of both GoNet and tapip by the number of concurrent pings.

Based on these results, it can be inferred that tapip can process a small number of packets very fast, while it is slower with processing larger numbers of packets. This is likely because it is not as concurrent as GoNet. In contrast, GoNet takes a longer time to process each packet, but is mostly unaffected by increased load, likely because of the degree of concurrency within the implementations of each protocol. This can be seen in Figure 3, as tapip’s latency grows much faster than GoNet’s, even though it begins with much lower latency.

The sharp increase in the latency of tapip also supports this explanation of the results. After about 800 concurrent connections, tapip becomes unable to field each set of ping requests before the next set is sent by the concurrent ping connections, and hence a backlog of ping requests develops. This causes a delay in the response to all of the pings, resulting in a sharp growth in tapip’s latency.
Since tapip does not drop any packets, it is not possible that it is dropping packets because of a full buffer.

Figure 4: This graph displays the throughput of both GoNet and tapip by the number of concurrent TCP connections.

This sharp increase in tapip’s response times highlights the underlying problem with its architecture, and the architecture of many other network stacks as well: processing a single packet at all layers before moving on to a new packet is suboptimal, as it cannot scale or achieve parallelism effectively.
The results of the throughput test can be seen in Figure 4. With 1 concurrent connection, GoNet outperformed tapip, with a throughput of 7.3 Mbit/s when compared to tapip’s throughput of 4.6 Mbit/s. With 100 connections, GoNet outperformed tapip with a throughput of 284.9 Mbit/s when compared to tapip’s throughput of 195.0 Mbit/s. In addition, GoNet’s throughput increases at a faster rate than tapip’s throughput. This shows that GoNet can continue to scale for even larger numbers of connections, while tapip may not be able to handle such load.

These results make sense given the architecture of GoNet. All of the TCBs in tapip are managed by a single thread. In contrast, each TCB in GoNet has two goroutines managing it: one for each half of the duplex connection. In this way, GoNet is able to multiplex a large number of connections onto a limited number of cores more efficiently than tapip. Hence, it achieves far greater throughput for large numbers of connections. With small numbers of connections, GoNet is still slightly more efficient, as GoNet splits the work of the TCB among two goroutines, while tapip has one thread performing processing. GoNet outperforms tapip for all numbers of concurrent connections.
The operating system kernel is important for managing a computer system’s resources. Therefore, the kernel needs to be well designed in order to support the rest of the operating system properly. Modern kernels are written in lower-level languages such as C, which allow several classes of bugs to occur; writing a kernel in a higher level language can eliminate several of these. However, higher level languages have their own downsides. The network stack, a kernel subsystem, was built in Go to demonstrate the advantages of a kernel written in a higher-level language. GoNet and tapip both operate on the tap interface. Both network stacks also implement similar protocols such as Ethernet, IPv4, ARP, UDP, and TCP. The network stack that was built in Go, called GoNet, performs competitively against a similar network stack written in C, called tapip.

GoNet’s code was simpler than that of the C stack, as demonstrated in the IP fragment reassembler code comparison. GoNet, which was built with the CSP style, could simplify and modularize in a more effective way than the C stack. This also allowed for increased concurrency and parallelism and helped improve the performance of GoNet. In latency tests, GoNet achieved lower latency than tapip for numbers of connections greater than about 600. In throughput testing, its parallelism allowed it to outperform tapip for all numbers of concurrent TCP connections ranging from 1 to 100.

However, there are possible sources of error in the tests. For example, tapip is not a mainstream network stack, and may have room for optimization. Also, the Linux kernel might have scheduled each run of the test differently, which would lead to variation in the results. In addition, the latency test results for both GoNet and tapip could have been limited by the speed of the tap interface that the packets were sent and received on.
There also could be other external variables that are unaccounted for that could affect the results of either test.

There are also alternate explanations for the results of the tests. For latency, unforeseen uncontrolled variables may have caused tapip to slow for larger numbers of connections. In addition, tapip leaks memory by not freeing the memory before deleting references to it. For high numbers of concurrent ping connections in the latency test, the higher memory usage of tapip could increase the overhead of tracking allocated blocks of memory and slow the overall program. These possibilities are unlikely, as they would have caused a more gradual deterioration in tapip’s performance rather than the more sudden drop.

This experimentation shows that a kernel subsystem written in Go with CSP style can improve readability, modularity, concurrency, reliability, and stability without significantly affecting performance adversely. This shows that the Go language with CSP style is a viable alternative to the C language for kernel implementations.
This project can be expanded in many different directions. Possible directions:

• Support could be added for IPv6 in both the transport layer protocols and the network layer protocols [6]. This would simply make GoNet more applicable in different environments.

• A socket API could be built on top of the existing stack, as this would allow application layer protocols to be built on top of GoNet, extend the functionality of the current stack’s implementation, and make the stack POSIX compliant.

• The application layer protocols, which are protocols that run on top of UDP, TCP, and other transport layer protocols, could be implemented. Some possible protocols include Secure Shell (SSH), Telnet, the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Domain Name Service (DNS), and the Network Time Protocol (NTP). Implementing these protocols would allow GoNet to become more functional to end users, and hence it could become more ready for use as a user-space alternative to the system network stack.

• More detailed CPU and memory profiling could be done to find and remove any bottlenecks in GoNet. Also, race detection and memory analyzers could be used to find any additional problems in GoNet.

• Additional performance metrics could be developed in order to better understand the differences between the two stacks.

• Other kernel subsystems could be implemented, with the eventual goal of implementing the entire kernel in Go. Moving GoNet into kernel space would also allow for testing against a wider variety of network stacks, as it would become comparable to a wider variety of kernels. In addition, the other kernel subsystems could be compared in each subsystem’s proper metric, giving a more holistic view of the advantages and disadvantages of Go with CSP style.
References

[1] H. Sutter, “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software,” Dr. Dobb’s Journal, vol. 30, no. 3, Mar. 2005.

[2] Mirage tcp/ip, MirageOS, Sep. 2015. [Online]. Available: https://mirage.io/

[3] T. Wuff, Pycorn, Nov. 2012. [Online]. Available: https://github.com/tornewuff/pycorn

[4] C. A. R. Hoare, “Communicating Sequential Processes,” J. Davies, Ed. Prentice Hall International, Jun. 2004, p. 207.

[5] R. Hudson, “Go GC: Latency Problem Solved,” Google, Jul. 2015. [Online]. Available: https://talks.golang.org/2015/go-gc.pdf

[6] O. Jacobsen and D. Lynch, A Glossary of Networking Terms, RFC 1208, Internet Engineering Task Force, 1991.

[7] D. C. Plummer, An Ethernet Address Resolution Protocol, RFC 826 (Internet Standard), Internet Engineering Task Force, Nov. 1982.

[8] J. Postel, Internet Protocol, RFC 791 (Internet Standard), Internet Engineering Task Force, Sep. 1981.

[9] J. Postel, Transmission Control Protocol, RFC 793 (Internet Standard), Internet Engineering Task Force, Sep. 1981.

[10] X. Wang, Tapip, Nov. 2013. [Online]. Available: https://github.com/chobits/tapip