Researchers at the Massachusetts Institute of Technology have devised a new method for optimizing Transmission Control Protocol (TCP) that roughly doubles throughput on wired networks.
According to the researchers, their machine-learning system, called ‘Remy’, can generate its own TCP congestion-control algorithms, which they claim are unlike anything human developers have ever written. Even though TCP has been tweaked and refined over the last three decades or so, only a handful of the resulting schemes are considered good.
Current TCP algorithms, the researchers say, rely on only a few rules governing how a computer responds to performance problems (e.g. throttling the transmission rate when packets are dropped). MIT’s Remy system, however, is said to create algorithms with more than 150 rules. Rather than focusing on ‘predictable’ network situations, Remy concentrates on cases where minute changes in conditions can lead to much bigger changes in performance.
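To give a sense of what “a few rules” means in practice, the toy Python sketch below (not taken from any real TCP stack) shows the classic additive-increase/multiplicative-decrease (AIMD) behaviour that conventional TCP variants such as Reno build on: grow the congestion window while acknowledgements arrive, halve it when a packet is dropped. Remy’s machine-generated schemes replace this small set of hand-written rules with a far larger table.

```python
# Toy illustration of a conventional rule-based congestion controller (AIMD).
# This is a simplified sketch for explanation only, not production TCP code.

class AIMDController:
    """Additive-increase / multiplicative-decrease congestion window."""

    def __init__(self, initial_cwnd: float = 1.0, min_cwnd: float = 1.0):
        self.cwnd = initial_cwnd   # congestion window, in segments
        self.min_cwnd = min_cwnd

    def on_ack(self) -> None:
        # Rule 1: each acknowledged window grows the window additively.
        self.cwnd += 1.0 / self.cwnd

    def on_loss(self) -> None:
        # Rule 2: a dropped packet halves the window (throttle the sender).
        self.cwnd = max(self.min_cwnd, self.cwnd / 2.0)


if __name__ == "__main__":
    ctl = AIMDController()
    for event in ["ack", "ack", "ack", "loss", "ack"]:
        ctl.on_ack() if event == "ack" else ctl.on_loss()
        print(f"{event:>4}: cwnd = {ctl.cwnd:.2f}")
```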
It takes Remy roughly four to twelve hours to produce an algorithm tuned for the best performance on a given network. In tests of the Remy-produced algorithms, the researchers found that their optimized TCPs deliver twice the throughput of the most commonly used versions of TCP and cut delay by two-thirds.
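The researchers don’t spell out how that hours-long offline computation works, so the sketch below is purely illustrative: a generic hill-climbing search over a made-up “rule table”, with `simulate_throughput_delay` standing in as a placeholder for whatever network simulation and scoring a real system would use. It conveys only the general shape of an offline search, not Remy’s actual optimizer.

```python
import random

# Hypothetical sketch of an offline search for a congestion-control rule table.
# The objective function and rule representation here are placeholders.

def simulate_throughput_delay(rules: list[float]) -> float:
    """Placeholder objective: score a candidate rule table on simulated traffic."""
    # A real system would replay many network scenarios here; this fakes a score.
    return -sum((r - 0.5) ** 2 for r in rules) + random.gauss(0, 0.01)

def offline_search(num_rules: int = 150, iterations: int = 10_000) -> list[float]:
    """Simple hill climbing: perturb one rule at a time, keep improvements."""
    best = [random.random() for _ in range(num_rules)]
    best_score = simulate_throughput_delay(best)
    for _ in range(iterations):
        candidate = best[:]
        i = random.randrange(num_rules)
        candidate[i] = min(1.0, max(0.0, candidate[i] + random.gauss(0, 0.05)))
        score = simulate_throughput_delay(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

if __name__ == "__main__":
    table = offline_search()
    print(f"searched a {len(table)}-rule table (toy example)")
```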
Tests on a wireless network (Verizon’s mobile data network) revealed that Remy’s TCP produced twenty to thirty percent better throughput and twenty-five to forty percent lower delay.
Remy’s lab results sound fantastic, but the catch is that the system hasn’t yet been tested on the open Internet.