Fast And Secure Protocol



Abstract

Aspera's fasp transfer technology is innovative software that eliminates the fundamental bottlenecks of conventional file transfer technologies such as FTP, HTTP, and Windows CIFS, and dramatically speeds transfers over public and private IP networks.

The approach achieves perfect throughput efficiency, independent of path latency, and is robust to packet loss. In addition, users have extraordinary control over individual transfer rates and bandwidth sharing, and full visibility into bandwidth utilization. File transfer times can be guaranteed regardless of the distance between the endpoints or the dynamic conditions of the network, including transfers over satellite, wireless, and inherently unreliable long-distance international links. Complete security is built in, including secure endpoint authentication, on-the-fly data encryption, and integrity verification.

In this digital world, the fast and reliable movement of digital data, including massive data sets over global distances, is becoming vital to business success across virtually every industry. The Transmission Control Protocol (TCP) that has traditionally been the engine of this data movement, however, has inherent performance bottlenecks (fig. 1), especially on networks with high round-trip time and packet loss, and these are most pronounced on high-bandwidth networks. It is well understood that these inherent "soft" bottlenecks are caused by TCP's Additive Increase Multiplicative Decrease (AIMD) congestion avoidance algorithm, which slowly probes the available bandwidth of the network, increasing the transmission rate until packet loss is detected and then multiplicatively reducing it (typically halving the rate).

It is less understood, however, that other sources of packet loss, such as losses on the physical network media that are not associated with network congestion, reduce the transmission rate just as much. In fact, TCP AIMD itself creates losses and contributes equally to the bottleneck: in ramping up the transmission rate until loss occurs, AIMD inherently overdrives the available bandwidth. In some cases this self-induced loss actually surpasses loss from other causes (e.g., physical media) and turns a loss-free communication channel into an unreliable channel with an unpredictable loss ratio. The loss-based congestion control in TCP AIMD has a severe impact on throughput: every packet loss leads to a retransmission and stalls the delivery of data to the receiving application until the retransmission arrives. This can slow the performance of any network application, but it is fundamentally flawed for reliable transmission of large "bulk" data, for example file transfer, which does not require in-order (byte-stream) delivery.
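To make this behavior concrete, the following minimal Python sketch simulates a sender governed by pure AIMD against a single bottleneck link. The link capacity and additive step are hypothetical values chosen for illustration; the point is the sawtooth and the self-induced loss at the top of every probe cycle, not the specific numbers.

# Illustrative AIMD sawtooth; capacity and step are hypothetical.
LINK_CAPACITY_MBPS = 100.0   # assumed bottleneck bandwidth
ADDITIVE_STEP_MBPS = 1.0     # rate added per round trip while loss-free

def aimd_trace(rounds, rate=1.0):
    """Return the sending rate observed on each round trip."""
    trace = []
    for _ in range(rounds):
        trace.append(rate)
        if rate > LINK_CAPACITY_MBPS:
            # AIMD has overdriven the link: self-induced loss, then halving.
            rate /= 2.0
        else:
            # No loss seen yet: keep probing upward additively.
            rate += ADDITIVE_STEP_MBPS
    return trace

trace = aimd_trace(300)
print("mean rate: %.1f of %.0f Mbps"
      % (sum(trace) / len(trace), LINK_CAPACITY_MBPS))

The mean rate stays well below capacity because each cycle ends in loss followed by a steep back-off, which is exactly the underutilization described above.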

Shortcomings of TCP Transfer

Transferring large data sets (big files and big collections of files) via inexpensive IP networks, instead of shipping tapes, discs, or film, promises to fundamentally change the economics of content production, distribution, and management. Under ideal conditions, data can be moved quickly and inexpensively using ordinary file transfer methods such as FTP, HTTP, and Windows CIFS copy. On real wide-area and high-speed network paths, however, the throughput of these methods collapses, using no more than a small fraction of the available capacity. This is a consequence of the design of TCP, the underlying protocol on which they all rely. New TCP stacks and network acceleration devices are marketed as remedies, but they fail to fully utilize many typical wide-area network paths. Consequently, conventional FTP, and even newer "acceleration" solutions, cannot provide the speed and predictability needed for global file transfers.

The TCP bottleneck in file transfer

The Transmission Control Protocol (TCP) that provides reliable data delivery for conventional file transfer protocols has an inherent throughput bottleneck that becomes more severe with increased packet loss and latency. The throughput has a hard theoretical limit that depends only on the network round-trip time (RTT) and the packet loss ratio; adding more bandwidth does not change the effective throughput, so file transfer speeds do not improve and expensive bandwidth is underutilized. The bar graph shows the maximum throughput achievable under various packet loss and network latency conditions on an OC-3 (155 Mbps) link for file transfer technologies that use TCP (shown in yellow). Optical-carrier transmission rates are designated by the acronym OC hyphenated with an integer multiple of the basic rate unit of 51.84 Mbit/s, e.g., OC-48; a line classified as OC-n thus runs at n × 51.84 Mbit/s. OC-3 is a fiber-optic line with a transmission speed of up to 155.52 Mbit/s (payload: 148.608 Mbit/s; overhead: 6.912 Mbit/s, including path overhead), also known as STS-3 (electrical level) or STM-1 (SDH), depending on the system.
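The hard limit mentioned above is commonly approximated by the well-known Mathis et al. steady-state bound, rate ≤ (MSS/RTT) × (C/√p) with C ≈ √(3/2), where MSS is the maximum segment size, RTT the round-trip time, and p the packet loss ratio. A minimal Python sketch, assuming a standard 1460-byte MSS:

from math import sqrt

def tcp_limit_mbps(loss, rtt_s, mss_bytes=1460):
    """Mathis et al. bound on steady-state TCP throughput, in Mbps."""
    return (mss_bytes * 8 / rtt_s) * (sqrt(1.5) / sqrt(loss)) / 1e6

# Metro-area conditions from the bar graph: 0.1% loss, 10 ms RTT.
print("%.0f Mbps" % tcp_limit_mbps(0.001, 0.010))   # ~45 Mbps

Note that the bound contains no bandwidth term at all, which is why adding capacity to the link leaves the transfer rate unchanged.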

Consequences

TCP file transfers are slow, and the bandwidth utilization of a single file transfer is poor. In local or campus area networks, where packet loss and latency are small but non-negligible (0.1% loss / 10 ms RTT), the maximum TCP throughput is about 50 Mbps. Typical file transfer rates are lower, 20-40 Mbps (with TCP stack tuning on the endpoints) on gigabit Ethernet. Because standard TCP halves its throughput in response to a single packet loss event, at high speeds even a low loss percentage significantly lowers TCP throughput. Even with an abundance of bandwidth, transfer times are disappointing and expensive bandwidth is underutilized. The bandwidth utilization problem compounds on wide area links, where increased network latency combines with packet loss.

A typical FTP transfer across the United States has a maximum theoretical limit of 1.7 megabits per second (Mbps), the maximum throughput of a single TCP stream at 90 ms latency and 1% loss, independent of link bandwidth. On typical intercontinental links or satellite networks, the effective file transfer throughput may be as low as 0.1% to 10% of the available bandwidth. On a typical global link (3% loss / 150 ms RTT), the maximum TCP throughput degrades to 500-600 kilobits per second, about 5% of a 10 Mbps link. Network engineers sometimes attempt to improve throughput by "tuning" the operating system parameters used by the TCP networking stack on the file transfer endpoints, or by applying a TCP acceleration device.
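Plugging these wide-area scenarios into the tcp_limit_mbps sketch above reproduces the quoted figures:

# Reusing tcp_limit_mbps from the earlier sketch.
print("US coast-to-coast (1%% loss, 90 ms RTT): %.1f Mbps"
      % tcp_limit_mbps(0.01, 0.090))    # ~1.6 Mbps
print("global link (3%% loss, 150 ms RTT): %.2f Mbps"
      % tcp_limit_mbps(0.03, 0.150))    # ~0.55 Mbps (550 kbps)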

While this technique boosts throughput on clean networks, the improvement vanishes when real packet loss, due to channel characteristics or network congestion, increases. TCP file transfers over difficult networks (with high packet loss or variable latency) are extremely slow and unreliable. TCP does not distinguish packet losses caused by network congestion from normal latency variations or from bit errors on physical channels such as satellite links and wireless LANs, and it severely self-throttles in response to all of them. FTP throughput under good satellite conditions is about 100 kbps, and it degrades by more than half during high-error periods such as rain fade. Large transfers can be extremely slow and may not complete at all. TCP file transfer rates and times are therefore unpredictable: as a window-based protocol, TCP can only determine its optimal rate through feedback from the network.

TCP overdrives the network until packets are dropped by intervening routers and, in the best case, oscillates around its optimal rate, causing instability in the network for file transfers and other applications. Over commodity Internet links, where traffic loads vary, file transfer rates may vary widely with network load. File transfers slow down and may exceed their allotted time window. TCP acceleration devices may improve throughput and smooth the transfer rate when links are clean, but they are also window-based and subject to unpredictable back-off.

Complete Security

The fasp protocol provides complete built-in security without compromising transfer speed. The security model, based entirely on open-standards cryptography, consists of secure authentication of the transfer endpoints using the standard Secure Shell (SSH), on-the-fly data encryption using strong cryptography (AES-128) for privacy of the transferred data, and an integrity verification per data block to safeguard against man-in-the-middle and anonymous UDP attacks. The transfer preserves the native file system access control attributes between all supported operating systems and is highly efficient: with encryption enabled, fasp achieves WAN file transfers of 40-80 Mbps on a laptop computer, 100-150 Mbps on a P4 or equivalent single-processor machine, and 200-400+ Mbps on dual-processor or dual-core workstations.

Secure endpoint authentication

Each transfer session begins with the transfer endpoints performing mutual authentication over a secure, encrypted channel using the standard Secure Shell (SSH). SSH authentication supports both interactive password login and public-key modes. Once SSH authentication has completed, the fasp transfer endpoints generate random cryptographic keys for bulk data encryption and exchange them over the secure SSH channel. These keys are never written to disk and are discarded at the end of the transfer session.
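The following sketch illustrates this session-setup pattern using the paramiko SSH library. The host name, account, and key path are hypothetical, and the sketch shows the general pattern (SSH authentication, then in-memory session keys exchanged over the encrypted channel) rather than Aspera's actual implementation:

import os
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()            # verify the server's host key
client.connect("transfer.example.com",    # hypothetical endpoint
               username="xfer",           # public-key mode shown here;
               key_filename=os.path.expanduser("~/.ssh/id_rsa"))  # password login also works

# Random per-session key for bulk data encryption: generated in memory,
# exchanged only over the encrypted SSH channel, never written to disk.
session_key = os.urandom(16)              # 128-bit AES key

channel = client.get_transport().open_session()
channel.sendall(session_key)
channel.close()
client.close()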

On-the-fly data encryption

Using the exchanged keys, each data block is encrypted on the fly before it goes on the wire. fasp uses a 128-bit AES cipher, re-initialized throughout the duration of the transfer using standard CFB (cipher feedback) mode with a unique, secret nonce (initialization vector) for each block. This per-block re-initialization protects against standard attacks based on sampling of encrypted data during long-running transfers.
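A minimal sketch of this per-block scheme using the pyca/cryptography library: AES-128 in CFB mode, re-initialized with a fresh initialization vector for every block. In this sketch the IV is carried with the block so the receiver can decrypt, and the session key is assumed to come from the SSH exchange above:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_block(session_key, plaintext):
    """Encrypt one data block with AES-128 CFB and a unique per-block IV."""
    iv = os.urandom(16)   # fresh initialization vector for every block
    encryptor = Cipher(algorithms.AES(session_key), modes.CFB(iv)).encryptor()
    return iv + encryptor.update(plaintext) + encryptor.finalize()

def decrypt_block(session_key, block):
    """Split off the IV and decrypt the remainder of the block."""
    iv, ciphertext = block[:16], block[16:]
    decryptor = Cipher(algorithms.AES(session_key), modes.CFB(iv)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()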

Integrity verification

fasp accumulates a cryptographic hashed checksum, also using 128-bit AES, for each datagram. The resulting message digest is appended to the secure datagram before it goes on the wire and is checked at the receiver to verify message integrity. This protects against man-in-the-middle and replay attacks, as well as against anonymous UDP denial-of-service attacks.
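The exact AES-based checksum construction is not specified here, so the sketch below stands in with AES-CMAC from the same library: a 16-byte tag is computed over a per-datagram sequence number plus the payload and appended before transmission, so that forged, corrupted, or replayed datagrams fail verification at the receiver:

from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import algorithms

def seal(key, seq, datagram):
    """Append an AES-CMAC tag over the sequence number and payload."""
    mac = CMAC(algorithms.AES(key))
    mac.update(seq.to_bytes(8, "big") + datagram)
    return datagram + mac.finalize()   # 16-byte tag goes on the wire

def check(key, seq, sealed):
    """Raise cryptography.exceptions.InvalidSignature on tampering or replay."""
    datagram, tag = sealed[:-16], sealed[-16:]
    mac = CMAC(algorithms.AES(key))
    mac.update(seq.to_bytes(8, "big") + datagram)
    mac.verify(tag)
    return datagram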

fasp vs. FTP on gigabit metropolitan and wide area networks

Conventional TCP file transfer technologies such as FTP dramatically reduce the data rate in response to any packet loss and cannot sustain long-term throughput at the capacity of high-speed links. For example, the maximum theoretical throughput for TCP-based file transfer under metropolitan area network conditions (0.1% packet loss, 10 ms RTT) is 50 megabits per second (Mbps), regardless of bandwidth; the effective FTP throughput is even less (22 Mbps). In contrast, fasp achieves 100% utilization of high-speed links with a single transfer stream.

Conclusion

The market need for a flexible, next-generation file transfer technology is pervasive and growing rapidly as file sizes increase, network capacities grow, and users increasingly exchange data and media. As an all-software platform deployable on any standard computing device, fasp™ is capable of filling this next-generation need, providing optimal data transfer in an application over any IP network, from consumer broadband to gigabit core. fasp eliminates the fundamental bottlenecks of TCP- and UDP-based file transfer technologies such as FTP and UDT, and dramatically speeds transfers over public and private IP networks. It removes the artificial bottlenecks caused by imperfect congestion control algorithms, packet losses, and the coupling between reliability and congestion control.
