Network Engineering Asked on January 7, 2021
I was reading a textbook which says:
Let’s begin our study of TCP timer management by considering how TCP estimates
the round-trip time between sender and receiver. This is accomplished as follows.
The sample RTT, denoted SampleRTT, for a segment is the amount of time between
when the segment is sent (that is, passed to IP) and when an acknowledgment for
the segment is received. Instead of measuring a SampleRTT for every transmitted segment, most TCP implementations take only one SampleRTT measurement at a time. That is, at any point in time, the SampleRTT is being estimated for only one of the transmitted but currently unacknowledged segments, leading to a new value of SampleRTT approximately once every RTT.
I’m a little bit confused here. The quoted text says TCP won’t measure a SampleRTT for every segment, but then it says a new value of SampleRTT arrives approximately once every RTT — which still sounds to me like TCP measures a SampleRTT for every segment to get an average RTT?
From the sender's perspective, segments within the send window are all "in flight" simultaneously. So, instead of trying to track each segment's RTT, just one segment is tracked at a time. Since it takes RTT to send a segment and receive ACK, one sample per RTT is taken that way.
If you tracked each segment's RTT, you'd get one sample per segment — that is, (window size / segment size) samples per RTT. That's more than you need, so it wastes memory and processing power.
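To make this concrete, here is a minimal sketch (in Python, with hypothetical names — real stacks do this inside the kernel) of timing just one unacknowledged segment at a time. It also skips retransmissions, per Karn's algorithm, since an ACK for a retransmitted segment is ambiguous:

```python
import time

class RttSampler:
    """Time one in-flight segment at a time, yielding roughly
    one SampleRTT per round trip (illustrative sketch only)."""

    def __init__(self):
        self.timed_seq = None   # sequence number currently being timed
        self.sent_at = None

    def on_send(self, seq, retransmission=False):
        # Start timing only if nothing is being timed, and never
        # time a retransmission (Karn's algorithm).
        if self.timed_seq is None and not retransmission:
            self.timed_seq = seq
            self.sent_at = time.monotonic()

    def on_ack(self, ack_seq):
        # An ACK covering the timed segment yields one SampleRTT.
        # However many segments are in flight, only this one is timed.
        if self.timed_seq is not None and ack_seq > self.timed_seq:
            sample = time.monotonic() - self.sent_at
            self.timed_seq = None
            return sample
        return None
```

With a full window in flight, `on_send` is called for every segment but only the first untimed one starts the clock, so the sampler produces one measurement per RTT rather than one per segment.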
As Jeff has pointed out in his answer, today's implementations commonly use the TCP timestamp option to simplify RTT measurement. Timestamping provides finer-grained information with less processing overhead. Do check out Jeff's links as they're well worth reading.
Correct answer by Zac67 on January 7, 2021
I suggest you start by reading RFC 1323 §3 RTTM: Round-Trip Time Measurement, which is a fantastic introduction to this problem, a great perspective on how long very smart people have worked on it, and how little has changed since 1992.
The Linux tcp_input.c source also contains a lot of useful commentary and links to a few newer academic papers on this topic.
If you check on your own workstation, using tcpdump or wireshark, you'll find that most TCP segments exchanged by your computer have a timestamp option present. This allows more frequent RTT measurement, providing better inputs to the smoothed RTT used to calculate the RTO, and with less complexity.
Without TCP timestamps, systems have to do what Zac67 described, with the associated problems and limitations discussed in the links above and, really, throughout the literature on this subject.
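However the samples are obtained — once per RTT classically, or per-ACK with timestamps — they feed the same smoothing machinery. A sketch of the SRTT/RTTVAR/RTO computation from RFC 6298 (the clock granularity `G` here is an assumed value for illustration):

```python
class RtoEstimator:
    """Smoothed RTT and RTO per RFC 6298. With the TCP timestamp
    option, nearly every ACK can yield a sample:
    SampleRTT = now - echoed TSval."""

    ALPHA, BETA = 1 / 8, 1 / 4   # gains from RFC 6298
    G = 0.001                    # assumed clock granularity, seconds

    def __init__(self):
        self.srtt = None
        self.rttvar = None
        self.rto = 1.0           # initial RTO before any sample

    def update(self, sample):
        if self.srtt is None:
            # First measurement: RFC 6298 section 2.2
            self.srtt = sample
            self.rttvar = sample / 2
        else:
            # Subsequent measurements: RTTVAR before SRTT
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - sample))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * sample
        # RTO = SRTT + max(G, 4 * RTTVAR), with a 1-second floor
        self.rto = max(1.0, self.srtt + max(self.G, 4 * self.rttvar))
        return self.rto
```

More frequent samples make SRTT track the path's actual round-trip time more closely, which is exactly why timestamp-based measurement gives better RTO inputs.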
Answered by Jeff Wheeler on January 7, 2021