M-Lab Testing Platform
M-Lab provides one of the largest collections of open Internet performance data. As a consortium of research, industry, and public interest partners, M-Lab is dedicated to providing an ecosystem for the open, verifiable measurement of global network performance. All of the data collected by M-Lab’s global measurement platform is made openly available, and all of the measurement tools hosted by M-Lab are open source.
The .CA Internet Performance Test (IPT) uses vendor-neutral test servers located at various Internet Exchange Points (IXPs) throughout Canada. Server nodes at the Canadian Internet Exchange Points in Toronto, Montreal, and Calgary run the M-Lab platform, which offers a number of tests to measure network speed, latency, blocking, and throttling. The IPT uses the Network Diagnostic Test (NDT) to measure speed and provide diagnostic information about your configuration and network infrastructure.
Using the Data
As each user performs a test, their data is anonymized and aggregated into a large dataset that spans Canada, allowing researchers to understand the capabilities of Canada’s Internet infrastructure. As the reporting infrastructure grows, we will be able to overlay demographic and social data to help understand who benefits most from this technology. Data from all tests performed on the M-Lab NDT platform is stored in the M-Lab database and made available through Google BigQuery.
Once a sufficient amount of Internet Performance Test data has been collected, CIRA will provide an easy-to-use way to access it. In the meantime, you can access all the data M-Lab collects (including IPT data) directly:
- in raw format at https://www.measurementlab.net/data/docs/gcs/
- via an SQL interface (see https://www.measurementlab.net/data/docs/bq/quickstart/)
- visualized in Public Data Explorer
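As an illustration of the SQL interface, the sketch below builds a query against M-Lab's public BigQuery data. The table name `measurement-lab.ndt.unified_downloads` and the column paths follow M-Lab's BigQuery quickstart, but treat them as assumptions to verify against the current schema; actually running the query also requires the `google-cloud-bigquery` package and authenticated Google Cloud credentials, so the client calls are shown commented out.

```python
# Sketch: querying M-Lab NDT download results for Canada via BigQuery.
# Table and column names are assumptions based on M-Lab's published
# unified_downloads schema; verify them before relying on this.

QUERY = """
SELECT
  date,
  a.MeanThroughputMbps AS download_mbps,
  a.MinRTT AS min_rtt_ms
FROM `measurement-lab.ndt.unified_downloads`
WHERE client.Geo.CountryCode = 'CA'
  AND date BETWEEN '2023-01-01' AND '2023-01-07'
LIMIT 10
"""

# Requires credentials; uncomment to execute:
# from google.cloud import bigquery
# client = bigquery.Client()
# for row in client.query(QUERY).result():
#     print(row.date, row.download_mbps, row.min_rtt_ms)

print(QUERY.strip())
```

For one-off exploration, the same SQL can simply be pasted into the BigQuery web console.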
The Network Diagnostic Test (NDT) Results
M-Lab’s Network Diagnostic Test (NDT) connects your computer to one of our servers within Canadian IXPs to provide network configuration and performance testing. It communicates with a server to perform diagnostic functions and then displays the results to the user. For additional details on the NDT test itself, refer to the links below:
- For an explanation of the protocol itself, see Description of the NDT Protocol
- To learn how configuration information is gathered and what details NDT can provide, see Description of the NDT test methodology
- If errors occur, check for the cause in NDT Common Warnings and Errors
Detailed Results
Result values are comma-separated, with no spaces. The following results are stored:
Variable | Description |
--- | --- |
ClientToServerSpeed | Measured throughput speed from client to server (value in Mbit/s). |
PacketLoss | Percentage of packets that had to be resent due to transmission error. |
ServerToClientSpeed | Measured throughput speed from server to client (value in Mbit/s). |
TCPInfo | The TCPInfo record provides results from the TCP_INFO netlink socket. These are the same values returned to clients at the end of the download (S2C) measurement. |
TCPInfo.AdvMSS | Advertised MSS. |
TCPInfo.AppLimited | Flag indicating that rate measurements reflect non-network bottlenecks. Note that even very short application stalls invalidate max_BW measurements. |
TCPInfo.ATO | Delayed ACK Timeout. Quantized to system jiffies. |
TCPInfo.Backoff | Exponential timeout backoff counter. Increment on RTO, reset on successful RTT measurements. |
TCPInfo.BusyTime | Time with outstanding (unacknowledged) data. Time when snd.una is not equal to snd.next. |
TCPInfo.BytesAcked | The number of data bytes for which cumulative acknowledgments have been received. |
TCPInfo.BytesReceived | The number of data bytes that have been received. |
TCPInfo.BytesRetrans | Bytes retransmitted. May include headers and new data carried with a retransmission (for thin flows). |
TCPInfo.BytesSent | Payload bytes sent (excludes headers, includes retransmissions). |
TCPInfo.CAState | Loss recovery state machine. For traditional loss based congestion control algorithms, CAState is also used to control window adjustments. |
TCPInfo.DataSegsIn | Input segments carrying data (len>0). |
TCPInfo.DataSegsOut | Transmitted segments carrying data (len>0). |
TCPInfo.Delivered | Data segments delivered to the receiver including retransmits. As reported by returning ACKs, used by ECN. |
TCPInfo.DeliveredCE | ECE marked data segments delivered to the receiver including retransmits. As reported by returning ACKs, used by ECN. |
TCPInfo.DeliveryRate | Observed Maximum Delivery Rate. |
TCPInfo.DSackDups | Duplicate segments reported by DSACK. Not reported by some operating systems. |
TCPInfo.ElapsedTime | The duration of the measurement as measured by the M-Lab server in milliseconds. |
TCPInfo.LastAckSent | Time since last ACK was sent (not implemented). Present in TCP_INFO but not elsewhere in the kernel. |
TCPInfo.LastDataRecv | Time since last data segment was received. Quantized to jiffies. |
TCPInfo.LastDataSent | Time since last data segment was sent. Quantized to jiffies. |
TCPInfo.Lost | Scoreboard segments marked lost by loss detection heuristics. Accounting for the Pipe algorithm. |
TCPInfo.MaxPacingRate | Settable pacing rate clamp. Set with setsockopt() using the SO_MAX_PACING_RATE option. |
TCPInfo.MinRTT | Minimum Round Trip Time. From an older, pre-BBR algorithm. |
TCPInfo.NotsentBytes | Number of bytes queued in the send buffer that have not been sent. |
TCPInfo.Options | Bit-encoded SYN options and other negotiations: TIMESTAMPS 0x1; SACK 0x2; WSCALE 0x4; ECN 0x8 (was negotiated); ECN_SEEN (at least one ECT seen); SYN_DATA (SYN-ACK acknowledged data in SYN sent or rcvd). |
TCPInfo.PacingRate | Current Pacing Rate, nominally updated by congestion control. |
TCPInfo.PMTU | Maximum IP Transmission Unit for this path. |
TCPInfo.Probes | Consecutive zero window probes that have gone unanswered. |
TCPInfo.RcvMSS | Maximum observed segment size from the remote host. Used to trigger delayed ACKs. |
TCPInfo.RcvRTT | Receiver Side RTT estimate. |
TCPInfo.RcvSpace | Space reserved for the receive queue. Typically updated by receiver side auto-tuning. |
TCPInfo.RcvSsThresh | Current Window Clamp. Receiver algorithm to avoid allocating excessive receive buffers. |
TCPInfo.Reordering | Maximum observed reordering distance. |
TCPInfo.ReordSeen | Received ACKs that were out of order. Estimates reordering on the return path. |
TCPInfo.Retrans | Scoreboard segments marked retransmitted. Accounting for the Pipe algorithm. |
TCPInfo.Retransmits | Number of timeouts (RTO based retransmissions) at this sequence. Reset to zero on forward progress. |
TCPInfo.RTO | Retransmission Timeout. Quantized to system jiffies. |
TCPInfo.RTT | Smoothed Round Trip Time (RTT). The Linux implementation differs from the standard. |
TCPInfo.RTTVar | RTT variance. The Linux implementation differs from the standard. |
TCPInfo.RWndLimited | Time spent waiting for receiver window. |
TCPInfo.Sacked | Scoreboard segments marked SACKED by SACK blocks. Accounting for the Pipe algorithm. |
TCPInfo.SegsIn | The number of segments received. Includes data and pure ACKs. |
TCPInfo.SegsOut | The number of segments transmitted. Includes data and pure ACKs. |
TCPInfo.SndBufLimited | Time spent waiting for sender buffer space. This only includes time when TCP transmissions are starved for data because the send buffer is full and cannot be grown. |
TCPInfo.SndCwnd | Congestion Window. Value controlled by the selected congestion control algorithm. |
TCPInfo.SndMSS | Current Maximum Segment Size. Note that this can be smaller than the negotiated MSS for various reasons. |
TCPInfo.SndSsThresh | Slow Start Threshold. Value controlled by the selected congestion control algorithm. |
TCPInfo.State | TCP state is nominally 1 (Established). Other values reflect transient states having incomplete rows. |
TCPInfo.TotalRetrans | Total number of segments containing retransmitted data. |
TCPInfo.Unacked | Number of segments between snd.nxt and snd.una. Accounting for the Pipe algorithm. |
TCPInfo.WScale | BUG Conflation of SndWScale and RcvWScale. See github.com/m-lab/etl/issues/790 |
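Since stored result values are comma-separated without spaces, a result line can be parsed by position. The sketch below uses a hypothetical three-field ordering (ClientToServerSpeed, PacketLoss, ServerToClientSpeed) for illustration only, not the actual export layout, and also shows how the TCPInfo.Options bitmask documented above can be decoded.

```python
# Sketch: parsing a comma-separated NDT result line and decoding the
# TCPInfo.Options bitmask. The three-field order used here is a
# hypothetical example, not the documented export format.

def parse_result_line(line):
    """Split a comma-separated result line (no spaces) into named floats."""
    fields = ("ClientToServerSpeed", "PacketLoss", "ServerToClientSpeed")
    return dict(zip(fields, (float(v) for v in line.split(","))))

# SYN option bit flags, per the TCPInfo.Options description above.
OPTION_FLAGS = {
    0x1: "TIMESTAMPS",
    0x2: "SACK",
    0x4: "WSCALE",
    0x8: "ECN",
}

def decode_options(options):
    """Return the names of the option bits set in a TCPInfo.Options value."""
    return [name for bit, name in sorted(OPTION_FLAGS.items()) if options & bit]

print(parse_result_line("94.37,0.02,87.51"))
print(decode_options(0x7))  # ['TIMESTAMPS', 'SACK', 'WSCALE']
```

The same bitmask approach extends to ECN_SEEN and SYN_DATA once their bit positions are confirmed for the kernel version in use.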