Outline
- what is the Internet?
- network edge end systems, access networks, links
- network core packet switching, circuit switching, network structure
- delay, loss, throughput in networks
- protocol layers, service models
- networks under attack: security
- history
Delay, loss, throughput in networks
Delay
Transmission delay
The time taken to transmit a packet from the host onto the transmission medium is called transmission delay.
If the bandwidth is R bps and the packet size is L bits, then the transmission delay is L / R seconds.
This delay depends upon the following factors:
- If there are multiple active sessions, the delay becomes significant.
- Increasing bandwidth decreases transmission delay.
- The MAC protocol largely influences the delay if the link is shared among multiple devices.
- Sending and receiving a packet involves a context switch in the operating system, which takes a finite time.
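The L / R relationship described above can be sketched as a small helper (a minimal illustration; the function name and example values are chosen here, not taken from the notes):

```python
def transmission_delay(packet_size_bits: float, bandwidth_bps: float) -> float:
    """Time to push all bits of the packet onto the link: L / R."""
    return packet_size_bits / bandwidth_bps

# A 1500-byte packet on a 100 Mbps link
print(transmission_delay(1500 * 8, 100e6))  # 0.00012  (0.12 ms)
```

Note how increasing the bandwidth (the denominator) decreases the delay, matching the second factor listed above.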
Propagation delay
After the packet is transmitted to the transmission medium, it has to travel through the medium to reach the destination. Hence the time taken by the last bit of the packet to reach the destination is called propagation delay.
Propagation delay = distance / propagation speed.
Factors affecting propagation delay:
- Distance: the longer the medium, the more time it takes for the packet to reach the destination.
- Velocity: the higher the signal's propagation speed, the sooner the packet is received.
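The two factors above combine as distance / speed; a minimal sketch (the default speed of 2×10^8 m/s is a typical figure for signals in copper or fiber, assumed here rather than stated in the notes):

```python
def propagation_delay(distance_m: float, speed_mps: float = 2e8) -> float:
    """Time for a bit to traverse the medium: distance / propagation speed."""
    return distance_m / speed_mps

# A 3000 km fiber link
print(propagation_delay(3_000_000))  # 0.015  (15 ms)
```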
Queueing delay
Once the packet is received by the destination, it is not processed immediately. It has to wait in a queue called a buffer. The amount of time it waits in the queue before being processed is called queueing delay.
In general there is no closed-form formula for queueing delay; it varies from packet to packet and is usually characterized statistically, e.g., as an average.
This delay depends upon the following factors:
- If the queue is large, the queueing delay will be large; if the queue is empty, there will be little or no delay.
- If many packets arrive within a short interval, the queueing delay will be large.
- The fewer the servers/links, the greater the queueing delay.
Processing delay
The packet is then taken up for processing; the time this takes is called processing delay. It is the time required by intermediate routers to decide where to forward the packet, update the TTL, and perform header-checksum calculations.
There is no general formula for this delay either, since it depends on the speed of the processor, which varies from computer to computer.
Packet loss
- queue (aka buffer) preceding link has finite capacity
- packet arriving to full queue dropped (aka lost)
- lost packet may be retransmitted by previous node, by source end system, or not at all
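The drop-on-full behavior in the bullets above (often called drop-tail) can be sketched as (function and packet names are illustrative):

```python
def drop_tail(arriving, queue, capacity):
    """Enqueue arriving packets; a packet arriving to a full queue is
    dropped (aka lost) and may or may not be retransmitted upstream."""
    dropped = []
    for pkt in arriving:
        if len(queue) < capacity:
            queue.append(pkt)
        else:
            dropped.append(pkt)
    return dropped

q = []
lost = drop_tail(["p1", "p2", "p3", "p4"], q, capacity=3)
print(q, lost)  # ['p1', 'p2', 'p3'] ['p4']
```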
Throughput
Rate (bits/time unit) at which bits are transferred between sender/receiver.
Analogous to "speed" as defined in physics:
- instantaneous: rate at a given point in time
- average: rate over a longer period of time
Bottleneck link: the link on the end-end path that constrains end-end throughput.
So, how do we define throughput? Again, network throughput refers to how much data can be transferred from source to destination within a given timeframe. Throughput measures how many packets arrive at their destinations successfully. For the most part, throughput capacity is measured in bits per second, but it can also be measured in packets per second.
Packet loss, latency, and jitter are all related to slow throughput. Latency is the amount of time it takes a packet to travel from source to destination, and jitter is the variation in packet delay. Minimizing all these factors is critical to increasing throughput and data performance.
You can think of bandwidth as a tube and data throughput as sand. If you have a large tube, you can pour more sand through it at a faster rate. Conversely, if you try to put a lot of sand through a small tube, it will go very slowly.
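The bottleneck idea above reduces to a one-liner: the end-end throughput is the rate of the slowest link on the path. A minimal sketch (link rates are illustrative):

```python
def end_to_end_throughput(link_rates_bps):
    """End-end throughput is constrained by the slowest (bottleneck) link."""
    return min(link_rates_bps)

# server --10 Mbps--> core --1 Mbps--> client: the 1 Mbps link is the bottleneck
print(end_to_end_throughput([10e6, 1e6]))  # 1000000.0
```

Widening any non-bottleneck link (the bigger "tube" in the analogy) does not change the result; only the bottleneck rate matters.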