Introduction
This article explains how to apply a small set of Linux TCP tuning parameters that can improve throughput and stability on some network paths—especially on higher-latency links or when transferring large amounts of data.
These tweaks are already included in newer Webdock image rebuilds, so you only need this guide if your KVM server was created before September 6th, 2024.
Important note: if you see very low throughput (for example ~30 Mbit/s), the cause can be unrelated to TCP tuning. In one real case, the bottleneck was traffic being routed through a WireGuard tunnel. This guide helps TCP performance, but it won’t overcome limits caused by tunneling, MTU issues, packet loss, CPU bottlenecks, or rate limiting elsewhere in the network path.
When Should You Apply This
Apply this guide if all of the following are true:
- Your server was created before September 6th, 2024
- You are experiencing lower-than-expected TCP throughput on normal (non-tunneled) traffic
- You want to enable BBR congestion control and allow larger socket buffers
If your traffic goes through a VPN/tunnel (WireGuard/OpenVPN/IPsec), test performance outside the tunnel as well, because the tunnel path/MTU/CPU can dominate performance.
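For example, one quick way to compare is to run the same iperf3 test against the server's tunnel address and against its public address. Both IPs below are placeholders for your own setup, and iperf3 must already be listening on the server (iperf3 -s):

# Through the tunnel (WireGuard peer address, placeholder)
iperf3 -c 10.0.0.1

# Outside the tunnel (server's public IP, placeholder)
iperf3 -c 203.0.113.10

If the non-tunneled test is dramatically faster, the tunnel, not TCP tuning, is the limiting factor.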
Applying the Tweaks
Note: As mentioned in the introduction, these changes only apply to KVM servers created before September 6th, 2024. First, log in to your server as a sudo user and switch to root.
$ sudo su -
Create a new config file in the sysctl directory.
# touch /etc/sysctl.d/99-webdock-tcp-tuning.conf
Now run the command below to write the tuning parameters to the file.
cat > /etc/sysctl.d/99-webdock-tcp-tuning.conf << 'EOF'
# Webdock TCP tuning (for older images / high-throughput paths)

# Allow large socket buffers (must be >= tcp_rmem/tcp_wmem max)
net.core.rmem_max = 805306368
net.core.wmem_max = 805306368

# TCP autotuning limits: min default max (bytes)
net.ipv4.tcp_rmem = 4096 122880 805306368
net.ipv4.tcp_wmem = 4096 122880 805306368

# BBR + fq pacing (helps on many real-world WAN paths)
net.ipv4.tcp_congestion_control = bbr
net.core.default_qdisc = fq

# Avoid persisting old path metrics that may reduce throughput
net.ipv4.tcp_no_metrics_save = 1
EOF
Now that you have applied the tweaks, reboot the server for the changes to take effect.
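If you prefer not to reboot right away, on most distributions you can also ask sysctl to re-read all configuration files, which loads everything under /etc/sysctl.d, including the new file:

# sysctl --system

A reboot remains the simplest way to be certain the settings are in effect for all new connections.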
Verify the configuration
Check whether the values are applied by running the following as the root user:
sysctl net.core.rmem_max net.core.wmem_max \
  net.ipv4.tcp_rmem net.ipv4.tcp_wmem \
  net.ipv4.tcp_congestion_control net.core.default_qdisc \
  net.ipv4.tcp_no_metrics_save
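If the configuration is active, the output should look roughly like this (whitespace and ordering can vary slightly between kernel versions):

net.core.rmem_max = 805306368
net.core.wmem_max = 805306368
net.ipv4.tcp_rmem = 4096 122880 805306368
net.ipv4.tcp_wmem = 4096 122880 805306368
net.ipv4.tcp_congestion_control = bbr
net.core.default_qdisc = fq
net.ipv4.tcp_no_metrics_save = 1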
Confirm that BBR is available
sysctl net.ipv4.tcp_available_congestion_control
You should see bbr in the output. If it isn’t listed, the congestion control setting can’t be applied as intended.
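On most modern kernels BBR ships as the tcp_bbr module. If it is missing from the list, you can usually load it by hand and make it persistent across reboots; the example below assumes a systemd-based distribution that reads /etc/modules-load.d:

# modprobe tcp_bbr
# echo "tcp_bbr" > /etc/modules-load.d/bbr.conf

After loading the module, re-run the sysctl check above to confirm bbr now appears.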
Troubleshooting notes (if speed is still low)
If you still see unexpectedly low TCP throughput after applying the above:
- Check if traffic is going through a tunnel (e.g., WireGuard). Tunnels can reduce throughput due to:
  - MTU/fragmentation issues (common)
  - Single-core encryption bottlenecks
  - Suboptimal routing paths
- Check for packet loss or an unstable path (even small loss can severely reduce TCP speed)
- Check MTU (especially if you use VXLAN/GRE/WireGuard); see the probe example after this list
- Ensure testing is done with a proper tool (e.g., iperf3) and ideally compare:
  - inside the tunnel vs outside the tunnel
  - single stream vs multiple streams (see the examples after this list)
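As a rough illustration of the last two checks, the commands below probe the path MTU and compare a single TCP stream against parallel streams. The hostname is a placeholder for a machine you control, and iperf3 must be running in server mode (iperf3 -s) on the remote end:

# Probe the path MTU: 1472 bytes of ICMP payload + 28 bytes of headers = 1500.
# If this fails with "message too long", lower the size until it passes.
ping -M do -s 1472 test-server.example.com

# Single TCP stream vs. four parallel streams against your iperf3 server
iperf3 -c test-server.example.com
iperf3 -c test-server.example.com -P 4

If several parallel streams are much faster than one, the single-stream limit is usually latency, loss, or buffer related rather than raw link capacity.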
Conclusion
This guide showed how to apply persistent TCP tuning settings for older Webdock KVM servers (created before September 6th, 2024). These settings increase TCP buffer ceilings, enable BBR congestion control, and use the fq queue discipline to improve throughput and stability on many real-world network paths.
If you need help validating performance or diagnosing a low-throughput case (especially involving VPN tunnels like WireGuard), contact Webdock Support with details of your test method and whether traffic is tunneled.