
Optimize Connection Buffer Size



The Optimize connection buffer size option enables optimization of the socket buffer size, which can greatly improve transfer speed. Disable it only when experiencing problems. The option is not available with the WebDAV protocol.


. 2019-12-19 11:30:20.325 Opening remote file.
> 2019-12-19 11:30:20.325 Type: SSH_FXP_OPEN, Size: 117, Number: 515
> 2019-12-19 11:30:22.341 Type: SSH_FXP_WRITE, Size: 32764, Number: 1030
. 2019-12-19 11:30:45.810 Waiting for dispatching send buffer timed out, asking user what to do.
. 2019-12-19 11:30:45.825 Asking user:
. 2019-12-19 11:30:45.825 Host is not communicating for 15 seconds.
. 2019-12-19 11:30:45.825
. 2019-12-19 11:30:45.825 Wait for another 15 seconds? ()







Once again, this only manifests itself when transferring large files to and from the gateway itself with SCP; it does not happen with WinSCP transfers that are merely transiting the gateway. Apparently this is due to an upgraded version of SSHD on the gateway that does not care one bit for WinSCP's buffer size optimization strategy; I am not sure, but I suspect this SSHD upgrade is related to the new Gaia 3.10 kernel that is now mandatory in R80.40.


Related entries from the WinSCP change log:

Prototype of .NET assembly built around WinSCP scripting interface. 147
SSL core upgraded to OpenSSL 1.0.0g.
SFTP status packets with missing language tag are accepted. 770
Disabling session option Optimize connection buffer size disables unlimited SSH window to overcome bugs in some older versions of OpenSSH. 635
When Optimize connection buffer size is enabled, the FTP socket internal buffer size is also increased. Thanks to tteras. 787
Added workaround for Chokes on SSH-2 ignore messages SSH server bug. 577


Network data compression reduces the size of the session data unit (SDU) transmitted over a data connection. Reducing the size of data reduces the time required to transmit a SQL query and result across the network. In addition, compressed data uses less bandwidth which allows transmission of larger data in less time. The data compression process is transparent to the application layer.


Under typical database configuration, Oracle Net encapsulates data into buffers the size of the SDU before sending the data across the network. Oracle Net sends each buffer when it is filled, flushed, or when an application tries to read data. Adjusting the size of the SDU buffers relative to the amount of data provided to Oracle Net to send at any one time can improve performance, network utilization, and memory consumption. When large amounts of data are being transmitted, increasing the SDU size can improve performance and network throughput. SDU size can be adjusted lower or higher to achieve higher throughput for a specific deployment.


The amount of data provided to Oracle Net to send at any one time is referred to as the message size. Oracle Net assumes by default that the message size will normally vary between 0 and 8192 bytes, and infrequently, be larger than 8192 bytes. If this assumption is true, then most of the time, the data is sent using one SDU buffer.
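For example, the SDU can be raised on the client side directly in the connect descriptor. The sketch below is only an illustration: it assumes the python-oracledb driver, and the host, service name, credentials, and 65535-byte SDU value are placeholders; the server must also allow the requested size in its own configuration.

# A minimal sketch, assuming python-oracledb; connection details and the
# SDU value are placeholders, not recommendations.
import oracledb

dsn = """(DESCRIPTION=
            (SDU=65535)
            (ADDRESS=(PROTOCOL=TCP)(HOST=dbhost.example.com)(PORT=1521))
            (CONNECT_DATA=(SERVICE_NAME=orclpdb1)))"""

connection = oracledb.connect(user="scott", password="tiger", dsn=dsn)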


Reliable network protocols, such as TCP/IP, buffer data into send and receive buffers while sending and receiving to or from lower and upper layer protocols. The sizes of these buffers affect network performance by influencing flow control decisions.
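At the operating-system level these are the familiar SO_SNDBUF and SO_RCVBUF socket options. The short sketch below is generic Python rather than anything Oracle-specific, and the 4 MB request is an arbitrary example that the kernel may clamp to its own limits.

# Generic illustration of requesting larger TCP buffers from the OS.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)

# Read back what was actually granted; Linux, for instance, doubles the
# request and caps it at net.core.wmem_max / net.core.rmem_max.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))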


The RECV_BUF_SIZE and SEND_BUF_SIZE parameters specify sizes of socket buffers associated with an Oracle Net connection. To ensure the continuous flow of data and better utilization of network bandwidth, specify the I/O buffer space limit for receive and send operations of sessions with the RECV_BUF_SIZE and SEND_BUF_SIZE parameters. The RECV_BUF_SIZE and SEND_BUF_SIZE parameter values do not have to match, but should be set according to your environment.


For best performance, the size of the send and receive buffers should be set large enough to hold all the data that may be sent concurrently on the network connection. For optimal network performance, these buffers should be set to at least the bandwidth-delay product.
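As a worked example with assumed figures (a 100 Mbit/s path and a 40 ms round-trip time, neither of which comes from the text), the bandwidth-delay product is roughly 488 KiB:

# Bandwidth-delay product with assumed example figures.
bandwidth_bits_per_s = 100_000_000   # 100 Mbit/s (assumed)
rtt_s = 0.040                        # 40 ms round-trip time (assumed)

bdp_bytes = bandwidth_bits_per_s / 8 * rtt_s
print(f"Bandwidth-delay product: {bdp_bytes / 1024:.0f} KiB")
# Prints roughly 488 KiB; buffers smaller than this cannot keep the link full.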


It is important to consider the total number of concurrent connections that your system must support and the available memory resources. The total amount of memory consumed by these connections depends on the number of concurrent connections and the size of their respective buffers.
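A quick back-of-the-envelope check makes the trade-off concrete; the connection count and per-connection buffer sizes below are assumptions, not recommendations:

# Rough total memory consumed by per-connection buffers (assumed figures).
connections = 2000
send_buf_bytes = 512 * 1024
recv_buf_bytes = 512 * 1024

total_bytes = connections * (send_buf_bytes + recv_buf_bytes)
print(f"Buffer memory: {total_bytes / 2**30:.1f} GiB")   # about 2.0 GiB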


Because the database server writes data to clients, setting the SEND_BUF_SIZE parameter on the server side is typically adequate. If the database server is receiving large requests, then also set the RECV_BUF_SIZE parameter. To configure the database server, set the buffer space size in the listener.ora and sqlnet.ora files.
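On the client side, the same parameters can also be placed in the connect descriptor. The sketch below again assumes python-oracledb, with placeholder connection details and arbitrary 1 MB buffer values:

# Client-side sketch; host, service, credentials, and sizes are placeholders.
import oracledb

dsn = """(DESCRIPTION=
            (SEND_BUF_SIZE=1048576)
            (RECV_BUF_SIZE=1048576)
            (ADDRESS=(PROTOCOL=TCP)(HOST=dbhost.example.com)(PORT=1521))
            (CONNECT_DATA=(SERVICE_NAME=orclpdb1)))"""

connection = oracledb.connect(user="scott", password="tiger", dsn=dsn)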


The TCP window size can always be adapted based on the resources available to the process involved and the TCP algorithm in use. Window scaling lets a connection go well beyond the 65,535-byte window size defined in the original TCP specification.


I have a Linux box that I use as a file server. I have a monthly cron job that tars up the contents of the data drive and then copies it via scp to another machine for safe keeping. The resulting tarball is around 300GB in size, and it normally takes about a day and a half to complete the copy (over an 802.11g Wi-Fi connection).


I also encountered slow SCP performance while copying files: 150-300 KiB/s instead of 10 MiB/s. I noticed as well that one CPU core on the target server was 100% busy while I was copying a file. I googled a bit and found a proposal: disable "Optimize connection buffer size" in the SCP connection options. It helped. After disabling this option, the speed increased to the expected network level and the CPU load on the server dropped significantly.


Came here because I was having the same issue with slow speeds using WinSCP after upgrading to 6.6.1. Jerky_san's advice "On the connect prompt click advanced, connections, and deselect optimize connection buffer size" fixed it for me too.


In earlier versions of Windows, the Windows network stack used a fixed-size receive window (65,535 bytes) that limited the overall potential throughput for connections. The total achievable throughput of TCP connections could limit network usage scenarios. TCP receive window autotuning enables these scenarios to fully use the network.


WAAS TFO, which uses an optimized implementation of TCP based on Binary Increase Congestion TCP (BIC-TCP), also uses memory for the purposes of guaranteed delivery and pipelining. TFO also leverages other TCP optimizations, including window scaling, selective acknowledgment, and large initial windows, to improve TCP performance. However, none of these optimizations will improve performance if the buffer capacity is simply too small to fill the available network link. In such cases, the buffer capacity may need to be increased to accommodate the WAN link separating two or more locations.


Increasing the memory allocated to TCP connections, which in the case of WAEs is called adjusting TFO buffers, allows more data to be in flight between two nodes at a given time. This is also referred to as "keeping the pipe full", because it allows communicating nodes to fully leverage the available bandwidth of the network. When coupled with other optimizations, such as DRE or PLZ, the performance improvement can be exponentially higher, as "keeping the pipe full" becomes "keeping the pipe full of compressed data". Consider a scenario where a T3 link connects a campus to a remote data center over a very long distance: an increase to TCP memory (adjusting TFO buffers) may allow for near line-speed utilization of this link.
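To put rough numbers on that scenario, using the nominal T3 payload rate and an assumed 100 ms round-trip time (the text does not give one):

# Buffer needed to keep a long-distance T3 full; the RTT is an assumption.
t3_bits_per_s = 44_736_000   # nominal T3 rate
rtt_s = 0.100                # assumed round-trip time

bdp_bytes = t3_bits_per_s / 8 * rtt_s
print(f"Per-connection buffer to fill the T3: ~{bdp_bytes / 1024:.0f} KiB")
# Roughly 546 KiB, far beyond a classic 64 KiB TCP window.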


You can also configure TFO buffer settings from the CM GUI by going to Devices > Device or Device Group > Acceleration > Acceleration TCP Settings. Changes to the buffer settings of a WAE take effect only for new connections that are established after the configuration change.


If a WAE encounters a situation where the system memory is oversubscribed based on the TFO buffer configuration and the number of connections to optimize, it will begin reassigning memory from existing connections to support new connections. In this way, the WAE can adapt to changes in load, even if it is configured to allocate large amounts of memory to connections. Additional TFO settings include TFO keepalives and MSS values. TFO keepalives, enabled by default, help the WAEs track connection status. MSS settings are used to adjust the MSS used on the original and optimized connections. It may be necessary to shrink the MSS values on the optimized connection (optimized-mss) if encapsulation or Virtual Private Network (VPN) is present in the network between the WAEs to ensure that fragmentation is not encountered, which can significantly impact performance.
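As a rough illustration of why optimized-mss may need to shrink, here is the arithmetic for one common case; the IPsec overhead figure is an assumption and varies with the ciphers and encapsulation in use:

# Illustrative MSS arithmetic; the IPsec overhead is an assumed figure.
mtu = 1500
ip_header = 20
tcp_header = 20
ipsec_overhead = 73          # assumed ESP tunnel-mode worst case

default_mss = mtu - ip_header - tcp_header                    # 1460
reduced_mss = mtu - ipsec_overhead - ip_header - tcp_header   # 1387
print(default_mss, reduced_mss)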


For multi-user applications, make use of connection pooling. Create the pool once during application initialization. Do not oversize the pool; see Connection Pooling. Use a session callback function to set session state; see Session CallBacks for Setting Pooled Connection State.
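A minimal pooling sketch, assuming python-oracledb (which matches the stmtcachesize reference further down); the connection details and the session-state statement are placeholders:

# Pool created once at start-up; the callback sets session state the first
# time each pooled session is handed out. Details below are placeholders.
import oracledb

def init_session(connection, requested_tag):
    with connection.cursor() as cursor:
        cursor.execute("ALTER SESSION SET TIME_ZONE = 'UTC'")

pool = oracledb.create_pool(
    user="scott", password="tiger", dsn="dbhost.example.com/orclpdb1",
    min=2, max=10, increment=1, session_callback=init_session,
)

with pool.acquire() as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT sysdate FROM dual")
        print(cursor.fetchone())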


Tune your network. For example, when inserting or retrieving a large number of rows (or for large data), or when using a slow network, then tune the Oracle Network Session Data Unit (SDU) and socket buffer sizes; see Oracle Net Services: Best Practices for Database Performance and High Availability.


When using Oracle Client 21 (or later), changing the cache size does not immediately affect connections previously acquired and currently in use. When those connections are subsequently released to the pool and re-acquired, they will then use the new value. If it is necessary to change the size on a connection because it is not being released to the pool, use Connection.stmtcachesize.
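For instance, with python-oracledb the per-connection cache can be adjusted like this; the connection details and the new size are placeholders:

# Adjusting the statement cache on a single connection (placeholder details).
import oracledb

connection = oracledb.connect(
    user="scott", password="tiger", dsn="dbhost.example.com/orclpdb1"
)
print(connection.stmtcachesize)   # the driver default, typically 20
connection.stmtcachesize = 40     # applies to this connection from now on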

