Network messages in Progress

One of the big factors I noticed while testing the move of users from shared-memory connections to remote client connections was the network messages. I had never paid close attention to how they are used or how big those messages (packets) are. On the test servers, after I moved the users to the frontends, the CPU on the database server didn't seem to drop at all. One thing I noticed was that system CPU was a lot higher than user CPU in the vmstat output, which was not what I wanted to see. That means instead of spending CPU on user tasks, the system is preoccupied with system tasks such as context switches and handling interrupts. Overall, not a good outlook.
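If you want to check this on your own box, plain vmstat is enough; the columns to watch are us (user CPU) and sy (system CPU). The 5-second interval below is just an example:

    # Sample CPU usage every 5 seconds. Under the cpu columns, "us" is user
    # time and "sy" is system time. A "sy" that stays higher than "us" means
    # the box is busy with context switches and interrupts, not user work.
    vmstat 5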

Using tcpdump to monitor the network packets between the database server and the frontend server, you can see exactly how many messages are exchanged, both coming in and going out, and on top of that how big each message is in bytes. In my test, the messages were many and small, averaging less than 100 bytes per packet.
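Something along these lines will show it; the interface, host name, and port below are placeholders for your own setup (the port is whatever you gave the broker with -S):

    # Watch the packets between this database server and one frontend.
    # -nn skips name resolution, -v prints the packet length on each line.
    # Replace eth0, frontend1, and 20931 with your interface, client host,
    # and broker port.
    tcpdump -i eth0 -nn -v 'host frontend1 and port 20931'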

So I dug a little deeper and read some more about how the CPUs are interrupted every time a network message comes in; all those interrupts only add more context switches to the already busy CPU schedulers. After a few lengthy conversations with Progress, Rich at Progress added a couple of knobs in the 10.2B release to let DBAs tune how the network messages flow.

This applies to the client-server connection only. There are a few gates in Progress when it comes to how records are sent to the client:

1. The server itself: it sleeps 2 seconds if there is nothing to do. You can see that by running 'truss' against a server process and watching the pattern; 2 seconds is the default (an illustration follows this list).
2. The sending of the first record: Progress sends the first record over to the client before it continues with the rest. There must have been some good reason for this design at the beginning.
3. When there are more records to send, the polling for the next record waits 0 seconds, while each record takes about 10 microseconds to fetch. This is a CPU-intensive activity.
4. The -Mm parameter: if you set it to 4K hoping Progress will send 4K of data each time, that most likely won't happen, because the default number of records per message (packet) is 16. You can set your -Mm to 16K, but if your record size is 100 bytes, the actual message going across is only 1600 bytes.
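For the first gate, the 2-second sleep shows up as a poll timeout in the system-call trace. The lines below are an illustration of the pattern, not captured output; the pid is made up, and on Linux you would use strace instead of truss:

    # Attach to a running server process (the pid here is hypothetical).
    truss -p 12345

    # When the server has nothing to do, the same call repeats: a poll
    # with a 2000 millisecond timeout (the 2-second default) returning 0
    # because no client had anything to say.
    #   poll(0x..., 1, 2000)   = 0
    #   poll(0x..., 1, 2000)   = 0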

The new knobs Progress provided are (a sketch of the startup line follows the list):
1. how long the server can rest before it wakes up to find work to do; the default is 2 seconds.
2. whether to omit the sending of the first record.
3. how long the pollskip should wait instead of the default 0 seconds.
4. the fill percentage of your -Mm before a message is sent across.
5. how many records can be stuffed into each message before it is sent.
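As far as I can tell, these knobs arrived in 10.2B as the -Nmsgwait and -prefetch* server startup parameters. Here is a sketch of how they might sit on a broker startup line; the database name, port, and values are examples for illustration, not recommendations:

    # Hypothetical broker startup for a database called "sports".
    #   -Nmsgwait 1          knob 1: seconds the server rests (default 2)
    #   -prefetchDelay       knob 2: don't ship the first record by itself
    #   -prefetchPriority 60 knob 3: skip this many polls while filling a message
    #   -prefetchFactor 90   knob 4: fill 90% of -Mm before sending
    #   -prefetchNumRecs 100 knob 5: up to 100 records per message (default 16)
    proserve sports -S 20931 -Mm 16384 -Nmsgwait 1 -prefetchDelay \
        -prefetchPriority 60 -prefetchFactor 90 -prefetchNumRecs 100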

The "chatty" or "choppy" Progress can become cool and calm even smooth, if you tune them carefully, or you will suffer that response-time delay in exchange. because almost every option is let you add more delay in the flow, so the messages can be exchanged more "efficiently", as long as that doesn't affect the user experiences that is.