Control socket timeout while processing a show command

Hi.

We have a situation where, during a show command, we see ConfD closing the control socket:

Show started at 15:50:38

2025-10-10 15:50:38.337 : 1-1 : <Debug> %CONFD : fifo2elemlog_all[4930] : Oct 10 15:50:38 st-fln-dm4170-204 confd[5060]: devel-c get_next_object request for callpoint 'oper-dc-bgp' path /DC-BGP-MIB:DC-BGP-MIB/bgpNlriTable/bgpNlriEntry
2025-10-10 15:50:38.341 : 1-1 : <Debug> %CONFD : fifo2elemlog_all[4930] : Oct 10 15:50:38 st-fln-dm4170-204 confd[5060]: devel-c close_usess db request daemon id: 80
2025-10-10 15:50:38.341 : 1-1 : <Debug> %CONFD : fifo2elemlog_all[4930] : Oct 10 15:50:38 st-fln-dm4170-204 confd[5060]: devel-c get_next_object succeeded for callpoint 'oper-dc-bgp' path /DC-BGP-MIB:DC-BGP-MIB/bgpNlriTable/bgpNlriEntry
2025-10-10 15:50:38.349 : 1-1 : <Debug> %CONFD : fifo2elemlog_all[4930] : Oct 10 15:50:38 st-fln-dm4170-204 confd[5060]: devel-c get_elem request for callpoint bgpTransCp path /DC-BGP-MIB:DC-BGP-MIB/bgpNlriTable/bgpNlriEntry{1 peerIndex 2 ipv4 mplsBgpVpn 00:01:71:00:00:3f:48:00:00:00:32:64:db:29 112 0}/bgpFlapStatsCleardamp
2025-10-10 15:50:38.357 : 1-1 : <Debug> %CONFD : fifo2elemlog_all[4930] : Oct 10 15:50:38 st-fln-dm4170-204 confd[5060]: devel-c get_elem succeeded for callpoint bgpTransCp path /DC-BGP-MIB:DC-BGP-MIB/bgpNlriTable/bgpNlriEntry{1 peerIndex 2 ipv4 mplsBgpVpn 00:01:71:00:00:3f:48:00:00:00:32:64:db:29 112 0}/bgpFlapStatsCleardamp

(...)

2025-10-10 15:52:35.598 : 1-1 : <Debug> %CONFD : fifo2elemlog_all[4930] : Oct 10 15:52:35 st-fln-dm4170-204 confd[5060]: devel-c get_elem succeeded for callpoint bgpTransCp path /DC-BGP-MIB:DC-BGP-MIB/bgpNlriTable/bgpNlriEntry{1 peerIndex 2 ipv4 mplsBgpVpn 00:01:51:00:00:3f:48:00:00:00:70:c0:da:b5 112 0}/bgpFlapStatsCleardamp
2025-10-10 15:52:35.600 : 1-1 : <Debug> %CONFD : fifo2elemlog_all[4930] : Oct 10 15:52:35 st-fln-dm4170-204 confd[5060]: devel-c get_elem request for callpoint bgpTransCp path /DC-BGP-MIB:DC-BGP-MIB/bgpNlriTable/bgpNlriEntry{1 peerIndex 2 ipv4 mplsBgpVpn 00:01:51:00:00:3f:48:00:00:00:70:c0:da:b5 112 0}/bgpFlapStatsClearstat
2025-10-10 15:52:35.603 : 1-1 : <Debug> %CONFD : fifo2elemlog_all[4930] : Oct 10 15:52:35 st-fln-dm4170-204 confd[5060]: devel-c get_elem succeeded for callpoint bgpTransCp path /DC-BGP-MIB:DC-BGP-MIB/bgpNlriTable/bgpNlriEntry{1 peerIndex 2 ipv4 mplsBgpVpn 00:01:51:00:00:3f:48:00:00:00:70:c0:da:b5 112 0}/bgpFlapStatsClearstat
2025-10-10 15:52:35.612 : 1-1 : <Debug> %CONFD : fifo2elemlog_all[4930] : Oct 10 15:52:35 st-fln-dm4170-204 confd[5060]: devel-c Control socket request timed out daemon 'bgp-app' id 80
2025-10-10 15:52:35.613 : 1-1 : <Debug> %CONFD : fifo2elemlog_all[4930] : Oct 10 15:52:35 st-fln-dm4170-204 confd[5060]: - Daemon bgp-app timed out

Socket closed at 15:52:35

In this show, we are iterating (through a MAAPI cursor) over a large number of entries in order to select the ones required by the show parameters.

While we were iterating, a maapi_get request failed with “internal error” at the time of the socket closure.

We have also been asking ConfD to extend the timeout for the MAAPI socket, but nothing was done regarding the control socket.

Our process is single-threaded, so we have been neglecting the control socket while accessing MAAPI.

Is there any way to keep the control socket alive in this situation?

What is the best way to make this “slow show” finish successfully?

Regards

Caimi

Hi,

Is there a reason you are not using threads? The situation you are describing is the basis for the User Guide’s discussion of using threads to move long-running tasks out of the control-socket loop.

Scott

Hi Scott

It is a project decision: all of our processes are single-threaded and were expected to be “fast enough” to remain responsive.

But we found this old show command that is not performing as intended.

Is there a way to make this work in the single-thread scenario?

Or should we do a complete rework of this implementation?

Regards

Caimi

I think the threads approach would be the most reliable, but I’m happy to get other input from the community.

Best,

Scott