
[action] [PR:345] Fix snmp agent not-responding issue when high CPU utilization #346

Merged · 1 commit into sonic-net:202411 on Jan 14, 2025

Conversation

mssonicbld
Collaborator

Fix sonic-net/sonic-buildimage#21314
The SNMP agent and MIB updaters are based on asyncio coroutines;
the MIB updaters share the same asyncio event loop with the SNMP agent client.
Hence, while the updaters are executing, the agent client cannot receive or respond to new requests.

When CPU utilization is high (in some stress tests we drive the CPU to 100% utilization), the updates are slow, and this causes snmpd requests to time out because the agent is suspended while updating.
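
To illustrate the failure mode, here is a minimal sketch (not taken from the PR) of two coroutines sharing one asyncio event loop: while the updater coroutine is inside its CPU-bound update pass, the agent coroutine is not scheduled, so pending snmpd requests sit unanswered.

```python
import asyncio
import time

async def mib_updater():
    # Simulates a MIB updater whose update pass becomes slow under CPU pressure.
    while True:
        time.sleep(2)           # CPU-bound work; nothing else on the loop runs meanwhile
        await asyncio.sleep(5)  # fixed update interval (pre-PR behavior)

async def agent_client():
    # Simulates the SNMP agent client; it only handles requests when the event
    # loop schedules it, i.e. not while mib_updater() is mid-update.
    while True:
        await asyncio.sleep(0.1)  # poll/handle incoming snmpd requests

async def main():
    await asyncio.gather(mib_updater(), agent_client())

# asyncio.run(main())
```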

- What I did
Decrease the MIB update frequency when the update execution is slow.
pros:

  1. SNMP requests can succeed even at 100% CPU utilization.
  2. snmpd requests seldom fail due to timeout; combined with "Update snmpd.conf.j2 to prolong agentXTimeout to avoid timeout failure in high CPU utilization scenario" sonic-buildimage#21316, we have a 4*5 = 20s time window for the SNMP agent to wait for the MIB updates to finish and respond to the snmpd request.
  3. Reduces CPU cost when CPU usage is high, which avoids making the CPU even more contended.

cons:

  1. Tested on a pizzabox (4600c): the updaters are very fast, generally finishing within 0.001~0.02s, so the change won't actually affect the frequency and interval.
  2. On a Cisco chassis, the update of SNMP data could be delayed by 10~20s (at most 60s in extreme situations). Per my observation, most of the updaters finish within 0.5s. But for:
    2.a ciscoSwitchQosMIB.QueueStatUpdater, generally finishes in 0.5-2s, expected to be delayed to 5-20s
    2.b PhysicalTableMIBUpdater, generally finishes in 0.5-1.5s, expected to be delayed to 5-15s
    2.c ciscoPfcExtMIB.PfcPrioUpdater, generally finishes in 0.5-3s, expected to be delayed to 5-30s

- How I did it
In get_next_update_interval, we compute the interval based on the current execution time.
Roughly, we keep 'update interval' / 'update execution time' >= UPDATE_FREQUENCY_RATE (10).
More specifically, if the execution time is 2.000001s, we sleep 21s before the next update round.
The interval is capped at MAX_UPDATE_INTERVAL (60s).
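
A minimal sketch of the interval computation described above, assuming the constant names from this description (UPDATE_FREQUENCY_RATE = 10, MAX_UPDATE_INTERVAL = 60s) and a hypothetical default interval of 5s; the actual implementation in sonic-snmpagent may differ.

```python
import math

UPDATE_FREQUENCY_RATE = 10    # keep interval >= 10x the update execution time
MAX_UPDATE_INTERVAL = 60      # never wait longer than 60s between updates
DEFAULT_UPDATE_INTERVAL = 5   # hypothetical baseline interval (seconds)

def get_next_update_interval(execution_time: float) -> int:
    """Return the sleep time (seconds) before the next MIB update round."""
    # Scale the interval with the measured execution time, rounding up to a
    # whole second: e.g. 2.000001s of work -> ceil(20.00001) = 21s of sleep.
    scaled = math.ceil(execution_time * UPDATE_FREQUENCY_RATE)
    # Fast updaters (e.g. 0.001~0.02s on a pizzabox) keep the default
    # interval; slow ones are capped at MAX_UPDATE_INTERVAL.
    return min(max(scaled, DEFAULT_UPDATE_INTERVAL), MAX_UPDATE_INTERVAL)
```

With the 0.5-3s execution times observed on the Cisco chassis, this scaling yields the 5-30s delays listed in the cons above.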

- How to verify it
Tested on a Cisco chassis with test_snmp_cpu.py, which drives CPU utilization to 100% and verifies that SNMP requests still work.

- Description for the changelog

@mssonicbld
Collaborator Author

Original PR: #345

@mssonicbld
Collaborator Author

/azp run


Azure Pipelines could not run because the pipeline triggers exclude this branch/path.

@mssonicbld merged commit 0c399dc into sonic-net:202411 on Jan 14, 2025
4 checks passed