Is it possible (and how) to implement the following scenario using pysnmp:
snmp_agent.receive_traps()
.... some code...
print(snmp_agent.get_traps())
From my understanding, the trap listener must run in a separate thread, otherwise it will block the code in step 2. For reference, here is the SNMP agent method I am currently using to receive traps:
from time import time

from pysnmp.entity.rfc3413 import ntfrcv
from pysnmp.smi import error
from pysnmp.smi.rfc1902 import ObjectIdentity, ObjectType


def receive_traps(self, trap_event_timeout=40):
    trap_event_start = time()
    traps = []

    def event_timer(time_now):
        # Stop the dispatcher once the timeout expires
        if time_now - trap_event_start > trap_event_timeout:
            self._snmp_engine.transportDispatcher.jobFinished(1)

    def cbFun(
        snmpEngine, stateReference, contextEngineId, contextName, varBinds, cbCtx
    ):
        try:
            varBinds = [
                ObjectType(ObjectIdentity(name), value).resolveWithMib(
                    self._mib_view_controller
                )
                for name, value in varBinds
            ]
        except error.SmiError as err:
            print("MIB resolution error\n{}".format(err))
        else:
            traps.append(
                {
                    name.getMibSymbol()[1]: value.prettyPrint()
                    for name, value in varBinds
                }
            )

    trap_receiver = ntfrcv.NotificationReceiver(self._snmp_engine, cbFun)
    self._snmp_engine.transportDispatcher.registerTimerCbFun(event_timer)
    self._snmp_engine.transportDispatcher.jobStarted(1)
    try:
        self._snmp_engine.transportDispatcher.runDispatcher()
    except Exception:
        self._snmp_engine.transportDispatcher.closeDispatcher()
        raise
    finally:
        self._snmp_engine.transportDispatcher.unregisterTimerCbFun(event_timer)
        trap_receiver.close(self._snmp_engine)
    return traps
I can offer a few approaches for your consideration:
Move the code implementing step (2) into a function and have pysnmp's main loop call that function periodically. If your trap-processing code can block for a long time, consider running the processing code in a thread.
Implement your own main loop along the lines of pysnmp's, from which you can perform step (2). Possibly in a thread as well.
Have your trap-processing code running in a thread (or in a subprocess) at all times, reading incoming traps from the SNMP thread via a threading.Queue or multiprocessing.Queue.
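A minimal sketch of that last, queue-based approach. Note that fake_trap_source and drain are hypothetical names for illustration: fake_trap_source stands in for pysnmp's dispatcher thread, where your cbFun would put each decoded trap onto the queue.

```python
import queue
import threading

trap_queue = queue.Queue()

def fake_trap_source(n):
    # Stand-in for the SNMP receiver thread: in real code, cbFun would
    # call trap_queue.put(...) for every decoded notification.
    for i in range(n):
        trap_queue.put({"snmpTrapOID": "1.3.6.1.6.3.1.1.5.{}".format(i + 1)})

receiver = threading.Thread(target=fake_trap_source, args=(3,), daemon=True)
receiver.start()

# ... some code ... (the main thread is free to do other work here)

receiver.join()  # demo only; a real receiver would keep running

def drain(q):
    # Non-blocking drain: collect whatever traps have arrived so far.
    traps = []
    while True:
        try:
            traps.append(q.get_nowait())
        except queue.Empty:
            return traps

traps = drain(trap_queue)
print(traps)  # the get_traps() analogue from the question
```

The key point is that the queue decouples the two sides: the receiver thread never blocks on your processing code, and get_traps() becomes a cheap, non-blocking drain of the queue.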
If trap processing can take time, my gut feeling leans towards the latter approach (multiprocessing is awesome).