After S3 suspend and resume, the Podman bridge and/or the host-side epairs sometimes end up down.
The behaviour is not consistent and I have not yet found a reliable way to reproduce it.
It has also occasionally happened after long periods of idleness, but I still need to rule out other causes (local scripts) for that.
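A quick way to spot the broken state (a sketch, reusing the cni-podman0 bridge name from the workaround below; the exact flags output will vary) is to check whether the bridge and its member epairs still carry the UP flag:

# show the flags line of the Podman bridge; a downed interface lacks UP
ifconfig cni-podman0 | head -n 1
# same check for the host side of every epair member of the bridge
for m in $(ifconfig cni-podman0 | grep member | awk '{print $2}'); do
    ifconfig ${m} | head -n 1
done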
This hacky palliative restores things to a working state and can be added to /etc/rc.resume or executed as root on the command line:
PODMAN_BRIDGE='cni-podman0'

# re-add the gateway address and bring the bridge itself back up
ifconfig ${PODMAN_BRIDGE} inet 10.88.0.1/24
ifconfig ${PODMAN_BRIDGE} up

# for every epair member of the bridge: parse the jail name and the
# jail-side interface name from the epair's description, bring the
# jail-side end up inside the jail, then bring the host-side end up
for EPAIR_HOST in $(ifconfig ${PODMAN_BRIDGE} | grep member | awk '{print $2}'); do
    ifconfig ${EPAIR_HOST} | grep description \
        | while read noop noop noop noop JAIL_NAME noop noop EPAIR_JAIL; do
            ifconfig -j ${JAIL_NAME} ${EPAIR_JAIL} up
        done
    ifconfig ${EPAIR_HOST} up
done
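To have this applied automatically on every resume, one option (a minimal sketch, assuming the snippet above is saved as /usr/local/sbin/podman-resume-fix.sh, a path picked here purely for illustration) is to call it from /etc/rc.resume, which runs as root after resume from ACPI suspend:

# append to /etc/rc.resume
if [ -x /usr/local/sbin/podman-resume-fix.sh ]; then
    /usr/local/sbin/podman-resume-fix.sh
fi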
Why I think suspend/resume resilience is important:
Devs and admins being able to run OCI containers not only in a server context but also on their laptops / daily drivers fosters familiarity, confidence and observability, and facilitates experimenting and prototyping.
Conversely, Podman networking apparently dying in random ways sends all kinds of alarm signals and leads non-experts to associate containers on FreeBSD with friction and frustration.
Many thanks to @Crest for his help in debugging this and teaching me a number of things in the process.