Many avid IRC users use an IRC bouncer, a proxy service that stays persistently connected to your preferred IRC networks and channels. Instead of connecting directly to an IRC network such as irc.example.com, you connect to a machine like bouncer.mysite.com, which runs the bouncer software. The bouncer, in turn, is connected to the IRC network. When you log into your bouncer, it shows you the messages from your channels that you may have missed while offline, as well as private messages from other users. Read more here.
ZNC is an IRC bouncer. And this Git repo is a simple example of wrapping the Docker Hub ZNC container image to run our own ZNC bouncer in OpenShift.
Sounds cool, right? It is. And here's how you can use it.
Create the app from the Docker Hub image and expose it to web traffic
oc new-app dudash/openshift-znc --name=znc-demo
oc expose service znc-demo
Now you can access it via the route that was automatically exposed on port 6697.
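If you need the hostname that route was given (by default the route created by oc expose shares the service's name), you can look it up with:

oc get route znc-demo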
Fork this repo by clicking the button on the top right of this page.
Create the app from your forked repo (Docker build) and expose it to web traffic
oc new-app https://github.com/yourfork/openshift-znc.git --name=znc-demo
oc expose service znc-demo
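Since this path builds the image from the repo's Dockerfile, you can follow the Docker build as it runs (assuming the build config took the --name passed to new-app above):

oc logs -f bc/znc-demo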
(Note: if you are using OpenShift Online, you will have to build your own Docker image and run new-app from that image, because Dockerfile builds are currently not allowed on OpenShift Online.)
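As a rough sketch of that workaround (yourdockerhubuser is a placeholder for your own Docker Hub account, and this assumes you have Docker installed locally):

docker build -t yourdockerhubuser/openshift-znc .
docker push yourdockerhubuser/openshift-znc
oc new-app yourdockerhubuser/openshift-znc --name=znc-demo
oc expose service znc-demo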
Here's a quick way to set up your own configuration with a ConfigMap.
- Download the znc.conf from this repo or create your own (one way to generate a fresh config is sketched after these steps)
- Turn that file into a config map by running:
oc create configmap znc-config --from-file=./znc.conf
- Pause the rollouts so we can make all our changes without rolling updates
oc rollout pause dc/znc-demo
- Map the configmap to the deployment config by running:
oc volume dc/znc-demo --add --configmap-name=znc-config --mount-path='/znc-config/configs/' --default-mode='0777'
- Set an ENV var to tell the scripts running in the pod to use a different config path
oc env dc/znc-demo OVERRIDE_DATADIR='/znc-config'
- Resume deployments
oc rollout resume dc/znc-demo
- Now OCP will re-deploy, and the znc.conf file will be mounted at the /znc-config/configs path in the running container
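To double-check that the new rollout finished and that the config file landed where the container expects it, something like this should work:

oc rollout status dc/znc-demo
oc rsh dc/znc-demo ls /znc-config/configs/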
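If you'd rather generate a fresh znc.conf than download the one in this repo (as mentioned in the first step above), one option is to run the upstream znc image from Docker Hub locally with its interactive config wizard. This is only a sketch: it assumes Docker is installed and that the upstream image writes its data under /znc-data; the dudash/openshift-znc wrapper may use different paths.

docker run -it -v "$(pwd)/znc-data:/znc-data" znc --makeconf
cp znc-data/configs/znc.conf ./znc.conf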
You could also consume the ConfigMap data via a persistent volume (PV) so that changes are preserved. I'll leave this as an exercise for you to figure out. Start by reading the details here: