Kitematic for easy setup #63
Comments
I believe the reason it won't work right now is that Kitematic assumes you want to run the container on your own machine. In reality, in most cases we want to run mccy on a remote server, and it'll create MC containers somewhere in the swarm. Does that sound right? If so, I could write up documentation on how to run mccy locally with the environment variables set for the remote server. This would also be a good opportunity to set it up myself, since I haven't actually done that; I've been relying on staging to test my changes so far.
I'll have to try it later, so from memory I'm not quite sure what would stop it from working. It used to be that Kitematic would start containers without first prompting for env vars or command-line options. If that's still the case, then it'll fail to start since `DOCKER_HOST` won't be set. Passing that would be enough to point at any remote Docker daemon that doesn't have TLS enabled. If it does have TLS, then a volume containing the client certificates needs to be attached as well.
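For reference, a minimal sketch of what that could look like with the plain Docker CLI, assuming mccy honors the standard Docker client environment variables; the image name (`itzg/mccy`), ports, and cert mount path are placeholders for illustration, not something confirmed in this thread:

```bash
# Remote daemon without TLS: DOCKER_HOST alone should be enough.
docker run -d -p 8080:8080 \
  -e DOCKER_HOST=tcp://remote-host:2375 \
  itzg/mccy

# Remote daemon with TLS: also mount the client certificates and point
# DOCKER_CERT_PATH at the mounted directory.
docker run -d -p 8080:8080 \
  -e DOCKER_HOST=tcp://remote-host:2376 \
  -e DOCKER_TLS_VERIFY=1 \
  -e DOCKER_CERT_PATH=/certs \
  -v "$HOME/.docker:/certs:ro" \
  itzg/mccy
```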
And on first start-up it doesn't allow you to change any options, so it does fail. I believe docker/kitematic#1260 would fix that, though. Before that fix, however, could we let it fail and then change the vars?
It would be helpful if the Kitematic support were a selectable module, so that it could be swapped out for support for (e.g.) Rancher, Kubernetes, or any other Docker-cluster-orchestration API. We use Rancher here, and many people use Kubernetes, so being able to select a different method of starting containers (raw Docker/Kitematic/Rancher/Kubernetes/etc.) at MCCY startup would make MCCY more compatible with everyone. I understand it would mean a fair amount of additional work, though.
I was thinking of just writing up something in the wiki on how to do it. @sshipway, are you thinking of something programmatic?
Bummer, docker/kitematic#1260 still hasn't been fixed. Yes, as a workaround you should be able to configure it after it fails to start with defaults... and that might actually work.
I may have misunderstood what Kitematic does? Rancher and Kubernetes provide a new API that goes over the top of the whole Docker cluster, so instead of calling the Docker API to spin up a container on the host, you call the Rancher (or Kubernetes) API to spin up a container somewhere in the cluster. I had thought Kitematic was something similar?
Hm. From what I understand, Kitematic handles the simple case of just a container on a host. It's really just a GUI for the Docker client. My thought was to make a quick-start guide using Kitematic to give people a feel for how mccy works. It would be: use Kitematic to pull and start the mccy container.
Then the user could go to localhost:8080, the web app would load up, and they could start MC containers through it.
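As a rough illustration of that quick start, this is roughly what Kitematic would be doing under the hood; the image name (`itzg/mccy`) and the socket mount are assumptions for illustration rather than details confirmed in this thread:

```bash
# Run mccy against the local Docker daemon via the Docker socket,
# exposing the web app on port 8080.
docker run -d --name mccy \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  itzg/mccy
# Then browse to http://localhost:8080 and start MC containers from the web app.
```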
Sounds like a good plan for a getting-started guide. Yeah, Kitematic is really just a GUI for the Docker client.
Ah, my bad. I had thought that Kitematic did a little more in handling a cluster of Docker hosts and provided a separate API. However, I'd still like to have the option to tell mccy to plug into the Rancher API (or Kubernetes API) as an alternative to using the Docker socket... ;)
Haha, I'm taking a few days' break, and then I'll dig in again so we can bump to an official 0.1. My aim is to then do #1. After that, maybe I can think about Rancher / Kubernetes O.O But that's just me. I'm sure itzg has his own mental plan 😜
Cool, and thanks for spinning off the specific issue on that.
Original issue description:
I'm not sure it's possible yet, but it'd be really cool to use Kitematic as the frontend for running mccy.
"Run containers through a simple, yet powerful graphical user interface."
Links: Kitematic, Docker Toolbox