20:00:33 <sdake> #startmeeting kolla
20:00:34 <openstack> Meeting started Mon Sep 29 20:00:33 2014 UTC and is due to finish in 60 minutes.  The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:38 <openstack> The meeting name has been set to 'kolla'
20:00:41 <sdake> #topic rollcall
20:00:46 <mspreitz> hi
20:00:48 <derekwaynecarr> hi
20:00:49 <larsks> hallo
20:00:50 <jdob> o/
20:00:50 <sdake> howdie \o/
20:00:51 <dvossel> hi
20:00:52 <rhallisey> hello
20:00:52 <jrist> o/
20:00:52 <FlorianOtel> hi all
20:00:57 <sasha2> hi
20:01:00 <scollier> hi
20:01:02 <jpeeler> hey
20:01:04 <portante> o/
20:01:08 <Slower> howdy
20:01:16 <slagle> hi
20:01:38 <sdake> #topic agenda
20:01:46 <sdake> https://wiki.openstack.org/wiki/Meetings/Kolla#Agenda_for_next_meeting
20:01:55 <sdake> anyone have anything to add or change?
20:01:57 <funzo> Chris Alfonso
20:02:03 <sdake> howdie funzie
20:02:17 <larsks> Seems reasonable to me.
20:02:26 <mspreitz> I would not mind a review of the big picture
20:02:35 <mspreitz> I found the discussion confusing
20:02:46 <sdake> mspreitz cool maybe we should do that first
20:02:54 <sdake> #topic big picture - what are we trying to accomplish
20:02:54 <jdob> also curious what exists today, if anything
20:03:11 <Slower> sdake: I don't know if it's appropriate but I wanted to ask about this vs the openstack container effort..
20:03:12 <mspreitz> Do you want some leading questions?
20:03:24 <sdake> slower I don't think it has much to do with the container effort
20:03:29 <sdake> shoot mspreitz :)
20:03:34 <Slower> ok I'll talk to you about it later
20:03:48 <bthurber> sdake: scollier, apologies for the tardiness
20:03:48 <sdake> so big picture - put kubernetes in the undercloud
20:03:57 <mspreitz> What I understood is that the basic idea is using container images of OpenStack controllers and/or compute nodes...
20:04:01 <mspreitz> Q1: Which of those?
20:04:10 <mspreitz> Q2: what deploys the minions
20:04:17 <mspreitz> Q3: in VMs or on bare metal?
20:04:28 <sdake> q1 both q2 minions deployed by magic elves :) 3 - bare metal
20:04:29 <mspreitz> I'll stop there for now
20:04:37 <sdake> q2 may be a challenge for us
20:04:41 <sdake> I think everyone is struggling with that point
20:04:47 <sdake> trying to do kubernetes dev
20:05:04 <mspreitz> So both compute nodes and controller nodes will be containers, right?
20:05:05 <dvossel> for 3, are we limited to bare metal for any reason?
20:05:09 <larsks> sdake: some chance the answer to q2 may end up being "puppet", or something like that, right?
20:05:15 <sdake> right larsks
20:05:30 <sdake> dvossel no just better performance characteristics vs a vm
20:05:59 <mspreitz> Re Q1: my vague understanding is that there is a lot of cross-configuration to be done while installing OpenStack...
20:06:09 <mspreitz> is that already done in the images?
20:06:13 <sdake> it is not done yet
20:06:18 <sdake> so jdob asked what is available
20:06:19 <mspreitz> And if so, doesn't it have to be undone/redone?
20:06:32 <larsks> mspreitz: totally. trying to figure out how best to handle that is one of the big to-do items, I think.
20:06:32 <sdake> atm, we have a container that launches mariadb, and a container that launches keystone
20:06:34 <derekwaynecarr> RE q2: "everyone is struggling with that point" can you elaborate?  is it a k8s specific issue or enhancement needed?
20:07:02 <sdake> derekwaynecarr well deployment of the initial node is a huge pita which everyone struggles with
20:07:03 <mspreitz> OK, I think I understand where things are
20:07:09 <sdake> people make tools like pxeboot etc
20:07:15 <sdake> but it is still a struggle ;)
20:07:22 <mspreitz> but it seems to me that Kubernetes is not bringing a lot to the table for the  big  problem, which is all that configuration
20:07:35 <FlorianOtel> Maybe I'm missing the point here but (oversimplifying...) is this simply TripleO -> OpenStack on Kubernetes effort then ?
20:07:39 <mspreitz> and the magic elves' job
20:07:40 <sdake> good point, the config is difficult
20:07:44 <larsks> mspreitz: kubernetes brings scheduling and service auto-discovery, the latter of which will help out with but not solve the cross-config issue.
20:07:47 <sdake> florianotel right
20:08:03 <sdake> I dont know if anyone has a config solution
20:08:24 <derekwaynecarr> k8s today also makes some assumptions about how things are configured from a networking perspective, to support per-pod-IP concepts... all we have today is some salt scripts to configure that, but we could do more
20:08:56 <sdake> #topic Discuss development environment
20:08:58 <FlorianOtel> sdake, Ok, follow-up Q then:  Point being... ?  i.e. What's the tech / ops advantage of doing so ?
20:09:28 <sdake> florianotel treating openstack as a hermetically sealed container for that particular piece of software
20:09:36 <sdake> eg, all the magic is hidden in a container
20:09:52 <FlorianOtel> TripleO proved to be enough of a challenge as-is (at least AFAIU...), not quite clear what k8s will bring to that picture?
20:09:54 <sdake> I'd have to convince you containers are a good idea in general
20:09:59 <sdake> for them to be a good idea for openstack
20:10:27 <FlorianOtel> sdake, No need. That was table stakes for me to attend this meeting already :)
20:10:45 <sdake> cool
20:10:50 <mspreitz> sdake: wait...
20:11:00 <sdake> so dev environment, larsks can you put a #Link to your repo for launching kube on heat
20:11:00 <mspreitz> it is one thing to say containers are good for virtualization
20:11:10 <mspreitz> it is another to say they are a good way to setup software on machine
20:11:13 <mspreitz> machines
20:11:23 <mspreitz> where each machine is being treated as a whole
20:11:35 <sdake> container is like rpm with a built-in installer/deployment model
20:11:36 <sdake> imo :)
20:11:40 <larsks> link is https://github.com/larsks/heat-kubernetes for the heat templates, but I am not meetingbot-aware enough to know if that is sufficient :)
20:11:41 <FlorianOtel> mspreitz, the latter IMO. That's the whole point AFAICT
20:11:42 <sdake> so it is for software setup
20:11:43 <mspreitz> we will put one container on each machine, right?  no virtualization
20:11:45 <FlorianOtel> sdake, +1
20:12:10 <sdake> mspreitz the scheduler will sort that out
20:12:16 <sdake> the scheduler being kubernetes
20:12:21 <sdake> but it could put multiple things on one machine
20:12:32 <sdake> I guess, I don't know how kubernetes scheduler works ;-)
20:12:36 <mspreitz> if we are not fixing one container per machine then this is significantly different from what one expects of a bare metal install of OpenStack
20:12:44 <larsks> mspreitz: I was actually assuming multiple containers/machine, in most cases.  E.g., right now you might have a controller running multiple services.  In the k8s world, maybe that will be one service/container, but multiple containers/host.
20:12:54 <radez> sdake: I think that's right, multiple containers can end up on one machine
20:13:02 <larsks> mspreitz: I think that is actually pretty similar to a bare-metal install of openstack.
20:13:17 <larsks> radez: certainly, the scheduler will put multiple pods on a single host.
20:13:22 <derekwaynecarr> sdake: k8s scheduler is in early stages, but multiple containers can end up on the same machine; major scheduling constraint today is free host port
20:13:28 <mspreitz> larsks: maybe you are thinking of more containers than I was assuming
20:13:49 <radez> mspreitz: the breakdown is pretty much a container per service
20:13:51 <mspreitz> if we use a container for each of what is now a process, then..
20:14:01 <mspreitz> right
20:14:12 <larsks> mspreitz: possibly, or a "pod" for each of what is now a process (a "pod" being a group of tightly linked containers).
20:14:13 <sdake> but the process carries config and init with it
20:14:18 <mspreitz> well, I am not sure about service vs. process
20:14:20 <larsks> Err, s/process/service/
20:14:24 <sdake> with container, all that gets lumped together which is hermetically sealed = winning
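To ground the pod discussion above: a rough sketch of a single-service pod in the v1beta1 API of the time, with the schema recalled from the early Kubernetes examples and a hypothetical image name, so treat it as illustrative rather than anything committed to the repo.

    {
      "id": "keystone",
      "kind": "Pod",
      "apiVersion": "v1beta1",
      "labels": {"name": "keystone"},
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "keystone",
          "containers": [{
            "name": "keystone",
            "image": "kollaglue/keystone",
            "ports": [{"containerPort": 5000, "hostPort": 5000}]
          }]
        }
      }
    }

Submitted with the kubecfg CLI of the era, roughly: kubecfg -c keystone-pod.json create pods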
20:14:58 <mspreitz> OK, next issue..
20:15:13 <sdake> back on larsks link, I'd recommend folks get the dev environment rolling if you plan to write code for the project
20:15:24 <sdake> so far, jpeeler, larsks, radez, sdake have got the env running that I am aware of
20:15:26 <mspreitz> Is there any concern with allowing all the freedom that the k8s scheduler currently has?  What if we want to keep some heavy services off of compute nodes?
20:15:27 <sdake> so bug them for qs
20:15:49 <sdake> I think we want to keep everything off the compute nodes
20:15:50 <larsks> note that larsks considers his heat templates a bit of a hack, in particular the way it's handling the overlay network to meet kube's networking model.
20:15:55 <sdake> and I think mesos can solve that
20:16:10 <sdake> but like I said, I don't know enough about the scheduler to know for sure ;)
20:16:31 <mspreitz> those heat templates ... are they published anywhere?
20:16:38 <mspreitz> I know of ones that use Rackspace resources
20:16:41 <sdake> yar on larsks github
20:16:42 <larsks> mspreitz: yeah, that link I posted a few lines back...
20:16:52 <sdake> heat-kubernetes
20:17:01 <larsks> Actually, y'all are talking too much, it's a lot of lines back now :)
20:17:08 <larsks> mspreitz: https://github.com/larsks/heat-kubernetes
20:17:22 <mspreitz> thanks, something glitched and I did not see the earlier ref
20:17:23 <sdake> any questions about dev environment?
20:17:42 <sdake> I think the minimum you want to get set up is to make sure you can do a keystone endpoint-list from outside the instance
20:17:42 <larsks> no question, just encouragement to submit PRs for those templates if you think something can be done better...
20:18:15 <sdake> the dev environment is focused on heat because its easy to setup openstack
20:18:24 <sdake> larsks can serve as a point of contact if you get stuck there ;)
20:18:29 * sdake volunteers larsks!!
20:18:31 * larsks hides.
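For anyone spinning up the dev environment just discussed, the workflow is roughly the following. The template filename and stack parameters are assumptions (check the heat-kubernetes README for the real ones), and the keystone check at the end is the smoke test sdake mentions above.

    git clone https://github.com/larsks/heat-kubernetes
    cd heat-kubernetes
    # parameter names are illustrative; see the repo README for the actual ones
    heat stack-create kubernetes -f kubecluster.yaml \
        -P ssh_key_name=my-key -P external_network=public

    # from outside the instances, the containerized keystone should answer:
    keystone --os-auth-url http://<keystone-host>:5000/v2.0 \
        --os-username admin --os-tenant-name admin --os-password <password> \
        endpoint-list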
20:18:57 <sdake> #topic Brainstorm 10 blueprints to kick off with
20:19:14 <sdake> we need some features to implement in the launchpad tracker
20:19:24 <sdake> I think radez was thinking of entering one
20:19:34 <radez> I put one in... lemme get the link
20:19:40 <radez> https://blueprints.launchpad.net/kolla/+spec/kube-glance-container
20:19:50 <bthurber> another underlying question is how much the containers may be customized using puppet... thinking along the lines of the staypuft installer or the RHCI common installer, which will be staypuft based
20:19:57 <larsks> I think we should probably have a similar blueprint for each service we are attempting to containerize.
20:20:07 <sdake> bthurber no idea ;)
20:20:12 <radez> this is basically to start working through the glance containers... I don't know what's involved so I'll have to fill in stuff as I go a bit
20:20:19 <larsks> bthurber: I think that is one of the things we need to figure out.
20:20:24 <bthurber> +1
20:20:33 <sdake> anyone volunteer to make a launchpad tracker for all the containers?
20:20:39 <sdake> separately of course ;)
20:20:47 <larsks> radez: bthurber: I almost think that "figuring out how to handle configuration" is going to be the #1 blueprint, because it's going to inform the work on everything else...
20:21:09 <mspreitz> larsks: right, and ..
20:21:10 <bthurber> you bet...work backwards a bit to determine the overall strategy
20:21:15 <radez> larsks: agreed, though that shouldn't prevent us from doing work to get things working
20:21:21 <mspreitz> wouldn't the obvious thing be to leverage the service binding of k8s?
20:21:35 <larsks> mspreitz: totally! That's what I was mentioning earlier.
20:21:48 <mspreitz> OK.  But let's avoid the botch introduced by k8s
20:21:54 <rhallisey> sdake, I can do it
20:21:58 <larsks> mspreitz: which botch?
20:22:02 <sdake> thanks rhallisey
20:22:06 <mspreitz> SERVICE_HOST
20:22:15 <mspreitz> it requires that every proxy be universal
20:22:27 <mspreitz> using container linking envars instead avoids that assumption
20:22:29 <sdake> #action rhallisey to enter separate blueprints for each openstack service with containerization language
20:22:30 <larsks> mspreitz: Ah, okay.  So far we've been using the --link-like environment vars.
20:22:47 <derekwaynecarr> mspreitz: there is a proposal for k8s to eliminate that BOTCH
20:22:54 <mspreitz> great
20:23:11 <mspreitz> is it in the k8s repo of issues?
20:23:26 <derekwaynecarr> see https://github.com/GoogleCloudPlatform/kubernetes/issues/1107
20:23:46 <mspreitz> thanks
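To make the two discovery styles concrete: inside a container, the Docker --link approach and the Kubernetes service approach expose a database endpoint as environment variables roughly like this (variable names follow from the link alias / service name, and the addresses are made up for illustration):

    # docker --link mariadb:mariadb style (what the current prototype containers consume)
    MARIADB_PORT_3306_TCP_ADDR=172.17.0.2
    MARIADB_PORT_3306_TCP_PORT=3306

    # kubernetes service style (the SERVICE_HOST pattern discussed above)
    MARIADB_SERVICE_HOST=10.254.0.5
    MARIADB_SERVICE_PORT=3306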
20:24:38 <sdake> as far as services go, there is nova, swift, cinder, neutron, horizon, keystone, glance, ceilometer, heat, trove, zaqar, sahara
20:24:49 <sdake> that is 13 separate blueprints
20:25:00 <mspreitz> neutron might not be monolithic
20:25:18 <sdake> rhallisey once you have the blueprints entered, can you send a mail to openstack-dev so people can take ownership of the individual ones?
20:25:22 <larsks> mspreitz: neutron might not be pretty!
20:25:34 <sdake> neutron and cinder are going to be a real challenge
20:25:40 <rhallisey> sdake, should we split up by containers or by services?
20:25:45 <rhallisey> sdake, ok
20:26:03 <mspreitz> I suggest organize people by service
20:26:07 <larsks> rhallisey: I would say "by service" for now, and possibly the implementation will be multi-container.  Or not.
20:26:09 <mspreitz> let the people decide about containers
20:26:11 <radez> I vote by component/service
20:26:14 <larsks> Ah, great minds.
20:26:41 <bthurber> sdake: possibly more....if you want to break out the components of each service
20:26:59 <sdake> ya, atm we break out each component of each service into a separate container
20:27:06 <bthurber> +1
20:27:09 <sdake> I think we will have to experiment to see what works best there
20:27:23 <bthurber> there may be some shared components as well
20:27:34 <sdake> so the agenda topic "Define list of initial Docker Images (10 min)"
20:27:35 <sdake> is probably covered
20:27:57 <sdake> #topic Map core developer to docker image for initial implementation
20:28:01 <larsks> bthurber: although by sticking to one-process-per-container, we avoid the whole "how do we handle process supervision" bugaboo.  And I think that the "pod" abstraction makes the one-process-per-container model a little more tenable.
20:28:21 <sdake> I guess my thinking on this is we can just pick up blueprints when rhallisey sends out the note
20:28:25 <sdake> does that work for everyone?
20:28:50 <larsks> Sure.  rhallisey, don't forget to include "supporting" services like mysql, rabbitmq...
20:29:06 <bthurber> larsks: prob good to start there and as we mature see where there is overlap.  May find opportunity for some efficiency.
20:29:07 <rhallisey> larsks, sounds good
20:29:39 <rook> WRT Neutron, is there a Blueprint built for this? I am curious how we can containerize some of the services. larsks maybe you might know?
20:29:48 <larsks> rook: no blueprints yet!
20:29:51 <radez> rhallisey: did you see the link to the glance one I created? if it doesn't meet your standard just ditch it or we can change it
20:29:52 <rook> larsks: roger.
20:30:04 <rook> larsks: how about your thoughts? ;)
20:30:13 <rook> we can offline it...
20:30:30 <rhallisey> radez, I'll take a look
20:30:46 <radez> rook:  if you look at the current code some things are already broken down across different services
20:31:01 <sdake> #topic gating
20:31:04 <radez> you can get an idea of how some are being done already there to get your gears turning
20:31:04 <rook> radez which code?
20:31:08 <larsks> rook: chat after meeting, maybe?
20:31:10 <sdake> atm, we have no gating in the codebase
20:31:20 <sdake> I'll file blueprints for every service to introduce gating
20:31:21 <rook> radez: roger - my concern is namespaces wrt Neutron
20:31:28 <sdake> I think what would work best is at least tempest gating on the containers
20:31:35 <sdake> I'll tackle the implementation
20:31:44 <sdake> if someone wants to join me, that wfm ;)
20:31:51 <radez> rook: https://github.com/jlabocki/superhappyfunshow/
20:32:11 <larsks> rook:  but note, moving to github.com/openstack Real Soon Now.
20:32:14 <larsks> sdake: where is that right now?
20:32:40 <sdake> stackforge -> https://review.openstack.org/#/c/124453/
20:33:10 <rook> larsks radez thx
20:33:46 <sdake> any other thoughts on gating?
20:33:55 <larsks> Nah, tempest seems like a reasonable starting point.
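A minimal sketch of what tempest gating against a containerized keystone could look like; the tempest.conf wiring and test selection are assumptions, and the actual gate jobs would be defined separately in project-config.

    # tempest.conf (illustrative): point [identity] uri at the container's endpoint,
    #   uri = http://<keystone-host>:5000/v2.0
    git clone https://github.com/openstack/tempest
    cd tempest
    testr init
    testr run tempest.api.identity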
20:34:06 <sdake> #topic open discussion
20:34:20 <sdake> likely we will just end in 10 mins, so I'll set a 10 minute timer :)
20:34:36 <sdake> anyone have any open items they wish to discuss?
20:34:55 <jdob> dumb question, what room does the project talk in?
20:35:01 <jdob> is there a kolla room or using #tripleo?
20:35:02 <sdake> #tripleo
20:35:04 <jdob> kk
20:35:14 <jdob> as soon as I started asking that I remembered the initial email
20:35:15 <larsks> sdake: do we want to create a project-specific channel?
20:35:31 <sdake> larsks the tripleo folks thought it would be better if we used the same channel
20:35:38 <larsks> Fair enough.
20:35:42 <sdake> because separate channels never die, and we are really just an offshoot of the tripleo project
20:36:31 <sdake> any other discussion?
20:36:38 <sdake> 30 secs and I'll end meeting ;)
20:36:54 <rook> =)
20:37:03 <sdake> thanks folks
20:37:05 <sdake> #endmeeting