16:00:49 <adrian_otto> #startmeeting containers
16:00:50 <openstack> Meeting started Tue Dec 13 16:00:49 2016 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:54 <openstack> The meeting name has been set to 'containers'
16:00:56 <juggler> adrian_otto ah, ok
16:00:58 <tmorin> juggler: sorry about that
16:01:08 <juggler> tmorin no worries :-)
16:01:16 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-12-13_1600_UTC Our Agenda
16:01:23 <adrian_otto> #topic Roll Call
16:01:24 <strigazi> o/
16:01:27 <adrian_otto> Adrian Otto
16:01:29 <jvgrant> Jaycen Grant
16:01:31 <Drago1> o/
16:01:31 <tonanhngo> Ton Ngo
16:01:32 <hongbin> o/
16:01:34 <hieulq_> Hieu LE o/
16:01:41 <juggler> Perry Rivera o/
16:01:45 <swatson> O/
16:02:24 <randallburt> o/
16:02:42 <adrian_otto> hello juggler strigazi jvgrant tonanhngo Drago1 hongbin hieulq_ swatson and randallburt
16:03:20 <adrian_otto> I was hosting a webinar a moment ago, so I asked juggler to be here to start the meeting for us in case I was held over for a few minutes.
16:03:27 <juggler> hello
16:03:33 <adrian_otto> :-)
16:03:56 <juggler> hello all again!
16:04:04 <adrian_otto> #topic Announcements
16:04:15 <adrian_otto> 1) FYI: Some clouds used in Openstack CI do NOT have internet access. Example: http://logs.openstack.org/53/404253/7/check/gate-functional-dsvm-magnum-k8s-ubuntu-xenial/83e2636/console.html#_2016-12-12_08_44_01_365605
16:04:24 <adrian_otto> not sure who added that one to the agenda.
16:04:31 <adrian_otto> any follow-up remarks on this announcement?
16:04:35 <strigazi> I added this; I have an update, but I couldn't log in
16:04:47 <adrian_otto> proceed, strigazi
16:05:02 <strigazi> The problem is that docker can't reach docker.io or gcr.io
16:05:40 <strigazi> There is an issue with network configuration
16:05:59 <strigazi> only in some providers
16:06:04 <adrian_otto> is this a malfunction of those clouds?
16:06:14 <adrian_otto> or is that the expected configuration for those?
16:06:19 <strigazi> I asked on the infra channel and they told me we can't do anything
16:06:48 <strigazi> I think it's the configuration; I'll continue looking into this
16:06:53 <adrian_otto> ok, let's regroup on this in open discussion
16:07:01 <strigazi> sure
16:07:06 <adrian_otto> or maybe that was enough just to let us all know it's ongoing work
16:07:17 <adrian_otto> 2) Reminder, there will be no meeting on 2016-12-27
16:07:18 <strigazi> ongoing work, nothing more
16:07:26 <adrian_otto> any other announcements from team members?
16:07:53 <adrian_otto> #topic Review Action Items
16:08:01 * adrian_otto adrian_otto to ask swatson about claiming https://bugs.launchpad.net/magnum/+bug/1489908
16:08:02 <openstack> Launchpad bug 1489908 in Magnum "Tech-Debt: Add tests for DB migration" [High,Confirmed] - Assigned to Stephen Watson (stephen-watson)
16:08:13 <adrian_otto> Status: complete, swatson is now working on it.
16:08:19 <adrian_otto> to my delight!
16:08:22 <swatson> :)
16:08:36 <adrian_otto> thanks swatson !!
16:08:40 <swatson> np
16:08:42 <adrian_otto> #topic Blueprints/Bugs/Reviews/Ideas
16:08:48 <adrian_otto> Essential Blueprints
16:09:05 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/flatten-attributes Flatten Attributes [strigazi]
16:09:25 <juggler> yea swatson!
16:09:39 <adrian_otto> strigazi: you still with us?
16:09:39 <strigazi> Good progress, the db migration script is done
16:09:45 <adrian_otto> ah, there you are
16:10:00 <strigazi> Finishing the python code
16:10:12 <strigazi> see: https://review.openstack.org/#/c/395012/
16:10:32 <strigazi> The spec might need a revision
16:10:55 <strigazi> When written, the goal was to do the migration in two steps
16:11:28 <strigazi> But for a service like magnum we can afford a small downtime to upgrade, like heat does
16:12:05 <strigazi> So, as it is pushed now, it adds the new table and migrates all the data to the new schema
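[For context, a minimal alembic sketch of the expand-and-migrate step strigazi describes above; the table and column names are hypothetical, not the actual schema from review 395012:]

    # Hypothetical expand-and-migrate alembic script: create the new
    # flattened table, copy rows across, and defer the DROP (contract)
    # step so the old table keeps working during the upgrade window.
    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.create_table(
            'cluster_flat',  # hypothetical name
            sa.Column('id', sa.Integer(), primary_key=True),
            sa.Column('uuid', sa.String(36), nullable=False),
            sa.Column('labels', sa.Text()),  # attribute flattened in
        )
        conn = op.get_bind()
        rows = conn.execute(sa.text('SELECT id, uuid, labels FROM cluster'))
        for row in rows:
            conn.execute(
                sa.text('INSERT INTO cluster_flat (id, uuid, labels) '
                        'VALUES (:id, :uuid, :labels)'),
                id=row['id'], uuid=row['uuid'], labels=row['labels'],
            )
        # No DROP of the old table here: the contract phase is what
        # forces downtime, per the discussion that follows.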
16:12:07 <adrian_otto> our preference would be to support zero downtime upgrades
16:12:16 <randallburt> IIRC Heat is hoping to get rid of any need for downtime. Hard problem, but if we can avoid it, we should
16:12:54 <Drago1> strigazi: There's only one migration in the spec, not two
16:13:03 <randallburt> preference would be for adding new table, syncing data, and using code or triggers to sync for at least a cycle
16:13:04 <strigazi> randallburt ^^ afaik heat doesn't have a solid plan at the moment
16:13:13 <randallburt> strigazi:  fair enough
16:13:36 <Drago1> The fact that heat isn't doing it yet was enough for me to abandon it in the spec
16:13:39 <hieulq_> strigazi: your code contains a DROP part, which may be the cause of upgrade downtime
16:13:44 <strigazi> Drago1, true thanks
16:13:47 <randallburt> Why?
16:14:19 <randallburt> Maybe we can come up with something that helps them rather than waiting on them to solve a problem with something we may or may not be able to use
16:14:21 <strigazi> Currently, magnum requires the daemons to be stopped to upgrade the db
16:14:36 <hieulq_> strigazi: IIRC, nova team try to avoid DROP part (contract phase) for achieving rolling-upgrade
16:14:46 <strigazi> even before this change
16:14:50 <Drago1> I did not want to gate flatten attributes on that. If we want, we can work on it, but I see no reason to block features until we have it
16:15:03 <Drago1> randallburt: ^
16:15:11 <adrian_otto> +1
16:15:15 <strigazi> Drago1 +1 it's a  totally different task
16:15:18 <randallburt> strigazi:  so I can't do blue/green deployment with Magnum at all?
16:15:21 <jvgrant> Drago1 +1
16:15:24 <randallburt> agreed
16:15:30 <randallburt> it can be solved separately
16:15:58 <adrian_otto> any more remarks on flatten-attributes?
16:16:10 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/nodegroups Nedegroups [Drago]
16:16:38 <Drago1> No update on Nedegroups. I haven't had time to work on it. jvgrant?
16:17:00 <jvgrant> working on some minor updates from the last set of feedback
16:17:08 <juggler> point of clarification: nodegroups or Nedegroups?
16:17:11 <jvgrant> i was out part of last week, i should have another update today
16:17:20 <adrian_otto> juggler: NodeGroups
16:17:24 <juggler> ty
16:17:35 <Drago1> ;)
16:17:43 <juggler> Drago1 :)
16:17:44 <adrian_otto> I tried to edit it, but the wiki system is busted currently
16:17:56 <adrian_otto> and I cut&pasted that from the wiki
16:18:04 <adrian_otto> ok, next BP
16:18:10 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/secure-etcd-cluster-coe Secure Central Data store(etcd) [yatin]
16:18:20 <adrian_otto> I don't think yatin is here to give an update.
16:18:29 <adrian_otto> I think the code for this was in review last I checked.
16:18:42 <strigazi> The k8s patch is in very good shape, testing
16:18:49 <Drago1> link?
16:19:04 <strigazi> https://review.openstack.org/#/c/407374/
16:19:05 <adrian_otto> #link https://review.openstack.org/407374
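[For readers unfamiliar with the etcd side of this: securing the data store is largely a matter of launching etcd with its TLS and client-cert-auth flags. A hedged sketch with placeholder certificate paths; the actual wiring lives in the review linked above:]

    # Sketch: launch etcd with client certificate authentication.
    # The flags are standard etcd options; the paths are placeholders.
    import subprocess

    subprocess.run([
        'etcd',
        '--cert-file', '/etc/etcd/server.crt',    # server TLS cert
        '--key-file', '/etc/etcd/server.key',     # server TLS key
        '--client-cert-auth',                     # require client certs
        '--trusted-ca-file', '/etc/etcd/ca.crt',  # CA for client certs
        '--listen-client-urls', 'https://0.0.0.0:2379',
        '--advertise-client-urls', 'https://127.0.0.1:2379',
    ], check=True)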
16:19:41 <adrian_otto> ok, next one is:
16:19:44 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades Cluster Upgrades [strigazi]
16:20:11 <strigazi> spec revised, ready for input
16:20:24 <strigazi> #link https://review.openstack.org/#/c/392193/
16:20:48 <adrian_otto> thanks strigazi
16:21:05 <adrian_otto> any remarks on this?
16:21:33 <strigazi> I don't have any at the moment
16:21:43 <adrian_otto> ok, that concludes the Essential Blueprints review
16:21:47 <adrian_otto> Other Work Items
16:21:51 <adrian_otto> #link https://review.openstack.org/391537 Specification for Magnum cluster stats API [vijendar]
16:22:09 <adrian_otto> has vijendar started his time off?
16:22:25 <adrian_otto> welcome jasond
16:22:35 <randallburt> dunno tbh
16:22:35 <jasond> adrian_otto: thanks
16:23:06 <adrian_otto> ok, the open question on this is whether we should use a /stats endpoint for this
16:23:20 <adrian_otto> and based on further discussion I came to prefer that approach to others
16:23:34 <adrian_otto> that's what's shown in the current review IIRC
16:23:38 <tonanhngo> +1
16:23:39 <randallburt> adrian_otto:  is that question still open? thought vijendar changed it because Drago1 brought up a technical issue with the other proposal
16:23:52 <adrian_otto> do we want to discuss this at all, or are we all happy using /stats?
16:23:54 <Drago1> Yes, /stats versus what alternative?
16:24:13 <jvgrant> I think everyone was mostly agreed now on /stats
16:24:15 <randallburt> adrian_otto:  IMO, leave further discussion to the spec and circle back if it stalls again
16:24:42 <adrian_otto> the one with the potential namespace collision problem is the alternative, but I wanted to confirm consensus on the change, as we've had intermittent review participation on that spec.
16:25:05 <adrian_otto> ok, I'm planning to vote to merge that today.
16:25:11 <jvgrant> the other reason it was requested for the meeting was a desire to get community agreement on the included stats
16:25:15 <hongbin> there is a minor issue
16:25:34 <hongbin> that is, who implements the metric collection is unspecified (the driver or magnum)
16:25:41 <Drago1> I agree with tonanhngo's comment that the driver should be responsible for what stats to collect.
16:25:47 <hongbin> that is not a blocker, but better to address it in a revision
16:26:03 <hongbin> Drago1: then, state it in the spec :)
16:26:04 <adrian_otto> hongbin: noted, thanks for reminding me about that
16:26:06 <jvgrant> in my opinion, i think we start with what was included here and add additional when needed
16:26:08 <randallburt> adrian_otto:  hongbin: if vijendar isn't available today, I'll address those comments as best I can to keep the train moving
16:26:30 <hongbin> ok
16:26:32 <randallburt> it's not a question of "what", but some idea of the expected "how" at this point
16:27:19 <Drago1> hongbin: I will. I would do it right now but I can't sign in to gerrit…
16:27:24 <adrian_otto> one thing we can do is open tech debt bug tickets against a spec when we have something that we anticipate revisiting, but we don't want to hold spec merge up further.
16:27:29 <randallburt> Drago1:  nobody can atm
16:27:49 <randallburt> ok so Drago1 or I will update the spec sometime today if we can't get Vijendar
16:27:50 <hongbin> ok, it is not a blocker, so feel free to merge it
16:27:50 <adrian_otto> I think that means we can't open bugs either ;)
16:27:59 <adrian_otto> thanks
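[Since the consensus above is a /stats endpoint with the driver deciding what to collect, here is a minimal sketch in the style of Magnum's pecan-based API; the controller shape and the get_stats() hook are invented for illustration, not taken from the spec:]

    # Hypothetical GET /stats controller; the driver hook is invented
    # to illustrate tonanhngo's point that the driver should own the
    # choice of stats to collect.
    import pecan

    class FakeDriver(object):
        """Stand-in for a cluster driver; get_stats() is assumed."""
        def get_stats(self, context):
            return {'clusters': 0, 'nodes': 0}

    class StatsController(object):
        @pecan.expose('json')
        def index(self):
            # pecan.request.context is assumed populated by API hooks.
            driver = FakeDriver()
            return driver.get_stats(pecan.request.context)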
16:28:18 <adrian_otto> ok, that brings us to an important topic
16:28:21 <swatson> adrian_otto: +1 to the tech debt idea
16:28:25 <adrian_otto> #topic Should we support non-Atomic Fedora? https://review.openstack.org/#/c/409251/
16:28:37 <randallburt> yes
16:28:42 <adrian_otto> some background...
16:28:43 <randallburt> :)
16:29:05 <adrian_otto> Magnum drivers should each support one OS
16:29:14 <juggler> +1 tech debt bug tickets
16:29:24 <adrian_otto> as the required orchestration can vary from one to the next
16:30:03 <adrian_otto> and having multi-personality drivers results in more complex driver code with lots of conditional logic
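[To make the one-OS-per-driver point concrete: drivers advertise the (server_type, OS, COE) combination they handle, so a second OS means a second driver rather than branching inside one. A rough sketch; the attribute names approximate Magnum's driver interface rather than quote it:]

    # One driver per OS keeps the orchestration code branch-free:
    class FedoraAtomicK8sDriver(object):
        provides = [{'server_type': 'vm',
                     'os': 'fedora-atomic',
                     'coe': 'kubernetes'}]

    class FedoraK8sDriver(object):
        # A non-Atomic variant would land as a separate driver
        # (first in contrib, per the discussion below).
        provides = [{'server_type': 'vm',
                     'os': 'fedora',
                     'coe': 'kubernetes'}]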
16:30:37 <adrian_otto> one new feature we are working on this cycle is lifecycle operations (to support upgrades and restarts for example)
16:31:05 <adrian_otto> we planned to leverage the heat agent and heat features to implement this
16:31:22 <adrian_otto> rather than having a custom magnum agent
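[A hedged sketch of the plan as stated: drive lifecycle operations through a heat stack update, which the in-instance heat agents then apply; the 'kube_tag' parameter is an invented example, not a spec'd knob:]

    # Trigger an upgrade as a heat stack update; the heat agents on
    # the nodes pick up and apply the resulting software deployments.
    from heatclient import client as heat_client

    def upgrade_cluster(session, stack_id):
        heat = heat_client.Client('1', session=session)
        heat.stacks.update(
            stack_id,
            existing=True,                      # patch the live stack
            parameters={'kube_tag': 'v1.4.0'},  # hypothetical parameter
        )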
16:31:51 <adrian_otto> in implementing the heat agent we discovered some difficulty in making it work in Fedora Atomic, even though we know that theoretically it *should* work
16:32:10 <hongbin> what are the difficulties?
16:32:11 <adrian_otto> and we showed that it *does* work with Fedora
16:32:26 <adrian_otto> hongbin: hold on just a moment before we go into problem solving mode
16:32:37 <strigazi> well it works, it's a bit more difficult to add the required packages
16:32:37 <randallburt> hongbin:  atomic does not allow write access to the default directories the openstack agents need
16:32:55 <juggler> another q: do we have the developer resources or potential to increase developer resources to support non-Atomic Fedora?
16:33:07 <strigazi> randallburt can you elaborate
16:33:15 <hongbin> containerizing the heat agents doesn't solve the problem?
16:33:20 <randallburt> strigazi:  once adrian_otto is done.
16:33:27 <strigazi> randallburt ok
16:33:28 <Drago1> No, it doesn't, because the image takes too long to load
16:33:33 <adrian_otto> ok, so here is the question I have been building toward
16:33:44 <adrian_otto> do we care what OS our drivers use?
16:33:55 <adrian_otto> if so, do we have a strong preference for Fedora Atomic
16:33:59 <adrian_otto> if so, why?
16:34:08 <adrian_otto> the floor is yours
16:34:12 <strigazi> IMO
16:34:43 <randallburt> I'd like to add that while Atomic seems great as a base distro, they don't seem to be quick about getting a working k8s 1.4 out the door. IMO, Magnum *can't* release without 1.4 support in Ocata.
16:34:47 <strigazi> I don't have a preference for Atomic or any other OS, but the optimal solutions are containerized
16:35:04 <juggler> as a Fedora ambassador, perhaps I should recuse myself :)
16:35:06 <randallburt> strigazi:  why is containerized "optimal"?
16:35:10 <adrian_otto> strigazi: acknowledged. I think we all share that preference.
16:35:23 <adrian_otto> but could we treat that as directional?
16:35:27 <rochaporto> randallburt: if we move the kube pieces to containers, then it's not an issue
16:35:38 <adrian_otto> randallburt: exactly
16:35:45 <Drago1> strigazi: Do you mean containerized in order to achieve immutable infrastructure, or is there another reason?
16:35:46 <randallburt> rochaporto:  oh, I agree, I was speaking of the host OS, not magnum stuffs
16:36:14 <randallburt> no, they're talking about containerized k8s (self hosted) vs the host OS.
16:36:20 <randallburt> Drago1:  ^
16:36:47 <adrian_otto> containerizing the COE services makes them more upgradable, and furthermore upgradable within an immutable infrastructure discipline (sounds contrary to logic when I write it that way)
16:36:48 <jasond`> randallburt: strigazi: i was messing with hyperkube, seems promising
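[For context on hyperkube: one image bundles all the k8s binaries, so components like the kubelet can run as containers. A hedged, untested sketch for the 1.4 era; the tag and flags are illustrative:]

    # Run the kubelet from the hyperkube image (k8s 1.4-era flags).
    import subprocess

    subprocess.run([
        'docker', 'run', '-d', '--net=host', '--privileged',
        '-v', '/var/run/docker.sock:/var/run/docker.sock',
        'gcr.io/google_containers/hyperkube-amd64:v1.4.0',
        '/hyperkube', 'kubelet',
        '--api-servers=http://127.0.0.1:8080',  # placeholder apiserver
    ], check=True)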
16:37:14 <randallburt> right, we agree on containerized k8s. that has nothing to do with the host os
16:37:34 <randallburt> we can and have done containerized k8s 1.4 on Fedora
16:37:54 <rochaporto> so which parts of the os? docker?
16:38:20 <adrian_otto> randallburt: my guess is that because Fedora Atomic is designed to be used in an immutable infrastructure context, some stakeholders may have a preference for it. I want to verify whether that preference is in fact in play or not.
16:38:41 <randallburt> is it fair to say that as a community we don't care about the host OS but do care that k8s is containerized in the ocata release of Magnum?
16:39:05 <adrian_otto> I agree with that at least
16:39:09 <Drago1> Switching to Fedora non-atomic doesn't preclude us from still treating the infrastructure as immutable, it just makes it less painful to work with in some cases
16:39:10 <adrian_otto> ^^ randallburt
16:39:11 <strigazi> we do want k8s containerized in ocata
16:39:20 <randallburt> so Atomic isn't a requirement moving forward?
16:39:46 <adrian_otto> so this is something that I plan to float on the ML as well
16:39:48 <tonanhngo> We started with Atomic, but I have not heard a real requirement for it
16:39:51 <randallburt> strigazi: agreed. Atomic isn't required for that, though, right?
16:39:53 <adrian_otto> but I wanted to start that conversation here
16:40:00 <rochaporto> randallburt: yes, if all is containerized then it shouldn't matter? but if agents are installed from packages then it does?
16:40:26 <strigazi> randallburt, it's not, but fedora opens the door to put dnf install -y in heat templates
16:40:31 <randallburt> rochaporto:  imo it doesn't matter how the agents are installed other than footprint and boot time
16:40:52 <randallburt> according to Drago1 containerizing the agents has a very negative impact on cluster spin-up times
16:41:01 <rochaporto> if you do dnf install magnum-agent then it won't work in atomic
16:41:03 <randallburt> so IMO, its better to have the agents native on the host
16:41:07 <hongbin> we could do this: check if the package is installed, if not, pull down the containerized component
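[hongbin's fallback idea, sketched; the binary and image names below are placeholders, not real Magnum artifacts:]

    # Prefer the packaged binary if the image already ships it (fast
    # path, e.g. on CI); otherwise pull the containerized component.
    import shutil
    import subprocess

    def ensure_component(binary='kube-apiserver',
                         image='example.org/kube-apiserver:v1.4.0'):
        if shutil.which(binary):
            return 'package'
        subprocess.run(['docker', 'pull', image], check=True)
        return 'container'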
16:41:21 <Drago1> Is it problematic that the heat agents would not be containerized? If they are containerized, the image ends up being 300+ MB and on CI it takes ~4 minutes to `docker load/pull`, which is too long
16:41:25 <Drago1> Because of python
16:41:37 <adrian_otto> so let's pose a hypothetical question: If there were a compelling reason to use an alternate OS, and there were engineers committed to maintaining such an alternate driver, would we as a dev team have an objection to bringing in an additional driver (first as contrib, and later in tree if it sees production use)?
16:42:24 <jvgrant> adrian_otto: i don't see a problem with that
16:42:24 <hongbin> it needs to be looked at case by case IMO
16:42:41 <randallburt> adrian_otto: fine, but IMO this is duplicating work, since we also have to solve the containerized k8s 1.4 issue and it's not productive to rebase/refactor these drivers a dozen times between now and ocata
16:43:02 <randallburt> we have to solve all those things in the same driver
16:43:05 <adrian_otto> hongbin: agreed, but what might be some reasons we would be reluctant to welcome a new driver?
16:43:14 <tonanhngo> It should be safe to start in contrib so it can be evaluated
16:43:19 <adrian_otto> randallburt: that's a good reason.
16:43:20 <randallburt> and cert revocation requiring heat agents is true of *all* drivers
16:43:22 <hongbin> adrian_otto: as others said, duplicated code
16:43:35 <adrian_otto> hongbin: acknowledged.
16:43:52 <rochaporto> but you can still run the agent in a container (we do it for our local certs with a bind mount)
16:44:10 <randallburt> rochaporto:  using Atomic?
16:44:13 <strigazi> yes
16:44:13 <rochaporto> yes
16:44:17 <adrian_otto> rochaporto: see Drago1 remarks above about load time for the heat agent once containerized.
16:44:33 <strigazi> this is a problem only on openstack infra
16:44:38 <randallburt> wait, so this discussion is moot if we have a solution that's simply not committed in the Magnum repo?
16:44:46 <Drago1> strigazi: Which becomes our problem
16:45:09 <randallburt> strigazi:  this is a problem in real world usage of Magnum though.
16:45:12 <rochaporto> randallburt: we run multiple additional containers for local systems (storage, krb5, etc)
16:45:19 <rochaporto> it's not the heat-agent
16:45:24 <randallburt> oh
16:45:39 <rochaporto> they are in go, and pulled from a local docker registry so it's fast
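[The pattern rochaporto describes, roughly: a small Go-based sidecar container, bind-mounting host credentials, pulled from a nearby registry so startup stays fast. Registry, image, and paths below are placeholders:]

    # Run a site-local sidecar with a read-only bind mount for certs,
    # pulled from a local registry for fast startup.
    import subprocess

    subprocess.run([
        'docker', 'run', '-d',
        '-v', '/etc/pki/cluster:/certs:ro',  # bind-mount host certs
        'registry.example.org/site/krb5-sidecar:latest',
    ], check=True)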
16:45:49 <hongbin> ok, how about this. make magnum support both containerized k8s and packaged k8s
16:45:54 <randallburt> rochaporto:  so you *don't* use the heat agents
16:46:01 <hongbin> on the ci, build everything into the image
16:46:06 <randallburt> hongbin:  the issue is not about containerized k8s.
16:46:23 <hongbin> then, the problem is the ci?
16:46:24 <rochaporto> randallburt: my point was it's possible to containerize it (although i understand the ci issues today)
16:46:26 <randallburt> hongbin:  the issue is containerized heat agents and them not working in Fedora Atomic
16:46:43 <randallburt> possible doesn't mean acceptable, though
16:46:52 <Drago1> rochaporto: It HAS been containerized, and I abandoned that idea because of the CI issues
16:47:06 <randallburt> Drago1:  you mean the heat agents?
16:47:09 <Drago1> Yes
16:47:13 <randallburt> k
16:47:36 <hongbin> well, what i suggested is: don't containerize the agents in ci, but do containerize them in other cases
16:47:45 <randallburt> again, I think we need to get back to the core question. Do we require Atomic for the Fedora hosted drivers?
16:48:01 <randallburt> hongbin: if it's too slow for CI, it's too slow to foist on a user IMO
16:48:03 <Drago1> hongbin: That means we still need non-containerized heat agents in CI, which is what we're talking about
16:48:15 <adrian_otto> hongbin suggested having two variants... one that we optimize for ideal CI performance, and another focused on pursuit of pure immutable infrastructure.
16:48:26 <adrian_otto> what do others think of that idea?
16:48:40 <randallburt> adrian_otto:  very boo
16:48:49 <juggler> boo?
16:48:58 <randallburt> as in "do not like"
16:49:02 <juggler> ah
16:49:05 <adrian_otto> he prefers to have one
16:49:14 <strigazi> the atomic image with the heat agent inside didn't work?
16:49:40 <juggler> adrian_otto ah
16:49:43 <jasond`> strigazi: os-apply-config wasn't writing os-collect-config.conf for some reason
16:49:43 <jvgrant> i think it is a decent workaround, but not testing one variant could be problematic
16:49:47 <adrian_otto> strigazi: not *yet* but we think it *should*. We felt the Fedora approach would get us cert revocation sooner.
16:50:07 <randallburt> as well as speed up support for 1.4
16:50:11 <randallburt> so bonus
16:50:13 <strigazi> adrian_otto ok
16:50:18 <hongbin> jvgrant: test it in third-party CI :)
16:50:46 <hongbin> 2 cents
16:51:05 <adrian_otto> we'll plan to wrap this discussion soon so we will have time for open discussion too.
16:51:19 <jvgrant> hongbin: agreed.
16:51:22 <randallburt> ok, to clarify again. would anyone -2 basing the fedora k8s drivers on non-atomic images?
16:51:36 <Drago1> Using non-atomic opens up the possibility that the drivers could treat the infrastructure as non-immutable. But, we could just… not merge in changes that would do that.
16:51:48 <jvgrant> i still haven't heard a really good reason why we need atomic
16:51:54 <randallburt> same
16:51:58 <strigazi> I don't have a problem moving from atomic, but the new driver with new packages is like redoing everything
16:52:07 <jvgrant> doesn't the ironic driver use non-atomic already
16:52:13 <strigazi> yes
16:52:17 <rochaporto> so does mesos
16:52:18 <randallburt> strigazi: we can remove that part in favor of hyperkube
16:52:19 <Drago1> strigazi: Luckily it's much easier to work with non-atomic because you get DIB!
16:52:34 <jvgrant> Drago1: +1
16:53:05 <hongbin> i am ok to switch to non-atomic for now, but i think magnum should keep the micro-os in mind in the long term
16:53:06 <jvgrant> it would make automating building images easier which is something that we wanted at one point as well
16:53:37 <adrian_otto> hongbin: I agree. I think having a mOS option as the primary supported one is ideal long term.
16:53:49 <rochaporto> same here
16:53:49 <tonanhngo> you can also install additional packages as needed for development and troubleshooting
16:53:55 <strigazi> it makes it easy to add many extra packages inside
16:54:03 <jvgrant> much easier
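[The DIB upside in one command: with plain Fedora, images (plus any extra packages) can be built and rebuilt with diskimage-builder. A hedged sketch; the custom element name is invented:]

    # Automate image builds with diskimage-builder; 'fedora' and 'vm'
    # are stock DIB elements, 'magnum-k8s' is a hypothetical one that
    # would bake in whatever extra packages the driver needs.
    import subprocess

    subprocess.run([
        'disk-image-create', '-o', 'fedora-k8s',
        'fedora', 'vm', 'magnum-k8s',
    ], check=True)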
16:54:10 <Drago1> adrian_otto, hongbin: micro-os meaning a philosophy or something concrete?
16:54:58 <hongbin> Drago1: meaning the approach used by CoreOS, Atomic
16:55:04 <hongbin> Drago1: i.e. immutable infra
16:55:12 <Drago1> And if the heat agent eventually gets written in go, I'm all for containerizing it
16:55:17 <Drago1> hongbin: Gotcha
16:55:29 <adrian_otto> Drago1: I'm using the term mOS as any distro designed for immutable infra usage. (CoreOS, Atomic, Snappy, etc)
16:55:32 <randallburt> not micro as in small
16:55:38 <adrian_otto> I'm not religious about which we use
16:55:52 <adrian_otto> so if there is a proposal that departs from that long-term vision in the meantime, we can state the visionary intent in the commit message of the review to make it clear we are not advocating a change in long-term strategy.
16:56:30 <adrian_otto> although such a departure would admittedly duplicate some work, we'd want to know what the reasoning is before investing too much effort there
16:56:47 <randallburt> right. so, moving on?
16:56:51 <adrian_otto> I'm moving to open discussion now. We can still continue this topic as well
16:57:05 <adrian_otto> #topic Open Discussion
16:57:13 <randallburt> Still waiting on feedback for the last of this patch chain for driver encapsulation:
16:57:14 <randallburt> #link https://review.openstack.org/#/c/405709
16:57:35 <strigazi> randallburt it's coming
16:57:44 <randallburt> cool, thanks strigazi
16:57:46 <adrian_otto> jvgrant: is there a reason you did not workflow+1?
16:57:55 <adrian_otto> oh, wait
16:57:59 <hieulq_> Hi, we want to integrate OSProfiler in Magnum; please review and approve if there are no problems: https://blueprints.launchpad.net/magnum/+spec/osprofiler-support-in-magnum
16:58:01 <jasond`> strigazi: will you remind mvelton to send me that systemd unit file for hyperkube?
16:58:03 <adrian_otto> tonanhngo: you had a -1 vote there
16:58:07 <randallburt> someone else reported a possible issue but I could not reproduce nor get details on how to
16:58:14 <strigazi> jasond yes
16:58:15 <jvgrant> adrian_otto: lack of more reviewers. wanted at least another +1
16:58:19 <jasond`> strigazi: thanks
16:58:25 <tonanhngo> It broke for me, but I can revisit
16:58:25 <jvgrant> and then we got a -1
16:58:39 <adrian_otto> jvgrant: thanks
16:58:39 <randallburt> adrian_otto: jvgrant: I am ok with, and encourage, more reviewers before +A
16:58:58 <randallburt> jvgrant:  haven't gotten any info on reproducing that nor do the gates back it up
16:59:19 <Drago1> tonanhngo: How many times did you test it? I see the "cluster not found" error in master sometimes, but not often.
16:59:46 <Drago1> As in, in the hundreds of clusters I've created, I've seen it 2 or 3 times
16:59:56 <adrian_otto> 1 minute remaining. About to close the meeting.
16:59:59 <randallburt> got to run to another meeting. laters!
16:59:59 <tonanhngo> I tried it twice
17:00:02 <jvgrant> randallburt: ok, there has been time now for reviews, and as long as we don't have additional feedback i think we can push it
17:00:03 <hongbin> hieulq_: the bp looks ok to me
17:00:10 <randallburt> k
17:00:14 <hieulq_> hongbin: thanks :-)
17:00:15 <randallburt> gotta run
17:00:16 <adrian_otto> Thanks for attending our team meeting today. Our next meeting will be on 2016-12-20 at 1600 UTC. I will not be working that day, so I may coordinate an alternate chair.
17:00:23 <adrian_otto> #endmeeting