15:17:02 <j^2> #startmeeting openstack-chef
15:17:03 <openstack> Meeting started Thu Apr  9 15:17:02 2015 UTC and is due to finish in 60 minutes.  The chair is j^2. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:17:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:17:06 <openstack> The meeting name has been set to 'openstack_chef'
15:17:12 <j^2> There we go. Hi all!
15:17:38 <sc`> o/
15:17:45 <wenchma> hello
15:17:54 <j^2> sc`: take it away
15:18:12 <sc`> #topic rdo kilo packages
15:19:00 <sc`> started following #rdo's progress on kilo. they have what looks like mostly working packages in their delorean repo
15:19:30 <sc`> http://trunk.rdoproject.org/centos70/report.html is the link to their CI builds for anyone that wants to follow along
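For anyone who wants to converge against those builds, a rough sketch of wiring the delorean repo onto a CentOS 7 node with the yum cookbook's yum_repository resource; the repo id and the /current/ baseurl path are assumptions, so check the report page above for the exact build to use.

```ruby
# Rough sketch: add the RDO trunk (delorean) repo to a CentOS 7 node.
# The '/current/' path is assumed; pin a specific build from the report
# page above for anything reproducible.
yum_repository 'rdo-trunk-kilo' do
  description 'RDO trunk (delorean) Kilo packages'
  baseurl 'http://trunk.rdoproject.org/centos70/current/'
  gpgcheck false
  action :create
end
```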
15:20:31 <sc`> #topic nova-api-metadata fix
15:21:16 <sc`> still wip on getting openstack-compute to toggle the attribute. been backlogged quite a bit but am catching up with queued patches
15:21:39 <sc`> that's about all i have
15:24:18 <wenchma> seems there is a patch to fix nova-api-metadata
15:24:28 <wenchma> https://review.openstack.org/#/c/142249/
15:24:54 <wenchma> sc`: right ?
15:25:00 <wenchma> or partial
15:25:38 <sc`> yeah... i tried that approach myself a while back
15:26:28 <sc`> right now we're overriding enabled_apis in environments, which is ok as a stopgap
15:27:05 <sc`> but the cookbook should know whether or not to lay down the proper config var depending on whether the metadata recipe is in the run_list
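A minimal sketch of the environment override stopgap sc` describes; the attribute path is an assumption and may not match what openstack-compute actually exposes.

```ruby
# Hypothetical Chef environment override: strip 'metadata' from nova's
# enabled_apis so only the dedicated metadata recipe manages that service.
# The attribute name is assumed; check the cookbook's attributes file.
override_attributes(
  'openstack' => {
    'compute' => {
      'enabled_apis' => 'ec2,osapi_compute'
    }
  }
)
```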
15:30:39 <sc`> the current state in openstack-chef-repo is to let the metadata recipe handle things itself, but what if someone doesn't want to use the recipe and wants nova-api to manage the metadata service itself?
15:31:51 <markvan> sc`: I think we need to make that decision.  I vote for using the recipe and NOT allowing nova-api to mess with it.  Cleaner for our env's that way.
15:32:46 <sc`> it does make sense that way
15:33:12 <markvan> I don't see a need to clutter our support to allow both ways.  recipe works fine and does the job correctly, so use it.  I think we need a patch to put out a warning if enabled_apis contains metadata.
15:33:33 <markvan> and then remove it from the list to avoid the overlap issue.
15:35:56 <sc`> yep. makes sense
15:38:29 <markvan> k, I can followup with a patch for that, and see what reviewers think of that approach.
15:38:38 <sc`> there's no real valid reason to keep both around, other than to keep what's out of the box intact
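A sketch of the warning-and-remove behaviour markvan proposes above, using the same assumed enabled_apis attribute; the real patch would live in the nova api recipe and could look quite different.

```ruby
# Hypothetical guard for the nova api recipe: warn when the deployer left
# 'metadata' in enabled_apis, then drop it so the metadata recipe alone
# manages nova-api-metadata.
apis = node['openstack']['compute']['enabled_apis'].split(',')

if apis.include?('metadata')
  Chef::Log.warn('metadata found in enabled_apis; removing it - ' \
                 'use the dedicated metadata recipe instead')
  node.override['openstack']['compute']['enabled_apis'] =
    (apis - ['metadata']).join(',')
end
```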
15:39:33 <markvan> #topic  ubuntu kilo packages
15:39:51 <markvan> I posted what I think is the latest workaround here: http://paste.openstack.org/show/200736/
15:40:20 <markvan> The one patch is needed as a result of the service role work: https://review.openstack.org/#/c/171330/
15:40:50 <markvan> I have not seen movement on the nova libvirt bug yet, but I have asked around a bit
15:41:44 <wenchma> for 171330, I gave some comments
15:43:21 <wenchma> if we make admin create public glance images, we also need to make sure publicize_image is admin in the glance policy file
15:43:24 <markvan> wenchma: yup, need to update policy to allow for non admin public images now.  It was a security measure
15:43:57 <markvan> and for admin, that is the default in the glance_policy file
15:44:26 <markvan> see https://github.com/openstack/glance/blob/master/etc/policy.json#L10
15:44:37 <wenchma> yes, The ability to upload a public image is now admin-only by default
15:46:07 <wenchma> hmmm....
15:46:20 <wenchma> should be this, right?
15:46:24 <markvan> and from history, the cookbooks have to date avoided messing with the policy files, so until someone asks, I don't see a need for supporting non-admin public with the lwrp
15:47:33 <wenchma> maybe I deployed the node without using the latest pkgs
15:47:43 <markvan> The other way to enable that would be to create another user within the admin role, then they could also create public images.  This is doable today with the identity lwrp.
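A rough sketch of that approach with the identity register LWRP: create an extra user and grant it the admin role, so it can publish public images under the default publicize_image policy linked above. The resource and parameter names are from memory of the openstack-identity cookbook and the endpoint/credentials are placeholders, so treat all of it as an assumption to verify against the cookbook.

```ruby
# Hypothetical: register a dedicated image-publisher user with the admin
# role so it can create public glance images under the default policy.
openstack_identity_register 'Register image-publisher user' do
  auth_uri 'http://controller:35357/v2.0'   # placeholder admin endpoint
  bootstrap_token 'bootstrap-token'         # placeholder; use a databag in practice
  tenant_name 'admin'
  user_name 'image-publisher'
  user_pass 'secret'                        # placeholder password
  action :create_user
end

openstack_identity_register 'Grant admin role to image-publisher' do
  auth_uri 'http://controller:35357/v2.0'
  bootstrap_token 'bootstrap-token'
  tenant_name 'admin'
  user_name 'image-publisher'
  role_name 'admin'
  action :grant_role
end
```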
15:49:14 <wenchma> looks good
15:49:23 <markvan> And in general I think this was a good move by base openstack: separating what a "service" and an "admin" can do makes security controllable
15:51:21 <markvan> So I guess that leads to my main topic.... how to get patches flowing again, and how to de-couple a bit from being stuck in the future?
15:51:29 <markvan> #topic repo dependencies
15:52:32 <markvan> If you look at our patch list, there are a bunch of low hanging fruit ones that are straightforward and imo ready to go.  So, what is the best way to move forward?
15:52:59 <markvan> I have tried the ubuntu/rdo trunks a bit, but they're usually very messy.
15:54:27 <markvan> Then I tried using our testing repo with our internal repo (aio_neutron, we don't support nova network), and it works.  But since it's an internal repo, is that enough to justify merging a patch?
15:54:32 <markvan> thoughts?
15:55:16 <sc`> i think as long as the code is from the same line, it should be about the same
15:55:54 <sc`> but as far as merging, i'd personally be comfortable with being able to pull down internet artifacts and being able to prove that the proposed patch works
15:55:57 <markvan> yup, our internal repo is usually almost bleeding edge, rebased daily, and is fairly clean of forks.
15:56:33 <markvan> yup, I can't disagree with having public artifacts to verify
15:56:58 <sc`> if it works against ubuntu/rdo trunk, that's one thing
15:57:07 <j^2> hey all, i’m actually at my laptop, i can actually type
15:57:37 <markvan> So, does that mean we could create our own snapshot somehow, and not live so close to the edge?
15:57:45 <j^2> i need to catch up :)
15:58:09 <sc`> perhaps snapshotting ubuntu/rdo trunk may suffice
15:59:10 <markvan> if we find a cut that works for our basic verify needs, would be nice to hang on to that for a short period to allow basic patches to continue to flow.
15:59:44 <markvan> There will be patches that require a later level of base openstack, but I think those are fewer in number
16:01:19 <markvan> Is there a way we could simply capture/clone a repo into a zip and push that up to a github project, maybe next to the testing suite, and then tie those together with docs for testing with it?
16:02:07 <markvan> I'm not an expert on repos, but it seems like it can be done locally: just download the zip and change the url to point to it?
16:03:02 <sc`> longer-term, i'd like to get rdo, ubuntu, etc. on an openstack site, but that's a bigger yak to shave
16:03:23 <sc`> rdo would be amenable to mirroring packages that way
16:03:52 <markvan> sc`: that's encouraging to hear, and a cleaner approach for all
16:07:05 <cmluciano> catching up
16:07:24 <cmluciano> we’re talking about how to get up-to-date packages it seems
16:07:30 <markvan> Looking at yum, seems like we could just have a zip of the repo, and then inject a recipe to push that to the node, unzip and update the attr url to point to file:///...
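A rough sketch of what that recipe could look like; the zip URL, paths, and repo id are placeholders, not real artifacts.

```ruby
# Hypothetical testing-repo recipe: fetch a zipped cut of working packages,
# unpack it on the node, and point yum at it via a file:// baseurl.
zip_url  = 'https://github.com/example/openstack-repo-snapshots/raw/master/kilo-cut.zip'
repo_dir = '/opt/openstack-repo-snapshot'

package 'unzip'

remote_file '/tmp/kilo-cut.zip' do
  source zip_url
end

execute 'unpack repo snapshot' do
  command "unzip -o /tmp/kilo-cut.zip -d #{repo_dir}"
  creates "#{repo_dir}/repodata/repomd.xml"
end

yum_repository 'openstack-snapshot' do
  description 'Frozen cut of working OpenStack packages'
  baseurl "file://#{repo_dir}"
  gpgcheck false
end
```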
16:07:48 <cmluciano> I suppose that’s the major downfall of having to rely on rdo/ubuntu
16:07:51 <markvan> cmluciano: more than up to date, a cut of "working" packages
16:08:27 <cmluciano> Ah
16:12:06 <sc`> i was looking at the backlog and noticed several rhel 7.1 patches outstanding. is this because of fauxhai?
16:14:07 <cmluciano> I haven’t been CRing things since I can’t get master to work
16:14:34 <cmluciano> I know we have a couple of workarounds but I haven’t been able to get any to work without running into other issues
16:14:47 <markvan> sc`: yup, need an update to that fauxhai gem to pull in the 7.1 support,  then a patch to the gem files to pull that in
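A sketch of what the fauxhai/ChefSpec side of those patches would involve; the fauxhai version that first ships 7.1 data, the constant name, and the recipe in the usage comment are all assumptions.

```ruby
# Gemfile bump (assumed version): pin whichever fauxhai release first
# includes redhat 7.1 platform data.
#   gem 'fauxhai', '>= 2.3'

require 'chefspec'

# Hypothetical spec helper constant, mirroring the existing *_OPTS pattern
REDHAT_OPTS = { platform: 'redhat', version: '7.1' }.freeze

# Example usage in a cookbook spec:
#   chef_run = ChefSpec::SoloRunner.new(REDHAT_OPTS)
#     .converge('openstack-compute::api-os-compute')
```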
16:15:26 <markvan> cmluciano: yup, see http://paste.openstack.org/show/200736/ for where I think we stand with ubuntu + master + aio_nova
16:16:11 <sc`> with markvan's notes, i have a converged node but i hit the nova bug indicated in those notes when booting an image
16:17:43 <sc`> looks like there's some recent activity on the bug
16:18:15 <markvan> so, with that in mind, I think the https://review.openstack.org/#/c/171330/1 image patch should go in, as the images are being created properly.  And then we eliminate one patch from the workaround list.
16:18:59 <sc`> i concur
16:19:35 <cmluciano> workflowed that patch
16:20:55 <cmluciano> from that paste it appears that the other issues involve updated packages and an open bug for nova cpu affinity
16:20:58 <markvan> bummer, that nova libvirt cpu pinning bug looks like a ubuntu packaging issue, probably not up to date
16:21:50 <cmluciano> sad_panda
16:21:51 <markvan> cmluciano: yup, the pysaml2 package fix has gone into the ubuntu trunk.
16:22:28 <markvan> so, it's close, but I don't see a workaround for the nova bug.
16:24:04 <markvan> And that's why I'm suggesting we think about cutting a copy of a working ubuntu/rdo repo to help make progress with some patches
16:24:42 <cmluciano> where would these be hosted?
16:26:13 <markvan> I was thinking just locally for now, so take a cut, create a zip, push that to a github project.  Then when needed, use a new recipe in the testing repo to pull that down, unzip it on the node, and point to it with file://
16:27:23 <markvan> should be able to grab the zip directly from github via a new testing suite recipe step
16:29:05 <markvan> not sure if I'm making sense, maybe I'll have to fully prototype it. seems a fairly easy way to do it, and it allows anyone to upload new zips to github
16:29:16 <sc`> i see where you're going with that
16:29:40 <markvan> not sure if there's an easier way/place to host a working cut
16:29:50 <sc`> ya. it has to go somewhere
16:29:59 <sc`> it's just a matter of *where*
16:31:00 <markvan> I figure either way, you have to download the packages, so having them right on the node is an easy place to start.  and yeah, for multi-node we would need to push to each node.
16:33:09 <markvan> or maybe we can find some public cloud drive space out there to use for holding at least 1 or 2 cuts; not sure how big, but I don't think it's multiple gigs here.
16:33:34 <sc`> packagecloud maybe?
16:33:36 <markvan> that would be easier, then it's just an attr override to use it
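If hosted space like that turns up, the attr override markvan mentions might look roughly like this; the attribute path and URL are assumptions to check against openstack-common.

```ruby
# Hypothetical environment override pointing the cookbooks at a hosted
# snapshot instead of the distro trunk repos.
override_attributes(
  'openstack' => {
    'yum' => {
      'uri' => 'https://example-hosting.example.org/openstack/kilo-cut'
    }
  }
)
```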
16:35:10 <sc`> or some to-be-created openstack infra?
16:35:48 <sc`> that's all longer-term stuff, most likely
16:36:43 <markvan> bummer, looks like packagecloud is very nice, but free acct only allows 25 packages, I think openstack repos are just a bit bigger
16:36:52 <sc`> for right now, just snapshotting into a tar could work. it's kinda ugly imho
16:37:06 <sc`> but it'd work
16:37:43 <markvan> yeah, I think it would allow some progress for us
16:40:48 <markvan> j^2: is there any place within Chef to drop a repo cut? so we could get public http access when using testing suite?
16:46:48 <openstackgerrit> Merged stackforge/cookbook-openstack-image: Only admin can create public glance images  https://review.openstack.org/171330
17:05:07 <j^2> markvan: sorry i’m missing your question? you mean chef-dk?
17:05:10 <j^2> #endmeeting