15:17:02 #startmeeting openstack-chef
15:17:03 Meeting started Thu Apr 9 15:17:02 2015 UTC and is due to finish in 60 minutes. The chair is j^2. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:17:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:17:06 The meeting name has been set to 'openstack_chef'
15:17:12 There we go. Hi all!
15:17:38 o/
15:17:45 hello
15:17:54 sc`: take it away
15:18:12 #topic rdo kilo packages
15:19:00 started following #rdo's progress on kilo. they have what look like mostly working packages in their delorean repo
15:19:30 http://trunk.rdoproject.org/centos70/report.html is the link to their CI builds for anyone who wants to follow along
15:20:31 #topic nova-api-metadata fix
15:21:16 getting openstack-compute to toggle the attribute is still a WIP. i've been backlogged quite a bit but am catching up with queued patches
15:21:39 that's about all i have
15:24:18 seems there is a patch to fix nova-api-metadata
15:24:28 https://review.openstack.org/#/c/142249/
15:24:54 sc`: right?
15:25:00 or partial
15:25:38 yeah... i tried that approach myself a while back
15:26:28 right now we're overriding enabled_apis in environments, which is ok as a stopgap
15:27:05 but the cookbook should know whether or not to lay down the proper config var depending on whether you have the metadata recipe in the run_list
15:30:39 the current state in openstack-chef-repo is to let the metadata recipe handle things itself, but what if someone doesn't want to use the recipe and wants nova-api to manage the metadata service itself?
15:31:51 sc`: I think we need to make that decision. I vote for using the recipe and NOT allowing nova-api to mess with it. Cleaner for our envs that way.
15:32:46 it does make sense that way
15:33:12 I don't see a need to clutter our support to allow both ways. the recipe works fine and does the job correctly, so use it. I think we need a patch to put out a warning if enabled_apis contains metadata
15:33:33 and then remove it from the list to avoid the overlap issue.
15:35:56 yep. makes sense
15:38:29 k, I can follow up with a patch for that and see what reviewers think of that approach.
15:38:38 there's no real valid reason to keep both around, other than to keep what's out of the box intact
15:39:33 #topic ubuntu kilo packages
15:39:51 I posted what I think is the latest workaround here: http://paste.openstack.org/show/200736/
15:40:20 The one patch is needed as a result of the service role work: https://review.openstack.org/#/c/171330/
15:40:50 I have not seen movement on the nova libvirt bug yet, but I have asked around a bit
15:41:44 for 171330, gave some comments
15:43:21 if we make admin create public glance images, we also need to make sure publicize_image is admin in the glance policy file
15:43:24 wenchma: yup, need to update policy to allow for non-admin public images now. It was a security measure
15:43:57 and for admin, that is the default in the glance policy file
15:44:26 see https://github.com/openstack/glance/blob/master/etc/policy.json#L10
15:44:37 yes, the ability to upload a public image is now admin-only by default
15:46:07 hmmm....
15:46:20 should be this, right
15:46:24 and from history, the cookbooks have to date avoided messing with the policy files, so until someone asks, I don't see a need for supporting non-admin public with the lwrp
15:47:33 maybe I deployed the node without using the latest pkgs
15:47:43 The other way to enable that would be to create another user within the admin role; then they could also create public images. This is doable today with the identity lwrp.
15:49:14 looks good
15:49:23 And in general I think this was a good move by the base: separate what a "service" and "admin" can do, to make security controllable
15:51:21 So I guess that leads to my main topic.... how to get patches flowing again, and how to de-couple a bit from being stuck in the future?
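[editor's note] The warn-and-remove behavior proposed under the nova-api-metadata topic could be sketched roughly like this (plain Ruby, not actual cookbook code; the method name and arguments are hypothetical):

```ruby
# Sketch of the proposed guard: when the nova-api-metadata recipe is
# managing the metadata service, 'metadata' should not also appear in
# nova-api's enabled_apis, or the service would be started twice.
def sanitized_enabled_apis(enabled_apis, metadata_recipe_in_run_list)
  return enabled_apis unless metadata_recipe_in_run_list
  if enabled_apis.include?('metadata')
    # Warn, then strip the entry, as discussed in the meeting.
    warn "'metadata' found in enabled_apis but the metadata recipe " \
         'is in the run_list; removing it to avoid the overlap issue'
    enabled_apis - ['metadata']
  else
    enabled_apis
  end
end
```

In an actual patch this logic would live in the openstack-compute cookbook and key off the node's run_list rather than a boolean argument.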
15:51:29 #topic repo dependencies
15:52:32 If you look at our patch list, there are a bunch of low hanging fruit ones that are straightforward and imo ready to go. So, what is the best way to move forward?
15:52:59 I have tried the ubuntu/rdo trunks a bit, but they're usually very messy.
15:54:27 Then I tried using our testing repo with our internal repo (aio_neutron, we don't support nova network), and it works. But since it's an internal repo, is that enough to justify merging a patch?
15:54:32 thoughts?
15:55:16 i think as long as the code is from the same line, it should be about the same
15:55:54 but as far as merging, i'd personally be comfortable with being able to pull down internet artifacts and being able to prove that the proposed patch works
15:55:57 yup, our internal repo is usually almost bleeding edge, rebased daily, and is fairly clean of forks.
15:56:33 yup, I can't disagree with having public artifacts to verify
15:56:58 if it works against ubuntu/rdo trunk, that's one thing
15:57:07 hey all, i'm actually at my laptop, i can actually type
15:57:37 So, does that mean we could create our own snapshot somehow, and not live so close to the edge?
15:57:45 i need to catch up with :)
15:58:09 perhaps snapshotting ubuntu/rdo trunk may be enough to suffice
15:59:10 if we find a cut that works for our basic verify needs, it would be nice to hang on to that for a short period to allow basic patches to continue to flow.
15:59:44 There will be patches that require a later level of base openstack, but I think those are fewer in number
16:01:19 Is there a way we could simply capture/clone a repo into a zip and push that up to a github project, maybe next to the testing suite, and then tie those together with doc for testing with it?
16:02:07 I'm not an expert on repos, but it seems like it can be done locally: just download the zip and change the url to it?
16:03:02 longer-term, i'd like to get rdo, ubuntu, etc. on an openstack site, but that's a bigger yak to shave
16:03:23 rdo would be amenable to mirroring packages that way
16:03:52 sc`: that's encouraging to hear, and a cleaner approach for all
16:07:05 catching up
16:07:24 we're talking about how to get up-to-date packages it seems
16:07:30 Looking at yum, seems like we could just have a zip of the repo, and then inject a recipe to push that to the node, unzip, and update the attr url to point to file:///...
16:07:48 I suppose that's the major downfall of having to rely on rdo/ubuntu
16:07:51 cmluciano: more than up to date, a cut of "working" packages
16:08:27 Ah
16:12:06 i was looking at the backlog and noticed several rhel 7.1 patches outstanding. is this because of fauxhai?
16:14:07 I haven't been CRing things since I can't get master to work
16:14:34 I know we have a couple of workarounds but I haven't been able to get any to work without running into other issues
16:14:47 sc`: yup, need an update to the fauxhai gem to pull in the 7.1 support, then a patch to the gem files to pull that in
16:15:26 cmluciano: yup, see http://paste.openstack.org/show/200736/ for where I think we stand with ubuntu + master + aio_nova
16:16:11 with markvan's notes, i have a converged node, but i hit the nova bug indicated in those notes when booting an image
16:17:43 looks like there's some recent activity on the bug
16:18:15 so, with that in mind, I think the https://review.openstack.org/#/c/171330/1 image patch should go in, as the images are being created properly. And then we eliminate one patch from the workaround list.
16:18:59 i concur
16:19:35 workflowed that patch
16:20:55 from that paste it appears that the other issues involve updated packages and an open bug for nova cpu affinity
16:20:58 bummer, that nova libvirt cpu pinning bug looks like a ubuntu packaging issue, probably not up to date
16:21:50 sad_panda
16:21:51 cmluciano: yup, the pysaml2 package fix has gone into the ubuntu trunk.
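[editor's note] The fauxhai update mentioned for the RHEL 7.1 patches would presumably be a Gemfile-level change along these lines (sketch only; the exact fauxhai release that first ships 7.1 platform data is not confirmed here):

```ruby
# Gemfile sketch: pull in the fauxhai release that adds RHEL 7.1
# platform data so ChefSpec runs can fake a 7.1 node.
gem 'fauxhai'  # add a '>= x.y' constraint once that release is known
```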
16:22:28 so, it's close, but I don't see a workaround for the nova bug.
16:24:04 And that's why I'm suggesting we think about cutting a copy of a working ubuntu/rdo repo to help make progress with some patches
16:24:42 where would these be hosted?
16:26:13 I was thinking just locally for now: take a cut, create a zip, push that to a github project. Then when needed, use a new recipe in the testing repo to pull that down, unzip on the node, and point to it with file://
16:27:23 should be able to grab the zip directly from github via a new testing suite recipe step
16:29:05 not sure if I'm making sense, maybe I'll have to fully prototype it. seems a fairly easy way to do it, and allows anyone to upload new zips to github
16:29:16 i see where you're going with that
16:29:40 not sure if there's an easier way/place to host a working cut
16:29:50 ya. it has to go somewhere
16:29:59 it's just a matter of *where*
16:31:00 I figure either way you have to download the packages, so having them right on the node is an easy place to start. and yeah, for multi-node, we would need to push to each node.
16:33:09 or maybe we can find some public cloud drive space out there to use for holding at least 1 or 2 cuts. not sure how big, but I don't think it's multiple gigs here.
16:33:34 packagecloud maybe?
16:33:36 that would be easier, then just an attr override to use it
16:35:10 or some to-be-created openstack infra?
16:35:48 that's all longer-term stuff, most likely
16:36:43 bummer, looks like packagecloud is very nice, but the free acct only allows 25 packages; I think openstack repos are just a bit bigger
16:36:52 for right now, just snapshotting into a tar could work. it's kinda ugly imho
16:37:06 but it'd work
16:37:43 yeah, I think it would allow us some progress
16:40:48 j^2: is there any place within Chef to drop a repo cut? so we could get public http access when using the testing suite?
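[editor's note] The snapshot fallback floated above could be prototyped with something like this (plain Ruby sketch; names and URLs are hypothetical): if a known-good repo cut has been unzipped onto the node, point the package repo attribute at it with a file:// URL, otherwise keep using the live trunk repo.

```ruby
# Sketch of the attr-override idea: prefer a local unpacked snapshot
# of a "working" package cut over the live (and often broken) trunk.
def repo_baseurl(trunk_url, snapshot_dir)
  if snapshot_dir && File.directory?(snapshot_dir)
    "file://#{File.expand_path(snapshot_dir)}"
  else
    trunk_url
  end
end
```

In the testing repo this would sit behind a recipe step that fetches and unzips the snapshot from github before the repo attribute is resolved.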
16:46:48 Merged stackforge/cookbook-openstack-image: Only admin can create public glance images https://review.openstack.org/171330
17:05:07 markvan: sorry, i'm missing your question? you mean chef-dk?
17:05:10 #endmeeting