15:01:53 #startmeeting openstack-helm
15:01:54 Meeting started Tue May 23 15:01:53 2017 UTC and is due to finish in 60 minutes. The chair is v1k0d3n. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:56 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:00 The meeting name has been set to 'openstack_helm'
15:02:06 welcome team
15:02:12 https://etherpad.openstack.org/p/openstack-helm-meeting-2017-05-23
15:02:30 o/
15:02:45 for those of you who have items you wish to add to the agenda, please add them.
15:02:59 ಠ‿ಠ
15:03:03 0/
15:03:05 o/4
15:03:05 o/
15:03:08 i give up
15:03:27 hello
15:03:30 o/
15:03:31 o/
15:03:48 hey SamYaple good to have you
15:03:57 im my own person
15:04:01 \o/ STEVE HOLT.
15:07:48 ok, sorry...just getting started.
15:08:01 mostly some checks on where we're at with bugs and long-standing reviews.
15:08:11 1692834 - I've checked this on Helm 2.3, it's fine. But I guess it's good to have 2.4 issues listed, right?
15:08:12 unless people have other things that they want to add
15:09:01 dulek: https://bugs.launchpad.net/openstack-helm/+bug/1690863 ?
15:09:02 Launchpad bug 1690863 in openstack-helm "Helm 2.4.0 Issues with Openstack-Helm" [High,New]
15:09:21 sure, it's general. maybe we should get more specific in that launchpad bug?
15:09:24 v1k0d3n: Yeah, but this isn't listing the issues at all.
15:09:48 I'm fine with closing my bug and expanding the description of 1690863.
15:10:20 this was really a placeholder. i thought that some folks were working against this issue. i'm completely fine with more details, and however the team wants to address that is fine with me.
15:10:44 do we want to close 1690863 in favor of more detailed helm 2.4-related bug reports?
15:10:52 yes
15:10:54 (assuming that's a bit better).
15:11:31 cool. done. so srwilkers, are you aware of other 2.4 issues that we could open launchpad bugs against?
15:12:16 1690863 was really just a placeholder to let users know that 2.3 is still recommended.
15:12:23 not that im aware of, but if there are any, id rather see a verbose description of the bugs
15:12:29 the only one I'm aware of is the hard failure on gotpl rendering errors
15:12:34 launchpad bugs aren't really appropriate for pinning to a helm version
15:12:40 should just be noted in the docs
15:12:45 i'm using 2.4 and wondering what the issue is
15:12:57 sure, as long as we can let users know that for now the recommendation is to use 2.3 until we're able to resolve and work through the issues.
15:13:01 lack of ceph secrets in helm-toolkit
15:13:07 anticw: ^^
15:13:13 renders fine if they are present
15:13:27 right. +1, related to ceph issues.
15:13:29 2.3 is noted in the all-in-one and multinode docs, so we should be good
15:13:40 (in etherpad).
15:14:37 ok, i'll close out the general 1690863. dulek can you mention using 2.3 in your issue until resolved?
15:15:05 v1k0d3n: i dont think we should close out 1690863?
15:15:42 ok
15:15:48 v1k0d3n: I've commented.
15:15:59 thanks dulek. next then.
15:16:35 are there really any other ceph issues than what's already been described? anticw i thought you may have run into issues, or was that only networking?
15:16:50 (my brain is all over the place atm)
15:17:06 portdirect: really its the entire conversion of the ceph chart to the modern way -- from ceph.conf templates, to secrets, to dependency checking
15:17:36 +1 alanmeadows
15:17:52 good point alanmeadows. +1
15:18:14 alanmeadows: agreed (I think) though did you mean to direct that at me?
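A minimal sketch of the failure mode being discussed: under Helm 2.4, a template that pipes an unset value into b64enc aborts rendering, while shipping empty defaults in values.yaml (which operators then override) keeps the chart renderable. The key names below are illustrative, not the actual helm-toolkit layout.

  # values.yaml -- illustrative placeholder keys, overridden at deploy time
  ceph:
    secrets:
      admin_keyring: ""
      mon_keyring: ""

  # templates/secret-admin-keyring.yaml -- sketch of a guarded secret template
  apiVersion: v1
  kind: Secret
  metadata:
    name: ceph-client-admin-keyring
  type: Opaque
  data:
    # "default" avoids the nil-value render error that 2.4 turns into a hard failure
    keyring: {{ .Values.ceph.secrets.admin_keyring | default "" | b64enc }}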
15:18:39 I'm 100% on the same page that it needs to be redone, but thought someone else was on it
15:19:13 i volunteered anticw, but just mentioning for awareness to all: the ceph issues are more than just the helm 2.4 secrets
15:19:50 :) yeah - in addition to the points you raise we should also have a set of sane defaults for different size clusters
15:20:03 portdirect: oh, i have some stuff there
15:20:37 nice :)
15:21:32 longer-term we should split multinode into multinode.rst and largescale.rst
15:21:40 +1
15:21:52 ok, moving on. networking issues. i think anticw has been working some of these as well. SK has some interest as well.
15:21:53 but we don't know what the latter should really look like except in an abstract sense
15:22:41 To finish off your question, there is probably a long list of ceph issues, but I think many of them can simply be summed up with the fact that the first priority is modernizing the chart and bringing it in line with the rest of openstack-helm -- you can't override anything, you must set the proper environment variables before running sigil (huh), and you need to have magic stuff in helm-toolkit/secrets
15:23:24 (plug) I think we should explore using the way i did secret gen for the upstream chart?
15:24:03 eg: https://github.com/ceph/ceph-docker/blob/master/examples/helm/ceph/templates/jobs/configmap.yaml
15:24:18 which removes the req for having anything installed on the host
15:24:26 v1k0d3n: the 'networking issue' i think is the choice of IPs again in the ceph.conf magics
15:25:34 worth a look, sure -- I'm frankly happy with starting simple, because this is a pain point for many -- static 'secrets' defined in values.yaml might be the best place to start, which advanced users can override
15:26:50 +1. most of the issues that i hear about are related to preparing the environment for ceph (in BM installations).
15:27:06 easing this would be a huge win.
15:28:14 an alternative to consider may be passing this off to a plugin - which is what i do for my personal lab: https://github.com/portdirect/marina#quckstart
15:28:24 @alanmeadows we have all of the items we want on the roadmap for ceph, right?
15:28:49 perhaps we can get these items in as individual blueprints to work against.
15:29:33 This is the full issue list that was created: https://docs.google.com/document/d/1uY8U1DZgGa-IT40fYmqNlpLqUnWGC0_CbwH_n2n21Aw/edit?usp=sharing
15:29:39 It likely needs to work its way into blueprints
15:30:06 ok. i will get with the right folks, and work on getting these in.
15:30:10 we can move on.
15:30:30 is jayahn about?
15:31:07 * portdirect must be too late over that side of the big blue ball...
15:32:17 korzen did some great work putting together an etherpad: https://etherpad.openstack.org/p/openstack-helm-meeting-neutron-multibackend
15:32:58 I think that jayahn was wanting to do the linuxbridge backend
15:33:40 and in my own time I'd be very keen to do OVN, as I've worked on this quite a bit (and my old openstack-on-k8s was built around it)
15:33:58 always pushing an agenda this one
15:34:03 for each of these items that get worked, can we make sure there are blueprints for them?
15:34:22 that would help the community keep track of what's outstanding and who's working it.
15:34:26 yeah - what we really need to determine is the path of least evil..
15:34:36 lines 30:32
15:34:45 I saw him sipping coffee from an OVN mug.
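Following up on the static 'secrets' in values.yaml idea above, the operator-facing side would look roughly like the sketch below; the chart name, namespace and key names are placeholders, and the keyring value is obviously not a real one.

  # ceph-overrides.yaml -- operator-supplied values layered on top of the shipped defaults
  ceph:
    secrets:
      admin_keyring: "AQBhOUVZAAAAABAAexamplekeyringvalueonly=="

  # install the chart with the overrides applied
  helm install local/ceph --namespace=ceph --values=ceph-overrides.yaml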
15:34:51 i want coffee
15:34:59 I have published a draft: https://review.openstack.org/#/c/466293/
15:35:09 we discussed this quite a bit at the summit - but we need to prototype some things i think to determine the best path
15:35:14 LB can be incorporated in the neutron chart
15:35:15 i think @alanmeadows should probably weigh in on that etherpad
15:35:31 oh sweet, nice korzen :)
15:35:31 I think the network node concept was something missing from the beginning; that is a welcome addition
15:35:50 +1
15:36:03 means we can prob get rid of that ovs labeling weirdness
15:36:17 other SDNs need to have their separate charts
15:36:48 and labeling to mark which node should operate which Neutron backend seems like a solution
15:36:50 well, you still have the need to get ovs both on compute nodes, and now network nodes
15:37:24 yeah - just that it could now be loosened up a bit and made more generic
15:37:34 ie: sdn-agent or similar
15:37:38 so the same conundrum still exists, but with different names
15:37:58 sure, the openvswitch label becomes something more palatable
15:38:11 but same requirement for that multi-category label, unless someone has better ideas
15:38:42 we could just make it that the sdn runs on compute and network nodes?
15:38:56 and dump it altogether
15:40:15 sdn can have separate labels
15:40:30 thats what we do today, with a third label; to have it scheduled to two separate labels, we would have to get fancier than nodeSelector
15:40:47 which sounds fine to me, I just couldn't be bothered
15:41:15 lol - i think that should be a twenty min fix :P
15:41:29 * alanmeadows starts the clock.
15:41:33 * portdirect suspects that will bite him
15:41:36 :)
15:42:12 so maybe the best approach alanmeadows is to weigh in on that etherpad?
15:42:24 i will, but with this etherpad
15:42:40 did this group work out whether they feel all of these options listed here, which are pretty wide-reaching
15:42:48 from calico, to (open)contrail, to ovs
15:42:57 not yet its still a wip
15:42:57 whether they can be handled in a single neutron chart
15:43:16 or whether neutron-foobar would be inevitable
15:43:20 nope - thats the fundamental question at this stage
15:43:32 ok so more work-through/attempts need to be done
15:43:35 for us to answer that question
15:43:55 at the summit we agreed that we would prototype both approaches in PS's and then determine the most palatable path forward
15:44:26 okay
15:44:37 lines 34:47 are where korzen has done a great job of highlighting most of the areas we need to consider
15:45:09 15 minutes til top of the hour folks.
15:46:30 let's talk about these neutron improvements in the openstack-helm channel.
15:46:58 one thing that i think may be creeping up is cinder issues.
15:47:21 i've heard a couple of folks mention this. can anyone go a bit deeper? anticw ?
15:47:45 v1k0d3n: couple of issues, the default endpoint
15:47:48 and secondly it doesn
15:47:57 gah stupid ' button
15:48:00 do not work
15:48:25 the endpoint is an easy fix, we get obscure 500s without it
15:48:36 do not work = tries to use iscsi in nova-compute for unknown reasons
15:49:30 anticw: you cant do iscsi in a network namespace for kernel reasons
15:49:31 anticw: are these things you have fixes for that could be submitted/reviewed, or are you still in the process of testing through?
15:49:59 v1k0d3n: just local because i'm not sure if it's me or something larger
15:50:07 SamYaple: don't want iscsi ... it should be using ceph
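For reference on the "should be using ceph, not iscsi" point: whether nova-compute attaches a volume over iSCSI or RBD follows from the backend cinder-volume is configured with, so ending up on the default LVM/iSCSI backend instead of RBD would produce exactly the symptom described. A bare-bones RBD configuration uses the standard options below; the pool/user names and libvirt secret UUID are illustrative, and how these map into the chart's values.yaml is a separate question.

  # cinder.conf -- point cinder-volume at ceph
  [DEFAULT]
  enabled_backends = rbd1
  [rbd1]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = rbd1
  rbd_pool = volumes
  rbd_user = cinder
  rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

  # nova.conf -- let libvirt use the same cluster for attach (and ephemeral disks)
  [libvirt]
  images_type = rbd
  images_rbd_pool = vms
  rbd_user = cinder
  rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337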
15:50:17 ah sorry misread that anticw
15:50:24 https://pastebin.com/5Uvjk7hR
15:51:33 SamYaple: fwiw, i think we could get iscsi working as most of that stuff seems to run in the host namespace
15:51:49 (but that's OT)
15:51:50 anticw: let me redeploy a bare metal install, test and see if i get some of the same. i can reach out and we can work through them if you want/have time?
15:51:58 v1k0d3n: no
15:52:04 err, yes
15:52:06 anticw: iscsi runs in the host namespace fine, there is a kernel bug for running in network namespaces
15:52:13 :) haha
15:52:29 8 mins left
15:52:35 SamYaple: the only thing that jumps out at me is this: https://github.com/openstack/openstack-helm/blob/master/nova/values.yaml#L322
15:52:54 but that should not affect volume attach, right?
15:53:09 that should be how nova stores ephemeral disks
15:53:28 cinder-api is giving cinder-volume some json which it's not dealing with right, unclear which end is to blame
15:53:41 we can talk about this over <- there later if you like
15:53:53 that works anticw
15:54:07 portdirect: really easy to repro if you have 5 mins
15:54:22 one last thing too. i think SamYaple and alanmeadows: you guys are still exploring rootwrap use...is that correct?
15:54:53 or have you guys come to some form of agreement? maybe we should stand up an etherpad so we can take it offline?
15:55:01 (unless there already is one).
15:55:36 the summation is: i say rootwrap.conf is a config (hence why it is in /etc/) and it is not in the LOCI images
15:55:48 the fear is that it is image-specific stuff, which i dont agree with
15:55:52 +1
15:55:54 I have been going through OOTB rootwraps trying to discover something that would end up being quite image-centric
15:56:08 for glance and cinder, if you use ceph you dont even NEED rootwrap, so its also a greater attack surface
15:56:23 the deploy tool would be able to limit that attack surface, but it must control the rootwrap files
15:56:32 for cinder you do for backup I think?
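For anyone following along, the files under discussion are plain INI; a stock example looks like the below. The paths are the usual upstream defaults and the filter entries are illustrative (the real filter lists are much longer); nothing here is specific to any particular image.

  # /etc/cinder/rootwrap.conf
  [DEFAULT]
  filters_path=/etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap
  exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin
  use_syslog=False

  # /etc/cinder/rootwrap.d/volume.filters -- one entry per command allowed to run as root
  [Filters]
  lvs: CommandFilter, lvs, root
  vgs: CommandFilter, vgs, root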
15:56:42 i dont think so, but i could be wrong
15:56:52 i could be wrong about ALL of this, but i havent seen anything to suggest i am
15:56:58 so this is really pending more info
15:56:58 I have not been able to find an example yet, so I am willing to concede on bringing these into the fold of OSH, provided we come up with a path to customize not only rootwrap.conf, but also allow injection of additional rootwrap.d files
15:57:15 im happy to come back to it once/if an issue is found
15:57:30 to be honest, the main reason it was skipped for configuration overrides
15:57:35 was that it was a substantial amount of work
15:57:45 alanmeadows, i can tackle that
15:57:52 yea sure alanmeadows, but to be fair it can be wholesale copied over at first
15:57:59 maintenance on a straight copy is insignificant
15:58:17 any parameterized rootwrap is a thing that comes later
15:58:25 well, but I mean if we're going to own it, we have to give end users the ability to do things with it (templates)
15:58:29 they are pretty well commented atm - should just be a case of wrapping in conditionals i think
15:58:33 if we own it, and then jam it in, we've done a disservice
15:59:02 in fact, I think because this is not oslo-gen capable
15:59:12 'dont let perfect be the enemy of good'
15:59:21 but im with you, i dont want it to sit in there
15:59:25 the function introduced with the cinder stuff, namely the helm-toolkit "toIni" function (casing correction needed)
15:59:31 that doesnt mean it cant sit in there AT FIRST
15:59:37 may be perfect for rootwrap.conf
16:00:02 reminder, openstack config files are not ini. that is all
16:00:12 i think rootwrap follows ini still though
16:00:29 we unfortunately have to wrap up.
16:00:35 its standard [HEADER] item=foo
16:00:39 sorry that the meeting took a while today.
16:01:01 alanmeadows and SamYaple...want to continue in #openstack-helm?
16:01:11 sure i think we are basically done though
16:01:21 sounds good.
16:01:36 ok, calling the meeting, folks. thanks everyone for coming and contributing!
16:01:37 I think we can close with saying srwilkers can own this, just an etherpad with some comments to start out a trial on this
16:01:48 o/
16:01:52 bai
16:02:00 #endmeeting
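Closing note on the toIni idea: since rootwrap.conf really is plain [SECTION] key=value, rendering it from values through an INI helper would look roughly like the sketch below. The values layout and the helper's include name are assumptions for illustration; the actual helm-toolkit function name/casing was still being settled in the discussion above.

  # values.yaml (sketch)
  conf:
    rootwrap:
      DEFAULT:
        filters_path: /etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap
        exec_dirs: /sbin,/usr/sbin,/bin,/usr/bin
        use_syslog: false

  # templates/configmap-etc.yaml (sketch; swap "toIni" for whatever the helper ends up being called)
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cinder-etc
  data:
    rootwrap.conf: |
  {{ include "helm-toolkit.toIni" .Values.conf.rootwrap | indent 4 }}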