15:00:24 #startmeeting openstack-helm
15:00:25 Meeting started Tue Dec 5 15:00:24 2017 UTC and is due to finish in 60 minutes. The chair is mattmceuen. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:28 The meeting name has been set to 'openstack_helm'
15:00:40 #topic rollcall
15:00:46 GM all!
15:00:50 Agenda: https://etherpad.openstack.org/p/openstack-helm-meeting-2017-12-05
15:01:29 o/
15:02:15 o/
15:02:21 o/
15:03:05 Giving another min or so for agenda edits to complete
15:03:13 o/
15:03:24 o/
15:03:48 a wild alanmeadows appears
15:03:57 straight from the bush
15:04:04 don't spook him!
15:04:11 I'm in transit, so may be less verbose than usual. (You can all sigh with relief now)
15:04:13 * srwilkers uses flash -- it's not very effective
15:04:19 #topic When to use a spec in OSH
15:04:39 OK -- I thought it would be a good time to refresh our local practice of spec authoring
15:04:52 Both as an FYI to new team members, and as a refresher for the rest of us :)
15:05:27 At this point in the OSH lifetime, we don't expect specs for every code change
15:05:36 The cases where we do expect specs are:
15:05:43 o/
15:05:49 1. when a change impacts multiple charts
15:06:09 2. when a change needs design feedback from the larger team prior to implementation
15:06:24 3. when a change does something substantially new that'll be modeled in other charts later
15:07:06 The gist being: write specs as a means to drive common understanding (think: useful documentation) and common direction (think: everyone's aligned)
15:07:17 thoughts/questions?
15:07:37 sounds good to me.
15:07:42 good recap
15:07:57 Sounds good mattmceuen
15:08:11 cool beans. thanks guys, I'll get off the process soapbox.
15:08:17 Next:
15:08:27 #topic Carryover CICD topics from last week
15:08:49 Great focus-meeting on CICD last week -- we couldn't fit everything in :) good problem to have.
15:09:09 portdirect: want to speak to helm test and friends?
15:09:21 was really sad to miss last week, but couldn't make it. was there some docs/notes (i'm assuming the same etherpad format)?
15:09:53 Yep! notes and transcript: http://eavesdrop.openstack.org/meetings/openstack_helm/2017/
15:10:07 awesome. thanks mattmceuen
15:10:21 So I'm thinking we need a spec for what we have in helm test
15:11:08 Sounds pretty cross-chart to me. What do you want to get out of said spec?
15:11:27 Currently we have been pretty good about what I hope we'll adopt as the core rationale for this functionality, but as more contributors come in we should formalize it a bit.
15:12:24 So from my perspective, we should be able to run `helm test` at any point in a chart's life, without lasting impact on the environment
15:12:55 This means that by definition the tests should be non-impacting and non-destructive
15:12:55 ++
15:13:18 agree.
15:13:33 Though we are limited by what helm currently provides us with
15:13:51 are they impacting or destructive currently? we've just recently started testing with custom rally yaml.
15:14:02 I'd also like to explore developing a pattern for running a test on each node in the cluster
15:14:22 v1k0d3n: no, but this is not formalised at present
15:14:48 ok. sounds good.
15:15:15 Can we document the pattern for testing openstack vs. non-openstack charts in that spec as well?
15:15:32 Is there also appetite for adding a 'really hammer this thing' flag? That would enable destructive testing?
15:16:20 mattmceuen: yes, definitely -- the pattern we write up in the spec should be application agnostic
15:16:29 portdirect: yep. `helm test` is generally meant as a smoke test for verifying a chart deploys something functional. i've been toying around with the idea of having a chart for running helm tests against specific chart groups in openstack-helm-infra. don't know if that makes sense the way i worded it, but without something like rally for services outside of openstack, it's been difficult verifying everything's working the way it should
15:17:04 using `helm test` the way i have been with those charts really introduces dependencies on the other services, when it shouldn't if we're treating each chart as its own entity
15:18:15 portdirect: i've got an appetite for such a flag
15:18:19 This needs to be encapsulated in the spec for sure. I'm assuming that you are referring to things like the mysql exporter?
15:18:29 that speaks more to my thought portdirect -- I agree the spec should be agnostic, but am thinking of advice like "if you're testing an OS service, you can use rally; otherwise, here are some guidelines"
15:18:41 referring to fluentd/elasticsearch/prometheus and friends
15:19:40 srwilkers: yeah, and here I think we need to specify a 'minimum' set of infra that we have running, so we don't end up with a ceilometer-type situation
15:19:46 ++
15:20:02 Where tests were passing but nothing useful was being collected....
15:20:10 yep
15:20:31 portdirect: are you looking for a volunteer for the spec, or are you volunteering for it?
15:20:48 sounds like he's volunteering, and i'd be happy to throw some input on it as well
15:20:59 I'm volunteering to do it, unless there is someone who really wants it.
15:21:26 Sounds good - thanks portdirect. Excellent idea.
15:21:32 i don't really want it, but willing to contribute/do it if there's nobody else
15:21:43 #action portdirect to work on a spec formalizing our `helm test` approach
15:21:54 next: do we have lamt in the house?
15:22:29 We'll table his item till next time.
15:22:40 #topic LMA updates
15:22:48 srwilkers what's goin on!
15:23:07 ONAP and coffee mostly :)
15:23:08 but
15:23:47 we got prometheus merged in (finally).
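The non-destructive smoke-test pattern discussed above can be sketched as a minimal Helm test pod. This is an illustrative sketch only, not a chart from the OSH tree: the chart layout, image, port, and `/healthz` endpoint are all hypothetical placeholders; the `helm.sh/hook: test-success` annotation is the standard mechanism `helm test` uses to find and run test pods.

```yaml
# templates/tests/test-api-connection.yaml (hypothetical example chart;
# names, image, and endpoint are placeholders for illustration)
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-api-test"
  annotations:
    # Marks this pod as a helm test: `helm test <release>` creates it
    # and reports pass/fail from its exit code.
    "helm.sh/hook": test-success
spec:
  restartPolicy: Never
  containers:
    - name: api-test
      image: curlimages/curl:latest
      # Read-only probe against a health endpoint: no state is created
      # or mutated, so the test can run at any point in the chart's
      # life without lasting impact on the environment.
      command: ["curl", "--fail", "http://{{ .Release.Name }}-api:8080/healthz"]
```

A hypothetical destructive-testing flag, as floated above, could then gate a second set of test templates behind something like `{{- if .Values.test.destructive }}` so the default `helm test` run stays safe.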
osh-infra currently has charts for: prometheus, kube-state-metrics, node-exporter, and alertmanager
15:23:55 WOOO
15:24:11 this gives us a solid base for monitoring the underlying infrastructure, along with rbac rules that match those in the kubernetes/charts/stable repo
15:24:35 along with that (tied in to the previous topic), we now have support for executing helm tests on charts in osh-infra
15:24:47 Can we get these exporting logs in the gate as a priority, or did I miss that getting added?
15:24:51 and prometheus currently passes some basic smoke tests, so huzzah
15:25:01 hi
15:25:06 portdirect: that'll be fluent-logging's job
15:25:17 i'm currently adding support for doing that as we speak
15:25:22 expect a patchset in the next hour or so
15:25:43 the idea is that we can use a deployed fluentbit instance to export logs from the pods running in the osh-infra gates
15:26:12 and we can also use prometheus to export the metrics gathered during the gate run, to give an idea of the services' performance in the gate jobs
15:26:19 that will also be added today
15:27:02 Nice, we really need that and the supporting docs for how to use/ingest them asap
15:27:05 also big shoutout to jayahn and sungil for the work they did on fluent-logging
15:27:09 portdirect: yep
15:27:25 just took a few tweaks to get it to the finish line, but i'm really happy with how it's working right now
15:27:59 we found out that version matching between fluent-bit and fluentd is somewhat sensitive.
15:28:10 jayahn: yeah, i've noticed the same
15:28:22 if we want simpler stuff, we can run only fluent-bit.
15:28:43 we have a plan to make that "selection" possible through the fluent-logging chart.
15:28:54 once the log and metrics exporting is done in the osh-infra gates, the next step is to get the prometheus chart running prometheus 2.0
15:29:46 prometheus 2.0 makes me happy. the rework to the underlying storage layer is really solid, and the overall resource consumption has been reduced significantly
15:29:56 W00t
15:30:19 Any loss of features that hit us, e.g. openstack exporter?
15:30:28 Or are they all sweet?
15:30:35 nope, they're all sweet
15:30:40 Nice
15:30:47 love backwards compatibility.
15:30:53 nice!
15:31:04 i'm chatting with some of the prometheans wednesday here at kubecon, and going to ask them about the maturity of the openstack service discovery mechanisms they're adding to prometheus
15:31:21 as that will actually reduce the necessity of some of the openstack-exporter's responsibilities
15:32:18 anyway, that's it for me
15:32:24 awesome - thanks srwilkers
15:32:44 #topic Review Needed
15:32:51 * jayahn portdirect really being less verbose?
15:33:11 need to rebase, i guess. but again, for adding lbaas to neutron
15:33:12 * srwilkers thinks portdirect needs a built-in -vvv flag
15:33:22 add lbaas to neutron: https://review.openstack.org/#/c/522162/
15:33:40 but it requires kolla neutron version 4.0.0, since 3.x does not have that
15:33:41 * portdirect fingers on fire, this phone is being worked..
15:34:02 will it be any problem for the current upstream?
15:34:14 jayahn: for current upstream, yes
15:34:34 But we could turn it off by default, which I think would be ok?
15:34:40 sure
15:35:10 What is the status of lbaas in neutron currently?
15:35:22 Any other PS we need some extra eyes on, all?
15:35:36 not sure. we only used lbaas in neutron, not octavia
15:35:37 Is it being supported moving forward? Or is everyone all in on octavia?
15:36:04 our use case is integrating with vendor appliances, such as a10, f5, etc
15:36:06 mattmceuen: I'd love some feedback on the dev guide I've been working on
15:36:13 so we don't really need octavia
15:37:11 jayahn: roger, that kinda fits with what I'd seen, in that vendors are still using it
15:37:38 yeah. they have lbaas v2 support as far as I know.
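The "turn it off by default" approach discussed for the lbaas patchset could look something like the following values fragment. This is a sketch under stated assumptions: the key names below are invented for illustration and are not the actual neutron chart schema in the review.

```yaml
# Hypothetical values.yaml fragment for the neutron chart -- key names
# are illustrative only, not the real chart's interface.
network:
  lbaas:
    # Disabled by default so deployments using kolla neutron 3.x
    # images (which lack lbaas support) are unaffected; enabling it
    # requires a 4.0.0 image.
    enabled: false
conf:
  neutron:
    DEFAULT:
      # The chart templates would append the lbaas service plugin to
      # this list only when network.lbaas.enabled is true.
      service_plugins: router
```

Gating optional features behind a default-off values flag like this keeps the upstream gate green while still letting vendor-appliance users (a10, f5, etc.) opt in.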
15:37:53 updated dev guide: https://review.openstack.org/#/c/523173/
15:38:04 i only seriously did integration work with a10 though.
15:39:17 #topic things to share
15:39:33 Hey jayahn you had an item here, go for it
15:39:55 https://github.com/sktelecom-oslab/taco-scripts >> skt's version of osh aio installation
15:40:06 Nice
15:40:06 oh awesome
15:40:13 fully inspired by portdirect's scripts from sydney
15:40:50 It would be great to see if we could get elements of this merged into the dev docs ps above
15:40:54 will give that a spin, jayahn!
15:41:16 thanks. :) btw, this is ocata. :)
15:41:26 Having examples of deploying with kubespray and co is awesome
15:41:44 * srwilkers has a sudden urge to find tacos
15:41:50 * mattmceuen I know right
15:42:01 * jayahn lucky you srwilkers, you are in austin
15:42:06 Jay, you keep the floor:
15:42:18 #topic destructive testing tool comparison
15:42:31 * portdirect is about to leave New Orleans :D
15:42:35 as requested, we did a quick comparison
15:42:56 * mattmceuen full disclosure: I just got distracted and will have to catch up on these deets from the chat logs
15:43:47 cookiemonster has more flexibility compared to the other tools: it can define more types to kill, and has a REST API call to start and stop
15:44:04 but all three are doing similar things.
15:44:50 we will add more documentation and use cases.
15:45:25 happy to take any questions on how we use cookiemonster in our ci, and to give a demo
15:45:52 Also - there are some good details jayahn added as a comparison in the agenda (bottom): https://etherpad.openstack.org/p/openstack-helm-meeting-2017-12-05
15:46:31 that is all from me
15:46:34 any questions on the comparison at this point?
15:47:14 Thanks jayahn - :) eeiden is out at the OPNFV conference digging to get their thoughts on destructive testing too
15:47:29 Can we drive it from so.thong other than the api?
15:47:38 Lol something
15:48:31 so.thong.. it exactly sounded like bull poo. (in korean, that is)
15:48:43 lol
15:49:13 portdirect: could you explain a bit more?
15:49:17 something like?
15:49:38 A configmap/file
15:50:08 not in the current version, but open to any feature suggestions
15:50:21 Having an extra endpoint to manage (auth etc) always makes me sad
15:50:24 But for dev this is great
15:51:00 i will note your question, and will talk to my developer
15:52:28 Thanks guys.
15:52:29 portdirect: could you tell me more about "It would be great to see if we could get elements of this merged into the dev docs ps above"
15:52:44 your comments on the installation scripts.
15:53:24 we can add a "how to" guide with the tools we are using, but need to know what exactly you want to have.
15:54:00 ping me anytime. I will be happy to listen and follow. :)
15:55:05 T-minus five
15:55:10 #topic roundtable
15:55:18 Any other topics for today?
15:56:14 looking forward to seeing any folks who are coming to kubecon :)
15:56:21 same!
15:56:35 Alrighty -- thanks guys, see you in the chat room
15:56:39 #endmeeting
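The configmap/file-driven configuration portdirect asks for above does not exist in cookiemonster yet, per the discussion. Purely as an illustration of the request, a hypothetical ConfigMap might look like the following; every key and value here is invented, and the real tool's configuration format may be entirely different.

```yaml
# Hypothetical ConfigMap sketch for file-driven chaos configuration.
# cookiemonster does not support this today; all keys are invented
# to illustrate the feature request, not the tool's actual schema.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cookiemonster-config
  namespace: chaos
data:
  config.yaml: |
    # Resource types the tool is allowed to kill.
    targets:
      - pods
      - nodes
    # Interval between kill events, and an overall stop condition --
    # replacing the REST start/stop calls (and the auth overhead of
    # an extra endpoint) for dev environments.
    intervalSeconds: 300
    maxKills: 10
```

Mounting such a ConfigMap would let a gate job declare its chaos profile alongside the rest of its manifests, with no separate API endpoint to manage.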