14:00:39 #startmeeting airship
14:00:40 Meeting started Tue Jan 8 14:00:39 2019 UTC and is due to finish in 60 minutes. The chair is mattmceuen. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:44 The meeting name has been set to 'airship'
14:00:46 #topic rollcall
14:00:53 good morning / evening / middle of night everyone!
14:01:04 o/
14:01:05 Hi!
14:01:08 o/
14:01:10 o/
14:01:17 o/
14:01:33 Agenda for today: https://etherpad.openstack.org/p/airship-meeting-2019-01-08
14:01:35 o/
14:01:46 Please add in anything additional you'd like to discuss today
14:03:41 Alrighty
14:04:00 #topic Accessing tiller from behind a proxy
14:04:33 There was a discussion on the ML around accessing tiller from behind the proxy
14:04:44 Yes, there is a problem with configuring Armada behind the proxy: no_proxy in grpc (the tiller client) does not support CIDR notation, and as a result it cannot connect to Tiller.
14:04:51 It led to some good thoughts around how to streamline armada-tiller-shipyard communication in general
14:05:08 but first I was hoping evgenyl could walk through his use case to ground us
14:05:27 Scott suggested configuring the "proxy" parameter for every repo; I'm wondering if it can be done globally.
14:06:41 If we configure tiller with a DNS name and use it instead of an IP, this should help with no_proxy.
14:06:49 is the proxy issue you're getting happening when you pull images from the repo, or after you pull images and armada is trying to talk to tiller?
14:06:57 tiller does have a dns name
14:06:59 trying to understand how the repo part fits into this
14:07:04 there is a tiller service that gets an in-cluster DNS entry
14:07:48 Ignore me
14:07:56 lol
14:07:58 That was the previous tiller chart in OSH
14:08:01 ahh
14:08:01 mattmceuen: it happens when Armada starts to pull git repos.
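For context, the per-repo proxy workaround discussed here would be expressed in an Armada chart document. The sketch below is illustrative only: the `proxy_server` field name and all values are assumptions to verify against the current Armada document schema, not a confirmed excerpt.

```yaml
# Sketch of a per-chart git proxy setting in an Armada chart document.
# The proxy_server field and all values here are illustrative and should
# be checked against the Armada authoring documentation.
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: example-chart
data:
  chart_name: example-chart
  release: example
  namespace: example
  source:
    type: git
    location: https://git.example.com/charts.git
    reference: master
    subpath: example-chart
    proxy_server: http://proxy.example.com:8080
```

The pain point raised in the meeting is that this must be repeated for every chart (e.g. every entry in AIAB's versions.yaml), which is why a global setting is being requested.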
14:08:39 I see - that's before armada is actually talking to tiller
14:08:41 This is why we define proxies per repo
14:08:53 Because it gives the granularity needed
14:09:31 evgenyl can you give sthussey's idea a try and let us know if it helps, and/or follow on issues?
14:09:58 I think the issue Evgeny was having isn't functional, it is UX
14:10:04 Yes.
14:10:17 I was wondering if it can be redefined globally.
14:10:18 Because in the case of AIAB behind the proxy, he needed to update the proxy for every chart in versions.yaml
14:10:43 Does anybody know why DNS was removed from the OSH chart?
14:10:53 We had two charts
14:10:56 We stopped using OSH
14:11:10 And use the chart in the Armada repo
14:11:15 #topic Streamlining tiller lookup logic
14:11:19 Segue :)
14:11:26 We can certainly add a service to the chart
14:11:40 Which will yield a DNS entry that could be used
14:12:08 would just be a revert of: https://github.com/openstack/airship-armada/commit/eb7c112d2ee8e95ab5a4eda0076c69fdd3aeaf66
14:12:36 Yes. There is also an idea to move the logic of tiller discovery from shipyard into Armada.
14:12:49 That likely will happen.
14:13:14 ++
14:13:21 Tiller will be running as a sidecar to the armada API in the near future
14:13:45 So resolving what address to access tiller at will need to be in Armada, so might as well make that location logic good for the CLI as well
14:13:54 o/
14:14:02 o/ roman_g
14:14:30 sthussey: will it have a separate DNS name when running as a sidecar?
14:14:43 Or will it be possible to configure?
14:14:52 When running as a sidecar it will be inaccessible to anything but the Armada API it runs with
14:15:19 So there will need to be a few deployment patterns to support use cases of the Armada CLI
14:15:34 I'm trying to understand if this DNS change will make sense if Tiller lands as a sidecar.
14:16:22 presumably then you'd just have it listen on localhost? so the service would not be exposed at all?
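The Service being discussed (effectively a revert of the linked commit) would give Tiller a stable in-cluster DNS name that can be listed in no_proxy by name. A minimal sketch follows; the resource name, namespace, and pod selector labels are assumptions and would need to match what the Armada chart actually deploys (Tiller's gRPC port 44134 is standard):

```yaml
# Minimal Service sketch yielding a stable DNS name such as
# tiller-deploy.kube-system.svc.cluster.local. The name, namespace,
# and selector labels are assumptions, not taken from the actual chart.
apiVersion: v1
kind: Service
metadata:
  name: tiller-deploy
  namespace: kube-system
spec:
  selector:
    application: tiller
  ports:
    - name: grpc
      port: 44134        # Tiller's standard gRPC port
      targetPort: 44134
```

With this in place, clients can add the service's DNS suffix to no_proxy instead of relying on CIDR entries that grpc ignores.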
14:16:41 Oh, in this case PodIP discovery won't be needed.
14:16:48 That is in the API case
14:16:53 Doesn't work for the CLI case
14:16:56 sthussey, are you saying that we'd run tiller-in-armada-pod for an armada API based deployment, but to support the CLI we'd also support standalone tiller pods w/ Service (DNS)?
14:17:15 Right, would need to support multiple patterns
14:17:19 gotcha
14:18:11 so to say differently, you'd choose your adventure, one of:
14:18:11 1. Deploy the Armada chart
14:18:11 2. Deploy the Tiller chart (which will include a Service / DNS) and use the Armada client
14:18:29 depending on the operator use case
14:19:20 So in that case
14:19:30 So you can use the Armada client without having the Armada API service?
14:19:46 yes
14:19:56 Oh, ok, this is clear now.
14:19:59 The CLI looks for the tiller pod with labels
14:20:57 The normal way to use the Armada client is ArmadaClient->Tiller directly
14:21:09 Although it can be configured to be ArmadaClient->ArmadaAPI->Tiller as well
14:22:11 So to summarize it would look like: ArmadaClient->(DNS)Tiller, or ArmadaClient->ArmadaAPI->(localhost)Tiller
14:22:47 Would you be running the armada client from inside the cluster or outside the cluster?
Just making sure
14:22:56 AFAIK, nobody is using the configuration of Client -> API -> Tiller
14:23:36 I ask because, if inside the cluster, then the Service will be enough to provide DNS-based routability to tiller
14:24:13 mattmceuen: I'm interested in a "standard" Shipyard -> ArmadaAPI -> Tiller use case
14:24:16 but if you're trying to get to it from outside the cluster, you'd additionally need an ingress for the service into the cluster, along with *publicly* accessible DNS
14:24:20 ok
14:24:47 or just expose a port, and use the ip
14:24:53 yeah
14:25:23 the reason we added the service in osh was much less about dns, but about having a stable vip to hit tiller with
14:26:15 The problem with IP gets back to the initial issue with grpc not supporting CIDR in no_proxy, doesn't it?
14:26:38 Which isn't standard
14:26:42 So not surprising
14:26:45 Yes.
14:26:46 yeah - it does not solve that issue, just providing context for where it came from
14:27:02 So there are two issues in parallel - 1) address resolution and 2) routing
14:27:40 Address resolution can use DNS, explicit address or label selection. Only DNS would solve the routing issue
14:28:22 If we add a `proxy` setting at the armada/Manifest/v1 level that will apply to all charts, it solves #2 and makes #1 moot
14:30:41 So are there any objections regarding this? If not I can start looking into that in my spare time.
14:30:45 evgenyl can you try setting the proxy for each chart like sthussey had suggested earlier for now, and we can look to add a manifest-wide proxy setting in the future?
14:30:54 that would be awesome evgenyl :)
14:31:32 Sure, let me update the AIAB doc with this info as a quick fix.
14:31:41 thanks man
14:31:59 to wrap up the "tiller lookup" topic:
14:32:33 when we move tiller in-pod with armada, and armada changes to look tiller up via localhost, seems like a natural point to go ahead and take the lookup functionality out of shipyard, any objections?
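To make the grpc/no_proxy limitation concrete: most proxy-bypass implementations treat no_proxy entries as exact hostnames or domain suffixes, so a CIDR entry like `10.96.0.0/12` never matches a raw service IP, while a DNS name matches cleanly. The helpers below are illustrative only (they are not grpc's actual matching code) and contrast the two behaviors:

```python
# Illustration of why a CIDR in no_proxy fails with suffix-style matching
# (as in grpc's tiller client per the discussion), while a DNS name works.
# These helpers are ours, not grpc's actual implementation.
import ipaddress

def suffix_match_no_proxy(host: str, no_proxy: str) -> bool:
    """Typical no_proxy handling: exact-host or domain-suffix match only."""
    entries = [e.strip() for e in no_proxy.split(",") if e.strip()]
    return any(
        host == entry or host.endswith("." + entry.lstrip("."))
        for entry in entries
    )

def cidr_match_no_proxy(host: str, no_proxy: str) -> bool:
    """What operators often expect: treat entries like 10.96.0.0/12 as networks."""
    for entry in (e.strip() for e in no_proxy.split(",") if e.strip()):
        try:
            if ipaddress.ip_address(host) in ipaddress.ip_network(entry, strict=False):
                return True
        except ValueError:
            continue  # entry is not an IP or CIDR; a name matcher would handle it
    return False

tiller_ip = "10.97.103.141"          # hypothetical in-cluster service IP
no_proxy = "127.0.0.1,10.96.0.0/12"  # CIDR entry, as commonly set for k8s

print(suffix_match_no_proxy(tiller_ip, no_proxy))                   # False: CIDR is ignored
print(cidr_match_no_proxy(tiller_ip, no_proxy))                     # True
print(suffix_match_no_proxy("tiller-deploy.kube-system.svc", "svc"))  # True: DNS name works
```

This is why the Service/DNS approach (or a manifest-wide proxy setting) sidesteps the problem without changing grpc itself.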
14:32:44 any other loose ends?
14:33:13 alright - moving on:
14:33:17 #topic Read the Docs jobs not updating documentation
14:33:24 This one's yours dwalt
14:33:31 This is still an issue AFAIK
14:33:42 For example, https://review.openstack.org/#/c/628420/1 doesn't appear to have triggered an update
14:34:22 Has anyone heard any more official information (outside of http://lists.openstack.org/pipermail/openstack-infra/2018-December/006247.html) from OpenStack regarding this?
14:34:29 @roman_g
14:34:55 My understanding is infra is waiting for this to be merged
14:34:57 #link https://github.com/rtfd/readthedocs.org/issues/4986
14:35:32 https://github.com/rtfd/readthedocs.org/pull/5009
14:36:00 though no indication of the timeline before this is closed out
14:36:11 it seems pretty active
14:36:23 Yes, it has been broken for several months already.
14:36:30 I'm leaning toward doing manual rtd updates and giving this a chance to get fixed
14:36:40 yes but promising that there's discussion 5 days ago too :)
14:36:59 Sorry, not several months, for almost a month :)
14:37:30 i.e. hopefully the end is in sight... we can revert to a token-based push if it does stretch out a long time, but better not to spend the cycles on it if the issue gets resolved soon is what I'm thinking.
14:37:45 agree/disagree?
14:37:45 Agreed.
14:37:51 ++
14:38:02 awesome
14:38:05 next topic:
14:38:09 #topic Adding Airship in the CNCF Landscape
14:38:16 Who is doing these manual updates?
14:38:34 Kaspars is what I hear, sthussey
14:38:57 I am not sure if that's for all repos or just treasuremap
14:39:13 It has to be manually triggered via RTD from what I understand
14:39:23 So whoever has access to the account
14:39:24 let's bring it up in this chat when we need doc updates, kaspars or anyone with rtd access to our account can do it
14:39:38 (I think I might have access, it's been a while :) )
14:40:14 hogepodge for CNCF landscape, this one's yours!
Please educate us on what this thing is
14:40:22 https://landscape.cncf.io/grouping=no&license=open-source
14:40:28 I see Zuul in there
14:40:54 So the CNCF landscape is an interactive document that tries to capture integrations between CNCF projects and other open source projects
14:41:10 So, Zuul is there because it runs CI jobs for K8s on OpenStack, for example
14:41:23 Or Kata because it can be a container runtime for Docker and Kubernetes.
14:41:40 Since Airship is essentially deployment and hosting tooling, it makes sense to list it in the landscape
14:42:08 odd that it shows the market cap of the backing company for single vendor projects
14:42:09 Sounds reasonable to me
14:42:16 If there are parts of Airship that can stand alone (like Armada, say), there's also a possibility of listing those.
14:42:16 but that sounds great
14:42:22 portdirect: it has some... limitations
14:42:58 is it as simple as filling out some request form hogepodge? Or is there bureaucracy :)
14:43:08 Like they want to pull all data from GitHub, which is fine with mirrors, but for projects run in Gerrit it loses a bunch of data about issues, code reviews, and so on
14:43:23 I've never actually added anything myself, I think it's a pull request
14:43:37 #link https://github.com/cncf/landscapeapp
14:44:24 #link https://github.com/cncf/landscape
14:44:49 "If you think your company or project should be included, please open a pull request to add it to landscape.yml. For the logo, you can either upload an SVG to the hosted_logos directory or put a URL as the value, and it will be fetched."
14:45:15 Sounds reasonable to me -- I think perhaps a single PR with two entries, Airship (as a whole) and Armada, using the same Airship logo would be great
14:45:28 Any volunteers for putting in this PR with CNCF?
14:45:47 i could if you want?
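For reference, a landscape.yml entry of the kind quoted above might look like the sketch below. Everything here is a hypothetical placeholder: the field set, category placement, logo filename, and crunchbase link all need to be checked against the cncf/landscape contributing guide before submitting the PR.

```yaml
# Hypothetical landscape.yml fragment for Airship; the field names follow
# the general shape of landscape entries, but values are unverified
# placeholders, not a real submission.
- item:
    name: Airship
    homepage_url: https://www.airshipit.org/
    repo_url: https://github.com/openstack/airship-treasuremap
    logo: airship.svg   # SVG uploaded to hosted_logos/, per the quoted instructions
    crunchbase: https://www.crunchbase.com/organization/openstack-foundation
```

A second, similar `item` entry would cover Armada standing alone, reusing the same logo as discussed.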
14:46:06 I was about to offer, but portdirect got there first ;-)
14:46:13 lol
14:46:23 he did have a question mark
14:46:40 portdirect if you have bandwidth that would be awesome, ty
14:46:49 np
14:46:56 voluntold
14:47:04 thanks for the find hogepodge, free publicity!
14:47:18 evrardjp: quite the opposite ;)
14:47:30 I've been working with Dan on some of the openstack stuff, so it's been on my mind
14:47:34 volunplease
14:48:06 anything else on this topic?
14:48:23 #topic proposed topic: design overview of ovs-dpdk integration
14:48:33 georgek - this one's yours! What do you have in mind?
14:48:39 ok, thanks
14:49:01 so, as you know there is a need and some discussions around ovs-dpdk integration in Airship
14:49:22 I wanted to bring the topic here to have a sanity check of the work items that have been identified
14:49:30 and to figure out who is working on this
14:49:38 #link https://wiki.akraino.org/display/AK/Support+of+OVS-DPDK+in+Airship
14:50:14 this wiki page lists a couple of high-level work items that I have identified which need to be done in order to get the integration done
14:50:54 georgek are there user stories in storyboard for this you could point me toward? I'm catching up on this topic
14:51:01 ^^
14:51:43 right, that would be a good next step. no, I am not aware of those
14:52:10 but I am not a huge fan of searching in storyboard (might just be me)
14:52:26 if this list is reasonable, I'm happy to transfer it to storyboard
14:52:45 it would provide more visibility for sure
14:53:23 that wiki seems to be a bit misinformed
14:53:24 portdirect, can we carry the OSH-related bits of this to the OSH meeting coming up next in openstack-meeting-5?
14:53:25 georgk: aside, if you can file an issue about searching in storyboard in the storyboard storyboard, we might have some resources coming online to take a look at it
14:53:29 Since OSH isn't part of Airship
14:53:58 i was side-stepping that for the moment sthussey
14:54:05 yeah, let's focus on the Airship-specific bits in the few mins we have left (I need to steal one min at the end for another topic)
14:54:17 "deploy neutron openvswitch agent: ensure chart of openvswitch agent is deployed"
14:54:33 but i think in general, if you want to get traction in airship then storyboard will be the forum to do that in
14:54:47 (the same applies for osh, but i digress)
14:55:13 as not many of us are checking the Akraino wiki often, and it doesn't allow us to tie the work in with patchsets
14:55:22 ok, I agree that it should be in storyboard, I wanted to get a sanity check of the items first
14:55:56 for deployment of the OVS agent - from an Airship perspective, I think that item above is just a matter of supplying the right configuration to the helm chart via deployment manifests
14:56:10 Most of the Airship items on that list work today
14:56:31 `DPDK host config: mount hugepages` this is the area i expect you'll find most sticky
14:56:31 sthussey: even better
14:56:35 Yeah, they're mostly "configs" which are operator-specific
14:56:52 so georgek they may take the form of "as an operator, how do I ..."
which we can def help answer
14:57:16 I generally consider most things to be configuration issues, but I'd like to avoid missing major gaps in Airship that need to be addressed
14:57:24 for sure
14:57:36 and I haven't looked deep into the list, there may be enhancements to airship needed
14:57:48 i don't really see any tbh
14:57:55 even better :)
14:58:03 sorry to rush along, but one timely topic needed
14:58:13 #topic pre-pre-PTG planning
14:58:24 We need to reserve a room for Airship at the PTG in Denver
14:58:29 ok, so, we continue on the openstack-helm call, I suppose
14:58:31 seems like we were just in the PTG in Denver
14:58:40 that's what I'm thinking too georgek
14:59:12 Need to have a ROUGH headcount. Note I don't know how many folks my company will be able to send even, but broadly, who here suspects their company will be sending folks to the PTG?
14:59:41 Note PTG is the same week as the Open Infra Summit, so you can combine both events (or pick just one)
15:00:11 two for one, how can I say no
15:00:15 :)
15:00:34 well give it some thought guys, I'll try to ballpark a number and get it back to the OSF today
15:00:50 we're out of time! Thanks everyone for the discussion!
15:00:55 #endtopic
15:00:59 #endmeeting