14:00:12 #startmeeting tc
14:00:13 Meeting started Thu Aug 8 14:00:12 2019 UTC and is due to finish in 60 minutes. The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:17 The meeting name has been set to 'tc'
14:00:19 #topic roll call
14:00:25 o/
14:00:26 ohai
14:00:57 o/
14:01:06 welcome tc-members :)
14:01:27 o/
14:01:38 welcome to you as well
14:01:59 ahoy
14:02:12 \o
14:02:25 o/
14:02:38 ok so i count at least 7 of us if my math is right
14:02:48 that is 7
14:02:53 and i think my math tells me that we're good
14:03:04 evrardjp was here earlier
14:03:10 I count 8
14:03:12 i have not had enough caffeine to math yet
14:03:30 heh, well let's get started.
14:03:34 #topic Follow up on past action items
14:03:38 #info fungi to add himself as TC liaison for Image Encryption popup team
14:03:44 i believe this was already done and addressed
14:03:49 o/
14:04:13 #link https://governance.openstack.org/tc/reference/popup-teams.html#image-encryption
14:04:24 like ragu, it's in there
14:04:35 * dhellmann slides in the back late
14:04:42 #link https://review.opendev.org/#/c/670370/
14:04:50 cool
14:04:54 #info fungi to draft a resolution on proper retirement procedures
14:05:03 this merged not long ago
14:05:05 #link https://review.opendev.org/#/c/670741/
14:05:31 and on our website
14:05:32 #link https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html
14:05:50 cool, i missed it getting approved
14:05:56 happened before your coffee :)
14:06:05 #topic Active initiatives
14:06:15 #info Python 3: mnaser to sync up with swift team on python3 migration
14:06:34 i believe that this is probably wrapped up, most of the patches are in and i think that swift is ok for py3? gmann mentioned something about this too
14:06:47 yeah py3 integration job is running fine on swift.
14:07:11 timburke also removed swift from disable-py3-repo list on devstack side.
14:07:15 http://replygif.net/i/417.gif
14:07:19 looks like it's also moving well -- https://review.opendev.org/#/q/topic:py3-func-tests+(status:open+OR+status:merged)
14:07:22 so that's awesome
14:07:38 #info mugsie to sync with dhellmann or release-team to find the code for the proposal bot
14:07:43 sorry this one wasn't written out nicely
14:07:44 I found it
14:07:57 and am working on it at the moment
14:08:20 ok cool, for context -- this is making sure that when we cut a branch, proposal bot automatically pushes up a patch to add the 'jobs' for that series
14:08:22 there was some work done in the goal tools repo already, so trying to not re-write the world
14:08:26 ..right?
14:08:26 yes
14:09:00 speaking of, we should add the task of defining versions to our TODO list
14:09:07 mugsie: some of the stuff from https://review.opendev.org/#/c/666934/ might help
14:09:16 'jobs'? for new python version, right? or stable?
14:09:25 gmann: i think the 'series' specific job templates
14:09:32 like openstack-python-train-jobs or whatever it's called now
14:09:34 for the openstack-python3-train-jobs templates
14:09:42 ok
14:10:12 ok, well that's progressing so we'll follow up on that.. we still have some time before the next release but it'd be nice to have it ready a little bit before
14:10:22 one difficulty in that might be that a few projects might need old py version testing, like charm-*
14:10:58 gmann: they can add custom ones as needed, these are just for a standard set
14:11:23 mugsie: yeah. adding new should be ok as long as we do not remove their old supported one
14:11:25 charm-* *should* be good, as we based the py3 version off the LTS py3 version for each distro
14:11:58 ok, we can discuss more of the impl. details in office hours :>
14:11:59 for example, they need to test and support py35
14:12:08 yeah. we can discuss later
14:12:12 =1
14:12:14 +1*
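
For context on the series-specific templates discussed above: the idea is that when a stable branch is cut, the proposal bot pushes a patch adding the matching template to each project's Zuul configuration. A minimal sketch of what such a patch might add, assuming the template name settles on openstack-python3-train-jobs as mentioned in the meeting (the surrounding project stanza and the check-requirements template are illustrative, not a definitive layout):

# Sketch of a project's .zuul.yaml after the proposal bot adds the
# series-specific template. The template name is the one mentioned
# above; treat the rest of the stanza as an assumed example.
- project:
    templates:
      - check-requirements
      - openstack-python3-train-jobs
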
14:12:43 #info Forum follow-up: ttx to organise Milestone 2 forum meeting with tc-members (done)
14:12:58 yeah so we raised it and etherpads were created
14:13:19 let me dig links
14:13:44 We only have one volunteer (jroll) for the programming committee
14:13:54 anyone else interested in the short list not going for reelection?
14:14:15 the proposed list was: asettle mugsie jroll mnaser ricolin, ttx and zaneb. (those that qualify)
14:15:00 any volunteers? :>
14:15:06 mnaser: is there a specific document for Forum topic ideas?
14:15:12 I can only find a PTG one
14:15:20 ttx: https://etherpad.openstack.org/p/PVG-TC-brainstorming ?
14:15:27 ok
14:15:30 I would like to do it, but not sure on time commitments - what were the requirements?
14:15:33 #link https://etherpad.openstack.org/p/PVG-TC-brainstorming
14:15:54 mugsie: Beyond encouraging people to submit proposals, the bulk of the selection committee work happens between the submission deadline (planned for Sept 16th) and the Forum program final selection (planned for Oct 7th).
14:16:32 you help select, refine, merge. But there aren't that many proposals so it's less work imho than a conference track chair
14:16:41 i did it last time, we accepted everything
14:16:44 OK, I don't have travel planned right now, so I should be OK for that
14:16:47 because there wasn't enough
14:16:56 yes, usually it's more about merging duplicates
14:16:58 i assume the situation will be similar this time
14:17:06 and deciding what is worth double sessions
14:17:25 i heard rumors we may not be able to accept every forum proposal this time around, but i don't really know what the capacity for it is
14:17:33 basically aligning the number of slots available with the proposals received
14:18:00 also it probably depends a bunch on how many sessions get proposed in the first place
14:18:08 fair enough, ok, so mugsie is a "maybe" and we can discuss that a tad bit more in office hours or over the ml
14:18:09 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008188.html
14:18:19 ++
14:18:30 #topic Make goal selection a two-step process (needs reviews at https://review.opendev.org/#/c/667932/)
14:18:36 I expect jimmy and Kendall to reach out soon for names
14:18:36 #undo
14:18:36 Removing item from minutes: #topic Make goal selection a two-step process (needs reviews at https://review.opendev.org/#/c/667932/)
14:18:40 #info Make goal selection a two-step process (needs reviews at https://review.opendev.org/#/c/667932/)
14:18:59 ttx count me in as a volunteer
14:19:00 Yeah this is still missing reviews, no standing -1
14:19:28 so please review so we can cross it out
14:19:54 it's been sitting around for a while so yeah
14:20:06 i will do that tomorrow
14:20:55 I really think we need to get this done long before the summit so we actually have time to sort and grow the proposal lists
14:21:15 good idea, well please let's go through it when you can then (but after we're done :))
14:21:20 #topic Attendance for leadership meeting during Shanghai Summit on 3 November
14:21:39 alan reached out to me about this
14:21:54 wondering who from the tc might be able to make it then (and i assume this is somewhat related to https://etherpad.openstack.org/p/PVG-TC-PTG)
14:22:00 I should be there unless my visa application goes wrong
14:22:06 #link https://etherpad.openstack.org/p/PVG-TC-PTG
14:22:16 I should be there
14:22:25 I expect to be there
14:22:25 * ricolin will definitely be there
14:22:27 is it safe to assume that anyone going to ptg will likely be at that leadership meeting?
14:22:33 probably
14:22:44 of people already on the TC, I'd say yes
14:22:50 ok, we have 5 names down
14:23:02 i will be there but did not add my name till election...
14:23:14 oh yes, that's happening
14:23:28 anyone know off the top of their head
14:23:31 when the election starts/ends
14:23:35 I was just going to mention that would be a thing...
14:23:44 I think nominations open in the end-of-August timeframe
14:23:46 nominations start on the 27th
14:23:49 https://governance.openstack.org/election/
14:23:56 TC Nominations
14:24:00 Aug 27, 2019 23:45 UTC
14:24:04 Sep 03, 2019 23:45 UTC
14:24:20 ouch so only by Sep 17, 2019 23:45 UTC can we really have a final tc list
14:24:21 Sep 17, 2019 23:45 UTC
14:24:25 Election end ^
14:24:31 yeah
14:24:44 that might be hard for those who are on the "i can go if i hold a role" thing
14:24:50 probably too late for people to join the leadership thing if they did not plan to
14:25:20 we should probably address that timeframe issue for the future
14:25:42 i expect to be in shanghai at the board/leadership meeting, but as my term is up i will refrain from listing myself as an attendee unless reelected
14:25:58 fair enough
14:26:07 To be fair, the leadership thing does not require everyone imho
14:26:11 mnaser: the usual approach has been to recommend that candidates be prepared to attend, but travel budgets aren't what they used to be
14:26:21 I've been advocating for the people who are there to represent the others
14:26:25 ttx makes a good point
14:26:33 yeah and also 1 month before the actual summit itself is hard for people in general
14:26:37 PTG is a much more important moment imho
14:26:39 esp. if there's a process like a visa or something
14:26:41 especially with the change in the nature of that meeting
14:26:53 if reelected, i'll do my best to represent the positions of other tc members who cannot attend
14:27:06 i.e. everyone should participate in drafting the message/position, and whoever can make it can represent
14:27:26 so i think at the end of the day, our message to alan will be: yes, the tc will have a presence at the leadership meeting
14:27:35 some presence
14:27:40 sounds right
14:27:52 #action mnaser to contact alan to mention that tc will have some presence at shanghai leadership meeting
14:27:56 we can have more precise numbers at the end of next month
14:28:16 cool, that sounds good to me
14:28:27 anyone have anything before moving on to the next topic?
14:29:32 ETIMEOUT
14:29:38 #topic Reviving Performance WG / Large deployment team into a Large scale SIG (ttx)
14:29:46 Yeah, so a couple of weeks ago I was in Japan visiting some large OpenStack users
14:29:56 Yahoo! Japan for example, which runs 160+ clusters totalling 80k hypervisors and 60PB of storage
14:30:07 Or LINE, which *tripled* its OpenStack footprint over the last year alone, reaching 35k VMs (CERN's level)
14:30:17 wow, that's awesome
14:30:17 In those discussions there was a common thread, which is the need to improve scalability
14:30:44 What's even more awesome is that they run those with pretty small teams
14:30:50 It's currently hard to go beyond a certain size (500-1000 hypervisors / cluster), and those users would love to
14:31:02 They cited RabbitMQ starting to fall apart, API responses to things like listing VMs getting too slow
14:31:08 ttx: was that a typo? reasonably confident CERN has more than 35k VMs :)
14:31:12 Obviously I tried to push them to invest upstream in that specific area
14:31:25 ++
14:31:26 zaneb: I'm pretty sure they are not. 36k VMs was the last count
14:31:26 +1
14:31:53 ttx: mriedem actually raised this on the mailing list a few days ago
14:31:53 oh, ok
14:31:54 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008204.html
14:32:02 but they also run Magnum clusters which might not be included
14:32:16 anyway -- I realized I had nowhere to really point them to
14:32:18 We used to have a bunch of groups tackling that "large scale" angle
14:32:27 We had the "Performance" team which was formed around Rally and osprofiler, but died in The Big Mirantis Shakedown
14:32:38 We have the "Large Deployments" team on the UC side, but afaict it has been inactive since 2015
14:32:48 It feels like we need a place to point people interested in openly collaborating to tackle that specific "large scale" angle
14:32:54 Do you think a "Large scale" SIG could make sense?
14:33:00 (assuming we clean up the remnants of the former teams)
14:33:08 I think it does, as long as people actually show up for it
14:33:10 * mnaser looks at current list of sigs
14:33:13 it seems like it makes sense, but are there people to join the sig and do the work?
14:33:46 I feel like it's easier to point people to a thing that is forming (Large scale SIG), than to a thing that is dead (Large Deployments team)
14:33:48 I suspect it could, because the larger operators I think of are the scientific operators, and commercial operators may not realize the scale the scientific folks tend to operate at
14:34:04 yeah, that is an important point, to get the volunteers first
14:34:06 ttx any chance you could mention this SIG idea to LINE or Yahoo JP? Just wondering what they think about this
14:34:14 yeah. sigs are cheap anyway, so if it fails to get traction we can spin it back down
14:34:17 or even to ask them, as their first contribution, to set up a SIG
14:34:23 tbc, my email started from a conversation in -nova with eandersson (blizzard)
14:34:39 who is, last i checked, not a scientist
14:34:44 ricolin: yes -- I just wanted to check the idea with y'all before pushing
14:34:52 Yahoo and LINE are not scientists
14:34:58 we're all scientists here ;)
14:35:06 Yahoo Japan I should say, different from Yahoo Inc
14:35:07 ok so it makes sense to have something separate for it
14:35:21 I just wanted to gut-check that it was not a stupid idea
14:35:22 * mnaser would be +2 on a change that is proposed to create a sig from their part
14:35:36 ttx it's a great idea IMO :)
14:35:48 * jroll would also +2 that
14:35:50 Like Yahoo Japan was talking of running new benchmarks on oslo.messaging backends
14:35:57 worst case nobody joins and we're in the same spot
14:36:08 I'd love it if they did it as part of that new group
14:36:10 +1, having something in an active state can be very useful for other orgs also.
14:36:27 I'll try to compile a list of orgs that may be interested in participating
14:36:32 ttx: should i make that an action for you to reach out to them and contact them?
14:36:44 Let's see if we can get some momentum around that. If not, that's not a big cost
14:37:01 i guess one other example is the (presumably defunct) LCOO "large contributing openstack operator" working group
14:37:03 mnaser: yes sure! Anyone else interested in helping?
14:37:12 fungi: yeah I tried not to mention that one
14:37:18 heh, fair
14:37:28 seemed more like an excuse to create a bureaucracy
14:37:28 ttx: your plan is to make it immediately? or propose the idea at the shanghai forum and see the response and volunteers?
14:37:29 ttx I will go update some SIG guideline docs so this process might be easier for a new SIG like this
14:37:41 you can tag me on, i can help being there and sharing operator knowledge but i don't know if i have a ton of bandwidth to 'run' the sig itself
14:37:42 gmann: in time to get people together in Shanghai, for sure
14:38:29 if we start this SIG early, like this or next week, it can propose its own PTG schedule
14:38:36 #action ricolin update sig guidelines to simplify process for new sigs
14:38:50 The whole story is also a useful reminder that we have lots of users out there, mostly invisible... and we really need to bridge that gap and get them involved
14:39:03 I'll try to recruit some folks from verizon media to work with the sig as well, we're getting to a point where we might have some people-time to contribute
14:39:06 I see this SIG as a way to make it win-win
14:39:17 #action ttx contact interested parties in a new 'large operators' sig (help with mnaser, jroll reaching out to verizon media)
14:39:23 i think the hardest part is getting someone to take care of the logistics
14:39:45 mnaser: I said "Large scale", not "Large operators", because I feel like it's a slightly different concern
14:39:48 people will show up and talk but the whole note keeping / scheduling / running things is where people might disappear and not do it
14:39:50 #undo
14:39:51 Removing item from minutes: #action ttx contact interested parties in a new 'large operators' sig (help with mnaser, jroll reaching out to verizon media)
14:39:56 scalability sig ;)
14:40:00 #action ttx contact interested parties in a new 'large scale' sig (help with mnaser, jroll reaching out to verizon media)
14:40:03 You can be happily operating smaller clusters
14:40:11 this is about scaling cluster size
14:40:25 and pushing things like cells in other projects
14:40:44 pushing the limits basically
14:41:02 I agree the overlap with large operators is probably very significant
14:41:07 * mnaser plays 'push it to the limit'
14:41:12 but yeah, i agree, i think that's very useful
14:41:29 It's a bit more.. targeted than just sharing operations woes between large operators
14:42:04 anyway, thanks for helping me gut-check if that was a good idea
14:42:15 it's also a good place to target the Ironic power sync issue at large scale, or co-work on that issue with the baremetal SIG :)
14:42:40 cool, with that we've gone through most of our topics.
14:42:43 #topic other discussions
14:42:54 anyone (tc or not) have anything that wasn't on our agenda and isn't office-hours-y?
14:42:56 ICYMI, Cinder is about to remove half of their drivers because they did not update to Py3
14:43:02 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008275.html
14:43:08 OSF is looking at pulling a few more strings to see if that triggers any last-minute save, but I'm not very optimistic
14:43:08 wow, that much?
14:43:23 i didn't click but didn't know there were that many
14:43:36 mnaser: that is how I interpret that etherpad
14:43:47 yeah, I am trying to contact the NEC driver team to migrate to py3
14:44:03 this isn't something the community can help with, right?
14:44:08 because the CI inherently is just running py2
14:44:09 mnaser: not really
14:44:13 main thing is CI access
14:44:35 even as an NEC contributor I do not have their CI access so cannot help on that
14:44:48 there might be some extra review work on Cinder if driver teams suddenly wake up
14:45:33 but otherwise it's mostly about knocking at every door we know of
14:45:39 With ironic, I had to explicitly go to each 3rd party CI and ask for them to plan and account for switching approximately half their jobs to py3. It took some leg work, but most everyone was responsive....
14:46:16 sounds like a bunch of the cinder driver maintainers/ci operators are just not responsive at all
14:46:16 Essentially it was "knocking on every door"
14:46:26 TheJulia: yeah, maybe that approach was not doable with Cinder
14:46:33 let's remember openstack-discuss is huge traffic, it might just not be visible
14:46:49 does the user survey capture cinder drivers used?
14:46:49 I suspect people will show up and complain when the patch goes up to remove it
14:46:49 well, jay reached out to them all individually, he said
14:47:00 Jay did send emails to the contact emails he had
14:47:05 and something like half never replied at all
14:47:11 jungleboyj has tried emailing all the contact info listed in the third party CI wiki, but apparently the info there is very out of date or black hole addresses.
14:47:24 which may point to outdated contact info, but in the end same result
14:47:35 we do also have a third-party ci announcements ml we recommend they all subscribe to
14:47:38 What about the last people to edit the files?
14:47:52 That's why we are pulling contact strings we have for OSF member companies
14:48:09 those are likely still active and may trigger a response
14:48:18 i think that's probably the best way to move forward
14:48:56 +1
14:48:58 My take is that the removal of some of these might be a good thing. And for the others, maybe a good wake up call to get them to know that they can't just put out driver code and assume they are done if they want to stay up to date.
14:49:16 it's a bit of a dead-man switch, yes
14:49:22 smcginnis, agree
14:49:39 periodic overhauls have a tendency to shake out what's not actually being maintained
14:49:50 if it does get pulled and some vendor realizes this once the release is out
14:50:01 is it possible that cinder uses an 'out of tree' driver?
14:50:12 as a stop gap till it makes it again in the upcoming release?
14:50:18 there are scads of oot drivers for cinder, if memory serves
14:50:31 mnaser: Customers are always able to use out of tree drivers and we do have some vendors that prefer that route versus being upstream.
14:50:41 the part where we find out which drivers aren't maintained is good. the part where there are a lot of drivers not really being maintained is not good
14:50:58 i am just trying to think of the operators/users that can at least work on a temporary route till they add support again or whatnot
14:51:30 Yep, that would be a valid option for them.
14:52:03 marking them unsupported and warning on multiple platforms (ML, newsletter, etc.) can be good before removing.
14:52:17 ok so as long as it's workaround-able for our users, i'm happy with removing them
14:52:19 ttx: should we add it to the newsletter if it's not too late?
14:52:27 also the point at which vendors stop caring about particular hardware or platforms is the point at which those that are still popular may see new grassroots support teams form around them from their users
14:52:36 i dunno if we wanna use our newsletter to 'shame' those who aren't maintaining things :p
14:52:37 gmann: it's really too targeted of a message for the newsletter imho
14:52:38 fungi: ++
14:52:54 (or might just not know that they're out of date because someone forgot to update a contact email)
14:53:03 anyways
14:53:14 numerous drivers in the linux kernel are not maintained by vendors or commercial integrators, but by users who want their hardware to keep working
14:53:15 I think it's healthy for us to be encouraging out of tree drivers
14:53:18 At some point, you just have to remove them though. You can warn and try to raise red flags again, but if people are not maintaining them, it is better to remove them... no matter how painful it feels for the project leaders.
14:53:28 TheJulia: ++
14:53:37 wanted to leave a bit more time for any other topics community or tc members had that weren't office-hour-y?
14:53:51 agreed, just want to do a bit more due diligence with people we have contacts with
14:54:12 try to catch the 5% who accidentally overlooked it
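
For background on what the py3 switch asks of a third-party CI (per TheJulia's ironic experience above): for devstack-based jobs it largely comes down to flipping the interpreter in the job's devstack settings. A minimal, hypothetical sketch follows -- the job and parent names are invented for illustration; USE_PYTHON3 is the devstack flag being referred to:

# Hypothetical third-party CI job definition, sketched to show the kind
# of change vendors were being asked to make: run the same devstack job
# under Python 3 via devstack's USE_PYTHON3 setting.
# "vendor-cinder-driver-dsvm" and its py3 variant are illustrative names.
- job:
    name: vendor-cinder-driver-dsvm-py3
    parent: vendor-cinder-driver-dsvm
    description: Run the vendor driver CI job under Python 3.
    vars:
      devstack_localrc:
        USE_PYTHON3: true
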
14:54:17 in other topics, new openstack security advisory this week
14:54:22 #link https://security.openstack.org/ossa/OSSA-2019-003.html
14:54:26 ttx: ++
14:54:31 not the 95% who did not pay attention since rocky
14:54:43 ttx: not sure if you've checked the announce ml moderation queue, it may be hung up in there for the past couple days
14:55:10 checking
14:55:21 ossa-2019-003 is also an interesting test case for our extended maintenance model
14:55:32 mriedem made patches all the way back to stable/ocata
14:55:33 I don't get notified so
14:55:58 fungi: done
14:56:04 thanks ttx!
14:56:13 fungi: you might want to add yourself to that one and be able to clear OSSAs
14:56:26 aren't we still supposed to merge things to extended maintenance branches?
14:56:37 ttx: happy to, thanks for the invitation
14:56:47 * jroll notes that ocata isn't merged
14:56:55 i'm personally curious to see how long it takes to get changes merged to some of the older stable branches, particularly how viable the ci jobs still are
14:56:58 i think the topics are slowly moving towards office hour-y things so i'll close us up :)
14:57:06 we can carry this conversation onto office hours
14:57:08 thanks mnaser!
14:57:08 thanks everyone!
14:57:11 #endmeeting