21:00:50 #startmeeting swift
21:00:51 Meeting started Wed Mar 27 21:00:50 2019 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:54 The meeting name has been set to 'swift'
21:01:03 who's here for the swift meeting?
21:01:05 o/
21:01:10 o/
21:01:16 o/
21:01:22 o/
21:01:26 o/
21:01:59 agenda's pretty short
21:02:04 #link https://wiki.openstack.org/wiki/Meetings/Swift
21:02:11 hi
21:02:20 #topic Swift 2.21.0
21:02:29 woo!
21:02:49 yey
21:02:58 so we had a release! that'll be our stein release; stable branch has already been cut
21:03:54 a lot of great stuff went out in it; thanks to everyone for writing and reviewing patches, and just generally making swift better :-)
21:04:05 nice
21:04:33 #topic pyeclib release
21:05:09 it's been a bit since we did one of these
21:05:42 and i was thinking it might be good (by which i mean, it would make my life easier ;-)
21:06:08 authors/change log patch is up at https://review.openstack.org/#/c/647656/
21:06:18 things happen in swift for two reasons: 1) make timburke's life easier 2) for the lulz
21:06:51 if anyone has anything else they'd like to see for that, you should probably speak up in the next couple days
21:06:54 fwiw, it would also make my life easier
21:07:11 only pyeclib? liberasurecode too?
21:07:29 kota_, i hadn't thought that hard about it :-)
21:07:34 i just need pyeclib for now, but we still have the libec patch to review...
21:07:45 making life easier is a great reason for a new release :)
21:07:47 one sec, let me dig it up
21:07:53 ok
21:08:20 https://review.openstack.org/#/c/635605/
21:08:30 oh, yeah... zaitcev's patch might be good to release...
21:09:09 https://review.openstack.org/#/c/636556/
21:09:12 @timburke can we get them all on priority reviews between now and next meeting and plan to do the release the week after that (2 weeks from now-ish)?
21:10:47 clayg, i think the stuff i'm really interested in releasing is landed... tdasilva's right that we should really review the quadiron patches (since we invited them to submit them and all), but idk that it should keep us from doing a release sooner rather than later
21:11:09 was the aim to get a new tagged release packaged by downstream as stein?
21:11:34 nope; too late for that. this'd be part of train regardless at this point
21:12:12 i just want something pip-installable that fixes https://bugs.launchpad.net/pyeclib/+bug/1780320
21:12:13 Launchpad bug 1780320 in PyECLib "If find_library('erasurecode') in setup.py does not return a library version, try to append it" [Undecided,Fix released]
21:12:38 tdasilva: ok well at a minimum they have some pre-req "fix" patches that are probably good candidates for "priority" review along with zaitcev's crc32 fix thing?
21:12:40 yeah, if we need some patches for stein, then backporting to the stable/stein branch is needed anyway.
21:13:10 just realized that libec is already 1.6.0
21:13:20 neither pyeclib nor liberasurecode have stable branches, as i understand it
21:13:21 so land good code and cut a release then do backports - is anyone against the timeline of "on priority review this week; merge what we can next week; then cut a release"?
21:13:46 bam - stable branches are for scaredy cats
21:14:27 oh, the crc32 is already landed
21:14:38 fwiw, the lists of open patches are remarkably short: https://review.openstack.org/#/q/project:openstack/pyeclib+is:open https://review.openstack.org/#/q/project:openstack/liberasurecode+is:open
21:15:09 so do we have important patches outstanding that we could realistically land before a release? maybe timburke was just saying "new tag incoming"
21:15:27 i think the quadiron changes (and my prep-for-release patch) are the only "live" changes
21:15:30 🤦
21:15:31 then it's priority reviews until patches are landed or until we just want to cut a release,
be it next week or earlier (if things have landed)
21:15:57 clayg: i don't think we have anything so important that it needs to land before a release; agree quadiron could wait for the next release as it will take a bit of time to land
21:16:20 ok, good talk 👍
21:16:20 +1
21:16:46 so, on to updates!
21:16:50 also WTG everyone who's been doing work/review on py/libec! that's an amazingly short backlog
21:17:23 clayg, it's almost like we don't really think much about it ;-)
21:17:26 #topic losf update
21:17:51 kota_, rledisez: how's the feature branch going?
21:18:29 not much movement over the last few days. alecuyer was working on a patch to automatically clean empty volumes. he will start working again on replacing grpc with http next week
21:18:48 sorry, not so much from my side because of a business trip in the last week. I had asked Norio to push their docs; he has been preparing the patch, I think.
21:19:27 my plan is to work this week and next on setting up packaging and testing for that.
21:19:41 cool! eventlet-friendly losf seems good, as do more docs :-)
21:20:08 #topic py3 updates
21:20:40 looks like zaitcev isn't here today, but i know he's been pushing up patches for DLO recently
21:21:04 i've been spiking hard on getting some in-process functional tests running in the gate
21:21:17 timburke: nice work on that
21:21:20 for example, https://review.openstack.org/#/c/645895/
21:22:20 they'll have similar ratchets to the unit tests. currently targeting py37 (largely because that's what i've got as my default py3)
21:22:44 great!
21:22:46 excellent!
21:23:13 and mattoliverau's been working on getting staticweb running on py3!
21:23:35 just picked a middleware that hasn't been touched yet :)
21:24:43 i think that's about it for updates -- am i missing anything?
21:25:13 #topic open discussion
21:25:41 since we've got both rledisez and m_kazuhiro here, how about we bring up https://review.openstack.org/#/c/601950/ again?
21:26:40 does it need another rev - or is it good to go?
21:26:58 sure, give me a second to re-read the last answer of m_kazuhiro :)
21:27:07 looks like mostly docs
21:27:39 Yes. Another rev and rledisez's answer will be helpful for me.
21:28:29 feel free to squash the docs update into that patch if you want: https://review.openstack.org/#/c/616076
21:28:47 or we can land them one after the other, whatevs
21:28:58 i thought the object-expirer needed an internal-client so it couldn't use the object-server's config, because that config already has a pipeline section that points to the object-server app instead of the proxy app?
21:29:11 I guess we can ask opinions here: does it seem reasonable to decide the behavior of a daemon based on the name of the config file?
21:29:42 clayg, surely we could point to an internal-client.conf or something though, yeah?
21:29:48 mattoliverau: it looks like that patch already was squashed.
21:29:51 mattoliverau: I have already squashed the patch.
21:29:57 oh, great
21:30:01 never mind me then :P
21:30:30 i see some code that says "read_conf_for_queue_access" - but that's not on master, is it?
21:30:30 mattoliverau: appreciate you working on that :P
21:30:42 oh it is
21:32:14 rledisez, that does seem a little odd...
21:32:58 yeah i'm confused, having the object expirer use the internal-client.conf for the pipeline config seems absolutely brilliant tho...
21:33:22 why can't we just drive off the dequeue_from_legacy setting?
21:34:32 so it looks like this patch is not done - or at least all the core reviewers that look at it get confused really quickly, which isn't a great sign for "ready" - but maybe we need to do a better job of enumerating the complete set of issues which must be resolved if we want it to land
21:35:02 seems about right
21:35:17 AFAIK this is a pre-req for the generalized task queue which everyone thinks is a brilliant idea, so ... maybe I'll put it on my list for Friday?
21:35:18 anyone have anything else to bring up?
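[Editor's note: for context on the internal-client.conf idea discussed above, here is a minimal internal-client pipeline sketch, modeled on Swift's etc/internal-client.conf-sample. The exact filter set is illustrative; the point is that the pipeline ends at the proxy app rather than the object-server app, so a daemon like the expirer can reuse it safely.]

```ini
[DEFAULT]

[pipeline:main]
# ends at a proxy app, not object-server:app -- that's why the expirer
# can't just reuse the object-server's own config
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true

[filter:cache]
use = egg:swift#memcache

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:catch_errors]
use = egg:swift#catch_errors
```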
21:37:05 clayg: Thank you for your review questions; I'll add answers to the patch.
21:37:21 timburke: I hear this one person using s3api wants bulk delete to be faster/more async? is that right?
21:37:43 yup. so... i'm gonna be thinking about how to do that in a sane way
21:38:51 it's looking... messy. like, i'm debating about making some REPLICATE requests from the proxy to clear out listings earlier. not sure yet about how good any of my ideas are
21:39:12 timburke: that's interesting, even for non-s3. i used to pass a regex with DELETE that matched all entries of a container and create an object-expirer entry before removing the line from the container.
21:40:31 rledisez, that sounds rather similar to what i'm thinking about trying to do -- bulk-insert some expirer entries, then bulk-insert some tombstones...
21:40:48 how did you get around the expirer wanting to send x-if-delete-at?
21:41:45 timburke: it was 5 years ago (what, already?!) and we removed it from prod since, so I would have to check
21:42:28 eh, don't worry too much -- just curious. my current plan is to use a distinct content-type on the expirer queue entry
21:43:13 https://review.openstack.org/#/c/635040/ seems like a good thing, but maybe a little scary
21:43:23 (Include some pipeline validation during proxy-server start-up)
21:44:22 we've had a few bugs get reported that ultimately came down to badly-configured pipelines (or badly-behaved auto-insertion of required middlewares)
21:45:22 that patch tries to prevent such situations from arising, but may keep your proxy from starting on upgrade (with old configs)
21:45:58 yeah, I think we need to do something to ease pipeline issues. at least the major gotchas.
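[Editor's note: a toy sketch of the kind of start-up check the pipeline-validation patch above is about. The middleware names and ordering constraints here are illustrative assumptions, not Swift's actual rules or the patch's implementation.]

```python
# Sketch: fail fast at start-up if known middlewares appear out of order.
# REQUIRED_ORDER maps a middleware to the set of middlewares it must
# precede in the pipeline; these particular constraints are made up for
# illustration.
REQUIRED_ORDER = [
    ('catch_errors', {'proxy_logging', 'cache'}),
    ('proxy_logging', {'cache'}),
]


def validate_pipeline(pipeline):
    """Raise ValueError if known middlewares appear out of order.

    ``pipeline`` is a list of middleware names, left to right, ending
    at the app. Unknown names are ignored so operators can add their
    own filters freely.
    """
    positions = {name: i for i, name in enumerate(pipeline)}
    for name, must_precede in REQUIRED_ORDER:
        if name not in positions:
            continue
        for later in must_precede:
            if later in positions and positions[later] < positions[name]:
                raise ValueError(
                    '%s must come before %s in pipeline %r'
                    % (name, later, pipeline))


# a well-ordered pipeline passes silently
validate_pipeline(['catch_errors', 'proxy_logging', 'cache', 'proxy-server'])
```

The upside of a check like this is exactly the trade-off raised in the meeting: misconfigurations surface immediately instead of as subtle runtime bugs, at the cost of refusing to start with an old, previously tolerated config.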
21:47:24 https://review.openstack.org/#/c/645624/ addresses some issues with our lower-constraints job, which apparently wasn't testing what we thought it was testing
21:48:44 it involves a couple of dependency up-revs (for cryptography and netifaces, in particular), but i think they're old enough that it shouldn't really impact anyone?
21:49:45 if someone could take another look at that, i'd appreciate it -- i did enough to fix it up that i'm not sure i should be the one to +A
21:49:55 timburke: why can't you just +A that one - the infra guys all signed off - do you need someone else to load it into their head for any specific reason?
21:50:23 fine -- done :P
21:50:34 anyone have anything else?
21:51:09 I mean getting it on master is a great way to find out if it breaks "something else we don't know about" - and even then the remediation is the same: "figure out how to have both things work and merge the fix"
21:51:23 getting it on master *right after a release, I might say
21:52:55 all right, i think i'm calling it then
21:53:09 thanks for coming everyone, and thank you for working on swift!
21:53:19 #endmeeting