Wednesday, 2019-07-24

*** yamamoto has joined #openstack-meeting00:02
*** brinzhang has joined #openstack-meeting00:03
*** iyamahat has joined #openstack-meeting00:05
*** brinzhang_ has quit IRC00:06
*** iyamahat has quit IRC00:11
*** _hemna has joined #openstack-meeting00:15
*** ianychoi has quit IRC00:18
*** bbowen has quit IRC00:19
*** bbowen has joined #openstack-meeting00:19
*** ianychoi has joined #openstack-meeting00:20
*** iyamahat has joined #openstack-meeting00:21
*** martial has quit IRC00:22
*** gyee has quit IRC00:31
*** iyamahat has quit IRC00:36
*** mattw4 has quit IRC00:37
*** tetsuro has joined #openstack-meeting00:40
*** jamesmcarthur has joined #openstack-meeting00:52
*** jamesmcarthur has quit IRC00:58
*** Liang__ has joined #openstack-meeting00:59
*** ricolin has joined #openstack-meeting00:59
*** yamamoto has quit IRC01:04
*** igordc has quit IRC01:04
*** _hemna has quit IRC01:14
*** eharney has quit IRC01:15
*** jamesmcarthur has joined #openstack-meeting01:20
*** ayoung has quit IRC01:21
*** hongbin has joined #openstack-meeting01:29
*** hongbin_ has joined #openstack-meeting01:32
*** hongbin has quit IRC01:33
*** _erlon_ has quit IRC02:05
*** hongbin_ has quit IRC02:06
*** brinzhang_ has joined #openstack-meeting02:08
*** yamamoto has joined #openstack-meeting02:11
*** brinzhang has quit IRC02:12
*** jamesmcarthur has quit IRC02:16
*** jamesmcarthur has joined #openstack-meeting02:17
*** jamesmcarthur has quit IRC02:21
*** jamesmcarthur has joined #openstack-meeting02:22
*** hongbin has joined #openstack-meeting02:23
*** jamesmcarthur has quit IRC02:24
*** jamesmcarthur has joined #openstack-meeting02:24
*** tetsuro has quit IRC02:29
*** jamesmcarthur has quit IRC02:32
*** dklyle has quit IRC02:34
*** david-lyle has joined #openstack-meeting02:35
*** diablo_rojo has joined #openstack-meeting02:36
*** diablo_rojo has quit IRC02:58
*** tetsuro has joined #openstack-meeting03:01
*** tetsuro_ has joined #openstack-meeting03:06
*** tetsuro has quit IRC03:08
*** tetsuro_ has quit IRC03:11
*** mattw4 has joined #openstack-meeting03:21
*** psachin has joined #openstack-meeting03:31
*** dviroel has quit IRC03:33
*** hongbin has quit IRC03:33
*** mattw4 has quit IRC03:37
*** mattw4 has joined #openstack-meeting03:43
*** hongbin has joined #openstack-meeting03:47
*** armax has quit IRC03:50
*** mattw4 has quit IRC03:51
*** whoami-rajat has joined #openstack-meeting03:52
*** igordc has joined #openstack-meeting03:58
*** igordc has quit IRC04:01
*** hongbin has quit IRC04:06
*** apetrich has quit IRC04:20
*** pcaruana has joined #openstack-meeting04:27
*** shubham_potale has left #openstack-meeting04:33
*** tetsuro has joined #openstack-meeting04:35
*** brault has joined #openstack-meeting04:36
*** brault has quit IRC04:37
*** jaypipes has joined #openstack-meeting04:49
*** Luzi has joined #openstack-meeting04:55
*** tetsuro has quit IRC04:58
*** pcaruana has quit IRC05:12
*** boxiang has joined #openstack-meeting05:25
*** pcaruana has joined #openstack-meeting05:25
*** kopecmartin|off is now known as kopecmartin06:11
*** baojg has joined #openstack-meeting06:26
*** baojg has quit IRC06:27
*** tetsuro has joined #openstack-meeting06:28
*** baojg has joined #openstack-meeting06:28
*** jawad_axd has joined #openstack-meeting06:29
*** rsimai has joined #openstack-meeting06:44
*** brinzhang has joined #openstack-meeting06:48
*** tetsuro has quit IRC06:54
*** nnsingh has quit IRC07:01
*** rcernin has quit IRC07:04
*** tesseract has joined #openstack-meeting07:05
*** slaweq has joined #openstack-meeting07:10
*** _hemna has joined #openstack-meeting07:12
*** _pewp_ has quit IRC07:12
*** _pewp_ has joined #openstack-meeting07:13
*** _hemna has quit IRC07:16
*** dmacpher has joined #openstack-meeting07:21
*** ttsiouts has joined #openstack-meeting07:25
*** brault has joined #openstack-meeting07:30
*** brault has quit IRC07:30
*** tssurya has joined #openstack-meeting07:39
*** ttsiouts has quit IRC07:43
*** ttsiouts has joined #openstack-meeting07:44
*** ttsiouts has quit IRC07:48
*** ralonsoh has joined #openstack-meeting07:53
*** e0ne has joined #openstack-meeting07:58
*** belmoreira has joined #openstack-meeting07:59
*** apetrich has joined #openstack-meeting08:05
*** imsurit has joined #openstack-meeting08:17
*** ttsiouts has joined #openstack-meeting08:18
*** priteau has joined #openstack-meeting08:19
*** tetsuro has joined #openstack-meeting08:30
*** belmoreira has quit IRC08:34
*** imsurit_ofc has joined #openstack-meeting08:41
*** tetsuro has quit IRC08:42
*** priteau has quit IRC08:43
*** imsurit has quit IRC08:43
*** priteau has joined #openstack-meeting08:44
*** priteau has quit IRC08:44
*** imsurit_ofc has quit IRC08:46
*** altlogbot_3 has quit IRC08:51
*** irclogbot_2 has quit IRC08:51
*** altlogbot_0 has joined #openstack-meeting08:52
*** irclogbot_2 has joined #openstack-meeting08:52
*** belmoreira has joined #openstack-meeting09:06
*** lpetrut has joined #openstack-meeting09:07
*** ttsiouts has quit IRC09:21
*** ttsiouts has joined #openstack-meeting09:22
*** Liang__ has quit IRC09:25
*** ttsiouts has quit IRC09:27
*** priteau has joined #openstack-meeting09:31
*** ttsiouts has joined #openstack-meeting09:32
*** boxiang has quit IRC09:42
*** lpetrut has quit IRC09:47
*** jamesmcarthur has joined #openstack-meeting09:49
*** yamamoto has quit IRC09:51
*** apetrich has quit IRC09:56
*** jamesmcarthur has quit IRC09:58
*** jamesmcarthur_ has joined #openstack-meeting09:58
*** trident has quit IRC10:06
*** trident has joined #openstack-meeting10:08
*** lpetrut has joined #openstack-meeting10:09
*** ttsiouts has quit IRC10:09
*** ttsiouts has joined #openstack-meeting10:10
*** brinzhang has quit IRC10:10
*** jamesmcarthur_ has quit IRC10:11
*** yamamoto has joined #openstack-meeting10:13
*** ttsiouts has quit IRC10:14
*** boxiang has joined #openstack-meeting10:18
*** carloss has joined #openstack-meeting10:49
*** ttsiouts has joined #openstack-meeting10:59
*** yamamoto has quit IRC11:07
*** boxiang has quit IRC11:10
*** boxiang has joined #openstack-meeting11:11
*** yamamoto has joined #openstack-meeting11:14
*** Lucas_Gray has joined #openstack-meeting11:15
*** yamamoto has quit IRC11:15
*** yamamoto has joined #openstack-meeting11:17
*** Lucas_Gray has quit IRC11:21
*** Lucas_Gray has joined #openstack-meeting11:24
*** bdperkin has quit IRC11:32
*** irclogbot_2 has quit IRC11:33
*** raildo has joined #openstack-meeting11:34
*** belmoreira has quit IRC11:34
*** irclogbot_2 has joined #openstack-meeting11:35
*** bdperkin has joined #openstack-meeting11:38
*** yamamoto has quit IRC11:42
*** belmoreira has joined #openstack-meeting11:46
*** apetrich has joined #openstack-meeting11:49
*** yamamoto has joined #openstack-meeting11:55
*** yamamoto has quit IRC11:55
*** yamamoto has joined #openstack-meeting12:10
*** yamamoto has quit IRC12:14
*** dviroel has joined #openstack-meeting12:20
*** yamamoto has joined #openstack-meeting12:21
*** mriedem has joined #openstack-meeting12:35
*** ttsiouts has quit IRC12:36
*** ttsiouts has joined #openstack-meeting12:36
*** ttsiouts has quit IRC12:41
*** yamamoto has quit IRC12:43
*** yamamoto has joined #openstack-meeting12:44
*** coreycb has quit IRC12:45
*** coreycb has joined #openstack-meeting12:47
*** eharney has joined #openstack-meeting12:49
*** belmoreira has quit IRC12:54
*** liuyulong has joined #openstack-meeting12:56
*** e0ne has quit IRC12:57
*** e0ne has joined #openstack-meeting12:57
*** yamamoto has quit IRC13:01
*** jawad_axd has quit IRC13:01
*** ttsiouts has joined #openstack-meeting13:12
*** artom has quit IRC13:25
*** yamamoto has joined #openstack-meeting13:28
*** ttsiouts has quit IRC13:28
*** ttsiouts has joined #openstack-meeting13:29
*** ttsiouts has quit IRC13:33
*** brault has joined #openstack-meeting13:39
*** liuyulong has quit IRC13:41
*** enriquetaso has joined #openstack-meeting13:42
*** ttsiouts has joined #openstack-meeting13:45
*** apetrich has quit IRC13:47
*** ayoung has joined #openstack-meeting13:52
*** yamamoto has quit IRC13:53
*** armax has joined #openstack-meeting13:54
*** jamesmcarthur has joined #openstack-meeting13:55
*** liuyulong has joined #openstack-meeting13:55
<liuyulong> test  13:55
<liuyulong> test  13:56
*** apetrich has joined #openstack-meeting  13:56
<liuyulong> #startmeeting neutron_l3  14:00
<openstack> Meeting started Wed Jul 24 14:00:04 2019 UTC and is due to finish in 60 minutes.  The chair is liuyulong. Information about MeetBot at http://wiki.debian.org/MeetBot.  14:00
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  14:00
*** openstack changes topic to " (Meeting topic: neutron_l3)"  14:00
<openstack> The meeting name has been set to 'neutron_l3'  14:00
*** belmoreira has joined #openstack-meeting  14:00
<slaweq> hi  14:00
<liuyulong> hi  14:02
<liuyulong> #topic Announcements  14:02
*** openstack changes topic to "Announcements (Meeting topic: neutron_l3)"  14:02
<njohnston> o/  14:02
*** liuyulong_ has joined #openstack-meeting  14:03
<liuyulong_> Any announcements?  14:06
<liuyulong_> OK, let's move on.  14:06
<liuyulong_> #topic Bugs  14:06
*** liuyulong has quit IRC  14:06
*** liuyulong_ is now known as liuyulong  14:06
<liuyulong> #topic Bugs  14:07
*** openstack changes topic to "Bugs (Meeting topic: neutron_l3)"  14:07
*** wwriverrat has joined #openstack-meeting  14:07
<liuyulong> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007952.html  14:07
<liuyulong> Hongbin was our bug deputy last week, thanks.  14:07
<liuyulong> IMO, it is a quiet week for L3, (we are in neutron_l3 meeting) : )  14:07
*** brault has quit IRC  14:07
<liuyulong> So today, I will re-raise some old bugs. And I've reset some bugs with a higher level, because it has been submitted for a really long time.  14:08
<liuyulong> (Maybe I should change the bug level more higher if it still does not have much activities. LOL)  14:08
<liuyulong> #link https://bugs.launchpad.net/neutron/+bug/1826695  14:09
<openstack> liuyulong: Error: Could not gather data from Launchpad for bug #1826695 (https://launchpad.net/bugs/1826695). The error has been logged  14:09
<liuyulong> What happened?  14:09
<liuyulong> The bug title is "[L3][QoS] cache does not removed when router is down or deleted"  14:10
<liuyulong> The fix is here:  14:10
<liuyulong> https://review.opendev.org/#/c/656105/  14:10
<slaweq> I would say opposite - if bug is there for long time and nobody really cares about it, we should IMO decrease its priority :)  14:10
*** liuyulong_ has joined #openstack-meeting  14:10
* njohnston thinks Launchpad is having issues  14:11
<ralonsoh> njohnston, I can't load anything  14:12
<liuyulong> slaweq, until someday nobody care about the entire project? LOL  14:12
<slaweq> liuyulong: who knows :)  14:13
<liuyulong> Next  14:13
<liuyulong> #link https://bugs.launchpad.net/neutron/+bug/1811352  14:14
<openstack> liuyulong: Error: Could not gather data from Launchpad for bug #1811352 (https://launchpad.net/bugs/1811352). The error has been logged  14:14
*** Luzi has quit IRC  14:14
<liuyulong> openstack, all right, I know!  14:14
<liuyulong> We need this for Shanghai related topic:  14:14
<liuyulong> https://review.opendev.org/#/c/650062/  14:14
<liuyulong> The CLI patch is here ^^  14:14
<liuyulong> The progress is a bit slow. All OSC core reviewers have been added to that patch. : (  14:17
* tidwellr wanders in late and lurks  14:17
*** lennyb has quit IRC  14:18
<liuyulong> But it's OK, we can tag it locally and install it for the demo.  14:18
<liuyulong> Next one: #link https://bugs.launchpad.net/neutron/+bug/1609217  14:18
<openstack> liuyulong: Error: Could not gather data from Launchpad for bug #1609217 (https://launchpad.net/bugs/1609217). The error has been logged  14:18
<slaweq> liuyulong: do You have any presentation about port forwarding in Shanghai?  14:19
<liuyulong> This is really an old one, the title is "DVR: dvr router ns should not exist in scheduled DHCP agent nodes"  14:19
<liuyulong> The fix is here, it adds a new config for cloud deployment: https://review.opendev.org/#/c/364793/  14:19
<liuyulong> slaweq, yes, mlavalle submitted a topic.  14:20
<slaweq> good to know :)  14:20
<slaweq> thx for info  14:20
<liuyulong> I will not repeat the reason of the fix, if you are interested in this bug, this will be the full scenarios I added before:  14:21
<liuyulong> https://review.opendev.org/#/c/364793/3//COMMIT_MSG  14:21
<liuyulong> It makes large scale deployment really happy.  14:21
<liuyulong> Next  14:22
<liuyulong> #link https://bugs.launchpad.net/neutron/+bug/1813787  14:22
<openstack> liuyulong: Error: Could not gather data from Launchpad for bug #1813787 (https://launchpad.net/bugs/1813787). The error has been logged  14:22
<liuyulong> The bug title is "[L3] DVR router in compute node was not up but nova port needs its functionality"  14:23
<liuyulong> The main fix is here: https://review.opendev.org/#/c/633871/  14:23
<liuyulong> We already have some related fix, but not aim to the root cause. This one is one approach.  14:24
<liuyulong> We have run such code locally for a long time. It acts good.  14:24
<liuyulong> Next #link https://bugs.launchpad.net/neutron/+bug/1825152  14:25
<openstack> liuyulong: Error: Could not gather data from Launchpad for bug #1825152 (https://launchpad.net/bugs/1825152). The error has been logged  14:25
*** jamesmcarthur has quit IRC  14:25
*** ayoung has quit IRC  14:25
<liuyulong> The title is "[scale issue] the root rootwrap deamon causes l3 agent router procssing very very slow"  14:25
<liuyulong> These two config options really hurt the performance: `use_helper_for_ns_read=` and `root_helper_daemon=`.  14:26
<liuyulong> The fix https://review.opendev.org/#/c/653378/ just set it to False by default, since we should set the more proper value for the widely used distro.  14:26
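For reference, a minimal l3_agent.ini fragment showing the two options under discussion. The section name and values here are a sketch from memory of the neutron agent config layout, not taken from the patch itself; verify against your release before copying:

```ini
[agent]
# Whether to invoke the root helper when reading namespaces. The fix in
# review 653378 proposes False as the default; only deployments where the
# agent user cannot read namespaces directly (e.g. some XEN-based setups,
# per ralonsoh above) need it enabled.
use_helper_for_ns_read = False

# Command used to run a long-lived rootwrap daemon instead of spawning
# rootwrap per call; its behavior at scale is the subject of bug 1825152.
root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
```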
<slaweq> about this one I still don't agree that we should change default value which can possibly break some deployments during upgrade  14:27
<liuyulong> Yes, another large scale issue. And we also have a nice performance improvement locally.  14:27
<slaweq> IMO this should be well documented what to do to potentially improve performance here  14:28
<slaweq> but IMO changing default value isn't good solution  14:28
*** mriedem has quit IRC  14:29
<ralonsoh> slaweq, that's the point, this is a potential issue in some environments  14:29
<liuyulong> slaweq, thanks for the advice  14:29
<liuyulong> I will update the doc  14:30
<slaweq> thx  14:30
<liuyulong> But may I know the real distro which relies on this? XEN?  14:30
<ralonsoh> not only XEN but environments where the user can't access the namespaces  14:31
<liuyulong> OK, last one  14:31
*** mattw4 has joined #openstack-meeting  14:31
<liuyulong> #link https://bugs.launchpad.net/neutron/+bug/1828494  14:32
<openstack> liuyulong: Error: Could not gather data from Launchpad for bug #1828494 (https://launchpad.net/bugs/1828494). The error has been logged  14:32
<slaweq> liuyulong: TBH I don't know - maybe if You want to change default value You can start thread on ML to ask other operators/distro maintainers who can potentially be hurt by this change and maybe we can change it in the future  14:32
<liuyulong> The title is "[RFE][L3] l3-agent should have its capacity"  14:32
<liuyulong> It is a RFE  14:32
<liuyulong> slaweq, OK, thank you : )  14:33
<liuyulong> The spec "L3 agent capacity and scheduling"  14:33
<liuyulong> https://review.opendev.org/#/c/658451/  14:33
<slaweq> this needs to be discussed by drivers team first  14:34
<liuyulong> And the ready to review code:  14:34
<liuyulong> https://review.opendev.org/#/c/661492/  14:34
<liuyulong> slaweq, yes  14:34
*** zaneb has joined #openstack-meeting  14:35
<slaweq> but I'm also not sure if this is good idea  14:35
<liuyulong> But I do not get a slot in almost 3 months. : )  14:35
<slaweq> it sounds for me a bit like implementing placement in neutron  14:35
<ralonsoh> I had the same impression  14:35
<liuyulong> slaweq, why?  14:36
<ralonsoh> all resource tracking should be done in the placement  14:36
<ralonsoh> not in the projects  14:36
<ralonsoh> to centralize the information  14:36
<slaweq> liuyulong: because generally placement is used to get reports about resources, track usage and propose candidates for place new services based on some criteria  14:36
<ralonsoh> for example: the router BW  14:36
<slaweq> I can understand that You want to do something in easiest and fastest possible way but IMO it's not good idea - maybe we should instead try to integrate this with placement  14:37
<slaweq> and don't get me wrong - I'm just asking questions to think about it :)  14:38
<liuyulong> That just make things complicated. Nova scheduler already hurt by it from our colleagues complaints  14:38
<ralonsoh> this is not nova scheduler  14:38
<ralonsoh> actually nova scheduler is being deprecated  14:39
<liuyulong> I mean nova scheduler has been hurt by placement...  14:39
<slaweq> also, it's not trivial thing to report resources and decide how many bandwidth You have available  14:39
<liuyulong> It makes nova team refactor and refactor  14:39
<slaweq> one host can be connected to various physical networks  14:39
<slaweq> You can have router which will later have interfaces on networks which uses different physical nets  14:40
<liuyulong> slaweq, yes, this is a good point, and can be easy to implement  14:40
<slaweq> how You want to choose this bandwidth during router creation?  14:40
<liuyulong> This is scheduler mechanism for router, yes  14:41
*** psachin has quit IRC  14:41
*** dmacpher has quit IRC  14:41
*** belmoreira has quit IRC  14:42
<liuyulong> random choice, and minimum quantity scheduling, is not good enough  14:42
*** ricolin_ has joined #openstack-meeting  14:42
<slaweq> I will read this spec once again in this week  14:42
<slaweq> and will write my comments there  14:42
<liuyulong> You can not say your L3 agent has an unlimited capacity  14:43
<slaweq> but IMO there is many cases there which may be hard to deal with  14:43
<liuyulong> But you have no way to prevent the router creating on that, until someday, boom...  14:43
<slaweq> also if You want this rfe to be discussed in drivers meeting, please ping mlavalle about that  14:43
<liuyulong> Your host dies, and your customer complains again, : )  14:44
*** ricolin has quit IRC  14:45
*** david-lyle is now known as dklyle  14:45
<slaweq> but with this change You will end up with no space on network nodes where there will be many routers which are doing nothing  14:45
<slaweq> and your customer will complain due to error while creation of router :)  14:45
<liuyulong> A non-existent resource error is easy to explain.  14:46
<liuyulong> A data-plane down means you may pay money for it.  14:46
<slaweq> so You can use https://github.com/openstack/neutron/blob/master/neutron/scheduler/l3_agent_scheduler.py#L346 now and monitor number of routers on each L3 agent  14:47
<slaweq> or propose new scheduler which would have simply configured max number of routers on it - without reporting bandwidth and things like that  14:47
*** belmoreira has joined #openstack-meeting  14:47
<liuyulong> API error and host-down are totally a different level.  14:47
<slaweq> yes, so why not just new simple scheduler with limited number of routers per agent?  14:49
*** bobh has joined #openstack-meeting  14:49
*** igordc has joined #openstack-meeting  14:49
*** belmoreira has quit IRC  14:50
<liuyulong> slaweq, I considered it once, it is a bit simple and rough, it is not facing the real capacity: NIC bandwidth.  14:50
<slaweq> but You may have many NICs on network node  14:50
<slaweq> and router can consume bandwidth from each of them  14:51
<slaweq> how You want to know which bandwidth it will consume?  14:51
*** mattw4 has quit IRC  14:51
<slaweq> next question: what about L3 HA?  14:51
*** artom has joined #openstack-meeting  14:52
<slaweq> from which agent You will then "consume" this bandwidth?  14:52
*** jbadiapa has quit IRC  14:52
<liuyulong> slaweq, all routers will have to schedule  14:52
<liuyulong> so the bandwidth_ratio will have its value.  14:53
*** belmoreira has joined #openstack-meeting  14:53
<slaweq> another question - what about dvr routers? what this "bandwidth" attribute will mean for them?  14:54
<liuyulong> It means, if a HA router needs two nodes with 10Mbps, the scheduler will find two l3-agents for it with 10Mbps free bandwidth.  14:54
<slaweq> I will go through this spec once again this week and will write those questions there for further discussion  14:54
<slaweq> 10 Mbits per interface? for all interfaces? on specific physical segment? or all physical segments?  14:55
<slaweq> also what about other resources? like memory for example?  14:56
<liuyulong> router can only have one external gateway, so this one.  14:56
<slaweq> but router can also not have any external gateway - what about them?  14:57
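The capacity-based scheduling liuyulong describes (pick N agents with enough free bandwidth for an HA router, fail at the API instead of overcommitting the data plane) can be sketched roughly as below. All names are hypothetical; the real proposal is the spec in review 658451:

```python
# Hypothetical sketch of capacity-aware L3 agent scheduling as discussed
# above; not the actual neutron implementation.
from dataclasses import dataclass

@dataclass
class Agent:
    host: str
    capacity_mbps: int       # total bandwidth this l3-agent can serve
    allocated_mbps: int = 0  # bandwidth already consumed by routers

    @property
    def free_mbps(self):
        return self.capacity_mbps - self.allocated_mbps

def schedule_router(agents, required_mbps, replicas=1):
    """Pick `replicas` agents (2 for an HA router) with enough free bandwidth."""
    candidates = sorted(
        (a for a in agents if a.free_mbps >= required_mbps),
        key=lambda a: a.free_mbps, reverse=True)
    if len(candidates) < replicas:
        # surface a scheduling error at creation time, rather than
        # overloading an agent and risking a data-plane outage later
        raise RuntimeError("no l3-agent with enough capacity")
    chosen = candidates[:replicas]
    for a in chosen:
        a.allocated_mbps += required_mbps
    return chosen

agents = [Agent("net1", 1000), Agent("net2", 1000, allocated_mbps=995)]
chosen = schedule_router(agents, required_mbps=10, replicas=1)
print(chosen[0].host)  # net1 is the only agent with >= 10 Mbps free
```

For the HA case in liuyulong's example, `replicas=2` would require two agents each with 10 Mbps free, which is exactly where the example above (with only one eligible agent) would raise.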
*** belmoreira has quit IRC  14:57
<wwriverrat> brief status update for (if time allows): multiple segments per host WIP  14:58
<liuyulong> wwriverrat, go ahead  14:59
<wwriverrat> For status on https://review.opendev.org/#/c/623115  14:59
<wwriverrat> Re-working WIP patch to have a check method on base classes: `supports_multi_segments_per_host` (False by default). For LinuxBridge implementations it would return True.  14:59
<wwriverrat> When False, takes data from old self.network_map[network_id]. When true, it gives all from self.segments for that network_id. Naturally code may have to either handle single segment or list of segments.  14:59
<wwriverrat> The code I was working before spread too far and wide. If other drivers suffer same problem, they can implement supports_multi_segments_per_host too.  14:59
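The capability-flag pattern wwriverrat describes can be sketched as below: a base class exposes `supports_multi_segments_per_host` (False by default) and callers branch between the legacy single-segment map and the new per-network segment list. Class names and attribute layout are illustrative only, not the actual WIP patch:

```python
# Illustrative sketch of the capability-flag pattern from the status update
# above; not the code under review in 623115.

class AgentBase:
    supports_multi_segments_per_host = False

    def __init__(self):
        self.network_map = {}  # legacy: network_id -> single segment
        self.segments = {}     # new: network_id -> list of segments

    def segments_for(self, network_id):
        """Return a list either way, so callers handle one code path."""
        if self.supports_multi_segments_per_host:
            return self.segments.get(network_id, [])
        seg = self.network_map.get(network_id)
        return [seg] if seg is not None else []

class LinuxBridgeAgent(AgentBase):
    # a driver that can cope with several segments per host opts in
    supports_multi_segments_per_host = True

old = AgentBase()
old.network_map["net-a"] = "vlan-100"
new = LinuxBridgeAgent()
new.segments["net-a"] = ["vlan-100", "vlan-200"]
print(old.segments_for("net-a"))  # ['vlan-100']
print(new.segments_for("net-a"))  # ['vlan-100', 'vlan-200']
```

Normalizing both paths to return a list is one way to keep the change from "spreading too far and wide" into every caller.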
<liuyulong> slaweq, please add your question to the patch, I will reply it.  14:59
<liuyulong> Time is up.  14:59
<ralonsoh> maybe next time we can talk about #link https://bugs.launchpad.net/neutron/+bug/1837635  15:00
<openstack> Launchpad bug 1837635 in neutron "HA router state change from "standby" to "master" should be delayed" [Undecided,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)  15:00
<slaweq> liuyulong: sure  15:00
<liuyulong> #endmeeting  15:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"  15:00
<openstack> Meeting ended Wed Jul 24 15:00:00 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  15:00
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/neutron_l3/2019/neutron_l3.2019-07-24-14.00.html  15:00
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/neutron_l3/2019/neutron_l3.2019-07-24-14.00.txt  15:00
<openstack> Log:            http://eavesdrop.openstack.org/meetings/neutron_l3/2019/neutron_l3.2019-07-24-14.00.log.html  15:00
<slaweq> and sorry for taking so much time for this one rfe :)  15:00
<tidwellr> liuyulong: I owe you a review on that spec.....  15:00
*** mattw4 has joined #openstack-meeting  15:00
*** belmoreira has joined #openstack-meeting  15:00
<slaweq> o/  15:01
<liuyulong> I once just want one a quick through of the meeting, you experts do not allow me, haha  15:01
*** ricolin_ is now known as ricolin15:02
*** liuyulong_ has quit IRC15:02
*** liuyulong has quit IRC15:02
*** jbadiapa has joined #openstack-meeting15:06
*** mriedem has joined #openstack-meeting15:09
*** tosky has joined #openstack-meeting15:11
*** yamamoto has joined #openstack-meeting15:13
*** gyee has joined #openstack-meeting15:13
*** iyamahat has joined #openstack-meeting15:14
*** ayoung has joined #openstack-meeting15:14
*** zhengMa has joined #openstack-meeting15:22
*** yamamoto has quit IRC15:24
*** belmoreira has quit IRC15:25
*** ttsiouts has quit IRC15:28
*** ttsiouts has joined #openstack-meeting15:29
*** ayoung has quit IRC15:32
*** tssurya has quit IRC15:32
*** ttsiouts has quit IRC15:34
*** hongbin has joined #openstack-meeting15:36
*** diablo_rojo has joined #openstack-meeting15:39
*** hongbin has quit IRC15:42
*** hongbin has joined #openstack-meeting15:42
*** hongbin has quit IRC15:43
*** hongbin has joined #openstack-meeting15:44
*** mattw4 has quit IRC15:44
*** mattw4 has joined #openstack-meeting15:44
*** davidsha has joined #openstack-meeting15:45
*** mattw4 has quit IRC15:49
*** boxiang has quit IRC15:50
*** rosmaita has joined #openstack-meeting15:50
*** boxiang has joined #openstack-meeting15:50
*** Liang__ has joined #openstack-meeting15:54
*** Liang__ is now known as LiangFang15:59
<jungleboyj> #startmeeting Cinder  16:00
<openstack> Meeting started Wed Jul 24 16:00:30 2019 UTC and is due to finish in 60 minutes.  The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.  16:00
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  16:00
*** openstack changes topic to " (Meeting topic: Cinder)"  16:00
<openstack> The meeting name has been set to 'cinder'  16:00
<davidsha> Hey  16:00
<geguileo> hi! o/  16:00
<jungleboyj> Courtesy ping:  jungleboyj whoami-rajat rajinir lseki carloss pots woojay erlon geguileo eharney rosmaita enriquetaso e0ne smcginnis davidsha walshh_ xyang hemna _hemna tosky  16:00
<whoami-rajat> Hi  16:01
<enriquetaso> o/  16:01
<eharney> hi  16:01
<xyang> hi  16:01
<e0ne> hi  16:01
<jungleboyj> @!  16:01
<_pewp_> jungleboyj (=゚ω゚)ノ  16:01
<pots> o/  16:01
<rosmaita> o/  16:01
<e0ne> #link https://etherpad.openstack.org/p/cinder-train-meetings  16:01
<walshh_> hi  16:02
* smcginnis is here but not really  16:02
<jungleboyj> smcginnis:  NOOOOO!  You must be here.  ;-)  16:02
* jungleboyj is lost without ShadowPTL  16:03
<smcginnis> hah!  16:03
<jungleboyj> :-)  16:03
* jungleboyj defers to rosmaita Instead  16:03
*** thgcorrea has joined #openstack-meeting  16:03
<jungleboyj> Ok.  Quite a few things to get to today.  16:04
<rosmaita> that won't do you any good  16:04
<jungleboyj> :-p  16:04
<jungleboyj> So, reminder that Train Milestone 2 is this week.  16:04
<jungleboyj> #link https://releases.openstack.org/train/schedule.html  16:05
<jungleboyj> That means that I will start looking at CI runs to see if they are running Python3.  16:05
<e0ne> this week == tomorrow  16:05
<smcginnis> e0ne: ;)  16:05
<jungleboyj> e0ne:  ++  16:05
*** brault has joined #openstack-meeting  16:06
<jungleboyj> Any questions about those two questions?  16:06
<jungleboyj> Two items...  16:07
<jungleboyj> Take that as a no.  16:07
<jungleboyj> Cinder Mid-Cycle Topics ... Please take a look at the etherpad.  16:07
<jungleboyj> #link https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning  16:07
<jungleboyj> Please add topics there so that we can make sure to have a productive mid-cycle.  16:08
<jungleboyj> #topic Spec: Leverage compression hardware accelerator  16:09
*** openstack changes topic to "Spec: Leverage compression hardware accelerator (Meeting topic: Cinder)"  16:09
<LiangFang> hi  16:09
<LiangFang> thanks jungleboyj and rosmaita to give +2  16:10
<jungleboyj> So, my comments have been addressed.  I think we just need Eric to sign off.  16:10
<jungleboyj> eharney: ^^  16:10
<eharney> i haven't caught up with the last round of discussion between LiangFang and Brian on this one  16:10
*** brault has quit IRC  16:11
<jungleboyj> Ok.  If you can take a look that would be good.  rosmaita gave a _2  16:11
<jungleboyj> +2  16:11
<rosmaita> it was software fallback if no accelerator, and a config option about whether to allow any compression or not  16:11
<jungleboyj> rosmaita:  Good.  16:12
<eharney> sounds good  16:12
<jungleboyj> LiangFang: Anything else to address?  16:12
*** diablo_rojo has quit IRC  16:12
<LiangFang> rosmaita asked me about nova impact last week  16:13
<LiangFang> zhengMa has implemented the code, and it seems no impact  16:13
*** pcaruana has quit IRC  16:13
<rosmaita> that's good  16:14
<LiangFang> he has successfully created VM using container format 'compressed'  16:14
<rosmaita> that's surprising, but ok  16:14
<jungleboyj> rosmaita: Surprising ?  16:15
<rosmaita> yeah, nova had to know to decompress the image before trying to use it  16:15
<rosmaita> thought it might just fail with unsupported format or something  16:16
<LiangFang> when cinder downloads image, cinder knows the image container format  16:16
<LiangFang> so cinder helps to decompress it  16:16
<LiangFang> so what nova gets is a compressed volume  16:17
<rosmaita> ok, so it was a boot from volume VM  16:17
<LiangFang> yes  16:17
<rosmaita> we need to check what happens if you try to just boot from image the normal way with container_format == 'compressed'  16:17
<smcginnis> What about when Nova doesn't use a Cinder volume?  16:17
<rosmaita> what smcginnis said  16:18
<rosmaita> because you know some user will try to use this the wrong way  16:18
<jungleboyj> rosmaita: ++  16:19
<rosmaita> i am thinking nova will fail gracefully, we just want to verify that  16:19
<rosmaita> so to be clear, we aren't expecting you to implement boot from compressed image in nova  16:19
<LiangFang> oh, has not verified this yet  16:19
<rosmaita> just want to make sure nothing breaks badly  16:19
<jungleboyj> rosmaita:  Any other concerns there?  16:20
<rosmaita> no, that's all  16:21
<jungleboyj> Ok.  So, I think we just need eharney to review.  16:21
<rosmaita> it's not really a problem with the spec, just a courtesy check on behalf of nova  16:21
<LiangFang> thanks  16:21
<LiangFang> we will check as soon as possible  16:22
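The boot-from-volume flow LiangFang describes can be sketched as below: when Cinder downloads an image whose container format is 'compressed', it decompresses the data before writing the volume, so Nova sees a normal volume. The function name and the use of gzip as a stand-in for a hardware accelerator (with software fallback) are assumptions for illustration, not the spec's actual implementation:

```python
# Illustrative sketch of decompress-on-download as discussed above;
# gzip stands in for the hardware-accelerated decompressor.
import gzip

def write_image_to_volume(image_bytes, container_format):
    """Return the bytes that would land on the volume (hypothetical helper)."""
    if container_format == 'compressed':
        # real code would try the hardware accelerator first and fall
        # back to software decompression, per the spec discussion
        return gzip.decompress(image_bytes)
    return image_bytes

raw = b'bootable image payload'
compressed = gzip.compress(raw)
assert write_image_to_volume(compressed, 'compressed') == raw
assert write_image_to_volume(raw, 'bare') == raw
```

The open question in the meeting is the other path: what Nova does with a `container_format == 'compressed'` image when no Cinder volume is involved, which this sketch deliberately does not cover.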
<jungleboyj> #topic Status and Zuul v3 porting of the remaining legacy Zuul jobs  16:22
*** openstack changes topic to "Status and Zuul v3 porting of the remaining legacy Zuul jobs (Meeting topic: Cinder)"  16:22
<tosky> hi!  16:22
<jungleboyj> Hi.  16:22
<tosky> do you want me to show what I found out so far, or should I add some notes to the etherpad and then we can discuss them?  16:22
*** ricolin has quit IRC  16:23
<smcginnis> Etherpad link?  16:23
<tosky> to the etherpad of the meeting, I mean, unless you prefer a separate document  16:23
*** ayoung has joined #openstack-meeting  16:23
<jungleboyj> Go ahead and share.  16:23
<tosky> sorry for the initial mess, I think it should be readable now on https://etherpad.openstack.org/p/cinder-train-meetings  16:26
<jungleboyj> Ok.  Anything that you need to highlight?  16:26
<tosky> first, if there is anything which I don't know and that I should consider :) like some important non-documented reasons about some design decisions from the past  16:27
<tosky> second, there are a few open questions (namely, whether I can go forward with replacing the LIO jobs with their already native counterpart from cinder-tempest-plugin, and other small architectural questions)  16:28
<tosky> (like whether it's fine to nuke the zeromq job from all existing branches)  16:29
<smcginnis> I think so. As far as I understand, the ZMQ stuff is all dead.  16:29
<jungleboyj> Ok.  Trying to understand all this.  Didn't know about this effort.  16:29
<eharney> replacing the LIO jobs should be fine as long as we end up with something that runs the same configuration and tests somewhere (LIO, Barbican, and maybe one other thing that's turned on in there that i forget)  16:29
<tosky> eharney: and that's the point; if you check cinder-tempest-plugin-lvm-lio, it is basically doing that already  16:30
<eharney> right  16:30
<tosky> the blacklist is a bit different and it lacks the cinderlib tests, but those are easy to fix  16:30
<smcginnis> jungleboyj: Infra has stated the support for those legacy jobs will be going away. Not sure on timeframe, but we need to get updated to Zuul v3 native jobs as soon as we can.  16:31
<jungleboyj> smcginnis:  Ah, I see.  16:31
<e0ne> smcginnis: thanks for this update!  16:31
<tosky> also, the native jobs are easier to deal with; there is no more devstack-gate in between, and in my experience modifying them is easier  16:32
<smcginnis> ++  16:32
<e0ne> tosky: +1  16:32
<tosky> of course there are many questions to digest but we can talk about this anytime; I'm now hanging around on #openstack-cinder too, so feel free to ping me anytime I'm connected (and/or comment on the reviews)  16:33
<jungleboyj> tosky:  Sounds good.  16:34
<smcginnis> Thanks for looking at that tosky  16:34
<jungleboyj> smcginnis: ++  16:34
<jungleboyj> tosky:  Is there anything else you need from us right now?  16:34
<tosky> no, I guess I have a general "go on, let's figure out the details", so that's fine, thanks!  16:34
<jungleboyj> tosky:  Ok great.  Thank you for working on this.  16:34
<jungleboyj> Ok if we move on?  16:35
<jungleboyj> Take that as a yes.  16:36
<tosky> yep  16:36
*** mszwed_ has joined #openstack-meeting  16:36
<jungleboyj> #topic stable branches release update  16:37
*** openstack changes topic to "stable branches release update (Meeting topic: Cinder)"  16:37
<jungleboyj> rosmaita: All yours.  16:37
16:37 <rosmaita> there was a discussion yesterday among most of the cinder stable-maint cores in #openstack-cinder about the holdup releasing cinder stable/rocky (and hence stable/queens)
16:37 <rosmaita> #link http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2019-07-23.log.html#t2019-07-23T16:50:35
16:37 <rosmaita> the tl;dr is we agreed to revert the patch declaring multiattach support for HPE MSA from stable/rocky
16:37 <rosmaita> i restored mriedem's reversion patch
16:37 <rosmaita> #link https://review.opendev.org/#/c/670086/
16:37 <rosmaita> but the commit message is kind of hyperbolic, particularly the part about no 3rd party CI running on the dothill driver
16:37 <rosmaita> (there is CI, it's just run on subclasses of the driver)
16:38 <rosmaita> i don't know how much the commit message matters, TBH
16:38 <rosmaita> but i do know that we (the cinder team) tend to be a bit un-conservative with respect to what we consider to be appropriate driver backports
16:38 <rosmaita> and in fact, we're not rejecting the backport because it could be considered to be adding a feature
16:38 <rosmaita> but because further testing has indicated that multiattach isn't working for that driver
16:38 <rosmaita> like i said, i don't know if anyone pays attention to commit messages
16:38 <rosmaita> but it might give us more flexibility in the future if it's more accurate in this case
16:38 <rosmaita> (this is where i stop to see if anyone has a comment)
16:39 <e0ne> I think that driver code should follow the same policy as the rest of cinder: no feature backports
16:39 <e0ne> with some exceptions when we need to introduce some config option to fix some bug :(
16:40 <pots> i think the comment about the 3rd party CI was referring to whether the specific multiattach tests were being run correctly, which was a fair criticism.
16:41 <rosmaita> ok, maybe i am being too sensitive
16:41 <jungleboyj> pots: Right.
16:41 <e0ne> afaik, we don't have good 3rd-party CI coverage for stable branches
16:42 <e0ne> it would be great if I'm wrong here
16:42 <jungleboyj> rosmaita: I think we would find most people haven't added the multiattach tests.
16:42 <jungleboyj> Probably something we should look into.
16:42 <rosmaita> they don't seem to be running on a lot of drivers
16:42 <jungleboyj> e0ne: There is also that.
16:42 <jungleboyj> We don't have that as a requirement, though.
16:42 <rosmaita> ok, i withdraw my comment about the commit message
16:42 <jungleboyj> :-)
16:43 <rosmaita> but i will need a stable core to +2 the revert so i can update the release patch
16:43 <jungleboyj> rosmaita: Patch?
16:43 <rosmaita> #link https://review.opendev.org/#/c/670086/
16:44 <rosmaita> ok, that's all from me
16:44 <jungleboyj> Ok, yeah, the commit message isn't really accurate anymore here. I will update that and then we can get that patch in.
16:44 <jungleboyj> rosmaita: Make sense?
16:45 <smcginnis> Makes sense to me.
16:45 <rosmaita> ok
16:45 <jungleboyj> Okie dokie. Will do that after the meeting.
16:45 <jungleboyj> Ok. So now we can move on.
16:46 <jungleboyj> #link https://review.opendev.org/#/c/523659/21
16:46 <jungleboyj> A few comments have been addressed by the Infortrend driver.
16:46 * e0ne is waiting for geguileo's review
16:46 <geguileo> e0ne: on which patch?
16:47 <jungleboyj> I think it is ok to me; the remaining issue, if there is one, could be fixed with a follow-up.
16:47 <e0ne> geguileo: actually, you did it already for my spec. thanks!
16:47 <geguileo> e0ne: yeah, I just did that review XD
16:47 <jungleboyj> We had discussion around the Seagate driver earlier this week:
16:48 <jungleboyj> #link https://review.opendev.org/#/c/671195/
16:48 <jungleboyj> I think that we can let this slip a bit as it is a rename and pots has other patches to backport first.
16:48 <jungleboyj> Does anyone have an issue with that?
16:48 <smcginnis> Yeah, I don't see that as a new driver.
16:49 <jungleboyj> smcginnis: Ok. Good you agree there given I am kind-of close to that one. ;-)
16:49 <jungleboyj> I see that the MacroSan driver was added to the list.
16:49 <smcginnis> It really is just a rebrand. DotHill is gone, it is now Seagate. It makes sense to get that updated to show that.
16:49 <whoami-rajat> jungleboyj: yep, i added it
16:49 <jungleboyj> Cool.
16:50 <jungleboyj> whoami-rajat: So, it has a -2 from eharney
16:50 <whoami-rajat> jungleboyj: the maintainer keeps asking about reviews almost every day and updates the patch regularly
16:50 <whoami-rajat> jungleboyj: it's an old -2
16:51 <smcginnis> I haven't looked. Have they gone through the new driver checklist and addressed everything? Is there CI reporting now?
16:51 <jungleboyj> Ok. I guess I had missed those pings.
16:51 <smcginnis> Last I looked it was quite a way off.
16:51 <jungleboyj> Yeah, I don't see any CI reporting.
16:51 <whoami-rajat> smcginnis: their CI hasn't been reporting on this patch for quite a while now, but it has been reporting on other patches; it needs to report on this one too.
16:52 <whoami-rajat> smcginnis: seemingly, the driver checklist has been addressed (as far as I checked last; maybe i missed something)
16:52 <smcginnis> If it's the deadline and there hasn't been CI reporting on the new driver and other patches for at least several days, that's concerning.
16:53 <jungleboyj> smcginnis: ++
16:53 <whoami-rajat> smcginnis: yeah, it is
16:54 <jungleboyj> So, I am concerned about trying to get that one through.
16:55 <smcginnis> Folks should review it and put specific issues/concerns on the review so they know what and why.
16:55 <jungleboyj> I also haven't followed that one. ... So, I defer to those who have looked at it/followed it.
16:56 <jungleboyj> Let me take a look at that driver and we can follow up in the channel after the meeting.
16:57 <jungleboyj> eharney says he fixed his volume rekey spec. I think I am good with getting that in.
16:57 <jungleboyj> #topic Final run-through of open patches for milestone-2
16:57 *** openstack changes topic to "Final run-through of open patches for milestone-2 (Meeting topic: Cinder)"
16:58 <jungleboyj> e0ne: geguileo has comments on your spec.
16:58 <jungleboyj> #link https://review.opendev.org/#/c/556529/
16:58 <jungleboyj> Can we get eyes on the encryption one and see if we can merge that: https://review.opendev.org/#/c/608663/
16:59 <jungleboyj> Not sure if there is more discussion on those specs.
16:59 <jungleboyj> Please review and respond to comments.
17:00 <jungleboyj> Ok. That is all our time for today.
17:00 <smcginnis> \o
17:00 <jungleboyj> Thanks for joining the meeting.
17:00 <whoami-rajat> Thanks!
17:00 <jungleboyj> Talk to you all next week.
17:00 <geguileo> thanks!
17:00 <jungleboyj> #endmeeting
17:00 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
17:00 <openstack> Meeting ended Wed Jul 24 17:00:48 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
17:00 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-07-24-16.00.html
17:00 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-07-24-16.00.txt
17:00 <openstack> Log:            http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-07-24-16.00.log.html
17:01 <davidsha> Thanks!
19:19 <diablo_rojo> No storyboard meeting this week! Join us in #storyboard if you have questions/comments
21:00 <timburke> #startmeeting swift
21:00 <openstack> Meeting started Wed Jul 24 21:00:02 2019 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00 *** openstack changes topic to " (Meeting topic: swift)"
21:00 <openstack> The meeting name has been set to 'swift'
21:00 <timburke> who's here for the swift meeting?
21:00 <kota_> o/
21:00 <mattoliverau> o/
21:01 <rledisez> o/
21:02 <timburke> agenda's at https://wiki.openstack.org/wiki/Meetings/Swift
21:02 <timburke> sorry, i haven't updated it -- but i think we're mostly following up on things anyway
21:03 <timburke> #topic shanghai
21:03 *** openstack changes topic to "shanghai (Meeting topic: swift)"
21:03 <timburke> reminder that that's a thing. you should've gotten discount codes recently
21:03 <clayg> o/
21:03 <rledisez> got it
21:04 <kota_> oh really?
21:04 <timburke> i don't need the topics fully fleshed out or anything, but it'll be good to give the foundation a rough headcount to expect
21:04 <clayg> is anyone besides tim going *for sure* at this point?
21:04 <timburke> kota_, maybe it'll arrive in the next day or two
21:04 <kota_> timburke: ok
21:05 <timburke> i know they often have to stagger these things so they don't immediately get blackholed as spam
21:05 <kota_> ah... i have to check my older e-mail box too.
21:05 <clayg> mine was titled "Invitation & Discount Registration to Open Infrastructure Summit Shanghai" and it came from summitreg@openstack.org - but it looks like they used e2ma.net as the mass mailing service
21:05 <mattoliverau> Yeah, I got it during the night, i.e. I saw it in my email this morning
21:05 <rledisez> clayg: alecuyer and I are still expecting to go, no formal confirmation yet
21:06 <kota_> I'll check that today. thanks, good info.
21:06 <timburke> i still need to write an email to the mailing list to extend an invitation to not-you-guys and try to get a feel for the onboarding vs design/discussion split
21:07 <timburke> #topic release
21:07 *** openstack changes topic to "release (Meeting topic: swift)"
21:07 <timburke> we've got a 2.22.0 release out! it's got some py3 support!
21:08 <clayg> 🥳
21:08 <timburke> #link http://lists.openstack.org/pipermail/release-announce/2019-July/007429.html
21:08 <zaitcev> Some?
21:09 <timburke> zaitcev, probe tests don't pass. i found a few (mostly small?) issues when writing https://review.opendev.org/#/c/671333/
21:09 <patchbot> patch 671333 - swift - py3: (mostly) port probe tests - 2 patch sets
21:10 <timburke> but we can get to that in a sec ;-)
21:11 <timburke> there's a bunch of *other* great stuff, too, though! which is kind of amazing
21:11 <timburke> configurable, anonymizable logs!
21:11 <timburke> docker image!
21:12 <timburke> continuous improvements to a variety of subsystems!
21:12 <timburke> thank you all so much for helping to make this the best Swift yet :-D
21:12 <mattoliverau> \o/
21:13 <timburke> shortly after that went in, i approved a few patches that i wanted to wait until after the release
21:13 <clayg> best swift evar!
21:14 <timburke> p 665758, p 657034, p 669511 in particular
21:14 <patchbot> https://review.opendev.org/#/c/665758/ - swift - Bump up our minimum eventlet version (MERGED) - 4 patch sets
21:14 <patchbot> https://review.opendev.org/#/c/657034/ - swift - Make py36 job voting (MERGED) - 1 patch set
21:14 <patchbot> https://review.opendev.org/#/c/669511/ - swift - Add Python 3 Train unit tests (MERGED) - 1 patch set
21:15 <timburke> on to other updates!
21:15 <timburke> #topic py3
21:15 *** openstack changes topic to "py3 (Meeting topic: swift)"
21:15 <tdasilva> timburke: you also proposed to devstack to start running swift under py3!!!
21:16 <timburke> yup! p 670353 has one +2, waiting for final approval
21:16 <patchbot> https://review.opendev.org/#/c/670353/ - devstack - Remove Swift from default DISABLED_PYTHON3_PACKAGES - 1 patch set
21:17 <timburke> which will unblock p 671554 -- bringing back tempest tests
21:17 <patchbot> https://review.opendev.org/#/c/671554/ - swift - Add integrated gate tempest and grenade job - 3 patch sets
21:17 <timburke> (py3 support wasn't really the blocker there, i don't think -- but it was certainly a nice-to-have)
21:18 <clayg> @timburke does that change how the func tests get started?
21:18 <clayg> i only ask cause... we still run them under py2, right?
21:18 <timburke> clayg, no -- it just runs another set of independently-developed func tests
21:19 <timburke> there are a bunch of places where we still need to port the func tests themselves to be able to run in-process py3 func tests
21:20 <timburke> but i realized that, since we now run *all func tests* (under py2) against services running py3, the next-most-valuable testing is (or, seems to me to be) probe!
21:21 <timburke> leading to p 671554
21:21 <patchbot> https://review.opendev.org/#/c/671554/ - swift - Add integrated gate tempest and grenade job - 3 patch sets
21:21 <timburke> er..
21:21 <timburke> #undo
21:21 <openstack> Removing item from minutes: #link https://review.opendev.org/#/c/671554/
21:21 <timburke> bah. whatever.
21:21 <timburke> wrong bot
21:21 <timburke> p 671333
21:21 <patchbot> https://review.opendev.org/#/c/671333/ - swift - py3: (mostly) port probe tests - 2 patch sets
21:22 <timburke> that kicked out p 671167 p 670932 p 670933 and p 671168
21:22 <patchbot> https://review.opendev.org/#/c/671167/ - swift - py3: fix up listings on sharded containers - 1 patch set
21:22 <patchbot> https://review.opendev.org/#/c/670932/ - swift - py3: fix up swift-orphans - 1 patch set
21:22 <patchbot> https://review.opendev.org/#/c/670933/ - swift - py3: fix object-replicator rsync output parsing - 2 patch sets
21:22 <patchbot> https://review.opendev.org/#/c/671168/ - swift - Fix up errno checking (MERGED) - 1 patch set
21:22 <timburke> and that last one's even merged already! thanks tdasilva :-)
21:24 <tdasilva> o/
21:24 <timburke> that's about what i've got for py3. of course, reviewing any of those or the already-written py3 func test changes would be tremendously helpful
21:25 <timburke> as would writing new patches to port other func tests -- in particular, the s3api tests seemed hairy (FWIW)
21:25 <kota_> :/
21:26 <zaitcev> So
21:26 <timburke> i seem to recall running into some issue with boto and eventlet interactions leading to a recursion problem down in stdlib's ssl... even though i wasn't using https :-(
21:26 <zaitcev> I still have the problem with duplicate keys even on an all-py3 cluster.
21:27 <zaitcev> metatest.txt: "X-Account-Meta-Uni\u00e0\u00b8\u0092": ["1", "1563926869.20886"],
21:27 <zaitcev> metatest.txt: "X-Account-Meta-Uni\u0e12": ["1", "1563407142.42696"],
21:27 <kota_> curious
21:28 <zaitcev> Unfortunately, it was a dirty experiment. I realized that I left a handoff db intact when I purged the meta with UPDATE account_stat SET metadata = '';
21:28 <timburke> oh, interesting... so you tried clearing everything out and running functests again, and it showed up again?
21:29 <zaitcev> re-trying now
21:29 <timburke> i still want to figure out what's going on there -- zaitcev, can we work on that after the meeting?
21:30 <zaitcev> see, the timestamps are different now. So the native key is old, and the latin-1 one is new, re-introduced by functests. I think it's a bad idea to store latin-1, actually... It's incompatible with py2 for sure.
21:31 <timburke> i agree -- the wsgi-fied string is no good
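The duplicate keys zaitcev pasted above are two spellings of the same UTF-8 metadata key: under py3, WSGI presents header bytes as latin-1-decoded text ("wsgi strings"), so U+0E12 can surface either natively or as the three latin-1 characters of its UTF-8 encoding. A minimal stdlib sketch of that round trip (illustrative only, not Swift code):

```python
# The same account-metadata suffix in its two observed spellings:
native = '\u0e12'              # THAI CHARACTER THO PHUTHAO, the "real" key
raw = native.encode('utf-8')   # b'\xe0\xb8\x92' -- the bytes on the wire

# A py3 WSGI server hands those bytes over as latin-1-decoded text,
# which is exactly the other key seen in metatest.txt:
wsgi = raw.decode('latin-1')
assert wsgi == '\u00e0\u00b8\u0092'

# Stored without re-decoding, the two forms are distinct keys...
assert wsgi != native
# ...but the wsgi form round-trips back to the native one:
assert wsgi.encode('latin-1').decode('utf-8') == native
```

If functests store the wsgi form while older data holds the native form, both keys coexist with different timestamps, which matches the two entries in the paste above.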
21:32 <timburke> let's keep moving for now
21:32 <timburke> #topic lots of small files
21:32 <zaitcev> I didn't have this problem with objects thus far.
21:32 *** openstack changes topic to "lots of small files (Meeting topic: swift)"
21:32 <zaitcev> ok
21:32 <timburke> rledisez, kota_ anything new to report?
21:33 <kota_> sorry, nothing new for me
21:33 <timburke> that's fine! i just want to check in :-)
21:33 <rledisez> timburke: not that I know of. alecuyer is now off for 3 weeks
21:34 <timburke> 👍
21:34 <timburke> #topic auto sharding
21:34 *** openstack changes topic to "auto sharding (Meeting topic: swift)"
21:34 <timburke> mattoliverau, i saw you rebased
21:35 <mattoliverau> yeah
21:35 <mattoliverau> I rebased the patch chain off the new ring info patch
21:35 <mattoliverau> I've also been working on the sharder a bit more.
21:36 <clayg> i think i missed the ring info patch - is it amazing!?
21:36 <timburke> p 670673
21:36 <patchbot> https://review.opendev.org/#/c/670673/ - swift - ring: Track more properties of the ring - 3 patch sets
21:36 <timburke> mattoliverau, need anything from us? or just reviews? ;-)
21:37 <mattoliverau> Trying out the idea that when the scanner has scanned and checked that it is still the scanner, instead of just writing the shard ranges to its own DB, it tries to UPDATE the shard ranges to the other primaries; if any succeed, it's safe to write locally.
21:37 <mattoliverau> that way we always have at least 2 copies of the shard ranges out there.
21:37 <mattoliverau> not 100% convinced, but thought about it yesterday and decided to give it a whirl.
21:38 <timburke> interesting...
21:38 <timburke> i'll try to take a look. not sure i can get to it this week though
21:38 <timburke> #topic symlinks and versioning
21:38 *** openstack changes topic to "symlinks and versioning (Meeting topic: swift)"
21:38 <mattoliverau> I wanted to write down then replicate and then roll back (potentially if all failed), but you don't know if replication has happened in between.
21:39 <timburke> #undo
21:39 <mattoliverau> nps
21:39 <openstack> Removing item from minutes: #topic symlinks and versioning
21:39 <mattoliverau> like I said, just thinking out loud and in code.. not 100% convinced :P
21:39 <mattoliverau> I'll push up what I have today (I was meant to last night but forgot)
21:39 <mattoliverau> now you can move on :P
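The ordering mattoliverau describes can be sketched roughly as follows. All names here are invented for illustration (this is not the real sharder API): the scanner pushes its shard ranges to the other primaries first, and only persists locally once at least one remote update landed, so at least two copies of the ranges exist.

```python
# Rough sketch (hypothetical names, not swift's sharder code) of
# "update the other primaries first, then write locally if any succeed".

def push_shard_ranges(shard_ranges, other_primaries, local_db):
    remote_successes = 0
    for node in other_primaries:
        try:
            node.update_shard_ranges(shard_ranges)  # hypothetical RPC
            remote_successes += 1
        except ConnectionError:
            continue  # that primary is down; try the next
    if remote_successes >= 1:
        # Safe: the ranges now live on >= 2 nodes even if we die next.
        local_db.merge_shard_ranges(shard_ranges)
        return True
    return False  # don't write alone; retry on a later sharder pass


class FakeNode:  # stand-ins for the demo
    def __init__(self, up):
        self.up, self.received = up, None

    def update_shard_ranges(self, ranges):
        if not self.up:
            raise ConnectionError('node unreachable')
        self.received = ranges


class FakeDB:
    def __init__(self):
        self.ranges = None

    def merge_shard_ranges(self, ranges):
        self.ranges = ranges


db = FakeDB()
assert push_shard_ranges(['a-m', 'm-z'], [FakeNode(False), FakeNode(True)], db)
assert db.ranges == ['a-m', 'm-z']  # local write happened

db = FakeDB()
assert not push_shard_ranges(['a-m'], [FakeNode(False), FakeNode(False)], db)
assert db.ranges is None  # no remote copy, so no lone local write
```

The trade-off being debated: this avoids the roll-back problem ("you don't know if replication has happened in between"), at the cost of the scanner refusing to record ranges when it can't reach any peer.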
21:40 <clayg> p 633094 was done, but tdasilva was asking if we could figure out how to use x-etag-is-at headers on put to make hardlinks to slo's use the slo quoted etag instead of the manifest etag...
21:40 <patchbot> https://review.opendev.org/#/c/633094/ - swift - Allow "harder" symlinks - 16 patch sets
21:40 <timburke> heh, yeah -- i'm still torn, too. i'm not sure that a non-leader should be the first one responsible for owning shard ranges...
21:40 <timburke> #topic symlinks and versioning
21:40 *** openstack changes topic to "symlinks and versioning (Meeting topic: swift)"
21:41 <clayg> i started to rework the versions one on the assumption that the hardlink target etag should be the manifest etag, and while I mostly got that "working" there's still some knowledge about slo bleeding into versioning 😞
21:41 <clayg> meanwhile I need to freshen up my s3-versions patch on top of all this
21:41 <clayg> so ... it's all pretty shaky
21:42 <clayg> but also unfortunately pretty serialized - so I'll keep chugging and let ya'll good folks know when I think it's done, or closer, or I need a review or something
21:42 <timburke> it's hard making sure that 3 dramatically different features all work together the "right" way
21:42 <kota_> thx for working on s3 versioning!
21:42 <timburke> thanks for working to make it all reasonable for the rest of us!
21:42 <tdasilva> clayg: any way we can help atm?
21:42 <clayg> yeah, last week the subject of versioning and object expiration came up
21:43 <clayg> basically we don't know how it really interacts and how symlinks will change that - or how it *should* work
21:43 <clayg> we thought maybe we should look at s3 bucket policies for versioned buckets and their expiration features to get an idea of what we would aim to support as a minimum subset
21:44 <clayg> but I haven't done that - so someone could think about that
21:44 <clayg> ^ tdasilva
21:44 <clayg> but other than that, nothing I can think of
21:45 <tdasilva> fwiw, aws states that s3 buckets can't have versioning and an expiration policy together
21:45 <tdasilva> gotta pick one or the other
21:45 <timburke> interesting
21:46 <timburke> but they have a way to expire old versions, no?
21:46 <clayg> tdasilva: rly? do you have that reference handy - it's possible I don't understand which features those words map to
tdasilva"You can use the Object Expiration feature on buckets that are stored using Standard or Reduced Redundancy Storage. You cannot, however, use it in conjunction with S3 Versioning (this is, as they say, for your own protection). You will have to delete all expiration rules for the bucket before enabling versioning on that bucket."21:47
tdasilvagot it from here: https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/21:47
claygthanks!21:47
tdasilvabut just noticed it's from 2011, maybe it has changed???21:47
tdasilvai'll try it out tomorow21:47
timburketdasilva, would you be able to get a summary of the state of policies and versioning (and any other relevant features) for next week? or just to drop in channel in the next day or two?21:48
tdasilvatimburke: yep!21:48
timburkecool. i've got one more topic i'd like to bring up...21:48
timburke#topic 404s from handoffs21:48
*** openstack changes topic to "404s from handoffs (Meeting topic: swift)"21:48
timburkeso i still need to write up a good bug to go with my patch, but...21:49
timburke#link https://review.opendev.org/#/c/672186/21:49
patchbotpatch 672186 - swift - Ignore 404s from handoffs for objects when calcula... - 4 patch sets21:49
timburkei had some complaints about an EC object not turning up21:50
timburkeand it looked like we found some frags but not enough to reconstruct21:50
timburkeso the proxy dug into handoffs and found a quorum's worth of 404s21:50
timburkethis is somewhat related to https://bugs.launchpad.net/swift/+bug/1833612 but at the object level21:51
openstackLaunchpad bug 1833612 in OpenStack Object Storage (swift) "Overloaded container can get erroneously cached as 404" [Undecided,Fix released]21:51
timburkebut it looks like we have some pretty solid tests that explore the behavior and decided on 404 over 50321:53
mattoliverauoh, so where there enough frags out there to reconstruct?21:55
mattoliverauand 404s just got quorum first?21:55
timburkeif the servers weren't overloaded, yes21:56
mattoliveraujust making sure :P21:57
timburkesee https://github.com/openstack/swift/blob/2.22.0/test/unit/proxy/controllers/test_obj.py#L2118-L2150 for one of the tests i had to change (iirc)21:58
timburkewe find frags, they're marked durable, but we go with a 404 ATM21:58
timburkei feel like i'm dramatically changing the way eventual consistency currently works for swift21:59
timburkeplease take a look at the patch if you get a chance21:59
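A toy illustration of the failure mode being described (this is not Swift's actual proxy code; the function and quorum value are made up for the sketch): an EC GET that finds a few frags, fewer than the policy needs to reconstruct, can still resolve to 404 once handoffs contribute a quorum's worth of 404 responses.

```python
# Toy model of status-quorum resolution: pick the first status that
# reaches quorum, preferring success codes over errors.

from collections import Counter

def best_response(statuses, quorum):
    """Return the first status reaching quorum, favoring 2xx codes."""
    counts = Counter(statuses)
    for status in sorted(counts, key=lambda s: (s >= 300, s)):
        if counts[status] >= quorum:
            return status
    return 503  # no agreement at all

# Say the policy needs 10 frags to reconstruct; only 4 primaries have
# one, and digging into handoffs yields ten confident-sounding 404s:
responses = [200] * 4 + [404] * 10
assert best_response(responses, quorum=7) == 404  # object "not found"

# If handoff 404s are ignored (the idea in the patch above), there is
# no 404 quorum, and a 503 tells the client to retry rather than
# asserting the object is gone:
assert best_response([200] * 4, quorum=7) == 503
```

The point of the example is only the vote-counting shape: handoffs that have never seen the object answer 404 with the same weight as primaries that should have a frag, which is what the patch under review changes.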
21:59 <timburke> i'm afraid i let us run a bit long
21:59 <notmyname> timburke: link the etherpad too
21:59 <notmyname> #link https://etherpad.openstack.org/p/swift-fail-responses
21:59 <kota_> notmyname: !
22:00 <notmyname> kota_: !!
22:00 <timburke> :-) thanks notmyname
22:00 <timburke> but now we really *are* at time
22:00 <timburke> #endmeeting
22:00 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
22:00 <openstack> Meeting ended Wed Jul 24 22:00:21 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
22:00 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/swift/2019/swift.2019-07-24-21.00.html
22:00 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/swift/2019/swift.2019-07-24-21.00.txt
22:00 <openstack> Log:            http://eavesdrop.openstack.org/meetings/swift/2019/swift.2019-07-24-21.00.log.html

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!