Wednesday, 2019-06-26

*** slaweq has joined #openstack-meeting00:11
*** ianychoi has quit IRC00:12
*** ianychoi has joined #openstack-meeting00:14
*** slaweq has quit IRC00:15
*** ianychoi has quit IRC00:20
*** ianychoi has joined #openstack-meeting00:21
*** lbragstad has quit IRC00:23
*** ianychoi has quit IRC00:30
*** ianychoi has joined #openstack-meeting00:31
*** brinzhang has joined #openstack-meeting00:32
*** brinzhang has quit IRC00:55
*** ianychoi has quit IRC01:03
*** ianychoi has joined #openstack-meeting01:05
*** _alastor_ has quit IRC01:12
*** brinzhang has joined #openstack-meeting01:18
*** ykatabam has joined #openstack-meeting01:21
*** ayoung has joined #openstack-meeting01:32
*** baojg has joined #openstack-meeting01:49
*** apetrich has quit IRC01:57
*** diablo_rojo has quit IRC01:57
*** igordc has quit IRC01:58
*** rbudden has quit IRC02:05
*** ykatabam has quit IRC02:09
*** slaweq has joined #openstack-meeting02:11
*** ianychoi has quit IRC02:14
*** ianychoi has joined #openstack-meeting02:16
*** slaweq has quit IRC02:16
*** ianychoi has quit IRC02:38
*** ianychoi has joined #openstack-meeting02:40
*** hongbin has joined #openstack-meeting02:44
*** ayoung has quit IRC02:48
*** ricolin has joined #openstack-meeting03:01
*** deardooley has quit IRC03:04
*** whoami-rajat has joined #openstack-meeting03:04
*** hongbin has quit IRC03:18
*** ykatabam has joined #openstack-meeting03:21
*** psachin has joined #openstack-meeting03:35
*** imsurit has joined #openstack-meeting03:52
*** hongbin has joined #openstack-meeting03:54
*** slaweq has joined #openstack-meeting04:11
*** browny_ has quit IRC04:12
*** hongbin has quit IRC04:12
*** slaweq has quit IRC04:16
*** jhesketh has quit IRC04:19
*** jhesketh has joined #openstack-meeting04:19
*** dkushwaha has joined #openstack-meeting04:21
*** dkushwaha has left #openstack-meeting04:21
*** shilpasd has joined #openstack-meeting04:29
*** e0ne has joined #openstack-meeting05:19
*** e0ne has quit IRC05:23
*** cheng1 has quit IRC05:36
*** cheng1 has joined #openstack-meeting05:41
*** mmethot has quit IRC05:46
*** shilpasd has quit IRC05:56
*** lpetrut has joined #openstack-meeting05:56
*** lpetrut has quit IRC05:57
*** lpetrut has joined #openstack-meeting05:57
*** tetsuro has joined #openstack-meeting05:59
*** slaweq has joined #openstack-meeting06:11
*** slaweq has quit IRC06:16
*** artom has joined #openstack-meeting06:16
*** artom is now known as artom|gmtplus306:16
*** Adri2000 has quit IRC06:17
*** pcaruana has joined #openstack-meeting06:26
*** apetrich has joined #openstack-meeting06:31
*** belmoreira has joined #openstack-meeting06:33
*** ianychoi has quit IRC06:34
*** ianychoi has joined #openstack-meeting06:41
*** kopecmartin has joined #openstack-meeting06:43
*** slaweq has joined #openstack-meeting06:44
*** belmoreira has quit IRC06:45
*** belmoreira has joined #openstack-meeting06:47
*** rcernin has quit IRC07:06
*** ykatabam has quit IRC07:06
*** belmoreira has quit IRC07:13
*** belmoreira has joined #openstack-meeting07:14
*** tesseract has joined #openstack-meeting07:16
*** belmoreira has quit IRC07:34
*** tetsuro has quit IRC07:40
*** rajinir has quit IRC07:45
*** e0ne has joined #openstack-meeting07:47
*** belmoreira has joined #openstack-meeting07:48
*** ttsiouts has joined #openstack-meeting07:48
*** ralonsoh has joined #openstack-meeting07:49
*** psachin has quit IRC07:55
*** tetsuro has joined #openstack-meeting07:58
*** ianychoi has quit IRC07:59
*** tssurya has joined #openstack-meeting08:00
*** ttsiouts has quit IRC08:00
*** ianychoi has joined #openstack-meeting08:01
*** ttsiouts has joined #openstack-meeting08:01
*** ttsiouts has quit IRC08:05
*** ttsiouts has joined #openstack-meeting08:06
*** tetsuro has quit IRC08:28
*** Luzi has joined #openstack-meeting08:31
*** tesseract has quit IRC08:38
*** tesseract has joined #openstack-meeting08:40
*** ociuhandu has joined #openstack-meeting08:47
*** rcernin has joined #openstack-meeting08:47
*** ociuhandu has quit IRC09:04
*** ianychoi has quit IRC09:07
*** rcernin has quit IRC09:07
*** ianychoi has joined #openstack-meeting09:10
*** Lucas_Gray has joined #openstack-meeting09:35
*** psachin has joined #openstack-meeting09:39
*** Lucas_Gray has quit IRC09:48
*** Lucas_Gray has joined #openstack-meeting09:49
*** ociuhandu has joined #openstack-meeting10:06
*** artom|gmtplus3 has quit IRC10:06
*** ttsiouts has quit IRC10:10
*** ttsiouts has joined #openstack-meeting10:10
*** artom has joined #openstack-meeting10:10
*** liuyulong has joined #openstack-meeting10:14
*** ttsiouts has quit IRC10:15
*** brinzhang has quit IRC10:18
*** heikkine has quit IRC10:33
*** davidsha has joined #openstack-meeting10:40
*** brinzhang has joined #openstack-meeting10:41
*** imsurit has quit IRC10:42
*** liuyulong has quit IRC10:47
*** carloss has joined #openstack-meeting11:00
*** baojg has quit IRC11:12
*** ttsiouts has joined #openstack-meeting11:13
*** ociuhandu has quit IRC11:23
*** ociuhandu has joined #openstack-meeting11:23
*** shilpasd has joined #openstack-meeting11:31
*** ianychoi has quit IRC11:40
*** raildo has joined #openstack-meeting11:43
*** ianychoi has joined #openstack-meeting11:45
*** baojg has joined #openstack-meeting11:47
*** eharney has quit IRC11:55
*** _erlon_ has joined #openstack-meeting11:59
*** _alastor_ has joined #openstack-meeting12:00
*** _alastor_ has quit IRC12:04
*** ttsiouts has quit IRC12:13
*** ttsiouts has joined #openstack-meeting12:13
*** ianychoi has quit IRC12:14
*** lbragstad has joined #openstack-meeting12:17
*** ttsiouts has quit IRC12:18
*** ttsiouts has joined #openstack-meeting12:20
*** ianychoi has joined #openstack-meeting12:22
*** mriedem has joined #openstack-meeting12:25
*** Luzi has quit IRC12:35
*** baojg has quit IRC12:38
*** brinzhang has quit IRC12:38
*** njohnston_ has joined #openstack-meeting12:39
*** brinzhang has joined #openstack-meeting12:39
*** njohnston has quit IRC12:40
*** mmethot has joined #openstack-meeting12:57
*** rbudden has joined #openstack-meeting13:03
*** mmethot is now known as mmethot|brb13:10
*** yamamoto has joined #openstack-meeting13:11
*** eharney has joined #openstack-meeting13:15
*** rajinir has joined #openstack-meeting13:18
*** rf0lc0 has joined #openstack-meeting13:18
*** rfolco has quit IRC13:21
*** raildo has quit IRC13:27
*** njohnston has joined #openstack-meeting13:28
*** liuyulong has joined #openstack-meeting13:28
*** njohnston_ has quit IRC13:29
*** eharney has quit IRC13:29
*** mmethot|brb is now known as mmethot13:29
*** yamamoto has quit IRC13:32
*** lseki has joined #openstack-meeting13:32
*** raildo has joined #openstack-meeting13:39
*** rf0lc0 has quit IRC13:42
*** rf0lc0 has joined #openstack-meeting13:42
*** raildo has quit IRC13:44
*** raildo has joined #openstack-meeting13:46
*** eharney has joined #openstack-meeting13:48
*** eharney has quit IRC13:48
*** eharney has joined #openstack-meeting13:49
*** raildo has quit IRC13:52
*** wwriverrat has joined #openstack-meeting13:54
*** mlavalle has joined #openstack-meeting13:54
*** belmoreira has quit IRC13:55
13:59 <liuyulong> hi
13:59 <liuyulong> #startmeeting neutron_l3
13:59 <openstack> Meeting started Wed Jun 26 13:59:32 2019 UTC and is due to finish in 60 minutes.  The chair is liuyulong. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:59 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:59 *** openstack changes topic to " (Meeting topic: neutron_l3)"
13:59 <openstack> The meeting name has been set to 'neutron_l3'
13:59 <mlavalle> o/
14:00 <liuyulong> #chair mlavalle
14:00 <openstack> Current chairs: liuyulong mlavalle
14:00 <ralonsoh> hi
14:00 <liuyulong> Let's wait a minute for more attendees. We will officially start the topics after 60 seconds. : )
14:00 <liuyulong> hi
14:01 <mlavalle> nice!
14:01 <wwriverrat> o/
14:01 <liuyulong> #topic Announcements
14:01 *** openstack changes topic to "Announcements (Meeting topic: neutron_l3)"
14:02 <liuyulong> Important things were highlighted in the neutron team meeting yesterday, so I will not repeat them here.
*** belmoreira has joined #openstack-meeting14:02
14:02 <liuyulong> So no announcements from me. Does anyone have any others?
*** brinzhang has quit IRC14:03
14:03 <slaweq> hi
14:03 <mlavalle> none from me
14:03 <liuyulong> slaweq, hi
*** brinzhang has joined #openstack-meeting14:03
14:03 <liuyulong> OK, let's move on.
14:04 <liuyulong> #topic Bugs
14:04 *** openstack changes topic to "Bugs (Meeting topic: neutron_l3)"
14:04 <liuyulong> yamamoto was our bug deputy last week.
14:04 <liuyulong> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007290.html
14:04 <liuyulong> Looks like a busy week, : )
14:04 <njohnston> o/
14:04 <mlavalle> yeah, it was kind of busy
14:05 <liuyulong> #link https://bugs.launchpad.net/neutron/+bug/1833717
14:05 <openstack> Launchpad bug 1833717 in neutron "Functional tests: error during namespace creation" [High,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
14:05 <liuyulong> The fix: https://review.opendev.org/#/c/666845/. It's done now.
14:05 <mlavalle> thanks ralonsoh
14:05 <ralonsoh> no problem
14:06 <liuyulong> #link https://bugs.launchpad.net/neutron/+bug/1834257
14:06 <openstack> Launchpad bug 1834257 in neutron "dhcp-agent can overwhelm neutron server with dhcp_ready_on_ports RPC calls" [High,In progress] - Assigned to Sebastian Lohff (sebageek)
14:07 <liuyulong> A WIP fix is here: https://review.opendev.org/#/c/667472/.  And it's based on this: https://review.opendev.org/#/c/659274/.
14:07 <liuyulong> The code basically looks good to me, it makes sense, but my concern is how this will influence the port DHCP provisioning block during the instance boot procedure.
14:08 <mlavalle> are you referring to the first or the second patch?
14:08 <liuyulong> Say I have a port list chunked like this: [0, 63] [64, 127] ... [...]. dhcp_ready_on_ports is an RPC `call` (wait and block until return) method, so each following set of DHCP-ready ports will wait until the former ones have completed.
14:09 <liuyulong> So the longer a port stays in the waiting queue, the more likely the VM fails to start due to the provisioning block timeout.
14:09 <liuyulong> mlavalle, the base https://review.opendev.org/#/c/659274/.
14:11 <liuyulong> any comments from that perspective?
14:11 <mlavalle> I see your concern but I also see that others have other opinions
14:11 <mlavalle> like slaweq
14:11 <liuyulong> Or am I just over-thinking?
14:12 <mlavalle> I'll go over the comments later today and add my 2 cents
*** ayoung has joined #openstack-meeting14:12
14:12 <liuyulong> mlavalle, cool
14:12 <slaweq> liuyulong: basically You're right IMO
14:12 <slaweq> but
14:13 <slaweq> in the current approach if You have e.g. 127 ports they will be sent in one rpc message, and then a new port will wait a long time until it is sent
14:13 <mlavalle> processed, you meant, right?
14:14 <slaweq> and IMHO if You send those messages in smaller chunks, the work will be spread between many rpc workers on the server side, so overall it shouldn't be a big problem
14:14 <slaweq> mlavalle: right
14:14 <slaweq> and also, the issue here is mostly during a full sync (e.g. after an agent restart)
14:15 <ralonsoh> also we should not forget the main problem addressed in the patch: to solve the RPC timeouts when sending big sets of ports
14:15 <mlavalle> in normal processing it shouldn't be a big issue
14:15 <slaweq> in such a case IMHO it's better to process everything without timeouts and retries and then start "normal" work
14:15 <ralonsoh> and this problem is solved there
*** _alastor_ has joined #openstack-meeting14:15
14:15 <liuyulong> that _dhcp_ready_ports_loop runs in a single thread, no?
14:15 <slaweq> during the agent's normal work this shouldn't be a problem, as usually there are not so many ports to send at once
14:16 <slaweq> liuyulong: yes, it's a single eventlet worker now
14:17 <liuyulong> so, the neutron-server will always process these ports in one worker for one such RPC call from one DHCP agent.
*** enriquetaso has joined #openstack-meeting14:18
14:18 <liuyulong> no matter the chunk size
14:18 <liuyulong> So why is it slow on the neutron-server side?
14:19 <liuyulong> This should be the root cause IMO. I will raise an L3 DB slow-query bug later.
14:19 <liuyulong> maybe something similar
14:20 <slaweq> liuyulong: it is slow because it iterates over all those ports and tries to remove the provisioning block for each port
14:20 <slaweq> and yes, it is slow
14:20 <slaweq> we should think about optimizing it later, but IMHO the patch for the dhcp agent is good
14:23 <liuyulong> Yes, the code is fine to me. Actually I do not think nova or neutron will handle 64+ VMs booting concurrently on one single node very often.
*** raildo has joined #openstack-meeting14:23
*** _erlon_ has quit IRC14:23
14:23 <mlavalle> so, let's move on then and let it go forward
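The batching scheme discussed above — splitting the set of DHCP-ready ports into fixed-size chunks such as [0, 63], [64, 127] and sending each as its own blocking RPC call so no single call grows big enough to time out — can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual agent code under review:

```python
def chunks(items, size):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def notify_ports_ready(rpc_client, ports, batch_size=64):
    """Send ready ports to the server in small batches.

    Each batch is a separate blocking RPC call; smaller batches avoid RPC
    timeouts and let the server spread the work across its RPC workers
    (the trade-off liuyulong and slaweq discuss above).
    """
    for batch in chunks(sorted(ports), batch_size):
        rpc_client.dhcp_ready_on_ports(batch)
```

The downside raised in the meeting is visible here too: later batches only go out after the earlier blocking calls return, so ports at the tail wait longest for their provisioning blocks to clear.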
14:23 <liuyulong> Next
14:23 <liuyulong> #link https://bugs.launchpad.net/neutron/+bug/1833653
14:23 <openstack> Launchpad bug 1833653 in neutron "We should cleanup ipv4 address if keepalived is dead" [Medium,In progress] - Assigned to Yang Li (yang-li)
14:23 <liuyulong> I'm not quite sure if this is a problem that occurs spontaneously.
14:24 <liuyulong> But according to the comments in the fix: https://review.opendev.org/#/c/667071/, Yang Li says `restart the l3-agent too many time`, so it looks like an artificial extreme scenario.
*** heikkine has joined #openstack-meeting14:24
14:26 <liuyulong> IMO, maybe we should add a FLAG for agent restart success, and then remind the user in the DOC not to trigger a restart when no such flag has been raised, or to wait some time and then restart again. But I'm still not quite sure why one would restart again and again.
14:27 <liuyulong> But as he/she said in gerrit, https://bugs.launchpad.net/neutron/+bug/1602320
14:27 <openstack> Launchpad bug 1602320 in neutron "ha + distributed router: keepalived process kill vrrp child process" [Medium,Fix released] - Assigned to He Qing (tsinghe-7)
14:27 <liuyulong> maybe we should take a look at this bug again.
14:27 <liuyulong> Next
14:27 <liuyulong> #link https://bugs.launchpad.net/neutron/+bug/1732067
14:27 <openstack> Launchpad bug 1732067 in neutron "openvswitch firewall flows cause flooding on integration bridge" [High,In progress] - Assigned to LIU Yulong (dragon889)
14:27 <liuyulong> This is an L2 bug IMO
14:28 <liuyulong> We already have a patch here: https://review.opendev.org/#/c/639009/, but it breaks the dvr east-west traffic.
14:28 <liuyulong> So, I raised it here again.
14:28 <liuyulong> The fix may also face the same issue I mentioned last week.
14:29 <liuyulong> Traffic in every L3 scenario will be affected; we cannot test all the scenarios in upstream neutron CI or manually.
14:29 * haleyb wanders in late
14:30 <liuyulong> So a minimal change is required.
14:30 <liuyulong> As a consequence, based on the openflow firewall design, I have an alternative fix: https://review.opendev.org/#/c/666991/. When the bridge tries to flood the packets, we use the dest MAC to direct the traffic to the right OF port, since neutron has full knowledge of every port.
14:31 <ralonsoh> we should use the MAC and the VLAN tag
14:32 <ralonsoh> we can have the same mac in multiple VLANs
14:32 <liuyulong> ralonsoh, good to know, I will refactor the patch for this point.
14:33 <mlavalle> thanks for proposing it
14:33 <mlavalle> let's review it
14:33 <liuyulong> So, from that perspective, we do not touch too many tables of the OF firewall.
14:34 <liuyulong> Only one table now, and only for accepted egress traffic.
*** mattw4 has joined #openstack-meeting14:35
14:35 <liuyulong> I tested DVR, legacy, dvr+ha, dvr_no_external, east-west, floating IPs; all work fine now, no flood on the bridge.
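The "direct instead of flood" idea above — look up the destination MAC plus VLAN tag (per ralonsoh's point that the same MAC can appear in multiple VLANs) and output straight to the right OF port — can be sketched like this. The registry structure and action tuples are purely illustrative, not the code in the actual review:

```python
def build_output_actions(port_registry, dest_mac, vlan_tag):
    """Resolve the output action for a frame on the integration bridge.

    port_registry maps (mac, vlan) -> ofport; neutron can populate it
    because it knows every port's MAC and network. Keying on the VLAN as
    well disambiguates identical MACs on different networks.
    """
    key = (dest_mac.lower(), vlan_tag)
    ofport = port_registry.get(key)
    if ofport is not None:
        return [("output", ofport)]
    # Unknown destination: fall back to the old flooding behaviour.
    return [("flood",)]
```

Only the accepted-egress table would need such a lookup, which matches the "one table, minimal change" goal stated above.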
14:36 <liuyulong> OK, last one from me.
14:36 <liuyulong> #link https://bugs.launchpad.net/neutron/+bug/1834308
14:36 <openstack> Launchpad bug 1834308 in neutron "[DVR][DB] too many slow query during agent restart" [Medium,Confirmed]
14:36 <liuyulong> Recently, during our local testing we met this issue.
14:36 <liuyulong> The restart time is too long. With a large concurrency number, the situation is even worse.
14:37 <liuyulong> The database CPU utilization is almost 100% for every used core.
14:38 <liuyulong> And we counted the slow query logs; some of the queries are triggered 200K+ times.
14:38 <ralonsoh> did you try to profile those DB queries?
14:39 <liuyulong> Have you guys met this issue locally?
14:39 <liuyulong> ralonsoh, yes, I have a fix locally, it basically works.
14:39 <ralonsoh> ok
14:39 <haleyb> the bug is very generic, it seems as if there might be more than one issue here to fix?
14:40 <ralonsoh> btw, I have one more bug
14:40 <liuyulong> We tested restarting the dhcp, metadata and l3 agents on 80 nodes concurrently.
14:41 <liuyulong> No 0.5s+ slow DB queries during such a restart.
14:41 <liuyulong> OK
14:41 <liuyulong> ralonsoh, please, go ahead
14:41 <ralonsoh> thanks
14:41 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/1732458
14:41 <openstack> Launchpad bug 1732458 in neutron "deleted_ports memory leak in dhcp agent" [Medium,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
14:41 <ralonsoh> patch: #link https://review.opendev.org/#/c/521035/
14:42 <ralonsoh> a bit old
*** yamamoto has joined #openstack-meeting14:42
*** yamamoto has quit IRC14:42
14:42 <ralonsoh> I talked to the submitter and he agreed to try another strategy
14:42 <ralonsoh> so I submitted PS6
14:43 <ralonsoh> I'm using FixedIntervalLoopingCall to clean up the deleted_ports variable
*** yamamoto has joined #openstack-meeting14:43
14:43 <ralonsoh> that's all!
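The fix ralonsoh describes — periodically pruning deleted_ports instead of letting it grow forever — can be sketched with a TTL-based cache. The class name, TTL value, and structure are hypothetical, not the actual agent code; in the real patch the cleanup would be driven by oslo.service's FixedIntervalLoopingCall:

```python
import time

class DeletedPortsCache:
    """Track recently deleted port IDs with a timestamp and drop entries
    older than a TTL, so the set no longer leaks memory."""

    def __init__(self, ttl=600):
        self.ttl = ttl
        self._ports = {}  # port_id -> deletion timestamp

    def add(self, port_id, now=None):
        self._ports[port_id] = time.time() if now is None else now

    def cleanup(self, now=None):
        # Intended to run from a FixedIntervalLoopingCall-style periodic task.
        now = time.time() if now is None else now
        for pid, ts in list(self._ports.items()):
            if now - ts > self.ttl:
                del self._ports[pid]

    def __contains__(self, port_id):
        return port_id in self._ports
```

The `now` parameters exist only to make the sketch testable without waiting on wall-clock time.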
14:44 <mlavalle> the description seems too broad. Are you planning to add finer detail to the bug? I know we have slow DB queries....
14:44 <mlavalle> or how do you propose to move forward with it?
14:44 <liuyulong> ralonsoh, thanks for bringing this up.
14:44 <liuyulong> mlavalle, that's why I added [DVR] in the bug title.
14:45 <mlavalle> ok
14:45 <liuyulong> mlavalle, most of them are basically related to DVR DB queries.
14:46 <liuyulong> When restarting the L2 agent (ovs-agent) concurrently on 50-80 nodes, the openvswitch_dvr_agent will also do some DVR-related DB queries.
14:47 <mlavalle> let's add these details to the bug filing. it will be easier to understand and also to review the fix when it is proposed
14:47 <liuyulong> Some of them are really time consuming, and some of them will scan the ports table entirely.
14:48 <liuyulong> That's all the bugs from me. So, are there any other bugs that need the team's attention?
14:48 <haleyb> mlavalle: +1 to that, i'd rather have more bugs that are specific than just one
14:49 <liuyulong> Let's move on.
*** yamamoto has quit IRC14:49
14:49 <liuyulong> #topic Routed Networks
14:49 *** openstack changes topic to "Routed Networks (Meeting topic: neutron_l3)"
14:49 <liuyulong> mlavalle, tidwellr, wwriverrat: your turn now.
*** cheng1 has quit IRC14:49
14:49 <mlavalle> on my part I have the patch deployed in my setup
14:50 <mlavalle> I have to conduct testing
*** yamamoto has joined #openstack-meeting14:50
14:50 <mlavalle> this is OVS centered
14:50 <mlavalle> I got a bit sidetracked over the past few days
*** cheng1 has joined #openstack-meeting14:50
14:50 <mlavalle> but I am coming back to it
14:50 <mlavalle> I am talking about multiple segments per host
14:51 <wwriverrat> As for multi-segment work, I've also been sidetracked with my evil job ;-).
*** yamamoto has quit IRC14:51
14:51 <mlavalle> wwriverrat: oh, you have one of those too?
14:51 <wwriverrat> I have been digesting all of the comments in both the spec as well as the WIP
*** yamamoto has joined #openstack-meeting14:51
*** lpetrut has quit IRC14:52
14:52 <wwriverrat> It looks like there may be new code that also raises an exception for a network with more than one segment
14:52 <wwriverrat> trying to figure out how to get around that
14:52 <wwriverrat> https://review.opendev.org/#/c/633165/20/neutron/plugins/ml2/plugin.py
14:53 <wwriverrat> Not sure if routed networks will pass through that code or not
14:54 <wwriverrat> Longer story shortened: I've been freed up a little to get more work done here.
14:54 <mlavalle> I'll test that part
14:54 <mlavalle> in my setup
14:55 <wwriverrat> I'm hoping to have something around the WIP updated ideally by Monday. An update to the spec hopefully before the end of today.
14:56 <mlavalle> thanks!
14:56 <liuyulong> Routed Networks is a little new to me. And we haven't used it in our environment locally. But it's worth a try if needed. : )
14:57 <liuyulong> We are running out of time.
14:58 <liuyulong> #topic On demand agenda
14:58 *** openstack changes topic to "On demand agenda (Meeting topic: neutron_l3)"
14:58 <liuyulong> Anything else to discuss today?
*** yamamoto has quit IRC14:58
14:58 <liuyulong> OK, happy coding, see you guys online.
14:58 <mlavalle> not from me
14:58 <liuyulong> #endmeeting
14:58 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
14:58 <openstack> Meeting ended Wed Jun 26 14:58:53 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:58 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/neutron_l3/2019/neutron_l3.2019-06-26-13.59.html
14:58 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/neutron_l3/2019/neutron_l3.2019-06-26-13.59.txt
14:58 <ralonsoh> bye
14:58 <openstack> Log:            http://eavesdrop.openstack.org/meetings/neutron_l3/2019/neutron_l3.2019-06-26-13.59.log.html
14:59 <mlavalle> o/
14:59 <mlavalle> liuyulong: I don't know what you did for your connection today, but it was flawless :-)
14:59 <slaweq> o/
*** liuyulong has left #openstack-meeting14:59
*** yamamoto has joined #openstack-meeting14:59
*** mlavalle has left #openstack-meeting15:00
*** Lucas_Gray has quit IRC15:02
*** mattw4 has quit IRC15:05
*** njohnston has quit IRC15:09
*** Lucas_Gray has joined #openstack-meeting15:12
*** brinzhang has quit IRC15:13
*** brinzhang has joined #openstack-meeting15:14
*** njohnston has joined #openstack-meeting15:16
*** psachin has quit IRC15:19
*** whoami-rajat has quit IRC15:22
*** brinzhang has quit IRC15:27
*** diablo_rojo has joined #openstack-meeting15:31
*** whoami-rajat has joined #openstack-meeting15:38
*** armstrong has joined #openstack-meeting15:41
*** ttsiouts has quit IRC15:51
*** ttsiouts has joined #openstack-meeting15:51
*** tssurya has quit IRC15:55
*** ttsiouts has quit IRC15:56
*** mattw4 has joined #openstack-meeting15:57
*** igordc has joined #openstack-meeting15:59
*** woojay has joined #openstack-meeting16:00
16:00 <jungleboyj> #startmeeting Cinder
16:00 <openstack> Meeting started Wed Jun 26 16:00:19 2019 UTC and is due to finish in 60 minutes.  The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00 *** openstack changes topic to " (Meeting topic: Cinder)"
16:00 <openstack> The meeting name has been set to 'cinder'
16:00 <whoami-rajat> Hi
16:00 <enriquetaso> o/
16:00 <jungleboyj> courtesy ping: jungleboyj whoami-rajat rajinir lseki carloss pots woojay erlon geguileo eharney rosmaita enriquetaso e0ne smcginnis davidsha walshh_ xyang hemna _hemna
16:01 <davidsha> o/
16:01 <lseki> o/
16:01 <geguileo> hi! o/
16:01 <smcginnis> o/
16:01 <woojay> hi
16:01 <e0ne> hi
16:01 <walshh_> hi
16:01 <jungleboyj> @!
16:01 <_pewp_> jungleboyj ćƒ¾(-_-;)
*** _erlon_ has joined #openstack-meeting16:02
16:02 <jungleboyj> Ok.  Looks like we have a decent crowd.
16:02 <jungleboyj> Let's get started.
16:02 <_erlon_> Hey
16:03 <jungleboyj> #topic announcements
16:03 *** openstack changes topic to "announcements (Meeting topic: Cinder)"
16:03 <jungleboyj> Don't really have anything major here other than the usual reminder that we are heading toward Milestone 2
16:03 <whoami-rajat> jungleboyj: spec freeze?
16:04 <jungleboyj> Also the friendly reminder to start thinking about the Train Mid-Cycle:
16:04 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning
16:04 <jungleboyj> whoami-rajat:  Yes.
16:04 <jungleboyj> I still need to follow up on trying to get a discounted hotel rate.
16:05 <jungleboyj> I need to go hide somewhere and do spec/driver reviews.
16:06 <jungleboyj> That was all I had for announcements.  Anyone have anything else to mention there?
16:07 <jungleboyj> Ok.  Moving on then.
16:07 <jungleboyj> #topic replication
16:07 *** openstack changes topic to "replication (Meeting topic: Cinder)"
16:07 <jungleboyj> walshh_:  You here?
16:07 <walshh_> yes
16:07 <jungleboyj> The floor is yours.
16:07 <walshh_> just an enquiry on replication_device
16:08 <walshh_> right now there is a 1:1 relationship between backend and replication
16:08 <_erlon_> walshh_: not necessarily
16:08 <walshh_> just wondering whether there is a reason why it could not be an extra spec on the volume type instead
16:08 <_erlon_> it depends on how you implement it
16:09 <walshh_> ok, so it is possible to have multiple replication types per backend right now?
16:09 <_erlon_> walshh_: you choose how many replication_devices you wanna have
16:09 <_erlon_> walshh_: yes
16:10 <_erlon_> let's say you configure [b1]
*** artom has quit IRC16:10
16:10 <_erlon_> [b1]
16:10 <_erlon_> replication_device=repdev1
16:10 <_erlon_> replication_device=repdev2
16:10 <walshh_> but if I have 2 replication types won't both be used for each volume?
16:11 <walshh_> I would like to be able to pick which type I use on a per-volume-type basis
16:11 <_erlon_> I'm not sure exactly what you are trying to achieve
16:12 <_erlon_> so, each volume would be cast to a different device?
16:12 <walshh_> we support 3 different types of replication
16:12 <walshh_> synchronous, asynchronous and metro
16:13 <jungleboyj> So, I would think that you could set up your driver to do that based on an extra spec.
16:13 <_erlon_> walshh_: yes, so at volume creation you want to choose one of them
16:13 <jungleboyj> Then have a different volume type for each replication type.
16:13 <walshh_> correct
16:14 <_erlon_> jungleboyj: +1, yes, your driver needs to understand/parse that extra spec and choose
16:14 <walshh_> volume_type_1 = 'async'
16:14 <walshh_> volume_type_2 = 'sync' etc
16:14 <walshh_> I don't think this is possible right now.
16:15 <_erlon_> yes, don't forget to report that in your backend
16:15 <walshh_> but please correct me if I am wrong
16:15 <_erlon_> it would be something like:
16:16 <_erlon_> cinder type-key rep1 set rep_sync_type=async
16:16 <_erlon_> cinder type-key rep2 set rep_sync_type=sync
16:17 <_erlon_> the backend/pool must report both rep_sync_type values, so the scheduler will not filter it out
16:18 <walshh_> thanks Erlon, perhaps we could take this to openstack-cinder rather than me hijacking the meeting.
16:18 <jungleboyj> walshh_:  ++ Sounds good.
16:18 <jungleboyj> Sounds like just a matter of designing and coding.
16:18 <_erlon_> walshh_: sure, I'll be happy to help
16:18 <jungleboyj> :-)
16:18 <walshh_> great, thanks guys
16:19 <_erlon_> I'm doing the same implementation for our driver :)
16:19 <walshh_> better again!
16:19 <jungleboyj> _erlon_:  Oh, that is good.  Having similar implementations would be good then.
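The extra-spec-based selection _erlon_ sketches above (`cinder type-key rep1 set rep_sync_type=async`) boils down to the driver parsing `rep_sync_type` from the volume type at creation time. A minimal sketch, assuming a hypothetical helper and the three modes walshh_ mentions — this is not the actual Cinder driver interface:

```python
SUPPORTED_REP_TYPES = ("sync", "async", "metro")

def pick_replication_type(extra_specs, default="async"):
    """Choose the replication mode for a new volume from its volume-type
    extra specs. The key name follows the chat example; the default and
    validation policy are assumptions for illustration."""
    rep_type = extra_specs.get("rep_sync_type", default)
    if rep_type not in SUPPORTED_REP_TYPES:
        raise ValueError("unsupported rep_sync_type: %s" % rep_type)
    return rep_type
```

As _erlon_ notes, the backend must also report the supported `rep_sync_type` values in its capabilities so the scheduler does not filter it out before the driver ever sees the volume.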
16:19 <jungleboyj> walshh_:  The next topic is yours as well.
16:20 <walshh_> yes, we had offered to test the FC portion of this blueprint
16:20 <jungleboyj> #topic Validate Volume WWN Upon Connect
16:20 *** openstack changes topic to "Validate Volume WWN Upon Connect (Meeting topic: Cinder)"
16:20 <jungleboyj> #link https://blueprints.launchpad.net/cinder/+spec/validate-volume-wwn-upon-connect
*** belmoreira has quit IRC16:21
16:21 <walshh_> We can still help out with this if still required
16:21 <jungleboyj> Hmmm, good question.
16:21 <jungleboyj> avishay worked on that a bit and then disappeared.
16:22 <walshh_> I'm not sure if anything was done in os-brick for FC yet
16:22 <jungleboyj> geguileo:  Do you know anything about this?
16:22 <geguileo> the WWN validation on connection?
16:23 <jungleboyj> Yes, for FC.
16:23 <geguileo> I remember the general idea
16:23 <geguileo> but I don't remember the status
16:23 <jungleboyj> We got changes in place for iSCSI but I don't think that any follow-up happened for FC.
16:24 <geguileo> we probably didn't add the feature there  :-(
16:24 <walshh_> anyway, the offer still holds, if required.  Just thought I would follow up
16:25 <jungleboyj> Ok.  I could ping Avishay and see if there was any plan to do so.
16:27 <jungleboyj> I can send him a note and see if we get any response.
16:27 <walshh_> thanks, that is all I had
16:27 <jungleboyj> #action jungleboyj to follow-up with Avishay.
16:27 <hemna> would it help to have os-brick report the wwn of the volume in the connector?
16:28 <jungleboyj> I don't remember the details there.
16:30 <jungleboyj> So, the change was to check the WWN if it was provided in the connector.
16:31 <jungleboyj> #link https://review.opendev.org/#/c/633676
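The check jungleboyj describes — validate the WWN only when the backend supplied one — amounts to comparing the expected WWN from the connection info against the one discovered on the attached device. A purely illustrative sketch; this is not the actual os-brick API or the code in the linked review:

```python
def wwn_matches(expected_wwn, discovered_wwn):
    """Return True if the discovered device WWN matches what the backend
    reported; a mismatch suggests the wrong LUN was attached.

    (Hypothetical helper for illustration only.)
    """
    if not expected_wwn:
        # Backend did not report a WWN: nothing to validate, accept.
        return True
    return expected_wwn.lower() == discovered_wwn.lower()
```

The open question in the meeting was whether the equivalent check was ever wired up for Fibre Channel attaches, since only the iSCSI path got changes before the revert.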
16:32 <jungleboyj> Anyway.
16:32 <jungleboyj> #topic open discussion
16:32 *** openstack changes topic to "open discussion (Meeting topic: Cinder)"
16:33 <whoami-rajat> i guess this was reverted here https://review.opendev.org/#/c/642648/
16:33 <jungleboyj> Does anyone have other topics for today?
16:33 <whoami-rajat> jungleboyj:  ^
16:33 <jungleboyj> whoami-rajat:  Oh yeah.  I forgot that.
16:33 <jungleboyj> That would explain why there hasn't been any follow-up.
16:34 <hemna> yah, seems this needs some more investigation/work
16:34 <jungleboyj> hemna:  ++
*** tesseract has quit IRC16:35
16:35 <hemna> ceph-iscsi update?
16:36 <jungleboyj> hemna:  Sure!
16:36 <hemna> https://review.opendev.org/#/c/667232/
16:36 <hemna> so I've been working on that for a few days now
16:36 <hemna> that's the devstack-plugin-ceph patch to do all the work to install all the requirements
16:36 <hemna> for getting ceph-iscsi installed and configured in devstack
16:37 <hemna> been running into various things
16:37 <hemna> like, even though ubuntu's 4.15 kernel says it has the modules needed to run tcmu-runner, it doesn't work
16:37 <hemna> I think the ceph-iscsi team had patches against the required modules that only landed in the kernel in 4.16
16:38 <hemna> so I'm just ignoring that for now
16:38 <hemna> and assuming that we'll find a way to make it work in CI
16:38 <hemna> so in the meantime I've gotten most of the devstack plugin working
16:38 <hemna> but just ran into an ipv6 issue
16:39 <hemna> evidently the rbd-target-gw and rbd-target-api only work on systems that have ipv6 enabled.
16:39 <hemna> which my vm doesn't
16:39 <hemna> does anyone know if our CI images have ipv6 ?
16:39 <smcginnis> I believe so.
16:39 <hemna> I couldn't get neutron to start in a vm where ipv6 was enabled on the host os and guest os
16:40 <hemna> ran into a known bug with setting some socket permissions
16:40 <clarkb> you have to enable RAs
16:40 <hemna> RAs ?
16:41 <clarkb> router advertisements. at least for the nodes zuul uses, the default with linux, if a node sends RAs itself, is to not accept RAs
16:41 <clarkb> and neutron sends RAs, so you have to ask the linux kernel nicely to accept RAs as well via sysctl
16:41 <clarkb> (I don't remember which sysctl it is though)
16:41 <hemna> ok, I have no clue what any of that is
16:41 * hemna is not a networking guy
16:41 <clarkb> router advertisements are how nodes discover their IPv6 address
16:41 <hemna> neutron is a black hole to me
16:42 <clarkb> so your test VM likely needs to accept them to learn its address; then separately, neutron sends them for the networks it manages
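The sysctl clarkb could not recall is most likely the kernel's `accept_ra` family of knobs; a sketch of what a test VM might set, with a made-up file name (value 2 means accept router advertisements even when IPv6 forwarding is enabled — treat the exact interfaces and values as assumptions, not the verified fix):

```
# /etc/sysctl.d/99-accept-ra.conf (hypothetical file name)
net.ipv6.conf.all.accept_ra = 2
net.ipv6.conf.default.accept_ra = 2
```

These would be applied with `sysctl --system` or at boot.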
hemnaI worked around it by disabling ipv616:42
clarkbbut ya ipv6 should work fine on the zuul jobs16:42
clarkbwe run many ipv6 tests16:42
hemnaas ipv6 seems like a pain anyway16:42
hemnaok, so I'll just hope/pray that the ipv6 requirements for the ceph-iscsi daemons will work there16:43
hemna:)16:43
hemnawe'll find out when CI runs I suppose16:43
hemnaok, so I'll try and wrap up the devstack plugin work today or tomorrow16:43
hemnaand then figure out how to get CI firing that up to see if it works16:44
hemnathen we can CI the driver patch too16:44
hemnathat's the plan at least.16:44
hemnajungleboyj: that's it from me.16:45
whoami-rajatgeguileo: quick ques (since i couldn't find you in #openstack-cinder), cinderlib tests run in cinder-tempest-dsvm-lvm-lio-barbican job right?16:45
jungleboyjSounds good.  Thank you for continuing to work on that.16:45
geguileowhoami-rajat: sorry, I had problems with the bouncer and it didn't rejoin the cinder channel  :-(16:46
geguileowhoami-rajat: that is correct16:47
whoami-rajatgeguileo: oh no problem16:47
geguileowhoami-rajat: it is running on LVM and Ceph CI jobs16:47
jungleboyjOk.  Anything else we need to cover today?16:49
whoami-rajatgeguileo: i figured out some of the cinderlib tests failed since i constrained the volume_type_id field to NOT NULL in db, so the cinderlib  tests doesn't call c-api to create_volume but rather use cinderlib's defined method in gate jobs right?16:50
geguileowhoami-rajat: yes, that is correct16:50
geguileowhoami-rajat: that would require a change to cinderlib as well16:51
geguileowhoami-rajat: what's the review?16:51
whoami-rajatgeguileo: https://review.opendev.org/#/c/639180/16:51
whoami-rajatgeguileo: ok, that's what i thought. Thanks for the help.16:52
whoami-rajatgeguileo: tests failing http://logs.openstack.org/80/639180/22/check/cinder-tempest-dsvm-lvm-lio-barbican/dcc733e/job-output.txt.gz#_2019-06-26_13_31_35_59597416:52
jungleboyjAnything else?16:54
jungleboyj:-)16:54
geguileowhoami-rajat: yeah, the issue is like you said, because you've restricted it to be non-null16:54
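The failure mode discussed above can be reproduced in miniature: once a column is made NOT NULL at the schema level, any writer that inserts rows without supplying that column (as a direct, non-API path like cinderlib's would) hits the constraint. A minimal sketch with an in-memory sqlite table; the schema here is illustrative, not Cinder's real volumes table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE volumes (id TEXT PRIMARY KEY, volume_type_id TEXT NOT NULL)"
)

# A writer that goes through the API path supplies a default type, so it works:
conn.execute("INSERT INTO volumes VALUES ('v1', 'default-type')")

# A direct writer that predates the constraint omits the column and fails:
try:
    conn.execute("INSERT INTO volumes (id) VALUES ('v2')")
    failed = False
except sqlite3.IntegrityError:
    failed = True

print(failed)  # True: the NOT NULL constraint rejects the row
```

Which is why the fix needs a matching change on the cinderlib side, not just the migration.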
whoami-rajatjungleboyj: sorry for holding the meeting, that's all from my side :)16:54
jungleboyjwhoami-rajat:  No worries.  It is Open Discussion16:54
whoami-rajatgeguileo: thanks :)16:54
jungleboyjIf no one else has topics I will wrap up the meeting.16:55
jungleboyjThanks everyone for joining.16:55
jungleboyjGo forth and review.16:55
jungleboyj#endmeeting16:55
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"16:55
openstackMeeting ended Wed Jun 26 16:55:54 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:55
openstackMinutes:        http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-06-26-16.00.html16:55
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-06-26-16.00.txt16:55
openstackLog:            http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-06-26-16.00.log.html16:55
*** woojay has left #openstack-meeting16:56
*** davidsha has quit IRC16:56
*** e0ne has quit IRC16:58
*** ricolin has quit IRC17:02
*** yamamoto_ has joined #openstack-meeting17:04
*** Lucas_Gray has quit IRC17:04
*** yamamoto_ has quit IRC17:05
*** yamamoto_ has joined #openstack-meeting17:06
*** yamamoto has quit IRC17:06
*** zbr|ruck is now known as zbr17:18
*** kopecmartin is now known as kopecmartin|off17:24
*** rf0lc0 is now known as rfolco17:30
*** panda has quit IRC17:31
*** panda has joined #openstack-meeting17:35
*** pleia2_ has joined #openstack-meeting17:49
*** zerick_ has joined #openstack-meeting17:50
*** e0ne has joined #openstack-meeting17:51
*** klindgren_ has joined #openstack-meeting17:52
*** pleia2 has quit IRC17:53
*** zerick has quit IRC17:53
*** klindgren has quit IRC17:53
*** yamamoto_ has quit IRC17:53
*** yamamoto has joined #openstack-meeting17:54
*** altlogbot_2 has quit IRC17:55
*** altlogbot_1 has joined #openstack-meeting17:59
*** altlogbot_1 has quit IRC18:01
*** altlogbot_2 has joined #openstack-meeting18:02
*** e0ne has quit IRC18:29
*** hongbin has joined #openstack-meeting18:33
*** ociuhandu has quit IRC18:36
*** e0ne has joined #openstack-meeting18:44
diablo_rojoStoryboard won't be having a regular meeting this week so if you have questions/comments/concerns, please join us in #storyboard!19:08
*** e0ne has quit IRC19:08
*** dklyle has quit IRC19:09
*** wwriverrat has left #openstack-meeting19:15
*** ralonsoh has quit IRC19:17
*** phughk has quit IRC19:18
*** whoami-rajat has quit IRC19:22
*** dklyle has joined #openstack-meeting19:36
*** panda has quit IRC19:39
*** panda has joined #openstack-meeting19:40
*** altlogbot_2 has quit IRC19:45
*** altlogbot_3 has joined #openstack-meeting19:47
*** mriedem has quit IRC20:04
*** mriedem has joined #openstack-meeting20:07
*** diablo_rojo has quit IRC20:09
*** enriquetaso has quit IRC20:14
*** altlogbot_3 has quit IRC20:15
*** eharney has quit IRC20:16
*** altlogbot_1 has joined #openstack-meeting20:19
*** altlogbot_1 has quit IRC20:42
*** altlogbot_1 has joined #openstack-meeting20:45
*** alecuyer has joined #openstack-meeting20:59
*** efried has left #openstack-meeting21:00
timburke#startmeeting swift21:00
openstackMeeting started Wed Jun 26 21:00:17 2019 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: swift)"21:00
openstackThe meeting name has been set to 'swift'21:00
timburkewho's here for the swift meeting?21:00
*** altlogbot_1 has quit IRC21:00
claygo/21:00
mattoliverauo/21:00
alecuyerhello21:00
timburkenot too much on the agenda21:01
timburke#link https://wiki.openstack.org/wiki/Meetings/Swift21:01
timburke#topic Shanghai CFP21:02
*** openstack changes topic to "Shanghai CFP (Meeting topic: swift)"21:02
timburkefirst, just a heads-up21:02
timburkethe call for proposals for summit talks ends in about a week21:02
timburkethat and other details were on the mailing list21:03
timburke#link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007373.html21:03
timburkethere isn't a dedicated "Storage" track or anything, but i feel like we could tell some interesting stories under "Private & Hybrid Cloud" or "Public Cloud" (which are awfully broad topics)21:03
timburkeor maybe even "Open Development", "AI, Machine Learning & HPC", or "Container Infrastructure"21:03
timburkeso, if you've got something you might like to present on, you've got until July 2 to submit it!21:04
timburkeany questions on the summit?21:05
mattoliverauYeah, I assume it'll be asia heavy, specifically china (obviously); usually that community is quiet, so I wonder if we'll see any interesting swift talks from them21:05
*** altlogbot_1 has joined #openstack-meeting21:05
mattoliverauwell, quiet through the usual channels we use21:05
*** pcaruana has quit IRC21:05
timburkeme too. and apparently "Sessions will be presented in both Mandarin and English, so you may submit your presentation in either language." which is also interesting21:06
*** tdasilva has joined #openstack-meeting21:06
tdasilvahello, sorry i'm late21:06
mattoliverautdasilva: o/21:06
timburkenext up, updates!21:07
timburke#topic py321:07
*** openstack changes topic to "py3 (Meeting topic: swift)"21:07
timburkewe finished porting the unit tests!21:07
mattoliverau\o/21:07
timburkethere are still a handful of patches to get py2 func tests passing against py3 services though21:07
alecuyerbravo :)21:07
timburkei updated the set of patches on the priority reviews wiki21:08
timburke#link https://wiki.openstack.org/wiki/Swift/PriorityReviews21:08
timburkethere *is* a patch to actually have py2 tests against py3 running in the gate though!21:08
mattoliverauSorry, it's hack week this week at SUSE; we're not supposed to do normal work and can instead work on whatever we want. So I haven't really gone and reviewed much py3. Had a few distractions with the girls and flu shots but otherwise have been focusing on auto sharding because HACKWEEK21:10
timburkeso -- how are people feeling regarding a release? should i continue waiting for py2 func tests passing, or should i known-issues the last few things?21:10
timburkeyay autosharding! i saw those patches go by, i'm excited :-)21:10
timburke(fwiw i'm already planning on known-issuing the ECAppIter bug)21:11
mattoliverauWell it would be nice to have a way of func testing.21:12
mattoliverauis the first py3 release going to be marked as experimental?21:13
timburkeyeah. and i kinda want to push people to try it out in devstack around the same time as i announce the experimental support...21:13
timburkeyeah, that's my plan21:13
mattoliverauok, if it's experimental then func tests are only nice to have.21:14
mattoliverauis it just the 4 patches to get py2 - py3 func working?21:15
timburkeidk -- like, there's a workaround for https://bugs.launchpad.net/swift/+bug/1691075 but it'd be nicer if it just *worked*...21:16
openstackLaunchpad bug 1691075 in OpenStack Object Storage (swift) "swift-object-server fails to start in python 3.5 environment with devstack" [Undecided,Confirmed]21:16
timburkeyeah. https://review.opendev.org/#/c/665494/ to address ^^^21:16
timburkehttps://review.opendev.org/#/c/653548/ to clean up some lingering memcache issues (that i think relate to how keystonemiddleware/keystoneauth use it?)21:17
timburkehttps://review.opendev.org/#/c/662546/ to work around a cpython bug related to non-ascii headers21:17
timburkeand https://review.opendev.org/#/c/666942/ to actually have the full suite running in the gate21:18
timburkei think those are roughly in order of importance -- if i could get the first two, i'm more willing to known-issues the third (which is hairier anyway)21:19
timburkecan anyone commit to reviewing one or both of those top two?21:19
mattoliverauwell I think having some kind of func tests happening in py3 would be awesome, especially as we add new Swift code. So how about we focus on the py2 func tests testing py3 swift, at least give it until the end of this week/early next week, and then cut a release?21:19
mattoliverauI'll take a look at them, I do have more time (HACKWEEK!)21:20
timburkethanks mattoliverau!21:20
timburkesounds like a plan: release early next week21:20
timburkethen we can move on to getting func tests ported ;-)21:21
mattoliverau+121:21
timburke#topic lots of small files21:21
*** openstack changes topic to "lots of small files (Meeting topic: swift)"21:21
timburkealecuyer, how's it going?21:21
alecuyerI wrote more tests,21:22
alecuyerfound some issues (heh!), things like, if fsck removes the "wrong" file, you're not able to write to a partition until the operator fixes it21:22
*** ianychoi has quit IRC21:23
timburkeeep! good thing we're finding them :-)21:23
alecuyerSo, some fixes, and I had to refactor some things for testing. I'd like to stop adding new tests for that patch because it's getting large, so I'll try to clean it up and "finish" it before I write tests for more things21:23
*** Lucas_Gray has joined #openstack-meeting21:24
alecuyertimburke:  Yes :) it's not something that I expect would happen often but we do know that if it can happen, it will ;) so, that's it21:24
timburkethat sounds like a plan21:25
timburkeanything the rest of us can do to help you?21:25
mattoliveraunice find, capturing all edge cases is hard/impossible. great to find and know about these kind of issues :)21:25
alecuyerI think for this review it's best to wait a little because I know it's not ready21:26
*** rsimai_away has quit IRC21:26
alecuyermaybe if anyone wants to think about grpc / eventlet and find something i couldn't21:26
timburkei can try to do that. i've been digging around in eventlet a bit lately anyway21:27
timburkei'll also plan on merging from master sometime this week (hopefully today)21:27
alecuyergreat, thanks!21:27
timburkebe warned, now you'll *have* to make things pass on py3 ;-)21:27
mattoliveraulol21:28
alecuyerheh yes I know I'll have some work there… :-)21:28
timburke#topic open discussion21:28
*** openstack changes topic to "open discussion (Meeting topic: swift)"21:28
timburkethat's it for the agenda; does anyone have anything else to bring up?21:28
mattoliverauAs mentioned earlier I've been working on autosharding.21:28
*** ianychoi has joined #openstack-meeting21:28
mattoliverau#link https://review.opendev.org/#/c/66703021:28
mattoliverauand21:28
mattoliverau#link https://review.opendev.org/#/c/66757921:29
mattoliverauso far.21:29
*** zaitcev_ has joined #openstack-meeting21:29
mattoliverauit's still very rough and a WIP21:29
mattoliverauworking on a bit of patch chain to get something up21:29
timburkeoh! and speaking of sharding, i started thinking about caching shard ranges21:30
mattoliverauthe latter one is just something I pushed up to get it off my machine late last night.21:30
timburke#link https://review.opendev.org/#/c/667203/21:30
mattoliverautimburke: yeah I saw that and plan to look more closely at it, great work21:30
*** raildo has quit IRC21:30
mattoliverau667579 has a large commit message ramble that might be interesting21:31
timburkei started with just the updating set -- listings were left for future work -- i mainly wanted to get the container GET out of the object write path21:31
mattoliverauie. currently we use the OP tool to scan all ranges and then push all of them in to start sharding, whereas the existing auto sharding (we only use in tests) has been designed to scan X at a time per round21:31
timburkecool21:33
mattoliverauI wonder, to simplify and decrease the potential known edge cases, if we should do something similar. Ie the leader scans for ALL ranges. That way we can bail on inserting them (on the second election, ie am I still the leader) if any of the other primaries are in the SHARDING or SHARDED state21:33
mattoliverauie, the OP has added the ranges or there was a weird election issue etc.21:34
mattoliverauAnyway. We have other larger priorities, so just wanted to bring it to people's attention21:35
mattoliverauI'll push along and get something to play with up21:36
timburkei know that we (SwiftStack) always do all ranges at once -- i'd be ok going either way, i think. the one nice thing about just doing a few at a time is that you're less likely to do a bunch of work and then discard the result21:36
mattoliverauyeah21:36
*** ykatabam has joined #openstack-meeting21:36
timburkeoh -- swiftclient's gate is currently busted! https://review.opendev.org/#/c/667477/ is a step in the right direction, but needs more work :-/21:36
mattoliverauanyway, that's all I got atm.21:38
timburkeanyone have anything else to bring up?21:38
alecuyerUnrelated to swift dev but I'll have to help our ML team at OVH with working with swift (with the "JOSS" java library). If there are nice use cases there I'll let you know :)21:38
zaitcev_2019-06-26 10:21:50.802482 | ubuntu-bionic |    Requirement(package='sphinx', location='', specifiers='!=1.6.6,!=1.6.7,!=2.1.0', markers="python_version>='3.4'", comment='# BSD', extras=frozenset())21:38
zaitcev_2019-06-26 10:21:50.802588 | ubuntu-bionic | WARNING: possible mismatch found for package "sphinx"21:38
zaitcev_step in the right direction indeed21:38
timburkealecuyer, cool! sounds like the sort of thing that could turn into a summit talk ;-)21:39
timburkethanks zaitcev_21:39
timburkei think there's also something that needs to happen related to https://review.opendev.org/#/c/588103/ :-/21:40
timburke(wherein we switched to using auth_uri in the test config)21:40
*** diablo_rojo has joined #openstack-meeting21:40
timburkeany other topics?21:42
*** panda has quit IRC21:42
timburkethen i think we'll break a bit early21:43
claygnoice21:43
timburkethank you all for coming, and thank you for working on swift!21:43
timburke#endmeeting21:43
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"21:43
openstackMeeting ended Wed Jun 26 21:43:37 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:43
openstackMinutes:        http://eavesdrop.openstack.org/meetings/swift/2019/swift.2019-06-26-21.00.html21:43
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/swift/2019/swift.2019-06-26-21.00.txt21:43
openstackLog:            http://eavesdrop.openstack.org/meetings/swift/2019/swift.2019-06-26-21.00.log.html21:43
*** zaitcev_ has left #openstack-meeting21:43
*** alecuyer has left #openstack-meeting21:43
mattoliveraunice thanks timburke21:44
*** panda has joined #openstack-meeting21:45
*** tdasilva_ has joined #openstack-meeting21:52
*** tdasilva has quit IRC21:54
*** rcernin has joined #openstack-meeting22:01
*** Lucas_Gray has quit IRC22:05
*** shilpasd has quit IRC22:11
*** eharney has joined #openstack-meeting22:12
*** rcernin has quit IRC22:19
*** rcernin has joined #openstack-meeting22:20
*** heikkine has quit IRC22:26
*** mattw4 has quit IRC22:37
*** diablo_rojo has quit IRC22:44
*** armstrong has quit IRC22:51
*** diablo_rojo has joined #openstack-meeting23:07
*** ykatabam is now known as yuko|brb23:18
*** carloss has quit IRC23:23
*** igordc has quit IRC23:25
*** mriedem has quit IRC23:42
*** hongbin has quit IRC23:43
*** slaweq has quit IRC23:50
*** yuko|brb is now known as ykatabam23:51
*** bobh has joined #openstack-meeting23:54

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!