Wednesday, 2020-06-17

*** gyee has quit IRC00:01
*** mlavalle has quit IRC00:15
*** diurnalist has joined #openstack-meeting00:17
*** jmasud has joined #openstack-meeting00:20
*** yasufum has joined #openstack-meeting00:22
*** yasufum has quit IRC00:27
*** yasufum has joined #openstack-meeting00:27
*** armax has quit IRC00:33
*** yamamoto has quit IRC00:35
*** yamamoto has joined #openstack-meeting00:35
*** tetsuro has joined #openstack-meeting00:42
*** moguimar has quit IRC00:47
*** cheng1 has quit IRC01:14
*** Liang__ has joined #openstack-meeting01:15
*** cheng1 has joined #openstack-meeting01:17
*** yasufum has quit IRC01:43
*** yasufum has joined #openstack-meeting01:45
*** armax has joined #openstack-meeting01:52
*** TusharTgite has joined #openstack-meeting01:54
*** ricolin has joined #openstack-meeting02:01
*** Lucas_Gray has quit IRC02:12
*** jmasud has quit IRC02:18
*** jmasud has joined #openstack-meeting02:36
*** yasufum has quit IRC02:38
*** apetrich has quit IRC02:42
*** psachin has joined #openstack-meeting02:48
*** rcernin has quit IRC02:49
*** yasufum has joined #openstack-meeting02:52
*** rcernin has joined #openstack-meeting02:55
*** TusharTgite has quit IRC03:03
*** rcernin has quit IRC03:08
*** yasufum has quit IRC03:08
*** TusharTgite has joined #openstack-meeting03:08
*** jmasud has quit IRC03:24
*** jmasud has joined #openstack-meeting03:25
*** eharney has quit IRC03:30
*** psachin has quit IRC03:31
*** psachin has joined #openstack-meeting03:32
*** armax has quit IRC03:40
*** rcernin has joined #openstack-meeting03:50
*** eharney has joined #openstack-meeting03:52
*** jmasud has quit IRC04:05
*** jmasud has joined #openstack-meeting04:06
*** yamamoto has quit IRC04:19
*** Liang__ has quit IRC04:19
*** Liang__ has joined #openstack-meeting04:23
*** yasufum has joined #openstack-meeting04:26
*** markvoelker has joined #openstack-meeting04:27
*** evrardjp has quit IRC04:33
*** markvoelker has quit IRC04:33
*** evrardjp has joined #openstack-meeting04:33
*** yamamoto has joined #openstack-meeting04:35
*** jmasud has quit IRC04:40
*** vishalmanchanda has joined #openstack-meeting04:42
*** andrebeltrami has quit IRC04:49
*** diablo_rojo has quit IRC04:53
*** jmasud has joined #openstack-meeting05:03
*** diurnalist has quit IRC05:10
*** jmasud has quit IRC05:16
*** yasufum has quit IRC05:19
*** jmasud has joined #openstack-meeting05:19
*** yasufum has joined #openstack-meeting05:32
*** manpreet has joined #openstack-meeting05:59
*** yasufum has quit IRC06:03
*** yasufum has joined #openstack-meeting06:06
*** yasufum has quit IRC06:47
*** ircuser-1 has joined #openstack-meeting06:55
*** rpittau|afk is now known as rpittau06:56
*** slaweq has joined #openstack-meeting07:00
*** yasufum has joined #openstack-meeting07:05
*** diurnalist has joined #openstack-meeting07:07
*** jmasud has quit IRC07:08
*** apetrich has joined #openstack-meeting07:11
*** ttsiouts has joined #openstack-meeting07:13
*** psachin has quit IRC07:15
*** psachin has joined #openstack-meeting07:17
*** dklyle has quit IRC07:25
*** diurnalist has quit IRC07:40
*** rcernin has quit IRC07:47
*** ttsiouts has quit IRC07:49
*** ttsiouts has joined #openstack-meeting07:57
*** ralonsoh has joined #openstack-meeting07:57
*** Liang__ has quit IRC08:01
*** maciejjozefczyk has joined #openstack-meeting08:02
*** Liang__ has joined #openstack-meeting08:02
*** maciejjozefczyk has quit IRC08:03
*** e0ne has joined #openstack-meeting08:15
*** Lucas_Gray has joined #openstack-meeting08:24
*** cheng1 has quit IRC08:34
*** Lucas_Gray has quit IRC08:35
*** cheng1 has joined #openstack-meeting08:36
*** Lucas_Gray has joined #openstack-meeting08:36
*** jmasud has joined #openstack-meeting08:38
*** manpreet has quit IRC08:38
*** cheng1 has quit IRC08:44
*** cheng1 has joined #openstack-meeting08:50
*** priteau has joined #openstack-meeting08:50
*** ociuhandu has quit IRC08:51
*** links has joined #openstack-meeting08:55
*** e0ne has quit IRC09:02
*** e0ne has joined #openstack-meeting09:08
*** apetrich has quit IRC09:17
*** cheng1 has quit IRC09:24
*** apetrich has joined #openstack-meeting09:26
*** yasufum has quit IRC09:26
*** cheng1 has joined #openstack-meeting09:26
*** Liang__ has quit IRC09:33
*** yaawang_ has quit IRC09:40
*** diurnalist has joined #openstack-meeting09:40
*** e0ne has quit IRC09:47
*** e0ne_ has joined #openstack-meeting09:47
*** e0ne_ has quit IRC09:51
*** e0ne has joined #openstack-meeting09:52
*** yamamoto has quit IRC10:01
*** yamamoto has joined #openstack-meeting10:02
*** yamamoto has quit IRC10:02
*** oneswig has joined #openstack-meeting10:06
*** oneswig has left #openstack-meeting10:07
*** oneswig has joined #openstack-meeting10:10
*** diurnalist has quit IRC10:10
*** yaawang_ has joined #openstack-meeting10:17
*** rpittau is now known as rpittau|bbl10:18
*** ttsiouts has quit IRC10:22
*** ttsiouts has joined #openstack-meeting10:23
*** TusharTgite has quit IRC10:26
*** ttsiouts has quit IRC10:27
*** psachin has quit IRC10:31
*** sluna has joined #openstack-meeting10:31
*** oneswig_ has joined #openstack-meeting10:38
*** yamamoto has joined #openstack-meeting10:38
*** jmasud has quit IRC10:39
*** yamamoto has quit IRC10:39
*** yamamoto has joined #openstack-meeting10:40
*** oneswig has quit IRC10:41
*** e0ne has quit IRC10:42
*** e0ne has joined #openstack-meeting10:43
*** e0ne has quit IRC10:44
*** e0ne has joined #openstack-meeting10:45
*** e0ne has quit IRC10:46
*** e0ne has joined #openstack-meeting10:46
*** Lucas_Gray has quit IRC10:49
*** yamamoto has quit IRC10:50
*** noggin143 has joined #openstack-meeting10:56
*** yamamoto has joined #openstack-meeting11:00
oneswig_#startmeeting scientific-sig11:00
openstackMeeting started Wed Jun 17 11:00:41 2020 UTC and is due to finish in 60 minutes.  The chair is oneswig_. Information about MeetBot at http://wiki.debian.org/MeetBot.11:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.11:00
*** openstack changes topic to " (Meeting topic: scientific-sig)"11:00
openstackThe meeting name has been set to 'scientific_sig'11:00
oneswig_hi all11:01
oneswig_#link agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_June_17th_202011:01
*** zeestrat has joined #openstack-meeting11:01
oneswig_#topic OpenStack and COVID19 workloads11:02
*** openstack changes topic to "OpenStack and COVID19 workloads (Meeting topic: scientific-sig)"11:02
*** ttsiouts has joined #openstack-meeting11:02
*** psachin has joined #openstack-meeting11:03
oneswig_It seems like a lot of organisations are working to support workloads for COVID, in various forms11:03
*** cheng1 has quit IRC11:04
verdurinHello - am partially here.11:04
noggin143https://home.cern/news/news/cern/cern-contributes-computers-combatting-covid-1911:04
noggin143Running Folding@HOME and Rosetta@HOME on hardware about to be retired11:05
*** cheng1 has joined #openstack-meeting11:05
oneswig_Not heard of Rosetta@home - what is that?11:05
noggin143https://boinc.bakerlab.org/11:06
noggin143"With the recent COVID-19 outbreak, R@h has been used to predict the structure of proteins important to the disease as well as to produce new, stable mini-proteins to be used as potential therapeutics and diagnostics, like the one displayed above which is bound to part of the SARS-CoV-2 spike protein."11:06
*** ttsiouts has quit IRC11:08
*** Zama8152 has joined #openstack-meeting11:08
noggin143pretty simple cloud-init script to start a new VM - https://clouddocs.web.cern.ch/using_openstack/contextualisation.html#install-the-folding-home-client11:08
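For readers who want to try the same approach, the sketch below boots such a worker VM with cloud-init user data via the openstacksdk Python client. It is only an illustration of the idea, not CERN's actual script from the page linked above: the cloud name, image, flavor, network and the user-data contents are all placeholders.

    import base64
    import openstack

    # Credentials come from clouds.yaml / environment; "mycloud" is a placeholder.
    conn = openstack.connect(cloud="mycloud")

    # Minimal cloud-config user data; the command below is a stub, not the real
    # contents of the CERN contextualisation doc linked above.
    user_data = """#cloud-config
    runcmd:
      # Install and start the Folding@home client here; see the CERN doc
      # linked above for the real commands and configuration options.
      - echo "folding-at-home client setup goes here" >> /var/log/context.log
    """

    image = conn.compute.find_image("ubuntu-20.04")     # placeholder image name
    flavor = conn.compute.find_flavor("m1.large")       # placeholder flavor name
    network = conn.network.find_network("internal")     # placeholder network name

    server = conn.compute.create_server(
        name="fah-worker-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
        # The compute API expects base64-encoded user data.
        user_data=base64.b64encode(user_data.encode()).decode(),
    )
    conn.compute.wait_for_server(server)
    print("booted", server.name)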
priteauNSF-funded infrastructures can also accept workloads related to COVID-19, I saw it mentioned through Chameleon: https://www.chameleoncloud.org/blog/2020/03/13/chameleon-use-covid-19-projects/11:08
oneswig_Good article on the CERN blog - how many work units have been completed I wonder!11:09
noggin143Stats at https://stats.foldingathome.org/teams-monthly11:09
noggin143but it is important to not overload the volunteer organisations, we have some spare CPU time at the moment because the team doing hardware work is only just back on site11:10
noggin143However, it's important that the core volunteers don't get displaced by this temporary contribution11:10
noggin143We're also in discussions with the WHO to see how we can help as they are just down the road from us in Geneva11:11
*** ociuhandu has joined #openstack-meeting11:12
oneswig_verdurin: are you close to the source of research in Oxford?11:12
verdurinoneswig_: yes, there's a lot going on here.11:13
noggin143verdurin: we're running F@H at the moment but if there is a better application to run, I can put you in touch with the CERN COVID folk11:15
oneswig_Aside from simulation work, what about the epidemiology, public health, contact tracing etc.11:18
verdurinnoggin143: thanks. As usual there is a myriad of different applications in use.11:18
verdurinThere is also the RECOVERY clinical trial, and the vaccine trial that originates from a couple of buildings down.11:20
oneswig_I recall from somewhere that protein simulations don't have significant data requirements, does that also apply for your workloads Adam?11:20
verdurinA lot of the workloads I know about are similar to our normal genomic ones, hence they do have significant data requirements.11:21
verdurinThere is also demand from data generators e.g. sequencing, proteomics.11:23
noggin143I guess there are also privacy concerns for some of the applications, which don't apply for the volunteer projects like F@H11:23
verdurinYes. In some cases various flavours of patient data.11:23
*** cheng1 has quit IRC11:23
oneswig_All of which make them difficult to spread to other places.11:23
oneswig_In the SIG session at the PTG there was reference to this tracking project in India - https://www.aarogyasetu.gov.in/11:24
oneswig_Prakash are you here?11:25
verdurinOne aspect that may be of interest here is provisioning of resources such as RStudio Server on cloud instances, where in the past dedicated nodes were used.11:25
*** ttsiouts has joined #openstack-meeting11:25
oneswig_verdurin: is that licensed software?  That can often be tricky11:26
verdurinIt depends. There are different flavours, free and licensed.11:26
*** ttsiouts has quit IRC11:26
*** ttsiouts has joined #openstack-meeting11:26
oneswig_Is RStudio being used for post-processing and visualisation of batch simulations?11:27
verdurinIt's mainly for code development, I believe.11:28
verdurinVery fast-moving area.11:28
oneswig_I am sure it is.11:29
*** cheng1 has joined #openstack-meeting11:29
oneswig_I'd be interested to hear how Public Health England's OpenStack systems are being applied for the modelling work they do.11:32
oneswig_Anything else to raise on this subject before we move on?11:34
slunaJust one more comment: AFAIK RStudio is an IDE for R code development. RStudio Server is useful when you deploy it next to big data and powerful compute so the researcher connects to it through a web browser to do interactive analyses.11:35
oneswig_Hi sluna, thanks for clarifying.11:35
verdurinThe dividing line is a bit muddier, but it's not that important.11:36
*** rfolco|rover has joined #openstack-meeting11:36
oneswig_We haven't covered the IOT-class issues of tracking populations but I don't think anyone's here who is working on that.11:37
Zama8152anyone using or recommending Elasticsearch for analyzing data and monitoring movements11:39
oneswig_Hello Zama8152, welcome :-)11:40
oneswig_ElasticSearch is very good for indexing and retrieval of JSON-encoded data.11:41
Zama8152I'd be interested in knowing what better tools to use with regard to tracking populations11:41
Zama8152oneswig_Hi, thanks for the invite..11:41
noggin143We use it mainly for structured search like logs but ES is pretty flexible11:42
*** tetsuro_ has joined #openstack-meeting11:42
Zama8152researchers on my side use Elasticsearch to analyze COVID-19 self-screening data and to monitor movements of citizens in areas of interest to understand the effectiveness and impact of the lockdown; this information is used by the National Department of Health in making relevant decisions11:44
*** raildo has joined #openstack-meeting11:44
oneswig_Zama8152: it's good for data that doesn't always have the same structure - semi-structured perhaps.  If your data is always of the same format, you could also consider an SQL database like postgres11:44
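To make the contrast concrete, here is a minimal indexing/query sketch assuming the elasticsearch-py 8.x client; the endpoint, index name, document fields and query are invented for the example and are not from any real screening dataset.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

    # Semi-structured record: different submissions may carry different fields.
    doc = {
        "timestamp": "2020-06-17T11:44:00Z",
        "region": "region-01",          # placeholder value
        "symptoms": ["cough", "fever"],
        "device": {"os": "android", "app_version": "1.2.3"},
    }
    es.index(index="covid-screening", document=doc)

    # Query whatever fields happen to be present; no fixed schema required.
    resp = es.search(
        index="covid-screening",
        query={"match": {"region": "region-01"}},
        size=10,
    )
    for hit in resp["hits"]["hits"]:
        print(hit["_source"])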
oneswig_How big is the Elastic Search, will it grow to be massive?11:46
*** tetsuro has quit IRC11:46
Zama8152Yeah the data doesn't have the same structure and progress db is indeed used ..11:46
Zama8152*postgres11:46
oneswig_One other thought on ElasticSearch is that performance can be limited by IOPS and IO latency - it will benefit from local SSD storage in your hypervisors, if you have the local disk capacity for it.11:49
oneswig_Is your group developing everything from scratch?  There probably isn't much precedent for this kind of application.11:49
noggin143we run in VMs with 4 1TB SSDs per server - works OK, you lose some IOPS and rebalancing can take a while11:50
noggin143old presentation at https://indico.cern.ch/event/717615/contributions/3033517/attachments/1676735/2692320/ES_Security.pdf, now migrating to ES711:51
*** tetsuro_ has quit IRC11:51
oneswig_Is that 250TB of data in ES back in 2018?  Must be even more now...11:52
noggin143the security one is getting pretty big... used for forensics but otherwise, we purge aggressively with archive in HDFS.11:54
Zama8152currently they are running on 1 vm with 80GB memory and 1TB ssd11:54
oneswig_We are short on time - final comments on this topic?11:55
Zama8152They have complained about memory and I am moving them to a vm with 160GB memory..11:55
oneswig_Hopefully that will help!11:56
Zama8152I have to attend another meeting right now.. Thanks for your input..11:56
oneswig_Thanks for coming Zama815211:56
oneswig_OK, a couple more events to mention11:57
*** tetsuro has joined #openstack-meeting11:57
oneswig_#topic online conferences11:57
*** openstack changes topic to "online conferences (Meeting topic: scientific-sig)"11:57
oneswig_I'm sure there are plenty of others going on11:57
oneswig_#link High Performance Container Workshop, 16-18 June https://hpcw.github.io/11:57
oneswig_Content looks quite interesting but I haven't listened in to any of the sessions so far11:58
oneswig_#link Virtual ISC https://www.isc-hpc.com/11:58
*** psachin has quit IRC11:58
oneswig_I'm not sure how much of ISC will be virtual but at least we don't have to find a hotel in Frankfurt to participate.11:59
*** psachin has joined #openstack-meeting11:59
oneswig_The OpenDev events are imminent too12:00
oneswig_The link eludes me alas and we must close12:01
oneswig_Thanks all12:01
oneswig_#endmeeting12:01
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"12:01
verdurinBye.12:01
openstackMeeting ended Wed Jun 17 12:01:43 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)12:01
openstackMinutes:        http://eavesdrop.openstack.org/meetings/scientific_sig/2020/scientific_sig.2020-06-17-11.00.html12:01
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/scientific_sig/2020/scientific_sig.2020-06-17-11.00.txt12:01
openstackLog:            http://eavesdrop.openstack.org/meetings/scientific_sig/2020/scientific_sig.2020-06-17-11.00.log.html12:01
slunabye!12:01
*** Lucas_Gray has joined #openstack-meeting12:02
*** noggin143 has quit IRC12:04
*** rpittau|bbl is now known as rpittau12:06
*** diurnalist has joined #openstack-meeting12:07
*** ttsiouts has quit IRC12:16
*** ttsiouts has joined #openstack-meeting12:16
*** Lucas_Gray has quit IRC12:16
*** Lucas_Gray has joined #openstack-meeting12:19
*** cheng1 has quit IRC12:20
*** ttsiouts has quit IRC12:21
*** yasufum has joined #openstack-meeting12:22
*** cheng1 has joined #openstack-meeting12:22
*** Wryhder has joined #openstack-meeting12:23
*** andrebeltrami has joined #openstack-meeting12:24
*** Lucas_Gray has quit IRC12:24
*** ttsiouts has joined #openstack-meeting12:25
*** yasufum has quit IRC12:26
*** Wryhder has quit IRC12:28
*** Lucas_Gray has joined #openstack-meeting12:32
*** diurnalist has quit IRC12:40
*** slaweq_ has joined #openstack-meeting12:47
*** slaweq has quit IRC12:48
*** Zama8152 has quit IRC12:58
*** cheng1 has quit IRC13:01
*** slaweq_ is now known as slaweq13:03
*** cheng1 has joined #openstack-meeting13:06
*** tetsuro has quit IRC13:15
*** ttsiouts has quit IRC13:18
*** ttsiouts has joined #openstack-meeting13:19
*** ttsiouts has quit IRC13:19
*** ttsiouts has joined #openstack-meeting13:19
*** cheng1 has quit IRC13:22
*** moguimar has joined #openstack-meeting13:24
*** diurnalist has joined #openstack-meeting13:33
*** ZhuXiaoYu has joined #openstack-meeting13:36
*** e0ne has quit IRC13:37
*** e0ne has joined #openstack-meeting13:37
*** TrevorV has joined #openstack-meeting13:37
*** rpittau is now known as rpittau|brb13:39
*** priteau has quit IRC13:40
*** ricolin has quit IRC13:40
*** e0ne has quit IRC13:48
*** e0ne_ has joined #openstack-meeting13:48
*** Liang__ has joined #openstack-meeting13:52
*** Liang__ is now known as LiangFang13:53
*** rpittau|brb is now known as rpittau13:53
*** lpetrut has joined #openstack-meeting13:56
*** priteau has joined #openstack-meeting13:56
*** LiangFang has quit IRC13:57
*** maciejjozefczyk has joined #openstack-meeting14:01
*** ZhuJoseph has joined #openstack-meeting14:04
*** yamamoto has quit IRC14:06
*** ZhuXiaoYu has quit IRC14:07
*** liuyulong has joined #openstack-meeting14:07
liuyulong#startmeeting neutron_l314:08
openstackMeeting started Wed Jun 17 14:08:46 2020 UTC and is due to finish in 60 minutes.  The chair is liuyulong. Information about MeetBot at http://wiki.debian.org/MeetBot.14:08
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:08
*** openstack changes topic to " (Meeting topic: neutron_l3)"14:08
openstackThe meeting name has been set to 'neutron_l3'14:08
liuyulongSorry, a bit late...14:08
liuyulongslaweq, haleyb, ping14:09
slaweqhi14:09
haleybhi14:09
liuyulonghi14:09
liuyulongAlright, let's start14:10
liuyulong#topic Announcements14:10
*** openstack changes topic to "Announcements (Meeting topic: neutron_l3)"14:10
liuyulong#link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015368.html14:11
liuyulongThis is the ptg summary from the Virtual PTG.14:11
liuyulongThanks slaweq for the detailed summary.14:12
liuyulong#link http://kaplonski.pl/images/Virtual_PTG_2020/photo_3.png14:13
liuyulongI saw you handsome guys.14:13
liuyulong#link http://eavesdrop.openstack.org/meetings/networking/2020/networking.2020-06-16-14.00.log.html#l-1314:14
liuyulongThis is the announcements from the team meeting yesterday.14:15
liuyulongWe are in the Victoria development cycle now, so each spec should be moved to the Victoria folder.14:16
*** ttsiouts has quit IRC14:16
liuyulongOK, no more from me now.14:16
slaweq:)14:17
*** ttsiouts has joined #openstack-meeting14:17
liuyulongNeutron CI is down, any idea?14:17
liuyulong#link https://bugs.launchpad.net/neutron/+bug/188360114:18
openstackLaunchpad bug 1883601 in neutron "ovn based neutron gate jobs failing 100% of times" [Critical,In progress] - Assigned to Jakub Libosvar (libosvar)14:18
liuyulongThis is a new bug, but it seems the real problem is not fixed yet.14:19
liuyulongOK...14:20
liuyulong#link https://review.opendev.org/#/c/735536/14:20
liuyulongThis is the gate fix14:20
liuyulongNext topic14:20
liuyulong#topic Bugs14:20
*** openstack changes topic to "Bugs (Meeting topic: neutron_l3)"14:20
liuyulong#link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015178.html14:21
liuyulong#link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015323.html14:21
liuyulong#link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015442.html14:21
liuyulongWe have a long list....14:21
*** ttsiouts has quit IRC14:21
*** ttsiouts has joined #openstack-meeting14:22
liuyulongFirst one14:24
liuyulong#link https://bugs.launchpad.net/neutron/+bug/188096914:24
openstackLaunchpad bug 1880969 in neutron "Creating FIP takes time" [Low,New]14:24
ralonsohIMO, the times spent by the server is ok14:25
ralonsohc#2 of this LP14:25
ralonsoh(only the Neutron server times)14:25
liuyulongralonsoh, yes, agreed. The HTTP response time from the neutron server log should be considered first.14:26
liuyulong"GET /v2.0/ports?network_id=55c74232-825a-4a4a-b53d-5b4b7aa4ad74&device_owner=network%3Adhcp HTTP/1.1" status: 200  len: 1272 time: 0.067623114:27
liuyulongA simple case from my deployment.14:28
liuyulongA pattern for logstash should be useful.14:28
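As a rough illustration of that idea, the snippet below pulls the request time out of neutron-server API log lines like the sample above and flags slow calls. The regular expression is inferred from that one sample line; it is not an official Logstash/grok pattern.

    import re

    # Matches lines such as:
    #   "GET /v2.0/ports?... HTTP/1.1" status: 200  len: 1272 time: 0.0676231
    LINE_RE = re.compile(
        r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/1\.[01]" '
        r'status: (?P<status>\d+)\s+len: (?P<len>\d+) time: (?P<time>[\d.]+)'
    )

    def slow_requests(lines, threshold=0.5):
        """Yield (seconds, method, path) for API calls slower than threshold."""
        for line in lines:
            m = LINE_RE.search(line)
            if m and float(m.group("time")) > threshold:
                yield float(m.group("time")), m.group("method"), m.group("path")

    if __name__ == "__main__":
        with open("neutron-server.log") as fh:  # path is illustrative
            for seconds, method, path in sorted(slow_requests(fh), reverse=True):
                print(f"{seconds:.3f}s {method} {path}")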
liuyulong#link https://bugs.launchpad.net/neutron/+bug/188053214:28
openstackLaunchpad bug 1880532 in neutron "[RFE]L3 Router should support ECMP" [Wishlist,New] - Assigned to XiaoYu Zhu (honglan0914)14:28
liuyulongI have reviewed the spec one time.14:29
liuyulong#link https://review.opendev.org/#/c/729532/14:29
slaweqI have to review this spec too14:29
*** rh-jelabarre has joined #openstack-meeting14:30
liuyulongIn general, the final use scenarios look limited to the load balancer. The main point is not on the Neutron side.14:31
liuyulongSo let's continue the discussion on the gerrit.14:31
slaweqyes, there are some suggestions that it can be done with existing neutron API IIRC14:31
ZhuJosephMy current plan is to add a new function to extraroutedb.py to handle this requirement.14:32
liuyulongHi, you are here.14:32
liuyulong"XiaoYu Zhu" it's you?14:33
ZhuJosephand use api like :/v2.0/routers/27757e09-fb6a-4196-957d-cdce604f087e/remove_ecmps14:33
ZhuJosephyes14:33
ZhuJosephI am14:33
liuyulongWelcome14:33
*** yamamoto has joined #openstack-meeting14:34
*** yamamoto has quit IRC14:34
*** mlavalle has joined #openstack-meeting14:34
*** yamamoto has joined #openstack-meeting14:34
*** psachin has quit IRC14:34
liuyulongZhuJoseph, if there is some existing code or a POC, you may submit it in parallel; that could also be useful for the upstream team to understand your real requirement.14:36
liuyulongAnd do not forget to add the link to the spec.14:37
*** links has quit IRC14:37
liuyulongOne more thing, you should move specs/ussuri/l3-router-support-ecmp.rst, to the Virtual folder.14:37
liuyulongs/Victoria14:37
ZhuJosephok14:37
liuyulongOK, next14:39
liuyulong#link https://bugs.launchpad.net/neutron/+bug/188199514:39
openstackLaunchpad bug 1881995 in neutron "Centralized SNAT failover does not recover until "systemctl restart neutron-l3-agent" on transferred node" [Medium,In progress] - Assigned to Ann Taraday (akamyshnikova)14:39
liuyulongWe already have some discussion on the LP, and here is a workaround fix:14:39
liuyulong#link https://review.opendev.org/#/c/734070/14:40
*** yamamoto has quit IRC14:41
liuyulongFor the fix, IMO, it partially reverts the original fix of https://review.opendev.org/#/c/692352/14:41
ralonsohIMO this is a workaround14:41
liuyulongin some cases14:41
ralonsohbut if accepted and does not clash with any other part of the code14:41
ralonsohI'm ok14:41
ralonsohyou know this code better...14:42
liuyulongThe main problem is in the namespace deletion based on my current research.14:42
liuyulong#link https://bugs.launchpad.net/neutron/+bug/1881995/comments/714:43
openstackLaunchpad bug 1881995 in neutron "Centralized SNAT failover does not recover until "systemctl restart neutron-l3-agent" on transferred node" [Medium,In progress] - Assigned to Ann Taraday (akamyshnikova)14:43
liuyulong#link https://bugs.launchpad.net/neutron/+bug/1881995/comments/814:43
liuyulongI will add some logging for this issue as a start.14:43
*** dklyle has joined #openstack-meeting14:44
ralonsohgood finding in c#714:45
liuyulongralonsoh, the pyroute2 namespace deleting could be related. I may need your help. : )14:45
ralonsohsure14:45
ralonsohbut where is this called?14:45
liuyulongWait a sec14:45
ralonsohno no14:45
ralonsohI mean14:45
ralonsohin this executing14:45
ralonsohwhy the namespace is deleted?14:45
*** manuvakery has joined #openstack-meeting14:46
ralonsoh*execution14:46
liuyulong#link https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_lib.py#L70514:46
liuyulong#link https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_lib.py#L90614:46
ralonsohyes and the ns is deleted, so that's ok14:47
ralonsohbut why the ns was deleted?14:47
liuyulongAnd finally, https://github.com/openstack/neutron/blob/master/neutron/privileged/agent/linux/ip_lib.py#L54214:47
liuyulongthe qrouter namespace was not deleted successfully.14:48
liuyulongbug/1881995/comments/714:48
liuyulongOr maybe it is a concurrent query and delete.14:50
liuyulongNamespace deletion does not have much logging now; I will add some.14:50
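A generic sketch of the kind of logging being discussed, assuming plain pyroute2 rather than Neutron's actual privsep-wrapped ip_lib helpers; the namespace name in the final comment is a placeholder.

    import logging

    from pyroute2 import netns

    LOG = logging.getLogger(__name__)

    def delete_namespace(name):
        """Delete a network namespace and log what actually happened."""
        if name not in netns.listnetns():
            LOG.warning("Namespace %s not found, nothing to delete", name)
            return
        try:
            netns.remove(name)
        except OSError:
            LOG.exception("Failed to delete namespace %s", name)
            raise
        if name in netns.listnetns():
            # e.g. a concurrent caller recreated it between remove() and here
            LOG.error("Namespace %s still present after delete", name)
        else:
            LOG.info("Namespace %s deleted", name)

    # Example (placeholder name): delete_namespace("qrouter-<router-uuid>")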
liuyulongOK, next one14:53
liuyulong#link https://bugs.launchpad.net/neutron/+bug/188286014:53
openstackLaunchpad bug 1882860 in neutron "after FIP is assigned vm lost network connection" [Undecided,Incomplete]14:53
liuyulongIt's a ovn-router related report.14:53
liuyulongJakub has left a potential fix for the issue and some questions; no response so far.14:54
liuyulongNext14:55
liuyulong#link https://bugs.launchpad.net/neutron/+bug/188332114:55
openstackLaunchpad bug 1883321 in neutron "Neutron OpenvSwitch DVR - connection problem" [High,New]14:55
liuyulongThis is really a complicated issue.14:55
liuyulongAs I said in the fix, there are tons of cases in real deployments, for instance DVR, DVR + HA, the OpenFlow firewall, network nodes mixed with compute services...14:57
liuyulongI have a long list.14:57
liuyulongLet's continue the talk on LP bug.14:58
liuyulongLast one14:58
liuyulong#link https://bugs.launchpad.net/neutron/+bug/188308914:58
openstackLaunchpad bug 1883089 in neutron "[L3] floating IP failed to bind due to no agent gateway port(fip-ns)" [Medium,In progress] - Assigned to LIU Yulong (dragon889)14:58
liuyulongreported by me14:58
liuyulongI have two patches.14:58
liuyulong#link https://review.opendev.org/#/c/735432/14:59
liuyulong#link https://review.opendev.org/#/c/735762/14:59
liuyulongThe test case should be simple, just create a fake external network, and create router/network/subnet/VM.14:59
liuyulongThen just see the changes of fip-namespace on hosts and DvrFipGatewayPortAgentBinding in DB.15:00
liuyulong#link https://review.opendev.org/#/c/702547/15:00
liuyulongIMO, this fix just missed that DVR-related cleanup action.15:01
liuyulongOK, we are out of time.15:01
*** ttsiouts has quit IRC15:01
liuyulong#endmeeting15:01
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"15:01
openstackMeeting ended Wed Jun 17 15:01:28 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:01
ralonsohbye15:01
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_l3/2020/neutron_l3.2020-06-17-14.08.html15:01
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_l3/2020/neutron_l3.2020-06-17-14.08.txt15:01
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_l3/2020/neutron_l3.2020-06-17-14.08.log.html15:01
liuyulongSee you guys online.15:01
*** ttsiouts has joined #openstack-meeting15:01
*** ttsiouts has quit IRC15:06
*** Lucas_Gray has quit IRC15:12
*** ttsiouts has joined #openstack-meeting15:17
*** aprice has joined #openstack-meeting15:31
*** ttsiouts has quit IRC15:33
*** ttsiouts has joined #openstack-meeting15:33
*** jiaopengju1 has quit IRC15:35
*** jiaopengju1 has joined #openstack-meeting15:36
*** ttsiouts has quit IRC15:38
*** e0ne_ has quit IRC15:40
*** e0ne has joined #openstack-meeting15:46
*** gyee has joined #openstack-meeting15:58
*** e0ne has quit IRC16:01
*** armax has joined #openstack-meeting16:08
*** rpittau is now known as rpittau|afk16:17
*** jmasud has joined #openstack-meeting16:20
*** ociuhandu_ has joined #openstack-meeting16:38
*** ociuhandu has quit IRC16:42
*** ociuhandu_ has quit IRC16:43
*** ociuhandu has joined #openstack-meeting16:47
*** ociuhandu has quit IRC16:51
*** liuyulong has quit IRC16:56
*** maciejjozefczyk has quit IRC16:56
*** maciejjozefczyk has joined #openstack-meeting16:57
*** diablo_rojo has joined #openstack-meeting16:59
*** lpetrut has quit IRC17:02
*** e0ne has joined #openstack-meeting17:03
*** oneswig_ has quit IRC17:18
*** e0ne has quit IRC17:43
*** jmasud has quit IRC17:44
*** manuvakery has quit IRC18:34
*** jmasud has joined #openstack-meeting18:36
*** jamesmcarthur has joined #openstack-meeting18:47
*** jamesmcarthur has quit IRC18:54
*** jamesmcarthur has joined #openstack-meeting18:55
*** jamesmcarthur_ has joined #openstack-meeting18:59
*** jmasud has quit IRC19:00
*** diurnalist has quit IRC19:00
*** jamesmcarthur has quit IRC19:01
*** jmasud has joined #openstack-meeting19:18
*** ralonsoh has quit IRC19:22
*** diurnalist has joined #openstack-meeting19:30
*** ttsiouts has joined #openstack-meeting19:32
*** jmasud has quit IRC19:36
*** vishalmanchanda has quit IRC19:38
*** jmasud has joined #openstack-meeting19:41
*** ttsiouts has quit IRC19:42
*** ttsiouts has joined #openstack-meeting19:43
*** jamesmcarthur_ has quit IRC19:45
*** jamesmcarthur has joined #openstack-meeting19:46
*** ttsiouts has quit IRC19:47
*** ociuhandu has joined #openstack-meeting19:52
*** zhuxiaoyu_inspur has joined #openstack-meeting20:09
*** ktsuyuzaki has joined #openstack-meeting20:11
*** Lucas_Gray has joined #openstack-meeting20:11
*** zeestrat_ has joined #openstack-meeting20:11
*** ktsuyuzaki is now known as kota__20:12
*** tobberydberg_ has joined #openstack-meeting20:12
*** mbuil_ has joined #openstack-meeting20:13
*** irclogbot_0 has quit IRC20:14
*** gyee has quit IRC20:17
*** ZhuJoseph has quit IRC20:18
*** tobberydberg has quit IRC20:18
*** mbuil has quit IRC20:18
*** zeestrat has quit IRC20:18
*** masayukig has quit IRC20:18
*** amorin has quit IRC20:18
*** kota_ has quit IRC20:18
*** zeestrat_ is now known as zeestrat20:18
*** amorin has joined #openstack-meeting20:20
*** irclogbot_3 has joined #openstack-meeting20:21
*** jmasud has quit IRC20:22
*** masayukig has joined #openstack-meeting20:24
*** gyee has joined #openstack-meeting20:31
*** jamesmcarthur has quit IRC20:31
*** jamesmcarthur has joined #openstack-meeting20:32
*** jamesmcarthur has quit IRC20:37
*** csatari has quit IRC20:39
*** aprice has quit IRC20:40
*** knikolla has quit IRC20:40
*** knikolla has joined #openstack-meeting20:42
*** patrickeast has quit IRC20:43
*** aprice has joined #openstack-meeting20:44
*** aprice has quit IRC20:48
*** ociuhandu has quit IRC20:48
*** PrinzElvis has quit IRC20:49
*** priteau has quit IRC20:49
*** thgcorrea has quit IRC20:50
*** knikolla has quit IRC20:52
*** ttsiouts has joined #openstack-meeting20:52
*** PrinzElvis has joined #openstack-meeting20:53
*** jmasud has joined #openstack-meeting20:53
*** patrickeast has joined #openstack-meeting20:53
*** ttsiouts has quit IRC20:57
*** PrinzElvis has quit IRC20:57
*** patchbot has joined #openstack-meeting20:58
*** diablo_rojo_phon has joined #openstack-meeting20:59
timburke#startmeeting swift21:00
openstackMeeting started Wed Jun 17 21:00:03 2020 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: swift)"21:00
openstackThe meeting name has been set to 'swift'21:00
timburkewho's here for the swift meeting?21:00
*** patrickeast has quit IRC21:00
kota__hi21:00
*** maciejjozefczyk has quit IRC21:00
*** patrickeast has joined #openstack-meeting21:01
tdasilvahalf here21:01
rledisezo/21:01
alecuyero/21:01
mattoliverauo/21:02
timburkeas usual, the agenda's at https://wiki.openstack.org/wiki/Meetings/Swift21:02
timburkefirst up21:02
timburke#topic gate21:02
*** openstack changes topic to "gate (Meeting topic: swift)"21:03
timburkeyou may have noticed that nothing was passing the last couple days21:03
*** e0ne has joined #openstack-meeting21:03
timburkei think it's all resolved now, but i wanted to give an overview of the issues21:03
timburke#link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015432.html21:03
timburkethere was an issue with uwsgi that broke our grenade job (along with *everyone else*)21:04
*** aprice has joined #openstack-meeting21:04
*** jamesmcarthur has joined #openstack-meeting21:04
timburkethe qa team's been all over it, and the resolution merged last night21:04
timburkethen there was another issue with our probe tests (most visibly; also affected the ceph s3 tests and rolling upgrade tests)21:05
*** patrickeast has quit IRC21:05
timburkepretty sure it was the result of pip no longer being available in the base images21:06
timburke#link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015425.html21:06
claygo/21:06
timburkethe fix there did require a change to our tooling, but that merged this morning21:06
timburkehttps://review.opendev.org/#/c/73599221:06
patchbotpatch 735992 - swift - Use ensure-pip role (MERGED) - 5 patch sets21:06
*** diurnalist has quit IRC21:06
timburkei rechecked a bunch of changes about three hours ago, but everything's all backed up so none of those have actually posted new results yet21:08
claygthanks for fixing the gate timburke !21:08
*** zaitcev has joined #openstack-meeting21:08
timburkeif anyone sees more issues, holler!21:08
timburke#topic memcache and container failures21:09
*** openstack changes topic to "memcache and container failures (Meeting topic: swift)"21:09
*** aprice has quit IRC21:10
timburkeso last week i had all replicas of a container get overloaded21:10
claygyeah that was pretty cool21:10
claygactually I wasn't there - it SOUNDED cool (after the fact)21:11
timburkewhich led me to notice that when the proxy hands back a 503 (because we got timeout, timeout, timeout, 404, 404, 404), we go evict memcache21:11
timburke#link https://bugs.launchpad.net/swift/+bug/188321121:11
openstackLaunchpad bug 1883211 in OpenStack Object Storage (swift) "get_container_info 503s shouldn't try to clear memcache" [Undecided,In progress]21:11
*** rfolco|rover has quit IRC21:12
timburkewhich meant that once info fell out of cache while there were hundreds of concurrent requests trying to do things in the container, it couldn't *stay in cache* even when some of those HEADs sent to try to repopulate it managed to get back to the proxy21:13
*** jamesmcarthur has quit IRC21:13
timburkei proposed https://review.opendev.org/#/c/735359/ to fix it (basically, follow what the docstring said to do in set_info_cache), but i was wondering if anyone else has seen similar behavior21:14
patchbotpatch 735359 - swift - proxy: Stop killing memcache entries on 5xx responses - 4 patch sets21:14
claygmoral of the story: don't let your primaries get overloaded - but when you do!  you know... be better swift21:15
zaitcevI haven't but it sounds persuasive.21:15
timburkenote that prior to https://review.opendev.org/#/c/667411/ (from about a year ago), we would've been caching a 40421:16
patchbotpatch 667411 - swift - Return 503 when primary containers can't respond (MERGED) - 2 patch sets21:16
claygI was reluctant to go mucking with such old code; but once I realized we're a few iterations away from untangling all the things that could possibly lead to clients+sharder overwhelming a root db... I loaded it in my head and it makes sense to me21:16
timburke(funny enough, it was definitely the same cluster and quite possibly the same container that prompted that change, too)21:17
claygI'm not even sure we really *intended* to clear the cache on error - the history of how it evolved reads more like it just happened by accident as the code evolved21:17
*** diurnalist has joined #openstack-meeting21:17
claygcertainly all the primaries being overloaded isn't something that comes up often - it's possible it was just never bad enough (or when it got that bad there were like OTHER things that were ALSO bad - like... idk... not enough ratelimiting)21:18
timburkeyeah, it sure *seemed like* https://review.opendev.org/#/c/30481/ didn't mean to change behavior like that21:18
patchbotpatch 30481 - swift - get_info - removes duplicate code (Take 3) (MERGED) - 17 patch sets21:18
clayganyway - even if I'm wrong and someone thought they had a good reason to flush cache on error... I can't convince myself anymore it's a good idea21:18
*** ociuhandu has joined #openstack-meeting21:19
claygwhen the backend service is saying "please back off" - GO HARDER - is rarely going to be the BEST plan 😁21:19
clayganyway; we're shipping it - and at least two cores like the change - so it'll probably merge eventually, but it's fairly fresh and we're open to better ideas!21:20
zaitcevThe problem is usually the cache being stale. If the error is indicative of the main storage being changed without cache flushed, then cache needs to be flushed. not sure if 503 is such. The 409 seems like a candidate for suspicion.21:20
timburke*nod* i'm not sure that the container server can send back a 409 on GET or HEAD, but good thinking. will check21:22
claygwhich 409?  timburke the 404 cache is so weird... to think of that as a "remediation" I mean... maybe a client does a PUT and ends up on handoffs!?  I don't think that behavior was any more desirable really.21:22
claygI'm most happy about the tests - it's now defined behavior - we're saying on 503 we don't want to flush the cache21:22
claygif we change our minds later at least we have tests that can express what we want - and we won't accidently forget to think about it next time we're working in there21:23
*** jmasud has quit IRC21:23
claygi'm gunna go +A it right now - I'm totally talking myself into it!!! 😁21:23
timburkelol21:23
*** ociuhandu has quit IRC21:24
*** rh-jelabarre has quit IRC21:24
*** aprice has joined #openstack-meeting21:24
timburkeso as clayg mentioned, the trouble seemed to come from the shard stat reporting. fortunately, we've already landed a latch for that21:24
claygtimburke: so you're saying before p 30481 you think we'd leave the cache alone on 503?  Or just that was so old ago who KNOWS what would have happened?21:24
timburkeunfortunately, we hadn't gotten that fix out to our cluster yet21:24
patchbothttps://review.opendev.org/#/c/30481/ - swift - get_info - removes duplicate code (Take 3) (MERGED) - 17 patch sets21:24
timburkeclayg, yeah, pretty sure it would've been left alone21:25
claygok, so ... mostly just a heads up for folks I guess - the patch is new; but good.  If anyone else had noticed the behavior before that'd be cool - but it's ok if not either.21:26
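For anyone following along, a minimal, generic sketch of the caching policy discussed above: refresh the cache on success, remember a clean 404 briefly, and leave whatever is already cached alone on 5xx. This is only an illustration of the idea, not the code in the patch under review; the TTLs and cache interface are assumptions.

    INFO_TTL = 60       # assumed TTLs, not Swift's real defaults
    NOT_FOUND_TTL = 10

    def update_info_cache(cache, key, status, info):
        """Update the container-info cache based on a backend response."""
        if 200 <= status < 300:
            cache.set(key, info, time=INFO_TTL)
        elif status == 404:
            # A definitive "not found" from the primaries is worth remembering.
            cache.set(key, {"status": 404}, time=NOT_FOUND_TTL)
        else:
            # 5xx (timeouts, overload): do NOT evict a still-useful entry just
            # because the primaries are temporarily unable to answer.
            pass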
timburkewhile we were trying to stop those shard stats from reporting, we were sad to see that we couldn't just stop the replication servers to stop the background traffic21:26
timburke#topic replication network and background daemons21:26
*** openstack changes topic to "replication network and background daemons (Meeting topic: swift)"21:26
timburkei wrote up https://launchpad.net/bugs/1883302 and https://review.opendev.org/#/c/735751/ for that particular issue21:27
openstackLaunchpad bug 1883302 in OpenStack Object Storage (swift) "container-sharder should send stat updates using replication network" [Undecided,In progress]21:27
patchbotpatch 735751 - swift - sharder: Use replication network to send shard ranges - 1 patch set21:27
claygoh yeah, this one's heavy - timburke wants to go full on21:27
alecuyerclayg: (sorry, lagging), we have not seen it, but I can't say it hasn't happened either21:28
clayghrm... I know that p 735751 is slightly more targeted to the bug - but really the issue and the fix are much more pervasive than we realized originally21:28
patchbothttps://review.opendev.org/#/c/735751/ - swift - sharder: Use replication network to send shard ranges - 1 patch set21:28
claygtimburke: I'd argue we reword the bug to at least "sharder and reconciler don't always use replication" and attempt to move forward with p 735991 which is bigger but WAY better21:29
patchbothttps://review.opendev.org/#/c/735991/ - swift - Add X-Backend-Use-Replication-Network header - 1 patch set21:29
*** jmasud has joined #openstack-meeting21:29
timburkeyeah -- so the writes go over replication, but the sharder still does reads over the client-traffic interface -- but it was harder to fix since it uses internal_client for that21:29
timburkeit's got me wondering: which interface should our background daemons be using?21:30
claygit's like a unified way to make all our different little client interfaces use replication networks like they probably all should have been doing forever; but we never had an interface for 'em before21:30
*** aprice has quit IRC21:30
mattoliverauoh yeah interesting.21:31
timburkethe way i've got that second patch at the moment, callers have to opt-in to using the replication network. but i wonder if we could/should use it by default21:31
claygtimburke: I think i'd be willing to say anything besides the proxy connecting to the node[ip] when a [replication_ip] is available is a bug?  like not a design choice, or operator choice - a bug21:32
mattoliverauif a direct client or internal client is ever used inline from a customer request, then client traffic; else replication network.21:32
timburkeclayg says we (nvidia nee swiftstack) have at least one utility we've written that *would* want the client-traffic network; i wonder what other people have written and which interface they'd prefer21:32
claygthat's a fairly strong stance, but personally having a separate storage server for background work (that I can turn off when needed) has been a HUGE QOL improvement for me over the years21:33
claygmattoliverau: I don't think internally we ever use direct/internal client from inside the proxy (i.e. related to a user request)21:33
claygtimburke: do some of the new UPDATE requests use direct client?21:34
clayg"new" - i'm not sure there's anything landed that does that... and IIRC they just call req.get_resp(app)?21:34
mattoliverauyeah, trying to decide if we use it anywhere21:34
*** aprice has joined #openstack-meeting21:34
timburkenope, it's plumbed through the proxy-server app21:34
*** PrinzElvis has joined #openstack-meeting21:35
timburkewell, maybe i keep it opt-in on that patch and propose another to change the default while people think through what they've got and what the upgrade impact would be like21:35
claygso internal-client and "the proxy-server app" are VERY similar - but Tim found a place between InternalClient and the app itself where we can plumb this header through (and then way down near where we make_connection(node, ...) we get to look at headers to pick node[ip] or node[replication_ip])21:35
claygit's really sort of slick - and sexy because it works uniformly across both interfaces (because both interfaces already take headers and can set backend defaults)21:36
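A rough sketch of the header-driven endpoint selection described above; the header name is taken from the patch title, but the helper itself is illustrative rather than the actual Swift change.

    USE_REPLICATION_NETWORK_HEADER = "X-Backend-Use-Replication-Network"

    def use_replication_network(headers):
        value = str(headers.get(USE_REPLICATION_NETWORK_HEADER, "false"))
        return value.lower() in ("1", "true", "yes", "on")

    def select_endpoint(node, headers):
        """Pick (ip, port) for a backend request.

        Defaults to the client-facing interface; callers opt in to the
        replication interface by setting the backend header.
        """
        if use_replication_network(headers):
            return (node.get("replication_ip", node["ip"]),
                    node.get("replication_port", node["port"]))
        return node["ip"], node["port"]

    # Example ring entry (placeholder values):
    # node = {"ip": "10.0.0.5", "port": 6201,
    #         "replication_ip": "10.1.0.5", "replication_port": 6201}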
zaitcevDirect client goes straight to replication network sounds unexpected to me. I thought that proxies might not even have that network.21:36
claygzaitcev: that's good feedback!  proxies don't use direct client - but anything "defaulting" to the backend network might be "surprising" to some21:37
mattoliveraua quick grep, yeah no direct client in proxy or middlewares.21:38
claygand I hadn't considered access/topology - if someone deploys anything that uses either of these interfaces ON a node that can use the replication network, that could be a big surprise 😞21:38
*** csatari has joined #openstack-meeting21:39
mattoliverauWell for things like the reconciler and sharder, they are part of the consistency engine; the sharder is just a type of replicator (in a way). So yeah, it totally should do its work over the replication network.21:40
*** patrickeast has joined #openstack-meeting21:40
*** knikolla has joined #openstack-meeting21:40
claygtimburke: I would encourage you to drop p 735751 now that p 735991 is on everyone's radar - to me, it's not so much about "fixing ALL THE THINGS" as "fixing it RIGHT"21:40
patchbothttps://review.opendev.org/#/c/735751/ - swift - sharder: Use replication network to send shard ranges - 1 patch set21:40
patchbothttps://review.opendev.org/#/c/735991/ - swift - Add X-Backend-Use-Replication-Network header - 1 patch set21:40
timburke👍 thanks for the feedback, everyone!21:41
timburkeon to updates21:41
timburke#topic waterfall EC21:41
*** openstack changes topic to "waterfall EC (Meeting topic: swift)"21:41
timburkeclayg, how's it going?21:41
*** slaweq has quit IRC21:41
claygmattoliverau: I'm glad to hear you say that!  I think having internal and direct client growing these new interfaces will make it much easier to get it right out of the gate for new daemons21:41
claygtimburke: a little better-ish, or maybe?21:41
*** raildo has quit IRC21:42
claygI like the feeder!21:42
clayghttps://review.opendev.org/#/c/711342/8 phew - too many links open21:42
patchbotpatch 711342 - swift - wip: asyc concurrent ecfragfetcher - 8 patch sets21:42
claygI'm still waffling about the code duplication21:42
claygi don't know exactly how to describe the experience of pulling them apart - it's like I'm starting to see the tear lines and I can't help but try and imagine a few abstractions that could MAYBE cut through them 😞21:44
claygI mostly try not to think about it while I make waterfall-ec awesome21:44
claygwhich it *totally* is21:44
claygor at least I can see how it will be - once I add a follow to configure the feeder with per-policy settings and the stair-step configuration that alecuyer talked about at the PTG21:45
alecuyernice!21:45
claygI'm much more excited about working on that code than wading through the mess of cutting up GETorHEADHandler and  ECFragGetter21:46
claygat some level I want to just leave the messy turd there finish the stuff I care about and then try to re-evaluate when I feel less pressure to FIX THE DAMN BUG21:46
claygbut I sort of know a new priority will come along, and even though I'll probably get up a patch out of pure guilt - it's not obvious to me "here's a 1000 line diff that doesn't change anything" is gunna get merged if I'm not complaining about it21:47
claygALSO!  I need to chat with folks about extra requests for non-durables - or at least... the existing behavior is obviously wrong and the correct behavior is not obvious21:47
claygI picked something... and it's... better - but what if Y'ALL have an even BETTER idea!!!21:48
zaitcevLittle hope of that I'm afraid.21:48
zaitcevAlso21:49
claygi dunno if we can wait til the next PTG to go over it...21:49
timburkeshould we read what you've done so far to try to get our heads around the problem, or should we sum it now?21:49
claygI think it's a complex enough change (I'm really trying to SIMPLIFY) that it's worth a read by anyone who can handle it21:50
claygI've been trying to drop comments around the interesting bits21:50
*** jamesmcarthur has joined #openstack-meeting21:50
timburkewe could schedule a video chat if you think something closer to "in person" would be best21:50
mattoliverauetherpad braindump of the current problem, them video chat to talk through it?21:51
mattoliverauplus time to look at code :)21:51
alecuyeryes  waiting for the next ptg is too far if clay is working on it *now* ?21:51
claygfor the non-durable extra requests - yeah I would like to go high-bandwidth (very helpful at the PTG); I would definitely try and prepare if there was something scheduled.21:51
mattoliveraulet's do something then :) We could zoom or jitsi and announce it in channel so anyone can attend. (keeping it open).21:52
claygok, well no one is screaming about the code duplication - that gives me some confidence that I've built it up enough no one is going to go to review and be like "WTF is this!?  you can't do this!"21:52
claygso I'll leave the turd there and move on down the line to the follow on configuration stuff (which will be SUPER sexy)21:53
claygthen we're just left with non-durable extra requests - which I can write up ASAP and Tim will help me with a zoom thing21:53
timburke👍21:54
mattoliverauclayg: like you said at the PTG code dup is ok, so long as we all know, it's documented, and it make it easier to grok and understand ;)21:54
claygmattoliverau: ❤️ you guys are the best21:54
timburkeall right21:54
mattoliverauthanks for polishing the turd :)21:54
*** jamesmcarthur has quit IRC21:54
timburkesorry rledisez, alecuyer: i forgot to drop losf from the agenda like i'd promised to last week21:54
timburkeso21:55
timburke#topic open discussion21:55
*** openstack changes topic to "open discussion (Meeting topic: swift)"21:55
timburkeanything else we should talk about in the last five minutes?21:55
alecuyerwell I'll just post a link for clay ;) wrt to a PTG question21:55
clayghttps://review.opendev.org/#/c/733919/21:55
patchbotpatch 733919 - swift - s3api: Allow CompleteMultipartUpload requests to b... - 3 patch sets21:55
alecuyerhttps://docs.python.org/3/library/multiprocessing.shared_memory.html21:55
claygalecuyer: YAS!!21:55
alecuyer3.8 only tho  - but nice interface to use shared memory, switch the ring to use numpy ?21:55
claygtimburke: my complete multi-part retry has been going for 3.5 hours - and it's still working21:56
timburkealecuyer, that also makes me think of something DHE mentioned earlier today...21:56
alecuyerdidn't think about it but thought i'd share the link, and sorry if you're all aware of that21:56
timburkeclayg, wow! 5 min seems *way* too short then -- maybe it should work indefinitely21:56
claygdunno 😞21:56
claygalso i haven't tried abort - or... what was the other calls you were interested in?21:56
timburkeabort after complete and complete after abort are the two sequences i'm a little worried about21:57
claygalecuyer: I remember thinking "oh it's only arrays?  pfhfhfhfh" - but now that you mention it - what is the ring except a big array!?  😁21:57
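For anyone who has not used the interface alecuyer linked, here is a small self-contained example (Python >= 3.8): publish a numpy array in shared memory and attach to it by name, zero-copy. It is just the stdlib pattern, not a worked-out design for sharing the ring.

    import numpy as np
    from multiprocessing import shared_memory

    # Producer: copy an array (think "a ring's replica2part2dev-style table")
    # into a named shared-memory segment.
    table = np.arange(3 * 1024, dtype=np.uint16).reshape(3, -1)
    shm = shared_memory.SharedMemory(create=True, size=table.nbytes)
    shared = np.ndarray(table.shape, dtype=table.dtype, buffer=shm.buf)
    shared[:] = table[:]
    print("segment name:", shm.name)  # pass this name to other processes

    # Consumer (normally another process): attach by name and get a view.
    attached = shared_memory.SharedMemory(name=shm.name)
    view = np.ndarray(table.shape, dtype=table.dtype, buffer=attached.buf)
    assert view[2, 123] == table[2, 123]

    # Cleanup: every user closes its handle; exactly one process unlinks.
    attached.close()
    shm.close()
    shm.unlink()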
claygthere is the error limiting stuff 🤔21:58
timburkezaitcev, thanks for the review on https://review.opendev.org/#/c/734721/ !21:58
patchbotpatch 734721 - swift - py3: (Better) fix percentages in configs - 4 patch sets21:58
claygabort after complete - so i'm in that state now... but if that works I could try to complete it again too!  🤔21:58
kota__error limiting staff on shared memory seems good idea21:59
zaitcevSo, are we trying to load rings into SysV shm?21:59
zaitcevI'd be more comfortable with an mmap() of some temp file into which the json or pickle is dumped first.21:59
claygkota__: yes!  alecuyer will figure out how to make it work :P21:59
kota__ it seems it's not only py3.8 but greater than or equal to 3.8?22:00
kota__3.9 is not released yet22:00
alecuyerkota__: right22:00
kota__good22:00
timburkeall right, we're about out of time22:01
claygkota__: yeah and like zaitcev said, it's maybe not even a full solution on its own even if we did want to do it >= 3.8 only (which by the time it's done might seem reasonable)22:01
timburkethank you all for coming! i feel like we had some really good discussions today :-)22:01
kota__clayg: true, got it.22:01
timburkethank you all for coming, and thank you for working on swift!22:01
timburke#endmeeting22:02
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"22:02
openstackMeeting ended Wed Jun 17 22:01:59 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:02
claygyeah, long meeting - good stuff - thanks everyone!22:02
openstackMinutes:        http://eavesdrop.openstack.org/meetings/swift/2020/swift.2020-06-17-21.00.html22:02
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/swift/2020/swift.2020-06-17-21.00.txt22:02
openstackLog:            http://eavesdrop.openstack.org/meetings/swift/2020/swift.2020-06-17-21.00.log.html22:02
*** patchbot has left #openstack-meeting22:02
*** zaitcev has left #openstack-meeting22:02
*** e0ne has quit IRC22:19
*** jmasud has quit IRC22:23
*** jmasud has joined #openstack-meeting22:23
*** TrevorV has quit IRC22:36
*** rcernin has joined #openstack-meeting22:42
*** dmacpher_ has quit IRC22:47
*** seongsoocho has joined #openstack-meeting22:51
*** gyee has quit IRC22:52
*** jmasud has quit IRC23:03
*** Lucas_Gray has quit IRC23:07
*** Lucas_Gray has joined #openstack-meeting23:22
*** diurnalist has quit IRC23:30
*** dmacpher has joined #openstack-meeting23:42
*** andrebeltrami has quit IRC23:53
*** dmacpher_ has joined #openstack-meeting23:58
*** dmacpher has quit IRC23:59
