Thursday, 2021-07-15

07:15 *** akekane_ is now known as abhishekk
07:24 *** rpittau|afk is now known as rpittau
08:57 *** akekane_ is now known as abhishekk
12:07 *** iurygregory_ is now known as iurygregory
15:00 <gmann> tc-members: meeting time.
15:00 <gmann> #startmeeting tc
15:00 <opendevmeet> Meeting started Thu Jul 15 15:00:10 2021 UTC and is due to finish in 60 minutes.  The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00 <opendevmeet> The meeting name has been set to 'tc'
15:00 <gmann> #topic Roll call
15:00 <gmann> o/
15:00 <ricolin> o/
15:00 <mnaser> hola
15:00 <dansmith> o/
15:01 <gmann> we have 3 members absent today.
15:01 <gmann> yoctozepto on PTO
15:01 <gmann> spotz on PTO
15:01 <gmann> jungleboyj on PTO
15:02 <gmann> let's start
15:02 <gmann> #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions
15:02 <gmann> ^^ today's agenda
15:02 <gmann> #topic Follow up on past action items
15:02 <gmann> two AIs from the last meeting
15:03 <gmann> clarkb to convey the ELK service shutdown deadline on the ML
15:03 <gmann> clarkb sent it to the ML #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023578.html
15:03 <gmann> gmann to send mail to the ML about fixing warnings, and the oslo side changes to convert them to errors
15:03 <gmann> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023646.html
15:04 <gmann> gibi also mentioned the SQLAlchemy warning, which needs the keystone fix to merge before we can get oslo.db 10.0.0 into g-r
15:04 <diablo_rojo_phone> O/
15:04 <gmann> #link https://review.opendev.org/c/openstack/keystone/+/799672
15:04 <gmann> seems there are fewer active members in keystone.
15:05 <gmann> knikolla: ^^ if you see this msg
15:05 <gmann> stephen already pinged the keystone team on the keystone channel, so let's see if we can merge it soon
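
[Editor's note: the action item above is about escalating deprecation warnings to errors so they get fixed rather than ignored. A minimal sketch of the general technique in Python, assuming a plain unittest-style suite (the actual oslo.db and keystone patches differ in detail, and the helper name here is hypothetical):

    import warnings

    import sqlalchemy.exc

    def escalate_deprecation_warnings():
        # Turn generic deprecation warnings into errors so tests fail loudly.
        warnings.filterwarnings("error", category=DeprecationWarning)
        # SQLAlchemy emits its own deprecation category; escalate it too.
        warnings.filterwarnings("error",
                                category=sqlalchemy.exc.SADeprecationWarning)
        # Third-party libraries may emit warnings we cannot fix here; relax
        # the filter per-module if needed, e.g.:
        # warnings.filterwarnings("ignore", category=DeprecationWarning,
        #                         module=r"thirdparty\..*")

Calling this from a test base class's setUp() (or a fixture) makes any newly introduced deprecated usage fail CI immediately.]
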
15:06 <gmann> #topic Gate health check (dansmith/yoctozepto)
15:06 <gmann> dansmith: any update you would like to share?
15:06 <dansmith> gate has seemed fairly good to me lately, hard to complain much
15:07 <ricolin> also check-arm64 was blocked last week, but back to normal now
15:07 <gmann> one issue I am aware of, and it is fixed now: tempest-full-py3 was broken on ussuri due to python3 being disabled via the base job
15:07 <dansmith> oh,
15:07 <gmann> +1
15:07 <tosky> (tempest-slow-py3)
15:07 <gmann> yeah, tempest-slow-py3
15:07 <dansmith> not really gate, but is the depends-on still broken?
15:08 <gmann> dansmith: that is fixed now
15:08 <fungi> I don't believe so, I saw the message about it mention an immediate revert
15:08 <dansmith> okay cool
15:08 <gmann> it worked for testing the tempest-slow-py3 fix
15:08 <clarkb> yes, as soon as we identified the issue we pushed and landed a revert of the change that broke depends-on, then restarted as soon as that had applied to the servers
15:08 <fungi> don't believe it to still be broken, I meant
15:08 <gmann> clarkb: +1
15:09 <dansmith> okay, I thought it was broken for a while
15:09 <clarkb> dansmith: from Sunday evening to about Tuesday noonish, relative to our timezone
15:09 <dansmith> ah okay
15:10 <dansmith> anyway, nothing else gate-ish from me
15:10 <gmann> ok, let's move on then
15:10 <gmann> #topic Migration from 'Freenode' to 'OFTC' (gmann)
15:10 <gmann> while doing this for the deprecation repo, I found a few repos not deprecated or retired properly; also, some setup on the project-config side needed updating
15:11 <gmann> the project-config side things are merged
15:11 <gmann> for the retired repos, I am leaving the OFTC ref update for now because 1. there are many repos, 2. we need to add setup in project-config to get it updated in the GitHub repos
15:12 <gmann> if anyone has time to pick it up, I will not object.
15:12 <gmann> #topic PTG Planning
15:13 <gmann> Doodle poll for slot selection
15:13 <gmann> please vote with your availability/preference
15:14 <diablo_rojo_phone> Will do today.
15:14 <gmann> thanks
15:14 <gmann> ricolin: jungleboyj: you too
15:14 <gmann> we need to book the slot by 21st July
15:15 <ricolin> I thought I already voted, but will check again
15:15 <gmann> also I sent a doodle poll for the TC+PTL interaction session, which is 2 hrs either on Monday or Tuesday
15:15 <gmann> #link https://doodle.com/poll/ua72h8aip4srsy8s
15:15 <gmann> ricolin: I think you voted on the TC+PTL session, not on the TC PTG
15:15 <gmann> please check
15:15 <ricolin> you're right
15:15 <ricolin> will vote right now
15:15 <dansmith> too many doodles
15:16 <gmann> ricolin: thanks
15:16 <gmann> for the TC sessions, I am thinking of booking slots for two days, 4 hrs each day?
15:16 <gmann> that should be enough? what do you all say?
15:17 <dansmith> I'll have a hard time making all of that, as usual, but sure
15:17 <ricolin> done
15:18 <ricolin> gmann, I think that's good enough
15:18 <gmann> k
15:18 <gmann> and this is the etherpad to collect topics #link https://etherpad.opendev.org/p/tc-yoga-ptg
15:18 <gmann> please start adding the topics you would like to discuss
15:19 <gmann> anything else on the PTG?
15:19 <diablo_rojo_phone> I assume we also want to coordinate with the k8s folks for some time?
15:20 <ricolin> diablo_rojo_phone, +1
15:20 <gmann> sure, last time the k8s folks did not join, but we are always fine if they would like to. we can have a 1 hr slot for that if it's ok for them
15:21 <diablo_rojo_phone> Something to keep in mind.
15:21 <diablo_rojo_phone> Yeah, I think with more heads up, and if we dictate a time to them and put an ical on their ML, we should get more engagement.
15:22 <gmann> sure, I did that on the ML last time too. I can do it this time as well.
15:22 <ricolin> IMO if we like to include that, maybe we need more than 8 hours (4 a day)
15:22 <gmann> ricolin: time slots are not an issue, I think.
15:22 <ricolin> gmann, okay
15:22 <gmann> but yes, we can extend if needed
15:24 <gmann> added this in the etherpad
15:24 <gmann> anything else?
15:24 <gmann> #topic ELK services plan and help status
15:24 <gmann> Help status
15:24 <gmann> I think there is no help yet. clarkb, fungi: anything you heard from anyone?
15:25 <clarkb> I have not
15:25 <gmann> k
15:25 <gmann> Reducing the size of the existing system
15:25 <gmann> clarkb: ^^ go ahead
15:25 <clarkb> Since increasing the log workers to 60% of our total, it is keeping up much better than before
15:26 <clarkb> On the Elasticsearch cluster side of things we are using ~2.4TB of disk right now. We have 6TB total but only 5TB is usable. The reason for this is we have 6 nodes with 1TB each and we are resilient to a single node failure, which means we need to fit within the disk available on 5 instances
15:26 <clarkb> Given that the current disk usage is 2.4TB or so, we can probably reduce the cluster size to 5 nodes. Then we would have 5TB total and 4TB usable.
15:27 <clarkb> If we reduce to 4 nodes then we get 3TB usable, and I think that is too close for comfort
15:27 <clarkb> One thing to keep in mind is that growing the system again, if we shrink and it falls over, is likely to be difficult. For this reason I think we can take our time and keep monitoring usage patterns a bit before we commit to anything
15:28 <clarkb> But based on the numbers available today, I would say we should shrink the log workers to 60% of their total size now and reduce the Elasticsearch cluster size by one instance
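
[Editor's note: the sizing rule clarkb uses is that a cluster tolerating one node failure must fit its data on the remaining nodes, so usable capacity is (nodes - 1) x per-node disk. A quick check of the numbers quoted above:

    # Usable capacity with single-node-failure resilience.
    NODE_DISK_TB = 1.0
    CURRENT_USE_TB = 2.4

    for nodes in (6, 5, 4):
        usable = (nodes - 1) * NODE_DISK_TB
        print(f"{nodes} nodes: {usable:.1f}TB usable, "
              f"{usable - CURRENT_USE_TB:.1f}TB headroom")

This prints 5.0TB usable / 2.6TB headroom for 6 nodes, 4.0TB / 1.6TB for 5 nodes, and 3.0TB / 0.6TB for 4 nodes, matching the "too close for comfort" assessment of a 4-node cluster.]
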
15:28 <gmann> is 2.4TB the usual usage, like during the peak time of a release etc., or just the current one?
15:29 <clarkb> gmann: just the current usage. It's hard to look at numbers historically like that because cacti doesn't give us great resolution, but maybe fungi or corvus have tricks to find that data more accurately
15:29 <gmann> 'shrink the log workers to 60% of their total size' - shrink or increase?
15:30 <gmann> initially you mentioned increasing
15:30 <clarkb> gmann: shrink. We have 20 instances now. I have disabled the processes on 8 of them and we seem to be keeping up. That means we can shrink to 60%, I think
15:30 <clarkb> gmann: last week I had it set to 50% and we were not keeping up, so I increased that to 60%, but that is still a shrink compared to 100%
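
[Editor's note: the percentages refer to active log-worker processes out of the full fleet: 20 instances with 8 disabled leaves 12 active, and 12/20 = 60%; the 50% trial the week before was 10 active workers. "Shrink to 60%" therefore means running at 60% of previous full capacity, not adding 60% to it.]
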
15:30 <gmann> ohk, got it
15:31 <gmann> I thought it was 60% more than what we had :)
15:31 <gmann> I think this is a reasonable proposal.
15:31 <clarkb> Anyway, that is what the current data says we can do. Let's watch it a bit more and see if more data changes anything
15:31 <clarkb> But if this stays consistent, we can probably go ahead and make those changes more permanent
15:32 <gmann> clarkb: is it fine to monitor until the Xena release?
15:32 <gmann> or do you think we should decide earlier than that?
15:33 <clarkb> it's probably ok to monitor until then, particularly during feature freeze, as that is when demand tends to be highest
15:33 <gmann> yeah
15:35 <clarkb> that was all I had. We can watch it, and if those numbers hold up, make the changes after the Xena release (or maybe after feature freeze)
15:35 <gmann> +1 sounds perfect
15:35 <gmann> clarkb: anything else you would like to keep discussing on this in the TC meeting, or is it fine to remove it from the agenda for now and re-add it around the Xena release?
15:36 <clarkb> Should be fine to remove for now
15:36 <gmann> ok
15:36 <gmann> thanks a lot clarkb for reporting on the data and helping on this.
15:37 <gmann> #topic Open Reviews
15:37 <gmann> #link https://review.opendev.org/q/projects:openstack/governance+is:open
15:37 <gmann> many open reviews, let's check them quickly and vote accordingly
15:38 <ricolin> will do
15:38 <gmann> tc-members: please vote on the Yoga testing runtime #link https://review.opendev.org/c/openstack/governance/+/799927
15:38 <gmann> which is the same as what we had in Xena
15:38 <gmann> centos-stream9 can be added later once that is released
15:39 <clarkb> gmann: are you planning to support both 8 and 9?
15:39 <clarkb> my selfish preference is that you pick only one (as it allows us to delete images more quickly)
15:39 <gmann> clarkb: no, just one, meaning updating 8->9
15:39 <clarkb> got it
15:40 <gmann> need one more vote on this project-update #link https://review.opendev.org/c/openstack/governance/+/799826
15:42 <gmann> the others either have enough required votes or are waiting for a depends-on / zuul fix.
15:42 <ricolin> voted
15:42 <gmann> thanks
15:42 <gmann> ricolin: this quick one for governance-sigs https://review.opendev.org/c/openstack/governance-sigs/+/800135
15:42 <gmann> anything else we need to discuss for today's meeting?
15:42 <ricolin> done
15:43 <gmann> thanks
15:43 <ricolin> yes
15:43 <ricolin> one thing: sorry for the delay, but I sent the mail to collect pain points http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023659.html
15:44 <gmann> +1
15:44 <ricolin> for the 'eliminate pain points' idea
15:44 <ricolin> Let's see if we can get valid pain point feedback from teams
15:45 <gmann> thanks ricolin for doing that.
15:45 <ricolin> NP, will keep tracking
15:45 <gmann> sure
15:45 <gmann> anything else?
15:46 <gmann> thanks all for joining, let's close the meeting
15:46 <gmann> #endmeeting
15:46 <opendevmeet> Meeting ended Thu Jul 15 15:46:23 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:46 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/tc/2021/tc.2021-07-15-15.00.html
15:46 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/tc/2021/tc.2021-07-15-15.00.txt
15:46 <opendevmeet> Log:            https://meetings.opendev.org/meetings/tc/2021/tc.2021-07-15-15.00.log.html
15:46 <ricolin> thanks gmann
15:49 <opendevreview> Merged openstack/governance-sigs master: Moving IRC network reference to OFTC  https://review.opendev.org/c/openstack/governance-sigs/+/800135
16:17 <opendevreview> Merged openstack/governance master: Create repo for Hashicorp Vault deployment  https://review.opendev.org/c/openstack/governance/+/799826
16:17 *** rpittau is now known as rpittau|afk
16:19 <opendevreview> Merged openstack/governance master: Proposing consistent and secure RBAC as a community-wide goal  https://review.opendev.org/c/openstack/governance/+/799705
20:39 *** diablo_rojo is now known as Guest927
21:45 *** diablo_rojo__ is now known as diablo_rojo
