Thursday, 2022-09-29

*** lbragstad8 is now known as lbragstad06:56
fricklertc-members: I suggest you consider taking a more active approach in fixing up Zuul config errors. the weekly reminders don't seem to have had much effect over the last year or so, and now we have hundreds of additional errors due to the queue config change11:57
noonedeadpunkI'd say a big problem is also backporting fixes to all prior branches12:00
noonedeadpunkSo we likely will need to force-merge things to clean out zuul errors with regards to that12:02
fungiwhich raises the question: if it's hard to backport a simple change to old branches, is it because their tests no longer work? and if so, then why are we keeping those branches open instead of eol'ing them?12:04
fungibut also it's worth pointing out that many of the lingering errors are a good sign those branches can be eol'd: if they've gone many months with broken configuration then they've definitely not been testable for at least that long12:07
fungialso the ~120 stable branch jobs which fail every day are rather a waste to keep running when we know they're not being fixed12:08
noonedeadpunkwell, I see we also have issues related to retired projects12:09
noonedeadpunkopenstack/networking-midonet is quite a good example12:10
fungiyes, those are the sorts of things which have gone unfixed for a very long time12:10
fungibranches still depending on networking-midonet are clearly not running any jobs and nobody seems to care, so why do we keep them around?12:10
noonedeadpunkbut I do agree that eol'ing branches can be fair thing to do indeed12:11
noonedeadpunkbut on the other hand, my expectation as a user is that what we say at https://releases.openstack.org/ matches reality at least somewhat. Which is no longer the case12:20
noonedeadpunk(which makes it quite confusing to have that stated there)12:20
noonedeadpunkwell, at least we have a note there that a project can EOL...12:21
noonedeadpunkBut then for how long should EM be kept?12:21
fungiyes, i agree that our list of em branches needs to match reality12:33
*** dasm|off is now known as dasm12:48
JayFThat's a problem in both directions13:56
JayFIronic, for example, has very good support for releases going all the way back to ussuri and train13:57
fungiyeah, but some projects still have branches open and periodic jobs running/failing for pike14:04
knikolla[m]would it be less of a problem if those jobs were passing (as in, the problem is that we're wasting resources), or is the problem the number of jobs (we're lacking resources in general)?14:08
fungithe two are intertwined. certainly running jobs which always fail and nobody is bothering to fix is of no use to anyone14:09
fungirunning jobs on branches nobody's going to introduce new patches into, even if they succeed, is also probably wasteful though14:10
fungikeep in mind that "lacking resources" is a misleading way of looking at things. we run jobs on resources donated by companies who could be using those resources in other ways, so whether or not those resources are available to us, we need to be considerate stewards of them and not use what we don't need to14:11
knikolla[m]that's a really good point14:12
knikolla[m]are the opendev.org resources pooled equally for all projects running therein?14:13
JayFHonestly I would just take a project having tests that fail that regularly as a sign that the project isn't being maintained in general, and that either we need to take action to maintain it if that's the contract with users, or take action to retire it14:14
fungieffectively yes, we don't implement per-project quotas, though there are relative prioritizations for different pipelines and "fair queuing" algorithms which prioritize specific changes in order to keep a small number of projects from monopolizing our available quota when we're running at max capacity14:14
JayFAnd unless we have some kind of magical power to summon developers, I don't think maintaining things is really an option14:15
fungiright now we claim we've got "extended maintenance" on branches all the way back to pike, and i know there's been recent discussion of dropping pike specifically, i doubt queens, rocky, stein, or ussuri are in much better shape across most projects at this point14:16
fungier, train/ussuri as JayF pointed out seems to be in good shape in ironic at least14:16
fungibut keeping stein and older open is certainly questionable at this point14:17
JayFOne of the things I backported to get ironic in good shape for train was to nova, so they have functioning CI in train as well14:17
JayFI will look at this specifically for ironic, and if ironic has old branches open - older than train - I can at least take action to fix things there.14:18
knikolla[m]++ on dropping pike. i think we should fix a set number of releases that we keep on EM. we can't keep scaling the number of jobs with each new release without increasing testing capacity. 14:18
fricklerpike-eol is almost done, just blocked by some not-too-well maintained projects https://review.opendev.org/q/topic:pike-eol14:18
fungiright now we have 12 branches open for projects (if you count master/2023.1/antelope)14:19
fungi1 for development, 3 under stable maintenance (wallaby will transition to em shortly after zed releases), and 8 in "extended maintenance" right now14:20
fungior 7 if pike eol is finished around the zed release/wallaby transition to em14:21
knikolla[m]frickler: do we need to wait indefinitely for a ptl to approve the eol transition for the remaining teams? 14:21
fungifrom a procedural standpoint, the tc can decide this if they want14:21
fricklerknikolla[m]: discuss this with the release team maybe14:22
fungiand the tc has delegated that decision to the release managers, so they could decide to go forward with it as well even if ptls are in disagreement14:22
fungibut yeah, in this case it's less a matter of project leaders disagreeing, and more a matter of them being completely unresponsive14:23
knikolla[m]i'll bring it up with the release team14:23
gmannwe discussed it at the last PTG, or the yoga PTG i think, where i raised concern over the ever-increasing number of EM branches, which is becoming difficult from a QA perspective too14:23
gmannthe proposal was to limit those, but it seemed everyone, including the release team, was ok with the current model14:24
gmannit is 12 branches currently, but it will keep growing by 1 every 6 months or so14:24
fungiwhen the em model was first proposed, we said at the time that if jobs started failing on branches and nobody felt like fixing them, we'd drop testing or eol the branches14:25
knikolla[m]i think the current model is unsustainable14:25
knikolla[m]++ that's my memory too14:25
JayFWe do need to be careful to make sure that we're looking at the whole range of openstack projects, and not just the ones that are noisy and broken right now. We should fix the problem without damaging autonomy of projects that are operating properly.14:25
JayFIsn't this just a resourcing issue? I imagine folks aren't leaving old broken branches up intentionally, but cleaning up older releases has to be pretty low on some folks' priority list14:26
fungito put this in perspective, i've been working on tests of production data imports for an upcoming migration from mailman v2 to v3. importing lists.openstack.org data takes roughly 2.5 hours. more than half of that time is importing the archive for the stable branch failures mailing list14:26
knikolla[m]JayF: ++ totally. I'd be in favor of an opt-in model for keeping branches that are EM.14:26
fungithe em process already supports that14:27
gmanntrue and agree14:27
fungithe problem is that the incentive was inverted. the idea was that projects would ask to eol their older branches ahead of the project-wide eol. instead we need to be eol'ing branches for projects whose jobs stop working and they don't ever fix them. if there's nobody around to work on fixing testing for old branches, there's nobody around to request we eol that branch either14:28
gmanncoming back to zuul config error14:29
gmannfungi: is everything in this etherpad? this is what I add in my weekly summary report and ask projects to fix https://etherpad.opendev.org/p/zuul-config-error-openstack14:29
gmannand it seems the old ones are fixed14:29
gmannso the weekly reminder is working to some extent 14:29
gmannthis seems new, right? 'extra keys not allowed @ data['gate']['queue']'14:31
fungigmann: yes. unfortunately, deprecating pipeline-specific queue syntax in zuul for 19 months and spending an entire release cycle repeatedly notifying projects that their jobs are going to break did not get those addressed before the breaking change finally merged to zuul14:32
fungii've answered questions about probably 10 different projects where contributors were confused that no jobs were running on some branch, and in every case it's been that14:33
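For reference, that particular error is usually fixed by a one-line move of the queue declaration from the pipeline level to the project level in a repository's .zuul.yaml. A minimal sketch, with a hypothetical queue name:

    # old form: queue declared inside a pipeline stanza (now rejected by zuul)
    - project:
        gate:
          queue: examplequeue
          jobs:
            - openstack-tox-py38

    # new form: queue declared once at the project level
    - project:
        queue: examplequeue
        gate:
          jobs:
            - openstack-tox-py38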
gmannfungi: ok, maybe we can put these in the etherpad and ask projects to fix those as a priority in the weekly report too. let's see14:34
gmannonce we have that project list in the etherpad, we can ask one or two tc-members to volunteer to drive this and ask/help projects to fix these soon. 14:40
fungithe queue fixes are very simple, but of course if your stable branch testing is already broken then you can't just merge that without fixing whatever other bitrot has set in14:41
gmannyeah, if those are broken anyway, no change can be merged, so there is no extra breakage caused by the zuul error14:42
gmannI will ask in today's meeting if any tc-members volunteer to drive this in a more aggressive way along with the ML. maybe by proposing the fixes or so14:49
fungithanks! hopefully that'll help14:49
fungii considered volunteering to bypass zuul and directly merge fixes for a bunch of those more trivial errors, but honestly if testing on those branches is too broken to merge anything anyway then maybe it's a sign those branches can be eol'd (at least in specific projects who aren't fixing them)14:50
knikolla[m]i can take this, i've rarely ventured into zuul territories. 14:50
knikolla[m]++ on eoling14:51
fungiknikolla[m]: if you have specific questions for basic classes of fixes for those errors, let me know and i'm happy to get you examples14:51
gmannyeah that is the general criterion for moving them to EOL, but this zuul config error fix work can highlight it14:51
gmannknikolla[m]: thanks14:52
fungithey're usually very straightforward things like some required project in a job no longer exists because of a repository rename, or a job definition no longer exists because a branch was eol'd14:52
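A hedged sketch of what such a fix typically looks like in a branch's .zuul.yaml; the repository and job names here are made up for illustration:

    # before: required-projects still references the pre-rename repository,
    # which zuul can no longer resolve
    - job:
        name: example-plugin-functional
        parent: devstack-tox-functional
        required-projects:
          - x/example-plugin

    # after: point at the renamed repository (or drop the job entirely if
    # the dependency was retired)
    - job:
        name: example-plugin-functional
        parent: devstack-tox-functional
        required-projects:
          - openstack/example-plugin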
noonedeadpunkTbh right now force-merging seems easier, as we have too many errors at the moment which can be solved quite easily. But with EOLing branches this process will take a while...14:52
noonedeadpunkso quite depends on how we want to clean zuul config14:53
noonedeadpunkand eol-ing can be taken as a separate track imo14:53
fungii'm more concerned about moving fixes to branches people are actually trying to use. if those errors linger a bit longer while we increase the priority for eol decisions, i'm okay with that14:53
gmanni do not think we need to fix the EOL or EOLing ones either14:53
JayFgmann: we should mention in meeting -> re: chair elections, only 5 out of 9 of us have voted14:53
JayFgmann: with only 4 working days left14:53
gmannJayF: yeah, I have added it in meeting agenda14:53
JayFty14:53
gmanngood reminder 14:53
gmanntc-members: weekly meeting in ~5 min from now14:55
fungione way to handle the mass zuul fixes would be to prioritize maintained stable branches, and then set a deadline for any in em state where we'll just eol those branches if the projects don't get fixes merged for them before then14:55
gmannif their jobs are broken then it will be easy to move them to EOL. if there is no review then it is difficult; maybe elodilles will just merge it14:56
gmannI am sure there will be many EM repo where their gate is already broken for other things14:57
fungiabsolutely14:58
slaweq@fungi gmann I was trying some time ago to fix zuul config issues related to neutron, but most of those patches were to very old branches, e.g. pike, and ci for the projects on those stable branches was already broken14:59
slaweqso it was really hard to get things merged, and I don't have the time and knowledge about all those projects to fix their gates there :/14:59
gmannslaweq: yeah, hopefully pike will be EOL soon14:59
slaweqI still have it in my todo list but there is always something more important to do14:59
gmann+1 great14:59
slaweqI will try to get back to it :)14:59
gmann#startmeeting tc15:00
opendevmeetMeeting started Thu Sep 29 15:00:03 2022 UTC and is due to finish in 60 minutes.  The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.15:00
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:00
opendevmeetThe meeting name has been set to 'tc'15:00
gmann#topic Roll call15:00
JayFo/15:00
gmanno/15:00
slaweqgmann I have another meeting today at the same time as our tc meeting, so I will just be lurking here for the first 30 minutes15:00
gmannslaweq: ack. thanks 15:00
jungleboyjo/15:00
slaweqo/15:01
gmanntoday agenda #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting15:01
jungleboyjAs a lurker I guess. :-)15:01
knikolla[m]o/15:01
gmannjungleboyj: good to have you in meetings15:02
noonedeadpunko/15:02
spotz_o/15:02
gmannlet's start15:02
gmann#topic Follow up on past action items15:02
rosmaitao/15:03
gmannthere is one action item15:03
gmannJayF to send the ics file on TC PTG ML thread15:03
JayFThere is no TC PTG ML thread to send it to, afaict :) 15:03
gmannJayF: this one you can reply to #link https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030575.html15:03
JayFack; got it, I'll take care of that this morning15:04
gmannperfect, thanks JayF 15:04
gmann#topic Gate health check15:04
gmannany news on gate15:04
gmanndevstack and grenade setup for 2023.1 cycle is almost done #link https://review.opendev.org/q/topic:qa-zed-release15:05
slaweqnothing from me15:05
gmannfrom failure side, I have not noticed any more frequent one15:05
slaweqstarting 2023.1 cycle we should have "skip release" grenade jobs in projects, right?15:06
gmannyes, we have setup that in grenade side15:06
slaweqmaybe we should remind teams that they should add such jobs in their queues? wdyt?15:06
gmann#link https://review.opendev.org/c/openstack/grenade/+/859499/2/.zuul.yaml#37815:07
gmannmaybe 5-6 projects have those already, but yes, we should remind them15:07
slaweqI can send email about it15:07
gmannslaweq: thanks. 15:07
gmann#action slaweq to send email to projects on openstack-discuss ML about add the grenade-skip-level (or prepare project specific job using this base job) in their gate15:08
gmannprojects can use this as base job like done in manila project15:09
gmann#link https://github.com/openstack/manila/blob/486289d27ee6b3892c603bd75ab447f022d25d13/zuul.d/grenade-jobs.yaml#L9515:09
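For teams wiring this up, a minimal sketch of a project-specific variant modelled on the manila example linked above; the job name, required project, and settings are illustrative assumptions:

    - job:
        name: example-grenade-skip-level
        parent: grenade-skip-level
        description: |
          Upgrade from the previous SLURP release (skipping one release)
          with this project's services enabled.
        required-projects:
          - openstack/example-project

    - project:
        check:
          jobs:
            - example-grenade-skip-level:
                voting: false  # start non-voting while shaking out issues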
noonedeadpunkgmann: just to ensure I got it right - basically an upgrade job from Y to A15:09
gmannthis needs to be updated. I will do it after the meeting15:09
gmannnoonedeadpunk: yes, stable/yoga to current master15:09
gmannI think other than the grenade in-tree supported projects, manila is the first one to set up the job so far15:11
rosmaitabut to be clear, we don't actually support skip-release-upgrades until 2023.1, because that is our first 'tick' release15:11
gmannalso we need to make sure it does not run on zed gate15:11
gmannrosmaita: so we will say 2023.1 is the first SLURP, right? meaning it can be upgraded to from both stable/yoga and stable/zed?15:12
gmannthat is what i remember15:12
arne_wiebalcko/15:12
slaweqso those skip-level jobs can be non-voting in the 2023.1 cycle, but I think projects should start adding them to see how things work (or don't) for them15:12
rosmaitaright, 2023.1 is supposed to be designed to be able to go directly to 2024.115:13
gmannmentioned here too #link https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html#example-sequence15:13
rosmaitaright, so A (or 2023.1) is the first SLURP, which i thought meant you will be able to go from 2023.1 to 2024.1, not that you should be able to upgrade to 2023.1 from yoga15:14
gmann"Our letter-based release naming scheme is about to wrap back around to A, so the proposal is that the “new A” release be the first one where we enforce this scheme. Y->A should be a “dress rehearsal” where we have the jobs enabled to help smoke out any issues, but where hard guarantees are not yet made."15:14
gmannrosmaita: you are right15:14
rosmaitaok, cool15:14
rosmaita(just wanted to be sure)15:14
noonedeadpunk+115:14
noonedeadpunkmakes sense to me - was just double checking I remembered it right15:14
gmannlet's ask projects to add the jobs anyway15:15
gmannthanks15:15
knikolla[m]makes sense.15:15
jungleboyj++15:15
rosmaitayes, the "dress rehearsal" is a good metaphor for this15:15
JayFIt's not a dress rehearsal if we don't actually rehearse, so I don't think calling it a rehearsal changes any actions.15:15
gmannslaweq: let's ask, and if any project is facing issues then they can keep it non-voting, but let's tell them to make it voting15:15
slaweq++15:15
knikolla[m]i guess more of a canary15:15
gmannstable/wallaby to stable/yoga went pretty well, so maybe Y->2023.1 will too15:16
JayFI know many operators in practice have been skipping releases frequently. It'll be interesting to see if CI unearths anything that they haven't exposed.15:16
gmannyeah that is our intention, but let's see how our upgrade testing goes 15:17
gmannand it will give us the opportunity to extend it 15:17
jungleboyj*fingers crossed*15:17
gmannok, moving next15:18
gmannBare 'recheck' state15:18
gmann#link https://etherpad.opendev.org/p/recheck-weekly-summary15:18
gmannslaweq: go ahead please15:18
slaweqI don't have much to say today15:19
slaweqI contacted a few projects last week15:19
gmann+115:20
slaweqlet's see if things will improve for them now15:20
gmannyeah, it is getting noticed. the tacker project contacted me for more clarification on it, and they understand and will take action15:20
gmannthanks slaweq for driving this and helping to save infra resources 15:21
JayFCan the tool used to generate these spit out specific patch URLs? I'm mainly curious (with my PTL hat on) who the 'stragglers' are in Ironic that are not yet putting reasons for rechecks15:21
JayFand if that tool exists, we can potentially offer it to other project leaders to enable them to chat about specific instances15:21
gmannone is this list https://etherpad.opendev.org/p/recheck-weekly-summary#L2115:22
gmannI hope that is bare recheck one only15:22
slaweqJayF scripts are here https://github.com/slawqo/rechecks-stats15:22
JayFack slaweq, will look into it thanks :D 15:22
slaweqyw15:23
gmannthanks15:23
gmannanything else on bare recheck?15:23
knikolla[m]i noticed that someone auto-translated the latter half of that etherpad15:23
fungiif you need it rolled back to an earlier state, let me know15:23
fungithere's an admin api i can use to undo revisions15:23
gmannok15:25
gmannZuul config error15:25
gmann#link https://etherpad.opendev.org/p/zuul-config-error-openstack15:25
gmannwe talked about it before meeting15:25
gmannand we need a more aggressive approach to get them fixed, or fix them ourselves.15:25
gmannknikolla[m] volunteered to drive this from the TC side by talking to project leads/liaisons or fixing them15:26
gmannthanks knikolla[m] for that15:26
knikolla[m]fungi: thanks. i looked at the revisions and doesn't seem necessary. the translation was appended, rather than replacing the content. weird. 15:26
gmannbut I suggest we as tc-members also each take some of the projects and fix them15:26
gmannlike slaweq will do for neutron15:26
gmannI volunteer to do for nova, qa, glance, and keystone15:26
gmannplease add your name in etherpad #link https://etherpad.opendev.org/p/zuul-config-error-openstack#L2615:27
JayFI am surprised to see Ironic hits on that list; I'll ensure those are corrected.15:27
fungito restate what i suggested before the meeting, i recommend prioritizing maintained stable branches. for em branches you could take teams not fixing those as an indication they're ready for early eol15:27
gmannJayF: cool, thanks15:27
knikolla[m]fungi: ++15:27
clarkbone common cause of these errors in the past has been project renames requested by openstack15:27
TheJuliaJayF: I think the ironic ones are branches I've long expected to be closed out15:27
JayFTheJulia: I will likely propose at next Ironic meeting we EOL a bunch of stable branches 15:28
TheJuliajfyi15:28
clarkbit would be a good idea for everyone requesting project renames to understand they have a little bit of legwork to complete the rename once gerrit is done15:28
gmannyeah, it will also surface the gate status of EM branches if those are broken and unnoticed 15:28
TheJuliaJayF: just do it :)15:28
gmannclarkb: sure, let me add it as an explicit step in the project-team-guide repo rename process15:28
fungigmann: that's an excellent idea, thanks15:29
gmannbut let's take 2-3 projects by ourselves to fix these and we will 1. fix many projects 2. trigger/motivate other projects/contributors to do it15:30
gmann#action gmann to add a separate step in the repo rename process about fixing the renamed repo in zuul jobs across stable branches too15:31
gmannanything else on zuul config errors or gate health?15:31
clarkbyesterday I updated one of the base job roles15:31
clarkbthe motivation was to speed up jobs that have a large number of required projects like those for OSA. Doesn't seem to have hurt anything and some jobs should be a few minutes quicker now15:32
gmannnice15:32
clarkbIn general, there are some ansible traps that can increase the runtime of a job unnecessarily. Basically, loops with large numbers of inputs where each iteration shouldn't be expensive, so the per-task ansible overhead ends up being the only real overhead15:32
clarkbbe on the lookout for those and if you have them in your jobs rewriting to avoid loops in ansible can speed things up quite a bit15:33
fungithere was also a recent change to significantly speed up log archiving, right?15:33
clarkbfungi: yes that was motivated by neutron jobs but really all devstack jobs were impacted15:33
fungi(along the same lines)15:33
clarkbit had the same underlying issue of needlessly expensive ansible loops.15:33
clarkbAnyway this is more of a "if your jobs are slow this is something to look for" thing15:33
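As a rough illustration of the pattern clarkb describes (paths and variable names invented for the sketch), the rewrite usually replaces a per-item task loop with one task that does the same work in a single step:

    # slow: one ansible task execution per file, so per-task overhead dominates
    # (found_logs is assumed to be registered by an earlier find task)
    - name: Compress log files one at a time
      ansible.builtin.command: "gzip {{ item }}"
      loop: "{{ found_logs.files | map(attribute='path') | list }}"

    # faster: a single task handles every file at once
    - name: Compress all log files in one task
      ansible.builtin.shell: find /var/log/example -type f -name '*.log' -exec gzip {} +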
gmann+1, thanks for improvement and updates clarkb 15:34
gmann#topic 2023.1 cycle PTG Planning15:34
gmannwe have etherpad ready to add the topics 15:34
gmann#link https://etherpad.opendev.org/p/tc-leaders-interaction-2023-115:35
gmann#link https://etherpad.opendev.org/p/tc-2023-1-ptg15:35
gmannwith schedule information too15:35
*** rosmaita1 is now known as rosmaita15:35
gmannand JayF will send the icals over the ML too15:35
gmannon operator-hour sessions, we have 7 projects booked so far15:35
gmannI will send another reminder to projects sometime next week15:36
fungiif anyone needs operator hour tracks created in ptgbot, feel free to ping me in #openinfra-events15:36
gmannbut feel free to reach out to projects you are contributing to or in touch with to book it15:36
gmannyeah15:37
fungi(track creation/deletion is an admin operation, the rest should be pretty self-service though)15:37
gmannyeah. 15:38
gmannanything else on PTG things for today?15:38
gmann#topic 2023.1 cycle Technical Election & Leaderless projects15:39
gmannLeaderless projects15:39
gmann#link https://etherpad.opendev.org/p/2023.1-leaderless15:39
gmannno special update on this, but all the patches for PTL appointments are up for review; please provide your feedback there15:39
gmannTC Chair election15:40
gmannas you know we have TC chair election going on15:40
gmannyou might have received an email for the CIVS poll; JayF is handling the election process15:41
gmann1. please let JayF know if you have not received the email15:41
fungi(after checking your junk/spam folder)15:41
gmann+115:41
spotz_Everyone should be opt-in at this point:)15:42
gmann2. vote if you have not done so yet. we do not need to wait until the last day; instead we can close it early once all members have voted15:42
gmannyeah opt-in is required15:42
jungleboyj:-)15:42
gmannand just to restate, once voting is closed, we will merge the winner's nomination and abandon the other one15:43
rosmaitathanks to gmann and knikolla[m] for making an election necessary!15:43
rosmaita(i am not being sarcastic, i think it is a good thing)15:43
knikolla[m]:)15:43
gmannand at the PTG we can discuss how to make the nomination/election better/more formal. 15:44
gmann+1, agree15:44
gmannanything else on this?15:45
jungleboyj+1  And also gmann thank you for putting your name in again.  All your leadership recently is GREATLY appreciated.15:45
gmannthanks 15:45
gmann#topic Open Reviews15:45
gmannone is this one #link https://review.opendev.org/c/openstack/governance/+/85946415:46
gmannDedicate Zed release to Ilya Etingof15:46
gmannthis is great idea and thanks iurygregory for the patch15:46
gmannplease review and cast your vote15:46
JayFI'd like to thank fungi for suggesting that, and iurygregory for owning putting up the patch. Ilya Etingof was a valued member of the Ironic community and this is a deserving tribute.15:46
rosmaita++15:47
gmann++ true15:47
arne_wiebalckJayF: ++15:47
spotz_++15:47
fungithe 7 day waiting period for motions as described in the charter puts the earliest possible approval date right before the release, so merging it in a timely fashion will be good15:47
gmannyeah15:48
jungleboyj++15:48
fungithe feedback so far has been all in favor, so the foundation marketing crew are operating under the assumption it will merge on schedule15:48
fungias far as mentioning the dedication in press releases about the release, and such15:48
jungleboyjOne of the special things about this community.15:49
knikolla[m]the majority voted, so setting a reminder for when the 7 days are up.15:49
knikolla[m]jungleboyj: ++15:49
gmannsure, I think we can merge it on the 7th, but we will keep an eye on it15:49
gmann7th oct15:49
gmannor maybe the 6th, i will check15:50
gmannanother patch to review is this #link https://review.opendev.org/c/openstack/governance/+/85869015:50
knikolla[m]4th15:50
fungicharter says 7 calendar days, which would be october 415:50
gmannyeah, that is even better. I will run the script to get it merged asap15:50
gmannmigrating the CI/CD jobs to ubuntu 22.0415:51
fungithat's the day before the release, just to be clear15:51
gmannit is in good shape now, thanks to noonedeadpunk for updating 15:51
gmannfungi: ack15:51
gmannthat is all from the agenda; we have 9 min left if anyone has anything else to discuss15:51
gmannour next meeting will be on 6th Oct and a video call15:53
JayFThose nine minutes would be a great opportunity for the 4 folks who haven't voted to do so :)15:53
gmann+115:53
knikolla[m]++15:53
spotz_++:)15:53
gmannlet's vote if you have not done15:54
gmannthanks everyone for joining, let's close the meeting15:54
gmann#endmeeting15:54
opendevmeetMeeting ended Thu Sep 29 15:54:50 2022 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:54
opendevmeetMinutes:        https://meetings.opendev.org/meetings/tc/2022/tc.2022-09-29-15.00.html15:54
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/tc/2022/tc.2022-09-29-15.00.txt15:54
opendevmeetLog:            https://meetings.opendev.org/meetings/tc/2022/tc.2022-09-29-15.00.log.html15:54
slaweqo/15:54
spotz_Thanks all15:54
JayFWe're up to 8 outta 9. I don't know who the one is but we're close :D 15:55
jungleboyjI will be OOO next week.  Just FYI.15:55
* knikolla[m] starts the drumroll 15:55
JayFI sent out those ICS files. Let me know if you have any issues with it.16:50
*** chkumar|rover is now known as raukadah17:15
gmannJayF: it seems i received only the Monday session, 'PTG: TC and Community Leaders session'17:31
gmanndid you send two emails or just a single one?17:31
JayFgmann: That one ICS file should import three meetings. 17:31
JayFgmann: I tested it working on google calendar.17:31
JayFIf it didn't work for you, I'd be interested to know what calendaring system you're using. 17:32
knikolla[m]Worked for me on iphone17:34
gmannI tried in zoho and in google calendar also17:37
clarkbfastmail is able to show all three17:39
JayFgmann: in google calendar, make sure you go to settings -> Import & export -> import file, and import it that way17:40
JayFThat's how it worked on gmail in my testing.17:40
* JayF will reply to ML with instructions if that works for you and whatever-other-path did not17:40
gmannJayF: yes, it worked via the Import route17:45
JayFack, I'll respond with that information17:46
gmannable to import all 3 events into the calendar17:46
gmannthanks 17:46
JayFo/ email is out17:51
opendevreviewGhanshyam proposed openstack/project-team-guide master: Add openstack-map and zuul jobs updates step in repository handling  https://review.opendev.org/c/openstack/project-team-guide/+/85988618:44
gmannfungi: clarkb: ^^ please check if there is anything else we need to mention? 18:59
clarkbgmann: left a suggestion19:02
gmannslaweq: the manila-grenade-skip-level job is also passing; you can give this ref in your email when asking projects to start the grenade skip-level job https://review.opendev.org/c/openstack/manila/+/85987519:02
gmannclarkb: checking19:02
gouthamrgmann++19:06
opendevreviewGhanshyam proposed openstack/project-team-guide master: Add openstack-map and zuul jobs updates step in repository handling  https://review.opendev.org/c/openstack/project-team-guide/+/85988619:09
gmannclarkb ^^ updated19:09
gmanngouthamr: about to ping for review on manila job :)19:09
gouthamrgmann: 'm on it :) thanks!19:12
*** tosky_ is now known as tosky21:24
*** dasm is now known as dasm|off21:24
