15:00:00 <iurygregory> #startmeeting ironic
15:00:00 <opendevmeet> Meeting started Mon Apr 25 15:00:00 2022 UTC and is due to finish in 60 minutes.  The chair is iurygregory. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:00 <opendevmeet> The meeting name has been set to 'ironic'
15:00:10 <iurygregory> Hello ironicers!
15:00:11 <erbarr> o/
15:00:12 <rpioso> o/
15:00:16 <iurygregory> Welcome to our weekly meeting!
15:00:22 <dtantsur> o/
15:00:22 <hjensas> o/
15:00:33 <TheJulia> o/
15:00:35 <ajya> o/
15:00:44 <rpittau> o/
15:00:49 <iurygregory> the agenda for our meeting can be found in the wiki
15:00:57 <iurygregory> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meetin
15:01:09 <stendulker> o/
15:01:12 <iurygregory> oops, wrong link
15:01:37 <iurygregory> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
15:01:50 <iurygregory> #topic Announcements / Reminder
15:02:12 <iurygregory> #info Zed PTG Summary is on the ML
15:02:19 <iurygregory> #link http://lists.openstack.org/pipermail/openstack-discuss/2022-April/028293.html
15:02:34 <rloo> o/
15:02:38 <kamlesh6808c> o/
15:03:08 <iurygregory> sorry about the delay on this and also on the patch with the priorities for the cycle, the last two weeks were a bit complicated downstream
15:04:08 <iurygregory> #info First bugfix branch to be created next week
15:04:11 <TheJulia> I guess I'm curious, first bugfix like 4 weeks into the new cycle?
15:04:14 <rloo> thx for the PTG summary iurygregory!
15:04:30 <iurygregory> TheJulia, yeah...
15:04:34 <rloo> what, there's a bug? :D
15:04:40 <TheJulia> There are always bugs
15:04:45 <TheJulia> They shall rule the world one day!
15:04:53 <rpioso> More, now that it's spring :)
15:04:59 <TheJulia> rpioso: exactly!
15:05:25 <TheJulia> iurygregory: just every six week timing from the last?
15:06:07 <iurygregory> some information to help on that front.. downstream we consume bugfix and stable branches, and we had some changes in the calendar
15:06:29 <rpittau> I think we're in between 5 and 7 weeks, so we should be good? Next week is 5 weeks from the branch cut
15:06:55 <iurygregory> yeah we will be 5 weeks I think
15:06:59 <TheJulia> Okay
15:07:24 <iurygregory> it would help so we don't have to do downstream backports of a feature we had included in *zed* back to *yoga*
15:07:57 <TheJulia> ugh
15:07:59 <TheJulia> fun!
15:08:13 <TheJulia> Anyway, onward
15:08:25 <iurygregory> ok o/
15:08:53 <iurygregory> #topic Review action items from previous meeting
15:09:23 <iurygregory> well I don't think I added it as an action item, but it is one... I had to push the summary + the priorities patch so people could review
15:09:46 <iurygregory> I've done the summary with some delay, the patch with priorities will be up after my lunch today
15:10:10 <iurygregory> #topic Review subteam status reports
15:10:32 <iurygregory> we will likely skip this week and get back to it next week, thoughts?
15:12:01 <iurygregory> ok, moving on =)
15:12:12 <TheJulia> yeah, move on
15:12:17 <iurygregory> #topic Deciding on priorities for the coming week
15:12:25 <iurygregory> #link https://review.opendev.org/q/status:open+hashtag:ironic-week-prio
15:13:35 <TheJulia> I'm thinking of putting multipath patches up... if I do so, any objection if I add them to the prio review list?
15:13:44 <iurygregory> Does anyone have topics that we should review? We only have 4, I know the tempest-plugin one has been open for a while (I will take a look, I was quite busy the past 3 weeks)
15:13:51 <iurygregory> TheJulia, ++ to adding
15:14:37 <iurygregory> rpittau, since you have a -2 on it, what are your thoughts?
15:14:58 <TheJulia> So looks like I could add the auto-add lessee id field feature. Two minor things to fix and it should be good to go
15:15:17 <TheJulia> I'm thinking so we actually have a disk representing it, although it might break our RAID testing. Would be interesting to see!
15:15:42 <rpittau> iurygregory: let's add that to priority, my -2 was related to the testing part
15:15:49 <TheJulia> I'm also focused on trying to get our v6 job sorted
15:15:52 <iurygregory> I would just try to keep it as simple as possible so we can backport without many problems, TheJulia =D
15:15:57 <TheJulia> but I think the issue is in devstack at the moment
15:16:04 <iurygregory> rpittau, ack =)
15:16:19 <TheJulia> iurygregory: well, at least on master we can likely start testing it fairly easily
15:16:25 <TheJulia> I'll add related patches once they are up
15:16:50 <iurygregory> makes sense to me
15:18:13 <iurygregory> I will also add the grenade skip one after looking at the feedback (tks TheJulia and dtantsur )
15:18:51 <iurygregory> and ofc the one with the priorities I will push =D
15:19:37 <iurygregory> ok, moving on
15:19:45 <iurygregory> #topic Open discussion
15:19:55 <iurygregory> we have one topic today \o/
15:20:06 <iurygregory> #info Transition Victoria to EM
15:20:08 <TheJulia> skipping rfe review?
15:20:22 <iurygregory> rfe is after sig
15:20:29 <TheJulia> ahh, open discussion should be at the end
15:20:30 <iurygregory> at least in the order I see in the agenda...
15:20:34 <TheJulia> Sorry
15:20:45 <TheJulia> since open ended discussions can end the meeting
15:20:50 <TheJulia> Anyway, ignore me
15:20:55 <iurygregory> no worries!
15:20:57 <TheJulia> I think i figured out why our v6 job is broken
15:21:15 <iurygregory> #link https://review.opendev.org/c/openstack/releases/+/837937
15:22:14 <iurygregory> I'm wondering if we want to push some releases before tagging victoria-em
15:22:29 <TheJulia> seems reasonable if they are out of date
15:22:57 <iurygregory> #action iury to check if we have releases in victoria before moving to em
15:23:40 <iurygregory> #topic Baremetal SIG
15:23:54 <iurygregory> #link https://etherpad.opendev.org/p/bare-metal-sig
15:24:06 <iurygregory> #info Recording of the last SIG meeting - Manuel Holtgrewe on "Bare Metal for Health - Using OpenStack Ironic for HPC at Berlin Institute of Health"
15:24:16 <iurygregory> #link https://youtu.be/5yJvXFOqzSI
15:24:28 <iurygregory> arne_wiebalck, anything to add for the SIG?
15:26:29 <iurygregory> ok, I think Arne is not around today =)
15:26:31 <iurygregory> moving on
15:26:40 <iurygregory> #topic RFE review
15:27:15 <iurygregory> #info Prevent mass instance deletions/rebuilds
15:27:23 <iurygregory> #link https://storyboard.openstack.org/#!/story/2010007
15:27:30 <iurygregory> TheJulia, o/
15:27:50 <TheJulia> So!
15:27:56 <TheJulia> I proposed two RFE's based upon discussions during the PTG
15:28:09 <TheJulia> The first is a basic mechanism to allow preventing mass deletions of nodes
15:28:43 <TheJulia> The idea is to be lightweight and simple to implement, and I believe fairly easy for us to wire in. I'd appreciate any feedback.
15:29:08 <TheJulia> The latter is regarding Agent power savings
15:29:12 <TheJulia> #link  https://storyboard.openstack.org/#!/story/2010008
15:29:24 <TheJulia> I've posted a WIP of this and did some local experimentation, just to keep it simple
15:29:59 <TheJulia> #link https://review.opendev.org/c/openstack/ironic-python-agent/+/839095
15:30:20 <TheJulia> basically, just changes the governor and tries to invoke intel's internal pstate selector if present
15:30:47 <TheJulia> Please let me know if there are any thoughts or concerns, otherwise I'll proceed as I've started
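For illustration, a minimal sysfs-level sketch of what "changes the governor and tries to invoke intel's internal pstate selector" could look like (an assumption, not the content of the linked WIP patch): switch each cpufreq policy to the powersave governor and, where the intel_pstate driver exposes an energy/performance preference, bias it toward power.

    import glob
    import os

    def enable_power_savings():
        # Switch every cpufreq policy to the "powersave" governor.
        for policy in glob.glob('/sys/devices/system/cpu/cpufreq/policy*'):
            governor = os.path.join(policy, 'scaling_governor')
            if os.path.exists(governor):
                with open(governor, 'w') as f:
                    f.write('powersave')
            # Only present when intel_pstate runs in active mode with HWP;
            # "power" biases hardware pstate selection toward saving energy.
            epp = os.path.join(policy, 'energy_performance_preference')
            if os.path.exists(epp):
                with open(epp, 'w') as f:
                    f.write('power')

    if __name__ == '__main__':
        enable_power_savings()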
15:31:24 <iurygregory> this is related to the safeguards we talked about at the PTG, right?
15:31:30 <TheJulia> yes
15:32:01 <iurygregory> ok, in my mind it was only for cleaning, but the idea does make a lot of sense after reading what you wrote in the RFE
15:32:08 <rpittau> sorry, I need to drop, I'll check the backlog later o/
15:32:36 <iurygregory> bye rpittau =)
15:33:23 <TheJulia> I'm thinking upstream can carry a reasonable default, and folks like Arne can tune the setting down to match their environment's usage
15:33:35 <iurygregory> ++ yeah
15:34:02 <TheJulia> and actually, that reasonable default *could* just default on startup to number of conductors * threads too
15:34:20 <TheJulia> maybe that is not reasonable
15:34:22 <TheJulia> Anyway, just thoughts
15:34:56 <TheJulia> oh, we should add https://review.opendev.org/c/openstack/ironic/+/818299 to the list for reviews
15:35:20 <iurygregory> interesting idea (# conductors * threads)
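As a purely hypothetical sketch of the lightweight guard being discussed (the names and the "conductors * worker threads" default are illustrative, not the actual RFE implementation), the check could refuse further deletions once too many are already in flight:

    class ConcurrentDeleteLimitReached(Exception):
        pass

    def check_delete_allowed(nodes_in_deleting, num_conductors,
                             workers_per_conductor, max_deletes=None):
        # Cap in-flight deletions; the default scales with conductor capacity.
        cap = max_deletes or num_conductors * workers_per_conductor
        if nodes_in_deleting >= cap:
            raise ConcurrentDeleteLimitReached(
                '%d nodes are already being deleted (cap %d)'
                % (nodes_in_deleting, cap))

    # Example: with 3 conductors and 100 workers each, the 301st concurrent
    # deletion would be rejected unless an operator raises max_deletes.
    check_delete_allowed(nodes_in_deleting=299, num_conductors=3,
                         workers_per_conductor=100)  # allowed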
15:35:30 <TheJulia> In a similar vein of protections, I posted another WIP to get an idea out of my head
15:35:33 <TheJulia> https://review.opendev.org/c/openstack/ironic-python-agent/+/839084
15:35:42 <TheJulia> Which also kind of starts building groundwork that an intern can carry forward
15:36:16 <iurygregory> TheJulia, I just noticed you mention in the RFE *This spec proposes*
15:36:32 <iurygregory> in the end you think it requires a spec?
15:36:43 <TheJulia> oh, old habits maybe?
15:36:48 <iurygregory> maybe =)
15:37:12 <iurygregory> I just wanted to double check, because after reading I don't think it would require one...
15:37:43 <iurygregory> but maybe it's just me :D we need more feedback to make a final decision
15:38:05 <iurygregory> to me it does make sense, no objections
15:38:10 <TheJulia> I did go through and look for common shared disk/lun block filesystems
15:38:21 <TheJulia> and added the 3 to that patch that made sense
15:38:30 <dtantsur> I don't think we should add all filesystems
15:38:34 <TheJulia> maybe not for GPFS, but I found a shared block device example in IBM's docs
15:38:40 <dtantsur> only the magical ones that can wipe the cluster via iBFT or whatever
15:38:48 <TheJulia> yeah, and those are these filesystems
15:38:49 <dtantsur> I highly doubt GFS2 is one
15:39:13 <TheJulia> According to docs, it is, but I'm happy to remove it. It also doesn't use partitions or a table, which caused me to raise an eyebrow some
15:39:24 <TheJulia> s/table/mbr or gpt table/
15:39:34 <dtantsur> I mean... there is a legitimate case of wiping members of distributed file systems
15:39:41 <dtantsur> think, decommissioning of a storage node
15:39:42 <TheJulia> absolutely!
15:39:55 <dtantsur> I don't disagree that people should do a clean-up first, but it may be impossible e.g. if the OS has died
15:39:56 <TheJulia> and distributed filesystems in network distributed mode, absolutely
15:40:19 <TheJulia> agreed, which is why I've also thought of an easy node and global level knob
15:40:30 <TheJulia> "smash it all, I don't care" sort of button
15:40:34 <TheJulia> but maybe not a button
15:40:37 <TheJulia> TBD
15:40:40 <hjensas> GFS2 is for shared access to the same device. So I think it is something we want to protect?
15:40:53 <dtantsur> being a distributed fs is not enough
15:41:14 <TheJulia> but shared block access to the same device is kind of it; distributed as in Ceph doesn't use shared block devices
15:41:17 <dtantsur> we're talking about some $magic that will let the cluster notice (and get broken) even if the cluster software is no longer running
15:41:36 <TheJulia> indeed, and GFS2 is one of those
15:41:38 <dtantsur> so, some ring -1 level magic
15:41:45 <TheJulia> it is not magic actually
15:41:53 <TheJulia> it is range locking instead of whole device locking
15:41:58 <TheJulia> for IO at least
15:42:06 <dtantsur> range locking that survives reboot?
15:42:13 <TheJulia> Depends on the SAN!
15:42:24 * TheJulia has had to reboot a SAN controller due to it keeping a range lock in RAM
15:42:39 <TheJulia> With VMFS actually
15:42:44 <TheJulia> I couldn't vmotion the VMs off the dead hypervisor
15:42:52 <dtantsur> isn't GFS a software thing?
15:42:52 <TheJulia> because the block ranges were locked
15:42:58 <TheJulia> No, its a kernel module
15:43:12 <dtantsur> still, you're talking about a SAN
15:43:15 <TheJulia> GFS as in Google Filesystem... is unrelated and is software AIUI
15:43:39 <TheJulia> gfs2 is like any other filesystem, it just does range locking instead of whole device locking
15:44:09 <TheJulia> specifically Red Hat GFS2
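To make the cleaning safeguard idea concrete, a rough sketch of the kind of check being discussed (the filesystem set and the override flag are assumptions, not the WIP patch itself): inspect a device's filesystem signatures before erasing it and refuse if a shared-block cluster filesystem is found, unless explicitly overridden.

    import subprocess

    # Illustrative set only; the actual patch chooses its own list.
    SHARED_BLOCK_FILESYSTEMS = {'gfs2', 'vmfs', 'ocfs2'}

    def guard_shared_block_fs(device, allow_destructive=False):
        # Read filesystem signatures on the device and its partitions.
        out = subprocess.run(['lsblk', '-no', 'FSTYPE', device],
                             capture_output=True, text=True, check=True)
        found = {line.strip().lower() for line in out.stdout.splitlines()
                 if line.strip()}
        dangerous = found & SHARED_BLOCK_FILESYSTEMS
        if dangerous and not allow_destructive:
            raise RuntimeError(
                'Refusing to erase %s: shared-block filesystem(s) detected: %s'
                % (device, ', '.join(sorted(dangerous))))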
15:44:38 <TheJulia> Anyway, we can move on
15:44:53 <TheJulia> got our v6 devstack scripting past Neutron \o/
15:45:07 <iurygregory> yay
15:45:15 <iurygregory> ok, next topic o/
15:45:28 <iurygregory> #topic Who is going to run the next meeting?
15:45:37 <iurygregory> do we have any volunteers?
15:46:29 <iurygregory> I will run the next meeting, thanks everyone!
15:46:48 <iurygregory> #endmeeting