Monday, 2018-01-15

14:00 <efried> #startmeeting nova_scheduler
14:00 <openstack> Meeting started Mon Jan 15 14:00:01 2018 UTC and is due to finish in 60 minutes.  The chair is efried. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 *** openstack changes topic to " (Meeting topic: nova_scheduler)"
14:00 <openstack> The meeting name has been set to 'nova_scheduler'
14:00 <jaypipes> o/
14:00 <takashin> o/
14:00 <ttsiouts> o/
14:00 <cdent> o/
14:00 <efried> Let's do this.
14:01 <efried> #topic Reviews: Nested Resource Providers
14:01 *** openstack changes topic to "Reviews: Nested Resource Providers (Meeting topic: nova_scheduler)"
14:01 <efried> ComputeDriver.update_provider_tree() series starting with: https://review.openstack.org/#/c/532563/ (efried)
14:01 <jaypipes> efried: I will get back to those reviews today
14:02 <efried> jaypipes Thanks.  Actually the most urgent thing from you is what's queued up for open discussion.
14:02 <efried> For the above, I need to tweak the current bottom of the series per cdent comments.
14:03 <efried> It should be noted that I recently reordered that guy to get set_traits_for_provider done earlier, because mgoddard needs that for the ironic-traits bp.
14:03 <efried> Nested affordance in GET /allocation_candidates series starting with: https://review.openstack.org/#/c/531443/1 (jaypipes)
14:03 <efried> Poor jaypipes has been laid up for the past week.
14:04 <efried> The above needs to be rebased with some refactoring in light of test helpers which (finally!) merged.
14:04 <jaypipes> yeah, sorry about that :(
14:04 <jaypipes> will do today.
14:04 <jaypipes> I noticed from the small amount of email checking I did last week that the gate looked fairly horrible...
14:04 <efried> Granular resource requests (efried): https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/granular-resource-requests (waiting for ^^)
14:05 <efried> Anything on NRP?
14:05 <efried> #topic Reviews: Alternate Hosts
14:05 *** openstack changes topic to "Reviews: Alternate Hosts (Meeting topic: nova_scheduler)"
14:06 <efried> We're down to Just One More Patch mode here.  https://review.openstack.org/526436 looks like it needs a pep8 fix.
14:06 <efried> Anything on alt hosts?
14:07 <efried> #topic Placement bugs
14:07 *** openstack changes topic to "Placement bugs (Meeting topic: nova_scheduler)"
14:07 <efried> https://bugs.launchpad.net/nova/+bugs?field.tag=placement
14:07 <efried> We should plan on doing a leetle scrubbyscrub shortly after ff
14:07 <cdent> I made a new bug about unnecessary imports
14:07 <cdent> #link imports: https://bugs.launchpad.net/nova/+bug/1743120
14:07 <openstack> Launchpad bug 1743120 in OpenStack Compute (nova) "placement inadvertently imports many python modules it does not need" [Low,Triaged] - Assigned to Chris Dent (cdent)
14:08 <cdent> and I'm working on some changes to fix it
14:08 <efried> neat
14:08 <efried> Anything else on bugs, placement or otherwise-scheduler?
14:09 <jaypipes> nothing from me, post flu-ness.
14:09 <efried> #topic Open discussion
14:09 *** openstack changes topic to "Open discussion (Meeting topic: nova_scheduler)"
14:09 <efried> One item on the agenda
14:09 <efried> ProviderTree accessors (see https://review.openstack.org/#/c/533244/)
14:09 <efried> Mainly needing jaypipes to weigh in here before I get too much further.
14:10 <efried> Assuming you're not yet caught up on emails and whatnot, here's the executive summary:
14:10 <efried> In order to have a way to compare the ProviderTree I get back from ComputeDriver.update_provider_tree with the one cached in report client, I need to be able to get to the actual fields in the _Provider
14:11 <efried> The above patch is one possible way of doing it, that still protects the privacy/integrity of _Provider.
14:11 <efried> Another way would be individual getters at the ProviderTree level for each attribute of _Provider.  So like get_traits_for(name_or_uuid) etc.
14:12 <efried> My plan is to start implementing that method that diffs and flushes ProviderTree changes today.
14:12 <efried> So whereas I don't need +2s or anything, I'd at least like general buy-in on the approach.
14:13 * efried hands the mic to jaypipes
14:13 <jaypipes> efried: I was thinking more along the lines of a ProviderTree.delta(ProviderTree) method which would return a list of dicts of changed provider information
14:14 <jaypipes> efried: and that ProviderTree.delta() method would only be used/called by the reportclient once the update_provider_tree() virt driver method returns True (indicating *something* was modified in the tree)
14:14 <efried> jaypipes Even that method would be ProviderTree needing to get at the individual fields of _Provider.
14:15 <jaypipes> efried: of the other ProviderTree's _Provider(s), yes.
14:15 <efried> I.e. what we're talking about is at a lower level than ProviderTree.delta (or whatever)
14:16 <jaypipes> efried: understood.
14:16 <efried> So the proposed patch is one way of doing that: it basically provides a copyout of a podo version of _Provider
14:16 <jaypipes> efried: so ProviderTree.delta() would need to call this ProviderTree.snapshot() method and operate against those snapshots of the other tree's _Provider objects
14:17 <efried> jaypipes Just so.
14:17 <jaypipes> efried: yeah, I get it.
14:17 <cdent> (there ya go efried, that's some of the additional context for your commit message...)
14:18 <efried> So the question is: you cool with the snapshot idea or do we go for a more bean-like set of getters?
14:18 <efried> cdent ack
14:18 <jaypipes> efried: you could also do a ProviderTree.clone() method that would return a new identical ProviderTree that the delta() method would then fully own (and not have to worry about potentially other threads writing to.)
14:19 <jaypipes> efried: definitely like the snapshot approach over a bean-like getter approach.
14:19 <jaypipes> efried: I also think a clone() approach would be more Pythonic, though -- in the mould of copy.deepcopy()-like semantics against dicts
14:19 <efried> jaypipes Yes; I wasn't as much a fan of simple clone (deepcopy) because it still breaks the (albeit nominal) privacy boundary.
14:19 <jaypipes> what does edleafe think? or should I read the reviews? :)
14:20 <efried> jaypipes Well, edleafe was involved in the brainstorm on Friday.  Not much beyond that in the review.
14:20 <jaypipes> efried: well, I see his reviews on PS1
14:21 <jaypipes> efried: in short, I think the snapshot approach is perfectly fine.
14:21 <jaypipes> efried: it does one job and accomplishes that job.
14:21 <efried> jaypipes Great, I will proceed.
14:21 <jaypipes> efried: let's move forward, I think
14:21 <efried> That's everything on the agenda.  Anyone have anything else to discuss?
14:22 <jaypipes> cdent: what were your thoughts on the snapshot/clone thing?
14:23 <cdent> I'm okay with it now that I've read the additional context here. It wasn't clear what the calling context was going to be, but now that it is more clear, I think a read-only snapshot is a reasonable way to go, if that level of protection is important (I'm still not _entirely_ clear why it is)
14:24 <efried> Realistically, the ProviderTree given to and returned by ComputeDriver.update_provider_tree doesn't need all this careful locking at all, ever.
14:24 <efried> But IMO having the code take that for granted and violate the privacy markings of _Provider is not a good approach.
14:25 <jaypipes> efried: never say never..
14:25 <efried> Exactly.  And those protections *are* necessary for the ProviderTree that report client uses as a cache.
14:25 <jaypipes> ya
14:26 <efried> So we need *something* there.  And this is about as lightweight/unintrusive as I can think of.
14:26 <cdent> if we need something, then yeah, seems okay to me, it is pretty clean and comprehensible
14:26 <efried> Good deal.  Thanks y'all.
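The snapshot approach agreed on above can be sketched roughly as follows. This is a minimal illustration only: the class layout, the `snapshot()` signature, and the `delta()` helper are assumptions for the sake of example, not the actual nova code under review.

```python
# Sketch of the "copyout of a podo version of _Provider" idea: the tree
# guards its private _Provider nodes with a lock, and snapshot() hands
# out immutable plain-data copies that a delta/flush step can compare
# without touching tree internals.
import copy
import threading
from collections import namedtuple

# Plain-old-data view of a provider; immutable, so handing it out
# preserves the privacy/integrity of the tree's internal state.
ProviderSnapshot = namedtuple(
    'ProviderSnapshot', ['uuid', 'name', 'inventory', 'traits'])


class _Provider(object):
    """Private node type; callers never see these directly."""
    def __init__(self, uuid, name):
        self.uuid = uuid
        self.name = name
        self.inventory = {}
        self.traits = set()


class ProviderTree(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._providers = {}  # uuid -> _Provider

    def new_root(self, name, uuid):
        with self._lock:
            self._providers[uuid] = _Provider(uuid, name)

    def snapshot(self):
        """Read-only copies of every provider, taken under the lock."""
        with self._lock:
            return {
                uuid: ProviderSnapshot(
                    uuid=p.uuid,
                    name=p.name,
                    inventory=copy.deepcopy(p.inventory),
                    traits=frozenset(p.traits))
                for uuid, p in self._providers.items()
            }


def delta(old_snap, new_snap):
    """Uuids whose data changed or appeared between two snapshots."""
    return [uuid for uuid, prov in new_snap.items()
            if old_snap.get(uuid) != prov]
```

A `ProviderTree.delta()` along the lines jaypipes describes could then diff two such snapshots without holding either tree's lock while comparing.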
14:27 <efried> Anything else?
14:27 <jaypipes> 42
14:27 <cdent> WRONG
14:27 <jaypipes> heh
14:27 <efried> Okay, thanks everyone.
14:27 <efried> #endmeeting
14:27 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
14:27 <openstack> Meeting ended Mon Jan 15 14:27:40 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:27 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-01-15-14.00.html
14:27 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-01-15-14.00.txt
14:27 <openstack> Log:            http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-01-15-14.00.log.html
14:27 <cdent> thanks efried
14:27 * cdent goes home
22:01 <clarkb> zuul meeting time?
22:01 <dmsimard> ohai
22:02 <corvus> anyone else around?
22:04 <fungi> moi
22:04 <clarkb> I'm going to have to cut my meeting short to run some errands. But have some last minute nodepool stuff (zuul too, I guess)
22:04 <corvus> #startmeeting zuul
22:04 <openstack> Meeting started Mon Jan 15 22:04:49 2018 UTC and is due to finish in 60 minutes.  The chair is corvus. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:04 *** openstack changes topic to " (Meeting topic: zuul)"
22:04 <openstack> The meeting name has been set to 'zuul'
22:05 <corvus> #topic Roadmap
22:05 *** openstack changes topic to "Roadmap (Meeting topic: zuul)"
22:05 <corvus> I'm going to ping folks individually this week to check up on status
22:05 <corvus> but does anyone here working on a 3.0 release blocker have an issue we should talk about now?
22:05 <corvus> (i know a lot of folks are afk today, but i thought i'd ask)
22:06 <corvus> #topic RAM governor for the executors
22:06 *** openstack changes topic to "RAM governor for the executors (Meeting topic: zuul)"
22:07 <corvus> dmsimard: i think this is your topic?
22:07 <dmsimard> oh, from last week yes
22:08 <corvus> #link https://review.openstack.org/508960 ram governor
22:08 <dmsimard> We're generally memory constrained right now -- we're often finding zuul executors in swap territory and at that point it becomes a vicious circle quickly (can't clear jobs fast enough, so you get more jobs, etc.) and there's several OOM killers going around
22:09 <dmsimard> So we want to land and enable the RAM governor ASAP but there's also another "governor" I'd like to talk about -- it'd be "max concurrent builds"
22:09 <corvus> dmsimard: when executors go above a certain load average they shouldn't accept new jobs
22:09 <clarkb> on the scheduler side the memory consumer is the size of the zuul config model. Do we know what is consuming the memory on executors? is it ansible?
22:09 <dmsimard> Regardless of our current governors (even pretending RAM had landed), there's nothing preventing a single executor from picking up 200 builds by itself
22:09 <corvus> dmsimard: when have you seen executors accept new jobs because they can't clear them fast enough?
22:10 <corvus> dmsimard: yes there is -- we would have two things preventing it -- a load governor and a ram governor
22:10 <corvus> right now we have one
22:10 <clarkb> (just want to make sure that governing job execution is expected to reduce memory use and it isn't the finger daemon that is consuming all the memory, for example)
22:10 <dmsimard> corvus: not from a cold boot -- when all executors crashed a week ago, ze01 started first and picked up all the backlogged jobs and (eventually) loaded up to 150
22:11 <fungi> i gather the issue with the system load governor not kicking in fast enough is that system load average is a trailing indicator and so can in certain thundering herd scenarios pick up a glut of jobs before the system load spikes high enough to stop it
22:11 <corvus> clarkb: i *think* it's ansible eating the memory, but it's not leaking, it just uses a lot.  at least, that's my recollection.  it would be good to confirm that.
22:11 <corvus> dmsimard: yes, that's true.  i think after we have a ram governor, we should look into tuning the rate at which jobs are accepted.
22:12 <fungi> that sounds sane
22:12 <dmsimard> Generally speaking, there are only so many SSH connections/ansible playbooks we can have running at any given time
22:12 <dmsimard> Wouldn't it be reasonable to say an executor can accept no more than 100 concurrent builds, for example?
22:13 <corvus> dmsimard: i'd like to save 'max jobs per server' as a last resort -- as in, i'd like us to avoid ever implementing it if possible, unless we completely fail at everything else.  the reason is that not all jobs are equal in resource usage.  i think it would be best if the executors could regulate themselves toward maximum resource usage without going overboard.
22:13 <fungi> dmsimard: depending on the resources available and performance of the server, that number may vary quite a lot though, right?
22:13 <fungi> what if you have two executors, one of which is ~half as powerful as the other... having zuul scale itself to them based on available resources is nice
22:14 <dmsimard> fungi: If something like that lands, it would be something configurable (with a sane default) imo
22:14 <dmsimard> the way I see it, it's more of a safety net
funginot enough of a safety net unless you get into fiddling with per-server knobs rather than having a sane resource scheduler which can guess the right values for you22:15
corvusi don't want admins to have to tune these options.  there is no sensible global default for max jobs per server, and would need to be always individually tuned.  further, that ignores that not all jobs are the same, so it's problematic.22:15
corvusa job with 10 nodes that runs for 3 hours is different than a job with zero nodes that runs for 30 seconds.  both are very likely in the same system.22:15
dmsimardright22:16
pabelangero/ sorry I am late22:16
dmsimardpabelanger: ohai I was actually about to ask, do we think we can land https://review.openstack.org/#/c/508960/ soon ?22:16
corvusi agree that we need to prevent the hysterisis from happening -- i think the road there goes through the ram governor first, then tune the acceptance rate (there should already be a small rolloff, but maybe we need to adjust that a bit) so that the trailing indicators have more time to catch up.  finally, we may want to tune our heuristics a bit to give the trailing indicators more headroom.22:17
pabelangerdmsimard: i think we want to add tests first, I'm hoping to finish that up in the next day or so22:17
dmsimardcorvus: fwiw I agree that the max build idea is not a definitive answer and instead we might want to do like you mentioned and revisit/tune how executors pick up jobs in the first place22:18
corvuspabelanger: ++ we should be able to use mock to return some ram data22:18
pabelangercorvus: wfm22:18
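The mock-based test corvus suggests could look something like this minimal sketch. `RamGovernor`, `_used_ram_percent`, and `max_ram_percent` are invented names for illustration, not Zuul's actual API; the point is only that patching the memory reading makes a governor testable without real RAM pressure:

```python
# Illustrative sketch: use unittest.mock to stub the executor's memory
# reading so a RAM governor can be exercised deterministically.
from unittest import mock


class RamGovernor:
    """Toy governor: accept new jobs only while RAM usage is below a cap."""

    def __init__(self, max_ram_percent=80.0):
        self.max_ram_percent = max_ram_percent

    def _used_ram_percent(self):
        # A real executor would read /proc/meminfo or similar here.
        raise NotImplementedError

    def accepting(self):
        return self._used_ram_percent() < self.max_ram_percent


gov = RamGovernor(max_ram_percent=80.0)
with mock.patch.object(RamGovernor, '_used_ram_percent', return_value=95.0):
    print(gov.accepting())  # False: too much RAM in use, stop accepting jobs
with mock.patch.object(RamGovernor, '_used_ram_percent', return_value=40.0):
    print(gov.accepting())  # True
```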
*** lbragstad has joined #openstack-meeting-alt22:19
corvusdmsimard: it looks like right now we delay job acceptance a small amount but only with the goal of spreading the load across executors, so the response time is still pretty quick22:19
pabelangerspeaking of jobs, one thing zuulv2.5 did, and I don't believe zuulv3 does, is we had some sort of back-off method so a single executor wouldn't accept a bunch of jobs at once. That seemed to work well in zuulv2.5 with our zuul-launchers22:20
corvusand it only looks at the number of jobs currently running22:20
corvuswhat we may want to do is adjust that to *also* factor in how recently a job was accepted22:20
*** matrohon has quit IRC22:21
corvusor just increase the delay that's already there and only use jobs running22:21
*** rcernin has joined #openstack-meeting-alt22:21
corvusit's currently: delay = (workers ** 2) / 1000.022:21
corvus'workers' means jobs running in this context22:21
fungiwould that also explain why when all the executors crashed at once, the first one to get started went nuts on the backlog?22:21
fungisince there wasn't even the rotation between executors to save it22:22
clarkbya jobs running seems like maybe a better option than workers22:22
corvusthat's only going to slow us down 6.4 seconds with 80 jobs, so that's not enough time for load/ram to start to catch up22:22
corvusclarkb: no sorry, the variable is called "workers" but it means "number of jobs that this executor is running"22:23
corvusit's the internal executor gearman worker count22:23
fungifair. and the load governor is based on one-minute load average, so you have a lot of time to ramp up to untenable levels of activity22:23
clarkbgotcha22:23
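The formula corvus quotes can be checked with a few lines, together with an illustrative variant of the "also factor in how recently a job was accepted" idea; the variant and its `min_gap` parameter are hypothetical, not Zuul code:

```python
# Worked example of the acceptance-delay formula quoted above:
# delay = (workers ** 2) / 1000.0, where "workers" is the number of
# jobs this executor is already running.
def acceptance_delay(jobs_running):
    return (jobs_running ** 2) / 1000.0


# Hypothetical variant: never accept faster than one job per min_gap
# seconds, giving trailing indicators (load, RAM) time to catch up.
def acceptance_delay_with_recency(jobs_running, seconds_since_last_accept,
                                  min_gap=10.0):
    return max(acceptance_delay(jobs_running),
               min_gap - seconds_since_last_accept)


print(acceptance_delay(80))  # 6.4 -- the ~6.4 seconds at 80 jobs mentioned above
```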
*** slaweq has quit IRC22:24
fungii have similar worries about the ram governor, if the amount of ram ansible is going to use grows over time (we may take on a glut of jobs, and not finish old ones fast enough to make way for the memory growth of the new spike)22:24
pabelangerWe also don't start running ansible right away now; we first merge code on the local executor. Perhaps that isn't load / memory heavy?22:24
*** slaweq has joined #openstack-meeting-alt22:24
corvuspabelanger: it's not too heavy, but it is a delay worth considering.  we could even let that be a natural source of delay -- like don't acquire a new job until we've completed merging the most recent job.22:25
dmsimardpabelanger: I'm wondering if swap should be taken into account in the RAM governor (and how)22:25
corvusthat would probably be fine fully loaded, but it would make for a very slow start.22:25
pabelangercorvus: yah, that might be something to try. I like that22:26
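The natural-throttle idea corvus floats -- don't acquire a new job until the previous job's merge completes -- could be sketched like this; the class and method names are invented for illustration:

```python
# Hypothetical sketch: the merge step itself becomes the throttle on
# job acceptance.
import threading


class MergeGatedAcceptor:
    def __init__(self):
        self._merge_done = threading.Event()
        self._merge_done.set()  # nothing in flight at startup

    def may_accept(self):
        return self._merge_done.is_set()

    def job_accepted(self):
        # Block further acceptance until the merge for this job finishes.
        self._merge_done.clear()

    def merge_complete(self):
        self._merge_done.set()
```

As corvus notes, fully loaded this would be roughly self-pacing, but it makes cold start slow, since jobs can only be accepted one merge at a time.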
fungii think once we've started paging zuul activity out to swap space, it's already doomed22:26
dmsimardfungi: that's for the scheduler, but executors will keep running even when swapping22:26
dmsimardideally the ram governor prevents us from reaching swap territory22:26
fungihow well do they keep running?22:26
dmsimardnot well -- when the executors start swapping, execution becomes largely I/O bound and there's a higher percentage of I/O wait22:27
fungiif "keep running" means jobs start timing out because it takes too long for ansible to start the next task/playbook then that's basically it22:27
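A RAM governor of the kind being discussed might compute "percent used" from /proc/meminfo on Linux, ignoring swap on the theory (per fungi) that a swapping executor is already in trouble. The function name is illustrative; this is not Zuul's implementation:

```python
# Minimal sketch: compute used-RAM percentage from /proc/meminfo text,
# using MemAvailable rather than MemFree so cache/buffers don't count
# as "used".
def ram_used_percent(meminfo_text):
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(':')
        if rest:
            fields[key] = int(rest.split()[0])  # values are in kB
    total = fields['MemTotal']
    return 100.0 * (total - fields['MemAvailable']) / total


# On a real executor one would pass open('/proc/meminfo').read().
sample = "MemTotal:       8000000 kB\nMemAvailable:   2000000 kB\n"
print(ram_used_percent(sample))  # 75.0
```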
corvushttp://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=64003&rra_id=all22:28
fungii didn't mean doomed to need a restart, i meant doomed to introduce otherwise avoidable job failures22:28
dmsimardfungi: I noticed the i/o wait and swap usage when I was trying to understand the SSH connection issues, there might be a correlation but I don't know.22:28
*** slaweq has quit IRC22:29
corvusinteresting -- they're pretty active swapping but keep the used memory close to 50%22:29
corvushttp://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=64004&rra_id=all22:29
corvushttp://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=64005&rra_id=all22:29
fungii wonder if buffer space there is the ansible stdout buffering stuff22:30
*** armaan has quit IRC22:30
dmsimardfungi: actually I asked #ansible-devel about that and the buffer is on the target node, not the control node22:30
pabelangerspeaking of SSH connection issues, we could use SSH retries from ansible: https://review.openstack.org/512130/ to help add some fail protection to jobs22:31
dmsimardso it wouldn't explain the ram usage22:31
corvusdmsimard: did they indicate what happens when the target node sends the data back to the control node?22:31
fungiregardless, system and iowait cpu usage there don't look super healthy, leading me to wonder whether we still have too few executors at peak22:31
pabelangermaybe even expose it to be configurable somehow by jobs22:31
fungiand 5-minute load average spiking up over 40 on ze01 just a few hours ago22:31
dmsimardpabelanger: there's some improvements we can do around SSH, yes22:31
fungiwhere it topped out around 4gb of swap in use22:32
dmsimardfungi: that's likely load due to i/o wait22:32
dmsimard(heavy swapping)22:32
fungidmsimard: exactly22:32
pabelangerI also think OSIC suggested some things we could also tune in ansible for network-related issues. Need to see if I can find that etherpad22:32
pabelangeror was it the OSA team22:32
dmsimardfungi: vm.swappiness is at 0 on ze01 too..22:33
dmsimardI read that as "never swap ever" so I don't know what's going on22:33
*** felipemonteiro__ has quit IRC22:33
dmsimardOh, actually it doesn't quite mean that22:33
dmsimard"Swap is disabled. In earlier versions, this meant that the kernel would swap only to avoid an out of memory condition, when free memory will be below vm.min_free_kbytes limit, but in later versions this is achieved by setting to 1."22:34
fungidmsimard: no, it just means don't preemptively swap process memory out to make room for additional cache memory22:34
dmsimardand our min_free_kbytes is vm.min_free_kbytes = 1136122:34
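For reference, the two knobs dmsimard quotes (vm.swappiness and vm.min_free_kbytes) can be read programmatically from procfs; this tiny helper is a hypothetical sketch, with the `root` parameter added only so it can be exercised against a fake tree:

```python
# Hypothetical helper to read sysctl values such as vm.swappiness from
# their standard procfs locations under /proc/sys.
import os


def read_sysctl(name, root='/proc/sys'):
    path = os.path.join(root, *name.split('.'))
    with open(path) as f:
        return int(f.read().strip())


# e.g. read_sysctl('vm.swappiness') -> 0 on a host tuned like ze01
```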
fungithese are fairly typical configurations for "cloud" virtual machines22:35
corvusfungi: our cpu usage from last week is significantly different from november: http://cacti.openstack.org/cacti/graph.php?action=zoom&local_graph_id=64000&rra_id=4&view_type=&graph_start=1483002375&graph_end=151605555922:35
clarkbok I've got to head out now. The two things I wanted to bring up were merging feature/zuulv3 to master. I tested this nodepool and wrote an infra list email about it. The other thing is my two nodepool changes to address cloud failures. 533771 and its parent. Left test split out as I expect it may need cleanup but should be good enough to show parent works22:35
corvuswe may be seeing the hit from meltdown and may indeed need to add more executors22:35
fungiyep, it is a bit worse22:36
corvusclarkb: thanks, yeah, i think we can merge the branches soon, maybe let's set a date for thursday and send out a followup email?22:37
fungimeltdown mitigation performance hit seems as good a culprit as any22:37
dmsimardRe: adding more executors -- do we think we have the right size right now ? In terms of flavors, disk size, etc.22:38
*** rmascena has quit IRC22:38
*** markstur has quit IRC22:38
pabelangerlooking at stats from zuul-launcher is a little interesting too: http://cacti.openstack.org/cacti/graph.php?action=zoom&local_graph_id=4683&rra_id=4&view_type=&graph_start=1483002628&graph_end=151605581222:38
pabelangerwe do seem to be using more system resources with executors22:38
corvuspabelanger: that had a *very* different mode of operation22:38
dmsimardpabelanger: executors also run zuul-merger which is not negligible22:39
pabelangeryup22:39
*** slaweq has joined #openstack-meeting-alt22:39
corvusdmsimard: i'd argue it is negligible22:39
corvushttp://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=1519&rra_id=all22:40
corvusthat's on a 2G server22:40
dmsimardbut there's 8 zm nodes :P22:40
corvusyes, for parallelization22:41
corvusi'm just saying that the internal merger is not what's eating things up on the executors.  we're just doing a lot more with ansible than we were in zuulv2.522:41
* dmsimard nods22:41
corvus(among other things, in zuul v2.5, we did *not* ship the entire console output back to the controlling ansible process)22:42
corvusanyway, to conclude: let's say the plan for now is: add ram governor, then slow job acceptance rate.  sound good?22:43
fungiwfm22:44
pabelangerI think we also used 2.1.x vs 2.3.x, so possible ansible is now just using more resources22:44
pabelangercorvus: ++22:44
*** slaweq has quit IRC22:44
corvus#agreed to reduce hysteresis and excess resource usage: add ram governor, then slow job acceptance rate22:44
corvus#topic merging feature branches22:44
*** openstack changes topic to "merging feature branches (Meeting topic: zuul)"22:44
corvuswe should be all prepared as clarkb said -- but we still may want to actually schedule this so no one is surprised22:45
corvushow about we say we'll do it on thursday?22:45
fungithe puppet changes to make that hitless for people using puppet-openstackci are merged at this point, right?22:46
corvusfungi: yep22:46
pabelangerthursday is fine by me22:46
corvus#link puppet-openstackci change https://review.openstack.org/52395122:47
fungii guess as long as the people using that module update it with some frequency they're protected. if they don't, then they're using a non-continuously-deployed puppet module to continuously deploy a service... so a learning experience for them?22:47
pabelangerwe'll have to make some config changes to nodepool-builder, since it is using old syntax. I can propose some patches for that22:47
corvusfungi: yep.  and i mean, it's not going to eat their data, they just need to checkout a different version and reinstall22:47
pabelangermaybe also upgrade to python3 at the same time22:47
*** jcoufal has quit IRC22:48
corvuspabelanger: heh, well, if we're checking out master on nodepool builders, then i think we'll automatically get switched to v3.  :)22:49
corvuspabelanger: do you want to deploy new builders running from the feature/zuulv3 branch before we merge?22:50
pabelangercorvus: I was thinking maybe we first switch nodepool builders to feature/zuulv3 branch and get config file changes in place22:50
pabelangeryah22:50
corvuspabelanger: think that's reasonable to do before thursday?22:50
corvushopefully there'll be more folks around tomorrow to help too22:50
pabelangerI believe so, I can start work on it ASAP22:50
corvuscool22:51
corvus#agreed schedule feature branch merge for thursday jan 1822:51
corvus#topic open discussion22:51
*** openstack changes topic to "open discussion (Meeting topic: zuul)"22:51
corvusanything else?22:51
fungioh, as far as release-related needs, i wouldn't mind if someone took a look at my change to link the zuul-base-jobs documentation from the user's guide22:52
fungi#link https://review.openstack.org/531912 Link to zuul-base-jobs docs from User's Guide22:52
fungiit's small, and could probably stand to be at least a little less small22:53
*** markvoelker has joined #openstack-meeting-alt22:53
corvusfungi: ++22:53
corvusif that's it, i'll go ahead and end22:55
*** chyka_ has quit IRC22:55
corvusthanks everyone!22:55
corvus#endmeeting22:55
fungithanks corvus!22:55
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"22:55
openstackMeeting ended Mon Jan 15 22:55:45 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:55
openstackMinutes:        http://eavesdrop.openstack.org/meetings/zuul/2018/zuul.2018-01-15-22.04.html22:55
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/zuul/2018/zuul.2018-01-15-22.04.txt22:55
openstackLog:            http://eavesdrop.openstack.org/meetings/zuul/2018/zuul.2018-01-15-22.04.log.html22:55
*** chyka has joined #openstack-meeting-alt22:56
*** markvoelker has quit IRC22:58
*** chhavi__ has joined #openstack-meeting-alt23:00
*** chhagarw has joined #openstack-meeting-alt23:00
*** lbragstad has quit IRC23:00
*** armaan has joined #openstack-meeting-alt23:01
*** hieulq has quit IRC23:02
*** hieulq has joined #openstack-meeting-alt23:03
*** chhavi__ has quit IRC23:04
*** chhagarw has quit IRC23:04
*** julim has quit IRC23:09
*** julim has joined #openstack-meeting-alt23:11
*** kumarmn_ has quit IRC23:17
*** kumarmn has joined #openstack-meeting-alt23:18
*** kumarmn has quit IRC23:22
*** Leo_m has quit IRC23:26
*** hongbin has quit IRC23:28
*** kumarmn has joined #openstack-meeting-alt23:35
*** kumarmn has quit IRC23:40
*** chyka_ has joined #openstack-meeting-alt23:44
*** chyka has quit IRC23:49
*** lbragstad has joined #openstack-meeting-alt23:53
*** erlon has quit IRC23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!