04:00:46 #startmeeting masakari
04:00:47 Meeting started Tue Jan 31 04:00:46 2017 UTC and is due to finish in 60 minutes. The chair is samP. Information about MeetBot at http://wiki.debian.org/MeetBot.
04:00:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
04:00:51 The meeting name has been set to 'masakari'
04:00:55 o/
04:01:03 takashi: hi
04:01:10 o/
04:01:18 since there are no critical bugs, let's move to the discussion..
04:01:34 #topic Discussion points
04:01:57 There is one issue reported in masakari
04:01:57 1st one, Who will set ha_enabled?
04:02:02 #link https://bugs.launchpad.net/masakari/+bug/1659495
04:02:02 Launchpad bug 1659495 in masakari "taskflow version is not compatible with latest engine code" [Undecided,New]
04:02:15 tpatil: sorry,
04:02:47 Dinesh will fix this issue
04:02:59 yes
04:03:20 This is a taskflow version issue, right?
04:03:27 samP: correct
04:03:38 Can we just bump up the required taskflow version?
04:04:11 yes, that's what we will need to do to fix this issue
04:04:32 I'm just wondering whether this issue happens because the requirements.txt is not synced with global-requirements.txt
04:04:35 what is the global req version for taskflow?
04:04:42 our requirements.txt in masakari
04:05:10 takashi: makes sense to me, masakari requirements are not getting bumped by the bot jobs
04:05:11 taskflow>=2.7.0
04:05:17 yes, but why was it not caught by jenkins? IMO we should have a test case to check fail formatters
04:05:37 takashi: thanks, the upper constraint is set to taskflow===2.9.0
04:05:58 fyi: https://github.com/openstack/requirements/blob/master/global-requirements.txt#L280
04:06:22 Maybe we should manually sync requirements.txt before we release Ocata...
04:06:25 at least
04:06:28 and at worst
04:06:36 takashi: agree
04:07:55 after the Ocata release, we may use the bot to do this.
04:08:13 samP: ok
04:08:33 Dinesh_Bhor: Please bump the taskflow version to 2.7.0 in requirements.txt and upload the patch for review
04:08:49 tpatil: yes
04:09:15 Dinesh_Bhor: Thanks
04:09:21 Dinesh_Bhor: tpatil thanks
04:09:51 #action Dinesh_Bhor Fix https://bugs.launchpad.net/masakari/+bug/1659495
04:09:51 Launchpad bug 1659495 in masakari "taskflow version is not compatible with latest engine code" [Undecided,New]
04:10:32 Ok then, any other bugs to discuss?
04:12:00 samP: No
04:12:15 tpatil: thanks
04:12:29 let's move to the discussion
04:12:36 yes :-)
04:12:56 1st one, Who will set the ha_enabled tag?
04:13:14 I have added that to the agenda
04:13:22 abhishekk: thanks
04:13:43 In the previous masakari, only the operator set this tag on each VM
04:14:26 so how do we restrict a normal user from setting this flag?
04:14:54 in glance there is property protection which we can set using policy.json
04:14:56 samP: HA_Enabled will be set as a tag or as metadata?
04:15:15 tpatil: sorry, in metadata
04:15:29 samP: Ok
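For reference, a minimal sketch of the operator-side tagging discussed above, using python-novaclient; the endpoint, credentials, and the server name 'vm1' are placeholders, and the exact metadata key masakari expects should be confirmed against its docs.

```python
# Illustrative sketch only: an operator tagging an instance so it is treated as
# HA-enabled. The endpoint, credentials, server name, and metadata key are assumptions.
from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',        # placeholder endpoint
    username='admin', password='secret',         # operator credentials (placeholders)
    project_name='admin',
    user_domain_id='default', project_domain_id='default')
nova = client.Client('2.1', session=session.Session(auth=auth))

server = nova.servers.find(name='vm1')            # placeholder server name
# Set the HA_Enabled key in the instance metadata, as discussed in the meeting.
nova.servers.set_meta(server, {'HA_Enabled': 'True'})
```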
04:16:26 abhishekk: I have to check, but I think we did not expose the metadata API to end users.
04:16:42 abhishekk: so, an end user cannot set metadata on a server
04:17:13 ok, I need to check on that
04:17:42 It can be set at boot time as well, and a normal user can do that
04:17:49 anyway, in a normal openstack env, an end user can add the metadata
04:18:04 Dinesh_Bhor: correct
04:18:46 I am not sure nova policy supports this kind of restriction on metadata
04:19:51 as abhishekk said, I remember we set a similar setting for glance
04:20:05 https://github.com/openstack/nova/blob/master/nova/policies/server_metadata.py#L31
04:21:03 abhishekk: thanks, we can control it.
04:21:14 but IMO these policies are for the metadata API
04:21:49 abhishekk: seems you are right.
04:22:13 we can set or remove metadata using meta set/delete, need to check whether this will work for boot as well
04:23:36 abhishekk: Do you mean, set metadata at boot?
04:23:58 while using the boot command we can pass --metadata key=value
04:24:14 abhishekk: yep.. got it
04:26:01 so, is this an implementation-related issue or an operation-related issue?
04:26:09 nope, as Dinesh says, a normal user can set this while booting the instance
04:26:30 IMO an operation-related issue
04:26:59 samP: since you haven't exposed the metadata API to normal users, there will be no issue, but for other operators there is an issue
04:27:10 abhishekk: thanks
04:27:21 tpatil: correct
04:27:50 samP: Maybe we can add support in Nova to restrict adding certain metadata keys to an instance using policy
04:28:02 IMO, we cannot fix this from the masakari side, we need to do some work in nova
04:28:05 tpatil: yes
04:28:19 tpatil: makes sense
04:29:04 tpatil: we did a somewhat similar thing for "license metadata" in nova..
04:29:38 samP: in glance
04:29:50 I think abhishekk mentioned part of it, in glance
04:29:54 abhishekk: yes
04:30:59 samP: similar to glance, we can add this support in Nova as abhishekk has pointed out
04:31:21 If we propose this to nova, it will be in Pike (at best), right?
04:31:30 samP: correct
04:31:46 samP: yes
04:33:38 samP: yes. IMO we should propose a spec as soon as the nova spec repo for Pike is opened
04:33:45 got it. what would be the best way to approach this?
04:34:02 I can discuss this at the PTG.
04:34:49 but first I think we need some pre-discussion with nova
04:35:28 takashi: sorry, your comment came late..
04:35:56 samP: np. as you say, we need some discussion in the nova project
04:36:57 takashi: OK then, let's propose a Pike spec.
04:38:10 samP: yes
04:38:17 samP: I have noted down this point, we will submit a spec in Nova to address this use case.
04:38:34 tpatil: thanks
04:38:36 samP: maybe we can discuss our use case with the nova team and confirm this is the best solution
04:38:40 tpatil: yes, thanks!
04:39:36 Did they set a specific date for the Pike spec start?
04:39:56 #link https://releases.openstack.org/pike/schedule.html
04:40:15 so many TBDs...
04:40:30 takashi: thanks, seems TBD
04:40:54 samP: AFAIK, nova spec freeze happens at the same time as the *-1 milestone
04:41:28 samP: so we should get the spec approved before the Pike-1 milestone
04:41:28 tpatil: may I assign this task to you for now?
04:41:41 samP: Yes
04:41:49 takashi: got it
04:42:46 #action tpatil Propose Nova spec for metadata control policy
04:42:54 tpatil: thanks
04:43:16 abhishekk: thanks for adding this point
04:43:39 shall we move to the next topic?
04:43:45 samP: no problem
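For reference, a rough sketch of the kind of policy-based restriction the Pike spec would propose, using oslo.policy; the rule name, check string, and helper function here are hypothetical stand-ins, not Nova's actual policies.

```python
# Rough sketch (not Nova's real code) of gating the HA_Enabled metadata key
# behind a policy rule so that only admins may set it.
from oslo_config import cfg
from oslo_policy import policy

CONF = cfg.CONF
enforcer = policy.Enforcer(CONF)
enforcer.register_defaults([
    policy.RuleDefault(
        'os_compute_api:server-metadata:set-ha-enabled',   # hypothetical rule name
        'role:admin',                                       # hypothetical check string
        description='Who may set the HA_Enabled metadata key on a server'),
])

def may_set_metadata(creds, metadata):
    """Allow ordinary keys; gate HA_Enabled behind the policy rule."""
    if 'HA_Enabled' not in metadata:
        return True
    return enforcer.enforce(
        'os_compute_api:server-metadata:set-ha-enabled', {}, creds)

# e.g. may_set_metadata({'roles': ['member']}, {'HA_Enabled': 'True'}) -> False
```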
04:44:00 #link https://review.openstack.org/#/c/423072/
04:44:50 abhishekk: thanks for the nice idea, but I have some operation-related issues (pls see my comments on gerrit)
04:45:13 samP: I have seen your comments
04:46:43 abhishekk: Those are just my comments, but others may have different opinions on this
04:46:45 IMO it makes sense to balance the pool of reserved hosts; a failed node can be reassigned as a reserved host
04:51:21 abhishekk: Are you in favour of setting reserved_host=False once we evacuate the VMs, or of waiting for other failures?
04:52:21 samP: yes, because once we enable the compute service on a reserved host we cannot restrict nova from launching instances on that host
04:52:24 abhishekk: We should set reserved=False immediately after all instances are evacuated from a failed compute node.
04:52:45 tushar san makes sense
04:52:52 abhishekk: correct
04:53:51 takashi: agree
04:55:17 I think we have 6 minutes now
04:55:21 abhishekk: could you please update the spec with this info?
04:55:34 abhishekk: yes, just 5 mins left
04:55:48 samP: yes
04:55:57 abhishekk: thanks
04:56:16 #topic AOB
04:56:25 set reserved_host to false as soon as all instances are evacuated from the failed node, right?
04:56:44 abhishekk: correct
04:57:17 samP: ok, it's already there in the spec, I just need to rephrase it
04:58:06 abhishekk: yes.. sorry, it is there.. my bad
04:58:17 May I ask a question related to the new requirement to add the reserved_host to the same aggregate the failed_host is in?
04:58:43 Dinesh_Bhor: sure
04:58:54 So my question is: a failed_host can be associated with multiple aggregates, so to which aggregate should the reserved_host be added?
04:59:35 Dinesh_Bhor: all the aggregates of the failed host
04:59:49 samP, Dinesh_Bhor: Can we move to #openstack-masakari?
04:59:57 Dinesh_Bhor: in nova, there is a unique constraint applied on the host, aggregate uuid, and deleted columns
04:59:58 sure
05:00:00 because we have run out of meeting time...
05:00:01 samP: ok
05:00:07 takashi: sure
05:00:13 Dinesh_Bhor: so this situation will never arise
05:00:27 OK then, let's move to #openstack-masakari for further discussion..
05:00:36 Let's end this meeting...
05:00:42 thank you all
05:00:59 thank you
05:01:08 thanks
05:01:15 #endmeeting
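For reference, the reserved_host handling agreed above, sketched as hypothetical code; every helper name is a stand-in, not masakari's or nova's real API.

```python
# Purely illustrative sketch of the recovery flow discussed in the meeting.
# The callables passed in (get_aggregates_of, add_host_to_aggregate, etc.)
# are hypothetical placeholders, not real masakari or nova functions.
def recover_with_reserved_host(failed_host, reserved_host,
                               get_aggregates_of, add_host_to_aggregate,
                               enable_compute_service, evacuate_instances,
                               set_reserved):
    # Add the reserved host to every aggregate the failed host belongs to,
    # per the question and answer above about multiple aggregates.
    for aggregate in get_aggregates_of(failed_host):
        add_host_to_aggregate(reserved_host, aggregate)

    # Bring the reserved host's compute service up so it can accept evacuations.
    enable_compute_service(reserved_host)

    # Evacuate everything off the failed host onto the reserved host.
    evacuate_instances(failed_host, target=reserved_host)

    # Flip reserved=False immediately after all instances are evacuated, since
    # nova can now schedule new instances to this host anyway.
    set_reserved(reserved_host, False)
```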