13:00:07 #startmeeting senlin
13:00:08 Meeting started Tue Nov 29 13:00:07 2016 UTC and is due to finish in 60 minutes. The chair is yanyanhu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:12 The meeting name has been set to 'senlin'
13:00:22 hi, guys
13:00:32 hi yanyan
13:00:37 hi, XueFengLiu
13:00:44 hi, all
13:00:48 hello
13:00:52 hi
13:00:56 hi, Qiming
13:01:13 let's wait a while for other attendees
13:01:20 here is the agenda, https://wiki.openstack.org/wiki/Meetings/SenlinAgenda#Agenda_.282016-11-29_1300_UTC.29
13:01:27 please feel free to add items
13:02:26 ok, let's move on
13:02:39 #topic ocata workitem
13:02:45 https://etherpad.openstack.org/p/senlin-ocata-workitems
13:02:51 here is the list
13:03:03 - Improve tempest API test
13:03:21 I think we can start working on it since the versioned request support is almost done
13:04:39 the basic idea is adding verification of the exception messages
13:04:52 have we removed the performance benchmarking work completely?
13:04:56 to ensure the request handling logic is correct
13:05:11 Qiming, I plan to move it back to the todo list
13:05:26 since I haven't had time to work on it recently...
13:05:36 if anyone wants to pick it up, that would be great
13:05:50 I see
13:06:11 the foundation is there, it just needs effort to support more profiles and scenarios
13:06:59 ok, next one
13:07:04 HA support
13:07:17 hi, lixinhui
13:07:34 we just started the HA topic seconds ago :)
13:07:55 okay
13:07:57 :)
13:08:30 any new progress?
13:08:33 in the past week, I submitted the ocativia BP and patch
13:08:43 noticed that and your patch
13:08:46 good work
13:08:52 great
13:09:28 Michael has some questions about the background
13:09:44 and I gave some answers
13:10:00 about using the lbaas hm for HA purposes?
13:10:35 if the answers cannot resolve his questions, we may need to discuss it at the ocativa weekly meeting
13:10:55 yes, I mentioned the use case
13:11:05 emm
13:11:07 https://review.openstack.org/402296
13:11:21 hope it won't become the next vpnaas
13:11:25 which is dying
13:11:34 yes, it is dying
13:12:04 people do have a strong requirement for native lb support
13:12:05 anyway, not a big change. hope he can accept it
13:12:19 although imho, lbaas is not ready fro production requirements
13:12:26 s/fro/for
13:12:33 agreed
13:13:22 so maybe we can give them a brief introduction to our use case
13:13:35 they may or may not care
13:13:41 Qiming, yes
13:13:50 so we just try our best
13:13:56 it is not about our use case
13:14:00 it is their BUG
13:14:01 okay
13:14:06 actually this is a contribution to lbaas, I feel
13:14:10 a serious bug, they need to take care of it
13:14:12 no harm to them :)
13:14:39 Qiming, you mean the incorrect health status of the lb member?
13:14:43 if lbaas is maintaining health status for nodes, they should do it right
13:15:00 I mean omitting events for status changes
13:15:20 we tried to help fix it, they rejected the patch, and they "WON'T" fix the bug
13:15:43 I feel even more strongly that our event/notification work is important :)
13:15:45 now xinhui is trying another workaround, letting them send out notifications when node health changes
13:15:58 I think it is not reasonable to submit patches to lbaas any more
13:16:00 Qiming, yes, I have the same feeling about that patch
13:16:09 anyway, it's their decision.
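
A minimal, self-contained sketch of the tempest work item discussed at the top of the meeting: asserting on the exception message produced by the versioned request objects, not only on the status code. FakeClustersClient and the message text are illustrative stand-ins, not actual Senlin tempest code.

import testtools
from tempest.lib import exceptions


class FakeClustersClient(object):
    """Stand-in for a real Senlin tempest service client (illustrative)."""

    def cluster_create(self, name, desired_capacity):
        # A real API would return this text in the 400 response body,
        # generated by the versioned request object's field validation.
        if desired_capacity < 0:
            raise exceptions.BadRequest(
                "Value must be >= 0 for field 'desired_capacity'")
        return {'cluster': {'name': name}}


class ClusterCreateNegativeTest(testtools.TestCase):

    def setUp(self):
        super(ClusterCreateNegativeTest, self).setUp()
        self.client = FakeClustersClient()

    def test_create_with_negative_capacity(self):
        ex = self.assertRaises(exceptions.BadRequest,
                               self.client.cluster_create,
                               name='c1', desired_capacity=-1)
        # Match the message, not merely the fact that *some* 400 came
        # back, so broken validation logic cannot slip through.
        self.assertIn("desired_capacity", str(ex))
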
13:16:24 so we are trying to use ocativia to emit events
13:16:38 then any listener can handle them in its own way
13:16:51 maybe we need a ptl-to-ptl talk with them
13:16:52 lixinhui, yes, that's pretty reasonable
13:16:54 no need to care about neutron-lbaas anymore
13:17:19 yes, I agree Qiming :)
13:17:29 so neutron-lbaas and ocaticia are two individual projects?
13:17:36 yes
13:17:41 ocativia
13:17:44 I see
13:17:47 neutron-lbaas is dying
13:17:57 I thought they were the same project
13:18:08 octavia
13:18:20 haha
13:18:31 :)
13:18:38 :)
13:19:07 there is lbaasv2?
13:19:08 I will keep tracking the patch
13:19:18 that is my part
13:19:21 so maybe we need to have further discussion on this issue
13:19:22 of update
13:19:37 Yes, Haiwei_
13:19:38 to see how to move on
13:19:45 lbaasv1 has been dropped
13:20:01 yes, lbaasv2 is now the API enabled by default
13:20:13 the neutron ptl is armax
13:20:36 Michael Johnson is the octavia PTL
13:20:37 octavia is part of neutron?
13:20:42 yes
13:20:55 or an individual project?
13:20:56 yes
13:20:57 octavia has its own core team: https://review.openstack.org/#/admin/groups/370,members
13:21:12 they have the same weekly meeting as neutron
13:21:17 lbaas is dying, lbaasv2 is the substitute?
13:21:25 http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n2205
13:21:30 yes, I think so, haiwei_
13:22:05 wait a minute, is lbaasv1 dying or is the lbaas project dying?
13:22:28 Oh, I thought lbv2 was dying too
13:22:35 I see
13:22:36 thank goodness, they are not both dying
13:22:46 ...
13:22:47 lbaasv1 has been dropped
13:22:52 yes, it is
13:23:14 v1 is dead, v2 is dying?
13:23:31 I don't know, I'm listening :)
13:23:56 I did not hear anything about v2 dying
13:24:27 sigh, if so, the lbaas project is ok
13:24:30 it's just that its v1 api is deprecated, which is expected
13:24:31 it seems the advanced features like lbaas and vpn are done by vendors themselves
13:24:36 ocativa will replace neutron-lbaas
13:24:39 project
13:24:53 octavia
13:25:15 they are on the path of transitioning
13:25:24 replace? you mean enhancement or complete replacement?
13:25:38 so no more haproxy-based lb
13:25:42 deprecation
13:25:44 only the vm-based one
13:26:01 that is another reason why we should not submit patches to the dying project
13:26:04 and neutron-lbaas will be renamed to octavia?
13:26:38 by default, octavia provides a haproxy lb
13:26:54 haproxy installed in a VM
13:26:54 and it provides drivers for different vendors to plug in
13:27:09 yes, this is what lbaas is doing now
13:27:19 so it is not necessary to also have neutron-lbaas...
13:27:58 so octavia will replace lbaas in the big tent?
13:28:22 no one ever mentioned big tent, obviously
13:28:26 anyway, proposing patches to octavia is reasonable
13:28:34 http://lists.openstack.org/pipermail/openstack-dev/2016-November/106911.html
13:28:51 Octavia is currently the reference backend for Neutron LBaaS. In the near future, Octavia is likely to become the standard OpenStack LBaaS API endpoint.
13:29:00 https://github.com/openstack/octavia
13:29:23 Yes
13:29:34 I see.
13:30:00 should we move on? we have been talking about this extraterrestrial project for 30 minutes
13:30:02 So it will be easy for customers using the lbv1 API to move to octavia, right?
13:30:15 so we will just focus on communicating with the octavia team
13:30:15 20 minutes, sorry
13:30:36 sorry, let's discuss this issue offline
13:30:39 let's move on now
13:30:49 ping the cores, send emails to the mailing list, and we will find out
13:31:00 no documentation update, I think
13:31:13 no
13:31:14 versioned request support
13:31:23 almost done
13:31:32 I will finish the receiver part this week
13:31:45 I think XueFengLiu's and lvdongbing's work has been done
13:32:05 then we can announce that the first step is finished
13:32:08 yes
13:32:19 Yes, only action create/delete remains to be done.
13:32:42 XueFengLiu, thanks
13:32:46 remove line 20?
13:32:57 Qiming, yes
13:33:09 ok, next one, container profile
13:33:12 hi, haiwei_
13:33:29 * Qiming really enjoys pressing the DEL key
13:33:42 :)
13:34:10 hi, yanyanhu, I think we need to discuss the image management for containers
13:34:33 haiwei_, yes, that is an important issue
13:34:43 currently, we can get images from glance, I think?
13:34:52 although it is not layered
13:34:59 I think the next step may be the image jobs, what do you think, Qiming?
13:35:18 what do you mean by image jobs?
13:35:24 I am thinking about using Zun, yanyanhu
13:35:48 how to give the container an image
13:35:55 glance?
13:36:15 when you do glance image-create, you have this:
13:36:15 yes, maybe this is the first choice, at least for now?
13:36:16 --container-format
13:36:16 Format of the container. Valid values: None, ami, ari, aki, bare, ovf, ova, docker
13:36:18 currently we only support downloading the images from the docker hub
13:36:46 not the vm's image
13:37:06 'container-format docker'
13:37:10 what does that mean?
13:37:42 this is from glance image-create?
13:37:47 yes
13:37:52 It is
13:38:09 never saw this before
13:38:29 may need more investigation here before making a decision :)
13:38:32 we usually use bare for container-format
13:38:32 okay
13:38:45 ok, will check it
13:38:51 haiwei_, great, thanks a lot
13:38:51 this means we can upload a docker image to glance
13:38:56 the current docker profile relies on docker to parse and download the image
13:39:03 using glance image-create, I think
13:39:06 Qiming, yes, it is now
13:39:16 em ... the primary use case was nova-docker, I think
13:39:23 that's the way nova-docker used to work
13:39:39 so maybe an important issue is figuring out how to consume container images stored in glance
13:39:40 not sure how useful it is if we let docker(d) check the image
13:40:02 zun has the same problem, I think
13:40:31 the assumption is that the controller node has access to either the public hub or a local registry
13:40:55 it sounds more like a configuration/deployment topic than a senlin programming job?
13:41:20 Qiming, yes it is
13:41:52 if the image is not uploaded to glance, will glance download it from the public hub and then serve it to other projects?
13:42:04 maybe this can be a property of the container profile?
13:42:31 haiwei_, not sure how it works
13:42:34 if wanted, a user (of a public cloud) can create a vm to store his private docker images, then start a container cluster referencing the images stored there
13:42:56 docker determines the registry using the tag, ....
13:42:59 Qiming, you mean a local registry for each container cluster?
13:43:06 why use a vm to store the images?
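
A minimal sketch of the glance upload path mentioned above: registering a docker image with container_format=docker, the workflow nova-docker used to rely on. It assumes python-glanceclient and keystoneauth1; the credentials, endpoint, and the 'busybox' image are placeholders, not values from the meeting.

import subprocess

from glanceclient import Client
from keystoneauth1 import loading
from keystoneauth1 import session

# Build an authenticated session (all option values are placeholders).
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://controller:5000/v3',
                                username='demo', password='secret',
                                project_name='demo',
                                user_domain_id='default',
                                project_domain_id='default')
sess = session.Session(auth=auth)
glance = Client('2', session=sess)

# Register the image record; 'docker' is one of the valid
# --container-format values quoted in the discussion above.
image = glance.images.create(name='busybox',
                             container_format='docker',
                             disk_format='raw')

# 'docker save' writes the image as a tar stream, which becomes the
# glance image data.
proc = subprocess.Popen(['docker', 'save', 'busybox'],
                        stdout=subprocess.PIPE)
glance.images.upload(image.id, proc.stdout)

Whether the docker profile should then consume such glance-stored images instead of pulling from a registry is exactly the open question being discussed.
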
13:43:12 just want to say that is possible
13:43:17 for a public cloud
13:43:18 ok
13:43:44 maybe we can collect the different options and then have a discussion to see which way is better
13:43:48 for a private cloud, you can build it anywhere, so long as the controller node can access its port 5000
13:44:24 haiwei_, maybe we can collect all possible options in an etherpad
13:44:38 ok, yanyanhu, will create one
13:44:41 and we can leave comments there
13:44:49 haiwei_, great, thanks :)
13:44:59 ok, only 15 minutes left
13:45:04 let's move to the next topic
13:45:09 event/notification
13:45:15 hi, Qiming, your turn
13:46:19 okay, almost done
13:46:25 great
13:46:34 the common interface has been abstracted
13:46:39 https://blueprints.launchpad.net/senlin/+spec/generic-event
13:46:48 no reviews yet
13:46:57 may have to approve it myself
13:47:13 the next thing would be the configuration part
13:47:29 making 'database' and 'message' two plugins
13:47:40 and loading them via stevedore
13:48:01 adding senlin.conf sections/options to control when/how to log ...
13:48:05 sounds great
13:48:39 about the 'message' plugin, so the messages will be published to a public queue?
13:48:58 yes
13:49:01 which is accessible to all openstack services
13:49:06 I see
13:49:13 it is a control plane thing
13:49:20 irrelevant to zaqar
13:49:25 this is also what we expect octavia to do :)
13:49:31 Qiming, I see
13:49:45 zaqar is a USER-FACING message queue
13:50:04 yes
13:50:23 sorry I didn't get time to review the code. Will check it tomorrow
13:50:36 if you have checked the code:
13:50:44 Qiming, please feel free to self-approve to avoid a dependency issue
13:50:45 def _emit(self, context, event_type, publisher_id, payload):
13:50:45     notifier = messaging.get_notifier(publisher_id)
13:50:45     notify = getattr(notifier, self.priority)
13:50:45     notify(context, event_type, payload)
13:50:48 that is all
13:51:02 looks simple
13:51:27 get_notifier is provided by oslo?
13:51:33 yes
13:51:35 I mean the backend logic
13:51:37 I see
13:51:59 we have a thin wrapper over it
13:52:04 also pretty simple
13:52:19 oh, is there any limit on publishing messages to this public queue in the control plane?
13:52:30 don't think so
13:52:47 it is the same messaging service used for rpc
13:53:01 I mean, if a service publishes too many messages, could it break the queue?
13:53:30 no experience
13:53:48 me neither...
13:54:00 if you have ever checked the events collected by ceilometer, you will realize that we are being very cautious about this
13:54:16 yep
13:54:24 if you compare this to the number of RPC calls received by keystone
13:54:34 I know ceilometer gets lots of messages from other services
13:54:34 by neutron
13:54:55 the things we are sending are trivial
13:55:21 yes, compared with others
13:55:55 also compared to what nova sends out: http://git.openstack.org/cgit/openstack/nova/tree/nova/notifications/objects/instance.py#n246
13:56:14 @@
13:56:42 a message for each status change of each instance
13:57:11 so I'm not that worried about it
13:57:14 ok, seems we don't need to worry about this issue
13:57:16 yep
13:57:28 in addition, we are making the behavior configurable
13:58:00 I see
13:58:34 ok, those are all the work items on the list
13:58:38 time is almost up
13:58:48 any more updates?
13:58:57 if not, I will end the meeting.
13:59:03 hi, lixinhui, still there?
13:59:24 if it's ok, I want to talk with you about the octavia patch
13:59:47 we can discuss how to work on it together
13:59:53 to push it forward
14:00:05 ok, time is up.
Thank you guys for joining
14:00:09 have a good night
14:00:12 #endmeeting
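
A minimal sketch (not Senlin's actual code) of the two ideas from the event/notification discussion: loading the 'database' and 'message' backends as stevedore plugins, and emitting notifications through oslo.messaging the way the pasted _emit() does. The 'senlin.event.plugins' namespace and the class names are hypothetical.

from oslo_config import cfg
import oslo_messaging as messaging
from stevedore import driver


class MessageEventBackend(object):
    """Publishes events onto the control-plane notification queue."""

    def __init__(self, publisher_id='senlin-engine'):
        transport = messaging.get_notification_transport(cfg.CONF)
        self.notifier = messaging.Notifier(
            transport, publisher_id=publisher_id,
            driver='messaging', topics=['notifications'])

    def emit(self, context, event_type, payload, priority='info'):
        # Same pattern as the pasted _emit(): pick the notifier method
        # matching the event priority, e.g. notifier.info(...).
        notify = getattr(self.notifier, priority)
        notify(context, event_type, payload)


def load_backend(name):
    # Stevedore resolves the name through setup.cfg entry points, e.g.
    # [entry_points]
    # senlin.event.plugins =
    #     database = senlin.events.database:DBEventBackend
    #     message  = senlin.events.message:MessageEventBackend
    mgr = driver.DriverManager(namespace='senlin.event.plugins',
                               name=name, invoke_on_load=True)
    return mgr.driver

This also shows why the behavior is easy to make configurable: a senlin.conf option only has to name which plugins to load and at which priorities to emit.
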