13:00:32 <yanyanhu> #startmeeting senlin
13:00:33 <openstack> Meeting started Tue Jun 21 13:00:32 2016 UTC and is due to finish in 60 minutes.  The chair is yanyanhu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:37 <openstack> The meeting name has been set to 'senlin'
13:00:48 <yanyanhu> hi
13:00:53 <haiwei> hi
13:00:54 <elynn> hi~~
13:01:04 <yanyanhu> qiming is not here for a family reason, so I will chair this meeting today
13:01:05 <yanyanhu> :)
13:01:11 <yanyanhu> hi, haiwei , elynn
13:01:25 <yanyanhu> will wait a minute for other attendees
13:01:27 <haiwei> ok
13:02:03 <yanyanhu> ok, let's start
13:02:14 <yanyanhu> please check the agenda and item you want to discuss :)
13:02:22 <yanyanhu> https://wiki.openstack.org/wiki/Meetings/SenlinAgenda#Weekly_Senlin_.28Clustering.29_meeting
13:02:34 <yanyanhu> #topic newton workitems
13:02:47 <yanyanhu> let's quickly go through the etherpad
13:02:52 <rossella_s> hi :)
13:02:55 <elynn> okay
13:03:00 <yanyanhu> https://etherpad.openstack.org/p/senlin-newton-workitems
13:03:09 <elynn> welcome rossella_s
13:03:12 <yanyanhu> hi,  rossella_s
13:03:29 <yanyanhu> the first item is tempest test
13:03:41 <yanyanhu> tempest API test is almost done I think
13:03:50 <yanyanhu> elynn, I think it is?
13:03:57 <elynn> yes
13:04:00 <yanyanhu> for both positive and negative cases
13:04:04 <yanyanhu> nice :)
13:04:09 <elynn> Saw you add release note
13:04:25 <elynn> We can move on to functional test and integration test
13:04:35 <yanyanhu> yes. If there is any case missing, plz feel free to propose patch for it :)
13:04:37 <yanyanhu> yes
13:04:49 <yanyanhu> the next step is migrating existing functional test to tempest
13:05:11 <yanyanhu> I think some existing functional test cases will be removed since they are duplicated with tempest API tests
13:05:23 <yanyanhu> will start to work on it
13:05:33 <elynn> yes, a lot of them.
13:05:37 <yanyanhu> yea
13:05:50 <yanyanhu> since our tempest API test is very complete I think :)
13:06:07 <yanyanhu> anything else about tempest tests?
13:06:18 <yanyanhu> ok
13:06:23 <yanyanhu> Performance Test
13:06:30 <yanyanhu> actually not much progress this week...
13:06:42 <yanyanhu> just merged the rally plugin for cluster-resize
13:06:56 <yanyanhu> so now we have rally plugin support for cluster create/delete/resize
13:07:06 <yanyanhu> we can do some basic benchmarking now
13:07:15 <yanyanhu> I think eldon is not here
13:07:30 <yanyanhu> hope these plugins can provide some help for their test work
13:07:51 <yanyanhu> BTW, these plugins still stay in senlin repo
13:08:05 <yanyanhu> the plan is to contribute them to the rally repo
13:08:20 <yanyanhu> ok, HA topic
13:08:26 <yanyanhu> umm, Qiming  is not here
13:08:34 <yanyanhu> xinhui, are you around?
13:09:04 <yanyanhu> guess no...
13:09:26 <yanyanhu> elynn, haiwei , do you guys have anything else you want to discuss on this topic?
13:09:39 <haiwei> no
13:09:43 <elynn> no
13:09:46 <yanyanhu> ok, lets skip it
13:09:56 <yanyanhu> profile
13:10:04 <yanyanhu> for container support
13:10:12 <yanyanhu> haiwei, anything new? :)
13:10:23 <yanyanhu> I noticed you proposed some patches to add docker driver
13:10:30 <yanyanhu> and the profile support as well
13:11:11 <yanyanhu> https://review.openstack.org/331947
13:11:18 <yanyanhu> this is the latest one
13:11:36 <yanyanhu> and this one has been merged
13:11:36 <yanyanhu> https://review.openstack.org/322692
13:12:19 <yanyanhu> haiwei, there?
13:12:56 <yanyanhu> ok, lets move on and maybe go back later
13:13:01 <elynn> okay
13:13:09 <yanyanhu> ah, zaqar support
13:13:15 <yanyanhu> hasn't started I guess
13:13:26 <yanyanhu> not sure anyone has plan to work on it
13:16:02 <elynn> welcome back haiwei :)
13:16:02 <haiwei> dropped just now
13:16:08 <haiwei> yes
13:16:14 <haiwei> :)
13:16:30 <elynn> yanyanhu, just ask you about container support progress :)
13:16:54 <haiwei> I added container create/delete driver this week
13:17:04 <yanyanhu> if not, will try to spend some time on it after functional test migration is done
13:17:04 <yanyanhu> it's on our roadmap of newton release
13:17:05 <yanyanhu> should finish it
13:17:05 <haiwei> hope you can review it
13:17:06 <yanyanhu> next one is notification
13:17:07 <yanyanhu> skip it as well since Qiming is not here.
13:17:08 <yanyanhu> ok, these are all items in etherpad
13:17:10 <yanyanhu> the next topic in the agenda is deploying senlin-engine and senlin-api on different hosts
13:17:10 <yanyanhu> this is a problem asked by eldon zhao from cmcc
13:17:11 <yanyanhu> guess he hasn't joined the meeting channel
13:17:23 <yanyanhu> haiwei, sure
13:17:29 <yanyanhu> will help to check it :)
13:17:29 <elynn> wow, yanyanhu just said lots of words.
13:17:32 <haiwei> thanks
13:17:36 <yanyanhu> I thought my network is broken :)
13:17:47 <yanyanhu> it is real...
13:17:47 <haiwei> me too, yanyanhu
13:17:54 <yanyanhu> my network was broken....
13:18:07 <yanyanhu> I'm using proxy at home
13:18:11 <haiwei> haha, I thought it was my network's problem
13:18:14 <yanyanhu> the proxy is setup in softlayer
13:18:18 <elynn> eldon_ has joined.
13:18:33 <yanyanhu> and I have to use vpn to connect to IBM network before I can access that softlayer machine...
13:18:43 <yanyanhu> hi, eldon_ , welcome :)
13:18:49 <elynn> Is there any bug when deploying api and engine on different hosts?
13:19:20 <yanyanhu> eldon_, I just made a test this afternoon and found it worked in my local env :)
13:19:21 <eldon_> yes
13:19:21 <haiwei> I am curious about deploying api and engine on different hosts
13:19:34 <haiwei> what does it mean?
13:19:40 <yanyanhu> just need to be careful about two things
13:19:47 <eldon_> maybe, you have configured host in senlin.conf @yanyanhu
13:20:03 <yanyanhu> haiwei, that means senlin-engine is running on one host while senlin-api is running on another host
13:20:09 <yanyanhu> eldon_, yes
13:20:29 <yanyanhu> the "host" option in both machines must be the same
13:20:42 <haiwei> why do we need this feature?
13:20:50 <eldon_> but, i have one api and two engine.
13:20:53 <yanyanhu> to let senlin-engine and senlin-api talk using the same rpc server
13:20:54 <elynn> I thought rabbitmq is the only tool to connect them together.
13:21:02 <eldon_> how can I configure host~
13:21:19 <yanyanhu> let me see
13:21:38 <yanyanhu> why you want to setup two senlin-engine services
13:21:43 <yanyanhu> and only one senlin-api service
13:22:03 <yanyanhu> if you want to add engine worker, you can configure it in senlin.conf
13:22:22 <haiwei> I got it, that is for some cases when nodes are too many
13:22:31 <yanyanhu> the structure of senlin is different from nova :)
13:22:39 <yanyanhu> which has one nova-api with multiple nova-computes
13:22:51 <eldon_> I use two engines for one api, I just want to make senlin stronger
13:23:02 <yanyanhu> nova-compute is actually an agent which maps requests from the API to VM creation
13:23:07 <eldon_> if one senlin-engine down, we have another engine
13:23:19 <yanyanhu> eldon_, nice try :)
13:23:21 <eldon_> it's not like nova-compute
13:23:29 <yanyanhu> just that could not match the design :)
13:23:35 <yanyanhu> eldon_, yes
13:23:40 <yanyanhu> that's the point
13:23:47 <haiwei> what is the difference?
13:23:54 <haiwei> nova and senlin
13:23:56 <yanyanhu> actually I guess you can setup two engines with one api
13:24:08 <yanyanhu> but the behavior could be unpredictable
13:24:20 <haiwei> senlin engine is not a proxy but some workers?
13:24:49 <yanyanhu> haiwei, I think senlin-engine is not agent like nova-compute
13:24:49 <yanyanhu> yes, I think so
13:24:55 <yanyanhu> eldon dropped...
13:25:13 <yanyanhu> hi, eldon_
13:25:14 <eldon_> sorry to logout
13:25:20 <yanyanhu> no problem
13:25:22 <eldon_> yes, i am in~
13:25:38 <yanyanhu> I think you can try to configure the 'host' option to let one senlin-api talk with two senlin-engines
13:25:40 <eldon_> can we setup two engines with one api?
13:25:45 <yanyanhu> I think so
13:25:50 <eldon_> how?
13:25:54 <yanyanhu> just I'm not 100% sure about the result
13:26:07 <eldon_> this afternoon, I changed code for this.
13:26:14 <haiwei> what is the 'host' option?
13:26:21 <yanyanhu> just set 'host' in senlin.conf in all three machines to the same value
13:26:28 <yanyanhu> senlin.conf, haiwei
13:26:40 <yanyanhu> and also set rabbit_host to the same one
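The configuration yanyanhu describes might look like this in senlin.conf on all three machines (a sketch only: the section and option names follow oslo conventions of that era, and every value shown is a placeholder, not taken from the log):

```ini
# senlin.conf -- identical on the api host and on both engine hosts
[DEFAULT]
# Logical rpc "server" name; must match everywhere so all three
# services share one rpc target (it need not be a real hostname).
host = senlin-ctrl

[oslo_messaging_rabbit]
# IP of the single machine running the rabbitmq server.
rabbit_host = 192.0.2.10
rabbit_userid = senlin
rabbit_password = secret
```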
13:26:43 <elynn> yanyanhu, Why do we do that?
13:27:02 <haiwei> a IP address?
13:27:07 <yanyanhu> elynn, eldon_ want to make a try :)
13:27:14 <yanyanhu> to see whether that works
13:27:18 <yanyanhu> haiwei, yes
13:27:29 <eldon_> https://github.com/EldonZhao/senlin/blob/eldon/senlin/rpc/client.py
13:27:30 <yanyanhu> rabbit_host is a ip address where rabbit server is located
13:27:36 <eldon_> def __init__(self):
13:27:36 <eldon_>     self._client = messaging.get_rpc_client(
13:27:36 <eldon_>         topic=consts.ENGINE_TOPIC,
13:27:36 <eldon_>         version=self.BASE_RPC_API_VERSION)
13:27:42 <haiwei> It seems we need a scheduler
13:27:49 <eldon_> I removed server param, and it works
13:27:55 <yanyanhu> haiwei, scheduler for?
13:28:02 <haiwei> engines
13:28:17 <yanyanhu> eldon_, if there is only one api and one engine, you don't need to remove server param
13:28:23 <yanyanhu> it worked in my local env
13:28:26 <elynn> yanyanhu, I mean why do we need to set host the same in 3 machines?
13:28:31 <haiwei> it can be random for rabbitmq to choose which engine to send request?
13:28:38 <yanyanhu> haiwei, it can be, just not necessary I guess
13:28:47 <haiwei> yes
13:29:01 <yanyanhu> elynn, since eldon wants one senlin-api to talk with two senlin-engines at the same time
13:29:21 <yanyanhu> so we need to ensure all three of them interact with each other using the same rpc server.topic
13:29:27 <yanyanhu> which is created on the rabbit_host
13:29:32 <eldon_> our project wants senlin to be more robust.
13:29:37 <yanyanhu> haiwei, it's broadcast
13:29:41 <eldon_> So we need this feature.
13:29:50 <yanyanhu> but only one engine can receive and handle the request from api
13:30:03 <elynn> I think the server must be different?
13:30:05 <yanyanhu> eldon_, I guess we don't need to revise existing code
13:30:08 <haiwei> really?
13:30:10 <yanyanhu> to achieve your goal :)
13:30:18 <elynn> multi engines is the same as multi workers
13:30:25 <yanyanhu> let me see, elynn
13:30:26 <elynn> I thought.
13:30:29 <elynn> https://github.com/openstack/heat/blob/8b02644d631ea2d28b27fcc30d7b8091b002a9db/heat/rpc/listener_client.py#L37-L40
13:30:41 <yanyanhu> oh, right, if the 'host' is the same for two senlin-engines
13:30:49 <yanyanhu> I guess one of them can't start correctly
13:30:52 <elynn> Here is the code from heat, it simply sets server to a different engine_id
13:31:02 <yanyanhu> elynn, a little different
13:31:31 <elynn> It's just a mark for each engine I think?
13:31:32 <yanyanhu> elynn, we don't do this
13:31:54 <yanyanhu> the server for engine service is configured using 'host' option
13:31:59 <eldon_> yes, not exactly the same.
13:32:07 <yanyanhu> server of each dispatcher is the engine_id
13:32:27 <elynn> I think that's fine.
13:32:53 <yanyanhu> so for senlin-engine service, all engine instances have the same rpc "server"
13:33:13 <eldon_> I am a little interested about multi-workers.
13:33:18 <yanyanhu> that's why rpc request from senlin-api can broadcast to all of them directly
13:33:29 <yanyanhu> sorry, broadcast is not exact word
13:33:40 <elynn> I got what you mean.
13:33:45 <elynn> so eldon_'s problem is he can't get it work when one api and 2 engines on different hosts?
13:33:56 <yanyanhu> elynn, yes, looks so
13:34:12 <elynn> Unless delete host parameter as he said.
13:34:18 <yanyanhu> hi, eldon_ , I will make further test tomorrow to test that case
13:34:29 <elynn> right, eldon_ ?
13:34:34 <yanyanhu> I guess it should work directly
13:34:46 <yanyanhu> but I'm not sure about it :)
13:34:51 <elynn> We should make some unit tests for it after investigation...
13:35:08 <eldon_> now I can explain again:)
13:35:19 <yanyanhu> based on my understanding of oslo.messaging, I guess either it works directly or one senlin-engine can't start correctly
13:35:24 <eldon_> we have one api + two engines.
13:35:44 <yanyanhu> eldon_, yes
13:35:56 <eldon_> one engine is deployed on the api's server, and the other one on another server.
13:35:58 <elynn> sorry let you explain again, I'm listening
13:36:18 <yanyanhu> ok
13:36:38 <eldon_> I find that, if we don't configure host in senlin.conf, api will always call engine in its server
13:37:01 <yanyanhu> eldon_, that's true since default 'host' is the host name of your machine :)
13:37:21 <eldon_> because, if host isn't configured, it will use hostname in get_rpc_client function.
13:37:33 <yanyanhu> yes
13:37:35 <yanyanhu> exactly
13:37:53 <eldon_> if i stop engine in api's server, it can't call engine in other server automatically.
13:38:16 <yanyanhu> eldon_, yes
13:38:22 <elynn> because api can't resolve the hostname?
13:38:32 <yanyanhu> since by default, the rpc call will be sent to 'server1.topic'
13:38:39 <eldon_> if I remove the server param in get_rpc_client, it can automatically call the engine on the other host.
13:39:13 <elynn> https://github.com/openstack/senlin/blob/master/senlin/common/config.py#L92 it does retrieve the hostname and set it as default.
13:39:20 <eldon_> because api still call engine in its host, but it has already stopped.
13:39:21 <yanyanhu> eldon_, that is because after you remove the server param, the rpc request no longer targets a specific rpc server
13:39:25 <yanyanhu> any server listening on the given topic can pick it up
13:39:31 <eldon_> yes.
13:39:46 <yanyanhu> so the correct way is configuring the two senlin-engines to listen to the same rpc server :)
13:40:24 <yanyanhu> rather than removing rpc server parameter from rpc client of senlin-api
13:40:26 <eldon_> if it needs configuration, we can't add senlin-engine automatically.
13:40:37 <elynn> eldon_, so in api host, can it ping the other server by hostname?
13:40:41 <eldon_> I find heat solved in the same way.
13:40:42 <yanyanhu> eldon_, but that is the correct way :)
13:40:54 <yanyanhu> you can't ensure the rpc topic is unique :)
13:41:12 <eldon_> topic is unique, i am sure
13:41:15 <yanyanhu> we can't ensure the rpc topic senlin-engine uses is unique on the rabbit_host
13:41:17 <eldon_> topic is senlin
13:41:42 <yanyanhu> it's senlin-engine I guess
13:42:03 <eldon_> https://github.com/EldonZhao/heat/blob/master/heat/rpc/client.py
13:42:41 <elynn> so the 'host' here actually means api host?
13:43:07 <yanyanhu> elynn, it's rpc server actually
13:43:15 <yanyanhu> let me find the code
13:43:27 <yanyanhu> http://git.openstack.org/cgit/openstack/senlin/tree/senlin/engine/service.py#n87
13:43:36 <yanyanhu> http://git.openstack.org/cgit/openstack/senlin/tree/senlin/engine/service.py#n124
13:44:13 <yanyanhu> http://git.openstack.org/cgit/openstack/senlin/tree/senlin/cmd/engine.py#n42
13:44:40 <yanyanhu> and this
13:44:40 <yanyanhu> http://git.openstack.org/cgit/openstack/senlin/tree/senlin/common/consts.py#n16
13:45:23 <yanyanhu> so we use both 'host' and 'topic' together to uniquely identify the rpc topic that a senlin-engine(may include multiple workers) listens to
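As an illustration of yanyanhu's point, the queue an engine listens on can be thought of as the 'topic' and 'host' values joined together (a sketch only: it assumes oslo.messaging's rabbit driver addresses a "<topic>.<server>" queue, and takes the topic value from yanyanhu's guess above; neither is verified against the senlin tree):

```python
# Sketch of how 'host' + 'topic' identify the rpc target (assumption:
# the rabbit driver uses a "<topic>.<server>" queue name).
ENGINE_TOPIC = 'senlin-engine'  # per the discussion above; not verified


def engine_queue(host, topic=ENGINE_TOPIC):
    """Queue a senlin-engine configured with this 'host' listens on."""
    return '%s.%s' % (topic, host)


# Two engines configured with the same 'host' listen on one queue,
# so a senlin-api sending to that queue can reach either of them:
print(engine_queue('senlin-ctrl'))   # senlin-engine.senlin-ctrl
```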
13:45:35 <eldon_> yes.
13:45:44 <yanyanhu> and by default, the rpc request of a senlin-api will be sent to this place
13:46:05 <elynn> okay, if it's rpc server here, then we should use rabbitmq_host option?
13:46:24 <yanyanhu> so I mean if you want two senlin-engines to receive rpc requests from one api, you can configure them to listen to the same rpc server.topic
13:46:33 <yanyanhu> if it is allowed by oslo.messaging :)
13:46:47 <elynn> So if rabbitmq and api aren't placed on one host, we must set host to the rabbitmq host, right?
13:47:15 <yanyanhu> we should set "rabbit_host" to the ip of server where rabbit server is located
13:47:21 <eldon_> yanyanhu, our cinder project uses your way to solve the problem.
13:47:37 <yanyanhu> host and rabbit_host are two different config  options :)
13:47:54 <eldon_> they configure the same host in three cinder-volume servers.
13:48:00 <yanyanhu> eldon_, nice, that means oslo.messaging allows this kind of configuration :)
13:48:35 <eldon_> yes. that's just like removing the server key. HAHA
13:48:48 <yanyanhu> eldon_, er, you can say that :)
13:49:01 <yanyanhu> since if we remove key of rpc server
13:49:30 <yanyanhu> there is risk for people who just deploy one senlin-api and one senlin-engine :)
13:49:34 <eldon_> But compared with nova-compute, it's not the exact way to solve it.
13:49:40 <yanyanhu> they can't get any benefit
13:49:47 <yanyanhu> eldon_, exactly
13:49:52 <yanyanhu> the design is different
13:50:02 <elynn> So if we have 2 api and one engine, what will it be...
13:50:11 <yanyanhu> nova-api always knows which nova-compute it is talking with
13:50:27 <yanyanhu> nova-api sends requests to specific nova-compute based on scheduling result
13:50:39 <yanyanhu> not broadcast to let them compete :)
13:50:54 <eldon_> if we configure the host of nova-computes the same, vms can't be built.
13:50:59 <yanyanhu> but in senlin's design, those engines will compete to get the task
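The "compete" behavior yanyanhu mentions can be mimicked with a plain in-process queue (a toy model only: queue.Queue stands in for the shared rabbitmq queue and two threads for two senlin-engines; no oslo.messaging is involved):

```python
import queue
import threading

# Shared queue standing in for the single "server.topic" rpc target.
rpc_queue = queue.Queue()
handled = {'engine-1': 0, 'engine-2': 0}


def engine(name):
    # Each "engine" competes for requests; every request is consumed
    # by exactly one of them, never both.
    while True:
        req = rpc_queue.get()
        if req is None:          # shutdown signal
            break
        handled[name] += 1
        rpc_queue.task_done()


workers = [threading.Thread(target=engine, args=(n,)) for n in handled]
for w in workers:
    w.start()
for i in range(100):             # the "senlin-api" sends 100 requests
    rpc_queue.put(i)
rpc_queue.join()                 # wait until every request is handled
for _ in workers:
    rpc_queue.put(None)
for w in workers:
    w.join()

# 100 requests handled in total, each by exactly one engine.
print(sum(handled.values()))     # 100
```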
13:51:06 <yanyanhu> eldon_, yes
13:51:11 <yanyanhu> I guess so
13:51:25 <yanyanhu> that doesn't make sense for nova :)
13:51:33 <yanyanhu> since nova has scheduler
13:51:39 <yanyanhu> but we don't need :)
13:51:48 <eldon_> So, if our feature becomes complex, we can't set  engine host the same with each other.
13:52:02 <eldon_> yes.
13:52:24 <yanyanhu> by "feature becomes complex" you mean?
13:52:43 <yanyanhu> oh, you mean send rpc request to specified senlin-engine?
13:52:49 <yanyanhu> if so, yes
13:52:58 <yanyanhu> if so, we need scheduler kind of thing
13:53:11 <yanyanhu> to make the decision before senlin-api sends rpc request out to engine(s)
13:53:14 <eldon_> if we need a scheduler... er, we don't need one~
13:53:52 <eldon_> So, there is no problem now~
13:54:04 <yanyanhu> but currently I guess we don't need it, since senlin-engine is not a worker doing a local job on the host it runs on
13:54:21 <eldon_> yes. I see:)
13:54:33 <yanyanhu> eldon_, so basically, there are two different designs :)
13:54:45 <yanyanhu> for senlin-api/engine and nova-api/compute :)
13:55:00 <eldon_> yes.
13:55:20 <yanyanhu> eldon_, nice, plz make more tries and we can make further discussion about this topic
13:55:24 <eldon_> So we can make host the same to solve my problem:)
13:55:28 <yanyanhu> it's very interesting and also very important
13:55:41 <yanyanhu> eldon_, yes, it can be the current solution
13:56:14 <yanyanhu> ok, thank you so much for starting this topic, eldon_ :)
13:56:40 <yanyanhu> ok. last 4 minutes
13:56:45 <yanyanhu> for open discussion
13:56:49 <eldon_> Now, cmcc uses lb_policy and basic scaling policies in our project. And rally is also in use for testing.
13:56:52 <yanyanhu> #topic open discussion
13:56:59 <yanyanhu> eldon_, great
13:57:20 <yanyanhu> looking forward to your results and feedback :)
13:57:24 <yanyanhu> any question, just ping us
13:57:52 <yanyanhu> oh, for rally plugins, I think the ones in the senlin repo work
13:58:03 <yanyanhu> you can have a try with latest code
13:58:19 <eldon_> ok
13:58:23 <yanyanhu> :)
13:58:35 <yanyanhu> last two minutes
13:58:50 <yanyanhu> I planned to make a discussion about summit proposal
13:58:58 <elynn> ...
13:59:03 <yanyanhu> but I think we need to postpone it to next meeting :P
13:59:13 <elynn> we can talk about it in senlin channel
13:59:18 <yanyanhu> we have one more week to think :)
13:59:18 <elynn> or propose it
13:59:20 <yanyanhu> sure
13:59:43 <yanyanhu> ok, that's all. thank you guys for joining, elynn , haiwei , eldon_
13:59:51 <yanyanhu> ttyl
13:59:55 <elynn> thanks
13:59:58 <elynn> good night
13:59:58 <haiwei> thanks
14:00:02 <yanyanhu> have a nice day
14:00:05 <yanyanhu> good night
14:00:06 <eldon_> thanks a lot:)
14:00:11 <eldon_> night
14:00:16 <yanyanhu> my pleasure :)
14:00:19 <yanyanhu> #endmeeting