14:00:33 <zhaochao> #startmeeting trove
14:00:34 <openstack> Meeting started Wed May  9 14:00:33 2018 UTC and is due to finish in 60 minutes.  The chair is zhaochao. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:39 <openstack> The meeting name has been set to 'trove'
14:00:50 <zhaochao> #topic roll call
14:00:58 <fanzhang> zhaochao o/
14:01:09 <zhaochao> fanzhang: hi
14:02:12 <zhaochao> looks like only two of us today...
14:02:14 <gang> o/
14:02:20 <zhaochao> gang: hi
14:02:21 <wangyao> o/
14:02:27 <gang> hi
14:02:30 <wangyao> hi guys
14:02:32 <zhaochao> wangyao: hi
14:02:38 <fanzhang> hi, so four people today :)
14:02:50 <zhaochao> yes
14:03:27 <zhaochao> let's wait for a few more minutes
14:04:24 <wangyao> It seems that everyone is busy recently
14:05:16 <fanzhang> yeah, I haven't spent much time on the trove community due to busy daily work. :(
14:05:39 <zhaochao> yes, there hasn't been much upstream activity in recent weeks
14:05:46 <wangyao> The day after tomorrow I'll be going on a business trip again
14:05:49 <zhaochao> OK, let's start
14:06:19 <zhaochao> #topic Rocky goal updates
14:06:36 <zhaochao> I still don't have much time working on trove last week
14:07:51 <fanzhang> May I know the freeze date of Rocky milestone-2?
14:08:00 <zhaochao> however I submitted a first try to migrate trove gate jobs to zuul v3 native
14:08:00 <zhaochao> #link https://review.openstack.org/#/c/566607/
14:08:09 <zhaochao> however it's still not passing
14:09:32 <wangyao> thanks for zhaochao's comments. I have updated this patch https://review.openstack.org/#/c/560373/
14:09:32 <zhaochao> fanzhang: let me check, we still have 1 month for that
14:09:33 <zhaochao> https://releases.openstack.org/rocky/schedule.html#r-2
14:10:03 <zhaochao> wangyao: thanks for working on that, I'll come back to it asap
14:11:07 <fanzhang> zhaochao thx, let me see if I can finish the bp on time #link https://blueprints.launchpad.net/trove/+spec/adapt-to-file-injection-deprecation-in-nova
14:11:57 <wangyao> thanks. Sorry for the slow progress.
14:12:52 <zhaochao> and Akihiro Motoki (one of the horizon cores) helped review the patches in trove-dashboard. I have already updated one of them, and will update the others according to the ongoing comments
14:13:02 <fanzhang> a few days ago I got the qcow2 image from wangyao, thanks for that. I hit an error while creating a trove instance: the nova instance was ACTIVE, but the looping call in trove that checks the instance state timed out. Still need some time to dig a little bit
14:13:37 <amotoki> zhaochao: feel free to ping me on #-horizon if you have questions or things to discuss
14:13:40 <zhaochao> fanzhang: thanks, it'd be great if we could get one bp implemented by then
14:14:21 <fanzhang> zhaochao sure, I'll try my best. 🤣
14:17:46 <fanzhang> ping... am I offline or what..
14:18:22 <zhaochao> amotoki: got that, usually I get notification mails from gerrit, it'd be great to discuss on gerrit. Thanks a lot for your help and review, :)
14:18:23 <zhaochao> fanzhang: no
14:18:41 <fanzhang> cool
14:19:02 <zhaochao> wangyao: it's ok, it looks like all of us have been busy with internal work recently
14:19:08 <amotoki> zhaochao: no problem. that's fine
14:19:41 <zhaochao> amotoki: thanks
14:20:58 <zhaochao> OK, that may be all from me last week. I'll try to finish the zuul v3 migration, and then begin with another global goal -- mutable configuration
14:21:38 <amotoki> zhaochao: I noticed your reviews when I explored django 2.0 support in horizon plugins including trove-dashboard. horizon team is evaluating what we can achieve in Rocky. Django 2.0 support is one of requests from Debian support (as you may know)
14:23:00 <amotoki> but it is not a good idea to expect too many changes to horizon plugins so we are exploring a balance. thanks.
14:23:59 <zhaochao> amotoki: I'm not familiar with Django; how could we help with the Django 2.0 support in trove-dashboard?
14:24:51 <zhaochao> and any more updates from you, fanzhang wangyao gang ?
14:25:11 <fanzhang> zhaochao nothing here.
14:25:27 <wangyao> not for me
14:26:01 <amotoki> zhaochao: some backward compatibilities for django <1.11 were dropped in django 2.0
14:26:28 <amotoki> zhaochao: so we need to clean up old usages to support later versions of django.
14:26:44 <zhaochao> I think gang may still suffer the network issues, let's move to the open discussion
14:27:05 <gang> nothing for me, I just added some comments on your reviews of the cluster-supporting spec.
14:27:08 <amotoki> zhaochao: I can propose a change on django 2.0 support itself, but it seems there are some fixes (like what you are working on) before it.
14:27:40 <zhaochao> #topic Open discussion
14:27:44 <amotoki> I don't want to interrupt trove meetings with horizon specific topic more. go ahead
14:28:51 <fanzhang> amotoki that's fine :)
14:29:44 <zhaochao> gang: most of us have left comments on the bp, but it seems we don't have much time for further investigation. do you have any new updates on it?
14:29:48 <fanzhang> zhaochao I noticed one thing you guys may be interested in. While creating a trove instance we may have to create a volume, but from the debug log we're using the cinder v1 api, even though even v2 is deprecated and endpoint_type in the config file is volumev2
14:31:22 <zhaochao> amotoki: it's fine. I'll try to catch up in trove-dashboard with the changes in horizon
14:31:33 <fanzhang> the current api version is v3, so it looks like v1 is out of date, though it can still be used for now.
14:32:22 <zhaochao> fanzhang: I hadn't paid attention to that, but I remember cinder v1 has been disabled by default for some time
14:33:15 <fanzhang> zhaochao yes, so when I tried to create a trove instance I got a 404 while creating the volume; I had to enable v1 and then it worked
14:34:45 <zhaochao> fanzhang: I checked common/cfg.py, the default value for cinder_service_type is volumev2, so maybe you specified cinder_url?
14:34:45 <gang> the version our company uses is kilo, so I didn't hit that problem...
14:35:10 <fanzhang> yes, just checked with my conf
14:35:25 <fanzhang> you can ignore me here 🤣
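For reference, the configuration mix-up above boils down to two options in trove.conf; this is a hypothetical illustration only (the option names cinder_service_type and cinder_url are the ones discussed from common/cfg.py; section placement and values are invented examples, not verified Trove defaults):

```ini
[DEFAULT]
# Catalog lookup: volumev2 selects the cinder v2 endpoint from the
# keystone service catalog.
cinder_service_type = volumev2
# If cinder_url is set, it bypasses the catalog lookup entirely; a v1
# URL here (invented example) would explain the 404 described above
# when the v1 API is disabled.
# cinder_url = http://controller:8776/v1
```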
14:35:59 <zhaochao> :)
14:37:42 <zhaochao> gang: for the cluster bp, I think the current reviewers are probably all we'll get, so we should make progress based on that (I don't know if wangyao has time to help review it)
14:39:30 <zhaochao> gang: we could reach conclusions on one or more topics in the bp, update it into a more specific one, and then begin implementing the approved parts
14:39:32 <wangyao> I will.
14:39:52 <zhaochao> wangyao: thanks, :)
14:39:57 <gang> zhaochao: Understood, as we discussed before: first add rolling_restart for all datastores, then rolling_resize. Is that ok?
14:40:34 <zhaochao> gang: yes, it's OK to me
14:40:37 <wangyao> that is my duty
14:41:10 <gang> And I will contact songjian to discuss this. He may have done some work already
14:41:27 <fanzhang> cool 🙂
14:41:29 <zhaochao> wangyao: :)
14:42:17 <zhaochao> gang: that would be great; it's better to discuss on gerrit, so we have a reference later
14:42:42 <wangyao> cool
14:43:04 <gang> zhaochao: OK
14:43:12 <zhaochao> gang: anyway, the updated bp will be the result of the discussion, I think that would be fine too
14:45:40 <gang> And I have a question: is there a good way to add a cluster status?
14:45:51 <gang> like instance status
14:47:17 <zhaochao> gang: do we have a cluster status property?
14:47:36 <fanzhang> that depends on how to define a cluster status.
14:47:47 <fanzhang> I don't think we have that property
14:48:30 <zhaochao> OK, I found it: we only have task_id in the database
14:48:39 <wangyao> yes
14:49:01 <fanzhang> MariaDB [trove]> describe clusters;
14:49:02 <fanzhang> +----------------------+--------------+------+-----+---------+-------+
14:49:02 <fanzhang> | Field                | Type         | Null | Key | Default | Extra |
14:49:02 <fanzhang> +----------------------+--------------+------+-----+---------+-------+
14:49:02 <fanzhang> | id                   | varchar(36)  | NO   | PRI | NULL    |       |
14:49:03 <fanzhang> | created              | datetime     | NO   |     | NULL    |       |
14:49:04 <fanzhang> | updated              | datetime     | NO   |     | NULL    |       |
14:49:05 <fanzhang> | name                 | varchar(255) | NO   |     | NULL    |       |
14:49:06 <fanzhang> | task_id              | int(11)      | NO   |     | NULL    |       |
14:49:07 <fanzhang> | tenant_id            | varchar(36)  | NO   | MUL | NULL    |       |
14:49:08 <fanzhang> | datastore_version_id | varchar(36)  | NO   | MUL | NULL    |       |
14:49:09 <fanzhang> | deleted              | tinyint(1)   | YES  | MUL | NULL    |       |
14:49:10 <fanzhang> | deleted_at           | datetime     | YES  |     | NULL    |       |
14:49:11 <fanzhang> | configuration_id     | varchar(36)  | YES  | MUL | NULL    |       |
14:49:12 <fanzhang> +----------------------+--------------+------+-----+---------+-------+
14:49:21 <wangyao> We currently judge it from all the instance states...
14:50:29 <gang> the instances table doesn't have this field either. It's a dynamic value.
14:50:34 <fanzhang> yep, so the cluster status depends on which strategy you're using: e.g. most instances ACTIVE means the cluster is ACTIVE, or they all have to be
14:50:49 <wangyao> that makes the api response very slow.
14:53:28 <zhaochao> could we just reuse the task_id, or add new properties for this purpose?
14:53:45 <zhaochao> but we need to correctly abstract the cluster status
14:53:54 <fanzhang> [root@f-control ~(keystone_admin)]# trove cluster-list
14:53:55 <fanzhang> +----+------+-----------+-------------------+-----------+
14:53:55 <fanzhang> | ID | Name | Datastore | Datastore Version | Task Name |
14:53:55 <fanzhang> +----+------+-----------+-------------------+-----------+
14:53:55 <fanzhang> +----+------+-----------+-------------------+-----------+
14:54:08 <fanzhang> Task Name could be BUILDING
14:54:25 <fanzhang> it may be useful.
14:55:42 <fanzhang> Actually, gang brings up a very interesting question that we may need some further investigation, looks like a future bp right now :)
14:56:05 <wangyao> ^_^
14:56:14 <gang> I tried to add a property to the Cluster class in cluster/models.py, but the performance doesn't seem so good.
14:56:38 <gang> if self.task_name == "NONE":
14:56:39 <gang>     instances_status = [instance.status
14:56:39 <gang>                         for instance in self.instances_without_server]
14:56:39 <gang>     if "ACTIVE" in instances_status:
14:56:39 <gang>         if "BACKUP" in instances_status:
14:56:39 <gang>             return ClusterStatus.BACKUP
14:56:41 <gang>         return ClusterStatus.ACTIVE
14:56:43 <gang>     else:
14:56:46 <gang>         return ClusterStatus.ERROR
14:56:48 <gang> else:
14:56:50 <gang>     if "BUILDING" == self.task_name:
14:56:52 <gang>         return ClusterStatus.BUILD
14:56:54 <gang>     if "RESTARTING_CLUSTER" == self.task_name:
14:56:56 <gang>         return ClusterStatus.REBOOT
14:56:58 <gang>     if "UPDATING_CLUSTER" == self.task_name:
14:57:00 <gang>         return ClusterStatus.UPDATE
14:57:02 <gang>     if "RESIZING_CLUSTER" == self.task_name:
14:57:04 <gang>         return ClusterStatus.RESIZE
14:57:29 <gang> since the status is based on the instance statuses and the cluster task_id.
14:57:30 <zhaochao> I think I understand the concern from wangyao and gang: task_id is only used to track operations on a cluster; if the user wants to know whether the cluster is healthy, we still have to check the status of all cluster member instances
14:58:51 <fanzhang> what if we add a cache layer for the statuses of all members? A periodic task would be useful to sync the status of cluster members. Just a quick thought here. If-elif-else just doesn't look that elegant.
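Putting the two ideas from the discussion together (a lookup table instead of the if-elif-else chain, plus aggregation of member instance statuses when no task is running), a rough sketch might look like the following. This is purely illustrative: the names TASK_TO_STATUS and cluster_status are invented here and are not Trove code, and the aggregation rules are only one possible policy.

```python
# Hypothetical sketch, not Trove code: derive a cluster status from the
# cluster's task name and its member instance statuses.

# A running cluster-level operation maps directly to a status.
TASK_TO_STATUS = {
    "BUILDING": "BUILD",
    "RESTARTING_CLUSTER": "REBOOT",
    "UPDATING_CLUSTER": "UPDATE",
    "RESIZING_CLUSTER": "RESIZE",
}


def cluster_status(task_name, instance_statuses):
    """Derive a cluster status (one possible policy, for illustration)."""
    if task_name != "NONE":
        # An operation is in progress; the task name decides the status.
        return TASK_TO_STATUS.get(task_name, "ERROR")
    if not instance_statuses or "ERROR" in instance_statuses:
        return "ERROR"
    if "BACKUP" in instance_statuses:
        return "BACKUP"
    if all(s == "ACTIVE" for s in instance_statuses):
        return "ACTIVE"
    return "ERROR"
```

A periodic task, as suggested above, could compute this value and store it in a cache or an extra column, so the API no longer has to poll every member instance on each cluster-list request.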
14:59:10 <gang> And I will add this to that bp for further discussion.
14:59:28 <fanzhang> really a cool feature
14:59:45 <fanzhang> 👍
15:00:22 <zhaochao> gang: wangyao: we may not have enough time to discuss the problem today. I agree with fanzhang, we could propose another bp about this; even just for discussion is ok
15:00:42 <wangyao> yeah
15:00:56 <gang> agree
15:00:57 <fanzhang> ok~ have a good night :)
15:01:04 <wangyao> bye~
15:01:12 <zhaochao> gang: adding to the existing bp is also fine, but a new one may be better; in the end we will split the existing one anyway
15:01:25 <zhaochao> ok, time is already up
15:01:37 <zhaochao> goodnight and thanks for everyone
15:01:43 <fanzhang> bye ~
15:01:45 <wangyao> thanks zhaochao
15:01:49 <gang> good night
15:01:55 <zhaochao> #endmeeting