14:10:18 <aostapenko> #startmeeting Magnetodb Weekly Meeting
14:10:19 <openstack> Meeting started Thu Feb 12 14:10:18 2015 UTC and is due to finish in 60 minutes.  The chair is aostapenko. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:10:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:10:24 <openstack> The meeting name has been set to 'magnetodb_weekly_meeting'
14:10:34 <aostapenko> Hi, all
14:10:54 <aostapenko> I will chair the meeting this time
14:12:58 <aostapenko> Hi, dukhlov, ikhudoshyn, achuprin_, keith_newstadt, miqui
14:13:09 <miqui> ...hello...
14:13:26 <dukhlov> hi Andrew
14:15:09 <aostapenko> We have no action items from the previous meeting and no special topics on the agenda, so I propose we walk through statuses
14:15:54 <aostapenko> dukhlov, what about your patch https://review.openstack.org/152513
14:16:11 <aostapenko> Moving to cassandra 2.1.2
14:17:09 <dukhlov> hm, I've run into some strange behavior
14:18:13 <aostapenko> #topic Moving to cassandra 2.1.2
14:18:16 <dukhlov> It works fine, but our tests run very slowly on the gate and sometimes don't even finish within the job timeout
14:18:41 <dukhlov> I'm troubleshooting this problem
14:18:48 <dukhlov> what I know now...
14:19:21 <dukhlov> the problem is only with test_list_tables
14:19:52 <dukhlov> there we create a few tables (5)
14:20:27 <dukhlov> and then, when the test is over, we clean up those tables
14:21:13 <dukhlov> magnetodb processes this request, sets the status to DELETING and creates a job for the async task manager
14:22:14 <dukhlov> then I see in the logs that this job is processed and DELETE FROM table_info WHERE tenant=<tenant> AND name=<table_name> is executed
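For reference, the cleanup job dukhlov describes boils down to a metadata delete like the one below — a minimal sketch using the Python cassandra-driver, with the keyspace name and contact point assumed for illustration; the actual MagnetoDB code path is more involved:

    from cassandra.cluster import Cluster

    # Assumed keyspace name and contact point, for illustration only.
    session = Cluster(['127.0.0.1']).connect('magnetodb')

    def remove_table_metadata(tenant, table_name):
        # The async job first marks the table DELETING, drops its data,
        # and finally removes the metadata row seen in the log above.
        session.execute(
            "DELETE FROM table_info WHERE tenant = %s AND name = %s",
            (tenant, table_name),
        )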
14:24:10 <aostapenko> list_tables has its own cleanup mechanism. I will try to assist you with this investigation
14:24:36 <dukhlov> but meanwhile our tests are calling describe table to check that the table is deleted, and somehow, even after the job executes, they keep getting back that the table is DELETING for 2 minutes
14:24:52 <dukhlov> and then somehow the table is gone at last
14:25:50 <dukhlov> why we have such a 2-minute delay - I still cannot understand, so I'm continuing the investigation
14:26:18 <aostapenko> that's our default timeout for table deletion in tempest
14:26:19 <dukhlov> aostapenko: I saw it
14:26:54 <dukhlov> but in case of a timeout it should raise an exception
14:27:07 <aostapenko> Are you sure the table is gone, and not in the DELETE_FAILED state?
14:27:19 <dukhlov> I am sure
14:27:28 <dukhlov> At least I think so
14:27:47 <aostapenko> Ok. Let's continue the investigation
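Roughly what the tempest-side wait discussed above looks like — a hedged sketch, not the real tempest helper; the describe_table call, the exception handling and the 120-second default are assumptions drawn from the conversation:

    import time

    DELETE_TIMEOUT = 120  # the 2-minute default delay mentioned above (assumed value)

    def wait_for_table_deletion(client, table_name,
                                timeout=DELETE_TIMEOUT, interval=1):
        # Poll describe_table until the table disappears or the timeout is hit.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                client.describe_table(table_name)
            except Exception:
                # Table no longer exists (e.g. a NotFound error): deletion is done.
                return
            # Table is still reported, typically in the DELETING state; keep polling.
            time.sleep(interval)
        raise RuntimeError("table %s was not deleted within %s seconds"
                           % (table_name, timeout))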
14:29:09 <aostapenko> #action dukhlov aostapenko Investigate problem with tempest tests in https://review.openstack.org/152513
14:29:41 <aostapenko> Anything else, dukhlov?
14:30:40 <dukhlov> not today
14:31:54 <aostapenko> Let's move on. miqui, what are you working on?
14:32:33 <miqui> nothing specific atm, more focused on learning some cassandra basics
14:32:40 <miqui> and getting my dev env to work
14:33:33 <aostapenko> miqui: thank you for your patches on table creation validation
14:35:13 <aostapenko> I'm still working on refactoring and extending the healthcheck request
14:36:12 <aostapenko> oh, excuse me. Those are not your patches. Thanks to vivekd :)
14:36:34 <miqui> ..no worries...
14:37:31 <aostapenko> Does anybody have anything to say?
14:37:41 <aostapenko> oh, hi, charlesw
14:38:08 <charlesw> Hi guys
14:38:38 <aostapenko> charlesw: Could you share a status of your notification system refactoring?
14:38:59 <charlesw> Yes
14:39:16 <aostapenko> please
14:39:49 <charlesw> It's close to done. Going through the notification refactoring comments from Dima; had some offline discussion.
14:40:25 <charlesw> Will send out an updated patch.
14:40:49 <aostapenko> charlesw: anything else you are working on?
14:41:17 <charlesw> For now, we will use the existing ceilometer doc for events. But we will need to update the ceilometer doc.
14:41:29 <charlesw> Does anyone know the process?
14:41:55 <charlesw> I have an internal project to integrate metrics into a portal.
14:42:23 <charlesw> I will need to convert health_check API results into metrics to be sent to StatsD
14:42:47 <charlesw> Was thinking about a daemon process to call health_check periodically
14:42:56 <aostapenko> charlesw: right now we have a problem with ceilometer integration. We need to switch to non-durable queues. I will send a patch soon
14:44:27 <charlesw> So the community work I have in mind next is to create such a daemon that periodically calls the health_check/monitoring API and converts the results to metrics
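A rough sketch of the kind of daemon charlesw has in mind, assuming an HTTP healthcheck endpoint and a local StatsD agent; the URL, metric name and polling period are illustrative guesses, not part of any existing MagnetoDB code:

    import time

    import requests
    import statsd  # assumes the 'statsd' client library is installed

    HEALTHCHECK_URL = "http://localhost:8480/healthcheck"  # assumed endpoint
    metrics = statsd.StatsClient("localhost", 8125, prefix="magnetodb")

    def report_health_once():
        resp = requests.get(HEALTHCHECK_URL, timeout=10)
        # Report overall API availability as a 0/1 gauge.
        metrics.gauge("healthcheck.api_ok", 1 if resp.status_code == 200 else 0)

    if __name__ == "__main__":
        while True:
            report_health_once()
            time.sleep(60)  # polling period is arbitrary here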
14:45:04 <aostapenko> #action aostapenko Make a patch to switch magnetodb to non-durable queues
14:45:05 <charlesw> If it's ok, I will create a blueprint
14:46:50 <aostapenko> #action charlesw Create a blueprint on periodically converting healthcheck API call results to metrics
14:47:01 <aostapenko> Hi nunosantos
14:47:33 <aostapenko> dukhlov, do you have any thoughts about that?
14:48:26 <aostapenko> charlesw, waiting for bp
14:48:31 <dukhlov> I think we are going in the same direction as the rest of OpenStack
14:49:07 <aostapenko> agree
14:49:07 <charlesw> dukhlov, could you be more specific?
14:49:38 <dukhlov> it looks like oslo.messaging does not support different configuration for different topics
14:50:05 <dukhlov> so we can only make all topics durable or make all topics not durable
14:50:43 <dukhlov> Now it should be an openstack-wide option to avoid compatibility problems
14:51:05 <dukhlov> so all topics should be durable or not durable
14:51:35 <miqui> question: don't different openstack projects configure rabbit in different ways?
14:51:38 <dukhlov> in devstack topics are not durable
14:51:55 <miqui> or do they agree on what type of queues to use (i.e. durable vs not)?
14:52:29 <charlesw> Should we go to oslo.messaging instead and ask for support for configuring durability per queue?
14:52:45 <dukhlov> miqui, yes, different projects have different configurations
14:52:56 <aostapenko> miqui: the ceilometer notification agent forces us to use its own configuration for the notification queue
14:53:06 <miqui> k, thanks...
14:53:07 <dukhlov> but different projects usually use the same topic
14:53:15 <miqui> ah k..
14:53:21 <dukhlov> for communication
14:54:01 <miqui> so then all of this depends on how ceilometer configures its queue, regardless of oslo, no?
14:54:40 <miqui> ceilometer has a messaging topology that everyone has to abide by, right?
14:56:20 <aostapenko> ceilometer declares exchanges for all other services, and for redeclaration the config (e.g. durability) has to be the same
14:56:21 <dukhlov> mmm, I don't fully agree with your terminology, but yes
14:56:39 <miqui> k..
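For context, the durability dukhlov describes is a single global oslo.messaging setting rather than a per-topic one; to the best of my knowledge the relevant knob looks roughly like this in the service config (option name and section should be checked against the oslo.messaging version in use):

    [oslo_messaging_rabbit]
    # Declare queues and exchanges as non-durable, matching the devstack
    # and ceilometer setup discussed above.
    amqp_durable_queues = False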
14:57:48 <aostapenko> Does anybody have anything to add?
14:58:56 <aostapenko> Ok, let's move to open discussion
14:58:59 <aostapenko> #topic Open Discussion
14:59:11 <miqui> am fine thanks....
15:00:53 <aostapenko> we are out of time. Does anybody have something to share or any questions?
15:01:23 <charlesw> somehow I received a cancellation notice for this meeting
15:01:56 <charlesw> just want to make sure the meeting is still on going forward
15:02:35 <aostapenko> charlesw, thank you. I'll figure this out
15:04:18 <aostapenko> So let's finish. Thank you, guys
15:04:26 <aostapenko> #endmeeting