00:00:45 <thinrichs> #startmeeting CongressTeamMeeting
00:00:46 <openstack> Meeting started Thu Feb 18 00:00:45 2016 UTC and is due to finish in 60 minutes.  The chair is thinrichs. Information about MeetBot at http://wiki.debian.org/MeetBot.
00:00:47 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
00:00:49 <openstack> The meeting name has been set to 'congressteammeeting'
00:01:51 <ramineni1> hi
00:02:06 <thinrichs> ramineni, ekcs, pballand: courtesy ping
00:02:17 <ekcs> hi
00:02:36 <thinrichs> hi all
00:02:53 <pballand> hi
00:02:58 <thinrichs> Short agenda this week.  Just want to go through statuses and discuss progress on the distributed arch.
00:03:32 <thinrichs> Who wants to go first and tell us what you've been up to?
00:04:11 <thinrichs> ekcs: how about you?
00:04:21 <ekcs> sure
00:04:52 <ekcs> I've added to each heartbeat payload the list of target:tables each node subscribes to. WIP because tests are incomplete.
00:04:53 <ekcs> https://review.openstack.org/#/c/281586/1
00:04:54 <ekcs> Next step is changing the policy engines to use that information. Not completely straightforward because the current trigger mechanism is dependent on a handler being called each time a subs/unsubs occurs.
00:04:54 <ekcs> Straightforward migration would make dseNode generate handler calls as it observes change in subscribed tables owned by each service. Seems inefficient, but it seems like the only solution short of dismantling the policy engines.
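A minimal sketch of the migration ekcs describes, in Python; the class and method names (SubscriptionTracker, handle_heartbeat, on_subscribe, on_unsubscribe) are illustrative assumptions, not the actual DSE2 API:

    # Each heartbeat carries the peer's full target:table subscription list;
    # the receiving node diffs it against the last known state and turns the
    # result back into the subscribe/unsubscribe handler calls that the policy
    # engine's existing trigger mechanism expects.
    class SubscriptionTracker(object):
        def __init__(self):
            self._peer_subs = {}  # peer_id -> set of 'target:table' strings

        def handle_heartbeat(self, peer_id, subscribed_tables):
            new = set(subscribed_tables)
            old = self._peer_subs.get(peer_id, set())
            for table in new - old:
                self.on_subscribe(peer_id, table)
            for table in old - new:
                self.on_unsubscribe(peer_id, table)
            self._peer_subs[peer_id] = new

        def on_subscribe(self, peer_id, table):
            pass  # hook: call the existing subscription handler

        def on_unsubscribe(self, peer_id, table):
            pass  # hook: call the existing unsubscription handler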
00:05:45 <ekcs> Also fixed a simple bug I came upon during the heartbeat work. https://review.openstack.org/281518
00:05:50 <thinrichs> Why would the handler calls be inefficient?  Once per subscription, right?
00:06:45 <ekcs> yes, but the node needs to keep taking deltas on every heartbeat to determine when to call the handlers.
00:07:03 <thinrichs> What if the heartbeat contained deltas?
00:07:36 <thinrichs> I even wondered if we should publish subscribers as a special table and leverage the delta computation for regular tables.
00:08:08 <thinrichs> Or are deltas a problem for heartbeats?
00:08:45 <ekcs> yea that’s an idea I’ve been thinking about too. pballand you have any thoughts?
00:09:23 <pballand> I’m confused...
00:09:31 <pballand> what is the inefficiency?
00:09:48 <pballand> looking for new nodes to publish data to?
00:10:40 <ekcs> computing what has changed in the list of subscribed tables each time a heartbeat is received.
00:11:03 <ekcs> in order to update the publish triggers in say agnostic.
00:11:44 <pballand> each node knows what tables it can publish; it maintains a list of subscribers for each table… right so far?
00:12:12 <ekcs> I think it’ll be fine, but not most elegant. the more general question is whether there is a problem with sending diffs over heartbeat.
00:12:27 <ekcs> yes basically.
00:12:49 <ekcs> right now I have it maintaining whether a table HAS subscribers. but can easily do it differently.
00:12:52 <pballand> so when a node receives a heartbeat, it updates the publish targets for each table… what is the problem?
00:14:18 <ekcs> it’s not a problem. I think it’ll work fine and it’s what i’m going for right now. just inelegant because we translate a diff signal into a state signal and then back into a diff signal.
00:14:26 <thinrichs> pballand: are there ordering issues with heartbeats?  Do the heartbeats always get delivered?
00:14:30 <pballand> for table in peer_subscriptions: if self.has_table(table): self.update_subscriber(table, peer)
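Spelled out, pballand's one-liner is roughly the following sketch; the class and method names are illustrative, not the actual DataService interface:

    class PublisherSketch(object):
        def __init__(self, published_tables):
            # table name -> set of peer ids currently subscribed
            self._subscribers = {t: set() for t in published_tables}

        def has_table(self, table):
            return table in self._subscribers

        def update_subscriber(self, table, peer):
            self._subscribers[table].add(peer)

        def on_heartbeat(self, peer, peer_subscriptions):
            # update publish targets for every table this node owns
            for table in peer_subscriptions:
                if self.has_table(table):
                    self.update_subscriber(table, peer)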
00:14:38 <pballand> thinrichs: yes, no
00:15:00 <pballand> nodes can come up and down at any time - they can certainly miss updates
00:15:23 <pballand> I don’t see what problem deltas are solving other than reducing a trivial amount of computation
00:16:20 <thinrichs> So it sounds that publishing all subscribers in every heartbeat and computing deltas is fine computationally, right pballand?
00:16:28 <thinrichs> ekcs: does that sound right to you?
00:16:38 <ekcs> thinrichs: I agree.
00:17:10 <ekcs> but ultimately I think it’ll be architecturally cleaner to use a special table that contains subscriptions, and use the general mechanism for syncing tables.
00:17:32 <ekcs> not sure if pballand sees any issues with that.
00:18:02 <pballand> ekcs: that sounds fancy :) no problem with that
00:18:22 <pballand> thinrichs: (right)
00:18:39 <ekcs> but in the meantime i’m just aiming to get the basic functionality done using heartbeat.
00:18:51 <pballand> (we’re talking a few dozen/hundred hash lookups + string comparisons - pretty trivial)
00:19:19 <ekcs> agreed.
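A hedged sketch of the special-table idea ekcs mentions above: expose a node's subscriptions as ordinary table rows so the generic table-delta machinery does the work. The table name and row shape here are assumptions for illustration:

    def subscriptions_as_rows(subscribed):
        # subscribed: iterable of 'target:table' strings; returns rows of a
        # hypothetical '_subscriptions' table
        return set(tuple(s.split(':', 1)) for s in subscribed)

    def table_delta(old_rows, new_rows):
        # the same to_add/to_delete computation regular tables already get
        return sorted(new_rows - old_rows), sorted(old_rows - new_rows)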
00:19:22 <ekcs> that’s all from me.
00:19:25 <thinrichs> ekcs: sounds good.  We can revisit later as we get a better handle on the pain points.
00:19:35 <thinrichs> Let's move on.
00:19:40 <thinrichs> ramineni: want to go next?
00:19:45 <ramineni1> sure
00:20:07 <ramineni1> the exception-related bug is complete, now I guess all the tests are enabled in api-models and working fine
00:20:37 <ramineni1> I'm looking into migrating the datasource model
00:20:40 <ramineni1> now
00:21:17 <thinrichs> ramineni1: what's your plan for the datasource model?  In particular for creating/deleting datasources.
00:21:19 <ramineni1> thinrichs: I saw your comment on the patch, about moving into dse node
00:22:18 <thinrichs> ramineni1: I asked partly for me but also for everyone else, so we can get everyone thinking about it.
00:22:46 <ramineni1> thinrichs: I'm thinking about keeping the manager as a separate file and adding a reference to dsenode,
00:23:11 <ramineni1> thinrichs: now according to the harness change, all api models will have a node attribute right
00:24:04 <thinrichs> ramineni1: each api model will have a reference to the DataService they live inside.
00:24:09 <thinrichs> which is a little weird,
00:24:39 <thinrichs> but it seems clear that the API models need to send RPC calls, either to other dataServices or to DseNodes.
00:25:21 <thinrichs> So yes each api model will have a reference to its service, and each service has a reference to its node, so the API can invoke rpc calls on DseNodes.
00:26:15 <pballand> that sounds reasonable to me - what do you find weird about that thinrichs?
00:27:34 <thinrichs> Just the usual OO stuff: API model is a member of DataService, and DataService is a member of DseNode, but API model also has a reference to both the DataService and DseNode that contain it.
00:28:18 <pballand> the model doesn’t need a ref to the node, right?
00:28:29 <pballand> (agreed that having a ref to both is weird)
00:29:02 <ekcs> thinrichs: I thought API model is subclass of DataService?
00:29:05 <pballand> the service can have methods to invoke RPCs to hide the fact that it’s calling the node to do it
00:29:36 <thinrichs> pballand: That's what we're doing with the RPCs that go to other DataServices.
00:29:59 <thinrichs> pballand: with the create/delete datasource, however, the natural place is to put that code into the DseNode (not the service), which means the APi-model is RPCing into the Node.
00:30:01 <ramineni_> back, sorry, power failure
00:30:11 <thinrichs> ekcs: not any more.
00:30:31 <pballand> thinrichs: right - any concerns with exposing a method on service for invoking node rpcs?
00:31:14 <thinrichs> pballand: nope—that seems to be the right abstraction: having a dataservice_rpc and a dsenode_rpc (or something with better names) defined in the API-model base class.
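A minimal sketch of that base-class abstraction, assuming hypothetical method names (dataservice_rpc, dsenode_rpc) and a simple rpc/node_rpc interface on the containing service; the real names may differ:

    class APIModelSketch(object):
        # illustrative only; the real base class lives in congress/api/base.py
        def __init__(self, name, bus=None):
            self.name = name
            self.bus = bus  # the DataService this model lives inside

        def dataservice_rpc(self, service_id, method, kwargs):
            # RPC to another DataService, routed through our containing service
            return self.bus.rpc(service_id, method, kwargs)

        def dsenode_rpc(self, method, kwargs):
            # RPC handled by the DseNode itself (e.g. create/delete datasource);
            # the service hides the node reference from the model
            return self.bus.node_rpc(method, kwargs)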
00:31:39 <thinrichs> ekcs: we didn't think that having each API model as a separate dataservice made sense.
00:31:49 <pballand> thinrichs: ok, thanks for clearing that up for me
00:32:01 <thinrichs> Though now is the time to debate that.
00:32:07 <thinrichs> pballand: np
00:32:48 <thinrichs> Maybe we should talk that through quickly...
00:33:12 <thinrichs> Two options for the API-models and how they are deployed on the DSE2....
00:33:33 <thinrichs> 1. Each API-model is its own DataService
00:33:45 <thinrichs> 2. All API-models are encapsulated within 1 DataService
00:34:09 <thinrichs> pballand: correct me if I'm wrong...
00:34:49 <pballand> I had assumed 2; what’s the advantage to 1.?
00:34:59 <thinrichs> Putting each API model into its own DataService is awkward given that there is 1 router that describes how to map HTTP requests to API-models.
00:35:46 <ekcs> makes sense.
00:35:50 <thinrichs> Here's a change where we're moving to (2)
00:36:17 <thinrichs> pballand: I had assumed (2) as well, but we had (in my mind temporarily) implemented (1)
00:37:06 <thinrichs> ramineni: any thoughts about option 1 versus 2?
00:37:25 <thinrichs> (Before I forget again, masahito let me know he's on a plane to Tokyo right now, which is why he couldn't make it.)
00:37:51 <ramineni_> thinrichs: ya, 2 makes more sense, but does your harness change as proposed now cover that
00:37:53 <ramineni_> ?
00:38:13 <ramineni_> thinrichs: I thought you were passing a reference to the node it is registered with
00:39:01 <thinrichs> ramineni_: I think my patch covers that—changes the base class of base.APIModel to object and then passes a reference to the DataService in the constructor.
00:39:12 <thinrichs> #link https://review.openstack.org/#/c/280424/3/congress/api/base.py
00:39:33 <thinrichs> (I'm planning on changing self.bus to self.service or the like.)
00:40:06 <thinrichs> So then if we move the create/delete datasource logic into DseNode, …
00:40:23 <thinrichs> we'll add an invoke_node_rpc (or similar) to that file and call it from the datasource_model
00:40:37 <thinrichs> whenever we need to create/delete a datasource.
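Put together, the datasource model might then look roughly like this; invoke_node_rpc and the RPC method names are assumptions taken from the discussion, not the merged code:

    class DatasourceModelSketch(object):
        # illustrative only; the real model is congress/api/datasource_model.py
        def __init__(self, name, bus=None):
            self.name = name
            self.bus = bus  # containing DataService, per the base.py change above

        def add_item(self, item, params, context=None):
            # the API model never touches the DseNode directly; the containing
            # service forwards the call to the node that creates the datasource
            return self.bus.invoke_node_rpc('create_datasource', {'config': item})

        def delete_item(self, id_, params, context=None):
            return self.bus.invoke_node_rpc('delete_datasource', {'datasource_id': id_})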
00:41:12 <ramineni_> thinrichs: oh, got it
00:41:21 <pballand> sounds good to me
00:41:42 <ramineni_> thinrichs: will change accordingly
00:41:42 <thinrichs> ekcs: what do you think?
00:42:34 <ekcs> thinrichs: seems to make sense.
00:42:50 <ekcs> so each model has a ref to the containing service.
00:42:56 <thinrichs> pballand: want to look at my change to see if it's what you were moving toward?  Anything that's going to bite me later?
00:42:59 <thinrichs> ekcs: yep
00:43:09 <ekcs> and then it can get a ref to the containing node through the containing service.
00:43:58 <ekcs> but we’re encapsulating interaction with the node within a method.
00:43:59 <thinrichs> ekcs: yep
00:44:13 <thinrichs> So it should all be hidden from the model.
00:44:26 <ekcs> sounds sensible.
00:45:31 <thinrichs> That's a pretty natural segue to my status update…
00:46:16 <thinrichs> The patch I linked to earlier is trying to replace the old harness with a new harness ...
00:46:43 <thinrichs> (where we initialize the message bus, add the policy engine, API, previously configured datasources, etc. to the bus).
00:47:06 <thinrichs> Seems to be going mostly okay.
00:47:42 <thinrichs> For testing, I realized I should be porting (or at least taking inspiration from) the old test_congress.
00:47:47 <thinrichs> So the idea for the tests is….
00:48:03 <thinrichs> Use harness to spin up a DseNode along with the policy engine and API.
00:48:18 <thinrichs> Use API calls to create (fake) datasources…
00:48:26 <thinrichs> Use API calls to create policies, add rules, etc.
00:48:44 <thinrichs> The goal is not to have comprehensive API tests, but rather to make sure everything is hooked up right
00:48:46 <thinrichs> on the bus.
00:49:25 <thinrichs> The only bit that we won't be testing is the web server that maps HTTP requests down to the API models.
00:49:43 <thinrichs> (Above I should have said 'Use API-models to create (fake) datasources')
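Roughly, the test shape thinrichs is describing; module paths and helper names such as harness.create2 and the api-model keys are assumptions for illustration, not the actual port of test_congress:

    from congress import harness
    from congress.tests import base

    class TestDse2Wiring(base.TestCase):
        def setUp(self):
            super(TestDse2Wiring, self).setUp()
            # spin up a DseNode with the policy engine and API models on the bus
            self.services = harness.create2(node_id='test-node')
            self.api = self.services['api']

        def test_wiring(self):
            # exercise the API models directly; no web server in the loop
            self.api['datasource'].add_item(
                {'name': 'fake', 'driver': 'fake_datasource'}, params={})
            policy = self.api['policy'].add_item({'name': 'alice'}, params={})
            self.api['rule'].add_item(
                {'rule': 'p(x) :- fake:q(x)'}, params={},
                context={'policy_id': policy['name']})
            # goal: verify everything is hooked up on the bus, not full API coverage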
00:50:05 <thinrichs> How does that testing sound?
00:50:31 <ramineni_> thinrichs: sounds good to me
00:50:42 <pballand> sounds good to me
00:51:09 <ekcs> makes sense.
00:51:52 <thinrichs> Once we get all the unit(ish) tests passing, hopefully that'll get us a long way toward a working system.
00:52:12 <thinrichs> masahito is working on the next level up from harness: getting the server started...
00:52:13 <thinrichs> #link https://review.openstack.org/#/c/280793/
00:52:54 <thinrichs> Once those 3 pieces are in place (server, harness, datasource-model), I think we might need to move to tempest-level tests.
00:53:07 <thinrichs> Does that sound right?
00:53:33 <ramineni_> yes
00:54:11 <ramineni_> but tempest doesn't change much right
00:54:45 <ramineni_> it should work as it is, with a changed config option
00:55:03 <thinrichs> I guess we'd be testing masahito's server change.
00:55:15 <thinrichs> But you're right that we're API compatible, so the devstack scripts should work...
00:55:38 <thinrichs> I guess we're not using any real rabbitMQ yet, so even that shouldn't change.
00:55:52 <ramineni_> ya
00:56:12 <thinrichs> Time check…4 minutes left.
00:56:23 <thinrichs> #topic open discussion
00:56:35 <thinrichs> Almost ran out of time.  Anyone have anything else to discuss?
00:59:31 <thinrichs> That's it for today then.  Thanks all!  Let's keep plugging away to get this all in place for Mitaka!
01:00:15 <ekcs> laters
01:00:19 <thinrichs> #endmeeting