00:00:34 <thinrichs> #startmeeting CongressTeamMeeting
00:00:35 <openstack> Meeting started Thu Jul 21 00:00:34 2016 UTC and is due to finish in 60 minutes.  The chair is thinrichs. Information about MeetBot at http://wiki.debian.org/MeetBot.
00:00:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
00:00:37 <ekcs> hi all
00:00:39 <openstack> The meeting name has been set to 'congressteammeeting'
00:00:45 <aimeeu> helloooo
00:00:51 <thinrichs> hi
00:01:30 <masahito> hi
00:02:27 <thinrichs> Agenda for today...
00:02:34 <thinrichs> 1. Status updates
00:02:36 <thinrichs> 2. Mascots
00:02:39 <thinrichs> Anything else?
00:03:46 <thinrichs> #topic Status updates
00:03:58 <thinrichs> ekcs: want to start the status updates?
00:04:10 <ekcs> sure.
00:04:56 <ekcs> api routing patch in review. will rebase today. #link https://review.openstack.org/#/c/341904/
00:05:14 <ekcs> local leader for replicated PE merged. Thanks!
00:05:42 <ekcs> thread safety review ready to merge. #link https://review.openstack.org/#/c/335721/
00:07:05 <ekcs> working on persisting pushed data. it’s not completely straightforward because schemas can change a lot from DS to DS. My plan right now is to use a single table to store all pushed data from all tables in all DS.
00:07:55 <thinrichs> Does our push driver even accept arbitrary schemas?  Last time I worked on it, it didn't
00:07:58 <ekcs> each DS table’s data is a row in the DB, stored as JSON. downside is we can’t take as much advantage of differential update.
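A minimal sketch of the single-table approach ekcs describes, assuming SQLAlchemy 1.4+; the table name, column names, and sample data are all hypothetical, not Congress's actual schema:

```python
import json

from sqlalchemy import Column, String, Text, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class PushedData(Base):
    # One row per (datasource, table); all pushed rows for that table
    # are serialized together as a single JSON blob.
    __tablename__ = 'dsd_pushed_data'
    ds_id = Column(String(36), primary_key=True)
    table_name = Column(String(255), primary_key=True)
    data = Column(Text)  # JSON-serialized list of rows


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Each push rewrites the whole blob, which is why a differential
    # update on the wire can't be applied differentially here.
    session.merge(PushedData(ds_id='ds1', table_name='alarms',
                             data=json.dumps([['alarm1', 'CRITICAL']])))
    session.commit()
    row = session.get(PushedData, ('ds1', 'alarms'))
    print(json.loads(row.data))  # [['alarm1', 'CRITICAL']]
```

This avoids creating DB tables dynamically, at the cost of rewriting the whole blob on every push, which is the read/write-cost trade-off raised later in the meeting.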
00:08:22 <ekcs> that’s all my updates.
00:09:18 <ekcs> it doesn’t. but new drivers can have arbitrary schema. it becomes very messy and hacky to add new tables through sqlalchemy, say, whenever someone wants to use a custom DSD.
00:09:25 <ekcs> or changes the tables of an existing DSD.
00:09:59 <ekcs> happy to hear more thoughts on it!
00:10:01 <thinrichs> ekcs: agreed that we should handle it all at once.  More just remembering that we need to enhance the push driver
00:10:31 <thinrichs> Not sure what else we can do in terms of differential updates since the underlying DB doesn't support JSON structured data.
00:11:09 <thinrichs> Or…can we persist the translated version of the data, so that it's all relational?
00:11:10 <ekcs> yea. initially i thought it would be great for each DS table to be a DB table. but that doesn’t seem like a great idea.
00:11:34 <thinrichs> B/c then we're creating/deleting tables in the DB all the time?
00:12:40 <ekcs> that IS what i’m planning to do. persist the translated version. but yea then we’re adding and deleting tables all the time. there are hacky python black-magic ways to set the DB schema dynamically based on DSD classes, but not sure that’s a good idea.
00:13:20 <thinrichs> So it's a sqlalchemy problem?  The DB itself has no problem creating/deleting tables.
00:14:04 <ekcs> maybe.
00:14:18 <ekcs> at least partly sqlalchemy problem.
00:14:51 <ekcs> mostly I think.
00:15:19 <thinrichs> Without looking at SQLAlchemy, I'd have guessed we would take a prefix like 'dsd' and then every time a push datasource named P gets created we create the table 'dsd.P' in the database.
00:15:24 <ekcs> but not going through sqlalchemy loses DB compatibility.
00:15:57 <ekcs> thinrichs: yea something like that.
00:16:10 <thinrichs> So SQLAlchemy has no way to create tables?
00:16:51 <ekcs> there may be. I may need to look deeper at that because it’s a different side of sqlalchemy than what we’ve been using (ORM).
00:17:08 <masahito> oslo db supports creating tables with ORM.
00:17:42 <thinrichs> http://stackoverflow.com/questions/973481/dynamic-table-creation-and-orm-mapping-in-sqlalchemy
00:17:42 <ekcs> do you think it’s a good idea to dynamically extract from a DSD class the schema (including types) of the table and create DB tables?
00:18:34 <thinrichs> We could just require the PushDrivers to declare types.  In fact, we already have some mechanism for doing that, I think.
00:18:38 <ekcs> thinrichs: yea I read that thread, which seems to point people to sql soup as another layer over sqlalchemy, in order to do that.
00:18:49 <masahito> I don't think it's a good idea to add tables dynamically.
00:19:06 <thinrichs> masahito: why?
00:19:31 <masahito> For upgrading.
00:20:16 <masahito> oslo db manages the table schema for online schema upgrade now.
00:20:28 <thinrichs> ekcs: the top-rated answer on the SO message looks straightforward
00:21:25 <ekcs> thinrichs: no actually that answer just tells you to structure it so you add rows not tables.
00:21:30 <masahito> I thought if we added a dynamic table oslo db couldn't manage it.
00:22:04 <ekcs> masahito: hmm interesting i’ll need to look more at that. I would’ve thought we could just delete all the persisted push data on upgrades.
00:23:39 <thinrichs> ekcs: last line says "That's it, you now have your player table."  Seems like it's creating a new table with Python code by declaring the types of the columns and mapping it to a Python class.  We'd need to run similar code every time a datasource was instantiated.  (Not sure about the Class though.)
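The approach thinrichs sketches (declare column types in Python, create a `dsd_`-prefixed table when a push datasource is instantiated) could look roughly like this with SQLAlchemy Core; the function, the all-strings typing, and the sample datasource are hypothetical, and a real driver would declare proper column types:

```python
from sqlalchemy import Column, MetaData, String, Table, create_engine

metadata = MetaData()


def create_datasource_table(engine, ds_name, schema):
    """Create a 'dsd_<name>' table when a push datasource is instantiated.

    schema: ordered list of column names.  Every column is typed as a
    string here purely for illustration.
    """
    table = Table(
        'dsd_' + ds_name, metadata,
        *[Column(col, String(255)) for col in schema])
    table.create(engine, checkfirst=True)  # no-op if it already exists
    return table


engine = create_engine('sqlite://')
t = create_datasource_table(engine, 'P', ['id', 'state'])
with engine.begin() as conn:
    conn.execute(t.insert(), [{'id': '1', 'state': 'up'}])
    rows = conn.execute(t.select()).fetchall()
print(rows)
```

Deleting the datasource would correspondingly call `t.drop(engine)`. Whether oslo.db's migration tooling tolerates such out-of-band tables is exactly the upgrade question masahito raises below.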
00:23:46 <masahito> ekcs: oh. I didn't have the idea. If we can do it, there is no problem.
00:24:05 <thinrichs> masahito: upgrade is an interesting case.  If we're upgrading in place, we wouldn't want to throw away all the pushed data.
00:24:29 <thinrichs> What I don't know is whether oslo-db handles dynamic tables.
00:25:46 <thinrichs> For upgrade we should think thru that.  Those dynamically generated tables probably just wouldn't ever need to be changed (since if they did, the data would need to be transformed anyhow).
00:26:11 <thinrichs> And the name of the table is based on an entry from a separate table in the DB.
00:26:32 <thinrichs> So as long as the migration script didn't delete them, they should be fine.
00:27:25 <thinrichs> Anyway, this is an interesting topic.  I think we've got a few of the issues at least identified now.
00:27:32 <thinrichs> Thanks for taking this on ekcs!
00:29:10 <ekcs> yup.
00:29:30 <thinrichs> One last thought…I guess the downside to putting all the data into a single row is that it will be expensive to read/write.  I have no idea how expensive.
00:29:52 <thinrichs> Writing happens on every push; reading happens only when restarting Congress.
00:30:18 <thinrichs> Unless there's something else on this topic, aimeeu: want to give a status update?
00:30:24 <aimeeu> sure
00:30:26 <ekcs> yup. depends on how big the table is I guess.
00:30:45 <aimeeu> I'm stuck for now on the horizon plugin bug #link https://bugs.launchpad.net/congress/+bug/1602837 Spent 3 days on this and
00:30:45 <aimeeu> finally hit a wall. Looked through lots of code and read lots of documentation. Earlier today I sent an email to [openstack-dev][Congress] but have not pushed a patch set yet. I'd appreciate it if you all could read the email when you have time and offer suggestions. I'm not giving up, just putting it aside for a day or two.
00:30:45 <openstack> Launchpad bug 1602837 in congress "Policy UI (Horizon): Unable to get policies list (devstack)" [High,Confirmed] - Assigned to Aimee Ukasick (aimeeu)
00:31:03 <aimeeu> Picked up the HAHT overview and deployment guide documentation tasks.
00:31:04 <aimeeu> #link https://bugs.launchpad.net/congress/+bug/1600016
00:31:05 <aimeeu> #link https://bugs.launchpad.net/congress/+bug/1600017
00:31:06 <aimeeu> Also after I've finished the guides, I'd like to try the tempest tests, basic - #link https://bugs.launchpad.net/congress/+bug/1600021
00:31:06 <aimeeu> Also trying to keep up with code reviews. I do look at the more complicated ones but don't  understand enough yet to +/- 1
00:31:06 <openstack> Launchpad bug 1600016 in congress "HAHT - overview guide" [Low,New] - Assigned to Aimee Ukasick (aimeeu)
00:31:07 <openstack> Launchpad bug 1600017 in congress "HAHT - Deployment guide" [Medium,New] - Assigned to Aimee Ukasick (aimeeu)
00:31:09 <openstack> Launchpad bug 1600021 in congress "HAHT - tempest tests, basic" [Medium,New] - Assigned to Aimee Ukasick (aimeeu)
00:32:32 <aimeeu> That's all for me. Feeling a bit frustrated by my lack of progress but I am learning a lot.
00:32:32 <thinrichs> Does the congress python client support keystone v3?
00:32:43 <aimeeu> I thought it did
00:33:09 <aimeeu> I'll double check
00:33:11 <thinrichs> ramineni knows best, I think
00:33:29 <thinrichs> Seem to remember that it does, but worth double-checking
00:33:46 <aimeeu> keystoneauth1>=2.7.0
00:35:11 <thinrichs> aimeeu: just a word of caution: tempest tests can be difficult because they get run in an environment that's not always easy to replicate
00:35:43 <aimeeu> thinrichs: OK. I'll keep that in mind and will not be offended if somebody else wants to take that task.
00:36:01 <thinrichs> Sounds like you've been busy!  Great!
00:36:54 <aimeeu> thinrichs: yes, and learning tons - it will all click soon
00:37:17 <thinrichs> aimeeu: let us know how we can help.  We all know how hard starting a new project can be.
00:37:48 <thinrichs> masahito: want to do a status update?
00:37:55 <masahito> sure
00:38:10 <masahito> custom resource agent and its guide are in review. #link https://review.openstack.org/#/c/342853/
00:39:20 <masahito> And I started to implement the lazy datasource function though the spec hasn't been approved yet.
00:39:29 <masahito> that's from my side
00:41:13 <thinrichs> masahito: looks like ekcs has another question or two on the spec
00:41:32 <ekcs> thinrichs: just clarification questions.
00:42:29 <masahito> I think yes is the answer to both questions.
00:43:26 <thinrichs> This is the one where we're (i) adding configuration to each datasource that records laziness for each table and (ii) adds an API call that updates the datasource config.  Right?
00:43:58 <masahito> right.
00:44:38 <ekcs> great.
00:44:47 <thinrichs> Why would we need a new update_from_datasource method then?
00:45:14 <thinrichs> Wouldn't we just modify the one that exists (for each datasource, perhaps) so that it only runs the translators that it needs to?
00:45:43 <ekcs> yes I think we are saying the same thing differently.
00:46:15 <masahito> oh, by a new update_from_datasource I meant modifying the existing method.
00:46:18 <thinrichs> ekcs: ok.  Just wanted to make sure we were all on the same page
00:46:23 <thinrichs> masahito: great
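A toy sketch of the lazy-table idea as just agreed — per-table laziness recorded in the datasource config, an API call to update that config, and update_from_datasource running only the translators it needs. All class and table names here are hypothetical, not the actual spec:

```python
class LazyDatasourceDriver:
    """Illustrative stand-in for a datasource driver with lazy tables."""

    def __init__(self, translators, lazy_tables=None):
        # translators: {table_name: callable that fetches and translates
        # that table's data from the external service}
        self.translators = translators
        self.lazy_tables = set(lazy_tables or [])  # per-table laziness config
        self.state = {}

    def update_config(self, lazy_tables):
        # Corresponds to the proposed API call that updates the
        # datasource config at runtime.
        self.lazy_tables = set(lazy_tables)

    def update_from_datasource(self):
        # Run only the translators for non-lazy tables.
        for table, translate in self.translators.items():
            if table not in self.lazy_tables:
                self.state[table] = translate()


driver = LazyDatasourceDriver(
    translators={'servers': lambda: [('vm1',)],
                 'flavors': lambda: [('m1',)]},
    lazy_tables=['flavors'])
driver.update_from_datasource()
print(sorted(driver.state))  # only 'servers' was translated
```

Marking 'flavors' lazy means its translator never runs until the config is updated to remove it from the lazy set, which matches the (i)/(ii) split thinrichs describes.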
00:46:47 <thinrichs> ekcs: are you happy with that spec now?  Shall we merge it after the meeting?
00:46:59 <ekcs> yup.
00:47:06 <thinrichs> Sounds good.
00:47:25 <thinrichs> No ramineni today, so I'm the last for the status update.
00:48:00 <thinrichs> I pushed a patch that makes distributed_architecture true by default
00:48:21 <thinrichs> Moved all the tests2 over to tests
00:48:30 <thinrichs> Removed all the code using distributed_arch
00:48:35 <thinrichs> #link https://review.openstack.org/#/c/344551/
00:48:51 <thinrichs> Still need to remove original dse/ folder.
00:49:05 <thinrichs> Seem to be some tests that are still importing it.
00:49:25 <ekcs> awesome.
00:49:26 <thinrichs> The good news is that all the unit tests pass
00:49:29 <thinrichs> py34 and py27
00:49:38 <thinrichs> The new_arch unit tests fail b/c there's no tests2
00:49:52 <thinrichs> I think I saw the devstack tests pass
00:50:23 <thinrichs> The new_arch devstack tests failed, but I haven't figured out why.  Looks to be a super-slow node.
00:50:39 <thinrichs> Does anyone know what the new_arch devstack tests are actually doing?
00:50:51 <thinrichs> Are they running all the same tempest tests but with distributed_arch set to true?
00:51:18 <masahito> yes, the test runs the same tempest tests.
00:51:19 <ekcs> thinrichs: I think so. with one or two disabled that we haven’t got around to supporting yet.
00:51:46 <thinrichs> Okay, so as long as the regular devstack tests pass, we should be good to go.
00:51:53 <masahito> the only difference is the distributed-architecture flag.
00:52:10 <thinrichs> I think ramineni may have re-enabled all those tempest tests in the new_arch
00:52:33 <ekcs> ok
00:53:49 <thinrichs> I need to rebase b/c of a merge conflict, and then I'll have gerrit rerun the devstack tests
00:54:06 <thinrichs> I'll add tests2 back, so the new_arch unit tests will pass.
00:54:17 <thinrichs> Hopefully then everything will be passing.
00:54:55 <thinrichs> It's a large change set, but it's all superficial changes
00:55:48 <ekcs> great.
00:55:52 <thinrichs> It'd be fine to split it up amongst everyone, so at least someone has looked at everything.
00:56:03 <thinrichs> Running short on time… one more agenda item.
00:56:17 <thinrichs> #topic Mascots
00:56:29 <thinrichs> Remember that we need to pick out mascots.
00:56:35 <thinrichs> ekcs: thanks for posting a suggestion...
00:56:37 <thinrichs> #link http://lists.openstack.org/pipermail/openstack-dev/2016-July/099413.html
00:57:01 <thinrichs> Our list so far is … Areopagus, salamander, raven, baboon
00:57:30 <thinrichs> ekcs and I seem to be in agreement that salamander is the best of the animal choices, followed by raven, followed by baboon.
00:57:49 <thinrichs> Areopagus is a big rock (right ekcs?)
00:58:00 <masahito> thinrichs: I agree the order.
00:58:10 <thinrichs> with a rich history and quite good choice for Congress
00:58:22 <thinrichs> Downside is that it's hard to spell and I'd never heard of it
00:58:26 <aimeeu> thinrichs: I as well on the order. I like Areopagus
00:58:47 <ekcs> thinrichs: right.
00:59:34 <thinrichs> Also, having a hard time imagining the logo (though isn't there some insurance company in the US with a large rock as its logo?)
00:59:44 <aimeeu> Prudential
01:00:09 <thinrichs> #link https://www.prudential.com/
01:00:20 <thinrichs> Check out upper-left corner (probably better images somewhere)
01:00:27 <thinrichs> aimeeu: thanks!
01:00:47 <thinrichs> If we have any other ideas, add them to the mailing list.
01:00:59 <thinrichs> But for now we seem to have consensus on the order above.
01:01:02 <thinrichs> Out of time.
01:01:04 <thinrichs> Thanks all!
01:01:18 <thinrichs> #endmeeting