20:00:28 #startmeeting trove
20:00:29 Meeting started Wed Sep 11 20:00:28 2013 UTC and is due to finish in 60 minutes. The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:33 The meeting name has been set to 'trove'
20:00:43 here
20:00:46 o/
20:00:47 o/
20:00:55 o/
20:00:56 o/
20:00:58 o^/
20:01:00 \o
20:01:00 7o7
20:01:01 o/
20:01:07 can someone ask grapex to join? ;)
20:01:13 o/
20:01:18 kevinconway: are you walking like an egyptian?
20:01:28 #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
20:01:28 lol
20:01:33 #link http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-09-04-20.02.html
20:01:33 I don't thinnk I can get my arms to make that shape
20:01:46 im thinking thats his hands
20:01:52 anyhooooooo
20:02:01 #topic action items
20:02:02 lets go
20:02:28 cp16net: did you add the db model to schedule_task?
20:02:49 sorry i've dropped the ball and i am picking up the pieces now
20:02:54 smh
20:02:54 grapex is coming back. sorry - i distracted him with something else
20:03:04 oh amytron!
20:03:04 i plan to update that by the end of the week.
20:03:09 for real this time.
20:03:10 my bad hub_cap
20:03:12 ok re-add plz cp16net
20:03:13 its on record.
20:03:22 cp16net: it was on record last wk!!!! ;)
20:03:26 #action cp16net add the db model to schedule_task
20:03:31 shhhhhh
20:03:34 :-P
20:03:38 cp16net: ping in #openstack-trove when you add it.
20:03:44 word
20:03:49 sweet
20:03:50 so, consistent JSON notation across API?
20:03:58 im on that, and i assume it has not happened?
20:04:05 o/
20:04:11 hub_cap: we talked about it.
20:04:19 it's a touchy subject
20:04:20 underscores underscores underscores
20:04:26 we decided hungarian notation is best
20:04:28 ok cool.
i know there was some talk of it
20:04:29 I *think* we said underscores not camel case
20:04:29 in by best balmer impersonation
20:04:33 which isn't that good
20:04:47 kevinconway: nice
20:04:50 motion to agree on underscores?
20:04:52 basically majority were of the opinion that we should stick with underscores.
20:04:58 sSize kevinconway?
20:05:07 err
20:05:11 iSize
20:05:17 undescores
20:05:22 underscores
20:05:27 underscores
20:05:30 Since that was what the python guidelines suggest and what we've been mostly using anyway (least deviation from current API)
20:05:30 +1 to underscore
20:05:33 are the underscores for json only or are we reforming xml to match?
20:05:43 you know this meetbot does voting well :)
20:06:03 I think we said new stuff would use underscores
20:06:15 kevinconway: id think it would be best ot use _ for both
20:06:15 yah this is for new API.
20:06:16 then v2 would be full refactor yes?
20:06:28 for sure
20:06:51 good deal
20:06:53 ok so moving on?
20:07:04 yes
20:07:05 yeah, it's looking like underscores to me. Move on.
20:07:12 #topic reducing pep8 ignores
20:07:16 i'll update the api next wiki
20:07:16 dmakogon_: thats u eh?
20:07:17 mine
20:07:25 +1
20:07:28 https://review.openstack.org/#/c/46063/
20:07:39 i' started to do that
20:07:48 so off the bat
20:07:58 are there any rules we _want_ to ignore?
20:07:59 so now trove has meaningless ignored rules
20:08:08 as in, we have consensus that XXXX is stupid and we will continue to ignore
20:08:09 my idea to reduce most of them
20:08:25 not any that I'm aware of.
20:08:31 i dont know what all those numbers mean by heart so i cant say
20:08:38 no, we need to reduce amount of igrored rules
20:08:43 i have a gist somewhere of all the codes we ignore and what they mean
20:08:46 but i lost it
20:08:58 kevinconway: whats your github username?
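[Editor's note: a minimal illustration of the underscore naming consensus reached above. The field names here are invented for the example and are not Trove's actual API fields.]

```python
# Illustrative only: the keys below are made up, but the convention
# (underscore keys, no camelCase) is the one agreed on for new API payloads.
import json

payload = {"volume_size": 2, "flavor_ref": "7"}   # underscores, not volumeSize
serialized = json.dumps(payload, sort_keys=True)
assert "flavorRef" not in serialized              # no camelCase keys sneak in
print(serialized)
```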
20:09:02 u can just keep pouring thru your old gists
20:09:06 http://pep8.readthedocs.org/en/latest/intro.html#error-codes
20:09:06 itd be nice to have
20:09:14 it was an openstack gist
20:09:19 Most of the ignored rules are in place cause we have code that would break the tests if they weren't. (And we wanted to gate on the tests)
20:09:33 LOL kevinconway ya u have like 3 gists
20:09:39 kevinconway: That gist'd be golden if you can find it.
20:09:46 SlickNik: ok
20:09:47 ok so then assuming we dont have any that are just stupid
20:09:53 im all for dmakogon_ tackling these
20:10:00 plz look @ his review
20:10:03 great work dmakogon_
20:10:07 thanks)
20:10:09 and NO MERGES till tomorrow!!!!!!!!!!!!!!!!!!!!!!!!!!
20:10:09 yeah lookin good
20:10:11 Go for it dmakogon_
20:10:22 SlickNik: thanks
20:10:24 SlickNik: vipul grapex NO MERGES!!!! tomorrow rc1 is cut
20:10:30 fine :P
20:10:33 just gonna randomly say that all day today
20:10:41 lol, mad hub_cap)))
20:10:48 ANGRY HUBCAP
20:10:53 heh
20:10:54 will it be separate branch?
20:10:56 hub_cap: So, we still just look at stuff and +2 it right? Because there's a ton of pull requests
20:11:03 #topic project/branch status
20:11:11 so isviridov_ (and all)
20:11:20 when RC1 is cut, icehouse will be "open for development"
20:11:28 which means RC1 will be cut to a branch
20:11:38 and trunk will be open for merging icehouse stuff
20:11:53 so, for now HavanaRC is closed
20:11:58 this is new
20:11:58 if we find critical havana bugs, ill have to manually backport them to RC1
20:11:58 so if thre is a critical issue in RC1, does gerit support pushing to that branch?
20:12:07 vipul: yes and no
20:12:17 ill have to do manual work but it goes thru gerrit yes
20:12:47 so checkout rc1 branch, push to gerrit.. and that's it?
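[Editor's note: reducing the pep8 ignores discussed above typically means shrinking the ignore list in the repo's tox.ini. The codes below are placeholders chosen for illustration, not the actual set Trove ignored; see the linked review for the real list.]

```ini
; Hypothetical before/after sketch of a [flake8] section in tox.ini.
[flake8]
; before: a long blanket list accumulated over time
; ignore = E125,E126,E128,H301,H306
; after: only the rules the team has consciously agreed to keep ignoring
ignore = H301
show-source = True
```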
20:12:58 its outlined somewhere
20:13:00 its not a ton of work
20:13:01 vipul: in common - yes
20:13:06 #link https://wiki.openstack.org/wiki/StableBranch
20:13:07 but if we have 50 bugs, then its a lot of work lol
20:13:19 thats why they request the stable projects take ~1 wk to do RC1
20:13:19 kk
20:13:20 thanks SlickNik
20:13:28 if you look @ other RC1's there are like 50+ bugs on them
20:13:44 hub_cap: vipul: SlickNik: we should re-target all BPs and Bugs
20:14:02 we should retarget realistically
20:14:05 Yea I don't thikn we have an icehouse target
20:14:08 if we think itll make i1 then yes
20:14:10 we do vipul
20:14:10 so those should be created first
20:14:15 icehouse-1 is out
20:14:15 nvm then
20:14:27 hub_cap: vipul: SlickNik:yes, that is why we have 'future' target
20:14:28 https://launchpad.net/trove/+milestone/icehouse-1
20:14:34 hub_cap: vipul: SlickNik:not even ice house
20:14:41 thanks hub_cap
20:14:41 looks like our friends @ mirantis are being awesome
20:14:49 and have already retargetted their bps
20:15:24 so if we think its going to be ok for i1, bring it up and we can retarget
20:15:24 hub_cap: you are welcome)
20:15:34 thx dmakogon_!
20:15:44 so tomorrow we will be in merge/rebase/hell
20:15:51 i think we could retarget out stuff manualy
20:15:58 the one with the highest bribe will get merged first
20:15:59 by ourselvs
20:16:05 dmakogon_: i think so if you own it
20:16:11 but only retarget if its realistic
20:16:21 i dont want to have to keep moving stuff out.
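[Editor's note: the manual backport workflow mentioned above ("checkout rc1 branch, push to gerrit") usually boils down to cherry-picking the fix onto the stable branch. The sketch below demonstrates the cherry-pick step in a throwaway repo; branch and file names are invented, and the final push to Gerrit (e.g. via git-review) is left out.]

```shell
# Demo of backporting a fix with cherry-pick, in a disposable repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo a > f; git add f; git commit -qm "initial"
git branch stable/havana                 # pretend this is the RC1 branch
echo fix >> f; git commit -aqm "critical fix"   # fix lands on trunk first
fixsha=$(git rev-parse HEAD)
git checkout -q stable/havana
git cherry-pick -x "$fixsha"             # -x records the original commit SHA
git log --oneline | head -2
```

In the real workflow the cherry-picked change would then be pushed to Gerrit for review against the stable branch, as outlined in the StableBranch wiki page linked above.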
id rather move more stuff in
20:16:43 hub_cap: vipul: SlickNik: for the next meeting we could collect a pack of BPs and bugs for retargeting
20:16:46 agreed hub_cap
20:17:00 ya we should take a pass dmakogon_, and talk to the people they are assigned to, if any
20:17:07 and try to move realistic items into i1
20:17:19 i had to do a lot of "move this to h2, move this to h3" this time around :)
20:17:23 Sure, i'll try to look at them between now an then
20:17:59 dmakogon_: We can talk about candidates, but ultimately it's up to the people doing them to target them to the correct milestone...
20:18:19 yes
20:18:31 the project is dependent on people getting paid by companies to do the work :)
20:18:33 hub_cap: vipul: SlickNik: i think we should organise our future job in next way - be already up-to-date with new BPs and Bugs
20:18:42 yup
20:18:45 unless you are an independent like me
20:18:54 vipul: :)
20:18:57 and me
20:18:59 heh
20:19:07 put the "nice ot haves" in "trove next", and the "waaaaay future" in "trove future"
20:19:12 ok good to move on?
20:19:13 that will lead us to focused development and planning
20:19:16 weve got a packed meeting
20:19:20 yes agreed dmakogon_
20:19:27 yup, agreed
20:19:36 good to move on
20:19:40 #topic secgroup perms/ownership
20:19:41 yes
20:19:47 #link https://review.openstack.org/#/c/44380/
20:19:51 which one ?
20:19:51 amcrn around?
20:19:54 yes
20:19:58 gogogo
20:20:08 just would like folks to read the review above, review the gist, and get a consensus
20:20:17 done :)
20:20:28 hah
20:20:31 not off the hook that easy
20:20:37 anything in particular you'd like to bring up
20:20:42 that may cause wrenches to be thrown
20:20:44 amcrn I'm all for this. But I had a question.
20:20:48 kinda agree with the comment there. the only port that should be available to open is the port the instance is listening on
20:20:52 SlickNik: sure, what's up?
20:20:58 Same as vipuls
20:21:04 now that he's mentioned it.
20:21:05 about review - looks good
20:21:09 (beat me to it)
20:21:18 There's a lot more involved than just the port, please review the gist.
20:21:19 well wait
20:21:30 imsplitbit: can tell u that MANY companies have compliance issues
20:21:34 and will NOT run mysql on 3306
20:21:35 period
20:21:38 PERIOD
20:21:42 exactly, hence the point in my gist :)
20:21:44 correct
20:21:53 Should we allow them to open up ports that the service of service_type is clearly not running on?
20:21:56 https://gist.github.com/amcrn/14501657c5a5e9ee78dd
20:22:05 amcrn: :P
20:22:07 SlickNik: Make that configurable, see ^^
20:22:22 hub_cap: vipul: SlickNik: before trove will have parameter groups, we should leave this question as it is
20:22:36 Well, the guest agent doesn't support running it on a diff port.
20:22:44 can u explain dmakogon_
20:22:48 SlickNik: yet
20:23:07 hub_cap: vipul: SlickNik: we still use default ports for services
20:23:19 sure but when configuration edits drops
20:23:20 with parameter groups you could pass a different port, etc.
20:23:21 agreed
20:23:22 And even if it did, you could update the port in the config to the port it was using.
20:23:22 all bets could be off
20:23:44 the suggestion in the gist will accomodate future changes like that
20:23:48 But the port info would still be tied to that instance
20:23:54 hub_cap: vipul: SlickNik: when (Amazon) paraters group implementation will be done, user could specify optional ports to be opened
20:24:05 so maybe we do this
20:24:13 instead of allowing the user to specify the port in the API, we should figure out a way to pick it from the instance
20:24:15 the api says "open the port the guest is listening on"
20:24:21 regardless of what port it is
20:24:23 and the guest syas "what port am i on", ok, "open it"
20:24:40 hub_cap / vipul ++
20:24:49 vipul: but ports are specified by configs
20:24:58 sure dmakogon_ and the guest will know that
20:25:13 ask the guest..
or push that info to the guest if need be
20:25:21 (on provision)
20:25:27 dmakogon_: it could be different for different service types.
20:25:28 vipul: configs of each service, and parameters groups allows user to configurate service as they want it
20:25:50 So only good way to do that is to check with the guest which will know its service type and have that info
20:25:56 Sure, but I think what we are saying is the Guest knows what that config is
20:26:08 vipul: +1 for pushing specific data to GA
20:26:11 well not _only_ good way, but _a_ good way.
20:26:41 amcrn: does that meet your needs?
20:26:50 just have a "open mah port"
20:26:56 Suggestion
20:26:59 SlickNik: you mean to store networking confs of each service type ?
20:27:40 dmakogon_: not networking confs, but probably ports that the service communicates on.
20:27:49 It's possible that it could be multiple ports.
20:27:56 depending on service type.
20:27:56 i fail to see how the suggestion doesn't fit the model I described
20:28:32 hub_cap: vipul: SlickNik: amcrn: my idea to (for now) keep it as it is, then, when params. group will come, specify ports in it
20:28:57 again, dkamogon, your suggestion doesn't preclude what I've described in the gist
20:29:03 you still need a way of specifying what ports are eligible
20:29:07 for which service_type
20:29:19 hub_cap: vipul: SlickNik: and trove should create sec.
rules for default port(as SlickNik sad) and custom spicified by user
20:29:47 amcrn: I think the one contention is the user is still the one specifying hte ports
20:29:51 amcrn: we need mechanism to register services
20:29:58 id really prefer to make this easier for the user by having a "create security group"
20:30:00 vipul: not true, the gist explains how that can be turned off
20:30:07 in it will be defined default port
20:30:12 for each service
20:30:13 and then letting the app figure out what ports are open
20:30:15 the guest knows
20:30:22 the guest owns the service
20:30:31 and the logic to find out what port its on
20:30:32 amcrn: apologize if it's already in there.. first time lookign at it
20:30:46 amcrn: lol do u expect us to do our homework?!?!?!!?!?!
20:30:49 ;)
20:30:54 :|
20:31:01 hub_cap: vipul: SlickNik: amcrn: GA could store ports as meta-data on instance
20:31:09 amcrn: reading the gist, I think the idea is exactly the same
20:31:37 ok lets do this
20:31:40 table this discussion
20:31:42 I think we're all in consensus, we just don't know it yet ;)
20:31:42 and read the gist
20:31:47 and talk about it tomorrow
20:31:48 except that the onus is on the guest to do the port config vs the api / taskmgr (the way it is today)
20:31:51 since amcrn knwos this stuff
20:31:58 amcrn: I suspect you are correct :)
20:31:58 i believe him
20:32:01 he says we are in consensus
20:32:06 the main thing is we should keep this a managed service
20:32:08 still, table it
20:32:13 and not have to let the user figure all this out
20:32:15 * hub_cap picks up the gavel
20:32:22 we have too much on agenda
20:32:27 next..
20:32:38 #topic MongoDB support
20:32:39 moving on
20:32:52 i thinks we
20:32:55 aha
20:33:01 so this came up last week
20:33:02 i think we're done with it
20:33:04 i like it!
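[Editor's note: a hypothetical sketch of the "ask the guest which port it listens on" model discussed in the secgroup topic above. All names here are illustrative, not Trove's actual API: the guest owns the service, so the security-group rules are derived from what the guest reports rather than from user-supplied ports.]

```python
# Assumed default ports per service_type; a compliance-conscious deployment
# may override them (e.g. MySQL moved off 3306).
DEFAULT_PORTS = {"mysql": [3306], "mongodb": [27017]}

class GuestAgent:
    """Owns the service, so it knows the (possibly overridden) port."""
    def __init__(self, service_type, config_overrides=None):
        self.service_type = service_type
        self.overrides = config_overrides or {}

    def listening_ports(self):
        # Parameter-group style overrides win over the service default.
        return self.overrides.get("ports", DEFAULT_PORTS[self.service_type])

def open_security_group(guest):
    # The API/taskmanager opens only what the guest reports, instead of
    # letting users request arbitrary ports.
    return [("tcp", port, port) for port in guest.listening_ports()]
```

With this shape, moving MySQL to a non-default port only requires the guest's config to change; the security-group rules follow automatically.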
lets do mongo
20:33:06 good
20:33:08 moving on
20:33:09 lol
20:33:16 )
20:33:28 main goal for Ice House - global refactoring
20:33:37 for pluggability
20:33:38 refactoring everything into globals?
20:33:42 lol
20:33:43 hehehe i kid i kid!!!!
20:33:44 +1 hub_cap
20:33:44 haha
20:33:48 lol
20:33:51 yes
20:33:58 period
20:34:00 but yes i agree we need refactoring first
20:34:01 PERIOD
20:34:04 global HEATing
20:34:10 lol nice
20:34:12 #topic virgo
20:34:17 so who came up w/ this doozie?
20:34:21 and dont say me
20:34:23 cuz i didnt
20:34:27 you did
20:34:28 i mightve tried to plant seeds
20:34:30 so many IRONY(c)
20:34:30 hub_cap
20:34:32 :-P
20:34:34 isviridov: wanted some clarification
20:34:36 but i didnt water them and they died
20:34:39 again from last week.
20:34:43 ok whats up
20:35:16 hup_cap, came from trove channel, seems from you. Please comment if it is on roadmap or something&
20:35:20 anything new to discuss ?
20:35:21 fwiw, i think we need to fundamentally rule out a python guest beffore we move to virgo
20:35:29 isviridov_ shhhhhhhh
20:35:33 lol
20:35:51 hub_cap, i'm all silence
20:35:53 http://summit.openstack.org/cfp/details/53
20:35:55 or any other lang
20:35:55 do we only need to rule out the standard python imll?
20:36:08 or can jython be suggested?
20:36:25 kevinconway: I like it
20:36:28 i would like to stay pure python
20:36:28 wait you arent running this whole infra in jython kevinconway?
20:36:35 what ahppened to the my little pony one
20:36:36 dmakogon_: lets pray hes joking
20:36:42 no, i'm running iron python
20:36:44 vipul: lady rainicorn still lives
20:36:45 my little pony one?
20:36:46 hub: ok :)
20:36:50 :)
20:36:55 oh, that was adventure time methinks
20:36:56 so i can use my vs2014
20:37:07 i need to put some feelers out between projects that use guests
20:37:12 see if there is enough overlap between them
20:37:23 or if we should just say screw it and make our own (outside trove) guest
20:37:40 I think the community already has plenty of those
20:37:47 My feeling is a guest by definition should be very small, so the basis of a common implementation is less important than a common interface.
20:37:50 hub_cap: vipul: SlickNik: do not forget about OpenStack TC acceptance
20:37:51 I'd love to see a semi-unified agent
20:37:58 dmakogon_: fair point
20:38:11 dmakogon_, +1
20:38:23 what's the clever name we would use for an openstack guest project?
20:38:28 i think that's the most important part
20:38:32 or just a common reference spec at the very least
20:38:42 kevinconway: obvi
20:38:45 imsplitbit: agreed
20:38:50 imsplitbit: i think a common implementation that works in production for all shoudl be the goal
20:38:59 ok so lets move on. ther is not much to do here for us in this meeting
20:39:02 we are all singing in concert
20:39:04 vipul +1
20:39:06 If kevinconway or someone wants to use a guest on a Windows OS maybe they'll even use a more VS2014-centric language to code it. :)
20:39:08 sans kevinconway and his iron jython
20:39:14 but grapex +100000 I think python isn't necessarily the best way to implement a guest agent, esp for really small vms/containers
20:39:20 kevinconway: burglar
20:39:26 +1 to moving on.
20:39:30 +
20:39:32 ++1
20:39:37 #topic trove refactoring
20:39:38 move on, thx hub_cap
20:39:41 dmakogon_: go go go
20:39:42 1++
20:39:45 imsplitbit: And once you get there, I think enforcing an implementation only seems like a good idea if you've never encountered other people's specific problems.
20:39:47 are these all from last wk?
20:39:54 yes
20:40:08 hub_cap: yep
20:40:14 we already doing some approved stuff
20:40:21 so, nothing new
20:40:33 i think we could move one
20:40:49 sounds good
20:40:54 ok good
20:41:02 ill clean this up (the meeting stuff)
20:41:03 for next wk
20:41:13 #topic moving guest into a new repo
20:41:17 hub_cap: sounds good
20:41:28 so we talked a while ago abotu moving the guest
20:41:32 this kinda goes back to topic -1
20:41:42 Yes, we discussed it in the past.
20:41:44 i think its still a good idea (tm)
20:41:46 hub_cap: vipul: SlickNik: i have something to say
20:41:50 dmakogon_: plz do
20:41:55 go on
20:42:06 I also think it's a good idea.
20:42:17 hub_cap: vipul: SlickNik: we could create sepparate setup for GA package
20:42:37 hub_cap: vipul: SlickNik: 2 setup.py and 2 setup.cfg in one repo
20:42:46 dmakogon_: In order to do that we need to move it out into a separate package
20:43:02 hub_cap: vipul: SlickNik: but still in one repo
20:43:03 dmakogon_: openstack ci doesn't support 2 setup.py's in the same package.
20:43:06 2 setup.py in same repo is not good i believe from mordred...
20:43:11 we could.. just wouldn't work too well with the tooling CI gives us
20:43:15 * hub_cap waits for mordred to magically appear
20:43:18 correct
20:43:23 With different repos, active development will we have double reviews in gerrit?
20:43:24 never fails.
20:43:28 * hub_cap waits for mordred to magically vanish
20:43:32 didn't even have to say his name three times
20:43:38 lol
20:43:45 heh
20:43:50 HAHAHAHA that was awesome
20:43:52 2 setup.py and 2 setup.cfg in one repo completely unsupported
20:43:58 good timing mordred
20:44:02 one repo == one release artifact
20:44:17 == one happy ci team
20:44:23 now - what are you talking about? :)
20:44:29 mordred, double reviews in gerrit?
20:44:34 mezcal
20:44:38 isviridov_: why would you do double reviews?
20:44:49 lets not rabbit hole this
20:44:53 its not supported
20:44:58 i trust thre are good reasons
20:45:00 hub_cap: vipul: SlickNik: mordred: we could use second setup at instance, while tests, CI builds trove with original setup.py
20:45:10 mordred: and team are super smart, so lets go w. them on this
20:45:38 I don't know if the arguments for keeping them in the same repo are > keeping them separate
20:45:41 mordred, If we are adding feature and changing core, and changing guest agent...
20:45:41 hub_cap: basically, python tooling is not good enough for this, it's confusing, it's hard to reason about, it breaks everything, and it will cause your children to lose all of their limbs
20:45:44 like a client, it can be a separate artifact. i think its a perfectly sane idea
20:46:06 oh god mordred but my sons only 10mo old! hes barely understanding his limbs
20:46:06 hub_cap: ++
20:46:15 hub_cap: yup. they'll fall off
20:46:20 shiiiiiiii
20:46:20 hub_cap +1 for separate artifact
20:46:24 isviridov_: sure, there may be those cases, but you ahve the same issue with troveclient when you chagne the trove API
20:46:30 so this was more about the timeline
20:46:34 not whether to do it
20:46:38 we decided to do it already :)
20:46:41 like 6mo ago
20:46:41 * mordred goes back into his hole
20:46:50 thx mordred, say hi to the other rabbits
20:46:56 squirrel!
20:47:00 and to Alise
20:47:03 HA
20:47:10 thanks mordred, say hi to clarkb for me :)
20:47:10 vipul, means not so critical. Thx
20:47:35 ok so my gift to the group
20:47:40 So, this kinda ties in with the trove conductor work that datsun180b's been working on
20:47:47 hub_cap: bbq?
20:47:49 ill make sure i get a better timeline for this
20:47:58 hehe kevinconway nice
20:47:59 we should shoot for icehouse
20:48:04 btw, what about conductor ?
20:48:07 we need to shoot for i1 vipul
20:48:15 even better
20:48:18 thats when we get to rip shit out like a mad scientist
20:48:31 all the big changes happen in 1 :)
20:48:33 i need a scalpel and duct tape STAT
20:48:43 do we need to discuss versions / service_types
20:48:48 or
20:48:48 guest_agent service registry
20:48:48 no
20:48:52 no
20:48:58 again i dont knwo if these are last wks
20:48:59 both were remnants of last week
20:49:01 last one interesting
20:49:03 pdmars: configuration management ya?
20:49:06 #topic configuration management
20:49:06 yes
20:49:11 #link https://wiki.openstack.org/wiki/Trove/Configurations
20:49:12 so i picked this bp up
20:49:12 go for it pdmars
20:49:23 mostly makes sense, but i have some questions
20:49:35 specifically about handling dynamic vs non-dynamic mysql vars
20:50:00 dynamic don't require a restart, non-dynamic do
20:50:48 i was thinking that when a configuration group is attached to an instance, it should set the dynamic vars and inform the user they need to restart to set non-dynamic
20:50:54 do others have thoughts/opinions on that?
20:51:17 i think thats a valid point... maybe having a message upon group creation too?
20:51:18 what about returning wehther one is dynamic or not in an API call
20:51:25 hub_cap: sure
20:51:30 so list of all available options
20:51:36 yeah, so that info is in /configurations/parameters
20:51:37 vipul: thre is some api around "avail options"
20:51:41 ok
20:51:42 pdmars: are you thinking of a state verification (like seen in a resize), force sending of ack=true, or ?
20:51:43 ya what pdmars says
20:51:45 it lists what you can change, what the bounds are, and if it's dynamic
20:52:02 pdmars: that is good
20:52:04 so the question is whether to rquire the user to issue a restart vs.
us doing it
20:52:13 amcrn: i think that maint windows would be good for restart but thats a ways off
20:52:21 sorry, i think i worded it poorly
20:52:23 hub_cap: amcrn: right
20:52:24 i was asking what vipul is
20:52:33 vipul: yes
20:52:37 hes american
20:52:39 sry
20:52:41 Merican
20:52:43 lol
20:52:46 MURICA
20:52:52 heh
20:53:02 ha
20:53:18 heh
20:53:23 let's keep this question for the nex meeting
20:53:33 well we have 7 min left, i think its ok to discuss
20:53:34 you could send in a 'force-apply' in the request stating that if a restart-applicable parameter is included, that the instance be restarted
20:53:35 i think that's fair.. we shouldn't restart it
20:53:35 So I prefer informing the user that a restart is needed.
20:53:41 i'd like some initial feedback today
20:53:43 if possible
20:53:44 and. i think, we need some specs for it
20:53:45 amcrn: i think that will naturally happy
20:53:47 *happen
20:53:54 dmakogon_: see the link
20:54:00 amcrn: i was thinking something similar
20:54:02 amcrn: I like that approach (force-apply)
20:54:03 maybe we should patch mysql to never need restart
20:54:04 when i change subject
20:54:08 guys
20:54:15 hub_cap: sorry missed that
20:54:16 if the config is edited
20:54:20 and a restart happens
20:54:22 you get a force apply
20:54:23 period
20:54:28 resizes restart an instance
20:54:30 kevinconway: maybe _you_ should :P
20:54:34 this will naturally happen
20:54:36 hub_cap: right
20:54:38 hub_cap: i mean specs for force applying
20:54:39 like an api to /apply gets a response do what it needs to do
20:54:40 we dont need a flag for this
20:54:50 hub_cap: agreed
20:54:53 either restart or say you're good
20:55:04 on no i dont think we need a response
20:55:08 thatll be complicated
20:55:21 just warn "these wont go into affect till you /instanc/id/actions {restart}
20:55:27 so if a user doesn't want a restart, he doesn't change the config?
20:55:28 "
20:55:29 w/o the newline
20:55:31 hub_cap: yes
20:55:36 hub_cap: algorithm for this operation is not so clear is it should be, is some one got perfect vision of this task ?
20:55:40 SlickNik: the dynamics get changed
20:55:44 maybe i was misstaken... so you can push >1 config at a time?
20:55:46 mysql in to the instance, change what you can
20:55:49 or not?
20:56:01 the non dynamics, which require restart, will just sit ther ein the cofnig file
20:56:02 1 config or 1 config option?
20:56:04 *config
20:56:22 dmakogon_: thats why pdmars is asking these questions :)
20:56:30 I see what you're saying hub_cap, that works
20:56:51 if you submit things that are dynamic and non dynamic (is there a better word for that)
20:56:55 the dynamic ones get applied
20:56:56 as long as on a instance-get, it has some sort of flag saying outstanding changes haven't been applied
20:56:59 and both get written
20:57:01 otherwise you might forget
20:57:08 amcrn: thats a fair point
20:57:11 amcrn: hmm
20:57:11 restart_pending
20:57:12 are we storing these configs?
20:57:17 vipul: yes
20:57:25 besides just on the my.cnf right?
20:57:25 maybe yeah a different state change
20:57:26 vipul: we write them to an overrides.cnf file for mysql
20:57:31 thatll be useful for maint windows (restart_pending or whatever)
20:57:41 yeah i like that
20:57:44 maint window can see if a user needs to restart
20:57:44 i see use cases where you may want to spin up a new instance with an existing config
20:57:53 that is valid too vipul
20:57:57 would this be a use case for: 205 Reset Content
20:57:58 vipul: also can do
20:57:59 its in the spec i believe
20:58:02 cool
20:58:05 hub_cap: yes
20:58:22 so the other major question is the unassign ...
same basic idea
20:58:23 I'm fine with the warn as long as we have the restart_pending
20:58:37 SlickNik: ok, i think that's reasonable
20:58:44 +1 SlickNik amcrn et al
20:58:45 else the user has no way to know that only half his config is applied upon inspection of the instance
20:58:47 pdmars: after we fully read your wiki, where would you like comments/questions posted? Is there a bug/bp, or in-line on the wiki?
20:59:00 why not just allways warn they need to restart?
20:59:06 i wish bp had a better way to do this amcrn :(
20:59:11 word
20:59:15 in line, or in irc to me/the group, whatever works for you
20:59:23 ok meeting is over. we didnt get to finish
20:59:23 pdmars: ok, cool.
20:59:32 lets move this discussion back to #openstack-trove
20:59:39 to get consensus on last item for pdmars
20:59:45 and to have open discussion
20:59:46 ok
20:59:50 sounds good
20:59:56 ok
20:59:59 any final words of wisdom?
21:00:08 I want to make a suggestion
21:00:11 Love for everyone !
21:00:18 the wiki page for the meeting.
21:00:26 <3
21:00:30 Can we have a _previous_ meeting section
21:00:31 <3
21:00:39 <3
21:00:40 not a bad idea SlickNik
21:00:42 SlickNik +1
21:00:44 HUGS
21:00:45 #endmeeting
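[Editor's note: a hypothetical sketch, not Trove code, of the configuration-group semantics the meeting converged on: dynamic options apply immediately, all options are persisted to the overrides file, and a restart_pending flag is raised (visible on instance GET) whenever a non-dynamic option is included. The variable classification and data shapes are invented for illustration.]

```python
# Assumed classification; the real list comes from /configurations/parameters,
# which also reports bounds and whether each option is dynamic.
DYNAMIC = {"max_connections", "wait_timeout"}

def apply_configuration(instance, options):
    """Attach a configuration group to an instance (modeled as a dict)."""
    applied_live = {k: v for k, v in options.items() if k in DYNAMIC}
    deferred = {k: v for k, v in options.items() if k not in DYNAMIC}
    instance["overrides.cnf"] = dict(options)      # both kinds get written
    instance["restart_pending"] = bool(deferred)   # surfaced on instance GET
    return applied_live, deferred

instance = {}
live, deferred = apply_configuration(
    instance, {"max_connections": 100, "innodb_buffer_pool_size": "1G"})
print(sorted(live), sorted(deferred), instance["restart_pending"])
```

The deferred options only take effect once the user issues the instance restart action, which is exactly what the restart_pending flag warns about.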