10:00:05 #startmeeting requirements
10:00:06 Meeting started Wed Feb 1 10:00:05 2017 UTC and is due to finish in 60 minutes. The chair is tonyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00:10 The meeting name has been set to 'requirements'
10:00:20 #topic roll call
10:00:23 o/
10:00:30 Who's here?
10:01:02 :D
10:01:32 number80, dirk, coolsvap, toabctl ping?
10:02:34 hi
10:02:50 woot! we have toabctl ;P
10:03:02 tonyb, I was on vacation :)
10:03:44 toabctl: Ahh that wasn't a dig, I'm sorry. It was just that I convinced prometheanfire to get up early for the meeting and it looked like it was going to be the 2 of us
10:03:49 toabctl: now there's 3 :)
10:04:57 So we don't have a queue as such so I'll skip that
10:05:05 that's quorum? :P
10:05:15 we're out of time for priorities so /skip
10:06:05 so let's just go straight to open discussion / retrospective of what happened in Jan / the freeze
10:06:15 tonyb: hey
10:06:22 prometheanfire, toabctl, dirk go!
10:06:42 so
10:06:47 dirk: hey
10:06:51 not too much I don't think
10:07:06 I overfroze things in mitaka/newton, but that was fixed quickly enough
10:07:31 osc still comes in late and that sucks, but it's osc, so what can you do
10:08:19 prometheanfire: yeah it's part service (well client consumer) and part library
10:08:38 yep
10:08:53 and constantly breaks other things lol
10:09:32 prometheanfire: hehe, yeah we should try to get some functional testing on u-c changes but that's a lot of CPU time :(
10:09:52 indeed
10:10:20 it's a slippery slope
10:10:38 tonyb: I think it's worth it
10:10:56 iirc doug and dirk were working on a project that seemed to be 'the canary in the coal mine' in relation to osc
10:11:04 we definitely need to improve testing time, it reduces the number of high-urgency reverts/fixups that we need to get through the gate afterwards
10:11:18 so we might be able to cross check that one and get some good coverage increase
10:11:46 well, my original plan was to extend the *-cross-* jobs to also run a rally functional test
10:12:11 I think in this cycle we broke rally 4 times, so just one job would have caught those four regressions easily
10:12:21 ya, rally, that one :D
10:12:31 I was looking at the infra config for this but I got completely lost
10:12:43 we may be able to run it only when updating osc if it's too heavy even
10:12:52 my opinion is always that electrons are cheap, humans are not
10:13:00 okay
10:13:10 dirk: are you going to be at the PTG?
10:13:20 tonyb: yes, Saturday-Wednesday
10:13:27 guess we should talk about that next?
10:13:30 I have conflicting appointments so I need to leave early
10:13:52 tonyb: we could probably sit down together there with an infra person and get it done, yep
10:13:57 dirk: okay I'll add this to the agenda for the PTG so we can discuss this and work through the project-config change
10:14:15 dirk: I know enough infra to make this work
10:14:36 great
10:14:39 we'll need to pick a good rally test and/or good osc changes
10:15:19 sadly we need to do it on all as we can't select jobs based on the *contents* of files, just the files that change :/
10:15:53 you mean we only run rally on our oslo releases? :)
10:17:06 dirk: no I mean like the current *-cross-* jobs, they'll need to run on every u-c change, not just the ones for osc
10:18:03 ah, that's too bad
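
[Note on the point at 10:15:19 and 10:12:43: jobs could only be selected by which files a change touched, not by what changed inside them, so a rally cross-test wired to upper-constraints.txt would fire on every constraints update. A minimal sketch of the "only run it for osc updates" workaround floated above, assuming the check runs inside the requirements repo checkout with the change under test as the top commit; the watch list and the RUN/SKIP contract are illustrative assumptions, not the jobs the team actually built.]

    #!/usr/bin/env python
    """Decide whether a u-c change touches osc, so a wrapper job can
    skip the expensive rally functional run for unrelated updates.

    Hypothetical sketch only: assumes the requirements repo checkout
    with the change under test applied as the top commit.
    """
    import subprocess
    import sys

    # Hypothetical watch list; not the team's actual configuration.
    INTERESTING = ("python-openstackclient", "osc-lib")


    def changed_constraint_lines():
        """Yield the upper-constraints.txt lines added or removed by the change."""
        diff = subprocess.check_output(
            ["git", "diff", "HEAD~1", "--", "upper-constraints.txt"],
            universal_newlines=True)
        for line in diff.splitlines():
            if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
                yield line[1:].strip()


    def main():
        if any(l.startswith(INTERESTING) for l in changed_constraint_lines()):
            print("RUN")    # an osc-related pin changed: run the rally test
        else:
            print("SKIP")   # keep the cheap path for unrelated u-c updates
        return 0


    if __name__ == "__main__":
        sys.exit(main())

[Such a filter would have to live inside the job itself rather than in the job-selection config, which is exactly the limitation being discussed.]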
10:21:46 prometheanfire: Yeah but we can try it and see, like I said it's a lot of CPU *BUT* if it catches bugs then it'll be worth it
10:22:09 yep
10:22:23 we have time on Tuesday morning and we can discuss / work through stuff then
10:22:30 we've already made significant progress this cycle based on the existing cross tests
10:22:35 ya
10:22:56 I'll be arriving Monday morning, so it's probably better to consider me out before lunch
10:23:06 though I may get in earlier, dunno
10:23:23 the agenda is: https://etherpad.openstack.org/p/relmgt-stable-requirements-ptg-pike
10:23:26 land in ATL at 9:05am
10:24:46 I have a couple of things for post branching / the pike thaw that I'd like to discuss ....
10:25:49 ya, saw the ml post
10:26:47 Once we open for pike I want to hold off any minimum bumps for a while
10:27:06 min in master and ocata branches?
10:27:48 and the sorting of u-c ... I kinda think that it's a no brainer to alter the sort order as it's mostly machine consumed, so anything we can do to make that better is a good thing
10:28:15 true
10:28:17 prometheanfire: we don't do min bumps in stable/* branches, I was talking about master
10:28:30 heh, right
10:28:54 we saw a bunch of backwards updates due to us being aggressive with the minimum bumps and then that bit us
10:29:25 ya, you mentioned that in the email
10:29:55 I don't know for sure how long it'll be and what the signal will be but it was a mess this cycle
10:30:50 you mean how we updated gr early in it?
10:32:18 yeah
10:32:27 ya
10:32:51 I think we need to look at which projects have branched and taken the bot updates
10:33:46 seeking to prune projects.txt?
10:33:49 or what?
10:34:19 no, nothing of the sort. I just want to avoid a bunch of issues for the stable releases
10:34:56 ah, so we can use that as a limiter for what we update
10:35:15 yeah, something like that
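
[Note on the sort-order point at 10:27:48: upper-constraints.txt is mostly machine-consumed, so reordering it is low-risk. The log does not spell out what the proposed reordering (or the "sha-ify" change mentioned later) actually does, so the snippet below is only a generic illustration of one deterministic, machine-friendly ordering, keyed on the canonical package name; the file path and the === pin convention are taken from the pins quoted later in the meeting.]

    # Illustrative only: emit upper-constraints entries in one deterministic,
    # case-insensitive order (by canonical package name).
    from packaging.utils import canonicalize_name


    def sorted_constraints(path="upper-constraints.txt"):
        with open(path) as fh:
            entries = [line.rstrip("\n") for line in fh
                       if line.strip() and not line.startswith("#")]
        # Entries look like "name===version[;marker]"; key on the name part.
        return sorted(entries,
                      key=lambda entry: canonicalize_name(entry.split("===", 1)[0]))


    if __name__ == "__main__":
        print("\n".join(sorted_constraints()))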
10:37:20 I also think somewhat early in pike we need to do something with python-kafka
10:37:47 how so?
10:38:05 sadly that'll suck for monasca, I hope to find Roland/Joe at the PTG to talk through that
10:38:29 tonyb: so regarding uc, should we just take the sha-ify change? I'm ok with it
10:38:34 (after branching that is)
10:38:49 well, they should be migrating to the new lib
10:39:00 dirk: Yeah I think so but we have a few weeks to settle on that.
10:40:02 prometheanfire: Yes there are a number of things that could be done better but oslo.messaging is stuck and has been for too long
10:40:05 tonyb: I agree, the kafka issue we could have solved better
10:40:18 tonyb: we unstuck them I thought...
10:40:29 I don't understand why oslo.messaging isn't able to adopt the same library that monasca now switched to
10:40:33 prometheanfire: que?
10:40:53 * tonyb has missed something
10:40:58 tonyb: we bumped uc and gr ~1.5 weeks ago for python-kafka
10:41:10 for oslo.messaging
10:41:11 what did monasca do / which did we take?
10:41:40 monasca is currently vendoring the old one while they switch to the new C-based python lib
10:41:49 like the json thing again
10:42:01 it may be nice to get everyone on the same lib though
10:42:28 kafka-python===1.3.2
10:42:37 it was less than 1.0
10:43:00 pykafka===2.5.0 was added as the new one monasca is using
10:43:08 Huh, I didn't see that happen
10:43:19 it was while you were out
10:43:48 oh well, never mind me then
10:43:50 we still need to make sure they are not going to be vendoring it
10:44:08 and maybe talk to oslo.messaging about switching (and any others)
10:44:20 but the immediate problem's been fixed
10:44:37 prometheanfire: they aren't. my understanding is that monasca is switching to pykafka and oslo.messaging uses the new kafka-python
10:45:04 tonyb: well, it's still an issue (two libs for the same thing). it's something where prometheanfire and I decided to opt for duplication rather than forcing everyone to agree in a short time
10:45:22 dirk: I'm aware, I just want to ask if oslo.messaging can use pykafka
10:45:31 I would have preferred that both would have settled on the same solution (like both adopting pykafka, or monasca switching to oslo.messaging)
10:45:37 dirk: we have the same thing with a json lib
10:46:36 there's a C-backed one for things that need the speed
10:46:38 I think I'm okay with the duplication
10:47:53 dirk, prometheanfire: thanks
10:48:13 * tonyb wonders why monasca didn't do that 8 months ago
10:48:35 my understanding is that they underestimated the whole g-r thing
10:48:42 probably
10:48:48 I was working with some of the team to get their g-r incompatible changes sorted out
10:48:56 which is also why I cleaned up the psutil thing
10:48:59 which we finally got merged
10:49:08 divergent mins wouldn't have helped here either
10:49:29 prometheanfire: yeah.
10:49:37 yeah, the more I think about it the more I'm not happy about having divergent mins allowed :)
10:49:45 * dirk is still sceptical about that
10:50:14 they'd only be allowed for things following UC
10:50:23 so we are still in lockstep there
10:50:52 dirk: great! we can talk it through at the PTG (perhaps over beer), the more angles we look at it from the better
10:51:02 projects would just be allowed to have a minimum that is higher than the one defined in gr
10:51:12 ya, over beer would be good :D
10:52:00 prometheanfire: there would be no minimum in g-r
10:52:40 tonyb: not global min vs project min?
10:52:45 that's new to me
10:52:53 prometheanfire: but the aim was to allow projects to set a lower minimum than the one we currently have in g-r
10:53:35 prometheanfire: no, projects would be the sole custodians of the minimums
10:53:38 ah, suppose that's still doable
10:53:55 given uc is still the sync point between projects
10:54:18 prometheanfire: the requirements team would need to write tools to allow projects to test the minimum
10:54:23 yep
10:54:33 we need to test the mins as it is
10:54:34 prometheanfire: Yeah, u-c would be the only sync point
10:55:09 prometheanfire: Yeah, originally I was treating them as separate but I was convinced that we could just merge the goals and get there quicker
10:56:02 my only worry is that all projects would need to be ready at the same time for it
10:56:18 though I guess they could be set, one by one, to ignore gr updates
10:56:26 once they do it themselves
10:56:31 anyway
10:56:40 impl details to be discussed in a couple weeks :D
10:56:53 prometheanfire: no, I think if we're careful it'll "just work"
10:57:08 prometheanfire: we'd disable the updates
10:57:34 Yeah, clearly we need to discuss the plan with more of us in the room
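
[Note on the tooling point at 10:54:18: if projects become the sole custodians of their minimums, they need a way to install and test against exactly those minimums. The sketch below is a rough illustration of the idea only, not the tooling that was eventually built; it assumes a pip-style requirements.txt and emits a constraints-style list pinning each dependency to its declared lower bound, which a CI job could install before running the unit tests.]

    # Sketch: turn a project's requirements.txt into a "lower constraints" style
    # list so CI can install exactly the declared minimums and run the tests.
    from packaging.requirements import InvalidRequirement, Requirement


    def lower_bounds(path="requirements.txt"):
        pins = []
        with open(path) as fh:
            for raw in fh:
                line = raw.split("#", 1)[0].strip()   # drop comments and blanks
                if not line:
                    continue
                try:
                    req = Requirement(line)
                except InvalidRequirement:
                    continue                          # skip -e/-r and other directives
                # Treat a >= / == clause (if any) as the declared minimum;
                # projects normally declare at most one lower bound.
                mins = [spec.version for spec in req.specifier
                        if spec.operator in (">=", "==", "===")]
                if mins:
                    pins.append("%s==%s" % (req.name, mins[0]))
        return pins


    if __name__ == "__main__":
        print("\n".join(lower_bounds()))

[The design choice this illustrates is the one agreed above: u-c stays the only sync point, while each project's declared lower bounds become testable on their own.]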
10:57:59 we are at time, did we have anything else?
10:58:14 prometheanfire: we have 3 mins ;P
10:58:31 right, the server irssi is on is ahead :|
10:58:45 hehe
10:59:23 anyway I think we're done
10:59:45 I'm going to be around for a bit more so we can carry on in -requirements
10:59:49 #endmeeting