18:00:16 #startmeeting third-party
18:00:17 Meeting started Mon Aug 18 18:00:16 2014 UTC and is due to finish in 60 minutes. The chair is anteaya. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:20 The meeting name has been set to 'third_party'
18:00:26 hello and welcome
18:00:35 Hello !
18:00:39 Hello !
18:00:41 do show yourselves if you are here
18:00:43 hi!
18:00:46 o/
18:01:01 o/
18:01:03 Hi
18:01:05 o/
18:01:05 Hello
18:01:09 o/
18:01:23 great thank you
18:01:27 off we go
18:01:38 #link https://wiki.openstack.org/wiki/Meetings/ThirdParty#Agenda_for_next_meeting
18:01:42 today's agenda
18:01:57 #topic Welcome & Reminder of OpenStack Mission
18:02:11 #info The OpenStack Open Source Cloud Mission: to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable.
18:02:14 hello!
18:02:16 o/
18:02:29 O/
18:02:34 for those of you for whom this might be your first meeting, being aware of the OpenStack mission is helpful
18:02:37 hello
18:02:37 o/
18:02:38 and welcome
18:02:53 next topic
18:02:59 #topic Review of previous week's open action items
18:03:13 #link http://eavesdrop.openstack.org/meetings/third_party/2014/third_party.2014-08-11-18.00.html
18:03:20 * krtaylor is thankful for anteaya running the meeting this week
18:03:23 there were two items from last week's meeting
18:03:24 o/
18:03:39 aloha
18:03:43 :D thanks to you krtaylor for doing such a great job chairing the last month or so of meetings
18:03:56 it makes it so much easier to have someone to share with, you do a great job
18:04:06 nice to see a huge crowd today!
18:04:10 #info third-party review and sync in infra on https://review.openstack.org/#/c/99990/
18:04:13 * krtaylor blushes
18:04:16 hello
18:04:27 so we are working on getting the ci modules to a more consumable state
18:05:13 I created an action for all to review the puppet module split
18:05:15 to that end we are going to identify patches to review to help that effort
18:05:19 this is one
18:05:23 #link https://review.openstack.org/#/c/99990/
18:05:33 yep, yet to review that one myself
18:05:44 and the latest patchset looks like it could use some reviews
18:05:52 so let's action that for this week as well
18:06:10 when I say we should review something, show of hands who knows what I mean?
18:06:12 o/
18:06:30 o/
18:06:33 #action third party team to review https://review.openstack.org/#/c/99990/
18:06:49 there has got to be more attendees who know how to review
18:06:53 I know you are there
18:07:09 o/
18:07:09 i'm not familiar with puppet, nor whether splitting is best practice. my review wouldn't be useful.
18:07:11 not much sense to put out a call to review if folks don't know how to review
18:07:12 o/
18:07:15 anteaya, as always, it would be nice for the third-party tag, but in this case, it is secondary, can we add multiple tags with -t?
18:07:31 it may be that some don't know the interest as it applies here
18:07:31 dougwig: I disagree, can you identify a spelling mistake or a whitespace error?
18:07:37 dougwig: if yes, your review is helpful
18:07:45 i'm happy to do that.
18:07:53 do we review with a +1 if we agree with the item?
18:08:03 krtaylor: no I don't think we can add multiple topics, let me think on that
18:08:09 daya_k: yes
18:08:16 dougwig: great thanks, I look forward to your review
18:08:35 next item from last week
18:08:41 daya_k, looks good to me (+1) or if you see a problem, you add a comment and -1
18:08:49 #info joa and krtaylor to discuss doc improvement sections to focus on
18:09:04 so joa can't make today's meeting
18:09:07 * krtaylor I'll go play with -t and multiple tags
18:09:26 anteaya, joa never pinged, I should have followed up
18:09:31 and he is caught in the void between the infra-manual repo and config/third-party.rst
18:09:36 so I will help him out
18:09:45 he was just in -infra 10 minutes ago
18:09:58 he was looking for advice on how to bust up what he has into reviewable sections
18:10:04 and will offer some patches to config based on his work, with him as co-author this week
18:10:07 right
18:10:14 ++
18:10:32 #action anteaya to offer joa's work as config patches with joa as co-author by next meeting
18:10:40 I'll need some reviews from folks
18:10:43 next topic
18:10:59 #topic Announcements
18:11:03 this is me
18:11:15 #info Avoid comment generator scripts (anteaya)
18:11:26 salv-orlando: would you like to share your thoughts here?
18:11:35 can you expand on what that means?
18:11:38 #link http://lists.openstack.org/pipermail/openstack-dev/2014-August/042880.html
18:12:04 so the vmware ci had an issue and had to reboot, losing its queue
18:12:08 o/
18:12:20 Ah, I think I can jump in here as well - we have an ipv6 patch that is trying to re-trigger NEC CI
18:12:22 I think I should apologise publicly again? Yeah our ci had a script that did recheck by pushing 'recheck' comments to gerrit
18:12:33 so it used a generator script to mass comment on patches in gerrit to recheck them with their system
18:12:50 salv-orlando: no apology necessary, just a look so other folks avoid the problem
18:12:51 this script was run on all openstack/nova patches in the CI build's queue.
18:13:14 mass retries should be internally triggered with no public side effects other than a new vote when finished, right?
18:13:21 Apart from annoying people with an extra comment coming from nowhere, this also had a side issue which was totally our fault.
18:13:39 We missed the email where we said that recheck.* is a verboten namespace for rechecks
18:13:50 and ended up pushing all those patches upstream as well to zuul
18:14:02 is it still ok to do manually on an occasional basis, or do we need internal mechanisms for anything we re-trigger?
18:14:07 at the end of the day minesweeper choked zuul, and the account was disabled.
18:14:10 dougwig: manual is fine
18:14:43 so if you have a similar mechanism for recheck and more importantly if you use recheck* for doing rechecks you are at risk of being disabled at any moment
18:14:44 yeah so the point is the automation of comments ended up taking out openstack's zuul
18:15:00 and more importantly bringing down infra
18:15:04 so please don't do that
18:15:17 once, we can handle it as a learning item, so let's learn
18:15:27 more than once and we risk mutiny
18:15:32 so let's not do that
18:15:37 questions?
18:15:39 what anteaya is saying happened because the pattern was recheck-vmware. Now ^.*recheck.*$ triggers zuul
18:16:08 do we have a recommended syntax for 3rd party retriggers?
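[Note: the collision described at 18:15:39 can be sketched as a zuul layout.yaml fragment. This is a minimal illustration assuming the 2014-era syntax; the pipeline name is an example, the trigger option is spelled comment_filter or comment depending on the zuul release, and the regex is the ^.*recheck.*$ pattern quoted above rather than upstream's exact configuration.]

    pipelines:
      - name: check
        trigger:
          gerrit:
            # re-enqueue a change whenever a review comment matches this filter
            - event: comment-added
              comment_filter: '^.*recheck.*$'
    # A third-party retrigger comment such as "recheck-vmware" also matches the
    # filter above, so it re-enqueues the change in the upstream pipeline too;
    # that is why recheck* is treated as a reserved namespace for upstream rechecks.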
18:16:15 also the point is automation can provide a ddos very quickly
18:16:22 dougwig: we are working on that
18:16:36 dougwig: would you like to add that to the infra team meeting tomorrow so we can discuss it
18:16:43 anteaya: what can devs do in the meantime for retriggering specific CIs that are blocking work?
18:16:54 it is a discussion item but we haven't reached consensus yet
18:17:05 * sc68cal_ feels bad for being pushy
18:17:17 sc68cal_: good question, I have to be honest and say I don't currently know the safe way to do that
18:17:25 sc68cal_: never feel bad for being pushy
18:17:37 better than staying quiet and wreaking havoc
18:17:40 anteaya: OK - do you have that link handy for the contact info for 3rd party CI contacts?
18:17:47 I had it at some point but lost it :(
18:17:49 isn't manually retriggering still fine? sc68cal_, anteaya
18:17:50 sc68cal_: maybe for vmware I'll try "Bibbidi bobbidi boo!"
18:17:51 anteaya: sure. i'm not picky on what it is, as long as we do something to retrigger without affecting openstack jenkins.
18:17:54 salv-orlando: lol :)
18:17:57 sc68cal_: can you ask in -infra and report back if you learn before the top of the hour?
18:18:08 kevinbenton: manual retrigger is fine
18:18:09 anteaya: sure - thank you :)
18:18:16 * krtaylor looks for sdague's recheck proposal
18:18:22 kevinbenton: since even if you do an oops it is one oops, not 600
18:18:31 https://review.openstack.org/#/c/109565/
18:18:35 salv-orlando: I like it
18:18:45 dougwig: agreed
18:18:48 i think we're going to see a lot of creative retrigger prefixes in the short-term. :)
18:18:55 dougwig: and right now I don't know what that is
18:19:19 krtaylor: that is more of the conversation, not the decision about what the safe way is yet
18:19:26 lots of discussion on that topic
18:19:28 anteaya: right. so if sc68cal_ is blocked by a negative he can just issue rechecks on that one safely
18:19:40 anteaya, just posted for background
18:19:44 dougwig: unfortunately, which is why if you can add an item to the infra agenda it would help many others
18:19:49 krtaylor: right, thank you
18:19:57 kevinbenton: yes
18:20:14 so more here or moving on for the moment?
18:20:28 allowing sc68cal_ to chime in if he gets an answer
18:20:51 sorry this was supposed to be under announcements as well
18:20:55 #info Helping out on the mailing list
18:21:08 it would be great if people could help out on the mailing list
18:21:16 some have and it is great to see, thank you
18:21:25 anteaya: thanks for raising this topic in the meeting
18:21:26 #link http://lists.openstack.org/pipermail/openstack-dev/2014-August/043392.html
18:21:39 I'm just going to shoot an e-mail to the e-mail address the NEC CI account is listed as - and hope for the best :-\
18:21:48 this item currently needs some attention and a response from cinder
18:22:02 jungleboyj: can you take it?
18:22:07 Did someone say Cinder?
18:22:12 e0ne: my pleasure
18:22:23 jungleboyj: yes the above linked email needs your attention
18:22:27 jungleboyj: yep :)
18:22:28 jungleboyj: can you reply?
18:22:37 sc68cal_: go you
18:22:37 sc68cal_: amotoki handles NEC if that's what you are looking for...
18:23:07 anteaya: kevinbenton: thanks :)
18:23:08 anteaya: Yeah, I can follow up on that e-mail.
18:23:09 anteaya, I don't think this is a cinder question but an infra question
18:23:21 jungleboyj: thank you
18:23:23 i'm not sure that such an issue is only cinder related. maybe neutron too
18:23:32 anteaya: Wait, asselin is right.
18:23:33 asselin: please expand
18:23:57 anteaya, the question is how to configure zuul to limit which patches it checks to a file being changed
18:24:21 asselin: great, do you have any knowledge you can share to help find an answer to that question?
18:24:25 rule: when change includes file_regex, run test
18:24:26 or does anyone else?
18:24:47 Ok, so it sounds like you have that one asselin ?
18:24:49 anteaya, no, but I'm also interested in the answer :)
18:24:50 there is a lot of knowledge in this group about how to run a ci
18:25:13 does anyone in channel have even a bit of info to respond to this email?
18:25:25 since waiting for infra to answer may take some time
18:25:38 asselin: I'm sure zuul/layout.xaml or some similar file has a section where you configure a filter similar to those of a jenkins's gerrit trigger
18:25:40 well I can take the action to ask -infra how to do it, if it's even possible today. It's possible that feature is not supported
18:25:46 * layout.yaml
18:25:49 i'm currently running against all commits, because i haven't had a chance to run this down. i'd love an example as well.
18:26:00 asselin: thanks, that will be a help indeed
18:26:23 asselin: is it ok to have one ci for two or more projects? in my case it is cinder and nova
18:26:36 i think it's Jobs -> Files, but i haven't tested it.
18:26:45 #action asselin to ask infra about zuul configuration to filter patches
18:26:52 asselin: did I get that right?
18:26:59 salv-orlando, yes, I noticed this recent patch is a similar idea, (wait for jenkins +1 before running) which is nice for 3rd party as well: https://review.openstack.org/#/c/114712/1/modules/openstack_project/files/zuul/layout.yaml
18:27:11 anteaya: I can confirm it's zuul/layout.yaml in openstack-infra/config
18:27:18 salv-orlando: thanks
18:27:18 yes, filter patches on changed files
18:27:26 there is a section for configuring the gerrit trigger on each pipeline
18:27:43 asselin: great, are you also willing to reply to the email with what you learn or would you like someone else to?
18:27:51 anteaya, sure
18:27:57 thanks
18:28:04 e0ne, yes, one account can test multiple projects, does that answer your question?
18:28:13 #action asselin to reply to http://lists.openstack.org/pipermail/openstack-dev/2014-August/043392.html with what he learns
18:28:15 thanks
18:28:21 krtaylor, yes, thanks
18:28:29 e0ne, let's bring it up in the open discussion
18:28:34 more on this or moving on?
18:28:39 e0ne, oh, ok, nm
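[Note: a minimal sketch of the "Jobs -> Files" filtering mentioned at 18:26:36, in the layout.yaml style of the time. The job name and path regex below are hypothetical, and the exact behaviour is what asselin was actioned to confirm with infra; the idea is that the job only runs when the change touches matching files.]

    jobs:
      # hypothetical third-party job that should only run for one driver's files
      - name: dsvm-tempest-my-driver
        files:
          - '^cinder/volume/drivers/mydriver/.*$'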
18:28:56 okay moving on
18:29:03 #topic OpenStack Program items
18:29:15 #info Review Nova requirements progress and timeline https://wiki.openstack.org/wiki/HypervisorSupportMatrix/Requirements (jaypipes)
18:29:21 jaypipes: you're up
18:29:22 heyo.
18:29:33 #link https://wiki.openstack.org/wiki/HypervisorSupportMatrix/Requirements
18:29:37 hello jaypipes :D
18:30:18 jaypipes, thanks, was just looking for brief summary state and timeline, etc
18:30:28 anteaya: not sure there's all that much to discuss. basically, dansmith put together the above link, and it was discussed at the nova mid-cycle meetup. nobody had any problems with the requirements as listed on the above wiki page.
18:30:38 jaypipes: great
18:30:45 anteaya, krtaylor: no timeline AFAIK.
18:30:53 thanks for bringing it here so we know as well
18:31:03 np
18:31:21 jaypipes: so there isn't much of a description on the page
18:31:32 can you share a bit of context for the travel weary?
18:32:03 right, not much for when this would be a hypervisor minimum set of functionality
18:32:15 I was wanting to discuss here as that is also tied to testing
18:32:49 but, prob only interesting to those testing nova drivers
18:32:55 I don't know that information, unfortunately. I think Mikal is the arbiter of that.
18:33:10 jaypipes: ah okay
18:33:25 anyone implementing testing for nova drivers with questions?
18:33:41 anteaya: I'm sorry. It's still something under active discussion what exactly the rigid requirements are for a virt driver staying in tree.
18:33:53 jaypipes: ah okay great
18:33:59 jaypipes: thanks for the early heads up
18:34:17 so if folks have a stake in this, be sure to participate in the discussion
18:34:33 jaypipes, thanks
18:34:38 all the usual places, I presume? irc, ml, meetings, correct jaypipes?
18:35:03 anteaya: ML, followed by IRC, yeah.
18:35:13 jaypipes: great thanks
18:35:19 any more here?
18:35:19 anteaya: the "fair standards for all hypervisors" ML thread specifically
18:35:29 jaypipes: ah yes, of course
18:35:45 anyone with a quick link to the parent post of that thread for the logs?
18:35:53 * krtaylor looks
18:35:55 one sec, will do.
18:36:12 http://lists.openstack.org/pipermail/openstack-dev/2014-July/040421.html
18:36:22 #link http://lists.openstack.org/pipermail/openstack-dev/2014-July/040421.html
18:36:25 thanks jaypipes
18:36:29 http://lists.openstack.org/pipermail/openstack-dev/2014-August/042284.html
18:36:38 it spans both months
18:36:50 #link http://lists.openstack.org/pipermail/openstack-dev/2014-August/042284.html
18:37:00 jaypipes: yes, a chewy chat indeed
18:37:05 yup.
18:37:11 okay any objection to moving to the next topic?
18:37:25 nope
18:37:26 next topic it is
18:37:33 #topic Deadlines & Deprecations
18:38:18 #info Cinder update (jungleboyj)
18:38:28 jungleboyj: your floor
18:38:47 anteaya: Thank you.
18:39:15 Sorry, I forgot to hit save on the agenda page earlier, but it should be there now.
18:39:21 jungleboyj: it is
18:39:28 Ok, good.
18:39:34 fill in the blanks
18:40:00 So, as far as your dates are concerned, DuncanT is pinging people this week if they have not indicated some progress.
18:40:30 There will be a mailing list e-mail next week if still no response and then patches to remove the non-compliant drivers will begin.
18:41:16 People are making progress though. IBM has their CIs running, but not all of them are against the main gerrit stream right now due to being unreliable.
18:41:26 great so Cinder driver folk, make sure DuncanT is aware of your progress even if you are not yet in full compliance, don't stay quiet
18:41:30 HP and EMC are pretty reliably posting results.
18:41:33 * anteaya nods
18:41:38 good work
18:41:42 anything else here?
18:42:00 anteaya: +2 to not being quiet.
18:42:09 I can cover the rest in open discussion.
18:42:12 okay
18:42:17 moving on
18:42:30 #topic Highlighting a Program or Gerrit Account
18:42:41 #info Communicating CI status changes and updates, posting to the -dev ml becomes seen as spam, options?
18:42:57 so we currently have about 87 gerrit accounts
18:43:14 if every one of them posts to the ml for status outages and return of service
18:43:18 that doesn't work
18:43:24 anteaya: just a wiki page with the current status of all CI seems adequate
18:43:29 anteaya: sorry, on previous topic, neutron is doing a CI housecleaning as well: http://lists.openstack.org/pipermail/openstack-dev/2014-August/043218.html
18:43:29 how do we communicate status?
18:43:47 dougwig: ah yes, sorry I forgot to ask about that
18:43:57 #link http://lists.openstack.org/pipermail/openstack-dev/2014-August/043218.html
18:44:10 #info neutron ci housecleaning
18:44:19 kevinbenton: that is a good idea
18:44:25 any other options?
18:45:08 anteaya, re: status topic, I had a discussion on nova about this last week, we decided that short-term we would report status on our third-party wiki page
18:45:11 anteaya - the problem is when a system is down there is no point in doing a recheck. will a developer look at the wiki all the time?
18:45:11 anteaya: Hi There! about Neutron CI I have received tons of responses
18:45:56 okay let's finish status reporting and then circle back to neutron ci updates
18:46:03 anteaya: I am planning to update the Neutron Wiki with the current status. What is clear is that CI requirements need to be more specific
18:46:12 anteaya: my reaction to CI downtime is: why is it posting to gerrit at all for scheduled downtime, and similarly, why aren't the missed jobs being queued later? why announce at all?
18:46:16 nuritv_: agreed, unless it is one easy to access source it won't be referenced
18:46:28 dougwig: good points
18:46:37 re: status - this works for a quick fix because we already link to that page in comments
18:46:39 anteaya: maybe a comment in the gerrit?
18:46:53 no, not adding comments to gerrit
18:46:57 hmmm
18:47:10 I wonder if we can add info to this page: https://wiki.openstack.org/wiki/Category:ThirdPartySystems
18:47:14 #link https://wiki.openstack.org/wiki/Category:ThirdPartySystems
18:47:23 any wiki page experts here?
18:47:35 no, but that is the best user story for why there needs to be a ci dashboard
18:47:50 willing to experiment to see if we can have a table of ci statuses on that same page
18:48:03 krtaylor: hmmm, that might be interesting too
18:48:19 or just color code the name table cell
18:48:28 krtaylor: oh I like that
18:48:32 anteaya, I thought emagana was trying to do that (at least on the Neutron side)
18:48:50 green for up, red for down, amber for issues, read the ci page for details
18:48:55 lyxus: Yes, that is my goal
18:48:57 there are several threads for how to fix this issue
18:49:10 we need the threads to converge
18:49:11 here is emagana's wiki: https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
18:49:11 anteaya, exactly, stoplight
18:49:19 and for one person to champion the solution
18:49:28 I don't care what it is, we need a champion for it
18:49:36 lyxus: Unfortunately, it is quite manual right now
18:49:56 emagana, we just need to discuss how we define what is a good status for a CI :)
18:49:58 need automatic solution, else stale = useless
18:50:00 krtaylor: can you see if you can change your ci table colours as described above and report back
18:50:12 lyxus: not good status, just the ci is running
18:50:28 okay no solution this week, but we need to find one
18:50:34 we can't continue to spam the ml
18:50:46 anteaya, I know I can change it, just how to do it programmatically is the issue
18:50:54 and we need folks to have an outlet to communicate system status since that is encouraged
18:50:54 anteaya, then define what is a "running" CI
18:51:01 krtaylor: do it manually for this week
18:51:07 anteaya, will do
18:51:12 lyxus: at a future meeting
18:51:26 I just need omrim to stop spamming the ml
18:51:37 so thanks for attending the meeting omrim
18:51:49 we had previously discussed requiring builds of master at given intervals
18:52:05 #link https://review.openstack.org/#/c/105299/
18:52:11 right but calibration is different from ormi's ml spam
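[Note: the "builds of master at given intervals" mentioned at 18:51:49 would, in zuul terms, roughly be a pipeline driven by a timer rather than by gerrit events. A minimal sketch assuming the 2014-era layout.yaml syntax; the pipeline name, manager, and cron string are illustrative only.]

    pipelines:
      # runs on a schedule instead of gerrit events, so a CI can show it still
      # builds current master at a known interval
      - name: periodic-master
        manager: IndependentPipelineManager
        trigger:
          timer:
            - time: '0 6 * * *'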
18:52:23 Anteaya: Thanks for your comment
18:52:43 anteaya: i was just suggesting an automated "status" page
18:52:53 #link http://lists.openstack.org/pipermail/openstack-dev/2014-August/043382.html
18:53:01 this is the problem I am trying to fix today
18:53:12 kevinbenton: yes and I would love that
18:53:37 reducing email spam is today's goal, having an automated setup is still a goal
18:53:58 yeah, i think we should just have a wiki page for that. if someone doesn't want to check the wiki, they probably aren't going to search their inbox anyway
18:54:01 okay any objections to moving to neutron ci cleanup?
18:54:10 kevinbenton: fair enough
18:54:17 Just brainstorming, but can we have an #openstack-cistatus IRC channel that CI systems send status messages to. Then a dashboard can parse those messages to display current status?
18:54:37 and a wiki page or colouring table entries is something we can do short term
18:54:48 smcginnis: I like the idea
18:55:05 smcginnis: do you have any space for a proof of concept?
18:55:24 anteaya: Unfortunately better at suggesting ideas than implementing. ;)
18:55:33 #topic Neutron CI housecleaning
18:55:35 anteaya: Maybe once my corp firewall lets out IRC I can.
18:55:41 emagana: your floor
18:55:47 smcginnis: okay
18:56:36 * anteaya hopes emagana is still around
18:56:45 markmcclain: can you jump in here?
18:56:49 sorry!
18:56:51 I am here!
18:56:53 np
18:56:54 great
18:57:00 * anteaya listens
18:57:15 Neutron CI requirements are not very clear
18:57:41 that is the big summary after my investigation
18:57:47 link again?
18:57:56 we need to work on very specific CI requirements
18:58:09 can you clarify? i've been going off of this: https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
18:58:27 #link https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
18:58:32 link: http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg32385.html
18:58:39 thanks emagana
18:58:54 #link http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg32385.html
18:59:00 all the responses that I got were explaining why their CI did not test the commit
18:59:09 one minute left and I think we need to wrap up
18:59:21 the bottom line is that requirements should be clear
18:59:26 neutron ci systems please continue on the #openstack-neutron channel
18:59:31 we also need to establish time to report back tests
18:59:33 I will work with the Neutron community to do that
18:59:44 and clarify the -1
18:59:51 emagana: can we get a neutron agenda item for next week's agenda so we ensure you have more time
18:59:59 anteaya: I will
19:00:06 I agree this is an important discussion
19:00:09 and I support it
19:00:16 and we need to end for today
19:00:25 so let's continue to discuss
19:00:31 thanks to everyone for a great meeting
19:00:43 items we didn't cover will roll over to next week
19:00:45 thanks
19:00:50 #endmeeting