Friday, 2014-06-13

<fungi> does it open a new connection per query or something?  00:00
<clarkb> it possible it may  00:00
<clarkb> it uses sqlalchemy session objects. not sure if it creats a new one per query but can probably find out  00:00
<comstud> jogo: If you can get me a list of instance uuids and regions  00:01
<comstud> i don't have a lot of time right now  00:01
<jogo> comstud: follow me to -infra  00:01
<comstud> i'm in infra  00:02
<jogo> oh woops hehe  00:02
<comstud> here  00:02
<jogo> fungi: ^^  00:02
<comstud> hahaha  00:02
<jogo> clarkb: ^  00:02
<clarkb> oh yup it uses a session factory so that may be it  00:02
<jogo> comstud graciously accepted being volunteered to clean out some deleting,error instances  00:02
<anteaya> ha ha ha  00:03
<comstud> i'm going to attempt to delegate it  00:03
<comstud> but if i can't wake someone up, i'll try to take care of it quickly  00:03
<clarkb> phschwartz was the one looking at it and I thought he had a list  00:03
<comstud> oh okay  00:03
<clarkb> he was looking at it from the rax side  00:03
<jogo> clarkb: comstud is rax too  00:04
<comstud> perfect, i don't need to do anything  00:04
<anteaya> clarkb: he is at a concert  00:04
<clarkb> it was easy to determine which nodes were derpy bceause cell and nova api weren't coordinating with each other or some such  00:04
<clarkb> anteaya: right just pointing out that we shouldn't need to give anyone data  00:04
<clarkb> they have it  00:04
<anteaya> ah  00:04
<jogo> clarkb: true but the person with data is AFK  00:04
<jogo> and we can get someone to help now  00:04
<clarkb> jogo: right but I don't have the data  00:05
<clarkb> jogo: I only ave data for what nova told me which isn't correct or some such  00:05
<jogo> clarkb: oh can't you just do a nova list and see  00:05
<jogo> clarkb: ohh I thought any instance in error,deleting is bad  00:05
<anteaya> what about fungi's pastes  00:06
<clarkb> fungi: the scoped session it uses should be one per thread  00:06
<clarkb> fungi: and we use a one thread server so we should be ko  00:06
<anteaya> http://paste.openstack.org/show/83880/  00:06
<clarkb> fungi: clearly we arent' so something is up  00:06
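
A minimal sketch of the scoped_session pattern clarkb is describing above, using an illustrative model and an in-memory engine rather than nodepool's real code:

    # Minimal scoped_session sketch; the engine URL and model are
    # placeholders, not nodepool's actual configuration.
    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.orm import sessionmaker, scoped_session
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Node(Base):
        __tablename__ = 'node'
        id = Column(Integer, primary_key=True)
        state = Column(String(32))

    engine = create_engine('sqlite:///:memory:')
    Base.metadata.create_all(engine)

    # scoped_session hands back the same Session for a given thread, so a
    # single-threaded server reuses one session instead of opening a fresh
    # connection per query.
    Session = scoped_session(sessionmaker(bind=engine))

    session = Session()
    deleting = session.query(Node).filter_by(state='delete').all()
    Session.remove()  # release the thread-local session when done
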
<anteaya> http://paste.openstack.org/show/83881/  00:06
<anteaya> http://paste.openstack.org/show/83882/  00:06
<anteaya> http://paste.openstack.org/show/83885/  00:06
<clarkb> anteaya: that shows you what the states are according to nodepool  00:06
<clarkb> but aiui this is a cells thing and we have no access to cells info  00:06
<anteaya> there are uuid's are there not?  00:06
*** plars has joined #openstack-infra  00:07
<clarkb> anteaya: there are, but some of those nodes may be fine  00:07
<clarkb> we have no way of knowing  00:07
<clarkb> unless I misunderstood what was said earlier  00:07
<comstud> this is not a cells thing  00:07
<anteaya> they are in error status  00:07
<comstud> at least, i have no idea what that means in this context  00:07
<anteaya> that might be an incorrect status  00:07
<anteaya> :(  00:07
<comstud> nova ignores deletes on instances in task_state deleting.  00:07
<clarkb> comstud: cells says I am ignoring you nova api  00:07
<comstud> is the problem  00:07
<clarkb> nova api says no really please do this  00:08
<comstud> clarkb: for 'delete' or something else?  00:08
<comstud> it's a bug in compute/api  00:08
<clarkb> comstud: for delete because nova sometimes drops delets on teh floor  00:08
<jogo> clarkb: comstud wrote cells  00:08
<clarkb> jogo: I know  00:08
<clarkb> so we reissue delete requests  00:08
<comstud> there's the same behavior with cells and without cells in this case  00:08
<clarkb> comstud: ah ok  00:09
<comstud> is what i'm trying to say  00:09
<comstud> it's a general 'bug' in nova  00:09
<clarkb> it was framed as a cells thing earlier  00:09
<comstud> jogo put up a review  00:09
<clarkb> but can totally believe its a noav bug  00:09
<comstud> yeah, that's quite possible...  00:09
<comstud> cells can be blamed for a lot of things  00:09
<comstud> :)  00:09
<jogo> clarkb: yeah I didn't see any cells specific thing in there yet  00:09
<clarkb> anyways we have no way of knowing which is why we reissue the deletes  00:09
<comstud> yeah  00:09
<comstud> deletes in 'deleting' task_state  00:09
<comstud> are ignored by nova  00:09
<clarkb> I can give you a complete list of the things we think should be deleting  00:10
<comstud> so we have to reset the task_state to fix them  00:10
<jogo> clarkb: the error is happening on the cell level but its not a cells isssue  00:10
<comstud> jogo: This is why i think it's kinda dumb to leave things in 'deleting'  00:10
<comstud> because it's a lie  00:10
* jogo owes comstud a beer  00:10
<comstud> jogo: correct  00:10
<clarkb> comstud: would a complete list of things we think should be deleting help?  00:10
<comstud> jogo: and: also correct  00:10
<comstud> :)  00:10
<jogo> comstud: at least they have an error state. but I am not disagreeing with you  00:10
<clarkb> from a nova api consumer perspective?  00:10
<comstud> clarkb: Yes.. a list of instance uuids and their region  00:10
<comstud> optionally.. just your tenant ids  00:11
<comstud> or id  00:11
<comstud> we probably have that :)  00:11
<comstud> but  00:11
<comstud> i don't want to just blanket reset things in error/deleting  00:11
<comstud> i'd rather you tell me what you want fixed  00:11
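
The fix comstud is describing here, clearing the stuck task_state so a later delete is honoured, is an operator-side database tweak rather than anything in the public nova API. A rough, illustrative sketch of the idea only (connection URL and tenant id are placeholders; the table and column names are simply nova's schema of the era, and the real fix belongs in compute/api as discussed):

    # Illustrative only: clear task_state on instances stuck in
    # error/deleting so a subsequent delete request is no longer ignored.
    from sqlalchemy import create_engine, text

    # Placeholder DSN and tenant id -- not real credentials.
    engine = create_engine('mysql://nova:secret@dbhost/nova')
    project_id = 'example-tenant-id'

    with engine.begin() as conn:
        conn.execute(
            text("UPDATE instances "
                 "SET task_state = NULL "
                 "WHERE project_id = :pid "
                 "AND deleted = 0 "
                 "AND vm_state = 'error' "
                 "AND task_state = 'deleting'"),
            {"pid": project_id})
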
<comstud> actually, it sounds like someone is going to kick it off here anyway  00:12
*** melwitt has quit IRC  00:13
*** ramashri has joined #openstack-infra  00:13
<comstud> on some tenant id that ends with the same number that it starts with.  00:13
<comstud> :)  00:13
<clarkb> well I almost have the list if I could convince pastebinit to stop dying  00:14
<comstud> hehe  00:15
*** W00die has quit IRC  00:15
<anteaya> get MySQL to stick around  00:15
<markmcclain> from the traceback..the original error that trigger the nova bug was related to contacting neutron?  00:15
<comstud> clarkb: Would you prefer we also just delete them for you?  00:16
*** ianw has quit IRC  00:16
<comstud> after resetting task state?  00:16
<markmcclain> networking failure or possible neutron problem?  00:16
<comstud> heh  00:16
<comstud> something.  00:16
*** ianw has joined #openstack-infra  00:16
<comstud> :)  00:16
<comstud> for transparency purposes.. I think it's a scaling issue  00:16
<comstud> with neutron  00:17
<comstud> that we're working on... but TBH, I'm not 100% sure.. been busy on other things  00:17
<jgriffith> clarkb: a start at any rate: https://review.openstack.org/99809  00:18
<markmcclain> comstud: thanks for sharing  00:19
*** matsuhashi has joined #openstack-infra  00:19
<clarkb> gah and of course xclip and firefox hate aech other  00:19
<jogo> markmcclain: isn't neutron fun  00:20
<comstud> haha  00:20
<comstud> so, I actually gotta run  00:21
<comstud> you can email me and I can forward it  00:21
<anteaya> thanks comstud  00:21
<comstud> but  00:21
<comstud> it sounds like someone is actually working on this already  00:21
<markmcclain> jogo: yep :) the tricky part is the backend heavily influences scaling  00:21
<jogo> comstud: thanks!  00:21
<comstud> by matching 'error'/'deleting' and your project id  00:21
<comstud> and resetting  00:21
<jogo> markmcclain: heh  yeah  00:21
<comstud> It also sounded like we were going to delete those for you afterwards  00:22
<comstud> but anyway  00:22
<clarkb> comstud: http://paste.openstack.org/show/83900/  00:22
<clarkb> woo xclip -selection clipboard  00:22
<comstud> ok  00:22
<jogo> clarkb: I use pastebinit  00:22
<clarkb> jogo: it wouldn't work  00:22
<clarkb> jogo: pastebinit -b http://paste.openstack.org -i foo borked  00:23
<clarkb> could be related to the 500 errors  00:23
<comstud> what's with the NULL server ids?  00:23
<comstud> that seems... unpossible.  00:23
<clarkb> comstud: I think that means nova didn't give us a uuid back  00:23
<comstud> :)  00:23
<comstud> wat  00:23
<clarkb> either that or I raced against the db  00:23
<clarkb> but they are >11 hours old so probably not that  00:24
<jogo> anteaya: how does this look https://review.openstack.org/99796  00:24
<comstud> yeah, something seems wrong with that data  00:24
<comstud> but  00:25
<comstud> we'll ignore those  00:25
<clarkb> ya thats actually amazing  00:25
<clarkb> could be amazing from your side or amazing from our side  00:25
<clarkb> we do have an alien lister we can use to hopefulyl sort that out  00:25
* clarkb tries this  00:25
<clarkb> I have zero nodes in the rax iad alien list  00:26
<fungi> there are times when nova boot calls never get far enough to return an instance uuid  00:27
<clarkb> we are doing about 200kb per half hour of es slow log  00:29
<clarkb> we should be safe to leave that in place for a while  00:29
<clarkb> fungi: I am going to put a small shell script in my homedir to turn off the slow log though  00:29
*** sweston has quit IRC  00:29
<clarkb> fungi: er homedir on es01 that way if you need it it is there  00:29
*** mrmartin has quit IRC  00:30
*** thuc_ has quit IRC  00:30
*** thuc has joined #openstack-infra  00:31
<comstud> ok, people will be working this  00:32
<comstud> I gotta run  00:32
<comstud> i'll bbl  00:32
<clarkb> comstud: thanks  00:33
<clarkb> fungi: two scripts, one to turn it on and the other to turn it off.  00:33
<clarkb> fungi: pretty simple but this way you don't have to dig through docs  00:33
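
The slow-log switch clarkb mentions is just a dynamic index-settings update in elasticsearch. A sketch of what such a toggle could look like, assuming the stock index.search.slowlog settings and a local 9200 endpoint (these are not clarkb's actual scripts):

    # Hypothetical slow-log toggle using elasticsearch's dynamic index
    # settings API; thresholds and endpoint are assumptions.
    import json
    import requests

    ES = 'http://localhost:9200'

    def set_slowlog(threshold):
        """Set the search slow-log warn threshold on all indices.

        Pass e.g. '1s' to enable, or '-1' to turn the slow log off.
        """
        body = {'index.search.slowlog.threshold.query.warn': threshold}
        resp = requests.put(ES + '/_settings', data=json.dumps(body))
        resp.raise_for_status()
        return resp.json()

    # set_slowlog('1s')   # turn on
    # set_slowlog('-1')   # turn off
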
<anteaya> jogo: I get it now, I think  00:33
<fungi> clarkb: thanks for the heads up. SergeyLukjanov ^ that may be of interest to you too if elasticsearch01.o.o say... runs out of space for logs or something  00:35
*** thuc has quit IRC  00:35
<jogo> anteaya: cool, let me know if you have any other questsions  00:35
*** mmaglana has joined #openstack-infra  00:36
<anteaya> ahhhh, when can those deleting nodes get deleted  00:37
<anteaya> that's about my only other question atm  00:37
<anteaya> :D  00:37
<anteaya> but I appreciate the walk though the status, I didn't have a clue how that was handled before  00:38
*** dims_ has joined #openstack-infra  00:39
<anteaya> polls close in 17 minutes and the thunder is threatening to take out my power  00:43
<anteaya> boo  00:43
<clarkb> anteaya: you guys don't vote by mail?  00:44
<anteaya> I don't know if it is an option  00:44
<anteaya> went into the legion was second in line this morning and first at my polling station  00:44
<anteaya> the woman in front was 91 and brought her flashlight so she didn't have to ask for help to see the ballot  00:45
<anteaya> feisty  00:45
<anteaya> if I was out of the country I would do a mail in ballot or find out about it  00:46
<anteaya> but mostly we show up, take the paper, mark it with a pencil, show the initals on the outside of the ballots to the returning officer and put it in the box  00:46
<anteaya> s/outside of the ballots/outside of the ballot  00:47
*** jhesketh has quit IRC00:48
*** saper has joined #openstack-infra00:51
*** csheedy has joined #openstack-infra00:51
*** yamahata has joined #openstack-infra00:53
*** ramashri has quit IRC00:55
*** csheedy has quit IRC00:57
*** malini1 has joined #openstack-infra01:00
*** xchu has joined #openstack-infra01:01
*** mriedem has joined #openstack-infra01:03
*** nati_ueno has quit IRC01:04
clarkbso I just realized I arrive in frankfurt on the day of the world cup final01:04
anteayaso it is a bad day to travel or a good day to arrive in germany?01:06
anteayathe sunday?01:06
*** jhesketh has joined #openstack-infra01:07
anteayaany world cup is a great time to be in toronto01:07
anteayasome part of the city will be celbrating01:07
anteayaand everyone else is invited to the party01:07
tchaypounless they're rioting01:07
clarkbanteaya: I am guessing a little bit of both01:07
clarkbalso I am sure the festivities will change quite a bit depending on who is playing01:07
clarkbanteaya: yes the sunday01:08
anteayacool01:08
anteayayes01:08
anteayatchaypo: yeah, I can't remember world cup riots in toronto01:08
mattoliverauI think Josh and I are planning on arriving on the Saturday.. to attempt to fight jetlag.01:08
anteayamattoliverau: you and Josh are coming to Germany? awesome!!01:09
clarkbanteaya: well canada hasn't qulaified in forever01:09
clarkbanteaya: whereas germany is fielding a good team this year01:09
anteayaoh no, not canada's team01:09
mattoliverauWe've RSVP'ed... and the flights are still being organised, but should be :)01:09
anteayayeah, I hope they make it to the final01:09
anteayawould be awesome to be in germany for that01:09
clarkbanteaya: thats what causes the rioting01:09
clarkb:)01:10
clarkbanteaya: you all do that with hockey01:10
clarkbwin or lose: riots01:10
anteayainteresting01:10
anteayaI remember the vancouver riot01:10
clarkbmattoliverau: I am on an overnight flight and arrive ~9am Sunday01:10
*** yaguang has joined #openstack-infra01:10
anteayaand you have perceptions that rioting happens all the time01:10
mattoliverauclarkb: so you can join us for breakfast then :)01:11
clarkbmattoliverau: sure. do you plan on staying in darmstadt or frankfurt? I will be at the maritim konferenz hotel in darmstadt01:11
*** Ryan_Lane has quit IRC01:12
*** aysyd has quit IRC01:12
clarkband the game should be on that night01:12
mattoliverauclarkb: Rackspace doesn't move too fast... So I don't know yet :)01:12
*** camunoz has joined #openstack-infra01:15
*** MarkAtwood has joined #openstack-infra01:16
*** MarkAtwood has quit IRC01:16
mattoliverauclarkb: Josh sent the wiki page to the people organising our trip, so we should be in Darmstadt somewhere :)01:16
clarkbcool01:16
anteayawooooooo, looks good so far for the good side01:17
openstackgerritJoe Gordon proposed a change to openstack/requirements: Bump minimum hacking version to 0.9.2  https://review.openstack.org/9981501:18
*** nosnos has joined #openstack-infra01:21
openstackgerritMORITA Kazutaka proposed a change to openstack-infra/config: Add pylint job for swift3  https://review.openstack.org/9981601:22
*** sarob_ has quit IRC01:24
*** arnaud__ has quit IRC01:26
*** alexpilotti has quit IRC01:26
*** HenryG_ has joined #openstack-infra01:27
*** HenryG has quit IRC01:29
*** markmcclain has quit IRC01:34
*** asettle has quit IRC01:37
*** rwsu has quit IRC01:38
*** timrc is now known as timrc-afk01:40
*** marcoemorais has quit IRC01:49
*** timrc-afk is now known as timrc01:50
*** thomasbiege1 has joined #openstack-infra01:55
*** dkehnx has quit IRC02:00
*** mrodden1 has quit IRC02:01
*** dkehnx has joined #openstack-infra02:01
*** marun has quit IRC02:01
*** marun has joined #openstack-infra02:02
*** mrodden has joined #openstack-infra02:02
*** cp16net has joined #openstack-infra02:09
*** zns has quit IRC02:11
*** zns has joined #openstack-infra02:24
*** dims_ has quit IRC02:24
*** sarob_ has joined #openstack-infra02:25
<mestery> jogo: Noticed the ping earlier, will read the logs, if it's something urgent, ping me back.  02:28
*** sarob_ has quit IRC02:29
*** asettle has joined #openstack-infra02:33
openstackgerritNikhil Manchanda proposed a change to openstack-infra/config: Added new experimental job for trove functional tests  https://review.openstack.org/9851702:37
openstackgerritNikhil Manchanda proposed a change to openstack-infra/config: Use job-template for gate-trove-buildimage jobs  https://review.openstack.org/9968002:37
*** HenryG_ has quit IRC02:37
*** HenryG_ has joined #openstack-infra02:38
*** amcrn has quit IRC02:38
*** homeless has quit IRC02:39
*** zns has quit IRC02:39
*** dims_ has joined #openstack-infra02:39
*** malini1 has quit IRC02:40
*** malini1 has joined #openstack-infra02:40
*** zns has joined #openstack-infra02:41
*** tchaypo is now known as tchorizo02:43
*** dims_ has quit IRC02:44
*** zhiyan_ is now known as zhiyan02:45
*** arnaud has joined #openstack-infra02:45
*** sweston has joined #openstack-infra02:46
*** malini1 has quit IRC02:47
*** harlowja is now known as harlowja_away02:53
*** gargola has quit IRC02:54
*** amcrn has joined #openstack-infra02:55
*** Alexandra_ has joined #openstack-infra03:03
*** asettle has quit IRC03:04
*** plars has quit IRC03:05
*** gokrokve_ has joined #openstack-infra03:05
*** praneshp has quit IRC03:05
*** dims_ has joined #openstack-infra03:10
*** dims_ has quit IRC03:15
*** nosnos has quit IRC03:16
*** mriedem has quit IRC03:18
*** gyee has quit IRC03:19
*** gokrokve_ has quit IRC03:20
<jogo> mestery: its not, was just amused that one of the reasons why the gate is hurting is neutron at  rax  03:23
*** zns has quit IRC03:23
*** yamahata is now known as tacker-owner03:24
*** arnaud has quit IRC03:26
*** yjiang has joined #openstack-infra03:29
*** HenryG_ has quit IRC03:32
*** arnaud has joined #openstack-infra03:34
*** tacker-owner is now known as tacker-owner_03:37
*** tacker-owner_ is now known as tacker-owner__03:38
*** talluri has joined #openstack-infra03:41
*** markwash has joined #openstack-infra03:41
*** markwash_ has joined #openstack-infra03:44
*** tacker-owner__ is now known as tacker-owner03:45
*** markwash has quit IRC03:46
*** markwash_ is now known as markwash03:46
*** nosnos has joined #openstack-infra03:52
*** CaptTofu_ has quit IRC03:53
*** CaptTofu_ has joined #openstack-infra03:54
openstackgerritMatthew Treinish proposed a change to openstack-infra/devstack-gate: Add an option to enable the nova v3 api tests  https://review.openstack.org/9983303:56
openstackgerritMatthew Treinish proposed a change to openstack-infra/config: Add tempest jobs with nova-v3 enabled  https://review.openstack.org/9983503:56
*** CaptTofu_ has quit IRC03:58
*** tacker-owner has quit IRC03:59
*** thuc_ has joined #openstack-infra04:01
*** yamahata has joined #openstack-infra04:01
*** yamahata is now known as tacker-owner04:02
*** matsuhas_ has joined #openstack-infra04:03
*** markwash has quit IRC04:03
*** matsuhas_ has quit IRC04:04
*** matsuhas_ has joined #openstack-infra04:05
*** vladan has quit IRC04:05
*** matsuhashi has quit IRC04:05
*** alkari has joined #openstack-infra04:05
*** vladan has joined #openstack-infra04:06
*** arnaud has quit IRC04:09
*** dims_ has joined #openstack-infra04:11
*** matsuhas_ has quit IRC04:12
*** matsuhashi has joined #openstack-infra04:13
*** dims_ has quit IRC04:16
*** arnaud has joined #openstack-infra04:18
<Alex_Gaynor> How does one debug a failing test like this: http://logs.openstack.org/15/99815/1/check/check-requirements-integration-dsvm/d6aefe5/console.html  04:25
<clarkb> Alex_Gaynor: http://logs.openstack.org/15/99815/1/check/check-requirements-integration-dsvm/d6aefe5/logs/devstacklog.txt.gz#_2014-06-13_03_18_57_976  04:26
<Alex_Gaynor> clarkb: thanks. /me goes in search of the approrpirate bug  04:27
<Alex_Gaynor> dstufft: ^ new pip can't come soon enough  04:27
<dstufft> does openstack have any sort of numbers for how many times a day they are failing from connection reset by peer  04:28
<Alex_Gaynor> dstufft: it's way less, most jobs don't hit pypi  04:28
<dstufft> Alex_Gaynor: is openstack even using pip 1.5 yet? I thought they were still on 1.4  04:28
<Alex_Gaynor> I think it's 1.5? Either that or we manually turn on wheels, I've seen some wheels get downloaded  04:29
*** arnaud has quit IRC  04:30
<clarkb> its 1.5 for devstack  04:31
<clarkb> but 1.4 for tox  04:31
<clarkb> we will be 1.5 everywhere when tox upgrade happens which will happen when they release new version  04:32
*** om has joined #openstack-infra  04:32
<dstufft> I forget how the openstack log searching stuff  04:32
<dstufft> works  04:32
<Alex_Gaynor> elastic search  04:33
<dstufft> can someone possibly look up to see what providers/regions that connection reset by peer is occuring in ? :/  04:33
<dstufft> trying to isolate it to a particular fastly DC  04:33
<dstufft> if it can be isolated  04:33
<clarkb> dstufft: we are having trouble with elasticsearch right now so I turned off access  04:33
<dstufft> oh  04:33
<dstufft> ok  04:33
<clarkb> dstufft: Alex_Gaynor if you have any insight into rax cinder volume performance you might be able to help :)  04:34
<clarkb> we were told pvo was the person to talk to though  04:34
<dstufft> I have zero insight into any of that  04:34
*** alkari has quit IRC  04:34
<dstufft> is cinder the block storage  04:34
<Alex_Gaynor> yes  04:34
<dstufft> I forget the code names  04:34
<dstufft> PyPI is on Rackspace and has had some problems with the block storage in IAD too fwiw  04:34
<Alex_Gaynor> pvo is way more likely to be the right person than I am :/ I don't know anything about our block storage deployment.  04:34
<Alex_Gaynor> I can dig more on monday if you like, I'm off tomorrow  04:34
<dstufft> clarkb: is it random large IO latency spikes?  04:35
*** mmaglana has quit IRC  04:36
<clarkb> dstufft: it seems cyclic but yes  04:36
<clarkb> dstufft: basisically every 6 hours or so we go through it  04:36
<clarkb> iowait skyrockets  04:37
*** thuc has joined #openstack-infra  04:37
<dstufft> sounds familarish  04:37
<dstufft> Not sure if ours is every 6 hours or so  04:37
<dstufft> glusterfs gets real cranky  04:37
<dstufft> we made it better by killing the node completely and spinning up a new server, which seemed to have a better time at it  04:38
*** saper has quit IRC  04:38
*** tacker-owner has quit IRC  04:38
<clarkb> ya that was brought up  04:38
<clarkb> we may be colocated with someone else doing crazy IO every 6 hours or some such  04:38
*** yamahata has joined #openstack-infra  04:38
<dstufft> afaik you don't get dedicated IOPS unless you're paying like 1k/month per VM  04:39
<dstufft> something like that  04:39
*** yamahata is now known as tacker-owner  04:39
<clarkb> Alex_Gaynor: sure will let you know  04:40
<dstufft> guess it's only $500/month  04:40
<clarkb> hopefully we are sorted by then  04:40
<dstufft> clarkb: Alex_Gaynor if you figure something out let me know too, it'd be nice to fix the gluterfucking  04:40
<dstufft> on PyPI  04:40
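
One cheap way to confirm the roughly six-hour iowait cycle clarkb describes is to sample iowait over time and look for the spikes. A small illustrative sampler (the interval and log path are arbitrary choices, not anything actually deployed):

    # Log iowait percentage periodically to spot cyclic storage latency
    # spikes; interval and output file are arbitrary.
    import time
    import psutil

    INTERVAL = 60  # seconds between samples

    with open('/tmp/iowait.log', 'a') as log:
        while True:
            # cpu_times_percent blocks for the interval and returns
            # percentages; the 'iowait' field is available on Linux.
            cpu = psutil.cpu_times_percent(interval=INTERVAL)
            log.write('%s %.2f\n' % (time.strftime('%Y-%m-%dT%H:%M:%S'),
                                     getattr(cpu, 'iowait', 0.0)))
            log.flush()
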
*** thuc_ has quit IRC  04:40
*** trinaths has joined #openstack-infra  04:40
<dstufft> clarkb: Openstack CI runs in which Rackspace DC, and which HP DC?  04:44
<dstufft> if you don't mind me asking  04:44
*** dims_ has joined #openstack-infra  04:46
<clarkb> dstufft: rax dfw for the "static" things  04:46
<clarkb> then dfw, iad, ord, for rax slaves and hpcloud east for hp slaves  04:47
<Alex_Gaynor> dstufft: the last one that failed was HP  04:47
<Alex_Gaynor> (reset by peer)  04:47
<dstufft> ok thanks  04:48
*** talluri has quit IRC04:49
*** ildikov has quit IRC04:50
*** dims_ has quit IRC04:51
*** masayukig has quit IRC04:53
*** tacker-owner has quit IRC04:53
*** talluri has joined #openstack-infra04:55
*** jcoufal has joined #openstack-infra04:57
*** masayukig has joined #openstack-infra04:57
*** thuc has quit IRC04:59
*** masayukig has quit IRC05:02
*** SumitNaiksatam has joined #openstack-infra05:03
*** ramashri has joined #openstack-infra05:05
*** arnaud__ has joined #openstack-infra05:06
*** zhiyan is now known as zhiyan_05:06
*** ramashri_ has joined #openstack-infra05:09
*** ramashri has quit IRC05:09
*** zhiyan_ is now known as zhiyan05:13
*** masayukig has joined #openstack-infra05:15
*** krtaylor has joined #openstack-infra05:24
*** ildikov has joined #openstack-infra05:26
*** sarob_ has joined #openstack-infra05:29
*** om has quit IRC05:29
*** sarob_ has quit IRC05:33
*** ihrachyshka has joined #openstack-infra05:35
*** ramashri_ has quit IRC05:38
*** talluri has quit IRC05:40
*** masayuki_ has quit IRC05:40
*** talluri has joined #openstack-infra05:41
*** om has joined #openstack-infra05:42
*** talluri has quit IRC05:45
*** Alexandra_ has quit IRC05:46
*** dims_ has joined #openstack-infra05:47
*** Longgeek has joined #openstack-infra05:47
*** e0ne has joined #openstack-infra05:51
*** dims_ has quit IRC05:51
*** rcarrill` has joined #openstack-infra05:51
*** matsuhashi has quit IRC05:52
*** rcarrillocruz has quit IRC05:53
*** rcarrillocruz has joined #openstack-infra05:54
*** sarob_ has joined #openstack-infra05:54
*** rcarrill` has quit IRC05:56
*** matsuhashi has joined #openstack-infra05:57
*** sarob_ has quit IRC05:59
*** jeremyb has quit IRC06:00
*** e0ne has quit IRC06:02
*** e0ne has joined #openstack-infra06:03
*** praneshp has joined #openstack-infra06:03
*** cody-somerville has quit IRC06:04
*** yamahata has joined #openstack-infra06:04
openstackgerritIan Wienand proposed a change to openstack-infra/devstack-gate: [WIP] cleanup of log copy  https://review.openstack.org/9985206:05
*** arnaud__ has quit IRC06:06
*** talluri has joined #openstack-infra06:06
*** e0ne has quit IRC06:08
*** basha has joined #openstack-infra06:09
*** praneshp_ has joined #openstack-infra06:10
*** camunoz has quit IRC06:11
*** praneshp has quit IRC06:13
*** praneshp_ is now known as praneshp06:13
*** cody-somerville has joined #openstack-infra06:15
*** zehicle_at_dell has joined #openstack-infra06:16
*** om has quit IRC06:30
*** doude has joined #openstack-infra06:30
*** ihrachyshka has quit IRC06:30
*** zhiyan is now known as zhiyan_06:32
*** dkehn_ has joined #openstack-infra06:34
*** penguinRaider has joined #openstack-infra06:36
*** e0ne has joined #openstack-infra06:38
*** dkehnx has quit IRC06:38
*** e0ne has quit IRC06:39
*** e0ne has joined #openstack-infra06:40
*** bogdando has joined #openstack-infra06:42
*** e0ne has quit IRC06:42
*** dims_ has joined #openstack-infra06:47
*** flaper87|afk is now known as flaper8706:49
*** om has joined #openstack-infra06:51
*** dims_ has quit IRC06:54
*** talluri has quit IRC06:54
*** talluri has joined #openstack-infra06:55
*** cody-somerville has quit IRC06:57
*** talluri has quit IRC06:59
*** zhiyan_ is now known as zhiyan07:00
*** rdopieralski has joined #openstack-infra07:01
*** jlibosva has joined #openstack-infra07:03
*** srenatus has quit IRC07:06
*** srenatus has joined #openstack-infra07:06
*** doude has quit IRC07:07
*** doude has joined #openstack-infra07:07
*** basha has quit IRC07:08
*** mrmartin has joined #openstack-infra07:09
*** cody-somerville has joined #openstack-infra07:09
*** achuprin_ has quit IRC07:15
*** _nadya_ has joined #openstack-infra07:16
*** jamielennox is now known as jamielennox|away07:17
*** andreykurilin_ has joined #openstack-infra07:21
*** dkehn has quit IRC07:22
*** medieval1 has quit IRC07:23
*** dkehn_ has quit IRC07:23
*** dkehn has joined #openstack-infra07:23
*** mrda is now known as mrda-weekend07:25
*** medieval1 has joined #openstack-infra07:26
*** dkehn_ has joined #openstack-infra07:28
*** afazekas_ has joined #openstack-infra07:28
*** achuprin_ has joined #openstack-infra07:30
*** matsuhas_ has joined #openstack-infra07:36
*** matsuhashi has quit IRC07:36
*** amcrn has quit IRC07:37
*** andreykurilin_ has quit IRC07:37
*** andreykurilin_ has joined #openstack-infra07:38
*** ihrachyshka has joined #openstack-infra07:38
*** medieval1 has quit IRC07:39
*** dkehn has quit IRC07:39
*** dkehn_ has quit IRC07:39
*** hashar has joined #openstack-infra07:39
*** om has quit IRC07:41
*** medieval1 has joined #openstack-infra07:45
*** dkehn has joined #openstack-infra07:45
*** dkehn_ has joined #openstack-infra07:46
*** cody-somerville has quit IRC07:50
*** dims_ has joined #openstack-infra07:50
*** isviridov|away is now known as isviridpv07:55
*** isviridpv is now known as isviridov07:55
*** dims_ has quit IRC07:55
*** talluri has joined #openstack-infra07:55
*** sarob_ has joined #openstack-infra07:57
*** Hal has joined #openstack-infra07:59
*** Hal is now known as Guest4718407:59
*** talluri has quit IRC08:00
*** sarob_ has quit IRC08:01
*** mrmartin has quit IRC08:03
*** cody-somerville has joined #openstack-infra08:04
openstackgerritBob Ball proposed a change to openstack-infra/devstack-gate: Merge ZUUL_REF branch  https://review.openstack.org/9986308:05
<BobBall> hmmm - that's not where it's meant to be going - sorry  08:07
*** marios has joined #openstack-infra08:07
*** zhiyan is now known as zhiyan_08:10
*** srenatus has quit IRC08:12
*** srenatus has joined #openstack-infra08:13
*** tkelsey has joined #openstack-infra08:13
*** e0ne has joined #openstack-infra08:16
*** dizquierdo has joined #openstack-infra08:16
*** e0ne_ has joined #openstack-infra08:16
*** derekh_ has joined #openstack-infra08:17
*** amcrn has joined #openstack-infra08:18
*** e0ne has quit IRC08:20
*** trinaths has quit IRC08:23
*** pblaho has joined #openstack-infra08:26
*** e0ne_ has quit IRC08:28
*** e0ne has joined #openstack-infra08:28
<derekh_> SergeyLukjanov: I'm not seeing anything going through the check-tripleo queue, would you be able to see if you can see any problemns your end?  08:28
<derekh_> hmm there is a template that started building 2 dates ago#  08:30
*** zhiyan_ is now known as zhiyan  08:30
<derekh_> but nodepool seems to have 17 instances, one in ERROR state  08:31
*** penguinRaider has quit IRC  08:34
*** cody-somerville has quit IRC  08:39
*** eglynn-office has joined #openstack-infra  08:41
<eglynn-office> good morning folks!  08:42
<eglynn-office> FYI logstash appears to be down ... http://logstash.openstack.org/  08:43
*** pelix has joined #openstack-infra  08:44
<derekh_> tcpdump on tripleo rack show no traffic at all the API, as if nodepool isn't trying  08:46
<openstackgerrit> Thierry Carrez proposed a change to openstack-infra/release-tools: Add script for new-style milestone publication  https://review.openstack.org/98123  08:49
<ttx> SergeyLukjanov: I updated the script -- this is the version I ended up using for juno-1 ^  08:50
<ttx> SergeyLukjanov: I'd love your +1 on it before I self-approve it  08:50
*** talluri has joined #openstack-infra  08:50
*** dims_ has joined #openstack-infra08:51
*** cody-somerville has joined #openstack-infra08:51
*** zhiyan is now known as zhiyan_08:52
*** rcarrill` has joined #openstack-infra08:54
*** johnthetubaguy has quit IRC08:55
*** thomasbiege1 has left #openstack-infra08:55
*** rcarrillocruz has quit IRC08:55
*** dims_ has quit IRC08:56
*** johnthetubaguy has joined #openstack-infra08:56
*** johnthetubaguy has quit IRC08:57
*** johnthetubaguy has joined #openstack-infra08:57
*** johnthetubaguy has quit IRC08:57
*** matsuhas_ has quit IRC08:57
*** zhiyan_ is now known as zhiyan09:00
*** johnthetubaguy has joined #openstack-infra09:02
*** johnthetubaguy has quit IRC09:03
*** rdopieralski is now known as rdopiera09:05
*** johnthetubaguy has joined #openstack-infra09:05
*** basha has joined #openstack-infra09:06
*** rcarrill` is now known as rcarrillocruz09:06
*** CaptTofu_ has joined #openstack-infra09:07
*** zhiyan is now known as zhiyan_09:09
*** d0ugal has quit IRC09:09
*** d0ugal has joined #openstack-infra09:10
*** alkari has joined #openstack-infra09:10
*** zhiyan_ is now known as zhiyan09:10
*** matsuhashi has joined #openstack-infra09:12
*** CaptTofu_ has quit IRC09:12
*** talluri has quit IRC09:13
<isviridov> ttx, as far as I know it is public holyday in Russia till Monday  09:13
*** talluri has joined #openstack-infra  09:13
<isviridov> ttx, so SergeyLukjanov is not available. Also need him  09:14
<ttx> hah, no pb  09:14
*** alkari has quit IRC  09:15
<isviridov> ttx, BTW do infra team creating new projects only every Friday? Is today one of them?  09:17
*** ildikov has quit IRC  09:17
*** ildikov has joined #openstack-infra  09:17
*** talluri has quit IRC  09:18
<ttx> isviridov: they usually do it over weekends yes. Not sure this is one of them  09:18
* isviridov will ask in the evening  09:20
*** ominakov has joined #openstack-infra  09:21
*** habib has joined #openstack-infra  09:24
<mattoliverau> Phew, a long day, I'm calling it a night, have a great weekend all.  09:25
*** mkerrin has quit IRC09:26
*** johnthetubaguy_ has joined #openstack-infra09:30
*** johnthetubaguy has quit IRC09:31
*** johnthetubaguy_ is now known as johnthetubaguy09:31
*** mkerrin has joined #openstack-infra09:33
*** Guest47184 has quit IRC09:38
*** Guest47184 has joined #openstack-infra09:41
<derekh_> Anybody have any thoughts on why nothing is running in the check-tripleo queue  09:41
*** praneshp has quit IRC09:42
*** matsuhashi has quit IRC09:43
*** talluri has joined #openstack-infra09:44
openstackgerritA change was merged to openstack-infra/config: Be specific about which ES nodes are puppetable  https://review.openstack.org/9979409:45
*** talluri_ has joined #openstack-infra09:45
*** maxbit has quit IRC09:46
*** talluri has quit IRC09:48
*** yamahata_ has quit IRC09:49
<gilliard> Hi all - is there something wrong with nodepool?  Looks like >50% instances are "deleting"  09:50
*** talluri_ has quit IRC09:50
*** dims_ has joined #openstack-infra09:52
*** dims_ has quit IRC09:56
*** ccorrigan has quit IRC09:57
*** sarob_ has joined #openstack-infra09:58
*** talluri has joined #openstack-infra09:59
<derekh_> gilliard: somethings seems up alright, its not using the tripleo region at all  10:02
*** sarob_ has quit IRC10:02
openstackgerritA change was merged to openstack-infra/config: Remove tripleo cross-testing with oslotest  https://review.openstack.org/9291010:02
*** dkehn__ has joined #openstack-infra10:04
*** rlandy has joined #openstack-infra10:04
*** om has joined #openstack-infra10:04
*** jamielennox|away has quit IRC10:04
*** zhiyan is now known as zhiyan_10:06
<gilliard> I haven't any way to investigate nodepool itself but if anyone suspects problems with HP instances just give me a shout :)  10:06
*** jamielennox|away has joined #openstack-infra10:07
openstackgerritThierry Carrez proposed a change to openstack-infra/release-tools: Support swift and oslo milestone releases  https://review.openstack.org/9989210:08
*** dkehn has quit IRC10:08
*** AaronGr has quit IRC10:09
*** AaronGr has joined #openstack-infra10:11
*** yamahata has quit IRC10:18
*** e0ne has quit IRC10:22
*** e0ne has joined #openstack-infra10:23
*** Longgeek has quit IRC10:26
*** nosnos has quit IRC10:26
*** Longgeek has joined #openstack-infra10:26
*** e0ne has quit IRC10:27
*** hashar has quit IRC10:27
*** Longgeek_ has joined #openstack-infra10:27
<sdague> I guess we never turned the api back on for elastic search  10:28
*** Longgeek has quit IRC  10:28
*** e0ne has joined #openstack-infra  10:28
<sdague> gilliard: I think the issue is more on the rax side right now  10:28
<gilliard> sdague: OK. Do you have a graphite link that splits up the nodepool data by cloud?  10:29
<sdague> I don't  10:30
* gilliard goes to poke around in graphite.  10:30
<sdague> honestly, I just ask the infra folks about nodepool when I think it's bonkers :)  10:30
<sdague> maybe SergeyLukjanov would know  10:30
<gilliard> No probs.  I think it'd be useful to have some info about nodepool on our wall somewhere  10:31
*** markmc has joined #openstack-infra  10:33
*** Guest47184 has quit IRC  10:33
*** nosnos has joined #openstack-infra  10:34
*** ociuhandu has joined #openstack-infra  10:35
<derekh_> sdague: If you have a chance would you mind looking to see why nodepool doesn't seem to be using the tripleo region  10:37
<derekh_> sdague: or is it todo with the RAX problem you mentioned  10:37
<openstackgerrit> A change was merged to openstack-infra/config: Remove link to devstack_launch_slave.pp  https://review.openstack.org/97110  10:38
<sdague> derekh_: I don't have access to nodepool  10:38
<lifeless> derekh_: you want SergeyLukjanov  10:38
<derekh_> sdague: ahh ok, np  10:38
<sdague> it can't be related to the rax issue  10:38
*** zehicle_at_dell has quit IRC  10:38
<lifeless> well, it could, but very very unlikely given nodepools internals  10:38
<derekh_> lifeless: I pinged SergeyLukjanov earlier he doesn't seem to be around  10:38
<sdague> that's just the fact that rax has a ton of nodes in 'deleting' state that are stuck  10:38
*** e0ne has quit IRC  10:38
*** nosnos has quit IRC  10:39
<sdague> and we keep looping and retrying them  10:39
<sdague> but it doesn't matter, because you can't delete a deleting node  10:39
<lifeless> I wonder if we hit the socket timeout wedge again  10:39
<lifeless> on the rh1 region this time  10:39
*** e0ne has joined #openstack-infra  10:39
<lifeless> sdague: you will be able to once jogos patch lands  10:40
<derekh_> lifeless: possibly, we havn't seen it in that region befor I think but that doesn't mean much  10:40
<sdague> lifeless: and once it gets into the clouds :)  10:40
<lifeless> sdague: ponies!  10:40
<lifeless> sdague: what are you doing up? Am I up too late?  10:40
<sdague> I'm usually up this early :)  10:41
<sdague> I tend to start my day at 6am EST  10:41
*** Alexei_987 has quit IRC  10:41
*** denis_makogon has joined #openstack-infra  10:45
<openstackgerrit> Lukas Bednar proposed a change to openstack-infra/jenkins-job-builder: multijob: added kill-phase-on option  https://review.openstack.org/99903  10:45
*** _nadya_ has quit IRC  10:45
<gilliard> Thu/Fri are public holidays in Russia, SergeyLukjanov is out till Mon I think.  10:46
*** chandan_kumar has joined #openstack-infra  10:51
*** chandankumar has quit IRC  10:52
<sdague> gilliard: gotcha  10:52
*** radez_g0n3 is now known as radez10:52
*** dims_ has joined #openstack-infra10:52
*** chandan_kumar has quit IRC10:56
*** chandan_kumar has joined #openstack-infra10:57
*** dims_ has quit IRC10:57
*** _nadya_ has joined #openstack-infra11:03
openstackgerritA change was merged to openstack-infra/config: Add a mailing list for Win The Enterprise WG  https://review.openstack.org/9876211:06
*** julim has joined #openstack-infra11:08
*** yjiang has quit IRC11:11
*** trinaths has joined #openstack-infra11:12
*** ildikov_ has joined #openstack-infra11:13
*** ildikov has quit IRC11:14
*** dims_ has joined #openstack-infra11:16
*** _nadya_ has quit IRC11:19
<trinaths> anteaya: hello  11:20
<enikanorov> whatsup with logstash?  "If it helps, I received a 0 error from: "  11:22
*** marios has left #openstack-infra  11:23
*** viktors|afk is now known as viktors  11:26
*** tkelsey has quit IRC  11:27
<trinaths> Got a question on gerrit change sets. how to know by command line that a specific change is merged ? any ideas  11:28
*** hashar has joined #openstack-infra11:29
*** thomasbiege has joined #openstack-infra11:31
*** thomasbiege has quit IRC11:32
*** talluri has quit IRC11:33
<sdague> is there anyone with gerrit admin here?  11:33
<sdague> anteaya: do you have that yet?  11:33
*** talluri has joined #openstack-infra  11:33
<sdague> I need to get added to - https://review.openstack.org/#/admin/groups/363,members  when there is a chance  11:34
*** srenatus has quit IRC  11:35
*** srenatus has joined #openstack-infra  11:35
*** zul has quit IRC  11:36
*** talluri has quit IRC  11:38
*** thomasbiege has joined #openstack-infra  11:42
<openstackgerrit> Thierry Carrez proposed a change to openstack-infra/infra-specs: Added specification for storyboard story tags  https://review.openstack.org/97211  11:44
*** yamahata has joined #openstack-infra  11:46
<openstackgerrit> Thierry Carrez proposed a change to openstack-infra/infra-specs: Added specification for storyboard story tags  https://review.openstack.org/97211  11:46
<openstackgerrit> Thierry Carrez proposed a change to openstack-infra/infra-specs: Added specification for storyboard story tags  https://review.openstack.org/97211  11:47
<ttx> sorry for the noise  11:47
*** yamahata is now known as tacker-owner11:47
*** matjazp has joined #openstack-infra11:48
*** tacker-owner is now known as yamahata11:49
*** _nadya_ has joined #openstack-infra11:49
*** trinaths has quit IRC11:50
*** yamahata is now known as tacker-owner11:53
*** trinaths has joined #openstack-infra11:54
*** basha_ has joined #openstack-infra11:55
*** tacker-owner has quit IRC11:55
<trinaths> fungi: hi  11:55
*** yamahata has joined #openstack-infra  11:56
*** basha has quit IRC  11:56
*** basha_ is now known as basha  11:56
*** dizquierdo has quit IRC  11:56
*** maxbit has joined #openstack-infra  11:56
*** lcostantino has joined #openstack-infra  11:57
*** weshay has joined #openstack-infra  12:01
<fungi> sdague: adding you now  12:01
<trinaths> fungi: hi  12:02
<trinaths> fungi: need some help on jenkins job builder.  12:02
<fungi> sdague: done  12:02
*** mbacchi has joined #openstack-infra  12:02
<fungi> trinaths: what's the trouble?  12:03
*** mwagner_lap has quit IRC  12:03
<trinaths> fungi: i'm using single jenkins node for devstack. now I have another node in place and its sync with jenkins master.  12:04
<trinaths> fungi: and now when there are serveral jobs in queue from zuul, I need to run 2 jobs simultaneously.  12:04
<trinaths> fungi: can you help me on how can I achieve this.? guidelines please  12:05
<trinaths> fungi: my jenkins_jobs/config/projects.yaml looks like, http://paste.openstack.org/show/83934/  12:06
*** ArxCruz has joined #openstack-infra  12:06
<fungi> trinaths: you would either need to assign some jobs to separate slave node names, or set a common slave label on them  12:07
<fungi> trinaths: for openstack infra, we use common sets of platform-specific node labels to accomplish that  12:08
<trinaths> fungi: okay. my old slave-node label is 'jenkins_slave'. for the new one I named 'jenkins_slave_2'. If I move the new one's lable to 'jenkins_slave', I can run two jobs simultaneously. right ?  12:09
<fungi> trinaths: yes, or i think you can use a boolean operator on the node line to specify multiple names  12:10
<fungi> if memory serves it would be like node: jenkins_slave || jenkins_slave_2  12:11
<trinaths> fungi: okay  12:11
*** dprince has joined #openstack-infra  12:11
<trinaths> fungi: thanks for the help  12:11
*** yaguang has quit IRC  12:12
*** basha has quit IRC  12:12
<sdague> fungi: can you also turn apache back on for logstash.openstack.org?  12:13
<fungi> trinaths: yeah, while we do use labels, we sometimes use the || operator to specify more than one label, for example http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/jenkins_job_builder/config/projects.yaml#n309  12:13
<fungi> sdague: sure  12:13
<sdague> thanks sir  12:13
*** eharney has joined #openstack-infra  12:14
<sdague> fungi: hmm... being in the ptl group doesn't put me in the core group?  12:15
<sdague> I can't seem to modify this - https://review.openstack.org/#/admin/groups/362,members  12:15
<sdague> or is this an issue about possibly having multiple ids again in gerrit?  12:15
<fungi> sdague: no, they're just separate groups  12:16
*** ihrachyshka has quit IRC  12:16
<fungi> i added you to that one too now  12:16
*** ihrachyshka has joined #openstack-infra  12:16
<fungi> you can make one included in the other or something if it suits you  12:16
*** bookwar has joined #openstack-infra  12:17
*** trinaths has quit IRC  12:17
*** CaptTofu_ has joined #openstack-infra  12:17
<fungi> sdague: and kibana should be reachable again now  12:17
*** om has quit IRC  12:19
*** aysyd has joined #openstack-infra  12:20
*** rfolco has joined #openstack-infra  12:20
*** mriedem has joined #openstack-infra  12:22
<sdague> fungi: thanks  12:24
<openstackgerrit> A change was merged to openstack-infra/devstack-gate: Fix ntp restart for Fedora  https://review.openstack.org/98288  12:25
openstackgerritA change was merged to openstack-infra/devstack-gate: Fix ntp restart for Fedora  https://review.openstack.org/9828812:25
*** mfer has joined #openstack-infra12:26
*** yamahata has quit IRC12:28
*** alexpilotti has joined #openstack-infra12:30
*** talluri has quit IRC12:31
*** talluri has joined #openstack-infra12:32
*** radez is now known as radez_g0n312:33
*** talluri has quit IRC12:37
*** tkelsey has joined #openstack-infra12:38
*** om has joined #openstack-infra12:39
openstackgerritPeter Belanyi proposed a change to openstack-infra/config: Add jshint job for tuskar-ui  https://review.openstack.org/9647312:42
*** Guest47184 has joined #openstack-infra12:44
*** CaptTofu_ has quit IRC12:48
*** ildikov has joined #openstack-infra12:52
*** ildikov_ has quit IRC12:52
<derekh_> fungi: if you have a few minutes could you see why nothing seems to be running in the check-tripleo queue  12:56
*** matjazp has quit IRC12:59
openstackgerritMatt Riedemann proposed a change to openstack-infra/config: Don't run large-ops test on stable/havana branches  https://review.openstack.org/9975013:00
*** med_ has joined #openstack-infra13:01
*** med_ has quit IRC13:01
*** med_ has joined #openstack-infra13:01
*** adalbas has joined #openstack-infra13:03
*** om has quit IRC13:05
*** HenryG has joined #openstack-infra13:07
*** russellb is now known as rustlebee13:07
openstackgerritValeriy Ponomaryov proposed a change to openstack-infra/config: Added devstack job for manila  https://review.openstack.org/9993313:11
openstackgerritMatthew Treinish proposed a change to openstack-infra/config: Add tempest jobs with nova-v3 enabled  https://review.openstack.org/9983513:17
*** hashar has quit IRC13:17
*** yolanda has quit IRC13:22
*** smarcet has joined #openstack-infra13:23
*** blamar has quit IRC13:26
*** matjazp has joined #openstack-infra13:26
*** yolanda has joined #openstack-infra13:26
*** radez_g0n3 is now known as radez13:26
*** lbragstad has joined #openstack-infra13:26
*** basha has joined #openstack-infra13:27
*** matjazp has quit IRC13:27
*** reaper has quit IRC13:28
*** CaptTofu_ has joined #openstack-infra13:28
<sdague> fungi: I also think a pretty substantial actor in our failures is rax networking  13:30
*** lbragstad has quit IRC  13:31
<sdague> the network timeouts talking to even our own pip mirror are pretty high  13:31
*** homeless has joined #openstack-infra  13:31
*** rdopiera has quit IRC  13:31
*** dkehn__ is now known as dkehnx  13:31
*** zul has joined #openstack-infra  13:32
*** dkehn_ is now known as dkehn  13:32
*** basha has quit IRC  13:32
<fungi> derekh_: i'll try, but i'm in the middle of dismantling all my servers and network, so i'm about to go dark for a bit of the day  13:33
<derekh_> fungi: ok, cool  13:34
*** rdopieralski has joined #openstack-infra  13:36
*** vhoward has joined #openstack-infra  13:37
*** dkliban_afk is now known as dkliban  13:39
<StevenK> fungi: Moving, or something more sinister?  13:41
*** malini1 has joined #openstack-infra  13:42
*** malini1 has left #openstack-infra  13:43
<fungi> StevenK: sinister moving  13:44
<StevenK> fungi: Haha  13:44
<johnthetubaguy> sdague: the issues with the mirrors came up before, do you guys have the info you need to make support talk to you, to try fix that?  13:46
<sdague> johnthetubaguy: honestly, I have no idea  13:46
<johnthetubaguy> sdague: is it talking to the ubuntu mirror, or the rackspace mirror?  13:46
<sdague> johnthetubaguy: both, as well as pypi.openstack.org  13:47
<sdague> which is hosted in rax  13:47
<johnthetubaguy> I think you have to change the fqdn of the mirror to reach hours, its not like a transaprent procsy  13:47
<johnthetubaguy> hmm, that sucks  13:47
<johnthetubaguy> between DC will just be regular internet traffic I think  13:47
<johnthetubaguy> so you might want one per DC  13:47
<johnthetubaguy> and put it onto service net, not the public network  13:47
<sdague> well, that doesn't help the hp regions  13:47
<johnthetubaguy> but agreed, you shouldn't see those problems anyway  13:48
<johnthetubaguy> right  13:48
<johnthetubaguy> sdague: how big is your server? it might be our QoS on your networking throttling things  13:48
<sdague> johnthetubaguy: I know none of these details actually  13:48
<fungi> sdague: johnthetubaguy: yeah, we've talked about quite possibly putting mirrors in each region of each provider we use  13:48
<sdague> I can just see them from the fail logs  13:48
<johnthetubaguy> sdague: true true  13:49
<fungi> i think we agree it's probably a good way to mitigate it, just need someone with available time to get it all puppeted  13:49
<johnthetubaguy> fungi: how about resize up to 30Gb instance  13:49
<johnthetubaguy> that way you get the full box of network (in standard thats 1Gb link)  13:49
<johnthetubaguy> should stop any OVS issues from noisy friends, and reduce the risk of you hitting QoS limits  13:50
<sdague> johnthetubaguy: the issue is, right now, we've got 112 race bugs we are tracking in elastic recheck  13:50
<fungi> johnthetubaguy: that's not a bad idea actually  13:50
<sdague> and seem to be adding 3 or 4 a day  13:50
*** hashar has joined #openstack-infra  13:50
<sdague> so diving really deep on any particular one is hard in the firehose :(  13:50
<johnthetubaguy> sdague: understood  13:51
*** jistr has joined #openstack-infra  13:51
*** _nadya_ has quit IRC  13:51
<johnthetubaguy> fungi: I would resize up to 30Gb standard, or snapshot and rebuild, I guess, to stop the outage, then see if that helps  13:51
<johnthetubaguy> fungi: its a slight stab in the dark, I know, but should be a quick one  13:52
*** mwagner_lap has joined #openstack-infra  13:52
<srenatus> hmm having problems up update a `contact information` on gerrit right now... known issue?  13:53
*** matjazp has joined #openstack-infra  13:54
<fungi> srenatus: the error message got more vague when we upgraded gerrit recently, but make sure you've followed the suggestions at https://wiki.openstack.org/wiki/CLA-FAQ#When_trying_to_sign_the_new_ICLA_and_include_contact_information.2C_why_am_I.27m_getting_an_error_message_saying_that_my_E-mail_address_doesn.27t_correspond_to_a_Foundation_membership.3F  13:54
<fungi> srenatus: ideally you were following the instructions at https://wiki.openstack.org/wiki/How_To_Contribute#Contributor_License_Agreement which would tell you to sign up for a free openstack foundation membership before moving onto gerrit  13:56
<srenatus> fungi: thanks for the pointer, checking that  13:56
<fungi> srenatus: the primary/preferred e-mail address you listed in your foundation profile and your gerrit settings need to match initially while you're submitting contact into in gerrit, to link the accounts  13:57
*** _nadya_ has joined #openstack-infra  13:57
*** annegentlereally has joined #openstack-infra  13:57
<fungi> srenatus: we've got efforts underway to simplify all this, so apologies for the complication there  13:57
<srenatus> :) no problems  13:57
<srenatus> everything works fine for me, anyways, just trying to get a coworker set up wit gerrit...  13:58
<srenatus> couldn't remember all the steps.  13:58
<srenatus> thanks again  13:58
<fungi> srenatus: you're welcome!  13:58
*** zz_gondoi is now known as gondoi13:59
*** yamahata has joined #openstack-infra14:00
*** yamahata has quit IRC14:01
<fungi> derekh_: the only 18 nodes i see are in the red hat region, and they're all in a delete state, most for roughly a day at this point  14:01
*** yamahata has quit IRC  14:01
*** yamahata has joined #openstack-infra  14:01
*** basha has joined #openstack-infra  14:02
*** dizquierdo has joined #openstack-infra  14:02
<derekh_> fungi: ya, we arn't using the other region at the moment, I see all of those nodes (except 1 which is error) in a running state  14:03
*** rdopieralski has quit IRC  14:03
*** thomasbiege has left #openstack-infra  14:03
<fungi> ConnectionError: HTTPSConnectionPool(host='ci-overcloud.rh1.tripleo.org', port=13000): Max retries exceeded with url: /v2.0/tokens (Caused by <class 'socket.error'>: [Errno 110] Connection timed out)  14:03
<fungi> derekh_: that looks like the probable culprit  14:03
<derekh_> fungi: is that currently happening or was it back then it tried to delete them?  14:04
<fungi> and now i have to disappear for a while... unplugging my networks here and whatnot. should be back in a couple hours, hopefully  14:04
<fungi> derekh_: that was at 2014-06-13 07:23:55,014 utc  14:04
<fungi> one of the delete attempts  14:04
<fungi> most recent mention of tripleo in the nodepool log i could find  14:05
* fungi vanishes in a puff of NO CARRIER  14:05
<derekh_> fungi: ok, I'm not seeing any api traffic at all from nodepool  14:05
<derekh_> fungi: thanks, I'll see if I can grab somebody else when their on  14:05
*** jistr has quit IRC  14:06
*** jistr has joined #openstack-infra  14:07
*** matjazp has quit IRC  14:08
*** jcoufal has quit IRC  14:08
*** blamar has joined #openstack-infra  14:11
<srenatus> just learnt that you can do interesting things by ssh'ing into review.openstack.org and using `gerrit`.  Is that ok in general or is there a prefered API to use for this data?  14:13
<sdague> srenatus: that's the documented api  14:13
<sdague> there is also now a rest api  14:13
<sdague> but it's totally legit  14:14
<anteaya> srenatus: you can only run the commands that your user is allowed given the permissions on that user  14:14
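
On the ssh/REST question, and trinaths' earlier question about checking from the command line whether a change has merged: both interfaces expose change status. A small sketch against the REST endpoint (the change number is only an example):

    # Check a change's status via the Gerrit REST API; the change number
    # is only an example. The same data is available over ssh with
    # `gerrit query --format JSON <change>`.
    import json
    import requests

    GERRIT = 'https://review.openstack.org'
    change = 95842  # example change number

    resp = requests.get('%s/changes/%d' % (GERRIT, change))
    resp.raise_for_status()
    # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI; strip it.
    data = json.loads(resp.text.split('\n', 1)[1])
    print(data['status'])  # e.g. NEW, MERGED, or ABANDONED
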
*** thedodd has joined #openstack-infra  14:15
<sdague> derekh_: is there an external to RH monitoring job on your endpoints? so that it can at least isolate that piece  14:15
*** matjazp has joined #openstack-infra  14:16
*** gondoi is now known as zz_gondoi  14:19
*** _nadya_ has quit IRC  14:20
<derekh_> sdague: nothing monitoring it, but I can access it from outside RH  14:21
<sdague> derekh_: well, if there was something that was hitting it more regularly, it might exposed the blips. The one off checks don't usually catch those.  14:21
*** CaptTofu_ has quit IRC  14:22
<sdague> especially as this seems to be a more regular occurance  14:22
*** ihrachyshka has quit IRC  14:22
<derekh_> sdague: yup agreed, we have a whole bunch of things in the CI spec to get through, thats one of them  14:22
*** reaper has joined #openstack-infra14:24
*** basha has quit IRC14:24
*** trinaths has joined #openstack-infra14:25
*** matjazp has quit IRC14:25
*** sandywalsh_ has joined #openstack-infra14:27
*** [1]trinaths has joined #openstack-infra14:27
*** zns has joined #openstack-infra14:29
*** trinaths has quit IRC14:29
*** [1]trinaths is now known as trinaths14:29
*** cp16net_ has joined #openstack-infra14:32
*** homeless has quit IRC14:33
*** basha has joined #openstack-infra14:33
*** lttrl has joined #openstack-infra14:34
*** CaptTofu_ has joined #openstack-infra14:34
*** dansmith is now known as superdan14:34
*** atiwari has joined #openstack-infra14:36
*** zns has quit IRC14:37
*** andreykurilin_ has quit IRC14:38
*** andreykurilin_ has joined #openstack-infra14:39
*** zns has joined #openstack-infra14:40
*** cp16net_ has quit IRC14:41
*** xyang1 has joined #openstack-infra14:44
*** zz_gondoi is now known as gondoi14:45
*** zhiyan_ is now known as zhiyan14:46
*** james_li has joined #openstack-infra14:46
*** basha has quit IRC14:48
*** blamar has quit IRC14:48
<anteaya> I'm answering a question on the ml about gerrit ports for firewalls, ensuring I have the right info here: https://gerrit-review.googlesource.com/Documentation/config-gerrit.html#sshd  14:50
<anteaya> we use the defaults, 29418, 22 and 8080 for the web browser  14:51
*** _nadya_ has joined #openstack-infra  14:51
<anteaya> is that accurate?  14:51
*** om has joined #openstack-infra  14:52
*** ildikov has quit IRC  14:52
*** lttrl has quit IRC  14:54
<xyang1> anteaya: question for you.  When I apply for a service account, can I use a group email address?  14:55
<openstackgerrit> Derek Higgins proposed a change to openstack-infra/config: Index the os-collect-config logs in tripleo jobs  https://review.openstack.org/99950  14:55
*** Longgeek_ has quit IRC  14:57
<anteaya> xyang1: use any email address where if an email comes in there will be a quick response  14:58
<anteaya> xyang1: however you want to organize it from your end so that there is a quick response is entirely up to you  14:58
<anteaya> so yes, if you want to use a group email, by all means do so  14:58
<xyang1> anteaya: thanks.  a group email address will allow us to respond quicker  14:59
<anteaya> fantastic  14:59
<anteaya> that is the primary concern, quick dissemination of information, quick response  14:59
*** otherwiseguy has joined #openstack-infra  15:00
<xyang1> anteaya: sure  15:00
*** lcostantino has quit IRC  15:00
*** mrodden has quit IRC  15:02
*** sarob_ has joined #openstack-infra  15:03
<anteaya> xyang1: thanks, which system is yours?  15:04
anteayaxyang1: thanks, which system is yours?15:04
*** gondoi is now known as zz_gondoi15:04
*** changbl has quit IRC15:06
*** jaypipes is now known as leakypipes15:06
*** lcostantino has joined #openstack-infra15:07
*** sarob_ has quit IRC15:07
*** zhiyan is now known as zhiyan_15:09
*** blamar has joined #openstack-infra15:10
<jogo> sdague: just chimed into your stop hacking please email  15:11
*** blamar has quit IRC15:11
*** zul has quit IRC15:12
*** blamar has joined #openstack-infra15:12
*** zul has joined #openstack-infra15:12
*** zul has quit IRC15:14
*** zul has joined #openstack-infra15:15
*** mrodden has joined #openstack-infra15:16
*** changbl has joined #openstack-infra15:19
*** yfried_ has quit IRC15:22
*** andreykurilin_ has quit IRC15:23
*** sweston has quit IRC15:23
*** matjazp has quit IRC15:23
*** andreykurilin_ has joined #openstack-infra15:24
*** zul has quit IRC15:25
*** jlibosva has quit IRC15:25
*** timrc is now known as timrc-afk15:26
*** jistr has quit IRC15:27
*** jistr has joined #openstack-infra15:27
*** xchu has quit IRC15:29
*** annegentle has quit IRC15:29
<arunkant> Can someone please approve it so that this change can merge. This has been already reviewed by infra-core and keystone-core and tried recheck..it did not re-trigger the build  15:29
<arunkant> https://review.openstack.org/#/c/95842/  15:29
<anteaya> well you have a failing test  15:30
<anteaya> so you need to look at the logs and see why the test failed first  15:30
<anteaya> then when you have fixed the failing test with a new patch, or if you have found the bug number because if failed on a bug  15:31
*** e0ne has quit IRC  15:31
<mriedem> jogo: fungi: we should promote this https://review.openstack.org/#/c/99144/ since it's masking other timeout failures  15:31
<anteaya> you can use `recheck bug <bug number>`  15:31
*** e0ne has joined #openstack-infra  15:31
<jogo> mriedem: works for me  15:31
<mriedem> jogo: and how does one find out the pass rate of a given job? because i'd bet that (check|gate)-grenade-dsvm sucks right now  15:32
*** om has quit IRC  15:32
<openstackgerrit> Joe Gordon proposed a change to openstack-infra/config: Don't run large-ops test on stable/havana branches  https://review.openstack.org/99750  15:33
<jogo> jogo.github.io/gate  15:33
<jogo> which uses graphite  15:33
<jogo> graphite.openstack.org  15:33
<arunkant> anteaya: All of failed issues were related to connection .  15:33
<jogo> and yes looks like your right the page I have uses moving average so spikes take a bit of time  15:33
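
For the pass-rate question, graphite's render API can be queried directly; a sketch along these lines, where the exact zuul metric paths are an assumption modelled on the statsd counters the gate status page uses, not verified names:

    # Rough 24-hour failure-rate query against graphite's render API.
    # The metric paths are assumptions, not confirmed counter names.
    import requests

    GRAPHITE = 'http://graphite.openstack.org/render'
    job = 'gate-grenade-dsvm'

    def total(result):
        params = {
            'target': 'summarize(stats_counts.zuul.pipeline.gate.job.%s.%s,'
                      '"24hours")' % (job, result),
            'from': '-24hours',
            'format': 'json',
        }
        data = requests.get(GRAPHITE, params=params).json()
        points = data[0]['datapoints'] if data else []
        return sum(v for v, _ in points if v)

    failures = total('FAILURE')
    successes = total('SUCCESS')
    rate = 100.0 * failures / ((failures + successes) or 1)
    print('%s failure rate over 24h: %.1f%%' % (job, rate))
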
anteayaarunkant: great, can you find a bug for that?15:34
mriedemjogo: so looking at jogo.github.io/gate it looks like the two biggest gate failures right now are the grenade jobs right?15:35
anteayaarunkant: this might be a good place to start: https://bugs.launchpad.net/openstack/15:35
mriedemfailing around 25% of the time on average15:35
*** e0ne has quit IRC15:36
jogomriedem: yeah, as I said it could be much worse since its a moving average15:36
mriedemthis is the grenade bug i keep rechecking on https://bugs.launchpad.net/grenade/+bug/131509515:36
xyang1anteaya: EMC's Cinder drivers15:36
uvirtbotLaunchpad bug 1315095 in grenade "grenade nova network (n-net) fails to start" [Undecided,New]15:36
mriedemjogo: for whatever reason we don't have the *.failure logs from the grenade jobs getting archived15:36
mriedemaccording to the logs on the failure, they don't exist15:36
*** wenlock_ has joined #openstack-infra15:37
mriedemjogo: i.e. http://logs.openstack.org/44/99144/1/gate/gate-grenade-dsvm/3be9510/logs/devstack-gate-cleanup-host.txt15:37
*** cp16net_ has joined #openstack-infra15:37
mriedem2014-06-13 14:31:00.112 | ls: cannot access /opt/stack/status/stack/*.failure: No such file or directory15:37
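
The error above is an unmatched shell glob during log collection. A minimal sketch of the kind of guard that avoids it, assuming bash; the source path is taken from the log line above, the destination directory is an assumption:

    # only copy *.failure markers if any actually exist, so an empty glob
    # doesn't produce the "ls: cannot access" error seen above
    src=/opt/stack/status/stack
    if compgen -G "$src/*.failure" > /dev/null; then
        cp "$src"/*.failure "$WORKSPACE/logs/"
    fi
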
*** CaptTofu_ has quit IRC15:37
mriedemi'll update https://etherpad.openstack.org/p/gatetriage-june201415:38
sdaguejogo: cool15:38
sdaguejogo: so I think in future hacking major releases *have* to happen the week after the openstack release, or they hold to the next window15:38
mtreinishmriedem: I thought you pushed a fix for the *failure files grab15:39
mriedemmtreinish: i did15:39
anteayaxyang1: ah you don't have a system yet then?15:39
mriedemmtreinish: but looks like the files still don't exist for some reason15:40
anteayaxyang1: can we chat about the name before you submit your request?15:40
anteayaxyang1: for instance how large is EMC? do you think you will ever need to have more than one gerrit account?15:40
anteayaxyang1: and what is the name of the EMC component you are testing?15:41
mtreinishmriedem: hmm...15:41
sdaguemtreinish: I'm not convinced the failure is always happening in a way that gets recorded15:42
xyang1anteaya: sure.  We are still setting them up.15:42
xyang1anteaya: We'll need 4 accounts, one for each product15:43
*** rwsu has joined #openstack-infra15:43
xyang1anteaya: VMAX cinder driver, VNX cinder driver, XIO cinder driver, and ViPR cinder driver15:43
sdaguemriedem: man, this might be the elusive screen dropping stuff commands issue that we've seen before15:44
sdaguewhich would suck15:44
*** CaptTofu_ has joined #openstack-infra15:46
*** matjazp has joined #openstack-infra15:47
*** bknudson has joined #openstack-infra15:48
*** bknudson has quit IRC15:48
mriedemsdague: yeah maybe15:49
mriedemhttps://bugs.launchpad.net/grenade/+bug/1315095/comments/215:49
uvirtbotLaunchpad bug 1315095 in grenade "grenade nova network (n-net) fails to start" [Undecided,Confirmed]15:49
mriedemsome notes there15:49
*** bknudson has joined #openstack-infra15:49
sdaguethere are other services failing to start. I actually have an earlier sanity check patch for this in my queue. I'll focus on fixing that today.15:49
*** rlandy has quit IRC15:49
mriedemyeah there are like 32 hits of 'fails to start'15:50
*** matjazp has quit IRC15:50
anteayaxyang1: okay so I will suggest 4 names `EMC VMAX CI`, `EMC VNX CI`, `EMC XIO CI`, and `EMC ViPR CI`15:50
*** mbacchi has quit IRC15:50
anteayaxyang1: let's leave cinder out of the names15:50
anteayait is a shame they are all acronyms, the all caps gets irritating15:51
mriedemWHY I DO NOT SEE THE ISSUE15:51
anteayaright like that15:51
*** UtahDave has joined #openstack-infra15:51
anteayabut acronyms are shorter15:51
mriedemi had a girlfriend in college that wrote her emails in all caps15:51
mriedemwe didn't date for long...15:51
anteayaha ha ha15:51
mriedemfelt like i was being yelled at constantly15:51
anteayayup15:51
*** krtaylor has quit IRC15:52
anteayahaving multiple ci systems yelling at you doesn't go over well15:52
*** amcrn has quit IRC15:52
anteayabut I can't stop companies from using acronyms15:52
anteayaxyang1: so that is what I suggest15:52
*** pblaho has quit IRC15:52
xyang1anteaya: sure. thanks for the suggestion15:52
anteayaxyang1: and we will create usernames in lowercase with hyphens to match15:52
anteayaxyang1: it will help us process your request faster15:53
*** CaptTofu_ has quit IRC15:53
*** zul has joined #openstack-infra15:53
xyang1anteaya: ok15:53
anteayaxyang1: so the usernames will be `emc-vmax-ci`, `emc-vnx-ci`, `emc-xio-ci`, and `emc-vipr-ci`15:53
anteayaand thanks15:53
xyang1anteaya: we'll need to give you email address and ssh key before those names are accepted?15:54
anteayawe need the ssh key to create the accounts, so yes15:54
anteayaand put the email address in as well since that is the current process but that will soon change15:55
xyang1anteaya: can we change the email address later after the account is created?15:55
anteayaI'm working on wikipage templates so you can list the emails yourself and modify them as you see fit15:55
xyang1anteaya: ok, that's good15:55
anteayaxyang1: yes, the wikipages should be available soon15:56
xyang1anteaya: thanks!15:56
*** alkari has joined #openstack-infra15:56
anteayaI'm working on them, I don't want to release them early since that would mean I would have to change 60 wikipages by hand after the fact15:56
anteayaxyang1: thanks for being available15:56
xyang1anteaya: you are welcome.15:56
anteayaxyang1: irc is the easiest way to find me and ask me any questions since it saves me a ton of work after the fact15:57
anteaya:D15:57
xyang1anteaya: you can find me on cinder irc too15:57
anteayagreat15:57
anteayathank you15:57
xyang1anteaya: thanks15:58
*** gyee has joined #openstack-infra15:58
*** basha has joined #openstack-infra15:58
*** basha has quit IRC15:58
*** basha has joined #openstack-infra15:59
*** zns has quit IRC16:00
*** freyes has quit IRC16:00
*** arnaud has joined #openstack-infra16:00
*** arnaud has quit IRC16:01
*** arnaud has joined #openstack-infra16:01
*** todd_dsm has joined #openstack-infra16:01
*** zns has joined #openstack-infra16:01
*** wenlock_ has quit IRC16:03
*** markwash has joined #openstack-infra16:04
*** dhellman_ has joined #openstack-infra16:05
*** todd_dsm has quit IRC16:05
*** dims_ has quit IRC16:07
*** krtaylor has joined #openstack-infra16:07
*** wenlock_ has joined #openstack-infra16:08
*** dims_ has joined #openstack-infra16:08
*** reed has joined #openstack-infra16:11
*** tcammann has quit IRC16:11
mordredanteaya: we don't use 22 or 8080 for gerrit for people - only 29418 and 44316:12
*** marcoemorais has joined #openstack-infra16:12
*** zns has quit IRC16:12
*** tcammann has joined #openstack-infra16:12
*** derekh_ has quit IRC16:13
*** zns has joined #openstack-infra16:13
*** zul has quit IRC16:14
*** habib has quit IRC16:15
anteayamordred: thank you I will respond to the email16:15
*** CaptTofu_ has joined #openstack-infra16:16
openstackgerritA change was merged to openstack-infra/elastic-recheck: have realtime engine only search recent indexes  https://review.openstack.org/9977616:18
*** lcostantino has quit IRC16:20
*** lcostantino has joined #openstack-infra16:20
*** zns has quit IRC16:22
*** zns has joined #openstack-infra16:24
reedttx, where is the classification you mention in programs.yaml?16:24
reedttx: quoting your message to the mlist "I think they can go to your "other" category. That's how they are classified in programs.yaml." but I don't see such classification there16:25
*** comstud is now known as bearhands16:25
*** leakypipes has quit IRC16:26
*** afazekas_ has quit IRC16:30
*** salv-orlando has joined #openstack-infra16:32
*** mbacchi has joined #openstack-infra16:34
*** ominakov has quit IRC16:35
*** viktors has quit IRC16:35
anteayapaste.o.o 500'd on me again16:38
trinathsanteaya: hi16:39
*** annegentlereally has quit IRC16:39
anteayahello trinaths16:39
anteayatrinaths: was there something on your mind?16:40
trinathsanteaya: yes. got a quick question.16:40
anteayado share16:40
trinathsanteaya: regarding CI log repository16:40
*** todd_dsm has joined #openstack-infra16:41
anteayawell the logs aren't a repository16:41
anteayaa repository is something that is under version control16:41
*** vponomaryov has left #openstack-infra16:41
trinathsanteaya: for what period do we need to store the logs.. (okay..)16:41
anteayaall of our projects are repositories since they are under git version control16:41
trinathsanteaya: mistaken.. corrected16:41
*** Ryan_Lane has joined #openstack-infra16:42
*** Ryan_Lane has joined #openstack-infra16:42
anteayait was decided that 3rd party ci systems store their logs for one month16:42
*** amcrn has joined #openstack-infra16:42
*** e0ne has joined #openstack-infra16:42
*** ramashri has joined #openstack-infra16:42
trinathsanteaya: but, we have a change which was in review for 2 months and did not get merged.16:42
trinathsanteaya: (just for scenario)16:42
anteayaI'm all for storing the logs for longer16:43
anteayabut I got push back from other ci folks16:43
trinathsanteaya: if we delete the old logs, how can the owner check the old logs.16:43
anteayaI can't remember what room I was in at the summit but I do remember agreement on one month16:43
anteayatrinaths: very good point16:43
trinathsanteaya: but what will be the use, if we store logs for 'merged' changes??16:43
anteayaif I had my way we would store all the logs forever16:43
anteayatrinaths: I don't understand your last question16:44
trinathsanteaya: say a change is merged to the master branch. now what is the use of the logs with the CI?16:45
anteayathat change got merged16:45
trinathsanteaya: yes16:45
anteayathe ci logs are the only logs that show what effect that change had on your driver or plugin16:45
trinathsanteaya: true16:45
trinathsanteaya: agree. but 3rd party CI tested it and posted success.16:46
anteayaso if there is a bug someone spots, they need to see the third party logs to see if it introduced the bug, or if it had an effect16:46
anteayawe revert code all the time16:46
anteayasince it can pass the tests and still introduce a bug16:46
anteayaif we have a race condition or something we don't test for16:47
*** zul has joined #openstack-infra16:47
*** e0ne has quit IRC16:47
trinathsanteaya: okay, good. but then our one month limit on logs will not work in this scenario.16:47
anteayaand the way to find it is to work backwards16:47
anteayaI'm not arguing with your logic16:47
anteayabut I am saying that a limit was agreed to16:47
anteayaand the limit was one month16:47
trinathsanteaya: yes. but I just want to make my point clear16:47
anteayasave them all if you can, I would really appreciate that16:47
anteayayes, you clearly have identified a failure in the logic of the limit16:48
anteayaI see that16:48
anteayaand I am not disputing that16:48
trinathsanteaya: okay.16:48
devanandasdague: did your change to make gate-tempest-dsvm-virtual-ironic non-voting put ironic and diskimage-builder in a separate merge queue, or did something else do that? or am I smoking crack?16:48
anteayabut in my role of ensuring compliance for over 50 systems, a limit was agreed to at the last summit - one month16:48
*** dangers_away is now known as dangers16:48
anteayatrinaths: if you wish the limit to be changed, do start a thread on the ml stating your position16:49
*** thedodd has quit IRC16:49
anteayaif the community agrees that third party ci logs should be stored for a duration different than one month then that is what I will communicate and ensure16:49
*** saper has joined #openstack-infra16:49
trinathsanteaya: okay. one month of log storage is good. but when a change is in review for 2 months, the owner of that change may not see the old logs from the 3rd party CI16:50
anteayayes, that is the current situation, you are correct16:50
*** dizquierdo has quit IRC16:51
trinathsanteaya: also, in the scenario you described regarding bugs and backward checks, when we lose the logs after 1 month it will not help the backward check.16:53
*** hashar has quit IRC16:53
trinathsanteaya: I felt storage was an issue here, hence I shared it with you.16:54
anteayatrinaths: I'm grateful we are discussing it16:54
anteayayes, storage is the limiting factor in log retention16:55
anteayafor us as well16:55
anteayawe keep 6 months of logs, we would like to keep more, but we have too many logs16:55
trinathsanteaya: true. agree16:56
reedtrinaths, the issue you raise is important, probably a thread on the mlist is good16:56
*** wenlock_ has quit IRC16:56
*** dhellman_ has quit IRC16:56
reedit's a topic related somewhat to the ongoing discussion about compute quota16:56
trinathsreed: may I mail this to the infra list?16:57
reedtrinaths, openstack-dev seems more appropriate16:57
*** bknudson has left #openstack-infra16:57
trinathsreed: okay16:57
anteayatrinaths: yes use the tag [3rd party] in the email subject line and email to -dev16:57
anteayait is okay to cc the infra list too if you want16:58
*** hemna_ is now known as hemna16:58
anteayabut the conversation will take place on -dev16:58
*** sweston has joined #openstack-infra16:58
*** james_li has quit IRC16:59
trinathsanteaya: okay. preparing the mail.16:59
anteayakk17:00
*** fbo is now known as fbo_away17:01
fungimriedem: jogo: does 99144 still merit promoting?17:02
clarkbo/17:03
clarkbfungi: sdague: after talking to dstufft last night I am much more on board with spinning up an ES 07 and rotating 01 out17:04
*** dims_ has quit IRC17:04
clarkbfungi: what do you think?17:04
vishyso I’m having trouble seeing what went wrong here: http://logs.openstack.org/54/93754/6/check/check-grenade-dsvm/7f76538/console.html17:04
*** sarob_ has joined #openstack-infra17:04
vishyit looks like ceilometer-dbsync randomly died in the middle17:04
fungiclarkb: oh, excellent. i missed a lot of scrollback (didn't actually sleep at all last night, stayed up until dawn migrating the last of my servers into rackspace)... what's the tl;dr?17:04
*** todd_dsm has quit IRC17:05
*** harlowja_away is now known as harlowja17:06
clarkbfungi: wow that's dedication. tl;dr is pypi gluster servers ran into similar issues and they worked around it by spinning up new nodes17:06
clarkbfungi: so 1 it makes me feel less crazy about blaming the cloud and 2 that workaround is known to work with a small sample size17:06
*** shayneburgess has joined #openstack-infra17:06
fungiclarkb: not so much dedication as deadline. junk removal was scheduled to come today during lunchtime so i could get it all hauled off17:07
clarkbvishy: it says the main setup script failed so http://logs.openstack.org/54/93754/6/check/check-grenade-dsvm/7f76538/logs/old/devstacklog.txt.gz should be the place to look17:07
clarkbvishy: but I don't see anything there either17:07
*** markwash has quit IRC17:08
fungiclarkb: so the pypi crew saw i/o issues isolated to a single node in their cluster, and it went away when they replaced the problem node? that definitely sounds like a misbehaving neighbor if so17:08
shayneburgessquick question, if someone can help. I am trying to submit my first patch for review and I get the following error from “git review”: fatal: ICLA contributor agreement requires current contact information.17:08
shayneburgessPlease review your contact information:17:08
shayneburgess  https://review.openstack.org/#/settings/contact17:08
shayneburgessfatal: Could not read from remote repository.17:08
*** sarob_ has quit IRC17:08
clarkbfungi: yes and rax doesn't guarantee IOPS unless you spend lots of money or some such17:08
vishyclarkb: I added it to here https://bugs.launchpad.net/ceilometer/+bug/1221580 since it looks similar17:08
trinathsanteaya: done.17:08
uvirtbotLaunchpad bug 1221580 in ceilometer "devstack's call of ceilometer-dbsync gets stuck after migration 6 -> 7" [Medium,Triaged]17:08
*** lcostantino has quit IRC17:08
clarkbfungi: and we seem to be hitting the same problems17:08
shayneburgessWhen I go to that url and update my information it gives a server error17:09
clarkbshayneburgess: if you open that link and update your contact info does the error go away?17:09
fungiclarkb: makes total sense in that case, yes17:09
shayneburgessclarkb: It gives the following error: Server Error Cannot store contact information17:09
clarkbshayneburgess: ah ok. The thing to check in that case is that you are an openstack foundation member, and that your primary email for that account matches the primary email for your gerrit account17:10
*** pelix has quit IRC17:10
mesteryanteaya: Sent the email to openstack-dev around 3rd party testing, sorry it took a few days longer than I originally thought.17:10
shayneburgessok. i think that both of those are true. Verifying now17:10
mesteryanteaya: I put deadlines in there too.17:10
anteayamestery: awesome thank you17:10
mesteryanteaya: Also highlighted the weekly third party meeting you host, hope to get more people to attend that and share experiences and ask questions.17:11
shayneburgessclarkb: I have validated the same email address for both accounts: shayne.burgess@hp.com17:11
clarkbfungi: great, I will start spinning up a new es07 node shortly. did that puppet change merge to not puppet 07 when it first comes up?17:11
clarkbfungi: then once everything is happy on 07 I will kill 01 but keep it around so that we can run bonnie++ and do other sanity checking17:12
*** todd_dsm has joined #openstack-infra17:12
fungiclarkb: i believe so... looking17:12
*** mmaglana has joined #openstack-infra17:12
*** rcarrill` has joined #openstack-infra17:13
anteayamestery: great thank you17:13
*** wenlock_ has joined #openstack-infra17:13
clarkbshayneburgess: I just updated my contact info to make sure it wasn't something like a dead server and it updated for me17:14
*** Ryan_Lane1 has joined #openstack-infra17:14
clarkbshayneburgess: are you sure the account at https://www.openstack.org/profile/ matches gerrit?17:14
clarkbshayneburgess: note it must be the primary email addresses17:14
*** ramashri has quit IRC17:14
trinathsanteaya: got my mail ?17:15
anteayatrinaths: I have many emails, and yes I read yours17:15
*** rcarrillocruz has quit IRC17:16
anteayayou outline your understanding but I am foggy on the actual question you are asking17:16
shayneburgessclarkb: checking17:16
anteayalet it steep for the weekend17:16
shayneburgessclarkb: that fixed it. I had the wrong email address on this form: https://www.openstack.org/profile/17:17
trinathsanteaya: okay. is my mail not clear?17:17
shayneburgessthanks for your help!17:17
*** rcarrill` has quit IRC17:18
anteayatrinaths: i think it is more of a cultural thing17:19
*** arnaud has quit IRC17:19
anteayatrinaths: my sense is you come from a culture that uses the third person when discussing topics and I come from a culture where I use the first person17:19
anteayatrinaths: so it is more of a cultural difference than a lack of facts17:20
anteayawe are learning to communicate with each other, you and I17:20
*** davidlenwell is now known as davidlenwell_17:20
*** davidlenwell_ is now known as davidlenwell17:20
anteayaand patience on both of our sides helps17:20
anteayathank you for your patience with me17:20
trinathsanteaya: okay17:21
*** rcarrillocruz has joined #openstack-infra17:21
*** ramashri has joined #openstack-infra17:23
*** markwash has joined #openstack-infra17:24
trinathshappy weekend. bye all17:24
*** trinaths has quit IRC17:24
mriedemfungi: sdague: i'd say yeah https://review.openstack.org/#/c/99144/ merits promoting since it's masking other failures, but i'm not pushing it17:25
*** _nadya_ has quit IRC17:26
*** _nadya_ has joined #openstack-infra17:26
clarkbfungi: `cinder absolute-limits` and `cinder quota-show` are failing for me. How did you check that quota?17:26
fungimriedem: too late. i already promoted it a little while ago and it's ~45 minutes from merging :/17:27
mriedemfungi: works for me :) was at lunch17:27
*** Guest47184 has quit IRC17:28
*** melwitt has joined #openstack-infra17:29
*** markmcclain has joined #openstack-infra17:30
*** marcoemorais has quit IRC17:30
*** marcoemorais has joined #openstack-infra17:30
*** praneshp has joined #openstack-infra17:31
* clarkb tries upgrading cinder client in a different venv17:31
*** amcrn_ has joined #openstack-infra17:34
clarkbfungi: and you are comfortable adding another 1TB volume to a rapidly shrinking volume quota?17:34
clarkbfungi: looks like we have ~2TB free currently?17:34
clarkb(new cinder fixed the errors for me)17:35
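
A rough sketch of the quota check being done here, assuming a throwaway virtualenv with an up-to-date client; the venv path is arbitrary and the exact field names in the output may vary by release:

    # install a fresh python-cinderclient in an isolated venv and inspect
    # the block-storage limits for the current tenant
    virtualenv /tmp/cinderenv
    /tmp/cinderenv/bin/pip install -U python-cinderclient
    # absolute-limits reports values such as maxTotalVolumeGigabytes and
    # totalGigabytesUsed, which together give the remaining volume quota
    /tmp/cinderenv/bin/cinder absolute-limits | grep -i gigabytes
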
*** amcrn has quit IRC17:36
mordredclarkb: woot17:36
clarkbmordred: do you want a tl;dr on es situation? or have you been able to follow scrollback17:36
mordredclarkb: I think I've been able to follow the scrollback17:36
clarkbcool17:36
sdaguedevananda: I don't know about the non voting change doing that or not17:37
anteayaso what is our current direction for addressing the gate distress?17:37
clarkbI am ready to boot a node and create a volume if other folks are generally happy with that plan given the quota situation17:37
mordred++17:37
anteayaclarkb: go you17:38
anteayaclarkb: that is for the es situation and won't affect the gate situation, is that correct?17:39
clarkbcorrect17:39
*** markmc has quit IRC17:39
anteayakk, happy es gives waiting devs something to do17:39
clarkbanteaya: I haven't caught up on scrollback myself. Is the gate situation the lack of nodes?17:40
*** markwash has quit IRC17:40
mordredclarkb: that situation has to do with problems deleting, right?17:41
anteayayes17:41
anteayamore headed for delete state all the time17:41
sdagueyeh, the deleting trend line means we'll be at 100% deleting by mid next week I think17:41
anteayathere hasn't been much conversation around it today since you just showed up and fungi has been busy most of the morning17:41
mordredso - this is a problem with _both_ clouds?17:42
anteayarax17:42
sdaguemordred: I don't know17:42
anteayathis is a rax cloud issue17:42
mordredyah. so at some point I'd expect rax to be at 100% deleting and our only source of nodes to be hp, yeah?17:42
anteayagilliard: was in earlier asking if he could access nodepool info so he could assess things from his end17:42
*** ildikov has joined #openstack-infra17:43
sdaguebut I'd love clarkb to stay focussed on ES because we're pretty handicapped on addressing the right race bugs until that's working again17:43
mordredanteaya: ah - ok. yes- I can get him as much info as he wants17:43
mordredsdague: ++17:43
*** zul has quit IRC17:43
anteayamordred: he wants a wall with nodepool updates on it17:43
*** zul has joined #openstack-infra17:43
sdagueand we're down 3 infra core today, as it's a holiday in russia, fungi is moving, and jeblair is still out17:43
*** maxbit has quit IRC17:43
*** todd_dsm has quit IRC17:43
anteayaat least that is what I got from what he was posting in scrollback17:43
sdagueso mordred, if you could look into the deleting issue, that would be huge17:43
mordredanteaya: can you expand on that?17:44
mordredanteaya: "wall with nodepool updates" ?17:44
*** zzelle has joined #openstack-infra17:44
anteayamordred: I will find the statement in the logs17:44
sdaguemordred: today nodepool status is something only infra core can get, correct?17:44
clarkbmordred: I think tail -f of the logs17:44
clarkbsdague: yes because of leakage of stuff17:44
anteaya2014-06-13T09:50:36  <gilliard> Hi all - is there something wrong with nodepool?  Looks like >50% instances are "deleting"17:44
clarkbthere was that big discussion with morganfainberg about how we dump data into logs we shouldn't17:45
sdagueclarkb: ok.17:45
mordredanteaya: gotcha17:45
anteaya2014-06-13T10:06:09  <gilliard> I haven't any way to investigate nodepool itself but if anyone suspects problems with HP instances just give me a shout :)17:45
clarkbok booting elasticsearch07 now17:45
anteaya2014-06-13T10:29:58  <gilliard> sdague: OK. Do you have a graphite link that splits up the nodepool data by cloud?17:45
mordredanteaya: k. from the graphs I'm looking at, hp looks healthy17:45
anteayaI'll stop now, but that is the idea17:45
mordredyah17:45
*** gyee has quit IRC17:45
*** todd_dsm has joined #openstack-infra17:46
anteayayeah, I hadn't heard anything from hp17:46
anteayafortunately17:46
*** markwash has joined #openstack-infra17:47
*** chuck__ has joined #openstack-infra17:47
*** markwash has quit IRC17:47
*** zul has quit IRC17:47
*** sweston has quit IRC17:47
fungiclarkb: yeah, we have just under 2tb available, so if we need to temporarily dip a tb into that for a node swap over a weekend and into next week it's not a problem i think17:48
fungisdague: yeah i hate that i'm not being more helpful. i'll be putting in some serious extra effort once i'm settled in at the far end of the move17:49
clarkbfungi: no problem. and thanks17:49
openstackgerritMichael Krotscheck proposed a change to openstack-infra/storyboard-webclient: [WIP] Added search interface  https://review.openstack.org/9997517:50
sdaguefungi: no problem, life is allowed to happen :)17:50
sdagueI was mostly just prodding mordred to take ownership on the delete side, as we are definitely down a lot of capacity now17:51
*** todd_dsm has quit IRC17:51
*** markwash has joined #openstack-infra17:51
anteayayes, there are times when we need mordred and now is one of them17:51
fungiis graphite struggling or is this just the crappy wireless access point i switched to in the past couple hours?17:53
fungii guess it's just me. i can directly load stuff from graphite but the images embedded in the zuul status page are broken for me17:53
fungiand now they're suddenly better again. whatever17:54
*** nati_ueno has joined #openstack-infra17:54
fungibut yeah, the delete accumulation is getting pretty bad. i can try to burn down some of those but not sure if it will be more than a very temporary relief17:54
mordredsdague, fungi: I saw discussion with comstud in here yesterday on delete - is there an action item from our side we need to be doing?17:55
*** praneshp has quit IRC17:55
*** sweston has joined #openstack-infra17:56
fungimordred: i think the main takeaway there is that there's no point in retrying a delete in nodepool unless the node is still showing as "active" in nova (as nova delete calls on an instance which is in deleting or error state are entirely ignored and therefore wasted effort)17:56
sdaguesomeone needs to follow up with rax and actually get them to cleanup all the errored deleting nodes17:56
sdaguebecause it can't be done by us17:57
sdaguethere is also possibly a nova fix for this17:57
sdaguebut that's long term17:57
fungiyeah, for the "nova can't delete nodes which went into error state after the delete call" bug17:57
sdagueright17:57
sdaguebut that has to go through code review, land, and get into rax cloud. So that's not a near term fix17:58
fungiwe currently have 353 nodes which nodepool believes it has been trying to delete for more than one hour. i'm going to set a parallel fire on these to try and clear away as many as i can17:58
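
A rough sketch of what a parallel batch cleanup like that can look like; stuck-nodes.txt is a hypothetical file of node ids previously pulled out of `nodepool list`:

    # re-issue deletes for every stuck node id, up to ten at a time;
    # nodepool delete takes a single node id per invocation
    xargs -n1 -P10 nodepool delete < stuck-nodes.txt
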
clarkbok volume created, node booted. time to do the cinder nova + lvm dance17:59
clarkbthis should be fun17:59
sdaguefungi: all in rax?17:59
sdagueor are some in hp?17:59
fungisdague: most of rax is in that state, but also some in hp18:00
fungiwe already know that some percentage of nova delete calls simply fall on the floor but work on the second (third, fourth, fifth) retry18:00
reedit's Community Office Hour time :) if you want to talk, I'm here18:00
*** _nadya_ has quit IRC18:00
clarkb"attaching"18:00
clarkbwoot in use18:00
*** slagle has joined #openstack-infra18:01
*** melwitt has quit IRC18:01
*** melwitt has joined #openstack-infra18:01
*** todd_dsm has joined #openstack-infra18:02
mordredclarkb: woot18:02
*** praneshp has joined #openstack-infra18:03
clarkbfungi: mordred: looks like I need to vgcreate main. Are we using any special options on that like zeroing?18:04
clarkbI don't think we need zeroing for elasticsearch (data is public anyways)18:04
mordredclarkb: I don't think so?18:05
openstackgerritMichael Krotscheck proposed a change to openstack-infra/storyboard-webclient: [WIP] Added search interface  https://review.openstack.org/9997518:05
fungiclarkb: i haven't been, no18:05
*** todd_dsm has quit IRC18:06
*** todd_dsm has joined #openstack-infra18:06
*** annegentlereally has joined #openstack-infra18:08
*** chuck__ is now known as zul18:09
*** zul has quit IRC18:09
*** zul has joined #openstack-infra18:09
clarkbInsufficient free extents (262143) in volume group main: 262144 required <- I only need one more free extent :/18:09
clarkbI guess I lvcreate -l18:10
*** zehicle_at_dell has joined #openstack-infra18:10
*** CaptTofu_ has quit IRC18:12
*** gyee has joined #openstack-infra18:13
fungiclarkb: yeah if i'm using the entire pv i just call out the extent count explicitly18:13
*** annegentlereally has quit IRC18:14
fungiclarkb: though you can also do 100%FREE notation for size i think18:14
mordredclarkb: can't you just get another extent from wal-mart?18:14
clarkbyeah I used -l100%FREE18:14
*** arnaud has joined #openstack-infra18:14
fungimordred: they only sell extents in thousand-packs there18:14
*** mmaglana has quit IRC18:15
clarkbwaiting for a filesystem to be created18:15
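
For reference, the volume dance being worked through here looks roughly like the following; the device path, volume group and logical volume names, and the mount point are assumptions, and only the -l 100%FREE trick comes directly from the conversation:

    # hand the attached cinder volume to one VG, then give the whole VG to a
    # single LV; -l 100%FREE sidesteps the off-by-one extent error that an
    # exact -L size runs into
    vgcreate main /dev/xvdb
    lvcreate -l 100%FREE -n elasticsearch main
    mkfs.ext4 -m 0 /dev/main/elasticsearch
    echo '/dev/main/elasticsearch /var/lib/elasticsearch ext4 defaults,noatime 0 2' >> /etc/fstab
    mkdir -p /var/lib/elasticsearch
    mount /var/lib/elasticsearch
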
*** maxbit has joined #openstack-infra18:15
sdagueclarkb: the optimized version of the bot is landed18:16
clarkbsdague: thanks18:16
sdaguehopefully that helps a bit18:16
*** CaptTofu_ has joined #openstack-infra18:17
clarkbin theory ES can start rebalancing soonish. Just need filesystem. reboot. merge puppet change and puppet all the things18:17
clarkbI should write that puppet change18:17
*** ihrachyshka has joined #openstack-infra18:17
*** annegentlereally has joined #openstack-infra18:17
*** james_li has joined #openstack-infra18:18
*** amcrn_ has quit IRC18:18
*** flaper87 is now known as flaper87|afk18:20
openstackgerritClark Boylan proposed a change to openstack-infra/config: Add elasticsearch07  https://review.openstack.org/9998018:20
clarkbI need to create DNS records before that should merge18:20
clarkboh hey the launch script does the /opt dance now18:22
clarkbthere is fun stuff in fstab18:22
clarkbI am going to remove the swap entry in fstab though to be like the other nodes18:22
sdagueclarkb: fyi, I'm going to pumpkin soon. Need to drive to VT for family activities for the weekend. Any other things you will need from me before doing so?18:24
jerryzfungi: about log retention, could we keep logs of a failed job longer and logs of a successful job shorter? just curious.18:26
anteayajerryz: is this in response to the ml thread?18:27
anteayathere is a ml thread about log retention for third party ci systems18:28
anteayais your question motivated by that thread?18:28
jerryzanteaya: just provoked by it18:28
anteayagreat18:28
clarkbsdague: I don't think so, have fun18:28
anteayawell to keep the conversation all in one place, it would be best if you could reply to the thread18:28
clarkbsdague: at this point its massaging es onto a new host18:28
anteayaright now it is log retention for one month for third party ci systems regardless of log build outcome18:29
sdagueclarkb: cool18:29
anteayaif you want to discuss that, it is best if you reply to the email so that trinaths gets a feeling for how other people feel18:29
sdaguedid you do an ssd volume?18:29
jerryzanteaya: not necessarily for 3rd party CI, but all the logs.18:29
sdagueor is that a later attempt of black magic18:29
clarkbsdague: later attempt18:30
jerryzanteaya: because not just a review, but also a bug may stay unattended or under discussion for a long time18:30
clarkbsdague: I want to make a like for like copy to reduce variables18:30
mordredclarkb: at some point, not right now, I'd love to capture "massaging ES onto a new host"18:30
anteayajerryz: well the thread is addressing third party ci systems log retention18:31
anteayaopenstack log retention is 6 months18:31
clarkbmordred: it's pretty straightforward: boot node, edit DNS records, attach volume and do all of the necessary steps (mkfs, fstab, create mount point and user:group for that location), update firewalls, run puppet18:31
clarkbmordred: iptables and volume stuff being the clunkiest18:31
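
Put together, the sequence described above is roughly the following sketch; the launch script invocation, volume size, and names are assumptions based on the conversation, the uuids are placeholders, and the DNS records are edited in the provider dashboard rather than from the shell:

    # boot the replacement node via the infra launch script (invocation assumed)
    ./launch/launch-node.py elasticsearch07.openstack.org
    # create and attach the 1TB data volume
    cinder create --display-name elasticsearch07-data 1024
    nova volume-attach <server-uuid> <volume-uuid>
    # ...then the vgcreate/lvcreate/mkfs/fstab steps sketched earlier, merge
    # the puppet/firewall change for the new host, and finally on the node:
    puppet agent --test
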
mordrednod18:31
clarkbok rebooting es07 now to make sure everything comes up18:32
jerryzanteaya: so now third party CI can archive logs to static.o.o ? i still remembered they archived in their own locations18:32
anteayajerryz: no18:32
anteayajerryz: third party ci systems archive to the url they provide in their gerrit comments18:32
anteayawhere did you get that they archive to static.o.o based on what I just said?18:33
anteayaI'm confused18:33
*** _nadya_ has joined #openstack-infra18:33
jerryzanteaya: i thought 3rd party ci maintain their own log server18:33
anteayayes they do18:34
anteayaand the url for their log server is provided in every gerrit comment18:34
*** ociuhandu has quit IRC18:34
jerryzanteaya: where does the one month period apply, then?18:35
*** todd_dsm has quit IRC18:35
anteayafor the log retention of the third party ci systems18:35
*** talluri has joined #openstack-infra18:35
clarkbfungi: mordred ok es07 is rebooted, DNS records created. anyone else want to login to that machine and check things before I self approve https://review.openstack.org/#/c/99980/18:36
clarkbalso only 20 changes from 100k18:36
anteayaso if I click the link in your gerrit comments I need to be able to access those logs on your server for 30 days18:36
jerryzanteaya: so 3rd party CI need to archive a minimum of one month of logs18:36
anteayayes18:36
anteayaexactly right18:36
anteayaif you want to archive more, that is great18:37
anteayabut right now we only require one month18:37
anteayaand for me one month == 30 days18:37
jerryzanteaya: and trinath wants it to be longer18:38
anteayaand trinaths has a ml post up to discuss having a longer limit18:38
anteayayes18:38
anteayathat is my understanding of his position18:38
clarkbmordred: fungi: I am going to find breakfast/lunch and give you all a few minutes to look at things if you are able18:38
anteayaand for the limit to change requires input from the community18:38
fungiclarkb: i'll poke around18:38
clarkbthanks18:38
openstackgerritA change was merged to openstack-dev/hacking: Fix a typo in HACKING.rst  https://review.openstack.org/9843518:40
jerryzanteaya: sure. my advice still kinda solves the conflict between more data and less storage. unless there is still some value in successful logs i can't think of.18:40
sdagueclarkb: yeh, I'm surprised someone hasn't botted to snag it :)18:40
clarkbsdague: I will shame anyone if they bot it18:40
clarkber shame that someone18:40
anteayajerryz: advise on the mailing list18:41
sdagueI think someone did for the last counter roll over18:41
fungisdague: i'm more interested in when we hit the one millionth review18:41
anteayajerryz: but keep in mind the limit will be the same regardless of log build outcome18:41
fungiwe're just about 10% of the way there!18:41
anteayawe aren't having one limit for success and a different limit for failure18:41
*** dims_ has joined #openstack-infra18:41
reedgentle ping for this review https://review.openstack.org/9948118:42
*** _nadya_ has quit IRC18:45
*** tkelsey has quit IRC18:45
*** shayneburgess has quit IRC18:46
*** maxbit has quit IRC18:47
*** otherwiseguy has quit IRC18:49
*** chuck__ has joined #openstack-infra18:49
*** reaper has quit IRC18:49
openstackgerritK Jonathan Harker proposed a change to openstack-infra/infra-specs: Put the puppet modules in their own projects/repos  https://review.openstack.org/9999018:49
fungiclarkb: the machine and the change to bring it into service look like they're in good shape. here's hoping this gets elasticsearch caught back up18:51
*** zul has quit IRC18:51
* sdague would do a happy dance if that happens18:52
anteayaoh my look at all that blue in the node graph18:52
sdagueanteaya: yeh, I was about to say18:52
anteayaall those 'in use' nodes18:53
sdagueyay capacity18:53
* anteaya does a happy dance18:53
fungithat's my mass delete bearing fruit, apparently18:53
*** _nadya_ has joined #openstack-infra18:54
anteayawoooooo18:54
*** ihrachyshka has quit IRC18:54
anteayafungi: thanks for your magic scripts18:54
*** CaptTofu_ has quit IRC18:55
*** CaptTofu_ has joined #openstack-infra18:55
clarkbfungi: thanks I am going to go ahead and start applying things then18:57
*** todd_dsm has joined #openstack-infra18:58
*** ihrachyshka has joined #openstack-infra18:58
clarkbok pupept change is approved18:58
lifelessclarkb: you aware of the tripleoci downness?19:00
lifelessis it still ongoing?19:00
*** dims has joined #openstack-infra19:00
clarkbI am not aware. sorry19:00
clarkbI was asked earlier to focus on elasticsearch19:00
*** shayneburgess has joined #openstack-infra19:01
*** dprince has quit IRC19:01
fungilifeless: when derekh asked about it earlier, it appeared that all nodes in the red hat region were in a delete state in nodepool, and it had been hours since nodepool had last tried to delete them (saw some tracebacks returned from nova in the log which i summarized in channel)19:01
*** cody-somerville has quit IRC19:01
bnemecHey infra folks, the dib-utils project has been created and according to the instructions I'm following I need to ask someone here to add me to dib-utils-core so I can get the review groups set up correctly.19:01
fungibnemec: done19:02
*** dims_ has quit IRC19:03
bnemecfungi: Awesome, thanks!19:03
*** mbacchi has quit IRC19:06
openstackgerritA change was merged to openstack-dev/hacking: Mark hacking as being a universal wheel  https://review.openstack.org/9952819:07
*** alexpilotti_ has joined #openstack-infra19:08
*** e0ne has joined #openstack-infra19:08
*** alexpilotti has quit IRC19:11
*** alexpilotti_ is now known as alexpilotti19:11
openstackgerritA change was merged to openstack-infra/reviewstats: Oslo project updates  https://review.openstack.org/9918419:12
*** mmaglana has joined #openstack-infra19:13
*** _nadya_ has quit IRC19:14
openstackgerritJaroslav Henner proposed a change to openstack-infra/jenkins-job-builder: add presend-script to email-ext  https://review.openstack.org/9999419:14
*** otherwiseguy has joined #openstack-infra19:20
openstackgerritA change was merged to openstack-infra/config: Add elasticsearch07  https://review.openstack.org/9998019:22
*** matjazp has joined #openstack-infra19:23
clarkbwoot that will be applied to the various nodes over the next few minutes, I will run puppet by hand on 07 once firewalls are updated19:24
openstackgerritmark mcclain proposed a change to openstack-infra/reviewstats: add neutron-specs  https://review.openstack.org/10000119:25
*** chuck__ has quit IRC19:26
*** james_li has quit IRC19:26
clarkbwe have hit 100k19:28
clarkbgreghaynes is to blame19:28
jesusaurusgreghaynes: you were sitting on your change waiting for that, weren't you?19:30
*** james_li has joined #openstack-infra19:33
greghaynes:)19:34
greghaynesmebbe19:34
*** talluri has quit IRC19:35
*** talluri has joined #openstack-infra19:35
*** talluri has quit IRC19:40
*** Ryan_Lane1 has quit IRC19:40
*** todd_dsm has quit IRC19:43
*** nati_ueno has quit IRC19:43
*** zns has quit IRC19:44
mordredgoooooooooooooooool19:44
lifelessbnemec: fungi actually dib-utils should not have its own review group, it should use tripleo-core and tripleo-ptl19:45
*** markwash has quit IRC19:45
*** annegentlereally has quit IRC19:45
bnemeclifeless: It does, but it's configured like diskimage-builder where diskimage-builder-core includes tripleo-core.19:46
clarkbI am runnin gpuppet on 07 now19:46
*** todd_dsm has joined #openstack-infra19:47
*** bookwar has quit IRC19:47
*** torgomatic has quit IRC19:47
*** _nadya_ has joined #openstack-infra19:48
bnemeclifeless: Oh, maybe it should have just used diskimage-builder-core directly though.  I see that's what t-i-e does.19:48
openstackgerritA change was merged to openstack-infra/reviewstats: Encode to utf-8 before printing  https://review.openstack.org/9910419:49
*** matjazp has quit IRC19:49
clarkband 07 is up. now we wait for it to rebalance, and when that is done (probably late today I hope) I will disable 0119:49
clarkb07 is super write heavy now as it gets shards and the initial sar numbers look good19:50
*** torgomatic has joined #openstack-infra19:54
*** ociuhandu has joined #openstack-infra19:54
*** nati_ueno has joined #openstack-infra19:54
jesusaurusdid infra switch over to puppet3?19:55
*** Sukhdev has joined #openstack-infra19:56
mriedemfungi: this was promoted but should i still recheck it? https://review.openstack.org/#/c/99144/19:58
*** ociuhandu has quit IRC19:58
*** zns has joined #openstack-infra19:58
*** adalbas has quit IRC19:58
fungimriedem: yeah, it apparently failed in the gate19:59
mriedemok i rechecked it19:59
*** zul has joined #openstack-infra19:59
*** amcrn has joined #openstack-infra20:00
clarkbjesusaurus: no, but we are trying to be compat with both as we move20:00
clarkbjesusaurus: right now our puppet is compatible with puppet3 on slaves for fedora and trusty20:00
nibalizerclarkb: you scared me with your typo above20:01
nibalizeri'm all 'wth is gpuppet'20:01
jesusaurusclarkb: http://logs.openstack.org/50/99950/1/check/gate-config-puppet-apply-precise/4253068/console.html sure looks like the deprecation warnings are coming from puppet-3.6.220:01
clarkbjesusaurus: wow thats new20:02
clarkbjesusaurus: looks like a gemfile installed it?20:02
clarkbnibalizer: the keyboard on my laptop has a bad habit of transposing spaces and letters that end words20:02
jesusaurusi guess...20:02
*** james_li has quit IRC20:03
bearhandsclarkb: Supposedly you can try to nuke a lot of those rax instances now, if you've not already... or if we didn't make them just disappear for you20:04
clarkbjesusaurus: that is really odd though. That is why we don't use puppet librarian for example20:04
bearhandstheir task states may still say deleting, but you can try them again20:04
*** maxbit has joined #openstack-infra20:04
clarkbjesusaurus: I am wondering if something somewhere added a dependency on newer puppet (like lint)20:04
clarkbbearhands: thanks20:04
bearhandsnp.. let me know if they don't go away :(20:04
clarkbbearhands: according to our big graph a ton of nodes deleted all at once20:04
clarkbso I think it may be sorting itself out20:05
bearhandsok, we may have done it for you20:05
bearhandssounds like we did20:05
bearhands:)20:05
bearhandscools.20:05
*** bookwar has joined #openstack-infra20:06
*** _nadya_ has quit IRC20:08
*** todd_dsm has quit IRC20:09
*** blamar has quit IRC20:09
clarkbmordred: ok do it again20:09
*** james_li has joined #openstack-infra20:10
mordredclarkb: wait - what do I do again?20:13
clarkb@mordred | goooooooooooooooool20:15
*** mfer has quit IRC20:15
fungibearhands: clarkb: that jump in the graph was me explicitly deleting any nodes nodepool seemed to think it had been trying to delete for at least an hour20:18
*** jistr has quit IRC20:18
clarkbfungi: rgr20:18
clarkbfungi: sdague mordred one other thing we may try with es01 is to turn it into a non data node. Basically point clients at it and only use it for its vast quantity of RAM20:19
isviridovclarkb, mordred, fungi could you give me a hint when it is planned to add new projects to stackforge? Last time it was on Fridays20:19
clarkbfungi: sdague mordred but I don't think we need to do that20:19
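
If that route were ever taken, it would amount to flipping the node roles in elasticsearch.yml on es01; a minimal sketch, with the config file path assumed and the option names per the elasticsearch series in use:

    # make es01 a query-routing node only: it keeps no shards and is not
    # master-eligible, but its RAM still serves client requests
    echo 'node.data: false'   >> /etc/elasticsearch/elasticsearch.yml
    echo 'node.master: false' >> /etc/elasticsearch/elasticsearch.yml
    service elasticsearch restart
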
fungiall of the instances in the list for my batch delete were successfully removed, which does confirm the error nodes were either fixed or gone20:19
clarkbisviridov: we do it as we are able to review the changes now. unfortunately I haven't been able to do much review this week20:19
*** todd_dsm has joined #openstack-infra20:19
fungifew if any of us have been able to get any reviewing done for a couple of weeks20:20
*** matjazp has joined #openstack-infra20:20
fungiexcept for reviewing changes fixing breakage in the gate or other parts of our infrastructure20:20
clarkbmordred: I hope you are watching20:21
adam_gmordred, https://review.openstack.org/#/c/99740/20:21
jesusaurusclarkb: aha! install_puppet.sh is only pinning the version on trusty20:23
jesusaurusclarkb: not on precise20:23
*** rcarrill` has joined #openstack-infra20:24
mordredclarkb: gooooooooooooooool20:24
mordred:)20:24
clarkbjesusaurus: that's uh unfortunate. But maybe if it works...20:25
clarkbI guess it makes getting that test voting harder?20:25
*** matjazp has quit IRC20:25
*** miqui has quit IRC20:25
clarkbjesusaurus: I am actually somewhat curious as to whether people think this may be preferred or not20:25
openstackgerritK Jonathan Harker proposed a change to openstack-infra/config: Pin the version of puppet on all Ubuntus  https://review.openstack.org/10001020:25
mordredclarkb, jesusaurus: we _Wanted_ to pin on precise ...20:25
clarkbI can go either way right now20:25
*** rcarrillocruz has quit IRC20:25
clarkbmordred: so I wonder if I have puppet 3 on es0720:26
mordredclarkb: wait - you were the one who convinced me to pin everywhere20:26
clarkbmordred: also this drama is great20:26
clarkbmordred: yes still pin20:26
clarkbmordred: but bump the slave pin to 3 if it works20:26
*** todd_dsm has quit IRC20:26
clarkbmordred: but then we would have two different versions of puppet on precise which isn't a good thing20:26
clarkbso I can go either way and going back to what we had is probably simplest right now20:27
mordredso - somewhere I thought we had a patch to install_puppet that made all of this correct20:27
clarkbok puppet on es07 is correct20:27
mordredwe need to pin to 2.x on != trusty - and to 3.x on trusty20:27
jesusauruspin to 3 on trusty?20:28
clarkbright20:28
mordredjesusaurus: yeah - clarkb thinks we should be explicit with our pins everywhere20:28
clarkbjesusaurus: however I just booted a node and it worked fine20:28
clarkbjesusaurus: so I don't know how this is breaking on the slaves20:28
mordredclarkb: we should take a day and just migrate20:28
mordredso we can stop screwing with this dance20:29
clarkb++20:29
clarkbmordred: also O_O are you watching this game?20:29
mordredgooooooooooooooooooooooooooooooooooooooooooooooool20:29
clarkbjesusaurus: how is it installing 3 on precise?20:30
clarkbit clearly is, but I still don't understand how20:30
mordredclarkb: is it because we're doing gems for the puppet tests with rvm or something?20:30
*** alexpilotti has quit IRC20:30
clarkbmordred: that's what I am thinking20:30
*** zzelle has quit IRC20:30
mordredyeah. it's time to be done with this game20:30
mordredwait20:30
mordrednot the game20:30
mordredthe game is great20:31
mordredthe puppet2/3 split20:31
mordredit officially bores me now20:31
*** zzelle has joined #openstack-infra20:31
jesusaurusclarkb: yeah, i don't understand how either. i had misread "!= trusty" as "== trusty" in install_puppet.sh20:32
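
The intent mordred spells out above - 2.x everywhere except trusty, 3.x on trusty - comes down to an apt pin along these lines; the version strings and preferences file name are assumptions, and the real logic lives in install_puppet.sh:

    # choose the puppet series per release and pin it so apt never crosses
    # the 2.x/3.x boundary behind our backs
    if [ "$(lsb_release -cs)" = "trusty" ]; then
        version='3.*'
    else
        version='2.7*'
    fi
    printf 'Package: puppet puppet-common\nPin: version %s\nPin-Priority: 501\n' \
        "$version" > /etc/apt/preferences.d/00-puppet.pref
    apt-get update && apt-get install -y puppet
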
openstackgerritJaroslav Henner proposed a change to openstack-infra/jenkins-job-builder: add presend-script to email-ext  https://review.openstack.org/9999420:32
*** sandywalsh_ has quit IRC20:33
*** flaper87|afk is now known as flaper8720:33
*** mwagner_lap has quit IRC20:36
mordredgooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooool20:37
clarkbmordred: this game20:37
mordredclarkb: dude20:37
clarkbI approve20:37
*** eharney has quit IRC20:38
krotscheckmordred: You’ve seen this? http://fivethirtyeight.com/interactives/world-cup/20:40
krotscheckSpain was heavily favored over the netherlands :)20:40
mordredkrotscheck: I had not seen that - but I believe that they were wrong :)20:41
mordredwow, and they thought Chile over Holland too?20:41
mordred(I hope the other bits of that are right though - if so, it would mean I'd get to see England v. Colombia, which would be a cool match to watch)20:42
clarkbmordred: this game20:44
clarkbit doesn't stop20:44
mordreddude20:44
mordredI think they're potentially still pissed off about last time20:44
jogomordred: what happened to you going to the world cup?20:46
*** dkliban is now known as dkliban_afk20:48
mordredjogo: I'll be going in a couple of weeks20:48
mordredI fly down on the 25th, then watch a game in rio on the 28th (winner of C vs. runner up of D) - and spend the rest of the time watching matches in bars with brazilian friends20:49
clarkbmordred: when do you go to germany?20:49
*** markwash has joined #openstack-infra20:52
jogomordred: oh nice20:53
*** Ryan_Lane1 has joined #openstack-infra20:55
SpamapSclarkb: whenever he damn well pleases? ;)20:56
SpamapSjogo: so if I wanted to use 1 of my hours today to help improve the gate.. where would I start?20:56
openstackgerritA change was merged to openstack-infra/devstack-gate: Revert "Set CEILOMETER_PIPELINE_INTERVAL to 15"  https://review.openstack.org/9887120:56
lifelessmordred: oh so we do crossover in sunnyvale?20:57
*** rfolco has quit IRC20:57
SpamapSaww man I was going to grab https://bugs.launchpad.net/devstack/+bug/125348220:58
uvirtbotLaunchpad bug 1253482 in devstack "Keystone's IANA-assigned default port in linux local ephemeral port range" [Undecided,In progress]20:58
jogoSpamapS: excellent question20:59
jogoSpamapS: fix the heat bugs listed on http://status.openstack.org/elastic-recheck/index.html21:00
*** weshay has quit IRC21:01
*** radez is now known as radez_g0n321:01
*** james_li has quit IRC21:01
SpamapSah silly me I was looking at /rechecks21:01
jogoSpamapS: that or work on helping us drop gate jobs21:02
mordredlifeless: nope. I'll be in NYC before I fly out21:02
mordredclarkb: on the saturday - so I'll be in germany for the game.21:02
jogoas per the 'test nodes' graph at the bottom of http://status.openstack.org/zuul/21:02
jogowe are hitting quota21:02
jogoSpamapS: also it looks like we are having a spike in grenade issues, I think mriedem was looking into those21:04
clarkbmordred: woot21:05
*** sarob_ has joined #openstack-infra21:05
*** ihrachyshka has quit IRC21:05
*** james_li has joined #openstack-infra21:10
*** shayneburgess has quit IRC21:11
*** mgagne has quit IRC21:11
openstackgerritMatt Riedemann proposed a change to openstack-infra/elastic-recheck: Add unit tests for SearchEngine.search  https://review.openstack.org/10001721:13
openstackgerritMatt Riedemann proposed a change to openstack-infra/elastic-recheck: Add unit tests for SearchEngine.search  https://review.openstack.org/10001721:13
*** cp16net_ has quit IRC21:13
*** weshay has joined #openstack-infra21:14
*** sandywalsh_ has joined #openstack-infra21:14
*** sandywalsh_ has quit IRC21:14
*** otherwiseguy has quit IRC21:18
*** smarcet has quit IRC21:19
*** mgagne has joined #openstack-infra21:20
*** mgagne is now known as Guest6148621:20
*** matjazp has joined #openstack-infra21:21
openstackgerritAntoine Musso proposed a change to openstack-infra/jenkins-job-builder: Apply defaults to job-templates parameters  https://review.openstack.org/10002021:23
*** hashar has joined #openstack-infra21:24
*** matjazp has quit IRC21:25
*** todd_dsm has joined #openstack-infra21:27
*** wenlock_ has quit IRC21:28
jogosdague: wow a single devstack-gate patch takes up 20+ jobs21:28
*** rcarrillocruz has joined #openstack-infra21:30
yjiang5clarkb: hi, all, I'm reading http://docs.openstack.org/developer/tempest/HACKING.html#parallel-test-execution and I didn't find any lock in AggregatesAdminTest which is specified in the doc, is it because the doc was not updated for a code change?21:31
*** rcarrill` has quit IRC21:31
clarkbyjiang5: I am not sure. mtreinish or the folks in the qa channel may know21:31
mordredjogo: yah man.21:31
clarkbyjiang5: but if I had to guess I would guess they were able to make it lockless and the docs aren't up to date21:32
mtreinishyjiang5: locks are only needed on the aggregates tests if there is host manipulation in the test21:33
*** isviridov is now known as isviridov|away21:34
mtreinishyjiang5: for example: http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/compute/admin/test_aggregates.py#n15421:35
*** e0ne has quit IRC21:38
*** sarob_ has quit IRC21:38
*** e0ne has joined #openstack-infra21:39
*** mrodden has quit IRC21:39
*** sarob_ has joined #openstack-infra21:39
mtreinishjogo: I think tempest wins at 2621:41
*** maxbit has quit IRC21:41
jogomtreinish: wow21:42
jogomtreinish: so stop working on tempest, you're hogging resources ;)21:42
*** e0ne has quit IRC21:42
*** sarob_ has quit IRC21:43
mtreinishjogo: nah, I think I should push more hacking patches. I skipped 2 rules because there were too many hits on them21:43
*** shayneburgess has joined #openstack-infra21:44
yjiang5mtreinish: thanks. I searched for 'lock', should have used "Lock". Do you mind if I send a patch to update the doc to point to the function?21:47
yjiang5clarkb: thanks.21:47
*** fbo_away is now known as fbo21:48
*** todd_dsm has quit IRC21:49
jogomtreinish: which ones?21:49
mtreinishyjiang5: go for it, but I'm not sure how specific we should get in the docs because how we instantiate a lock has changed at least 3 times21:49
mtreinishjogo: H405 and H90421:50
yjiang5mtreinish: aha, got it. That makes sense also. Hope others can find the usage more easily.21:50
jogoahh yeah, 904 is a little intense21:51
jogoand 405  can be really big to fix as well. nova skips a bunch for similar reasons21:51
mtreinishjogo: yeah and we were already skipping h404 before the new version because docstrings in tempest suck21:52
mtreinishso h405 will take some time21:52
*** markwash has quit IRC21:56
jogofor sure21:56
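
For context, skipping a hacking rule is just a matter of listing it in the project's flake8 ignore set (normally in tox.ini); shown here as the equivalent command line, with the rule numbers taken from the discussion above and the target directory assumed:

    # run the style checks while skipping the hacking rules discussed above
    flake8 --ignore=H404,H405,H904 tempest/
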
*** Sukhdev has quit IRC21:56
*** hashar has quit IRC21:57
*** e0ne has joined #openstack-infra21:59
devanandasdague: fwiw, it seems like ironic does indeed have a separate merge queue now... I'm not sure what changed to effect that, though21:59
devanandaclarkb: ^ ?21:59
*** xyang1 has quit IRC21:59
clarkbdevananda: sdague wrote a change to stop gating on the oslo stuff21:59
devanandaclarkb: ahh! great21:59
clarkbdevananda: which decoupled the transitive test dependency21:59
devanandafantastic21:59
jogoclarkb: oh nice22:00
jogoso random question: so we have non-voting stable icehouse and havana jobs for all the clients22:00
jogoas client aren't tied to a release22:00
clarkbright22:00
jogoand we have neutron and nova-network tests for both22:01
jogoperhaps we can drop the number of tests there22:01
clarkbpossibly22:02
clarkbdoes neutronclient run the nova network tests?22:02
clarkbit probably shouldn't but we need to test novaclient with neutronclient in the neutron case22:02
jogomaybe say just nova-network for all clients except neutron which would have neutron client?22:03
jogoso drop 2 jobs for most client patches22:03
jogoin check queue22:03
*** harlowja has quit IRC22:03
jogoclarkb: not sure what you mean22:04
*** aysyd has quit IRC22:04
*** timrc-afk is now known as timrc22:04
jogoneutronclient isn't used in the nova-network settings22:04
jogoand if it is at all that is a bug22:04
jogoin nova22:04
clarkbjogo: I am saying novaclient needs to be tested on neutron too22:05
jogobut don't think it is22:05
clarkbright that is the case aiui. its the neutron case that is tricky22:05
jogoclarkb: ohh hmm yeah there are some nova commands that talk to neutron22:05
jogoin the backend22:05
jogoexcept our tempest tests don't do much around testing the clients directly22:06
jogothis is much more for the interaction between services (which uses the clients)22:06
clarkbright22:06
clarkbthe whole point of those tests is that services depend on the clients but we don't really gate them against old stable versions22:06
clarkbso we have to be careful that we cover the intended bases22:07
jogoclarkb: yeah we may lose some coverage by dropping some of those jobs22:08
jogodropping the cells test would help too ;)22:08
jogoit's pretty useless http://logs.openstack.org/51/99751/2/check/check-devstack-dsvm-cells/9380c31/console.html#_2014-06-13_20_40_23_13922:08
jogoit runs only a few tests and most aren't cells related22:09
jogobut that's a whole can of worms22:09
*** sarob_ has joined #openstack-infra22:10
*** blamar has joined #openstack-infra22:10
*** marcoemorais has quit IRC22:10
*** marcoemorais has joined #openstack-infra22:11
*** sarob__ has joined #openstack-infra22:11
*** flaper87 is now known as flaper87|afk22:13
*** sarob___ has joined #openstack-infra22:13
*** sarob_ has quit IRC22:14
*** sarob__ has quit IRC22:16
jogoclarkb: anyone put up the patch to drop some postgres jobs yet?22:17
clarkbjogo: I don't think so. What are we trying to accomplish with that?22:17
clarkbpostgres actually isn't very problematic...22:17
jogoclarkb: free capacity. actually you have been very quiet on that ML thread22:17
*** james_li has quit IRC22:17
jogo[openstack-dev] Gate proposal - drop Postgresql configurations in the gate22:18
clarkbjogo: I was apparently getting a lot of bounces22:18
clarkbso mailman made me go away22:18
*** sarob___ has quit IRC22:18
jogoahh, well yeah that thread is why22:18
jogoso the idea I think we are moving towards is leaving one job in postgres mode22:18
jogomaybe one of the neutron jobs22:18
clarkbya I was beginning to wonder why my email was so quiet then eventually mailman told me22:19
clarkbI recently resubscribed22:19
jogohaha22:19
anteayajhesketh: mtreinish submitted a talk to the openstack miniconf for pycon au22:20
clarkbyup never got that thread22:20
anteayait is going to be a great talk, the crowds will just pack themselves in22:20
anteayayou have to accept it22:20
jogoclarkb: http://lists.openstack.org/pipermail/openstack-dev/2014-June/037431.html22:20
jogoclarkb: happy reading, I am very interested in your thoughts on it22:21
*** prad has quit IRC22:21
*** matjazp has joined #openstack-infra22:22
mtreinishanteaya: I'm not sure that I can live up to that :)22:22
anteayayou did already22:22
anteayathe room was packed22:22
anteayathey gave you an encore22:22
anteayait was inspiring22:22
anteaya:D22:22
jogoanteaya: what was the talk?22:22
*** matjazp_ has joined #openstack-infra22:23
anteayamtreinish's tempest talk at summit22:23
anteayayou didn't hear about it at the time22:23
anteayaoh it was marvelous22:23
anteayawhat 350 people in that room?22:23
anteayathey were asking for autographs afterward22:23
anteaya:D22:23
*** blamar has quit IRC22:24
mtreinishhahaha, I guess the room was mostly full. But I can safely say that I didn't use a pen except to sign receipts my whole stay in atlanta...22:25
anteayashhhhhhh22:26
*** mriedem has quit IRC22:26
anteaya:D22:26
*** matjazp has quit IRC22:27
jrollis there a way for a patch to depend on multiple (chains of) patches, without rebasing everything into a massive chain?22:28
openstackgerritJoe Gordon proposed a change to openstack-infra/config: Remove duplicate dsvm-postgres-full jobs  https://review.openstack.org/10003222:28
clarkbjogo: ugh so I think there is a clear issue in people not grokking the question because it probably could have been posed more clearly22:29
clarkbjogo: no one seems to be saying stop testing postgres22:29
openstackgerritSalvatore Orlando proposed a change to openstack-infra/config: Make neutron full job voting (neutron gate only)  https://review.openstack.org/8828922:29
clarkbjogo: it seems like sdague is saying stop making postgres a special job22:29
clarkbjogo: and we can test in unittests all we want22:29
*** changbl has quit IRC22:29
jogoclarkb: exactly22:29
jogoso this is what I am thinking we should do: patch coming soon22:30
*** zns has quit IRC22:30
*** CaptTofu_ has quit IRC22:30
fungiyeah, other participants in the thread aren't getting that the bits of postgres actually exercised differently from mysql behind a tempest job are likely trivial to nonexistent22:30
mtreinishjogo: did you see my proposal on the thread?22:31
mtreinishto switch the global neutron job to postgres and run the mysql neutron job only on neutron patches22:31
clarkbjogo: well I think we need to reset the whole damn thread22:32
clarkband start with a concrete proposal.22:32
clarkbdrop the special postgres job. run postgres as DB backend for another job. Stress that postgres isn't going anywhere for unittest nodes22:32
*** sarob_ has joined #openstack-infra22:33
clarkbI feel like everyone was missing the very important point that unittests/functional tests can use postgres all they want22:33
*** UtahDave has quit IRC22:33
clarkbeg DB migrations are tested against postgres and mysql assuming oslo db is working22:33
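A minimal sketch of clarkb's proposal as it could look in the integrated-gate project-template in zuul's layout.yaml: the dedicated postgres job disappears and one existing tempest job carries the postgres backend instead, while unit-test and migration jobs keep exercising postgres as before. The job names are assumptions for illustration, not the contents of review 100033:

    project-templates:
      - name: integrated-gate
        check:
          - check-tempest-dsvm-full
          # the standalone postgres job is dropped here; the neutron job
          # now runs devstack with a postgres backend instead of mysql
          - check-tempest-dsvm-neutron-pg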
*** matjazp_ has quit IRC22:34
*** zns has joined #openstack-infra22:34
*** blamar has joined #openstack-infra22:34
fungiof everyone who has claimed in that thread that testing with postgres turned up postgres-specific issues, nobody has actually come forward suggesting they were discovered because of fails on tempest tests22:34
openstackgerritJoe Gordon proposed a change to openstack-infra/config: Merge postgres and neutron jobs in integrated-gate template  https://review.openstack.org/10003322:34
jogoclarkb mtreinish: this is my proposal, as a starting point22:34
jogomtreinish: like this ^22:34
jogofungi: ^22:35
*** dangers is now known as dangers_away22:35
clarkbfungi: correct, they were all catchable functional/unittest type things22:35
*** sarob__ has joined #openstack-infra22:35
clarkbhonestly though postgres seems like a weird target because it hasn't really been a problem22:36
mtreinishjogo: that's basically what I was saying, but I think once we get to tempest-neutron-full working you shouldn't need the extra nova job22:36
jogomtreinish: I added that just for good measure22:36
clarkbmtreinish: right22:36
clarkbmtreinish: that was sort of the basis of my test matrix sessions22:36
jogomtreinish: but happy to drop it if you think that is the right direction22:36
jogoclarkb: yeah22:36
clarkbonce you have feature parity this is much easier22:36
mtreinishclarkb: I agree we don't really need the extra config, but I was just suggesting compacting things to appease the greater crowd22:37
anteayajroll: multiple chains of patches? do you mean across projects?22:37
mtreinishjogo: yeah short term it's not the same thing22:37
clarkbmtreinish: ya I think its a reasonable compromise22:37
mtreinishbut once the last set of neutron bugs get fixed we can drop the extra nova job22:37
clarkbjroll: the answer whether a single project or multiple projects is "no"22:38
mtreinishalthough I think the extra coverage is minimal22:38
clarkbanteaya: ^22:38
jrollanteaya: no. in my case, my patch depends on three in-flight patches22:38
clarkbjroll: anteaya the exception is you can push a merge commit, but we turn those off22:38
*** sarob_ has quit IRC22:38
jrollclarkb: :(22:38
*** Guest61486 has quit IRC22:38
jogomtreinish: so you're saying drop the nova change in https://review.openstack.org/#/c/100033/1/modules/openstack_project/files/zuul/layout.yaml ?22:38
clarkbjroll: you can have a soft dependency and construct it such that tests don't pass without all three and let reviewers know about the dependency22:38
*** zns has quit IRC22:39
jrollclarkb: any idea what people do in this situation? (the chains are owned by different people, I'm not going to rebase their patches)22:39
jrollah22:39
clarkbbut usually you rebase22:39
clarkbjroll: really people shouldn't be afraid of having their code manipulated for the greater good22:39
* fungi is not afraid to rebase other devs' patches22:39
mtreinishjogo: yeah, I don't think the extra tests run there will provide much extra coverage22:39
fungi(or fix obvious trivial erors in them)22:39
fungis/erors/errors/22:39
mtreinishjogo: although I'm forgetting about nova-net22:39
jrollclarkb: hmm, I'd still want to poke them first, idk. I'll figure something out, thanks.22:40
jogomtreinish: yeah see the commit message ;)22:40
clarkbjroll: really meh22:40
jogopostgres+nova-network will be tested in nova.22:40
clarkbjroll: I think this is a social thing we may need to push harder to change22:40
mtreinishjogo: heh, yeah I should start reading those22:40
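jogo's point is that postgres together with nova-network would still be covered, but only on nova changes. A hedged guess at what that stanza could look like in the projects section of layout.yaml, with the job name assumed rather than taken from the patch:

    projects:
      - name: openstack/nova
        check:
          # postgres + nova-network coverage retained, nova changes only
          - check-tempest-dsvm-postgres-full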
clarkbjroll: it shouldn't be seen as offensive or a problem for people to do stuff like this22:40
jogomtreinish: I am gonna float that patch on the thread so we can start discussing something concrete22:41
*** andreykurilin_ has quit IRC22:41
*** mgagne has joined #openstack-infra22:41
*** mwagner_lap has joined #openstack-infra22:41
*** mgagne is now known as Guest803122:41
jrollclarkb: I agree, I just don't like pissing off people who may disagree :)22:41
jrollclarkb: my main thing is, the dependent patches are completely unrelated, other than the fact I depend on them, so I don't want one to get blocked on the other landing22:42
JayFFWIW, anyone feel free to rebase my patches or fix bugs in them whenever you want :)22:42
JayFexcept you, jroll22:43
jroll-.-22:43
fungiterritoriality over patches is something we should definitely strive to rise above as a community22:44
clarkbjroll: so that sort of situation is an outlier and the soft dependency approach may be best if they are not actually related22:44
fungithere's already PLENTY of work to go around without anyone needing to carve out their own exclusive problem space22:44
jrollindeed22:44
*** CaptTofu_ has joined #openstack-infra22:45
jrollI'll look a bit more at it22:45
jrollone is about to land; one is far from landing22:45
jrollso it might work out if I order correctly22:45
openstackgerritJoe Gordon proposed a change to openstack-infra/config: Merge postgres and neutron jobs in integrated-gate template  https://review.openstack.org/10003322:46
*** amcrn has quit IRC22:47
openstackgerritA change was merged to openstack/requirements: Fix sphinx requirement to add overlap  https://review.openstack.org/9894722:49
*** zns has joined #openstack-infra22:49
openstackgerritOpenStack Proposal Bot proposed a change to openstack-dev/hacking: Updated from global requirements  https://review.openstack.org/10003722:50
*** blamar has quit IRC22:55
*** ramashri has quit IRC22:59
*** zns has quit IRC23:02
*** markmcclain has quit IRC23:03
*** fbo is now known as fbo_away23:07
*** matjazp has joined #openstack-infra23:08
*** wenlock_ has joined #openstack-infra23:10
*** matjazp has quit IRC23:12
*** reed has quit IRC23:15
*** medieval1 has quit IRC23:17
*** medieval1 has joined #openstack-infra23:17
*** medieval1 has quit IRC23:22
SpamapSjogo: what is "integrated-gate-neutron" ?23:23
mtreinishSpamapS: it's a job template in zuul that gets run on neutron and neutronclient changes23:26
*** e0ne has quit IRC23:28
*** e0ne has joined #openstack-infra23:29
morganfainbergmtreinish, so this means instead of doing the dsvm-neutron-full we do dsvm-neutron-pg, if i'm reading this correctly; postgres-full is removed from the main python27 templates and added specifically to nova?23:32
*** melwitt has quit IRC23:35
mtreinishmorganfainberg: actually it replaces dsvm-neutron in the integrated-gate template. The neutron-full job is a non-voting job because neutron still isn't stable enough to run all the tests in parallel23:35
mtreinishbut yeah that's the basic idea23:35
morganfainbergmtreinish, ah23:35
morganfainbergmtreinish, ok. and then the postgres+novanet is covered on nova changes only23:36
* SpamapS mumbles something about that PERHAPS being the actual problem23:36
mtreinishyep23:36
*** lcostantino has joined #openstack-infra23:36
SpamapSwell time for the weekend to start23:36
morganfainbergSpamapS, have a good weekend :)23:37
morganfainbergmtreinish, ok. i can be on board with that.23:38
*** matjazp has joined #openstack-infra23:38
mtreinishmorganfainberg: yeah it seems like the best compromise23:40
*** matjazp_ has joined #openstack-infra23:40
*** shayneburgess has quit IRC23:40
openstackgerritMorgan Fainberg proposed a change to openstack-infra/config: Make apache-services tempest check experimental  https://review.openstack.org/9763823:41
*** matjazp has quit IRC23:43
*** matjazp_ has quit IRC23:44
*** dims has quit IRC23:47
*** praneshp has quit IRC23:53
*** sweston has quit IRC23:57
*** medieval1 has joined #openstack-infra23:58
