Monday, 2016-07-25

*** zodbot_ is now known as zodbot00:00
*** fcoelho has quit IRC00:02
*** rook has quit IRC00:07
*** rook has joined #rdo00:07
*** gildub has joined #rdo00:26
*** jubapa has joined #rdo00:34
*** jubapa has quit IRC00:39
*** limao has joined #rdo00:39
*** saneax is now known as saneax_AFK00:42
*** mbound has joined #rdo00:47
*** leanderthal|afk has quit IRC00:51
*** mbound has quit IRC00:52
*** chlong has quit IRC00:54
*** rain has joined #rdo01:03
*** rain is now known as Guest356301:04
*** chlong has joined #rdo01:11
*** chlong is now known as chlong_POffice01:21
*** akshai has joined #rdo01:33
*** akshai has quit IRC01:44
*** coolsvap has joined #rdo01:59
*** crossbuilder has joined #rdo02:35
*** crossbuilder_ has quit IRC02:35
*** ashw has joined #rdo02:53
*** kambiz has quit IRC03:02
*** paragan has joined #rdo03:04
*** pilasguru has quit IRC03:08
*** gbraad has joined #rdo03:08
*** gbraad has quit IRC03:08
*** gbraad has joined #rdo03:08
*** ashw has quit IRC03:12
*** rdas has joined #rdo03:14
*** gbraad has quit IRC03:21
*** gbraad has joined #rdo03:21
*** imcleod has joined #rdo03:22
*** morazi has quit IRC03:24
*** Amita has joined #rdo03:30
*** abregman has quit IRC03:38
*** vimal has joined #rdo03:41
*** pilasguru has joined #rdo04:05
*** imcleod has quit IRC04:16
*** nehar has joined #rdo04:23
*** pilasguru has quit IRC04:29
*** jubapa has joined #rdo04:34
*** vimal has quit IRC04:38
*** jubapa has quit IRC04:39
*** saneax_AFK is now known as saneax04:41
*** Amita has quit IRC04:44
*** oshvartz has quit IRC04:52
*** jhershbe__ has joined #rdo04:54
*** abregman has joined #rdo04:57
*** vimal has joined #rdo04:58
*** Amita has joined #rdo04:59
*** chandankumar has joined #rdo04:59
*** Amita has quit IRC05:01
*** Amita has joined #rdo05:12
*** Alex_Stef has joined #rdo05:18
*** Poornima has joined #rdo05:19
*** ekuris has joined #rdo05:32
*** satya4ever has joined #rdo05:41
*** mosulica has joined #rdo05:51
*** anilvenkata has joined #rdo05:57
*** vaneldik has quit IRC05:57
*** pradiprwt has joined #rdo05:59
pradiprwtHi everyone, I want to make some post-deployment changes in RDO using the director as a plugin. Can anyone please guide me on how I can start developing a plugin for that?06:03
*** mosulica has quit IRC06:03
*** pgadiya has joined #rdo06:04
*** rcernin has joined #rdo06:05
pradiprwtHow do I develop a plugin for RHEL-OSP? Is there any documentation?06:08
*** dgurtner has joined #rdo06:09
*** dgurtner has joined #rdo06:09
*** ganesh has joined #rdo06:13
*** ganesh is now known as Guest5745006:14
*** Guest57450 is now known as gkadam06:19
*** oshvartz has joined #rdo06:21
*** Amita has quit IRC06:21
*** Amita has joined #rdo06:22
*** rasca has joined #rdo06:28
*** edannon has joined #rdo06:29
*** jprovazn has joined #rdo06:32
*** Amita has quit IRC06:34
*** Amita has joined #rdo06:36
*** pcaruana has joined #rdo06:37
*** tesseract- has joined #rdo06:41
*** Amita has quit IRC06:41
*** bandini has joined #rdo06:41
*** smeyer has joined #rdo06:42
*** florianf has joined #rdo06:44
*** Amita has joined #rdo06:44
*** mosulica has joined #rdo06:46
*** milan has joined #rdo06:50
*** jtomasek has joined #rdo06:55
*** rdas has quit IRC06:58
*** nmagnezi has joined #rdo06:59
*** tshefi has joined #rdo07:00
*** yfried has joined #rdo07:02
*** hynekm has joined #rdo07:02
*** vaneldik has joined #rdo07:03
*** Alex_Stef has quit IRC07:08
*** eliska has joined #rdo07:08
*** zoli_gone-proxy is now known as zoliXXL07:14
*** rdas has joined #rdo07:14
*** apevec has joined #rdo07:14
zoliXXLgood morning07:16
*** ccamacho has joined #rdo07:18
*** jtomasek has quit IRC07:18
*** jtomasek has joined #rdo07:19
*** paramite has joined #rdo07:25
*** tshefi has quit IRC07:27
*** tshefi has joined #rdo07:28
*** abregman_ has joined #rdo07:28
*** garrett has joined #rdo07:31
*** abregman has quit IRC07:31
*** ihrachys has joined #rdo07:32
*** kaminohana has quit IRC07:40
*** gildub has quit IRC07:42
*** gildub has joined #rdo07:43
*** gildub has quit IRC07:48
*** jpich has joined #rdo07:52
*** Guest3563 is now known as leanderthal07:55
*** leanderthal is now known as leanderthal|afk07:55
*** pblaho has joined #rdo07:55
*** abregman_ is now known as abregman_|mtg07:56
number80o/07:56
number80damn, I just saw the tripleo/swift ticket07:57
number80I'm glad that I didn't read it during the w-e :)07:57
*** jlibosva has joined #rdo07:57
number80apevec, slagle: bottom line is to never test from *unreleased* packages07:58
*** jtomasek has quit IRC07:58
number80*upgrades07:58
*** jtomasek has joined #rdo07:58
apevecwell, slagle is testing it07:58
*** fzdarsky has joined #rdo07:59
number80test from N releases, current milestones07:59
number80well, if they do, we can't support it07:59
number80that's just not possible unless we ship everything in monolithic packages07:59
*** tumble has joined #rdo08:00
number80or that'd mean that we'd have to carefully review changes in DLRN and accept that CI may be broken for one or two days08:01
*** vaneldik has quit IRC08:01
number80or not ninja introducing packages08:01
apevecI'm not sure what you mean? If we want to CD, we need to support upgrades with trunk packages08:02
*** fragatina has joined #rdo08:02
apevecupgrade issue w/ swift rpm was only that obsoletes was too restrictive08:02
number80apevec: then no more ninja merges, no more changes accepted without careful review08:02
apevecsure, that should be always the case, unless there's promotion breakage08:03
flepied1number80: apevec: what is the problem?08:03
number80well, that's the current case08:03
number80https://bugzilla.redhat.com/show_bug.cgi?id=135937708:03
openstackbugzilla.redhat.com bug 1359377 in openstack-swift "can't yum update to new swift packages" [Unspecified,On_qa] - Assigned to apevec08:03
apevecI see no big deal really08:04
number80apevec: technical debt will grow quickly, if we don't have flexibility08:05
*** abregman_|mtg has quit IRC08:05
*** mcornea has joined #rdo08:05
apevecbut what is the tech debt in this particular case?08:05
*** jtomasek has quit IRC08:06
apevecdo you think unversioned Obsoletes ?08:06
number80Yes, but I can think of other non-corner cases08:06
number80new package with incorrect splitting that has to be kept forever08:06
apevecOnly use-case for Obsoletes <= is if we want to reintroduce obsoleted subpackage, which I don't think we ever should08:07
miscerrors happen08:07
number80There could be good example08:07
*** mbound has joined #rdo08:07
miscbut obsoletes should have a point where they get removed; they also make dependency computation a bit more difficult, no?08:07
*** flepied1 is now known as flepied08:07
number80Yeah, but nobody ever risked doing that08:08
number80especially with DLRN, you can't know for sure which snapshots people are using08:08
*** shardy has joined #rdo08:08
miscjust put a policy to remove the obsoletes after X releases or something08:08
miscunless you want upgrade to be supported on package level from all possible snapshot of the past08:09
miscand if we want that, it has to be tested08:09
number80misc: except that we can have people out there upgrading after X+1 releases and it'll break too.08:09
miscnumber80: sure, but did you promise to make it work ?08:10
*** fragatina has quit IRC08:10
number80misc: no, that's the point08:10
number80but well, in the end, I'm just saying bad idea to do it.08:10
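For reference, the versioned-vs-unversioned Obsoletes pattern being debated looks roughly like this in a spec file (package names and versions here are illustrative, not taken from the actual swift packaging):

    # Unversioned: swallows every build of the old name, forever
    Obsoletes: openstack-swift-plugin

    # Versioned: only obsoletes builds below a known release, so the
    # name can be reintroduced later if needed
    Obsoletes: openstack-swift-plugin < 2.7.0-2
    Provides:  openstack-swift-plugin = %{version}-%{release}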
flepiedapevec: number80: do you have time to discuss the way to solve the issue regarding tag building of some components this morning?08:14
apevecflepied, what is your free slot? in 1h ?08:15
number80flepied: yes, but jpena was working on it though in a different context (he comes back next week)08:15
* apevec missed breakfast08:15
number80oops, then, I remember what I forgot08:15
miscmhh breakfast, good idea08:15
apevecnumber80, he did dlrn change required08:15
number80\o/08:15
apevecso we just need rdoinfo change, which I'll send review then we can discuss in gerrit08:16
number80I need to pick up this change in my virtualenv08:16
apevecbut call is also fine08:16
apevecjust not right now :008:16
flepiedapevec: number80: at 11h? we'll see if what jpena did is enough08:16
number80ack08:16
apevecack08:16
*** abregman has joined #rdo08:18
*** vaneldik has joined #rdo08:18
*** ushkalim has joined #rdo08:19
*** vaneldik has quit IRC08:23
*** vaneldik has joined #rdo08:25
*** dcotton has joined #rdo08:26
*** snecklifter_ has joined #rdo08:27
*** Alex_Stef has joined #rdo08:27
*** iranzo has joined #rdo08:34
*** pilasguru has joined #rdo08:37
*** devvesa has joined #rdo08:38
*** hewbrocca-afk is now known as hewbrocca08:39
*** pilasguru has quit IRC08:42
*** iranzo has quit IRC08:43
*** derekh has joined #rdo08:45
*** gildub has joined #rdo08:47
*** artem_panchenko_ has joined #rdo08:48
*** milan has quit IRC08:55
*** beagles has quit IRC08:56
*** milan has joined #rdo08:59
*** limao has quit IRC08:59
*** egallen has joined #rdo09:01
*** limao has joined #rdo09:03
*** jtomasek has joined #rdo09:05
*** hynekm has quit IRC09:05
*** steveg_afk has quit IRC09:06
*** hynekm has joined #rdo09:10
*** Goneri has joined #rdo09:10
*** milan has quit IRC09:16
*** gszasz has joined #rdo09:16
*** steveg_afk has joined #rdo09:18
*** gfidente has joined #rdo09:22
*** mvk has quit IRC09:27
*** jubapa has joined #rdo09:32
*** limao_ has joined #rdo09:36
*** Son_Goku has quit IRC09:36
*** limao has quit IRC09:36
*** Son_Goku has joined #rdo09:37
*** jubapa has quit IRC09:37
*** Son_Goku has quit IRC09:38
*** Son_Goku has joined #rdo09:39
*** akrivoka has joined #rdo09:39
*** Son_Goku has quit IRC09:40
*** Son_Goku has joined #rdo09:41
*** pgadiya has quit IRC09:43
*** pgadiya has joined #rdo09:43
*** satya4ever has quit IRC09:43
*** paragan has quit IRC09:45
*** fragatina has joined #rdo09:47
*** satya4ever has joined #rdo09:52
*** fragatina has quit IRC09:54
*** tosky has joined #rdo09:55
*** chem has joined #rdo09:57
*** mvk has joined #rdo10:00
*** limao_ has quit IRC10:01
*** jubapa has joined #rdo10:02
*** zoliXXL is now known as zoli|lunch10:03
*** gildub has quit IRC10:04
*** egallen has quit IRC10:06
*** jubapa has quit IRC10:07
*** _degorenko|afk is now known as degorenko10:08
*** egallen has joined #rdo10:15
*** Amita has quit IRC10:20
*** Amita has joined #rdo10:22
*** panda|sick is now known as panda10:22
*** gchamoul is now known as gchamoul|afk10:26
*** gchamoul|afk is now known as gchamoul10:27
*** egallen has quit IRC10:29
*** imcleod has joined #rdo10:43
*** fragatina has joined #rdo10:45
*** KarlchenK has joined #rdo10:45
rdogerrithguemar proposed openstack/glanceclient-distgit: Added py2 and py3 subpackage  http://review.rdoproject.org/r/162010:50
*** anshul has joined #rdo10:52
apevecweshay_afk, dmsimard - actually our probability to pass is 1/16 :(  Both oooq jobs must hit dusty chassis10:53
*** paragan has joined #rdo10:53
apevecin current run ha ended up on dusty and should succeed, but minimal is on gusty: https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/531/10:54
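(Working the 1/16 figure backwards: it suggests each oooq job has roughly a 1-in-4 chance of landing on a dusty chassis, and two independent jobs both hitting it is (1/4) x (1/4) = 1/16.)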
hewbroccaheh10:55
hewbroccathat would be funny if it wasn't so sad10:56
apevecI really don't know what makes those AMDs so slow, but in the current run both oooq jobs started at the same time,10:56
apevec10:24:34 TASK [tripleo/overcloud : Deploy the overcloud] on AMD10:56
apevec10:08:37 TASK [tripleo/overcloud : Deploy the overcloud] on Intel10:57
hewbroccaweird. So is that the difference, dusty is intel hardware and the rest are AMD?10:57
apeveckbsingh, ^ AMDs must have some serious bottleneck, not just CPU?10:57
apevechewbrocca, yes10:58
hewbroccareally odd10:58
hewbroccaheh10:58
kbsinghapevec: to be fair, I have asked about this a few times over the last few weeks - what is the real problem you guys have ?10:58
hewbroccawe do have virt turned on in the bios, right?10:58
apevecthat's what I can tell from https://wiki.centos.org/QaWiki/PubHardware10:58
kbsinghapevec: I am fairly sure your tests are busted10:58
kbsinghapevec: typically, a slower machine means tests take a bit longer, not just fail and top themselves10:58
apeveckbsingh, sure they are :)10:58
kbsinghhas anyone looked then at the tests to see why it fails ?10:59
apevecIntel works in upstream OpenStack to make them pass only on their machines :)10:59
kbsinghapevec: can you quantify this ? if so, can i publicly state that RDO only aims to work on intel hardware ?10:59
apeveckbsingh, I have provided some numbers in https://bugs.centos.org/view.php?id=1120010:59
apeveckbsingh, that was j/k10:59
kbsinghapevec: so that helps, but what is the actual bottleneck ?11:00
apevecthat I'm not sure, dmsimard ^ can we monitor more closely machines while the oooq job is running?11:01
apevecI think we have sar output, just a sec11:01
*** ade_b has joined #rdo11:02
apevechttps://ci.centos.org/job/tripleo-quickstart-promote-master-delorean-minimal/403/ passed in ~1h on dusty, next 404 failed after ~2h on gusty11:04
hewbroccakbsingh: It wouldn't surprise me if the deployment is simply timing out11:05
hewbroccathere are a bunch of different things that can cause that to happen11:05
apevechm, we don't have sar, that was in upstream jobs11:05
apevectimeout on API requests was already increased to improve the odds, we could bump it some more I guess11:06
apevecEmilienM, ^ double it? https://review.openstack.org/#/c/334996/4/lib/puppet/provider/openstack.rb11:07
kbsinghsomeone must own these tests right ?11:07
kbsinghto me the fundamental problem is how they are set up - it looks like you guys are trying to replicate a developer laptop rather than use the infra :/11:07
apevecyes, it's all in one11:08
apevecin VMs11:08
kbsinghfor example - why are you embedding the whole stack on one host - if i am reading it right, you have nested, and then nesting inside the nested setup11:08
kbsinghapevec: yeah :/ but it looks like you just need 3 machines to spread this out11:08
apevecweshay_afk, ^ can we spread oooq to multiple nodes?11:09
apeveckbsingh, but we might be hitting inter-node networking issues then11:09
kbsinghthe second thing to maybe look at is - rather than increase the timeouts - can you maybe just use the VMs in the rdo-cloud.cico ?11:09
apevecthat was biggest issue previously with the upstream multinode jobs11:09
hewbroccakbsingh: so, the reason we test it all on a single machine using virt is that we have better control of the environment11:10
kbsinghthose are E5 nodes, per core is about 40% faster than the E3's in the cico pool.11:10
kbsinghwell11:10
hewbroccawe know exactly how the networking is configured, how the VMs are set up, etc.11:10
hewbroccaI'd love to spread it out but it will require some additional work11:10
kbsinghyou have 7 network ports to play with in cico... there is some code in there that locks the ports 1-7 for the session, so you can do whatever you want really11:10
hewbroccawhich is fine, if that's what it takes11:10
kbsinghjust eth0 needs to stay where it is11:11
kbsingh( its not that simple, but if you only need another 3 network ports, from eth1 to eth3, it can be that simple )11:11
*** pmyers has joined #rdo11:12
kbsinghso, i think there are a few options - dont disregard the rdo-cloud.cico as well, if that can be used.11:12
*** nyechiel has joined #rdo11:13
kbsinghthe problem with having duffy only hand out specific nodes from specific pools is that you can never be sure about availability - and its very hard to have a warm-standby-cache that way11:15
*** imcleod has quit IRC11:15
kbsinghe.g. what happens when allocation in that hardware pool gets to 80% - do we still keep handing them out ? if so, what happens when there are no machines - you might be waiting up to 24 hrs (hopefully not, but that's the reaper timeout) before a machine comes available11:16
*** gildub has joined #rdo11:17
kbsinghwe can however try and run some hardware profiles and see if there is a huge difference, hopefully its not something like a numa/cpu screwup11:17
*** fragatina has quit IRC11:19
*** spr2 has joined #rdo11:19
apeveckbsingh, thanks - I'll ask dmsimard and weshay_afk when they're online to look at adding some perf monitoring to job logs11:22
apevecfor now I don't have visibility where is the bottleneck11:22
kbsinghi just tried to trawl through some of this - and can't work it out either :/ but then I've never looked at the code side of things before, and there is a lot of it!11:23
kbsinghbtw, does nova pin cpu/cores by default ?11:23
kbsinghjust noticed that on the E5 machines, i can get 20 - 22% more compute capacity from a single core, compared with a nova run VM on the same host using the same core11:24
hewbroccakbsingh: it definitely does not do that11:25
hewbroccapin cpu/cores11:25
hewbroccaI believe you can ask it to try to do some things11:25
*** egafford has quit IRC11:26
hewbroccaBut it's a bit strange because in our case Nova isn't actually creating the VMs11:26
hewbroccawe pre-create them with oooq, and then Ironic "boots" them11:26
hewbroccawhich simulates a bare-metal deployment11:26
hewbroccaSo... IIUC... it might actually be possible to tell oooq to do nova pinning, if we thought that would help?11:27
kbsinghlet me play with this a bit more11:27
kbsinghone thing we noticed in our devcloud was that having libvirt do a cpuset=auto made a noticeable difference; and we didnt need to faff around with numa and pinning and all that11:27
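What kbsingh describes roughly maps to two knobs, sketched below under the assumption of stock libvirt/nova behaviour of that era (the flavor name is illustrative): libvirt can auto-place vCPUs via numad, and nova can be asked to pin guest vCPUs through a flavor extra spec.

    <!-- libvirt domain XML: let numad choose placement automatically -->
    <vcpu placement='auto'>4</vcpu>

    # nova flavor extra spec requesting dedicated (pinned) pCPUs
    openstack flavor set baremetal --property hw:cpu_policy=dedicated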
number80dmsimard: shouldn't the result of the voting gates be pushed in the review?11:28
*** aortega has joined #rdo11:28
number80and not waiting non-voting jobs to finish? -> http://review.rdoproject.org/r/162011:28
kbsinghhewbrocca: apevec: let me figure out some metrics on the hardware side and get back in a day or so11:29
apevecnumber80, it's the same job11:29
*** shardy is now known as shardy_lunch11:30
apevecnumber80, so zuul will report when all is finished11:30
*** danielbruno has joined #rdo11:31
*** danielbruno has quit IRC11:31
*** danielbruno has joined #rdo11:31
*** zoli|lunch is now known as zoli11:31
*** zoli is now known as zoliXXL11:32
hewbroccakbsingh: many thanks for all your help11:32
hewbroccaI'll raise the "libvirt cpuset=auto" question with weshay_afk or trown|outtypewww when one of them comes on line11:33
number80apevec: then, we need separate jobs beyond the PoC11:34
number80it can be very long for trivial changes11:34
apevecI think we want this to be voting actually11:36
apevecyou don't know change is trivial until it passes full CI11:36
number80at some point yes11:36
apevecbut there might be some zuul config magic to report parent job early?11:37
number80well fixing typo in description or SourceURL are :)11:37
kbsinghhewbrocca: absolutely, hopefully we can get this fixed soon11:37
apevecnumber80, what if you insert invalid unicode? :)11:37
apevecanyway, let's check w/ David later11:37
number80ack11:38
apevecand now, just for fun, oooq HA job failed on Intel, in overcloud deploy after 2h11:38
*** flepied1 has joined #rdo11:40
hewbroccaoof11:40
*** rhallisey has joined #rdo11:42
apevecInstanceDeployFailure: Failed to provision instance e0ed7164-28b8-4cac-8cf5-7911808e0e8d: Timeout reached while waiting for callback for node 100bd23b-a188-46b9-abcf-d75ccebe328011:42
*** flepied has quit IRC11:43
apevecthat might be some nova/ironic race that trown mentioned11:43
*** dpeacock has joined #rdo11:45
*** rbrady_ has quit IRC11:46
*** rodrigods has quit IRC11:47
*** weshay_afk is now known as weshay11:47
apevecso that's 25. TBD ironic/nova timeout11:47
*** thrash has quit IRC11:47
*** rodrigods has joined #rdo11:47
*** jdob has joined #rdo11:48
apevecI'll try to find LP# that trown mentioned11:48
*** pkovar has joined #rdo11:48
apevecweshay, ^ do you know about that nova/ironic timeout?11:48
hewbroccaapevec: hold on, no, I've seen that before11:49
hewbroccaI think that is an ipxe failure11:49
*** rdas has quit IRC11:50
hewbroccaDo *all* the virthosts we are operating on definitely have the correct repo setup that get the ipxe rpm we are shipping in RDO instead of the one on the baremetal machine?11:51
*** flepied has joined #rdo11:51
weshaysorry.. reading through11:52
*** flepied2 has joined #rdo11:52
*** flepied1 has quit IRC11:52
*** sdake has joined #rdo11:54
*** flepied has quit IRC11:56
*** tosky has quit IRC11:56
*** fbo has quit IRC11:56
*** tosky has joined #rdo11:57
weshayapevec, we see that timeout from time to time.. in the deploy log it manifests as Message: No valid host was found. There are not enough hosts available.,11:57
*** milan has joined #rdo11:59
*** gkadam has quit IRC12:00
*** morazi has joined #rdo12:01
weshaykbsingh, btw.. running virt based tests on multiple hosts would be a cool thing for us to pull off, but there is nothing wrong w/ running the entire stack on one virthost12:01
*** ushkalim has quit IRC12:01
hewbroccaeurrgh, weshay that does sound vaguely like the nova/ironic race12:02
*** amuller has joined #rdo12:02
*** mvk has quit IRC12:02
weshayhewbrocca, trying the latest test image on my minidell12:06
hewbroccaweshay: great12:06
weshayif we're hitting a race.. theoretically.. I could check the deploy log or nova logs for that error.. and retry in CI if we hit it12:08
weshayif it works eventually after a few retries that would prove it12:08
hewbroccalet's pass it by lucasagomes and see if I'm diagnosing it correctly12:08
hewbroccaanother solution might be to provision one extra virt host that isn't going to get booted12:09
*** beagles has joined #rdo12:09
hewbrocca(you just don't know which one it's going to be)12:09
weshayhewbrocca, we did get 2+ full passes and promotes on liberty and mitaka over the weekend12:09
hewbroccaYeah I saw that!12:10
hewbroccathat is excellent12:10
*** Guest32906 is now known as flaper8712:10
*** flaper87 has quit IRC12:10
*** flaper87 has joined #rdo12:10
weshayhewbrocca, an extra virthost.. for like the compute virt guests?12:10
hewbroccayeah12:10
*** imcleod has joined #rdo12:10
hewbroccaIf Ironic is going to deploy to 5 guests12:11
hewbroccathen put 6 vms in its inventory12:11
hewbroccabut -- check it with lucasagomes12:11
weshaythen we get into a bunch of networking issues12:11
weshayusing multiple virthosts is not viable atm..12:12
hewbroccaahh... maybe not worth the trouble then12:12
hewbroccano, you could put all the guests on the same host12:12
hewbroccathe same virthost12:12
*** kgiusti has joined #rdo12:12
*** milan has quit IRC12:12
weshaybecause the neutron network bridge etc.. we have to ensure somehow we're not bringing up tests w/ dupe ips12:12
weshayetc12:12
hewbroccaone of them won't ever get booted, so it won't consume any resources12:12
*** trown|outtypewww is now known as trown12:13
weshaysorry.. still early for me.. what advantage do we get w/ a guest that is not booted?12:13
hewbroccaIt avoids this Nova race12:13
weshaytrown, morning12:13
hewbroccawell, lessens the impact of it12:14
weshayoh i c12:14
hewbroccaThere's a time lag between the time Nova requests a host and the time Ironic delivers it12:14
hewbroccaand in that time Nova can request the same host again12:14
*** gkadam has joined #rdo12:15
*** gkadam is now known as Guest1825012:15
*** ushkalim has joined #rdo12:16
kbsinghweshay: just trying to see how best we can unblock on the perf issue, if it really is a perf issue12:16
*** Guest18250 is now known as gkadam12:16
*** d0ugal_ is now known as d0ugal12:17
hewbroccaIt retries a whole bunch of times, which is why this doesn't come up as often as it used to12:17
*** d0ugal has quit IRC12:17
*** d0ugal has joined #rdo12:17
kbsinghweshay: but i dont know how much work is needed to use multihost v/s other options12:17
*** mvk has joined #rdo12:17
*** rbrady has joined #rdo12:17
hewbroccaWell, we *shouldn't* have to do that -- I do like the idea of doing that because it'll let us test more/better configurations, but I don't think it is the first thing we should address12:18
trownweshay: morning12:18
hewbroccatrown: feeling better??12:18
trownnot much actually, but its ok12:19
weshaykbsingh, the networking there gets difficult. At any given moment we would have to know exactly what ips, even the bridge ips our systems are using.12:19
weshayso we don't need this to merge? apevec12:19
weshayhttps://review.openstack.org/#/c/346469/12:19
*** jlibosva has quit IRC12:19
*** jlibosva has joined #rdo12:20
kbsinghhewbrocca: ack12:21
*** thrash has joined #rdo12:22
*** thrash has quit IRC12:22
*** thrash has joined #rdo12:22
hewbroccablearrgh12:23
hewbroccatrown: if you need to go away and rest please do so12:23
weshaytrown, fyi.. I have a ha overcloud deploying in progress atm w/ http://trunk.rdoproject.org/centos7/f0/0e/f00ed98048a1a24e55dfea64171771ff73216335_969c6c4912:23
*** nmagnezi_ has joined #rdo12:25
*** nmagnezi has quit IRC12:26
weshayhewbrocca, looks like I did not hit the race on my deployment..12:26
*** hrw has quit IRC12:26
hewbroccaweshay: excellent12:26
*** hrw has joined #rdo12:27
*** eliska has quit IRC12:27
number80need reviewer for this rdoinfo change: https://review.rdoproject.org/r/#/c/1685/12:30
*** kaminohana has joined #rdo12:31
number80(it's dummy release to allow usage of rdopkg to build common deps builds12:31
*** jprovazn has quit IRC12:35
*** hynekm has quit IRC12:36
apevecweshay, 346469 should still be merged, I just took it temporary into RPM until upstream gate is fixed12:37
apevec(unrelated setuptools/devstack thing)12:37
weshaynice.. /me loves hot wiring things12:37
rdogerritMerged openstack/glanceclient-distgit: Added py2 and py3 subpackage  http://review.rdoproject.org/r/162012:38
apevecweshay, nice thing is that we'll get notification (FTBFS) as soon as it is merged upstream12:39
apevecso it's managed hot wiring12:40
weshaycool man +112:40
*** rlandy has joined #rdo12:41
*** eliska has joined #rdo12:41
lucasagomeshewbrocca, reading (/me was having lunch)12:43
hewbroccalucasagomes: thank you sir12:43
*** Son_Goku has quit IRC12:44
*** mosulica has quit IRC12:46
*** sasha2 has joined #rdo12:47
*** vimal has quit IRC12:47
*** shardy_lunch is now known as shardy12:48
*** morazi has quit IRC12:50
*** ohochman has joined #rdo12:50
*** hynekm has joined #rdo12:50
*** mosulica has joined #rdo12:51
*** fragatina has joined #rdo12:52
*** nehar has quit IRC12:52
*** egafford has joined #rdo12:52
*** abregman is now known as abregman|mtg12:52
lucasagomesweshay, yeah so, this race is very annoying. I was checking the work in nova that addresses the problem but it's not completely merged yet: https://blueprints.launchpad.net/nova/+spec/host-state-level-locking12:52
*** shaunm has joined #rdo12:54
hewbroccalucasagomes: I'm glad they've finally admitted it's a problem at least12:54
lucasagomesweshay, problem is that all the "workarounds" to mitigate the problem are not very good. One is what hewbrocca proposed: have more hosts available, so when the nova scheduler picks one node twice for deployment the retry filter can be activated and fall back to a node that is idle12:55
lucasagomeshewbrocca, yeah12:55
lucasagomesweshay, another way would be to not try to deploy all the nodes at the same time12:55
weshaylucasagomes, is that a deployment setting?12:55
lucasagomesif we could do it in batches... e.g. deploy 3 and 3 more12:56
weshayto stagger it?12:56
lucasagomesweshay, the problem is that there's no lock between the nova scheduler and the resource tracker12:56
lucasagomesweshay, so nova can pick the same node for 2 different instances12:56
*** imcleod has quit IRC12:56
lucasagomesweshay, it becomes more apparent in our deployment scenario because we usually deploy all the nodes at the same time12:57
*** dneary has joined #rdo12:57
*** zaneb has joined #rdo12:57
lucasagomeswhere in normal nova usage you always have spare nodes, so the retry filter ends up covering this problem12:57
weshaylucasagomes, k12:57
lucasagomesweshay, that blueprint that I pasted addresses this problem, the code seems to be up but not merged yet12:58
weshayso if you run w/ HA.. could we deploy one controller at a time?12:58
*** jprovazn has joined #rdo12:58
*** _elmiko is now known as elmiko12:59
*** imcleod has joined #rdo12:59
lucasagomesnot sure, I haven't tried. We could maybe consult the guys that worked on the HA modules12:59
weshaylucasagomes, the other option I was considering was to check the deployment log.. and if we get "not enough hosts".. delete the stack and retry13:00
*** puzzled has joined #rdo13:01
*** eliska has quit IRC13:01
rbowenGood morning #rdo13:01
lucasagomesweshay, "no valid host" you mean? But that's the thing, this shouldn't be a problem13:02
weshayya13:02
lucasagomesbecause of the retry filter it should try again and get another node :-/13:02
lucasagomesyou can try to increase the number of attempts as well13:02
weshayhewbrocca, apevec good details here13:02
lucasagomesbut again, all that just mitigate the problem it does not solve it13:02
*** Alex_Stef has quit IRC13:03
weshaylucasagomes, k.. just to make sure I'm looking at the right setting.. can you paste it13:03
hewbroccalucasagomes: I suspect we don't hit it all that often13:03
hewbroccaand increasing the retry number would help13:03
lucasagomeshewbrocca, yeah, I think we have code in the nova ironic driver also trying to help with this13:03
lucasagomesbut yeah :-/13:03
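The "number of attempts" lucasagomes mentions is the scheduler retry setting in nova.conf on the undercloud; a sketch of raising it (option name and section recalled for the Mitaka/Newton era, so double-check against your nova release):

    [DEFAULT]
    # default is 3; more attempts give the RetryFilter extra chances to
    # fall back to an idle node when two instances race for the same one
    scheduler_max_attempts = 10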
apevecweshay, lucasagomes - thanks, copy/pasted to etherpad :) Is there an ironic LP# for this?13:05
*** unclemarc has joined #rdo13:05
*** snecklifter has quit IRC13:06
*** eliska has joined #rdo13:06
*** jhershbe__ has quit IRC13:07
*** ushkalim has quit IRC13:07
lucasagomesapevec, not in Ironic, because the problem is in nova13:07
lucasagomesapevec, it's just more apparent in Ironic because of the usage of it13:07
lucasagomesin our case, where we try to deploy all nodes at the same time13:08
*** ashw has joined #rdo13:08
apeveclucasagomes, do you know if there is a nova LP# for this?13:13
*** manous has joined #rdo13:13
*** Amita has quit IRC13:13
apevecor is it just that spec?13:13
lucasagomesapevec, it's a spec: http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/host-state-level-locking.html13:13
*** jcoufal has joined #rdo13:13
lucasagomesapevec, and the patches proposed are linked here in the blueprint: https://blueprints.launchpad.net/nova/+spec/host-state-level-locking13:13
apevecand: This looks more or less stalled/abandoned and we're nearly at non-priority feature freeze (6/30) so I'm going to defer this from Newton. -- mriedem 2016062913:14
hewbroccaapevec: yeah do not expect a fix in Newton13:14
*** koko has joined #rdo13:15
*** koko is now known as Guest9836113:16
*** dustins has joined #rdo13:17
*** akshai has joined #rdo13:17
*** manous has quit IRC13:17
*** manousd has joined #rdo13:18
EmilienMit seems like python-mistralclient is broken in stable/mitaka13:18
*** JuanDRay has joined #rdo13:18
EmilienMhttp://logs.openstack.org/95/346195/1/check/gate-puppet-mistral-puppet-beaker-rspec-centos-7/82b59d4/console.html#_2016-07-24_02_25_02_61446213:18
EmilienMor it's a virtual package?13:18
*** jpich_ has joined #rdo13:18
*** manousd has quit IRC13:18
*** Guest98361 has quit IRC13:18
*** Alex_Stef has joined #rdo13:18
*** ushkalim has joined #rdo13:19
rdogerritFabien Boucher created config: Initial commit to activate the stable branch build on CBS  http://review.rdoproject.org/r/172713:19
*** jmelvin has joined #rdo13:19
chemEmilienM: it lacks the openstack tag in the manifests/clients, I'll try adding it13:20
*** dyasny has joined #rdo13:20
rdogerritFabien Boucher proposed config: Initial commit to activate the stable branch build on CBS  http://review.rdoproject.org/r/172713:20
EmilienMchem: yeah I was looking this13:21
EmilienMchem: was it backported?13:21
*** jpich has quit IRC13:21
chemEmilienM: arghh ... no13:21
EmilienMshould we do it?13:22
chemEmilienM: we could give it a try13:22
*** snecklifter has joined #rdo13:22
chemEmilienM: so the problem doesn't exist in master ?13:23
EmilienMchem: I don't understand, we have https://github.com/openstack/puppet-openstack-integration/blob/stable/mitaka/manifests/init.pp#L18-L2013:23
EmilienMso it should not fail13:23
chemEmilienM: no tag https://github.com/openstack/puppet-mistral/blob/master/manifests/client.pp#L1813:24
EmilienMnice catch13:24
EmilienMI'm adding it13:24
chemEmilienM: I'm adding it right now and backporting13:24
EmilienMchem: okk thanks !13:24
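The missing tag chem spotted is what puppet-openstack-integration's package collector keys on; the fix being prepared amounts to something like this in puppet-mistral's client manifest (a sketch only, the exact tag list follows the module's own conventions):

    package { 'python-mistralclient':
      ensure => installed,
      # the 'openstack' tag is the piece that was missing
      tag    => ['openstack', 'mistral-package'],
    }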
*** pgadiya has quit IRC13:26
*** jhershbe__ has joined #rdo13:26
*** mlammon has joined #rdo13:27
*** anshul has quit IRC13:29
*** anshul has joined #rdo13:30
*** julim has joined #rdo13:31
*** jpich_ is now known as jpich13:31
*** vaneldik has quit IRC13:31
*** weshay is now known as weshay_brb13:32
*** ayoung has joined #rdo13:32
*** jhershbe__ has quit IRC13:33
*** trown is now known as trown|brb13:34
*** pilasguru has joined #rdo13:34
*** thrash has quit IRC13:34
*** hynekm has quit IRC13:35
*** hynekm has joined #rdo13:35
*** mgarciam has joined #rdo13:36
*** saneax is now known as saneax_AFK13:38
*** snecklifter has quit IRC13:38
*** zaneb has quit IRC13:39
*** zaneb has joined #rdo13:39
*** READ10 has joined #rdo13:40
*** Alex_Stef has quit IRC13:40
*** ekuris has quit IRC13:41
*** thrash has joined #rdo13:42
*** thrash has quit IRC13:42
*** thrash has joined #rdo13:42
rdogerrithguemar proposed openstack/aodhclient-distgit: Fixed py2 and py3 subpackage  http://review.rdoproject.org/r/162513:42
*** jeckersb_gone is now known as jeckersb13:43
*** weshay_brb is now known as weshay13:44
*** morazi has joined #rdo13:44
*** sdake_ has joined #rdo13:44
*** sdake has quit IRC13:45
*** jhershbe__ has joined #rdo13:46
*** vaneldik has joined #rdo13:49
*** Poornima has quit IRC13:49
*** chandankumar has left #rdo13:50
number80https://bugzilla.redhat.com/show_bug.cgi?id=135982013:50
openstackbugzilla.redhat.com bug 1359820 in Package Review "Review Request: python-cloudkittyclient - Client library for CloudKitty" [Medium,New] - Assigned to nobody13:50
number80this is an existing package that was never reviewed so should be quick13:50
*** hynekm has quit IRC13:51
*** milan has joined #rdo13:52
*** READ10 has quit IRC13:53
weshayapevec, the yum.log on the overcloud is empty because the overcloud nodes are images w/ everything pre-installed.  We're going to make sure we have a rpm_list.txt in /var/log13:54
apevecweshay, thanks, I was going to ask for that as I couldn't find it13:55
apevecweshay, also sar or something like that during test execution, to see where would be bottleneck13:55
*** Pharaoh_Atem has quit IRC13:56
trown|brbweshay: I failed to reproduce the Ironic issue, but the undercloud is very slammed during that step. LA ~913:57
*** trown|brb is now known as trown13:57
trownwhich I think means we have no chance outside of dusty13:57
*** richm has joined #rdo13:58
*** zoliXXL is now known as zoli|mtg13:58
*** vimal has joined #rdo13:58
apevecyeah13:58
weshayapetrich, heh.. sorry I didn't realize where it was https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-master-delorean-minimal-405/overcloud-controller-0/var/log/extra/rpm-list.txt.gz13:59
apevectrown, funnily enough, https://ci.centos.org/job/tripleo-quickstart-promote-master-delorean-minimal/405/ did pass today on n10.gusty13:59
*** Amita has joined #rdo13:59
weshay/var/log/extra13:59
apevecahh13:59
trownapevec: ya the minimal job should pass there, but it might be a bit more racy13:59
*** ushkalim has quit IRC14:00
trownapevec: but ha almost never passes outside of dusty14:00
apevectrown, ok, so odds a bit better than 1/16 :)14:00
*** JuanDRay has quit IRC14:00
*** fultonj has joined #rdo14:00
*** ohochman has left #rdo14:00
*** Son_Goku has joined #rdo14:00
trownya, but not much, because there are still odd failures on the weirdo jobs too, and those all compound since we have to have 100% pass at the same time14:00
apevecwould be nice to have all those job facts in the db, so we could query for stats14:01
trownI think we need to fix that part14:01
trownbeing able to retry a single job would save a lot of pain14:01
*** jhershbe__ has quit IRC14:01
apevectrown, yes, can we do that ?14:01
*** pilasguru has quit IRC14:02
trownnot with multi-job how we have it now14:03
*** Son_Goku has quit IRC14:03
dmsimardtrown, apevec: mitaka just passed twice recently14:03
dmsimardalso, hello #rdo14:04
trownmorning dmsimard14:04
apevecdmsimard, hey14:04
apevecdmsimard, yeah, pure luck? :)14:04
dmsimardI don't know. Were the quickstart jobs always *that* flappy ?14:04
dmsimardIt seems much more of a problem recently14:05
*** nehar has joined #rdo14:05
rbowenHi, dmsimard14:05
trowndmsimard: ya something changed in the last month (and was backported to mitaka) that increased resource usage14:06
*** nehar has quit IRC14:07
trownfrom my unscientific observation, mostly CPU14:07
dmsimardtrown: I noticed somewhere we were running quickstart with 1 worker of everything everywhere14:07
trownbut liberty jobs dont have the issue14:07
dmsimardAre we RAM constrained ? Could we increase the workers ?14:07
trownwe are definitely RAM constrained too, and increasing workers if we have a CPU bottleneck wont help either14:08
*** fragatina has quit IRC14:08
*** Pharaoh_Atem has joined #rdo14:08
*** fragatina has joined #rdo14:08
dmsimardWell what I mean is that one worker and its threads can only process so many requests simultaneously and could be generating a lot of queueing14:09
dmsimardespecially if not behind apache14:09
dmsimardI understand the CPUs aren't super awesome, but the load can also be artificially generated by lack of workers and processes just waiting and retrying all the time.14:11
trownwell overcloud nodes are only given 1 cpu14:12
trownand undercloud does not have any specific configuration, so should have workers equal to the number of cpus14:13
dmsimardAlso, do we load kvm_amd and nested if the node happens to be an AMD machine ?14:13
trownwhich is 4 for minimal and 2 for ha14:13
trowndmsimard: ya, https://github.com/openstack/tripleo-quickstart/blob/master/roles/parts/kvm/tasks/main.yml#L61-L6414:14
*** ushkalim has joined #rdo14:14
dmsimardcool14:14
jjoycenumber80++14:16
zodbotjjoyce: Karma for hguemar changed to 12 (for the f24 release cycle):  https://badges.fedoraproject.org/tags/cookie/any14:16
trowndmsimard: and looking at 09:49:12 on https://ci.centos.org/view/rdo/view/promotion-pipeline/job/tripleo-quickstart-promote-master-delorean-minimal/405/consoleFull we are successfully determining AMD and loading appropriate module14:17
*** pilasguru has joined #rdo14:17
apevectrown, isn't that loaded automatically?14:18
apevecbut modprobe doesn't hurt14:18
*** yfried has quit IRC14:19
weshaytrown, https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-master-delorean-minimal-405/undercloud/var/log/extra/lsmod.txt.gz14:19
trownhmm dont know, ansible tasks all have "changed=true", but not sure if that is meaningful14:19
*** anshul has quit IRC14:19
trownweshay: that is from the undercloud14:20
*** abregman|mtg has quit IRC14:20
trownweshay: the undercloud is not a virthost :)14:20
weshayoh sec14:20
weshayha14:20
*** abregman has joined #rdo14:20
socialhmm do we have some master overcloud deployment jenkins jobs?14:20
weshayhttps://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-master-delorean-minimal-405/172.19.2.138/var/log/extra/lsmod.txt.gz14:20
trownsocial: https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/14:21
weshaykvm_amd                65072  1214:21
weshaydmsimard, ^14:21
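A quick manual sanity check of what the quickstart kvm role is doing on an AMD virthost (standard kernel module paths, not quickstart's exact commands):

    # load the AMD KVM module with nested virtualization enabled
    modprobe kvm_amd nested=1
    # verify nesting: prints 1 (or Y on newer kernels)
    cat /sys/module/kvm_amd/parameters/nested
    # confirm the module is actually in use, as in the lsmod output above
    lsmod | grep kvm_amd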
trownI just got a successful local ha run with the last CI image, so we only have infra issues14:22
apevectrown, nested kvm might actually help in overcloud-novacompute14:22
apevectrown, yep14:22
apevectrown, and that nova/ironic race, sometimes14:22
*** dmsimard sets mode: +v rdogerrit14:22
trownapevec: ya logs in the etherpad dont look like nova/ironic race to me14:22
trownapevec: looks like slow undercloud failed to get callback from i-p-a ramdisk14:23
apevectrown, 25. ?14:23
trownapevec: but  we do hit the nova/ironic race too sometimes, it just looks different than that14:23
apevecplease update14:23
*** gildub has quit IRC14:23
trownj14:24
trownk14:24
apevecwhere can we see more i-p-a logs?14:24
socialtrown: https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-master-delorean-ha-401/undercloud/var/log/ironic/ironic-conductor.log.gz14:24
socialtrown: Stderr: u"/bin/dd: error writing '/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2008-10.org.openstack:a328351a-bdde-4625-82d7-490151848031-lun-1-part2': Input/output error\n4608+0 records in\n4607+0 records out\n4830789632 bytes (4.8 GB) copied, 628.016 s, 7.7 MB/s\n"14:24
socialtrown: this is an issue with neutron using ovs native instead of vsctl14:24
*** mvk has quit IRC14:25
socialtrown: it causes ovs to time out and reconnect, which should not disrupt anything, but it drops all the connections, e.g. the dd fails for ironic14:25
socialtrown: so we have it also on CI14:25
socialjlibosva: ^^14:25
apevecsocial, is there LP# ?14:25
socialapevec: I found it today14:25
socialstill getting data14:25
trownnice one14:26
*** mvk has joined #rdo14:26
socialwe have workaround14:26
*** fultonj_ has joined #rdo14:28
*** fultonj has quit IRC14:28
*** eliska has quit IRC14:29
*** jhershbe has joined #rdo14:30
rdogerritJon Schlueter created openstack/glance-distgit: Conditionalize -doc building  http://review.rdoproject.org/r/172814:30
jschlueterapevec: ^^14:30
*** ohochman has joined #rdo14:33
dmsimardnumber80: https://review.rdoproject.org/r/#/c/1620/ lol... the one project out of the 1000 that has integration jobs14:34
dmsimardbut hey, it passed too !14:34
*** vaneldik has quit IRC14:34
*** ohochman has left #rdo14:34
number80dmsimard: yeah, just wondering if we can separate the votes14:35
*** Ryjedo has joined #rdo14:35
number80(in this case, it's ok, but in some cases, it can get very long)14:35
*** pradiprwt has quit IRC14:35
rdogerritMerged openstack/aodhclient-distgit: Fixed py2 and py3 subpackage  http://review.rdoproject.org/r/162514:38
*** jcoufal_ has joined #rdo14:39
*** beekneemech is now known as bnemec14:40
*** paramite is now known as paramite|afk14:41
*** jcoufal has quit IRC14:42
*** ushkalim has quit IRC14:44
dmsimardnumber80: hm, not sure we can separate the votes, it's a set of jobs for a commit14:44
dmsimardnumber80: I can see how the integration jobs can take some time to build, but at least they're just in check and non-voting right now14:44
dmsimardnumber80: I can check if we can bump the specs of the VMs we're running tests on14:45
dmsimardbut if we bump the specs, it'll result in less overall capacity14:45
dmsimardi.e., we currently run on 4-core/8GB RAM VMs at a quota of 10 VMs that we can relatively safely bump to 20. If we bump specs to 8 cores, I'm not so sure we could as easily bump to 20.14:46
dmsimard*but* -- here, we could try switching to 8 cores and perhaps reconsider if we happen to have longer queues that could be alleviated by more concurrent jobs14:46
*** nmagnezi_ is now known as nmagnezi14:49
*** kaminohana has quit IRC14:49
*** Alex_Stef has joined #rdo14:49
weshaytrown, so re: the master etherpad.. #25 is really caused by the above issue w/ the ipa image?14:49
trownweshay: looks like it14:50
trownsocial: is that issue racy?14:50
rdogerritJon Schlueter proposed openstack/glance-distgit: Conditionalize -doc building  http://review.rdoproject.org/r/172814:50
*** vimal has quit IRC14:51
number80dmsimard: well, the thing is, we can run more jobs in // but since they take much longer time, it could still result in jamming the queue14:51
number80not sure how we can fix that14:51
*** mburned has joined #rdo14:52
flepied2apevec: I added a card in the backlog regarding our discussion this morning: https://trello.com/c/guK9Ag12/157-this-column-represents-work-that-the-team-has-taken-a-first-pass-through-and-added-some-details-no-commitments-to-completing-the#14:52
dmsimardnumber80: yeah. I'll ask for bumping the specs within our current quota and we can reconsider later.14:54
dmsimardnumber80: upstream jobs in openstack-infra gate finish in around 30 minutes which I feel is completely reasonable14:55
*** vimal has joined #rdo14:55
*** mosulica has quit IRC14:56
*** fbo has joined #rdo14:56
number80dmsimard: it was more than 1 hour here14:56
number801h30/2h14:56
*** ushkalim has joined #rdo14:57
dmsimardnumber80: for that particular glanceclient job I see 45 mins + dlrn-rpmbuild14:57
*** tshefi has quit IRC14:57
number80zuul displayed 82 minutes and it was still running14:57
*** dtrainor_ has joined #rdo14:57
*** fragatina has quit IRC14:58
*** fragatin_ has joined #rdo14:58
*** Guest98668 is now known as melwitt14:58
*** dtrainor has joined #rdo14:59
socialtrown: so far I always reproduced it15:00
*** rcernin has quit IRC15:00
* trown retries15:00
*** JuanDRay has joined #rdo15:01
*** dtrainor_ has quit IRC15:03
*** Amita has quit IRC15:03
*** milan has quit IRC15:04
*** linuxgeek_ has joined #rdo15:05
*** Alex_Stef has quit IRC15:05
*** linuxaddicts has quit IRC15:09
*** KarlchenK has quit IRC15:13
*** aortega has quit IRC15:16
dmsimardapevec: could you create a CNAME status.rdoproject.org -> master.monitoring.rdoproject.org ?15:17
apevecI could, once I find email from rbowen how to edit DNS zone, just a sec15:18
rbowenYou should still have git clone from last time, right?15:19
rbowenI can send again if you need - and if I can find what I sent. :-)15:19
apevecnope, I've changed laptop :)15:19
apevecrbowen, np, found email15:19
rbowenok, good.15:19
*** tosky has quit IRC15:19
*** ade_b has quit IRC15:20
*** tosky has joined #rdo15:20
*** KarlchenK has joined #rdo15:20
*** iberezovskiy|off is now known as iberezovskiy15:22
apevecdmsimard,  master.monitoring is already CNAME, so entry is:15:25
apevecstatus                  IN      CNAME   monitoring15:25
dmsimardapevec: sure15:25
*** jhershbe has quit IRC15:25
*** pcaruana has quit IRC15:26
apevecdmsimard, pushed, serial 201607250115:27
dmsimardapevec: ty15:28
*** milan has joined #rdo15:31
*** READ10 has joined #rdo15:31
*** zoli|mtg is now known as zoli15:33
*** zoli is now known as zoliXXL15:33
*** satya4ever has quit IRC15:34
socialtrown: /etc/neutron/plugins/ml2/openvswitch_agent.ini ovsdb_interface = vsctl15:35
socialtrown: for workaround15:35
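social's workaround spelled out as a config fragment (the option normally sits in the agent's [ovs] section; adjust if your layout differs):

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini
    [ovs]
    # fall back from the native ovsdb protocol to the ovs-vsctl CLI
    ovsdb_interface = vsctl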
*** dtrainor has quit IRC15:35
*** choirboy|afk is now known as choirboy15:35
*** fragatin_ has quit IRC15:36
*** milan has quit IRC15:36
*** fragatina has joined #rdo15:37
*** vimal has quit IRC15:37
*** dtrainor has joined #rdo15:39
*** ade_b has joined #rdo15:39
*** zodbot_ has joined #rdo15:40
*** zodbot has quit IRC15:40
trownsocial: hmm not seeing that in CI or local run any more15:40
trownunless it is racy15:40
trownCI failed with openstack client command timeout Error: Could not prefetch keystone_tenant provider 'openstack': Command: 'openstack [\"project\", \"list\", \"--quiet\", \"--format\", \"csv\", \"--long\"]' has been running for more than 20 seconds (tried 7, for a total of 170 seconds)15:41
*** zodbot_ is now known as zodbot15:41
*** rbrady has quit IRC15:41
socialtrown: link on CI to check?15:41
trownsocial: https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-master-delorean-ha-40215:41
trownsocial: but it failed well after Ironic was done15:42
*** spr2 has quit IRC15:42
socialtrown: yeah, that one passed well15:43
*** rpioso has joined #rdo15:43
* social not happy15:43
*** edannon has quit IRC15:44
*** sdake has joined #rdo15:46
apevectrown, so we need to double timeouts again? https://review.openstack.org/#/c/334996/4/lib/puppet/provider/openstack.rb15:46
apevecbut really, 3 mins no reply is bad15:46
trownapevec: well I think we patched the wrong thing originally15:46
trownapevec: we patched retries, not timeout15:47
apevecabove is request_timeout15:47
trownapevec: so if a command fails in 20 seconds that would succeed in 30, we retry when in fact we should just wait longer15:47
trownapevec: look on 1715:47
trownrequest_timeout is the max time15:48
apevecah many timeouts15:48
*** nyechiel has quit IRC15:48
*** abregman has quit IRC15:49
*** dtrainor has quit IRC15:49
apevecok, yeah, 3x 20s doesn't help15:49
*** sdake_ has quit IRC15:49
trownit actually did help some... but it is just racing more times to win rather than just extending the time so we dont have a race15:49
*** rbrady has joined #rdo15:50
apevecright15:50
*** dtrainor has joined #rdo15:51
dmsimardapevec: on second thought, I'm not going to host status.rdo on the same server as the monitoring so we can have somewhere to post if we have issues with rcip-dev that hosts both monitoring and review.rdo15:54
dmsimardapevec: so could you make status.rdoproject.org an A record pointing to 209.132.178.9615:54
*** nmagnezi has quit IRC15:54
dmsimardIt's an OS1 instance, it'll have to do for now15:54
apevecha, good point15:54
*** anilvenkata has quit IRC15:55
*** Liuqing has joined #rdo15:56
*** florianf has quit IRC15:57
*** Liuqing has quit IRC15:57
*** Liuqing has joined #rdo15:58
*** fragatina has quit IRC15:58
weshaypanda, you avail?16:00
*** fragatina has joined #rdo16:00
apevecdmsimard, 201607250216:00
pandaweshay: ye16:01
apevectrown, so you're going to sed-1liner this or propose in puppet-openstacklib ?16:01
apevecand if the former ^ dmsimard can we add this hack in weirdo too?16:02
trownapevec: I will propose to puppet-openstacklib16:02
*** milan has joined #rdo16:02
dmsimardapevec: what ?16:02
dmsimardI'm not following16:02
trownapevec: given they accepted the patch that was intended to fix it, seems likely a patch that fixes the fix would also be ok16:02
dmsimardweirdo jobs don't need a bump in timeouts16:02
apevecdmsimard, https://review.openstack.org/#/c/334996/4/lib/puppet/provider/openstack.rb@1716:02
apevecdmsimard, they do, see etherpad16:03
apevecI've seen timeouts16:03
dmsimardlooking ..16:03
apevechm, no that was timeout nova->neutron16:03
apevecand other one was cirros d/l failure16:04
*** Goneri has quit IRC16:04
apevecre. cirros could we mirror it in ci.centos ?16:04
dmsimardthe 60s timeout is already very generous16:04
dmsimardand we shouldn't be running into it16:04
dmsimard60s is a *long* time16:04
apevecdmsimard, it was only 20s per attempt actually16:04
dmsimardah16:04
apevecthat's puppet, not sure what timeouts apply for inter-service comms16:05
dmsimardapevec: for cirros the challenge is that the location is provided from upstream16:06
*** ushkalim has quit IRC16:06
dmsimardapevec: we *could* change the location of the cirros image16:06
dmsimardbut weirdo tries to modify as few things as possible16:06
apevecisn't that puppet-tempest parameter?16:07
dmsimardp-o-i and packstack retrieve it differently16:08
dmsimardp-o-i: https://github.com/openstack/puppet-openstack-integration/blob/master/run_tests.sh#L15916:08
*** devvesa has quit IRC16:09
apevecah so we could prepare it like upstream in ~/cache/files/16:10
dmsimardpackstack: https://github.com/openstack/packstack/blob/e192b6f202fd1624b48e2cbfc56c7c9ec7842c8e/packstack/puppet/modules/packstack/manifests/provision/glance.pp (which refers back to https://github.com/openstack/packstack/blob/e192b6f202fd1624b48e2cbfc56c7c9ec7842c8e/packstack/plugins/provision_700.py#L32 (16:10
*** Liuqing has quit IRC16:10
pabelangerapevec: dmsimard: Ya, we cache the images during on DIB process16:11
pabelangerour*16:11
apevecdmsimard, it would help if packstack gate could learn about ~/cache16:11
pabelangerapevec: dmsimard: cache-devstack element handles that16:12
dmsimardapevec: so what I would do is to align packstack to be able to deploy the image from a file location instead of a URL location16:12
apevecyeah, we don't use DIB in ci.centos ?16:12
dmsimardapevec: and then we can pre-provision that image16:12
apevecdmsimard, ack16:12
dmsimardapevec: no, ci.centos are centos-minimal virgin installs16:12
dmsimardapevec: but we could leverage caching in review.rdo nodepool16:12
*** oshvartz has quit IRC16:13
pabelangerYa, I'd recommend it.16:13
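Pre-seeding the image the way the upstream cache does could look roughly like this (version and URL reflect what was current at the time and are shown only as an example):

    mkdir -p ~/cache/files
    curl -L -o ~/cache/files/cirros-0.3.4-x86_64-disk.img \
        http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img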
*** nyechiel has joined #rdo16:14
trownapevec: EmilienM https://review.openstack.org/34691516:14
apevectrown, thanks!16:14
*** dsneddon_ has quit IRC16:14
*** dsneddon has quit IRC16:15
*** dsneddon has joined #rdo16:15
trownah crap forgot to fix spec16:15
EmilienMagain?16:15
trownEmilienM: first attempt did not really address the issue, I tried to explain in commit message16:16
EmilienMok16:17
*** hrw has quit IRC16:17
EmilienMchem: ^ fyi16:17
trownEmilienM: first attempt did help, but only by racing more16:17
*** garrett has quit IRC16:17
kbsinghhewbrocca: apevec: dmsimard: trown: weshay: hi guys, bstinson's been doing some perf metrics, and i/o rates on both amd and intel machines are identical; however per core compute on amd is about 50% that of the intel ones; but since amd's have higher core count, the overall compute capacity of the chassis is identical16:18
*** Goneri has joined #rdo16:18
kbsinghwill collect as much data as we can and try to put it all into that bug report on bugs.c.o16:18
apevecthanks!16:18
dmsimardkbsingh: thanks for your time16:19
*** gkadam has quit IRC16:19
*** mosulica has joined #rdo16:19
apevecso yeah, we're doing single vcpu undercloud right?16:20
*** smeyer has quit IRC16:20
trownapevec: no undercloud has 4 in the minimal case, and 2 in the ha case16:21
*** tesseract- has quit IRC16:21
*** ekuris has joined #rdo16:21
trownapevec: we could bump to 4 in the ha case as well to see if that helps, but we would probably need to explicitly make some services use fewer than 4 workers as RAM is very tight there16:22
kbsingh32gb is tight ?16:22
*** ade_b has quit IRC16:22
*** KarlchenK has quit IRC16:22
apevecmeet tripleo :)16:22
trownkbsingh: ya to run tripleo allinone it is16:22
*** hrw has joined #rdo16:23
trownit is 5 vms, one of which is deploying a massive nested heat stack16:23
kbsinghaha16:23
chemtrown: not sure I get it; if I remember well, when we had the issue the process was just stuck in a strange way (it could connect to the socket, but got no answer), so killing it and retrying seemed the way to go.16:23
kbsinghnext year we are going to try and canvass for some more hardware, the aim is going to be beefier, but fewer, machines.16:24
kbsinghhopefully, we can unblock some of this stuff then, but its all a bit in the air and budgets etc16:24
*** hewbrocca is now known as hewbrocca-afk16:25
*** tumble has quit IRC16:25
trownchem: hmm maybe I did not understand the specific previous issue in that case... but for RDO CI we hit the actual command timeout quite a bit too16:25
chemtrown: making the test fail or just delaying it ?16:26
trownkbsingh: thanks a ton for looking into it... I have some ideas around bridging multiple machines together as well... even 2 virthosts would help a ton16:26
kbsinghlets keep exploring options16:27
*** KarlchenK has joined #rdo16:27
kbsinghwe also have the cloud setup, if that helps - its much faster machines16:27
trownchem: well say a command is just slow because we have a CPU bottleneck, it runs on average in 21 seconds time. Sometimes we will get lucky and it will run in 19 seconds and win the race, but often it will run in >20 seconds 8 times in a row and fail16:27
trownan average of 21 seconds is not the best example, as that will often win the race within 8 tries, but with a 30-second average we are unlikely to have any single run come in under 20 seconds16:28
*** ohochman has joined #rdo16:32
chemtrown: oki, then, let's see how this goes.  By the way the command_timeout was untouched by the previous fix, only the total timeout (request_timeout)16:34
*** jlibosva has quit IRC16:34
trownchem: ya, for current issues in RDO we need to increase command_timeout16:35
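trown's retry-vs-timeout point, restated in shell terms (the 20-second cap and the CLI arguments come from the error quoted earlier; `timeout` here is plain GNU coreutils, not the puppet provider itself):

    # roughly what the provider does today: a 20s cap per attempt with
    # several retries -- a call that reliably needs ~21s loses every time
    for i in $(seq 1 7); do
        timeout 20 openstack project list --quiet --format csv --long && break
    done

    # raising the per-command cap instead lets a slow-but-healthy call finish
    timeout 60 openstack project list --quiet --format csv --long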
*** trown is now known as trown|lunch16:36
*** chandankumar has joined #rdo16:37
*** lucasagomes is now known as lucas-afk16:39
*** jubapa has joined #rdo16:40
*** derekh has quit IRC16:43
*** jubapa has quit IRC16:45
*** zoliXXL is now known as zoli|gone16:46
*** zoli|gone is now known as zoli_gone-proxy16:47
*** ekuris has quit IRC16:49
*** jpich has quit IRC16:49
*** fzdarsky is now known as fzdarsky|afk16:51
*** paragan has quit IRC16:54
*** mbound has quit IRC16:55
*** lucas-afk is now known as lucasagomes17:00
*** linuxgeek_ has quit IRC17:00
*** pkovar has quit IRC17:03
*** jlabocki has joined #rdo17:06
*** ifarkas is now known as ifarkas_afk17:07
*** linuxgeek_ has joined #rdo17:08
*** KarlchenK has quit IRC17:08
*** dmsimard is now known as dmsimard|afk17:09
*** mosulica has quit IRC17:10
*** mvk has quit IRC17:12
*** anilvenkata has joined #rdo17:13
*** apevec has quit IRC17:15
*** jhershbe has joined #rdo17:16
*** linuxgeek_ has quit IRC17:22
*** pcaruana has joined #rdo17:23
*** ihrachys has quit IRC17:23
*** fbo has quit IRC17:26
*** apevec has joined #rdo17:27
rdogerritMerged rdoinfo: rdopkg: map rdo-common branch to the right CBS build target  http://review.rdoproject.org/r/168517:27
*** abregman has joined #rdo17:28
*** mbound has joined #rdo17:33
*** fragatina has quit IRC17:35
*** trown|lunch is now known as trown17:40
*** gfidente has quit IRC17:40
*** imcsk8_ has joined #rdo17:41
*** sdake has quit IRC17:41
*** nmagnezi has joined #rdo17:41
*** iberezovskiy is now known as iberezovskiy|off17:43
*** imcsk8 has quit IRC17:44
*** degorenko is now known as _degorenko|afk17:44
*** imcsk8_ is now known as imcsk817:44
*** mcornea has quit IRC17:54
*** dtrainor has quit IRC17:58
*** fragatina has joined #rdo17:59
*** lucasagomes is now known as lucas-dinner18:01
*** nmagnezi has quit IRC18:03
*** dneary has quit IRC18:03
*** fragatina has quit IRC18:04
*** aortega has joined #rdo18:04
*** tosky has quit IRC18:04
*** vaneldik has joined #rdo18:07
*** rbowen has quit IRC18:10
*** rbowen has joined #rdo18:10
*** ChanServ sets mode: +o rbowen18:10
*** dtrainor has joined #rdo18:10
*** dtrainor has quit IRC18:11
*** dtrainor has joined #rdo18:11
*** dtrainor has quit IRC18:13
*** danielbruno has quit IRC18:13
*** dtrainor has joined #rdo18:13
dhill_I found a bug in python-tripleoclient in liberty... but it seems to affect only liberty18:14
dhill_https://bugzilla.redhat.com/show_bug.cgi?id=135991118:14
openstackbugzilla.redhat.com bug 1359911 in openstack-tripleo "Cannot generate overcloud images with liberty" [Low,New] - Assigned to jslagle18:14
rdogerritAlan Pevec created rdoinfo: Normalize rdo.yml  http://review.rdoproject.org/r/172918:19
*** jubapa has joined #rdo18:21
*** mbound has quit IRC18:23
*** mbound has joined #rdo18:24
*** mbound has quit IRC18:25
apevecdhill_, which rpm version-release is that?18:25
*** jubapa has quit IRC18:25
trownya thought we had a patch for that18:26
dhill_0.3.5_2016072418:26
weshayrlandy, comments on https://review.gerrithub.io/#/c/272349/1118:27
trowndhill_: in general though it is better to use images from CI http://artifacts.ci.centos.org/rdo/images/liberty/delorean/stable/18:27
dhill_trown: do we have RHEL images ?18:27
apevecdhill_, this is fixed in http://cbs.centos.org/koji/buildinfo?buildID=708718:27
apevecah you have dlrn build18:27
trowndhill_: oh, no we cant publish RHEL images18:27
weshaydhill_, not RHEL public images18:27
dhill_trown: so this is why I'm building my images ;)18:27
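(A small sketch of what "use images from CI" looks like in practice, for the artifacts URL trown posted above; the directory layout and filenames vary per build, so list the index first rather than guessing names:)
    # see what image artifacts the liberty CI currently publishes
    curl -s http://artifacts.ci.centos.org/rdo/images/liberty/delorean/stable/ | grep -oE 'href="[^"]*"'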
rlandyweshay: looking18:28
trowndhill_: curious, why RDO on RHEL?18:28
apevectrown, looks like that's not fixed on stable/liberty ?18:28
dhill_trown: because I'm not the only one who'll try it ...18:28
trownfair enough, was just curious if there was a specific use case18:29
dhill_trown : nah... it's curiosity...18:29
dhill_trown: I'm used to building my own images18:29
*** ayoung has quit IRC18:30
trowndhill_: you are probably better off using the newer image building method at least then... you will need newer tripleo-common than liberty but only that package18:30
trowndhill_: old tripleoclient code around image building is a mess of hacks18:30
weshayrlandy, can we call this bug resolved? https://bugs.launchpad.net/tripleo-quickstart/+bug/157102818:31
openstackLaunchpad bug 1571028 in tripleo-quickstart "[RFE] Baremetal support" [Wishlist,In progress] - Assigned to John Trowbridge (trown)18:31
dhill_trown: It's not openstack overcloud images build?18:31
apevecdhill_, so for dlrn, this should be fixed upstream on stable/liberty branch18:31
trowndhill_: that is the tripleoclient method18:31
apevecbut what trown said18:31
trowndhill_: that will be patched soon to run https://github.com/openstack/tripleo-common/blob/master/scripts/tripleo-build-images instead18:32
trowndhill_: ^ takes a yaml file that describes the images and is much cleaner18:32
trowndhill_: example yamls are in https://github.com/openstack/tripleo-common/tree/master/image-yaml18:32
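(A minimal sketch of poking at the newer image-build path trown describes; it only assumes git is installed, and deliberately stops short of running the script since its flags differ between branches:)
    git clone https://github.com/openstack/tripleo-common
    ls tripleo-common/image-yaml/                        # example YAML image definitions
    less tripleo-common/scripts/tripleo-build-images     # the script that consumes those YAML files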
rlandyweshay: https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028 will be resolved when there is sufficient documentation for bm and quickstart18:32
openstackLaunchpad bug 1571028 in tripleo-quickstart "[RFE] Baremetal support" [Wishlist,In progress] - Assigned to John Trowbridge (trown)18:32
weshayrlandy, k.. thanks18:32
dhill_trown: when will that be merged?18:33
trowndhill_: ?, that is merged18:33
trownthere is not a patch up to switch `openstack image build` yet, but hopefully that will make newton18:34
trownerr `openstack overcloud image build`18:34
trowndhill_: but the script is usable as is18:34
apevecdamn, 2 greens in a row https://ci.centos.org/job/tripleo-quickstart-promote-master-delorean-minimal/ but HA wasn't lucky18:35
dhill_trown: cool, great to know !  Thanks for the info18:35
trownnever lucky18:35
*** dtrainor has quit IRC18:36
trowncurrent ha run is on gusty too, so not too likely18:36
trownmaybe timeout patch will hit dlrn before the next run18:36
*** oshvartz has joined #rdo18:37
*** dtrainor has joined #rdo18:43
*** julim has quit IRC18:43
*** sdake has joined #rdo18:44
*** julim has joined #rdo18:45
*** fzdarsky|afk has quit IRC18:45
*** mosulica has joined #rdo18:52
*** mosulica has quit IRC18:53
weshayhrybacki, https://review.gerrithub.io/#/c/280691/18:55
weshaymyoung, https://review.gerrithub.io/#/c/280684/18:56
weshaytrown, is ansible-role-ci-centos owned by dmsimard|afk ?18:56
weshayhttps://review.gerrithub.io/#/c/280692/18:56
*** Son_Goku has joined #rdo18:59
trownweshay: ya19:00
trownthough I am not sure anything even uses that now19:01
weshayk19:01
*** mgarciam has quit IRC19:01
trownfor quickstart we just use cico client directly19:01
weshayya.. thought that may have been cico.. thought wrong19:02
weshaycan we triage the role bugs? I'd like to move forward19:02
weshayhttps://bugs.launchpad.net/tripleo-quickstart/+bug/160451719:02
openstackLaunchpad bug 1604517 in tripleo-quickstart "replace the native quickstart inventory with github.com/redhat-openstack/ansible-role-tripleo-inventory" [Undecided,New]19:02
weshayhttps://bugs.launchpad.net/tripleo-quickstart/+bug/160451819:02
openstackLaunchpad bug 1604518 in tripleo-quickstart "Remove the tripleo-quickstart overcloud role and replace it w/ github.com/redhat-openstack/ansible-role-tripleo-overcloud" [Undecided,New]19:02
weshayhttps://bugs.launchpad.net/tripleo-quickstart/+bug/160452019:02
openstackLaunchpad bug 1604520 in tripleo-quickstart "replace the native quickstart overcloud validation w/ github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate" [Undecided,New]19:02
myoungweshay, merged19:03
weshaymyoung, thanks19:03
*** laron has joined #rdo19:04
hrybackiweshay: merged19:04
trownweshay: the bugs about moving stuff out of quickstart (rather than the one about moving stuff in) are tricky given the thread on openstack-dev19:05
weshaywhich.. making oooq easier to consume?19:06
trownlol, making tripleo-ci easier to consume, freudian slip, but ya19:06
weshaytrown, huh.. it seems like a pretty open reception to do what we thought was required19:07
*** READ10 has quit IRC19:07
weshaytrown, want me to just hold off for now?19:07
*** dpeacock has quit IRC19:08
*** metabsd has joined #rdo19:08
trownweshay: I dunno, seemed like we got feedback on not having "tripleo" code on redhat-openstack github, but it is a pretty grey area19:09
*** dtrainor has quit IRC19:10
metabsdHi, I want to test OpenStack. I found the page about how to install OpenStack (packstack). I want to know what the prerequisites are for the filesystem and partitioning. Thank you!19:11
mrungemetabsd, for a small install, about 50 gigs of disk space is enough19:13
mrungemetabsd, there's no specific requirement for partitioning19:14
mrungemore space is better (obviously)19:14
*** laron has quit IRC19:15
metabsdI have 40G for the moment; I plan to add more storage when I deploy my first VM.19:15
*** laron has joined #rdo19:15
*** jcoufal_ has quit IRC19:15
metabsdCan I use advanced LVM and split everything up, or can I also put all the stuff in / ? For example, where are the VMs stored? I want to make sure that partition or fs is kept separate, for flexibility19:16
mrungemetabsd, packstack can create a storage for cinder for you19:16
mrungemetabsd, there is no choice for vm placement19:17
metabsdmrunge: If I want to add more storage to that cinder thing, do I have to increase something?19:17
mrungemetabsd, you can add more storage later19:17
mrungebut19:17
mrungeI would try a deployment first, and then go big with tripleo19:18
metabsdmrunge: I will put everything in /19:18
mrungeyou want HA for infrastructure, no?19:18
metabsdmrunge: I only have one server for my test.19:18
mrungeso, no go big then19:18
mrungemetabsd, don't worry about your / dir. it won't get polluted19:19
metabsdYes, I have 3 nodes with 256G RAM and 6 multi-core CPUs :) I want to play with OpenStack and migrate all our VMware stuff to a Linux hypervisor. I don't know yet whether the better solution is RHEV or OpenStack.19:19
*** gszasz has quit IRC19:19
mrungedepends, I'd say :D19:20
metabsdmrunge: xfs or ext4? I can't find that information in the RDO documentation19:20
mrungedoesn't really matter19:20
*** nyechiel has quit IRC19:21
mrungeif you want speed, you probably won't deploy block storage over a loopback mounted file19:21
metabsdI just read that RDO doesn't want to work with NetworkManager. I have to disable it.19:21
mrungethat is being deployed by packstack for demo reasons19:21
mrungenot necessarily19:21
mrungefor a single machine, you can use network manager, iirc19:22
mrungebut: safe bet is to disable it19:22
metabsdhttps://www.rdoproject.org/install/quickstart/ -- Network section. Disable Network Manager and Enable Network19:22
mrungeyes19:22
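(The network step of that quickstart boils down to roughly the following, run as root on the target host; the current page may differ, so treat this as a sketch:)
    systemctl disable NetworkManager
    systemctl stop NetworkManager
    systemctl enable network
    systemctl start network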
metabsdmrunge: Like SELinux, it's safe to disable it ...19:22
metabsd:)19:22
mrungewhat?19:22
mrungedo not disable selinux19:23
metabsdjoke ... I'm running my Fedora 24 with SELinux right now :)19:23
*** dtrainor has joined #rdo19:23
mrungethere's no need to. if you get a denied, that's a bug19:23
mrunge;-)19:23
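(If you do hit an SELinux denial while keeping it enforcing, as mrunge suggests, a quick way to see what was denied; assumes auditd is running:)
    ausearch -m avc -ts recent
    # or, without the audit tools:
    grep -i 'denied' /var/log/audit/audit.log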
*** mbound has joined #rdo19:25
*** Son_Goku has quit IRC19:26
*** dmsimard|afk is now known as dmsimard19:26
*** ihrachys has joined #rdo19:27
metabsdDoes RDO have something like VDS (VMware) for switching?19:29
*** chlong_POffice has quit IRC19:30
*** chlong_POffice has joined #rdo19:31
*** mbound has quit IRC19:31
rdogerritJon Schlueter proposed openstack/glance-distgit: Conditionalize -doc building  http://review.rdoproject.org/r/172819:32
mrungewhat is vds?19:35
*** ohochman has quit IRC19:37
metabsdmrunge: it's a virtual switch. I can map a physical network to multiple VMs, and the switch can tag VLANs etc...19:40
*** jprovazn has quit IRC19:44
*** danielbruno has joined #rdo19:47
*** laron has quit IRC19:50
*** dtrainor has quit IRC19:51
*** laron has joined #rdo19:51
*** ohochman has joined #rdo19:55
*** laron has quit IRC19:55
*** laron has joined #rdo19:56
weshayhrm..19:57
weshayError: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146'19:57
weshayexception: connect failed19:57
weshayCould not retrieve fact='mongodb_is_master', resolution='<anonymous>': 757: unexpected token at '2016-07-25T19:22:17.325+0000 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused19:57
weshay2016-07-25T19:22:17.326+0000 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146'19:57
weshayWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::cpu_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated19:57
weshayWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::ram_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated19:57
weshayWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::disk_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated19:57
weshayWarning: Scope(Class[Aodh::Api]):19:57
weshayugh.. sry19:57
*** paramite|afk is now known as paramite19:58
*** abregman has quit IRC19:58
weshaylatest master ha ^19:58
EmilienMweird19:59
EmilienMweshay: did you investigate what's diff since last successful job?20:00
EmilienMrpm -qa diff20:00
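(What EmilienM is suggesting, roughly; the file names here are illustrative:)
    # on the last known-good environment
    rpm -qa | sort > packages-good.txt
    # on the failing environment
    rpm -qa | sort > packages-failed.txt
    diff packages-good.txt packages-failed.txt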
weshayjust getting into it now20:00
EmilienMlooks like tht change20:00
EmilienMwonder why OOO CI is not broken20:00
EmilienMI guess it's OOOQ again?20:00
trownit is just gusty chassis20:01
trownI did local ha with that image and it passed20:01
trowndisabled the promote while we wait for puppet-openstacklib change to hit dlrn20:02
weshayhttps://www.diffchecker.com/u0gw4lvm20:02
trownweshay: the relevant error is Error: Could not prefetch keystone_tenant provider 'openstack': Command: 'openstack [\"domain\", \"list\", \"--quiet\", \"--format\", \"csv\", []]' has been running for more than 20 seconds (tried 7, for a total of 170 seconds)20:03
weshaytrown, ah k20:03
trownwhich will hopefully be resolved by the puppet-openstacklib patch that just merged20:03
trownjust waiting on dlrn to build it20:03
trowns/resolved/worked around/20:04
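(To see the symptom by hand on an affected undercloud, one could time the exact command from the error quoted above; this assumes admin credentials have already been sourced:)
    time openstack domain list --quiet --format csv
    # anything consistently over ~20s keeps tripping the provider's per-command timeout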
*** shardy has quit IRC20:04
EmilienMtrown: ah :)20:05
EmilienMso yeah it's timeout20:05
trownjust landed in dlrn, kicking promote20:05
*** chandankumar has quit IRC20:05
*** ihrachys has quit IRC20:06
*** dtrainor has joined #rdo20:07
*** dtrainor has quit IRC20:08
*** dtrainor has joined #rdo20:08
*** spr1 has joined #rdo20:09
*** zodbot has quit IRC20:10
*** zodbot_ has joined #rdo20:10
*** shardy has joined #rdo20:10
*** zodbot_ is now known as zodbot20:11
*** ihrachys has joined #rdo20:12
*** sdake has quit IRC20:16
dhill_why are we using mysql as a backend for ceilometer on the undercloud?   It's really awful20:19
dmsimarddhill_: ceilometer no longer collects/aggregates data afaik so maybe mysql makes sense since gnocchi is the one collecting data now20:20
dmsimarddhill_: i.e, I wouldn't put mysql as the gnocchi backend (if that is even possible)20:21
dhill_dmsimard: I have an undercloud with a 30GB ceilometer database20:21
dhill_dmsimard: in a VM20:21
dhill_dmsimard: that VM is awfully slow and I was wondering20:21
dmsimarddhill_: what version is that ? gnocchi was implemented in mitaka for tripleo I think20:22
dhill_2015.1.320:22
dmsimardso... kilo ?20:22
dmsimardyeah back in kilo gnocchi still didn't exist and the default backend was mysql -- I think mongodb was the preferred backend back then20:23
dmsimardif my memory is correct, gnocchi appeared in liberty and was implemented in mitaka20:24
dhill_ok20:24
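(A generic way to confirm where the space is going on an undercloud like the one dhill_ describes, with its 30GB ceilometer database; plain MySQL/MariaDB, nothing TripleO-specific, run as a user that can read information_schema:)
    mysql -e "SELECT table_schema,
                     ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS size_gb
              FROM information_schema.tables
              GROUP BY table_schema
              ORDER BY size_gb DESC;"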
dhill_I'm hungry now20:25
dhill_talking about gnoccis20:25
dmsimarddhill_: https://review.openstack.org/#/c/252032/ april20:25
dmsimardor maybe that didn't even make the mitaka release ?20:25
dmsimardpradk would know ..20:26
dmsimarddhill_: also, re-reading myself I wrote that wrong.. ceilometer is still the one collecting data but gnocchi is the one storing it and gnocchi has different backends20:26
dmsimardexcuse my french :)20:27
*** dtrainor has quit IRC20:31
*** unclemarc has quit IRC20:31
*** shardy has quit IRC20:34
*** JuanDRay has quit IRC20:35
*** akrivoka has quit IRC20:39
*** akshai has quit IRC20:40
*** milan has quit IRC20:43
dmsimardlarsks: just came across http://blog.oddbit.com/2015/10/13/ansible-20-the-docker-connection-driver/, that's pretty cool :D20:43
larsksYeah, it's nifty!20:43
dmsimardlarsks: I was just looking for a way to do a docker exec in a specific container, it doesn't seem like there's a way !?20:44
dmsimardI guess I could just do a command20:44
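(Two ways to do what dmsimard describes; `web1` is a hypothetical running container name:)
    # plain docker
    docker exec -it web1 uptime
    # or with Ansible 2.0's docker connection plugin, treating the container as a host
    ansible all -i 'web1,' -c docker -m command -a uptime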
*** aortega has quit IRC20:50
metabsdI can specify a network card when I use packstack ??20:51
imcsk8metabsd: yes, the device20:53
*** anilvenkata has quit IRC20:53
*** ihrachys has quit IRC20:54
metabsdpackstack --allinone --device=ens3f0 ?20:54
*** ihrachys has joined #rdo20:54
*** fultonj_ has quit IRC20:55
metabsdimcsk8: I've never used RDO. It's my first time :)20:55
*** pilasguru has quit IRC20:55
metabsdok, just found --help :)20:55
*** chlong_POffice has quit IRC20:56
*** spr1 has quit IRC20:56
imcsk8metabsd: np, we can help you20:57
imcsk8metabsd: you don't specify the network card, you specify hosts20:58
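(A sketch of the two packstack shapes being discussed; the option name is from mitaka-era packstack and may differ in other releases, and the IPs are only placeholders:)
    # everything on the local machine
    packstack --allinone
    # or point packstack at hosts by IP instead of picking a NIC;
    # the first host in --install-hosts becomes the controller
    packstack --install-hosts=192.168.1.10,192.168.1.11,192.168.1.12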
metabsdI've decided to test it with the allinone setup from https://www.rdoproject.org/install/quickstart/20:59
*** ashw has quit IRC21:00
*** ohochman has quit IRC21:01
imcsk8that's ok21:01
metabsdPuppet doing something :P21:01
*** morazi has quit IRC21:02
*** jeckersb is now known as jeckersb_gone21:03
openstackgerritMerged openstack/packstack: add wsgi threads for gnocchi::api  https://review.openstack.org/34128721:07
*** chlong_POffice has joined #rdo21:09
*** trown is now known as trown|outtypewww21:09
*** fragatina has joined #rdo21:15
*** mvk has joined #rdo21:16
*** rlandy has quit IRC21:16
*** Goneri has quit IRC21:16
*** ihrachys has quit IRC21:18
*** Son_Goku has joined #rdo21:19
*** egafford has quit IRC21:19
*** ihrachys has joined #rdo21:19
*** julim has quit IRC21:20
*** Son_Goku has quit IRC21:20
*** Son_Goku has joined #rdo21:21
*** rhallisey has quit IRC21:30
*** pilasguru has joined #rdo21:30
*** ihrachys has quit IRC21:33
*** ihrachys has joined #rdo21:34
*** laron has quit IRC21:36
dmsimardnumber80: I saw you update https://github.com/redhat-openstack/openstack-utils recently... is that packaged somewhere or somethnig ?21:37
dmsimardsomething*21:37
number80dmsimard: it is in CBS21:37
dmsimardnumber80: as openstack-utils ?21:37
* dmsimard looks21:38
number80http://cbs.centos.org/koji/buildinfo?buildID=1144321:38
number80yes, in buildlogs repo21:38
number80(all releases)21:38
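(To check which build of openstack-utils a machine actually has versus what the enabled repos offer; plain rpm/yum, nothing RDO-specific:)
    rpm -qi openstack-utils     # installed version and the build it came from
    yum info openstack-utils    # what the currently enabled repos would install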
*** egafford has joined #rdo21:38
dmsimardnumber80: ah ok, was looking for a spec here.. https://github.com/rdo-packages?utf8=✓&query=utils21:38
dmsimardI guess it's not built by dlrn21:39
*** laron has joined #rdo21:39
number80dmsimard: openstack-utils is a downstream-only project21:40
number80https://github.com/redhat-openstack/openstack-utils21:40
*** snecklifter_ has quit IRC21:40
number80spec file is in el7-rpm branch21:40
*** bnemec has quit IRC21:40
dmsimardoh, didn't notice the branch, ty21:40
number80wait21:41
dmsimardnumber80: context was https://review.openstack.org/#/c/346551/21:41
weshaytrown|outtypewww, EmilienM not sure why yet.. but the overcloud nodes in this test.. https://review.openstack.org/#/c/341616/21:41
weshaynever get out of spawning21:41
weshay| 91051124-3ae4-4bce-971b-504050aa0c85 | overcloud-controller-0  | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.14 |21:41
weshay| 8cb03d65-f6f5-4b26-9450-a5100800d316 | overcloud-novacompute-0 | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.1021:41
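(Typical first checks when overcloud nodes sit in 'spawning' like that, run on the undercloud; the command and service names are the liberty/mitaka-era ones, so adjust for newer releases:)
    source ~/stackrc
    ironic node-list                      # are the baremetal nodes stuck in 'deploying' or 'wait call-back'?
    sudo journalctl -u openstack-nova-compute -u openstack-ironic-conductor --since "1 hour ago"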
number80dmsimard: it's in there now : https://github.com/rdo-common/openstack-utils21:43
dmsimardnumber80: rdo-common, is that new ?21:43
dmsimardfirst time I see it ..21:43
number80dmsimard: we use this branch for common deps21:44
number80openstack-utils had no official dist-git so I moved it there21:44
*** laron has quit IRC21:44
*** jeckersb_gone is now known as jeckersb21:47
*** fragatin_ has joined #rdo21:49
*** ihrachys_ has joined #rdo21:50
*** ihrachys has quit IRC21:50
*** bnemec has joined #rdo21:50
apevecdmsimard, openstack-status has no idea about HA/pacemaker and we don't want to teach it about that21:51
*** dustins has quit IRC21:51
*** fragatin_ has quit IRC21:51
apevecopenstack-utils is on its way out, we already stripped unsupported parts of it21:51
apeveclike openstack-db21:52
*** fragatin_ has joined #rdo21:52
dmsimardah, okay21:53
*** fragatina has quit IRC21:53
* dmsimard shrugs21:53
*** amuller has quit IRC21:53
apevecthat said, I'm not exactly a fan of that big fat scripts/tripleo.sh21:55
apevecI'd expect sanity check to be in https://github.com/openstack/tripleo-validations21:56
apevechmm, is that an empty project?21:57
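(A quick way to answer apevec's question about whether tripleo-validations has any content yet; only assumes git is installed:)
    git clone https://github.com/openstack/tripleo-validations
    git -C tripleo-validations log --oneline | head    # an empty or near-empty history means not much has landed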
*** apevec has quit IRC22:00
*** coolsvap has quit IRC22:01
*** rwsu has quit IRC22:04
*** pilasguru has quit IRC22:04
*** jmelvin has quit IRC22:08
*** paramite has quit IRC22:08
*** jhershbe has quit IRC22:09
*** jhershbe has joined #rdo22:09
*** rlandy has joined #rdo22:11
*** thrash is now known as thrash|g0ne22:17
*** jubapa has joined #rdo22:17
*** akshai has joined #rdo22:20
*** fragatina has joined #rdo22:23
*** fragatin_ has quit IRC22:25
*** akshai has quit IRC22:29
*** jubapa has quit IRC22:30
*** jubapa has joined #rdo22:31
*** chem has quit IRC22:36
*** chem has joined #rdo22:36
*** danielbruno has quit IRC22:39
*** kgiusti has left #rdo22:45
*** danielbruno has joined #rdo22:45
*** iranzo has joined #rdo22:46
*** rhallisey has joined #rdo22:46
*** elmiko is now known as _elmiko22:50
*** lucas-dinner has quit IRC22:51
*** pilasguru has joined #rdo22:53
*** lucasagomes has joined #rdo22:57
*** egafford has quit IRC23:06
*** dgurtner has quit IRC23:11
*** mlammon has quit IRC23:13
*** ihrachys_ has quit IRC23:15
*** ohochman has joined #rdo23:16
*** dtrainor has joined #rdo23:19
*** rpioso has quit IRC23:23
*** rpioso has joined #rdo23:23
*** rpioso has quit IRC23:23
*** fragatina has quit IRC23:23
*** fragatina has joined #rdo23:24
*** chlong_POffice has quit IRC23:25
*** danielbruno has quit IRC23:27
*** gildub has joined #rdo23:41
*** chlong_POffice has joined #rdo23:42
*** kaminohana has joined #rdo23:49

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!