Wednesday, 2016-03-23

00:35 *** angdraug has quit IRC
05:44 *** dshulyak has joined #openstack-solar
07:32 *** salmon_ has joined #openstack-solar
08:41 <openstackgerrit> Merged openstack/solar: Use WeakValueDictionary instead of WeakSet in DBModelProxy  https://review.openstack.org/295675
08:58 <openstackgerrit> Dmitry Shulyak proposed openstack/solar: Refactoring of graph.py and usage of it in scheduler  https://review.openstack.org/294605
09:03 *** openstackgerrit has quit IRC
09:04 *** openstackgerrit has joined #openstack-solar
09:18 <openstackgerrit> Dmitry Shulyak proposed openstack/solar: Implement traversal based on number of childs  https://review.openstack.org/295348
09:32 <mkwiek> hi guys, what is the preferred way to add a transport to a resource via code?
09:33 <pigmej> mkwiek: there is only one way
09:33 <pigmej> add it :)
09:33 *** zefciu has joined #openstack-solar
09:33 <pigmej> mkwiek: check the torrent example
09:33 <pigmej> basically you need to connect a transport resource with any other resource
09:33 <dshulyak> salmon_: have you noticed how long it takes to deploy 3 controllers with current solar? maybe you have the history?
09:34 <pigmej> BUT because transports are kinda 'special' (an ugly hack around transports_id)
09:34 <pigmej> you need to create a transports container first
09:34 <pigmej> mkwiek: generally, transports are, let's say, lazily evaluated
09:34 <mkwiek> ok, thanks :)
09:34 <pigmej> each resource has a transports_id, and using this transports_id we look up the actual transports resource data, so that we do *not* trigger updates of all resources when you change ssh_user
09:35 <mkwiek> I see, makes sense
09:35 <pigmej> so basically, you need the transports_container stuff (resources/transports)
09:35 <pigmej> then you connect these transports to *any* resource you want
09:35 <pigmej> then you connect your transport to this transports_container
09:35 <pigmej> UX <3
09:36 <mkwiek> indeed :)
09:36 <pigmej> torrent/example.py is your manual :)
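The wiring pigmej describes above, as a minimal sketch. The resource names, definition paths, and the create/connect calls below are assumptions made for illustration; torrent/example.py in the solar repository is the authoritative reference.

    # Sketch only: names, paths and create/connect signatures are assumptions,
    # see torrent/example.py for the real code.
    from solar.core import resource  # assumed import path

    node = resource.load('node1')  # any already-created resource

    # the transports container that every resource points at via transports_id
    transports = resource.create('transports_node1', 'resources/transports', {})

    # the concrete transport (ssh here)
    ssh = resource.create('ssh_transport_node1', 'resources/transport_ssh',
                          {'ssh_user': 'vagrant', 'ssh_key': '/vagrant/.ssh/id_rsa'})

    # 1. connect the transports container to the resource (fills in its transports_id)
    transports.connect(node, {})
    # 2. connect the concrete transport to the container
    ssh.connect(transports, {})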
09:36 <pigmej> but well... why do you need it?
09:37 <pigmej> because I'm currently making tests of transports / handlers
09:41 <salmon_> dshulyak: too long
09:41 <salmon_> dshulyak: I'm deploying 2 controllers now
09:41 <pigmej> salmon_: int(too long) is invalid :(
09:43 <dshulyak> salmon_: 2 and 3 shouldn't be any different, there is probably a bug in the f2s parser or elsewhere
09:43 <salmon_> dshulyak: what kind of bug?
09:44 <dshulyak> I don't know :) but the total deployment time for 2 and 3 controllers should be roughly equal
09:45 <dshulyak> or, if you don't have enough CPU for 3 VMs, then it won't be
09:45 <mkwiek> pigmej: I am just making tests of my changes
09:45 <pigmej> k
09:46 <pigmej> you could mock the BAT function though
09:52 <mkwiek> pigmej: but this is what I want to test :)
10:25 *** dshulyak has quit IRC
10:45 *** dshulyak has joined #openstack-solar
11:08 <openstackgerrit> Dmitry Shulyak proposed openstack/solar: Refactoring of graph.py and usage of it in scheduler  https://review.openstack.org/294605
11:08 <openstackgerrit> Dmitry Shulyak proposed openstack/solar: Implement traversal based on number of childs  https://review.openstack.org/295348
11:34 <salmon_> dshulyak: do you want the results from the 2-controller deploy?
11:35 <dshulyak> salmon_: yep, why not :)
11:36 <salmon_> dshulyak: https://bpaste.net/show/39b9bc274d61
11:37 <dshulyak> hm
11:38 <dshulyak> 6890 ?
11:38 <dshulyak> that is about 2 hours
11:39 <pigmej> ;/
11:39 <pigmej> salmon_: why is it that slow?
11:39 <pigmej> you told me that you were out of RAM for a moment (at least), right?
11:41 <dshulyak> it looks like something just got stuck, I will add a per-task delta to the report
11:42 <dshulyak> pigmej: can you take a look at this patch again https://review.openstack.org/#/c/294605/? I had to add save_all_lazy before the lock is released, otherwise there were concurrency issues with the riak backend
11:42 <pigmej> how could that be?
11:43 <dshulyak> what?
11:43 <pigmej> I mean, why does this save_all_lazy fix the concurrency issue?
11:43 <dshulyak> because the lock is released before session_end
11:43 <dshulyak> I think it might be connected to the smart waiter, but I am not sure
11:43 <pigmej> so we need to find a way to release it after session_end :)
11:44 <pigmej> the smart waiter just sends signals and then the "normal" logic takes over, so if the smart waiter causes the error, then the lock is not working properly :/
11:44 <salmon_> I said it's slow :P
11:44 <dshulyak> in fuel, deployment of 3 controllers takes ~30-40 mins including provisioning
11:45 <dshulyak> so it is either lack of ram/cpu or bugs in solar
11:46 <pigmej> or salmon's environment
11:46 <pigmej> :P
11:46 <pigmej> dshulyak: is there any always-failing case for this lock?
11:48 <pigmej> because if the solution is "unlock after the session has ended", I would prefer to have it that way, since I can easily imagine us hitting a similar problem again in the future
11:48 <dshulyak> pigmej: you can check out my patch and remove save_all_lazy in scheduler.py; you won't get 100% reproducibility, but I see 1-2 functional test failures every 2-3 runs
11:49 <dshulyak> actually, previously we were saving all records in the graph before releasing the lock
11:49 <dshulyak> so it is not a regression :)
11:50 <salmon_> I think it's ram
11:50 <pigmej> dshulyak: yeah, but my question is: should this be the fix, or should we rather change when the lock is released?
11:50 <pigmej> we could easily add "callbacks" for session_end
11:52 <dshulyak> I think an explicit save_all_lazy is good enough for now; also, maybe it would be better to get rid of the lock altogether
11:53 <pigmej> dshulyak: could you please add a note on those lines?
11:53 <pigmej> because, well, this sounds "hacky" to me :D
12:06 <dshulyak> ok, will do in a moment
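The ordering issue behind the save_all_lazy change, as a self-contained toy model. This is not Solar's code; only the save_all_lazy and session_end names come from the discussion above, everything else is hypothetical. The point is that writes are buffered and normally flushed only when the session ends, so if the lock is released before that flush, the next lock holder can read stale graph state.

    import threading

    storage = {'task': 'PENDING'}   # stands in for the riak-backed graph
    lock = threading.Lock()

    class Session(object):
        def __init__(self):
            self.lazy = {}                  # buffered ("lazy") writes
        def write(self, key, value):
            self.lazy[key] = value          # not yet visible to other workers
        def save_all_lazy(self):
            storage.update(self.lazy)       # flush buffered writes
            self.lazy.clear()
        def end(self):
            self.save_all_lazy()            # session end also flushes, but may be too late

    def scheduler_step_buggy(session):
        with lock:
            session.write('task', 'SUCCESS')
        # lock released here: another worker can still read 'PENDING'
        session.end()

    def scheduler_step_fixed(session):
        with lock:
            session.write('task', 'SUCCESS')
            session.save_all_lazy()         # explicit flush before releasing the lock
        session.end()                       # nothing left to flush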
dshulyaksalmon_: do you have script to bootstrap solar on fuel master?12:06
salmon_dshulyak: nope. I'm doing it manually12:07
pigmejdshulyak: just take parts from centos provisioning, should be fine12:08
pigmej:)12:08
mkwiekpigmej: do we have any fixtures for tests with resources with transports connected to them?13:37
pigmejnope13:39
pigmejI just have some composer files13:39
pigmejhttps://bpaste.net/show/a3d28f084a6e13:39
pigmejlike that13:40
mkwiekcan I add them to fixtures?13:41
pigmejwhat ?13:41
pigmejwhat fixtures ?13:41
mkwiekdoes it even make sense to do it? to add this kind of file to let's say, solar/test/resource_fixtures ?13:52
pigmejwell, but what's the point ? I mean, this is for 'bat' tests ?13:53
pigmejif so, then adding resources and loading them will test in fact "resources transports passing"13:53
pigmejand not BAT 'load' stuff13:53
mkwiekwhen I have resources with transports attached to them, I can test the BAT behaviour. At least that's my understanding13:55
pigmejwhy not just mock transports() from resource?13:58
mkwiekbecause I still don't know what format is returned from there :(13:59
pigmejmkwiek: just load cli with ANY resource13:59
pigmejand call resource.transports()13:59
pigmejr = resource.load('node1')13:59
pigmejprint r.transports()14:00
pigmej:)14:00
mkwiekcrap, it makes sense, it seems I had a brainfart...14:00
pigmej;D14:00
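If mocking transports() is still useful later, a minimal sketch of the approach pigmej suggested. The empty return value is a placeholder: load a real resource as shown above and mirror whatever r.transports() actually returns.

    try:
        from unittest import mock      # Python 3
    except ImportError:
        import mock                    # Python 2, as solar used at the time

    def test_transports_can_be_mocked():
        fake_resource = mock.Mock()
        fake_resource.transports.return_value = []  # replace [] with the real structure

        # any BAT code handed fake_resource now sees the canned value
        assert fake_resource.transports() == []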
14:22 *** openstack has joined #openstack-solar
14:58 *** spyzalski has quit IRC
15:12 *** openstackstatus has joined #openstack-solar
15:12 *** ChanServ sets mode: +v openstackstatus
15:31 *** dshulyak has quit IRC
16:20 *** dshulyak has joined #openstack-solar
17:30 *** dshulyak has quit IRC
17:54 *** angdraug has joined #openstack-solar
18:33 *** angdraug has quit IRC
18:49 *** angdraug has joined #openstack-solar
18:50 *** angdraug has quit IRC
19:08 *** openstack has joined #openstack-solar
19:21 *** openstack has joined #openstack-solar
19:22 *** pigmej has joined #openstack-solar
19:23 *** salmon_ has joined #openstack-solar
19:24 *** openstackstatus has joined #openstack-solar
19:24 *** ChanServ sets mode: +v openstackstatus
20:31 *** openstack has joined #openstack-solar
20:50 *** angdraug has joined #openstack-solar
23:02 *** angdraug has quit IRC
23:23 *** openstack has joined #openstack-solar
23:24 *** openstackstatus has joined #openstack-solar
23:24 *** ChanServ sets mode: +v openstackstatus
23:45 *** angdraug has joined #openstack-solar
23:53 *** salmon_ has quit IRC
