Friday, 2014-03-07

openstackgerritAlex Pecoraro proposed a change to openstack/swift: Allow hostname for nodes in Ring  https://review.openstack.org/7454200:01
*** sungju has joined #openstack-swift00:17
*** openstackgerrit has quit IRC00:35
*** miurahr has joined #openstack-swift00:35
*** openstackgerrit has joined #openstack-swift00:35
*** miurahr has left #openstack-swift00:35
zaitcevclass Writr(object):00:41
zaitcevOh Solly you magnificent00:41
*** hurricanerix has quit IRC00:44
torgomaticdfg: ping00:45
*** sungju has quit IRC00:55
*** NM has joined #openstack-swift00:56
*** dmsimard has joined #openstack-swift00:56
*** dmsimard has quit IRC01:01
*** Dharmit has joined #openstack-swift01:01
*** kazdorsnab has joined #openstack-swift01:03
*** dmsimard has joined #openstack-swift01:04
*** dmsimard has quit IRC01:10
*** dmsimard1 has joined #openstack-swift01:10
*** mkollaro has quit IRC01:12
*** kazdorsnab has quit IRC01:16
*** csd has quit IRC01:38
*** csd has joined #openstack-swift01:43
*** shri has quit IRC01:43
*** csd has quit IRC01:49
*** nosnos has joined #openstack-swift02:00
*** taras___ has joined #openstack-swift02:05
*** taneez has joined #openstack-swift02:05
*** taneez is now known as tanee02:05
*** changbl has joined #openstack-swift02:05
*** taras__ has quit IRC02:06
*** tanee-away has quit IRC02:06
*** haomaiw__ has quit IRC02:17
*** haomaiwang has joined #openstack-swift02:17
*** j_king_ has joined #openstack-swift02:21
*** ondergetekende_ has joined #openstack-swift02:24
*** otherjon_ has joined #openstack-swift02:24
*** openstackgerrit has quit IRC02:35
*** mtreinish has quit IRC02:35
*** gholt has quit IRC02:35
*** j_king_ has quit IRC02:35
*** changbl has quit IRC02:35
*** Dharmit has quit IRC02:35
*** gyee has quit IRC02:35
*** PradeepChandani has quit IRC02:35
*** Slidey has quit IRC02:35
*** clarkb has quit IRC02:35
*** haomaiwang has quit IRC02:35
*** mandarine has quit IRC02:35
*** grapsus__ has quit IRC02:35
*** joearnold has quit IRC02:35
*** Kim-Chi-San has quit IRC02:35
*** NM has quit IRC02:35
*** occupant has quit IRC02:35
*** j_king has quit IRC02:35
*** mkerrin has quit IRC02:35
*** wer has quit IRC02:35
*** ondergetekende has quit IRC02:35
*** briancline has quit IRC02:35
*** gdrudy has quit IRC02:35
*** swifterdarrell has quit IRC02:35
*** sileht has quit IRC02:35
*** sudorandom has quit IRC02:35
*** otherjon has quit IRC02:35
*** CrackerJackMack has quit IRC02:35
*** otherjon_ has quit IRC02:35
*** ondergetekende_ has quit IRC02:35
*** tanee has quit IRC02:35
*** taras___ has quit IRC02:35
*** nosnos has quit IRC02:35
*** chandan_kumar has quit IRC02:35
*** zul has quit IRC02:35
*** peluse has quit IRC02:35
*** krtaylor has quit IRC02:35
*** creiht has quit IRC02:35
*** wkelly has quit IRC02:35
*** saschpe has quit IRC02:35
*** annegentle has quit IRC02:35
*** acoles has quit IRC02:35
*** sfineberg has quit IRC02:35
*** tristanC has quit IRC02:35
*** cropalato has quit IRC02:35
*** wayneeseguin has quit IRC02:35
*** acorwin has quit IRC02:35
*** jokke_ has quit IRC02:35
*** zackmdavis has quit IRC02:35
*** jeblair has quit IRC02:35
*** greghaynes has quit IRC02:35
*** pberis has quit IRC02:35
*** yuan has quit IRC02:35
*** early has quit IRC02:35
*** redbo has quit IRC02:35
*** notmyname has quit IRC02:35
*** MooingLemur has quit IRC02:35
*** rahmu has quit IRC02:35
*** dosaboy has quit IRC02:35
*** bsdkurt has quit IRC02:35
*** torgomatic has quit IRC02:35
*** minnear has quit IRC02:35
*** glange has quit IRC02:35
*** jogo has quit IRC02:35
*** hugokuo has quit IRC02:35
*** ctennis has quit IRC02:35
*** anderstj has quit IRC02:35
*** alpha_ori has quit IRC02:35
*** ryao has quit IRC02:35
*** anticw has quit IRC02:35
*** pconstantine has quit IRC02:35
*** dfg has quit IRC02:35
*** mordred has quit IRC02:35
*** kragniz has quit IRC02:35
*** Alex_Gaynor has quit IRC02:35
*** StevenK has quit IRC02:35
*** EmilienM has quit IRC02:35
*** pandemicsyn has quit IRC02:35
*** chmouel has quit IRC02:35
*** Diddi has quit IRC02:35
*** omame has quit IRC02:35
*** zigo has quit IRC02:35
*** mhu has quit IRC02:35
*** russellb has quit IRC02:35
*** Anticimex has quit IRC02:35
*** zanc has quit IRC02:35
*** fbo_away has quit IRC02:35
*** ChanServ has quit IRC02:35
*** madhuri has quit IRC02:35
*** mjseger has quit IRC02:35
*** ekarlso has quit IRC02:35
*** Dieterbe has quit IRC02:35
*** physcx has quit IRC02:35
*** JelleB has quit IRC02:35
*** mlanner has quit IRC02:35
*** amandap has quit IRC02:35
*** therve has quit IRC02:35
*** portante has quit IRC02:35
*** luisbg has quit IRC02:35
*** ahale has quit IRC02:35
*** rpedde has quit IRC02:35
*** clayg has quit IRC02:35
*** mtreinish has joined #openstack-swift02:40
*** dmorita has joined #openstack-swift03:02
*** mkerrin has joined #openstack-swift03:02
*** NM has joined #openstack-swift03:02
*** briancline has joined #openstack-swift03:02
*** CrackerJackMack has joined #openstack-swift03:02
*** sudorandom has joined #openstack-swift03:02
*** gdrudy has joined #openstack-swift03:02
*** wer has joined #openstack-swift03:02
*** dmorita has quit IRC03:06
*** wer has quit IRC03:06
*** sudorandom has quit IRC03:06
*** gdrudy has quit IRC03:06
*** CrackerJackMack has quit IRC03:06
*** briancline has quit IRC03:06
*** NM has quit IRC03:06
*** mkerrin has quit IRC03:06
*** dmorita has joined #openstack-swift03:11
*** mkerrin has joined #openstack-swift03:11
*** NM has joined #openstack-swift03:11
*** briancline has joined #openstack-swift03:11
*** CrackerJackMack has joined #openstack-swift03:11
*** sudorandom has joined #openstack-swift03:11
*** gdrudy has joined #openstack-swift03:11
*** wer has joined #openstack-swift03:11
*** ondergetekende has joined #openstack-swift03:22
*** creiht has joined #openstack-swift03:23
*** otherjon has joined #openstack-swift03:30
*** annegentle has joined #openstack-swift03:31
*** occup4nt has joined #openstack-swift03:31
*** taras___ has joined #openstack-swift03:31
*** zul has joined #openstack-swift03:31
*** peluse has joined #openstack-swift03:31
*** krtaylor has joined #openstack-swift03:31
*** wkelly has joined #openstack-swift03:31
*** tristanC has joined #openstack-swift03:31
*** wayneeseguin has joined #openstack-swift03:31
*** annegentle has quit IRC03:31
*** zul has quit IRC03:31
*** zul has joined #openstack-swift03:32
*** annegentle has joined #openstack-swift03:33
*** basha has joined #openstack-swift03:56
*** erlon has joined #openstack-swift04:04
*** basha has quit IRC04:12
*** NM has quit IRC04:19
*** fifieldt has joined #openstack-swift04:48
*** zaitcev has quit IRC04:55
*** nosnos has joined #openstack-swift05:00
*** Dharmit has joined #openstack-swift05:20
*** fifieldt has quit IRC05:30
*** erlon has quit IRC06:12
*** nshaikh has joined #openstack-swift06:17
*** Dharmit has quit IRC06:21
*** Dharmit has joined #openstack-swift06:21
*** sungju has joined #openstack-swift06:28
*** miurahr has joined #openstack-swift06:41
*** miurahr has quit IRC06:44
*** saju_m has joined #openstack-swift07:01
*** nosnos_ has joined #openstack-swift07:27
*** nosnos has quit IRC07:27
*** psharma has joined #openstack-swift07:54
*** Midnightmyth has joined #openstack-swift07:54
*** bvandenh has joined #openstack-swift07:54
*** madhuri has joined #openstack-swift07:54
*** ppai has joined #openstack-swift07:54
*** haomaiw__ has joined #openstack-swift07:54
*** sfineberg has joined #openstack-swift07:54
*** acoles has joined #openstack-swift07:54
*** acorwin_ has joined #openstack-swift07:54
*** tanee has joined #openstack-swift07:54
*** chandan_kumar has joined #openstack-swift07:54
*** saschpe has joined #openstack-swift07:54
*** sileht has joined #openstack-swift07:54
*** swifterdarrell has joined #openstack-swift07:54
*** j_king_ has joined #openstack-swift07:54
*** changbl has joined #openstack-swift07:54
*** mandarine has joined #openstack-swift07:54
*** jogo has joined #openstack-swift07:54
*** PradeepChandani has joined #openstack-swift07:54
*** Slidey has joined #openstack-swift07:54
*** pberis has joined #openstack-swift07:54
*** yuan has joined #openstack-swift07:54
*** early has joined #openstack-swift07:54
*** Alex_Gaynor has joined #openstack-swift07:54
*** grapsus__ has joined #openstack-swift07:54
*** gholt has joined #openstack-swift07:54
*** redbo has joined #openstack-swift07:54
*** ekarlso has joined #openstack-swift07:54
*** mjseger has joined #openstack-swift07:54
*** clarkb has joined #openstack-swift07:54
*** Dieterbe has joined #openstack-swift07:54
*** physcx has joined #openstack-swift07:54
*** JelleB has joined #openstack-swift07:54
*** greghaynes has joined #openstack-swift07:54
*** notmyname has joined #openstack-swift07:54
*** MooingLemur has joined #openstack-swift07:54
*** mlanner has joined #openstack-swift07:54
*** amandap has joined #openstack-swift07:54
*** therve has joined #openstack-swift07:54
*** jokke_ has joined #openstack-swift07:54
*** zackmdavis has joined #openstack-swift07:54
*** jeblair has joined #openstack-swift07:54
*** joearnold has joined #openstack-swift07:54
*** Kim-Chi-San has joined #openstack-swift07:54
*** rahmu has joined #openstack-swift07:54
*** dickson.freenode.net sets mode: +vvv gholt redbo notmyname07:54
*** StevenK has joined #openstack-swift07:54
*** dosaboy has joined #openstack-swift07:54
*** bsdkurt has joined #openstack-swift07:54
*** EmilienM has joined #openstack-swift07:54
*** pandemicsyn has joined #openstack-swift07:54
*** torgomatic has joined #openstack-swift07:54
*** minnear has joined #openstack-swift07:54
*** glange has joined #openstack-swift07:54
*** hugokuo has joined #openstack-swift07:54
*** ctennis has joined #openstack-swift07:54
*** anderstj has joined #openstack-swift07:54
*** alpha_ori has joined #openstack-swift07:54
*** portante has joined #openstack-swift07:54
*** ryao has joined #openstack-swift07:54
*** anticw has joined #openstack-swift07:54
*** pconstantine has joined #openstack-swift07:54
*** dfg has joined #openstack-swift07:54
*** dickson.freenode.net sets mode: +vvvv torgomatic glange portante dfg07:54
*** mordred has joined #openstack-swift07:54
*** kragniz has joined #openstack-swift07:54
*** chmouel has joined #openstack-swift07:54
*** Diddi has joined #openstack-swift07:54
*** omame has joined #openstack-swift07:54
*** zigo has joined #openstack-swift07:54
*** mhu has joined #openstack-swift07:54
*** russellb has joined #openstack-swift07:54
*** Anticimex has joined #openstack-swift07:54
*** luisbg has joined #openstack-swift07:54
*** ahale has joined #openstack-swift07:54
*** zanc has joined #openstack-swift07:54
*** rpedde has joined #openstack-swift07:54
*** fbo_away has joined #openstack-swift07:54
*** ChanServ has joined #openstack-swift07:54
*** clayg has joined #openstack-swift07:54
*** dickson.freenode.net sets mode: +ov ChanServ clayg07:54
*** nshaikh has quit IRC07:56
*** nshaikh has joined #openstack-swift07:56
*** Dharmit has quit IRC07:56
*** Dharmit has joined #openstack-swift07:56
*** sungju is now known as Guest5928007:56
*** Alex_Gaynor has quit IRC07:56
*** Alex_Gaynor has joined #openstack-swift07:57
*** Guest59280 has quit IRC08:09
*** joeljwright has joined #openstack-swift08:11
*** kun_huang has joined #openstack-swift08:24
*** nacim has joined #openstack-swift08:45
*** nosnos has joined #openstack-swift08:45
*** Dharmit has quit IRC08:46
*** nosnos_ has quit IRC08:46
*** Dharmit has joined #openstack-swift08:49
*** nacim has quit IRC08:49
*** nacim has joined #openstack-swift08:50
*** Dharmit has quit IRC08:54
*** Dharmit has joined #openstack-swift08:54
*** Midnightmyth has quit IRC08:56
*** dmorita has quit IRC09:01
*** basha has joined #openstack-swift09:11
*** basha has quit IRC09:13
*** basha has joined #openstack-swift09:14
*** basha_ has joined #openstack-swift09:18
*** basha has quit IRC09:18
*** basha_ is now known as basha09:18
*** Dharmit has quit IRC09:21
*** foexle has joined #openstack-swift09:24
*** jamieh has joined #openstack-swift09:24
*** haomaiw__ has quit IRC09:28
*** haomaiwang has joined #openstack-swift09:29
*** mkerrin has quit IRC09:30
*** gdrudy has quit IRC09:30
*** tanee is now known as tanee-away09:39
*** fbo_away is now known as fbo09:44
*** gdrudy has joined #openstack-swift09:45
*** mkerrin has joined #openstack-swift09:55
*** Kim-Chi-San has quit IRC10:06
*** tanee-away is now known as tanee10:06
*** otoolee has quit IRC10:19
*** gdrudy has quit IRC10:19
*** saju_m has quit IRC10:19
*** tanee is now known as tanee-away10:20
*** mkollaro has joined #openstack-swift10:20
*** kun_huang has quit IRC10:24
*** kun_huang has joined #openstack-swift10:26
*** PradeepChandani has left #openstack-swift10:41
*** tanee-away is now known as tanee10:56
*** ccorrigan has quit IRC11:14
*** mkollaro has quit IRC11:16
*** Trixboxer has joined #openstack-swift11:29
*** saju_m has joined #openstack-swift11:30
*** psharma has quit IRC11:41
*** _bluev has joined #openstack-swift11:49
*** nosnos has quit IRC11:51
_bluevHi. What do people use for concurrency for the object-replicator ? We have 6 back-end servers, 36 SATA disks in each, and find that with workers 1, replicator concurrency 8 we have 200+ hours for replication.....11:52
*** PradeepChandani has joined #openstack-swift11:53
*** Midnightmyth has joined #openstack-swift11:54
*** basha has quit IRC12:02
_bluevIt looks to me like the 'workers' config value for the object-* daemons has the effect of running multiple *wsgi* processes - is that correct ? If that is true, increasing the workers count in object-server.conf has no impact on swift-object-replicator as that's not a wsgi user12:16
*** PradeepChandani has left #openstack-swift12:16
*** bvandenh has quit IRC12:19
*** ppai has quit IRC12:21
*** mkollaro has joined #openstack-swift12:25
*** kun_huang has quit IRC12:26
*** NM has joined #openstack-swift12:28
*** erlon has joined #openstack-swift12:30
*** PradeepChandani has joined #openstack-swift12:30
*** PradeepChandani has left #openstack-swift12:30
*** PradeepChandani has joined #openstack-swift12:31
*** PradeepChandani has left #openstack-swift12:31
*** kun_huang has joined #openstack-swift12:37
*** nshaikh has quit IRC12:42
*** miurahr has joined #openstack-swift12:47
*** saju_m has quit IRC12:49
*** miurahr has quit IRC12:54
*** kun_huang has quit IRC12:55
*** miurahr has joined #openstack-swift12:58
*** foexle has quit IRC13:02
*** foexle has joined #openstack-swift13:02
*** miurahr has quit IRC13:02
*** miurahr has joined #openstack-swift13:08
*** miurahr has quit IRC13:16
*** Kim-Chi-San has joined #openstack-swift13:31
*** bowdengl has joined #openstack-swift14:05
*** kun_huang has joined #openstack-swift14:12
*** hurricanerix has joined #openstack-swift14:21
*** j_king_ is now known as j_king14:33
*** Diddi has quit IRC14:45
*** dmsimard has joined #openstack-swift14:46
*** Diddi has joined #openstack-swift14:52
creiht_bluev: correct15:03
creihthttp://docs.openstack.org/developer/swift/deployment_guide.html#object-server-configuration15:03
creihtyou can set concurrency under the [object-replicator] section of the conf15:04
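The knob creiht points at lives in object-server.conf; a minimal sketch (the values shown are illustrative, not recommendations for any particular cluster):

```ini
# /etc/swift/object-server.conf (illustrative values only)
[object-replicator]
# number of replication greenthreads working partitions in parallel
concurrency = 8
# seconds to wait between replication passes
run_pause = 30
```

Note this is separate from the `workers` setting in `[DEFAULT]`, which only fans out the wsgi object-server processes, per the discussion above.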
dfgtorgomatic: did gholt answer your question? of course i guess there's other problems with that patch. clayg: thanks for looking at it15:07
_bluevcreiht: we have very slow replication. e.g 3579/3734658 (0.10%) partitions replicated in 1800.01s (1.99/sec, 521h remaining)15:14
_bluevcreiht: we set concurrency for the replicator to 8, 12 and 24 but we don't see any difference.  We think the REPLICATE object-server request could be slowing everything down.15:15
creiht_bluev: has this happened suddenly?15:16
_bluevcreiht: I think it's got worse as we've got busier and fuller to be honest.15:18
creiht_bluev: well there can be a lot of reasons15:18
creihtfirst things to look at are double check your max connections options in rsyncd.conf15:19
*** jeblair is now known as jegerritbot15:19
*** jegerritbot is now known as jeblair15:19
creihtthat limits how many incomming replication rsyncs are allowed15:19
creihtyou also don't want to set the concurrencies too high, as that will just cause everything to overload15:20
creihtlook at the logs for obvious types of errors15:20
creihtlook for things that prevent replication passes from completing15:21
creihtas that can easily balloon how long replication passes can take15:21
_bluevcreiht: we have rsyncd max connections set to 30. We increased from 10 about 4 weeks ago15:21
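The rsyncd limit being discussed is a per-module setting in rsyncd.conf; a sketch of the common Swift object-module shape (paths and values are illustrative):

```ini
# /etc/rsyncd.conf (illustrative)
[object]
path = /srv/node
read only = false
# cap on concurrent incoming replication rsyncs for this module;
# replicators hitting this cap log failed rsyncs and retry later
max connections = 30
lock file = /var/lock/object.lock
```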
gholtdfg: clayg: torgomatic: Updated https://review.openstack.org/#/c/78766/ to show tempurl needs to be before dlo and slo. A tempurl test would be good. dfg and I are working on that.15:21
creiht_bluev: look at the logs to make sure the rsyncs are completing successfully15:21
creihtlook at your network throughput to see if you are using all of that15:21
*** openstackgerrit has joined #openstack-swift15:22
creihtthere is likely something going on somewhere that is preventing replication passes from completing, which causes stuff to back up15:22
*** tanee is now known as tanee-away15:23
creiht_bluev: also make sure your system overall is operating fine15:24
creihtif swift is working around issues, then there will be a lot of extra handoffs going on15:25
creihtwhich can also make replication take a lot longer because your disks start getting extra partitions15:25
_bluevcreiht: thanks for that. rsync logs look good to me from both the client and server perspective.  The main log is full of messages like  "object-replicator Successful rsync of /srv/node/sdo1/objects/7630784/219 at 10.32.37.85::object/sdg1/objects/7630784 (0.827)"15:25
creiht_bluev: a quick thing you can do is calculate how many partitions you should have on disk on average, then see how many you have on some disks for real15:28
creihtmost of these are shots in the dark, since i don't have any insight into your cluster15:30
creiht_bluev: sorry that I can't help more right this instant, but hopefully that will help lead you to the source of the issue15:31
luisbgmorning15:31
_bluevOur statsd data shows the REPLICATE mean is often over a second - we have workers in object-server.conf set to 2, we're thinking we may need to increase that as REPLICATE takes an age sometimes.15:34
*** judd7 has joined #openstack-swift15:36
gholt_bluev: creiht: With an example partition number of 7630784 and only 216 disks it sounds like you have way too high a partition power.15:37
gholtAre you running at 2**23 or something?15:38
*** bowdengl has quit IRC15:38
gholtWe've got tens of thousands of disks in clusters and only run at 2**20.15:38
_bluevgholt: Yes. The original cluster was 6*36 but we added an extra 9 systems each with 32 disks. The replication of the data is whats killing us15:39
gholtWell yeah, if you have it uselessly walking a tons of directory structures that aren't really needed, that'd slow things down.15:40
gholtYou might look at http://rackerlabs.github.io/swift-ppc/15:41
*** tanee-away is now known as tanee15:42
*** piyush has joined #openstack-swift15:44
*** kun_huang has quit IRC15:45
_bluevgholt: I'm in production with loads of customers.  The PP was chosen based on the storage growth our management predicted. I have added capacity as I was becoming full. I started by adding in small weight increments as per best practice.  Replication is very slow - e.g. 200h, 400h etc. Do I have any options or am I just totally screwed?15:45
_bluevgholt: I read all your consistent hashing ring blog posts and the impression given was that a large PP means you burn lots of memory, which was the only guidance I could find. We have 48G per object server.15:46
gholtHmm. Well, I assume you're running at part power of 23? 2**23/(6*36+9*32) = ~16644 partitions per disk?15:47
*** Trixboxer has quit IRC15:48
_bluevgholt: Yes PP set to 2315:49
gholtEr, with 3 replicas that'd be ~49932 parts per disk. In one example you pasted it said 3579/3734658 which is still pretty high for a 36-disk system, but I guess maybe that's because it hasn't re-replicated yet.15:51
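gholt's back-of-the-envelope numbers can be checked directly; a sketch, with the disk counts taken from the conversation above (6 original nodes of 36 disks plus 9 added nodes of 32 disks):

```python
# Reproduce gholt's partitions-per-disk estimate for a part power of 23.
part_power = 23
replicas = 3
disks = 6 * 36 + 9 * 32            # 216 + 288 = 504 disks total

partitions = 2 ** part_power       # 8388608 partitions in the ring
per_disk_one_replica = partitions / disks
per_disk_all_replicas = partitions * replicas / disks

print(int(per_disk_one_replica))   # ~16644, matching gholt's figure
print(int(per_disk_all_replicas))  # ~49932 partition directories per disk
```

At ~50k partition directories per disk, each replication pass spends most of its time just walking the filesystem, which is why the high part power hurts so much here.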
*** judd7_ has joined #openstack-swift15:52
gholtUnfortunately we still don't have a way to re-part-power a running cluster. :/ We want that badly, but a lot of other stuff has to fall into place first (ssync, index.db, etc.) We'll get there, but that doesn't help you now.15:52
*** judd7 has quit IRC15:53
_bluevok, thanks gholt. I think I need to focus on decreasing the amount time the REPLICATE verb takes15:54
gholtI'm thinking, but honestly the only thing I can think to do is to make a new cluster and migrate data from the old to the new. Not an easy endeavor I can imagine.15:54
gholtUpping the replicator concurrency might help to a point, but at a certain point you'd just be thrashing the disk and slowing things down. So you might have to experiment with that setting a bit to find its happiest point.15:56
_bluevgholt: Ok, thanks.15:57
*** tanee is now known as tanee-away16:06
*** russellb is now known as rustlebee16:06
*** csd has joined #openstack-swift16:07
*** otherjon has quit IRC16:08
*** otherjon_ has joined #openstack-swift16:09
*** otherjon_ is now known as otherjon16:09
*** ChanServ sets mode: +v creiht16:14
_bluevgholt: Regarding the low number of partitions-per-second as reported here:   object-replicator 82831/3734658 (2.22%) partitions replicated in 7200.78s (11.50/sec, 88h remaining)   .     If you increase replicator concurrency, should that not increase the partitions-per-second, regardless of what your PP is set to ?16:14
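The replicator's status line can be reconstructed from its own numbers; a sketch using the figures _bluev pasted:

```python
# Derive the rate and ETA shown in the object-replicator progress line.
done, total = 82831, 3734658          # partitions replicated / total
elapsed = 7200.78                     # seconds into this pass

rate = done / elapsed                 # partitions per second
remaining_hours = (total - done) / rate / 3600

print(round(rate, 2))                 # ~11.5 /sec
print(round(remaining_hours))         # ~88 hours remaining
```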
gholt_bluev: Theoretically yes, but disk thrashing will come into play.16:17
gholtI assume from your earlier comments when you add capacity you're doing it with as many disks as you can at pretty low weight changes on each which is good. You want to get as much breadth as you can as fast as you can.16:18
_bluevThanks gholt - we'll collect data and see if the disks are the bottleneck16:20
gholtI'd probably keep the incoming rsync setting the same as the outgoing replicator concurrency, so you always have as many receivers as senders collectively across the cluster. As you get to higher concurrency, and because everything is randomized, you'll run the chance of stampeding a single disk from time to time. But you've got to run that risk to let the cluster get rebalanced again.16:20
gholtYou can also consider shutting off all auditors for a while to give you just that bit of extra i/o.16:21
_bluevgholt: makes sense.16:22
_bluevif only the users would stop PUT'ing new data :-)16:22
gholtHeheh, yeah, you're kind of at the point where "Hey everybody, quit using the cluster just for a while, please?"16:22
*** basha has joined #openstack-swift16:25
gholtI'm sorry for this, it's a fine mess you're in. :( I wish I could code faster so re-partitioning was a real thing already.16:25
*** tanee-away is now known as tanee16:27
*** tanee is now known as tanee-away16:27
_bluevgholt: I think if we tweak the settings down to give us a cycle in 100 hours, that hopefully will be sustainable until we grow the cluster again to ease the per-disk partition count16:28
notmynamehello world16:30
notmynameatlanta design summit session proposals are now open at http://summit.openstack.org16:31
notmynamedetails on the process are at http://lists.openstack.org/pipermail/openstack-dev/2014-March/029319.html16:31
*** mkollaro1 has joined #openstack-swift16:32
*** mkollaro has quit IRC16:32
pelusenotmyname:  these are for the technical tracks like the 'Swift day" in HK where you ran the room the whole day right?16:32
notmynamecorrect16:33
notmynameI don't know how many slots swift will get, but in the past we've normally had about 8-9 slots (one day)16:33
pelusenotmyname:  Cool, do we need to submit something for EC or just ask you to put it on the agenda and we'll figure out who talks about when later?16:33
notmynamepeluse: ya, if you are going to be in atlanta, then I think you should submit at least a placeholder for it :-)16:34
pelusenotmyname:  will do then.  and I assume we're good on policies since that's an icehouse thing right?16:34
creihtpeluse: I wouldn't mind a session going over the policies work16:35
notmynamewell, that's the plan. progress is slow, though ;-/16:35
*** mkollaro2 has joined #openstack-swift16:35
notmynameya, it's still good to talk about it, especially as it relates to the future work16:35
pelusenotmyname:  maybe we can get torgomatic an extra coffee subscription this month :)16:35
*** tanee-away is now known as tanee16:36
*** mkollaro1 has quit IRC16:36
*** tanee is now known as tanee-away16:37
notmynamewow. I just looked at gerrit. jenkins is angry this morning16:38
*** miurahr has joined #openstack-swift16:39
notmynameyup. http://not.mn/all_gate_status.html16:40
*** mkollaro has joined #openstack-swift16:41
*** gyee has joined #openstack-swift16:41
*** mkollaro2 has quit IRC16:42
*** tanee-away is now known as tanee16:42
*** tanee is now known as tanee-away16:43
*** hurricanerix has quit IRC16:47
pelusenotmyname:  OK, EC and policies are both up there as suggested topics16:48
notmynamethanks :-)16:48
pelusenp16:48
peluseFYI I'm just firing up a cluster now that I'm going to install w/o policies and then get some IO going while I roll it over to the EC branch to try and ID upgrade issues that we may not have thought of...16:48
*** basha has quit IRC16:50
*** joeljwright has left #openstack-swift16:50
*** chandan_kumar has quit IRC16:51
notmynamegood16:53
*** tanee-away is now known as tanee16:56
*** tanee is now known as tanee-away16:56
openstackgerritSamuel Merritt proposed a change to openstack/swift: Functional tests for tempurl  https://review.openstack.org/7900816:58
*** tanee-away is now known as tanee17:00
*** tanee is now known as tanee-away17:00
gholttorgomatic: Sweet https://review.openstack.org/#/c/79008/17:01
torgomaticgholt: :)17:01
torgomaticI had those laying around mostly passing a while ago, but the UTF8 stuff was busted17:01
torgomaticand then you mentioned writing some, and I figured I should go polish and submit those things17:02
mandarineHello there17:02
torgomaticnext step is a separate test class for tempurl + SLO together17:02
mandarineI have a terrible problem using swift today: it seems I cannot get consistency17:03
notmynamemandarine: how do you mean?17:03
mandarineI explain : I have 2 proxies and 2 storages. I create a container on proxy1 and I cannot access it on proxy217:04
notmynamewhat error do you get on proxy2?17:04
mandarineaccounts and containers are correctly distributed across the two storage nodes, but I did not activate replication17:04
mandarineA 404 ;)17:05
mandarineIt's like the account is not the same as the one in proxy117:05
notmynamedo you have the same ring files on both machines? check the md5 sums17:05
NMmandarine: can you see the replication between the SN?17:05
mandarineShall I put every device of proxy1 in one zone and every device of proxy2 in another zone, and put "2" as the replica count during the ring creation ??17:06
mandarinenotmyname: I scp'ed them ;)17:06
mandarineNM: Is there replication if I put '1' as the number of replicas during my ring creation ?17:07
*** nacim has quit IRC17:07
mandarine(e.g. : swift-ring-builder container.builder create 11 1 1 )17:07
NMnotmyname: can correct me but I think you have to replicate at least the accounts and containers.17:08
mandarineOh.17:08
mandarineI shall try this immediately, then17:09
mandarineThank you very much in advance :)17:09
notmynameusing one replica should work. you're missing out on a lot of the reason swift exists, though17:09
*** jogo is now known as flashgordon17:10
notmynamemandarine: what do the proxy logs on proxy2 tell you? a 404 on a container could be because either the container isn't found or the account isn't found17:10
mandarinewell, as the account is not replicated, it's just that the container is not in the account17:12
mandarineI agree that swift is a total overkill solution for what I actually intend to do ...17:13
mandarineI shall replicate accounts and containers17:13
notmynameboth proxies should be able to access the container, even if there is a single replica. assuming you have net connectivity etc17:14
mandarineYeah, i've tried this17:14
mandarineAnd accounts and containers are actually created on both storage nodes17:15
mandarineI mean : not replicated. I would describe it like a raid0 : a little bit on storage1 and a little bit on storage217:15
*** tanee-away is now known as tanee17:22
*** tanee is now known as tanee-away17:22
*** tanee-away is now known as tanee17:23
*** tanee is now known as tanee-away17:24
mandarineAlright, I have now remade my rings and I have a failure in the container replication17:28
*** hurricanerix has joined #openstack-swift17:29
*** tanee-away is now known as tanee17:30
*** tanee is now known as tanee-away17:30
*** fbo is now known as fbo_away17:31
*** tanee-away is now known as tanee17:32
*** tanee is now known as tanee-away17:32
*** tanee-away is now known as tanee17:35
*** tanee is now known as tanee-away17:36
*** tanee-away is now known as tanee17:36
*** piyush has quit IRC17:37
*** ChanServ sets mode: +v swifterdarrell17:39
*** shri has joined #openstack-swift17:39
notmynamemanaging different pipelines is a pain: (from my proxy-server.conf on my SAIO) https://gist.github.com/notmyname/733211b5f26473433e5a17:43
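The ordering constraint gholt flagged earlier (tempurl before dlo and slo) makes a proxy pipeline look something like the sketch below; the exact middleware list in any real proxy-server.conf will differ, and the gist linked above is the authoritative example here:

```ini
# /etc/swift/proxy-server.conf (illustrative ordering only)
[pipeline:main]
# tempurl must sit before dlo and slo so that segment subrequests
# made on behalf of a tempURL GET are properly authorized
pipeline = catch_errors healthcheck cache tempurl tempauth dlo slo proxy-logging proxy-server
```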
shrinotmyname: what is mempeek?17:44
openstackgerritAlex Holden proposed a change to openstack/swift: fixed mispelling in function call name - accross to across  https://review.openstack.org/7902617:44
notmynameshri: magic :-)17:44
shriI'm intrigued :-)17:45
notmynameshri: https://github.com/gholt/mempeek makes finding memory leaks (somewhat) easier (thanks gholt!)17:45
shrioh.. cool17:46
mandarineOkay, now that I set the total number of replica to '2', it seems to work but the replication takes about 40secs.17:48
notmynametorgomatic: wait. you _don't_ want zero byte files named after dictionary words? and especially not ones with apostrophes in them? I'm shocked!17:51
*** tanee is now known as tanee-away17:51
torgomaticnotmyname: I know, it's weird, but you'll just have to live with it. :)17:52
openstackgerritAlex Holden proposed a change to openstack/swift: Fixed mispelling in function name - accross to across  https://review.openstack.org/7903117:56
*** piyush has joined #openstack-swift18:02
*** nshaikh has joined #openstack-swift18:03
*** nshaikh has left #openstack-swift18:03
*** piyush1 has joined #openstack-swift18:05
*** piyush has quit IRC18:07
mandarineLogs tell me : account-replicator 0 successes, 2 failures, but it still seems to work :(18:13
*** jamieh has quit IRC18:21
*** piyush has joined #openstack-swift18:42
*** piyush2 has joined #openstack-swift18:44
*** piyush1 has quit IRC18:44
*** piyush has quit IRC18:47
*** occup4nt has quit IRC18:51
*** jamieh has joined #openstack-swift18:51
*** occupant has joined #openstack-swift18:51
*** zaitcev has joined #openstack-swift18:58
*** ChanServ sets mode: +v zaitcev18:58
*** _bluev has quit IRC19:02
*** gholt has quit IRC19:02
*** bada has joined #openstack-swift19:02
*** gholt has joined #openstack-swift19:03
*** ChanServ sets mode: +v gholt19:03
notmynamegholt: bouncer issues?19:03
openstackgerritA change was merged to openstack/swift: Functional tests for tempurl  https://review.openstack.org/7900819:04
gholtNah, security update on my server needing a reboot.19:04
gholtBeen a lot of Ubuntu security updates lately.19:04
notmynamegholt: torgomatic's patch landed. will you or dfg add the funtests to https://review.openstack.org/#/c/78766/ ? or should I tackle that next? :-)19:07
*** haomaiwang has quit IRC19:07
gholtIt's torgomatic's work; I'll let him take the credit for his own sweat.19:07
notmyname:-)19:07
gholtAnd tears, probably.19:08
torgomaticThis is way more fun than storage policy stuff; I can start *and* finish in a single day! :)19:09
* torgomatic has been working on storage policies for a while19:09
*** haomaiwang has joined #openstack-swift19:11
*** mjseger has quit IRC19:18
*** judd7 has joined #openstack-swift19:19
*** _bluev has joined #openstack-swift19:20
*** judd7_ has quit IRC19:20
*** _bluev has quit IRC19:28
*** bagleybter has joined #openstack-swift19:29
openstackgerritSamuel Merritt proposed a change to openstack/swift: Functional tests for TempURL and SLO together  https://review.openstack.org/7906119:32
openstackgerritSamuel Merritt proposed a change to openstack/swift: copy over swift.authorize stuff into subrequests  https://review.openstack.org/7876619:32
torgomatic^^ rebased on top of master for the functest changes, plus dependent patch with new functest that only passes if your pipeline is in the right order19:33
openstackgerritJohn Dickinson proposed a change to openstack/swift: Make PBR based setup completely optional  https://review.openstack.org/7709119:34
notmynametorgomatic: thanks19:34
torgomaticnotmyname: np19:34
notmynameI added some color commentary to the PBR patch in the comments of setup.py ("Why aren't we using PBR for every install?"). I'd love feedback if that's not phrased well or not complete19:35
portantetorgomatic: any further thoughts on the in-process functional tests?19:43
torgomaticportante: I haven't had a chance to look19:43
portantek thanks19:43
*** bsdkurt has quit IRC19:46
notmynamegholt: torgomatic: thanks for the tests. the previous pipeline asplodes with the new tests (that's good). I'll merge the first one and +2 the tests19:49
notmynamesomething to highlight in release notes19:49
torgomaticindeed19:49
gholtOh sweet, was just getting to running all that myself19:49
notmynamewhy do my computer's speakers seem to crash the computer?19:54
torgomaticmagnets?19:54
pelusegremlins?19:54
*** zul has quit IRC19:55
openstackgerritJohn Dickinson proposed a change to openstack/swift: Make PBR based setup completely optional  https://review.openstack.org/7709119:55
torgomaticalways seemed a little odd to me; you have a metric crapton of teeny-tiny transistors operating at fantastic speed, and then the laptop manufacturer is like "you know what this needs? a big moving magnet right next to it"19:56
notmynameah, in this case, it's the USB speakers I just plugged in. old Harman Kardon speakers. never really given me too much trouble before19:57
*** Midnightmyth has quit IRC20:04
*** bsdkurt has joined #openstack-swift20:08
*** jamieh has quit IRC20:11
openstackgerritA change was merged to openstack/swift: Fixed mispelling in function name - accross to across  https://review.openstack.org/7903120:19
*** _bluev has joined #openstack-swift20:20
*** erlon has quit IRC20:22
*** csd has quit IRC20:25
*** _bluev has quit IRC20:31
*** _bluev has joined #openstack-swift20:32
*** judd7_ has joined #openstack-swift20:33
*** judd7 has quit IRC20:34
*** judd7_ has quit IRC20:36
*** _bluev has quit IRC20:41
*** physcx has quit IRC20:46
*** bsdkurt has quit IRC20:47
*** bsdkurt has joined #openstack-swift20:49
* portante wonders what a color visual of the magnetic fields lines around our laptops would look like20:49
*** csd has joined #openstack-swift20:50
*** tdasilva has joined #openstack-swift20:52
*** bagleybter has quit IRC21:09
*** csd has quit IRC21:25
openstackgerritA change was merged to openstack/swift: Speed up failing InternalClient requests  https://review.openstack.org/7807021:27
notmynamelooks like devstack is set up with tempurl so therefore the new test isn't passing21:29
torgomaticnotmyname: yeah, I'm cloning devstack to try to figure out its pipeline ordering stuff21:30
notmynametorgomatic: so far, I think the problem is in the neighborhood of https://github.com/openstack-dev/devstack/blob/master/lib/swift#L7121:31
*** acorwin_ is now known as acorwin21:32
portantetempurl needs to go in the NOAUTH one, right?21:33
notmynamemaybe? but also the test should be skipped if slo isn't enabled, and I don't see any references to that21:35
*** csd has joined #openstack-swift21:36
*** NM has quit IRC21:38
pelusenotmyname:  is there a PPA that I can add to get 1.13?21:46
notmynamepeluse: no, but you can download a tarball21:46
pelusenotmyname:  OK, stupid question.  I need to upgrade from 1.10 to 1.13, there's more to it than just copying over the old install isn't there?21:47
notmynamehttps://launchpad.net/swift/icehouse/1.13.0 or https://github.com/openstack/swift/archive/1.13.0.zip21:47
*** tsg has joined #openstack-swift21:48
notmynamepeluse: handwaving about the different ways you can have it installed? ya, just copying the code over /should/ work21:48
pelusehmm.. OK.  getting an error... one sec21:49
notmynamepeluse: a couple of config things are important to point out, like the gatekeeper middleware. as always, read the release notes ;-)21:49
peluseRTFRN?21:50
peluse:)21:50
peluseBTW the release notes on that webpage are "This is Swift 1.13.0 release."21:53
notmynamepeluse: https://github.com/openstack/swift/blob/master/CHANGELOG21:53
pelusenotmyname:  thanks, was just being a smart a&*21:54
torgomaticMar  7 19:55:02 localhost proxy-server: Pipeline is "catch_errors gatekeeper healthcheck proxy_logging memcache container_sync bulk slo dlo ratelimit crossdomain keystoneclient.middleware.auth_token:filter_factory keystoneauth tempauth tempurl formpost staticweb container_quotas account_quotas proxy_logging swift proxy"22:13
torgomaticgo go gadget logging22:13
torgomatic^^ that's the devstack config BTW22:15
torgomaticyou can probably tell by the presence of the keystone middleware :)22:16
notmynamedo you know where/how that's set?22:17
torgomaticI think it's that stuff in lib/swift that you brought up earlier22:17
notmynameeg a search for "proxy_logging" in the entirety of the devstack repo has 0 results22:18
torgomaticnotmyname: looks like they start with the sample config, and then goof it up^W^W^Wcustomize it22:18
notmynameoh, good. I'm sure that has no side-effects. what could go wrong? tests failing?22:18
torgomaticseriously22:18
notmynameyou know what would be cool? we should have a solver for middleware to dynamically create a pipeline based on the functionality available ;-)22:19
* creiht sighs22:20
notmynameI wonder if anyone has ever written that22:20
notmynamelol22:20
creihtnotmyname: I agree there is a problem with the current state of middleware/wsgi pipeline22:20
creihta magic solver is not the answer though22:20
creihtour pipeline has like 22 pieces of middleware in it22:21
creihtso what's the problem with the devstack pipeline? (sorry missed the context)22:21
torgomaticcreiht: needs to be tempurl [blahblah] slo, but is slo [blahblah] tempurl22:22
creihtahh22:22
creihtyeah22:22
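[The ordering constraint torgomatic describes above — tempurl must appear before slo in the proxy pipeline — can be sketched as a tiny check. This is an illustrative helper, not Swift's actual functional test code; the pipeline strings below are made-up examples.]

```python
# Sketch (not Swift source): verify that the tempurl filter precedes the
# slo filter in a proxy pipeline string, which is the ordering requirement
# discussed in the channel above.
def tempurl_before_slo(pipeline):
    """Return True if 'tempurl' comes before 'slo' (or either is absent)."""
    names = pipeline.split()
    # If either filter is missing, the ordering constraint is vacuous.
    if "tempurl" not in names or "slo" not in names:
        return True
    return names.index("tempurl") < names.index("slo")

broken = "catch_errors cache slo dlo tempauth tempurl proxy-server"
fixed = "catch_errors cache tempurl tempauth slo dlo proxy-server"
print(tempurl_before_slo(broken))  # False
print(tempurl_before_slo(fixed))   # True
```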
notmynameand to be realistic, it's not fair to provide a sample config file and not expect people (ie non-contributors, ie normal deployers) to use sed to automatically configure it22:23
gholtHmm. Didn't we fix the sample conf?22:23
creihttempurl isn't in the sample22:24
gholtAh22:24
torgomaticlet me try adding it and see if that fixes anything22:24
creihtand to be realistic, we can't expect a solver to magically fix things without there being hidden edge cases and side effects22:25
notmynamehmm..I thought we had added everything to the sample pipeline22:25
creiht:)22:25
creiht[pipeline:main]22:25
creihtpipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk slo dlo ratelimit tempauth container-quotas account-quotas proxy-logging proxy-server22:25
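[The sample pipeline creiht pasted above lacks tempurl. Given torgomatic's ordering requirement (tempurl ahead of slo), a corrected sample pipeline might look like the fragment below. This is only a sketch of one valid placement; the position actually chosen in review 79094 may differ.]

```ini
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl slo dlo ratelimit tempauth container-quotas account-quotas proxy-logging proxy-server
```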
notmynamewell that's why we have code reviews, right?22:25
gholtIt's in the saio pipeline, just not the -sample one22:26
creihtyeah I was looking at the sample one22:26
notmynameso after you install devstack (yes, after nearly 5 years on this project I just installed devstack for the first time) where is the code installed?22:26
openstackgerritSamuel Merritt proposed a change to openstack/swift: Add tempurl to the example proxy config's pipeline  https://review.openstack.org/7909422:26
notmynamegholt: ah, ok22:26
creihtI want to make a merge prop that uses the sample configs for the saio so that all stays in sync22:26
torgomatic^^ let's see if that blows up the functests too22:27
creihtnotmyname: lol... so when is the first time you are going to use swift? >;)22:27
torgomaticit even already had the config stanza (the [filter:tempurl] stuff)22:27
notmynamecreiht: one of these days. I hear it's pretty cool22:27
notmynametorgomatic: but that patch doesn't have your test in it. you're just looking for the setup to break?22:28
gholtThere are several things that have conf sections that aren't in the pipeline; mostly because they're optional, but I think we moved away from that at some point.22:28
gholtstaticweb, cname thingies, ..22:28
torgomaticnotmyname: yup22:28
gholtformpost, etc. :)22:29
torgomaticnotmyname: if nothing else, the logging will tell me what the pipeline ends up looking like22:29
torgomaticgholt: yeah, but if devstack is going to use the sample conf as a starting point, we'll have to add the world so we can functionally test it22:29
*** NM has joined #openstack-swift22:29
torgomaticwhich is slightly annoying, but I'm not gonna lose any sleep over it22:30
gholtThat's what I was hinting at. Does your prop add them all I guess.22:30
notmynamecreiht's plan is a good one. build saio from the sample (and put all* into the sample config)22:30
notmyname* for some value of all22:31
creihtyeah I would want them all added, and also on saio so that we can be sure that functests are running on a full pipeline22:31
gholtHeh, okay...22:31
torgomaticgholt: no, just tempurl... this one's really more of an experiment than anything, so I only changed one variable22:31
notmynamecreiht: https://gist.github.com/notmyname/733211b5f26473433e5a the "basic" one (ie everything but cname_lookup and domain_remap)22:31
gholtdomain_remap, cname_lookup, list-endpoints (things I don't intend to maintain. ;)22:32
creihtlol22:32
gholtSorry, bad humor22:32
*** NM has quit IRC22:32
* notmyname suspects gholt will care a lot more about list_endpoints in the future since it's a requirement for zerovm ;-)22:32
gholtI don't even remember what that was tbh22:34
creihtnotmyname: http://summit.openstack.org/cfp/details/1522:34
creiht:)22:34
gholtOh yeah, that ringlookupservice thing.22:35
notmynamecreiht: actually, that would be a good topic. I may edit the body though. no need for too much snark ;-)22:36
creihtnotmyname: it is just for fun22:37
gholtHmm. That summit thing is weird. Wants me to log in just to view the top level domain? And then it does some insecure redirect thing?22:37
creihtheh22:38
gholtMaybe if I log out of everything and then try to view it...22:38
notmynamecreiht: but actually a good topic. the reality is that we're adding more middleware, and there is even more available in the broader ecosystem. and keeping it straight isn't straightforward, especially to those looking to run a storage system and not write one22:39
gholtEh, nope. Apparently that's a only-registered-club-members site. Good 'nuff22:39
creihtnotmyname: also note that I equally poked fun at myself in that description22:40
gholtWe're going to shorten our pipeline here by renaming all the middleware: New pipeline = a b c d e f h i j k l m n o p22:41
gholtDon't ask about g -- it made us angry22:41
creihtnah, this is openstack, each item will have a uuid22:41
notmynamecreiht: no, a service registry for "core" functionality22:42
torgomaticthat's even funnier because you can actually do it :)22:42
creihtnotmyname: we should stop now, somebody might get a bright idea22:42
creihtnotmyname: oh and of course it is a good topic, I came up with it :)22:43
notmynameya, I was thinking the same thing. cause the concept of core features (ie independent of a particular project) being what defines things is actually a topic of conversation in some circles22:43
notmynamecreiht: a literal and figurative big head ;-)22:44
creihtnotmyname: that's Principal Big Head to you ;)22:44
notmynameheh22:44
notmynamenow with a flag!22:45
notmynametorgomatic: so I found the swift repo in the devstack install. I didn't install keystone with swift, and I'm running the .functests from the repo. it's _very_ slow (and 3 failures so far)22:45
torgomaticnotmyname: woo, fun times; this is on master?22:46
notmynametorgomatic: yes? I cloned devstack and ran stack.sh22:46
*** NM has joined #openstack-swift22:48
torgomaticwell, let it finish and let's see what blows up22:49
notmynametorgomatic: yes. it's master22:49
torgomaticer, blew up22:49
torgomaticI tried doing that same thing with devstack once; it took a really long time and tons of tests failed. I don't have that VM any more.22:49
notmynametons of retries based on connection refused errors22:51
torgomaticso are all the swift-*-server even running?22:51
notmynameya.22:52
creihtis it possible it could do with how they set up things with just one replica?22:52
*** hurricanerix has quit IRC22:52
torgomaticand is rabbitmq using the utf8mb4 collation for mysql, or just utf8?22:53
torgomaticwait, no, never mind. don't know what came over me.22:53
notmynametorgomatic: I disabled keystone and mysql both22:54
torgomatic:)22:54
torgomaticmaybe the connections to keystone are what are getting refused22:55
notmynamecould be22:55
notmynamekeystoneauth is in the pipeline, but not authtoken (since keystone isn't installed). but keystone auth doesn't talk to any external services22:56
*** krtaylor has quit IRC22:57
notmynamewell, if the only perception of swift a "normal" openstack contributor has is what they see in devstack, this is horrible. well and truly awful22:57
torgomaticnotmyname: darn... I'm out of ideas, then.22:58
notmynamewho's trying to talk to port 3535722:58
notmynameis that a keystone thing?22:58
torgomaticI think that's a keystone thing22:58
torgomaticit's got two ports22:58
notmynamehmm...I am seeing other keystone errors. so ya, that's it22:59
notmynamebut why?22:59
gholtWell, (and I'm to blame here too), if some of us swift devs actually participated in making devstack the stack it is....23:00
notmynamegholt: by lack of participation?23:00
gholtI'm just trying to get better about only complaining about things I've actually tried to influence. :)23:01
*** NM has quit IRC23:01
notmynameargh! there is an authtoken middleware in the pipeline. I suppose that's just put there by default23:03
*** dmsimard has quit IRC23:04
notmynameanyone know the devstack commands to restart a process? it's not swift-init23:04
*** _bluev has joined #openstack-swift23:10
*** piyush2 has quit IRC23:15
shrinotmyname: just run stack.sh23:15
notmynamethat actually rebuilds the whole environment. I just wanted to restart processes23:16
shrioh.. sorry, don't know about that.23:16
*** zigo has quit IRC23:17
notmynamegot it. I now have functests working on devstack without keystone. (1) take out the keystone stuff in the pipeline (2) actually use the values in the sample test conf file (3) it works!23:20
notmynameanyone know what the default reseller prefix for keystone accounts is?23:28
notmynameah. it's AUTH23:29
notmynamealso, why do we have 2 auth systems in our codebase with the same default reseller prefix? ;-)23:29
openstackgerritAlex Pecoraro proposed a change to openstack/swift: Allow hostname for nodes in Ring  https://review.openstack.org/7454223:30
notmynameoh. *sigh* (to borrow from creiht)23:32
notmynamekeystone's authtoken middleware makes a call to keystone for every request, no matter what the reseller prefix is.23:33
*** booi has joined #openstack-swift23:35
* notmyname sees the scary jungle of "fixing" devstack to work without keystone and just goes to install keystone23:41
openstackgerritA change was merged to openstack/swift: copy over swift.authorize stuff into subrequests  https://review.openstack.org/7876623:51
openstackgerritSamuel Merritt proposed a change to openstack/swift: Functional tests for TempURL and SLO together  https://review.openstack.org/7906123:54
torgomaticalright, now that's on top of adding tempurl to the sample pipeline in the right place; let's see if it works23:54
notmynametorgomatic: I approved them all23:55
torgomaticnotmyname: thanks23:55
notmynameand a task to ensure that the saio matches the sample. or vice versa.23:57
notmynameFWIW, 78 seconds for func tests on devstack without keystone, 92 seconds with keystone23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!