Thursday, 2014-05-22

claygdfg: zaitcev: thanks for the pings - i'll look at those changes00:02
zaitcevclayg: I'm still thinking about splitting up, but the first chunk has to contain a ton of trampolines now.00:03
*** mkollaro has quit IRC00:10
*** openstackgerrit has quit IRC00:19
*** openstackgerrit has joined #openstack-swift00:20
*** dmorita has joined #openstack-swift00:28
* notmyname is out for a while00:28
*** matsuhashi has joined #openstack-swift00:29
*** openstackgerrit has quit IRC00:34
*** openstackgerrit has joined #openstack-swift00:34
*** shri has quit IRC00:47
*** openstackgerrit has quit IRC00:49
*** openstackgerrit has joined #openstack-swift00:49
*** csd has quit IRC00:58
portantejogo: I'd be keen to help with that if you're interested01:01
*** zul has quit IRC01:03
*** zul has joined #openstack-swift01:03
jogonotmyname: cool, well I wasn't inherently volunteering to spin that lib out myself01:04
jogoso if you want to do it instead that would be cool01:04
cihhannotmyname: btw i have one more question if u r available: how is the connection between proxy and storage nodes? r they encrypted or completely raw?01:17
*** aurynn has joined #openstack-swift01:29
*** haomaiwang has joined #openstack-swift01:30
*** gyee has quit IRC01:32
*** haomaiwang has quit IRC01:34
*** hipster has joined #openstack-swift01:36
*** zul has quit IRC01:54
*** zul has joined #openstack-swift01:56
*** madhuri has quit IRC01:57
*** saschpe has quit IRC02:00
*** saschpe has joined #openstack-swift02:01
openstackgerritA change was merged to openstack/swift: taking the global reqs that we can  https://review.openstack.org/9466902:11
portantecihhan: unless you have taken steps to explicitly use SSL, I believe proxy -> storage node is not using SSL in your setup02:14
portantejogo: do you have a target first project as a guinea pig02:15
*** zul has quit IRC02:22
jogoportante: I was thinking nova02:23
*** hipster has quit IRC02:25
*** tanee has quit IRC02:25
*** tanee has joined #openstack-swift02:26
*** kenhui has joined #openstack-swift02:43
*** kenhui has quit IRC02:46
*** patchbot has quit IRC02:55
*** patchbot` has joined #openstack-swift02:55
*** matsuhas_ has joined #openstack-swift02:55
*** minnear_ has joined #openstack-swift02:56
*** patchbot` is now known as patchbot02:56
*** nosnos_ has joined #openstack-swift02:57
*** fbo_away has joined #openstack-swift02:57
*** mkerrin1 has joined #openstack-swift02:58
*** ryao_ has joined #openstack-swift03:00
*** wklely has joined #openstack-swift03:02
*** fbo has quit IRC03:03
*** minnear has quit IRC03:03
*** matsuhashi has quit IRC03:03
*** jeblair has quit IRC03:03
*** fbo_away is now known as fbo03:03
*** nosnos has quit IRC03:03
*** mkerrin has quit IRC03:03
*** wer has quit IRC03:03
*** wkelly has quit IRC03:03
*** russell_h has quit IRC03:03
*** chalcedony has quit IRC03:03
*** ryao has quit IRC03:03
*** russell_h has joined #openstack-swift03:03
*** russell_h has quit IRC03:03
*** wer has joined #openstack-swift03:04
*** russell_h has joined #openstack-swift03:04
*** saschpe- has joined #openstack-swift03:05
*** russell_h has quit IRC03:05
*** russell_h has joined #openstack-swift03:05
*** chalcedony has joined #openstack-swift03:06
*** serverascode has quit IRC03:08
*** saschpe has quit IRC03:09
*** gholt has quit IRC03:09
*** jeblair has joined #openstack-swift03:09
*** serverascode has joined #openstack-swift03:10
*** hipster has joined #openstack-swift03:10
*** gholt has joined #openstack-swift03:14
*** ChanServ sets mode: +v gholt03:14
*** omame has quit IRC03:15
*** aurynn has quit IRC03:22
*** mrsnivvel has joined #openstack-swift03:23
cihhanportante, is there a way to use encrypted data transfer or is it too costly?03:49
*** nosnos_ has quit IRC03:52
*** hipster has quit IRC03:55
*** omame has joined #openstack-swift03:57
*** john3213 has joined #openstack-swift04:20
*** john3213 has left #openstack-swift04:25
*** haomaiwang has joined #openstack-swift04:35
*** nosnos has joined #openstack-swift04:35
*** haomaiwang has quit IRC04:40
*** krtaylor has joined #openstack-swift04:41
*** erlon has quit IRC04:49
*** igor_ has joined #openstack-swift04:59
*** igor__ has quit IRC05:02
*** ppai has joined #openstack-swift05:02
*** omame has quit IRC05:05
*** psharma has joined #openstack-swift05:21
hugokuomorning ...05:50
*** zaitcev has quit IRC06:10
*** nshaikh has joined #openstack-swift06:12
*** nmap911 has joined #openstack-swift06:13
nmap911Hi all. When enabling ceilometer monitoring on swift, is there any specific reason it needs access to my ceilometer message queues? I thought ceilometer only polls swift proxies for deltas?06:14
cschwede_nmap911: Hi! No, the ceilometer middleware (https://github.com/openstack/ceilometer/blob/master/ceilometer/objectstore/swift_middleware.py) pushes data from Swift to Ceilometer06:25
nmap911cschwede_ : thanks for the link, I had to install the ceilometer-api package to get the right libs - do I then update the /etc/ceilometer/ceilometer.conf file with the details for the message queue?06:30
cschwede_nmap911: I think so (at least from looking at http://docs.openstack.org/developer/ceilometer/install/manual.html#installing-the-notification-agent). The ceilometer middleware is developed by the ceilometer project, if modifying /etc/ceilometer/ceilometer.conf doesn’t work you might also ask on #openstack-ceilometer06:34
nmap911cool stuff. thanks a lot for your help!06:36
cschwede_nmap911: you’re welcome, glad to help!06:37
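For reference, the icehouse-era ceilometer docs wire this middleware into the proxy roughly as sketched below; the entry point name and pipeline position are assumptions here, so check the docs for your release:

    # /etc/swift/proxy-server.conf (sketch only)
    [pipeline:main]
    pipeline = healthcheck cache authtoken keystoneauth ceilometer proxy-server

    [filter:ceilometer]
    use = egg:ceilometer#swift

The middleware then presumably reads /etc/ceilometer/ceilometer.conf on the proxy node for the message queue details it publishes to.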
*** sandywalsh has quit IRC06:42
*** sandywalsh has joined #openstack-swift06:44
*** saurabh_ has joined #openstack-swift06:48
*** saurabh_ has joined #openstack-swift06:48
*** sandywalsh has quit IRC06:56
*** sandywalsh has joined #openstack-swift06:58
*** ppai has quit IRC07:11
openstackgerritOpenStack Proposal Bot proposed a change to openstack/swift: Updated from global requirements  https://review.openstack.org/8873607:12
psharmacschwede_, nmap911, i'm trying to add the ceilometer middleware in a swift-icehouse dev setup. as the middleware is not part of swift, i am getting errors. so my question is: do i need to install all of the ceilometer stuff on the swift node just for its swift middleware? i am trying to keep swift and ceilometer on different VMs07:22
nmap911which errors are you receiving?07:26
nmap911I had to install 2 different pip libraries and the ceilometer-api package07:26
nmap911psharma:07:27
*** ppai has joined #openstack-swift07:29
cschwede_psharma: yes, because the middleware uses different parts of the ceilometer package (https://github.com/openstack/ceilometer/blob/master/ceilometer/objectstore/swift_middleware.py#L60-L64)07:34
*** mkollaro has joined #openstack-swift07:39
*** ppai has quit IRC07:41
*** omame has joined #openstack-swift07:45
*** nacim has joined #openstack-swift07:50
*** ppai has joined #openstack-swift07:54
*** foexle has joined #openstack-swift07:55
*** mlipchuk has joined #openstack-swift08:03
*** mlipchuk has quit IRC08:08
*** mlipchuk has joined #openstack-swift08:23
*** blazesurfer has joined #openstack-swift08:26
blazesurferHi All08:26
*** jamie_h has joined #openstack-swift08:26
psharmanmap911, can you name the pip libraries08:26
blazesurferHas anyone experienced a container-replicator ERROR rsync failed with 10: ?08:28
hugokuoblazesurfer: Please paste the log on pastebin. Thanks :)08:29
nmap911psharma: sure one sec08:33
blazesurferok just with the error or some lines around as well?08:33
nmap911psharma: pecan==0.4.5 & happybase>=0.5, !=0.708:33
nmap911blazesurfer: some lines around it always helps08:34
psharmaok , how are you installing ceilometer-api from source?08:34
blazesurferOk  http://pastebin.com/aE3YHbq3 hope this is the pastebin you are referring to.08:37
nmap911psharma: on ubuntu 14.04 i do it via the package manager (apt-get install)08:38
psharmai m using fedora19 , can you suggest some steps08:39
nmap911easiest would be to git clone the repo, pip install -r on the requirements.txt file and then run python setup.py install08:41
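Roughly, on a box that already has git, gcc and the python dev headers; the branch name below is an assumption, pick whatever matches your cluster's release:

    git clone https://github.com/openstack/ceilometer.git
    cd ceilometer
    git checkout stable/icehouse      # assumed: match your OpenStack release
    pip install -r requirements.txt   # pulls in pecan, happybase, etc.
    python setup.py install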
blazesurferalso i note that this channel is logged is there a location i can go read over what has been previously discussed? might save some silly questions:)08:42
*** haomaiwa_ has joined #openstack-swift08:42
psharmanmap911, i am getting this http://fpaste.org/104051/00747483/08:43
nmap911psharma: you need the gcc compiler packages - on ubuntu its included in build-essential08:44
nmap911psharma:  yum install gcc08:44
psharmait is there08:44
psharma gcc-4.8.2-7.fc19.x86_6408:45
hugokuoblazesurfer: Could you please show me the container-ring ?08:46
*** haomaiwa_ has quit IRC08:47
nmap911psharma: looks good08:50
blazesurferhugokuo: http://pastebin.com/qmu5H4hQ08:51
hugokuoblazesurfer: you want single replica of container DB ?08:56
blazesurferHugokuo: sorry maybe i have misunderstood.. i have a single replica, yes; i am planning to move to a 3-replica solution and am in the process of planning the migration from this host to a 3 node cluster08:58
blazesurferHugokuo: my aim is actually to get my disks to level out; i have added two new drives to this single host as it ran out of space, and it has not balanced out, and this is the only recurring error i can see so i wanted to fix this to see if it helped08:59
blazesurferam i barking up the wrong tree?09:00
hugokuoblazesurfer: Rsync ERROR code : 10     Error in socket I/O09:03
*** h6w has quit IRC09:04
hugokuoThe container replicator tries to sync the .db to the local container server ...... but gets an ERROR in socket I/O. (am thinking)09:05
hugokuoblazesurfer: need more information about the rsync configuration.09:06
hugokuoSeems all ERROR were on sdb1/sdc1. There're many possibilities now. 1) rsync setting 2) Disk access permission for container-replicator daemon on sdb1/sdc1  3) available disk space on sdb1/sdc1 ...09:10
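For the rsync angle, a minimal /etc/rsyncd.conf sketch along the lines of the swift deployment guide; the uid, path and module names are assumptions, so adjust them to your layout and make sure the rsync daemon is actually enabled:

    uid = swift
    gid = swift
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid

    [container]
    max connections = 4
    path = /srv/node
    read only = false
    lock file = /var/lock/container.lock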
blazesurferok so disk space is an issue on sdb1 but not sdc109:12
hugokuoblazesurfer: well... perhaps the mod of sdc1 mount point is not allowing daemon to operate09:13
blazesurfersorry i am wrong sdb1 and sdc1 is full09:13
blazesurferim trying to level to sdd1 sde109:13
hugokuoblazesurfer: Perhaps to set the weight of sdb1 to 0 will help...09:14
blazesurferok ill try that now.09:14
blazesurferwith me having the 127 address can i change the ip to the machine ip as well so can expand the cluster out?09:14
hugokuoblazesurfer: sure... but there's a limitation on the replica count now. It's not a dynamic value for now....09:15
blazesurferoh ok so i can increase the replica count on this ring? can rebuild the ring with out loss of data ?09:16
hugokuoin other words, it can't be changed in the current swift implementation .... (it will be soon)09:16
blazesurferto migrate from grizzly to icehouse as well, am i better off building a new deployment and sucking data across? this way i can use a better desing09:17
blazesurferdesign. note only 4 tb of data or so in current09:17
*** ppai has quit IRC09:18
hugokuoblazesurfer: With 1 replica, you need to do it carefully. How will you suck data across the old & new deployment?09:20
blazesurferim curious to my options around that. is there a cloud sync utility. or would  i be able to copy the files at the os level?09:26
hugokuoblazesurfer: https://github.com/openstack/swift/blob/master/CHANGELOG#L439-L44109:26
* hugokuo checking the Swift version of Grizzly09:26
hugokuoblazesurfer: how many containers do you have there ?09:28
hugokuoblazesurfer: how many accounts in use within the cluster ?09:28
blazesurferok 5 or 6 accounts09:28
blazesurfershould be many more containers it was  poc that grew before i got back to redesign09:29
hugokuoblazesurfer: 1) There's no account-level sync tool from one cluster to another now. 2) The container-sync feature may help but that's not a good idea in your case. 3) In the case of your data migration, it's mostly a manual administrator operation.09:31
hugokuoblazesurfer: Are the object replica number been set to 0 as well in this cluster ?09:31
blazesurferok so id have to login as each account09:32
blazesurfernot yet09:32
blazesurferthe weight you mean ?09:32
hugokuoblazesurfer: not weight. I mean the replica counts . 262144 partitions, 1.000000 replicas, 1 regions, 1 zones, 4 devices, 0.77 balance09:32
hugokuoIt is 1 for container. Means only single copy of your container DB cross all your devices now.09:33
blazesurferaccount 262144 partitions, 1.000000 replicas, 1 regions, 1 zones, 4 devices, 0.77 balance09:33
hugokuoblazesurfer: k, how about object ?09:33
blazesurfer262144 partitions, 1.000000 replicas, 1 regions, 1 zones, 4 devices, 0.77 balance09:34
hugokuoblazesurfer: :( ....... can I have a question here ?  Why would you like to have only 1 copy of each object by using Swift ???09:35
hugokuoAny specific reason ?09:35
acolescschwede_: did you get chance to look at the change to https://review.openstack.org/#/c/94347/ ?09:35
blazesurferthus building the new cluster with 3 replicas09:35
blazesurferoriginally was built as a pilot and the way we understood when we read about it was we can set to 1 replica (note only running one host and on enterprise grade san)09:36
*** ppai has joined #openstack-swift09:36
blazesurferfound something the other day that said when replicating or leveling storage, the node or replica can be offline at that time, so requests get answered by one of the other replicas. hope that reads right.09:37
blazesurferits being used as a backup target for offsite replication to the cloud (private cloud)09:37
hugokuoblazesurfer: got it. If you want to keep existing data in new cluster. The best way is to join new nodes into this cluster now.09:38
blazesurferok09:38
blazesurfercan i join icehouse nodes into this cluster?09:38
hugokuoblazesurfer: Yes...09:38
blazesurferok then i can migrate to another proxy node after synced to the new nodes i presume09:39
hugokuoblazesurfer: Steps - 1) join new node and devices to the ring 2) Modify the IP of 127.0.0.1 to the cluster-facing IP. 3) Set the replica to 3 for account/container/object rings 4) rebalance rings and distribute to all nodes. 5) Wait until all data been replicated to new nodes.09:41
hugokuoblazesurfer: correct ... Just keep using the same rings...09:42
blazesurferis there an easy way to tell when data has been replicated?09:42
blazesurferid like to remove this node eventually. had an issue with the enterprise san it's on, thus the new build and expand across multiple09:42
hugokuoblazesurfer: yes... swift-recon or observing the log file by grep the following pattern : $> grep object-replicator $log_file | grep remain09:43
blazesurferif i had rings on the existing hosts i have built can i delete them and move the new ones across? ill need to delete data on the drives as well?09:43
blazesurferok so need to make sure rsync is configured as well for it to replicate09:44
blazesurfercool09:44
hugokuoblazesurfer: You must use the same rings on the cluster or the proxy will not be able to find the right partition number of any data.09:47
hugokuoThat's very very critical.....09:47
blazesurferyep so ill clean my new host by removing rings and data from it, and then update the ring files on the existing cluster with correct ip and add the second zone (node) to that ring set, copy them to new host09:48
hugokuoblazesurfer: perfect ...09:48
blazesurferi just need to work out how to change replica09:48
blazesurfercount09:48
hugokuocheck the help of swift-ring-builder09:48
blazesurferin case of losing replica rings can you rebuild them?09:49
hugokuoswift-ring-builder <builder_file> set_replicas <replicas>09:49
blazesurferthank you :)09:49
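Pulling those steps together, a sketch with made-up IPs, device names and weights; the same sequence is repeated for account.builder and object.builder, and the regenerated .ring.gz files are then pushed to every node:

    swift-ring-builder container.builder add r1z2-192.168.1.12:6001/sdb1 100
    swift-ring-builder container.builder set_replicas 3
    swift-ring-builder container.builder rebalance

For re-pointing the existing 127.0.0.1 devices at the cluster-facing IP, swift-ring-builder also has a set_info command; check its built-in help for the exact search/change syntax.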
blazesurferi do ring work, then rebalance, then distribute? then restart services, yes?09:51
hugokuoblazesurfer: never tried to rebuild rings before .... but I have a blurry memory that the answer is "Yes" if you do know the swift_hash_path_suffix = bad54b74-6101-4358-99fc-fe1a551e7fd in /etc/swift.conf09:51
blazesurferyep thats what i had read as well.09:51
blazesurferi think you had to know the original partition sizing as well but that is where i get blurry -- google will be my friend if i need to i guess09:52
hugokuoblazesurfer: I do not guarantee it. :)09:52
blazesurferguarantee which part?09:52
hugokuoblazesurfer: rebuild 100% rings ....09:53
blazesurferi appreciate you taking the time to let me bounce the thought process off you and answering my question09:53
hugokuoIt can be tested easily by producing a fake ring on your laptop :)09:53
blazesurfersorry about the spelling, bad at the best of times but getting sleepy09:53
hugokuoblazesurfer: nite ....09:53
blazesurferyep 8pm so not too late09:54
blazesurferbut haven't slept well with issues happening with storage and swift the last week or so09:54
hugokuoblazesurfer: Don't worry about that. It's not too hard to figure out the problem :)09:55
blazesurferyer swift looks very solid actually once you understand it09:56
hugokuoblazesurfer: Those operations could be done within 2 hours (I guess)09:56
hugokuoblazesurfer: Yes, it is. Do you know what's the best part of Swift?  Simple ......09:56
hugokuoI mean comparing to some other fancy solution. It's much more simple in it's logic.09:57
blazesurferit does appear simple. i sort of understand the rings to some extent. given time it's becoming simpler :) it was a little bit of a different way of thinking but very logical.09:57
blazesurferyer09:57
blazesurferhugokuo: what time is it where you are?09:58
hugokuoblazesurfer: It's about 6pm in Taipei, Taiwan.  A small country under Japan. :)09:59
blazesurfercool so not too much time difference09:59
hugokuoblazesurfer: yup... are you located in Germany?10:00
blazesurferAustralia10:00
hugokuoblazesurfer: aha... good place10:00
blazesurfernot bad i like it :)10:01
blazesurferim using tempauth with swift to keep it simple. you dont know of a gui for calculating space used by accounts do you10:01
hugokuoblazesurfer: I've got to get back to my work here. see you then....10:01
blazesurferHugokuo: no worries thank you for your help have a good night10:02
*** eglynn has joined #openstack-swift10:02
*** mkollaro has quit IRC10:05
*** BAKfr has joined #openstack-swift10:20
*** sagar has joined #openstack-swift10:23
*** sagar is now known as Guest2708710:23
*** sagar_ has joined #openstack-swift10:28
sagar_Hello10:28
sagar_I have been following instructions for multiple server swift installation (http://docs.openstack.org/developer/swift/howto_installmultinode.html)10:29
sagar_But I am unable to create regions (above zone) that has been implemented since swift 1.910:30
sagar_does anybody have any idea?10:30
*** Midnightmyth has joined #openstack-swift10:35
ctenniswhat's not working sagar_?10:38
blazesurfersagar_: what is the error you are getting? i haven't played with regions yet though; they're on my todo list once i fix my cluster10:38
ctennisyou just specify the region in front of the zone in the swift-ring-builder commands10:38
sagar_yes, that's what I had done10:38
*** erlon has joined #openstack-swift10:38
blazesurferswift-ri\10:39
sagar_but the command is not accepted10:39
blazesurfersorry typo wrong screen10:39
ctennisare you using a swift > 1.9 ?10:39
ctenniswhat syntax did you use?10:39
sagar_I want to use swift > 1.9 (1.13 even)10:40
sagar_but when I run swift --version10:40
sagar_it shows me 1.010:40
sagar_I used this syntax10:40
ctennisok, then you need a newer version of swift10:40
sagar_ swift-ring-builder <builder_file> add r<region>z<zone>-<ip>:<port>/<device_name>_<meta> <weight>10:40
ctennisyes, that syntax looks right10:41
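A concrete instance of that syntax, with made-up values:

    swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb1 100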
sagar_but the only syntax listed in man page of swift-ring builder is10:41
sagar_ swift-ring-builder <builder_file> add z<zone>-<ip>:<port>/<device_name>_<meta> <weight>10:41
ctennis"swift" itself is part of a different package, a python utility called "swift-pythonclient"10:41
sagar_I want to install a newer version, I just followed the instructions given in the swift multinode setup doc on Ubuntu 12.0410:42
sagar_but I get swift 1.010:42
sagar_How should I install the newest version?10:42
ctennisrun swift-ring-builder by itself, and the top of the output, what version of that is it?10:43
ctennisand does it reference regions in the help output?10:43
sagar_1.310:44
sagar_no, it doesn't10:44
*** haomaiwang has joined #openstack-swift10:45
ctenniscan you paste the output you're seeing somewhere, it's not obvious to me why it wouldn't work10:49
sagar_output of what?10:49
sagar_swift-ring-builder command?10:49
ctennisWhat we need to find out is what version of the swift ubuntu package you have installed10:49
ctennisyeah10:49
ctennisI'm just not sure of the actual package name offhand10:50
ctennismaybe "swift-proxy"10:50
blazesurferhmm i just ran that command on my icehouse install and my grizzly one; the same swift-ring-builder 1.3 is what returns on the first line10:50
sagar_Is it fine if I paste the output here?10:50
ctennis"dpkg -s swift-proxy"10:50
ctennisuse something like paste.openstack.org or gist.github.com10:51
sagar_okay, here is the output of the swift-ring-builder10:53
sagar_http://paste.openstack.org/show/81136/10:53
sagar_this is for "dpkg -s swift-proxy"10:55
sagar_http://paste.openstack.org/show/81137/10:55
ctennisgot it10:56
ctennisso you have swift 1.4 installed10:56
ctenniswhich makes sense why regions aren't working10:56
ctennisit looks like that's the latest version packaged by Canonical for ubuntu precise10:56
sagar_how should I then install swift's newest version? Is it possible from the source?10:57
hugokuosagar_: upgrade it with new cloudarchive repo....10:57
ctennishugokuo probably knows more than me, you can use ubuntu trusty (14.04) which has the newer version available, or this link (http://docs.openstack.org/developer/swift/development_saio.html) has some info on installing from source vs. packages10:58
hugokuosagar_: https://wiki.ubuntu.com/ServerTeam/CloudArchive  Enable Icehouse10:58
hugokuosagar_: You are on Ubuntu right ???   The CloudArchive is the easiest way :)10:59
sagar_okay, thanks. I will try this.10:59
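On 12.04 that is roughly the following; the prerequisite package names are from memory, so double-check them against the wiki page above:

    sudo apt-get install python-software-properties ubuntu-cloud-keyring
    sudo add-apt-repository cloud-archive:icehouse
    sudo apt-get update && sudo apt-get dist-upgrade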
hugokuoctennis: good morning ... bro ~~~~~~~~11:00
ctennis:)11:00
*** haomaiwang has quit IRC11:01
sagar_Thanks to both of you. Now I have version 1.13 :)11:10
blazesurferQuestion for thought. is there a way to control the number of replicas of data or copies of data at the account level as well? ie if i have a user that wants their data in 3 locations (zones) and a user that doesn't really want 3 but 1 location (zone)11:15
*** zul has joined #openstack-swift11:18
*** miqui has quit IRC11:20
*** matsuhas_ has quit IRC11:27
blazesurferok im gonna get some sleep. thank you hugokuo once again for your assistance.11:32
*** sagar_ has quit IRC11:33
*** praveenkumar has quit IRC11:38
*** praveenkumar has joined #openstack-swift11:47
*** Guest27087 has quit IRC11:48
*** mkollaro has joined #openstack-swift12:01
*** ppai has quit IRC12:02
*** ppai has joined #openstack-swift12:02
*** nacim has quit IRC12:07
*** PradeepChandani has quit IRC12:11
*** acoles is now known as acoles_away12:16
*** Midnightmyth has quit IRC12:17
*** nacim has joined #openstack-swift12:23
*** nshaikh has quit IRC12:26
*** krtaylor has left #openstack-swift12:27
cschwede_acoles: yes, thanks a lot for the unittest - works as expected and coverage marks the change in swiftclient/client.py as tested. thanks!12:28
*** nosnos has quit IRC12:31
*** dmorita has quit IRC12:34
*** acoles_away is now known as acoles12:37
*** lpabon has joined #openstack-swift12:51
*** wklely is now known as wkelly12:55
*** nshaikh has joined #openstack-swift12:56
*** miqui has joined #openstack-swift12:57
*** chuck__ has joined #openstack-swift12:58
*** Rikkol has joined #openstack-swift13:06
*** nacim has quit IRC13:11
*** Rikkol has left #openstack-swift13:11
*** lpabon has quit IRC13:24
*** r-daneel has joined #openstack-swift13:38
*** praveenkumar has quit IRC13:38
*** nacim has joined #openstack-swift13:41
*** gustavo has joined #openstack-swift13:47
*** nshaikh has quit IRC13:54
*** praveenkumar has joined #openstack-swift13:57
*** ppai has quit IRC13:58
*** elambert has quit IRC13:58
*** Trixboxer has joined #openstack-swift14:00
*** haomaiwang has joined #openstack-swift14:02
*** psharma has quit IRC14:02
*** jamie_h has quit IRC14:11
*** jamie_h has joined #openstack-swift14:12
*** byeager has joined #openstack-swift14:14
notmynamegood morning14:17
*** csd has joined #openstack-swift14:21
hugokuomorning14:23
*** csd has quit IRC14:31
byeagerGood Morning!14:33
*** haomaiwang has quit IRC14:34
notmynamebyeager: I read an article a few days ago on how hadoop/MapReduce has had so many problems in HPC. did you see it?14:40
notmynamebyeager: made me think of the zeroVM GTM and what y'all are trying to do14:40
notmynamebyeager: http://glennklockwood.blogspot.com/2014/05/hadoops-uncomfortable-fit-in-hpc.html14:41
byeagerI had not seen that article, thanks!14:42
notmynamebyeager: did you see what's going on with storage policies? I want to make sure you know so you aren't caught unawares14:43
notmynamebyeager: rough draft of what's I'll send out to various mailing lists, but it has the basic plan: https://gist.githubusercontent.com/notmyname/7521817bd1027adc35a7/raw/609164665ec6c9ccdb0ee90a69f045df4081ca0a/gistfile1.txt14:44
byeagernotmyname: Is there new information since Thursday of last week, that is the last real update I had seen.14:44
notmynamebyeager: no. just working on that plan now :-)14:45
byeagerPerfect, I will take a look at the link.14:45
gholtWoah, wait, byeager in channel? (Of course, what am I talking about, I'm seldom really here)14:54
byeagergholt: you have to watch what you say about me now ;)14:56
gholtYeah... Heheh14:56
*** mlipchuk has quit IRC15:03
*** ryao_ has quit IRC15:03
*** ryao_ has joined #openstack-swift15:03
*** ryao_ is now known as ryao15:03
* creiht sighs15:05
*** igor_ has quit IRC15:07
creihtmaybe it would be better for me to just unsubscribe to openstack-dev15:07
*** igor has joined #openstack-swift15:07
notmynamecreiht: what? you don't what to have standardized variable names?15:08
creihtlol15:08
creihtyou guess well :)15:09
*** kevinc_ has joined #openstack-swift15:09
notmynameya, I read that, signed, and decided to ignore it. you should do the same. if someone actually submits a patch, then we'll say "no". if they don't then it doesn't waste anyone's time15:09
notmyname:-)15:09
creihtheh yeah15:10
openstackgerritJohn Dickinson proposed a change to openstack/swift: Add Storage Policy Documentation  https://review.openstack.org/8582415:11
*** igor has quit IRC15:12
notmynamepeluse_: (I know you're on vacation) ^^ I fixed line endings and two tiny formatting typos. I'll go ahead and merge it to feature/ec.15:13
notmynamemy oldest has a kindergarten graduation this morning. I'll be back online later today15:17
gholtJust saw the StackStack party photos. Man, I am starting to look old. ;) Was a fun party though, and after party.15:17
creihthehe15:18
portanteyou are young at heart, gholt15:18
portantethat is all that matters15:18
portantesort of15:18
portante;)15:18
openstackgerritA change was merged to openstack/python-swiftclient: Fix Python3 bugs  https://review.openstack.org/9434715:24
cschwede_notmyname: i think python-swiftclient is now ready for python3 ^^15:26
*** igor_ has joined #openstack-swift15:28
creiht\o/15:30
*** chuck__ has quit IRC15:32
*** igor_ has quit IRC15:32
*** igor_ has joined #openstack-swift15:38
dmsimardwoot.15:38
*** jamie_h has quit IRC15:38
*** igor__ has joined #openstack-swift15:40
*** igor_ has quit IRC15:43
*** jamie_h has joined #openstack-swift15:43
*** igor__ has quit IRC15:44
*** kevinc_ has quit IRC15:45
*** kevinc_ has joined #openstack-swift15:50
*** pberis has joined #openstack-swift16:00
*** BAKfr has quit IRC16:04
*** byeager has quit IRC16:08
*** mwstorer has joined #openstack-swift16:08
openstackgerritAlistair Coles proposed a change to openstack/python-swiftclient: Fix wrong assertions in unit tests  https://review.openstack.org/9492016:09
*** byeager has joined #openstack-swift16:11
*** gyee has joined #openstack-swift16:17
*** kenhui has joined #openstack-swift16:28
*** byeager has quit IRC16:28
*** byeager has joined #openstack-swift16:29
*** zaitcev has joined #openstack-swift16:32
*** ChanServ sets mode: +v zaitcev16:32
clayggholt: you look "distinguished" like a principal engineer - or a "fellow" even.16:34
*** elambert has joined #openstack-swift16:34
clayggholt: creiht: who's byeager?  is that nick?16:34
gholtBlake Yeager, ZeroVM16:34
claygoh oh oh - byeager - hi blake!16:34
claygsomehow I didn't parse it as b yeager :P16:35
claygthe "bye" part bound too hard in my head - oops16:35
gholtHeheh16:35
claygdfg: so i'm testing the xlo auth bug with keystone - and by default it's not exactly the same problem16:36
byeagerclayg: lol16:36
zaitcevnow that you said it I cannot unsee it16:36
claygat least I can't replicate it the same way because the signed tokens can't be invalidated just by restarting memcache - yay tempauth!16:36
claygi guess I'll need to find someway to lower the default ttl on a token and then time my request.... juuuuuuuuuust right16:37
claygmaybe if i stick out my tongue16:37
zaitcevdon't bite it if it works16:37
*** nottrobin is now known as cHilDPROdigY133716:38
*** nacim has quit IRC16:39
*** jergerber has joined #openstack-swift16:39
*** igor__ has joined #openstack-swift16:40
*** cHilDPROdigY1337 is now known as nottrobin16:41
dfgclayg: ok cool- i was just trying it out. but now i can stop :p16:45
*** igor__ has quit IRC16:45
dfgclayg: you don't think that relying on the auth middleware to act like how tempauth/swauth (the gholt school of auth middleware) work is a little risky?16:46
dfgclayg: from looking at keystone code it looks like if there is no token in headers and the delay_auth_decision isn't set (which is would be right?) then it raises a raise InvalidUserToken('Unable to find token in headers')16:54
dfgthat was supposed to say (which it wouldn't be right?)16:55
dfgyou'd think i'd get better at typing but i just get worse...16:55
*** shri has joined #openstack-swift16:57
claygdfg: well, first of all my patch doesn't work with keystone - so the make_pre_authed request thing has that going for it16:58
claygdfg: but that's because of all the crazy stuff that authtoken does to "clean" the environment because all the stuff gets passed through as headers instead of wsgi environ because like... idk reverse proxy or something16:59
clayganyway i just feel like the general solution requires auth change either way so it's reasonable to spend a bit of time looking at what's going to make the most sense in keystone - maybe the special cache flag for caching the get_groups/remote_user looking business will work out - i'm still poking at it17:00
dfgisn't there somebody who actually uses keystone in here? why don't we ask them. cause i sure don't know what i'm talking about...17:00
claygi use keystone more or less17:01
claygwell... more less than more I suppose17:01
dfgclayg: i agree with that last thing you said. that patch i put was mostly like a: this will work for now- here's how auth can adapt to it- hopefully. but there's no reason (except for the whole production thing) that we can't involve them at this point17:02
claygdfg: well did you guys go ahead and push something out or are you holding out for this patch to land?17:02
dfgcause this is a bug that happens to us on a daily basis. sucks when you're hours into downloading a big file and you get cut off because your token expired17:03
dfg(which is how we found this bug)17:03
claygoh man :\17:03
claygis the token ttl still 24 hours over there?17:03
dfgya17:03
claygi guess with enough requests it's gunna happen17:03
claygwell fuuu17:04
*** eglynn has quit IRC17:07
claygdfg: so anyway delay_auth_decision has to be set to true for acl's to work - so even though it's not the default, I think that's how you configure the auth_token middleware for swift (that's what devstack does anyway)17:07
dfgoh- that's a conf thing. i thought it was that env variable17:09
claygthe real problem is the _remove_auth_headers which is stripping out all the stuff you might want to cache about authn17:09
dfgwhoa- ya i see that now. that's not going to help :) but its a perfect place to add my little env variable to not do that if its set :)17:11
dfgbut it probably is against some auth doctrine or something17:12
openstackgerritA change was merged to openstack/swift: Add Storage Policy Documentation  https://review.openstack.org/8582417:15
dfgclayg: its almost like they didn't think about having a request bounce back and forth up and down the pipeline a million times. talk about lack of foresight :p17:17
*** byeager has quit IRC17:18
clayglol17:19
*** kenhui has quit IRC17:19
*** acoles is now known as acoles_away17:22
*** eglynn has joined #openstack-swift17:39
notmyname /back17:41
*** cds has joined #openstack-swift17:41
*** igor_ has joined #openstack-swift17:42
notmynameclayg: I marked the docs patch as approved for the feature/ec branch17:42
notmynameclayg: what else are you needing from me/us for getting the patch chain ready?17:42
*** kevinc_ has quit IRC17:43
*** igor_ has quit IRC17:46
*** byeager has joined #openstack-swift17:49
*** gyee has quit IRC17:53
*** kevinc_ has joined #openstack-swift18:00
claygdfg: ok, so the cache thing doesn't really work if common.wsgi.make_env that's used by make_subrequest doesn't copy all the environ keys you need to cache identity - which for keystone's keys - it does not18:00
dfgah18:01
claygbut we *can* fix that - but I think we're exposing a leak in the abstraction :\18:01
dfgi hate those make_env functions. sam had it before where it would copy everything over and we told him not to do that for some reason. but that would have fixed this18:02
claygdfg: so... fuck it?  swift was a bad idea anyway?18:03
*** byeager has quit IRC18:03
dfghaha18:04
claygdfg: unrelated - are you going to be in CO for the hack-a-thon18:04
dfgno- we're going on vacation.18:05
claygdamnit18:05
dfgya- how is it going to function without me there? :p18:05
claygi often find myself thinking "man... that dfg sure is a badass - I should buy him some whiskey"18:05
dfgnow you're talkin my language18:06
creihtlol18:06
claygbut i'm not going mail it to you - you can't avoid me forever18:06
dfgya- i would have gone. the one in austin worked out pretty well i thought18:06
claygok, sorry i'm getting all down in the weeds on the xlo auth thing - but I'm going to need to take this new insight and stew on it a little longer18:07
claygmaybe I can just give up on my fear of make_pre_authed request18:07
dfgya- its a def pain in ass.18:08
notmynamegholt: do you want me to respond to that email or just FYI until he pops up again?18:11
*** jamie_h has quit IRC18:12
gholtnotmyname: Completely FYI, just so you had a bit to go on if he pops up again.18:16
notmynamegholt: ok, cool.18:16
*** byeager has joined #openstack-swift18:17
portanteare you folks talking about the "Uniform name for logger in projects" email?18:17
notmynameportante: no. someone had been emailing gholt about a deployment question, and he bcc'd me on it18:17
*** byeager has quit IRC18:18
portanteah, k18:18
notmynameportante: we alread gripped about that other email this morning :-)18:18
portanteah, I missed it!18:18
notmynameportante: IMO it's a distraction and should be ignored. I don't think you (or anyone else contributing to swift) should be spending any time on it18:19
*** byeager has joined #openstack-swift18:19
portantesure18:19
portanteI'll spend time on it if you ask! :)18:20
portantethat would be nice easy thoughtless work18:20
notmynameportante: you need to be spending time on the thoughtful work that is actually useful to deployers and users :-)18:21
portantejust tell me what to do then!18:21
portantebesides jump in a lake18:22
*** kevinc_ has quit IRC18:28
*** byeager_ has joined #openstack-swift18:32
*** byeager has quit IRC18:33
notmynameportante: if you're looking for stuff beyond reviews, there were a few interesting ideas out of the summit: affinity on replication (and container listing updates), "parent id" on logs, /healthcheck?deep=true, swift-recon ring validator18:33
notmynameall more important than `s/logger/LOG/`18:34
*** gustavo has quit IRC18:35
claygdfg: the rabbit hole just keeps getting deeper, some of the stuff that it seems a pre-authn'd request would need to cache is actually seemingly not being preserved in the wsgi environ outside of headers (which always get stripped passing through the keystoneclient.authtoken thing)18:35
claygchmouel: acoles_away: why on *earth* doesn't _integral_keystone_identity just return environ['keystone.identity']18:38
claygoh god, i guess that key is missing user_id because... "backwards compat"?18:40
*** eglynn has quit IRC18:40
*** igor_ has joined #openstack-swift18:42
zaitcevportante: I'm drowning in TODOs here too. 1) Delete the current /info auth and apply normal auth while we still can 2) bz#1083039 - double-logging, 3) pick https://review.openstack.org/77812 from Clay, 4) PyECLib packaging in Fedora, 5) do something about xattr.so and xattr>=4.0: create a fake egg (re 1020449)18:46
*** igor_ has quit IRC18:47
zaitcevportante: Also... 6) FIXME in Swift - left out by Portante [-- that one apparently inspired by swift/obj/mem_server.py]18:48
zaitcevtest/unit/obj/test_diskfile.py:                # FIXME - yes, this an icky way to get code coverage ... worth18:48
zaitcevswift/common/middleware/x_profile/profile_model.py:                    # FIXME: eventlet profiler don't provide full list of18:48
dfgclayg: what a drag. i'll try to talk to some keystone devs at RAX about it.19:00
dfgwe haven't irritated them recently with an openstack-dev email thread have we? ok good.19:01
*** lpabon has joined #openstack-swift19:03
portantenotmyname: do you have those captured in a wiki page or something?19:08
notmynameportante: gleaned from the ehterpads last week. notes in my own evernote. briefly talked about in yesterday's meeting19:08
portantezaitcev: I feel your pain, what is #6 about?19:09
notmynameportante: I was hoping to get some of them recorded as specs when that repo gets all set up19:09
zaitcevportante: I just saw some left over in some patches19:09
portantezaitcev: if you worked for the NFL, everything would be fine19:09
Dieterbehey notmyname , have you seen https://vimeo.com/95076197 yet? after about 11min i show a bunch of examples of generating swift metrics dashboards dynamically19:13
notmynameDieterbe: I have not.19:13
*** kevinc___ has joined #openstack-swift19:14
notmynamethanks for the link19:14
notmynameDieterbe: are you on twitter? (ie for when I tweet this)19:15
Dieterbenotmyname: https://twitter.com/Dieter_be19:18
notmynameDieterbe: thanks19:18
notmynameDieterbe: did you really just say "metrics 2.0"? ;-)19:19
*** eglynn has joined #openstack-swift19:20
notmynameoh, it's actually a thing http://metrics20.org19:22
Dieterbeyeah it's actually what the presentation is about19:23
Dieterbeand then i demo some examples of how to leverage it, but a lot of it is swift related, so that's why i shared it with you19:23
*** eglynn has quit IRC19:32
notmynameDieterbe: pretty cool. thanks for sharing19:34
*** serverascode has quit IRC19:35
notmynameDieterbe: is this (metrics 2.0, ie structured metrics) something that we should be looking at in swift? would this be a new adaptor that emits these sort of metrics natively?19:36
*** serverascode has joined #openstack-swift19:37
Dieterbenotmyname: well, right now, I'm not aware of anyone/anything else adopting it.  so sure you could try, but a bunch of people might consider it too exotic at this point.19:38
Dieterbenotmyname: basically it's still statsd metrics, but in a different format19:39
Dieterbea format that might look weird to a bunch of people19:39
Dieterbealso, there's https://github.com/vimeo/graph-explorer/blob/master/graph_explorer/structured_metrics/plugins/openstack_swift.py which upgrades some of the existing statsd metrics to their 2.0 counterpart19:40
*** eglynn has joined #openstack-swift19:43
*** igor_ has joined #openstack-swift19:43
openstackgerritgholt proposed a change to openstack/swift: New log_max_line_length option.  https://review.openstack.org/9499119:43
gholtcreiht: notmyname: https://review.openstack.org/#/c/94991/19:45
gholt^ That's the bug that hits us in at least one of our production clusters, silently making some object-replicators go brain dead.19:45
*** eglynn has quit IRC19:47
*** igor_ has quit IRC19:47
creihtgholt: cool, I'll take a look19:49
notmynamegholt: lgtm, but needs docs19:51
notmynameDieterbe: was the question at the end of that talk about ceilometer stuff?19:52
notmynameDieterbe: from the video I got "something, something, openstack maybe does this"19:53
*** gyee has joined #openstack-swift19:55
Dieterbenotmyname: ah yes, someone referred to some kind of metric naming specification in openstack/ceilometer19:55
Dieterbenotmyname: but i looked for it, and couldn't really find anything similar to metrics 2.019:55
notmynameDieterbe: ok. and "metrics 2.0" seems a little more than just a naming specification19:55
notmynameie it's more of a structured data packed rather than just a key/value pair with an interesting name19:56
notmynameright?19:56
notmyname*packet19:56
gholtnotmyname: Oh yeah, forgot docs. Can I just put it at the end of the deployment_guide.rst under the Logging Considerations section? Putting it everywhere it could be is getting a bit cumbersome, both to keep up to date and to read.19:56
notmynamegholt: I'd like it at least to be on the deployment guide and on the logs page.19:57
gholtThere's a logs page? Heh19:58
Dieterbenotmyname: it's 2 sets of key-value pairs. 1 set to identify the metric, the other is metadata that can change without changing the metric identity19:58
notmynamegholt: ya, it's quite nice. http://docs.openstack.org/developer/swift/logs.html19:58
notmynameDieterbe: how are they tied together then?19:58
notmynamegholt: seems no worse to keep up to date than the default log_level setting that's "everywhere" :-)20:00
*** kevinc___ has quit IRC20:00
gholtYes, I believe that's my complaint. :)20:01
*** byeager_ has quit IRC20:01
gholtBut either way, no biggie. There's stuff that's already missing, but I guess that doesn't make having more missing right.20:01
notmynamegholt: I hear you. I think fifieldt's complaint from yesterday (or at the summit) that we aren't keeping our config options up to date in our docs is still ringing in my ears, though20:03
*** erlon has quit IRC20:04
notmynamegholt: that is, some people say we should use oslo config because that will automatically keep our docs up to date20:04
Dieterbenotmyname: in the wire protocol, it can be as simple as "service=openstack_swift what=load_time unit=ms  env=prod 123 1234567890"20:04
gholtYeah, it might be because it's hard to do right now or something. It'd be nice if we can figure out some way to doc it in one place and have that propagate to all the places.20:04
*** erlon has joined #openstack-swift20:04
gholtIf that's oslo config, fine, as long as it doesn't mean adding 500 useless dependencies along with it.20:04
notmynamegholt: https://imgflip.com/i/90ddd20:04
notmynamegholt: ya. I'm all for writing it down in one place20:05
creihtlol20:05
gholtApparently we have something called log_custom_handlers though I have no idea what those do.20:06
notmynamegholt: pandemicsyn wrote it for sentry or something20:06
dfgclayg: put a comment on that ticket- https://review.openstack.org/#/c/92165/ what do you think?20:07
notmynameDieterbe: hmm..interesting. in that case it would simply be an update to the log handler that's currently writing statsd messages. but we'd also have to update the calls to include the units and other metadata. or register the metrics somewhere? that sounds complicated20:07
notmynameDieterbe: and at a higher level, now I know about http://monitorama.com. And next year pandemicsyn and swifterdarrell should both go :-)20:08
notmynameDieterbe: have you looked at any of the stuff datadog is doing around monitoring?20:08
gholtnotmyname: There's really no place for config values in that logs.rst -- where would you like me to stick it? :)20:09
*** cds has quit IRC20:10
Dieterbenotmyname: oh sure20:11
notmynamegholt: heh. IMO that page is for people who need to parse swift logs. so maybe a sentence/paragraph like "Long log lines may be truncated if `log_max_line_length` is set. A truncated line keeps the first and last parts, with dots in the middle"20:11
Dieterbenotmyname: what about them?20:11
gholtnotmyname: Oh okay, cool20:11
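For anyone reading along, the option gholt's patch adds goes in the servers' [DEFAULT] config sections, something like the sketch below; 8192 is only an arbitrary example chosen to sit under a typical syslog/UDP line limit, not a recommendation:

    [DEFAULT]
    log_max_line_length = 8192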
dfgclayg: wait a sec- do the SLOs need to be to the left of auth when building the manifest? no, right? i'll try some stuff out. maybe this isn't so bad20:12
notmynameDieterbe: I saw them at the Red Hat summit. seemed to have an interesting tool set (especially as someone working at a company that shows a lot of metrics). if you had seen them, Im curious about your take on their product20:12
*** byeager has joined #openstack-swift20:13
Dieterbenotmyname: yeah they have a really cool product/service, but i'm fundamentally against proprietary hosted monitoring20:13
notmynameDieterbe: yes I know :-)20:13
creihthah20:13
notmynameDieterbe: since you are online and chatty.... ;-)20:14
notmynameDieterbe: did you ever get more hardware to take care of your networking problem in your swift cluster?20:14
pandemicsynnotmyname: https://twitter.com/obfuscurity/status/46603606040197939220:15
Dieterbenotmyname: i can't count anymore how many times i've chatted with alexis :) i like em and i hope their business will go great, but i'm all about open source :)20:15
pandemicsynprobably one of the best talks i've seen20:15
Dieterbenotmyname: yeah, it's all 10Gbps now, and we run a proxy server on every storage node20:16
notmynameDieterbe: ah, cool20:22
notmyname*sigh* haters gonna hate20:22
pandemicsyn"true dat!"20:22
notmyname(*grumble*grumble*twitter)20:22
notmynamepandemicsyn: people are wrong on the Internet, and I don't know how to fix that!20:22
pandemicsyni do, but that story ends with people calling me Emperor PandemicSyn20:24
*** igor_ has joined #openstack-swift20:25
*** miqui has quit IRC20:26
*** igor__ has joined #openstack-swift20:27
*** Trixboxer has quit IRC20:28
*** kenhui has joined #openstack-swift20:28
*** foexle has quit IRC20:29
*** igor_ has quit IRC20:30
*** igor__ has quit IRC20:32
openstackgerritgholt proposed a change to openstack/swift: New log_max_line_length option.  https://review.openstack.org/9499120:36
gholtcreiht: notmyname: ^ :)20:36
* clayg recently upped his $MaxMessageSize in rsyslog.conf20:38
gholtHeheh, UDP is what got us. Amazingly. We all thought the kernel would auto-frag, but apparently not.20:40
portantegholt: < 7?20:46
gholtHeh, do I need to document that? ;)    "1 ... 7" :)20:47
portantewhy bother?20:47
*** eglynn has joined #openstack-swift20:47
gholtThat was really just joking, btw. I'm completely fine with the doc additions. :D20:47
portantewhy not like < 80 or something20:47
portante;)20:48
gholtI don't really have an answer for that. It's what I picked and I wrote the code, heheh.20:48
portanteokay, sure20:48
notmynamegholt: thanks. looks good. let me actually run tests, then I'll +220:58
*** blazesurfer has quit IRC21:01
*** lpabon has quit IRC21:10
*** kenhui has quit IRC21:10
mkollarois it possible to set the number of handoff nodes swift should use?21:16
notmynamemkollaro: there isn't a limit to the number swift uses. it can use up to all the other drives in the cluster. but you can configure how many it looks at in response to a request21:17
notmynamemkollaro: by default it looks at 2*<replica count> (ie 6 for 3 replicas)21:17
mkollaronotmyname: oh, cool21:17
mkollaronotmyname: I should probably not ask for any documentation, right? :D21:18
notmynamemkollaro: looking for it now21:18
notmynamemkollaro: it's in the sample config https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L163 and the docs http://docs.openstack.org/developer/swift/deployment_guide.html#proxy-server-configuration21:19
notmynameta da!21:19
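The knob in question, spelled out; the value shown is just the documented default written explicitly:

    [app:proxy-server]
    use = egg:swift#proxy
    # how many nodes (primaries plus handoffs) the proxy will try per request;
    # accepts an integer or an expression in terms of the replica count
    request_node_count = 2 * replicas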
dfgclayg: you there?21:19
*** kevinc___ has joined #openstack-swift21:21
mkollaronotmyname: awesome21:23
mkollarothanks a lot21:23
mkollarothe state of swift documentation is quite good21:24
notmynamemkollaro: thanks. we've worked hard at it (and still try to keep it mostly up to date)21:24
claygdfg: no21:24
clayggholt: I don't get why 7 is magic for doing the either split or truncate dance?21:24
claygwho would run with < 7 anyway?21:25
mkollaronotmyname: :)21:25
notmynameclayg: who would run for less than about 50?21:25
claygnotmyname: +121:25
claygor idk, 1500/mtu :P21:25
mkollaronotmyname: so, it should be possible to damage disks one by one until there is only one left and it would hold all the data, right? assuming it has enough capacity21:25
notmyname:-)21:25
dfgclayg: i think- that we just move slo and dlo to the right of auth with no code change and there's no bug.21:26
claygdfg: so don't do authz for subrequests?21:26
notmynamemkollaro: correct. (but you'd need at least 2 disks to still be able to write new data so that you have quorum in a 3-replica cluster)21:26
dfgclayg: it's like i decided that slo has to be to the left of auth a long time ago and it actually doesn't21:26
dfgclayg: the authorize call does the authz, so it still gets called21:27
claygdfg: i don't think that's true21:27
dfgi just tried it21:27
notmynamemkollaro: back in Hong Kong, clayg showed a demo of a 6 node cluster that had zero client impact when he had 4 of the 6 drives offline21:27
claygdfg: proxy.server pops that bad boy out every request - tempauth adds it back in but keystone didn't21:27
claygdfg: what did you try exactly?21:28
*** igor_ has joined #openstack-swift21:28
mkollaronotmyname: so writing new data wouldn't work, but the old ones would be still be on that last single node with only a single replica, is that correct?21:28
dfgon master, i moved slo and dlo right to the left of the proxy-server in the pipeline. then everything works21:28
notmynamemkollaro: correct21:28
notmynamemkollaro: although I'd suggest you try not to let it get to that point :-)21:28
dfgclayg: ^^ with the memcache restart and building an slo where a segment is in a container i don't have access to, i get a 403 on building21:29
claygnotmyname: you might have to ptl me on https://review.openstack.org/#/c/63327/ - or maybe we can try and get consensus at the next swift meeting - but I'm apparently holding that one up21:29
mkollaronotmyname: I wrote some tests for this stuff https://github.com/mkollaro/destroystack/blob/master/destroystack/test_swift_small_setup.py21:29
mkollaronotmyname: but somehow my test for this case is failing...but I think it's rather something in my code21:29
notmynameclayg: ah, thanks for pointing it out. I hadn't looked at it recently21:30
claygportante: also I find it surprising that you of all people would be so laissez-faire about changing swob's public interface, what with your investment in other public interfaces inside of swift?21:30
gholtclayg: No biggie on the 7 thing if you guys want to change it. I just arbitrarily picked it.21:31
dfgclayg: and trying to read a SLO where a segment is in a container you no longer have read access to. it all works just fine.21:31
portanteclayg: man!21:31
portanteyou are right, but ... what are we talking about?21:32
portante;)21:32
claygas in you can't read the slo if the container removes and acl?!  how could that...21:32
dfgclayg: yes21:32
*** igor_ has quit IRC21:32
notmynamegholt: clayg: I care soooo much about the 7 in that patch...that I went ahead and approved it as-is21:32
claygportante: the return value of the range thing for swob that torgomatic was trying to clean up or whatever... https://review.openstack.org/#/c/6332721:32
claygheh21:32
portanteclayg: thanks, checking21:33
portantehmm21:34
portantecaught21:34
portanteduplicity21:34
portantetwo-faced21:34
portanteharvey dent21:34
claygwe're all entitled to be a hypocrite whenever it suits us - don't let anyone make you feel bad about it21:34
portanteI'll just flip a coin21:35
dfgclayg: it goes through tempauth, sets up the user shit, then the authorize uses that user for each of the containers.21:35
clayglol21:35
portante;)21:35
dfgpretty sure keystone would do same thing21:35
claygdfg: but what sets the authorize callback on the subsequent requests?21:35
portanteclayg, torgomatic: arguably a new method could be created on swob that does the sane thing, the old method behavior preserved, and then removed from the core swift code in favor of the new one21:36
dfgclayg: swift.authorize is copied over in make_subrequest21:36
claygand it somehow managed to do that before it's called - i think i see now21:37
claygi love it!21:37
dfgya21:37
dfgwait what?21:37
claygdfg: I can't explain love... it's just something you feel.21:38
dfg...21:38
dfgare you sure you can't mail that whiskey?21:38
creihtlol21:38
* clayg moves on21:38
claygdfg: so you think we can close the bug with a doc fix for the pipeline re-order?21:39
dfgclayg: yes21:39
claygdfg: great!21:40
dfgya.21:40
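In other words the doc fix boils down to a pipeline shaped like the sketch below; this is trimmed down, with tempauth standing in for whatever auth middleware is actually deployed:

    [pipeline:main]
    pipeline = catch_errors healthcheck proxy-logging cache tempauth slo dlo proxy-logging proxy-server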
dfgi'm not going to go into why this is really f-ing annoying. but it is...21:41
dfgoh well.21:41
portantewhat a dance21:41
portantemasterful21:41
portantegotta go21:42
dfg?21:42
claygdfg: why not?  sometimes it helps to vent.  also I think the pipeline approach is gunna work great21:47
claygoh hrmm.... make_env may still need to be updated to copy over keystone.identity21:48
dfgclayg: that sounds likely.21:49
dfgclayg: i just had to jump through some hoops to get all this stuff working with sos and it turns out that i was just wrong about slo having to be before auth. anyway- i'll try it out to be sure. but I think thats right.21:50
dfgso its not a huge deal either way.21:50
clayghrmm... that doesn't sound like you're all that annoyed - I think you're repressing21:51
dfg:)21:52
gholtdepressing, maybe21:58
dfgits alright- at least i didn't have to sit through a karaoke competition today :p22:02
claygacoles_away: chmouel: dfg: I needed all of this https://gist.github.com/clayg/8378436d6772ac6fae3c to fix https://bugs.launchpad.net/swift/+bug/131513322:03
dfgclayg: lgtm22:04
claygdfg: the other fix might be to fix make_env to be more aggressive about copying over the whole env22:05
claygdfg: if it wasn't for the fact that keystone uses 1000 headers to build up keystone.identity we could avoid the change to keystone_auth just by copying over all the headers for the subrequests22:06
claygi mean we could whitelist all of those keystone headers that need to be in the env for the subsequent call in authorize to work as is - but that seems stupid22:07
dfgclayg: ya- thats how torgomatic wrote it but me and gholt were talking about it and how we like knowing what we were copying. but I'm kinda thinking that copying over the whole thing may have been better22:07
claygso adding full-on re-use of cached keystone.identity seems like the other option if we need make_env to stay as a whitelist22:07
claygok great, so we can just fix the pipeline and that fixes it everywhere as long as we change either keystone or common.wsgi22:08
dfgclayg: ya i think so22:09
claygweeee22:11
*** igor_ has joined #openstack-swift22:28
*** jergerber has quit IRC22:32
*** igor_ has quit IRC22:33
*** kevinc___ has quit IRC22:38
*** kevinc___ has joined #openstack-swift22:43
*** byeager has quit IRC22:53
*** ZBhatti_ has joined #openstack-swift23:02
*** mkollaro has quit IRC23:09
*** openstackgerrit has quit IRC23:19
*** openstackgerrit has joined #openstack-swift23:20
*** igor_ has joined #openstack-swift23:29
*** igor_ has quit IRC23:34
*** elambert has quit IRC23:39
*** shakayumi has joined #openstack-swift23:41
openstackgerritA change was merged to openstack/swift: New log_max_line_length option.  https://review.openstack.org/9499123:42
*** shakayumi has quit IRC23:50
*** r-daneel has quit IRC23:50
*** mwstorer has quit IRC23:55
