2016-01-11 §
22:24 <hashar> Deleting old references on Zuul-merger for mediawiki/core : /usr/share/python/zuul/bin/python /home/hashar/zuul-clear-refs.py --until 15 /srv/ssd/zuul/git/mediawiki/core [releng]
22:21 <hashar> gallium in /srv/ssd/zuul/git/mediawiki/core$ git gc --prune=all && git remote update --prune [releng]
22:21 <hashar> scandium in /srv/ssd/zuul/git/mediawiki/core$ git gc --prune=all && git remote update --prune [releng]
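A sketch of the pruning sequence the 22:24 and 22:21 entries describe, combined in the order it would run on one merger host. The script path, the --until 15 threshold, and the repository location are taken from the log; running them as a single sequence is an assumption:

    # On a zuul-merger host (gallium or scandium above):
    cd /srv/ssd/zuul/git/mediawiki/core
    # Drop Zuul-created refs older than 15 days (script location per the entry above)
    /usr/share/python/zuul/bin/python /home/hashar/zuul-clear-refs.py --until 15 .
    git gc --prune=all          # immediately discard all unreachable objects
    git remote update --prune   # refresh remotes, drop stale remote-tracking refs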
07:35 <legoktm> deploying https://gerrit.wikimedia.org/r/263319 [releng]
2016-01-07 §
23:16 <legoktm> deleted /mnt/jenkins-workspace/workspace/mediawiki-extensions-qunit/src/extensions/PdfHandler/.git/refs/heads/wmf/1.26wmf16.lock on slave 1013 [releng]
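Stale git lock files like the one above are a recurring chore on slaves; a minimal sketch of a generalized cleanup, assuming the workspace path from the entry and that no git process is still holding the lock (the guard is illustrative, not an existing tool):

    # Remove leftover git lock files in a Jenkins workspace, but only
    # when no git process appears to be running against it.
    WORKSPACE=/mnt/jenkins-workspace/workspace/mediawiki-extensions-qunit
    if ! pgrep -f "git .*$WORKSPACE" >/dev/null; then
        find "$WORKSPACE" -path '*/.git/*' -name '*.lock' -print -delete
    fi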
06:32 <legoktm> deploying https://gerrit.wikimedia.org/r/262868 [releng]
02:24 <legoktm> deploying https://gerrit.wikimedia.org/r/262855 [releng]
01:25 <jzerebecki> reloading zuul for b0a5335..c16368a [releng]
2016-01-06 §
21:13 <thcipriani> kicking integration puppetmaster; a node was weirdly unable to find its definition. [releng]
21:11 <jzerebecki> on scandium: sudo -u zuul rm -rf /srv/ssd/zuul/git/mediawiki/services/mathoid [releng]
21:04 <legoktm> ^ on gallium [releng]
21:04 <legoktm> manually deleted /srv/ssd/zuul/git/mediawiki/services/mathoid to force zuul to re-clone it [releng]
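Deleting the merger's copy is the recovery pattern in the 21:11 and 21:04 entries above: Zuul re-clones lazily the next time a job needs the repository. A sketch, assuming the /srv/ssd/zuul/git layout from the log:

    # Run on each zuul-merger host holding a broken copy; zuul re-clones on demand.
    sudo -u zuul rm -rf /srv/ssd/zuul/git/mediawiki/services/mathoid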
20:17 <hashar> beta: dropped a few more /etc/apt/apt.conf.d/*-proxy files. The webproxy is no longer reachable from labs [releng]
09:44 <hashar> CI/beta: deleting all git tags from /var/lib/git/operations/puppet and doing git repack [releng]
09:39 <hashar> restoring puppet hacks on beta cluster puppetmaster. [releng]
09:35 <hashar> beta/CI: salt -v '*' cmd.run 'rm -v /etc/apt/apt.conf.d/*-proxy' https://phabricator.wikimedia.org/T122953 [releng]
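A natural follow-up to the 09:35 salt run is confirming that no proxy fragment survived on any host; a sketch in the same salt idiom, with the target glob and file pattern taken from the entry:

    # 'none' output per host confirms the apt proxy config is gone everywhere.
    salt -v '*' cmd.run 'ls /etc/apt/apt.conf.d/*-proxy 2>/dev/null || echo none'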
2016-01-05 §
16:54 <hashar_> Removed Elasticsearch from CI slaves https://phabricator.wikimedia.org/T89083 https://gerrit.wikimedia.org/r/#/c/259301/ [releng]
03:45 <Krinkle> integration-slave-trusty-1015: rm -rf /mnt/home/jenkins-deploy/.npm per https://integration.wikimedia.org/ci/job/mediawiki-core-qunit/56577/console [releng]
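The same npm cache corruption recurs in the 2015-12-22 and 2015-12-21 entries below; the remediation is always the same deletion, sketched here with the per-slave path from the entry:

    # On the affected slave, while no npm job is running there:
    rm -rf /mnt/home/jenkins-deploy/.npm
    # npm repopulates the cache on the next install; nothing needs restoring.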
2016-01-04 §
21:06 <hashar> gallium has puppet enabled again [releng]
20:53 <hashar> stopping puppet on gallium and live hacking Zuul configuration for https://phabricator.wikimedia.org/T122656 [releng]
2016-01-02 §
03:17 <yurik> purged varnish caches on deployment-cache-text04 [releng]
2016-01-01 §
22:17 <bd808> No nodepool ci-jessie-* hosts seen in Jenkins interface and rake-jessie jobs backing up [releng]
2015-12-30 §
00:13 <bd808> rake-jessie jobs running again, which will hopefully clear the large zuul backlog [releng]
00:12 <bd808> nodepool restarted by andrewbogott when no ci-jessie-* slaves seen in Jenkins [releng]
2015-12-29 §
21:56 <bd808> Updated zuul with https://gerrit.wikimedia.org/r/#/c/261114/ [releng]
21:51 <bd808> Updated zuul with https://gerrit.wikimedia.org/r/#/c/261163/ [releng]
21:42 <bd808> Updated zuul with https://gerrit.wikimedia.org/r/#/c/261322/ [releng]
21:32 <bd808> Updated zuul with https://gerrit.wikimedia.org/r/#/c/261577/ [releng]
19:53 <bd808> Cherry-picked https://gerrit.wikimedia.org/r/#/c/261476/ to integration-puppetmaster for testing [releng]
19:51 <bd808> Fixed git remote of integration-puppetmaster.integration:/var/lib/git/labs/private to use https instead of the old ssh method [releng]
2015-12-26 §
21:41 <hashar> integration: getting rid of $wgHTTPProxy https://gerrit.wikimedia.org/r/261096 (no longer needed) [releng]
20:34 <hashar> integration: cherry picked puppet patches https://gerrit.wikimedia.org/r/#/c/208024/ (raita role) and https://gerrit.wikimedia.org/r/#/c/204528/ (mysql on tmpfs) [releng]
10:07 <hashar> no clue what is going on and I am traveling. Will look later tonight [releng]
10:07 <hashar> restarted Xvfb on trusty-1011 and rebooted trusty-1015. mediawiki-extensions-qunit randomly fails on some hosts ( https://phabricator.wikimedia.org/T122449 ) :( [releng]
10:00 <hashar> restarted CI puppetmaster [releng]
2015-12-23 §
23:37 <marxarelli> Reloading Zuul to deploy I39b9f292e95363addf8983eec5d08a0af527a163 [releng]
23:15 <marxarelli> Reloading Zuul to deploy I78727ce68b45f3a6305291e6e1e596b62069fc21 [releng]
2015-12-22 §
23:31 <Krinkle> (when npm jobs run) - sudo rm -rf /mnt/home/jenkins-deploy/.npm at integration-slave-trusty-1015 (due to cache corruption) [releng]
21:13 <ostriches> jenkins: kicking gearman connection, nothing is being processed from zuul queue [releng]
17:00 <hashar> If in doubt, restart Jenkins. [releng]
10:06 <hashar> Restarting Jenkins [releng]
09:58 <hashar> Deleted integration-zuul-debian-glue-* files; leftover from an experiment [releng]
09:57 <hashar> deleted cdb-* Jenkins jobs. Repo uses generic jobs [releng]
2015-12-21 §
20:06 <hashar> Downgrading Jenkins plugin from 1.24 to 1.21 [releng]
19:01 <marxarelli> Purging TMPDIR contents on idle integration slaves [releng]
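A sketch of a TMPDIR purge like the 19:01 entry, assuming TMPDIR points at the tmpfs that the 18:28 entry below shows filling up, and that the slave is idle in Jenkins before it runs:

    # Empty TMPDIR without removing the directory itself.
    : "${TMPDIR:?TMPDIR must be set}"
    find "$TMPDIR" -mindepth 1 -maxdepth 1 -exec rm -rf {} +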
18:43 <marxarelli> Updating slave scripts on all integration slaves to deploy I4edf7099acfeb0f06ea2042902bef03097137d6e [releng]
18:31 <legoktm> same thing on 1015 [releng]
18:28 <legoktm> deleted some large npm directories from tmpfs on 1017 due to tmpfs being full [releng]
13:04 <hashar> restarting cxserver on deployment-cxserver03 [releng]
10:48 <hashar> Banned testing-shinken- bot (useless duplicate notifications) [releng]