2016-01-15 §
02:49 <bd808> Trying to fix submodules in deployment-bastion:/srv/mediawiki-staging/php-master/extensions for T123701 [releng]
2016-01-14 §
20:06 <legoktm> deploying https://gerrit.wikimedia.org/r/264122 [releng]
19:32 <legoktm> deploying https://gerrit.wikimedia.org/r/264114 [releng]
19:18 <legoktm> deploying https://gerrit.wikimedia.org/r/264108 [releng]
2016-01-13 §
21:06 <hashar> beta cluster code is up to date again. Got delayed by roughly 4 hours. [releng]
20:55 <hashar> unlocked Jenkins jobs for beta cluster by disabling/reenabling Jenkins Gearman client [releng]
10:15 <hashar> beta: fixed puppet on deployment-elastic06. Was still using cert/hostname without .deployment-prep. Mass update occurring. [releng]
2016-01-12 §
23:30 <legoktm> deploying https://gerrit.wikimedia.org/r/263757 https://gerrit.wikimedia.org/r/263756 [releng]
13:32 <hashar> beta cluster: running /usr/local/sbin/cleanup-pam-config [releng]
13:29 <hashar> integration running /usr/local/sbin/cleanup-pam-config on slaves [releng]
2016-01-11 §
22:24 <hashar> Deleting old references on Zuul-merger for mediawiki/core: /usr/share/python/zuul/bin/python /home/hashar/zuul-clear-refs.py --until 15 /srv/ssd/zuul/git/mediawiki/core [releng]
22:21 <hashar> gallium in /srv/ssd/zuul/git/mediawiki/core$ git gc --prune=all && git remote update --prune [releng]
22:21 <hashar> scandium in /srv/ssd/zuul/git/mediawiki/core$ git gc --prune=all && git remote update --prune [releng]
07:35 <legoktm> deploying https://gerrit.wikimedia.org/r/263319 [releng]
2016-01-07 §
23:16 <legoktm> deleted /mnt/jenkins-workspace/workspace/mediawiki-extensions-qunit/src/extensions/PdfHandler/.git/refs/heads/wmf/1.26wmf16.lock on slave 1013 [releng]
06:32 <legoktm> deploying https://gerrit.wikimedia.org/r/262868 [releng]
02:24 <legoktm> deploying https://gerrit.wikimedia.org/r/262855 [releng]
01:25 <jzerebecki> reloading zuul for b0a5335..c16368a [releng]
2016-01-06 §
21:13 <thcipriani> kicking integration puppetmaster, a node was oddly unable to find its definition. [releng]
21:11 <jzerebecki> on scandium: sudo -u zuul rm -rf /srv/ssd/zuul/git/mediawiki/services/mathoid [releng]
21:04 <legoktm> ^ on gallium [releng]
21:04 <legoktm> manually deleted /srv/ssd/zuul/git/mediawiki/services/mathoid to force zuul to re-clone it [releng]
20:17 <hashar> beta: dropped a few more /etc/apt/apt.conf.d/*-proxy files. webproxy is no longer reachable from labs [releng]
09:44 <hashar> CI/beta: deleting all git tags from /var/lib/git/operations/puppet and doing git repack [releng]
09:39 <hashar> restoring puppet hacks on beta cluster puppetmaster. [releng]
09:35 <hashar> beta/CI: salt -v '*' cmd.run 'rm -v /etc/apt/apt.conf.d/*-proxy' https://phabricator.wikimedia.org/T122953 [releng]
2016-01-05 §
16:54 <hashar_> Removed Elasticsearch from CI slaves https://phabricator.wikimedia.org/T89083 https://gerrit.wikimedia.org/r/#/c/259301/ [releng]
03:45 <Krinkle> integration-slave-trusty-1015: rm -rf /mnt/home/jenkins-deploy/.npm per https://integration.wikimedia.org/ci/job/mediawiki-core-qunit/56577/console [releng]
2016-01-04 §
21:06 <hashar> gallium has puppet enabled again [releng]
20:53 <hashar> stopping puppet on gallium and live hacking Zuul configuration for https://phabricator.wikimedia.org/T122656 [releng]
2016-01-02 §
03:17 <yurik> purged Varnish caches on deployment-cache-text04 [releng]
2016-01-01 §
22:17 <bd808> No nodepool ci-jessie-* hosts seen in Jenkins interface and rake-jessie jobs backing up [releng]
2015-12-30 §
00:13 <bd808> rake-jessie jobs running again, which will hopefully clear the large Zuul backlog [releng]
00:12 <bd808> nodepool restarted by andrewbogott when no ci-jessie-* slaves seen in Jenkins [releng]
2015-12-29 §
21:56 <bd808> Updated zuul with https://gerrit.wikimedia.org/r/#/c/261114/ [releng]
21:51 <bd808> Updated zuul with https://gerrit.wikimedia.org/r/#/c/261163/ [releng]
21:42 <bd808> Updated zuul with https://gerrit.wikimedia.org/r/#/c/261322/ [releng]
21:32 <bd808> Updated zuul with https://gerrit.wikimedia.org/r/#/c/261577/ [releng]
19:53 <bd808> Cherry-picked https://gerrit.wikimedia.org/r/#/c/261476/ to integration-puppetmaster for testing [releng]
19:51 <bd808> Fixed git remote of integration-puppetmaster.integration:/var/lib/git/labs/private to use https instead of old ssh method [releng]
2015-12-26 §
21:41 <hashar> integration: getting rid of $wgHTTPProxy https://gerrit.wikimedia.org/r/261096 (no longer needed) [releng]
20:34 <hashar> integration: cherry picked puppet patches https://gerrit.wikimedia.org/r/#/c/208024/ (raita role) and https://gerrit.wikimedia.org/r/#/c/204528/ (mysql on tmpfs) [releng]
10:07 <hashar> no clue what is going on and I am traveling. Will look later tonight [releng]
10:07 <hashar> restarted Xvfb on trusty-1011 and rebooted trusty-1015. mediawiki-extensions-qunit randomly fails on some hosts ( https://phabricator.wikimedia.org/T122449 ) :( [releng]
10:00 <hashar> restarted CI puppetmaster [releng]
2015-12-23 §
23:37 <marxarelli> Reloading Zuul to deploy I39b9f292e95363addf8983eec5d08a0af527a163 [releng]
23:15 <marxarelli> Reloading Zuul to deploy I78727ce68b45f3a6305291e6e1e596b62069fc21 [releng]
2015-12-22 §
23:31 <Krinkle> (when npm jobs run) - sudo rm -rf /mnt/home/jenkins-deploy/.npm at integration-slave-trusty-1015 (due to cache corruption) [releng]
21:13 <ostriches> jenkins: kicking gearman connection, nothing is being processed from zuul queue [releng]
17:00 <hashar> If in doubt, restart Jenkins. [releng]