2016-02-09
15:36 <elukey> disabled puppet on kafka1012, changing temporary kafka retention to purge some extra logs [production]
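On Kafka brokers of this era, a temporary retention change is a per-topic config override; a sketch of the likely commands (topic name, ZooKeeper address, and values are illustrative, not taken from the log):

    # shrink retention so the broker purges old segments
    kafka-topics.sh --zookeeper conf1001:2181 --alter --topic webrequest_text --config retention.ms=3600000
    # once space is reclaimed, drop the override to restore the broker default
    kafka-topics.sh --zookeeper conf1001:2181 --alter --topic webrequest_text --delete-config retention.ms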
15:17 <cmjohnson1> snapshot1002 mistakenly taken offline -- booting now [production]
15:15 <paravoid> upgrading lvs4001/4002 to linux 4.4.0 [production]
15:11 <hashar> mira: /srv/mediawiki-staging/multiversion/checkoutMediaWiki 1.27.0-wmf.13 php-1.27.0-wmf.13 [releng]
15:07 <godog> stop cassandra on restbase1007, cpu/mem upgrade and reimage [production]
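Stopping Cassandra cleanly before a hardware upgrade typically means draining first, so memtables are flushed and the node stops accepting writes before shutdown; a minimal sketch (init-script name assumed for this host):

    nodetool drain          # flush memtables, stop accepting writes
    service cassandra stop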
14:59 <paravoid> upgrading lvs3001/3002 to linux 4.4.0 [production]
14:53 <godog> reboot ms-be1004, xfs hosed [production]
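A "hosed" XFS filesystem generally needs an offline repair around the reboot; the standard sequence, with the device name purely illustrative:

    umount /dev/sdc1            # device is an assumption
    xfs_repair -n /dev/sdc1     # -n: check only, report without modifying
    xfs_repair /dev/sdc1        # actual repair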
14:51 <hashar> Cutting branches 1.27.0-wmf.13 [production]
14:51 <hashar> ./make-wmf-branch -n 1.27.0-wmf.13 -o master [releng]
14:50 <hashar> pooling back integration-slave-precise1001 - 1004. Manually fetched git repos in workspace for mediawiki core php53 [releng]
14:49 <hashar> make-wmf-branch instance: created a local ssh key pair and set the config to use User: hashar [releng]
14:46 <elukey> re-enabled puppet on mc1004.eqiad [production]
14:45 <bblack> resuming cpNNNN rolling kernel reboots [production]
14:41 <_joe_> setting mw1026-1050 as inactive in the appservers pool (T126242) [production]
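At WMF, pool-state changes like this go through conftool; a hedged sketch of the invocation for one of the hosts (tag values and the exact 2016-era CLI syntax are assumptions):

    confctl --tags dc=eqiad,cluster=appserver,service=apache --action set/pooled=inactive mw1026.eqiad.wmnet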
14:13 <hashar> pooling https://integration.wikimedia.org/ci/computer/integration-slave-precise-1012/ MySQL is back... blame puppet [releng]
14:12 <hashar> depooling https://integration.wikimedia.org/ci/computer/integration-slave-precise-1012/ MySQL is gone somehow [releng]
14:04 <hashar> Manually git fetching mediawiki-core in /mnt/jenkins-workspace/workspace/mediawiki-core-php53lint of slaves precise 1001 to 1004 (git on Precise is remarkably slow) [releng]
13:58 <hashar> shutting down jenkins finally, and restarting it [production]
13:51 <hashar> Restarting Jenkins. It cannot manage to add slaves [production]
13:28 <hashar> salt '*trusty*' cmd.run 'update-alternatives --set php /usr/bin/hhvm' [releng]
13:28 <hashar> salt '*precise*' cmd.run 'update-alternatives --set php /usr/bin/php5' [releng]
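The two salt runs above point the php alternative at HHVM on trusty and at PHP 5 on precise; the result can be verified per host with standard tooling:

    update-alternatives --display php   # shows the current target and all registered alternatives
    readlink -f /usr/bin/php            # resolves the /etc/alternatives/php symlink chain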
13:17 <hashar> salt -v --batch=3 '*slave*' cmd.run 'puppet agent -tv' [releng]
13:15 <paravoid> upgrading lvs1001/lvs1007/lvs1002/lvs1008/lvs1003/lvs1009 to 4.4.0 [production]
13:15 <hashar> removing https://gerrit.wikimedia.org/r/#/c/269370/ from CI puppet master [releng]
13:14 <hashar> slaves recurse infinitely running /bin/bash -eu /srv/deployment/integration/slave-scripts/bin/mw-install-mysql.sh, which then loops over /bin/bash /usr/bin/php maintenance/install.php --confpath /mnt/jenkins-workspace/workspace/mediawiki-core-qunit/src --dbtype=mysql --dbserver=127.0.0.1:3306 --dbuser=jenkins_u2 --dbpass=pw_jenkins_u2 --dbname=jenkins_u2_mw --pass testpass TestWiki WikiAdmin https://phabricator.wikimedia.org/T126327 [releng]
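The recursion here is characteristic of a wrapper script that re-invokes itself; a minimal sketch of the failure mode (the wrapper body is an assumption, only the path comes from the log):

    #!/bin/bash -eu
    # /srv/deployment/integration/slave-scripts/bin/php (hypothetical body)
    # if the bare 'php' on PATH resolves back to this very wrapper,
    # every invocation spawns another copy and the loop never terminates
    exec php "$@"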
13:11 <akosiaris> reboot serpens to apply memory increase of 2G [production]
13:07 <paravoid> installing linux 4.4.0 on lvs1001 [production]
13:01 <hashar> Jenkins disabled again :( [production]
12:53 <akosiaris> reboot seaborgium to apply memory increase of 2G [production]
12:47 <hashar> Updated faulty script that caused 'php' to loop infinitely. Jenkins back up. [production]
12:46 <hashar> Mass testing php loop of death: salt -v '*slave*' cmd.run 'timeout 2s /srv/deployment/integration/slave-scripts/bin/php --version' [releng]
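GNU timeout makes this a clean fleet-wide probe: it exits with status 124 when it has to kill the command, so hung wrappers are easy to tell apart from healthy ones:

    timeout 2s /srv/deployment/integration/slave-scripts/bin/php --version; echo $?
    # 124 => the wrapper hung (still looping); 0 => it printed a version and is fixed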
12:40 <hashar> mass rebooting CI slaves from wikitech [releng]
12:39 <hashar> salt -v '*' cmd.run "bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'" [releng]
12:36 <hashar> Jenkins no longer accepts new jobs until the slaves are fixed :/ [production]
12:33 <hashar> all CI slaves looping to death because of a php loop [production]
12:33 <hashar> all slaves dying due to PHP looping [releng]
12:02 <legoktm> re-enabling puppet on all trusty/precise slaves [releng]
11:43 <paravoid> upgrading lvs2001, lvs2002, lvs2003 to kernel 4.4.0 [production]
11:36 <paravoid> reverting lvs2005 to 3.19 and rebooting, test is over and was successful [production]
11:20 <legoktm> cherry-picked https://gerrit.wikimedia.org/r/#/c/269370/ on integration-puppetmaster [releng]
11:20 <legoktm> enabling puppet just on integration-slave-trusty-1012 [releng]
11:19 <paravoid> stopping pybal on lvs2002 [production]
11:13 <legoktm> disabling puppet on all *(trusty|precise)* slaves [releng]
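Fleet-wide puppet disabling is usually done through salt with the agent's own lock; a sketch matching the globs in the log (the reason string is illustrative):

    salt -v '*trusty*' cmd.run 'puppet agent --disable "testing 269370"'
    salt -v '*precise*' cmd.run 'puppet agent --disable "testing 269370"'
    # later, per the 12:02 entry: puppet agent --enable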
11:05 <paravoid> installing linux-image-4.4.0 on lvs2005 and rebooting for testing [production]
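The per-host step behind these kernel upgrades is a plain package install plus reboot; a sketch (the exact Debian package name for the 4.4.0 build is an assumption):

    apt-get install linux-image-4.4.0-1-amd64
    reboot
    # on LVS hosts, pybal is stopped first (as at 11:19) so traffic fails over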
10:53 <apergos> salt minions on labs instances that respond to labcontrol1001 will be coming back up over the next 1/2 hour as puppet runs (salt master key fixes) [production]
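Salt master key fixes normally mean deleting stale minion keys on the master and re-accepting them as the minions reconnect; a sketch (the minion ID is illustrative):

    salt-key -L                                # list accepted and pending keys
    salt-key -d 'some-instance.eqiad.wmflabs'  # drop the stale key
    salt-key -a 'some-instance.eqiad.wmflabs'  # accept the freshly resubmitted one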
10:45 <elukey> disabled puppet, redis and memcached on mc1004 for jessie migration [production]
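The usual prep for this kind of migration is to lock out puppet first so it cannot restart the services being taken down; a sketch (Debian service names assumed):

    puppet agent --disable 'jessie migration'
    service redis-server stop
    service memcached stop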
10:33 <_joe_> pybal updated everywhere [production]
10:32 <gehel> elasticsearch codfw: cleanup leftover logs /var/log/elasticsearch/*.[2-7] [production]
10:25 <hashar> pooling in integration-slave-trusty-1018 [releng]
10:24 <gehel> elasticsearch eqiad: cleanup leftover logs /var/log/elasticsearch/*.[2-7] [production]
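The glob in these two cleanup entries matches rotated logs with numeric suffixes 2 through 7; the equivalent one-liner:

    rm -v /var/log/elasticsearch/*.[2-7]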