2016-02-10
20:06 <hashar> creating integration-slave-trusty-1021 and integration-slave-trusty-1022 (ci.medium) [releng]
19:48 <greg-g> that cleanup was done by apergos [releng]
19:48 <greg-g> did cleanup across all integration slaves, some were very close to running out of disk space. results: https://phabricator.wikimedia.org/P2587 [releng]
19:43 <hashar> Dropping Precise m1.large slaves integration-slave-precise-1014 and integration-slave-precise-1013, most load shifted to Trusty (php53 -> php55 transition) [releng]
18:20 <Krinkle> Creating a Trusty slave to support increased demand following the MediaWiki php53 (precise) -> php55 (trusty) bump [releng]
16:06 <jzerebecki> reloading zuul for 41a92d5..5b971d1 [releng]
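The "reloading zuul for <range>" entries refer to pulling the deployed integration/config checkout up to the given commit range and reloading the Zuul scheduler. A minimal sketch of that workflow, assuming the checkout lives at /etc/zuul/wikimedia on the Zuul host and that the init script supports a reload action:
  # on the Zuul server (gallium at the time); /etc/zuul/wikimedia is assumed
  # to be the deployed checkout of integration/config
  cd /etc/zuul/wikimedia
  git fetch origin
  git log --oneline HEAD..origin/master   # review the incoming range, e.g. 41a92d5..5b971d1
  git pull --ff-only origin master
  sudo service zuul reload                # re-read the layout without dropping running jobs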
15:42 <jzerebecki> reloading zuul for 639dd40..41a92d5 [releng]
14:12 <jzerebecki> recover a bit of disk space: integration-saltmaster:~# salt --show-timeout '*slave*' cmd.run 'rm -rf /mnt/jenkins-workspace/workspace/*WikibaseQuality*' [releng]
13:46 <jzerebecki> reloading zuul for 639dd40 [releng]
13:15 <jzerebecki> reloading zuul for 3be81c1..e8e0615 [releng]
08:07 <legoktm> deploying https://gerrit.wikimedia.org/r/269619 [releng]
08:03 <legoktm> deploying https://gerrit.wikimedia.org/r/269613 and https://gerrit.wikimedia.org/r/269618 [releng]
06:41 <legoktm> deploying https://gerrit.wikimedia.org/r/269607 [releng]
06:34 <legoktm> deploying https://gerrit.wikimedia.org/r/269605 [releng]
02:59 <legoktm> deleting 14GB broken workspace of mediawiki-core-php53lint from integration-slave-precise-1004 [releng]
02:37 <legoktm> deleting /mnt/jenkins-workspace/workspace/mwext-testextension-hhvm-composer on trusty-1017, it had a skin cloned into it [releng]
02:26 <legoktm> queuing mwext jobs server-side to identify failing ones [releng]
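One way to queue jobs directly on the Jenkins master, rather than via a Gerrit event, is the standard Jenkins CLI. A sketch only, assuming CLI access to the integration Jenkins instance; the job name below is purely illustrative:
  # the Jenkins CLI 'build' command queues a job on the master;
  # 'mwext-testextension-hhvm-composer' is just an example job name
  java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ \
      build mwext-testextension-hhvm-composer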
02:21 <legoktm> deploying https://gerrit.wikimedia.org/r/269582 [releng]
01:03 <legoktm> deploying https://gerrit.wikimedia.org/r/269576 [releng]
2016-02-09
23:17 <legoktm> deploying https://gerrit.wikimedia.org/r/269551 [releng]
23:02 <legoktm> gracefully restarting zuul [releng]
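A graceful Zuul (v2) restart means letting running builds finish before the scheduler exits, then starting it again. A sketch, assuming the stock zuul-server signal handling (SIGUSR1 for graceful shutdown) and a pid file path that is only an assumption here:
  # SIGUSR1 tells zuul-server to stop accepting new events and exit
  # once running builds complete; then start it again via the init script
  sudo kill -SIGUSR1 $(cat /var/run/zuul/zuul.pid)   # pid file location assumed
  sudo service zuul start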
22:57 <legoktm> deploying https://gerrit.wikimedia.org/r/269547 [releng]
22:29 <legoktm> deploying https://gerrit.wikimedia.org/r/269540 [releng]
22:18 <legoktm> re-enabling puppet on all CI slaves [releng]
22:02 <legoktm> reloading zuul to see if it'll pick up the new composer-php53 job [releng]
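Creating a job such as composer-php53 on the Jenkins master is normally done with Jenkins Job Builder from an integration/config checkout before Zuul can dispatch to it. A sketch, assuming a local jenkins_jobs.ini with credentials and treating the job-name glob as illustrative:
  # from an integration/config checkout; jjb/ holds the YAML job definitions
  jenkins-jobs --conf etc/jenkins_jobs.ini update jjb/ '*composer-php53*'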
21:53 <legoktm> enabling puppet on just integration-slave-trusty-1012 [releng]
21:52 <legoktm> cherry-picked https://gerrit.wikimedia.org/r/#/c/269370/ onto integration-puppetmaster [releng]
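Cherry-picking a Gerrit change onto the local puppetmaster checkout follows the standard Gerrit fetch-and-pick workflow. A sketch, assuming the puppet repository checkout on integration-puppetmaster and patchset 1 of the change (the refs/changes path embeds the last two digits of the change number):
  # on integration-puppetmaster, inside the puppet repo checkout
  git fetch https://gerrit.wikimedia.org/r/operations/puppet refs/changes/70/269370/1
  git cherry-pick FETCH_HEAD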
21:50 <legoktm> disabling puppet on all trusty/precise CI slaves [releng]
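The disable (21:50) / re-enable (22:18) cycle around the cherry-pick above is typically driven from the salt master. A sketch, assuming the same integration-saltmaster and '*slave*' targeting used elsewhere in this log; the lock message is illustrative:
  # disable puppet on every CI slave, leaving a reason on the lock
  salt '*slave*' cmd.run 'puppet agent --disable "testing 269370 on trusty-1012 first"'
  # ...cherry-pick and verify on one slave, then re-enable everywhere
  salt '*slave*' cmd.run 'puppet agent --enable'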
21:40 <legoktm> deploying https://gerrit.wikimedia.org/r/269533 [releng]
17:49 <marxarelli> disabled/enabled gearman in jenkins, connection works this time [releng]
17:49 <marxarelli> performed stop/start of zuul on gallium to restore zuul and gearman [releng]
17:45 <marxarelli> "Failed: Unable to Connect" in jenkins when testing gearman connection [releng]
17:40 <marxarelli> killed old zuul process manually and restarted service [releng]
17:39 <marxarelli> restart of zuul fails as well; old process cannot be killed [releng]
17:38 <marxarelli> reloading zuul fails with "failed to kill 13660: Operation not permitted" [releng]
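Reading the 17:38-17:49 entries bottom-up, the recovery amounted to removing the stale scheduler process by hand, doing a full stop/start, and re-registering with Jenkins. A sketch of that sequence, assuming the zuul init script on gallium and that the stale PID (13660 here) was owned by another user, hence the "Operation not permitted" and the need for sudo:
  # the unprivileged reload could not signal the old process, so escalate
  sudo kill 13660 || sudo kill -9 13660   # remove the stale zuul-server process
  sudo service zuul stop
  sudo service zuul start
  # then, in the Jenkins web UI, disable and re-enable the Gearman plugin
  # and use "Test Connection" to confirm the scheduler registered again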
16:06 <bd808> Deleted corrupt integration-slave-precise-1003:/mnt/jenkins-workspace/workspace/mediawiki-core-php53lint/.git [releng]
15:11 <hashar> mira: /srv/mediawiki-staging/multiversion/checkoutMediaWiki 1.27.0-wmf.13 php-1.27.0-wmf.13 [releng]
14:51 <hashar> ./make-wmf-branch -n 1.27.0-wmf.13 -o master [releng]
14:50 <hashar> pooling back integration-slave-precise-1001 to 1004. Manually fetched git repos in the workspace for mediawiki core php53 [releng]
14:49 <hashar> make-wmf-branch instance: created a local ssh key pair and set the config to use User: hashar [releng]
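Pointing the branch-cut tooling at a specific Gerrit user is usually done in ~/.ssh/config on the instance. A sketch of the kind of stanza meant here; the key file name is a hypothetical placeholder:
  # ~/.ssh/config on the make-wmf-branch instance
  Host gerrit.wikimedia.org
      User hashar
      Port 29418
      IdentityFile ~/.ssh/id_rsa_make_wmf_branch   # hypothetical key name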
14:13 <hashar> pooling https://integration.wikimedia.org/ci/computer/integration-slave-precise-1012/ MySQL is back. Blame puppet [releng]
14:12 <hashar> depooling https://integration.wikimedia.org/ci/computer/integration-slave-precise-1012/ MySQL is gone somehow [releng]
14:04 <hashar> Manually git fetching mediawiki-core in /mnt/jenkins-workspace/workspace/mediawiki-core-php53lint on slaves precise 1001 to 1004 (git on Precise is remarkably slow) [releng]
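The per-slave fetch can be driven from the salt master instead of by hand. A sketch, assuming the clone sits directly in the workspace (as the .git path deleted at 16:06 suggests) and that the salt glob matches the four Precise slaves:
  # from integration-saltmaster: refresh the mediawiki-core checkout on precise 1001-1004
  salt 'integration-slave-precise-100[1-4]*' cmd.run \
      'cd /mnt/jenkins-workspace/workspace/mediawiki-core-php53lint && git fetch --all --prune'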
13:28 <hashar> salt '*trusty*' cmd.run 'update-alternatives --set php /usr/bin/hhvm' [releng]
13:28 <hashar> salt '*precise*' cmd.run 'update-alternatives --set php /usr/bin/php5' [releng]
13:17 <hashar> salt -v --batch=3 '*slave*' cmd.run 'puppet agent -tv' [releng]
13:15 <hashar> removing https://gerrit.wikimedia.org/r/#/c/269370/ from CI puppet master [releng]
13:14 <hashar> slaves recurse infinitely: /bin/bash -eu /srv/deployment/integration/slave-scripts/bin/mw-install-mysql.sh loops over /bin/bash /usr/bin/php maintenance/install.php --confpath /mnt/jenkins-workspace/workspace/mediawiki-core-qunit/src --dbtype=mysql --dbserver=127.0.0.1:3306 --dbuser=jenkins_u2 --dbpass=pw_jenkins_u2 --dbname=jenkins_u2_mw --pass testpass TestWiki WikiAdmin https://phabricator.wikimedia.org/T126327 [releng]
12:46 <hashar> Mass testing php loop of death: salt -v '*slave*' cmd.run 'timeout 2s /srv/deployment/integration/slave-scripts/bin/php --version' [releng]
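The "php loop of death" probed above comes from the slave-scripts php wrapper forwarding to whatever the php alternative points at: if the alternative is ever pointed back at the wrapper, it re-invokes itself forever, which is why the probe uses `timeout 2s` and why the fixes at 13:28 reset the alternatives. A minimal illustration of the failure mode; the wrapper body below is hypothetical, not the real script:
  #!/bin/bash -eu
  # hypothetical sketch of /srv/deployment/integration/slave-scripts/bin/php:
  # forward all arguments to the system php alternative. If update-alternatives
  # points /usr/bin/php back at this wrapper, the exec re-runs the wrapper and
  # the process loops until killed.
  exec /usr/bin/php "$@"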
12:40 <hashar> mass rebooting CI slaves from wikitech [releng]