2016-02-11
11:46 <hashar> salt -v '*' cmd.run '/etc/init.d/apache2 restart' might help for the Wikidata browser tests failing [releng]
11:31 <hashar> disabling hhvm service on CI slaves ( https://phabricator.wikimedia.org/T126594 , cherry picked both patches ) [releng]
10:50 <hashar> reenabled puppet on CI. All transitioned to a 128MB tmpfs (was 512MB) [releng]
10:16 <hashar> pooling back integration-slave-trusty-1009 and integration-slave-trusty-1010 (tmpfs shrunken) [releng]
10:06 <hashar> disabling puppet on all CI slaves. Trying to lower tmpfs from 512MB to 128MB ( https://gerrit.wikimedia.org/r/#/c/269880/ ) [releng]
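The tmpfs shrink above was rolled out via the cherry-picked Gerrit change; as a rough sketch only, assuming the slaves mount their build tmpfs at a path like /var/lib/jenkins-slave/tmpfs (the mount point is an assumption), the equivalent manual change would be:
    # remount the existing tmpfs with the smaller size on one slave
    sudo mount -o remount,size=128M /var/lib/jenkins-slave/tmpfs
    # or fleet-wide from the saltmaster, following the salt pattern used in this log
    salt -v '*slave*' cmd.run 'mount -o remount,size=128M /var/lib/jenkins-slave/tmpfs'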
02:45 <legoktm> deploying https://gerrit.wikimedia.org/r/269853 https://gerrit.wikimedia.org/r/269893 [releng]
2016-02-10
23:54 <hashar_> depooling Trusty slaves that only have 2GB of RAM, which is not enough. https://phabricator.wikimedia.org/T126545 [releng]
22:55 <hashar_> gallium: find /var/lib/jenkins/config-history/config -type f -wholename '*/2015*' -delete ( https://phabricator.wikimedia.org/T126552 ) [releng]
22:34 <Krinkle> Zuul is back up and processing Gerrit events, but jobs are still queued indefinitely. Jenkins is not accepting new jobs [releng]
22:31 <Krinkle> Full restart of Zuul. Seems Gearman/Zuul got stuck. All executors were idling. No new Gerrit events processed either. [releng]
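When Gearman/Zuul wedge like this, the queue state can be inspected over Gearman's plain-text admin protocol; a minimal sketch, assuming Zuul's built-in geard is listening on the default port 4730 on gallium:
    # list registered functions with queued/running/worker counts
    echo status | nc localhost 4730
    # list connected workers (the Jenkins executors); nc may need a timeout flag such as -w 2 to exit
    echo workers | nc localhost 4730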
21:22 <legoktm> cherry-picking https://gerrit.wikimedia.org/r/#/c/269370/ on integration-puppetmaster again [releng]
21:16 <hashar> CI dust has settled. Krinkle and I have pooled a lot more Trusty slaves to accommodate the overload caused by switching to php55 (jobs run on Trusty) [releng]
21:08 <hashar> pooling trusty slaves 1009, 1010, 1021, 1022 with 2 executors (they are ci.medium) [releng]
20:38 <hashar> cancelling mediawiki-core-jsduck-publish and mediawiki-core-doxygen-publish jobs manually. They will catch up on next merge [releng]
20:34 <Krinkle> Pooled integration-slave-trusty-1019 (new) [releng]
20:28 <Krinkle> Pooled integration-slave-trusty-1020 (new) [releng]
20:24 <Krinkle> created integration-slave-trusty-1019 and integration-slave-trusty-1020 (ci1.medium) [releng]
20:18 <hashar> created integration-slave-trusty-1009 and 1010 (trusty ci.medium) [releng]
20:06 <hashar> creating integration-slave-trusty-1021 and integration-slave-trusty-1022 (ci.medium) [releng]
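The new instances above were presumably created through the Wikimedia Labs (OpenStack) interface; a generic OpenStack CLI equivalent, with the image and network names left as placeholders, would look roughly like:
    openstack server create --flavor ci1.medium \
        --image <ubuntu-trusty-image> --network <labs-network> \
        integration-slave-trusty-1021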
19:48 <greg-g> that cleanup was done by apergos [releng]
19:48 <greg-g> did cleanup across all integration slaves, some were very close to out of room. results: https://phabricator.wikimedia.org/P2587 [releng]
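Free space across the fleet can be checked from the saltmaster with the same salt pattern used elsewhere in this log; a sketch:
    salt --show-timeout '*slave*' cmd.run 'df -h / /mnt'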
19:43 <hashar> Dropping Precise m1.large slaves integration-slave-precise-1014 and integration-slave-precise-1013; most load has shifted to Trusty (php53 -> php55 transition) [releng]
18:20 <Krinkle> Creating a Trusty slave to support increased demand following the MediaWiki php53 (Precise) -> php55 (Trusty) bump [releng]
16:06 <jzerebecki> reloading zuul for 41a92d5..5b971d1 [releng]
15:42 <jzerebecki> reloading zuul for 639dd40..41a92d5 [releng]
14:12 <jzerebecki> recover a bit of disk space: integration-saltmaster:~# salt --show-timeout '*slave*' cmd.run 'rm -rf /mnt/jenkins-workspace/workspace/*WikibaseQuality*' [releng]
13:46 <jzerebecki> reloading zuul for 639dd40 [releng]
13:15 <jzerebecki> reloading zuul for 3be81c1..e8e0615 [releng]
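The "reloading zuul for <sha>..<sha>" entries reload Zuul's layout so it picks up the merged integration/config changes; a minimal sketch, assuming the Zuul v2 service on gallium (zuul-server also reconfigures on SIGHUP):
    # on gallium, after the layout checkout has been updated
    sudo service zuul reload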
08:07 <legoktm> deploying https://gerrit.wikimedia.org/r/269619 [releng]
08:03 <legoktm> deploying https://gerrit.wikimedia.org/r/269613 and https://gerrit.wikimedia.org/r/269618 [releng]
06:41 <legoktm> deploying https://gerrit.wikimedia.org/r/269607 [releng]
06:34 <legoktm> deploying https://gerrit.wikimedia.org/r/269605 [releng]
02:59 <legoktm> deleting 14GB broken workspace of mediawiki-core-php53lint from integration-slave-precise-1004 [releng]
02:37 <legoktm> deleting /mnt/jenkins-workspace/workspace/mwext-testextension-hhvm-composer on trusty-1017, it had a skin cloned into it [releng]
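Oversized or polluted workspaces like these can be located before deleting them; a sketch using the workspace layout seen above:
    # largest workspaces on a slave, biggest first
    du -sh /mnt/jenkins-workspace/workspace/* | sort -rh | head -n 20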
02:26 <legoktm> queuing mwext jobs server-side to identify failing ones [releng]
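Queuing jobs server-side can be done with the Zuul v2 client on gallium; a sketch with placeholder project and change values (the actual invocations were not logged):
    zuul enqueue --trigger gerrit --pipeline test \
        --project mediawiki/extensions/<Extension> --change <change>,<patchset>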
02:21 <legoktm> deploying https://gerrit.wikimedia.org/r/269582 [releng]
01:03 <legoktm> deploying https://gerrit.wikimedia.org/r/269576 [releng]
2016-02-09
23:17 <legoktm> deploying https://gerrit.wikimedia.org/r/269551 [releng]
23:02 <legoktm> gracefully restarting zuul [releng]
22:57 <legoktm> deploying https://gerrit.wikimedia.org/r/269547 [releng]
22:29 <legoktm> deploying https://gerrit.wikimedia.org/r/269540 [releng]
22:18 <legoktm> re-enabling puppet on all CI slaves [releng]
22:02 <legoktm> reloading zuul to see if it'll pick up the new composer-php53 job [releng]
21:53 <legoktm> enabling puppet on just integration-slave-trusty-1012 [releng]
21:52 <legoktm> cherry-picked https://gerrit.wikimedia.org/r/#/c/269370/ onto integration-puppetmaster [releng]
21:50 <legoktm> disabling puppet on all trusty/precise CI slaves [releng]
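Read bottom-up, the five entries above form the usual canary cycle for testing a puppet change on CI: disable puppet everywhere, cherry-pick onto integration-puppetmaster, re-enable on a single slave, then re-enable everywhere once it looks good. A sketch of the underlying commands, assuming the standard puppet agent CLI and the salt targeting used in this log:
    # pause puppet on all CI slaves (from integration-saltmaster)
    salt '*slave*' cmd.run 'puppet agent --disable "testing 269370"'
    # on integration-puppetmaster: apply the candidate change
    git cherry-pick <gerrit-change-ref>
    # on the canary slave only (integration-slave-trusty-1012): run puppet once
    puppet agent --enable && puppet agent --test
    # after verification, re-enable puppet fleet-wide
    salt '*slave*' cmd.run 'puppet agent --enable'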
21:40 <legoktm> deploying https://gerrit.wikimedia.org/r/269533 [releng]
17:49 <marxarelli> disabled/enabled gearman in jenkins, connection works this time [releng]
17:49 <marxarelli> performed stop/start of zuul on gallium to restore zuul and gearman [releng]
17:45 <marxarelli> "Failed: Unable to Connect" in jenkins when testing gearman connection [releng]
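Read bottom-up, the three entries above describe the recovery: the Gearman plugin's connection test in the Jenkins UI failed, Zuul (and its embedded Gearman server) was stopped and started on gallium, and the plugin was then toggled off and on in Jenkins (under Manage Jenkins -> Configure System). A sketch of the gallium side, assuming the standard zuul init script:
    sudo service zuul stop
    sudo service zuul start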