2016-02-08
22:13 <hashar> Deleted cache-rsync instance superseded by castor instance [releng]
22:10 <hashar> Deleting pmcache.integration.eqiad.wmflabs (was to investigate various kinds of central caches). [releng]
20:14 <marxarelli> aborting pending mediawiki-extensions-php53 job for CheckUser [releng]
20:08 <bd808> toggled "Enable Gearman" off and on in Jenkins to wake up deployment-bastion workers [releng]
14:54 <hashar> nodepool: refreshed snapshot image: Image ci-jessie-wikimedia-1454942958 in wmflabs-eqiad is ready [releng]
14:47 <hashar> regenerated nodepool reference image (got rid of grunt-cli, https://gerrit.wikimedia.org/r/269126) [releng]
09:41 <legoktm> deploying https://gerrit.wikimedia.org/r/269093 https://gerrit.wikimedia.org/r/269094 [releng]
09:36 <hashar> restarting integration puppetmaster (out of memory / cannot fork) [releng]
06:11 <bd808> tgr set $wgAuthenticationTokenVersion on beta cluster (test run for T124440) [releng]
02:09 <legoktm[NE]> deploying https://gerrit.wikimedia.org/r/268047 [releng]
00:57 <legoktm[NE]> deploying https://gerrit.wikimedia.org/r/268031 [releng]
2016-02-06
18:34 <jzerebecki> reloading zuul for bdb2ed4..46ccca9 [releng]
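A reload like this makes zuul-server re-read its layout without restarting running jobs. A minimal sketch, assuming a stock Zuul 2.x install (the exact wrapper used on the CI master may differ):
    # Zuul 2.x re-reads its layout/configuration on SIGHUP
    sudo kill -HUP $(pgrep -f zuul-server)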
2016-02-05
13:30 <hashar> beta: cleaning out /data/project/logs/archive, which dates from the pre-logstash era. Apparently we have not logged that way since May 2015 [releng]
13:29 <hashar> beta: deleting /data/project/swift-disk, created in August 2014 and unused since June 2015. It was a failed attempt at bringing swift to beta [releng]
13:27 <hashar> beta: reclaiming disk space from extensions.git. On bastion: find /srv/mediawiki-staging/php-master/extensions/.git/modules -maxdepth 1 -type d -print -execdir git gc \; [releng]
13:03 <hashar> integration-slave-trusty-1011 went out of disk space. Did some brute-force cleanup and git gc. [releng]
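A rough sketch of that kind of brute-force cleanup on a CI slave, assuming Jenkins workspaces live under /mnt (paths are illustrative, not the exact commands run here):
    df -h /mnt                                                     # confirm which mount is full
    sudo du -xsh /mnt/jenkins-workspace/* | sort -rh | head -20    # find the biggest workspaces
    git -C /mnt/jenkins-workspace/some-job/src gc --prune=now      # repack a bloated clone (hypothetical path)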
05:21 <TimStarling> configured mediawiki-extensions-qunit to only run on integration-slave-trusty-1017, did a rebuild and then switched it back [releng]
2016-02-04
22:08 <jzerebecki> reloading zuul for bed7be1..f57b7e2 [releng]
21:51 <hashar> salt-key -d integration-slave-jessie-1001.eqiad.wmflabs [releng]
21:50 <hashar> salt-key -d integration-slave-precise-1011.eqiad.wmflabs [releng]
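Those two commands deregister the deleted instances from the salt master. A minimal sketch of the usual sequence (minion names taken from the entries above; salt-key prompts for confirmation unless -y is passed):
    sudo salt-key -L                                                 # list accepted and pending minion keys
    sudo salt-key -d integration-slave-precise-1011.eqiad.wmflabs    # drop the key once the instance is gone
    sudo salt-key -d integration-slave-jessie-1001.eqiad.wmflabs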
20:11 <hashar> ping [releng]
20:08 <hashar> All wikis to 1.27.0-wmf.12. No troubles so far, congratulations to everyone involved @wikimedia #wikimedia [releng]
18:37 <marxarelli> Reloading Zuul to deploy Iccf4f48fe5bf964a4c4e6db3f404f152628a4a24 [releng]
10:04 <hashar> beta: nuking the whole l10n cache ( https://phabricator.wikimedia.org/T123366 ) [releng]
10:03 <hashar> beta-scap-eqiad fails with AttributeError: 'bool' object has no attribute 'encode' [releng]
10:02 <hashar> https://integration.wikimedia.org/ci/view/Beta/job/beta-scap-eqiad/ is broken :( [releng]
00:57 <bd808> Got deployment-bastion processing Jenkins jobs again via instructions left by my past self at https://phabricator.wikimedia.org/T72597#747925 [releng]
00:43 <bd808> Jenkins agent on deployment-bastion.eqiad is once again doing its trick of not picking up jobs [releng]
2016-02-03
22:24 <bd808> Manually ran sync-common on deployment-jobrunner01.eqiad.wmflabs to pickup wmf-config changes that were missing (InitializeSettings, Wikibase, mobile) [releng]
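sync-common pulls the current MediaWiki code and wmf-config from the deployment host to the local machine. A minimal sketch of running it by hand on a single host, assuming a standard scap setup (exact user and flags may vary):
    ssh deployment-jobrunner01.eqiad.wmflabs
    sudo sync-common    # rsync MediaWiki code and configuration from the deploy master to this host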
17:43 <marxarelli> Reloading Zuul to deploy previously undeployed Icd349069ec53980ece2ce2d8df5ee481ff44d5d0 and Ib18fe48fe771a3fe381ff4b8c7ee2afb9ebb59e4 [releng]
15:12 <hashar> apt-get upgrade deployment-sentry2 [releng]
15:03 <hashar> redeployed rcstream/rcstream on deployment-stream by using git-deploy on deployment-bastion [releng]
14:55 <hashar> upgrading deployment-stream [releng]
14:42 <hashar> pooled back integration-slave-trusty-1015. Seems OK [releng]
14:35 <hashar> manually triggered a bunch of browser tests jobs [releng]
11:40 <hashar> apt-get upgrade deployment-ms-be01 and deployment-ms-be02 [releng]
11:32 <hashar> fixing puppet.conf on deployment-memc04 [releng]
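On a self-hosted labs puppetmaster setup, fixing puppet.conf usually means pointing the agent's server line at the project puppetmaster and re-running the agent. A rough sketch (the puppetmaster hostname here is hypothetical):
    sudo grep -n '^server' /etc/puppet/puppet.conf    # check which puppetmaster the agent talks to
    sudo sed -i 's/^server = .*/server = deployment-puppetmaster.deployment-prep.eqiad.wmflabs/' /etc/puppet/puppet.conf
    sudo puppet agent --test --verbose                # verify a clean catalog run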
11:08 <hashar> restarting beta cluster puppetmaster just in case [releng]
11:07 <hashar> beta: apt-get upgrade on deployment-cache* hosts and checking puppet [releng]
10:59 <hashar> integration/beta: deleting /etc/apt/apt.conf.d/*proxy files. There is no need for them; in fact the web proxy is not reachable from labs [releng]
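A minimal sketch of that cleanup on one host (the glob is an assumption about how the proxy snippets are named):
    ls /etc/apt/apt.conf.d/*proxy*          # see which proxy snippets are present
    sudo rm -v /etc/apt/apt.conf.d/*proxy*
    sudo apt-get update                     # confirm apt still works without the proxy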
10:53 <hashar> integration: switched puppet repo back to 'production' branch, rebased. [releng]
10:49 <hashar> various beta cluster have puppet errors .. [releng]
10:46 <hashar> integration-slave-trusty-1013 heading toward running out of disk space on /mnt ... [releng]
10:42 <hashar> integration-slave-trusty-1016 out of disk space on /mnt ... [releng]
03:45 <bd808> Puppet failing on deployment-fluorine with "Error: Could not set uid on user[datasets]: Execution of '/usr/sbin/usermod -u 10003 datasets' returned 4: usermod: UID '10003' already exists" [releng]
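A quick way to see what is blocking that usermod, assuming standard NSS tooling (purely diagnostic, not the fix applied here):
    getent passwd 10003       # which account already owns UID 10003
    getent passwd datasets    # current UID/GID of the datasets user
    id datasets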
03:44 <bd808> Freed 28G by deleting deployment-fluorine:/srv/mw-log/archive/*2015* [releng]
03:41 <bd808> Ran deployment-bastion.deployment-prep:/home/bd808/cleanup-var-crap.sh and freed 565M [releng]
2016-02-02
18:32 <marxarelli> Reloading Zuul to deploy If1f3cb60f4ccb2c1bca112900dbada03a8588370 [releng]
17:42 <marxarelli> cleaning mwext-donationinterfacecore125-testextension-php53 workspace on integration-slave-precise-1013 [releng]
17:06 <ostriches> running sync-common on mw2051 and mw1119 [releng]