2014-04-15
23:19 <mwalker> Started scap: Configuration change {{gerrit|126163}} and MultimediaViewer {{gerrit|126158}} [production]
23:08 <mwalker> Finished scap: Configuration changes, {{gerrit|113656}}, {{gerrit|121834}}, {{gerrit|126065}} (duration: 03m 11s) [production]
23:05 <mwalker> Started scap: Configuration changes, {{gerrit|113656}}, {{gerrit|121834}}, {{gerrit|126065}} [production]
23:01 <hashar> restarting Zuul to clear leaked file descriptor (known issue, fixed upstream) [production]
22:12 <awight> crm updated from e3f285984f786ca9a05ded1662aba415b0259856 to 7dafce5f8fe265fb0ab6c96e01e59fc4362ea5b4 [production]
21:51 <manybubbles> restarting elastic1009 again [production]
21:39 <hashar> jenkins /var/lib/git cleaned up on gallium [production]
21:16 <manybubbles> restarting elastic1009 to test performance changes. cluster will go yellow for a few minutes. might go red (wikitech is busted) [production]
21:15 <hashar> Jenkins is processing jobs again [production]
21:14 <hashar> cleared /tmp/ on integration-slave1002 (filled up by hhvm job, known issue, bug filed already) [production]
21:12 <hashar> Zuul locked again :/ Unpooling and repooling Jenkins slaves. [production]
19:50 <RoanKattouw> Restarting stuck Jenkins [production]
19:31 <manybubbles> setting refresh interval on elasticsearch indexes to 30s to test effect on load [production]
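(A cluster-wide refresh_interval change like this is normally applied through the Elasticsearch index-settings API; a minimal sketch, assuming the call is made from a node with Elasticsearch listening on the default port 9200:
    curl -XPUT 'http://localhost:9200/_settings' -d '{"index": {"refresh_interval": "30s"}}' )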
19:24 <reedy> synchronized wmf-config/ [production]
19:20 <reedy> synchronized php-1.23wmf22/includes/PrefixSearch.php 'I82b5ca65864099c180d915055c43e6839bd4f4a2' [production]
19:07 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: Wikisources back to 1.23wmf22 [production]
19:07 <ottomata> reinstalling elastic1010 [production]
19:07 <reedy> synchronized php-1.23wmf22/extensions/ProofreadPage [production]
18:41 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: Wikisources back to 1.23wmf21 due to ProofreadPage fatal [production]
18:36 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: Non-Wikipedias to 1.23wmf22 [production]
17:09 <paravoid> stopped pybal on lvs1005 [production]
17:06 <cmjohnson1> fixing lvs1005 eth1 cable [production]
16:56 <cmjohnson1> mw1057 replacing ethernet cable [production]
16:50 <manybubbles> raised "new generation" size on elastic1009 to test a performance theory [production]
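("New generation" here is the young generation of the Elasticsearch JVM heap. On the 1.x packaging it can be raised via ES_HEAP_NEWSIZE, which the startup script maps to the JVM -Xmn flag; a sketch assuming the Debian layout where /etc/default/elasticsearch is sourced, with an illustrative value rather than the one actually used:
    echo 'ES_HEAP_NEWSIZE=1g' >> /etc/default/elasticsearch   # size illustrative, not the real setting
    service elasticsearch restart )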
16:50 <cmjohnson1> mw1093 replacing ethernet cable [production]
16:40 <cmjohnson1> replacing eth cable on mw1193 [production]
16:31 <hashar> ... all Jenkins jobs are using /srv/ssd/gerrit instead [production]
16:30 <hashar> gallium had two Gerrit replication streams, one of them was removed in {{gerrit|122419}}, which deleted the target directories under /var/lib/git [production]
16:22 <cmjohnson1> shutting down mw1163 to replace DIMM [production]
16:18 <cmjohnson1> swapping bad disk slot 4 on dataset1001 [production]
16:13 <paravoid> moving ms-fe3xxx/ms-be3xxx to private1-esams [production]
16:06 <ottomata> reinstalling elastic1009 [production]
15:21 <anomie> synchronized php-1.23wmf21/extensions/Flow 'SWAT: Flow: Prevent logspam on enwiki 125930' [production]
15:13 <anomie> synchronized php-1.23wmf21/extensions/Flow 'SWAT: Flow: Prevent logspam on enwiki 125930' [production]
15:02 <mutante> DNS update - removing Tampa service IPs [production]
13:52 <hashar> Jenkins compressing console logs of builds. On gallium as user jenkins: find /var/lib/jenkins/jobs -wholename '*/builds/*/log' -type f -exec gzip --best {} \; [production]
13:42 <hashar> Command executed (as gerritslave user): find /srv/ssd/gerrit -type d -name '*.git' -exec bash -c 'echo; date; cd {}; echo; pwd; echo; git repack -ad; date;' \; [production]
13:41 <hashar> Repacking Gerrit replicated repositories on lanthanum and gallium (both under /srv/ssd/gerrit/ ) [production]
13:13 <andrewbogott> shutdown and decommissioned virt12 [production]
12:19 <paravoid> adding ms-be101[345] to Swift eqiad's rings, at 33% weight; old rings kept at ms-fe1001:~/swift-2014-04-14 [production]
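(Adding new backends at partial weight is done with swift-ring-builder followed by a rebalance; the "33% weight" presumably means the devices were added at roughly a third of their eventual target weight so data migrates gradually. A minimal sketch for one device, with zone, IP, port, device name, and weight all illustrative rather than the real values:
    swift-ring-builder object.builder add z1-10.64.0.1:6000/sdb1 1000
    swift-ring-builder object.builder rebalance )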
11:30 <mutante> DNS update - removed dbdump.pmtpa.wmnet [production]
11:26 <mutante> DNS update - remove db64,db65,db66,db67,db70 [production]
10:55 <mutante> db64,db67 - powerdown via mgmt [production]
10:51 <mutante> db65,db66 - shutdown [production]
10:07 <mutante> db70 - powerdown via mgmt [production]
09:47 <mutante> db64-67 - puppetstoredconfigclean.rb db${db}.pmtpa.wmnet ; puppetca --clean db${db}.pmtpa.wmnet ; salt-key -d db${db}.pmtpa.wmnet [production]
07:02 <springle> shutdown db67 for decom. analytics data is backed up on dbstore1002 [production]
06:47 <springle> moving pmtpa m1 and x1 slaves to db73 and db69 on 12th floor [production]
03:26 <LocalisationUpdate> ResourceLoader cache refresh completed at Tue Apr 15 03:25:52 UTC 2014 (duration 25m 51s) [production]
02:42 <LocalisationUpdate> completed (1.23wmf22) at 2014-04-15 02:42:48+00:00 [production]