2013-10-08
17:18 <olivneh> synchronized wmf-config/CommonSettings.php 'CoreEvents -> WikimediaEvents' [production]
17:15 <olivneh> synchronized php-1.22wmf19/extensions/WikimediaEvents 'CoreEvents -> WikimediaEvents' [production]
17:12 <olivneh> synchronized php-1.22wmf20/extensions/WikimediaEvents 'CoreEvents -> WikimediaEvents' [production]
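The three entries above show the usual ordering for a code-plus-config deploy: the extension change is synced into both live branches first, and the config change that activates it goes out last, so no appserver ever runs config pointing at code it does not yet have. A minimal sketch of the equivalent commands, assuming the 2013-era deployment tooling (`sync-dir`/`sync-file`, run from the deployment host against the MediaWiki staging directory):

```bash
# 1. Push the new extension code into every deployed branch first ...
sync-dir php-1.22wmf20/extensions/WikimediaEvents 'CoreEvents -> WikimediaEvents'
sync-dir php-1.22wmf19/extensions/WikimediaEvents 'CoreEvents -> WikimediaEvents'

# 2. ... then the config change that starts using it.
sync-file wmf-config/CommonSettings.php 'CoreEvents -> WikimediaEvents'
```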
16:58 <ariel> synchronized wmf-config/db-eqiad.php 'db1046 (s6) back to normal weight in pool' [production]
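This and the similar s6 entries below (db1015, db1039, db1040) are steps of the same slave-maintenance cycle: depool, upgrade, repool at a low weight to warm the buffer pool, then restore full weight. Each step is just an edit to the load-balancer weights in db-eqiad.php followed by a sync; a sketch, with the weight values purely illustrative:

```bash
# Each step edits the host's weight in the 's6' array of
# wmf-config/db-eqiad.php, e.g. (PHP, illustrative values):
#   'db1046' => 0,     # depooled
#   'db1046' => 100,   # warm-up weight
#   'db1046' => 400,   # normal weight
# ... then pushes the file out:
sync-file wmf-config/db-eqiad.php 'db1046 (s6) back to normal weight in pool'
```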
16:48 <reedy> synchronized wmf-config/ [production]
15:55 <apergos> fixed up salt clients on analytics 1005-6, 1009-27 [production]
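"Fixing up" salt clients usually means restarting wedged salt-minion daemons and reconciling their keys on the master; a rough sketch, assuming the stock salt CLI (the FQDN is an assumption):

```bash
# On the salt master: find minions that no longer respond.
salt 'analytics10*' test.ping

# On each unresponsive minion:
service salt-minion restart

# Back on the master: if a minion re-registered under a new key,
# drop the stale one and accept the replacement.
salt-key -d analytics1005.eqiad.wmnet
salt-key -a analytics1005.eqiad.wmnet
```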
14:34 <ariel> synchronized wmf-config/db-eqiad.php 'warm up db1040 (s6) after upgrade' [production]
13:38 <mutante> installing package upgrades on hooper [production]
13:28 <mutante> installing package upgrades on helium [production]
12:49 <mutante> installing package upgrades on formey [production]
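These "installing package upgrades" entries are routine host-by-host patching; the underlying steps are the standard apt sequence, sketched here:

```bash
# On each host in turn (hooper, helium, formey, ...):
apt-get update
apt-get -s upgrade    # dry run: review what would change
apt-get -y upgrade    # apply
checkrestart          # debian-goodies: list services still using old libraries
```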
12:38 <hashar> restarted Jenkins by mistake :-( [production]
12:35 <mutante> install package upgrades on gallium (jenkins) [production]
12:35 <akosiaris> apt-get purge defoma on image_scalers dsh group [production]
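dsh fans a command out to every host in a named group (defined under /etc/dsh/group/), so purging defoma (the long-obsolete Debian font manager) from all image scalers is a one-liner:

```bash
# -g: host group, -M: prefix each output line with the hostname
dsh -g image_scalers -M -- 'apt-get -y purge defoma'
```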
12:17 <mutante> installing package upgrades on ekrem (IRC) [production]
09:43 <mutante> installing package upgrades on antimony (gitblit, gerrit repl.) [production]
09:39 <mutante> puppetstoredconfigclean .. killing williams.wikimedia.org .. done [production]
09:11 <mutante> installing package upgrades on calcium (cameras) [production]
07:52 <springle> synchronized wmf-config/db-eqiad.php 'warm up db1039 in s6' [production]
07:20 <ariel> synchronized wmf-config/db-eqiad.php 'depool db1040 for upgrade/conversion to mariadb' [production]
06:37 <apergos> fixed up salt on mw1046, mw1072, mw1173 [production]
03:30 <springle> synchronized wmf-config/db-eqiad.php 's6 db1015 to full steam' [production]
03:23 <springle> xtrabackup clone s6 db1039 to db1022 [production]
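An xtrabackup clone streams a hot copy of the running slave's datadir to the new host and then replays the InnoDB log to make it consistent, all without taking db1039 out of service. A sketch of the common innobackupex streaming recipe (paths and transport are assumptions):

```bash
# On the source (db1039): stream a hot backup straight to the target.
innobackupex --stream=xbstream /tmp | ssh db1022 'xbstream -x -C /srv/sqldata'

# On the target (db1022): make the copy consistent, then start mysqld.
innobackupex --apply-log /srv/sqldata
chown -R mysql:mysql /srv/sqldata
service mysql start
```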
03:20 <springle> rmmod nf_conntrack on db1002, which was causing mass mysql connect failures [production]
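When the conntrack table fills, the kernel drops new connections outright ("nf_conntrack: table full, dropping packet" in dmesg), which on a busy MySQL host surfaces as mass connect failures. A sketch of the diagnosis and the fix used here:

```bash
# Diagnose: is the connection-tracking table saturated?
dmesg | grep 'nf_conntrack: table full'
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

# Fix: a DB host that doesn't need stateful firewalling can simply
# unload the module (any modules depending on it must go first).
rmmod nf_conntrack_ipv4 xt_state   # exact names vary by kernel/ruleset
rmmod nf_conntrack
```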
02:36 <LocalisationUpdate> ResourceLoader cache refresh completed at Tue Oct 8 02:35:56 UTC 2013 [production]
02:31 <springle> start online optimize logging indexes s4 & s5 [production]
02:25 <LocalisationUpdate> completed (1.22wmf19) at Tue Oct 8 02:25:27 UTC 2013 [production]
02:13 <LocalisationUpdate> completed (1.22wmf20) at Tue Oct 8 02:13:54 UTC 2013 [production]
00:11 <LeslieCarr> all ams-ix traffic is now on cr2-esams [production]
2013-10-07
23:36 <LeslieCarr> turning up new ams-ix port on cr2-esams [production]
23:24 <LeslieCarr> traffic draining from ams-ix on cr2-knams [production]
23:22 <LeslieCarr> moving european traffic around [production]
20:46 <LocalisationUpdate> ResourceLoader cache refresh completed at Mon Oct 7 20:46:05 UTC 2013 [production]
20:33 <LocalisationUpdate> completed (1.22wmf19) at Mon Oct 7 20:33:53 UTC 2013 [production]
20:23 <LocalisationUpdate> completed (1.22wmf20) at Mon Oct 7 20:23:45 UTC 2013 [production]
20:05 <LocalisationUpdate> failed: git pull of extensions failed [production]
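The LocalisationUpdate cron pulls fresh translations from git for core and every deployed extension, rebuilds the localisation cache for each branch, and finally refreshes the ResourceLoader cache (the "completed" and "cache refresh completed" lines above); one failed `git pull` in any checkout aborts the run, which is what this 20:05 entry records. The shape of it, as a rough sketch rather than the actual script:

```bash
set -e   # any failed pull aborts the whole run
for repo in core extensions/*; do
    (cd "$repo" && git pull --ff-only)   # dirty tree or network error fails here
done
# then rebuild the l10n cache per deployed branch and refresh ResourceLoader
```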
19:35 <reedy> synchronized wmf-config/ [production]
18:56 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: commonswiki back to 1.22wmf20, thumb.php fixed [production]
18:55 <reedy> synchronized php-1.22wmf20/thumb.php [production]
18:40 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: commonswiki back to 1.22wmf19 [production]
18:38 <ottomata> backporting python-docopt 0.6.1 for precise and including in our apt repo [production]
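Backporting here means rebuilding the newer source package against precise and publishing the result in the local apt repository; Wikimedia's repo was managed with reprepro, so the final step looks roughly like this (basedir, distribution name, and Debian revision are assumptions):

```bash
# After rebuilding python-docopt 0.6.1 for precise (dpkg-buildpackage -us -uc):
reprepro -b /srv/wikimedia include precise-wikimedia python-docopt_0.6.1-1_amd64.changes
```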
18:24 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: Everything non 'pedia to 1.22wmf20 [production]
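Wiki-to-branch assignments live in a flat wikiversions list compiled into wikiversions.cdb for constant-time lookup on every request; each "rebuilt wikiversions.cdb and synchronized wikiversions files" entry is one edit plus one command. Roughly, for the 18:40 rollback above:

```bash
# Edit the assignment, e.g.:  commonswiki php-1.22wmf20  ->  commonswiki php-1.22wmf19
vi wikiversions.dat

# Rebuild wikiversions.cdb and push both files to the cluster.
sync-wikiversions 'commonswiki back to 1.22wmf19'
```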
18:17 <reedy> synchronized php-1.22wmf20/extensions/Wikibase [production]
17:43 <paravoid> changed ms-fe1/2's mgmt IPs, IP clash with nas1-a/b's e0M [production]
17:28 <mwalker> payments update from I3a09af2cbd8566bbeb431437c49e72c832b87304 to I61bc80f257c590c1abeb716145c23921b005e5a8 [production]
16:05 <krinkle> synchronized wmf-config/InitialiseSettings.php 'I59e2547002bac5' [production]
14:50 <RoanKattouw> Restarting Parsoid on wtp10[01-24] on request; load avg reaching 90% [production]
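A rolling restart across the wtp10xx pool, staggered so the whole cluster is never down at once while load is already high; a plain-ssh sketch (domain name is an assumption):

```bash
for i in $(seq -w 1 24); do
    ssh "wtp10${i}.eqiad.wmnet" 'service parsoid restart'
    sleep 10   # let each restarted node begin taking traffic again
done
```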
14:28 <Coren> configuring labstore4; it will bounce up and down over the next two days. [production]
12:57 <akosiaris> brought icinga back up, after manually running puppet on neon [production]
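Icinga's configuration on neon is generated by puppet, so recovery meant forcing a foreground puppet run to regenerate a sane config before starting the service; sketched:

```bash
# On neon (the icinga host):
puppet agent --test    # one-off foreground run; regenerates the icinga config
service icinga restart
```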
12:39 <hashar> gallium: restarting Zuul. [production]
12:37 <hashar> gallium / Zuul: cherry-picked a change from OpenStack related to statsd metrics. Our change: {{gerrit|88063}} [production]
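Cherry-picking a pending Gerrit change onto a local checkout uses Gerrit's refs/changes namespace (last two digits of the change number, then the change, then the patchset); a sketch for the change above, with the repo URL, checkout path, and patchset number as assumptions:

```bash
cd /usr/local/src/zuul   # hypothetical checkout path
git fetch https://gerrit.wikimedia.org/r/integration/zuul refs/changes/63/88063/1
git cherry-pick FETCH_HEAD
# then restart Zuul (the 12:39 entry above)
```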