2015-07-02
02:30 <l10nupdate> Synchronized php-1.26wmf11/cache/l10n: (no message) (duration: 10m 23s) [production]
00:44 <ori> Repooling mw1152 (HHVM image scaler) for testing [production]
2015-07-01
23:30 <springle> restart mysqld dbstore2002 T104471 [production]
23:06 <krenair> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/222202/ (duration: 00m 11s) [production]
21:39 <godog> bounce gitblit [production]
20:38 <jgage> restarted gitblit on antimony [production]
19:50 <ori> restarted gitblit on antimony [production]
19:49 <ori> mw1152 not actually re-pooled because of ongoing work on palladium. I'm undoing the change and hanging back now. [production]
19:41 <twentyafterfour> rebuilt wikiversions.cdb and synchronized wikiversions files: group1 wikis to 1.26wmf12 [production]
19:36 <twentyafterfour> Synchronized php-1.26wmf12: sync 1.26wmf12 branch revert of "Implement support for Google reCAPTCHA 2.0" 90665a737bc25ff3c859044755d662c6cd700573 (duration: 02m 04s) [production]
19:31 <jynus> replication issues for shard s7 on dbstore2001 and dbstore2002, production applications *not* affected [production]
19:31 <urandom> from restbase1002; node thin_out_key_rev_value_data.js `hostname -i` local_group_wikipedia_T_parsoid_html 2>&1 | pv --line-mode | gzip -c > wikipedia_T_parsoid_html.log.gz [production]
19:28 <ori> Repooling mw1152 for further testing of HHVM scaler [production]
19:03 <hoo> Synchronized php-1.26wmf12/extensions/Wikidata/: Update DataModel to fix SnakList (duration: 00m 20s) [production]
18:42 <hoo> Synchronized wmf-config/mobile-labs.php: consistency (duration: 00m 12s) [production]
18:41 <hoo> Synchronized wmf-config/InitialiseSettings-labs.php: consistency (duration: 00m 31s) [production]
18:02 <andrewbogott> restarted keystone on labcontrol1001 [production]
17:03 <jgage> beginning puppet CA replacement procedure [production]
16:06 <ejegg> enabled queue consumers [production]
16:05 <akosiaris> re-enabling ntp everywhere [production]
15:59 <ejegg> disabled queue consumers [production]
15:30 <hoo> Synchronized php-1.26wmf12/extensions/Wikidata/: Remove alias uniqueness constraints (duration: 00m 21s) [production]
15:06 <urandom> restbase1002: PWD=/home/eevans/restbase-mod-table-cassandra/maintenance; node thin_out_key_rev_value_data.js `hostname -i` local_group_wikimedia_T_parsoid_html 2>&1 | pv --line-mode | gzip -c > wikimedia_T_parsoid_html.log.gz [production]
15:05 <bblack> re-enabling puppet on caches [production]
14:59 <bblack> disabling puppet on caches (because puppet always breaks when you move files/modules around...) [production]
13:57 <bblack> rebooting cp2001 (test kernel update) [production]
11:32 <YuviPanda> rsync on labstore1002 finished; restarting it to see what was skipped and what errored [production]
10:47 <moritzm> installed security patch updates on 862 hosts [production]
10:42 <hashar> restarting Jenkins: upgrading Jenkins gearman plugin from 0.1.1-8-gf2024bd to 0.1.1-9-g08e9c42-change_192429_2 https://phabricator.wikimedia.org/T72597#1416913 [production]
07:48 <mobrovac> restbase restarting cassandra on rb1005 [production]
05:28 <LocalisationUpdate> ResourceLoader cache refresh completed at Wed Jul 1 05:28:38 UTC 2015 (duration 28m 37s) [production]
05:27 <csteipp> deployed patch for T103765 [production]
04:41 <krinkle> Synchronized php-1.26wmf12/includes/resourceloader/ResourceLoader.php: Iee884208c5c4b minify cache key (duration: 00m 11s) [production]
03:10 <mutante> git pull on strontium [production]
03:00 <LocalisationUpdate> completed (1.26wmf12) at 2015-07-01 03:00:21+00:00 [production]
02:53 <l10nupdate> Synchronized php-1.26wmf12/cache/l10n: (no message) (duration: 10m 12s) [production]
02:26 <LocalisationUpdate> completed (1.26wmf11) at 2015-07-01 02:26:55+00:00 [production]
02:23 <l10nupdate> Synchronized php-1.26wmf11/cache/l10n: (no message) (duration: 06m 50s) [production]
02:12 <springle> upgrade db1034 trusty [production]
01:37 <ori> Depooled mw1152. Req error dashboard shows elevated 5xx rates correlating with the server getting pooled, but the logs don't appear to corroborate it. Odd. [production]
01:03 <ori> Disabling Puppet on mw1152 for 12h to hack apache config to log locally [production]
00:42 <ori> Synchronized wmf-config/CommonSettings.php: I9a8018981: Double $wgMaxShellMemory on HHVM scalers (512 Mb => 1024 Mb) (duration: 00m 12s) [production]
00:34 <ori> pooled mw1152 (HHVM rendering) at weight 10 for testing [production]
00:33 <gwicke> rolling cassandra restart done [production]
00:23 <gwicke> starting rolling restart of cassandra nodes to apply new config [production]
00:01 <greg-g> we're still here [production]
2015-06-30
23:30 <hoo> Synchronized php-1.26wmf12/extensions/Wikidata/: Fix EntityParserOutputGenerator (duration: 00m 21s) [production]
22:55 <ori> depooled mw1152 [production]
22:52 <ori> Pooled HHVM image scaler (mw1152) at weight 1 for testing. [production]
22:52 <gwicke> updated restbase1004 to openjdk-8 [production]