2018-05-18
01:27 <twentyafterfour> syncing wmf.4 again to deploy https://gerrit.wikimedia.org/r/#/c/433673/ refs T194900 T191050 [production]
00:19 <mutante> rdb2004 - down in Icinga since >1d, nothing on console, don't see a SAL entry. Powercycling [production]
2018-05-17
23:35 <twentyafterfour> MediaWiki Train for 1.32.0-wmf.4 remains blocked by critical bugs, see T191050 for a list of blockers. [production]
23:34 <twentyafterfour@tin> Synchronized php: group1 wikis to 1.32.0-wmf.3 refs T191050 (duration: 01m 20s) [production]
23:32 <twentyafterfour@tin> rebuilt and synchronized wikiversions files: group1 wikis to 1.32.0-wmf.3 refs T191050 [production]
23:29 <twentyafterfour> rolling back [production]
23:29 <twentyafterfour> still seeing Notice: Undefined variable: nonce in /srv/mediawiki/php-1.32.0-wmf.4/includes/resourceloader/ResourceLoaderClientHtml.php on line 272 [production]
23:28 <twentyafterfour@tin> Synchronized php: group1 wikis to 1.32.0-wmf.4 refs T191050 (duration: 01m 17s) [production]
23:26 <twentyafterfour@tin> rebuilt and synchronized wikiversions files: group1 wikis to 1.32.0-wmf.4 refs T191050 [production]
23:22 <twentyafterfour@tin> Synchronized php-1.32.0-wmf.4/: sync https://gerrit.wikimedia.org/r/#/c/433673/ refs T194900 (duration: 09m 54s) [production]
22:53 <twentyafterfour> deploying https://gerrit.wikimedia.org/r/#/c/433673/ refs T194900 T191050 [production]
19:46 <twentyafterfour@tin> Synchronized php: group1 wikis to 1.32.0-wmf.3 (duration: 01m 20s) [production]
19:44 <twentyafterfour@tin> rebuilt and synchronized wikiversions files: group1 wikis to 1.32.0-wmf.3 [production]
19:41 <twentyafterfour> rolling back due to spike of undefined variable notices in resourceloader and ApiCSPReport.php [production]
19:39 <twentyafterfour@tin> Synchronized php: group1 wikis to 1.32.0-wmf.4 (duration: 01m 21s) [production]
19:38 <twentyafterfour@tin> rebuilt and synchronized wikiversions files: group1 wikis to 1.32.0-wmf.4 [production]
19:33 <twentyafterfour> getting the train back on track. Starting with group1 to 1.32.0-wmf.4 right now, will do all wikis to wmf.4 after verifying that group1 looks stable. [production]
19:28 <twentyafterfour@tin> Synchronized php-1.32.0-wmf.4/extensions/Echo/: unbreak T194848 (duration: 01m 24s) [production]
19:11 <twentyafterfour> train is still blocked by T194848 [production]
17:23 <arlolra> Updated Parsoid to fd49ab4 (T194821, T194687) [production]
17:15 <arlolra@tin> Finished deploy [parsoid/deploy@091b891]: Updating Parsoid to fd49ab4 (duration: 09m 35s) [production]
17:06 <arlolra@tin> Started deploy [parsoid/deploy@091b891]: Updating Parsoid to fd49ab4 [production]
16:11 <marostegui> Reload haproxy on dbproxy1010 to depool labsdb1011 T174047 T194341 [production]
16:04 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Repool db1106 (duration: 01m 21s) [production]
15:29 <marostegui> Manually fail disk #6 on db1064 to get it replaced [production]
15:28 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Repool db1093 with full weight (duration: 01m 21s) [production]
15:00 <marostegui> Reload haproxy on dbproxy1010 to repool labsdb1010 [production]
14:39 <papaul> shutting down furud for shelves swap [production]
14:35 <marostegui> Reload haproxy on dbproxy1010 to depool labsdb1010 https://phabricator.wikimedia.org/T174047 https://phabricator.wikimedia.org/T194341 [production]
14:17 <marostegui> Manually fail disk #2 on db1064 to get it replaced [production]
14:03 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Change db1066 IP - T193847 (duration: 01m 21s) [production]
13:58 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Change db1066 IP - T193847 (duration: 01m 17s) [production]
13:50 <marostegui> Power off db1066 for a rack change - T193847 [production]
13:46 <marostegui> Stop MySQL on db1066 for a rack change - T193847 [production]
13:45 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1066 for a rack change - T193847 (duration: 01m 21s) [production]
13:38 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1105 (duration: 01m 21s) [production]
13:36 <jynus> restarted db1105 by mistake, turning it back on [production]
13:15 <jynus> stop and reimage db1106 [production]
12:53 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1106 (duration: 01m 20s) [production]
12:50 <marostegui> Deploy schema change on s3 codfw primary master (db2043) this will generate lag on codfw - T191519 T188299 T190148 [production]
11:21 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1090:3317 after alter table (duration: 01m 21s) [production]
11:08 <marostegui> Stop MySQL and poweroff db1067 - T194852 [production]
10:12 <mobrovac@tin> Finished deploy [citoid/deploy@8a26508]: Update citoid to 2f35126 - T179123 T185217 (duration: 02m 52s) [production]
10:09 <mobrovac@tin> Started deploy [citoid/deploy@8a26508]: Update citoid to 2f35126 - T179123 T185217 [production]
09:30 <reedy@tin> Synchronized wmf-config/throttle.php: Throttle for Barcelona Hackathon (duration: 01m 22s) [production]
08:41 <jynus> stop and reimage db2049 [production]
08:04 <jynus> stop and reimage db2056 [production]
07:53 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Repool db1093 with low load (duration: 01m 20s) [production]
07:25 <elukey> bounced all the prometheus burrow exporters on kafkamon* hosts to refresh their metrics and drop old/expired cgroups [production]
07:22 <marostegui> Deploy schema change on db1090:3317 - T191519 T188299 T190148 [production]