2019-04-10
15:00 <fsero> repooling mwdebug2002 [production]
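(For context: (re)pooling an application server such as mwdebug2002 is normally done with conftool's confctl from a cluster-management host. The selector and FQDN below are illustrative assumptions, not copied from this log.)

    # Sketch only: inspect current state, then repool the host
    sudo confctl select 'name=mwdebug2002.codfw.wmnet' get
    sudo confctl select 'name=mwdebug2002.codfw.wmnet' set/pooled=yes
    # pooled=inactive (see the 05:52 entry below) also drops a host from the scap dsh list
    sudo confctl select 'name=mwdebug2002.codfw.wmnet' set/pooled=inactive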
15:00 <jijiki> Enable puppet on thumbor1001, switch back to nginx, pool thumbor1004 - T187765 [production]
14:57 <fsero> repooling mwdebug2001 [production]
14:20 <hashar> CI processing was a bit slower than usual over the past couple of hours. It should be slightly faster now T220606 [production]
14:13 <joal@deploy1001> Finished deploy [analytics/aqs/deploy@fc1d232]: Deploying per-page limits for druid-endpoints (duration: 14m 41s) [production]
13:58 <joal@deploy1001> Started deploy [analytics/aqs/deploy@fc1d232]: Deploying per-page limits for druid-endpoints [production]
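(The Started/Finished pair above is what scap3 logs for a service deployment. A minimal sketch, assuming the standard deploy-repository layout on deploy1001; the path is an assumption.)

    # Sketch: deploy the service from its scap3 deploy repository
    cd /srv/deployment/analytics/aqs/deploy
    git pull && git submodule update --init --recursive   # check out fc1d232 and its submodules
    scap deploy 'Deploying per-page limits for druid-endpoints'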
13:47 <fsero> resizing disk on mwdebug2002 T219989 [production]
13:42 <anomie@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Setting actor migration to write-both/read-new on group0 (T188327) (duration: 01m 00s) [production]
13:19 <marostegui> Deploy schema change on aawiki aawikibooks aawiktionary abwiki abwiktionary acewiki advisorswiki advisorywiki adywiki afwiki on x1 - T136427 [production]
12:41 <urandom> decommissioning cassandra-b, restbase2007 -- T208087 [production]
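(Decommissioning one Cassandra instance on a multi-instance RESTBase host is a nodetool operation; this is only a sketch, and the per-instance wrapper name is an assumption.)

    # Sketch: stream data off instance "b" of restbase2007, then watch it leave the ring
    nodetool-b decommission   # per-instance wrapper name is an assumption; plain nodetool on single-instance hosts
    nodetool-b netstats       # monitor streaming progress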
12:40 <hashar> contint2001: stopped puppet and zuul-merger for debugging [production]
12:17 <jbond42> rolling security update of systemd on stretch systems [production]
12:07 <Amir1> EU swat is done [production]
12:07 <ladsgroup@deploy1001> Synchronized wmf-config/CommonSettings.php: SWAT: Prep work for deploying UrlShortener extension (T108557), part II (duration: 01m 00s) [production]
12:05 <ladsgroup@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: Prep work for deploying UrlShortener extension (T108557), part I (duration: 01m 00s) [production]
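(Single-file config syncs like the two SWAT entries above are logged by scap automatically. A rough sketch, assuming the change is already merged and pulled into /srv/mediawiki-staging on the deployment host:)

    # Sketch: pull the merged operations/mediawiki-config change, then sync one file at a time
    cd /srv/mediawiki-staging
    git pull
    scap sync-file wmf-config/InitialiseSettings.php 'SWAT: Prep work for deploying UrlShortener extension (T108557), part I'
    scap sync-file wmf-config/CommonSettings.php 'SWAT: Prep work for deploying UrlShortener extension (T108557), part II'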
11:46 <dcausse> elasticsearch search cluster: reindexing zh-min-nan wikis (T219533) [production]
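(Reindexing a wiki is done with CirrusSearch's maintenance script; the flags below are a hedged sketch and may differ from what was actually run for T219533.)

    # Sketch: rebuild the search index in place for one of the zh-min-nan wikis and swap it in
    mwscript extensions/CirrusSearch/maintenance/UpdateSearchIndexConfig.php --wiki=zh_min_nanwiki \
        --reindexAndRemoveOk --indexIdentifier now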
10:55 <moritzm> upgrading nodejs on analytics-tool1002 to latest node 10 version from component/node10 [production]
10:46 <gilles> T220265 setZoneAccess on all wikis finished [production]
10:40 <akosiaris> upgrade kubernetes-node on kubestage1002 (staging cluster) to 1.12.7-1 T220405 [production]
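(A hedged sketch of a staging-node upgrade like the one above; the drain step, FQDN and package pinning are assumptions based on the log, not a copied runbook.)

    # Sketch: drain the node, upgrade the package from the WMF apt repo, then re-enable scheduling
    kubectl drain kubestage1002.eqiad.wmnet --ignore-daemonsets --delete-local-data
    sudo apt-get update && sudo apt-get install kubernetes-node=1.12.7-1   # version from T220405
    kubectl uncordon kubestage1002.eqiad.wmnet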
10:33 <moritzm> upgrading nodejs on aqs* to latest node 10 version from component/node10 [production]
10:25 <fsero> resizing disk on mwdebug2001 T219989 [production]
10:17 <akosiaris> upload kubernetes_1.12.7-1 to apt.wikimedia.org/stretch-wikimedia component main T220405 [production]
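(Publishing the package on apt.wikimedia.org is typically a reprepro import on the apt host; the .changes filename below is a placeholder.)

    # Sketch: import the built packages into stretch-wikimedia, component main, and verify
    sudo reprepro -C main include stretch-wikimedia kubernetes_1.12.7-1_amd64.changes
    sudo reprepro ls kubernetes-node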
10:14 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1064 T217453 (duration: 00m 59s) [production]
10:08 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1120 T217453 (duration: 01m 03s) [production]
09:59 <moritzm> upgrading labweb hosts (wikitech) to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
09:51 <akosiaris> upgrade kubernetes-node on kubestage1001 (staging cluster) to 1.12.7-1 T220405 [production]
09:50 <moritzm> upgrading snapshot hosts to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
09:40 <akosiaris> upgrade kubernetes-master on neon (staging cluster) to 1.12.7-1 T220405 [production]
09:05 <moritzm> upgrading job runners mw1299-mw1311 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
08:56 <elukey> restart druid-broker on druid100[4-6] - stuck after an attempted datasource delete action [production]
08:46 <godog> roll-restart swift frontends - T214289 [production]
08:36 <elukey> update thirdparty/cloudera packages to cdh 5.16.1 for jessie/stretch-wikimedia - T218343 [production]
08:26 <onimisionipe@deploy1001> Finished deploy [kartotherian/deploy@f7518bb] (stretch): Insert maps2003 into stretch environment (duration: 00m 22s) [production]
08:26 <onimisionipe@deploy1001> Started deploy [kartotherian/deploy@f7518bb] (stretch): Insert maps2003 into stretch environment [production]
08:12 <gilles> T220265 foreachwiki extensions/WikimediaMaintenance/filebackend/setZoneAccess.php --backend local-multiwrite [production]
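(foreachwiki simply wraps mwscript over every entry in the wiki dblist; the single-wiki equivalent, with aawiki chosen purely as an example, would be:)

    # Sketch: what foreachwiki runs once per wiki
    mwscript extensions/WikimediaMaintenance/filebackend/setZoneAccess.php --wiki=aawiki --backend local-multiwrite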
07:22 <mholloway-shell@deploy1001> Finished deploy [mobileapps/deploy@efd5bd5]: Revert "Bifurcate imageinfo queries to improve performance" (T220574) (duration: 04m 05s) [production]
07:18 <mholloway-shell@deploy1001> Started deploy [mobileapps/deploy@efd5bd5]: Revert "Bifurcate imageinfo queries to improve performance" (T220574) [production]
07:12 <onimisionipe> depooling maps200[34] to increase cassandra replication factor - T198622 [production]
07:09 <jijiki> Rolling restart thumbor service [production]
07:08 <jijiki> Upgrading python-thumbor-wikimedia on Thumbor servers to 2.4-1+deb9u1 [production]
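(Rolling restarts like the 07:09 entry above are commonly driven with Cumin in small batches; the host alias and unit name here are assumptions, not taken from the log.)

    # Sketch: restart thumbor across the fleet, two hosts at a time, pausing 30s between batches
    sudo cumin -b 2 -s 30 'A:thumbor' 'systemctl restart thumbor.service'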
06:59 <marostegui> Deploy schema change on x1 master with replication; lag will happen on x1 T217453 [production]
06:59 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool x1 slaves T217453 (duration: 01m 13s) [production]
05:52 <_joe_> setting both mwdebug200{1,2} to pooled=inactive to remove them from the scap dsh list and allow deployments, T219989 [production]
05:12 <_joe_> same on mwdebug2001 [production]
05:08 <_joe_> removing hhvm cache on mwdebug2002 [production]
00:37 <Krinkle> last scap sync-file failed to sync to mwdebug2002.codfw and mwdebug2001.codfw due to insufficient disk space [production]
00:20 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.25/resources/src/startup/: I3b9f1a13379a / Ie9db60e417cca (duration: 01m 01s) [production]
2019-04-09
23:14 <twentyafterfour@deploy1001> Pruned MediaWiki: 1.33.0-wmf.17 [keeping static files] (duration: 06m 03s) [production]
22:31 <twentyafterfour@deploy1001> Finished scap: testwikis wikis to 1.33.0-wmf.25 refs T206679 (duration: 39m 59s) [production]