2017-11-27
06:53 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Remove db1021 from the config as it will be decommissioned - T181378 (duration: 00m 44s) [production]
06:52 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Remove db1021 from the config as it will be decommissioned - T181378 (duration: 00m 45s) [production]
06:27 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Pool db1097:3314 with low weight - T178359 (duration: 00m 46s) [production]
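Pooling a replica such as db1097:3314 is done by editing the load arrays in wmf-config on the deployment host and pushing the file with scap. A minimal sketch of that workflow, assuming the usual sync-file invocation; the section name and weight below are illustrative, not taken from the log:
    # In wmf-config/db-eqiad.php, add the new multi-instance replica with a very low
    # weight so it only takes a trickle of read traffic while its buffer pool warms up:
    #   's4' => [ ..., 'db1097:3314' => 1, ... ],
    # then push the file to all MediaWiki application servers:
    scap sync-file wmf-config/db-eqiad.php 'Pool db1097:3314 with low weight - T178359'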
02:22 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.8) (duration: 05m 44s) [production]
2017-11-25
19:31 <marostegui> Set 32:3 disk to offline on db1051 [production]
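"32:3" is an enclosure:slot address on the host's RAID controller. A sketch of how such a drive is typically taken offline with LSI's MegaCli, assuming adapter 0; the exact tool and adapter id are not recorded in the log:
    # Mark the physical drive in enclosure 32, slot 3 offline on adapter 0
    megacli -PDOffline -PhysDrv '[32:3]' -a0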
16:09 <kartik@tin> Finished deploy [cxserver/deploy@11aecc9]: Update cxserver to 0c242c0, Pin service-runner to 2.4.2 (duration: 03m 29s) [production]
16:05 <kartik@tin> Started deploy [cxserver/deploy@11aecc9]: Update cxserver to 0c242c0, Pin service-runner to 2.4.2 [production]
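The Started/Finished pair above is a scap3 service deployment run from the service's deploy repository on the deployment host. A minimal sketch, assuming the conventional /srv/deployment path for cxserver:
    # On the deployment host, update the deploy repo to 11aecc9 and roll it out:
    cd /srv/deployment/cxserver/deploy
    git pull
    scap deploy 'Update cxserver to 0c242c0, Pin service-runner to 2.4.2'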
16:05 <godog> unban statsd traffic from scb on graphite1001 - T181333 [production]
15:45 <ppchelko@tin> Finished deploy [cpjobqueue/deploy@e35aa05]: Rollback. Disable GC metric reporting T181333 (duration: 00m 31s) [production]
15:45 <ppchelko@tin> Started deploy [cpjobqueue/deploy@e35aa05]: Rollback. Disable GC metric reporting T181333 [production]
15:37 <volans> restarted statsd-proxy on graphite1001 (died during investigation) T181333 [production]
14:34 <godog> rolling restart of cxserver to alleviate metrics leak - T181333 [production]
14:26 <godog> restart cxserver on scb100[34] - T181333 [production]
14:10 <godog> roll-restart cpjobqueue to alleviate metrics leak - T181333 [production]
13:40 <godog> drop incoming statsd from scb to graphite1001 temporarily - T181333 [production]
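statsd on graphite1001 listens on UDP port 8125; one plausible way to drop the scb traffic temporarily is a host-level firewall rule like the following. The actual mechanism and source hosts are not recorded in the log; scb1003 is used only as an example, matching the hosts mentioned above:
    # Temporarily drop statsd packets from an scb host; repeat per host and
    # remove the rule again with -D once the metrics leak is under control.
    iptables -I INPUT -p udp --dport 8125 -s scb1003.eqiad.wmnet -j DROP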
08:32 <ariel@tin> Finished deploy [dumps/dumps@ec21673]: fix abstracts recombine job (duration: 00m 02s) [production]
08:32 <ariel@tin> Started deploy [dumps/dumps@ec21673]: fix abstracts recombine job [production]
2017-11-24
13:52 <moritzm> removing git packages from jessie-wikimedia/experimental (replaced by component/git) [production]
13:24 <moritzm> installing openjpeg2 updates (the original security update was already installed after the initial release, but there was a binNMU for amd64) [production]
13:17 <marostegui> Stop replication on db1097 to reimport and recompress commonswiki.watchlist [production]
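A hedged sketch of what "reimport and recompress" typically means on a MariaDB replica: replication is paused, the table is reloaded from a dump, and it is rebuilt with InnoDB compression. The dump file name and compression options below are illustrative:
    mysql -e "STOP SLAVE"                       # pause replication so the table does not drift
    mysql commonswiki < watchlist.sql           # reload the table from a fresh dump
    mysql commonswiki -e "ALTER TABLE watchlist ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8"
    mysql -e "START SLAVE"                      # resume replication once the rebuild is done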
12:54 <jynus> reenabling puppet on db1071 [production]
12:50 <jynus> resetting replication on es1011 for consistency with other replica sets [production]
12:40 <jynus> setting up s8 topology on eqiad [production]
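Setting up the s8 topology amounts to re-pointing the future s8 replicas at their new section master. A generic MariaDB sketch, with the master host and binlog coordinates purely hypothetical:
    # On each future s8 replica: stop replication, point it at the new s8 master, resume.
    mysql -e "STOP SLAVE"
    mysql -e "CHANGE MASTER TO MASTER_HOST='new-s8-master.eqiad.wmnet', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4"
    mysql -e "START SLAVE"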
12:38 <jynus> disable puppet on db1071 and stop local s5 heartbeat there [production]
12:32 <reedy@tin> Synchronized docroot/mediawiki/keys/: Fixup keys (duration: 00m 45s) [production]
12:13 <marostegui> Enable GTID on es2018 - T181293 [production]
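On a MariaDB replica, enabling GTID is normally a matter of switching the replication connection to GTID-based positioning. A minimal sketch:
    # Switch es2018's replication connection to GTID positioning (MariaDB syntax)
    mysql -e "STOP SLAVE; CHANGE MASTER TO MASTER_USE_GTID = slave_pos; START SLAVE"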
11:57 <marostegui> Disable puppet on es2018 - T181293 [production]
11:50 <jynus@tin> Synchronized wmf-config/db-codfw.php: depool es2018 T181293 (duration: 00m 45s) [production]
11:48 <marostegui> Reboot es2018 after full-upgrade - T181293 [production]
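"Full-upgrade" here means bringing the host's packages fully up to date before rebooting. A generic sketch with apt:
    apt-get update
    apt-get dist-upgrade -y   # full upgrade: allows new packages and removals as needed
    reboot                    # pick up the new kernel and libraries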
11:25 <marostegui> Restart mysql on es2018 [production]
11:25 <jynus@tin> Synchronized wmf-config/db-eqiad.php: db2085:3318, db2086:3318 (duration: 00m 43s) [production]
11:24 <jynus@tin> Synchronized wmf-config/db-codfw.php: Pool db2038, db2085:3318, db2086:3318 (duration: 00m 45s) [production]
10:55 <marostegui> Restart MySQL on db2086 to move s5 to s8 [production]
10:37 <jynus> cancelling db2085 restart, only doing mysql:s5 [production]
10:35 <jynus> restarting db2085 (including both s5 and s3 instances) [production]
10:33 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool all future s8 slaves for a topology change - T177208 (duration: 00m 45s) [production]
10:22 <moritzm> installing ca-certificates updates on trusty hosts [production]
09:57 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1097:3315 and db1092 (duration: 00m 45s) [production]
09:50 <jynus> restarting db2045 [production]
09:32 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1101:3318 db1097:3315 and db1092 (duration: 00m 45s) [production]
08:49 <marostegui> Stop MySQL on db1092 [production]
08:48 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1101:3318 in s5 to warm it up and depool db1092 - T178359 T177208 (duration: 00m 45s) [production]
08:40 <moritzm> installing java security updates on notebook* hosts [production]
08:38 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Slowly pool db1097:3315 - T178359 (duration: 00m 45s) [production]
08:37 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Slowly pool db1097:3315 - T178359 (duration: 00m 45s) [production]
08:20 <moritzm> installing java security updates on meitnerium [production]
08:15 <moritzm> installing java security updates on stat1004 [production]
08:14 <hashar> restarting jenkins on contint1001 for a java update [production]
08:07 <elukey> re-enabling piwik on bohrium (only VM running on ganeti1006 atm) after mysql tables restore completed [production]
06:47 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Slowly pool db1101:3318 in s5 to warm it up - T178359 (duration: 00m 45s) [production]