2017-11-27
10:53 <moritzm> installing postgresql-common security updates [production]
10:08 <ppchelko@tin> Finished deploy [cpjobqueue/deploy@e35aa05]: Revert using keep-alive (duration: 00m 22s) [production]
10:08 <ppchelko@tin> Started deploy [cpjobqueue/deploy@e35aa05]: Revert using keep-alive [production]
09:42 <ppchelko@tin> Finished deploy [cpjobqueue/deploy@b570d4e]: Make http agent use keep-alive (duration: 00m 48s) [production]
09:41 <ppchelko@tin> Started deploy [cpjobqueue/deploy@b570d4e]: Make http agent use keep-alive [production]
09:14 <godog> reimage restbase1007 - T179422 [production]
08:55 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Fully pool db1097:3314 - T178359 (duration: 00m 43s) [production]
08:40 <moritzm> installing openjdk security updates on hadoop, druid and kafka clusters [production]
08:27 <marostegui> Deploy schema change on dbstore1002 and dbstore1001 - T174569 [production]
08:22 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase db1097:3314 weight - T178359 (duration: 00m 45s) [production]
07:20 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase db1097:3314 weight - T178359 (duration: 00m 45s) [production]
07:10 <marostegui> Stop MySQL on db1021 as it will be decommissioned - T181378 [production]
06:53 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Remove db1021 from the config as it will be decommissioned - T181378 (duration: 00m 44s) [production]
06:52 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Remove db1021 from the config as it will be decommissioned - T181378 (duration: 00m 45s) [production]
06:27 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Pool db1097:3314 with low weight - T178359 (duration: 00m 46s) [production]
02:22 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.8) (duration: 05m 44s) [production]
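(Context for the db1097:3314 and db1021 entries above: pooling, reweighting, and removing a replica are plain edits to the load arrays in wmf-config/db-eqiad.php, synced out with scap. The sketch below is illustrative only; the section names, weights, and address are placeholders, not the production values.)

```php
<?php
// Illustrative sketch of the kind of db-eqiad.php edits logged above; section
// names, weights, and the IP are placeholders, not the real production config.

// A newly pooled multi-instance replica starts with a low read weight...
$wgLBFactoryConf['sectionLoads']['s4']['db1097:3314'] = 50;

// ...and the later "Increase weight" / "Fully pool" syncs raise it to match its peers.
$wgLBFactoryConf['sectionLoads']['s4']['db1097:3314'] = 300;

// Multi-instance replicas are addressed as "host:port", so they also need a
// name-to-address mapping for the load balancer (placeholder IP).
$wgLBFactoryConf['hostsByName']['db1097:3314'] = '10.64.0.0:3314';

// Decommissioning a host (db1021 above) is the reverse: its entries are simply
// deleted from the load and host maps ('s2' is a placeholder section here).
unset(
    $wgLBFactoryConf['sectionLoads']['s2']['db1021'],
    $wgLBFactoryConf['hostsByName']['db1021']
);
```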
2017-11-25
19:31 <marostegui> Set disk 32:3 to offline on db1051 [production]
16:09 <kartik@tin> Finished deploy [cxserver/deploy@11aecc9]: Update cxserver to 0c242c0, Pin service-runner to 2.4.2 (duration: 03m 29s) [production]
16:05 <kartik@tin> Started deploy [cxserver/deploy@11aecc9]: Update cxserver to 0c242c0, Pin service-runner to 2.4.2 [production]
16:05 <godog> unban statsd traffic from scb on graphite1001 - T181333 [production]
15:45 <ppchelko@tin> Finished deploy [cpjobqueue/deploy@e35aa05]: Rollback. Disable GC metric reporting T181333 (duration: 00m 31s) [production]
15:45 <ppchelko@tin> Started deploy [cpjobqueue/deploy@e35aa05]: Rollback. Disable GC metric reporting T181333 [production]
15:37 <volans> restarted statsd-proxy on graphite1001 (died during investigation) T181333 [production]
14:34 <godog> rolling restart of cxserver to alleviate metrics leak - T181333 [production]
14:26 <godog> restart cxserver on scb100[34] - T181333 [production]
14:10 <godog> roll-restart cpjobqueue to alleviate metrics leak - T181333 [production]
13:40 <godog> drop incoming statsd from scb to graphite1001 temporarily - T181333 [production]
08:32 <ariel@tin> Finished deploy [dumps/dumps@ec21673]: fix abstracts recombine job (duration: 00m 02s) [production]
08:32 <ariel@tin> Started deploy [dumps/dumps@ec21673]: fix abstracts recombine job [production]
2017-11-24
13:52 <moritzm> removing git packages from jessie-wikimedia/experimental (replaced by component/git) [production]
13:24 <moritzm> installing openjpeg2 updates (the original security update was already installed after the initial release, but there was a binNMU for amd64) [production]
13:17 <marostegui> Stop replication on db1097 to reimport and recompress commonswiki.watchlist [production]
12:54 <jynus> reenabling puppet on db1071 [production]
12:50 <jynus> resetting replication on es1011 for consistency with other replica sets [production]
12:40 <jynus> setting up s8 topology on eqiad [production]
12:38 <jynus> disable puppet on db1071 and stop local s5 heartbeat there [production]
12:32 <reedy@tin> Synchronized docroot/mediawiki/keys/: Fixup keys (duration: 00m 45s) [production]
12:13 <marostegui> Enable GTID on es2018 - T181293 [production]
11:57 <marostegui> Disable puppet on es2018 - T181293 [production]
11:50 <jynus@tin> Synchronized wmf-config/db-codfw.php: depool es2018 T181293 (duration: 00m 45s) [production]
11:48 <marostegui> Reboot es2018 after full-upgrade - T181293 [production]
11:25 <marostegui> Restart mysql on es2018 [production]
11:25 <jynus@tin> Synchronized wmf-config/db-eqiad.php: db2085:3318, db2086:3318 (duration: 00m 43s) [production]
11:24 <jynus@tin> Synchronized wmf-config/db-codfw.php: Pool db2038, db2085:3318, db2086:3318 (duration: 00m 45s) [production]
10:55 <marostegui> Restart MySQL on db2086 to move s5 to s8 [production]
10:37 <jynus> cancelling db2085 restart, only doing mysql:s5 [production]
10:35 <jynus> restarting db2085 (including both s5 and s3 instances) [production]
10:33 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool all future s8 slaves for a topology change - T177208 (duration: 00m 45s) [production]
10:22 <moritzm> installing ca-certificates updates on trusty hosts [production]
09:57 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1097:3315 and db1092 (duration: 00m 45s) [production]
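(Context for the s8 entries above, e.g. "setting up s8 topology on eqiad" and "Depool all future s8 slaves for a topology change": a new section shows up in the db config as an entry in 'sectionsByDB' plus its own load arrays. A hedged sketch with placeholder hosts and weights follows; at the time, s8 was being prepared to take wikidatawiki out of s5.)

```php
<?php
// Hedged sketch of roughly what adding a new section (s8) to db-eqiad.php
// involves; host names and weights are placeholders, not the production values.

// Which wiki database lives in the new section.
$wgLBFactoryConf['sectionsByDB']['wikidatawiki'] = 's8';

// The new section gets its own master (read weight 0) and replicas.
$wgLBFactoryConf['sectionLoads']['s8'] = [
    'db10AA'      => 0,     // section master (placeholder name)
    'db10BB'      => 300,   // dedicated replica (placeholder)
    'db10CC:3318' => 300,   // multi-instance replica on port 3318 (placeholder)
];
```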
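(Context for the es2018 maintenance above: external storage hosts are pooled through the 'externalLoads' map in db-codfw.php, keyed by cluster, so depooling one for an upgrade and reboot means dropping its entry and restoring it afterwards. The cluster name and peer hosts below are placeholders.)

```php
<?php
// Hedged sketch of depooling an external storage host in db-codfw.php before a
// full-upgrade and reboot; cluster name and peer hosts are placeholders.
$wgLBFactoryConf['externalLoads']['es3'] = [
    'es20AA' => 1,          // remaining replicas keep serving reads (placeholders)
    'es20BB' => 1,
    // 'es2018' => 1,       // entry removed while the host is upgraded and rebooted (T181293)
];
```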