2018-07-04
15:55 <marostegui> Optimize dewiki.logging on s5 codfw master with replication; this will generate lag on s5 codfw - T197459 [production]
15:52 <ema> cp300[3-6]: puppet node clean/deactivate T167376 [production]
15:50 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1089 after maintenance (duration: 00m 51s) [production]
14:52 <moritzm> installing libipc-run-perl updates from jessie point release [production]
14:36 <moritzm> installing perl security updates on trusty (Debian already fixed) [production]
14:25 <akosiaris> upgrade kubernetes staging API server to 1.8.14 [production]
14:22 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1089 after maintenance (duration: 00m 50s) [production]
14:10 <moritzm> installing file/libmagic security updates on trusty (Debian already fixed) [production]
13:54 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1089 after maintenance (duration: 00m 50s) [production]
13:41 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1089 after maintenance (duration: 00m 50s) [production]
13:34 <jynus@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2038, db2047 (duration: 02m 56s) [production]
12:56 <marostegui> Stop MySQL and reboot db1089 to upgrade it and switch it to statement-based replication - T197069 [production]
12:46 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1089 for maintenance - T197069 (duration: 02m 57s) [production]
12:39 <ema> cp3034 repooled after hw maintenance T189305 [production]
12:32 <volans> shutting down bast3002 for disk replacement [production]
12:04 <moritzm> installing ruby 1.9 security updates on trusty [production]
11:54 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1077 after alter table (duration: 00m 52s) [production]
11:54 <ema> repool cp3043 after hardware maintenance T179953 [production]
11:30 <ema> shutdown cp3048 and cp3034 (both already depooled) for hardware maintenance T190607 T189305 [production]
11:28 <moritzm> rolling restart of cassandra on restbase hosts in eqiad completed [production]
11:27 <moritzm> resuming rolling restart of cassandra on restbase hosts in eqiad [production]
11:17 <mark> cp3043: mdadm /dev/md0 --add /dev/sdc1 (sdc is former cp3048:sdb) [production]
11:09 <mark> cp3043: mdadm /dev/md0 --fail /dev/sdb1 [production]
11:03 <ema> depool cp3043 (cache_upload) for hardware maintenance T179953 [production]
10:58 <godog> update compiler facts [production]
10:53 <jynus> stop db2038 and db2047 [production]
10:46 <marostegui> Deploy schema change on db1077 with replication; this will generate lag on labs s3 T191316 T192926 T89737 T195193 [production]
10:42 <marostegui> Stop replication on db1077 to drop triggers on db1124:3313 - T192926 [production]
10:39 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1077 for alter table (duration: 00m 50s) [production]
10:28 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1123 after alter table (duration: 00m 50s) [production]
10:01 <moritzm> rolling reboot of sca* for "lazy fpu" kernel updates [production]
09:44 <_joe_> stopping all cronjobs via a puppet run on terbium, T192092 [production]
09:29 <akosiaris> upload kubernetes 1.8.14 to apt.wikimedia.org/stretch-wikimedia/main [production]
09:16 <moritzm> uploaded linux-meta 1.18 for jessie-wikimedia to apt.wikimedia.org [production]
09:15 <elukey> reimage aqs1009 to Debian Stretch [production]
09:09 <jynus@deploy1001> Synchronized wmf-config/db-codfw.php: Depool db2038, db2047 (duration: 00m 50s) [production]
09:03 <marostegui> Optimize recentchanges table on s2 codfw - this will generate lag on codfw s2 - T178290 [production]
09:00 <akosiaris> manually rebalance the mathoid kubernetes production cluster namespaces pod-wise [production]
08:56 <moritzm> uploaded linux 4.9.107~wmf1 for jessie-wikimedia to apt.wikimedia.org [production]
08:54 <marostegui> Deploy schema change on db1123 T191316 T192926 T89737 T195193 [production]
08:53 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1123 for alter table (duration: 00m 50s) [production]
08:53 <elukey> update analytics-in4 filter rules on cr1/cr2 eqiad - T198623 [production]
08:47 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1078 after alter table (duration: 00m 50s) [production]
08:19 <marostegui> Optimize recentchanges table on s6 codfw - this will generate lag on codfw s6 - T178290 [production]
08:09 <moritzm> rebooting multatuli for kernel update to 4.9.107~wmf1 [production]
08:00 <marostegui> Deploy schema change on db1078 T191316 T192926 T89737 T195193 [production]
07:59 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1078 for alter table (duration: 00m 50s) [production]
07:58 <ema> install misc VCL on all text hosts T164609 [production]
07:53 <volans> reimaging silver (spare host, to-be-decomm'ed) as testing host for the reimage script [production]
07:50 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1101:3317 after alter table (duration: 00m 53s) [production]