2018-04-04
15:44 <jynus> starting backup from es2015 (will create lag) [production]
15:37 <jynus@tin> Synchronized wmf-config/db-codfw.php: Depool es2015 (duration: 01m 17s) [production]
15:19 <mobrovac@tin> Synchronized wmf-config/jobqueue.php: Clean up config for the rest of high-traffic jobs after the switch - T190327 (duration: 01m 16s) [production]
15:14 <madhuvishy> Update ttl for dumps.wikimedia.org CNAME to 1M in prep for switchover to labstore1007 T188646 [production]
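(Lowering the CNAME TTL ahead of the labstore1007 switchover keeps resolvers from caching the old answer for long once the record is repointed. A quick illustrative check of the record and its remaining TTL, not itself part of the log, assuming a host with dig installed:)
    # show the CNAME for dumps.wikimedia.org together with its current TTL in seconds
    dig +noall +answer dumps.wikimedia.org CNAME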
15:07 <mobrovac@tin> Started restart [restbase/deploy@f3a53b6]: Pick up the net.ipv4.tcp_tw_reuse flag change - T190213 [production]
15:06 <elukey> delete /srv/deployment/prometheus from restbase* as clean up step for T181728 [production]
14:30 <anomie> Running populateArchiveRevId.php on group0 wikis for T191307 [production]
14:20 <elukey> apply net.ipv4.tcp_tw_reuse=1 to restbase* via https://gerrit.wikimedia.org/r/#/c/421901 - T190213 [production]
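(A minimal sketch of what that sysctl change looks like on a single host, assuming shell access; on restbase* it was rolled out via the linked Gerrit change, and the persistent file name below is hypothetical:)
    # inspect the current value, then set it for the running kernel
    sudo sysctl net.ipv4.tcp_tw_reuse
    sudo sysctl -w net.ipv4.tcp_tw_reuse=1
    # persist across reboots (hypothetical file name; configuration management owns the real one)
    echo 'net.ipv4.tcp_tw_reuse = 1' | sudo tee /etc/sysctl.d/70-tcp-tw-reuse.conf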
14:15 <moritzm> updating deployment-prep to HHVM 3.18.5+wmf6 [production]
14:11 <godog> purge cron smart-data-dump from lvs100[1-6] [production]
14:09 <marostegui> Deploy schema change on db1099:3311 - T187089 T185128 T153182 [production]
14:08 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1099:3311 for alter table (duration: 01m 16s) [production]
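(The "Synchronized wmf-config/db-eqiad.php" entries correspond to the usual depool workflow on the deployment host; a rough sketch, assuming scap's sync-file subcommand and the standard staging path:)
    # on the deployment host: edit the db config, then push the single file to the fleet
    cd /srv/mediawiki-staging
    # ... set db1099:3311 to zero weight / comment it out in wmf-config/db-eqiad.php ...
    scap sync-file wmf-config/db-eqiad.php 'Depool db1099:3311 for alter table'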
14:08 <moritzm> uploaded HHVM 3.18.5+wmf6 to component/icu57 for jessie-wikimedia (updated build with the security fix for CVE-2018-6334) [production]
13:59 <marostegui> Deploy schema change on dbstore1002:s1 - T187089 T185128 T153182 [production]
13:56 <godog> rollout https://gerrit.wikimedia.org/r/c/423852 across ms-fe machines - T183902 [production]
13:32 <zeljkof> EU SWAT finished [production]
13:29 <zfilipin@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:423911|Revert "Add namespace to euwiki" (T191396)]] (duration: 01m 14s) [production]
13:08 <godog> upgrade smartmontools to -backports version after https://gerrit.wikimedia.org/r/c/423871/ [production]
12:02 <elukey> removing /srv/deployment/prometheus from restbase2001/1007 - T181728 [production]
12:00 <akosiaris> revert scb hosts to apertium-fra-cat_1.2.0~r78602-1+wmf2 [production]
11:47 <marostegui@tin> Synchronized wmf-config/db-codfw.php: db2057 is now a candidate master for s3 - T191275 (duration: 01m 17s) [production]
11:13 <akosiaris> upgrade apertium on all scb hosts. Rolling update with in groups of 2 hosts with a 30 seconds delay [production]
11:06 <marostegui> Stop MySQL on db2057 for binlog format change, mariadb and kernel upgrade [production]
11:02 <akosiaris> upgrade apertium on scb1001 [production]
09:46 <marostegui> Deploy schema change on s1 codfw master db2048 (this will generate lag on codfw) - T187089 T185128 T153182 [production]
09:30 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Restore original weight for db1077 (duration: 01m 16s) [production]
09:25 <Amir1> end of the deleteAutoPatrolLogs.php script on mediawikiwiki (T184485) [production]
09:24 <marostegui@tin> Synchronized wmf-config/db-codfw.php: db2041 is now a candidate master for s2 - T191275 (duration: 01m 16s) [production]
09:16 <elukey> executed systemctl reset-failed kafka-mirror-main-eqiad_to_jumbo-eqiad.service on kafka1020 [production]
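(reset-failed only clears systemd's record of a failed unit, e.g. so monitoring stops flagging it; a minimal sketch, assuming the underlying issue has already been handled:)
    # list units currently in the failed state, then clear one of them
    systemctl --failed
    sudo systemctl reset-failed kafka-mirror-main-eqiad_to_jumbo-eqiad.service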
09:02 <Amir1> start of mwscript deleteAutoPatrolLogs.php --wiki=mediawikiwiki --before 20180223210426 --sleep 2 (T184485) [production]
09:02 <marostegui> Stop MySQL on db2041 for binlog format change and kernel upgrade [production]
09:01 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2041 (duration: 01m 17s) [production]
08:46 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Restore original weight for db1072 (duration: 01m 17s) [production]
08:19 <Amir1> start of ladsgroup@terbium:~$ mwscript deleteAutoPatrolLogs.php --wiki=mediawikiwiki --check-old --before 20160423210426 (T184485) [production]
08:17 <Amir1> start of ladsgroup@terbium:~$ mwscript deleteAutoPatrolLogs.php --wiki=mediawikiwiki --dry-run --check-old --before 20160423210426 [production]
08:08 <marostegui> Deploy schema change on s3 primary master (db1075) - T153182 T185128 [production]
08:05 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Slowly repool db1072 (duration: 01m 17s) [production]
07:59 <godog> depool ms-fe2005 to test rewrite.py - T183902 [production]
07:53 <marostegui> Drop flaggedrevs from s3 mediawikiwiki - T186865 [production]
07:37 <marostegui@tin> Synchronized wmf-config/db-codfw.php: db2055 is now a candidate master - T191275 (duration: 01m 16s) [production]
07:37 <moritzm> running some apache/stretch tests on mw2261 [production]
07:36 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2083 - T188279 (duration: 01m 17s) [production]
07:30 <ema> finish up cache@eqiad reboots for retpoline kernel updates T188092 [production]
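(After each reboot, the new kernel's Spectre v2 mitigation can be confirmed from sysfs; an illustrative check, not taken from the log:)
    # report the running kernel and whether it mitigates Spectre v2 with retpolines
    uname -r
    cat /sys/devices/system/cpu/vulnerabilities/spectre_v2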
07:26 <marostegui> Restart MySQL on db2055 to change its binlog to STATEMENT - T191275 [production]
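(A sketch of verifying the binlog format change around the restart, assuming local mysql client access; the persistent setting itself lives in the server configuration as binlog_format = STATEMENT:)
    # check the format, restart the server so the config change takes effect, check again
    sudo mysql -e "SELECT @@global.binlog_format;"
    sudo systemctl restart mariadb
    sudo mysql -e "SELECT @@global.binlog_format;"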
05:59 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db2083 - T188279 (duration: 01m 17s) [production]
05:48 <marostegui> Deploy schema change on db1072 - s3 - with replication. This will generate lag on labs T187089 T185128 T153182 [production]
05:43 <marostegui> Drop click_tracking_events table from where it still exists - T115982 [production]
05:21 <marostegui> Stop mariadb for upgrade and kernel upgrade on db1072 - this will generate lag on s3 labs [production]
05:19 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1072 for alter table, kernel and mariadb upgrade (duration: 01m 17s) [production]
02:32 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.27) (duration: 05m 31s) [production]