2017-05-30
11:14 <godog> upgrade grafana to 4.3.1 on krypton [production]
10:44 <gilles> run refreshFileHeaders for group 0 wikis on Terbium [production]
10:32 <akosiaris> enable calico IPv6 BGP peering for cr1-eqiad [production]
10:18 <jynus> stopping and backing up db2048 in preparation for reimage [production]
09:50 <ema> upgrade prometheus-node-exporter to 0.14.0~git20170523-0 on Debian systems [production]
09:43 <jynus> restarting db2055 for mariadb and kernel upgrade [production]
08:23 <elukey> restart jmxtrans on all the kafka brokers (analytics+main-codfw/eqiad) for jvm upgrades [production]
08:17 <elukey> restart kafka on kafka1018 for jvm upgrades [production]
07:38 <gehel@puppetmaster1001> conftool action : set/pooled=yes; selector: name=wdqs1002.eqiad.wmnet [production]
07:38 <gehel> wdqs1002 back in LVS - T166524 [production]
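A minimal sketch of the confctl invocation behind pooling entries like the one above, assuming the standard conftool CLI; the selector and field values are taken from the log entries themselves:

    # take wdqs1002 out of the LVS pool pending investigation
    confctl select 'name=wdqs1002.eqiad.wmnet' set/pooled=no
    # put it back once the service checks out
    confctl select 'name=wdqs1002.eqiad.wmnet' set/pooled=yes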
07:09 <marostegui> Deploy alter table on enwiki.revision on db1047 - T166452 [production]
06:45 <marostegui> Deploy alter table on s3 db1038 - T166278 [production]
06:41 <marostegui> Deploy alter table on s3 dbstore1002 - https://phabricator.wikimedia.org/T166278 [production]
06:35 <marostegui> Deploy alter table s4 - db1081 - https://phabricator.wikimedia.org/T166206 [production]
06:35 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1084, depool db1081 - T166206 (duration: 00m 59s) [production]
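The "Synchronized wmf-config/..." entries are scap single-file syncs; a minimal sketch, assuming scap's sync-file subcommand, with the logged message serving as the sync annotation:

    # push one config file to the cluster, recording the reason
    scap sync-file wmf-config/db-eqiad.php 'Repool db1084, depool db1081 - T166206'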
06:23 <marostegui> Deploy alter table on s3 dbstore2001 - T166278 [production]
02:49 <l10nupdate@tin> ResourceLoader cache refresh completed at Tue May 30 02:49:20 UTC 2017 (duration 6m 44s) [production]
02:42 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.2) (duration: 07m 54s) [production]
02:22 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.1) (duration: 08m 22s) [production]
2017-05-29
20:04 <mobrovac@tin> Started restart [zotero/translation-server@50f216a]: Memory at 50% [production]
19:56 <gehel> removing wdqs1002 from LVS pending investigation of T166524 [production]
19:55 <gehel@puppetmaster1001> conftool action : set/pooled=no; selector: name=wdqs1002.eqiad.wmnet [production]
18:57 <gehel> restarting wdqs-updater on wdqs1002 [production]
17:40 <volans> re-enabled puppet on tegmen and re-enabled raid_handler T163998 [production]
17:29 <volans> disabled puppet on tegmen and disabled raid_handler temporarily T163998 [production]
15:02 <gehel> restarting wdqs-updater on wdqs1002 [production]
14:33 <moritzm> rebooting multatuli for systemd modules-load.d debugging [production]
14:24 <godog> upgrade prometheus-hhvm-exporter to 0.3-1 in codfw/eqiad with less verbose logging - T158286 [production]
14:15 <gehel> reset remote for elasticsearch/plugins deployment - T163708 [production]
14:14 <marostegui> Stop MySQL labsdb1009 to take a backup - T153743 [production]
14:04 <gehel> starting upgrade to elasticsearch 5.3.2 on cirrus codfw cluster - T163708 [production]
14:03 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2036 - T166278 (duration: 00m 41s) [production]
14:01 <marostegui> Deploy alter table s3 on codfw master db2018 - T166278 [production]
13:42 <moritzm> updating gdb on mw* servers [production]
13:10 <marostegui> Stop replication on db1070 to flush tables for export - T153743 [production]
13:07 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1070 - T153743 (duration: 00m 41s) [production]
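A sketch of the depool-then-export pattern in the two entries above; the exact statements and the table name are illustrative assumptions, not taken from the log:

    # with db1070 depooled, stop replication so the data stops changing
    sudo mysql -e "STOP SLAVE;"
    # FLUSH ... FOR EXPORT must run in a session that stays open while the
    # tablespace files are copied ('revision' is a placeholder table name):
    sudo mysql enwiki
        FLUSH TABLES revision FOR EXPORT;
        -- copy the .ibd/.cfg files from another shell, then release:
        UNLOCK TABLES;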
13:02 <akosiaris> enable puppet across eqiad/esams after puppetmaster upgrade. [production]
12:52 <akosiaris> disable puppet across eqiad/esams for puppetmaster upgrade. This should avoid any IRC spam about failed puppet agent runs [production]
12:52 <akosiaris> enable puppet across codfw/ulsfo after puppetmaster upgrade [production]
12:41 <akosiaris> disable puppet across codfw/ulsfo for puppetmaster upgrade. This should avoid any IRC spam about failed puppet agent runs [production]
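The four puppet entries above follow the usual pattern for a puppetmaster upgrade: silence the agents in one half of the fleet, upgrade, re-enable, then repeat for the other half. Per host this is the standard puppet agent toggle (the fleet fan-out tooling is omitted here):

    # before the upgrade, on each agent host in the affected sites:
    puppet agent --disable 'puppetmaster upgrade'
    # ... upgrade the puppetmaster ...
    puppet agent --enable
    # optionally trigger a run to confirm agents are healthy again
    puppet agent --test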
12:36 <moritzm> installing imagemagick security updates on jessie [production]
12:31 <akosiaris> update kubernetes policy-options on cr{1,2}-{eqiad,codfw}. T165732 [production]
10:39 <moritzm> installing fop security updates [production]
10:18 <ema> upgrade nginx to 1.11.10-1+wmf1 on hassium and hassaleh [production]
09:53 <moritzm> upgrade remaining mw* hosts already running HHVM 3.18 to 3.18.2+dfsg-1+wmf4 [production]
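Upgrades to an exact package build, as in the HHVM entry above, are typically done by pinning the version on the apt command line; a minimal sketch, assuming the Debian package name hhvm:

    # install exactly the wmf4 build on a jessie host
    apt-get install -y hhvm=3.18.2+dfsg-1+wmf4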
09:22 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1045 (duration: 00m 41s) [production]
09:01 <marostegui> Drop gather tables from: testwiki, test2wiki, enwikivoyage, hewiki, enwiki - T166097 [production]
08:02 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Remove db1023 - T166486 (duration: 00m 41s) [production]
08:02 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Remove db1023 - T166486 (duration: 00m 42s) [production]
07:38 <marostegui> Stop MySQL on db1095 to take a backup - this will make labsdb1009, labsdb1010 and labsdb1011 break replication while it is down - T153743 [production]