2017-05-30
06:23 <marostegui> Deploy alter table on s3 dbstore2001 - T166278 [production]
02:49 <l10nupdate@tin> ResourceLoader cache refresh completed at Tue May 30 02:49:20 UTC 2017 (duration 6m 44s) [production]
02:42 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.2) (duration: 07m 54s) [production]
02:22 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.1) (duration: 08m 22s) [production]
2017-05-29
20:04 <mobrovac@tin> Started restart [zotero/translation-server@50f216a]: Memory at 50% [production]
19:56 <gehel> removing wdqs1002 from LVS pending investigation of T166524 [production]
19:55 <gehel@puppetmaster1001> conftool action : set/pooled=no; selector: name=wdqs1002.eqiad.wmnet [production]
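The conftool entry above records wdqs1002 being set to pooled=no (selector: name=wdqs1002.eqiad.wmnet, action: set/pooled=no). As a rough, non-authoritative illustration of what such a depool amounts to, here is a minimal Python sketch that shells out to conftool's confctl CLI; the select/set invocation mirrors the logged action but is an assumption about the exact command line, not a transcript of it.

```python
# Hedged sketch: set the pooled state of a backend via confctl, assuming a
# "confctl select <selector> set/<key>=<value>" interface as reflected by the
# log line above. Not the actual WMF depool tooling.
import subprocess

def set_pooled(fqdn: str, pooled: bool) -> None:
    """Flip the pooled flag for one host in conftool/etcd."""
    action = "set/pooled={}".format("yes" if pooled else "no")
    subprocess.run(
        ["confctl", "select", f"name={fqdn}", action],
        check=True,
    )

if __name__ == "__main__":
    # Mirror of the logged action: take wdqs1002 out of the LVS pool.
    set_pooled("wdqs1002.eqiad.wmnet", pooled=False)
```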
18:57 <gehel> restarting wdqs-updater on wdqs1002 [production]
17:40 <volans> re-enabled puppet on tegmen and re-enabled raid_handler T163998 [production]
17:29 <volans> disabled puppet on tegmen and disabled raid_handler temporarily T163998 [production]
15:02 <gehel> restarting wdqs-updater on wdqs1002 [production]
14:33 <moritzm> rebooting multatuli for systemd modules-load.d debugging [production]
14:24 <godog> upgrade prometheus-hhvm-exporter to 0.3-1 in codfw/eqiad with less verbose logging - T158286 [production]
14:15 <gehel> reset remote for elasticsearch/plugins deployment - T163708 [production]
14:14 <marostegui> Stop MySQL labsdb1009 to take a backup - T153743 [production]
14:04 <gehel> starting upgrade to elasticsearch 5.3.2 on cirrus codfw cluster - T163708 [production]
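A rolling Elasticsearch upgrade like the one started here typically disables shard allocation before each node is restarted and re-enables it afterwards. The sketch below uses the official elasticsearch-py client to show that pattern; the host is a placeholder and this is not the actual tooling used for T163708.

```python
# Hedged sketch of the usual per-node dance during a rolling Elasticsearch
# upgrade: pause shard allocation, upgrade/restart the node out of band,
# re-enable allocation, then wait for the cluster to return to green.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder host

def set_allocation(mode: str) -> None:
    es.cluster.put_settings(
        body={"transient": {"cluster.routing.allocation.enable": mode}}
    )

set_allocation("none")   # keep shards in place while the node is down
# ... upgrade the elasticsearch package and restart the node here ...
set_allocation("all")    # let shards rebalance again
es.cluster.health(wait_for_status="green", request_timeout=600)
```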
14:03 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2036 - T166278 (duration: 00m 41s) [production]
14:01 <marostegui> Deploy alter table s3 on codfw master db2018 - T166278 [production]
13:42 <moritzm> updating gdb on mw* servers [production]
13:10 <marostegui> Stop replication on db1070 to flush tables for export - T153743 [production]
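The 13:07/13:10 pair is the usual way to get a consistent copy off a replica: depool it, stop replication, flush and lock the tables, take the copy, then unlock and resume. A minimal Python sketch of that quiescing step with pymysql follows; host and credentials are placeholders, not the production setup.

```python
# Hedged sketch: quiesce a depooled replica before copying its data files.
import pymysql

conn = pymysql.connect(host="db1070.example", user="root", password="...")
with conn.cursor() as cur:
    cur.execute("STOP SLAVE")
    cur.execute("FLUSH TABLES WITH READ LOCK")
    # ... take the backup / file copy while the read lock is held ...
    cur.execute("UNLOCK TABLES")
    cur.execute("START SLAVE")
conn.close()
```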
13:07 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1070 - T153743 (duration: 00m 41s) [production]
13:02 <akosiaris> enable puppet across eqiad/esams after puppetmaster upgrade. [production]
12:52 <akosiaris> disable puppet across eqiad/esams for puppetmaster upgrade. This should avoid any irc spam about failed puppet agent runs [production]
12:52 <akosiaris> enable puppet across codfw/ulsfo after puppetmaster upgrade [production]
12:41 <akosiaris> disable puppet across codfw/ulsfo for puppetmaster upgrade. This should avoid any irc spam about failed puppet agent runs [production]
12:36 <moritzm> installing imagemagick security updates on jessie [production]
12:31 <akosiaris> update kubernetes policy-options on cr{1,2}-{eqiad,codfw}. T165732 [production]
10:39 <moritzm> installing fop security updates [production]
10:18 <ema> upgrade nginx to 1.11.10-1+wmf1 on hassium and hassaleh [production]
09:53 <moritzm> upgrade remaining mw* hosts already running HHVM 3.18 to 3.18.2+dfsg-1+wmf4 [production]
09:22 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1045 (duration: 00m 41s) [production]
09:01 <marostegui> Drop gather tables from: testwiki, test2wiki, enwikivoyage, hewiki, enwiki - T166097 [production]
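Dropping the leftover Gather extension tables on each of the listed wikis is a simple loop over databases. The sketch below illustrates that with pymysql; the table names are assumptions for illustration only (T166097 has the authoritative list), and the host is a placeholder.

```python
# Hedged sketch: drop leftover Gather extension tables from a list of wikis.
# Table names are assumed, not taken from the log or the task.
import pymysql

WIKIS = ["testwiki", "test2wiki", "enwikivoyage", "hewiki", "enwiki"]
TABLES = ["gather_list", "gather_list_item"]  # assumed names

conn = pymysql.connect(host="db-master.example", user="root", password="...")
with conn.cursor() as cur:
    for wiki in WIKIS:
        for table in TABLES:
            cur.execute(f"DROP TABLE IF EXISTS `{wiki}`.`{table}`")
conn.close()
```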
08:02 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Remove db1023 - T166486 (duration: 00m 41s) [production]
08:02 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Remove db1023 - T166486 (duration: 00m 42s) [production]
07:38 <marostegui> Stop MySQL on db1095 to take a backup - this will make labsdb1009,10 and 11 break replication while it is down - T153743 [production]
07:01 <_joe_> re-enabling scap on mw2140, T166328 [production]
06:45 <_joe_> restarting changeprop on scb1002, using 15 gigs of RAM [production]
06:42 <marostegui> Deploy alter table s3 - dbstore2002 - T166278 [production]
06:41 <marostegui> Deploy alter table s4 - dbstore1002 - T166206 [production]
06:33 <_joe_> trying to restart pdfrender on scb1002 [production]
06:32 <marostegui> Deploy alter table s3 - db2036 - T166278 [production]
06:32 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2043, depool db2036 - T166278 (duration: 01m 44s) [production]
06:29 <_joe_> powercycling mw1294 [production]
06:11 <marostegui> Deploy alter table on s4 db1084 - T166206 [production]
06:10 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1091, depool db1084 - T166206 (duration: 02m 45s) [production]
06:01 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1091 - T166206 (duration: 03m 01s) [production]
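Read bottom-up (the log is reverse-chronological), the 06:01–06:42 entries show the standard rolling schema-change cycle: depool one replica, run the ALTER there, repool it, and move on to the next host. A minimal Python sketch of that loop follows; hosts, the ALTER statement, and the depool/repool helpers are placeholders standing in for the db-eqiad.php edit plus scap sync-file seen above.

```python
# Hedged sketch of the depool -> ALTER TABLE -> repool cycle visible in the
# 06:01-06:42 entries. depool()/repool() are stand-ins for the real config
# change and sync; the DDL is a placeholder (see T166206 for the real one).
import pymysql

REPLICAS = ["db1091.example", "db1084.example"]  # placeholder hostnames
ALTER = "ALTER TABLE revision ..."               # placeholder DDL

def depool(host: str) -> None:
    print(f"depool {host}")   # in production: edit db-eqiad.php + scap sync-file

def repool(host: str) -> None:
    print(f"repool {host}")

for host in REPLICAS:
    depool(host)
    conn = pymysql.connect(host=host, user="root", password="...")
    with conn.cursor() as cur:
        cur.execute(ALTER)
    conn.close()
    repool(host)
```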
05:54 <marostegui> Restart MySQL on db1047 - T166452 [production]
02:24 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.1) (duration: 08m 20s) [production]
2017-05-28
13:19 <jynus> restart db1069:3313 mysql instance, stuck on replication [production]
02:24 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.1) (duration: 08m 46s) [production]