2017-04-27
09:46 <moritzm> upgrading mysql on bohrium/piwik [production]
09:25 <_joe_> restarting all redis instances for jobqueues on eqiad to force a full resync with masters in codfw T163337 [production]
08:55 <jynus> deploying alter table to all wikis on s6 T163979 [production]
08:54 <_joe_> restarting redis rdb1001:6380 after cleaning up the current AOF files for investigation of T163337 [production]
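For the redis jobqueue restarts above (T163337), a minimal sketch of how a full resync against the codfw master could be verified afterwards (instance port taken from the entry above; the check itself is an illustration, not the procedure actually used):

    # on the restarted host, confirm the instance is a connected slave and no resync is still in progress
    redis-cli -p 6380 info replication | grep -E 'role|master_link_status|master_sync_in_progress'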
08:50 <moritzm> installing django security updates [production]
08:29 <godog> ms-be1039 issue "controller slot=3 pd 1I:1:5 modify disablepd" to force failed sdc - T163690 [production]
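The disablepd string quoted above is HP Smart Array CLI syntax; a hedged sketch of the full invocation on ms-be1039 (assuming the hpssacli utility, which the entry does not name):

    # force physical drive 1I:1:5 on controller slot 3 offline so the failed sdc drops out
    sudo hpssacli controller slot=3 pd 1I:1:5 modify disablepd
    # then confirm the drive state
    sudo hpssacli controller slot=3 pd all show status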
08:25 <ema> restart varnish-be on cp2024 with expiry thread RT experiment enabled [production]
08:19 <ema> upgrade varnish to 4.1.5-1wm3 on cp2024 [production]
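A minimal sketch of the package upgrade on cp2024, assuming the 4.1.5-1wm3 build was already published to the internal apt repository (see the apt.w.o entry below):

    # pull the new build and pin the exact version
    sudo apt-get update
    sudo apt-get install varnish=4.1.5-1wm3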
07:56 <elukey> aqs100[69] back serving AQS traffic [production]
07:55 <ema> varnish 4.1.5-1wm3 uploaded to apt.w.o T145661 [production]
07:16 <marostegui@naos> Synchronized wmf-config/db-eqiad.php: Repool hosts that needed to be moved for the network maintenance - T162681 (duration: 02m 32s) [production]
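The Synchronized lines in this log are produced by scap; a hedged sketch of the repool sync as run from the deployment host (assumes wmf-config/db-eqiad.php has already been edited and committed):

    # push the single config file to the cluster with the log message recorded above
    scap sync-file wmf-config/db-eqiad.php 'Repool hosts that needed to be moved for the network maintenance - T162681'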
06:53 <marostegui> Reboot es1014 for kernel upgrade - T162029 [production]
06:50 <elukey> executed kafka preferred-replica-election to rebalance topic leaders in the analytics cluster after maintenance [production]
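A preferred-replica election can be triggered with stock Kafka tooling; a sketch only, with an illustrative ZooKeeper connect string rather than the real analytics cluster chroot:

    # ask the controller to hand partition leadership back to the preferred replicas
    kafka-preferred-replica-election.sh --zookeeper conf1001.eqiad.wmnet:2181/kafka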
06:45 <marostegui> Reboot es1011 for kernel upgrade - T162029 [production]
06:39 <marostegui> Logging for the record: drop table hashs from s2, s3 and s7 (only places where it existed) - T54927 [production]
06:23 <_joe_> moving orphaned objects in ms-be1039's root partition in sdc1/stale_root to save space [production]
06:17 <marostegui> Deploy schema change on s7 metawiki.pagelinks to remove partitioning on db1041 - T153300 [production]
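As a hedged illustration of what removing partitioning means at the SQL level (the direct mysql invocation and host are assumptions; in practice this is normally wrapped in the schema-change tooling):

    # REMOVE PARTITIONING rebuilds the table as a single unpartitioned table
    sudo mysql -h db1041.eqiad.wmnet metawiki -e 'ALTER TABLE pagelinks REMOVE PARTITIONING;'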
06:14 <marostegui> Deploy alter table on s5 (wikidatawiki) on db1049 - T163548 [production]
06:14 <marostegui> Deploy alter table on s5 (wikidatawiki) on db1070 (running locally instead of neodymium as this host will be affected by the network maintenance) - T163548 [production]
06:11 <marostegui> Deploy alter table on s5 (wikidatawiki) on db1070 (running locally instead of neodymium as this host will be affected by the network maintenance) - T130067 T162539 [production]
06:08 <marostegui> Deploy alter table on s5 (wikidatawiki) on db1049 - T130067 T162539 [production]
05:59 <marostegui> Deploy alter table labsdb1003 (wikidatawiki) https://phabricator.wikimedia.org/T162539 https://phabricator.wikimedia.org/T163548 [production]
05:24 <Amir1> cleaning some rows in ores_classification in enwiki (T159753) [production]
03:44 <ottomata> starting kafka broker on kafka1020 [production]
03:40 <ottomata> running kafka replica election to bring kafka1018 back as preferred leader [production]
02:21 <Jamesofur> running populateEditCount.php in screen on wast for T163854, counting edits for board vote eligibility [production]
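Maintenance scripts like this are typically launched through mwscript inside a screen session so they survive a disconnect; a sketch with an illustrative wiki and session name (the actual target wikis are not recorded in the entry):

    screen -S editcounts
    mwscript populateEditCount.php --wiki=metawiki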
02:16 <RoanKattouw> Reset 2FA for T163931 on labswiki [production]
00:14 <twentyafterfour> starting phabricator update [production]
00:05 <ebernhardson@naos> Synchronized php-1.29.0-wmf.21/extensions/CirrusSearch/includes/Searcher.php: cirrus: align sister search boost template config variable with documentation (duration: 00m 50s) [production]
2017-04-26
23:51 <niharika29@naos> Synchronized php-1.29.0-wmf.21/includes/interwiki/ClassicInterwikiLookup.php: Interwiki: Don't override interwiki map order (T145337) (duration: 01m 00s) [production]
23:38 <niharika29@naos> Synchronized php-1.29.0-wmf.21/extensions/CirrusSearch/: Align other index template boosting config names (duration: 00m 57s) [production]
23:34 <niharika29@naos> Synchronized wmf-config/InitialiseSettings.php: Increase max field count for wikidata; Enable Flow beta feature on arwiki (T155720) (duration: 00m 58s) [production]
23:31 <niharika29@naos> Synchronized wmf-config/InitialiseSettings.php: Increase max field count for wikidata; Enable Flow beta feature on arwiki (T155720) (duration: 01m 04s) [production]
23:29 <niharika29@naos> Synchronized wmf-config/CirrusSearch-common.php: [cirrus] Increase max field count for wikidata (duration: 01m 23s) [production]
21:42 <mutante> running puppet on all cache::misc nodes via cumin to switch ORES to eqiad [production]
21:30 <mutante> restarting uwsgi-ores service on all scb2* with systemctl restart [production]
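A hedged sketch of the two cumin runs above (the class selector, host glob and run-puppet-agent wrapper are assumptions about how they would be expressed):

    # run puppet on all cache::misc nodes to pick up the ORES switch to eqiad
    sudo cumin 'R:Class = Role::Cache::Misc' 'run-puppet-agent'
    # restart the ORES uwsgi service on every scb2* host
    sudo cumin 'scb2*' 'systemctl restart uwsgi-ores'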
21:15 <twentyafterfour> finished with mediawiki deployment train for group1. Everything appears stable, no increase in logspam. [production]
21:12 <twentyafterfour@naos> rebuilt wikiversions.php and synchronized wikiversions files: group1 wikis to 1.29.0-wmf.21 [production]
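The train promotion itself comes down to updating the wikiversions mapping and syncing it; a minimal sketch, assuming scap's sync-wikiversions subcommand and that the group1 entries were already switched to 1.29.0-wmf.21:

    # rebuild wikiversions.php from the updated mapping and push it to the cluster
    scap sync-wikiversions 'group1 wikis to 1.29.0-wmf.21'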
21:09 <halfak@naos> Started restart [ores/deploy@cc12103]: (no justification provided) [production]
21:07 <twentyafterfour@naos> Synchronized php-1.29.0-wmf.21/extensions/Flow/Hooks.php: sync https://gerrit.wikimedia.org/r/#/c/350481/ refs T163896 T161733 (duration: 01m 20s) [production]
21:05 <arlolra> Updated Parsoid to 4949857a (T116508, T64270, T133673) [production]
20:55 <arlolra@naos> Finished deploy [parsoid/deploy@8d109eb]: Updating Parsoid to 4949857a (duration: 06m 52s) [production]
20:48 <arlolra@naos> Started deploy [parsoid/deploy@8d109eb]: Updating Parsoid to 4949857a [production]
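Started/Finished deploy pairs like this one are emitted by scap3; a hedged sketch of the equivalent invocation (the deploy directory path is an assumption):

    cd /srv/deployment/parsoid/deploy
    scap deploy 'Updating Parsoid to 4949857a'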
20:48 <twentyafterfour> deploying https://gerrit.wikimedia.org/r/#/c/350481/1 to get the train back on track refs T161733 [production]
20:35 <bsitzmann@naos> Finished deploy [mobileapps/deploy@b5afcb8]: Update mobileapps to 14bd4a5 (duration: 15m 17s) [production]
20:34 <halfak@naos> Finished deploy [ores/deploy@cc12103]: T162892 (duration: 21m 28s) [production]
20:31 <elukey> restart zookeeper on conf1003 after network maintenance [production]
20:20 <bsitzmann@naos> Started deploy [mobileapps/deploy@b5afcb8]: Update mobileapps to 14bd4a5 [production]
20:12 <halfak@naos> Started deploy [ores/deploy@cc12103]: T162892 [production]
19:50 <elukey> restart kafka nodes (kafka1018 and kafka1020) after network maintenance [production]