2018-04-02
15:56 <ebernhardson> restart elasticsearch on elastic1024, been stuck at 100% cpu for 3+ hours [production]
15:42 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Change db2035 IP - T191193 (duration: 01m 15s) [production]
15:40 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Change db2035 IP - T191193 (duration: 01m 15s) [production]
15:28 <marostegui> Stop MySQL and power off db2035 (s2 codfw master - this will stop replication on s2 codfw slaves) for rack change - T191193 [production]
15:06 <madhuvishy> Reenabled puppet and rolled out mounting new dumps NFS shares from labstore1006|7 on VPS instances T188643 [production]
14:40 <cmjohnson1> disabling puppet on decom host db1020 [production]
14:28 <madhuvishy> Disabling puppet across VPS instances with dumps mounted (https://phabricator.wikimedia.org/P6921) T188643 [production]
14:22 <marostegui> Drop contest* tables from s3 - T186867 [production]
14:12 <akosiaris@puppetmaster1001> conftool action : set/weight=15; selector: dc=eqiad,service=recommendation-api,cluster=scb,name=scb1003.* [production]
14:12 <akosiaris@puppetmaster1001> conftool action : set/weight=15; selector: dc=eqiad,service=recommendation-api,cluster=scb,name=scb1004.* [production]
14:10 <akosiaris> lower weight for scb1001, scb1002 from 10 to 8 for all services. T191199. scb1003, scb1004 have a weight of 15 already [production]
14:09 <akosiaris@puppetmaster1001> conftool action : set/weight=8; selector: dc=eqiad,cluster=scb,name=scb1002.* [production]
14:09 <akosiaris@puppetmaster1001> conftool action : set/weight=8; selector: dc=eqiad,cluster=scb,name=scb1001.* [production]
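The `conftool action` entries above are the audit trail of `confctl` invocations rebalancing pooled-server weights. A hedged sketch of what the operator likely ran (exact flag syntax depends on the conftool version deployed at the time):

```shell
# Lower the LVS weight of scb1001/scb1002 for all services on the
# eqiad scb cluster (matches the two set/weight=8 log entries above).
confctl select 'dc=eqiad,cluster=scb,name=scb1001.*' set/weight=8
confctl select 'dc=eqiad,cluster=scb,name=scb1002.*' set/weight=8

# Raise recommendation-api weight on scb1003/scb1004 (set/weight=15 entries).
confctl select 'dc=eqiad,service=recommendation-api,cluster=scb,name=scb1003.*' set/weight=15
confctl select 'dc=eqiad,service=recommendation-api,cluster=scb,name=scb1004.*' set/weight=15
```

Weights are relative: with scb1001/scb1002 at 8 and scb1003/scb1004 at 15, the latter pair absorbs roughly twice the traffic share per host.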
13:54 <ariel@tin> Finished deploy [dumps/dumps@0363d50]: add check that xml files don't have binary corruption (nulls) after the header (duration: 00m 04s) [production]
13:54 <ariel@tin> Started deploy [dumps/dumps@0363d50]: add check that xml files don't have binary corruption (nulls) after the header [production]
13:48 <twentyafterfour@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: Sync initializesettings for T190445 (duration: 01m 16s) [production]
13:36 <twentyafterfour@tin> Synchronized wmf-config/throttle.php: SWAT: Sync throttle rules for T191187 (duration: 01m 15s) [production]
13:30 <twentyafterfour@tin> Synchronized wmf-config/throttle.php: SWAT: Sync throttle rules for T191168 (duration: 01m 16s) [production]
13:27 <jynus> restarting pdfrender on scb1003 (Socket timeout) [production]
12:49 <akosiaris> upgrade mediawiki servers for hhvm upgrade [production]
12:06 <marostegui> Deploy schema change on dbstore1002 - s3 - T187089 T185128 T153182 [production]
11:51 <akosiaris> repool mediawiki canary servers after hhvm upgrade [production]
11:44 <akosiaris> depool mediawiki canary servers for hhvm upgrade [production]
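The canary depool/upgrade/repool cycle logged at 11:44/11:51 is the standard pattern for rolling out a runtime upgrade safely. A hedged sketch, assuming the conftool `depool`/`pool` wrapper scripts available on Wikimedia application servers (package name and verification step are placeholders):

```shell
# On each canary appserver:
sudo depool                 # remove host from the load balancer pools
sudo apt-get update
sudo apt-get install hhvm   # apply the upgrade (assumed package name)
sudo systemctl restart hhvm
# verify the host serves requests correctly before repooling
sudo pool                   # return host to service
```

Canaries take live traffic first, so a bad upgrade is caught on a small, quickly-revertible set of hosts before the fleet-wide rollout logged at 12:49.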
10:16 <jdrewniak@tin> Synchronized portals: Wikimedia Portals Update: [[gerrit:423456|Bumping portals to master (T128546)]] (duration: 01m 16s) [production]
10:15 <jdrewniak@tin> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:423456|Bumping portals to master (T128546)]] (duration: 01m 16s) [production]
09:13 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Remove references to virt1000 (duration: 01m 16s) [production]
09:12 <jynus@tin> Synchronized wmf-config/db-codfw.php: Remove references to virt1000 (duration: 01m 16s) [production]
08:50 <marostegui> Deploy schema change on s3 codfw master db2043 (this will generate lag on codfw) - T187089 T185128 T153182 [production]
08:21 <jynus> stop mariadb at labsdb1009 and labsdb1010 [production]
08:15 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Specify current m5 codfw master (duration: 01m 17s) [production]
11:51 <jynus> depool labsdb1011 from web wiki replicas [production]
07:21 <apergos> restarted pdfrender on scb1004 after poking around there a bit [production]
07:01 <apergos> restarted pdfrender on scb1001 and scb1002, service paged and no jobs were being processed [production]
06:06 <marostegui> Drop localisation table from the hosts where it still existed - T119811 [production]
02:50 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.26) (duration: 12m 53s) [production]
2018-03-31
21:15 <mutante> bast1001 has been shut down and decom'ed as planned. If you have any issues with shell access, make sure you have replaced it with bast1002 or any other bast host [production]
11:26 <urandom> removing corrupt commitlog segment, restbase1009-c [production]
11:25 <urandom> removing corrupt commitlog segment, restbase1009-b [production]
11:19 <urandom> starting restbase1009-c [production]
11:18 <urandom> truncating hints, restbase1009-a [production]
11:14 <urandom> restarting restbase1009-b [production]
11:13 <urandom> stopping restbase1009-a (high hints storage) [production]
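Read bottom-up, the restbase1009 entries above record a per-instance Cassandra recovery: stop the instance, clear accumulated hints, restart, and remove corrupt commitlog segments. A hedged sketch of the equivalent commands, assuming the multi-instance systemd unit naming used on Wikimedia restbase hosts:

```shell
# Instance "a": high hints storage -> stop, truncate hints, restart.
sudo systemctl stop cassandra-a
nodetool -h restbase1009-a.eqiad.wmnet truncatehints   # drop stored hints
sudo systemctl start cassandra-a

# Instances "b"/"c": corrupt commitlog segment -> remove the bad
# segment file while the instance is stopped, then start it again.
# (Segment filename elided; it appears in the startup error log.)
sudo systemctl stop cassandra-b
# sudo rm /srv/cassandra-b/commitlog/CommitLog-<segment>.log
sudo systemctl start cassandra-b
```

Truncating hints discards queued hinted handoffs, so a repair may be needed afterwards to restore full consistency on the affected replicas.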
2018-03-30
14:16 <akosiaris> T189076 upload apertium-fra-cat to apt.wikimedia.org/jessie-wikimedia/main [production]
12:47 <akosiaris> T189076 upload apertium-cat to apt.wikimedia.org/jessie-wikimedia/main [production]
12:47 <akosiaris> T189075 upload apertium-lex-tools to apt.wikimedia.org/jessie-wikimedia/main [production]
12:47 <akosiaris> T189075 upload apertium-separable to apt.wikimedia.org/jessie-wikimedia/main [production]
12:47 <akosiaris> T189076 upload apertium-fra to apt.wikimedia.org/jessie-wikimedia/main [production]
11:44 <dcausse> running forceSearchIndex from terbium to cleanup elastic indices for (testwiki, mediawikiwiki, labswiki, labtestwiki, svwiki) (T189694) [production]
11:40 <dcausse> elastic@codfw cluster restarts complete (T189239) [production]
10:55 <dcausse> resuming elastic@codfw cluster restarts [production]