2018-04-02
19:06 <twentyafterfour> sync rdbms: avoid lag estimates in getLagFromPtHeartbeat ruined by snapshots Bug: T190960 Change-Id: I57dd8d3d0ca96d6fb2f9e83f062f29b1d53224dd [production]
19:04 <twentyafterfour> Getting the train back on track: deploying 1.31.0-wmf.27 to Group0 [production]
17:49 <mobrovac@tin> Synchronized wmf-config/jobqueue.php: Switch the remaining high-traffic jobs to EventBus, test wikis only, file 2/2 - T190327 (duration: 01m 15s) [production]
17:48 <ppchelko@tin> Finished deploy [cpjobqueue/deploy@9e1b203]: Switch remaining high traffic jobs for test wikis. T190327 (duration: 00m 43s) [production]
17:47 <ppchelko@tin> Started deploy [cpjobqueue/deploy@9e1b203]: Switch remaining high traffic jobs for test wikis. T190327 [production]
17:47 <mobrovac@tin> Synchronized wmf-config/InitialiseSettings.php: Switch the remaining high-traffic jobs to EventBus, test wikis only, file 1/2 - T190327 (duration: 01m 16s) [production]
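(Note: "Synchronized wmf-config/…" entries like the ones above are produced by scap config syncs; a minimal sketch of the kind of invocation behind them, with the file and message taken from the entry above and the exact form an assumption:
    # hypothetical scap invocation behind the logged sync, not the exact command run
    scap sync-file wmf-config/InitialiseSettings.php 'Switch the remaining high-traffic jobs to EventBus, test wikis only - T190327')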
17:36 <ebernhardson@tin> Synchronized wmf-config/InitialiseSettings.php: Shift search traffic for enwiki to codfw (duration: 01m 17s) [production]
17:21 <smalyshev@tin> Finished deploy [wdqs/wdqs@49f4eed]: GUI update (duration: 09m 49s) [production]
17:11 <smalyshev@tin> Started deploy [wdqs/wdqs@49f4eed]: GUI update [production]
16:37 <madhuvishy> Rolling out new symlinks to /public/dumps for labstore1006 dumps nfs mount T188643 [production]
15:59 <madhuvishy> Absenting /public/dumps mount from labstore1003 across the VPS fleet T188643 [production]
15:56 <ebernhardson> restart elasticsearch on elastic1024, been stuck at 100% cpu for 3+ hours [production]
15:42 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Change db2035 IP - T191193 (duration: 01m 15s) [production]
15:40 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Change db2035 IP - T191193 (duration: 01m 15s) [production]
15:28 <marostegui> Stop MySQL and power off db2035 (s2 codfw master - this will stop replication on s2 codfw slaves) for rack change - T191193 [production]
15:06 <madhuvishy> Reenabled puppet and rolled out mounting new dumps NFS shares from labstore1006|7 on VPS instances T188643 [production]
14:40 <cmjohnson1> disabling puppet on decom host db1020 [production]
14:28 <madhuvishy> Disabling puppet across VPS instances with dumps mounted (https://phabricator.wikimedia.org/P6921) T188643 [production]
14:22 <marostegui> Drop contest* tables from s3 - T186867 [production]
14:12 <akosiaris@puppetmaster1001> conftool action : set/weight=15; selector: dc=eqiad,service=recommendation-api,cluster=scb,name=scb1003.* [production]
14:12 <akosiaris@puppetmaster1001> conftool action : set/weight=15; selector: dc=eqiad,service=recommendation-api,cluster=scb,name=scb1004.* [production]
14:10 <akosiaris> lower weight for scb1001, scb1002 from 10 to 8 for all services. T191199. scb1003, scb1004 have a weight of 15 already [production]
14:09 <akosiaris@puppetmaster1001> conftool action : set/weight=8; selector: dc=eqiad,cluster=scb,name=scb1002.* [production]
14:09 <akosiaris@puppetmaster1001> conftool action : set/weight=8; selector: dc=eqiad,cluster=scb,name=scb1001.* [production]
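(Note: the conftool weight changes above are normally applied with confctl; a minimal sketch matching the logged actions, with values taken from the entries above and the exact flags an assumption:
    # hypothetical confctl invocations, not necessarily the exact commands run
    confctl select 'dc=eqiad,cluster=scb,name=scb1001.*' set/weight=8
    confctl select 'dc=eqiad,service=recommendation-api,cluster=scb,name=scb1003.*' set/weight=15)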
13:54 <ariel@tin> Finished deploy [dumps/dumps@0363d50]: add check that xml files don't have binary corruption (nulls) after the header (duration: 00m 04s) [production]
13:54 <ariel@tin> Started deploy [dumps/dumps@0363d50]: add check that xml files don't have binary corruption (nulls) after the header [production]
13:48 <twentyafterfour@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: Sync initializesettings for T190445 (duration: 01m 16s) [production]
13:36 <twentyafterfour@tin> Synchronized wmf-config/throttle.php: SWAT: Sync throttle rules for T191187 (duration: 01m 15s) [production]
13:30 <twentyafterfour@tin> Synchronized wmf-config/throttle.php: SWAT: Sync throttle rules for T191168 (duration: 01m 16s) [production]
13:27 <jynus> restarting pdfrender on scb1003 (Socket timeout) [production]
12:49 <akosiaris> upgrading mediawiki servers for the hhvm upgrade [production]
12:06 <marostegui> Deploy schema change on dbstore1002 - s3 - T187089 T185128 T153182 [production]
11:51 <akosiaris> repool mediawiki canary servers after hhvm upgrade [production]
11:44 <akosiaris> depool mediawiki canary servers for hhvm upgrade [production]
10:16 <jdrewniak@tin> Synchronized portals: Wikimedia Portals Update: [[gerrit:423456|Bumping portals to master (T128546)]] (duration: 01m 16s) [production]
10:15 <jdrewniak@tin> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:423456|Bumping portals to master (T128546)]] (duration: 01m 16s) [production]
09:13 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Remove references to virt1000 (duration: 01m 16s) [production]
09:12 <jynus@tin> Synchronized wmf-config/db-codfw.php: Remove references to virt1000 (duration: 01m 16s) [production]
08:50 <marostegui> Deploy schema change on s3 codfw master db2043 (this will generate lag on codfw) - T187089 T185128 T153182 [production]
08:21 <jynus> stop mariadb at labsdb1009 and labsdb1010 [production]
08:15 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Specify current m5 codfw master (duration: 01m 17s) [production]
08:11 <jynus> depool labsdb1011 from web wikireplicas [production]
07:21 <apergos> restarted pdfrender on scb1004 after poking around there a bit [production]
07:01 <apergos> restarted pdfrender on scb1001 and scb1002; the service paged and no jobs were being processed [production]
06:06 <marostegui> Drop localisation table from the hosts where it still existed - T119811 [production]
02:50 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.26) (duration: 12m 53s) [production]
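(Note: the l10n sync above corresponds to a scap invocation roughly like the following; the version is taken from the log entry and the exact form is an assumption:
    # hypothetical scap command behind the logged l10n sync, sketch only
    scap sync-l10n 1.31.0-wmf.26)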
2018-03-31
21:15 <mutante> bast1001 has been shut down and decommissioned as planned. If you have any issues with shell access, make sure you have replaced it with bast1002 or another bast host [production]
11:26 <urandom> removing corrupt commitlog segment, restbase1009-c [production]
11:25 <urandom> removing corrupt commitlog segment, restbase1009-b [production]
11:19 <urandom> starting restbase1009-c [production]