2017-11-22
19:51 <demon@tin> Synchronized docroot/: removing old foundation docroot (duration: 00m 46s) [production]
19:49 <urandom> starting cassandra cleanups, restbase-200{1,3,5}-a - T179422 [production]
19:41 <demon@tin> Synchronized w/extract2.php: removing old portal support (duration: 00m 45s) [production]
18:47 <demon@tin> Synchronized dblists/closed.dblist: closed transitionteamwiki (duration: 00m 45s) [production]
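Closing the wiki amounts to adding it to the closed dblist and syncing that file out; a minimal sketch from the deployment host, assuming the scap sync-file subcommand of the time (in practice the list is kept sorted rather than appended to):
    echo "transitionteamwiki" >> dblists/closed.dblist
    scap sync-file dblists/closed.dblist 'closed transitionteamwiki'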
18:34 <demon@tin> Synchronized wmf-config/InitialiseSettings-labs.php: no-op (duration: 00m 45s) [production]
18:32 <demon@tin> Synchronized scap/plugins/updatewikiversions.py: minor fix (duration: 00m 45s) [production]
18:30 <demon@tin> Pruned MediaWiki: 1.31.0-wmf.7 [keeping static files] (duration: 01m 46s) [production]
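Pruning an old branch like this is scap's clean subcommand; a minimal sketch assuming the syntax of the time, with "[keeping static files]" reflecting the default of retaining static assets (exact flags may differ):
    scap clean 1.31.0-wmf.7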
18:12 <herron> re-enabling puppet agents after puppetdb postgres security updates [production]
18:09 <moritzm> installing postgres security updates on nitrogen/puppetdb [production]
18:05 <moritzm> installing postgres security updates on nihal/puppetdb [production]
18:02 <herron> disabling puppet agents for puppetdb postgres security update [production]
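The disable/re-enable around the updates is the standard puppet agent toggle on the affected hosts; a minimal sketch, with the reason string illustrative:
    sudo puppet agent --disable "puppetdb postgres security update"
    # ... install the postgres updates, then:
    sudo puppet agent --enable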
17:30 <demon@tin> Synchronized php-1.31.0-wmf.8/extensions/AdvancedSearch/: fixing layout issues in timeless (duration: 00m 46s) [production]
16:54 <mepps> updated payments-wiki from 1ca91b1b029161457c86f4f403be0ac78e715d79 to 6b3019b1f18b4d6cd1705a49e95005124435e3d2 [production]
16:45 <chasemp> disable puppet across labtest things [production]
16:34 <marostegui> Compress s4 on db1097 - T178359 [production]
16:28 <herron> starting canary deploy/cutover of codfw scb hosts to codfw puppet 4 masters [production]
16:16 <elukey> restart druid broker,coordinator,historical daemons on druid100[123] to pick up new logging settings [production]
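The restarts boil down to cycling three systemd units on each of druid1001-1003; a minimal sketch, assuming unit names druid-broker, druid-coordinator and druid-historical (the actual unit names on those hosts may differ):
    for unit in druid-broker druid-coordinator druid-historical; do
        sudo systemctl restart "$unit"
    done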
15:41 <jynus> starting manually pt-heartbeat for s8 on db1071 [production]
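pt-heartbeat is Percona Toolkit's replication-lag heartbeat writer; a minimal sketch of a manual run against db1071, with the heartbeat database/table names and the FQDN as assumptions:
    pt-heartbeat --update --daemonize -D heartbeat --table heartbeat -h db1071.eqiad.wmnet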
15:22 <herron> beginning cut over of codfw db servers (^db2.*) to codfw puppet 4 masters [production]
14:49 <jynus@tin> Synchronized wmf-config/db-codfw.php: mariadb: Setup s8 replica set on codfw (duration: 00m 45s) [production]
14:27 <moritzm> installing libxml-libxml-perl security updates [production]
14:21 <jynus> starting database topology changes for s8 on codfw T177208 [production]
14:11 <urandom> bootstrapping cassandra, restbase2004-c.codfw.wmnet - T179422 [production]
13:43 <apergos> one more round of labstore1006 <-- ms1001 rsync catchup [production]
13:37 <moritzm> installing imagemagick security updates [production]
12:39 <marostegui> Stop MySQL on db1053 to clone db1097.s4 - T178359 [production]
12:38 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1053 - T178359 (duration: 00m 45s) [production]
12:17 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Start adapting the config to move db1097 to s4 and s5 as multi-instance rc slave T178359 (duration: 00m 45s) [production]
12:04 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Fully pool db1101.s7 - T178359 (duration: 00m 45s) [production]
11:51 <jynus> starting dropping incorrectly created database on s7 amwikimedia (not to be confused with production wiki s3 amwikimedia) [production]
11:41 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1101.s7 - T178359 (duration: 00m 45s) [production]
11:19 <akosiaris> gnt-node evacuate -s -f ganeti1005. T181121 [production]
11:02 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1101.s7 - T178359 (duration: 00m 45s) [production]
10:54 <akosiaris> gnt-node migrate -f ganeti1005. T181121 [production]
10:51 <marostegui> Drop index from ores_classification on s5 - T180045 [production]
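The index drops on ores_classification are plain ALTER TABLE statements run per wiki on each section; a minimal sketch from a shell, where the host, wiki database and index name are placeholders (the real index name is tracked in T180045):
    mysql -h <db-host> <wikidb> -e "ALTER TABLE ores_classification DROP INDEX <index_name>;"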
10:44 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Pool db1051 and db1063 in vslow service for s5 to warm them up for the s8 split - T177208 (duration: 00m 45s) [production]
10:23 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Add db1101 to s5 and s7 as recentchanges multi-instance slave - T178359 (duration: 00m 45s) [production]
10:04 <moritzm> running "scap pull" on mw1191, it's depooled and marked as "inactive", but health checks are triggering db errors [production]
09:35 <bblack> cr[12]-ulsfo - switch static fallback LVS routes from lvs400[12] to lvs400[56] [production]
09:27 <bblack> lvs@ulsfo - done switching primaries (host MED config) - lvs400[56] now primary for text/upload traffic [production]
09:11 <akosiaris@tin> Finished deploy [parsoid/deploy@b150764]: T180211 (duration: 05m 05s) [production]
09:08 <bblack> puppet disabled on lvs400[1256] for switching primaries [production]
09:06 <akosiaris@tin> Started deploy [parsoid/deploy@b150764]: T180211 [production]
09:04 <akosiaris@puppetmaster1001> conftool action : set/pooled=yes; selector: name=wtp2017.codfw.wmnet [production]
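That conftool entry corresponds to pooling the parsoid host back via confctl; a minimal sketch of the command that produces such a log line (exact flags may vary by conftool version):
    sudo confctl select 'name=wtp2017.codfw.wmnet' set/pooled=yes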
09:00 <bblack> lvs4005 - reboot to clear experimental stuff [production]
08:16 <bblack> backend restart on cp4024 (upload@ulsfo) - mailbox lag [production]
07:56 <marostegui> Drop index from ores_classification on s3 - T180045 [production]
07:50 <marostegui> Drop index from ores_classification on s6 - T180045 [production]
07:48 <marostegui> Drop index from ores_classification on s7 - T180045 [production]
07:29 <_joe_> stopping the additional workers for htmlCacheUpdate (commons and ruwiki), adding one additional runner for refreshLinks on ruwiki [production]