2019-04-24
15:32 <jijiki> Restarting php7.2-fpm on mw2* in codfw for 505383 and T211488 [production]
15:00 <elukey> set innodb_file_format=Barracuda and innodb_large_prefix=1 on mariadb on an-coord1001 to allow bigger indexes for Superset db upgrades [analytics]
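A minimal sketch of the settings named in the entry above, assuming they were applied at runtime with SET GLOBAL (both variables are dynamic on MariaDB of that era); the statements are illustrative, not a record of the exact commands run:
    -- Allow ROW_FORMAT=DYNAMIC/COMPRESSED tables and index key prefixes
    -- beyond the old 767-byte limit, which the Superset schema upgrade needs.
    SET GLOBAL innodb_file_format = 'Barracuda';
    SET GLOBAL innodb_large_prefix = 1;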
15:00 <herron> switching kibana lvs to source hash scheduler [production]
14:41 <jijiki> restart pdfrender on scb1002 [production]
14:28 <godog> beginning rollout of rsyslog 8.1901.0-1 to jessie hosts - T219764 [production]
13:37 <marostegui> Poweroff db2080 for onsite maintenance - T216240 [production]
13:01 <jijiki> Restarting php7.2-fpm on mw13* for 505383 and T211488 [production]
12:54 <arturo> T220051 puppet broken in every VM in Cloud VPS, fixing right now [admin]
12:54 <arturo> puppet broken, fixing right now [tools]
12:36 <jijiki> restarting pdfrender on scb1004 [production]
12:23 <moritzm> rolling restart of Cassandra on restbase/eqiad to pick up Java security update [production]
11:59 <jijiki> Restarting php7.2-fpm on mw12* for 505383 and T211488 [production]
11:45 <gehel> restarting relforge for jvm upgrade [production]
11:33 <jbond42> security update of ghostscript on scb jessie servers [production]
11:25 <jijiki> Restarting php7.2-fpm on mw-canary for 505383 and T211488 [production]
11:23 <ladsgroup@deploy1001> Finished deploy [ores/deploy@060fc37]: (no justification provided) (duration: 16m 18s) [production]
11:07 <ladsgroup@deploy1001> Started deploy [ores/deploy@060fc37]: (no justification provided) [production]
10:28 <akosiaris@deploy1001> scap-helm cxserver finished [production]
10:28 <akosiaris@deploy1001> scap-helm cxserver cluster staging completed [production]
10:28 <akosiaris@deploy1001> scap-helm cxserver upgrade -f cxserver-staging-values.yaml staging stable/cxserver [namespace: cxserver, clusters: staging] [production]
10:23 <jijiki> Restarting php-fpm on mw1238 for 505383 and T211488 [production]
09:58 <moritzm> installing rsync security updates on jessie [production]
09:18 <arturo> T221225 reallocating tools-sgebastion-09 to cloudvirt1008 [tools]
08:44 <moritzm> rolling restart of Cassandra on restbase/codfw to pick up Java security update [production]
08:29 <godog> swift eqiad-prod: start decom for ms-be101[45] - T220590 [production]
08:17 <godog> bounce prometheus on bast5001 after migration and backfill [production]
08:04 <gehel@cumin1001> END (PASS) - Cookbook sre.elasticsearch.force-shard-allocation (exit_code=0) [production]
08:04 <gehel@cumin1001> START - Cookbook sre.elasticsearch.force-shard-allocation [production]
08:02 <gehel@cumin1001> END (PASS) - Cookbook sre.elasticsearch.force-shard-allocation (exit_code=0) [production]
08:02 <gehel@cumin1001> START - Cookbook sre.elasticsearch.force-shard-allocation [production]
07:43 <fdans> refinery uploaded to hdfs and webrequest bundle restarted [analytics]
07:06 <fdans> restarted webrequest bundle [analytics]
06:55 <xSavitar> Restart successful, tool up and running again [tools.awmd-stats]
06:52 <xSavitar> Restarting the awmd-stats tool after minor maintenance [tools.awmd-stats]
06:50 <xSavitar> Doing some minor maintenance to the tool's environment [tools.awmd-stats]
06:41 <marostegui> Optimize tables on pc1010 [production]
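A rough illustration of the table maintenance logged above, assuming it was run as plain SQL on the parser-cache host; the database and table names are placeholders, not the actual schema:
    -- Rebuilds the table data and indexes, reclaiming free space.
    OPTIMIZE TABLE parsercache.pc001;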
06:38 <elukey> restart pdfrender on scb1003 [production]
06:37 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2082 (duration: 00m 52s) [production]
06:24 <elukey> killed application_1555511316215_18282 on Hadoop due to excessive resource usage [analytics]
06:22 <marostegui> Upgrade db2082 [production]
06:22 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2079, depool db2082 (duration: 00m 55s) [production]
06:18 <marostegui> Upgrade db2081 [production]
06:10 <marostegui> Upgrade db2079 [production]
06:10 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2086, depool db2079 (duration: 00m 53s) [production]
05:55 <marostegui> Upgrade db2086 [production]
05:55 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2083 and depool db2086 (duration: 00m 52s) [production]
05:38 <marostegui> Upgrade db2080 and db2083 [production]
05:37 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Depool db2080 and db2083 (duration: 00m 54s) [production]
03:45 <SMalyshev> repooled wdqs1003, it's good now [production]
01:26 <eileen> jobs restarted, process-control config revision is ef6d4761e5 [production]