2019-04-24
ยง
|
18:46 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw1297.eqiad.wmnet,cluster=api_appserver [production]
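A pooling change like this maps to a confctl invocation on a cumin host; a minimal sketch, assuming the selector and value from the log entry above:
    sudo confctl select 'name=mw1297.eqiad.wmnet,cluster=api_appserver' set/pooled=yes  # pool mw1297 back into the api_appserver cluster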
18:45 <mutante> mw1297 - scap pull [production]
18:17 <mutante> sudo icinga-downtime -h ms-be2031 -r swift-rebalancing -d 86400 [production]
17:52 <mutante> contint1001 - for logfile in $(find /var/log/zuul/ ! -name "*.gz"); do gzip $logfile; done to get more disk space (T207707) [production]
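The loop above word-splits find's output and would also hand directories to gzip; a safer equivalent, assuming GNU find:
    find /var/log/zuul/ -type f ! -name '*.gz' -exec gzip {} +  # compress only regular files that are not already gzipped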
17:33 <mutante> contint1001 - apt-get clean for 1% more disk space [production]
17:23 <mutante> proton1001 - restarting proton service; low RAM caused facter/puppet failures (https://tickets.puppetlabs.com/browse/PUP-8048); the restart freed memory and fixed the puppet run (cc: T219456 T214975) [production]
16:33 <catrope@deploy1001> Synchronized php-1.34.0-wmf.1/extensions/GrowthExperiments/: Fix exceptions in Homepage logging (duration: 00m 56s) [production]
15:52 <herron> performing rolling restart of pybal on low-traffic eqiad/codfw lvs hosts [production]
15:32 <jijiki> Restarting php7.2-fpm on mw2* in codfw for 505383 and T211488 [production]
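Batched restarts like this are usually driven from a cumin host; a minimal sketch, with illustrative host selector, batch size, and sleep values:
    sudo cumin -b 5 -s 30 'mw2*.codfw.wmnet' 'systemctl restart php7.2-fpm'  # restart 5 hosts at a time, pausing 30s between batches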
15:00 <herron> switching kibana lvs to source hash scheduler [production]
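At the IPVS layer, source hashing pins each client IP to one backend (useful for kibana session stickiness); a minimal ipvsadm sketch with an illustrative VIP, though in production the scheduler is set through the pybal/LVS configuration:
    sudo ipvsadm -E -t 10.2.2.1:443 -s sh  # edit the virtual service to use the source-hashing (sh) scheduler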
14:41 <jijiki> restart pdfrender on scb1002 [production]
14:28 <godog> begin rollout of rsyslog 8.1901.0-1 to jessie hosts - T219764 [production]
13:37 <marostegui> Poweroff db2080 for onsite maintenance - T216240 [production]
13:01 <jijiki> Restarting php7.2-fpm on mw13* for 505383 and T211488 [production]
12:36 <jijiki> restarting pdfrender on scb1004 [production]
12:23 <moritzm> rolling restart of Cassandra on restbase/eqiad to pick up Java security update [production]
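A rolling Cassandra restart drains each node before bouncing it; a per-node sketch, assuming a single plain cassandra systemd unit (restbase hosts may run multiple named instances):
    nodetool drain && sudo systemctl restart cassandra  # flush memtables and stop accepting writes, then restart onto the patched JVM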
11:59 <jijiki> Restarting php7.2-fpm on mw12* for 505383 and T211488 [production]
11:45 <gehel> restarting relforge for JVM upgrade [production]
11:33 <jbond42> installing ghostscript security update on scb jessie servers [production]
11:25 <jijiki> Restarting php7.2-fpm on mw-canary for 505383 and T211488 [production]
11:23 <ladsgroup@deploy1001> Finished deploy [ores/deploy@060fc37]: (no justification provided) (duration: 16m 18s) [production]
11:07 <ladsgroup@deploy1001> Started deploy [ores/deploy@060fc37]: (no justification provided) [production]
10:28 <akosiaris@deploy1001> scap-helm cxserver finished [production]
10:28 <akosiaris@deploy1001> scap-helm cxserver cluster staging completed [production]
10:28 <akosiaris@deploy1001> scap-helm cxserver upgrade -f cxserver-staging-values.yaml staging stable/cxserver [namespace: cxserver, clusters: staging] [production]
10:23 <jijiki> Restarting php-fpm on mw1238 for 505383 and T211488 [production]
09:58 <moritzm> installing rsync security updates on jessie [production]
08:44 <moritzm> rolling restart of Cassandra on restbase/codfw to pick up Java security update [production]
08:29 <godog> swift eqiad-prod: start decom for ms-be101[45] - T220590 [production]
08:17 <godog> bounce prometheus on bast5001 after migration and backfill [production]
08:04 <gehel@cumin1001> END (PASS) - Cookbook sre.elasticsearch.force-shard-allocation (exit_code=0) [production]
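Cookbooks are launched with the cookbook runner on a cumin host, and forcing shard allocation corresponds to Elasticsearch's cluster reroute API. A minimal sketch (cookbook arguments omitted, curl target illustrative):
    sudo cookbook sre.elasticsearch.force-shard-allocation  # produces the START/END log lines above
    curl -XPOST 'http://localhost:9200/_cluster/reroute?retry_failed=true'  # retry allocating shards that exhausted their failure budget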
08:04 <gehel@cumin1001> START - Cookbook sre.elasticsearch.force-shard-allocation [production]
08:02 <gehel@cumin1001> END (PASS) - Cookbook sre.elasticsearch.force-shard-allocation (exit_code=0) [production]
08:02 <gehel@cumin1001> START - Cookbook sre.elasticsearch.force-shard-allocation [production]
06:41 <marostegui> Optimize tables on pc1010 [production]
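OPTIMIZE TABLE rebuilds a table and reclaims free space, which is routine maintenance on parsercache hosts; a minimal sketch, assuming the database is named parsercache and using a hypothetical shard table name:
    sudo mysql parsercache -e 'OPTIMIZE TABLE pc001;'  # pc001 is an illustrative table name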
06:38 <elukey> restart pdfrender on scb1003 [production]
06:37 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2082 (duration: 00m 52s) [production]
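The "Synchronized wmf-config/db-codfw.php" lines are emitted by scap; a minimal sketch of the corresponding command, run from the deploy host:
    scap sync-file wmf-config/db-codfw.php 'Repool db2082'  # push the edited config to the fleet and log the message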
06:22 <marostegui> Upgrade db2082 [production]
06:22 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2079, depool db2082 (duration: 00m 55s) [production]
06:18 <marostegui> Upgrade db2081 [production]
06:10 <marostegui> Upgrade db2079 [production]
06:10 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2086, depool db2079 (duration: 00m 53s) [production]
05:55 <marostegui> Upgrade db2086 [production]
05:55 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2083 and depool db2086 (duration: 00m 52s) [production]
05:38 <marostegui> Upgrade db2080 and db2083 [production]
05:37 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Depool db2080 and db2083 (duration: 00m 54s) [production]
03:45 <SMalyshev> repooled wdqs1003, it's good now [production]
01:26 <eileen> jobs restarted; process-control config revision is ef6d4761e5 [production]
01:06 <eileen> civicrm revision changed from 31982324b8 to 468f85e524, config revision is 13b9eefe7b [production]
01:02 <eileen> process-control config revision is 13b9eefe7b [production]