2019-05-17

14:43 <fsero> reenabling puppet on mcrouter hosts for T221346; cert checks are in place, so if any alert fires for cert expiration on mcrouter, this is the source :) [production]
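For context on the puppet toggling in these mcrouter entries: a minimal sketch, assuming the standard disable-puppet/enable-puppet wrapper scripts and a cumin host alias for the mcrouter fleet (the alias name here is hypothetical, not taken from this log):

    # run from a cumin master; 'A:mcrouter' is an assumed alias for the mcrouter hosts
    sudo cumin 'A:mcrouter' 'disable-puppet "cert check rollout - T221346"'
    # ...apply and verify the change, then let puppet run again:
    sudo cumin 'A:mcrouter' 'enable-puppet "cert check rollout - T221346"'

On a single host the plain equivalents are `puppet agent --disable "reason"` and `puppet agent --enable`.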
14:17 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1098 & db1131 after maintenance (duration: 00m 49s) [production]
14:09 <fsero> second round of setting up cert check, disabling puppet on mcrouter hosts T221346 [production]
12:58 <mobrovac> bootstrap restbase1021-c - T219404 [production]
10:59 <mobrovac> bootstrap restbase1021-b - T219404 [production]
09:27 <godog> swift remove ms-be101[345] from rings - T220590 [production]
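Removing Swift backends from the rings (the 09:27 entry) is normally a swift-ring-builder operation; a hedged sketch with placeholder device addresses, not the exact commands run here:

    # for each affected ring (object, container, account):
    swift-ring-builder object.builder search <ip-of-ms-be1013>    # list its devices
    swift-ring-builder object.builder remove <ip-of-ms-be1013>
    swift-ring-builder object.builder rebalance
    # then distribute the rebuilt ring files to the cluster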
09:02 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1083 (duration: 00m 48s) [production]
08:53 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic to db1083 (duration: 00m 49s) [production]
08:43 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic to db1083 (duration: 00m 49s) [production]
08:32 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic to db1083 (duration: 00m 49s) [production]
08:24 <fsero> reenabling puppet after reverting T221346 [production]
08:19 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1083 (duration: 00m 59s) [production]
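The "Slowly repool" / "More traffic" / "Fully repool" sequence for db1083 corresponds to stepwise increases of that replica's load weight in wmf-config/db-eqiad.php, pushed out with scap after each edit. A hedged sketch of one step (the weight progression is illustrative, not the actual values used):

    # bump db1083's weight in wmf-config/db-eqiad.php (e.g. 50 -> 100 -> 200 -> full),
    # then sync the file; scap records the "Synchronized ..." lines seen above
    scap sync-file wmf-config/db-eqiad.php "More traffic to db1083"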
07:57 <fsero> disabling puppet on mcrouter hosts for T221346 [production]
07:46 <elukey> restart mediawiki history and denormalize coordinators with the new analytics user (left mediawiki-history-wikitext-coord aside for further investigation) [analytics]
07:22 <elukey> chown -R analytics:analytics /wmf/data/wmf/mediawiki [analytics]
07:22 <elukey> chown sure! [analytics]
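The 07:22 chown targets /wmf/data/wmf/mediawiki, which lives on the analytics Hadoop cluster's HDFS rather than a local filesystem; a hedged sketch of the equivalent HDFS invocation, assuming the usual hdfs superuser:

    # recursive ownership change on HDFS, run as the hdfs superuser
    sudo -u hdfs hdfs dfs -chown -R analytics:analytics /wmf/data/wmf/mediawiki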
07:11 <marostegui> Compress s7 on labsdb1012 T222978 [production]
06:36 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Pool db2111 and db2113 into s5 T222772 (duration: 00m 49s) [production]
06:35 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Pool db2111 and db2113 into s5 T222772 (duration: 00m 50s) [production]
05:19 <marostegui> Stop MySQL on db1083 to clone db1134 [production]
05:17 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1083 (duration: 00m 50s) [production]
05:00 <mobrovac> bootstrap restbase1021-a - T219404 [production]
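The restbase1021 bootstraps (instances a, b and c, T219404) are Cassandra nodes joining the ring; progress is typically watched with nodetool. A minimal sketch, assuming direct nodetool access to the instance being bootstrapped (WMF's multi-instance setup may use per-instance wrappers):

    # while the new instance is streaming data from its peers:
    nodetool netstats
    # once bootstrap completes, the node should show state UN (Up/Normal):
    nodetool status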
00:35 <brennen> Updating dev-images docker-pkg files on contint1001 [releng]

2019-05-16

21:06 <zhuyifei1999_> repool v2ccelery on encoding02, drain encoding01 [video]
21:02 <Jeff_Green> authdns-update to switch payments.wikimedia.org back to eqiad cluster [production]
20:08 <joal> Manually fixing banner job [analytics]
19:53 <joal> Restarting banner_activity-druid-monthly-coord after chuu chuu [analytics]
19:24 <onimisionipe> pooling elastic2038 - shards are properly balanced across nodes [production]
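Pooling and depooling a host like elastic2038 (see also the 18:31 entry below) goes through conftool; a hedged sketch, assuming the usual confctl selector syntax rather than the exact commands run here:

    # inspect current state, then pool the host behind the load balancer:
    sudo confctl select 'name=elastic2038.codfw.wmnet' get
    sudo confctl select 'name=elastic2038.codfw.wmnet' set/pooled=yes
    # the earlier depool is the same selector with set/pooled=no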
18:58 <wm-bot> <lucaswerkmeister> deployed f53ae8ac71 (remove commands help from index page) [tools.quickcategories]
18:31 <onimisionipe> depooling elastic2038 to investigate more [production]
17:26 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
17:26 <jbond@cumin1001> START - Cookbook sre.hosts.downtime [production]
17:26 <jbond42> reboot ores1007-1009 [production]
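The START/END pairs in the jbond entries are emitted automatically by the sre.hosts.downtime cookbook, which schedules Icinga downtime for the hosts before each reboot batch. A hedged invocation sketch; the exact flags and host-query syntax are assumptions, not taken from this log:

    # assumed flags; run from the cumin/cookbook host
    sudo cookbook sre.hosts.downtime --hours 1 -r "reboot for updates" 'ores100[7-9].eqiad.wmnet'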
17:15 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
17:15 <jbond@cumin1001> START - Cookbook sre.hosts.downtime [production]
17:15 <jbond42> reboot ores1005-1006 [production]
17:10 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
17:10 <jbond@cumin1001> START - Cookbook sre.hosts.downtime [production]
17:10 <jbond42> reboot ores1003-1004 [production]
17:05 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
17:05 <jbond@cumin1001> START - Cookbook sre.hosts.downtime [production]
17:05 <jbond42> reboot ores1001-1002 [production]
17:00 <jbond42> reboot orespoolcounter[12]002 [production]
16:53 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
16:53 <jbond@cumin1001> START - Cookbook sre.hosts.downtime [production]
16:53 <jbond@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
16:53 <jbond@cumin1001> START - Cookbook sre.hosts.downtime [production]
16:53 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
16:52 <jbond@cumin1001> START - Cookbook sre.hosts.downtime [production]
16:52 <jbond@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]