2020-03-02
ยง
|
13:26 <kartik@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'cxserver' for release 'staging'. [production]
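The helmfile entries in this log record service deployments run from the deployment host. A minimal sketch of the kind of invocation behind such an entry, assuming the stock helmfile CLI and a hypothetical chart directory for cxserver (the path is an assumption, not taken from the log):

    cd /srv/deployment-charts/helmfile.d/services/cxserver   # hypothetical chart location, not confirmed by the log
    helmfile -e staging apply                                 # apply the 'staging' environment with helmfile's standard 'apply'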
13:18 <elukey> roll restart Hadoop master daemons on an-master100[1,2] for openjdk upgrades [production]
13:11 <addshore> START warm cache for db1111 & db1126 for Q10-12 million T219123 (pass 1) [production]
13:08 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q10M for the new term store for clients (was Q8M) + warm db1126 & db1111 caches (T219123) cache bust (duration: 00m 55s) [production]
13:07 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q10M for the new term store for clients (was Q8M) + warm db1126 & db1111 caches (T219123) (duration: 00m 56s) [production]
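The 'Synchronized wmf-config/...' entries are what scap writes to this log after a config sync from the deployment host. A hedged sketch of the command behind the 13:07 entry, assuming standard scap usage and reusing the message from the entry itself:

    # Run on the deployment host; scap records the 'Synchronized ... (duration: ...)' entry itself.
    scap sync-file wmf-config/InitialiseSettings.php 'Reading up to Q10M for the new term store for clients (was Q8M) + warm db1126 & db1111 caches (T219123)'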
12:58 <Urbanecm> Deploy security fix for T229731 [production]
12:16 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: 8280f81: Set cswiki and cywiki to use custom minerva logo again (T246535): take II (duration: 00m 57s) [production]
12:15 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: 8280f81: Set cswiki and cywiki to use custom minerva logo again (T246535) (duration: 00m 58s) [production]
12:09 <oblivian@deploy1001> Synchronized wmf-config/ProductionServices.php: Switch search to use envoy as a proxy (duration: 00m 56s) [production]
11:54 <vgutierrez> enable BGP in lvs5002 - T245984 [production]
11:44 <addshore> START warm cache for db1111 & db1126 for Q8-10 million T219123 (pass 2) [production]
11:41 <jdrewniak@deploy1001> Synchronized portals: Wikimedia Portals Update: [[gerrit:576004| Bumping portals to master (563985)]] (duration: 00m 57s) [production]
11:40 <jdrewniak@deploy1001> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:576004| Bumping portals to master (563985)]] (duration: 00m 57s) [production]
11:39 <jbond42> enable strict_hostname_checking on the puppet masters https://gerrit.wikimedia.org/r/c/operations/puppet/+/575220 [production]
11:12 <kart_> Update cxserver to 2020-02-28-043702-production (T246319) [production]
11:07 <kartik@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'cxserver' for release 'production'. [production]
11:05 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
11:04 <kartik@deploy1001> helmfile [EQIAD] Ran 'apply' command on namespace 'cxserver' for release 'production'. [production]
11:03 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
11:02 <kartik@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'cxserver' for release 'staging'. [production]
11:01 <addshore> START warm cache for db1111 & db1126 for Q8-10 million T219123 (pass 1) [production]
10:55 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q8M for the new term store for clients (was Q6M) + warm db1126 & db1111 caches (T219123) (duration: 00m 58s) [production]
10:35 <vgutierrez> reimage lvs5002 with buster - T245984 [production]
10:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 150 to 200 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10576 and previous config saved to /var/cache/conftool/dbconfig/20200302-103445-marostegui.json [production]
10:22 <addshore> START warm cache for db1111 & db1126 for Q6-8 million T219123 (pass 2) [production]
09:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 100 to 150 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10575 and previous config saved to /var/cache/conftool/dbconfig/20200302-095921-marostegui.json [production]
09:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1119 after upgrade T239791', diff saved to https://phabricator.wikimedia.org/P10574 and previous config saved to /var/cache/conftool/dbconfig/20200302-095841-marostegui.json [production]
09:52 <moritzm> installing remaining curl security updates [production]
09:51 <addshore> START warm cache for db1111 & db1126 for Q6-8 million T219123 (pass 1) [production]
09:50 <elukey> powercycle an-worker1083 (no ssh, mgmt console available but tty not really usable, CPU soft lockups reported) [production]
09:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1119 after upgrade T239791', diff saved to https://phabricator.wikimedia.org/P10573 and previous config saved to /var/cache/conftool/dbconfig/20200302-094633-marostegui.json [production]
09:38 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1119 after upgrade T239791', diff saved to https://phabricator.wikimedia.org/P10572 and previous config saved to /var/cache/conftool/dbconfig/20200302-093848-marostegui.json [production]
09:38 <moritzm> installing openssh updates for jessie [production]
09:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 80 to 100 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10571 and previous config saved to /var/cache/conftool/dbconfig/20200302-093449-marostegui.json [production]
09:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1119 after upgrade T239791', diff saved to https://phabricator.wikimedia.org/P10570 and previous config saved to /var/cache/conftool/dbconfig/20200302-092743-marostegui.json [production]
09:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1119 T239791', diff saved to https://phabricator.wikimedia.org/P10569 and previous config saved to /var/cache/conftool/dbconfig/20200302-091947-marostegui.json [production]
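The dbctl entries in this log are conftool database-configuration commits. A minimal sketch of the depool-and-commit sequence behind the 09:19 entry, assuming the standard dbctl subcommands (an instance-level change staged first, then a commit that saves the diff and previous config as logged above):

    dbctl instance db1119 depool                     # assumed subcommand: stage the depool in the pending configuration
    dbctl config commit -m 'Depool db1119 T239791'   # apply it; the commit message is what appears in the log entry

The repeated weight-change commits for db1111 would follow the same two-step pattern, with the instance-level edit adjusting the weight instead of depooling.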
09:12 <addshore> warm cache for db1111 for Q0-6 million T219123 T246447 (pass 2) [production]
08:54 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 50 to 80 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10568 and previous config saved to /var/cache/conftool/dbconfig/20200302-085420-marostegui.json [production]
08:44 <moritzm> installing openssh updates for stretch [production]
08:33 <addshore> warm cache for db1111 for Q0-6 million T219123 T246447 [production]
08:14 <addshore> resume item term table rebuild script (from Q54 mill) T219123 [production]
08:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 30 to 50 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10567 and previous config saved to /var/cache/conftool/dbconfig/20200302-080721-marostegui.json [production]
07:22 <vgutierrez> upgrading NICs FW on lvs2008 - T196560 T203194 [production]
07:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 10 to 30 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10566 and previous config saved to /var/cache/conftool/dbconfig/20200302-072118-marostegui.json [production]
07:10 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
07:08 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
06:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 1 to 10 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10565 and previous config saved to /var/cache/conftool/dbconfig/20200302-064522-marostegui.json [production]
06:42 <marostegui> Enable events on db1111 T246447 [production]
06:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1111 to s8 with minimal weight to check grants and any other issues T246447', diff saved to https://phabricator.wikimedia.org/P10564 and previous config saved to /var/cache/conftool/dbconfig/20200302-062435-marostegui.json [production]
06:04 <marostegui> Re-add db1111 to s8 in tendril and zarcillo - T246447 [production]