2020-09-18
21:48 <tzatziki> changed password for Millennium bug@ptwiki [production]
19:28 <eileen> process-control config revision is 739ea754ca [production]
18:52 <pt1979@cumin2001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:46 <pt1979@cumin2001> START - Cookbook sre.dns.netbox [production]
18:44 <ryankemper> `sudo kill 254017 254018 254028 254029` to kill some dangling serdi / gzip processes, all the wikidata cleanup should be complete [production]
18:38 <ryankemper> `sudo kill 126121 126122 126124 126128 249520 249521 254016 254027` on `snapshot1008` to terminate wikidata dump jobs that are in a bad state [production]
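The cleanup above is plain kill-by-PID with the default signal. A minimal Python sketch of the same idea (illustrative only; the operator used `sudo kill` directly, and real PIDs would come from `pgrep` or the log entry):

```python
import os
import signal

def terminate(pids):
    """Send SIGTERM (the default signal of a plain `kill`) to each PID."""
    for pid in pids:
        os.kill(pid, signal.SIGTERM)
```

SIGTERM asks the process to exit cleanly; escalating to SIGKILL is only needed if a process ignores it.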
18:10 <ryankemper> Removed stale `wikidatardf-dumps` crontab entry from `dumpsgen@snapshot1008`, stored backup of previous state of crontab in the (admittedly verbose) `/tmp/dumpsgen_crontab_before_removing_stale_wikidata_dump_entry_see_gerrit_puppet_patch_622342` [production]
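The procedure above amounts to "save the old crontab to a file, then drop the stale line". The removal step can be sketched as a pure filter (function name and sample marker are illustrative, not from the log):

```python
def remove_stale_entries(crontab_text: str, marker: str) -> str:
    """Return the crontab text with every line mentioning `marker` dropped."""
    kept = [line for line in crontab_text.splitlines() if marker not in line]
    return "\n".join(kept)
```

Saving the previous state first, as the log entry notes, makes the change trivially revertible if the accompanying puppet patch does not land as expected.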
17:15 <mutante> lists1001 - apt-get install pwgen to generate passwords (this was installed on previous list server but apparently not puppetized, puppet patch coming up) [production]
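pwgen generates random passwords on the command line (e.g. `pwgen -s 24 1` for one fully random 24-character password). A rough stdlib equivalent in Python — a sketch, not what was run on lists1001:

```python
import secrets
import string

def gen_password(length: int = 24) -> str:
    """Roughly `pwgen -s <length> 1`: a fully random alphanumeric password."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

`secrets` uses the OS CSPRNG, which is the right source for password material (unlike `random`).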
16:23 <pt1979@cumin2001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
16:21 <pt1979@cumin2001> START - Cookbook sre.hosts.downtime [production]
15:09 <mutante> restarting gerrit service to apply gerrit::628338 to make it dump heap if out of memory (T263008) [production]
14:15 <ladsgroup@deploy1001> Synchronized wmf-config/Wikibase.php: labs: Turn on termbox v2 on desktop for wikidatawiki -- noop for production, sanity sync (T261488) (duration: 00m 56s) [production]
14:13 <ladsgroup@deploy1001> Synchronized wmf-config/InitialiseSettings.php: labs: Turn on termbox v2 on desktop for wikidatawiki -- noop for production, sanity sync (T261488) (duration: 01m 00s) [production]
13:02 <kormat@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
13:00 <kormat@cumin2001> START - Cookbook sre.hosts.downtime [production]
12:48 <cdanis@cumin1001> conftool action : set/pooled=true; selector: dnsdisc=swift,name=eqiad [production]
12:41 <kormat> reimaging db2125 T263244 [production]
12:39 <kormat@cumin1001> dbctl commit (dc=all): 'db2089:3316 (re)pooling @ 100%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12665 and previous config saved to /var/cache/conftool/dbconfig/20200918-123947-kormat.json [production]
12:24 <kormat@cumin1001> dbctl commit (dc=all): 'db2089:3316 (re)pooling @ 75%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12664 and previous config saved to /var/cache/conftool/dbconfig/20200918-122444-kormat.json [production]
12:09 <kormat@cumin1001> dbctl commit (dc=all): 'db2089:3316 (re)pooling @ 50%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12663 and previous config saved to /var/cache/conftool/dbconfig/20200918-120940-kormat.json [production]
11:54 <kormat@cumin1001> dbctl commit (dc=all): 'db2089:3316 (re)pooling @ 25%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12662 and previous config saved to /var/cache/conftool/dbconfig/20200918-115437-kormat.json [production]
11:35 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2125', diff saved to https://phabricator.wikimedia.org/P12661 and previous config saved to /var/cache/conftool/dbconfig/20200918-113509-marostegui.json [production]
11:15 <kormat@cumin1001> dbctl commit (dc=all): 'db2089:3316 depooling: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12660 and previous config saved to /var/cache/conftool/dbconfig/20200918-111529-kormat.json [production]
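The db2089:3316 entries above follow the standard staged-repool pattern for a schema change: depool, apply the change, then repool in steps (25 → 50 → 75 → 100%) while watching for regressions. A sketch that reproduces the sequence of commit messages recorded in this log (the log only shows the messages, not the dbctl internals):

```python
def repool_steps(instance, task, stages=(25, 50, 75, 100)):
    """Yield the staged-repool commit messages as they appear in the SAL."""
    for pct in stages:
        yield f"{instance} (re)pooling @ {pct}%: schema change {task}"
```

Stepping the weight up gradually limits blast radius: if the schema change degrades the replica, it is caught while the host still serves only a fraction of traffic.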
10:56 <kormat@cumin1001> dbctl commit (dc=all): 'db2087:3316 (re)pooling @ 100%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12659 and previous config saved to /var/cache/conftool/dbconfig/20200918-105645-kormat.json [production]
10:45 <jiji@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'push-notifications' for release 'main'. [production]
10:41 <kormat@cumin1001> dbctl commit (dc=all): 'db2087:3316 (re)pooling @ 75%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12658 and previous config saved to /var/cache/conftool/dbconfig/20200918-104141-kormat.json [production]
10:35 <jiji@deploy1001> helmfile [eqiad] Ran 'sync' command on namespace 'push-notifications' for release 'main'. [production]
10:34 <hnowlan@deploy1001> helmfile [eqiad] Ran 'sync' command on namespace 'kube-system' for release 'calico-policy-controller'. [production]
10:31 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'kube-system' for release 'calico-policy-controller'. [production]
10:28 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'kube-system' for release 'calico-policy-controller'. [production]
10:26 <kormat@cumin1001> dbctl commit (dc=all): 'db2087:3316 (re)pooling @ 50%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12657 and previous config saved to /var/cache/conftool/dbconfig/20200918-102638-kormat.json [production]
10:11 <kormat@cumin1001> dbctl commit (dc=all): 'db2087:3316 (re)pooling @ 25%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12656 and previous config saved to /var/cache/conftool/dbconfig/20200918-101135-kormat.json [production]
09:55 <kormat@cumin1001> dbctl commit (dc=all): 'db2087:3316 depooling: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12655 and previous config saved to /var/cache/conftool/dbconfig/20200918-095554-kormat.json [production]
09:55 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:55 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:47 <twentyafterfour> deployed hotfix for T263063 to phab1001 [production]
09:47 <jayme> deleting some random pods in kubernetes staging to rebalance load back on kubestage1001 - T262527 [production]
09:46 <jayme> uncordoned kubestage1001 - T262527 [production]
09:46 <kormat@cumin1001> dbctl commit (dc=all): 'db2124 (re)pooling @ 100%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12654 and previous config saved to /var/cache/conftool/dbconfig/20200918-094608-kormat.json [production]
09:31 <kormat@cumin1001> dbctl commit (dc=all): 'db2124 (re)pooling @ 80%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12653 and previous config saved to /var/cache/conftool/dbconfig/20200918-093105-kormat.json [production]
09:24 <klausman@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:22 <klausman@cumin1001> START - Cookbook sre.hosts.downtime [production]