2024-06-26
08:39 <hashar@deploy1002> Finished deploy [gerrit/gerrit@2fc2b03]: Gerrit to 3.10 on gerrit1003 # T367419 (duration: 00m 43s) [production]
08:39 <hashar@deploy1002> Started deploy [gerrit/gerrit@2fc2b03]: Gerrit to 3.10 on gerrit1003 # T367419 [production]
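The two Gerrit entries above are a scap3 deployment of revision 2fc2b03 from deploy1002 to a single host. A minimal sketch of what such a per-host rollout might look like, assuming a scap3-managed gerrit/gerrit checkout and a host-limit flag (both are assumptions, not taken from the log):

    # Hypothetical sketch, run on the deployment host; path and flag names are assumptions.
    cd /srv/deployment/gerrit/gerrit          # assumed location of the scap3 checkout
    git fetch && git checkout 2fc2b03         # revision named in the log
    scap deploy --limit gerrit1003 'Gerrit to 3.10 on gerrit1003 # T367419'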
08:38 <slyngshede@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on idp-test1002.wikimedia.org with reason: host reimage [production]
08:37 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1203 (T364069)', diff saved to https://phabricator.wikimedia.org/P65462 and previous config saved to /var/cache/conftool/dbconfig/20240626-083733-marostegui.json [production]
08:37 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1203.eqiad.wmnet with reason: Maintenance [production]
08:37 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1203.eqiad.wmnet with reason: Maintenance [production]
08:37 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1193 (T364069)', diff saved to https://phabricator.wikimedia.org/P65461 and previous config saved to /var/cache/conftool/dbconfig/20240626-083711-marostegui.json [production]
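The db1203/db1193 entries at 08:37 show the standard database maintenance cycle: downtime the host for a day, depool it with dbctl, do the work, then repool it in stages (which is why the same 'Repooling after maintenance' message recurs for db1193 through the morning). A minimal sketch of one such round, assuming dbctl's instance/config subcommands and the downtime cookbook's flag names:

    # Hypothetical sketch; flag and subcommand spellings are assumptions.
    sudo cookbook sre.hosts.downtime --days 1 -r "Maintenance" db1203.eqiad.wmnet
    dbctl instance db1203 depool
    dbctl config commit -m "Depooling db1203 (T364069)"
    # ... run the maintenance on db1203 ...
    dbctl instance db1203 pool
    dbctl config commit -m "Repooling after maintenance db1203 (T364069)"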
08:32 <hashar@deploy1002> Finished deploy [gerrit/gerrit@2fc2b03]: Gerrit to 3.10 on gerrit2002 # T367419 (duration: 00m 48s) [production]
08:31 <hashar@deploy1002> Started deploy [gerrit/gerrit@2fc2b03]: Gerrit to 3.10 on gerrit2002 # T367419 [production]
08:25 <slyngshede@cumin1002> START - Cookbook sre.hosts.reimage for host idp-test1002.wikimedia.org with OS bookworm [production]
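The 08:25 entry kicks off a reimage of idp-test1002 to Debian bookworm; the 2-hour downtime logged at 08:38 with reason 'host reimage' is set by the reimage cookbook itself as part of that run. A minimal sketch of the invocation, assuming the cookbook accepts an --os option and the short hostname:

    # Hypothetical sketch, run from a cumin host; the exact argument form is an assumption.
    sudo cookbook sre.hosts.reimage --os bookworm idp-test1002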
08:22 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1193', diff saved to https://phabricator.wikimedia.org/P65460 and previous config saved to /var/cache/conftool/dbconfig/20240626-082204-marostegui.json [production]
08:11 <jynus@cumin1002> dbctl commit (dc=all): 'Depool es1025 for backups T363812', diff saved to https://phabricator.wikimedia.org/P65458 and previous config saved to /var/cache/conftool/dbconfig/20240626-081130-jynus.json [production]
08:10 <marostegui@cumin1002> dbctl commit (dc=all): 'Set es1023 as es5 master - this is a NOOP', diff saved to https://phabricator.wikimedia.org/P65457 and previous config saved to /var/cache/conftool/dbconfig/20240626-081014-marostegui.json [production]
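The 08:10 entry sets es1023 as the es5 master in dbctl's view of the topology; since es1023 was already serving in that role, only the configuration record changes, hence 'this is a NOOP'. A minimal sketch of that kind of change, assuming a section-level set-master subcommand:

    # Hypothetical sketch; the section/set-master spelling is an assumption.
    dbctl --scope eqiad section es5 set-master es1023
    dbctl config commit -m "Set es1023 as es5 master - this is a NOOP"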
08:06 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1193', diff saved to https://phabricator.wikimedia.org/P65456 and previous config saved to /var/cache/conftool/dbconfig/20240626-080657-marostegui.json [production]
08:06 <marostegui@cumin1002> dbctl commit (dc=all): 'Fix weights for es2021 and es2024', diff saved to https://phabricator.wikimedia.org/P65455 and previous config saved to /var/cache/conftool/dbconfig/20240626-080649-marostegui.json [production]
07:59 <jynus@cumin1002> dbctl commit (dc=all): 'Depool es1022 for backups T363812', diff saved to https://phabricator.wikimedia.org/P65454 and previous config saved to /var/cache/conftool/dbconfig/20240626-075946-jynus.json [production]
07:54 <jynus@cumin1002> dbctl commit (dc=all): 'Repool es2025 at 100% load', diff saved to https://phabricator.wikimedia.org/P65453 and previous config saved to /var/cache/conftool/dbconfig/20240626-075428-jynus.json [production]
07:50 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1193 (T364069)', diff saved to https://phabricator.wikimedia.org/P65451 and previous config saved to /var/cache/conftool/dbconfig/20240626-075043-marostegui.json [production]
07:44 <kevinbazira@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'experimental' for release 'main' . [production]
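The 07:44 entry is a helmfile deployment of the 'main' release in the 'experimental' namespace on the ml-staging-codfw cluster. A minimal sketch of an equivalent stock helmfile invocation (the environment/selector wiring is an assumption; in production this goes through the deployment-charts tooling):

    # Hypothetical sketch with plain helmfile; environment and selector are assumptions.
    helmfile --environment ml-staging-codfw --selector name=main sync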
07:33 <jynus@cumin1002> dbctl commit (dc=all): 'Repool es2025 at 50% load', diff saved to https://phabricator.wikimedia.org/P65449 and previous config saved to /var/cache/conftool/dbconfig/20240626-073304-jynus.json [production]
07:28 <jynus@cumin1002> dbctl commit (dc=all): 'Repool es2025 with low load for warmup', diff saved to https://phabricator.wikimedia.org/P65448 and previous config saved to /var/cache/conftool/dbconfig/20240626-072810-jynus.json [production]
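The es2025 entries (07:28, 07:33, and 07:54 above) show the usual warm-up pattern for a replica that has been out of rotation: pool it at a low percentage first, then 50%, then 100% once its caches are warm. A minimal sketch of that staged repool, assuming dbctl's percentage flag (step values and pause are illustrative, not from the log):

    # Hypothetical sketch; -p values and the sleep are assumptions based on the log wording.
    for pct in 10 50 100; do
        dbctl instance es2025 pool -p "$pct"
        dbctl config commit -m "Repool es2025 at ${pct}% load"
        sleep 300   # let the buffer pool warm up before the next step
    done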
07:03 <moritzm> installing emacs security updates [production]
06:56 <marostegui@cumin1002> dbctl commit (dc=all): 'Pool db2136 - running 10.11 with minimum weight T365805', diff saved to https://phabricator.wikimedia.org/P65447 and previous config saved to /var/cache/conftool/dbconfig/20240626-065636-marostegui.json [production]
06:52 <marostegui> Enable slow query log on db2136 running 10.11 T365805 [production]
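Enabling the slow query log on db2136 (06:52) is a runtime change on the newly installed MariaDB 10.11 instance. A minimal sketch of that toggle, assuming it is done with SET GLOBAL and a one-second threshold (both assumptions):

    # Hypothetical sketch; the threshold and the SET GLOBAL approach are assumptions.
    sudo mysql -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 1;"
    sudo mysql -e "SHOW GLOBAL VARIABLES LIKE 'slow_query%';"   # verify the change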
06:39 <marostegui> Install mariadb 10.11 on s4 db2136 (depooled for now) T365805 [production]
06:31 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2136 T365805', diff saved to https://phabricator.wikimedia.org/P65446 and previous config saved to /var/cache/conftool/dbconfig/20240626-063109-root.json [production]
06:01 <marostegui> dbmaint eqiad Drop ipblocks in s1 T367632 [production]
05:59 <marostegui> dbmaint eqiad Drop ipblocks in s3 T367632 [production]
05:57 <marostegui> dbmaint eqiad Drop ipblocks in s4 T367632 [production]
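The dbmaint entries drop the legacy ipblocks table section by section in eqiad (T367632). Per section, this amounts to dropping the table on every wiki database hosted there; a minimal sketch, assuming a plain mysql loop over a wiki list rather than the schema-change tooling actually used:

    # Hypothetical sketch; s1.dblist and direct mysql access are assumptions.
    while read -r wiki; do
        sudo mysql "$wiki" -e "DROP TABLE IF EXISTS ipblocks;"
    done < s1.dblist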
05:39 <ryankemper> [Elastic] `curl -s -X POST https://search.svc.eqiad.wmnet:9243/_cluster/reroute?retry_failed=true` did the trick. Shard initializing, cluster should be back to green soon enough [production]
05:36 <ryankemper> [Elastic] One unassigned shard; cluster status yellow. Not a big deal, looks like `shard has exceeded the maximum number of retries [5] on failed allocation attempts`, I'll try a manual `/_cluster/reroute?retry_failed=true` [production]
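The two [Elastic] entries (read bottom-up) are a compact runbook: one shard stayed unassigned after exhausting its five allocation retries, and asking the cluster to retry failed allocations cleared it. A sketch of the diagnosis-plus-fix sequence using the same Elasticsearch cluster APIs named in the log (the health and allocation-explain calls are additions for context, not something the log records):

    # Why is the cluster yellow, and why is the shard unassigned?
    curl -s 'https://search.svc.eqiad.wmnet:9243/_cluster/health?pretty'
    curl -s 'https://search.svc.eqiad.wmnet:9243/_cluster/allocation/explain?pretty'
    # Retry allocations that hit the max-retries limit (the fix used above).
    curl -s -X POST 'https://search.svc.eqiad.wmnet:9243/_cluster/reroute?retry_failed=true'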
05:01 <marostegui> dbmaint eqiad Drop ipblocks in s5 T367632 [production]
04:53 <marostegui> dbmaint eqiad Drop ipblocks in s2 T367632 [production]
04:51 <marostegui> dbmaint eqiad Drop ipblocks in s8 T367632 [production]
03:39 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1193 (T364069)', diff saved to https://phabricator.wikimedia.org/P65445 and previous config saved to /var/cache/conftool/dbconfig/20240626-033955-marostegui.json [production]
03:39 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1193.eqiad.wmnet with reason: Maintenance [production]
03:39 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1193.eqiad.wmnet with reason: Maintenance [production]
03:39 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1178 (T364069)', diff saved to https://phabricator.wikimedia.org/P65444 and previous config saved to /var/cache/conftool/dbconfig/20240626-033933-marostegui.json [production]
03:24 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1178', diff saved to https://phabricator.wikimedia.org/P65443 and previous config saved to /var/cache/conftool/dbconfig/20240626-032426-marostegui.json [production]
03:09 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1178', diff saved to https://phabricator.wikimedia.org/P65442 and previous config saved to /var/cache/conftool/dbconfig/20240626-030919-marostegui.json [production]
02:54 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1178 (T364069)', diff saved to https://phabricator.wikimedia.org/P65441 and previous config saved to /var/cache/conftool/dbconfig/20240626-025412-marostegui.json [production]
00:21 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2138 (T367856)', diff saved to https://phabricator.wikimedia.org/P65440 and previous config saved to /var/cache/conftool/dbconfig/20240626-002103-marostegui.json [production]
00:20 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2138.codfw.wmnet with reason: Maintenance [production]
00:20 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2138.codfw.wmnet with reason: Maintenance [production]
00:20 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2126 (T367856)', diff saved to https://phabricator.wikimedia.org/P65439 and previous config saved to /var/cache/conftool/dbconfig/20240626-002041-marostegui.json [production]
00:05 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2126', diff saved to https://phabricator.wikimedia.org/P65438 and previous config saved to /var/cache/conftool/dbconfig/20240626-000534-marostegui.json [production]