2022-02-04
11:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 100%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20174 and previous config saved to /var/cache/conftool/dbconfig/20220204-114117-root.json [production]
11:26 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 75%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20173 and previous config saved to /var/cache/conftool/dbconfig/20220204-112613-root.json [production]
11:14 <jmm@cumin2002> START - Cookbook sre.hosts.reimage for host ganeti1020.eqiad.wmnet with OS buster [production]
11:13 <akosiaris@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:11 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 50%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20172 and previous config saved to /var/cache/conftool/dbconfig/20220204-111110-root.json [production]
11:07 <akosiaris@cumin1001> START - Cookbook sre.dns.netbox [production]
11:04 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove all special groups from s1 codfw T263127', diff saved to https://phabricator.wikimedia.org/P20171 and previous config saved to /var/cache/conftool/dbconfig/20220204-110427-marostegui.json [production]
10:56 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 25%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20170 and previous config saved to /var/cache/conftool/dbconfig/20220204-105606-root.json [production]
10:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 10%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20165 and previous config saved to /var/cache/conftool/dbconfig/20220204-104102-root.json [production]
10:40 <moritzm> rebalancing row A in ganeti/eqiad, all nodes of that row are now running Buster T296721 [production]
10:03 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti1008.eqiad.wmnet to ganeti01.svc.eqiad.wmnet [production]
10:02 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti1008.eqiad.wmnet to ganeti01.svc.eqiad.wmnet [production]
09:58 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1008.eqiad.wmnet [production]
09:53 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1008.eqiad.wmnet [production]
08:20 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove watchlist group from s4 eqiad T263127', diff saved to https://phabricator.wikimedia.org/P20164 and previous config saved to /var/cache/conftool/dbconfig/20220204-082010-marostegui.json [production]
07:18 <elukey> `git checkout main.html` on miscweb1002:/srv/org/wikidata/query to avoid puppet corrective actions (and the host being listed in alarms) [production]
07:09 <elukey> cleanup wmf_auto_restart_prometheus-mysqld-exporter@analytics-meta on an-test-coord1001 and unmasked wmf_auto_restart_prometheus-mysqld-exporter (now used) [production]
07:03 <elukey> clean up wmf_auto_restart_prometheus-mysqld-exporter@matomo on matomo1002 (not used anymore, listed as failed) [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1096:3316 schema change', diff saved to https://phabricator.wikimedia.org/P20163 and previous config saved to /var/cache/conftool/dbconfig/20220204-070003-marostegui.json [production]
05:59 <legoktm> uploaded pygments 2.11.2 to apt.wm.o (T298399) [production]
02:48 <ryankemper@cumin1001> START - Cookbook sre.hosts.decommission for hosts elastic2035.codfw.wmnet [production]
02:42 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=99) for hosts elastic2035.codfw.wmnet [production]
02:41 <ryankemper@cumin1001> START - Cookbook sre.hosts.decommission for hosts elastic2035.codfw.wmnet [production]
01:08 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
01:06 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
01:06 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
01:05 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
01:04 <brennen> for-real end of utc late backport & config window [production]
01:04 <brennen@deploy1002> Synchronized php-1.38.0-wmf.20/extensions/Thanks/modules/ext.thanks.flowthank.js: Backport: [[gerrit:759319|Correct attribute for flow thanks (T300831)]] (duration: 00m 49s) [production]
00:50 <brennen> reopening utc late backport window for [[gerrit:759319|Correct attribute for flow thanks (T300831)]] [production]
00:15 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
00:12 <cjming> end of UTC late backport & config window [production]
00:11 <cjming@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:759560|Update icons, wordmark for test wikis (T299512)]] (duration: 00m 49s) [production]
00:11 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
00:10 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
00:10 <cjming@deploy1002> Synchronized static/images/mobile/copyright/: Config: [[gerrit:759560|Update icons, wordmark for test wikis (T299512)]] (duration: 00m 53s) [production]
00:09 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
2022-02-03
23:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318 (T300402)', diff saved to https://phabricator.wikimedia.org/P20159 and previous config saved to /var/cache/conftool/dbconfig/20220203-233447-marostegui.json [production]
23:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P20158 and previous config saved to /var/cache/conftool/dbconfig/20220203-231942-marostegui.json [production]
23:15 <ryankemper> T294805 Added a silence on alerts.wikimedia.org for `CirrusSearchJVMGCOldPoolFlatlined` [production]
23:04 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P20157 and previous config saved to /var/cache/conftool/dbconfig/20220203-230437-marostegui.json [production]
22:49 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318 (T300402)', diff saved to https://phabricator.wikimedia.org/P20156 and previous config saved to /var/cache/conftool/dbconfig/20220203-224933-marostegui.json [production]
22:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1101:3318 (T300402)', diff saved to https://phabricator.wikimedia.org/P20155 and previous config saved to /var/cache/conftool/dbconfig/20220203-223923-marostegui.json [production]
22:39 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
22:39 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
22:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1177 (T300402)', diff saved to https://phabricator.wikimedia.org/P20154 and previous config saved to /var/cache/conftool/dbconfig/20220203-223916-marostegui.json [production]
22:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1177', diff saved to https://phabricator.wikimedia.org/P20153 and previous config saved to /var/cache/conftool/dbconfig/20220203-222411-marostegui.json [production]
22:18 <ryankemper> T294805 Monitoring https://grafana.wikimedia.org/d/000000455/elasticsearch-percentiles?orgId=1&var-cirrus_group=eqiad&var-cluster=elasticsearch&var-exported_cluster=production-search&var-smoothing=1&refresh=1m&from=now-3h&to=now as new hosts join the fleet [production]
22:18 <ryankemper> T294805 Bringing in new eqiad hosts in batches of 4, with 15-20 mins between batches: `ryankemper@cumin1001:~$ sudo -E cumin -b 4 'elastic1*' 'sudo run-puppet-agent --force; sudo run-puppet-agent; sleep 900'` tmux session `es_eqiad` [production]
22:13 <ryankemper> T294805 https://gerrit.wikimedia.org/r/c/operations/puppet/+/759617/ fixed the dependency issues, going to start bringing new hosts into service [production]