2022-02-04
07:09 <elukey> cleanup wmf_auto_restart_prometheus-mysqld-exporter@analytics-meta on an-test-coord1001 and unmasked wmf_auto_restart_prometheus-mysqld-exporter (now used) [production]
07:03 <elukey> clean up wmf_auto_restart_prometheus-mysqld-exporter@matomo on matomo1002 (not used anymore, listed as failed) [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1096:3316 schema change', diff saved to https://phabricator.wikimedia.org/P20163 and previous config saved to /var/cache/conftool/dbconfig/20220204-070003-marostegui.json [production]
05:59 <legoktm> uploaded pygments 2.11.2 to apt.wm.o (T298399) [production]
02:48 <ryankemper@cumin1001> START - Cookbook sre.hosts.decommission for hosts elastic2035.codfw.wmnet [production]
02:42 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=99) for hosts elastic2035.codfw.wmnet [production]
02:41 <ryankemper@cumin1001> START - Cookbook sre.hosts.decommission for hosts elastic2035.codfw.wmnet [production]
01:08 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
01:06 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
01:06 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
01:05 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
01:04 <brennen> for-real end of utc late backport & config window [production]
01:04 <brennen@deploy1002> Synchronized php-1.38.0-wmf.20/extensions/Thanks/modules/ext.thanks.flowthank.js: Backport: [[gerrit:759319|Correct attribute for flow thanks (T300831)]] (duration: 00m 49s) [production]
00:50 <brennen> reopening utc late backport window for [[gerrit:759319|Correct attribute for flow thanks (T300831)]] [production]
00:15 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
00:12 <cjming> end of UTC late backport & config window [production]
00:11 <cjming@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:759560|Update icons, wordmark for test wikis (T299512)]] (duration: 00m 49s) [production]
00:11 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
00:10 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
00:10 <cjming@deploy1002> Synchronized static/images/mobile/copyright/: Config: [[gerrit:759560|Update icons, wordmark for test wikis (T299512)]] (duration: 00m 53s) [production]
00:09 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
2022-02-03
23:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318 (T300402)', diff saved to https://phabricator.wikimedia.org/P20159 and previous config saved to /var/cache/conftool/dbconfig/20220203-233447-marostegui.json [production]
23:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P20158 and previous config saved to /var/cache/conftool/dbconfig/20220203-231942-marostegui.json [production]
23:15 <ryankemper> T294805 Added a silence on alerts.wikimedia.org for `CirrusSearchJVMGCOldPoolFlatlined` [production]
23:04 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P20157 and previous config saved to /var/cache/conftool/dbconfig/20220203-230437-marostegui.json [production]
22:49 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318 (T300402)', diff saved to https://phabricator.wikimedia.org/P20156 and previous config saved to /var/cache/conftool/dbconfig/20220203-224933-marostegui.json [production]
22:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1101:3318 (T300402)', diff saved to https://phabricator.wikimedia.org/P20155 and previous config saved to /var/cache/conftool/dbconfig/20220203-223923-marostegui.json [production]
22:39 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
22:39 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
22:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1177 (T300402)', diff saved to https://phabricator.wikimedia.org/P20154 and previous config saved to /var/cache/conftool/dbconfig/20220203-223916-marostegui.json [production]
22:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1177', diff saved to https://phabricator.wikimedia.org/P20153 and previous config saved to /var/cache/conftool/dbconfig/20220203-222411-marostegui.json [production]
22:18 <ryankemper> T294805 Monitoring https://grafana.wikimedia.org/d/000000455/elasticsearch-percentiles?orgId=1&var-cirrus_group=eqiad&var-cluster=elasticsearch&var-exported_cluster=production-search&var-smoothing=1&refresh=1m&from=now-3h&to=now as new hosts join the fleet [production]
22:18 <ryankemper> T294805 Bringing in new eqiad hosts in batches of 4, with 15-20 mins between batches: `ryankemper@cumin1001:~$ sudo -E cumin -b 4 'elastic1*' 'sudo run-puppet-agent --force; sudo run-puppet-agent; sleep 900'` tmux session `es_eqiad` [production]
22:13 <ryankemper> T294805 https://gerrit.wikimedia.org/r/c/operations/puppet/+/759617/ fixed the dependency issues, going to start bringing new hosts into service [production]
22:09 <volans@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
22:09 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1177', diff saved to https://phabricator.wikimedia.org/P20152 and previous config saved to /var/cache/conftool/dbconfig/20220203-220906-marostegui.json [production]
22:05 <eileen> civicrm revision 7dcdc017 -> 04cbf35b [production]
22:04 <volans@cumin2002> START - Cookbook sre.dns.netbox [production]
21:54 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1177 (T300402)', diff saved to https://phabricator.wikimedia.org/P20150 and previous config saved to /var/cache/conftool/dbconfig/20220203-215402-marostegui.json [production]
21:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1177 (T300402)', diff saved to https://phabricator.wikimedia.org/P20149 and previous config saved to /var/cache/conftool/dbconfig/20220203-215154-marostegui.json [production]
21:51 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1177.eqiad.wmnet with reason: Maintenance [production]
21:51 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1177.eqiad.wmnet with reason: Maintenance [production]
21:51 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on 12 hosts with reason: Maintenance [production]
21:51 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on 12 hosts with reason: Maintenance [production]
21:51 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2079.codfw.wmnet with reason: Maintenance [production]
21:51 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2079.codfw.wmnet with reason: Maintenance [production]
21:51 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
21:51 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
21:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1172 (T300402)', diff saved to https://phabricator.wikimedia.org/P20148 and previous config saved to /var/cache/conftool/dbconfig/20220203-215121-marostegui.json [production]
21:36 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1172', diff saved to https://phabricator.wikimedia.org/P20147 and previous config saved to /var/cache/conftool/dbconfig/20220203-213616-marostegui.json [production]