2024-08-25
15:32 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2165 (T367856)', diff saved to https://phabricator.wikimedia.org/P67754 and previous config saved to /var/cache/conftool/dbconfig/20240825-153206-marostegui.json [production]
15:32 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 7:00:00 on db2165.codfw.wmnet with reason: Maintenance [production]
15:31 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 7:00:00 on db2165.codfw.wmnet with reason: Maintenance [production]
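The downtime entries above come from the Spicerack sre.hosts.downtime cookbook, run on a cumin host before maintenance. A minimal sketch of the invocation that would produce the logged "2 days, 7:00:00" window; the duration and reason flag spellings are assumptions, not copied from the log:

    # silence alerting for db2165 ahead of maintenance (flag names assumed)
    sudo cookbook sre.hosts.downtime --days 2 --hours 7 --reason "Maintenance" 'db2165.codfw.wmnet'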
15:31 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2164 (T367856)', diff saved to https://phabricator.wikimedia.org/P67753 and previous config saved to /var/cache/conftool/dbconfig/20240825-153144-marostegui.json [production]
15:16 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2164', diff saved to https://phabricator.wikimedia.org/P67752 and previous config saved to /var/cache/conftool/dbconfig/20240825-151637-marostegui.json [production]
15:01 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2164', diff saved to https://phabricator.wikimedia.org/P67751 and previous config saved to /var/cache/conftool/dbconfig/20240825-150130-marostegui.json [production]
14:46 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2164 (T367856)', diff saved to https://phabricator.wikimedia.org/P67750 and previous config saved to /var/cache/conftool/dbconfig/20240825-144623-marostegui.json [production]
08:05 <oblivian@cumin1002> dbctl commit (dc=all): 'db1161 (re)pooling @ 100%: Replication fixed', diff saved to https://phabricator.wikimedia.org/P67749 and previous config saved to /var/cache/conftool/dbconfig/20240825-080544-oblivian.json [production]
07:50 <oblivian@cumin1002> dbctl commit (dc=all): 'db1161 (re)pooling @ 75%: Replication fixed', diff saved to https://phabricator.wikimedia.org/P67748 and previous config saved to /var/cache/conftool/dbconfig/20240825-075038-oblivian.json [production]
07:35 <oblivian@cumin1002> dbctl commit (dc=all): 'db1161 (re)pooling @ 50%: Replication fixed', diff saved to https://phabricator.wikimedia.org/P67747 and previous config saved to /var/cache/conftool/dbconfig/20240825-073533-oblivian.json [production]
07:20 <oblivian@cumin1002> dbctl commit (dc=all): 'db1161 (re)pooling @ 25%: Replication fixed', diff saved to https://phabricator.wikimedia.org/P67746 and previous config saved to /var/cache/conftool/dbconfig/20240825-072027-oblivian.json [production]
07:05 <oblivian@cumin1002> dbctl commit (dc=all): 'db1161 (re)pooling @ 10%: Replication fixed', diff saved to https://phabricator.wikimedia.org/P67745 and previous config saved to /var/cache/conftool/dbconfig/20240825-070522-oblivian.json [production]
06:57 <_joe_> repairing mgwiktionary.pagelinks on db1161 [production]
06:12 <oblivian@cumin1002> dbctl commit (dc=all): 'depooling db1161, broken replica', diff saved to https://phabricator.wikimedia.org/P67744 and previous config saved to /var/cache/conftool/dbconfig/20240825-061206-oblivian.json [production]
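The db1161 sequence above (depool the broken replica, repair the table, then repool in steps at 10/25/50/75/100%) is the usual dbctl workflow on a cumin host. A hedged sketch, assuming dbctl's instance-level percentage pooling and the config commit message flag; each commit is what produces the "diff saved to ..." lines above:

    # take the broken replica out of rotation
    dbctl instance db1161 depool
    dbctl config commit -m "depooling db1161, broken replica"
    # once replication is fixed, ramp traffic back up gradually
    dbctl instance db1161 pool -p 10
    dbctl config commit -m "db1161 (re)pooling @ 10%: Replication fixed"
    # repeat at 25, 50, 75 and 100% while watching replication lag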
2024-08-23
22:26 <eileen> civicrm upgraded from e629834c to 75c86184 (that didn't turn out to have anything relevant to the new deduper error) [production]
16:50 <conniecc1@deploy1003> Finished deploy [airflow-dags/analytics_product@c55c7de]: (no justification provided) (duration: 00m 03s) [production]
16:50 <conniecc1@deploy1003> Started deploy [airflow-dags/analytics_product@c55c7de]: (no justification provided) [production]
16:45 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2205 (T371742)', diff saved to https://phabricator.wikimedia.org/P67740 and previous config saved to /var/cache/conftool/dbconfig/20240823-164554-ladsgroup.json [production]
16:45 <nettrom@deploy1003> Finished deploy [airflow-dags/analytics_product@c55c7de]: (no justification provided) (duration: 00m 17s) [production]
16:45 <nettrom@deploy1003> Started deploy [airflow-dags/analytics_product@c55c7de]: (no justification provided) [production]
16:37 <jhancock@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:37 <jhancock@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: updating mgmt for frack servers in codfw - jhancock@cumin2002" [production]
16:37 <jhancock@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: updating mgmt for frack servers in codfw - jhancock@cumin2002" [production]
16:34 <jhancock@cumin2002> START - Cookbook sre.dns.netbox [production]
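The sre.dns.netbox cookbook regenerates the Netbox-managed DNS records and chains into sre.puppet.sync-netbox-hiera (as logged above) to refresh the Netbox-derived hiera data. A sketch of the likely invocation; passing the reason as a positional argument is an assumption based on the "Triggered by" message above:

    # regenerate and deploy Netbox-managed DNS records, recording why (argument form assumed)
    sudo cookbook sre.dns.netbox 'updating mgmt for frack servers in codfw'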
16:33 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/spark-history: apply [production]
16:32 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/spark-history: apply [production]
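The helmfile START/DONE pairs above are the standard Kubernetes service deploy from deploy1003. A sketch assuming the usual /srv/deployment-charts checkout and the -e/-i helmfile flags (both assumptions; only the helmfile.d path appears in the log):

    # deploy the spark-history service to the dse-k8s-eqiad cluster
    cd /srv/deployment-charts/helmfile.d/dse-k8s-services/spark-history
    helmfile -e dse-k8s-eqiad -i apply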
16:30 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2205', diff saved to https://phabricator.wikimedia.org/P67738 and previous config saved to /var/cache/conftool/dbconfig/20240823-163047-ladsgroup.json [production]
16:19 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cephosd1001.eqiad.wmnet with OS bookworm [production]
16:16 <bearloga@deploy1003> Finished deploy [airflow-dags/wmde@c55c7de]: (no justification provided) (duration: 00m 06s) [production]
16:16 <bearloga@deploy1003> Started deploy [airflow-dags/wmde@c55c7de]: (no justification provided) [production]
16:15 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2205', diff saved to https://phabricator.wikimedia.org/P67737 and previous config saved to /var/cache/conftool/dbconfig/20240823-161540-ladsgroup.json [production]
16:00 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2205 (T371742)', diff saved to https://phabricator.wikimedia.org/P67736 and previous config saved to /var/cache/conftool/dbconfig/20240823-160033-ladsgroup.json [production]
15:59 <claime> Running homer 'cr*codfw*' commit T372878 [production]
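The homer runs above push router/switch configuration changes; the logged target expression selects the codfw core routers. A sketch of the likely workflow, with the diff step and the commit-message form assumed rather than taken from the log:

    # review the pending change, then commit it to all matching devices
    homer 'cr*codfw*' diff
    homer 'cr*codfw*' commit 'Network change description (T372878)'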
15:59 <cgoubert@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) pool for host wikikube-worker2027.codfw.wmnet [production]
15:59 <cgoubert@cumin1002> START - Cookbook sre.k8s.pool-depool-node pool for host wikikube-worker2027.codfw.wmnet [production]
15:54 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cephosd1001.eqiad.wmnet with reason: host reimage [production]
15:53 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 17:00:00 on wdqs[1023-1024].eqiad.wmnet with reason: noisy alerts related to graph split T337013 [production]
15:52 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 17:00:00 on wdqs[1023-1024].eqiad.wmnet with reason: noisy alerts related to graph split T337013 [production]
15:52 <btullis@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on cephosd1001.eqiad.wmnet with reason: host reimage [production]
15:38 <claime> Running homer 'lsw1-a6-codfw*' commit T372878 [production]
15:35 <cdanis@deploy1003> helmfile [aux-k8s-eqiad] DONE helmfile.d/aux-k8s-eqiad-services/jaeger: apply [production]
15:35 <cdanis@deploy1003> helmfile [aux-k8s-eqiad] START helmfile.d/aux-k8s-eqiad-services/jaeger: apply [production]
15:33 <btullis@cumin1002> START - Cookbook sre.hosts.reimage for host cephosd1001.eqiad.wmnet with OS bookworm [production]
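The reimage at the bottom of this block is the standard sre.hosts.reimage cookbook, which reinstalls the host with the named OS and manages its own downtime window (hence the paired sre.hosts.downtime entries above). A sketch of the likely invocation; the positional short hostname is an assumption:

    # reinstall cephosd1001 with Debian bookworm and bring it back under puppet (argument form assumed)
    sudo cookbook sre.hosts.reimage --os bookworm cephosd1001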