2024-05-15
12:23 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on kubestagetcd[2001-2003].codfw.wmnet with reason: decom [production]
12:23 <jayme@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on kubestagetcd[2001-2003].codfw.wmnet with reason: decom [production]
12:19 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-host for host db1203.eqiad.wmnet [production]
11:52 <aborrero@cumin1002> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host cloudvirt1041.eqiad.wmnet with OS bookworm [production]
11:44 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-role (exit_code=0) for role: openldap::rw [production]
11:34 <mvolz@deploy1002> helmfile [eqiad] DONE helmfile.d/services/zotero: apply [production]
11:33 <mvolz@deploy1002> helmfile [eqiad] START helmfile.d/services/zotero: apply [production]
11:33 <mvolz@deploy1002> helmfile [codfw] DONE helmfile.d/services/zotero: apply [production]
11:32 <mvolz@deploy1002> helmfile [codfw] START helmfile.d/services/zotero: apply [production]
11:31 <mvolz@deploy1002> helmfile [staging] DONE helmfile.d/services/zotero: apply [production]
11:31 <mvolz@deploy1002> helmfile [staging] START helmfile.d/services/zotero: apply [production]
11:29 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-role for role: openldap::rw [production]
11:28 <logmsgbot> lucaswerkmeister-wmde@deploy1002 Finished scap: Backport for [[gerrit:1031485|backend: Fix Unknown column 'Array' in 'where clause' (T364974)]], [[gerrit:1031846|backend: Fix Unknown column 'Array' in 'where clause' (T364974)]] (duration: 15m 36s) [production]
11:18 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-host (exit_code=0) for host db1193.eqiad.wmnet [production]
11:16 <logmsgbot> lucaswerkmeister-wmde@deploy1002 lucaswerkmeister-wmde: Continuing with sync [production]
11:15 <logmsgbot> lucaswerkmeister-wmde@deploy1002 lucaswerkmeister-wmde: Backport for [[gerrit:1031485|backend: Fix Unknown column 'Array' in 'where clause' (T364974)]], [[gerrit:1031846|backend: Fix Unknown column 'Array' in 'where clause' (T364974)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
11:13 <logmsgbot> lucaswerkmeister-wmde@deploy1002 Started scap: Backport for [[gerrit:1031485|backend: Fix Unknown column 'Array' in 'where clause' (T364974)]], [[gerrit:1031846|backend: Fix Unknown column 'Array' in 'where clause' (T364974)]] [production]
11:10 <aborrero@cumin1002> START - Cookbook sre.hosts.reimage for host cloudvirt1041.eqiad.wmnet with OS bookworm [production]
11:09 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-host for host db1193.eqiad.wmnet [production]
11:05 <aborrero@cumin1002> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host cloudvirt1041.eqiad.wmnet with OS bookworm [production]
11:03 <logmsgbot> lucaswerkmeister-wmde@deploy1002 Sync cancelled. [production]
10:54 <gmodena@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-page-content-change-enrich: apply [production]
10:54 <gmodena@deploy1002> helmfile [codfw] START helmfile.d/services/mw-page-content-change-enrich: apply [production]
10:53 <gmodena@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-page-content-change-enrich: apply [production]
10:53 <gmodena@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-page-content-change-enrich: apply [production]
10:53 <logmsgbot> lucaswerkmeister-wmde@deploy1002 zabe and lucaswerkmeister-wmde: Backport for [[gerrit:1031484|Fix capitalization of Subquery (T364974)]], [[gerrit:1031483|Fix capitalization of Subquery (T364974)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
10:52 <aborrero@cumin1002> START - Cookbook sre.hosts.reimage for host cloudvirt1041.eqiad.wmnet with OS bookworm [production]
10:50 <logmsgbot> lucaswerkmeister-wmde@deploy1002 Started scap: Backport for [[gerrit:1031484|Fix capitalization of Subquery (T364974)]], [[gerrit:1031483|Fix capitalization of Subquery (T364974)]] [production]
10:49 <gmodena@deploy1002> helmfile [staging] DONE helmfile.d/services/mw-page-content-change-enrich: apply [production]
10:49 <gmodena@deploy1002> helmfile [staging] START helmfile.d/services/mw-page-content-change-enrich: apply [production]
10:40 <jiji@deploy1002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]
10:32 <dcausse@deploy1002> helmfile [eqiad] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
10:32 <dcausse@deploy1002> helmfile [eqiad] START helmfile.d/services/cirrus-streaming-updater: apply [production]
10:31 <cmooney@cumin1002> END (FAIL) - Cookbook sre.network.tls (exit_code=99) for network device cloudsw1-e4-eqiad [production]
10:29 <cmooney@cumin1002> START - Cookbook sre.network.tls for network device cloudsw1-e4-eqiad [production]
10:28 <jiji@deploy1002> helmfile [eqiad] DONE helmfile.d/admin 'apply'. [production]
10:28 <jiji@deploy1002> helmfile [eqiad] START helmfile.d/admin 'apply'. [production]
10:20 <jiji@deploy1002> helmfile [staging-codfw] START helmfile.d/admin 'apply'. [production]
10:15 <dcausse@deploy1002> helmfile [codfw] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
10:15 <dcausse@deploy1002> helmfile [codfw] START helmfile.d/services/cirrus-streaming-updater: apply [production]
10:09 <jiji@deploy1002> helmfile [codfw] DONE helmfile.d/admin 'apply'. [production]
10:09 <jiji@deploy1002> helmfile [codfw] START helmfile.d/admin 'apply'. [production]
10:06 <btullis@deploy1002> Finished deploy [airflow-dags/analytics@ecf603d]: (no justification provided) (duration: 00m 30s) [production]
10:06 <btullis@deploy1002> Started deploy [airflow-dags/analytics@ecf603d]: (no justification provided) [production]
10:06 <btullis@deploy1002> Finished deploy [airflow-dags/analytics_test@ecf603d]: (no justification provided) (duration: 00m 11s) [production]
10:06 <btullis@deploy1002> Started deploy [airflow-dags/analytics_test@ecf603d]: (no justification provided) [production]
10:02 <dcausse@deploy1002> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
10:02 <dcausse@deploy1002> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
09:59 <dcausse@deploy1002> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
09:59 <dcausse@deploy1002> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]