2025-06-16
08:21 <ladsgroup@deploy1003> anzx, ladsgroup: Backport for [[gerrit:1159292|IP cap lift for wikipedia workshop - cs.wikipedia on 19June2025 (T396980)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there. [production]
08:21 <stevemunene@cumin1002> START - Cookbook sre.hosts.reboot-single for host an-worker1159.eqiad.wmnet [production]
08:20 <stevemunene@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host an-worker1158.eqiad.wmnet [production]
08:20 <vgutierrez@cumin1003> END (PASS) - Cookbook sre.loadbalancer.admin (exit_code=0) depooling P{lvs7001.magru.wmnet} and A:liberica (T396561) [production]
08:19 <ladsgroup@deploy1003> Started scap sync-world: Backport for [[gerrit:1159292|IP cap lift for wikipedia workshop - cs.wikipedia on 19June2025 (T396980)]] [production]
08:19 <marostegui@cumin1002> dbctl commit (dc=all): 'db2207 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P77980 and previous config saved to /var/cache/conftool/dbconfig/20250616-081922-root.json [production]
08:19 <vgutierrez@cumin1003> START - Cookbook sre.loadbalancer.admin depooling P{lvs7001.magru.wmnet} and A:liberica (T396561) [production]
08:14 <ladsgroup@deploy1003> Finished scap sync-world: Backport for [[gerrit:1156092|mrwiki: add मसूदा (draft) namespace (T396551)]] (duration: 15m 11s) [production]
08:14 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1177', diff saved to https://phabricator.wikimedia.org/P77979 and previous config saved to /var/cache/conftool/dbconfig/20250616-081402-marostegui.json [production]
08:13 <stevemunene@cumin1002> START - Cookbook sre.hosts.reboot-single for host an-worker1158.eqiad.wmnet [production]
08:13 <stevemunene@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host an-worker1157.eqiad.wmnet [production]
08:06 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-wmde: apply [production]
08:06 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-wmde: apply [production]
08:05 <ladsgroup@deploy1003> ladsgroup, anzx: Continuing with sync [production]
08:05 <stevemunene@cumin1002> START - Cookbook sre.hosts.reboot-single for host an-worker1157.eqiad.wmnet [production]
08:04 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-platform-eng: apply [production]
08:04 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-platform-eng: apply [production]
08:04 <stevemunene@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host an-worker1162.eqiad.wmnet [production]
08:03 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-ml: apply [production]
08:03 <ladsgroup@deploy1003> ladsgroup, anzx: Backport for [[gerrit:1156092|mrwiki: add मसूदा (draft) namespace (T396551)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there. [production]
08:03 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-ml: apply [production]
08:00 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-analytics-product: apply [production]
07:59 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-analytics-product: apply [production]
07:59 <ladsgroup@deploy1003> Started scap sync-world: Backport for [[gerrit:1156092|mrwiki: add मसूदा (draft) namespace (T396551)]] [production]
07:58 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1177', diff saved to https://phabricator.wikimedia.org/P77978 and previous config saved to /var/cache/conftool/dbconfig/20250616-075855-marostegui.json [production]
07:57 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-search: apply [production]
07:56 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-search: apply [production]
07:56 <stevemunene@cumin1002> START - Cookbook sre.hosts.reboot-single for host an-worker1162.eqiad.wmnet [production]
07:55 <stevemunene@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on an-worker[1175-1176].eqiad.wmnet with reason: Upgrade an-worker hard drives from 4TB to 8TB group 9 and 10 [production]
07:55 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-research: apply [production]
07:55 <ladsgroup@deploy1003> Finished scap sync-world: Backport for [[gerrit:1156741|Enable sub-referencing on test wiki (T395871)]] (duration: 40m 51s) [production]
07:55 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-research: apply [production]
07:55 <stevemunene@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on an-worker[1149-1153].eqiad.wmnet with reason: Upgrade an-worker hard drives from 4TB to 8TB group 9 and 10 [production]
07:54 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-analytics-test: apply [production]
07:53 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-analytics-test: apply [production]
07:53 <stevemunene@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host an-worker1161.eqiad.wmnet [production]
07:45 <stevemunene@cumin1002> START - Cookbook sre.hosts.reboot-single for host an-worker1161.eqiad.wmnet [production]
07:44 <stevemunene@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host an-worker1160.eqiad.wmnet [production]
07:43 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1177 (T396130)', diff saved to https://phabricator.wikimedia.org/P77977 and previous config saved to /var/cache/conftool/dbconfig/20250616-074346-marostegui.json [production]
07:42 <ladsgroup@deploy1003> lilients, ladsgroup: Continuing with sync [production]
07:36 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-reload (exit_code=0) reloading wikidata_main on wdqs1022.eqiad.wmnet from DumpsSource.HDFS (hdfs:///wmf/data/discovery/wikidata/munged_n3_dump/wikidata/main/20250526/ using stat1009.eqiad.wmnet) [production]
07:35 <ladsgroup@deploy1003> lilients, ladsgroup: Backport for [[gerrit:1156741|Enable sub-referencing on test wiki (T395871)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there. [production]
07:31 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on db2207.codfw.wmnet with reason: Maintenance [production]
07:30 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2207 T396976', diff saved to https://phabricator.wikimedia.org/P77976 and previous config saved to /var/cache/conftool/dbconfig/20250616-073045-marostegui.json [production]
07:29 <marostegui@cumin1002> dbctl commit (dc=all): 'Promote db2204 to s2 primary T396976', diff saved to https://phabricator.wikimedia.org/P77975 and previous config saved to /var/cache/conftool/dbconfig/20250616-072955-root.json [production]
07:29 <marostegui> Starting s2 codfw failover from db2207 to db2204 - T396976 [production]
07:28 <stevemunene@cumin1002> START - Cookbook sre.hosts.reboot-single for host an-worker1160.eqiad.wmnet [production]
07:27 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1177 (T396130)', diff saved to https://phabricator.wikimedia.org/P77974 and previous config saved to /var/cache/conftool/dbconfig/20250616-072702-marostegui.json [production]
07:26 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1177.eqiad.wmnet with reason: Maintenance [production]
07:26 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1172 (T396130)', diff saved to https://phabricator.wikimedia.org/P77973 and previous config saved to /var/cache/conftool/dbconfig/20250616-072640-marostegui.json [production]