2024-12-05
08:54 <jelto@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Renaming kubernetes1025 to wikikube-worker1044 - jelto@cumin1002" [production]
08:49 <jelto@cumin1002> START - Cookbook sre.dns.netbox [production]
08:49 <jelto@cumin1002> START - Cookbook sre.hosts.rename from kubernetes1025 to wikikube-worker1044 [production]
08:47 <brouberol@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
08:46 <brouberol@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
08:46 <moritzm> rebalance Ganeti eqiad/D following server refreshes [production]
08:08 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1216.eqiad.wmnet with reason: Maintenance [production]
08:07 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db1216.eqiad.wmnet with reason: Maintenance [production]
08:07 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1214 (T371742)', diff saved to https://phabricator.wikimedia.org/P71611 and previous config saved to /var/cache/conftool/dbconfig/20241205-080745-ladsgroup.json [production]
07:52 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1214', diff saved to https://phabricator.wikimedia.org/P71610 and previous config saved to /var/cache/conftool/dbconfig/20241205-075237-ladsgroup.json [production]
07:37 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1214', diff saved to https://phabricator.wikimedia.org/P71609 and previous config saved to /var/cache/conftool/dbconfig/20241205-073730-ladsgroup.json [production]
07:36 <jelto@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubernetes[1025-1026].eqiad.wmnet [production]
07:32 <jelto@cumin1002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes[1025-1026].eqiad.wmnet [production]
07:22 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1214 (T371742)', diff saved to https://phabricator.wikimedia.org/P71608 and previous config saved to /var/cache/conftool/dbconfig/20241205-072223-ladsgroup.json [production]
07:16 <kevinbazira@deploy2002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'experimental' for release 'main'. [production]
06:31 <marostegui@cumin1002> dbctl commit (dc=all): 'es2043 (re)pooling @ 100%: Pooling in es5', diff saved to https://phabricator.wikimedia.org/P71607 and previous config saved to /var/cache/conftool/dbconfig/20241205-063132-root.json [production]
06:16 <marostegui@cumin1002> dbctl commit (dc=all): 'es2043 (re)pooling @ 75%: Pooling in es5', diff saved to https://phabricator.wikimedia.org/P71606 and previous config saved to /var/cache/conftool/dbconfig/20241205-061626-root.json [production]
06:06 <marostegui@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 100%: Repooling cloning', diff saved to https://phabricator.wikimedia.org/P71605 and previous config saved to /var/cache/conftool/dbconfig/20241205-060631-root.json [production]
06:06 <marostegui@cumin1002> dbctl commit (dc=all): 'es2022 (re)pooling @ 100%: Pooling in production', diff saved to https://phabricator.wikimedia.org/P71604 and previous config saved to /var/cache/conftool/dbconfig/20241205-060612-root.json [production]
06:01 <marostegui@cumin1002> dbctl commit (dc=all): 'es2043 (re)pooling @ 50%: Pooling in es5', diff saved to https://phabricator.wikimedia.org/P71603 and previous config saved to /var/cache/conftool/dbconfig/20241205-060121-root.json [production]
05:58 <eileen> civicrm upgraded from 74c059a4 to f9c89e50 [production]
05:54 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1214 (T371742)', diff saved to https://phabricator.wikimedia.org/P71602 and previous config saved to /var/cache/conftool/dbconfig/20241205-055442-ladsgroup.json [production]
05:54 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1214.eqiad.wmnet with reason: Maintenance [production]
05:54 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db1214.eqiad.wmnet with reason: Maintenance [production]
05:54 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1211 (T371742)', diff saved to https://phabricator.wikimedia.org/P71601 and previous config saved to /var/cache/conftool/dbconfig/20241205-055420-ladsgroup.json [production]
05:51 <marostegui@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 75%: Repooling cloning', diff saved to https://phabricator.wikimedia.org/P71600 and previous config saved to /var/cache/conftool/dbconfig/20241205-055125-root.json [production]
05:51 <marostegui@cumin1002> dbctl commit (dc=all): 'es2022 (re)pooling @ 75%: Pooling in production', diff saved to https://phabricator.wikimedia.org/P71599 and previous config saved to /var/cache/conftool/dbconfig/20241205-055106-root.json [production]
05:46 <marostegui@cumin1002> dbctl commit (dc=all): 'es2043 (re)pooling @ 25%: Pooling in es5', diff saved to https://phabricator.wikimedia.org/P71598 and previous config saved to /var/cache/conftool/dbconfig/20241205-054615-root.json [production]
05:42 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on es2023.codfw.wmnet with reason: cloning [production]
05:42 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on es2023.codfw.wmnet with reason: cloning [production]
05:42 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool es2023 to clone es2044', diff saved to https://phabricator.wikimedia.org/P71597 and previous config saved to /var/cache/conftool/dbconfig/20241205-054200-marostegui.json [production]
05:41 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on es2025.codfw.wmnet with reason: cloning [production]
05:41 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on es2025.codfw.wmnet with reason: cloning [production]
05:41 <marostegui@cumin1002> dbctl commit (dc=all): 'Promote es2025 to es5 master T381259', diff saved to https://phabricator.wikimedia.org/P71596 and previous config saved to /var/cache/conftool/dbconfig/20241205-054114-marostegui.json [production]
05:39 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1211', diff saved to https://phabricator.wikimedia.org/P71595 and previous config saved to /var/cache/conftool/dbconfig/20241205-053912-ladsgroup.json [production]
05:36 <marostegui@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 50%: Repooling cloning', diff saved to https://phabricator.wikimedia.org/P71593 and previous config saved to /var/cache/conftool/dbconfig/20241205-053620-root.json [production]
05:36 <marostegui@cumin1002> dbctl commit (dc=all): 'es2022 (re)pooling @ 50%: Pooling in production', diff saved to https://phabricator.wikimedia.org/P71592 and previous config saved to /var/cache/conftool/dbconfig/20241205-053601-root.json [production]
05:31 <marostegui@cumin1002> dbctl commit (dc=all): 'es2043 (re)pooling @ 10%: Pooling in es5', diff saved to https://phabricator.wikimedia.org/P71591 and previous config saved to /var/cache/conftool/dbconfig/20241205-053109-root.json [production]
05:28 <marostegui> Failover m3 from db1159 to db1213 - T381365 [production]
05:24 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1211', diff saved to https://phabricator.wikimedia.org/P71590 and previous config saved to /var/cache/conftool/dbconfig/20241205-052405-ladsgroup.json [production]
05:21 <marostegui@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 25%: Repooling cloning', diff saved to https://phabricator.wikimedia.org/P71589 and previous config saved to /var/cache/conftool/dbconfig/20241205-052114-root.json [production]
05:20 <marostegui@cumin1002> dbctl commit (dc=all): 'es2022 (re)pooling @ 25%: Pooling in production', diff saved to https://phabricator.wikimedia.org/P71588 and previous config saved to /var/cache/conftool/dbconfig/20241205-052056-root.json [production]
05:20 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on db[2134,2160].codfw.wmnet,db[1159,1213,1217].eqiad.wmnet with reason: m3 master switchover T381365 [production]
05:20 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on db[2134,2160].codfw.wmnet,db[1159,1213,1217].eqiad.wmnet with reason: m3 master switchover T381365 [production]
05:16 <marostegui@cumin1002> dbctl commit (dc=all): 'es2043 (re)pooling @ 1%: Pooling in es5', diff saved to https://phabricator.wikimedia.org/P71587 and previous config saved to /var/cache/conftool/dbconfig/20241205-051604-root.json [production]
05:15 <marostegui@cumin1002> dbctl commit (dc=all): 'Add es2043 depooled T381259', diff saved to https://phabricator.wikimedia.org/P71586 and previous config saved to /var/cache/conftool/dbconfig/20241205-051545-marostegui.json [production]
05:08 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1211 (T371742)', diff saved to https://phabricator.wikimedia.org/P71585 and previous config saved to /var/cache/conftool/dbconfig/20241205-050858-ladsgroup.json [production]
05:06 <marostegui@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 10%: Repooling cloning', diff saved to https://phabricator.wikimedia.org/P71584 and previous config saved to /var/cache/conftool/dbconfig/20241205-050609-root.json [production]
05:05 <marostegui@cumin1002> dbctl commit (dc=all): 'es2022 (re)pooling @ 10%: Pooling in production', diff saved to https://phabricator.wikimedia.org/P71583 and previous config saved to /var/cache/conftool/dbconfig/20241205-050550-root.json [production]
03:38 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1211 (T371742)', diff saved to https://phabricator.wikimedia.org/P71578 and previous config saved to /var/cache/conftool/dbconfig/20241205-033803-ladsgroup.json [production]
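
The dbctl entries above follow a staged repooling pattern, stepping a depooled replica back up through 1% -> 10% -> 25% -> 50% -> 75% -> 100% with a pause between steps and a dbctl commit at each one. As a rough, hypothetical sketch only (the exact dbctl subcommands, flags, and wait interval are assumptions, not taken from the log), the loop looks roughly like this:

#!/usr/bin/env python3
"""Hypothetical sketch of the staged repooling pattern visible in the
dbctl log entries above (1% -> 10% -> 25% -> 50% -> 75% -> 100%).

Assumptions: the `dbctl instance <host> pool -p <pct>` and
`dbctl config commit -m <msg>` invocations only approximate the real CLI;
flag names and the sleep interval are illustrative, not authoritative.
"""
import subprocess
import time

HOST = "es2043"          # instance name as it appears in the log above
STAGES = [1, 10, 25, 50, 75, 100]
WAIT_SECONDS = 15 * 60   # the log shows roughly 15 minutes between steps


def run(cmd: list[str]) -> None:
    """Run a command and raise if it exits non-zero."""
    subprocess.run(cmd, check=True)


def staged_repool(host: str) -> None:
    for pct in STAGES:
        # Set the pooling weight for this instance (flags are assumed).
        run(["dbctl", "instance", host, "pool", "-p", str(pct)])
        # Commit so the change is applied and logged, as in the SAL entries.
        run(["dbctl", "config", "commit", "-m",
             f"{host} (re)pooling @ {pct}%: Pooling in es5"])
        if pct != STAGES[-1]:
            time.sleep(WAIT_SECONDS)


if __name__ == "__main__":
    staged_repool(HOST)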