2022-11-07
07:17 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
07:13 <marostegui@deploy1002> marostegui and marostegui: Backport for [[gerrit:853711|ProductionServices.php: Promote pc1014 to pc1 master (T322295)]] synced to the testservers: mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug1001.eqiad.wmnet [production]
07:13 <marostegui@deploy1002> Started scap: Backport for [[gerrit:853711|ProductionServices.php: Promote pc1014 to pc1 master (T322295)]] [production]
07:12 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
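(The four entries above show the standard scap backport flow: the change is synced to the mwdebug testservers first for verification, then to the full fleet, with a helmfile apply of mw-debug bracketing the sync. On the deployment host this is a single command; a minimal sketch, using the Gerrit change number from the entry above:

    # Run on the deployment server (deploy1002); scap resolves the Gerrit
    # change, syncs it to the mwdebug hosts, prompts for confirmation,
    # then syncs everywhere.
    scap backport 853711
)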
07:07 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on pc2011.codfw.wmnet,pc[1011,1014].eqiad.wmnet with reason: Primary switchover [production]
07:07 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on pc2011.codfw.wmnet,pc[1011,1014].eqiad.wmnet with reason: Primary switchover [production]
07:05 <urbanecm> Run `time mwscript extensions/GrowthExperiments/maintenance/updateIsActiveFlagForMentees.php --wiki=bnwiki` in a tmux at mwmaint1002 (T318457) [production]
07:04 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1141 (T321123)', diff saved to https://phabricator.wikimedia.org/P38179 and previous config saved to /var/cache/conftool/dbconfig/20221107-070418-marostegui.json [production]
07:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1141 (T321123)', diff saved to https://phabricator.wikimedia.org/P38178 and previous config saved to /var/cache/conftool/dbconfig/20221107-070311-marostegui.json [production]
07:03 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1141.eqiad.wmnet with reason: Maintenance [production]
07:02 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db1141.eqiad.wmnet with reason: Maintenance [production]
07:02 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1121 (T321123)', diff saved to https://phabricator.wikimedia.org/P38177 and previous config saved to /var/cache/conftool/dbconfig/20221107-070249-marostegui.json [production]
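(The depool/repool pairs above correspond to the dbctl workflow for taking a replica out of rotation during maintenance. A minimal sketch, with subcommands assumed from dbctl's documented interface rather than taken from this log:

    # Take the replica out of the active config and commit; the commit
    # produces the Phabricator diff paste seen in the entries above.
    dbctl instance db1141 depool
    dbctl config commit -m "Depooling db1141 (T321123)"

    # ...maintenance runs under the Icinga downtime set by sre.hosts.downtime...

    # Restore the host and commit again.
    dbctl instance db1141 pool -p 100
    dbctl config commit -m "Repooling after maintenance db1141 (T321123)"

Repooling is typically done gradually at increasing weights, which is why db1121 appears in several consecutive 'Repooling after maintenance' commits above.)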
07:02 <urbanecm> Run `time mwscript extensions/GrowthExperiments/maintenance/updateIsActiveFlagForMentees.php --wiki=cswiki` in a tmux at mwmaint1002 (T318457) [production]
07:01 <urbanecm@deploy1002> Finished scap: Backport for [[gerrit:853509|Add support for gemm_mentee_is_active (T318457)]], [[gerrit:853440|Calculate mentorship-related metrics (T318684)]] (duration: 06m 27s) [production]
06:55 <urbanecm@deploy1002> urbanecm and urbanecm: Backport for [[gerrit:853509|Add support for gemm_mentee_is_active (T318457)]], [[gerrit:853440|Calculate mentorship-related metrics (T318684)]] synced to the testservers: mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet [production]
06:54 <urbanecm@deploy1002> Started scap: Backport for [[gerrit:853509|Add support for gemm_mentee_is_active (T318457)]], [[gerrit:853440|Calculate mentorship-related metrics (T318684)]] [production]
06:52 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es2024 T322406', diff saved to https://phabricator.wikimedia.org/P38176 and previous config saved to /var/cache/conftool/dbconfig/20221107-065251-root.json [production]
06:50 <marostegui@cumin1001> dbctl commit (dc=all): 'Promote es2023 to es5 primary and set section read-write T322406', diff saved to https://phabricator.wikimedia.org/P38175 and previous config saved to /var/cache/conftool/dbconfig/20221107-065048-root.json [production]
06:49 <marostegui> Starting es5 codfw failover from es2024 to es2023 - T322406 [production]
06:47 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1121', diff saved to https://phabricator.wikimedia.org/P38174 and previous config saved to /var/cache/conftool/dbconfig/20221107-064743-marostegui.json [production]
06:46 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 6 hosts with reason: Primary switchover es5 T322406 [production]
06:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Set es2023 with weight 0 T322406', diff saved to https://phabricator.wikimedia.org/P38173 and previous config saved to /var/cache/conftool/dbconfig/20221107-064608-root.json [production]
06:45 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on 6 hosts with reason: Primary switchover es5 T322406 [production]
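(The 06:45–06:52 entries above trace a codfw external-store primary switchover, es2024 to es2023 for section es5. The dbctl side of such a switchover looks roughly like the following; the section subcommand names are assumptions based on dbctl's section interface, not confirmed by this log:

    # Point section es5 in codfw at the new primary, reopen writes,
    # and commit (producing the Phabricator diff seen above).
    dbctl --scope codfw section es5 ro "es5 primary switchover T322406"
    dbctl --scope codfw section es5 set-master es2023
    dbctl --scope codfw section es5 rw
    dbctl config commit -m "Promote es2023 to es5 primary and set section read-write T322406"
)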
06:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1121', diff saved to https://phabricator.wikimedia.org/P38172 and previous config saved to /var/cache/conftool/dbconfig/20221107-063236-marostegui.json [production]
06:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1121 (T321123)', diff saved to https://phabricator.wikimedia.org/P38171 and previous config saved to /var/cache/conftool/dbconfig/20221107-061730-marostegui.json [production]
06:10 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1121 (T321123)', diff saved to https://phabricator.wikimedia.org/P38170 and previous config saved to /var/cache/conftool/dbconfig/20221107-061019-marostegui.json [production]
06:10 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 16:00:00 on clouddb[1015,1019,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
06:10 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 3292 [production]
06:09 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 16:00:00 on clouddb[1015,1019,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
06:09 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1121.eqiad.wmnet with reason: Maintenance [production]
06:09 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db1121.eqiad.wmnet with reason: Maintenance [production]
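(The sre.hosts.downtime cookbook runs above set monitoring downtime for the hosts being worked on, so maintenance does not page. A sketch of the invocation from a cumin host, with flag names assumed from the cookbook's usual interface:

    # Duration and reason mirror the log entries above; flags assumed.
    sudo cookbook sre.hosts.downtime --hours 8 -r "Maintenance" 'db1121.eqiad.wmnet'
)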
06:09 <ayounsi@cumin1001> START - Cookbook sre.network.peering with action 'configure' for AS: 3292 [production]
06:06 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 61461 [production]
06:05 <ayounsi@cumin1001> START - Cookbook sre.network.peering with action 'configure' for AS: 61461 [production]
06:01 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 25091 [production]
05:59 <ayounsi@cumin1001> START - Cookbook sre.network.peering with action 'configure' for AS: 25091 [production]
05:54 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 20115 [production]
05:54 <ayounsi@cumin1001> START - Cookbook sre.network.peering with action 'configure' for AS: 20115 [production]
05:53 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 7843 [production]
05:53 <ayounsi@cumin1001> START - Cookbook sre.network.peering with action 'configure' for AS: 7843 [production]
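(The sre.network.peering runs above configure BGP peering sessions for the named autonomous systems (AS 7843, 20115, 25091, 61461, 3292). The exact CLI is not visible in the log; a hypothetical invocation, with the argument layout assumed:

    # Hypothetical shape; only the action ('configure') and the ASNs
    # are confirmed by the log entries above.
    sudo cookbook sre.network.peering configure --asn 3292
)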
2022-11-06
08:23 <elukey> restart rsyslog on centrallog2002 [production]
08:19 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
08:19 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
08:17 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
08:17 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
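(The admin 'sync' entries above are cluster-level helmfile deploys for the ml-serve-eqiad Kubernetes cluster. A sketch from the deployment host, with the checkout path assumed from the standard deployment-charts layout (the log only shows 'helmfile.d/admin'):

    cd /srv/deployment-charts/helmfile.d/admin    # path assumed
    helmfile -e ml-serve-eqiad sync
)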
07:50 <elukey> restart kube-apiserver on ml-serve-ctrl1001 [production]
07:48 <elukey> restart kube-apiserver on ml-serve-ctrl1002 - high rate of HTTP 409s registered over the past few days [production]
2022-11-05
12:56 <mfossati@deploy1002> Finished deploy [airflow-dags/platform_eng@c849762]: (no justification provided) (duration: 00m 49s) [production]
12:55 <mfossati@deploy1002> Started deploy [airflow-dags/platform_eng@c849762]: (no justification provided) [production]
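(The airflow-dags deploy above uses scap's generic repository deployment, as opposed to the MediaWiki-specific 'scap backport' seen on 2022-11-07. A minimal sketch, with the checkout path assumed:

    # Run from the repository's deployment checkout on deploy1002
    # (path assumed); the message becomes the justification logged above.
    cd /srv/deployment/airflow-dags/platform_eng
    scap deploy 'justification for the change'
)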
09:39 <elukey> reinstall kubernetes-node on ml-staging200[12] to allow puppet to run (cleanup after yesterday's issue, where worker nodes had the master role applied) [production]