2024-05-13
07:38 <brouberol@cumin2002> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-flink-eqiad cluster: Roll restart of jvm daemons. [production]
07:37 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P62329 and previous config saved to /var/cache/conftool/dbconfig/20240513-073750-root.json [production]
07:37 <kartik@deploy1002> Finished scap: Backport for [[gerrit:1025300|ContentTranslation: Update publishing setting for cswiki (T353049)]] (duration: 32m 03s) [production]
07:35 <ayounsi@cumin1002> START - Cookbook sre.network.peering with action 'configure' for AS: 17451 [production]
07:30 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1158 (T352010)', diff saved to https://phabricator.wikimedia.org/P62328 and previous config saved to /var/cache/conftool/dbconfig/20240513-073031-ladsgroup.json [production]
07:30 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
07:30 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
07:30 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1158.eqiad.wmnet with reason: Maintenance [production]
07:30 <brouberol@cumin2002> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-flink-codfw cluster: Roll restart of jvm daemons. [production]
07:29 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1158.eqiad.wmnet with reason: Maintenance [production]
07:25 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1183', diff saved to https://phabricator.wikimedia.org/P62327 and previous config saved to /var/cache/conftool/dbconfig/20240513-072533-marostegui.json [production]
07:23 <brouberol@cumin2002> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-flink-codfw cluster: Roll restart of jvm daemons. [production]
07:23 <kartik@deploy1002> kartik: Continuing with sync [production]
07:22 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P62326 and previous config saved to /var/cache/conftool/dbconfig/20240513-072244-root.json [production]
07:22 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-role (exit_code=0) for role: wmcs::openstack::eqiad1::instance_backups [production]
07:19 <kartik@deploy1002> kartik: Backport for [[gerrit:1025300|ContentTranslation: Update publishing setting for cswiki (T353049)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
07:10 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-role for role: wmcs::openstack::eqiad1::instance_backups [production]
07:10 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1183', diff saved to https://phabricator.wikimedia.org/P62325 and previous config saved to /var/cache/conftool/dbconfig/20240513-071026-marostegui.json [production]
07:08 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-host (exit_code=0) for host cloudbackup1004.eqiad.wmnet [production]
07:07 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P62324 and previous config saved to /var/cache/conftool/dbconfig/20240513-070738-root.json [production]
07:05 <kartik@deploy1002> Started scap: Backport for [[gerrit:1025300|ContentTranslation: Update publishing setting for cswiki (T353049)]] [production]
06:59 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-host for host cloudbackup1004.eqiad.wmnet [production]
06:55 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1183 (T364299)', diff saved to https://phabricator.wikimedia.org/P62323 and previous config saved to /var/cache/conftool/dbconfig/20240513-065518-marostegui.json [production]
06:52 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P62322 and previous config saved to /var/cache/conftool/dbconfig/20240513-065230-root.json [production]
06:46 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2183.codfw.wmnet with OS bookworm [production]
06:37 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P62321 and previous config saved to /var/cache/conftool/dbconfig/20240513-063724-root.json [production]
06:28 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2183.codfw.wmnet with reason: host reimage [production]
06:25 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2183.codfw.wmnet with reason: host reimage [production]
06:22 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 1%: Repooling', diff saved to https://phabricator.wikimedia.org/P62320 and previous config saved to /var/cache/conftool/dbconfig/20240513-062219-root.json [production]
06:21 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1183 (T364299)', diff saved to https://phabricator.wikimedia.org/P62319 and previous config saved to /var/cache/conftool/dbconfig/20240513-062129-marostegui.json [production]
06:21 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1183.eqiad.wmnet with reason: Maintenance [production]
06:21 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 8:00:00 on db1183.eqiad.wmnet with reason: Maintenance [production]
06:21 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161 (T364299)', diff saved to https://phabricator.wikimedia.org/P62318 and previous config saved to /var/cache/conftool/dbconfig/20240513-062117-marostegui.json [production]
06:12 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2184.codfw.wmnet with reason: Reimage of the master [production]
06:12 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2184.codfw.wmnet with reason: Reimage of the master [production]
06:07 <marostegui@cumin1002> START - Cookbook sre.hosts.reimage for host db2183.codfw.wmnet with OS bookworm [production]
06:06 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2183.codfw.wmnet with reason: Reimage [production]
06:06 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2183.codfw.wmnet with reason: Reimage [production]
06:06 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161', diff saved to https://phabricator.wikimedia.org/P62317 and previous config saved to /var/cache/conftool/dbconfig/20240513-060610-marostegui.json [production]
06:05 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on db2213.codfw.wmnet with reason: Schema change [production]
06:05 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on db2213.codfw.wmnet with reason: Schema change [production]
05:51 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2213.codfw.wmnet with reason: Schema change [production]
05:51 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2213.codfw.wmnet with reason: Schema change [production]
05:51 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161', diff saved to https://phabricator.wikimedia.org/P62316 and previous config saved to /var/cache/conftool/dbconfig/20240513-055102-marostegui.json [production]
05:48 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2213 T364703', diff saved to https://phabricator.wikimedia.org/P62315 and previous config saved to /var/cache/conftool/dbconfig/20240513-054841-root.json [production]
05:48 <marostegui@cumin1002> dbctl commit (dc=all): 'Promote db2123 to s5 primary T364703', diff saved to https://phabricator.wikimedia.org/P62314 and previous config saved to /var/cache/conftool/dbconfig/20240513-054802-root.json [production]
05:47 <marostegui> Starting s5 codfw failover from db2213 to db2123 - T364703 [production]
05:35 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161 (T364299)', diff saved to https://phabricator.wikimedia.org/P62313 and previous config saved to /var/cache/conftool/dbconfig/20240513-053553-marostegui.json [production]
05:24 <marostegui@cumin1002> dbctl commit (dc=all): 'Remove vslow from db2123 T364703', diff saved to https://phabricator.wikimedia.org/P62312 and previous config saved to /var/cache/conftool/dbconfig/20240513-052424-marostegui.json [production]
05:23 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 24 hosts with reason: Primary switchover s5 T364703 [production]