2024-05-13
ยง
|
08:32 <marostegui@deploy1002> marostegui: Continuing with sync [production]
08:29 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1185', diff saved to https://phabricator.wikimedia.org/P62334 and previous config saved to /var/cache/conftool/dbconfig/20240513-082956-marostegui.json [production]
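The dbctl entries in this log follow a fixed depool/repool cycle around each maintenance window: depool the instance, downtime it, do the work, then pool it back and commit. A rough sketch of the underlying commands, with syntax recalled from the dbctl CLI rather than taken from this log:

    # Depool the replica before maintenance, then commit the change (assumed syntax)
    dbctl instance db1185 depool
    dbctl config commit -m "Depooling db1185 (T364299)"
    # After maintenance, pool it back in and commit again
    dbctl instance db1185 pool -p 100
    dbctl config commit -m "Repooling after maintenance db1185 (T364299)"

Each commit is what produces the logged lines: the diff is archived as a Phabricator paste and the previous configuration is written under /var/cache/conftool/dbconfig/.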
08:24 <moritzm> installing PHP 7.3 security updates [production]
08:14 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1185 (T364299)', diff saved to https://phabricator.wikimedia.org/P62333 and previous config saved to /var/cache/conftool/dbconfig/20240513-081448-marostegui.json [production]
08:03 <marostegui@deploy1002> marostegui: Backport for [[gerrit:1029109|db-production.php: Enable writes on es6 and es7 (T364446)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
08:01 <marostegui@deploy1002> Started scap: Backport for [[gerrit:1029109|db-production.php: Enable writes on es6 and es7 (T364446)]] [production]
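The scap lines ('Started scap', 'synced to the testservers', 'Continuing with sync', 'Finished scap') are the stages of a single backport deployment: the change is first synced to the mwdebug test servers for verification, then the deployer confirms to roll it out everywhere. A minimal sketch, run on the deployment host, with the Gerrit change number taken from the entry above (exact arguments may differ):

    # Backport a single Gerrit change to production (illustrative invocation)
    scap backport 1029109

scap pauses after the test-server sync and waits for confirmation, which is why each backport shows up here as several entries spread over many minutes.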
08:00 <moritzm> installing python2.7 security updates [production]
07:58 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:1030866|Fix static cache access (T364693)]] (duration: 16m 54s) [production]
07:54 <ayounsi@cumin1002> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 17451 [production]
07:53 <moritzm> installing libgd2 security updates [production]
07:52 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P62332 and previous config saved to /var/cache/conftool/dbconfig/20240513-075256-root.json [production]
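db2213 was brought back into service gradually during this window: the '(re)pooling @ N%' commits scattered through this log step through 1%, 5%, 10%, 25%, 50% and 75% before reaching 100% at 07:52 above, so that load ramps up in stages. Each step is an ordinary dbctl change; one stage might look roughly like this (assumed syntax):

    # One step of the staged repool; the percentage is raised at each stage
    dbctl instance db2213 pool -p 75
    dbctl config commit -m "db2213 (re)pooling @ 75%: Repooling"

The even spacing of these commits suggests they are driven by a repooling helper rather than typed by hand, but the log only records the resulting dbctl commits.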
07:46 <ladsgroup@deploy1002> ladsgroup: Continuing with sync [production]
07:44 <brouberol@cumin2002> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-flink-eqiad cluster: Roll restart of jvm daemons. [production]
07:44 <ladsgroup@deploy1002> ladsgroup: Backport for [[gerrit:1030866|Fix static cache access (T364693)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
07:41 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:1030866|Fix static cache access (T364693)]] [production]
07:41 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1185 (T364299)', diff saved to https://phabricator.wikimedia.org/P62331 and previous config saved to /var/cache/conftool/dbconfig/20240513-074103-marostegui.json [production]
07:40 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1185.eqiad.wmnet with reason: Maintenance [production]
07:40 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 8:00:00 on db1185.eqiad.wmnet with reason: Maintenance [production]
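Each START/END pair for sre.hosts.downtime is one cookbook run that downtimes the host in monitoring for the stated duration and reason. An illustrative invocation from a cumin host; the option names are assumptions, not copied from this log:

    # Downtime db1185 for 8 hours with the reason recorded above (assumed flags)
    sudo cookbook sre.hosts.downtime --hours 8 --reason "Maintenance" 'db1185.eqiad.wmnet'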
07:40 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1183 (T364299)', diff saved to https://phabricator.wikimedia.org/P62330 and previous config saved to /var/cache/conftool/dbconfig/20240513-074041-marostegui.json [production]
07:38 <brouberol@cumin2002> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-flink-eqiad cluster: Roll restart of jvm daemons. [production]
07:37 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P62329 and previous config saved to /var/cache/conftool/dbconfig/20240513-073750-root.json [production]
07:37 <kartik@deploy1002> Finished scap: Backport for [[gerrit:1025300|ContentTranslation: Update publishing setting for cswiki (T353049)]] (duration: 32m 03s) [production]
07:35 <ayounsi@cumin1002> START - Cookbook sre.network.peering with action 'configure' for AS: 17451 [production]
07:30 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1158 (T352010)', diff saved to https://phabricator.wikimedia.org/P62328 and previous config saved to /var/cache/conftool/dbconfig/20240513-073031-ladsgroup.json [production]
07:30 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
07:30 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
07:30 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1158.eqiad.wmnet with reason: Maintenance [production]
07:30 <brouberol@cumin2002> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-flink-codfw cluster: Roll restart of jvm daemons. [production]
07:29 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1158.eqiad.wmnet with reason: Maintenance [production]
07:25 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1183', diff saved to https://phabricator.wikimedia.org/P62327 and previous config saved to /var/cache/conftool/dbconfig/20240513-072533-marostegui.json [production]
07:23 <brouberol@cumin2002> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-flink-codfw cluster: Roll restart of jvm daemons. [production]
07:23 <kartik@deploy1002> kartik: Continuing with sync [production]
07:22 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P62326 and previous config saved to /var/cache/conftool/dbconfig/20240513-072244-root.json [production]
07:22 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-role (exit_code=0) for role: wmcs::openstack::eqiad1::instance_backups [production]
07:19 <kartik@deploy1002> kartik: Backport for [[gerrit:1025300|ContentTranslation: Update publishing setting for cswiki (T353049)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
07:10 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-role for role: wmcs::openstack::eqiad1::instance_backups [production]
07:10 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1183', diff saved to https://phabricator.wikimedia.org/P62325 and previous config saved to /var/cache/conftool/dbconfig/20240513-071026-marostegui.json [production]
07:08 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-host (exit_code=0) for host cloudbackup1004.eqiad.wmnet [production]
07:07 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P62324 and previous config saved to /var/cache/conftool/dbconfig/20240513-070738-root.json [production]
07:05 <kartik@deploy1002> Started scap: Backport for [[gerrit:1025300|ContentTranslation: Update publishing setting for cswiki (T353049)]] [production]
06:59 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-host for host cloudbackup1004.eqiad.wmnet [production]
06:55 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1183 (T364299)', diff saved to https://phabricator.wikimedia.org/P62323 and previous config saved to /var/cache/conftool/dbconfig/20240513-065518-marostegui.json [production]
06:52 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P62322 and previous config saved to /var/cache/conftool/dbconfig/20240513-065230-root.json [production]
06:46 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2183.codfw.wmnet with OS bookworm [production]
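The sre.hosts.reimage run above reinstalled db2183 with Debian bookworm; the short 'host reimage' downtime at 06:25-06:28 below appears to be set automatically as part of that run. An illustrative invocation, with option names assumed rather than taken from the log:

    # Reimage db2183 onto bookworm (assumed flags; the cookbook is given the short hostname)
    sudo cookbook sre.hosts.reimage --os bookworm db2183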
06:37 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P62321 and previous config saved to /var/cache/conftool/dbconfig/20240513-063724-root.json [production]
06:28 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2183.codfw.wmnet with reason: host reimage [production]
06:25 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2183.codfw.wmnet with reason: host reimage [production]
06:22 <marostegui@cumin1002> dbctl commit (dc=all): 'db2213 (re)pooling @ 1%: Repooling', diff saved to https://phabricator.wikimedia.org/P62320 and previous config saved to /var/cache/conftool/dbconfig/20240513-062219-root.json [production]
06:21 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1183 (T364299)', diff saved to https://phabricator.wikimedia.org/P62319 and previous config saved to /var/cache/conftool/dbconfig/20240513-062129-marostegui.json [production]
06:21 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1183.eqiad.wmnet with reason: Maintenance [production]