2023-08-07
11:54 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
11:54 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:53 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:946521|Stop writing to the old externallinks columns in testwiki (T342683)]] (duration: 08m 06s) [production]
10:48 <ladsgroup@deploy1002> ladsgroup: Continuing with sync [production]
10:47 <ladsgroup@deploy1002> ladsgroup: Backport for [[gerrit:946521|Stop writing to the old externallinks columns in testwiki (T342683)]] synced to the testservers mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
10:45 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:946521|Stop writing to the old externallinks columns in testwiki (T342683)]] [production]
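The four backport entries above follow the usual scap backport flow on the deployment host: the change is first synced to the mwdebug test servers, the deployer verifies it there and confirms, and the sync then continues to the rest of the fleet. A minimal sketch of the invocation, assuming the Gerrit change number from the log and that scap backport accepts a bare change number:

    # On deploy1002, run by the deployer (sketch; argument taken from the log entry above)
    scap backport 946521
    # scap pauses after syncing to the mwdebug hosts so the change can be
    # checked there, then continues the full sync once the deployer confirms.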
10:26 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:26 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:24 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:24 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:24 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:23 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
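The repeated START/END pairs above are runs of the sre.hosts.downtime cookbook, which schedules monitoring downtime for a host ahead of maintenance. A rough sketch of one such invocation from a cumin host, assuming the usual duration and reason options (the exact flag names are an assumption, not taken from the log):

    # On cumin1001 (sketch; flag names assumed)
    sudo cookbook sre.hosts.downtime --days 1 -r "Maintenance" 'db1157.eqiad.wmnet'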
10:08 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1138 (T342617)', diff saved to https://phabricator.wikimedia.org/P50158 and previous config saved to /var/cache/conftool/dbconfig/20230807-100805-ladsgroup.json [production]
10:08 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2099.codfw.wmnet with reason: Maintenance [production]
10:08 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1138.eqiad.wmnet with reason: Maintenance [production]
10:07 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2099.codfw.wmnet with reason: Maintenance [production]
10:07 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1138.eqiad.wmnet with reason: Maintenance [production]
09:23 <dcausse> restarting blazegraph on wdqs1004 [production]
08:31 <elukey@deploy1002> Finished scap: Backport for [[gerrit:946510|ext-ORES: force cswiki to use the ORES settings/backend (T343308)]] (duration: 14m 50s) [production]
08:25 <elukey@deploy1002> elukey: Continuing with sync [production]
08:24 <elukey@deploy1002> elukey: Backport for [[gerrit:946510|ext-ORES: force cswiki to use the ORES settings/backend (T343308)]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
08:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 100%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50157 and previous config saved to /var/cache/conftool/dbconfig/20230807-081639-root.json [production]
08:16 <elukey@deploy1002> Started scap: Backport for [[gerrit:946510|ext-ORES: force cswiki to use the ORES settings/backend (T343308)]] [production]
08:08 <godog> start docker-image-prune-old on alert hosts - T329939 [production]
08:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 75%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50156 and previous config saved to /var/cache/conftool/dbconfig/20230807-080133-root.json [production]
07:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 50%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50155 and previous config saved to /var/cache/conftool/dbconfig/20230807-074628-root.json [production]
07:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 25%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50154 and previous config saved to /var/cache/conftool/dbconfig/20230807-073123-root.json [production]
07:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 10%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50153 and previous config saved to /var/cache/conftool/dbconfig/20230807-071618-root.json [production]
07:11 <marostegui> Depool clouddb1015 T334650 [production]
07:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 5%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50152 and previous config saved to /var/cache/conftool/dbconfig/20230807-070113-root.json [production]
06:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 3%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50151 and previous config saved to /var/cache/conftool/dbconfig/20230807-064608-root.json [production]
06:33 <kart_> Updated cxserver to 2023-08-03-132800-production (T338602, T333969, T343211) [production]
06:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 1%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50150 and previous config saved to /var/cache/conftool/dbconfig/20230807-063104-root.json [production]
06:28 <kartik@deploy1002> helmfile [eqiad] DONE helmfile.d/services/cxserver: apply [production]
06:28 <kartik@deploy1002> helmfile [eqiad] START helmfile.d/services/cxserver: apply [production]
06:26 <kartik@deploy1002> helmfile [codfw] DONE helmfile.d/services/cxserver: apply [production]
06:25 <kartik@deploy1002> helmfile [codfw] START helmfile.d/services/cxserver: apply [production]
06:22 <kartik@deploy1002> helmfile [staging] DONE helmfile.d/services/cxserver: apply [production]
06:22 <kartik@deploy1002> helmfile [staging] START helmfile.d/services/cxserver: apply [production]
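The cxserver entries above step through the three environments in the usual order: staging, then codfw, then eqiad. A sketch of one step, assuming the deployment charts are checked out under /srv/deployment-charts on the deploy host (the path and the interactive flag are assumptions; -e selects the helmfile environment):

    # On deploy1002 (sketch; repository path assumed)
    cd /srv/deployment-charts/helmfile.d/services/cxserver
    helmfile -e staging -i apply    # then repeat with -e codfw and -e eqiad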
06:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1224 upgrade to mariadb 10.6', diff saved to https://phabricator.wikimedia.org/P50149 and previous config saved to /var/cache/conftool/dbconfig/20230807-061653-root.json [production]
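The db1224 entries above trace the standard depool / upgrade / gradual repool cycle: the host is depooled and upgraded, then pooled back at 1%, 3%, 5%, 10%, 25%, 50%, 75% and finally 100%, with a dbctl config commit after each step. A rough sketch of the commands behind those log lines, assuming dbctl's instance and config subcommands and a percentage option (the option names are assumptions, not taken from the log):

    # On cumin1001 (sketch; option names assumed)
    sudo dbctl instance db1224 depool
    sudo dbctl config commit -m "Depool db1224 upgrade to mariadb 10.6"
    # ...perform the upgrade, then repool in increasing steps...
    sudo dbctl instance db1224 pool -p 10
    sudo dbctl config commit -m "db1224 (re)pooling @ 10%: Repooling after migration"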
06:10 <ayounsi@cumin1001> END (PASS) - Cookbook sre.deploy.python-code (exit_code=0) homer to cumin2002.codfw.wmnet,cumin1001.eqiad.wmnet with reason: Update wheels for Aerleon 1.6.0 upgrade - ayounsi@cumin1001 [production]
06:09 <ayounsi@cumin1001> START - Cookbook sre.deploy.python-code homer to cumin2002.codfw.wmnet,cumin1001.eqiad.wmnet with reason: Update wheels for Aerleon 1.6.0 upgrade - ayounsi@cumin1001 [production]
2023-08-05
05:57 <_joe_> mounting the volume under /srv/dataimport on both puppetmaster frontends [production]
05:53 <_joe_> creating logical volume "dataimport" on the puppetmaster frontends [production]
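The two puppetmaster entries above record creating a "dataimport" logical volume and mounting it at /srv/dataimport. A generic LVM sketch of those steps; the volume group name, size, and filesystem type are illustrative assumptions, not taken from the log:

    # Sketch only; volume group, size and filesystem are assumptions
    lvcreate -L 50G -n dataimport vg0
    mkfs.ext4 /dev/vg0/dataimport
    mount /dev/vg0/dataimport /srv/dataimport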