2023-07-19
08:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 50%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49594 and previous config saved to /var/cache/conftool/dbconfig/20230719-083319-root.json [production]
08:30 <dcausse@deploy1002> dcausse: Backport for [[gerrit:939327|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] synced to the testservers mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
08:29 <dcausse@deploy1002> Started scap: Backport for [[gerrit:939327|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] [production]
08:26 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 25%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49593 and previous config saved to /var/cache/conftool/dbconfig/20230719-082651-root.json [production]
08:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 25%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49592 and previous config saved to /var/cache/conftool/dbconfig/20230719-081814-root.json [production]
08:11 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 10%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49591 and previous config saved to /var/cache/conftool/dbconfig/20230719-081146-root.json [production]
08:10 <dcausse@deploy1002> Finished scap: Backport for [[gerrit:939328|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] (duration: 07m 36s) [production]
08:04 <dcausse@deploy1002> dcausse: Backport for [[gerrit:939328|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
08:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 10%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49590 and previous config saved to /var/cache/conftool/dbconfig/20230719-080309-root.json [production]
08:02 <dcausse@deploy1002> Started scap: Backport for [[gerrit:939328|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] [production]
07:56 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 5%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49589 and previous config saved to /var/cache/conftool/dbconfig/20230719-075642-root.json [production]
07:54 <_joe_> ran scap pull and pooled parse1002 after powercycling [production]
07:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 5%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49588 and previous config saved to /var/cache/conftool/dbconfig/20230719-074804-root.json [production]
07:47 <_joe_> powercycling parse1002, console blank, unreachable over the network [production]
07:46 <dcausse@deploy1002> Backport cancelled. [production]
07:45 <oblivian@cumin1001> conftool action : set/pooled=inactive; selector: name=parse1002.eqiad.wmnet [production]
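The conftool entry above reflects parse1002 being set to pooled=inactive ahead of the powercycle. A minimal sketch of the equivalent confctl invocation on a cumin host, with the selector and action taken directly from the log line (the sudo wrapper is an assumption; the exact tooling the operator used is not shown in the log):

    sudo confctl select 'name=parse1002.eqiad.wmnet' set/pooled=inactive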
07:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 3%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49587 and previous config saved to /var/cache/conftool/dbconfig/20230719-074137-root.json [production]
07:36 <dcausse@deploy1002> Finished scap: Backport for [[gerrit:927701|Add channel for TtmServerMessageUpdate of Translate extension]] (duration: 17m 44s) [production]
07:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 3%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49586 and previous config saved to /var/cache/conftool/dbconfig/20230719-073300-root.json [production]
07:26 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 1%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49585 and previous config saved to /var/cache/conftool/dbconfig/20230719-072632-root.json [production]
07:22 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1180', diff saved to https://phabricator.wikimedia.org/P49584 and previous config saved to /var/cache/conftool/dbconfig/20230719-072207-root.json [production]
07:20 <dcausse@deploy1002> dcausse and abi: Backport for [[gerrit:927701|Add channel for TtmServerMessageUpdate of Translate extension]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
07:18 <dcausse@deploy1002> Started scap: Backport for [[gerrit:927701|Add channel for TtmServerMessageUpdate of Translate extension]] [production]
07:17 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 1%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49583 and previous config saved to /var/cache/conftool/dbconfig/20230719-071755-root.json [production]
07:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2158', diff saved to https://phabricator.wikimedia.org/P49582 and previous config saved to /var/cache/conftool/dbconfig/20230719-071204-root.json [production]
06:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db1198 (re)pooling @ 100%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49581 and previous config saved to /var/cache/conftool/dbconfig/20230719-062313-root.json [production]
06:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db1198 (re)pooling @ 75%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49580 and previous config saved to /var/cache/conftool/dbconfig/20230719-060809-root.json [production]
05:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db1198 (re)pooling @ 50%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49579 and previous config saved to /var/cache/conftool/dbconfig/20230719-055304-root.json [production]
05:38 <marostegui@cumin1001> dbctl commit (dc=all): 'db1198 (re)pooling @ 25%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49578 and previous config saved to /var/cache/conftool/dbconfig/20230719-053759-root.json [production]
05:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1198 (re)pooling @ 10%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49577 and previous config saved to /var/cache/conftool/dbconfig/20230719-052254-root.json [production]
05:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1198 (re)pooling @ 5%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49576 and previous config saved to /var/cache/conftool/dbconfig/20230719-050750-root.json [production]
04:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1198 (re)pooling @ 3%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49575 and previous config saved to /var/cache/conftool/dbconfig/20230719-045245-root.json [production]
04:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1198 (re)pooling @ 1%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49574 and previous config saved to /var/cache/conftool/dbconfig/20230719-043740-root.json [production]
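The db1198 entries above show the standard staged repool after maintenance, stepping through 1%, 3%, 5%, 10%, 25%, 50%, 75% and 100% at roughly 15-minute intervals. A minimal sketch of how such a sequence could be driven with dbctl, assuming the `dbctl instance ... pool -p` and `dbctl config commit` subcommands (the entries above were produced by automation, not this literal loop):

    for pct in 1 3 5 10 25 50 75 100; do
        dbctl instance db1198 pool -p "$pct"
        dbctl config commit -m "db1198 (re)pooling @ ${pct}%: Repooling after maintenance"
        sleep 900   # the timestamps above are roughly 15 minutes apart
    done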
00:16 <eileen> civicrm upgraded from 67c526e7 to 7642b3d9 [production]
2023-07-18
22:51 <brett@cumin2002> END (PASS) - Cookbook sre.dns.roll-restart-reboot-wikimedia-dns (exit_code=0) rolling reboot on P{doh5002*} and A:wikidough [production]
22:44 <brett@cumin2002> START - Cookbook sre.dns.roll-restart-reboot-wikimedia-dns rolling reboot on P{doh5002*} and A:wikidough [production]
22:34 <bking@cumin1001> END (FAIL) - Cookbook sre.ganeti.makevm (exit_code=99) for new host flink-zk1003.eqiad.wmnet [production]
22:34 <bking@cumin1001> END (FAIL) - Cookbook sre.dns.netbox (exit_code=99) [production]
22:32 <bking@cumin1001> START - Cookbook sre.dns.netbox [production]
22:32 <bking@cumin1001> START - Cookbook sre.ganeti.makevm for new host flink-zk1003.eqiad.wmnet [production]
22:24 <bking@cumin1001> END (FAIL) - Cookbook sre.ganeti.makevm (exit_code=99) for new host flink-zk1003.eqiad.wmnet [production]
22:24 <bking@cumin1001> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) flink-zk1003.eqiad.wmnet on all recursors [production]
22:24 <bking@cumin1001> START - Cookbook sre.dns.wipe-cache flink-zk1003.eqiad.wmnet on all recursors [production]
22:24 <bking@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
22:24 <bking@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Remove records for VM flink-zk1003.eqiad.wmnet - bking@cumin1001" [production]
22:23 <bking@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Remove records for VM flink-zk1003.eqiad.wmnet - bking@cumin1001" [production]
22:18 <bking@cumin1001> START - Cookbook sre.dns.netbox [production]
22:18 <bking@cumin1001> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) flink-zk1003.eqiad.wmnet on all recursors [production]
22:18 <bking@cumin1001> START - Cookbook sre.dns.wipe-cache flink-zk1003.eqiad.wmnet on all recursors [production]
22:18 <bking@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
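The START / END pairs in this block are emitted automatically by SRE cookbook runs: END (PASS) corresponds to exit_code=0, while END (FAIL) carries the non-zero exit code of the failed run (99 here). A minimal sketch of how a run like the sre.dns.wipe-cache entry is typically launched on a cumin host, with the positional argument taken from the log line (the sudo wrapper is an assumption):

    sudo cookbook sre.dns.wipe-cache flink-zk1003.eqiad.wmnet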