2023-08-16
04:01 <zabe@deploy1002> zabe: Continuing with sync [production]
04:00 <zabe@deploy1002> zabe: T343539 synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
03:58 <zabe@deploy1002> Started scap: T343539 [production]
03:58 <zabe> create Wikisource Sundanese # T343539 [production]
03:53 <taavi@deploy1002> Finished scap: Backport for [[gerrit:949161|Set WRITE_BOTH for OAuth multiple devices to techconductwiki (T242031)]] (duration: 09m 07s) [production]
03:47 <taavi@deploy1002> taavi: Continuing with sync [production]
03:45 <taavi@deploy1002> taavi: Backport for [[gerrit:949161|Set WRITE_BOTH for OAuth multiple devices to techconductwiki (T242031)]] synced to the testservers mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
03:44 <taavi@deploy1002> Started scap: Backport for [[gerrit:949161|Set WRITE_BOTH for OAuth multiple devices to techconductwiki (T242031)]] [production]
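The gerrit:949161 backport above moves the multiple-devices migration tracked in T242031 (the new oathauth_devices and oathauth_types tables created further down this log) to the write-both stage for a single wiki. As a rough illustration only, a per-wiki staged switch of this kind is usually expressed as an entry in wmf-config/InitialiseSettings.php along the following lines; the setting name is a hypothetical placeholder rather than the name used in the actual change, while the SCHEMA_COMPAT_* values are MediaWiki core's schema-compatibility flags.

    // Hypothetical sketch of a per-wiki migration-stage entry in the
    // InitialiseSettings.php settings array (placeholder setting name).
    'wmgOATHAuthMultipleDevicesMigrationStage' => [
        // All other wikis stay on the old schema for now.
        'default'         => SCHEMA_COMPAT_OLD,
        // techconductwiki writes both the old and new tables, reads old.
        'techconductwiki' => SCHEMA_COMPAT_WRITE_BOTH | SCHEMA_COMPAT_READ_OLD,
    ],

Once writes are mirrored and existing rows are backfilled, an entry like this is typically advanced to read-new and eventually to SCHEMA_COMPAT_NEW.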
03:40 <taavi@deploy1002> Finished scap: Backport for [[gerrit:949169|Keep both tables up-to-date on WRITE_BOTH (T242031)]], [[gerrit:949168|Keep both tables up-to-date on WRITE_BOTH (T242031)]] (duration: 10m 58s) [production]
03:33 <taavi@deploy1002> taavi: Continuing with sync [production]
03:31 <taavi@deploy1002> taavi: Backport for [[gerrit:949169|Keep both tables up-to-date on WRITE_BOTH (T242031)]], [[gerrit:949168|Keep both tables up-to-date on WRITE_BOTH (T242031)]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
03:30 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
03:29 <taavi@deploy1002> Started scap: Backport for [[gerrit:949169|Keep both tables up-to-date on WRITE_BOTH (T242031)]], [[gerrit:949168|Keep both tables up-to-date on WRITE_BOTH (T242031)]] [production]
03:27 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
03:24 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
03:22 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
02:20 <taavi> create oathauth_devices and oathauth_types tables on wikitech, private.dblist, fishbowl.dblist, centralauth T242031 [production]
01:57 <taavi@deploy1002> Finished scap: Backport for [[gerrit:949166|OAuthUserRepository: Ensure we don't end up with duplicate rows (T242031)]], [[gerrit:949167|OAuthUserRepository: Ensure we don't end up with duplicate rows (T242031)]] (duration: 10m 59s) [production]
01:54 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
01:51 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
01:50 <taavi@deploy1002> taavi: Continuing with sync [production]
01:47 <taavi@deploy1002> taavi: Backport for [[gerrit:949166|OAuthUserRepository: Ensure we don't end up with duplicate rows (T242031)]], [[gerrit:949167|OAuthUserRepository: Ensure we don't end up with duplicate rows (T242031)]] synced to the testservers mwdebug2002.codfw.wmnet, mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug1001.eqiad.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
01:46 <taavi@deploy1002> Started scap: Backport for [[gerrit:949166|OAuthUserRepository: Ensure we don't end up with duplicate rows (T242031)]], [[gerrit:949167|OAuthUserRepository: Ensure we don't end up with duplicate rows (T242031)]] [production]
01:41 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
01:36 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
01:35 <ryankemper@cumin1001> END (ERROR) - Cookbook sre.wdqs.data-transfer (exit_code=97) [production]
01:34 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
01:30 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
01:30 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
01:22 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@f1a6177]: deploy to freshly reimaged host (duration: 00m 10s) [production]
01:22 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@f1a6177]: deploy to freshly reimaged host [production]
01:21 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@f1a6177]: deploy to freshly reimaged host (duration: 00m 10s) [production]
01:21 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@f1a6177]: deploy to freshly reimaged host [production]
2023-08-15
23:26 <hmonroy@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Set wikidiff2 maxSplitSize = 10 on group0 wikis T341754 (duration: 07m 39s) [production]
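The sync above applies the wikidiff2 maxSplitSize change to group0 wikis first, the usual first stage of a config rollout. As a sketch of how such a group-scoped override is commonly written in wmf-config/InitialiseSettings.php: the setting name below is a hypothetical placeholder (the real one is in the change synced for T341754), and 'group0' stands for the group0 dblist.

    // Hypothetical setting name; only group0 wikis pick up the new value.
    'wmgWikidiff2MaxSplitSize' => [
        'default' => null, // stand-in for the unchanged production default
        'group0'  => 10,   // value from the log entry above (T341754)
    ],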
22:27 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wdqs1012.eqiad.wmnet with OS bullseye [production]
22:22 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wdqs1013.eqiad.wmnet with OS bullseye [production]
21:53 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wdqs1013.eqiad.wmnet with reason: host reimage [production]
21:50 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wdqs1012.eqiad.wmnet with reason: host reimage [production]
21:47 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on wdqs1013.eqiad.wmnet with reason: host reimage [production]
21:47 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on wdqs1012.eqiad.wmnet with reason: host reimage [production]
21:32 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host wdqs1013.eqiad.wmnet with OS bullseye [production]
21:32 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host wdqs1012.eqiad.wmnet with OS bullseye [production]
21:16 <robh@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
21:16 <robh@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: pdus - robh@cumin1001" [production]
21:15 <robh@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: pdus - robh@cumin1001" [production]
21:09 <robh@cumin1001> START - Cookbook sre.dns.netbox [production]
21:07 <robh@cumin1001> END (ERROR) - Cookbook sre.dns.netbox (exit_code=97) [production]
21:07 <robh@cumin1001> START - Cookbook sre.dns.netbox [production]
20:55 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host lvs3010.esams.wmnet with OS bullseye [production]
20:55 <sukhe@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - sukhe@cumin2002" [production]