2023-07-19
11:50 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: dbproxy1016.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - ladsgroup@cumin1001" [production]
11:47 <ladsgroup@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: dbproxy1016.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - ladsgroup@cumin1001" [production]
11:45 <ladsgroup@cumin1001> START - Cookbook sre.dns.netbox [production]
11:40 <ladsgroup@cumin1001> START - Cookbook sre.hosts.decommission for hosts dbproxy1016.eqiad.wmnet [production]
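For context, the decommission chain above (sre.hosts.decommission followed by the automatically triggered sre.dns.netbox and sre.puppet.sync-netbox-hiera runs) is normally started with a single cookbook invocation on a cumin host. A minimal sketch, in which the Phabricator task ID is a placeholder and the exact flags may differ:

    # On the cumin host; -t references the tracking task (placeholder shown).
    sudo cookbook sre.hosts.decommission dbproxy1016.eqiad.wmnet -t T000000
    # The DNS/Netbox update and netbox-hiera sync logged above are triggered by it.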
11:13 <jebe@deploy1002> Finished deploy [analytics/refinery@eaabff2] (hadoop-test): Regular analytics weekly train TEST [analytics/refinery@eaabff2] (duration: 01m 43s) [production]
11:12 <jebe@deploy1002> Started deploy [analytics/refinery@eaabff2] (hadoop-test): Regular analytics weekly train TEST [analytics/refinery@eaabff2] [production]
11:11 <jebe@deploy1002> Finished deploy [analytics/refinery@eaabff2] (thin): Regular analytics weekly train THIN [analytics/refinery@eaabff2] (duration: 00m 04s) [production]
11:11 <jebe@deploy1002> Started deploy [analytics/refinery@eaabff2] (thin): Regular analytics weekly train THIN [analytics/refinery@eaabff2] [production]
11:09 <jebe@deploy1002> Finished deploy [analytics/refinery@eaabff2]: Regular analytics weekly train [analytics/refinery@eaabff2] (duration: 10m 24s) [production]
10:59 <jebe@deploy1002> Started deploy [analytics/refinery@eaabff2]: Regular analytics weekly train [analytics/refinery@eaabff2] [production]
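The three refinery deployments above (full, "thin", and "hadoop-test") are scap3 deploys run from the repository checkout on the deployment host. Roughly, assuming the usual checkout path and that the -e environment names mirror the labels in the log:

    cd /srv/deployment/analytics/refinery            # checkout path on deploy1002 (assumed)
    scap deploy "Regular analytics weekly train"
    scap deploy -e thin "Regular analytics weekly train THIN"
    scap deploy -e hadoop-test "Regular analytics weekly train TEST"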
10:02 <elukey@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
09:54 <jayme@deploy1002> helmfile [staging] DONE helmfile.d/services/ipoid: apply [production]
09:54 <jayme@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
09:50 <jayme@deploy1002> helmfile [staging] DONE helmfile.d/services/ipoid: apply [production]
09:48 <jayme@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
09:43 <isaranto@deploy1002> helmfile [ml-serve-eqiad] Ran 'sync' command on namespace 'experimental' for release 'main' . [production]
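The helmfile entries above follow the standard Kubernetes service-deploy flow on the deployment host: change into the service's helmfile.d directory and diff, then apply (or sync) against the target environment. A sketch for the ipoid staging apply, assuming the usual deployment-charts layout:

    cd /srv/deployment-charts/helmfile.d/services/ipoid
    helmfile -e staging diff      # review pending changes
    helmfile -e staging -i apply  # interactive apply, logged as START/DONE above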
09:14 <btullis@deploy1002> Finished deploy [airflow-dags/analytics_test@be05071]: (no justification provided) (duration: 00m 04s) [production]
09:14 <btullis@deploy1002> Started deploy [airflow-dags/analytics_test@be05071]: (no justification provided) [production]
09:12 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 100%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49599 and previous config saved to /var/cache/conftool/dbconfig/20230719-091205-root.json [production]
09:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 100%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49598 and previous config saved to /var/cache/conftool/dbconfig/20230719-090328-root.json [production]
08:57 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 75%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49597 and previous config saved to /var/cache/conftool/dbconfig/20230719-085700-root.json [production]
08:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 75%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49596 and previous config saved to /var/cache/conftool/dbconfig/20230719-084823-root.json [production]
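The dbctl entries here and throughout this day record the usual depool-then-gradual-repool cycle around database maintenance: the instance is depooled, maintained, then repooled in steps (1%, 3%, 5%, 10%, 25%, 50%, 75%, 100%), each step committed separately. The stepped commits are typically driven by a wrapper script; the underlying dbctl calls are approximately (flags per the dbctl tooling, commit messages mirroring the log):

    dbctl instance db1180 depool
    dbctl config commit -m "Depool db1180"
    # ...maintenance...
    dbctl instance db1180 pool -p 10
    dbctl config commit -m "db1180 (re)pooling @ 10%: Repooling after maintenance"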
08:45 <elukey@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
08:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 50%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49595 and previous config saved to /var/cache/conftool/dbconfig/20230719-084156-root.json [production]
08:38 <dcausse> closing the UTC morning backport window [production]
08:37 <dcausse@deploy1002> Finished scap: Backport for [[gerrit:939327|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] (duration: 07m 59s) [production]
08:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 50%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49594 and previous config saved to /var/cache/conftool/dbconfig/20230719-083319-root.json [production]
08:30 <dcausse@deploy1002> dcausse: Backport for [[gerrit:939327|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] synced to the testservers mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
08:29 <dcausse@deploy1002> Started scap: Backport for [[gerrit:939327|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] [production]
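A scap backport run, as in the dcausse entries above, is started on the deployment host with the Gerrit change number; scap first syncs the change to the mwdebug test servers and the mw-debug kubernetes deployment for verification, then completes the full sync (or can be cancelled, as at 07:46 below). A sketch:

    # On deploy1002; 939327 is the Gerrit change being backported.
    scap backport 939327
    # scap pauses after syncing to the testservers so the change can be
    # verified via X-Wikimedia-Debug before the full sync proceeds.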
08:26 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 25%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49593 and previous config saved to /var/cache/conftool/dbconfig/20230719-082651-root.json [production]
08:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 25%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49592 and previous config saved to /var/cache/conftool/dbconfig/20230719-081814-root.json [production]
08:11 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 10%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49591 and previous config saved to /var/cache/conftool/dbconfig/20230719-081146-root.json [production]
08:10 <dcausse@deploy1002> Finished scap: Backport for [[gerrit:939328|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] (duration: 07m 36s) [production]
08:04 <dcausse@deploy1002> dcausse: Backport for [[gerrit:939328|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
08:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 10%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49590 and previous config saved to /var/cache/conftool/dbconfig/20230719-080309-root.json [production]
08:02 <dcausse@deploy1002> Started scap: Backport for [[gerrit:939328|Use the LinksUpdate::isRecursive flag again to route cirrusSearchLinksUpdate]] [production]
07:56 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 5%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49589 and previous config saved to /var/cache/conftool/dbconfig/20230719-075642-root.json [production]
07:54 <_joe_> ran scap pull, pool on parse1002 after powercycling [production]
07:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 5%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49588 and previous config saved to /var/cache/conftool/dbconfig/20230719-074804-root.json [production]
07:47 <_joe_> powercycling parse1002, console blank, unreachable over the network [production]
07:46 <dcausse@deploy1002> Backport cancelled. [production]
07:45 <oblivian@cumin1001> conftool action : set/pooled=inactive; selector: name=parse1002.eqiad.wmnet [production]
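The parse1002 entries at 07:45-07:54 follow the usual unresponsive-appserver pattern: mark the host inactive in conftool, powercycle it via the management console, refresh the MediaWiki checkout with scap pull once it is back, then repool. Roughly, with the confctl selector as logged at 07:45 and the host-local pool helper as logged at 07:54:

    # Take the unresponsive host out of rotation (07:45 entry):
    sudo confctl select 'name=parse1002.eqiad.wmnet' set/pooled=inactive
    # After powercycle and recovery, on parse1002 itself (07:54 entry):
    scap pull
    pool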
07:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 3%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49587 and previous config saved to /var/cache/conftool/dbconfig/20230719-074137-root.json [production]
07:36 <dcausse@deploy1002> Finished scap: Backport for [[gerrit:927701|Add channel for TtmServerMessageUpdate of Translate extension]] (duration: 17m 44s) [production]
07:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 3%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49586 and previous config saved to /var/cache/conftool/dbconfig/20230719-073300-root.json [production]
07:26 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 1%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49585 and previous config saved to /var/cache/conftool/dbconfig/20230719-072632-root.json [production]
07:22 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1180', diff saved to https://phabricator.wikimedia.org/P49584 and previous config saved to /var/cache/conftool/dbconfig/20230719-072207-root.json [production]
07:20 <dcausse@deploy1002> dcausse and abi: Backport for [[gerrit:927701|Add channel for TtmServerMessageUpdate of Translate extension]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
07:18 <dcausse@deploy1002> Started scap: Backport for [[gerrit:927701|Add channel for TtmServerMessageUpdate of Translate extension]] [production]
07:17 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 1%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49583 and previous config saved to /var/cache/conftool/dbconfig/20230719-071755-root.json [production]